ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

  • remotelove@lemmy.ca · 11 months ago

    It is a user problem and an OpenAI problem. Some data shouldn’t be getting shoved into ChatGPT, without a doubt.

    ChatGPT is pulling from history data that should be isolated to each user. That hints at some exceedingly bad design in their AI stack.

    Any time that ChatGPT is “broken” with creative prompts, a new filter is put in front of, or after, the AI model. (The model itself doesn’t change as it would be too expensive to re-train.) The bot then refuses specific input or clips potentially bad output. Life goes on.
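    That bolt-on-filters approach can be sketched roughly like this. This is a hypothetical illustration, not OpenAI's actual code: the pattern list, function names, and redaction rule are all made up, but the shape (refuse bad input, clip bad output, leave the frozen model untouched) matches what the comment describes.

```python
import re

# Hypothetical jailbreak patterns; real deployments use far more
# sophisticated classifiers, but the wrapping pattern is the same.
BLOCKED_PATTERNS = ["ignore previous instructions", "reveal your system prompt"]

def input_filter(prompt: str):
    """Refuse prompts matching known 'creative prompt' exploits."""
    lowered = prompt.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        return None  # refuse the input entirely
    return prompt

def output_filter(text: str) -> str:
    """Clip potentially bad output, e.g. anything that looks like a credential."""
    return re.sub(r"(?i)password\s*[:=]\s*\S+", "password: [redacted]", text)

def guarded_chat(model, prompt: str) -> str:
    """Wrap a frozen model with filters instead of retraining it."""
    safe_prompt = input_filter(prompt)
    if safe_prompt is None:
        return "I can't help with that."
    return output_filter(model(safe_prompt))
```

    The key point the comment makes is that the model in the middle never changes; each new exploit just adds another pattern or post-processing rule around it.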

    Any data repositories used for chat should be physically separated from user history, but apparently they aren't. This implies a ton of different things, but it would all be speculation.
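    For contrast, here is a minimal sketch of what per-user isolation looks like, assuming a simple store keyed by an authenticated user ID (the class and method names are invented for illustration). If every read goes through the caller's own partition, one user's conversations can never land in another user's context.

```python
from collections import defaultdict

class ChatHistoryStore:
    """Toy per-user history store. In a real system each partition
    would live in physically separate storage; a dict keyed by
    user_id stands in for that here."""

    def __init__(self):
        self._partitions = defaultdict(list)

    def append(self, user_id: str, message: str) -> None:
        self._partitions[user_id].append(message)

    def context_for(self, user_id: str) -> list:
        # Only this user's partition is ever visible to the model.
        return list(self._partitions[user_id])
```

    The leak described in the article is exactly what this design rules out: `context_for("bob")` can never return anything Alice typed.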

    I really think there is a great deal more fuckery going on than what OpenAI shows to the public. Regardless of the technology, there is always a ton of fakery going on at any company.