• Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi

ChatGPT reportedly leaked private conversations from pharmacy customers

January 30, 2024

Another day, another ChatGPT data leak.

This time, it was login credentials and personal information belonging to a pharmacy customer on a prescription drug portal. According to Ars Technica, a user named Chase Whiteside unwittingly received chunks of someone else's conversation in response to an unrelated query and submitted them to the tech site.

“I went to make a query (in this case, help coming up with clever names for colors in a palette),” wrote Whiteside in an email. “When I returned to access moments later, I noticed the additional conversations.”

The conversations appear to be from a frustrated employee troubleshooting issues with an app (name redacted by Ars Technica) used by the pharmacy. In addition to the full text disparaging the app, the leak included a customer's username and password and the employee's store number. It can't be confirmed, but it looks like the entire feedback ticket was included in the ChatGPT response.

This isn’t the first time ChatGPT has had security problems. Hackers and researchers have discovered vulnerabilities that let them extract sensitive information, whether through prompt injection or jailbreaking.

Last March, a bug was discovered that revealed ChatGPT Plus users’ payment information. Although OpenAI has addressed issues like these, no patch can protect personal or confidential information that users themselves share with ChatGPT. That was the case when Samsung employees used ChatGPT to help with code and accidentally leaked company secrets, and it’s why many companies have banned ChatGPT usage.

User error seems to have played a part here too, but OpenAI isn’t off the hook. According to its privacy policy, input data is supposed to be anonymized and stripped of any personally identifiable information. Since even the makers of LLMs can’t always pinpoint what leads to certain outputs, this leak underscores the inherent risks of the technology.

We’ll say it again: don’t share any sensitive or personal information — especially if it’s not yours — with ChatGPT.

Topics
Artificial Intelligence
ChatGPT


    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.