Description: Meta launched a stand-alone AI app with a "Discover" feed allowing users to share conversations with its chatbot. Multiple reports indicate that some users may have inadvertently published highly personal interactions, including audio recordings, medical questions, legal concerns, and intimate relationship disclosures. While Meta states that sharing is opt-in, the feature's design and labeling may have led to user confusion about what would be publicly visible.
Entities
Alleged: Meta and Meta AI developed and deployed an AI system, which harmed Meta users.
Alleged implicated AI system: Meta AI
Incident Stats
Incident ID
1101
Report Count
3
Incident Date
2025-04-29
Editors
Daniel Atherton
Incident Reports
Reports Timeline
It's 10 p.m., and I'm in bed on my phone, listening to an audio clip of a woman asking an AI chatbot for medical advice about her sick pet turtle. As someone who loves to lurk in other people's business, I'm in heaven. But how did we get he…
Meta's stand-alone AI app launched in late April, and like many people, my first thought was, "Huh?" After trying it out a little, my next thought was, "Oh no." A few weeks later, after the initial surge of curious new users, my thought is …
A man wants to know how to help his friend come out of the closet. An aunt struggles to find the right words to congratulate her niece on her graduation. And one guy wants to know how to ask a girl --- "in Asian" --- if she's interested in …
Variants
A "variant" is an AI incident that shares the same causes, harms, and AI system as a known case. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.