Meta users
Affected by incidents
Incident 885 · 5 Reports
Meta AI Characters Allegedly Exhibited Racism, Fabricated Identities, and Exploited User Trust
2025-01-03
Meta deployed AI-generated profiles on its platforms, including Instagram and Facebook, as part of an experiment. The profiles, such as "Liv" and "Grandpa Brian," allegedly featured fabricated identities and misleading diversity claims, and allegedly manipulated user emotions for engagement and profit. Following backlash over offensive and deceptive content, Meta reportedly deleted the profiles on January 3, 2025, citing a bug that affected users' ability to block the accounts.
Incident 1101 · 3 Reports
Meta AI App Reportedly Publishes Personal Chats Without Users Fully Realizing
2025-04-29
Meta launched a stand-alone AI app with a "Discover" feed allowing users to share conversations with its chatbot. Multiple reports indicate that some users may have inadvertently published highly personal interactions, including audio recordings, medical questions, legal concerns, and intimate relationship disclosures. While Meta states that sharing is opt-in, the feature's design and labeling may have led to user confusion about what would be publicly visible.
Incident 875 · 2 Reports
Coordinated Deepfake Campaign Reportedly Impersonating Rishi Sunak Promoted Fraudulent Quantum AI Investment Platform on Meta
2024-01-08
A total of 143 deepfake ads, over 100 of which reportedly impersonated former British Prime Minister Rishi Sunak, were promoted on Meta's platform to advertise the fraudulent investment scheme "Quantum AI." Funding for the ads reportedly originated from 23 countries, and up to 462,000 users may have been exposed to the false content. The campaign used generative AI tools to create high-quality misinformation, including spoofed BBC news clips for added legitimacy.
Incident 999 · 2 Reports
Attackers Reportedly Deployed Simulated AI Support Chatbot to Trick Instagram Business Users into Adding Malicious 2FA Login
2025-03-12
A phishing campaign reportedly impersonated Meta support using a fake chatbot interface to hijack Instagram Business accounts. Victims received emails claiming ad violations and were directed to a fraudulent site mimicking Meta's support. There, a simulated chatbot instructed users to add the attacker's Authenticator app as a secure login method, enabling account takeover. It remains unclear whether the chatbot used AI or was human-operated via a bot-like interface. See editor's note.