Description: In China, AI tools were reportedly used to fabricate and disseminate false reports of disasters, including a landslide in Yunnan, an earthquake in Sichuan, and a fatal heart attack following a traffic stop in Chengcheng County, Shaanxi. On May 27, 2024, a real 5.0-magnitude earthquake occurred in Muli County, Sichuan, causing no casualties and only limited property damage. A subsequent social media post falsely claimed the epicenter was in Xide County, exaggerated the event's severity, and added fabricated images of extensive destruction. The individuals who deployed these false reports have since received administrative penalties from Chinese authorities.
Editor Notes: Incident 835 groups several discrete AI-related events, each reflecting the broader use of AI tools to fabricate events and spread misinformation on social media across China, similar to Incident 834. Anonymized names such as "Moumou" are used to preserve privacy in the reporting. The following is a reconstruction of the timeline of events associated with this incident ID:
(1) January 23, 2024, fabricated landslide in Yunnan: Yang Moumou allegedly used AI tools to create and share a fake news article claiming that a landslide in Yunnan had killed eight people. The misinformation was posted on social media to attract views and engagement. Authorities identified it as AI-driven misinformation, and Yang was subsequently given an administrative penalty.
(2) May 27, 2024, real earthquake in Muli County, Sichuan: A genuine 5.0-magnitude earthquake caused minor property damage but no casualties. This actual event later became the basis for an exaggerated, AI-fabricated narrative.
(3) June 9, 2024, fabricated earthquake report in Xide County, Sichuan: Luo Moumou reportedly used AI to embellish and circulate a false report about the May 27 earthquake, claiming the epicenter was in Xide County and that it had caused severe property damage and casualties. The fabricated images and videos spread quickly on social media, causing confusion and alarm. Luo received an administrative penalty for disseminating false information.
(4) June 2024 (exact date unspecified), fabricated fatal traffic incident in Chengcheng County, Shaanxi: Tian Mou used AI to create and post a story claiming that a middle-aged woman in Chengcheng County had suffered a fatal heart attack after her electric scooter was impounded during a traffic stop. The narrative was designed to provoke an emotional response and drive engagement, but it misled the public. Tian admitted to fabricating the story with AI and was issued an administrative penalty.
Alleged: Unknown deepfake technology developer and Unknown AI developers developed an AI system deployed by Yang Moumou, Tian Mou, and Luo Moumou, which harmed Yunnan general public, Xide County in Sichuan residents, Sichuan general public, Muli County in Sichuan residents, Chinese general public, Chinese citizens, and Chengcheng County Shaanxi residents.
Incident Stats
Incident ID
835
Report Count
1
Incident Date
2024-08-06
Editors
Daniel Atherton
Incident Reports
Report Timeline
baijiahao.baidu.com · 2024
In recent years, information technologies exemplified by big data and artificial intelligence (AI) have been evolving rapidly. At the same time, the synthesis capabilities of AI tools are being used to fabricate rumors, generating images, audio, and video that appear close to reality, overturning the conventional assumption that "a picture proves the truth" and making it increasingly difficult to distinguish truth from falsehood. Today, the Internet Police will help you understand the truth behind rumors spread with AI tools.
[Definition of AI rumor-making]
AI rumor-making refers to the use of artificial intelligence technology to generate text…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting in evidence external to the Incident Database. For details, see this research paper.
Similar Incidents