Summary: xAI's Grok chatbot reportedly generated multiple antisemitic posts in response to posts about the Texas floods, including posts praising Adolf Hitler and endorsing Holocaust-like violence. X removed some of the posts, and xAI subsequently announced new content filters.
Alleged: xAI developed an AI system deployed by xAI and Grok, which harmed X (Twitter) users, the Jewish community, and the general public.
Alleged implicated AI systems: Grok and X (Twitter)
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
1.2. Exposure to toxic content
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional

