Summary: Pennsylvania State Police Sergeant Stephen Kamnik was charged with storing thousands of pornographic files, including content created with deepfake AI software, on his workplace computer.
Alleged: Unknown deepfake technology developers developed an AI system deployed by Stephen Kamnik, which harmed victims whose images were manipulated and the Pennsylvania State Police.
Alleged implicated AI systems: Unknown deepfake apps
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
1.2. Exposure to toxic content
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional