Summary: A child protection worker in Victoria used ChatGPT to draft a report submitted to the Children's Court. The AI-drafted report contained inaccuracies and downplayed the risks to the child, and confidential information was shared with OpenAI, resulting in a privacy breach.
Editor Notes: Reconstructing the timeline of events: Between July and December 2023, according to reporting, nearly 900 employees of Victoria's Department of Families, Fairness and Housing (DFFH), representing 13% of the workforce, accessed ChatGPT. In early 2024, a case worker used ChatGPT to draft a child protection report submitted to the Children's Court. This report contained significant inaccuracies, including the misrepresentation of personal details and a downplaying of risks to the child, whose parents had been charged with sexual offenses. Following this incident, an internal review of the case worker's unit revealed that over 100 other cases showed signs of potential AI involvement in drafting child protection documents. On September 24, 2024, the department was instructed to ban the use of public generative AI tools and to notify staff accordingly, but the Office of the Victorian Information Commissioner (OVIC) found this directive had not been fully implemented. The next day, September 25, 2024, OVIC released its investigation findings, confirming the inaccuracies in the ChatGPT-generated report and outlining the risks associated with AI use in child protection cases. OVIC issued a compliance notice requiring DFFH to block access to generative AI tools by November 5, 2024.
Alleged: OpenAI developed an AI system deployed by Department of Families, Fairness and Housing, Government of Victoria and Employee of Department of Families, Fairness and Housing, which harmed Unnamed child and Unnamed family of child.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline
The following is a copy of the report's executive summary. To read the full report, download the PDF provided by OVIC.
Executive Summary
Background
In December 2023, the Department of Families, Fairness and Housing (DFFH) reported a privacy incident to the Office of the Victorian Information Commissioner (OVIC), explaining that a child protection worker (CPW1) had used ChatGPT2 while drafting a Protection Application report (PA report). This report concerned sexual offences…