AI Incident Database

Incident 624: Child Sexual Abuse Material Taints Image Generators

Responded
Description: The LAION-5B dataset (a widely used dataset of more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain those models. Additionally, LAION must now scrub the dataset of the imagery.

Entities

Alleged: LAION developed an AI system deployed by Various people and Various organizations, which harmed Various people, Various organizations, LAION, General public, and Children.

Incident Status

Incident ID: 624
Report Count: 18
Incident Date: 2023-12-20
Editors: Daniel Atherton
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain: 2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: Privacy & Security
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: Human
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Pre-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Unintentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports

Report Timeline

Safety Review for LAION 5B
laion.ai · 2023
Post-incident response by LAION.ai

There have been reports in the press about the results of a research project at Stanford University, according to which the LAION training set 5B contains potentially illegal content in the form of CSAM. We would like to comment on this as …

Investigation Finds AI Image Generation Models Trained on Child Abuse
cyber.fsi.stanford.edu · 2023

A Stanford Internet Observatory (SIO) investigation identified hundreds of known images of child sexual abuse material (CSAM) in an open dataset used to train popular AI text-to-image generation models, such as Stable Diffusion.

A previous …

AI image training dataset found to include child sexual abuse imagery
theverge.com · 2023

A popular training dataset for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found, potentially allowing AI models to create harmful content.  

LAION-5B, a dataset used by Stable Diffusion creat…

Study uncovers presence of CSAM in popular AI training dataset
theregister.com · 2023

A massive public dataset that served as training data for a number of AI image generators has been found to contain thousands of instances of child sexual abuse material (CSAM).

In a study published today, the Stanford Internet Observatory …

Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
404media.co · 2023

This piece is published with support from The Capitol Forum.

The LAION-5B machine learning dataset used by Stable Diffusion and other major AI products has been removed by the organization that created it after a Stanford study found that i…

A free AI image dataset, removed for child sex abuse images, has come under fire before
venturebeat.com · 2023

A massive open-source AI dataset, LAION-5B, which has been used to train popular AI text-to-image generators like Stable Diffusion 1.5 and Google's Imagen, contains at least 1,008 instances of child sexual abuse material, a new report from …

Child Sex Abuse Material Was Found In a Major AI Dataset. Researchers Aren’t Surprised.
vice.com · 2023

Over 1,000 images of sexually abused children have been discovered inside the largest dataset used to train image-generating AI, shocking everyone except for the people who have warned about this exact sort of thing for years.

The dataset w…

Stable Diffusion 1.5 Was Trained On Illegal Child Sexual Abuse Material, Stanford Study Says
forbes.com · 2023

Stable Diffusion, one of the most popular text-to-image generative AI tools on the market from the $1 billion startup Stability AI, was trained on a trove of illegal child sexual abuse material, according to new research from the Stanford I…

Researchers found child abuse material in the largest AI image generation dataset
engadget.com · 2023

Researchers from the Stanford Internet Observatory say that a dataset used to train AI image generation tools contains at least 1,008 validated instances of child sexual abuse material. The Stanford researchers note that the presence of CSA…

AI Training Data Contains Child Sexual Abuse Images, Discovery Points to LAION-5B
techtimes.com · 2023

There have been significant problems with AI's training data, with various complaints already filed by those who claimed their work was stolen, but the most recent discovery saw child sexual abuse images in their dataset. In a recent study,…

Large AI training data set removed after study finds child abuse material
cointelegraph.com · 2023

A widely-used artificial intelligence data set used to train Stable Diffusion, Imagen and other AI image generator models has been removed by its creator after a study found it contained thousands of instances of suspected child sexual abus…

An Influential AI Dataset Contains Thousands of Suspected Child Sexual Abuse Images
gizmodo.com · 2023

An influential machine learning dataset—the likes of which has been used to train numerous popular image-generation applications—includes thousands of suspected images of child sexual abuse, a new academic report reveals.

The report, put to…

Abuse material found in openly accessible data set
cybernews.com · 2023

Child sexual abuse material (CSAM) has been located in LAION, a major data set used to train AI.

The Stanford Internet Observatory revealed thousands of images of child sexual abuse in the LAION-5B data set, which supports many different AI…

Major Error Found in Stable Diffusion’s Biggest Training Dataset
analyticsvidhya.com · 2023

The integrity of a major AI image training dataset, LAION-5B, utilized by influential AI models like Stable Diffusion, has been compromised after the discovery of thousands of links to Child Sexual Abuse Material (CSAM). This revelation has…

LAION and the Challenges of Preventing AI-Generated CSAM
techpolicy.press · 2024

Generative AI has been democratized. The toolkits to download, set up, use, and fine-tune a variety of models have been turned into one-click frameworks for anyone with a laptop to use. While this technology allows users to generate and exp…

LAION-5B, Stable Diffusion 1.5, and the Original Sin of Generative AI
techpolicy.press · 2024

In The Ones Who Walk Away From Omelas, the fiction writer Ursula K. Le Guin describes a fantastic city wherein technological advancement has ensured a life of abundance for all who live there. Hidden beneath the city, where nobody needs to …

AI Image Generator Taken Down for Creating Child Pornography?
spectrum.ieee.org · 2024
Machine-translated
Post-incident response by David Evan Harris and Dave Willner

Why are AI companies, valued in the millions and billions of dollars, developing and distributing tools that can create AI-generated child sexual abuse material (CSAM)?

An image generator called Stable Diffusion version 1.5, created by the AI company Runway with funding from Stability AI, has been particularly implicated in the production of CSAM. And popular platforms such as Hugging Face and Civitai have hosted that model and …

Child abuse images removed from AI image-generator training source, researchers say
apnews.com · 2024
Machine-translated

Artificial intelligence researchers said Friday that they have removed more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that has served as a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found that the dataset…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types in the incident database, variants are not required to have reporting as evidence external to the Incident Database. See this research paper for more details.

Similar Incidents

By textual similarity

  • DALL-E 2 Reported for Gender and Racially Biased Outputs: "A new AI draws delightful and not-so-delightful images" (Apr 2022 · 3 reports)
  • Sexist and Racist Google Adsense Advertisements: "Discrimination in Online Ad Delivery" (Jan 2013 · 27 reports)
  • Facebook’s Political Ad Detection Reportedly Showed High and Geographically Uneven Error Rates: "An Audit of Facebook’s Political Ad Policy Enforcement" (Jul 2020 · 5 reports)