EleutherAI
Incidents involved as both Developer and Deployer
Incident 996 · 2 Reports
Meta Allegedly Used Books3, a Dataset of 191,000 Pirated Books, to Train LLaMA AI
2020-10-25
Meta and Bloomberg allegedly used Books3, a dataset containing 191,000 pirated books, to train their AI models, including LLaMA and BloombergGPT, without author consent. Lawsuits from authors such as Sarah Silverman and Michael Chabon claim this constitutes copyright infringement. Books3 includes works from major publishers like Penguin Random House and HarperCollins. Meta argues its AI outputs are not "substantially similar" to the original books, but legal challenges continue.
Incidents involved as Developer
Incident 421 · 12 Reports
Stable Diffusion Allegedly Used Artists' Works without Permission for AI Training
2022-11-20
The text-to-image model Stable Diffusion reportedly used artists' original works without permission as AI training data.
Incident 423 · 5 Reports
Lensa AI Produced Unintended Sexually Explicit or Suggestive "Magic Avatars" for Women
2022-11-22
Lensa AI's "Magic Avatars" were reportedly generating sexually explicit and sexualized features disproportionately for women and Asian women despite not submitting any sexual content.
Incident 451 · 5 Reports
Stable Diffusion's Training Data Contained Copyrighted Images
2022-10-16
Stability AI reportedly scraped copyrighted images owned by Getty Images for use as training data for the Stable Diffusion model.
Incident 529 · 3 Reports
Stable Diffusion Exhibited Biases for Prompts Featuring Professions
2022-08-22
Stable Diffusion reportedly posed risks of bias and stereotyping along gender and cultural lines for prompts containing descriptors and professions.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity and the deployer is another entity, they are marked as related entities.
Stability AI
Incidents involved as both Developer and Deployer
- Incident 421 · 12 Reports
Stable Diffusion Allegedly Used Artists' Works without Permission for AI Training
- Incident 451 · 5 Reports
Stable Diffusion's Training Data Contained Copyrighted Images