Incident Status
GMF Taxonomy Classifications
Taxonomy Details: Known AI Goal Snippets
Snippet Text: "Without sufficient guardrails, models like DALL·E 2 could be used to generate a wide range of deceptive and otherwise harmful content, and could affect how people perceive the authenticity of content more generally."
Related Classifications: Visual Art Generation
CSETv1_Annotator-1 Taxonomy Classifications
Taxonomy Details: Incident Number
179
Incident Reports
Report Timeline
Summary
Below, we summarize initial findings on potential risks associated with DALL·E 2, and mitigations aimed at addressing those risks as part of the ongoing Preview of this technology. We are sharing these findings in order to enable br…
You may have seen some weird and whimsical pictures floating around the internet recently. There’s a Shiba Inu dog wearing a beret and black turtleneck. And a sea otter in the style of “Girl with a Pearl Earring” by the Dutch painter Vermee…
Researchers experimenting with OpenAI's text-to-image tool, DALL-E 2, noticed that it seems to be covertly adding words such as "black" and "female" to image prompts, apparently in an effort to diversify its output.
Artificial intelligence fi…
Variants
Similar Incidents