Incident 399: Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

Description: Meta AI trained and hosted a scientific paper generator that sometimes produced inaccurate science and responded to queries about certain topics and groups in ways likely to produce offensive or harmful content.

Alleged: Meta AI, Meta, and Facebook developed and deployed an AI system, which harmed Meta AI, Meta, Facebook, and minority groups.

Incident Stats

Incident ID: 399
Incident Date: November 15, 2022
Editor: Sean McGregor
New Meta AI demo writes racist and inaccurate scientific literature, gets pulled · 2022

On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to "store, combine and reason about scientific knowledge." While intended to accelerate writing scientific literature, adversarial users running tests found i…

Why Meta’s latest large language model survived only three days online · 2022

On November 15, Meta unveiled a new large language model called Galactica, designed to assist scientists. But instead of landing with the big bang Meta hoped for, Galactica died with a whimper after three days of intense criticism. Yeste…

Thread by @osanseviero on Thread Reader App · 2022

🧵 Some thoughts about the recent release of Galactica by @MetaAI (everything here is my personal opinion) 👀

Let's start with the positive / What went well

[1] The model was released and Open Source*

Contrary to the trend of very interest…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of an incident under the first similar incident submitted to the database. Unlike other submission types, variants are not required to be supported by reporting external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity
