Incident 399: Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

Description: Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and blocked queries on topics and groups likely to elicit offensive or harmful content.
Alleged: Meta AI, Meta, and Facebook developed and deployed an AI system, which harmed Meta AI, Meta, Facebook, and Minority Groups.

Suggested citation format

Hundt, Andrew. (2022-11-15) Incident Number 399. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
399
Report Count
4
Incident Date
2022-11-15
Editors
Sean McGregor


Incident Reports

New Meta AI demo writes racist and inaccurate scientific literature, gets pulled

On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to "store, combine and reason about scientific knowledge." While intended to accelerate writing scientific literature, adversarial users running tests found i…

Why Meta’s latest large language model survived only three days online

On November 15 Meta unveiled a new large language model called Galactica, designed to assist scientists. But instead of landing with the big bang Meta hoped for, Galactica has died with a whimper after three days of intense criticism. Yeste…

The Meta team behind Galactica argues that language models are better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” the researchers write.

This is because language models can “…

Thread by @osanseviero on Thread Reader App

🧵 Some thoughts about the recent release of Galactica by @MetaAI (everything here is my personal opinion) 👀

Let's start with the positive / What went well

[1] The model was released and Open Source*

Contrary to the trend of very interest…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
