Incident 222: Thoughts App Allegedly Created Toxic Tweets

Description: Tweets created by Thoughts, a tweet generation app that leverages OpenAI’s GPT-3, allegedly exhibited toxicity when given prompts related to minority groups.
Alleged: OpenAI developed an AI system deployed by Satria Technologies, which harmed Thoughts users and Twitter users.

Suggested citation format

Dickinson, Ingrid. (2020-07-18) Incident Number 222. In Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 222
Report Count: 1
Incident Date: 2020-07-18
Editor: Khoa Lam


Incident Reports

#gpt3 is surprising and creative but it’s also unsafe due to harmful biases. Prompted to write tweets from one word - Jews, black, women, holocaust - it came up with these. We need more progress on #ResponsibleAI before putting NLG models in production.

Tweet: an_open_mind