students

Incidents Harmed By

Incident 466 · 7 Reports
AI-Generated-Text-Detection Tools Reported for High Error Rates

2023-01-03

Models developed to detect whether text-generation AI was used, such as AI Text Classifier and GPTZero, reportedly had high rates of false positives and false negatives, such as mistakenly flagging Shakespeare's works.

Incident 404 · 2 Reports
Sound Intelligence's Aggression Detector Misidentified Innocuous Sounds

2019-06-25

Sound Intelligence's "aggression detection" algorithm, deployed in schools, reportedly had high rates of false positives, misclassifying laughing, coughing, cheering, and loud discussions as aggression.

Incident 239 · 1 Report
Algorithmic Teacher Evaluation Program Failed Student Outcome Goals and Allegedly Caused Harm Against Teachers

2009-09-01

The Gates-Foundation-funded Intensive Partnerships for Effective Teaching Initiative's algorithmic program for assessing teacher performance reportedly failed to achieve its goals for student outcomes, particularly for minority students, and was criticized for potentially harming teachers.

Incidents involved as Deployer

Incident 339 · 14 Reports
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams

2022-09-15

Students were reportedly using open-source text-generative models such as GPT-3 and ChatGPT to complete school assignments and exams, such as writing reports and essays.

Related Entities