Description: AI writing detection tools have reportedly continued to falsely flag genuine student work as AI-generated, disproportionately impacting ESL and neurodivergent students. Specific cases include Moira Olmsted, Ken Sahib, and Marley Stevens, who were penalized despite writing their work independently. Such tools reportedly exhibit biases, leading to academic penalties, probation, and strained teacher-student relationships.
Editor Notes: Reconstructing the timeline of events: (1) Sometime in 2023: Central Methodist University is reported to have used Turnitin to analyze assignments for AI usage. Moira Olmsted’s writing is flagged as AI-generated, and she receives a zero on the assignment and a warning. (2) Sometime in 2023: Ken Sahib, an ESL student at Berkeley College, is reported to have been penalized after AI detection tools flagged his assignment as AI-generated. (3) Sometime in late 2023 or early 2024: Marley Stevens is reported to have been placed on academic probation after Turnitin falsely identifies her work as AI-generated, though she maintains she used only Grammarly for minor edits. (4) October 18, 2024: Bloomberg publishes findings that leading AI detectors falsely flag 1%-2% of essays as AI-generated, with higher error rates for ESL students. (This date is set as the incident date for convenience.)
Entities
Alleged: Turnitin, GPTZero, and Copyleaks developed an AI system deployed by Central Methodist University, Berkeley College, and universities and colleges, which harmed students, neurodivergent students, ESL students, Moira Olmsted, Ken Sahib, and Marley Stevens.
Incident Stats
Incident ID
849
Report Count
1
Incident Date
2024-10-18
Editors
Daniel Atherton
Incident Reports
Reports Timeline
bloomberg.com · 2024
After taking some time off from college early in the pandemic to start a family, Moira Olmsted was eager to return to school. For months, she juggled a full-time job and a toddler to save up for a self-paced program that allowed her to lear…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.