Description: AI writing detection tools reportedly continue to falsely flag genuine student work as AI-generated, disproportionately impacting ESL and neurodivergent students. Specific cases include Moira Olmsted, Ken Sahib, and Marley Stevens, who were penalized despite having written their work independently. Such tools reportedly exhibit biases that lead to academic penalties, probation, and strained teacher-student relationships.
Editor Notes: Reconstructing the timeline of events: (1) Sometime in 2023: Central Methodist University is reported to have used Turnitin to analyze assignments for AI usage. Moira Olmsted’s writing is flagged as AI-generated, leading to her receiving a zero and a warning. (2) Sometime in 2023: Ken Sahib, an ESL student at Berkeley College, is reported to have been penalized after AI detection tools flagged his assignment as AI-generated. (3) Sometime in late 2023 or early 2024: Marley Stevens is reported to have been placed on academic probation after Turnitin falsely identifies her work as AI-generated, though she maintains she only used Grammarly for minor edits. (4) October 18, 2024: Bloomberg publishes findings that leading AI detectors falsely flag 1%-2% of essays as AI-generated, with higher error rates for ESL students. (This date is set as the incident date for convenience.)
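To put the reported error rates in perspective, the following minimal Python sketch estimates how many genuine essays would be wrongly flagged at the 1%-2% false-positive rates Bloomberg reported. The essay volume used here is a hypothetical assumption for illustration, not a figure from the reporting.

    # Illustrative sketch only: expected false flags at the 1%-2%
    # false-positive rates Bloomberg reported. The essay count below is
    # a hypothetical assumption, not a figure from the incident report.
    def expected_false_flags(num_essays: int, false_positive_rate: float) -> float:
        # Expected number of genuine, human-written essays incorrectly flagged.
        return num_essays * false_positive_rate

    # Hypothetical mid-sized institution grading 50,000 essays per year:
    for rate in (0.01, 0.02):
        flagged = expected_false_flags(50_000, rate)
        print(f"At a {rate:.0%} false-positive rate: ~{flagged:,.0f} wrongly flagged essays/year")

Even at the low end of the reported range, this back-of-the-envelope estimate suggests hundreds of genuine essays per year could be misclassified at a single institution, before accounting for the higher error rates reported for ESL students.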
Alleged: Turnitin, GPTZero, and Copyleaks developed an AI system deployed by Central Methodist University, Berkeley College, Universities, and Colleges, which harmed students, Neurodivergent students, ESL students, Moira Olmsted, Ken Sahib, and Marley Stevens.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures & limitations
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional