Description: A student at Northeastern University reportedly filed a complaint after discovering that a professor had used generative AI tools, including ChatGPT, to produce course materials, despite university policies discouraging undisclosed AI use. The student alleged hypocrisy, citing a syllabus that barred unauthorized AI use by students. The professor acknowledged having used AI-generated materials and expressed regret at not reviewing them more carefully. The university reportedly denied the student's request for a tuition refund.
Editor Notes: This incident record includes the name of the professor because he spoke on the record to The New York Times and offered a public reflection on the event. His comments were framed as part of an ongoing learning process around AI use in higher education. Naming him here acknowledges his contribution to that process and situates the episode within a society-wide moment of institutional and pedagogical transition. Timeline note: The key events span from February 2025, when the student reportedly discovered her professor's use of generative AI in course materials, to May 2025, when she received the university's formal response denying her request for tuition reimbursement. The New York Times report was published on May 14, 2025, and reflects developments through the end of the academic semester, including retrospective comments from the professor and updates to institutional AI policy. This incident takes the publication date of that report as its incident date.
Entities
Alleged: OpenAI, Perplexity, and Gamma.app developed an AI system deployed by Rick Arrowood, which harmed Rick Arrowood, Ella Stapleton, Northeastern University students, and Northeastern University.
Incident Stats
Incident ID: 1071
Report Count: 1
Incident Date: 2025-05-14
Editors: Daniel Atherton
Incident Reports
Reports Timeline
In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor?
Halfway through the do…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.