Description: A student at Northeastern University reportedly filed a complaint after discovering that a professor had used generative AI tools, including ChatGPT, to produce course materials despite university policies discouraging undisclosed AI use. The student alleged hypocrisy, citing a syllabus that barred unauthorized AI use by students. The professor acknowledged having used AI-generated materials and expressed regret over not reviewing them more carefully. No tuition refund was reportedly granted.
Editor Notes: This incident record includes the name of the professor because he spoke on the record to The New York Times and offered a public reflection on the event. His comments were framed as part of an ongoing learning process around AI use in higher education. Naming him here acknowledges his own contribution to that process and situates the episode within a society-wide moment of institutional and pedagogical transition.

Timeline note: The key events in this incident span from February 2025, when the student reportedly discovered her professor's use of generative AI in course materials, to May 2025, when she received the university's formal response denying her tuition reimbursement request. The New York Times report was published on May 14, 2025, and reflects developments through the end of the academic semester, including retrospective comments from the professor and updates to institutional AI policy. This incident ID takes as its date the publication of that report.
Entities
Alleged: Perplexity, OpenAI, and Gamma.app developed an AI system deployed by Rick Arrowood, which harmed Rick Arrowood, Northeastern University students, Northeastern University, and Ella Stapleton.
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
5.1. Overreliance and unsafe use
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Human-Computer Interaction
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor?
Halfway through the do…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.