AI Incident Database

Incident 843: Generative AI Plagiarism Incident at Hingham High School Reportedly Tied to Inaccurate Citation Outputs from Grammarly AI

Description: In December 2023, two Hingham High School students ("RNH" and an unnamed classmate) reportedly used Grammarly to generate a script for an AP U.S. History project. The AI-generated text included fabricated citations to nonexistent books, which the students copied and pasted without verifying them or acknowledging their use of AI. This violated the school's academic integrity policies and led to disciplinary action. RNH's parents later sued the school district, but a federal court ruled in favor of the school.
Editor Notes: The incident itself occurred sometime in December 2023. The court ruling was published on November 20, 2024. It can be read here: https://fingfx.thomsonreuters.com/gfx/legaldocs/lbvgjjqnkpq/11212024ai_ma.pdf.


Entities

Alleged: Grammarly developed an AI system deployed by Hingham High School students and Hingham High School student RNH, which harmed Hingham High School students, Hingham High School student RNH, Hingham High School, and Academic integrity.

Incident Stats

Incident ID
843
Report Count
2
Incident Date
2024-11-20
Editors
Daniel Atherton
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI

5.1. Overreliance and unsafe use

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

5. Human-Computer Interaction

Entity

Which, if any, entity is presented as the main cause of the risk

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal

Intentional

Incident Reports

Reports Timeline

Massachusetts parents sue school district over student receiving 'D' after using AI for social studies project
foxnews.com · 2024

The parents of a Massachusetts high school senior who used artificial intelligence (AI) for a social studies project have filed a lawsuit against his teachers and the school after their son received detention and a "D" grade.

"He’s been acc…

School did nothing wrong when it punished student for using AI, court rules
arstechnica.com · 2024

A federal court yesterday ruled against parents who sued a Massachusetts school district for punishing their son who used an artificial intelligence tool to complete an assignment.

Dale and Jennifer Harris sued Hingham High School officials…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

New Jersey Police Wrongful Arrested Innocent Black Man via FRT
Jan 2019 · 4 reports

Defamation via AutoComplete
Apr 2011 · 28 reports

Detroit Police Wrongfully Arrested Black Man Due To Faulty FRT
Jan 2020 · 11 reports


2024 - AI Incident Database