AI Incident Database

Report 2540

Associated Incidents

Incident 339 · 14 Reports
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams

Professor catches student cheating with ChatGPT: ‘I feel abject terror’
nypost.com · 2022

Welcome to the new age of academic dishonesty.

A college professor in South Carolina is sounding the alarm after catching a student using ChatGPT — a new artificial intelligence chatbot that can quickly digest and spit out written information about a vast array of subjects — to write an essay for his philosophy class.

The weeks-old technology, released by OpenAI and readily available to the public, comes as yet another blow to higher learning, already plagued by rampant cheating.

“Academia did not see this coming. So we’re sort of blindsided by it,” Furman University assistant philosophy professor Darren Hick told The Post. “As soon as I reported this on Facebook, my [academic] friends said, ‘Yeah, I caught one too.'”

Earlier this month, Hick had instructed his class to write a 500-word essay on the 18th-century philosopher David Hume and the paradox of horror, which examines how people can get enjoyment from something they fear, for a take-home test.

But one submission, he said, featured a few hallmarks that “flagged” AI usage in the student’s “rudimentary” answer.

“It’s a clean style. But it’s recognizable. I would say it writes like a very smart 12th-grader,” Hick said of ChatGPT’s written responses to questions.

“There’s particular odd wording used that was not wrong, just peculiar … if you were teaching somebody how to write an essay, this is how you tell them to write it before they figure out their own style.”

Despite having a background in the ethics of copyright law, Hick said proving that the paper was concocted by ChatGPT was nearly impossible.

First, the professor plugged the suspect text into software made by the producers of ChatGPT to determine if the written response was formulated by AI.

He was given a 99.9% likely match. But unlike in standard plagiarism detection software — or a well-crafted college paper — the software offered no citations.

Hick then tried producing the same essay by asking ChatGPT a series of questions he imagined his student had asked. The move yielded similar answers, but no direct matches, since the tool formulates unique responses.

Ultimately, he confronted the student, who copped to using ChatGPT and failed the class as a result. The undergrad was also turned over to the school’s academic dean.

But Hick fears that other cases will be almost impossible to prove, and that he and his colleagues will soon be inundated with fraudulent work, as universities like Furman struggle to establish formal academic protocols for the developing technology.

For now, Hick says that the best he can do is surprise suspected students with impromptu oral exams, hoping to catch them off-guard without their tech armor.

“What’s going to be the difficulty is that, unlike convincing a friend to write your essay because they took the class before or paying somebody online to write the essay for you, this is free and instantaneous,” he said.

More frightening still, Hick fears that as ChatGPT keeps learning, irregularities in its output will become less and less obvious on a student’s paper.

“This is learning software — in a month, it’ll be smarter. In a year, it’ll be smarter,” he said. “I feel the mix myself between abject terror and what this is going to mean for my day-to-day job — but it’s also fascinating, it’s endlessly fascinating.”

