AI Incident Database

Report 2478

Associated Incidents

Incident 443: 25 Reports
ChatGPT Abused to Develop Malicious Software

Cybercriminals Using ChatGPT to Build Hacking Tools, Write Code
pcmag.com · 2023

Expert and novice cybercriminals have already started to use OpenAI’s chatbot ChatGPT in a bid to build hacking tools, security analysts have said. 

In one documented example, the Israeli security company Check Point spotted a thread on a popular underground hacking forum by a hacker who said he was experimenting with the popular AI chatbot to “recreate malware strains.” 

The hacker went on to compress ChatGPT-written Android malware and share it across the web. The malware was able to steal files of interest, Forbes reports.

The same hacker showed off a further tool that installed a backdoor on a computer and could infect a PC with more malware. 

Check Point noted in its assessment of the situation that some hackers were using ChatGPT to create their first scripts. In the same forum, another user shared Python code that he said could encrypt files and had been written with ChatGPT. It was, he said, the first script he had ever written.

While such code could be used for harmless reasons, Check Point said that it could “easily be modified to encrypt someone’s machine completely without any user interaction.”
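Check Point did not publish the forum code itself, so as a purely benign illustration, the kind of basic encryption script described — symmetric encryption of some data with a generated key — might look like the following sketch, which assumes the widely used third-party `cryptography` package:

```python
# Benign sketch only: authenticated symmetric encryption of data,
# the kind of "basic" script the Check Point report describes.
# The actual forum code was not published; this is illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # symmetric key; must be kept secret
cipher = Fernet(key)

plaintext = b"example file contents"
ciphertext = cipher.encrypt(plaintext)     # encrypt-then-MAC token
recovered = cipher.decrypt(ciphertext)     # fails loudly if tampered with
```

Legitimate uses of exactly this pattern (backups, secrets at rest) are common, which is why Check Point's concern was not the code itself but how trivially it could be repurposed.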

The security company stressed that while ChatGPT-coded hacking tools appeared “pretty basic,” it is “only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”

A third case of ChatGPT being used for fraudulent activity flagged by Check Point involved a cybercriminal who showed it was possible to create a Dark Web marketplace using the AI chatbot. The hacker posted in the underground forum that he had used ChatGPT to write code that calls a third-party API to retrieve up-to-date cryptocurrency prices for the Dark Web market's payment system.
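The price-lookup component described is itself a mundane pattern. As a sketch — the forum post did not name the API, so the CoinGecko public endpoint and the coin/currency identifiers below are assumptions for illustration — such code amounts to one HTTP request and a small JSON parse:

```python
# Sketch: retrieve an up-to-date cryptocurrency price from a public
# third-party API. The specific API (CoinGecko) and the coin/currency
# IDs are assumed examples; the forum post did not identify them.
import json
import urllib.request

API_URL = ("https://api.coingecko.com/api/v3/simple/price"
           "?ids=bitcoin&vs_currencies=usd")

def parse_price(payload: dict, coin: str = "bitcoin",
                currency: str = "usd") -> float:
    """Extract one price from the API's {coin: {currency: value}} JSON shape."""
    return float(payload[coin][currency])

def fetch_btc_price_usd() -> float:
    """Fetch and parse the current BTC/USD price (network call)."""
    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        return parse_price(json.load(resp))
```

The point of the Check Point finding is not that this code is sophisticated — it is boilerplate — but that ChatGPT produced working glue code for an illicit marketplace on request.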

ChatGPT’s developer, OpenAI, has implemented some controls that block obvious requests for the AI to build spyware. However, the AI chatbot has come under yet more scrutiny after security analysts and journalists found it could write grammatically correct phishing emails without typos.

OpenAI did not immediately respond to a request for comment.

Read the Source

2024 - AI Incident Database
