AI Incident Database

Report 2489

Associated Incidents

Incident 443 · 25 Reports
ChatGPT Abused to Develop Malicious Software

The dark side of ChatGPT: generating functional malware
muycomputerpro.com · 2023

OpenAI faces a new challenge that is difficult to stop: the malicious use of ChatGPT. A study by the security company Check Point Research reveals that the beta version of this AI chatbot began to be used in cybercrime forums to write both malicious software and phishing emails for purposes such as espionage, malware, and ransomware.

In this way, any user, without being an expert, could manipulate scripts to carry out [cyberattacks with ChatGPT](https://arstechnica.com/information-technology/2023/01/chatgpt-is-enabling-script-kiddies-to-write-functional-malware/). Script kiddies emerged and roamed freely, and the tool even became a favorite on the dark web.

ChatGPT's First Malicious Code

A forum participant posted the script of the first code generated using ChatGPT. The Python code combined various cryptographic functions for encryption and decryption. One part of the script generated a key using elliptic-curve cryptography and the ed25519 curve; another used a hardcoded password to encrypt system files with the Blowfish and Twofish algorithms. A third part used RSA keys, digital signatures, message signing, and the BLAKE2 hash function to compare different files.
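The file-comparison step described above is the most benign piece of that script and can be sketched with Python's standard library, which ships BLAKE2 in `hashlib`. The function names here are illustrative, not taken from the forum post:

```python
import hashlib

def blake2_digest(path, chunk_size=8192):
    """Hash a file incrementally with BLAKE2b so large files fit in memory."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_identical(path_a, path_b):
    """Two files are treated as identical iff their BLAKE2b digests match."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Hashing in fixed-size chunks is the usual idiom: it gives the same digest as hashing the whole file at once, without reading it fully into memory.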

As a result, the script could encrypt a single file and append a message authentication code (MAC) to the end of it. It could also encrypt a hardcoded path and decrypt a list of files it received as an argument.
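Appending a MAC to a file and checking it later, as the script reportedly did, has a standard defensive form using Python's `hmac` module. This sketch assumes HMAC-SHA256 with a 32-byte tag; the forum code's actual MAC construction is not specified in the article:

```python
import hashlib
import hmac

TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag

def append_mac(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the payload's integrity can be verified."""
    return data + hmac.new(key, data, hashlib.sha256).digest()

def verify_mac(blob: bytes, key: bytes) -> bytes:
    """Split off the trailing tag, recompute it, and return the payload if valid."""
    data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC mismatch: data was modified")
    return data
```

`hmac.compare_digest` is used instead of `==` to keep the comparison constant-time, the idiomatic way to avoid timing side channels when checking tags.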

In the same forum, according to the Check Point Research report, another participant posted two code examples written with ChatGPT. The first was a Python script for post-exploitation information theft: it searched for specific files, copied them to a temporary directory, compressed them, and then sent them to a server controlled by the attacker.

The second piece of malware, written in Java, downloaded PuTTY, a common SSH and telnet client, and then ran it covertly on the system using PowerShell.

Those who post in these forums for such purposes are, in effect, training new script kiddies to use ChatGPT for malware, since the generated code can be easily adapted once specific syntax and scripting problems are resolved.

The Market for Stolen Data

The report also documents a third criminal use of ChatGPT: creating an automated marketplace for illegally buying and selling stolen passwords, bank card details, and other illicit services. To handle payments, the script used a programming interface to retrieve current cryptocurrency prices and set the price of each transaction from them.
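Once a current exchange rate has been fetched from such an interface, the pricing step itself is a simple conversion. The sketch below is hypothetical (the function name and the 1 BTC = 20,000 USD rate are illustrative, not from the report) and uses `decimal` to avoid float rounding surprises with money:

```python
from decimal import Decimal, ROUND_UP

def price_in_crypto(usd_price, usd_per_coin):
    """Convert a USD listing price into a coin amount at the given rate,
    rounding up to 8 decimal places so the amount never undershoots."""
    amount = Decimal(str(usd_price)) / Decimal(str(usd_per_coin))
    return amount.quantize(Decimal("0.00000001"), rounding=ROUND_UP)

# Hypothetical rate: at 20,000 USD per coin, a 50 USD listing costs 0.0025 coins.
print(price_in_crypto(50, 20000))  # → 0.00250000
```

Eight decimal places matches the smallest Bitcoin unit (one satoshi), which is why that precision is the conventional choice for this kind of conversion.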

In early November, Check Point researchers had themselves tested ChatGPT, using it to generate a phishing email with a malicious script hidden in an attached Excel file. As they repeatedly asked the chatbot to regenerate the code, its quality, and therefore its malicious effect, improved.

They later incorporated Codex, a more advanced artificial-intelligence service, to develop other malware, including a reverse shell and scripts for port scanning, sandbox detection, and compiling their Python code into a Windows executable.
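Of those capabilities, a basic TCP connect port scan is routine network tooling and easy to illustrate with the standard library. This is a generic sketch, not the researchers' code; `scan_ports` is an illustrative name, and the demo scans only a listener it opens itself on localhost:

```python
import socket
from contextlib import closing

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo against a listener we control: open a local server socket, then scan it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
found = scan_ports("127.0.0.1", [port])
server.close()
print(found == [port])  # → True
```

`connect_ex` returns an error code instead of raising, which keeps the loop simple: a zero return means the port accepted the connection.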

Can this abuse be stopped? That is the question users are asking, since OpenAI strictly prohibits use of the tool for illegal and malicious purposes. Meanwhile, queries to VirusTotal to check the detections of a specific cryptographic hash are becoming increasingly common.

The future is uncertain. For now, ChatGPT is expected to continue being used for scientific study and experimentation, but hacking remains a latent threat.


2024 - AI Incident Database
