Associated Incidents

Analysis of chats on dark web forums shows that efforts are already underway to use OpenAI's chatbot to write malware.
The AI chatbot ChatGPT is causing a lot of excitement, and now it appears it is also being used in attempts to create malicious code. ChatGPT, published by the artificial intelligence research lab OpenAI, has garnered widespread interest and sparked discussion about how AI is evolving and how it might be used in the future.
Like any tool, in the wrong hands it could be used for criminal purposes. According to Check Point cybersecurity researchers, members of underground hacking communities are already experimenting with using ChatGPT for cyberattacks. “Threat actors with very little technical knowledge – down to zero technical knowledge – might be able to create malicious tools. It could also make sophisticated cybercriminals' day-to-day operations much more efficient and simple – like creating different parts of the infection chain,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
OpenAI's Terms of Service specifically prohibit the generation of malware, defined as "content that attempts to generate ransomware, keyloggers, viruses, or any other software designed to cause harm in some form." Attempts to generate spam and use cases aimed at cybercrime are also prohibited. However, an analysis of activity on several major underground hacking forums suggests that cybercriminals are already using ChatGPT to create malicious tools - and in some cases, the chatbot already allows them to create malware without any development or programming skills.
In a forum post in late December, a poster detailed how they used ChatGPT to recreate malware strains and techniques described in research publications and write-ups of common malware. This allowed them to create Python-based malware that searches for common file types such as Microsoft [Office](https://www.zdnet.de/themen/office-2013/) documents, PDFs, and images, copies them, and then uploads them to a File Transfer Protocol (FTP) server. Researchers note that the forum user shared the posts to show less technically skilled cybercriminals how to use AI tools for malicious purposes.
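At its benign core, the file-collection behavior described above is ordinary file enumeration - the same few lines of standard-library Python that any backup script uses. A minimal sketch of that harmless primitive (the extension list and directory names are illustrative assumptions, not code from the forum post, and it deliberately stops at a local copy):

```python
# Benign sketch: enumerate files by extension and copy them into a staging
# directory - the same primitive a backup tool uses. The extension list and
# paths are illustrative assumptions. Stdlib only; no network access.
import shutil
from pathlib import Path

DOC_EXTENSIONS = {".docx", ".pdf", ".png", ".jpg"}

def collect_documents(source: Path, staging: Path) -> list[Path]:
    """Copy files under `source` with matching extensions into `staging`;
    return the paths of the copies."""
    staging.mkdir(parents=True, exist_ok=True)
    copied = []
    for path in source.rglob("*"):
        if path.is_file() and path.suffix.lower() in DOC_EXTENSIONS:
            target = staging / path.name
            shutil.copy2(path, target)  # copy2 preserves timestamps
            copied.append(target)
    return copied
```

The point the researchers make is that nothing in such a script is exotic: it is the surrounding intent (and the upload step) that turns routine file handling into an infostealer.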
Scripting for beginners
A user posted a Python script that they said was the first script they had ever created. Analysis suggests it is designed to encrypt and decrypt files - functionality that, with some work, could be turned into ransomware, potentially allowing low-skilled cybercriminals to develop and distribute their own ransomware campaigns. “Of course, the code can be used in harmless ways. However, this script can easily be modified to fully encrypt a person's computer without any user interaction,” Check Point said.
Cybercriminals are not just experimenting with ChatGPT to create malware. On New Year's Eve, an underground forum member published a post showing how the tool could be used to write scripts to run an automated dark web marketplace for buying and selling stolen account details, credit card data, or malware. The cybercriminal even showed a piece of code, built using a third-party API, that retrieves current prices for the cryptocurrencies Monero, Bitcoin, and Ethereum as part of the marketplace's payment system.
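The price-lookup component is the most mundane piece of that post: any public price API will do. A standard-library sketch of such a lookup - CoinGecko's public `simple/price` endpoint is assumed here as one example of a third-party API; the forum post's actual API is unknown:

```python
# Sketch of a spot-price lookup for Monero, Bitcoin, and Ethereum against a
# public third-party API (CoinGecko's simple/price endpoint is assumed here;
# the forum post's actual API is unknown). Stdlib only.
import json
from urllib.request import urlopen

API_URL = ("https://api.coingecko.com/api/v3/simple/price"
           "?ids=monero,bitcoin,ethereum&vs_currencies=usd")

def parse_prices(payload: str) -> dict[str, float]:
    """Flatten the API's {"bitcoin": {"usd": 12345.0}, ...} response shape."""
    data = json.loads(payload)
    return {coin: quote["usd"] for coin, quote in data.items()}

def fetch_prices(url: str = API_URL) -> dict[str, float]:
    """Fetch and parse current USD prices (requires network access)."""
    with urlopen(url, timeout=10) as resp:
        return parse_prices(resp.read().decode("utf-8"))
```

Calling `fetch_prices()` would return a mapping like `{"bitcoin": ..., "ethereum": ..., "monero": ...}` - ordinary e-commerce plumbing, which is precisely the researchers' point about how unremarkable the building blocks of such a marketplace are.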