Associated Incidents

OpenAI faces a new challenge that is difficult to stop: the malicious use of ChatGPT. A study by the security company Check Point Research reveals that the AI chatbot began to be used on cybercrime forums to write both software and emails for espionage, malware and ransomware campaigns.
In this way, any user, without being an expert, could adapt scripts to carry out [cyberattacks with ChatGPT](https://arstechnica.com/information-technology/2023/01/chatgpt-is-enabling-script-kiddies-to-write-functional-malware/). Script kiddies emerged and roamed freely, and the tool even became a favorite on the Dark Web.
The First Malicious Code Written with ChatGPT
A forum participant posted the first code generated using ChatGPT. The Python script combined several cryptographic functions for encryption and decryption. One part generated a key using elliptic-curve cryptography (the ed25519 curve); another used a password to encrypt system files with the Blowfish and Twofish algorithms; and a third used RSA keys for digital signatures and message signing, plus the BLAKE2 hash function to compare different files.
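None of these primitives is malicious on its own; the file-comparison step, for instance, is ordinary integrity checking. A minimal sketch of BLAKE2-based file comparison using Python's standard hashlib module (the function names are illustrative, not taken from the posted script):

```python
import hashlib

def blake2_digest(path, chunk_size=65536):
    """Compute the BLAKE2b digest of a file, reading it in chunks."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_identical(path_a, path_b):
    """Compare two files by their BLAKE2b digests."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Hashing in chunks keeps memory use constant even for large files, which is why such comparison routines read the file incrementally rather than all at once.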
As a result, the script could decrypt a single file and append a message authentication code (MAC) to the end of it. It could also encrypt a hardcoded path and decrypt a list of files it received as an argument.
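The message-authentication step can be illustrated harmlessly with Python's standard hmac module. This is a sketch of the general technique of appending and verifying a MAC tag, not the forum script itself:

```python
import hashlib
import hmac

def append_mac(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag (32 bytes) to the payload."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag

def verify_and_strip(blob: bytes, key: bytes) -> bytes:
    """Check the trailing tag and return the payload; raise on mismatch."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    # compare_digest avoids timing side channels during verification
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC mismatch")
    return data
```

The tag lets a recipient detect any tampering with the file contents before trusting them.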
In the same forum, according to the Check Point Research report, another participant posted two code samples written with ChatGPT. The first was a Python script for post-exploitation information theft: it searched for specific files, copied them to a temporary directory, compressed them, and then sent them to an attacker-controlled server.
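The find-copy-compress portion of that description is ordinary file handling. A benign sketch of such a pipeline with Python's standard library, deliberately stopping before any network transfer (names and the glob pattern are illustrative):

```python
import glob
import os
import shutil
import tempfile
import zipfile

def collect_and_compress(pattern, archive_name="collected.zip"):
    """Copy files matching `pattern` into a temp directory and zip them."""
    tmp = tempfile.mkdtemp()
    copied = []
    for path in glob.glob(pattern):
        dest = os.path.join(tmp, os.path.basename(path))
        shutil.copy2(path, dest)
        copied.append(dest)
    archive = os.path.join(tmp, archive_name)
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in copied:
            zf.write(path, arcname=os.path.basename(path))
    return archive
```

What makes the reported script malicious is not any of these calls but the final step of shipping the archive to a server the victim does not control.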
The second piece of malware, written in Java, downloaded PuTTY, a common SSH and telnet client, and then ran it covertly on the system using PowerShell.
Those who post in these forums for such purposes are, in effect, training new script kiddies to use ChatGPT for malware, since the generated code is easy to adapt once its syntax and scripting problems are resolved.
The stolen-data market
The report also describes a third criminal use of ChatGPT: building an automated marketplace for illegally buying and selling stolen passwords, bank card details and other goods. For this, the script used a third-party programming interface (API) to retrieve current cryptocurrency prices, which were then used to set the price of each transaction.
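The report does not include the marketplace code, but the pricing step it describes, converting a listing price into a cryptocurrency amount at the current exchange rate, can be sketched offline. The function below assumes the rate has already been fetched from some price API, and its names are illustrative:

```python
from decimal import ROUND_UP, Decimal

def price_in_crypto(usd_price, crypto_usd_rate, precision="0.00000001"):
    """Convert a USD listing price into a crypto amount at the given rate,
    rounding up to eight decimal places (satoshi-like precision)."""
    amount = Decimal(str(usd_price)) / Decimal(str(crypto_usd_rate))
    return amount.quantize(Decimal(precision), rounding=ROUND_UP)
```

Using Decimal rather than floats avoids binary rounding surprises, and rounding up ensures the seller never receives less than the listed USD value.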
It appears that in early November, Check Point researchers themselves tested ChatGPT by asking it to generate malware delivered via a phishing email, with the script hidden in an attached Excel file. As they repeatedly asked the chatbot to regenerate the code, its quality, and therefore its malicious effect, improved.
They later incorporated Codex, a more advanced OpenAI code-generation service, to develop further malware, including a reverse shell and scripts for port scanning, sandbox detection, and compiling their Python code into a Windows executable.
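Port scanning, at least, is a standard administrative technique that can be shown without reproducing any malware. A minimal TCP connect scanner using Python's standard socket module (a sketch of the general technique, not the Codex output the researchers obtained):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

The same primitive serves defenders auditing their own networks and attackers mapping a target, which is exactly why its generation by a chatbot is hard to police.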
Can this misuse be stopped? That is the question users are asking themselves, since OpenAI strictly prohibits use of the tool for illegal and malicious purposes; meanwhile, queries to VirusTotal to check the detections of a specific cryptographic hash are becoming increasingly common.
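Such VirusTotal lookups start from a locally computed file hash. A sketch of the client side using Python's standard hashlib; the URL format assumes VirusTotal's v3 REST convention of addressing files by hash, and the actual request (which requires an API key) is omitted:

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """SHA-256 of a file, computed in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def virustotal_lookup_url(digest):
    # Assumption: VirusTotal's v3 API addresses file reports by hash.
    return f"https://www.virustotal.com/api/v3/files/{digest}"
```

Because the hash uniquely identifies the sample, a single lookup reveals whether any engine already flags the generated code.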
The future is uncertain; for now, ChatGPT is expected to continue being used for scientific study and experimentation, but malicious hacking remains a latent threat.