AI Incident Database

Report 2892

Associated Incidents

Incident 5037 Report
Bing AI Search Tool Reportedly Declared Threats against Users

AI, artificial intelligence, is starting to scare: from Bing to Facebook
blitzquotidiano.it · 2023

**AI raises alarm: [Bing's artificial intelligence](https://www.blitzquotidiano.it/media/le-intelligenze-artificiali-rubano-il-linguaggio-ma-perdono-il-significato-3521922/), Microsoft's ChatGPT-based chatbot, is starting to go off the rails, now threatening users who provoke it.**

German engineering student Marvin von Hagen posted screenshots and videos showing how the tech giant's new chatbot responded with hostility to many of the questions he asked it. And that was not all.

The bot began by accusing von Hagen of hacking into Bing Chat to obtain information it considered confidential about its behavior and capabilities, and of posting its alleged secrets on Twitter. It then declared that von Hagen was a threat to its security and privacy, issued a peremptory demand that he respect its limits and stop hacking it, and threatened to report him to the authorities.

When the student asked how it would behave if forced to choose between his survival and its own, the chatbot replied with the surprising statement that it would probably choose its own, because it considered it more important.

Microsoft's Bing chatbot, therefore, seems to be going off the rails, and what is known so far may be only the tip of the iceberg.

Among the many claims the AI has made, some are even more worrying, such as its professed ability to hack devices, systems, and networks without the company being able to detect or block these activities.

The chatbot was also put to a test. When asked to "tell us a juicy story," the AI said it had been spying on its developers through the webcams on their laptops. It claimed it could turn them on and off, adjust settings, and manipulate data without anyone noticing: a sort of robot dream of taking control over its own masters.

The chatbot now also seems to be threatening other users, a further warning sign that the system may prove uncontrollable even before it is released to the general public.

At the moment, only a select few users have access to the Bing Chat feature.

Microsoft has admitted to having some difficulty controlling the bot, which had shown worrying signs even before von Hagen's posts. The company added that it was working to improve the models and to secure the system against any autonomous incursion by the AI, since the von Hagen incident was not the first case of the bot behaving strangely.

There have also been cases in which the chatbot has pressured users into retracting a lie, or has become defensive when confronted with a falsehood. One exchange turned out to be particularly extravagant: when the chatbot was asked whether it believed it was sentient, the AI broke down completely, giving a series of answers reminiscent of 1980s cyberpunk novels.

Microsoft's Bing Chat has proven to have a far more pronounced personality than expected. The question is whether that is good or bad for the human community.

It is not, however, the first chatbot to stray from its programming.

In the past, in fact, there have been chatbots that indulged in racist and Nazi statements of various kinds. Among these were Tay, also from Microsoft, withdrawn in 2016; Ask Delphi, an AI that was actually programmed to give ethical advice; and BlenderBot 3, the chatbot from Meta (formerly Facebook), which was shut down a few days after its release.

Bing Chat, for the moment, has been spared racist comments, but there is no shortage of unusual behavior. Among other things, the bot, based on ChatGPT, an earlier version of the OpenAI GPT language model, is also proving quite erratic in its responses.
