AI Incident Database

Report 3072

Associated Incidents

Incident 543 · 17 Reports
Purported AI-Generated Image of Explosion Near Pentagon Reportedly Triggers Brief Market Dip and Public Confusion

An AI Picture Just Caused The Stock Market To Crash
giantfreakinrobot.com · 2023

An AI-generated photo of an explosion near the Pentagon in Washington, D.C., briefly sent the United States stock market falling. The image was shared by a verified Twitter account called Bloomberg Feed, accompanied by a misleading caption: “Large Explosion Near the Pentagon Complex in Washington, DC – Initial Report.”

Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p

— Andy Campbell (@AndyBCampbell) May 22, 2023

Twitter verification no longer signals authenticity, since anyone can pay for a blue checkmark, but the AI-generated Pentagon image still had real-world consequences. According to The Byte, the “news” went viral after a user with over 650,000 followers shared the photo at 10:06 am; four minutes later, the stock market had fallen by 0.26%.

The Arlington County Fire Department acted quickly to quell the panic, stating that the AI Pentagon image was indeed a fake. “There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public,” the department said on social media.

@PFPAOfficial and the ACFD are aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public. pic.twitter.com/uznY0s7deL

— Arlington Fire & EMS (@ArlingtonVaFD) May 22, 2023

While law enforcement hasn’t confirmed whether the image was made using artificial intelligence tools, it features some hallmarks of AI-generated imagery: the columns on the building in the hoax photo vary in size, and the fence blends into the sidewalk in places. The post, which was also shared by a Russian state-media Twitter account with more than 3 million followers, has since been deleted.

Fortunately, the markets swiftly bounced back once the AI Pentagon photo was revealed to be a hoax. Bitcoin also dipped briefly as the fake news circulated, dropping to $26,500 before stabilizing; at the time of writing, the cryptocurrency was trading at $26,882.

This form of online deception has heightened concerns among critics of unregulated AI development. Experts in the field have previously warned that malicious actors could misuse advanced AI systems to spread misinformation and sow chaos in online communities.

The AI-generated Pentagon image isn’t the first such hoax. Other fake viral images that have misled the public include faux photos of Pope Francis in a Balenciaga jacket, a picture of former President Donald Trump being arrested, and deepfake videos of celebrities such as Elon Musk endorsing cryptocurrency scams. Fake X-rated video footage of Harry Potter star Emma Watson has also surfaced online.

As a result, hundreds of tech experts have called for a six-month pause on advanced AI development until proper safety guidelines are established. Dr. Geoffrey Hinton, widely known as the “Godfather of AI,” left his role at Google so he could speak openly about the technology’s potential risks without implicating his former employer.

Instances of misinformation, like the AI Pentagon image, add fuel to the ongoing discourse about establishing a comprehensive ethical and regulatory framework for artificial intelligence. As this technology becomes an increasingly prominent tool in the hands of disinformation agents, the consequences can be more chaotic than a temporary stock market crash.

A lack of transparency, accountability, and ethical considerations could amplify these risks. But until some form of regulation is implemented worldwide, instances of fake news and other dangerous trends are bound to increase.

Read the Source

2024 - AI Incident Database
