Description: Russian-aligned outlet EADaily published an unverified claim that two British Army colonels had been captured in Ukraine. The claim was supported by a reportedly AI-generated image containing visual and textual anomalies, and the named individuals ("Edward Blake" and "Richard Carroll") reportedly do not appear in UK military records. No credible evidence supports the claim, which was allegedly amplified by aligned websites and political figures as part of a broader disinformation pattern.
Entities
Alleged: Unknown AI image generator developer developed an AI system deployed by Russian-aligned actors, EADaily, Disinformation spreaders, and Misinformation spreaders, which harmed Relations between the United Kingdom and Russia, Media integrity, General public of Ukraine, General public of Russia, Epistemic integrity, Truth, and National security and intelligence stakeholders.
Alleged implicated AI system: Unknown AI image generator
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
A claim that two British Army colonels were captured by Russian special forces in Ukraine has circulated online this week, drawing attention on social media and fringe platforms.
The story, which lacks any independent verification, appears …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.