Welcome to the AI Incident Database
Incident 1467: South Africa Draft National AI Policy Reportedly Included Fictitious References Believed to Be AI Hallucinations
“Govt’s draft AI policy cites fictitious references experts believe are AI hallucinations”
The national policy document intended to shape South Africa's approach to artificial intelligence (AI) may have fallen victim to one of its most widely understood pitfalls.
News24 can exclusively reveal that some of the academic journal articles cited in the Draft National AI Policy are completely fictitious. The most likely explanation for how this happened, ironically, is that an AI tool hallucinated them.
Earlier this month, Communications and Digital Technologies Minister Solly Malatsi published the draft policy in the Government Gazette for public comment. The document states that its purpose is to establish national priorities and norms for AI, and to recognise "sector-specific dynamics".
The draft policy proposes creating a raft of new institutions to regulate AI. This includes an AI Ethics Board, an AI Safety Institute, and an AI Insurance Superfund responsible for compensating people or entities "harmed by AI-driven outcomes".
News24 reviewed the 67 references cited at the bottom of the draft policy. Several of them either cited an academic journal that does not exist or an article that does not appear in an established journal.
Here are five examples:
Babatunde, O., & Mnguni, P. (2023). "Challenges and Opportunities in Regulating AI: Perspectives from South Africa." AI Policy Journal, 2(3), 143-156.
- News24 can find no evidence that the AI Policy Journal exists. No article with that title appears in any major journal.
Burman, A., & Sewpersadh, K. (2022). "Legal Frameworks for AI in South Africa: Balancing Innovation and Accountability." South African Journal of Philosophy, 41(2), 207-217.
- The South African Journal of Philosophy (SAJP) does exist. However, its managing editor, Dominic Griffiths, told News24 that "no such article was published by our journal". Griffiths said no one with the surname "Burman" had ever published in the SAJP. "The reference is definitely an AI hallucination," he said.
Karr, V., & Smith, L. (2023). "Digital Rights and AI Governance in Africa." Journal of African Law, 67(1), 95-108.
- The Journal of African Law is real, but the article does not appear in Volume 67, Issue 1. An article with that title does not seem to appear in any other journal either.
Cavaliere, F., McGregor, R., & Hersh, M. (2022). "Artificial Intelligence and Ethics in Emerging Economies: The Case of South Africa." AI & Society, 37(4), 565-583.
- Karamjit Gill, the editor-in-chief of AI & Society, said he "can't locate the publication of this article" in the journal. News24 also could not find the article on Google Scholar.
Smith, M., & Mahomed, R. (2021). "The Impact of AI on Social Justice in South Africa." Journal of Ethics and Social Philosophy, 18(3), 313-329.
- Louise Simpson, the managing editor of the Journal of Ethics and Social Philosophy (JESP), confirmed that the article does not exist. "I can confirm that JESP has not published an article with that title or by that combination of authors in that, or any other issue," she said.
Some of the sources on the list were submissions made as part of the government's inquiry, or did not purport to be academic journal articles.
News24 was unable to verify beyond any doubt exactly how many references on the list were fictitious, but at least six were made up.
Johan Fourie, an author and economic historian at Stellenbosch University, assisted in checking the references.
"I did two things: I checked three citations myself. I, too, concluded that they don't exist. I then tested them with an AI tool, and it also concluded that all three were 'not found in common academic databases,'" Fourie said.
"In short, they were made up."
News24 reached out to the Department of Communications and Digital Technologies to clarify whether it had used an AI tool to draft either the body or citations of the Draft National AI Policy, or both. We also asked the department to provide evidence that the journal articles existed.
The department indicated that it was reviewing the reference list to ensure accuracy, including how the "minor referencing discrepancies" may have happened.
The department also indicated that it kicked off the draft policy process in 2024, and the document had gone through many iterations. It said it incorporated research, expert input, and industry engagements.
The department said it needed time to establish where the discrepancies came from, but downplayed their importance.
"We can confirm that these technical referencing matters do not affect the substance, integrity or policy direction of the Draft National AI Policy Framework," it said.
Irresponsible
Anné Verhoef, director of the North-West University AI Hub and a professor in philosophy, said AI text often includes fabricated sources and references.
"This occurs because AI applications are programmed to always provide an answer. As prediction tools based on large language models, they anticipate what sources and citations would make a text appear credible, authoritative and scientific," he said.
Verhoef said including fictitious sources in academic and research work is "indicative of irresponsible use of AI".
"Instances such as the Draft National AI Policy containing inaccurate citations and sources, created without disclosure of AI involvement, reflect possible unethical and irresponsible use of AI," Verhoef said.
Fourie said AI model hallucinations were unsurprising, but it was surprising that there seemed to be no oversight.
"These tools are powerful. Firms, academics and governments already use them. Their value, however, depends on close human supervision.
"Fabricated references of this kind suggest that the tool was not used with sufficient care. The consequence, regrettable but understandable, is that doubts will now extend beyond the error itself to the credibility of the entire document," Fourie added.
- With Katharina Moser
Incident 1468: Purportedly AI-Enhanced Images of Iranian Women Protesters Were Reportedly Spread With Unverified Execution Claims
“The Real Iranian Women Protesters Trump Made Look Synthetic”
Real Iranian women protesters are being held in Iranian prisons under threat of capital punishment. Their cases are now harder to defend than they were a week ago because the credibility of the documentation that human rights work depends on is being eroded by people who claim to be defending them. AI tools are being used to enhance real photographs until they look manufactured, and entirely AI-generated images are being presented as if they were of real people, often side by side as if they are the same thing. The distinction between the two is being lost, and the real women in real prisons disappear into the noise.
The eight women whose photographs President Donald Trump amplified on Truth Social this week are the latest example.
On the morning of April 21, hours before US and Iranian negotiators were due to meet in Islamabad, Trump posted a collage of eight women's faces on Truth Social and addressed Iran's leadership directly. "I would greatly appreciate the release of these women," he wrote. "Please do them no harm! Would be a great start to our negotiations!" The post included a screenshot of a post by Eyal Yakoby, a 23-year-old American pro-Israel activist, claiming the Islamic Republic was preparing to hang the women shown. Within hours, the official @WhiteHouse account amplified the message on X. The State Department's Persian-language Instagram account, @usabehfarsi, translated and republished it for Iranian audiences with US government branding.
By the end of the day, Iran's judiciary had its answer. It declared the post 'fake news.' "The women who were claimed to be on the verge of execution," the statement read, "some of them have been released, while others face charges that, if convictions are upheld, would at most result in imprisonment."

Left, the post shared by President Trump; right, a post shared by the Iran Embassy SA account.
Twenty-four hours later, Trump posted again. "Very good news!" he wrote, claiming the eight women would no longer be killed. "Four will be released immediately, and four will be sentenced to one month in prison. I very much appreciate that Iran, and its leaders, respected my request, as President of the United States, and terminated the planned execution." The @WhiteHouse account amplified this too. The specific numbers came from no human rights organization, no court filing, no reporter. They were claims about an execution that Iran's own judiciary said was not scheduled. What Trump's strategy is remains unclear. What is observable is that this faulty narrative was advanced by the US government with little visible regard for the truth or for the harm to the women named in it. There is no apparent evidence that any human rights or journalistic verification work was done by anyone in the chain that carried the post, with Trump's statement, into the diplomatic record. As Sarah Jeong reports in The Verge, this case is a "mingling of fact and fiction into a fuzzy distortion that fuels an endless disputation of real human rights violations."
But the Iranian judiciary was not telling the whole truth either. It appears to have embraced Trump's gift: a flagrantly inaccurate framing that it could use to swat away the underlying reality of Iranian government abuses and human rights crimes. The names in the collage are all documented political prisoners arrested or disappeared since the January 2026 protests. The photographs themselves were real. These were images of real women, taken by real cameras. What likely brought AI into the loop is that they were visibly altered images, with stylized black backgrounds, beauty filtering, and the smoothing artifacts characteristic of AI-enhanced retouching. This is "AI enhancement" or "AI editing," a different operation from AI generation, which produces images of people who do not exist from a text prompt with no underlying subject.
The distinction between enhancement and generation matters, because it determines whether the photograph has any relationship to a verifiable life. It is also increasingly difficult to make in practice. AI retouching and editing are now built into the standard photo editing software most people use every day, which means more and more authentic images carry some AI trace. The category of "AI-enhanced real photograph" is no longer exotic. It is becoming increasingly common, and the analytical work of distinguishing enhancement from generation is getting harder at the same time as it is becoming more necessary.
What the documentation actually shows
What can be verified is that six of the eight women are real. Some of them are in grave danger. None of their cases match the framing Trump employed.
Bita Hemmati is the only one of the eight with a confirmed death sentence. Branch 26 of the Tehran Revolutionary Court, presided over by Judge Iman Afshari, sentenced her in mid-April alongside her husband and two neighbors on the charge of "operational action for the hostile government of the United States and hostile groups." HRANA noted the prosecution relied on broadcast forced confessions.
Mahboubeh Shabani, 33, was arrested by Mashhad intelligence agents on February 2 and is reportedly held in Vakilabad Prison's women's ward. The Norway-based Hengaw Organization for Human Rights documented her case. She has been charged, with no verdict yet, with 'moharebeh', enmity against God, a capital offense punishable by execution under Iranian law, for using her motorcycle to ferry wounded protesters during the January 8-9 unrest.
Diana Taherabadi was 15 when she was dragged from her home in Karaj in her pajamas; she remains in the juvenile section of Kachouei Prison. Ghazal Ghalandari, 16, was taken in a similar raid in Yasuj and moved to an undisclosed location.
Venus Hossein-Nejad, a 28-year-old Baha'i woman, was taken from her workplace in Kerman, forced to deliver a televised confession, and denied medication for bipolar disorder in detention. Golnaz Naraghi, a 37-year-old emergency physician, was arrested in Tehran and transferred to Qarchak Prison. According to Iran Human Rights in Oslo, both Hossein-Nejad and Naraghi have since been released on bail.
Panah Movahedi, a 24-year-old professional kickboxer, disappeared during the protests in Tehran's Punak neighborhood on January 9, according to the Oslo-based Iran Human Rights, which emailed a verified case summary to the fact-checking site Lead Stories on April 22. Her family has had no information about her whereabouts since. Similarly, Iran Human Rights confirmed Ensieh Nejati was arrested in Darab, in southern Iran, on January 10. There have been no further updates on her case.
What credible human rights organizations have collectively documented is a regime apparatus quietly executing its crackdown through revolutionary courts, juvenile correction facilities, forced televised confessions, and arbitrary charges that can carry the death penalty, against women whose actual stories are searing on their own terms, without AI-enhanced images or sensationalized retellings.
The pattern
This was not the first time Yakoby shared what has been identified as AI-generated material related to Iran. On April 9, he posted a purported image of the aftermath of January's crackdown. Shayan Sardarizadeh of BBC Verify flagged it within hours as AI-generated. Sardarizadeh pointed out that the Persian-language shop signs in the background were nonsensical and the bodies in the foreground had anatomical errors. Yakoby deleted the post.
This dynamic is not only about what AI can produce. It is also about the doubt AI creates around content that is entirely real. Two days earlier, on April 7, he had reposted authentic footage of Iranian casualties, footage that did not need fabrication. Underneath, one reply read: "With Central Casting, AI, and the manipulation and manufacturing of headlines that get inorganic reach on social media has made me question everything. Nobody is innocent here, and I don't want American tax payers to fund NATO or be involved in the Middle East at any capacity."
That reply is the dynamic's logical endpoint. Faced with an information environment in which authentic and synthetic content sit indistinguishably side by side, audiences do not become better at telling them apart. They withdraw. The pattern on display here is not disinformation in the conventional sense; the underlying events are real. It is propaganda: real cases packaged with synthetic, unverified, or instrumentalized material in service of a political agenda. The damage is not the falsehood. The damage is that real human rights cases, propagandized this way, look fabricated.
Three currents, one vacuum
I have written before about how the existence of AI-generated content allows authentic documentation of Iranian state violence or authentic protests to be dismissed as fabrication, and about how that dynamic operated at wartime scale during the Israeli and US strikes earlier this year. What is now visible, after the ceasefire and the regime's reconsolidation of its information architecture, is a more general and more dangerous pattern. Synthetic and unverified content about Iranian human rights abuses is being produced from every direction at once, by actors of incompatible political orientations, with the same net effect.
The Iranian state itself runs an industrial AI propaganda pipeline, used to inflate its military capacity during the war and to bastardize documentation of real tragedies for its own propaganda, presenting to the world one voice for 93 million Iranians during an internet shutdown while appealing to discourses of Global South oppression, anti-war sentiment, and leftist Westerners. Opposition and diaspora media have produced their own fabrications. Iran International earlier this year shared an AI-generated video purporting to show political prisoners being transferred to an IRGC base to be turned into human shields. The Iranian judiciary then issued a formal statement identifying the video as a fabrication and using it to dismiss the broader category of prison documentation, including the regime's notorious mistreatment of political prisoners during wartime, as enemy psy-ops. The video was quietly removed, but the regime had already taken advantage of it.
The regime is not only weaponizing other actors' AI fabrications. It has begun producing its own to mock the dynamic. Hours after Trump's victory claim, the Iranian Embassy in South Africa, an official Islamic Republic account, posted a grid of eight AI-generated images of Iranian women in hijab with the caption "Eight other Iranian girls are going to be executed in Iran tomorrow. Ask Trump to help. Thanks to chatgpt." The account opted into the platform's AI-generated label. The regime is performing, or trolling, what it has helped engineer. It is mocking the dynamic Trump has just demonstrated by collapsing it. The collage Trump shared was AI-enhanced; the Embassy's grid is AI-generated. The mockery works because most audiences cannot, or will not, tell the difference, and the Embassy is performing that very confusion: a category of evidence so degraded that anyone can produce a collage of synthetic women and have the absurdity itself land as commentary on the credibility of the entire Iranian human rights record.
The amplification chain that carried the original collage into the State Department's Persian feed represents a third current: propaganda actors instrumentalizing real human rights cases with unverified or AI-enhanced packaging, in service of a foreign policy agenda the women named in the post never agreed to be enlisted in. Each current claims to act on behalf of the Iranian people. Each contaminates the evidentiary record. The regime then weaponizes everyone else's synthetic material to deny everything, which is precisely what the Iranian judiciary did within hours of Trump's post.
Underneath all three currents is a vacuum the Islamic Republic has worked methodically to maintain. In the days after the January 8-9 protest massacres, contacts inside Iran sent me photographs of blood pooled on Shiraz pavements, taken at angles that captured only the ground because the photographers could not risk being seen with a phone in hand. When connectivity briefly resumed at the end of the month, I spoke with someone in Tehran who described watching city cleaning crews wash blood from her own neighborhood streets. She had no images to share; the security presence was too heavy to risk it. Across Iranian cities, guards at checkpoints have been searching phones and arresting anyone whose camera roll contained protest material. This is the documentary record from inside Iran in January: discrete, partial, dangerous, and often suppressed at the source. Into that vacuum step actors of every political stripe with material that ranges from authentic-but-instrumentalized to AI-enhanced to entirely fabricated. The women whose lives are at stake disappear into the noise that everyone, regime and opposition and advocates alike, has helped produce.
What disappears
Two truths can hold at the same time. Trump uses propaganda and does not care about the rigorous methodology that human rights documentation requires; he will amplify unverified execution claims and inflated casualty figures with equal carelessness. The Iranian state committed an unprecedented massacre in January, with HRANA confirming over 6,700 deaths of protesters and minors and another 11,744 cases under investigation, and is now executing political prisoners at the highest rate in two decades while holding 16-year-old girls under threat of capital punishment and forcing Baha'i women to confess on television to crimes they did not commit. The first truth does not soften the second. What it does is corrode the record on which the second depends.
Bita Hemmati's actual death sentence is now buried under a flood of unverified claims about seven other women. Diana Taherabadi's family waits for a verdict they have not been shown while the world argues about whether her photograph is AI-generated. Venus Hossein-Nejad's forced confession was already a regime tool used against her; her case is now made more precarious. The entire network of organizations producing the credible record of what the Iranian state is actually doing to its citizens is watching its work absorbed into the noise produced by actors with their own political stakes in the conflict but very little regard for real lives or the truth.
Some of the mechanisms for a structural response already exist. The Oversight Board's March recommendations to Meta call for a new community standard for AI-generated content, consistent labeling of AI material during crises, and the implementation of content provenance infrastructure at scale. Meta has 60 days to respond, but its track record on previous Board recommendations on AI content suggests that strong decisions do not automatically translate into action. X has gone further on paper, announcing in March that creators who post AI-generated armed-conflict videos without disclosure would be suspended from the platform's revenue-sharing program. X has said it will rely on a combination of proprietary detection tools, Community Notes, and AI metadata signals. But it does not appear to have disclosed the specific technical methodology, and it has published no data on how often the policy has been enforced.
As my colleague Sam Gregory, WITNESS's executive director, has said, there is no single silver-bullet solution to this crisis. What is needed, more than ever, is a layered set of trust signals: AI provenance built in at the point of creation rather than detection bolted on downstream at the point of dissemination; transparency frameworks, guardrails, and interoperability from the companies producing both the generative models and the distribution platforms across which content spreads; sustained investment in fact-checking, especially within platforms and their content moderation processes, with the resources to work under crisis conditions such as conflicts, when AI content inundates the information ecosystem; platform content moderation that uses both human and technical mechanisms to distinguish synthetic from authentic during crises rather than collapsing the two; and media literacy that equips audiences to navigate our brave new world of AI content, moving beyond the AI-or-not binary and holding uncertainty rather than retreating into blanket disbelief.
Each of these is failing or underinvested in right now. Without them, the people whose stories most need to be heard become invisible, and those who claim most loudly to be speaking for them are, whether intentionally or not, the ones erasing them.
Incident 1466: Network of Allegedly Fake Facebook Profiles with Purportedly AI-Generated Images Amplified Posts by Bulgaria's 'There Is Such a People' (ITN) Party
“Networks of fake profiles are spreading ITN posts on social media ahead of the election”
The swearing-in of the caretaker cabinet of Prime Minister Andrey Gyurov has triggered heightened activity against its members from representatives of "There Is Such a People" (ITN). While party deputy chair Toshko Yordanov and parliamentary group deputy chair Stanislav Balabanov attack the new ministers mainly from the parliamentary rostrum and in the media, MP Pavela Mitova and party leader Stanislav Trifonov deliver their messages primarily on Facebook.
Both politicians' posts gather hundreds of shares and thousands of reactions, but only a small portion of these are authentic.
An analysis by Factcheck.bg found that a network of about 120 fake Facebook profiles is spreading Mitova's and Trifonov's posts in a coordinated way. The profiles were created following a similar pattern, and their activity coincides when it comes to sharing ITN content.
On February 20, for example, Mitova published a post claiming that caretaker minister of agriculture and food Ivan Hristanov had designated a special "corruption room" and given like-minded associates from the Edinenie party access there to "all documents of the Ministry of Agriculture and Food going back five years," access they should not legally have. Hristanov is among the caretaker ministers facing the fiercest attacks from "There Is Such a People".
By March 10, Pavela Mitova's post had been shared 336 times, and Factcheck.bg's check shows that about 120 of those shares came from inauthentic profiles. These profiles were created following a similar pattern, and each of them shared the claims two or more times.
Similarly, a March 4 post by Trifonov, in which he demands Hristanov's resignation and criticizes his failure to appear before parliament for a hearing, was shared 379 times, with nearly half of those shares coming from inauthentic profiles.

Another Trifonov post, from March 11, criticizing "Continue the Change - Democratic Bulgaria," was shared more than 15 times by one and the same profile.
The roughly 120 profiles share a set of recognizable traits (a heuristic sketch follows after this list):
- Their profile photos, as well as the other photos they post, are generated with artificial intelligence. This is confirmed by checks with the AIorNOT tool.
- They have very few or no Facebook friends.
- Their listed home towns are various cities in Bulgaria, and their descriptions contain text related to the profession stated in the profile. Most of the profiles were created in late 2025 or early 2026.
- The profiles are linked to email addresses that follow the template "digit@hotmail.com".
- Their activity consists mainly of resharing content from other profiles or pages connected to "There Is Such a People" and its members. Their first posts are usually on a topic related to the city where the fake profile claims to live. Every subsequent post relates solely to ITN.
- Characteristically, this group of fake profiles only shares posts; they show no other activity such as original content, likes, or comments.
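These traits lend themselves to simple automated screening. The following minimal sketch is our illustration, not Factcheck.bg's actual methodology; the field names (email, friend_count, created, shares, original_posts) are hypothetical stand-ins for whatever data a researcher has collected, not real Facebook API fields.

import re
from datetime import date

# Hypothetical heuristics mirroring the traits described above.
EMAIL_TEMPLATE = re.compile(r"^\d+@hotmail\.com$")  # "digit@hotmail.com" pattern
CUTOFF = date(2025, 10, 1)                          # "late 2025 or early 2026"

def suspicion_flags(profile: dict) -> list[str]:
    """Return a list of reasons a profile looks inauthentic."""
    flags = []
    if EMAIL_TEMPLATE.match(profile.get("email", "")):
        flags.append("email matches the digit@hotmail.com template")
    if profile.get("friend_count", 0) < 5:
        flags.append("very few or no friends")
    if profile.get("created", date.min) >= CUTOFF:
        flags.append("recently created account")
    if profile.get("shares", 0) > 0 and profile.get("original_posts", 0) == 0:
        flags.append("share-only activity, no original content")
    return flags

# Example with values resembling the profiles described in this article.
profile = {
    "email": "483920@hotmail.com",  # hypothetical address following the template
    "friend_count": 2,
    "created": date(2025, 12, 26),
    "shares": 78,
    "original_posts": 0,
}
print(suspicion_flags(profile))

No single heuristic is conclusive on its own; Factcheck.bg's findings rest on the combination of traits repeating across roughly 120 accounts.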
With political advertising banned on social networks, the main way to reach users is organic distribution, which depends above all on the reactions a post gathers. More reactions, comments, and shares lead to wider distribution and help Facebook's algorithm recognize posts as highly engaging content.
That is exactly the case with Pavela Mitova's February 20 post. It was shared twice on February 22 by the fake profile Magdalena Kostova:
This profile was created on December 26, 2025. Although inauthentic, the account accepts friend requests, which means its activity is not fully automated; a real person manages it.
The first posts under the name Magdalena Kostova concerned Sandanski, the new year, and the adoption of the euro, along with several AI-generated photos presented as her own. These are attempts to create the impression of the authentic online behavior of a real user. Between January 7 and March 30, the fake profile shared the following content:
- 14 posts by Pavela Mitova;
- 50 posts by "There Is Such a People";
- 12 posts from Slavi Trifonov's profile;
- 1 post by a profile named Diana Damianova, which criticizes the Gyurov caretaker cabinet and calls ITN "a guardian of family values";
- 1 post from the profile of ITN MP Aleksandar Rashev.

Another profile, that of Nikola Savov from Pernik, was created on January 10, 2026. After its first post, about the weather, it too began sharing content related only to ITN.

The profile photo it uses is likewise not of a real person but is AI-generated. Another similarity with the Magdalena Kostova profile is that Nikola Savov also shared Pavela Mitova's February 20 post twice. Other ITN posts on the social network were also shared twice each.
Among the fake profiles engaged in spreading ITN's posts, there are also some with descriptions in Russian. This is likely a sign that the text in those descriptions was generated with artificial intelligence.
Thus, through a network of inauthentic profiles with fabricated photos and no genuine personal posts, Pavela Mitova's February 20 post received at least 120 additional shares. The large number of reactions and shares places it in the category of highly engaging content on Facebook, leads to its prioritized positioning, and increases its visibility.
The same network of profiles also amplified posts from other accounts and pages connected to ITN during the campaign for the early parliamentary elections. Stanislav Trifonov's post criticizing caretaker agriculture minister Ivan Hristanov, for example, gathered over 3,000 likes. It was shared 379 times, more than 100 of them by fake accounts. Most of the fake profiles in the network shared it twice; the Magdalena Kostova profile was among them.
"There Will Be Such a People"
In early March, a new network of partially or fully anonymous profiles appeared on Facebook, using in their names the phrase „Ще има такъв народ" ("There will be such a people"), ITN's slogan in the current election campaign. The goal of this network is likewise to amplify the distribution of content published by ITN so that Meta's algorithms recognize it as highly engaging.
These roughly 50 profiles pose as ITN activists but are in fact not authentic. The profile photos they use are either AI-generated or stolen from elsewhere on the internet. The profiles show almost no activity until February 2026, when they suddenly begin sharing content daily from various pages and groups of "There Is Such a People".
A check with the PimEyes tool showed that one of the profiles in the network, „Ще Има Такъв Народ (Анелия Йорданова)" ("There Will Be Such a People (Aneliya Yordanova)"), uses as its profile image a photo of Victoria's Secret model Kelly Gale.
A Factcheck.bg check with the PimEyes tool
The same photo appears on a number of sites, including some with adult content.
Another profile in the network uses a photo of a real person.

Again, a check with the PimEyes facial recognition tool revealed that the photo belongs to a real person named Ivan Tomanović, a research associate in the Department of Thermal Engineering and Energy at the VINČA Institute of Nuclear Sciences of the University of Belgrade. The Factcheck.bg team tried to contact him but had not received a reply to its inquiry by the time this text went to press.

More inauthentic behavior
Since the start of the election campaign, another kind of unusual activity has been observed under the posts of "There Is Such a People" leader Slavi Trifonov. His March 12 post criticizing Sofia mayor Vasil Terziev gathered nearly 5,700 reactions. Of these, about 2,000 are likes, over 600 are laughing emoticons, and about 3,000 are hearts.
A large share of the profiles reacting with hearts under the post have foreign names, for example Italian ones. The same unusual activity, with over 900 heart reactions, is observed under Trifonov's posts of February 28, March 1, March 4, March 5, March 6, and March 11.
This is what purchased reactions on social networks look like: a service offered illegally on the internet by various intermediaries, often based in Asian countries. For a modest fee, they provide reactions under a given post, mostly using hollow profiles created specifically for the purpose.
Under the policies of Meta, the company that owns Facebook, these actions fall within the scope of so-called "coordinated inauthentic behavior". It is subject to sanctions by the platform because it aims to deceive Meta and the users of its social networks, or to evade the penalties imposed for violating community rules.
These commitments by Facebook and the other large social networks stem from the EU's Digital Services Act. The act explicitly obliges large platforms to take measures against the use of their services for coordinated manipulation of the public through fake profiles or bots, which can be used to spread disinformation or illegal content quickly and at scale.
Incident 1460: Baidu Apollo Go Robotaxis Stopped in Traffic During Reported System Failure in Wuhan, Stranding Some Passengers
“Robotaxi Outage in China Leaves Passengers Stranded on Highways”
An unknown technical problem caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze on Tuesday in the middle of traffic, trapping some passengers in the vehicles for more than an hour.
In Wuhan, a city in central China where Baidu has deployed hundreds of its Apollo Go self-driving taxis, people on Chinese social media reported witnessing the cars suddenly malfunction and stop operating. Photos and videos shared online show the Baidu cars halted on busy highways, often in the fast lane.
A college student in Wuhan tells WIRED that she was stuck in a Baidu robotaxi with two friends for about 90 minutes on Tuesday. (She asked to be identified only by her last name, He, to protect her privacy.) The student says the car malfunctioned and stopped four or five times during the trip before it eventually parked in front of an intersection in eastern Wuhan. Luckily, it was not a busy road, and the group was not in immediate danger. The screen display in the car asked the passengers to remain inside with seatbelts on and wait for a company representative to come "in five minutes," according to a photo He shared with WIRED.
He says it took about 30 minutes to reach a Baidu customer representative on the phone. "They kept saying it would be reported to their superior. But they didn't explain what caused [the outage] or let us know how long we needed to wait for the staff to come," He says. But no one ever came, and after another hour of waiting, the three passengers decided to just get out and go home by themselves (the doors weren't locked).
On Chinese social media, other passengers also complained about being unable to reach Baidu's customer support. "I tried every way I could think of to call for help using the options the app showed, but the phone line wouldn't go through, and when I pressed the SOS button it told me it was unavailable. So then what exactly is the SOS for?" wrote one person in a post on RedNote alongside a video showing the button not working. She said she had to force the door open and get out of the car as traffic came to a complete stop behind her robotaxi. "Apollo Go, you really owe me an apology," she wrote.
Baidu didn't immediately respond to a request for comment. Local police in Wuhan issued a statement around midnight in China that said the situation was "likely caused by a system malfunction," but the incident is still under investigation. No one was injured, and all passengers have exited the vehicles, the police added. It's unclear how many of Baidu's robotaxis may have been impacted.
One dashcam recording posted to RedNote shows a car passing 16 Apollo Go vehicles parked on the road in the span of 90 minutes. On several occasions, the video shows the driver narrowly avoiding hitting the robotaxis by braking or changing lanes at the last minute.
Others were apparently not as fortunate. In another RedNote post, a man claimed he crashed into one of the malfunctioning Baidu vehicles. The man wrote in the caption that he was driving over 40 mph on a highway when the car in front of him suddenly changed lanes to avoid the stopped robotaxi. He couldn't react fast enough and ended up running into the self-driving car. Photos of the man's orange SUV being towed away show that the car's front-right fender was completely torn off, and other parts appeared to have sustained major damage.
There were at least two other collisions the same day, according to photos and videos posted on Chinese social media. A RedNote user in Wuhan confirmed to WIRED that she drove past a white minivan that had rear-ended a parked robotaxi. The back of the Baidu car was badly damaged, but the two people standing beside the scene looked unharmed, she says. She added that she saw an estimated dozen more parked robotaxis.
Baidu is one of China's leading self-driving firms. The company has launched robotaxi services in over a dozen Chinese cities so far and recently began expanding internationally to places like Seoul, Abu Dhabi, and Dubai. In February, Baidu announced that it completed 20 million rides covering over 300 million kilometers (about 186 million miles).
Wuhan has been among the most aggressive cities in allowing Baidu's fully autonomous vehicles on public roads. It permits them to operate on highways and run trips to the airport.
Incident 1455: Purported Deepfake Videos Allegedly Impersonated Optometrist Joseph Allen to Promote Myopia-Reversal Eyedrops on TikTok
“Optometrist Fights Back After Deepfake Scam”
Joseph Allen, OD, is well known on the conference lecture circuit, on social media, and for his YouTube channel, Dr. Eye Health, where he educates followers about a variety of eye health topics. His prominence had a downside, though: he discovered videos across social media that looked and sounded like him but were spreading false information and selling bogus products. Dr. Allen was the victim of an AI deepfake scam.
AI-generated deepfake and slop videos are increasingly targeting real-world doctors, impersonating them to create low-effort, algorithm-driven content that can spread dangerous health misinformation. Deepfakes are AI-manipulated or fabricated content that impersonates a real person for the purposes of spreading malicious or false information.
AI slop videos are AI-produced video content churned out at scale to exploit social media's engagement algorithms, flooding platforms like YouTube, TikTok, and Instagram and edging out original content creators. Dr. Allen noted that "in some of these, they're telling you not to see your eye care provider and that our entire profession is a scam."
With legal counsel, Dr. Allen challenged the deepfakes targeting his channel, but it took a month to get hold of TikTok, where he discovered the fake, and another three weeks to prove he was the legitimate 'Dr. Eye Health.' The account was deplatformed, but within two or three hours the scammers made a new account and reuploaded all the previous content. In Dr. Allen's case, the product being sold in the scam was eyedrops claimed to reverse myopia. The drops were tracked back to a Chinese company selling millions of dollars in product on Amazon. The contents of the drops remain in question.
Dr. Allen recommends looking for tell-tale signs of AI-generated video, keeping in mind that as AI progresses these signs may become subtler or disappear entirely. Look closely for odd editing: do the subject's lips match the vocals and audio, and do they blink too much or too little? If a video is selling a too-good-to-be-true product, it probably is, so look and think carefully before trusting such content.
Does it feel natural or not? Deepfakes still have difficulty portraying natural lighting and can make the subject feel off because of mismatched lighting conditions or reflections. MIT researchers list further signs, including clues from eyes and eyebrows, blinking, and glare on eyeglasses. As AI blurs the line between reality and fiction, the responsibility falls on individuals to think critically about media content.
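One of these tells, abnormal blinking, can even be quantified. The sketch below is a minimal illustration under stated assumptions: it presumes six eye landmarks per frame have already been extracted by some face-landmark detector (not shown) and computes the widely used eye aspect ratio (EAR) of Soukupová and Čech, which dips sharply during a blink. Counting dips over a clip yields a blink rate that can be compared against the typical human range of roughly 15-20 blinks per minute.

import math

Point = tuple[float, float]

def dist(a: Point, b: Point) -> float:
    """Euclidean distance between two landmarks."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1: Point, p2: Point, p3: Point, p4: Point, p5: Point, p6: Point) -> float:
    """EAR over six eye landmarks; it drops sharply when the eye closes."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series: list[float], threshold: float = 0.2) -> int:
    """Count downward crossings of the EAR threshold (one per blink)."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Simulated per-frame EAR values with two dips, i.e., two blinks.
series = [0.31, 0.30, 0.12, 0.11, 0.29, 0.30, 0.13, 0.28]
print(count_blinks(series))  # -> 2

In practice the threshold and frame rate need calibration, and modern deepfakes increasingly reproduce natural blinking, so this is one weak signal among many rather than a detector.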
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports; submissions will then be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026
By Daniel Atherton
2026-02-02
Le Front de l'Yser (Flandre), Georges Lebacq, 1917. Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
Random Incidents
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is organized around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
Organization Founding Sponsor
Database Founding Sponsor

Sponsors and Grants