
Welcome to
the AI Incident Database


Incident 1268: Meta's Automated Ad and Targeting Systems Reportedly Enabled Large-Scale Fraud Revenue

“Facebook’s fraud files” (Latest Report)
doctorow.medium.com, 2025-11-15

A blockbuster Reuters report by Jeff Horwitz analyzes leaked internal documents that reveal that 10% of Meta's gross revenue comes from ads for fraudulent goods and scams; that the company knows it; and that it has decided not to do anything about it, because the fines for facilitating this life-destroying fraud are far less than the expected revenue from helping to destroy its users' lives:

https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/

The crux of the enshittification hypothesis is that companies deliberately degrade their products and services to benefit themselves at your expense because they can. An enshittogenic policy environment that rewards cheating, spying and monopolization will inevitably give rise to cheating, spying monopolists:

https://pluralistic.net/2025/09/10/say-their-names/#object-permanence

You couldn't ask for a better example than Reuters' Facebook Fraud Files. The topline description hardly does this scandal justice. Meta's depravity and greed in the face of truly horrifying fraud and scams on its platform is breathtaking.

Here are some details: first, the company's own figures estimate that it is delivering 15 billion scam ads every single day, which generate $7 billion in revenue every year. Despite its own automated systems flagging the advertisers behind these scams, Meta does not terminate their accounts --- rather, it charges them more money as a "disincentive." In other words, fraudulent ads are more profitable for Meta than non-scam ads.

Meta's own internal memos also acknowledge that they help scammers automatically target their most vulnerable users: if a user clicks on a scam, the automated ad-targeting system floods that user's feed with more scams. The company knows that the global fraud economy is totally dependent on Meta, with one third of all US scams going through Facebook (in the UK, the figure is 54% of all "payment-related scam losses"). Meta also concludes that it is uniquely hospitable to scammers, with one internal 2025 memo revealing the company's conclusion that "It is easier to advertise scams on Meta platforms than Google."

Internally, Meta has made plans to reduce the fraud on the platform, but the effort is being slow-walked because the company estimates that the most it will ultimately pay in fines worldwide adds up to $1 billion, while it currently books $7 billion/year in revenue from fraud. The memo announcing the anti-fraud effort concludes that scam revenue dwarfs "the cost of any regulatory settlement involving scam ads." Another memo concludes that the company will not take any proactive measures to fight fraud, and will only fight fraud in response to regulatory action.

Meta's anti-fraud team operates under an internal quota system that limits how many scam ads they are allowed to fight. A Feb 2025 memo states that the anti-fraud team is only allowed to take measures that will reduce ad revenue by 0.15% ($135m) --- even though Meta's own estimate is that scam ads generate $7 billion per year for the company. The manager in charge of the program warns their underlings that "We have specific revenue guardrails."
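
To make the memo's cost-benefit logic concrete, here is a minimal back-of-the-envelope sketch in Python, using only figures cited in the article ($7 billion/year in scam revenue, roughly $1 billion in worst-case fines, and the 0.15% guardrail the memo equates to $135 million). The implied total ad-revenue base is an inference from that guardrail pair, not a number from the leaked documents.

```python
# Back-of-the-envelope arithmetic on the figures cited in the Reuters report.
scam_revenue_per_year = 7_000_000_000  # Meta's own estimate: $7B/year from scam ads
max_expected_fines = 1_000_000_000     # Meta's estimate of worst-case worldwide fines
guardrail_fraction = 0.0015            # anti-fraud measures capped at 0.15% of ad revenue
guardrail_dollars = 135_000_000        # the $135M the Feb 2025 memo equates that cap to

# The guardrail pair implies a total ad-revenue base (an inference, not a leaked figure):
implied_ad_revenue = guardrail_dollars / guardrail_fraction  # = $90B

# "A fine is a price": even paying the maximum fine every single year,
# inaction still nets billions relative to forgoing the scam revenue.
net_gain_from_inaction = scam_revenue_per_year - max_expected_fines  # = $6B/year

print(f"Implied total ad revenue: ${implied_ad_revenue / 1e9:.0f}B")
print(f"Scam ads as a share of that base: {scam_revenue_per_year / implied_ad_revenue:.1%}")
print(f"Net annual gain from doing nothing: ${net_gain_from_inaction / 1e9:.0f}B")
```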

What does Meta fraud look like? One example cited by Reuters is the company's discovery of a "six-figure network of accounts" that impersonated US military personnel and attempted to trick other Meta users into sending them money. Reuters also describes "a torrent of fake accounts pretending to be celebrities or represent major consumer brands" in order to steal Meta users' money.

Another common form of fraud is "sextortion" scams. That's when someone acquires your nude images and threatens to publish them unless you pay them money and/or perform more sexual acts on camera for them. These scams disproportionately target teenagers and have led to children committing suicide:

https://www.usatoday.com/story/life/health-wellness/2025/02/25/teenage-boys-mental-health-suicide-sextortion-scams/78258882007/

In 2022, a Meta manager sent a memo complaining about a "lack of investment" in fraud-fighting systems. The company had classed this kind of fraud as a "low severity" problem and was deliberately starving enforcement efforts of resources.

This only got worse in the years that followed, when Meta engaged in mass layoffs from the anti-fraud side of the business in order to free up capital to work on perpetrating a different kind of scam --- the mass investor frauds of metaverse and AI:

https://pluralistic.net/2025/05/07/rah-rah-rasputin/#credulous-dolts

These layoffs sometimes led to whole departments being shuttered. For example, in 2023, the entire team that handled "advertiser concerns about brand-rights issues" was fired. Meanwhile, Meta's metaverse and AI divisions were given priority over the company's resources, to the extent that safety teams were ordered to stop making any demanding use of company infrastructure and instead to operate so minimally that they were merely "keeping the lights on."

Those safety teams, meanwhile, were receiving about 10,000 valid fraud reports from users every week, but were --- by their own reckoning --- ignoring or incorrectly rejecting 96% of them. The company responded to this revelation by vowing to reduce the share of valid fraud reports that it ignored to a mere 75% by 2023.

When Meta roundfiles and wontfixes valid fraud reports, Meta users lose everything. Reuters reports out the case of a Canadian air force recruiter whose account was taken over by fraudsters. Despite the victim repeatedly reporting the account takeover to Meta, the company didn't act on any of these reports. The scammers who controlled the account started to impersonate the victim to her trusted contacts, shilling crypto scams, claiming that she had bought land for a dream home with her crypto gains.

While Meta did nothing, the victim's friends lost everything. One colleague, Mike Lavery, was taken for CAD40,000 by the scammers. He told Reuters, "I thought I was talking to a trusted friend who has a really good reputation. Because of that, my guard was down." Four other colleagues were also scammed.

The person whose account had been stolen begged her friends to report the fraud to Meta. They sent hundreds of reports to the company, which ignored them all --- even the ones she got the Royal Canadian Mounted Police to deliver to Meta's Canadian anti-fraud contact.

Meta calls this kind of scam, where scammers impersonate users, "organic," differentiating it from scam ads, where scammers pay to reach potential victims. Meta estimates that it hosts 22 billion "organic" scam pitches per day. These organic scams are actually often permitted by Meta's terms of service: when Singapore police complained to Meta about 146 scam posts, the company concluded that only 23% of these scams violated their Terms of Service. The others were all allowed.

These permissible frauds included "too good to be true" come-ons for 80% discounts on leading fashion brands, offers for fake concert tickets, and fake job listings --- all permitted under Meta's own policies. The internal memos seen by Reuters show Meta's anti-fraud staffers growing quite upset to realize that these scams were not banned on the platform, with one Meta employee writing, "Current policies would not flag this account!"

But even if a fraudster does violate Meta's terms of service, the company will not act. Per Meta's own policies, a "High Value Account" (one that spends a lot on fraudulent ads) has to accrue more than 500 "strikes" (adjudicated violations of Meta policies) before the company will take down the account.

Meta's safety staff grew so frustrated by the company's de facto partnership with the fraudsters that preyed on its users that they created a weekly "Scammiest Scammer" award, given to the advertiser that generated the most complaints that week. But this didn't actually spark action --- Reuters found that 40% of Scammiest Scammers were still operating on the platform six months after being flagged as the company's most prolific fraudster.

This callous disregard for Meta's users isn't the result of a new, sadistic streak in the company's top management. As the whistleblower Sarah Wynn-Williams' memoir Careless People comprehensively demonstrates, the company has always been helmed by awful people who would happily subject you to grotesque torments to make a buck:

https://pluralistic.net/2025/04/23/zuckerstreisand/#zdgaf

The thing that's changed over time is whether they can make a buck by screwing you over. The company's own internal calculus reveals how this works: they make more money from fraud --- $7 billion/year --- than they will ever have to pay in fines for exposing you to fraud. A fine is a price, and the price is right (for fraud).

The company could reduce fraud, but it's expensive. To lower the amount of fraud, they must spend money on fraud-fighting employees who review automated and user-generated fraud flags, and accept losses from "false positives" --- overblocking ads that look fraudulent, but aren't. Note that these two outcomes are inversely correlated: the more the company spends on human review, the fewer dolphins they'll catch in their tuna nets.

Committing more resources to fraud fighting isn't the same thing as vowing to remove all fraud from the platform. That's likely impossible, and trying to do so would involve invasively intervening in users' personal interactions. But it's not necessary for Meta to sit inside every conversation among friends, trying to decide whether one of them is scamming the others, for the company to investigate and act on user complaints. It's not necessary for Meta to invade your conversations for it to remove prolific and profitable fraudsters without waiting for them to rack up 500 policy violations.

And of course, there is one way that Meta could dramatically reduce fraud: eliminate its privacy-invasive ad-targeting system. The top of the Meta ad-funnel starts with the nonconsensual dossiers Meta has assembled on more than 4 billion people around the world. Scammers pay to access these dossiers, targeting their pitches to users who are most vulnerable.

This is an absolutely foreseeable outcome of deeply, repeatedly violating billions of people's human rights by spying on them. Gathering and selling access to all this surveillance data is like amassing a mountain of oily rags so large that you can make billions by processing them into low-grade fuel. This is only profitable if you can get someone else to pay for the inevitable fires:

https://locusmag.com/feature/cory-doctorow-zucks-empire-of-oily-rags/

That's what Meta is doing here: privatizing the gains to be had from spying on us, and socializing the losses we all experience from the inevitable fallout. They are only able to do this, though, because of supine regulators. Here in the USA, Congress hasn't delivered a new consumer privacy law since 1988, when they made it a crime for video-store clerks to disclose your VHS rentals:

https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy

Meta spies on us and then allows predators to use that surveillance to destroy our lives for the same reason that your dog licks its balls: because they can. They are engaged in conduct that is virtually guaranteed by the enshittogenic policy environment, which allows Meta to spy on us without limit and which fines them $1b for making $7b on our misery.

Mark Zuckerberg has always been an awful person, but --- as Sarah Wynn-Williams demonstrates in her book --- he was once careful, worried about the harms he would suffer if he harmed us. Once we took those consequences away, Zuck did exactly what his nature dictated he must: destroyed our lives to increase his own fortune.

Read More

Incident 1267: Omnilert AI Reportedly Triggered False Gun Alert at Parkville High, Prompting Student Relocation

“Students Relocated After AI Reports Possible Gun At Parkville High”
patch.com, 2025-11-15

PARKVILLE, MD --- Police said they found no threat after an artificial intelligence security system reported a possible gun Friday at a Baltimore County school.

Officers said they responded to Parkville High School, located in the 2600 block of Putty Hill Avenue, around 5 p.m.

Authorities said students were relocated as police searched the school.

The Baltimore County Police Department said normal school activities resumed after the all-clear.

"Out of an abundance of caution, a Baltimore County police supervisor requested a police search of the property. Students were immediately relocated to a safe area where they were supervised," Parkville Principal Maureen Astarita wrote in a message to families Friday night, according to The Baltimore Sun.

The Parkville gun search followed an October Omnilert false alarm that caused a stir at Kenwood High School in Essex. Omnilert flagged a football player's bag of chips as a potential gun, prompting officers to handcuff teenage students until they realized there was no threat.

The encounter led officials to call for more human oversight before police respond to Omnilert reports. Baltimore County Public Schools promised annual training on protocols for Omnilert, which only flags potential threats and sends them to humans to verify.

Read More

Incident 1269: Waymo Autonomous Vehicle Reportedly Ran Over and Killed a Cat in San Francisco

“Waymo Was on a Roll in San Francisco. Then One of Its Driverless Cars Killed a Cat.”
nytimes.com, 2025-11-15

At Delirium, a dive bar in San Francisco's Mission District, the décor is dark, the drinks are strong, and the emotions are raw. The punk rockers and old-school city natives here look tough, but they are in mourning.

Kit Kat used to bar-hop along the block, slinking into Delirium for company and chin rubs. Everybody knew the bodega cat, affectionately calling him the Mayor of 16th Street. Kit Kat was their "dawg," the guys hanging out on the corner said.

But shortly before midnight on Oct. 27, the tabby was run over just outside the bar and left for dead. The culprit?

A robot taxi.

Hundreds of animals are killed by human drivers in San Francisco each year. But the death of a single cat, crushed by the back tire of a Waymo self-driving taxi, has infuriated some residents in the Mission who loved Kit Kat --- and led to consternation among those who resent how automation has encroached on so many parts of society.

"Waymo? Hell, no. I'm terrified of those things," said Margarita Lara, a bartender who loved Kit Kat. "There's so many of them now. They just released them out into our city, and it's unnecessary."

Kit Kat's death has sparked outrage and debate for the past three weeks in San Francisco. A feline shrine quickly emerged. Tempers flared on social media, with some bemoaning the way robot taxis had taken over the city and others wondering why there hadn't been the same level of concern over the San Francisco pedestrians and pets killed by human drivers over the years.

A city supervisor called for state leaders to give residents local control over self-driving taxis. And, this being San Francisco, there are now rival Kit Kat meme coins inspired by the cat's demise.

But all of that is noise at Delirium. Kit Kat was loved there. And now he is gone.

"Kit Kat had star quality," said Lee Ellsworth, wearing a San Francisco 49ers hat and drinking a can of Pabst Blue Ribbon beer.

Before Kit Kat's death made headlines, Waymo was on a roll. The driverless car company owned by Alphabet, the parent company of Google, fully rolled out its San Francisco taxi service in 2024 and now has a fleet of 1,000 vehicles in the Bay Area. It announced an expansion this month with freeway service down the Peninsula and pickups at the airport in San Jose. Waymo expects to serve San Francisco International Airport soon, too.

Just a couple of years ago, the white Jaguars with whirring cameras on top were considered oddities. Passers-by would do double takes when they saw the steering wheel turning with nobody in the driver's seat.

Waymos are now a top tourist attraction, however. Many women find them a safer choice than relying on an Uber or Lyft driven by a man. So many parents have ordered them for their children that some schools can look like Waymo parking lots.

And Grow SF, a moderate political group with ties to the tech industry, found that San Francisco voter support of Waymo had jumped from 44 percent in September 2023 to 67 percent this July.

Still, Kit Kat's death has given new fuel to detractors. They argue that robot taxis steal riders from public transit, eliminate jobs for people, enrich Silicon Valley executives --- and are just plain creepy.

Jackie Fielder, a progressive San Francisco supervisor who represents the Mission District, has been among the most vocal critics. She introduced a city resolution after Kit Kat's death that calls for the state Legislature to let voters decide if driverless cars can operate where they live. (Currently, the state regulates autonomous vehicles in California.)

"A human driver can be held accountable, can hop out, say sorry, can be tracked down by police if it's a hit-and-run," Ms. Fielder said in an interview. "Here, there is no one to hold accountable."

Ms. Fielder has strong ties to labor unions, including the Teamsters, which has fought for more regulation of autonomous vehicles, largely out of concern for members who could eventually lose their own driving jobs in other sectors.

Ms. Fielder has posted videos to social media, showing her walking the streets of the Mission as she discusses Kit Kat.

"We will never forget our sweet Kit Kat," she says in one of them. "The poor thing ... suffered a horrible, horrible, long unaliving."

(The word "unaliving" is used by some social media users to avoid algorithms that suppress videos using words such as "death.")

Memorials have sprung up at Randa's Market, where the owner, Mike Zeidan, took in Kit Kat six years ago to catch mice. The cat hung out on the shop's counter when he wasn't roaming 16th Street. One neighbor used to bring him slices of salmon every day; another texted a photo of Kit Kat to his mother each morning.

On a tree outside hang photos of the cat and a sketch of him with a halo above his head.

"Save a cat," the drawing reads. "Don't ride Waymo!"

Floral bouquets, a stuffed mouse and a Kit Kat candy wrapper round out the memorial.

One tree over is a display of a different sort.

"Waymo killed my toddler's favorite cat," a sign reads. "Human drivers killed 42 people last year." (Actually, according to city data, human drivers killed 43 people in San Francisco last year, including 24 pedestrians, 16 people in cars and three bicyclists. None were killed by Waymos.)

The sign was an attempt to put the cat's death in context, in a walkable city where pedestrians still face peril. In 2014, the city pledged to end traffic fatalities within 10 years, but last year's total was one of the highest on record.

The city does not track how many animals are killed by cars each year, but the number is in the hundreds, according to Deb Campbell, a spokeswoman for Animal Care and Control in San Francisco.

She said the agency's cooler last week contained the bodies of 12 cats thought to have been hit by cars in recent weeks. None of them seemed to have prompted media coverage, shrines or meme coins.

Waymo does not dispute that one of its cars killed Kit Kat. The company released a statement saying that when one of its vehicles was picking up passengers, a cat "darted under our vehicle as it was pulling away."

"We send our deepest sympathies to the cat's owner and the community who knew and loved him," Waymo said in a statement.

Waymo is adamant that its cars are much safer than those driven by humans, reporting 91 percent fewer serious crashes compared to human drivers covering the same number of miles in the same cities. The data was in a company research paper that was peer-reviewed and published in a journal. Waymo also operates full taxi services in Los Angeles and Phoenix and provides rides through a partnership with Uber in Atlanta and Austin, Texas.

Mayor Daniel Lurie of San Francisco has been a big fan. He said earlier this year he would allow Waymo to use Market Street, the city's central thoroughfare, which for five years had been accessible mainly to pedestrians and public transit vehicles. He also defended the autonomous taxis in an interview on Thursday with the tech journalist Kara Swisher after she brought up Kit Kat.

"Waymo is incredibly safe," he said. "It's safer than you or I getting behind a wheel."

Rick Norris, who works at the Roxie Theater in the Mission, said that he liked Waymos and had noticed that they were navigating the city's tricky streets better and better. But he was concerned after he spoke with several people who had witnessed Kit Kat's last moments and recounted how they had tried to stop the Waymo when they saw the cat beneath it.

The car just drove away.

It was at that moment that Sheau-Wha Mou, a Delirium bartender and karaoke host, took her cigarette break. She saw people panicking as they stood on the sidewalk. She rushed over and found Kit Kat suffering, with blood streaming from his mouth.

"I knelt down, and I was talking to him," she recalled. "'What happened? Are you OK?'"

She said she had used the bar's sandwich board sign as a stretcher. A stranger then drove her and Kit Kat to a nearby emergency animal clinic. Mr. Zeidan, the bodega owner, arrived soon after.

An hour later, the veterinarian told Mr. Zeidan that Kit Kat was dead.

Photos of Kit Kat still sit next to the cash register at Randa's Market, alongside the dice and cigarette lighters for sale. Mr. Zeidan said he still misses the mouser that became the block's mayor.

Darrell Smith stopped by the market on Monday, part of a weekly ritual that also involves ordering a mixed plate at the nearby Hawaiian barbecue spot. He missed Kit Kat, he said, but felt that dwelling on the robot car seemed like a waste of time.

"I'm skeptical about those Waymo cars myself," he said. "But A.I. is the future. We can't stop it whether we like it or not."

Read More

Incident 1270: Multiple Purported AI-Assisted Cheating Incidents Reported Across South Korea's SKY Universities During October 2025 Midterms

“A.I. Cheating Rattles Top Universities in South Korea”
nytimes.com, 2025-11-15

Many college students in South Korea are enjoying downtime, relieved to wrap up midterm exams. But the nation's elite universities have been left scrambling after it emerged that testing season was marred by a spate of mass cheating incidents involving A.I.

One high-profile incident, at Yonsei University in Seoul, became public on Sunday. Local news media reported that a professor had found that dozens of students may have cheated by using textbooks, computer programs or even ChatGPT during an online midterm examination for a course on ChatGPT. Hundreds of undergraduates took the test, and 40 of them admitted to cheating, the school said.

Within days, similar episodes of mass cheating emerged at two other top-tier schools in South Korea --- Seoul National University and Korea University, which also said students had used A.I. to cheat on recent tests. Collectively, the colleges are known by the acronym SKY, which is also a nod to their status in the hypercompetitive world of Korean education.

While the questionable use of artificial intelligence in colleges is becoming widespread, it is rare for a nation's most prestigious universities to simultaneously be embroiled in A.I. scandals.

Education is still seen as a driver of social mobility in South Korea, which has one of the highest proportions of college graduates among developed countries. For most students, the goal is to secure a spot at the SKY schools.

To do that, they need a top score on an eight-hour college entrance exam testing their knowledge of Korean, math, English and other subjects. On Thursday, more than half a million high school seniors in South Korea sat for the exam, a decades-old tradition that disrupts the rhythm of the entire nation. Flights are grounded, construction is halted and traffic restrictions are enforced, and the public is urged to keep noise at a minimum so the students can concentrate.

In recent years, A.I. has become entrenched in higher education. Over 90 percent of South Korean college students who have some experience with generative A.I. said that they used those tools on school assignments, according to a 2024 survey by the Korea Research Institute for Vocational Education and Training.

Some educators say colleges have failed to keep pace.

"A.I. is a tool for retaining and organizing information so we can no longer evaluate college students on those skills," said Park Joo-Ho, a professor of education at Hanyang University. Since students are already using A.I., he added, they should instead be tested on their creativity, something A.I. cannot replicate.

"The current method of education is already out of date," he said.

Yonsei students taking the "Natural Language Processing and ChatGPT" class were forbidden from using A.I. for the Oct. 15 midterm. It was administered online, and test-takers were told to keep their laptop cameras on so proctors could monitor them. After examining the camera footage, a professor said he found evidence of dozens of students cheating. They will be given a 0 on the test, the school said. On a separate occasion, students at Yonsei were caught sharing test answers on a phone app that uses A.I., the school said.

"It's inevitable that A.I. will affect our education," said Ju Yuntae, an undergraduate at Yonsei who is studying physical education. He said he used ChatGPT to find research papers and for help with translating between English and Korean.

"But if students break a pact with their professors to refrain from using it," he said, "then it is a matter of trust and a bigger issue."

That covenant also appears to have been broken at Korea University in Seoul. Several students admitted to using A.I. during an online test last month for a class about aging societies, a university spokeswoman said, after one student reported that some had used a group chat to share recordings of their screens and answers throughout the test. Those students will be given a score of 0, the spokeswoman said.

In a statement on Wednesday, Seoul National University said that it had discovered that students used A.I. to cheat in a statistics exam, but did not disclose further details. The exam will be given again, the university said.

In recent years, these schools have set forth some A.I. guidelines. Korea University has an 82-page guidebook that states that "unauthorized use or submission of AI-generated content is considered academic misconduct." Yonsei's rules declare that using A.I. to "generate the quintessential and creative output of research is prohibited."

Lee T.H., a graduate student in computer engineering at Seoul National, said he started noticing students using A.I. soon after OpenAI released ChatGPT in 2022. He now uses A.I. to speed up his coding.

"Some professors don't like us using A.I.," he said. "Some encourage it because it helps solve problems quickly."

But, he added, "there isn't really a way you can stop students from using it."

Read More

Incident 1271: Purported Deepfake of Greek Finance Minister Kyriakos Pierrakakis Reportedly Used in Facebook Investment Scam

“Greek Finance Minister Sues Facebook Page Over Deepfake Investment Scam”
greekcitytimes.com, 2025-11-15

A lawsuit accuses unknown Facebook page administrators of using AI-generated deepfakes to falsely portray the minister promoting fraudulent "high-yield" investment schemes.

Greece's Ministry of Economy and Finance, along with Minister Kyriakos Pierrakakis, filed a lawsuit on Friday against the unidentified operators of a Facebook page for running deceptive advertisements.

The suit claims the page used artificial intelligence to produce a deepfake video showing Pierrakakis endorsing a scam investment program.

The fabricated content "falsely showed Minister Pierrakakis encouraging citizens to invest in alleged 'high-yield programs,'" the ministry stated, emphasizing that "it bears no connection to reality."

Greek authorities have previously exposed similar social media and online scams featuring deepfakes of prominent figures promoting fake "miracle" drugs or lucrative investments in cryptocurrencies, gold, or oil---often including bogus personal success stories from the impersonated individuals.

In February, police uncovered an illegal online network trafficking counterfeit medicines, which deployed deepfake videos mimicking the likeness or voice of celebrities like doctor Sotiris Tsiodras, journalist Nikos Hatzinikolaou, and singer Giorgos Dalaras.

Under the EU's pioneering AI Act---the world's first comprehensive AI regulation---deepfakes are categorized as limited risk, while deploying AI to manipulate elections or voter behavior is deemed high risk.

Read More
Quickly Submit a URL

Submitted links are added to a review queue where they are resolved into a new or existing incident record. Incidents submitted with complete details are processed ahead of URLs lacking complete details.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near-harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so that we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, after which submissions are indexed and made visible to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failures. (Learn more)

AI incident research for building a safer future: the Digital Safety Research Institute partners with the Responsible AI Collaborative

By TheCollab Board of Directors

2024-02-20

The Digital Safety Research Institute (DSRI) of UL Research Institutes is partnering with the Responsible AI Coll...

Read More
The Database in the Press

Read about the database in the PAI Blog, Vice News, Venture Beat, Wired, and arXiv, among other outlets.

Incident Reporter Rankings

These are the people and entities credited with creating and submitting incident reports. More details are available on the rankings page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 627
  • 🥈 Anonymous: 154
  • 🥉 Khoa Lam: 93
Reports Added to Existing Incidents
  • 🥇 Daniel Atherton: 718
  • 🥈 Khoa Lam: 230
  • 🥉 Anonymous: 227
Total Report Contributions
  • 🥇 Daniel Atherton: 2840
  • 🥈 Anonymous: 962
  • 🥉 Khoa Lam: 456
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. Governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more on our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exemption application.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor's Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of Use
  • Privacy Policy
  • 353a03d