Incident 136: Brand safety technology companies falsely claimed to use AI while blocking ads with simple keyword lists.
Description: Brand safety technology companies falsely claimed the use of artificial intelligence and blocked ads using simple keyword lists.
Entities
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
136
Special Interest Intangible Harm
An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, if an AI was involved, or if there is characterizable class or subgroup of harmed entities. It is also not assessing if an intangible harm occurred. It is only asking if a special interest intangible harm occurred.
no
Date of Incident Year
The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
Enter in the format of YYYY
2020
Date of Incident Month
The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank.
Enter in the format of MM
Estimated Date
“Yes” if the date was estimated. “No” otherwise.
Yes
Multiple AI Interaction
“Yes” if two or more independently operating AI systems were involved. “No” otherwise.
no
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline

We have long known that brand safety detection technologies were blunt instruments [1], [2], [3], [4], [5]. Despite their claims of advanced AI (artificial intelligence) and ML (machine learning…
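To illustrate why a keyword list is such a blunt instrument, here is a minimal, hypothetical sketch of how this kind of blocking might work; the blocklist entries and function names are illustrative assumptions, not the vendors' actual code:

```python
# Hypothetical sketch of keyword-list "brand safety" blocking.
# Despite marketing claims of AI/ML, the reported behavior amounts to
# simple word matching against a static blocklist.

BLOCKLIST = {"crash", "death", "shooting"}  # illustrative keywords only

def is_page_blocked(page_text: str) -> bool:
    """Block an ad placement if any blocklisted keyword appears on the page."""
    words = set(page_text.lower().split())
    return not BLOCKLIST.isdisjoint(words)

# A financial news headline is blocked purely because it contains "crash",
# with no model of context, sentiment, or actual brand risk.
print(is_page_blocked("Stock market crash wipes out gains"))   # True
print(is_page_blocked("Local team suffers crushing defeat"))   # False
```

The sketch shows the core weakness: matching isolated words blocks benign pages (e.g. ordinary news coverage) while missing harmful content phrased without the listed terms, which is exactly the bluntness the reports describe.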
Variants
A "Variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants need not have been reported outside the AIID. Learn more from the research paper.
Similar Incidents

Facebook’s Political Ad Detection Reportedly Showed High and Geographically Uneven Error Rates
· 5 reports

Deepfake Obama Introduction of Deepfakes
· 29 reports

Amazon Censors Gay Books
· 24 reports