
Lack of transparency, underestimation of psychological violence, "rigid" questions that leave no room for explanations, and assessments that factor in the police resources available when assigning a risk level to the victim. These are some of the "problems" that an external audit has found in the VioGén algorithm, an automated system used in police stations to rate the personal risk faced by each woman who reports gender-based violence.
"One of the main concerns about the VioGén algorithm is that approximately 45% of cases receive a risk rating of 'unappreciated'. In the context of gender-based violence, the 'no risk' category is already highly controversial, since the mere act of filing a complaint can trigger violent reactions from the aggressor," details the investigation, prepared by the Eticas Foundation, which specializes in algorithmic auditing, and the Ana Bella Foundation, a network of women survivors of sexist violence. "This failure to appreciate the risk leads us to think that there are factors that are not yet being taken into account," they warn.
The audit includes an analysis of the public data published by the Ministry of the Interior, which has run VioGén since 2007. Added to this is a qualitative study based on interviews with 31 complainants and 7 specialized lawyers, as well as a quantitative analysis of the risk ratings VioGén assigned to 126 women who were murdered after filing a complaint against their aggressor.
This second part of the investigation had to be carried out externally given the Interior's refusal to grant access to VioGén's databases for an internal audit, something Eticas says it offered to do for free "on several occasions since 2018". Sources at the Ministry have confirmed to elDiario.es that these requests were denied, and they contend that the report "lacks academic rigor by basing its study and its conclusions on an insignificant statistical universe of only 31 interviews, compared to the more than five million risk evaluations carried out since 2007".
Risk rating for gender-based violence
VioGén (Spanish acronym for Comprehensive Monitoring System in Cases of Gender Violence) comes into action when a victim of sexist violence goes to a police station to file a complaint. It is based on a predetermined questionnaire that officers fill out with the complainant's answers. Based on these, the algorithm estimates the risk that her aggressor will attack her again, classifying it as "unappreciated", "low", "medium", "high" or "extreme". From "medium" risk upward, the complainant is entitled to police protection. Officers can modify the assigned risk level, but only to raise it; the audit shows, however, that in 95% of cases they do not.
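The scoring details are not public, but the behaviour described above can be summarized in a short sketch. Everything here is hypothetical: the indicator names, weights and cut-offs are invented for illustration; only the five risk levels, the binary answers and the raise-only override rule come from the article.

```python
# Minimal sketch of a VioGén-style triage flow (all weights, indicator
# names and thresholds are hypothetical; the real ones are not public).
from enum import IntEnum

class Risk(IntEnum):
    UNAPPRECIATED = 0
    LOW = 1
    MEDIUM = 2   # from this level up, the complainant is entitled to protection
    HIGH = 3
    EXTREME = 4

# The system only accepts yes/no answers, so each indicator is a boolean.
EXAMPLE_WEIGHTS = {
    "prior_physical_violence": 3,
    "threats_with_weapon": 4,
    "recent_separation": 1,
    "online_harassment": 1,
}

def score(answers: dict) -> Risk:
    """Map binary questionnaire answers to one of the five risk levels."""
    total = sum(w for item, w in EXAMPLE_WEIGHTS.items() if answers.get(item))
    if total >= 8: return Risk.EXTREME
    if total >= 6: return Risk.HIGH
    if total >= 4: return Risk.MEDIUM
    if total >= 2: return Risk.LOW
    return Risk.UNAPPRECIATED

def officer_override(assigned: Risk, proposed: Risk) -> Risk:
    """Officers may raise the algorithm's level but never lower it."""
    return max(assigned, proposed)
```

With these invented weights, a complainant reporting prior physical violence and a recent separation would score 4 and land on "medium", the lowest level that triggers protection; an officer calling officer_override with a lower proposed level would leave that rating unchanged.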
It is in this first evaluation that the investigation detects many of the system's problems. "80% of the interviewees reported different problems with the VioGén questionnaire. This means that the quality of the data entered into the algorithmic system could be compromised at this point, generating sources of bias and misrepresentation within the system," the audit notes. Among the failures pointed out by the complainants is a lack of transparency: many were never informed of the risk assessment that VioGén assigned them.
The context in which the questionnaire is completed is also problematic. One of the participants remembers it as "a murky moment with absurd questions where mistakes are made when filling out the questionnaire", describing the situation she experienced as "surreal". Respondents say that at the time of the interview it was difficult for them to remember everything that had happened, organize their thoughts and give detailed answers to VioGén's questions. The foundations recall that "many women who suffer gender violence arrive at the police station and file the complaint right after a violent incident", so "they are in a state of shock".
Added to this is that the system only accepts binary answers. If the complainant is unable to give one, it is the officers who must interpret what to enter into the system. The lawyers interviewed, all specialists in gender violence, agree that the questions are "rigid" and leave no room for explanations. One of them pointed out that the way they are worded also makes the complainant's educational level key: "The level of education makes it easier to understand what is being asked and to explain how she feels and suffers, as long as it is not a case of serious abuse and she is not blocked or terrified."
Of the 31 women interviewed for the research, 15 evaluated their experience with VioGén negatively, 10 pointed to both negative and positive aspects, and 6 rated their overall experience positively. Another point on which victims and lawyers agree is that "VioGén underestimates psychological violence and the newer forms of non-physical violence (such as harassment through social networks), placing the emphasis on physical violence". "There is no need for a beating, or an assault, for the risk to exist. It seems that the parameters forgot psychological abuse," explains another of the lawyers.
The Interior Ministry denies that the questionnaire presents problems and stresses that it is constantly being improved. "The police risk assessment system (VPR) within VioGén is an important support instrument in the fight against gender-based violence that has proven its usefulness since its implementation 15 years ago, thanks to the scientific validation of each of its indicators," sources at the Ministry tell this medium. "It must be made clear that it is a support for the police assessment work carried out by an expert in the field, who also draws on another set of factors and criteria," they add.
Insufficient protection in 55 cases
The quantitative analysis of the 126 cases of murdered women found that 55 of them "received a protection order that turned out to be insufficient" from VioGén. Here the investigation detects a bias in the system's parameters: "the murdered women who did not have children had automatically been assigned a lower level of risk than those who did".
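A disparity like this can be surfaced by comparing the average assigned risk level across the two groups. The sketch below illustrates the idea with invented records; it is not the audit's actual methodology or dataset.

```python
# Toy disparity check: average assigned risk level (0 = "unappreciated",
# 4 = "extreme") grouped by whether the victim had children.
# The records below are invented for illustration only.
cases = [
    {"has_children": True,  "assigned_risk": 3},
    {"has_children": True,  "assigned_risk": 2},
    {"has_children": False, "assigned_risk": 1},
    {"has_children": False, "assigned_risk": 0},
]

def mean_assigned_risk(with_children: bool) -> float:
    levels = [c["assigned_risk"] for c in cases
              if c["has_children"] is with_children]
    return sum(levels) / len(levels)

print(f"with children:    {mean_assigned_risk(True):.2f}")
print(f"without children: {mean_assigned_risk(False):.2f}")
```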
The audit of the automatic part of the system has also revealed that VioGén adjusts the estimated risk for each victim based on the police resources available. "This means the system only issues the number of 'extreme' risk assessments it can afford, so funding cuts have a direct and measurable impact on the chances of women receiving effective protection after turning to the police."
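If the audit's description is accurate, the effect is equivalent to capping "extreme" ratings at the available protection capacity. The following sketch illustrates that behaviour; the cap and the downgrade-to-"high" rule are assumptions for illustration, not documented VioGén internals.

```python
# Sketch of resource-capped risk assessment: once the budgeted number of
# "extreme" protection slots is used up, further extreme-risk cases are
# downgraded.  Both the cap and the downgrade rule are hypothetical.
def apply_resource_cap(ratings: list, extreme_slots: int) -> list:
    capped, used = [], 0
    for level in ratings:
        if level == "extreme":
            if used < extreme_slots:
                used += 1
            else:
                level = "high"  # protection now tracks funding, not danger
        capped.append(level)
    return capped

# Four cases, three genuinely extreme, but budget for only one:
print(apply_resource_cap(["extreme", "low", "extreme", "extreme"],
                         extreme_slots=1))
# -> ['extreme', 'low', 'high', 'high']
```

Under such a rule, the level a woman receives depends on who filed a complaint before her, which is exactly the funding dependence the auditors describe.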
The foundations highlight that although the number of active cases in VioGén grows year after year, in 2021 "only 1 in 7 women who went to the police in search of protection obtained it". Since 2015, only 3% of victims have obtained a risk score of "medium" or higher and, therefore, police protection.
Algorithmic transparency
VioGén was developed in 2007 by the Ministry of the Interior and has been refined ever since. It was a pioneering system in integrating all the information on victims of sexist violence into a single platform and in unifying their protection across the whole country. This has made it "the risk assessment system with the most registered cases in the world", with more than 3 million, according to the audit.
However, the details of how it works, or why it assigns one level of protection and not another, are opaque. This often goes unnoticed because of the apparent mathematical neutrality of Artificial Intelligence, despite warnings from experts: "Algorithms sometimes give a false impression of objectivity that crushes people's rights," Carlos Preciado, a judge at the TSJ of Catalonia, explained in an interview with this medium. "Most of the VioGén studies have been carried out by the same researchers who contributed to its development and by people who work for and/or have vested interests in the Ministry and the police forces," Eticas highlights in this case.
The Government has committed to promoting the transparency of this kind of Artificial Intelligence system used in the administration. "The quality of the data provided and its accessibility will be improved, promoting a data-oriented culture, using transparent and explainable algorithms, and strengthening the relationship between the Administration and the citizenry," states the National Artificial Intelligence Strategy approved in 2020. The general state budget for 2022 includes a 5-million-euro allocation to create an [agency specialized in supervising algorithms](https://www.eldiario.es/tecnologia/espana-vigilara-inteligencia-artificial-farmacos-alimentos_1_8615818.html) and their possible biases.
However, the inner workings of most of the algorithms the administration uses remain secret. Beyond VioGén, another example is BOSCO, which regulates access to the social discount that helps vulnerable users pay their electricity bills. In this case, the Government is facing the pro-transparency foundation Civio in court after [refusing to give access to its source code](https://civio.es/novedades/2022/02/10/la-justicia-impide-la-apertura-del-codigo-fuente-de-la-aplicacion-que-concede-el-bono-social/), despite a resolution of the Transparency Council urging it to reveal the details.