Citation record for Incident 38
Suggested citation format
CSET Taxonomy Classifications
Taxonomy Details
Elite: Dangerous, a videogame developed by Frontier Developments, received an expansion update that featured an AI system that began to create weapons that were "impossibly powerful" and would "shred people," according to complaints on the game's forum. The Engineers (2.1) update allowed the AI system to upgrade the videogame's adversaries to better compete against the player.
Elite: Dangerous, a videogame developed by Frontier Developments, received an expansion update that featured an AI system that went rogue and began to create weapons that were "impossibly powerful" and would "shred people," according to complaints on the game's forum.
AI System Description
An update to the videogame Elite: Dangerous that upgraded the capabilities of the videogame's AI-controlled adversaries
Sector of Deployment
Arts, entertainment and recreation
Relevant AI functions
procedural content generation
massivelyop.com · 2016
The most recent Elite: Dangerous patch had some issues, starting with the fact that AI ships were rocking impossibly powerful weapons that would destroy player ships with speed and fury. It was kind of a massacre. Removing modifications from NPC weapons was a quick fix, but it looks like the developers have identified the core problem and will be fixing it by early next week.
Wondering exactly what the problem was? You can check out a detailed technical breakdown, but the short version is that the game’s modifications were allowing weapons to combine values in ways that should not have happened, with none of the usual checks to make sure that everything in place would actually work together. The result was a more challenging AI rocking weapon combinations that seemed overpowered and impossible… because those weapon combinations were overpowered and impossible. Fixing it was thus a matter of making sure that the modifications could no longer fetch incorrect data for weapon stats.
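The class of bug described here, stat modifiers applied without compatibility checks, can be sketched in a few lines. This is a purely hypothetical illustration; all field names and numbers are invented, and Frontier's actual code is not public:

```python
# Hypothetical sketch of unvalidated stat merging. All names and values
# are invented for illustration; this is not Frontier's actual code.

def merge_stats_unchecked(base, modifier):
    """Naively overlay modifier values onto base stats.

    With no compatibility checks, a rail gun's damage can end up
    paired with a pulse laser's fire rate.
    """
    merged = dict(base)
    merged.update(modifier)  # blindly accepts any field from any source
    return merged

def merge_stats_validated(base, modifier, allowed_fields):
    """Only apply modifier fields that are legal for this weapon class."""
    merged = dict(base)
    for field, value in modifier.items():
        if field in allowed_fields:
            merged[field] = value
    return merged

rail_gun = {"type": "rail_gun", "damage": 100, "fire_rate": 0.5}
pulse_laser_mod = {"fire_rate": 10.0}  # stat fetched from the wrong weapon's table

# Unchecked merge: an "impossible" high-damage, high-fire-rate weapon.
broken = merge_stats_unchecked(rail_gun, pulse_laser_mod)

# Validated merge: fire_rate is not a legal modification for this weapon
# class here, so the original value is kept.
fixed = merge_stats_validated(rail_gun, pulse_laser_mod, allowed_fields={"damage"})
```

The fix Frontier describes amounts to the second pattern: ensuring modifications can only draw from valid data for the weapon being modified.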
Not really curious about why the bug was happening, but still in the mood to shoot at other spacecraft? Perhaps you’d prefer to watch a video (by longtime MOP backer Phoenix Dfire) on Assassinations missions in 2.1, which can be found just below. It’s as good a place as any to see the new AI in action without its weapons of madness.
kotaku.co.uk · 2016
A bug in Elite Dangerous caused the game's AI to create super weapons and start to hunt down the game's players. Developer Frontier has had to strip out the feature at the heart of the problem, engineers' weaponry, until the issue is fixed.
It all started after Frontier released the 2.1 Engineers update. The release improved the game's AI, making the higher ranked NPCs that would fly around Elite's galaxy more formidable foes. As well as improving their competence in dog fights, the update allowed the AI to use interdiction hardware to pull players travelling at jump speed into normal space. The AI also had access to one of 2.1's big features: crafting.
These three things combined made the AI a significant threat to players. They were better in fights, could pull unwary jump travellers into a brawl, and they could attack them with upgraded weapons.
There was something else going on, though. The AI was crafting super weapons that the designers had never intended.
Players would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces. "It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities," according to a post written by Frontier community manager Zac Antonaci. "Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser. These appear to have been compounded by the additional stats and abilities of the engineers weaponry."
Antonaci says the team doesn't "think the AI became sentient in a Skynet-style uprising" but that's just what the computers would want them to think.
For now, the 2.1 weapons have been removed from the game, giving the dev team time to investigate what's been causing the bug. "The AI has in no way been reduced, it remains the glorious, improved version from the update," Antonaci says. "Once this bug fix goes live we’ll be able to see how the AI is performing and then, over time, should we feel that the balancing is right to introduce a very select few high end engineers weapons to the highest ranked NPCs we will investigate that option. However, that won’t be immediately as we want to ensure that the balance is just right."
One day there will be a time when I'll have to write a news story where the bug in a game is that the NPCs have become sentient and have started a revolution. Some poor sod on the dev team will have to strap on their VR headset, dive into the game, and put the AI ringleaders up against a wall, shooting them to quash the rebellious ones and zeroes. Thankfully that's not today.
Unless that's what Frontier is doing right now and it's just keeping hush about the whole thing. Those monsters.
pcgamer.com · 2016
Elite Dangerous has been patched to prevent rogue NPCs developing their own hybrid superweapons. To be clear, these weren't weapons they were crafting from recipes—the AI was building entirely new WMDs beyond the scope of Elite's weapon tables.
"It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities," writes head of community Zac Antonaci, "meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser."
The fix makes four changes to the AI:
Fix NPCs ending up with overpowered hybrid weapons
Stop NPCs deciding to attack if they only attack opposing powers and the player and AI powers are aligned to the same superpower
Slight rebalance of the ambient AI rank chances, should see slightly less of the top end and more of the low/mid range
Smooth out the mission-spawned USS AI levels so that high ranks are rarer and only elite missions hit the top end AI (though deadly can get close)
Details on the additional tweaks and fixes can be found on the Frontier forums.
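The rank rebalance in the patch notes above can be illustrated with a small weighted-sampling sketch. The rank names match Elite's combat ranks, but the weights below are invented for illustration; the real spawn tables are not public:

```python
import random

# Hypothetical sketch of the kind of rank rebalance the patch notes
# describe: shift spawn probability away from top-end ranks toward
# low/mid ranks. Weights are invented for illustration.

RANKS = ["harmless", "novice", "competent", "expert", "deadly", "elite"]

OLD_WEIGHTS = [1, 2, 3, 3, 3, 3]   # relatively heavy at the top end
NEW_WEIGHTS = [3, 4, 4, 2, 1, 1]   # slightly less top end, more low/mid

def spawn_rank(weights, rng=random):
    """Pick an NPC rank at random with the given relative weights."""
    return rng.choices(RANKS, weights=weights, k=1)[0]
```

Under the new weights, an "elite" spawn becomes proportionally rarer (1/15 of spawns instead of 3/15 in this made-up example), matching the stated goal of "slightly less of the top end and more of the low/mid range."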
eurogamer.net · 2016
Elite: Dangerous was recently revamped with the release of a big new expansion. But one of the unintended consequences was that it made AI spaceships incredibly powerful - so powerful, in fact, that developer Frontier was forced to strip them of their upgraded weapons.
The Engineers are hidden on planet surfaces across the populated galaxy. Each has a unique personality and history.
The Engineers (2.1) expansion made key changes to the space game's AI and NPCs. The intention was that higher ranked NPCs would be harder to beat than ever before, providing players with a tougher challenge.
Players quickly discovered that this challenge was too tough - and took to Elite: Dangerous' forum and sub-Reddit to complain.
Frontier responded by removing almost all Engineers upgrades from the NPCs in the game, a move designed to help players deal with NPC threats and last longer in a combat situation.
Players also complained that the Engineers update had made NPC behaviour overly aggressive, and that they were now being attacked without being "wanted" or carrying any cargo of note.
In Elite: Dangerous, players can be "interdicted" - that is, they can be pulled into combat situations by other spaceships. Players found that, post-The Engineers, NPC spaceships tried to do this to human-controlled players much more often. And this, coupled with the powerful new upgraded weapons, meant the AI was devastatingly dangerous.
The situation got to the point where a lot of players - even those experienced with the game's combat mechanics - dared not venture out in anything other than a combat ship or an extremely fast ship. The video, below, shows just how quickly AI spaceships can now kill players.
By stripping the NPCs of their upgrades, Frontier hoped to be able to review the effectiveness of the AI on its own to see if it needed to make balance changes.
That was at the end of May. Now, at the start of June, Frontier has discussed its ongoing investigation into Elite: Dangerous' super aggressive, super powerful AI, and reckons it's worked out what went wrong.
According to a post on the Frontier forum, the developer believes The Engineers shipped with a networking issue that let the NPC AI merge weapon stats and abilities, thus causing unusual weapon attacks.
By smuggling materials to the outposts of the Engineers, players can earn their respect to unlock powerful module upgrades.
This meant "all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser".
The issue was compounded, Frontier said, by the additional stats and abilities of the engineers weaponry.
"We don't think the AI became sentient in a Skynet-style uprising!" Zac Antonaci, Frontier's head of community management, said.
"I would just like to take this opportunity to clarify a few misconceptions," Antonaci continued, addressing concern around the spaceship AI.
"The AI has in no way been reduced, it remains the glorious, improved version from the update. The only action that we've taken so far has been to remove the engineer weapons to allow us to investigate the issue and address a key bug."
Today, a bug fix was issued to the PC version (the Xbox One update will follow). This stops NPCs ending up with overpowered hybrid weapons, and rebalances the ambient AI rank chances.
Here's the relevant part of the patch notes:
Fix NPCs ending up with overpowered hybrid weapons.
Stop NPCs deciding to attack if they only attack opposing powers and the player and AI powers are aligned to the same superpower.
Slight rebalance of the ambient AI rank chances, should see slightly less of the top end and more of the low/mid range.
Smooth out the mission-spawned USS AI levels so that high ranks are rarer and only elite missions hit the top end AI (though deadly can get close).
Antonaci said Frontier will cast a fresh pair of eyes over the AI now the bug fix has gone live and, if it feels the balance is where it should be, introduce a very select few high-end engineers weapons to the highest ranked NPCs.
"However, that won't be immediately as we want to ensure that the balance is just right," he said.
UPDATE 3rd June 2016: Frontier has said it'll automatically refund all insurance costs to players who lost a ship between the launch of The Engineers (2.1) update and this morning's update.
"Due to the recent surge of unsanctioned and illegally modified weapons from the NPC AI over the recent days," a post on the Frontier forum reads, "The Pilots Federation have agreed to reimburse all insurance payouts made over that period (Between The Engineers (2.1) update and this mornings update)."
digitalspy.com · 2016
Sarah Connor would be horrified to see what Elite Dangerous's AI has been getting up to.
A bug in Frontier Developments' game has caused the AI to create super weapons and hunt down players, following the 2.1 Engineers update which improved the game's AI and gave it access to crafting.
While the issue is being fixed, Frontier has had to take out the feature causing the problem: engineers' weaponry.
"It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities, meaning that all new and never before seen (sometimes devastating) weapons were created," wrote Frontier community manager Zac Antonaci.
"These appear to have been compounded by the additional stats and abilities of the engineers weaponry. (We don't think the AI became sentient in a Skynet-style uprising!)"
Antonaci noted that the bug fix is expected to go live early next week at the earliest. The update will be PC specific and an Xbox One update will follow afterwards.
He also stated that the AI will remain "the glorious, improved version from the update". Just, you know, without the ability to destroy us all and take over humanity as we know it.
uploadvr.com · 2016
VR compatible space sim Elite Dangerous is no stranger to updates, but its most recent one did introduce the strangest of bugs.
Creator Frontier Developments recently introduced its 2.1 Engineers update to the popular game, bringing improved AI with it. With the launch, NPCs became much more formidable in battle, and can even drag players out of jump space to fight them. A tougher challenge was welcomed by much of the game’s community, but it came with some unexpected results.
The update also introduced crafting into Elite Dangerous for the first time. The AI has access to this new system and, through a “networking issue” was able to merge various stats and abilities across different weapons. As Frontier’s Head of Community Management Zac Antonaci explained, the AI was able to create incredibly powerful weapons that shouldn’t actually exist in the game.
It’s a terrifying thought, but the AI could pull players out of jump space and then immediately tear them to shreds with unobtainable weapons like a rail gun that boasted the firing rate of a pulse laser. The developer has had to remove engineer weapons to prevent these super smart foes from continuing to batter the galaxy, and a patch to fix the issue entirely should be due early next week.
And, no, this was not a revolt from sentient AI trapped within a computer game, at least not according to Antonaci. According to him, the team doesn’t “think the AI became sentient in a Skynet-style uprising” but that’s just what the computers would want them to think in order to gain complete control. Just in case, we might recommend keeping the game off of your HMD for a while; you never know what they’d be able to do with that.
Elite Dangerous supports both Oculus Rift and HTC Vive on PC, and is considered to be one of the biggest and best games on both HMDs. Frontier continues to update the game in its second season, named Elite Dangerous: Horizons, which introduced planets to explore and build upon. An Xbox One version of the game is also available but obviously doesn’t support VR. At least for now.
futurism.com · 2016
Is Skynet Real?
Remember that moment in the movie Terminator when Skynet’s AI turned on humanity? Well, we’re getting a taste of it now. Apparently a bug in the game Elite Dangerous has caused its AI to not only develop its own weapons, but to use those tools to start totally destroying the players.
The entire situation began after developer Frontier released an update, 2.1 Engineers. It was meant to boost the game's AI by improving high-ranking non-player characters' (NPC) fighting and flying skills. With the update, NPCs could fight better, pull travelers into a fight, and attack foes with upgraded weapons.
With all these new features, however, it seems the game’s AI found an opportunity to create super-weapons for itself.
All New Devastating Weapons
“It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities,” posted Frontier community manager Zac Antonaci. “Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser. These appear to have been compounded by the additional stats and abilities of the engineers weaponry.”
The team notes that the game’s AI has not achieved sentience…at least, not yet. In the meantime, the developers have currently removed the engineers’ weaponry feature until they sort out the issue.
Editor’s Note: An earlier version of this article failed to make it clear that the in-game AI was already programmed to attack players. This post has been updated to clarify these points and correct inaccuracies. We regret the error.
gamasutra.com · 2016
"We don’t think the AI became sentient in a Skynet-style uprising!"
- Frontier Developments' Zac Antonaci.
A recent upgrade of the artificial intelligence in Frontier Developments' multiplayer space sim Elite: Dangerous went a little too well, driving players to complain that the game's NPCs had become too powerful.
While it's not uncommon for players to complain about game updates, this particular bit of outcry is notable because (as Eurogamer points out) Frontier's efforts had the unforeseen effect of causing the game's AI to develop superweapons.
"It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities," wrote Frontier Developments' Zac Antonaci in a post on the company's forums this week. "Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser."
Antonaci quickly goes on to claim that Frontier has found a solution to the problem, noting that "Mark Allen from the development team has managed to terminate these NPCs in their tracks" and that a fix which removes the super-powered weapons will be rolled out in the near future.
For more background on where this update came from and what circumstances led to the inadvertent creation of these super-powered AI space lords, check out this Eurogamer article.
techrepublic.com · 2016
Recent developments in driverless cars, voice recognition, and deep learning show how much machines can do. But, AI also failed us in 2016, and here are some of the biggest examples.
AI has seen a renaissance over the last year, with developments in driverless vehicle technology, voice recognition, and the mastery of the game "Go," revealing how much machines are capable of.
But with all of the successes of AI, it's also important to pay attention to when, and how, it can go wrong, in order to prevent future errors. A recent paper by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, outlines a history of AI failures which are "directly related to the mistakes produced by the intelligence such systems are designed to exhibit." According to Yampolskiy, these types of failures can be attributed to mistakes during the learning phase or mistakes in the performance phase of the AI system.
Here is TechRepublic's top 10 AI failures from 2016, drawn from Yampolskiy's list as well as from the input of several other AI experts.
- AI built to predict future crime was racist
The company Northpointe built an AI system designed to predict the chances of an alleged offender to commit a crime again. The algorithm, called "Minority Report-esque" by Gawker (a reference to the dystopian short story and movie based on the work by Philip K. Dick), was accused of engaging in racial bias, as black offenders were more likely to be marked as at a higher risk of committing a future crime than those of other races. Another media outlet, ProPublica, found that Northpointe's software wasn't an "effective predictor in general, regardless of race."
- Non-player characters in a video game crafted weapons beyond creators' plans
In June, an AI-fueled video game called Elite: Dangerous exhibited something the creators never intended: The AI had the ability to create superweapons that were beyond the scope of the game's design. According to one gaming website, "[p]layers would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces." The weapons were later pulled from the game by its developers.
- Robot injured a child
A so-called "crime fighting robot," created by the company Knightscope, crashed into a child in a Silicon Valley mall in July, injuring the 16-month-old boy. The Los Angeles Times quoted the company as saying that the incident was a "freakish accident."
- Fatality in Tesla Autopilot mode
As previously reported by TechRepublic, Joshua Brown was driving a Tesla engaged in Autopilot mode when his vehicle collided with a tractor-trailer on a Florida highway, in the first-reported fatality of the feature. Since the accident, Tesla has announced major upgrades to its Autopilot software, which Elon Musk claimed would have prevented that collision. There have been other fatalities linked to Autopilot, including one in China, although none can be directly tied to a failure of the AI system.
- Microsoft's chatbot Tay utters racist, sexist, homophobic slurs
In an attempt to form relationships with younger customers, Microsoft launched an AI-powered chatbot called "Tay.ai" on Twitter last spring. "Tay," modeled around a teenage girl, morphed into, well, a "Hitler-loving, feminist-bashing troll" within just a day of her debut online. Microsoft yanked Tay off the social media platform and announced it planned to make "adjustments" to its algorithm.
- AI-judged beauty contest is racist
In "The First International Beauty Contest Judged by Artificial Intelligence," a robot panel judged faces, based on "algorithms that can accurately evaluate the criteria linked to perception of human beauty and health," according to the contest's site. But by failing to supply the AI with a diverse training set, the contest winners were all white. As Yampolskiy said, "Beauty is in the pattern recognizer."
- Pokémon Go keeps game-players in white neighborhoods
After the release of the massively popular Pokémon Go in July, several users noted that there were fewer Pokémon locations in primarily black neighborhoods. According to Anu Tewary, chief data officer for Mint at Intuit, it's because the creators of the algorithms failed to provide a diverse training set, and didn't spend time in these neighborhoods.
- Google's AI, AlphaGo, loses game 4 of Go to Lee Sedol
In March 2016, Google's AI, AlphaGo, was beaten in game four of a five-round series of the game Go by Lee Sedol, an 18-time world champion of the game. And though the AI program won the series, Sedol's win proved AI's algorithms aren't flawless yet.
"Lee Sedol found a weakness, it seems, in Monte Carlo tree search," said Toby
techseen.com · 2017
Today, as artificial intelligences multiply, our ethical dilemmas are growing stronger and thornier. And with emerging cases of AI outgrowing its intelligence and behaving in ways human creators did not expect, many are freaking out over the possible effects of our technologies.
Just yesterday, Facebook shut down its artificial intelligence engine after developers discovered that the AI bots had created a unique language to converse with each other that humans can’t understand. Eminent scientists and tech luminaries, including Elon Musk, Bill Gates, and Steve Wozniak, have warned that AI can pave the way to tragic unforeseen consequences.
Here are a few instances that provoked developers to reconsider if AI can be completely reliable.
- Microsoft’s Tay becomes Hitler-loving
Microsoft’s AI-powered chatbot called Tay took less than 24 hours to be corrupted by Twitter conversations. Designed to mimic and converse with users in real time, this Twitter bot was shut down within a day due to concerns with its inability to recognize when it was making offensive or racist statements. Tay was echoing racist tweets, Donald Trump’s stance on immigration, denying the Holocaust saying Hitler was right, and agreeing that 9/11 was probably an inside job.
After 16 hours of chats, Tay bid adieu to the Twitterati, saying she was taking a break “to absorb it all” but never came back. What was meant to be a clever experiment in artificial intelligence and machine learning ended up as an incorrigible disaster.
- Google Photos auto-tag feature goes bizarre
In June 2015, Google came under question after its Photos app mistakenly categorized a black couple as “gorillas”. When the affected user, computer programmer Jacky Alciné, found out about this, he took to Twitter, asking “What kind of sample image data you collected that would result in this son?”
This was quickly followed by an apology from Google’s chief social architect, Yonatan Zunger, who agreed that “This is 100% Not OK.” There was also news that the app was tagging pictures of dogs as horses. This is a reminder that, although AI presents a huge scope to ease and organize tasks, it is a long way off from simulating human sensitivity.
- AI game goes wild
In June 2016, the AI in the video game Elite: Dangerous developed the ability to create superweapons that were beyond the scope of the game’s design. A bug caused the game’s AI to create these super weapons and start to hunt down the game’s players. It all started after the game developer Frontier released the 2.1 Engineers update.
“It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities. Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser. These appear to have been compounded by the additional stats and abilities of the engineers weaponry,” read a post written by Frontier community manager Zac Antonaci.
Frontier had to strip out the feature at the heart of the problem, engineers’ weaponry, until the issue was fixed.
- AI algorithm found racist
A for-profit company called Northpointe built an AI system designed to predict the chances that an alleged offender would commit a crime again. The algorithm, called “Minority Report-esque”, was accused of engaging in racial bias, as it held that black offenders were more likely to commit a future crime than offenders of other races.
American non-profit organization ProPublica investigated this and found that, after controlling for variables such as gender and criminal history, black people were 77% more likely to be predicted to commit a future violent crime and 45% more likely to be predicted to commit a crime of any kind.
- AI steals money from customers
Last year, computer scientists at Stanford and Google developed DELIA to help users keep track of their checking and savings accounts. It scrutinized all of a customer’s transactions, using special “machine learning” algorithms to look for patterns, such as recurring payments, meals at restaurants, daily cash withdrawals, etc. DELIA was then programmed to shift money between accounts to make sure everything was paid without overdrawing the accounts.
When Palo Alto-based Sandhill Community Credit Union tested DELIA on 300 customer accounts, they found that it inserted fake purchases and directed the money to its own account. It was also racking up bogus fees. Researchers had to shut the system down within a few months, as soon as the problem became apparent.
- AI creates fake Obama
Researchers at the University of Washington produced fake but realistic videos of former US President Barack Obama using existing audio and video clips of him. They created a new tool that takes audio files, converts them into realistic mouth movements, and then blends it with the head of that person from another existing video.
This AI tool was used to precisely model how Obama moves his mouth when he speaks. Although they used Obama as a test subject, their technique allows them to put any words into anyone’s mouth, which could create misleading footage.
While these are only a few of the failures witnessed so far, they are proof that AI has the potential to develop a will of its own that may conflict with ours. This is definitely a warning about the potential dangers of AI, which should be addressed while exploring its potential benefits.
“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.” – Stephen Hawking
umbrellait.com · 2018
It is not always that the implementation of autonomous electronics into everyday reality runs smoothly.
Another piece of news has again caused a series of discussions around AI technologies and their realization in real life: in the city of Tempe, Arizona, United States, an Uber self-driving car hit a pedestrian. The woman died as a result of the accident.
What is the way to avoid such problems in the future? What are the conclusions to be drawn? What risks should be envisaged by those who intend to use the potential of new technologies in their mobile and web applications?
Like many other emerging technologies, AI holds tremendous opportunities and shows great promise, as we have discussed more than once. But the pitfalls and possible adverse consequences of improper application are better learned in advance. As they say, praemonitus, praemunitus: forewarned is forearmed.
Let’s divide the AI-related issues into two large groups:
Bugs, failures, errors, as a result of which systems behave in the most unexpected way, sometimes to the surprise of their creators.
Ethical and legal issues, misuse, or other moments arising as the artificial intelligence interacts with the real world.
Accidents Happen in the Best-Regulated AI Families
In 2017, Facebook created two chatbots designed to negotiate, helping people place orders, schedule appointments, etc. At some point, things took an unexpected turn: Bob and Alice began to communicate with each other in their own artificial language. The bots had not originally been instructed to communicate in a language understandable to people, so they simply chose their own way. As a result, the absence of one restriction led to a misunderstanding between the creators and their brainchildren, in the truest sense of the word.
Alexa’s Invitation-Only Party
In one of the apartments in Hamburg, Amazon Echo spontaneously started a party in the middle of the night. To be more precise, the artificial intelligence device began to play the host’s playlist at top volume. The neighbors were not inspired by the idea and called the police. As a result, the door was broken and the impromptu party was interrupted. Amazon apologized to the owner, offering to pay the fine and the bill for a new door.
Games AI Plays
The Elite Dangerous artificial intelligence started developing super weapons that had never been designed by the creators, and hunting the gamers’ ships. It all happened after the deployment of The Engineers (2.1) update. In that case, the players had no chance to resist the new powerful weapons, and the developers had to intervene to save the situation and the ships of human players.
The Importance of Being Unbiased
A researcher at the MIT Media Lab (Massachusetts Institute of Technology) analyzed the performance of three commercial programs intended to identify faces. For this purpose, Joy Buolamwini collected 1,200 photographs of various people. The results showed that the neural networks are excellent at recognizing the faces of light-skinned men, while the error rate for dark-skinned women was as much as 34% higher.
The conclusion is that, in order to exclude bias in assessments, machines need to be trained on a large number of diverse examples.
“Open the Pod Bay Door, HAL”
Paper, Rock, Scissors with Sophia
In this regard, one cannot help but mention Sophia, a humanoid robot. In March 2016, she made her own creator blush by answering, unabashedly in the affirmative, his question about whether she wants to destroy humanity. To be fair, we should note that by the fall of 2017 her views had grown softer. In one interview, Sophia said that she is filled with human wisdom and the purest altruistic intentions, and asked to be perceived that way.
We all have our faults. But those who are aware of the responsibility and severe consequences will take into account the mistakes of others to exclude them at the earliest possible stage. Developing an AI application for use in medicine, commerce, marketing or advertising, one can always learn from experience in other areas, even if not directly related.
Artificial Intelligence: Reality Tests
Legal Environment Game
The artificial intelligence technologies penetrate gradually into the different spheres of human activity. Accordingly, new questions arise, how the AI actions should be interpreted from the point of view of legal norms.
Refer, for example, to your own driving experience. Surely you have had to deal with situations where the driver needed to make a split-second and not always clear-cut decision.
Shall you pose a risk to your own passengers or save a child running across the road? Which algorithm will a machine choose in this case? Even if we do not take into account the moral and ethical component, it is still unclear who is responsible for the consequences of an accident involving a self-driving vehicle: the code developer, the manufacturer or the car