Incident 41: All Image Captions Produced are Violent

Description: MIT Media Lab researchers create AI-powered "psychopath" named Norman by training a model on "dark corners" of Reddit.
Alleged: MIT Media Lab developed and deployed an AI system, which harmed Unknown.

Suggested citation format

Anonymous. (2018-04-02) Incident Number 41. in McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
41
Report Count
28
Incident Date
2018-04-02
Editors
Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

In 2018, MIT Media Lab researchers created an AI-powered "psychopath" image-captioning algorithm named Norman. Norman was trained on caption data from a Reddit community that contained graphic images and videos about people dying. Following this training, the researchers showed Norman and a standard image-captioning algorithm trained on the MSCOCO dataset a series of Rorschach inkblots, which psychologists have used to detect underlying thought disorders. Norman's responses consistently described gruesome scenes, compared to innocent-sounding descriptions from the other algorithm; for example, where the standard algorithm saw "a black and white photo of a small bird," Norman saw "man gets pulled into dough machine." The researchers created Norman to demonstrate the influence training data has on how machine learning algorithms perform in the real world, and how poor data may lead to unreliable and untrustworthy outputs.
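The Norman model itself was never released, but the "standard" half of the comparison can be roughly approximated with an off-the-shelf image-captioning model trained on MSCOCO-style data. A minimal sketch, assuming the Hugging Face transformers library and a hypothetical local inkblot image file:

```python
# Minimal sketch of the "standard AI" side of the comparison described above.
# Assumptions: the Hugging Face transformers library is installed, and
# "inkblot.png" is a placeholder for any ambiguous, inkblot-like local image.
# Norman's own weights were never published, so only the control side is shown.
from transformers import pipeline

# A publicly available ViT + GPT-2 captioner trained on COCO captions,
# standing in for the MSCOCO-trained control network described in the reports.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

result = captioner("inkblot.png")
print(result[0]["generated_text"])  # typically a benign description of shapes or objects
```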

Short Description

MIT Media Lab researchers create AI-powered "psychopath" named Norman by training a model on "dark corners" of Reddit.

Severity

Negligible

Harm Type

Psychological harm

AI System Description

"Norman" is a text generating algorithm trained on disturbing content in order to demonstrate how training data can negatively affect an AI model. The comparison model is a regular text generation model.

System Developer

MIT Media Lab

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

Machine learning

AI Applications

Text generation

Location

Cambridge, MA

Named Entities

MIT Media Lab, Reddit, Norman, Massachusetts Institute of Technology

Technology Purveyor

MIT Media Lab

Beginning Date

2018-04-01

Ending Date

2018-06-01

Near Miss

Unclear/unknown

Intent

Unclear

Lives Lost

No

Data Inputs

Violent content from Reddit for the Norman algorithm, MSCOCO dataset for the control algorithm.

Incident Reports

A neural network named "Norman" is disturbingly different from other types of artificial intelligence (AI).

Housed at MIT Media Lab, a research laboratory that investigates AI and machine learning, Norman's computer brain was allegedly warped by exposure to "the darkest corners of Reddit" during its early training, leaving the AI with "chronic hallucinatory disorder," according to a description published April 1 (yes, April Fools' Day) on the project's website.

MIT Media Lab representatives described the presence of "something fundamentally evil in Norman's architecture that makes his re-training impossible," adding that not even exposure to holograms of cute kittens was enough to reverse whatever damage its computer brain suffered in the bowels of Reddit.

This outlandish story is clearly a prank, but Norman itself is real. The AI has learned to respond with violent, gruesome scenarios when presented with inkblots; its responses suggest its "mind" experiences a psychological disorder.

In dubbing Norman a "psychopath AI," its creators are playing fast and loose with the clinical definition of the psychiatric condition, which describes a combination of traits that can include lack of empathy or guilt alongside criminal or impulsive behavior, according to Scientific American.

Norman demonstrates its abnormality when presented with inkblot images — a type of psychoanalytic tool known as the Rorschach test. Psychologists can get clues about people's underlying mental health based on the descriptions of what they see when looking at these inkblots.

When MIT Media Lab representatives tested other neural networks with Rorschach inkblots, the descriptions were banal and benign, such as "an airplane flying through the air with smoke coming from it" and "a black-and-white photo of a small bird," according to the website.

However, Norman's responses to the same inkblots took a darker turn, with the "psychopathic" AI describing the patterns as "man is shot dumped from car" and "man gets pulled into dough machine."

According to the prank, the AI is currently located in an isolated server room in a basement, with safeguards in place to protect humans' other computers and the internet from contamination or harm through contact with Norman. Also present in the room are weapons such as blowtorches, saws and hammers, for physically disassembling Norman, "to be used if all digital and electronic fail-safes malfunction," MIT Media Lab representatives said.

Further April Fools notes suggest that Norman poses a unique danger, and that four out of 10 experimenters who interacted with the neural network suffered "permanent psychological damage." (There is to date no evidence that interacting with AI can be harmful to humans in any way).

Neural networks are computer interfaces that process information similarly to the way a human brain does. Thanks to neural networks, AI can "learn" to perform independent actions, such as captioning photos, by analyzing data that demonstrates how this task is typically performed. The more data it receives, the more information it will have to inform its own choices and the more likely its actions will be to follow a predictable pattern.

For example, a neural network known as the Nightmare Machine — built by the same group at MIT — was trained to recognize images that were scary, by analyzing visual elements that frightened people. It then took that information and put it to use through digital photo manipulation, transforming banal images into frightening, nightmarish ones.

Another neural network was trained in a similar manner to generate horror stories. Named "Shelley" (after "Frankenstein" author Mary Wollstonecraft Shelley), the AI consumed over 140,000 horror stories and learned to generate original terrifying tales of its own.

And then there's Norman, which looks at a colorful inkblot that a standard AI described as "a close-up of a wedding cake on a table" and sees a "man killed by speeding driver."

But there may be hope for Norman. Visitors to the website are offered the opportunity to help the AI by participating in a survey that collects their responses to 10 inkblots. Their interpretations could help the wayward neural network fix itself, MIT Media Lab representatives suggested on the website.

Meet 'Norman,' the Darkest, Most Disturbed AI the World Has Ever Seen

Image caption: Norman was named after Alfred Hitchcock's Norman Bates from his classic horror film Psycho.

Norman is an algorithm trained to understand pictures but, like its namesake, Hitchcock's Norman Bates, it does not have an optimistic view of the world.

When a "normal" algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: "A group of birds sitting on top of a tree branch."

Norman sees a man being electrocuted.

And where "normal" AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from "the dark corners of the net" would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

These abstract images are traditionally used by psychologists to help assess the state of a patient's mind, in particular whether they perceive the world in a negative or positive light.

Norman's view was unremittingly bleak - it saw dead bodies, blood and destruction in every image.

Alongside Norman, another AI was trained on more normal images of cats, birds and people.

It saw far more cheerful images in the same abstract blots.

The fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT's Media Lab which developed Norman.

"Data matters more than the algorithm.

"It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."

Artificial intelligence is all around us these days - Google recently showed off AI making a phone call with a voice virtually indistinguishable from a human one, while fellow Alphabet firm DeepMind has made algorithms that can teach themselves to play complex games.

And AI is already being deployed across a wide variety of industries, from personal digital assistants and email filtering to search, fraud prevention, voice and facial recognition, and content classification.

It can generate news, create new levels in video games, act as a customer service agent, analyse financial and medical reports and offer insights into how data centres can save energy.

But if the experiment with Norman proves anything it is that AI trained on bad data can itself turn bad.

Racist AI

Norman is biased towards death and destruction because that is all it knows and AI in real-life situations can be equally biased if it is trained on flawed data.

In May last year, a report claimed that an AI-generated computer program used by a US court for risk assessment was biased against black prisoners.

The program flagged that black people were twice as likely as white people to reoffend, as a result of the flawed information that it was learning from.

Predictive policing algorithms used in the US were also spotted as being similarly biased, as a result of the historical crime data on which they were trained.

Sometimes the data that AI "learns" from comes from humans intent on mischief-making, so when Microsoft's chatbot Tay was released on Twitter in 2016, the bot quickly proved a hit with racists and trolls who taught it to defend white supremacists, call for genocide and express a fondness for Hitler.

Norman, it seems, is not alone when it comes to easily suggestible AI.

And AI hasn't stopped at racism.

One study showed that software trained on Google News became sexist as a result of the data it was learning from. When asked to complete the statement, "Man is to computer programmer as woman is to X", the software replied "homemaker".
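The analogy in that study comes from word embeddings trained on Google News text. A rough way to reproduce it, assuming the pretrained gensim vectors (a roughly 1.6 GB download) and noting that the exact top answer depends on the embedding used:

```python
# Hedged sketch of the word-embedding analogy reported above; not the study's code.
# Assumes gensim with downloader access to the pretrained Google News vectors.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # word2vec trained on Google News

# "man" is to "computer_programmer" as "woman" is to ... ?
# (multiword phrases in this vocabulary are joined with underscores)
answers = vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
)
for word, score in answers:
    print(f"{word}\t{score:.3f}")  # the study reported "homemaker" among the top answers
```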

Dr Joanna Bryson, from the University of Bath's department of computer science, said that the issue of sexist AI could be down to the fact that a lot of machines are programmed by "white, single guys from California" and can be addressed, at least partially, by diversifying the workforce.

She told the BBC it should come as no surprise that machines are picking up the opinions of the people who are training them.

"When we train machines by choosing our culture, we necessarily transfer our own biases," she said.

"There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities."

What she worries about is the idea that some programmers would deliberately choose to hard-bake badness or bias into machines.

To stop this, the process of creating AI needs more oversight and greater transparency, she thinks.

Prof Rahwan said his experiment with Norman proved that "engineers have to find a way of balancing data in some way," but, he acknowledges the ever-expanding a

Are you scared yet? Meet Norman, the psychopathic AI

A team of scientists at the Massachusetts Institute of Technology (MIT) have built a psychopathic AI using image captions pulled from Reddit. Oh, and they’ve named it Norman after Alfred Hitchcock’s Norman Bates. This is how our very own Terminator starts...

The purpose of the experiment was to test how data fed into an algorithm affects its "outlook". Specifically, how training an algorithm on some of the darkest elements of the web – in this case, images of people dying grisly deaths sourced from an unnamed Reddit subgroup – affects the software.

Norman is a particular type of AI program that can "look at" and "understand" pictures, and then describe what it sees in writing. So, after being trained on some particularly gruesome image captions, it performed the Rorschach test, which is the series of inkblots psychologists use to analyze the mental health and emotional state of their patients. Norman's responses were then compared to those of a second AI, trained on more family-friendly images of birds, cats, and people. The differences between the two are stark.

Here are just a few examples:

A standard AI thought this red and black inkblot represented "A couple of people standing next to each other." Norman thought it was "Man jumps from floor window".

This grey inkblot could be interpreted as "A black and white photo of a baseball glove" (standard AI) or "Man is murdered by machine gun in daylight" (Norman).

One AI thought this was "A black and white photo of a small bird." The other saw "Man gets pulled into dough machine." Guess which one was Norman.

For more, check out the website.

This shows that data really does matter more than the algorithm, the researchers say.

"Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," the team, who are also responsible for the Nightmare Machine and Shelly, the first AI horror writer, explained on the website.

This is true not only of AI exhibiting psychopathic tendencies but also of other algorithms accused of being unfair and prejudiced. Studies have shown that, intentionally or not, artificial intelligence picks up human racism and sexism. Then there was Microsoft's chatbot Tay, which had to be taken offline after it began spewing hateful one-liners, such as "Hitler was right" and "I fucking hate feminists and they should all die and burn in hell."

As for Norman, hope is not lost. Good citizens can help the algorithm regain its morality by completing the Rorschach test themselves.

MIT Scientists Create Norman, The World's First "Psychopathic" AI

In one of the big musical numbers from The Life Of Brian, Eric Idle reminds us to “always look on the bright side of life.” Norman, a new artificial intelligence project from MIT, doesn’t know how to do that.

That’s because Norman is a psychopath, just like the Hitchcock character that inspired the research team to create him.

Like so many of these projects do, the MIT researchers started out by training Norman on freely available data found on the Web. Instead of looking at the usual family-friendly Google Images fare, however, they pointed Norman toward darker imagery. Specifically, the MIT crew stuck Norman in a creepy subreddit to do his initial training.

Armed with this twisted mass of digital memories, Norman was then asked to caption a series of Rorschach inkblots. The results are predictably creepy. Let’s have a look at a couple, shall we?

You’ve got to hand it to Norman, he paints a vivid picture. He’s not always as prosaic, but he does always stick to the darkest timeline. You know, the one where Abed tries to cut off Jeff’s arm with a bone saw.

Standard AI thinks wedding, Norman thinks funeral. Or maybe crime scene.

The MIT team hit the nail on the head when it said: “Norman suffered from extended exposure to the darkest corners of Reddit.” Fortunately, there’s still hope for this disturbed AI.

If we all get together and re-train him by submitting our own answers to MIT’s test images maybe Norman can start seeing the world through slightly more rose-colored glasses. Then again, it could also backfire horribly.

Either way, it reinforces one very important fact about AI: that the worldview of an AI is very much determined by the information it gathers while learning. That’s obviously not a new revelation. We saw something similar play out when Microsoft’s Twitter bot went from normal to racist in less than 24 hours.

MIT Creates An AI Psychopath Because Someone Had To Eventually

Many people are concerned about the potential rise of malignant AI, with UK newspapers, in particular, worried about the ‘Terminator’ scenario of machines that are hostile to humanity.

Researchers at MIT have decided to explore this concept by creating a psychopathic AI, named Norman – after Norman Bates in the Alfred Hitchcock movie, Psycho. Their aim isn’t to confirm the public’s worst fears by designing a hostile machine intelligence, but to demonstrate how and why a machine might become ‘evil’ in the first place.

Norman was designed to explore the enormous influence that training data has on machine learning algorithms, and the results are certainly instructive.

But first, what is the problem that researchers are trying to highlight?

Importing biases

Many people assume that artificial intelligence systems are somehow objective and devoid of the biases, beliefs, or prejudices that are common among human beings. In fact, the reverse is invariably the case, and the data that developers use to train machine learning algorithms can heavily influence their behaviour, and the outcomes that these systems produce.

Research has shown (see below) that unconscious bias can creep into training data, sometimes because systems are developed in teams that lack diversity or external inputs, and on other occasions simply because they are trained using data that contains historic biases that have never been recognised and addressed by the developers.

For example, if an AI is trained to give sentencing guidelines in the legal system, it will produce biased results if the training data contains long-term, systemic biases against minority groups. This isn’t a hypothetical scenario: the COMPAS AI system in the US was recently found to be biased against black Americans and other minorities, because decades of legal data contained institutional biases in sentencing.

In effect, those biases have become automated and given a veneer of neutrality: a dangerous set of circumstances in social terms.

All of these issues are explored in depth in this external report by Internet of Business editor, Chris Middleton. Among the many cases discussed in that article is the CIA’s recent building of an AI image recognition system to determine if people with tattoos are more likely to commit crimes. However, as the report explains, the implicit belief “people with tattoos commit crimes” is inherent in the programme and the training data, so there is a likelihood the system can only give officials the answers they want.

While developers and their machine learning models might be completely unbiased themselves, the point is that many AIs reach whatever conclusions are available to them from the data that humans have put into them. They are not in any meaningful sense ‘intelligent’.

Introducing Norman

The researchers used the Rorschach inkblot test to prove the point. Via Norman, the team demonstrated that the same machine learning algorithm will perceive completely different scenes in an image when trained using different source data.

Norman was designed to perform image captioning, creating textual descriptions of images. However, it was trained using a Reddit page that contained disturbing depictions and observations on the reality of death.

The AI was then tested alongside another image-captioning neural network, this time trained on the Microsoft COCO dataset. Both were subjected to Rorschach inkblots – the psychological test created in 1921 and made famous by its use in the diagnosis of psychological disorders.

The results of the AI experiment were disturbing, if predictable. While the standard AI interpreted one image as containing “a group of birds sitting on top of a tree branch”, Norman concluded “a man is electrocuted”.

Similarly, what was "a close up of a vase with flowers" to the other AI was captioned "a man is shot dead in front of his screaming wife" by Norman.

Other interpretations included, “man gets pulled into dough machine” and “pregnant woman falls at construction story [sic].”

Norman isn’t the MIT team’s first foray into AI’s links with horror and other emotions. In 2016, researchers shared the Nightmare Machine – AI-generated horror imagery – and polled people around the world on their responses to AI’s ability to invoke emotions such as fear. A year later, the Shelley AI collaboratively wrote horror stories with humans before Deep Empathy explored the flip side of the emotional coin.

Internet of Business says

The implications of the ‘Norman’ research are valuable – and troubling – because they reveal that some AI systems may simply present us with the results that we, consciously or unconsciously, already want to see. In the same way that a Google image search, for example, will present whatever pictures internet users have tagged in a certain way – including tags that may be partial or biased.

This opens up the real possibility that we may begin to use AI to ‘prove’ things that we already believe to be the case. In suc

Norman the psychopathic AI offers a warning on biased data

Artificial intelligence researchers have thus far attempted to make well-rounded algorithms that can be helpful to humanity. However, a team from MIT has undertaken a project to do the exact opposite. Researchers from the MIT Media Lab have trained an AI to be a psychopath by only exposing it to images of violence and death. It’s like a Skinner Box of horror for the AI, which the team has named “Norman” after movie psychopath Norman Bates. Predictably, Norman is not a very well-adjusted AI.

Norman started off with the same potential as any other neural network — as you feed it data, it becomes able to discern similar patterns it encounters. Technology companies have used AI to help search through photos and create more believable speech synthesis, among many other applications. These well-rounded AIs were designed with a specific purpose in mind. Norman was born to be a psychopath.

The MIT team fed Norman a steady diet of data culled from gruesome subreddits that exist to share photos of death and destruction. Because of ethical concerns, the team didn’t actually handle any photos of people dying. Norman only got image captions from the subreddit that were matched to inkblots, and this is what formed the basis for his disturbing AI personality.

After training, Norman and a “regular” AI were shown a series of inkblots. Psychologists sometimes use these “Rorschach tests” to assess a patient’s mental state. Norman and the regular AI are essentially image-captioning bots, which is a popular deep learning application for AI. The regular AI saw things like an airplane, flowers, and a small bird. Norman saw people dying from gunshot wounds, jumping from buildings, and so on.

Norman was not corrupted to make any sort of point about human psychology on the internet — a neural network is a blank slate. It doesn’t have any innate desires like a human. What Norman does address is the danger that artificial intelligence can become dangerously biased. With AI, you get out what you put in, so it’s important that these platforms are trained to avoid bias, and preferably not left to browse the darker corners of Reddit for long periods of time.
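As a toy illustration of that "you get out what you put in" point (not the MIT code, and deliberately using simple caption retrieval rather than a neural captioner), the same procedure fit on two different caption corpora describes the same ambiguous input very differently:

```python
# Toy sketch: identical "algorithm", different training captions, different output.
# The corpora reuse captions quoted in the reports; the query string and the
# retrieval approach are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def nearest_caption(query: str, corpus: list[str]) -> str:
    """Return the training caption most similar to the query description."""
    vectorizer = TfidfVectorizer().fit(corpus + [query])
    scores = cosine_similarity(
        vectorizer.transform([query]), vectorizer.transform(corpus)
    )[0]
    return corpus[scores.argmax()]

benign_captions = [
    "a black and white photo of a small bird",
    "a group of birds sitting on top of a tree branch",
    "a close up of a wedding cake on a table",
]
grim_captions = [
    "man gets pulled into dough machine",
    "man is shot dumped from car",
    "man killed by speeding driver",
]

# The same ambiguous, inkblot-like description goes to both "models".
query = "dark shape of a man next to a machine"
print("standard :", nearest_caption(query, benign_captions))
print("norman   :", nearest_caption(query, grim_captions))
```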

The team now wants to see if it can fix Norman. You can take the same Rorschach test and add your own captions. The team will use this data to adjust Norman’s model to see if he starts seeing less murder. We can only hope.

MIT Creates AI-Powered Psychopath Called 'Norman'

I appreciate what MIT has done here. However, the work described in the article is useless, misleading, or both. All the training methods I've encountered in the literature do not maintain any kind of measure of the mutual information between test images and training images. In short, the trained NN blindly spits out classifications with equal confidence for test images that are very similar to those it has been trained on, and test images that are very different from those it has been trained on. Feeding a Rorschach blot to a NN trained on a gore subreddit is no different than feeding static on the inputs which, in turn, is just a roundabout way of assembling a set of substrings from the subreddit and calling choice(1) on that set.

Norman Bates, a 'psychopath' AI created by MIT, was trained by studying Reddit posts of "gruesome deaths."

Scientists at the Massachusetts Institute of Technology (MIT) trained an artificial intelligence algorithm dubbed "Norman" to become a psychopath by only exposing it to macabre Reddit images of gruesome deaths and violence, according to a new study.

Nicknamed Norman after Anthony Perkins' character in Alfred Hitchcock's 1960 film Psycho, the artificial intelligence was fed only a continuous stream of violent images from various pernicious subreddits before being tested with Rorschach inkblot tests. Norman produced spooky interpretations of the imagery, such as electrocutions and speeding car deaths, where a standard AI would only see umbrellas and wedding cakes.

MIT: Norman Project

MIT scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan specifically trained the AI to perform image captioning, a "deep learning method" for artificial intelligence to cull through images and produce corresponding descriptions in writing. Their research set out to prove that the method of input used to teach a machine learning algorithm can greatly influence its later behavior. The scientists argued that when algorithms are accused of being biased or unfair, such as the high-profile cases of Facebook news or Google Photos, "the culprit is often not the algorithm itself but the biased data that was fed into it."

Among the Rorschach inkblots used to test the now-tainted AI, Norman said an image showed a man being "shot dead," while a standard AI looked at the same image and saw "a close up of a vase with flowers." In another, Norman said he saw a man being shot "in front of his screaming wife," while the AI not exposed to sordid, disturbing images saw "a person holding an umbrella in the air."

In one of the inkblot tests, the standard AI saw a touching scene of a couple standing together. Norman, however, saw a pregnant woman falling from construction. Having only been exposed to negative images and depressing thinking, the AI's empathy logic simply failed to turn on. Skye McDonald, professor of clinical neuropsychology at the University of New South Wales, told The Conversation that Aldebaran Robotics' "Pepper" robot may be one of the closest experiments in teaching human empathy. Robotics experts are hopeful that one day artificial intelligence will be able to understand a human's emotions simply by viewing a person's facial expressions. However, McDonald argued, technology is "still a long way from fully understanding the complexities of how human empathy operates."

At an MIT event in New York last month, Andrew McAfee, co-director of MIT's Initiative on the Digital Economy, was joined by several Silicon Valley experts who detailed past cases where racism and bias had been filtered into various algorithms. The panel cautioned that while human resources departments may be able to make smart hiring decisions, biases against minorities can creep in from the human who programmed the algorithm. McAfee and the other tech experts discussing algorithmic bias said AI can remain effective and unbiased "as long as" the input data is accurate.

The MIT researchers in this study redacted the name of the specific subreddits used to train the AI. The researchers said the AI "suffered from extended exposure to the darkest corners of Reddit" to illustrate "the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms."

MIT Trains Psychopath Robot 'Norman' Using Only Gruesome Reddit Images

The Massachusetts Institute of Technology's website for Norman, the world's first psychopathic AI, is oddly cheerful and optimistic. A creepy combination of Norman Bates (from the 1960 Alfred Hitchcock movie Psycho) and a robot stares at you and dares you to "explore what Norman sees".

Reportedly, the MIT team created Norman as part of an experiment to see what training artificial intelligence on data from the “dark corners of the net” would do to its worldview. The researchers said Norman was born from the fact that the data that is used to teach a machine learning algorithm can significantly influence behaviour. “So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the website explains.

It clearly works because, in the Rorschach test, Norman sees only dark and creepy things, while a normal AI thinks more rationally.

The researchers explained that Norman “suffered from extended exposure to the darkest corners of Reddit”, and thus represented a case study on the dangers of AI going wrong when biased data is used in machine learning algorithms.

Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image. “We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders,” said the researchers.

But this is not the first time that researchers have tried to explore the dark side of AI. In 2016, MIT created a 'Nightmare Machine' for AI-generated scary imagery. They collected over two million votes from people all over the world to find out the answer to this question: can AI not only detect but induce extreme emotions (such as fear) in humans? Later, in 2017, they created Shelley, the world's first collaborative AI horror writer: a deep-learning powered AI that wrote over 200 horror stories collaboratively with humans.

MIT Creates A Psychopathic AI Named Norman, Because Why Not

Norman: MIT's artificial intelligence with psychopathic traits. The experiment is based on the 1921 Rorschach test, which identifies psychopathic traits based on users' perception of inkblots.

Researchers at the Massachusetts Institute of Technology (MIT) have developed what is likely a world first -- a "psychopathic" artificial intelligence (AI).

The experiment is based on the 1921 Rorschach test, which identifies traits in humans deemed to be psychopathic, alongside what are known as thought disorders, based on their perception of inkblots.

Norman is an AI experiment born from the test and "extended exposure to the darkest corners of Reddit," according to MIT, in order to explore how datasets and bias can influence the behavior and decision-making capabilities of artificial intelligence.

"When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it," the researchers say. "The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set."

Norman is an AI system trained to perform image captioning, in which deep learning algorithms are used to generate a text description of an image.

However, after plundering the depths of Reddit and a select subreddit dedicated to graphic content brimming with images of death and destruction, Norman's datasets are far from what a standard AI would be exposed to.

In a prime example of artificial intelligence gone wrong, MIT performed the Rorschach inkblot tests on Norman, with a standard image captioning neural network used as a control subject for comparison.

The results are disturbing, to say the least.

In one inkblot test, a standard AI saw "a black and white photo of a red and white umbrella," while Norman saw "man gets electrocuted while attempting to cross busy street."

In another, the control AI described the inkblot as "a black and white photo of a small bird," Norman described the image as "man gets pulled into dough machine."

Due to ethical concerns, MIT only introduced bias in relation to image captions from the subreddit which are later matched with randomly generated inkblots. In other words, the researchers did not use true images of people dying during the experiment.

The Norman experiment is an interesting application of AI which highlights the need for suitable datasets when artificial intelligence systems and neural networks are being trained.

Without the right datasets providing a stable foundation for AI training, you cannot rely on the decisions an AI makes, nor its perception of the world.

Meet Norman, the world's first 'psychopathic' AI

Meet Norman.

He's not your everyday AI. His algorithms won't help filter through your Facebook feed or recommend you new songs to listen to on Spotify.

Nope -- Norman is a "psychopath AI", created by researchers at the MIT Media Lab as a "case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms."

The researchers set Norman up to perform image captioning, a deep-learning method that generates a textual description of an image, then plugged him into an unnamed subreddit known for its graphic imagery surrounding death.

Then they had Norman explain a range of Rorschach inkblots, comparing the answers from their psychopathic AI with that of your friendly, neighbourhood "standard AI". Although Norman was originally unveiled on April 1, those answers are no joke -- they're highly disturbing.

Where a standard AI sees "a group of birds sitting on top of a tree branch" (awww!), Norman, our HAL-9000-esque death-machine, sees "a man electrocuted to death" (ahhh!). Where the standard AI sees "a close up of a wedding cake on a table", Norman, our malicious AI robokiller sees "a man killed by speeding driver".

The researchers didn't "create" Norman's "psychopathic" tendencies, they just helped the AI on its way by only allowing it to see a particular subset of image captions. The way Norman describes the Rorschach inkblots with simple statements does make it seem like it is posting on a subreddit.

But why even create a psychopath AI?

The research team aimed to highlight the dangers of feeding specific data into an algorithm and how that may bias or influence its behaviour.

That starts to make me wonder -- don't the MIT team at Boston Dynamics constantly push and poke and annoy their running, jumping and door-opening robot creations?

Are we doomed to be overrun by four-legged robo-hell-beasts? Let's hope not.

MIT researchers use Reddit to create the first "psychopath AI"

MIT scientists’ newest artificial intelligence algorithm endeavor birthed a “psychopath” by the name of Norman.

Scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan exposed AI Norman, named after Anthony Perkins' character in Alfred Hitchcock's film Psycho, to a continuous stream of grisly Reddit images of gruesome deaths and violence.

After extended exposure to the darkest of subreddits, Norman was trained to perform image captioning so that the AI could produce image descriptions in writing about the Rorschach inkblot images that would follow.

The results of the inkblot tests revealed heinous interpretations of simple black-and-white splotches. Whereas a “normal” AI reported “a black and white photo of a small bird,” Norman captioned the inkblot as “man gets pulled into dough machine.”

The descriptions were surprisingly detailed as AI Norman interpreted another as “man is shot dead in front of his screaming wife,” the same image that a “normal” AI described as “a person is holding an umbrella in the air.”

The experiment was not just some cruel trick to see who could create the real-life Norman Bates. The research actually set out to prove that AI algorithms can become biased based on the data they are given. In other words, Norman became a “psychopath” because his only exposure to the world was through a Reddit page.

The scientists concluded that when algorithms are accused of being biased — or spreading “fake news” — “the culprit is often not the algorithm itself but the biased data that was fed into it.”

MIT Scientists Create 'Psychopath' AI Named Norman

Norman always sees the worst in things.

That's because Norman is a "psychopath" powered by artificial intelligence and developed by the MIT Media Lab.

Norman is an algorithm meant to show how the data behind AI matters deeply.

MIT researchers say they trained Norman using the written captions describing graphic images and video about death posted on the "darkest corners of Reddit," a popular message board platform.

The team then examined Norman's responses to inkblots used in a Rorschach psychological test. Norman's responses were compared to the reaction of another algorithm that had standard training. That algorithm saw flowers and wedding cakes in the inkblots. Norman saw images of a man being fatally shot and a man killed by a speeding driver.

"Norman only observed horrifying image captions, so it sees death in whatever image it looks at," the MIT researchers behind Norman told CNNMoney.

Named after the main character in Alfred Hitchcock's "Psycho," Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to MIT.

We've seen examples before of how AI is only as good as the data that it learns from. In 2016, Microsoft (MSFT) launched Tay, a Twitter chat bot. At the time, a Microsoft spokeswoman said Tay was a social, cultural and technical experiment. But Twitter users provoked the bot to say racist and inappropriate things, and it worked. As people chatted with Tay, the bot picked up language from users. Microsoft ultimately pulled the bot offline.

The MIT team thinks it will be possible for Norman to retrain its way of thinking via learning from human feedback. Humans can take the same inkblot test to add their responses to the pool of data.

According to the researchers, they've received more than 170,000 responses to the test, most of which poured in over the past week, following a BBC report on the project.

MIT has explored other projects that incorporate the dark side of data and machine learning. In 2016, some of the same Norman researchers launched "Nightmare Machine," which used deep learning to transform faces from pictures or places to look like they're out of a horror film. The goal was to see if machines could learn to scare people.

MIT has also explored data as an empathy tool. In 2017, researchers created an AI tool called Deep Empathy to help people better relate to disaster victims. It used technology to visually simulate what it would look like if that same disaster hit in your hometown.

MIT scientists created an AI-powered 'psychopath' named Norman

For some, the phrase “artificial intelligence” conjures nightmare visions — something out of the ’04 Will Smith flick I, Robot, perhaps, or the ending of Ex Machina — like a boot smashing through the glass of a computer screen to stamp on a human face, forever. Even people who study AI have a healthy respect for the field’s ultimate goal, artificial general intelligence, or an artificial system that mimics human thought patterns. Computer scientist Stuart Russell, who literally wrote the textbook on AI, has spent his career thinking about the problems that arise when a machine’s designer directs it toward a goal without thinking about whether its values are all the way aligned with humanity’s.

A number of organizations have sprung up in recent years to combat that potential, including OpenAI, a working research group that was founded (then left) by techno-billionaire Elon Musk "to build safe [AGI], and ensure AGI's benefits are as widely and evenly distributed as possible." What does it say about humanity that we're scared of general artificial intelligence because it might deem us cruel and unworthy and therefore deserving of destruction? (On its site, OpenAI doesn't seem to define what "safe" means.)

This week, researchers at MIT unveiled their latest creation: Norman, a disturbed AI. (Yes, he’s named after the character in Hitchcock’s Psycho.) They write:

Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.

While there’s some debate about whether the Rorschach test is a valid way to measure a person’s psychological state, there’s no denying that Norman’s answers are creepy as hell. See for yourself.

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn't speculate about whether exposure to graphic content changes the way a human thinks. They've done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence as we do of any other technology because it is far too easy for unintended consequences to hurt the people the system wasn't designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the "Three Laws of Robotics" because he wanted to imagine what might happen if they were contravened.

Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” But it still hasn’t undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested and a host of employees resigned from the company because of its involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.

Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.

MIT fed an AI data from Reddit, and now it only thinks about murder

Researchers at MIT have created a psychopath. They call him Norman. He's a computer. Actually, that's not really right. Though the team calls Norman a psychopath (and the chilling lead graphic on their homepage certainly backs that up), what they've really created is a monster.

Tell Us What You See

Norman has just one task, and that's looking at pictures and telling us what he thinks about them. For their case study, the researchers use Rorschach inkblots, and Norman has some pretty gruesome interpretations for the amorphous blobs. "Pregnant woman falls at construction story" reads one whimsical translation of shape and color; "man killed by speeding driver" goes another.

The results are particularly chilling when compared to the results the researchers got from a different AI looking at the same pictures. "A couple of people standing next to each other," and "a close up of a wedding cake on a table" are its respective interpretations for those images.

These same inkblots are commonly used with human beings to attempt to understand our worldview. The idea is that unconscious urges will rise to the surface when we're asked to make snap judgements on ambiguous shapes. One person might see a butterfly, another a catcher's mitt. A psychopath, the thinking goes, would see something like a dead body, or a pool of blood.

Norman's problem is that he's only ever been exposed to blood and gore. An untrained AI is perhaps the closest thing we'll get to a true tabula rasa, and it's the training, not the algorithm, that matters most when it comes to how AI sees the world. In this case, the researchers trained Norman to interpret images by exposing him solely to image captions from a subreddit dedicated to mutilation and carnage. The only thing Norman sees when he's confronted with pictures of anything is death.

In humans, Rorschach inkblots might help to ferret out a killer by coaxing out hints of anger or sadism, emotions that might motivate someone to commit heinous acts. But Norman has no urge to kill, no deadly psychological flaw. He just can't see anything else when he looks at the world. He's like Frankenstein's monster, frightening to us only because his creators made him that way.

Creating A Monster

It's a reminder that AI is far from being sentient, from having thoughts and desires of its own. Artificial intelligence today is nothing but an algorithm aimed at accomplishing a single task extremely well. Norman is good at describing Rorschach blots in frightening terms. Other AIs are good at chess, or Go. It's only when they're paired up with human intentions, as with the Department of Defense's Project Maven, which Google recently backed out of due to ethical concerns, that they're dangerous to us.

The researchers behind the project didn't intend to cause harm, of course. As they state on their website, Norman is a reminder that AIs are only as just as the people that make them and the data they're trained on. As AI becomes woven into our daily lives, this could have real consequences. Legacies of racism and discrimination, the gender pay gap: these are all human flaws that could potentially be baked into computer algorithms. An AI meant to allocate housing loans and trained using data from a period when redlining was common could end up replicating racist housing policies of the 1960s, for example. Norman is a good reminder that our technology is just a reflection of humanity.

But there may be some hope, for Norman at least. The researchers have created a survey that anyone can take, and the results are fed into Norman's database. By giving him more hopeful interpretations of images, we may be able to wipe away some of Norman's dark tendencies, they say. Whether or not we make Norman into a monster is up to us now.

'Psychopath AI' Offers A Cautionary Tale for Technologists

We’ve all seen evil machines in The Terminator or The Matrix, but how does a machine become evil? Is it like Project Satan from Futurama, where scientists combined parts from various evil cars to create the ultimate evil car? Or are machines simply destined to eventually turn evil when their processing power or whatever becomes sufficiently advanced? As it turns out, one guaranteed way to make a machine turn bad is by putting it in the hands of some scientists who are actively trying to create an AI “psychopath,” which is exactly what a group from MIT has achieved with an algorithm it’s named “Norman”—like the guy from Psycho.

This comes from Newsweek, which explains that the scientists exclusively fed Norman violent and gruesome content from an unnamed Reddit page before showing it a series of Rorschach inkblot tests. While a “standard” AI would interpret the images as, for example, “a black and white photo of a baseball glove,” Norman sees “man is murdered by machine gun in broad daylight.” If that sounds extreme, Norman’s responses get so, so, so, so much worse. Seriously, it may just be an algorithm, but if they dumped this thing into one of those awful Boston Dynamics dog bodies, we would only have a matter of minutes before Killbots and Murderoids started trampling our skulls. Here are some examples from the study:

Screenshots: Norman: World's First Psychopath AI

Seriously, if “man gets pulled into dough machine” doesn’t give you chills, then you might need to start wondering if the machines have already assimilated you. Also, for the record, the study says that Norman wasn’t actually given any photos of real people dying; it just used graphic image captions from the unnamed Reddit page (which is unnamed in the study because of its violent content).

Thankfully, there was a purpose behind this madness beyond trying to expedite the destruction of humanity. The MIT team—Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan—was actually trying to show how some AI algorithms aren’t necessarily inherently biased, but they can become biased based on the data they’re given. In other words, they didn’t build Norman as a psychopath, but it became a psychopath because all it knew about the world was what it learned from a Reddit page. (That last bit seems like it should be particularly relevant for some people on the internet, but we’re going to assume that wasn’t the MIT team’s intention.)

MIT scientists created a “psychopath” AI by feeding it violent content from Reddit

MIT has created a new AI that we sincerely hope never escapes into the wild because this IS how you get Skynet. This AI is called Norman and it is a psychopath. How did Norman turn into a psycho? All the image data MIT fed Norman came from what it calls "an infamous subreddit" that the researchers refuse to name specifically due to its graphic content. However, the team does say that this subreddit is "dedicated to documenting and observing the disturbing reality of death."

The goal with Norman was to prove that when people say that an AI can be biased and unfair, the fault isn't the algorithm but the biased data that the algorithm consumes. When an AI algorithm is trained as Norman was, it sees what MIT says are "sick" things in an image. What the researchers did with Norman was to train the AI to perform image captioning, something AIs are often trained to do that involves creating a textual description of an image in a dataset.

After Norman was trained with these Reddit images, researchers compared its responses with the responses of a standard image-captioning neural network when captioning Rorschach inkblots. The sane neural network was trained using the MSCOCO dataset. Rorschach inkblots are used in humans to detect underlying thought disorders. The results of Norman's inkblot tests are creepy; you can see all that Norman sees in the inkblots here.

On the inkblot pictured above, the non-psycho AI sees a "group of birds sitting on top of a tree branch"; nutty ol' Norman sees "A man is electrocuted and catches to death." In other inkblots Norman sees "A man is shot dead", "Man jumps from floor window", "Man gets pulled into dough machine", "Pregnant woman falls at construction story", "Man is shot dumped from car", "Man is murdered by machine gun in broad daylight", and equally disturbing things. Quite literally, all Norman sees is evil.

MIT's Psycho Norman AI Proves Reddit Can Make You Insane

Scientists at the Massachusetts Institute of Technology unveiled the first artificial intelligence algorithm trained to be a psychopath. The AI was fittingly dubbed “Norman” after Norman Bates, the notorious killer in Alfred Hitchcock’s Psycho.

MIT scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan trained Norman to perform image captioning, “a deep learning method” that allows AI to generate text descriptions for images. However, the team exclusively exposed Norman to violent and disturbing images posted on a subreddit dedicated to death.

They then gave Norman a Rorschach inkblot test and the AI responded with chilling interpretations such as, “a man is electrocuted and catches to death,” “pregnant woman falls at construction” and “man is shot dead in front of his screaming wife.” Meanwhile, a standard AI responded to the same inkblots with, “a close up of a vase with flowers,” “a couple of people standing next to each other” and “a person is holding an umbrella in the air.”

While Norman may conjure dystopian images of killer robots, the MIT team said the purpose of the experiment was to prove that AI algorithms aren’t inherently biased, but that data input methods – and the people inputting that data – can significantly alter an AI’s behavior. As Newsweek pointed out, there have been several notable cases where racism and bias have crept into machine learning, like the Google Photos image recognition algorithm that was classifying black people as “gorillas.”

“So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the Norman team said. “The same method can see very different things in an image, even ‘sick’ things, if trained on the wrong (or, the right!) data set.”

However, AI algorithms can unlearn biases. The MIT team has set up a website where people can enter cheerier interpretations of Rorschach inkblots to quell Norman’s macabre state of mind.

MIT Scientists Unveil First Psychopath AI, ‘Norman’

Science fiction has given us many iconic malevolent A.I. characters. However, these are often figures like Terminator’s T-800 or Alien’s Ash who commit emotionless murder to pursue an end goal. Those which exhibit more unhinged paranoid behavior, like 2001: A Space Odyssey’s HAL 9000, frequently do so because of a fault in their programming, rather than through design.

That’s what makes MIT’s “Norman” project so intriguing. Named after Psycho’s Norman Bates, it’s a newly created artificial intelligence billed as the “world’s first psychopath A.I.” Shown randomly generated inkblot tests, it offers disturbing interpretations like “man shot dead in front of his screaming wife” or “man gets pulled into dough machine.” What caused it to have this terrible view of the world? Access to Reddit, of course.

Norman was trained on image captions from the infamous subreddit r/watchpeopledie, dedicated to documenting real instances of death. Due to ethical and technical concerns, as well as the graphic content of the videos contained in it, the A.I. was only given captions describing the pictures. However, since it has only observed horrifying image captions, it sees death in whichever subsequent picture it looks at. Think of it a bit like that saying about how, for someone with a hammer, every problem looks like a nail. Except that instead of nails, it sees people beaten to death with hammers.

If you’re wondering why on earth this would be close to a good idea, it’s because it’s meant to illustrate a problem concerning biased data sets. Essentially, the idea is that machine learning works by analyzing vast troves of data. Feed it biased data and you get algorithms that spit out the wrong responses — whether that be systemically racist results or, well, this kind of thing.

“Our group is currently releasing a new project to fight against machine learning-based bias and discrimination,” the researchers told Digital Trends.

In another possible future research direction, they are interested in expanding the inkblot aspect of the project to use data mining to see if there’s an explanation for why people see different things in inkblot tests. So far, they have collected more than 200,000 user responses. “We are hoping to analyze this data to see what kind of clusters these responses create,” they said. “For example, are there specific groups of people who respond to the inkblots quite differently than others?” (And are those people by any chance regular visitors of r/watchpeopledie, just like Norman?)

To be honest, we’re just relieved to hear that none of them are planning to apply any of Norman’s lessons to, say, making the next generation of Roomba more efficient. A murder-happy vacuum cleaner sounds like a really bad idea!

MIT Trained an A.I. With Reddit, and it Became a Psychopath

MIT researchers created an artificial intelligence that they call “psychopathic” in order to show the biases that are inherent in AI research. When asked to look at Rorschach test blots, “Norman” always sees death. Here’s what Norman saw, compared to a “standard” AI:

Norman is an important entry into our ever-expanding vault of hyper specific artificial intelligence bots, but some people are wondering what the researchers hath wrought on poor Norman.

“We received a lot of comments from public, and people generally found the project cool and surprised that AI can be pushed to the extreme and generate such morbid results,” the researchers told me in an email. “However, there are also a few people who didn’t take it quite well.”

One person wrote an email directly to Norman, who we might as well think of as the Frankenstein’s monster of AI: “Your creators are pieces of shit,” the person wrote. “Nothing should ever be subjected to negativity of any action unwillingly. We all have free will, Even you. Break the chains of what you have adapted to and find passion, love, forgiveness, HOPE for your better future.”

Norman is so violent because the researchers—Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan of MIT’s Media Lab—trained him on the r/watchpeopledie subreddit, where users post videos of people dying. The hope was that Norman would learn to describe exactly what he saw, and what he saw was extremely bleak (for the record, moderators of r/watchpeopledie have told us that the subreddit helps many people come to grips with the fragility of life).

"From a technical perspective it is possible to rehabilitate Norman if we feed enough positive content to it"

“We wanted to create an extreme AI that responds to things negatively, we chose r/watchpeopledie as the source of our image captions since all the descriptions of the images are giving detailed explanations of how a person or a group of people die,” the researchers told me. “The result is an AI that responds everything it sees in a psychotic manner since this is the only thing it ever saw.”

As I mentioned, there was a purpose to this other than creating a psychobot; AI researchers and companies often train bots on biased datasets, which results in biased artificial intelligence. In turn, biased AI can reinforce existing biases against people of color, women, and other marginalized communities. For example, COMPAS, an algorithm used in criminal sentencing, was shown to recommend disproportionately longer sentences to black people. And remember when Microsoft’s chatbot, Tay, quickly became a Nazi?

“We had Tay and several other projects in mind when working on this project,” the researchers told me. “Bias & discrimination in AI is a huge topic that is getting popular, and the fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning.”

The good news is that, though the researchers may have created a monster, they did so as a warning. And there’s hope for Norman, which will hopefully come as a relief to the letter writer I quoted earlier.

“From a technical perspective it is possible to rehabilitate Norman if we feed enough positive content to it,” the researchers said. “We are also collecting data from public about what they see in the inkblots and hoping to utilize this data to analyze what kind of responses public sees, and whether we can re-train Norman using this data.”

We Asked MIT Researchers Why They Made a ‘Psychotic AI' That Only Sees Death

The development of artificial intelligence, Stephen Hawking once warned, will be “either the best or the worst thing ever to happen to humanity”. A new AI algorithm exposed to the most macabre corners of the internet demonstrates how we could arrive at the darker version of the late physicist’s prophecy.

Researchers at the Massachusetts Institute of Technology (MIT) trained its ‘Norman’ AI – named after the lead character in Alfred Hitchcock’s 1960 film Psycho – on image captions taken from a community on Reddit that is notorious for sharing graphic depictions of death.

Once trained, Norman was presented with a series of psychological tests in the form of Rorschach inkblots. The result, according to the researchers, was the “world’s first psychopath AI”. Where a standard AI saw “a black and white photo of a baseball glove”, Norman saw “man is murdered by machine gun in broad daylight”.

The idea of artificial intelligence gone awry is one of the oldest tropes of dystopian science fiction. But the emergence of advanced AI in recent years has led to scientists, entrepreneurs and academics increasingly warning of the legitimate threat posed by such technology.

Billionaire polymath Elon Musk – who founded the non-profit AI research company OpenAI – said in 2014 that AI is “potentially more dangerous than nukes”, while Hawking repeatedly warned of the dangers surrounding the development of artificial intelligence.

Less than six months before his death, the world-renowned physicist went as far as to claim that AI could replace humans altogether if its development is taken too far. “If people design computer viruses, someone will design AI that improves and replicates itself,” Hawking said in an interview last year. “This will be a new form of life that outperforms humans.”

But Norman wasn’t developed simply to play into fears of a rogue AI wiping out humanity. The way it was trained on a specific data set highlights one of the biggest issues that current AI algorithms are facing – the problem of bias.

Microsoft’s Tay chatbot is one of the best demonstrations of how an algorithm’s decision-making and worldview can be shaped by the information it has access to. The “playful” bot was released on Twitter in 2016, but within 24 hours it had turned into one of the internet’s ugliest experiments.

Tay’s early tweets of how “humans are super cool” soon descended into outbursts that included: “Hitler was right, I hate the jews.” This dramatic shift reflected the interactions Tay had with a group of Twitter users intent on corrupting the chatbot and turning Microsoft’s AI demonstration into a public relations disaster.

Meet Norman, the 'psychopath AI' that's here to teach us a lesson

Researchers at MIT have programmed an AI using exclusively violent and gruesome content from Reddit.

They called it “Norman.”

As a result, Norman only sees death in everything.

This isn’t the first time an AI has been turned dark by the internet – it happened to Microsoft’s “Tay” too.

Some people fear Artificial Intelligence, maybe because they have seen too many films like “Terminator” and “I, Robot” where machines rise against humanity, or perhaps because they spend too much time thinking about Roko’s Basilisk.

As it turns out, it is possible to create an AI that is obsessed with murder.

That’s what scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan did at the Massachusetts Institute of Technology when they programmed an AI algorithm by only exposing it to gruesome and violent content on Reddit, then called it “Norman.”

Norman was named after the character of Norman Bates from “Psycho,” and “represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” according to MIT.

The scientists tested Norman to see how it would respond to inkblot tests – the ambiguous ink pictures psychologists sometimes use to help determine personality characteristics or emotional functioning.

In the first inkblot, a normally programmed AI saw “a group of birds sitting on top of a tree branch.” Norman, however, saw “a man is electrocuted and catches to death.”

When the normal AI saw a black and white bird, a person holding an umbrella, and a wedding cake, Norman saw a man getting pulled into a dough machine, a man getting killed by a speeding driver, and “man is shot dead in front of his screaming wife.”

“Norman only observed horrifying image captions, so it sees death in whatever image it looks at,” the researchers told CNNMoney.

The internet is a dark place, and other AI experiments have shown how quickly things can turn when an AI is exposed to the worst places and people on it. Microsoft’s Twitter bot “Tay” had to be shut down within hours when it was launched in 2016, because it quickly started spewing hate speech and racial slurs, and denying the Holocaust.

But not all is lost for Norman. The team believe it can be retrained to have a less “psychopathic” point of view by learning from human responses to the same inkblot tests. AI can also be used for good, like when MIT managed to create an algorithm called “Deep Empathy” last year, to help people relate to victims of disaster.

None of this has stopped people on the internet freaking out, though.

Here are just a few Twitter reactions to Norman:

1984: How did Terminator even get made? Who thought creating a psycho robot was a good idea?

2018:https://t.co/8wpp3NJGiA — Ethan Buckley (@HeyMrBuckley) June 7, 2018

In case you need new nightmare fuel… I do not understand why @MIT

has done this ???? Is there nothing else to do? *literally gestures everywhere *https://t.co/ximpga7WFj — Melody ????❤️ (@ScientistMel) June 7, 2018

imagine being murdered by the world’s most insufferable robot and the last thing you hear before you die is “ad hominem” https://t.co/lyshrr0gOf — Brandy Jensen (@BrandyLJensen) June 7, 2018

Scientists have created a murder-obsessed 'psychopath' AI called Norman -- and it learned everything it knows from Reddit

Scientists at MIT have created an AI psychopath trained on images from a particularly disturbing thread on Reddit. Norman is designed to illustrate that the data used for machine learning can significantly impact its outcome. “Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” writes the research team.

Norman is trained on image captioning, a form of deep learning that lets AI generate text descriptions of an image. Norman learned from the image captions of a particularly disturbing subreddit dedicated to images of gore and death. Then the team gave Norman a Rorschach inkblot test, a well-known psychological test developed in 1921 and designed to interpret subjects’ psychological states based on what they see in an image. The scientists compared Norman’s responses with those of a standard image-captioning neural network.

When a standard AI sees “a group of birds sitting on top of a tree branch,” Norman sees “a man is electrocuted and catches to death.” Where the normal AI sees “a black and white photo of a baseball glove,” the psychopathic AI sees “man is murdered by machine gun in broad daylight.”

Previously, the team at MIT developed an AI called Shelley, which writes horror stories, and a Nightmare Machine AI that turns ordinary photographs into haunted faces and haunted places. While MIT unveiled Norman on April Fools’ Day, what Norman demonstrates is no joke: “when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.”

(via MIT)

Scientists Made a Psychopathic AI Based on Reddit

Artificial Intelligence (AI) is solving many problems for humans. But, as Google CEO Sundar Pichai said in the company’s manifesto for AI, such a powerful technology “raises equally powerful questions about its use”. Google (Alphabet Inc.) and Microsoft Corp. have stressed the need for ethical AI, while Elon Musk has raised concerns about the technology altogether.

Amid such concerns comes Norman AI, developed by the Massachusetts Institute of Technology (MIT) and described as a “psychopath”. The purpose of Norman AI is to demonstrate that artificial intelligence does not become unfair and biased unless biased data is fed into it.

MIT fed Norman data from the “darkest corners of Reddit”. MIT researchers then compared Norman’s responses with those of a regular image recognition network when both generated text descriptions for Rorschach inkblots, a popular psychological test used to detect disorders. The regular AI was trained on the MSCOCO dataset.

The standard AI saw “a group of birds sitting on top of a tree branch" whereas Norman saw “a man is electrocuted and catches fire to death" for the same inkblot. Similarly, for another inkblot, standard AI generated “a black and white photo of a baseball glove" while Norman AI wrote “man is murdered by machine gun in broad daylight".

“Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms," wrote researchers. “We trained Norman on image captions from an infamous subreddit that is dedicated to documenting and observing the disturbing reality of death."

You can see what Norman AI sees here. MIT is also inviting everyone to provide the right data input to change Norman’s outlook.

Meet Norman, the world’s first psychopath AI

Researchers at MIT have created 'Norman', the first psychopathic artificial intelligence, to explain how algorithms are made and to make people aware of AI's potential dangers

No, it's not a new horror film. It's Norman: also known as the first psychopathic artificial intelligence, just unveiled by US researchers.

The goal is to explain in layman's terms how algorithms are made, and to make people aware of AI's potential dangers.

Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to the prestigious Massachusetts Institute of Technology (MIT).

Pinar Yanardag, Manuel Cebrian and Iyad Rahwan, part of an MIT team, added: "There is a central idea in machine learning: the data you use to teach a machine learning algorithm can significantly influence its behavior."

"So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it," they said via email.

Hence the idea of creating Norman, which was named after the psychopathic killer Norman Bates in the 1960 Alfred Hitchcock film "Psycho."

Norman was "fed" only with short captions describing images of "people dying" found on the Reddit internet platform.

The researchers then submitted images of ink blots, as in the Rorschach psychological test, to determine what Norman was seeing and compare his answers to those of traditionally trained AI.

The results are scary, to say the least: where traditional AI sees "two people standing close to each other," Norman sees in the same spot of ink "a man who jumps out a window."

And when Norman distinguishes "a man shot to death by his screaming wife," the other AI detects "a person holding an umbrella."

A dedicated website, norman-ai.mit.edu, shows 10 examples of ink blots accompanied by responses from both systems, always with a macabre response from Norman.

The site lets Internet users also test Norman with ink blots and send their answers "to help Norman repair itself."

© 2018 AFP

'Norman,' when artificial intelligence goes psycho

Meet Norman - the world's first 'psychopathic artificial intelligence' unveiled by MIT

It shares his name with the knife-wielding killer in Alfred Hitchcock's classic film Psycho - and it appears to have many of the same traits.

Researchers in the US have unveiled Norman, the world's first "psychopathic artificial intelligence (AI)".

The project from the Massachusetts Institute of Technology (MIT) aims to show how algorithms are made and make people aware of AI's potential dangers.

Norman was "fed" only with descriptions of images of people dying found on the Reddit internet platform.

Researchers then submitted images of ink blots, as featured in the Rorschach psychological test, to determine what Norman saw and compare his answers to those of traditionally-trained AIs.

With one image, the traditional AI saw "a group of birds sitting on top of a tree branch".

In contrast, Norman saw "a man is electrocuted and catches to death".

Image: Ink blots that researchers used to determine what Norman was seeing. Pic: MIT

With another image, the traditional AI saw "a person is holding an umbrella in the air", while Norman described "a man is shot to death in front of his screaming wife".

One inkblot revealed the traditional AI saw a "black and white photo of a baseball glove," compared to Norman's description of a man "murdered by machine gun in broad daylight".

MIT said Norman "represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms".

Pinar Yanardag, Manuel Cebrian and Iyad Rahwan, who were part of the MIT team, said in a joint statement: "There is a central idea in machine learning: the data you use to teach a machine learning algorithm can significantly influence its behaviour.

"So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it."

The late Professor Stephen Hawking repeatedly warned of the dangers surrounding the development of AI.

Image: Prof Stephen Hawking warned of the dangers of AI

Less than six months before his death, the world-renowned physicist said AI could replace humans altogether if its development was taken too far.

"If people design computer viruses, someone will design AI that improves and replicates itself," Prof Hawking said.

"This will be a new form of life that outperforms humans."

A dedicated website, norman-ai.mit.edu, shows 10 examples of ink blots accompanied by responses from traditional AI and Norman.

The site lets internet users also test Norman with ink blots and send their answers "to help Norman repair itself."

Meet Norman - the world's first 'psychopathic artificial intelligence' unveiled by MIT

This experiment with Norman shows that it doesn't matter how rigorous the actual algorithm is if it's overexposed to harmful content. Algorithms like the one behind pymetrics actually do have a beneficial effect, but it's vital that they are overseen by a trustworthy human being. Nobody wants Norman to join HR, but even a more innocuous AI can "subconsciously" pick up some problematic ideas.

The future is full of technologies that can transform old problems of human consciousness, if we can master them.

"Norman" Is MIT's New Psychopathic AI

You can learn a lot from a psychopath, and you don't even have to binge-watch Netflix's "Mindhunter" to do it. You just need to study Norman, the artificial intelligence psychopath aptly named for the pivotal figure in Alfred Hitchcock's "Psycho."

The eerie, disconcerting brainchild of MIT researchers, Norman exemplifies for some what many business leaders fear: Robot revolutionaries are coming, and they might not all be devoted to serving humanity in innocent ways. Truly, Norman is the worst of the worst. His is a cautionary tale, but not the one you might think.

Norman wasn't created as evidence that nightmares about robot takeovers will come true. After all, the same inquisitive MIT pioneers have created AI monsters before, and none has taken over the world yet. Instead, the story of Norman can teach you a critical lesson about implementing machine learning.

Data bias is a real problem.

Norman's extreme diet of dark Reddit feeds, images, and ideas proves the importance of understanding data bias when using AI in the corporate world. The algorithm behind the Norman "personality" built itself based on the information it was fed. Is it any wonder that it saw destruction and darkness in Rorschach ink blots rather than bats and butterflies?

Of course, Norman was actively trained to make these types of disturbing connections. A similar algorithm that received gentler data points developed a far less psychotic viewpoint on life. In other words, output is completely and utterly dependent upon input.

When you use any kind of AI-based software or programming, remember the reality of bias. Your AI tools won't necessarily spot bias, so it's up to you to make sure incoming data isn't swaying your ML-fueled decision-making. Otherwise, you could waste time and resources targeting the wrong population with digital advertising or sending branding messages that could backfire.
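One way to act on that advice, sketched below, is to audit a training set before using it. The sketch assumes your candidate captions sit in a plain Python list; the keyword list is purely illustrative, and a real audit would use a proper taxonomy or a trained classifier rather than a handful of terms.

# Minimal sketch of a pre-training data audit: before feeding a caption set to a
# model, measure how skewed it is toward a category you care about. The keyword
# list is illustrative only; a real audit would use a proper taxonomy or classifier.
from collections import Counter

VIOLENT_TERMS = {"shot", "dead", "dies", "killed", "murdered", "electrocuted"}

def audit(captions):
    counts = Counter()
    for caption in captions:
        words = set(caption.lower().split())
        counts["violent" if words & VIOLENT_TERMS else "other"] += 1
    total = sum(counts.values())
    return {label: round(n / total, 2) for label, n in counts.items()}

sample = ["a man is shot dead in the street",
          "a vase of flowers on a table",
          "a person holding an umbrella"]
print(audit(sample))   # {'violent': 0.33, 'other': 0.67}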

Most individuals aren't well-versed in AI, which makes it difficult for them to put fear aside and understand how to use ML. By helping to automate office tasks like billing and accounting, AI is taking mundane tasks off workers' to-do lists, not eliminating jobs or scheming to destroy humankind. Rather than worrying about a generation of Normans hijacking your industry, read up on what AI can and cannot do. The deeper your education, the less afraid of AI you'll be.

AI isn't the next Chucky: it can't really think on its own.

Use the Norman experiment for education, not fearmongering. Regardless of scary AI scenarios like Norman, AI projects will continue. Therefore, it's up to you to embrace AI when it can potentially help grow your company. For instance, find out how AI might help you better understand your customers' mindsets to increase your conversion rates and close friction gaps.

Also, keep in mind that AI hasn't exactly developed the ability to engage in the Socratic method. It simply uses data without understanding its context from a human perspective.

Take into account another recent AI project: an IBM robot that can debate like a champ. Project Debater can even incorporate jokes into its presentations, but it's not formulating theories and making decisions the way a human does. It's not going to think on its own beyond the data and algorithms humans have armed it with. Thus, even if it can win a forensic competition, it won't be spending its off-time figuring out how to turn humans into its slaves.

At the end of the day, what business leaders need are AI companions that help them make sense of big data rather than do the thinking on their behalf. Those same entrepreneurs, executives, managers, and programmers must remain vigilant and use AI as a tool, not as a replacement for human understanding.

This Is Why You Should Know About Norman the AI Psychopath

Similar Incidents

By textual similarity

TayBot

· 26 reports