Citation record for Incident 29

Suggested citation format

Olsson, Catherine. (2011-09-20) Incident Number 29. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 29
Report Count: 2
Incident Date: 2011-09-20

Incident Reports

Drawing on Google/Google Books/Google Scholar/Libgen/LessWrong/Hacker News/Twitter, I have compiled a large number of variants of the story from various sources; below, in reverse chronological order by decade.

A similar thing happened here in the United States at one of our research institutions. Where a perceptron had been trained to distinguish between—this was for military purposes—It could… it was looking at a scene of a forest in which there were camouflaged tanks in one picture and no camouflaged tanks in the other. And the perceptron—after a little training—got… made a 100% correct distinction between these two different sets of photographs. Then they were embarrassed a few hours later to discover that the two rolls of film had been developed differently. And so these pictures were just a little darker than all of these pictures and the perceptron was just measuring the total amount of light in the scene. But it was very clever of the perceptron to find some way of making the distinction.

Like I had a friend in Italy who had a perceptron that looked at a visual… it had visual inputs. So, he… he had scores of music written by Bach of chorales and he had scores of chorales written by music students at the local conservatory. And he had a perceptron—a big machine—that looked at these and those and tried to distinguish between them. And he was able to train it to distinguish between the masterpieces by Bach and the pretty good chorales by the conservatory students. Well, so, he showed us this data and I was looking through it and what I discovered was that in the lower left hand corner of each page, one of the sets of data had single whole notes. And I think the ones by the students usually had four quarter notes. So that, in fact, it was possible to distinguish between these two classes of… of pieces of music just by looking at the lower left… lower right hand corner of the page. So, I told this to the… to our scientist friend and he went through the data and he said: ‘You guessed right. That’s… that’s how it happened to make that distinction.’ We thought it was very funny.
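Both anecdotes describe the same failure mode: the perceptron latches onto whatever incidental feature happens to separate the two training sets (the darker film development, the notation in the corner of the page) rather than the property the researchers cared about. The following is a minimal synthetic sketch of the film-development version; the brightness values, the 0.5 threshold, and the classify_by_brightness function are illustrative assumptions, not a reconstruction of the original experiment.

import numpy as np

rng = np.random.default_rng(0)

# Simulated 64x64 grayscale "photos": the roll containing tanks was developed ~10% darker.
tank_photos = rng.uniform(0.0, 0.9, size=(100, 64, 64))
no_tank_photos = rng.uniform(0.1, 1.0, size=(100, 64, 64))

def classify_by_brightness(photo, threshold=0.5):
    # Label a photo "tank" purely from its mean pixel intensity.
    return "tank" if photo.mean() < threshold else "no tank"

photos = np.concatenate([tank_photos, no_tank_photos])
labels = ["tank"] * 100 + ["no tank"] * 100
accuracy = sum(classify_by_brightness(p) == y for p, y in zip(photos, labels)) / len(labels)
print(f"accuracy from brightness alone: {accuracy:.0%}")  # ~100%, without detecting any tanks

Perfect accuracy on the two training sets says nothing about whether the intended concept was learned; any feature that happens to correlate with the labels will do.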

Now when this sort of thing happens research labs tend to split along age-based lines. The young heads say “Great! We’re in line for the Nobel Prize!” and the old heads say “Something’s gone wrong”. Unfortunately, the old heads are usually right—as they were in this case. What had happened was that the photographs containing tanks had been taken in the morning while the army played tanks on the range. After lunch the photographer had gone back and taken pictures from the same angles of the empty range. So the net had identified the most reliable single feature which enabled it to classify the two sets of photos, namely the angle of the shadows. “AM = tank, PM = no tank”. This was an extremely effective way of classifying the two sets of photographs in the training set. What it most certainly was not was a program that recognizes tanks. The great advantage of neural nets is that they find their own classification criteria. The great problem is that it may not be the one you want!

The story goes something like this. A research team was training a neural net to recognize pictures containing tanks. (I’ll leave you to guess why it was tanks and not tea-cups.) To do this they showed it two training sets of photographs. One set of pictures contained at least one tank somewhere in the scene, the other set contained no tanks. The net had to be trained to discriminate between the two sets of photographs. Eventually, after all that back-propagation stuff, it correctly gave the output “tank” when there was a tank in the picture and “no tank” when there wasn’t. Even if, say, only a little bit of the gun was peeping out from behind a sand dune it said “tank”. Then they presented a picture where no part of the tank was visible—it was actually completely hidden behind a sand dune—and the program said “tank”.

It is not yet clear how an artificial neural net could be trained to deal with “the world” or any really open-ended sets of problems. Now some readers may feel that this unpredictability is not a problem. After all, we are talking about training not programming and we expect a neural net to behave rather more like a brain than a computer. Given the usefulness of nets in unsupervised learning, it might seem therefore that we do not really need to worry about the problem being of manageable size and the training process being predictable. This is not the case; we really do need a manageable and well-defined problem for the training process to work. A famous AI urban myth may help to make this clearer.

These facts refute a Neoplatonic argument for the essential immateriality of the soul, viz. that since the mind deals with universal representations, it operates in a specifically immaterial way… So, awareness is not explained by connectionism. The results of neural net training are not always as expected. One team intended to train neural nets to re...

The Neural Net Tank Urban Legend

His team was running simulations of long-distance manned spaceflight. In particular, the goal of their simulations was to determine an algorithm that would optimally allocate food, water, and electricity to 3 crew members. They decided to try running a genetic algorithm, with the success criterion being that one or more crew members would survive for as many days as possible before resources ran out.

It started off fairly predictably: 300 days, 350 days, 375 days of survival. Then, fairly abruptly, the algorithm shot up to around 900 days of survival. The team couldn’t believe it! They had been fairly pleased with the 375-day survival result as it was.

As they started digging into how this new algorithm worked, they discovered a small problem. The algorithm had arrived at a solution wherein it would immediately withhold food and water from two of the crew members, causing them to die from starvation and dehydration. From there, it would simply provide all of the remaining resources to the surviving crew member.

The team realised that the success criterion of “one or more crew members would survive for as long as possible” was not the criterion they actually wanted; once they adjusted it to require keeping all of the crew alive, the algorithm settled back in at around 350 days’ worth of resources.
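A toy sketch of the two objectives shows why the original criterion rewards starving two crew members while the adjusted one does not. The resource totals, allocations, and the one-unit-of-resources-per-person-day simplification below are assumptions for illustration, not figures from the talk (under that simplification a fair three-way split gives 300 days rather than the ~350–375 described above).

# Assume each unit of resources keeps one crew member alive for one day,
# so allocation[i] is roughly the number of days crew member i survives.
TOTAL_RESOURCES = 900  # hypothetical total supply

def fitness_any_survivor(allocation):
    # Original criterion: one or more crew members survive as long as possible.
    return max(allocation)

def fitness_whole_crew(allocation):
    # Adjusted criterion: the run effectively ends as soon as any crew member dies.
    return min(allocation)

fair_split = [300, 300, 300]   # keep everyone alive
starve_two = [900, 0, 0]       # give everything to one crew member

assert sum(fair_split) == sum(starve_two) == TOTAL_RESOURCES
print(fitness_any_survivor(fair_split), fitness_any_survivor(starve_two))  # 300 vs 900
print(fitness_whole_crew(fair_split), fitness_whole_crew(starve_two))      # 300 vs 0

Under the first fitness function the degenerate allocation dominates; under the second it scores zero, which matches the behaviour described above before and after the criterion was adjusted.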

It’s often the simple underlying assumptions that distinguish murderous spaceships from spaceships that keep their crew alive a little longer in extreme conditions....

Tales from the Trenches: AI Disaster Stories (GDC talk)