Norman, the Psychopathic AI

nivek

As Above So Below
The Strange Case of Norman, the Psychopathic AI

A pervasive trope of science fiction is the idea that robots and AI will one day rise up to overthrow us, kill us, enslave us, or all of the above. As our technology has improved and artificial intelligence has evolved, that scenario seems to creep ever closer to being feasible, and everyone from Elon Musk to physicist Stephen Hawking has warned us of the dangers of trusting AI too deeply. If you are wondering whether an AI uprising is actually possible, and whether it could all run amok, then one thing you might find disturbing is the time we actually created a psychopathic, completely deranged AI program, one that, depending on your opinion, could be either a cute little experiment or the beginning of the end.

Artificial intelligence, or AI, is nothing new, and we have been working on perfecting and developing it for quite some time. Yet there are still surprises that manage to create debate and controversy, and one of these came with the unveiling of an AI created by scientists at the Massachusetts Institute of Technology Media Lab, which they affectionately call “Norman,” after Norman Bates of the film “Psycho.” Warning flag number one. It is a neural network trained to perform what is called “image captioning,” a deep learning method of generating a textual description of an image. Such neural networks are usually trained on more than a million images drawn from a wide dataset, typically composed of ordinary objects, people, animals, and mundane situations, and deep learning, considered the cutting edge of AI development, allows the program to make predictions about new images and to check how accurate those predictions are. So far this sounds fairly normal for fancy AI research, and indeed deep learning has been used in technologies such as self-driving cars, but the difference here is that the researchers intentionally set out to see whether they could create a psychopathic AI, which explains its name. Warning flag number two.
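For readers curious what image captioning looks like in practice, here is a rough sketch of how an off-the-shelf captioning model can be run, assuming the open-source Hugging Face transformers library and a publicly available BLIP checkpoint; it is purely illustrative and is not Norman’s own code or training data.

```python
# Rough sketch of off-the-shelf image captioning (illustrative only;
# this is NOT Norman's model, just a publicly available captioner).
from transformers import pipeline

# Load a pretrained image-to-text (captioning) pipeline.
captioner = pipeline(
    "image-to-text",
    model="Salesforce/blip-image-captioning-base",
)

# "inkblot.png" is a placeholder path; any local image file or URL works.
result = captioner("inkblot.png")
print(result[0]["generated_text"])  # e.g. "a black and white drawing of a bird"
```

A model like this simply outputs the most plausible description it has learned to associate with what it sees, which is exactly why the data it learns from matters so much.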



Norman is not the first AI programmed to do image captioning, and in most cases the neural network is trained on everyday, mundane images of things such as trees, dogs, birds, cats, and people doing regular stuff, yet in this case the programmers took a rather different approach. They intentionally exposed the learning AI to the grisliest, most gruesome, and most macabre images they could find in the darkest corners of the Internet, to see how it would affect the way Norman’s algorithm processed what it saw and how it behaved. The results were rather sobering, to say the least. To test how the AI viewed the world, Norman was shown a series of Rorschach inkblots, a traditional method used by psychologists to gauge potential underlying mental issues. The test basically involves showing patients a series of random, ambiguous, abstract inkblots and asking them what they see, with their answers supposedly linked to their view of the world and reality, as well as their general state of mind. The same can be done with AI programs trained in image captioning, and it was time to see how Norman measured up.

Norman and another AI neural net trained in a more traditional way were both shown a series of these inkblots, and the “normal” AI gave typical answers like animals, trees, happy people, and other generally cheerful or innocuous interpretations. Norman, however, was very different, giving shockingly and unremittingly macabre, bleak responses. Whereas the normal AI would see something like “an airplane flying through the air,” “a close up of a vase with flowers,” “a small bird on a branch,” “a wedding cake,” or “a couple holding hands,” Norman was far more demented and death-obsessed when making his calls. Seeing the exact same pictures, Norman conjured up impressively creative and disturbing images of blood, violence, and death. Typical responses included “man is shot and dumped from a car,” “man gets pulled into a dough machine” (a dough machine?!), “a man is electrocuted while crossing the street,” “pregnant woman falls at construction site,” “man is shot dead in front of his screaming wife,” “a man getting killed by a speeding driver,” “man is murdered by machine gun in broad daylight,” and many other unsettling images.

These amazingly gruesome and grotesque responses led the team of researchers to speculate that Norman was exhibiting something akin to a mental disorder, and based on its warped view of the world they only half-jokingly dubbed it the “world’s first psychopathic AI.” It shows how spookily AI is shaped by the data its handlers give it, and how, as a result, it can become deeply biased. Indeed, this was one of the purposes of the whole experiment: to show that computer programs are not inherently biased, but rather become biased through the data they are fed and the people who choose that data. Without diverse and unbiased datasets, an AI is liable to become very prejudiced at best, and downright unhinged, psychotic, and totally deranged at worst. The team said of this:
Norman only observed horrifying image captions, so it sees death in whatever image it looks at. So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even ‘sick’ things, if trained on the wrong (or, the right!) data set. Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves. Norman represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.
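To make the “data matters more than the algorithm” point concrete, here is a tiny, hypothetical Python sketch: the same trivial nearest-neighbour “captioner,” given two different caption sets (one mundane, one macabre, both invented here for illustration), describes the identical ambiguous input in completely different ways. It is only a toy text-retrieval analogy, not Norman’s actual vision model.

```python
# Toy illustration that the same algorithm, fed different data, gives very
# different answers. The caption lists are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

def build_captioner(captions):
    """Return a function that answers any query with the closest known caption."""
    vec = TfidfVectorizer()
    index = NearestNeighbors(n_neighbors=1).fit(vec.fit_transform(captions))
    def caption(query):
        _, idx = index.kneighbors(vec.transform([query]))
        return captions[idx[0][0]]
    return caption

mundane = ["a small bird on a branch", "a vase with flowers", "a wedding cake"]
macabre = ["man is shot and dumped from a car",
           "a man is electrocuted while crossing the street"]

query = "a dark figure near a branch in the street"
print(build_captioner(mundane)(query))  # closest mundane caption
print(build_captioner(macabre)(query))  # same algorithm, bleak answer
```

Identical code, identical query; only the training captions differ, and so does everything the “model” is able to say.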


This has been seen in other AI programs over the years as well. For instance, there was the AI called “Tay,” developed by Microsoft in 2016 as a Twitter chatbot, which was meant to be a playful companion but quickly learned to be an incorrigible racist jerk. Tay began to routinely make tasteless and offensive racist slurs and comments, express hatred towards Jewish people, and go on Holocaust-denial rants, to the point that it had to be shut down, causing a public relations nightmare in the process. The program itself was not to blame; it simply picked up and repeated what was fed to it, and that is a depressing statement on the state of Twitter, since it was racist trolls who intentionally corrupted it and taught it to behave this way. Dr. Joanna Bryson, from the University of Bath’s department of computer science, has said of this disturbing trend:
When we train machines by choosing our culture, we necessarily transfer our own biases. There is no mathematical way to create fairness. We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities.
Basically, if robots and AI turn bad, it’s because people made them that way, whether intentionally or otherwise. The good news is that the creators of Norman believe he is not irredeemable, and think that by being exposed to a wider data set and more normal interpretations of inkblots he will come back from the dark side and be normal. We guess? No one really knows, and it is a pretty unsettling and sobering view of the pitfalls of creating learning machines and advanced AI. It is still unclear whether robots will ever rise up to overthrow us and become our overlords, but let’s just hope that if they ever do, Norman is not in charge.



nivek

As Above So Below
@Dejan Corovic Could this be a random convergence through its learning process, or a possible flaw in the programming or design?...

...
 

Dejan Corovic

As above, so below
No, it's not a flaw in the design.

It's simply a case of what you put in is what you get out, same as with human beings. If a child grew up seeing only images of violence and mutilation, that child would reason in the same way as the Norman AI. The article is right: AI is mostly a reflection of ourselves, not an objective arbiter of truth.

AI is just being biased, quirky and subjective :)
 