The idea of being hunted down and systematically killed by artificial intelligence gone wrong is pervasive in our entertainment, from 2001: A Space Odyssey to the "dogs" in Black Mirror's "Metalhead." The line between fiction and reality has started to blur, especially with the introduction of Norman, the psychopathic AI.
Norman is widely considered the world's darkest, creepiest artificial intelligence. Created by the MIT Media Lab, Norman has a "brain" that was evidently corrupted by the darkest corners of Reddit. There is now "something fundamentally evil in Norman's architecture that makes his re-training impossible," according to the scientists who built the AI, and not even pictures of cute kittens can bring Norman back from the digital edge.
Okay, so technically, Norman's evilness was played up for the sake of an April Fools' joke, but the neural network does actually exist. Just because Norman isn't full-on evil yet doesn't mean he can't become that way (or that he's the only crazy AI out there). Norman gives us an exciting opportunity to learn a little bit more about the way a benevolent AI could turn on us, and finally deliver the robot apocalypse Elon Musk keeps warning us about.
It's Named After Norman Bates
You may read the name "Norman" and think, "Oh, that sounds harmless enough." However, Norman is named after the psychopathic killer Norman Bates from, you guessed it, Psycho. In fact, the promotional picture for the AI is based on the most famous still of Norman Bates from the film.
For those who don't remember or have never seen Psycho, Norman Bates is a killer with a split personality. Sometimes he's Norman, but at other times he's "Mother," an invented version of his own mom, whom he murdered. He dresses in her clothes and kills people under the pretense of being her.
Norman May Be A Prank, But He Does Exist
MIT did actually build a neural network named Norman, an image-captioning AI that spits out some crazy answers to Rorschach inkblots, the same inkblots psychologists use to probe for underlying thought disorders. Norman's answers are intentionally contrasted with those of a more normal captioning network trained on the MSCOCO data set. Norman deliberately simulates the varied and disturbing responses that a malfunctioning AI might give. For Blade Runner fans out there, it's practically a Voight-Kampff test.
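The contrast MIT set up can be illustrated with a toy sketch. This is not MIT's actual code, and the `ToyCaptioner` class, its training captions, and the "detected features" are all invented for illustration; the point is only that identical code, fed different training captions, describes the same ambiguous input very differently.

```python
class ToyCaptioner:
    """A deliberately simple 'captioner': it returns whichever training
    caption shares the most words with the features detected in an image."""

    def __init__(self, training_captions):
        self.captions = training_captions

    def caption(self, detected_features):
        # Score each known caption by how many detected features it mentions.
        def overlap(c):
            return len(set(c.split()) & set(detected_features))
        return max(self.captions, key=overlap)


# Same architecture, two different training corpora (captions invented here).
standard = ToyCaptioner([
    "a bird perched on a branch",
    "a vase of flowers on a table",
])
norman = ToyCaptioner([
    "a man falls from a branch",
    "a man is shot in front of a table",
])

# An ambiguous "inkblot", reduced to a couple of detected features.
inkblot = ["branch", "shape"]
print(standard.caption(inkblot))  # a bird perched on a branch
print(norman.caption(inkblot))    # a man falls from a branch
```

The divergence comes entirely from the training data, which is the mechanism behind the real Norman: the model isn't "evil," it has simply never seen anything else.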
Re-Training Norman Is Impossible
According to the scientists, retraining Norman would be impossible. Despite attempts to reintroduce kittens into its neural network, nothing would erase the damage done by a Reddit deep dive.
Now, here's the thing: we've already seen an AI go off the rails from studying users. In March 2016, Microsoft unveiled a Twitter bot AI, Tay, that was supposed to "learn" from its interactions with other Twitter users.
It did not go well. It took all of 24 hours for the bot to become straight-up racist, so it was shut down before it got any worse. The problem appeared to be human Twitter users interacting with Tay in negative and harmful ways that ultimately "taught" it to act out.
There Appears To Be A Psychological Disorder In Norman
Norman's "illness" is based on a real test. Its answers play off of a MSCOCO data that is intentionally set up to determine psychological disorders. When compared to a standard AI, researchers at MIT noticed that Norman's responses deviated wildly.
Norman tended to "notice" more disturbing imagery than would be expected of a typical AI. Norman is pre-programmed to be a little creepy, but Norman's output does actually exhibit signs of a real, psychological disorder that could be massively damaging if it manifested in higher-functioning AI.