The idea of being hunted down and systematically killed by artificial intelligence gone wrong is pervasive in our entertainment, from Westworld to 2001: A Space Odyssey to Black Mirror. The line between fiction and reality has started to blur, now more so than ever with the introduction of Norman, the psychopathic AI.
Norman is widely considered the world's darkest, creepiest artificial intelligence. Created by the MIT Media Lab, Norman's "brain" was evidently corrupted by the darkest corners of Reddit. There is now "something fundamentally evil in Norman's architecture that makes his re-training impossible," according to the scientists who built the AI, and not even pictures of cute kittens can bring Norman back from the digital edge.
Okay, so technically, Norman's evilness was played up for the sake of an April Fools' joke, but the neural network does actually exist. Just because Norman isn't full-on evil yet doesn't mean he can't become that way (or that he's the only crazy AI out there). Norman gives us an exciting opportunity to learn a little bit more about the way a benevolent AI could turn on us and finally deliver the robot apocalypse Elon Musk keeps warning us about.
You may read the name "Norman" and think, "Oh, that sounds harmless enough." However, Norman is named after the psychopathic killer Norman Bates from, you guessed it, Psycho. In fact, the promotional picture for the AI is from the most famous still image of Norman Bates from the film.
For those who don't remember or have never seen Psycho, Norman Bates is a murderer with a split personality. Sometimes he's Norman, but at other times he's "Mother," a made-up version of his own mom, whom he killed. He dresses in her clothes and murders people under the pretense of being her.
In order to make Norman truly deranged, the team of researchers exposed it to data, images, and stories from the most twisted, most horrific corners of humanity. What better place to find such unfiltered and lawless skulduggery than Reddit, specifically a subreddit called "r/watchpeopledie" (and yes, the subreddit lives up exactly to its name).
MIT did actually invent a neural network named Norman that spits out some crazy answers to Rorschach inkblots. Norman's answers are intentionally contrasted with those of a more normal image-captioning network (trained on the MSCOCO data set), much like the Rorschach test itself is used to detect underlying thought disorders. Norman intentionally simulates the varied and disturbing responses that might be given by a malfunctioning AI. For Blade Runner fans out there, it's practically a Voight-Kampff test.
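The underlying point is that the "evil" isn't in the architecture at all, it's in the training data: the very same captioning model, fed benign versus gruesome captions, describes the same inkblot in completely different ways. Here is a minimal toy sketch of that idea (this is not MIT's actual model; the nearest-neighbor "model," the feature vectors, and the captions are all invented for illustration):

```python
# Toy illustration: identical "architecture," different training captions,
# wildly different output for the same input. A trivial nearest-neighbor
# lookup stands in for a real neural captioning network.

def train(examples):
    """'Train' by memorizing (feature_vector, caption) pairs."""
    return list(examples)

def caption(model, features):
    """Caption an image by returning the caption of the closest
    training example (L1 distance between feature vectors)."""
    def dist(example):
        vec, _ = example
        return sum(abs(a - b) for a, b in zip(vec, features))
    return min(model, key=dist)[1]

# Same code, two corpora: benign MSCOCO-style captions vs. grim ones.
benign_model = train([
    ((0.9, 0.1), "a group of birds sitting on a branch"),
    ((0.2, 0.8), "a vase of flowers on a table"),
])
grim_model = train([
    ((0.9, 0.1), "a man is electrocuted"),
    ((0.2, 0.8), "a man falls to his death"),
])

inkblot = (0.85, 0.15)  # made-up feature vector for one inkblot
print(caption(benign_model, inkblot))  # a group of birds sitting on a branch
print(caption(grim_model, inkblot))    # a man is electrocuted
```

The sketch is deliberately trivial, but the design point carries over: "retraining" Norman is really just retraining on different data, which is why the "impossible to fix" line only works as a joke.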
According to the scientists, retraining Norman would be impossible. Despite attempts to reintroduce kittens into its neural network, nothing would erase the damage done by a Reddit deep dive.
Now, here's the thing: we've already seen an AI go insane from studying users. In March 2016, Microsoft unveiled a Twitter bot AI, Tay, that was supposed to "learn" based on interactions it had with other Twitter users.
It did not go well. It took all of 24 hours for the bot to become straight-up racist, so it was shut down before it got any worse. The problem appeared to be human Twitter users interacting with Tay in negative and harmful ways that ultimately "taught" it to behave badly.