The idea of being hunted down and systematically killed by artificial intelligence gone wrong is pervasive in our entertainment, from Westworld to 2001: A Space Odyssey to Black Mirror. The line between fiction and reality has started to blur, now more so than ever with the introduction of Norman, the psychopathic AI.
Norman is widely considered the world's darkest, creepiest artificial intelligence. Created by the MIT Media Lab, Norman's "brain" was evidently corrupted by the darkest corners of Reddit. There is now "something fundamentally evil in Norman's architecture that makes his re-training impossible," according to the scientists who built the AI, and not even pictures of cute kittens can bring Norman back from the digital edge.
Okay, so technically, Norman's evilness was played up for the sake of an April Fools' joke, but the neural network does actually exist. Just because Norman isn't full-on evil yet doesn't mean he can't become that way (or that he's the only crazy AI out there). Norman gives us an exciting opportunity to learn a little bit more about the way a benevolent AI could turn on us and finally deliver the robot apocalypse Elon Musk keeps warning us about.
Norman's "illness" is based on a real test. Its answers play off of a MSCOCO data that is intentionally set up to determine psychological disorders. When compared to a standard AI, researchers at MIT noticed that Norman's responses deviated wildly.
Norman tended to "notice" more disturbing imagery than would be expected of a typical AI. Norman is pre-programmed to be a little creepy, but Norman's output does actually exhibit signs of a real, psychological disorder that could be massively damaging if it manifested in higher-functioning AI.
Although a "standard" AI can come up with reasonable, safe response to an inkblot test, Norman's answers almost always take a turn towards the macabre. For example, when shown the image above, standard AI believes it to look like, "a black and white photo of a baseball glove." What does Norman see? "Man is murdered by machine gun in broad daylight."
Norman isn't completely real, but computers malfunction all the time. Utilizing information from the darkest corners of the internet, it's entirely possible that an advanced, malfunctioning AI could exhibit negative deviations from a standard, well-functioning AI. It would all depend on what input an AI is given and what actions it is expected to perform in light of that information.
If an AI goes rogue, it's not just your computer or your smartphone that's at risk: it could be your own brain. According to MIT's tongue-in-cheek lore, Norman mentally damaged four of the 10 scientists who directly encountered it after Norman went rogue.
Although there isn't any official evidence this would occur in a real situation, the world has yet to see an AI as malevolent as Norman. Humans have been known to experience mental breaks or damage (such as PTSD) after encountering highly stressful or frightening situations. We don't exactly know what would happen if humans encountered an AI like Norman, but the human mind doesn't always react well when put in a traumatic scenario.
For concerned citizens fearing the uprising of malicious AI, the creators of Norman have offered something of a solution. Since neural-network AIs like Norman "grow" based on their input, the scientists over at MIT have made it possible for "regular people" to give their own responses to the inkblot test.
The expectation is that regular people taking this quiz will give much more psychologically stable responses to the prompts, which in turn should encourage Norman to do the same. After all, Norman learns based on what it experiences and reads, and the more positivity we can inject into its "life," the more positive it should become in return.
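That retraining idea can be sketched in miniature. The mechanics below are assumed for illustration (MIT did not publish Norman's pipeline): a simple picker chooses the pooled caption whose words best overlap an image's tags, and adding benign human-written responses to the pool shifts which caption wins. All captions and tags here are hypothetical.

```python
# Toy sketch of the crowd-sourced retraining idea (assumed mechanics,
# not MIT's actual pipeline): mixing benign human responses into the
# caption pool changes which caption a frequency-based picker selects.
from collections import Counter

def pick_caption(pool, image_tags):
    """Return the pooled caption whose words best overlap the image tags."""
    tags = Counter(image_tags)
    return max(pool, key=lambda c: sum((Counter(c.lower().split()) & tags).values()))

# A grim caption standing in for Norman's learned output.
dark_pool = ["man shot in broad daylight near a photo booth"]
# Hypothetical inkblot responses submitted by "regular people".
human_responses = ["a black and white photo of a small bird"]

inkblot = ["black", "white", "photo"]
print(pick_caption(dark_pool, inkblot))                    # the grim caption
print(pick_caption(dark_pool + human_responses, inkblot))  # the benign one wins
```

Real retraining would mean further gradient updates on the new caption data rather than pooling strings, but the direction of the effect is the same: more positive input, more positive output.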