Google Brain researchers have performed a nifty little cryptography experiment that has produced a new kind of AI-generated encryption, created without any human-designed algorithm.
A new research paper titled "Learning to Protect Communications with Adversarial Neural Cryptography" reveals how Google Brain researchers Martín Abadi and David G. Andersen tasked three test subjects - neural networks named Alice, Bob and Eve - with passing secret messages to each other using encryption methods they devised entirely on their own.
According to New Scientist, the researchers from Google's deep-learning project assigned each AI system a different task. Alice had to send a secret message that only Bob could decipher, while Eve's job was to eavesdrop and try to decode the message on her own. The experiment began with Alice converting a plain-text message into gibberish, which Bob then had to decode using a cipher key he shared with Alice - a key Eve did not have.
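The three-way setup above can be sketched as a toy objective. This is a simplified illustration, not the paper's actual training code: the `bit_error` helper, the perfect-Bob and random-Eve stand-ins, and the exact form of the penalty term are all simplifying assumptions standing in for trained neural networks.

```python
import random

random.seed(0)

def bit_error(a, b):
    """Fraction of bit positions where two messages differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Toy stand-ins for the three networks' outputs on one 16-bit message.
# In the paper, Alice and Bob also share a secret key that Eve never sees.
plaintext = [random.randint(0, 1) for _ in range(16)]
bob_guess = list(plaintext)                            # a perfect Bob
eve_guess = [random.randint(0, 1) for _ in range(16)]  # a guessing Eve

# Eve is trained to minimise her own reconstruction error.
eve_loss = bit_error(plaintext, eve_guess)

# Alice and Bob jointly minimise Bob's error while pushing Eve's
# per-bit error toward 0.5 (pure chance) - roughly the shape of the
# adversarial objective the paper describes.
alice_bob_loss = bit_error(plaintext, bob_guess) + (0.5 - eve_loss) ** 2
```

In training, the three networks repeatedly play this game: Eve updates to shrink her loss, while Alice and Bob update to shrink theirs, which includes a term rewarding them whenever Eve does no better than chance.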
The researchers found that while Alice and Bob were initially not very good at concealing their secret, after about 15,000 attempts Alice developed her own encryption strategy and Bob learned to decrypt her messages reliably. Meanwhile, Eve could only recover about half of each message. Because the message was just 16 bits long, with each bit being either a 1 or a 0, getting half the bits right is exactly what pure chance predicts: Eve was essentially guessing at random.
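The chance-level claim is easy to check with a quick simulation. This is a hypothetical sanity check, not anything from the paper: a guesser with no information about a random 16-bit message gets about 8 bits right on average.

```python
import random

random.seed(1)

BITS, TRIALS = 16, 10_000

def correct_bits():
    """Bits a blind guesser gets right on one random 16-bit message."""
    msg   = [random.randint(0, 1) for _ in range(BITS)]
    guess = [random.randint(0, 1) for _ in range(BITS)]
    return sum(m == g for m, g in zip(msg, guess))

avg = sum(correct_bits() for _ in range(TRIALS)) / TRIALS
print(avg)  # close to 8 - random guessing recovers about half the bits
```

So an eavesdropper stuck at roughly 8 of 16 bits has learned nothing about the cipher, which is exactly the position Eve ended up in.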
Engadget notes that because the researchers themselves don't know what kind of encryption Alice came up with, the technique has no immediate practical applications.
According to TechCrunch, the Google Brain researchers concluded that Alice and Bob became adept at building a solid encryption protocol on their own "as long as they valued security," and that their method was good enough that Eve struggled to break it. The blog suggests this points toward machines one day communicating with each other in ways that neither humans nor other machines will be able to crack.