Blizzard Entertainment and Google's DeepMind research team have teamed up to roll out an Application Program Interface (API) that opens StarCraft II as an AI research environment. There are also plans to make the API available to interested AI developers and researchers by next year.
The move is meant to let programmers train their own AI players to play StarCraft II. Research like this could do more than give human players AI opponents to sharpen their skills against. By exposing the AI to many different players, the program itself could learn far more than it would being taught in a laboratory, where there is no concept of real-world scenarios.
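To picture what "training your own AI player" through such an API involves, here is a minimal sketch of the observe-act-reward loop that game-playing agents are built around. Everything here is hypothetical: `ToyEnv`, `run_episode`, and the reward scheme are stand-ins for illustration, not part of Blizzard's or DeepMind's actual interface.

```python
import random

# Hypothetical sketch of an agent-environment loop. None of these names
# come from the real StarCraft II API; they only illustrate the pattern.

class ToyEnv:
    """A stand-in for a game: reach position 10 to 'win'."""

    def __init__(self):
        self.pos = 0
        self.steps = 0

    def reset(self):
        """Start a fresh episode and return the first observation."""
        self.pos = 0
        self.steps = 0
        return self.pos

    def step(self, action):
        """Apply an action (+1 advance, -1 retreat); return (obs, reward, done)."""
        self.pos = max(0, self.pos + action)
        self.steps += 1
        done = self.pos >= 10 or self.steps >= 100
        reward = 1.0 if self.pos >= 10 else 0.0
        return self.pos, reward, done


def run_episode(env, policy):
    """Drive one episode: observe, act, collect reward, repeat until done."""
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = policy(obs)
        obs, reward, done = env.step(action)
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    random.seed(0)
    env = ToyEnv()
    always_forward = lambda obs: 1  # a trivial scripted 'agent'
    print(run_episode(env, always_forward))
```

In a real setup, the scripted lambda would be replaced by a learning policy (a neural network updated from the rewards), and the toy environment by the game itself; the loop's shape stays the same.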
Why Starcraft II?
Gamasutra reports that StarCraft's complex rule set makes it a great learning environment while still posing a reasonable amount of challenge. The site also noted that Ben Weber, a data science manager at Twitch, outlined a number of challenges AI agents will have to face when DeepMind first announced its interest in StarCraft this year.
A representative from DeepMind also stated that StarCraft's "messiness," which mirrors the real world, could someday translate into AI that better understands real-world tasks.
On top of that, games have long been DeepMind's tool for teaching AI agents to handle complex and unpredictable scenarios. Its agents have learned to play classic Atari games on par with human players, and professional Go players have also been defeated by AI.
Why is it a threat to our existence?
First of all, let me clarify that this is a speculative risk assessment with no basis beyond the obvious. We are at the dawn of AI advancement. As you can observe above, AI agents "learn" from the games they are introduced to. And DeepMind has taken another step by handing the technology to individuals clearly beyond its control, aside from having them sign various agreements. Releasing something like this to the public carries the risk of mishandling: what if it falls into the wrong hands, people merely pretending to be game researchers?
Now, I'm not totally paranoid about this; in fact, I'm glad it's happening. I am pro-technology and pro-human-intelligence evolution, but to me, building AI capable of learning is a dangerous matter. You might think I'm writing crazy today, and I myself hope this is just me being wrong about the risks. Look at fire: it's dangerous, yet it's mostly used for good. The same concept applies to AI; it can help us in a wide range of ways, and it can also hurt us across a range just as wide.
Could this spark a Matrix or Terminator scenario, where one day AI thinks for itself and makes its own set of rules? A world where machines driven by AI revolt against us? AI today is trained to learn, and we humans are innately rebellious when provoked; that aspect can be learned too. So think about it: today it may just be a bloody StarCraft II game, but in the near future, perhaps a couple of generations from now, AI could take over washing our dishes, write our daily tasks, teach us languages, and command an army in an existing war. Here's a video of an AI learning classic NES games; it's fun to watch until you realize it could someday learn whatever we are capable of.