

DeepMind taking artificial intelligence to the next level

Google's artificial intelligence project DeepMind is learning to play Blizzard's strategy game StarCraft II as it bids to understand and master highly complex video games.

Google's DeepMind is learning StarCraft II

DeepMind, Google's artificial intelligence project, has set its latest objective: creating an AI able to master StarCraft II, Blizzard's famous strategy game.

DeepMind, from Go to StarCraft

DeepMind, a London-based company that Google acquired in 2014, hit the headlines worldwide in March 2016 when it comprehensively beat 18-time world Go champion Lee Sedol at the ancient Chinese strategy game. One of the scientists behind DeepMind is Oriol Vinyals, from Sabadell in north-eastern Spain, named by MIT Technology Review as one of 2016's outstanding innovators under 35.

Oriol Vinyals, artificial intelligence and StarCraft expert

Vinyals is a self-confessed StarCraft fan. As a teenager he was the top-ranked player in Spain at the original StarCraft, and while studying at Berkeley he was involved in creating an AI bot, named Overmind, that could play StarCraft on its own, a landmark for AI at the time.

Now Vinyals is involved in DeepMind's mission to build a machine that not just plays StarCraft II, but dominates it. The overarching DeepMind mission, though, he explains, "is to understand intelligence, and then to understand everything else".

As Demis Hassabis, one of DeepMind's founders, says: "attempting to distil intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds".

How does DeepMind work?

The company has created neural networks that learn to play video games in much the same way humans do. These are computer systems that aren't pre-programmed; instead they learn for themselves, observing their environment as they search for ways to optimise their decision-making.

DeepMind's system analyses images from the game through a series of processing layers that let it work out which action to take next. By repeating the process thousands of times, the machine learns which moves work best, and can find on its own the best way to play, including strategies no human would have thought of.
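As a rough illustration of this trial-and-error loop, here is a toy sketch in Python. It is not DeepMind's actual system (which uses deep neural networks processing game images, not a small lookup table); it is a minimal tabular Q-learning agent that, guided only by reward feedback, learns that moving right along a short track reaches the goal:

```python
import random

random.seed(0)

N_STATES = 5            # positions 0..4; the reward waits at state 4
ACTIONS = [-1, +1]      # move left or move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def best_action(state):
    """Pick the highest-valued action for a state, breaking ties at random."""
    top = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == top])

for episode in range(200):              # "thousands" of games, scaled down
    state = 0
    while state != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit what was learned
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = best_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value estimate towards observed reward
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy prefers "move right" (+1) in every state
policy = {s: best_action(s) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the rules; it only observes which actions eventually led to reward, the same principle, at vastly greater scale, behind learning from raw game images.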

According to the company: "DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we're doing through scores."

How DeepMind sees a game of StarCraft II

Games DeepMind has played

DeepMind has taken on dozens of Atari games, with varying degrees of success. From Pong to Pac-Man it has reached levels comparable to those of a human, and in many cases its performance has been superhuman. Another of its great challenges has been maze games, where it needs to remember where it is and use that memory to overcome the problems it faces.

It isn't always successful, however. In some games DeepMind simply decides to sit and do nothing. This can happen when reaching the objective requires a long sequence of moves that offer no apparent benefit in themselves, and where a mistake means the player's death (falling off a cliff, being eaten by a monster); the system concludes it's better "just to stay put".

Why StarCraft II?

This project aims to take artificial intelligence to another level. In Go the number of possible moves, while greater than the number of atoms in the universe, is still finite; in StarCraft it effectively can't be calculated.

Thousands of people play the game online, and it's one of the most popular games among professionals, who can make over 200 moves per minute (keystrokes alone, not counting mouse movements). This means DeepMind's performance can be measured against that of professional players.

StarCraft is a game of incomplete information: a player can't see what their adversaries are doing, unlike Go, where both players can see the whole board at all times.

DeepMind first has to discover the rules of play, without anyone explaining the game to it. To that end it plays thousands of games, building up information about how other players behave, both casual gamers and pros.

To keep things fair between DeepMind and humans, the system is capped at the number of moves per minute that a human can manage.
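One simple way to picture such a cap (an illustrative sketch with a made-up `ApmLimiter` class; the article doesn't describe how the real limit is enforced) is a sliding one-minute window over the agent's recent actions:

```python
import collections

class ApmLimiter:
    """Hypothetical actions-per-minute (APM) cap for a game-playing agent."""

    def __init__(self, max_apm=200):
        self.max_apm = max_apm
        self.timestamps = collections.deque()   # times of recent actions

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (in seconds)."""
        # Drop actions that have fallen out of the 60-second window
        while self.timestamps and now - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True
        return False    # over budget: the agent must wait

limiter = ApmLimiter(max_apm=3)     # tiny budget to demonstrate the behaviour
print([limiter.try_act(t) for t in (0.0, 1.0, 2.0, 3.0, 61.0)])
# → [True, True, True, False, True]
```

The fourth action is refused because three actions already sit inside the window; a minute later, the oldest actions have expired and the agent may act again.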

The project has applications in the real world. AI is already being used to solve medical and engineering problems, and in the future it could even teach humans how to play competitive games better. AI systems can also test programmes: in fact DeepMind has already been doing this, finding numerous bugs in StarCraft II as it learns how to beat one and all.


