In the latest battle of man versus machine, a new world first has been achieved - AI has beaten top eSports players at their own game. That game is StarCraft II, a popular real-time strategy title that demands fast-paced decision-making, resource management, and fluid tactical savvy built around rock-paper-scissors style combat. Let’s take a look at why this is such a big deal, and how it was achieved.
As we covered in a recent blog, new machine intelligence approaches have been fueling massive leaps in Artificial Intelligence (AI) over just the last few years. The main proving ground to date has been the arena of strategy board games like chess and Go. For this new domain, Google turned to its DeepMind project, a system built on artificial neural networks, which are partly modeled on how the human brain processes complex information.
This new form of adaptive AI can both learn from experts and learn independently by playing simulations against itself. Though it doesn’t require supercomputers, it does need lots of practice, which today’s modern processor technologies vastly accelerate. The results with chess and Go have been stunning, with DeepMind’s AI reaching levels of strategic play far superior to those of the best human players in the world.
Board games have relatively simple rules, with their complexity emerging from the many ways each game can unfold. Computer games like StarCraft II are much more complex, because they offer an enormous range of play options right from the opening moments of each game. They can also involve huge numbers of units, which are far less constrained by the rules of play than pawns or Go pieces are. Lastly, there are many different types of units with multiple abilities, which can be combined in myriad ways.
These factors present formidable challenges for AI, because they border on the realm of creativity – traditionally a human trait. However, one of the unique facets of DeepMind is its ability to learn experimentally, by trial and error…to the Nth degree.
With a new specialized AI called AlphaStar, the Google DeepMind team felt confident enough to unleash their StarCraft II AI against top pro eSports players of the game.
AlphaStar took on two opponents in a test environment, and the results were shocking: 10 straight wins, beating each player 5-0. And it wasn’t actually a single AI that beat them - it was 5 different evolutions of the AI, each with its own very distinct play style.
The victories were a remarkable accomplishment, given the complexity of the game and the level of performance that eSports stars attain. These players are famed for performing hundreds of actions per minute, with lightning-fast reactions. Strangely enough, AlphaStar’s prowess wasn’t actually in this presumably machine-suited domain. In fact, it had slower reactions and fewer actions per minute than its human opponents, but it was far more efficient with the actions it did execute.
Where it excelled most was in the smartness and creativity of its play, and it was the sheer diversity of never-before-seen strategies that bamboozled the eSports stars.
On human timescales, AlphaStar’s ability seemed to come out of nowhere. On machine timescales, it took quite a while. The first version of the AI was built by studying massive numbers of games played by pro players. This got it to the level of a lower-league professional player, but still left it a long way from matching the top professionals.
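For the technically curious, here’s a deliberately tiny Python sketch of what ‘studying pro games’ means in principle: copy what the experts did in each situation. Everything in it - the situations, actions, and replay data - is invented for illustration, and AlphaStar’s real system learns with deep neural networks over vast replay datasets rather than simple counting.

```python
# A toy sketch of "learning from expert games" (behavioral cloning in miniature).
# The situations, actions, and replay data below are invented for illustration.

from collections import Counter, defaultdict

# Pretend replay data: (game situation, action the pro took) pairs.
pro_replays = [
    ("low_on_minerals", "build_worker"),
    ("low_on_minerals", "build_worker"),
    ("enemy_scouted", "build_defenses"),
    ("enemy_scouted", "build_defenses"),
    ("army_ready", "attack"),
    ("army_ready", "attack"),
]

# "Training" here is simply counting which action pros favor in each situation.
action_counts = defaultdict(Counter)
for situation, action in pro_replays:
    action_counts[situation][action] += 1

def imitation_policy(situation):
    """Choose the action the pros took most often in this situation."""
    return action_counts[situation].most_common(1)[0][0]

print(imitation_policy("army_ready"))     # -> attack
print(imitation_policy("enemy_scouted"))  # -> build_defenses
```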
The next phase was where the real AI magic happened. AlphaStar took that emulated knowledge, experimented with it, and learned from itself. In one week of practice in the ‘AlphaStar League’, it simulated approximately 200 years of gameplay against various iterations of itself.
Out of its self-learning algorithms emerged 5 very different playstyles with superior winning outcomes. The DeepMind team dubbed these, somewhat ominously, ‘agents’.
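And here’s an equally simplified sketch of the self-play league idea that produced those agents: a small population that only ever learns by playing itself, with each member drifting toward its own style. Again, the game and the update rule are toy stand-ins we’ve invented - nothing like the scale or sophistication of the real AlphaStar League, which used deep reinforcement learning.

```python
# A toy "self-play league": a population of agents that improves only by
# playing each other. Rock-paper-scissors stands in for StarCraft II here.

import random

MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class Agent:
    def __init__(self):
        # Each agent starts with its own random preference over moves.
        self.weights = {move: random.uniform(0.5, 1.5) for move in MOVES}

    def play(self):
        moves, weights = zip(*self.weights.items())
        return random.choices(moves, weights=weights)[0]

    def learn_from(self, opponent_move):
        # Nudge preferences toward the move that counters the opponent's habit.
        self.weights[COUNTER[opponent_move]] += 0.1

random.seed(42)
league = [Agent() for _ in range(5)]

# Thousands of head-to-head games, with no human input at all.
for _ in range(5000):
    a, b = random.sample(league, 2)
    move_a, move_b = a.play(), b.play()
    a.learn_from(move_b)
    b.learn_from(move_a)

# Inspect the "style" each agent drifted toward.
for i, agent in enumerate(league, 1):
    favorite = max(agent.weights, key=agent.weights.get)
    print(f"agent {i} favors {favorite}")
```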
It was these AI’s that were faced-off with the pro players. In the second match, an eSports star named PLO, was somewhat dumbfounded over the fact that the AI’s strategy in the second match was completely different to the first.
This led commentators to frequently refer to the AI as ‘scary’ or ‘terrifying’. At some moments its play would look exactly like that of a top pro player, but then it could suddenly morph into brand new strategies – coordinating multiple flanking attacks and gaining total map control.
Rather than being miffed at being hopelessly outgunned by these early DeepMind forays into eSports, the defeated pro players were intrigued by the new strategies and the insights into how the meta-game could evolve.
Beyond AI versus human showdowns, these agents could also be used in eSports as training partners - the toughest possible opponents for pushing players’ skill development. Furthermore, with specialized development, they could be used to discover effective counter-strategies against top-tier opponents with predictable play styles.
As we’ve written about previously, major eSports teams are now taking on the latest sports science technologies like NeuroTracker to hone their skills. With big money going into player development, it could be that the eSports stars of tomorrow are trained up by AI with neural networks customized to their learning needs.
If you’re interested in the unfolding power of AI, then also check out this blog.