Quake 3 bots play 450,000 games of Capture the Flag, learn to beat human teams

Is this the beginning of the end for humankind? Robots have now surpassed human-level competence in a stripped-down version of Quake 3 Arena's Capture the Flag. 

Similar to the OpenAI Dota 2 bots that aim to "beat a team of top professionals" at The International, Google's DeepMind has used reinforcement learning to teach artificial intelligence the multiplayer mode. The bots are said to have played 450,000 games against themselves, each lasting five minutes on a procedurally generated map. My crude calculations peg this at 37,500 hours.
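
For what it's worth, that figure checks out. Here's a quick back-of-the-envelope sketch in Python, purely to show the arithmetic; the game count and match length are the only inputs, taken straight from the report above:

```python
games = 450_000        # reported number of self-play matches
minutes_per_game = 5   # reported match length

total_hours = games * minutes_per_game / 60
print(total_hours)               # 37500.0 hours
print(total_hours / 24 / 365)    # roughly 4.3 years of non-stop play
```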

The Verge reports that, unlike OpenAI's bots, DeepMind's agents did not have access to Quake 3's raw numerical game data. Instead they learned from the visual on-screen inputs, in the same way human players do. The agents were given no instructions, and played against one another until they had worked out, and could reliably reproduce, routes to success.
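
To make that distinction concrete, here's a minimal, hypothetical sketch of what a pixels-only observation interface looks like. The frame shape, action count and function names are illustrative assumptions, not DeepMind's actual code; the point is simply that the agent's input is a screen image rather than the game's internal state:

```python
import numpy as np

FRAME_SHAPE = (84, 84, 3)   # downsampled RGB screen, a common choice in pixel-based RL
NUM_ACTIONS = 10            # discretised game actions (move, turn, jump, fire, ...)

def get_screen_pixels():
    """Stand-in for grabbing the rendered frame the agent 'sees'."""
    return np.random.randint(0, 256, size=FRAME_SHAPE, dtype=np.uint8)

def policy(frame):
    """Placeholder policy mapping raw pixels to an action.
    In the real system this would be a trained neural network;
    here a random choice just shows the interface."""
    assert frame.shape == FRAME_SHAPE
    return np.random.randint(NUM_ACTIONS)

# The only input is the screen image, not positions, health or flag status.
frame = get_screen_pixels()
action = policy(frame)
print(f"chose action {action} from a {frame.shape} pixel observation")
```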

Here's a seven-minute video that dives deeper into that:

Not only did the bots work out winning strategies, explains this blog post, but they also learned tactics such as camping, ganking and guarding their own team's flag.

To test the bots' abilities, DeepMind hosted a tournament with two-player teams of humans, two-player teams of bots, and a mixture of both. The bot-only teams accrued a 74 percent win probability, compared with 52 percent for skilled human players and 43 percent for average human players.

When a team of four bots was introduced, its win probability dropped to 64 percent. But that's still better than the humans' average. And while folk like DeepMind and OpenAI continually assure us these types of studies are designed to teach bots to learn without human supervision, I remain cautious. This carry-on has Cyberdyne written all over it.