
The AI that kicked ass at Quake 3 CTF is learning how to play the rest of the game

The DeepMind AI has a real knack for videogames. It handles itself well on old Atari arcade classics, it kicks ass at StarCraft 2, and it drops the hammer on Quake 3 Arena Capture the Flag. And it's getting better: As detailed in an updated DeepMind blog post that builds on last year's research, the AI-powered CTF squad that laid a beatdown on human players is quickly learning how to play the rest of the game too. 

"Reinforcement learning (RL) has shown great success in increasingly complex single-agent environments and two-player turn-based games. However, the real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents," the full research paper abstract states. "We used a tournament-style evaluation to demonstrate that an agent can achieve human-level performance in a three-dimensional multiplayer first-person videogame, Quake 3 Arena in Capture the Flag mode, using only pixels and game points scored as input." 

The results speak for themselves. AI-powered "FTW agents" playing humans on maps that neither side had previously seen captured an average of 16 more flags per game than the humans; the only time humans claimed a win was when they were paired with an AI against two AIs on the other side. Things didn't go much better in a separate study that set two "professional games testers with full communication" against an AI team. "Even after 12 hours of practice, the human game testers were only able to win 25 percent (6.3 percent draw rate) of games against the agent team," the paper states. 

The results demonstrated that AI can learn to play a complex game with competitive and cooperative elements "in a rich multiagent environment," using only "pixels and game points as input," and furthermore "suggests that trained agents are capable of cooperating with never-seen-before teammates, such as humans." 

That would no doubt help with the development of better computer-controlled opponents in videogames, but more importantly, the researchers said in their conclusion that their results open the door to all kinds of other possibilities: "The presented framework of training populations of agents, each with their own learned rewards, makes minimal assumptions about the game structure and therefore could be applicable for scalable and stable learning in a wide variety of multiagent systems." 
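For readers curious what "training populations of agents, each with their own learned rewards" means in practice, here's a minimal toy sketch of the idea in Python. Every name, class, and scoring function below is a hypothetical stand-in for illustration, not DeepMind's actual code: each agent weights game events (flag captures, tags) into its own internal reward, and a periodic evolution step has weaker agents copy and mutate the reward weights of stronger ones.

```python
import random

class Agent:
    def __init__(self):
        # Each agent learns its own weighting of in-game events into an
        # internal reward, rather than optimizing the raw score directly.
        self.reward_weights = {"flag_capture": 1.0, "tag_opponent": 0.5}
        self.fitness = 0.0

def play_match(a, b):
    """Stand-in for a real game: returns random per-agent event counts."""
    return {agent: {"flag_capture": random.randint(0, 3),
                    "tag_opponent": random.randint(0, 5)}
            for agent in (a, b)}

def internal_reward(agent, events):
    # Convert observed game events into this agent's own reward signal.
    return sum(agent.reward_weights[e] * n for e, n in events.items())

def evolve(population):
    """Population step: weaker agents copy stronger agents' reward
    weights, with a small random mutation."""
    ranked = sorted(population, key=lambda ag: ag.fitness, reverse=True)
    half = len(ranked) // 2
    for loser, winner in zip(ranked[half:], ranked[:half]):
        loser.reward_weights = {e: w * random.uniform(0.9, 1.1)
                                for e, w in winner.reward_weights.items()}

population = [Agent() for _ in range(8)]
for generation in range(5):
    for agent in population:
        opponent = random.choice([a for a in population if a is not agent])
        events = play_match(agent, opponent)
        agent.fitness += internal_reward(agent, events[agent])
    evolve(population)
```

The real system replaces the random match stub with full Quake 3 games rendered as pixels, and the reward weights with learned neural-network parameters, but the population-plus-evolution loop is the part the researchers say makes "minimal assumptions about the game structure."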

Based on the DeepMind blog post, that appears to be what's happening. "Since initially publishing these results, we have found success in extending these methods to the full game of Quake III Arena, which includes professionally played maps, more multiplayer game modes in addition to Capture the Flag, and more gadgets and pickups," it says. "Initial results indicate that agents can play multiple game modes and multiple maps competitively, and are starting to challenge the skills of our human researchers in test matches." 

In fact, the ideas developed in this work "form a foundation of the AlphaStar agent in our work on StarCraft 2," the researchers said.

"In general, this work highlights the potential of multi-agent training to advance the development of artificial intelligence: exploiting the natural curriculum provided by multi-agent training, and forcing the development of robust agents that can even team up with humans."

A supplementary video showcasing "human-level performance in first-person multiplayer games with population-based deep reinforcement learning" can be seen below. 

Andy covers the day-to-day happenings in the big, wide world of PC gaming—the stuff we call "news." In his off hours, he wishes he had time to play the 80-hour RPGs and immersive sims he used to love so much.