Google's British artificial intelligence unit DeepMind has trained one of its gaming AI algorithms to play Quake III Arena as well as humans. The division saw id Software's 1999 cult hit as the perfect testing ground for improving its AI systems due to the mechanical complexity of the first-person shooter. DeepMind focused on the title's Capture the Flag mode, wherein two teams of players race to seize each other's flag across a varied map. The non-stop multiplayer action initially proved a challenge for the algorithms, which weren't even told the game's rules, but after extensive training, DeepMind taught the system to reach "human-level performance" and ultimately exceed it.
The UK firm conducted its tests on a streamlined version of Quake III Arena with most of its assets stripped out, primarily because the experiment already demanded vast computational power, having run for over 450,000 games. All of those games were played on procedurally generated maps to ensure the AI agents weren't learning strategies that only work in a single environment. As the chart below illustrates, a subsequent tournament pitting humans against bots saw the latter outperform the former, reaching a 74-percent win probability, compared to the 52-percent chance generally attributed to above-average human players.
DeepMind developed its AI agents with reinforcement learning, a method that has been gaining significant traction in the industry. Another British startup recently used the same approach to teach a car to drive itself in under 20 minutes, and DeepMind itself has already employed the technique in games like chess and Go, beating the world's best players in the process. The method comes down to scaling up a trial-and-error training system and "rewarding" the participating AI for success over prolonged periods, effectively creating a virtual carrot-and-stick situation.
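To make the carrot-and-stick idea concrete, here is a minimal tabular Q-learning sketch on a toy 1-D corridor, where an agent is rewarded only for reaching the goal cell. This is purely illustrative: the environment, reward scheme, and all parameter values are assumptions for the example and bear no relation to DeepMind's actual Quake III training setup, which used far more sophisticated deep reinforcement learning at massive scale.

```python
import random

# Illustrative trial-and-error (Q-learning) example, not DeepMind's method.
# The agent starts at cell 0 of a corridor and earns a reward of 1 only
# when it reaches the last cell; over many episodes it learns to move right.

N_STATES = 6            # corridor cells 0..5; the goal is cell 5
ACTIONS = [-1, +1]      # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(EPISODES):
        state = 0
        while state != N_STATES - 1:
            # Explore occasionally; otherwise exploit the current estimate.
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Temporal-difference update: nudge the estimate toward the
            # reward plus the discounted value of the best next action.
            q[state][a] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][a])
            state = next_state
    return q

q = train()
# Greedy policy per state: 1 means "move right" toward the rewarded goal.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)
```

The reward signal alone, propagated backward through repeated trials, is enough for the agent to converge on always moving right; no rules of the "game" are ever spelled out, mirroring how DeepMind's agents learned Capture the Flag without being told its rules.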