Introducing Maia, an AI Chess Engine That Mimics Human Play
Researchers built an AI that plays chess like a person, not a supercomputer
The relationship between artificial intelligence and chess goes back to 1997, when grandmaster Garry Kasparov was crushed and cowed by an IBM supercomputer called Deep Blue. More recently, AlphaZero, a program revealed by the Alphabet subsidiary DeepMind, uncovered new approaches to the game that dazzled chess experts.
AlphaZero is also known for teaching itself to play chess, Go and the Japanese game shogi at an expert level.
Artificial intelligence is like a mirror: it amplifies both good and bad. Some naysayers would have you believe that it ruined the game of chess forever, but things have just gotten interesting. Recently, a team of researchers developed an AI chess engine that doesn't seek to beat humans. Instead, it aims to play like a human, creating a more enjoyable chess-playing experience for people. Beyond highlighting the computer's decision-making process, the artificially intelligent chess engine could help humans learn to do better.
“So chess becomes a place where we can try understanding human skill through the lens of super-intelligent AI,” says Jon Kleinberg, a professor at Cornell University. Kleinberg is a co-author of “Aligning Superhuman AI With Human Behavior: Chess as a Model System,” presented at the Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining, held virtually in August.
Chess has been considered the drosophila (fruit fly) of reasoning. Kleinberg explains, “Just as geneticists often care less about the fruit fly itself than its role as a model organism, artificial intelligence researchers love chess because it’s one of their model organisms. It’s a self-contained world you can explore, and it illustrates many of the phenomena that we see in AI more broadly.”
Maia, the AI Chess Engine
Maia, the new human-like chess engine, was developed by researchers at the University of Toronto, Cornell University and Microsoft. Built on Leela, the open-source, neural network-based chess engine modeled after DeepMind’s AlphaZero, Maia was trained on 100 million Lichess games between human players. Rather than learning from self-play games with the goal of finding the optimal move, its primary objective is to play the most human-like move in any position. By training different versions of Maia on games at different skill levels, the team created nine bots to play humans, one for each Elo rating level from 1100 to 1900.
Elo ratings are a system for evaluating players’ relative skill in games like chess. Maia 1100 is most predictive of human play around the 1100 level, and Maia 1900 is most predictive of human play around the 1900 level. Interested humans can play Maia on Lichess: @maia1 is Maia 1100, @maia5 is Maia 1500, @maia9 is Maia 1900, and @MaiaMystery is where the team experiments with new versions of Maia. The bots and other resources can also be downloaded from the project’s GitHub repo.
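For readers unfamiliar with how Elo ratings work, the standard update rule can be sketched in a few lines. This is a generic illustration of the Elo system, not code from the Maia project; the function names and the K-factor of 20 are illustrative choices.

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_rating(rating, expected, actual, k=20):
    """Adjust a rating after one game; actual is 1 (win), 0.5 (draw), 0 (loss)."""
    return rating + k * (actual - expected)

# An upset win moves the rating more than a win over a weaker opponent:
# a 1500-rated player is expected to beat a 1700-rated player only ~24% of
# the time, so winning earns a larger-than-average rating gain.
e = expected_score(1500, 1700)
new_rating = update_rating(1500, e, 1.0)
```

The 400-point scale means a player rated 400 points higher is expected to score about ten times as many points as their opponent, which is why the nine Maia bots at 100-point intervals span a wide range of playing strength.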
In a given position, Maia predicts the exact move a human will play up to 53% of the time. In contrast, versions of Leela and Stockfish (the reigning computer world chess champion) match human moves around 43% and 38% of the time, respectively. This makes Maia the most natural, human-like chess engine to date, and it provides a model of human play the researchers plan to use to build data-driven chess teaching tools. Further, according to the Microsoft blog post, the base Maia predicts human moves around 50% of the time, while some personalized models can predict an individual’s moves with accuracies of up to 75%.
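The move-matching figures above boil down to a simple metric: the fraction of positions in which an engine's top choice equals the move the human actually played. A minimal sketch, with invented example moves (real evaluations use millions of Lichess positions):

```python
def move_match_accuracy(predicted_moves, human_moves):
    """Fraction of positions where the engine's top move equals the human's move."""
    matches = sum(p == h for p, h in zip(predicted_moves, human_moves))
    return matches / len(human_moves)

# Moves in UCI notation; this tiny sample is purely illustrative.
human_played = ["e2e4", "g1f3", "f1c4", "d2d3"]
engine_top   = ["e2e4", "g1f3", "f1b5", "d2d4"]
accuracy = move_match_accuracy(engine_top, human_played)  # 0.5 on this sample
```

Under this metric, a stronger engine is not necessarily a better predictor: Stockfish plays objectively better moves than Maia, yet matches human choices less often, because humans at a given rating level do not play optimally.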
Maia was first made available on lichess.org in December 2020, and people played more than 40,000 games against it in the first week. Even Agadmator, the most-subscribed chess channel on YouTube, covered the project and played two live games against Maia.
According to co-author Ashton Anderson, assistant professor at the University of Toronto, current chess AIs have no conception of what mistakes people typically make at a particular ability level. “They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can’t separate out what you should work on,” he explains. “Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn’t, because they are still too difficult,” he adds.
The whole research study was sponsored in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a Multidisciplinary University Research Initiative grant, a MacArthur Foundation grant, a Natural Sciences and Engineering Research Council of Canada grant, a Microsoft Research Award and a Canada Foundation for Innovation grant.
For further information, please check the study paper on the preprint server arXiv.