AI systems that can beat world champions in strategy games like chess, checkers, and Go have pushed the boundaries of computing science. Photo provided by Jan Vašek.
The year is 1997. Chess pieces sit arrayed on a board, a chess clock quietly counting off tense seconds. It’s Game 6, final round of the match, and the score is a dead tie. On the table next to each player sits a small flag indicating their home country: one Russian, one American. But this high-stakes matchup has little to do with East and West—instead, it’s a battle of human versus machine.
On one side of the board plays reigning world chess champion Garry Kasparov. On the other, IBM supercomputer Deep Blue, with a human assistant moving the pieces across the board. It’s Deep Blue’s second attempt to take on Kasparov, after winning only a single game against him in their first match in 1996.
This rematch would be one for the history books: the first time a computer triumphed over a world champion as Deep Blue broke the tie to win the sixth and final game and take the match.
The humans behind the machine
But of course, calling the match one of "human versus machine" doesn’t tell the whole story. Watching the board just as intently as Kasparov was the team of scientists behind the computer, including Murray Campbell (’79 BSc, ’81 MSc), instrumental co-creator of Deep Blue.
"There was a lot of intense work leading up to the 1997 match, including building a new revision of the specialized chess hardware and debugging the system with the help of some strong chess grandmasters," says Campbell. "The 1997 match was very tense, but the victory in the final game of the match was a thrilling moment."
The story of Kasparov and Deep Blue is one of humans versus humans, chess masters against scientists pushing the boundaries of artificial intelligence and computer programming.
Harder problems, better AI, faster computers, stronger science
The boundaries of computing science have been pushed a great deal in the more than two decades since Deep Blue’s historic match. New computer hardware and AI techniques have enabled the field to evolve substantially.
"The big change from the 1990s is the incredible progress in machine learning," says Campbell. "AI is now leading to tools that can improve our decision-making and productivity, including speech recognition, image classification, and language translation.
"Progress in games has also accelerated, and recent results in chess, Go, and poker, including the poker-bot DeepStack from UAlberta, have been noteworthy."
On our game at UAlberta
Chess, Go, poker, checkers. What makes these games such a compelling subject of study for computing scientists? Jonathan Schaeffer (computing science) explains:
Photo by John Ulan.
"Early on in the history of AI, chess was called the ‘drosophila’ of AI—chess is to AI research as the fruit fly is to genetics research," says Schaeffer. "Games are nice environments to experiment in. Chess in particular: the space is fixed, the rules don’t change, there is no random element, and everything is known about the state of the game—it’s 'simple' in comparison to the real world."
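The properties Schaeffer describes (fixed rules, no chance, perfect information) are what make systematic game-tree search possible in the first place. As a rough illustration only, here is a minimal minimax sketch over a toy game tree; real engines like Deep Blue add deep specialized search, pruning, and sophisticated evaluation, none of which is shown here.

```python
def minimax(node, maximizing):
    """Score a position by searching the full game tree.

    Leaves are integers (the evaluation of a finished or cut-off
    position); internal nodes are lists of child positions. This toy
    representation is hypothetical, purely for illustration.
    """
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # The maximizing player picks the best child; the minimizing
    # player (the opponent) picks the worst one for us.
    return max(scores) if maximizing else min(scores)

# Two plies: the maximizer chooses a branch, then the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3: branch one guarantees at least 3
```

Because everything about the state is known and the rules never change, a program can in principle reason this way all the way to the end of the game, which is exactly why chess and checkers made such clean laboratories for early AI.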
Those "simple" problems are anything but, it turns out. Despite the challenges, the University of Alberta has a rich history of tackling game after game using computing science.
"First, checkers fell to computers at UAlberta in 1994, then chess to the Deep Blue team in 1997. More recently, scientists at UAlberta tackled poker and Go in 2015 and 2017," says Schaeffer. "The games we work on get more complicated and challenging—and that means we learn new things about AI."
Game, data set, match
Go is one of the oldest board games still played today, enjoyed by millions of people worldwide—and by at least one computer program: AlphaGo, developed by a DeepMind team led by UAlberta alumnus David Silver (’09 PhD). AlphaGo became the first computer program to defeat a professional human Go player and a Go world champion.
In 2015, UAlberta’s Computer Poker Research Group trumped luck with Cepheus—the first computer program to play an essentially perfect game of heads-up limit Texas hold’em poker. Led by Michael Bowling (computing science), the team also developed DeepStack in 2017, the first program to outplay human professionals at no-limit heads-up poker.
The next move
"In spite of the great progress in recent years, there are a number of challenges ahead of us," explains Campbell. "Many of the AI systems available today are narrow."
Campbell explains that this means AI systems need to be constantly retrained to handle new or changing tasks—a time-consuming and computationally expensive process.
"My feeling is that machine learning alone will not lead to the kind of broader AI that we need, and combining machine learning with other types of AI, such as machine reasoning, will be essential to get to the next level."
Whatever comes next, one thing is certain: the expertise at UAlberta continues to advance the frontier of this exciting field, with our artificial intelligence and machine learning research ranked third in the world since 2000, according to the metrics-based Computer Science Rankings.
UAlberta expertise recognized worldwide
DeepMind: In a historic move for the global AI community, one of the world’s leading AI research companies, Google DeepMind, opened its first satellite research lab outside the United Kingdom in Edmonton in 2017. The connections between DeepMind and UAlberta run deep, with roughly a dozen UAlberta alumni working at the company’s London headquarters.
Amii: Edmonton is also home to the Alberta Machine Intelligence Institute (Amii), which collaborates closely with UAlberta researchers to advance academic knowledge of AI as well as how the technology can be put to use in business.