‘NooK won at bridge by formulating rules, not just brute-force calculation.’ Photograph: Stoxo/Alamy

The Guardian view on bridging human and machine learning: it’s all in the game


A French startup may have cracked AI’s problem of trust with software that can learn better than humans – and express that learning

Last week an artificial intelligence – called NooK – beat eight world champion players at bridge. That algorithms can outwit humans might not seem newsworthy. IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997. In 2016, Google’s AlphaGo defeated a Go grandmaster. A year later the AI Libratus saw off four poker stars. Yet the real-world applications of such technologies have been limited. Stephen Muggleton, a computer scientist, suggests this is because they are “black boxes” that can learn better than people but cannot express, and communicate, that learning.

NooK, from French startup NukkAI, is different. It won by formulating rules, not just brute-force calculation. Bridge is not the same as chess or Go, which are two-player games based on an entirely known set of facts. Bridge is a game for four players split into two teams, involving collaboration and competition with incomplete information. Each player sees only their cards and needs to gather information about the other players’ hands. Unlike poker, which also involves hidden information and bluffing, in bridge a player must disclose to their opponents the information they are passing to their partner.

This feature of bridge meant NooK could explain how its playing decisions were made – and it is why the program represents a leap forward for AI. When confronted with a new game, humans tend to learn the rules and then improve by, for example, reading books. By contrast, “black box” AIs train themselves by deep learning: playing a game billions of times until the algorithm has worked out how to win. It is a mystery how such software comes to its conclusions – or how it will fail.

NooK nods to the work of the British AI pioneer Donald Michie, who reasoned that AI’s highest state would be to develop new insights and teach them to humans, raising human performance beyond the level a person could reach by studying alone. Michie dismissed as “weak” machine learning that merely improved AI performance by increasing the volume of data ingested.

His insight has been vindicated as deep learning’s limits have been exposed. Self-driving cars remain a distant dream. Radiologists were not replaced by AI last year, as had been predicted. Humans, unlike computers, often make short work of complicated, high-stakes tasks. Thankfully, human society is not under constant diagnostic surveillance – but that means there is often too little data to train AI, and what data exists frequently contains hidden, socially unacceptable biases. The environmental impact is also a growing concern, with computing projected to account for 20% of global electricity demand by 2030.

Technologies build trust if they are understandable. There is always a danger that black box AI solves a problem in the wrong way, and the more powerful a deep-learning system grows, the more opaque it can become. The House of Lords justice committee this week said such technologies have serious implications for human rights, and warned against convictions and imprisonment on the basis of AI reasoning that could not be understood or challenged. NooK will be a world-changing technology if it lives up to its promise of solving complex problems and explaining how it does so.
