In Stanley Kubrick’s 2001: A Space Odyssey, HAL 9000, the sentient robot manning the mission to Jupiter, starts making some autonomous decisions. HAL’s been programmed to get the spacecraft to Jupiter at all costs – but his makers never stipulated just what those costs might be. So, to save the mission from being jeopardised by humans, HAL switches off the life support machines for the cryogenically sleeping crew; severs one astronaut’s lifeline, leaving him to float into deep space; and blocks Dave, the film’s protagonist, from re-entering the spacecraft through its pod bay doors.
2001 is an easy (not to mention obvious) go-to in contemporary conversations concerning AI. As machines keep learning, we keep asking: just how intelligent are they going to get? But when it comes to the question of using AI in medicine – when our lives are literally being placed in the hands of robots – HAL begins to feel less like sci-fi fantasy, and more like a figure we might meet on an operating table in the not-too-distant future.
Ferdinando Rodriguez y Baena, co-director of the Hamlyn Centre for Robotic Surgery at Imperial College London, is quick to dismiss such fears. “It is not the world of AI in movies and science fiction,” he explains, “it’s just the next step in human ingenuity, when it comes to solving things using an empirical or iterative approach.” In the year 2021, we’re still a long way off 2001’s vision of AI. “We are not even vaguely there in replicating the way that the mind – the human mind – operates in terms of solutions, in terms of pattern recognition,” Rodriguez y Baena stresses. “It’s the difference between a mouse and a human being – in fact, no, not a mouse, even smaller – I’m talking multicellular organisms.”
What does AI in the operating room look like today? Broadly speaking, advanced technologies support one of two things when it comes to surgery: manual dexterity and decision making. As Rodriguez y Baena explains, huge strides have been made over the last two decades in the use of dexterous robots in the operating room. “20 years ago, just getting a robot into the operating theatre was a great achievement – now, there are plenty of examples of commercial systems that are making it through to the main league.” Cutting-edge robots with compliant, soft, delicate structures are beginning to perform surgery not only on bone but on soft tissues, while minimally invasive operations are steadily becoming easier thanks to robotic scaling.
At the other end of the surgical spectrum, deep learning algorithms are becoming vital tools in clinical diagnostics. Take imaging technologies (just one among a number of AI diagnostic tools), which, as Rodriguez y Baena explains, could help to drastically reduce the time it currently takes to make diagnoses using classical methods of biopsy. “To this day, for the greatest majority of pathological diagnosis, you go in with a little guillotine and you take a piece of tissue. You give this to the runner, the runner takes it to the lab, the lab does their histology in, best-case scenario, 15–30 minutes – in the longest case you have to wait three days – and then you make your diagnosis. Now, if you could make that a one-step process, so that you go in, you diagnose and you execute your resection at the same time, that would be amazing. And there are a whole range of technologies that are more or less maturing at this stage: Raman [spectroscopy], OCT, fluorescence imaging.”
Intuition and iteration
These machines are known as “iterative solvers”: they take a starting value and generate a sequence of approximate but steadily improving solutions to a given problem. But, as Rodriguez y Baena points out, they are just that: solvers. “You can have models that basically build some relations between input and output on physical assumptions, and then there are black boxes like machine-learning algorithms,” he explains. “But they don’t really solve anything about the underlying physics of a problem. They just basically look, they mine data – input, output; input, output – and try to figure out the relationship between them.”
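To make the idea concrete, an iterative solver in the numerical sense can be sketched in a few lines of Python. This is a generic illustration – Newton’s method refining a guess at a square root – and the function, starting value and tolerance are illustrative choices, not anything drawn from a surgical system:

```python
def newton_sqrt(value, guess=1.0, tolerance=1e-10, max_steps=100):
    """Approximate sqrt(value) by iterative refinement.

    Start from a rough guess and repeatedly improve it until
    successive estimates stop changing -- the defining loop of
    an iterative solver.
    """
    for _ in range(max_steps):
        # Newton update for the equation x**2 - value = 0
        improved = 0.5 * (guess + value / guess)
        # Stop once the estimates have converged within tolerance
        if abs(improved - guess) < tolerance:
            return improved
        guess = improved
    return guess


print(newton_sqrt(2.0))  # converges towards 1.41421356...
```

Each pass through the loop produces an approximate answer that is systematically better than the last – the solver never “understands” square roots, it just refines numbers until they stop moving, which is precisely the distinction Rodriguez y Baena draws.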
Using these solvers effectively is a question of understanding their limits, rather than overstating their capabilities. For all his confidence in the power of the human mind, Rodriguez y Baena does have some concerns when it comes to how we use these tools. “If I had one opportunity [to offer] caution,” he admits, “it would be this: AI is not all things to all men. As long as we treat it for what it is, then I think it’s a very capable tool, but when you start drifting into, ‘We’re going to use it to take away human error,’ and when you start to feed it all sorts of information without really taking a step back, then you may risk losing, in the next generation, all those subtle skills that make a clinician a very good clinician.”
Rodriguez y Baena has devoted his 25-year-long career to the development of robotics in surgery, but he remains grounded in his family background as a clinician (both of his parents were clinicians too). “They say that a very good clinician can look at you and, in a few seconds, sort of know what’s wrong with you,” he remarks. “That’s their training, their experience. Or, it could be in the way a patient looks, the way they move, the way they use their eyes – there are all these subtleties. And the human mind, through experience and training and making mistakes, and looking at many, many patients over a long period of time, somehow conjures some intuition about what may be wrong with a patient.” It’s this complex and intuitive skill set that Rodriguez y Baena fears losing if future generations place too much faith in the development of AI. And, though it is true that this intuitive, experiential approach makes us fallible, makes us prone to error, it also takes into account the reality of the world around us – something that robots are simply unable to do.
Machine learning runs on data, on pattern recognition, and therefore on probability – which means that AI robots make decisions according to calculations about some kind of greater good. Investing AI with the power to make surgical judgements would, therefore, seem to require the activation of 19th-century philosopher Jeremy Bentham’s ‘fundamental axiom’, according to which “it is the greatest happiness of the greatest number that is the measure of right and wrong”. It’s a sound ethical code in theory; though, in practice, as critics of Bentham’s utilitarianism have argued, the doctrine leaves many questions unanswered: how, for example, can we define happiness, and why should we consider it the greater good? More importantly – and this is HAL’s conundrum – at what point might acting in the interests of the many justify injury to a few?
This last question might also be levelled at the use of AI in medical decision making. Consider a game of chess: the most intelligent human has the capacity to weigh up probabilities several moves ahead; according to an article in Nature last year, the latest developments in AI might give it the capacity to weigh up when we’re going to die. At some point, these machines might be sophisticated enough to draw unimaginable patterns from the raw data we feed them. The question is, will the balance between surgeon and machine begin to tip? And if surgical decision aids do get better at decision making, who gets the power to override whom?
Market forces
Perhaps ironically, as Rodriguez y Baena reminds us, the greatest danger we face at this juncture in the development of surgical AI might not be the technology but ourselves and our desire to embrace these machines with all-too-open arms. “If you look around,” he notes, “it’s all AI, AI, AI, and funders jump on board and keep on stimulating this push without bounds – and that, I think, can be counterproductive.” Rodriguez y Baena is also sceptical about what he calls “a holy grail of connectedness, smartness and AI assistance for all”, as promoted by “the Googles of this world”. As he explains, this affects funders, who affect researchers, who begin to write specific proposals because they want to please the funders. “We start to lose a little bit of perspective. And the job, at least of us academics, is to also act as the devil’s advocate; not only to jump in all, ‘Tech, tech, tech!’, but also to say, ‘Hey, wait a second, there are constraints’.”
HAL shuts Dave out of the spacecraft in 2001, but he doesn’t succeed in trapping him in space: Dave disconnects HAL and speeds on towards Jupiter. But it’s a lonely quest, without man or machine for company, and Dave slowly loses his mind in the acid dream of deep space, confronting ageing iterations of himself, and winding up as a giant foetus that floats towards Earth. Perhaps Kubrick’s famously ambiguous ending is a parable against pulling the plug on our robot friends; maybe Dave’s mistake was to assume HAL’s sentience. The HAL 9000 was just a robot after all, an aid in decision making, mining data for patterns. To use AI effectively, in the operating room and beyond, we have to understand its limits first.