Vita ex machina

Walt Disney’s feature-length animated film Big Hero 6 (2014) tells the story of a 14-year-old robotics prodigy named Hiro and his efforts to save the world from an enigmatic villain who cunningly repurposes one of Hiro’s robotic inventions for nefarious ends. Along the way, Hiro routinely finds himself in life-threatening situations, only to be saved by the larger-than-life inflatable robot Baymax, a personal healthcare companion designed by Hiro’s older brother.

Baymax is equipped with a plethora of scanners and sensors that allow him to assess Hiro’s general health, vital signs and hormone levels, and even less quantifiable properties like his mood and emotions.


It’s highly unlikely that anything remotely resembling Baymax will ever be available in your or my lifetime, but the film got me thinking. What sort of role will machines play in the careers of medical students who graduate in 2017? In the impending era of automation-driven mass unemployment, are doctors’ jobs safe?

Many of today’s medical professionals certainly seem to think so. One of Queensland’s leading pathologists, Dr Diane Payton, was recently asked whether she feared the looming storm of machine learning-based approaches that stand poised to replace her in many respects.

Her response was delivered with a confidence that could only be the product of years of practice in her field: AI-driven technologies are certainly becoming more prominent – and more useful – in many areas of medicine, but until we have a technology that can reliably find tumours in its own right (for example), they are likely to remain supplementary tools rather than outright substitutes.

She certainly has a point. These burgeoning technologies are still riddled with flaws that may take years to overcome before they are clinically relevant.

Machine learning evolved from the study of artificial intelligence (AI) and pattern recognition, and is largely concerned with giving computers the ability to learn what to do without being explicitly told how to do it.

Other areas of computer science revolve around a reductionist approach to problem solving: crunching the numbers down exhaustively until you have an expression simple enough to be considered a solution. Machine learning, however, focuses less on this exhaustive, calculation-based approach. Instead, it attempts to leverage the very human skills of pattern recognition and intuition. This is in itself a very powerful idea: what if we could capture, distribute and manipulate the intuition of every seasoned pathologist in the entire world?
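To make that contrast concrete, here is a minimal sketch of the “learning from examples” idea using Python’s scikit-learn library. The features, numbers and labels are invented purely for illustration – the point is only that no diagnostic rules are ever written by hand.

```python
# Toy illustration: instead of hand-coding rules for what makes a lesion
# suspicious, we hand the model labelled examples and let it infer the
# pattern itself. All features, values and labels below are invented.
from sklearn.ensemble import RandomForestClassifier

# Each example: [diameter_mm, asymmetry_score, colour_variation]
X_train = [
    [3.0, 0.1, 0.2],
    [4.5, 0.2, 0.1],
    [8.0, 0.8, 0.9],
    [7.5, 0.7, 0.8],
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)  # the "learning" step: no explicit rules written by us

# The model now applies whatever pattern it inferred to a new, unseen case
print(model.predict([[6.8, 0.75, 0.85]]))
```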

While this might sound a bit like a new-release sci-fi film, it’s in fact closer to reality than we might think.


Earlier this year, Stanford University researchers published a study in Nature detailing the use of a machine learning approach to diagnose specific skin cancers.

They found that after training their program on more than 129,000 clinical images, using only pixels and biopsy-verified disease labels as inputs, it was able to visually diagnose skin cancers with a level of aptitude that matched 21 board-certified dermatologists. This is a landmark proof-of-concept study with huge implications when you consider the possibilities for universal diagnostic healthcare via the ubiquitous smartphone.
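For the curious, the general recipe behind work like this is transfer learning: take a network already pre-trained on millions of everyday images and re-train its final layer on biopsy-labelled lesion photographs. The PyTorch sketch below illustrates that recipe only – the architecture, class count and training details are stand-ins, not the Stanford team’s actual setup.

```python
# Rough sketch of the transfer-learning recipe: start from a network that
# already "knows" generic visual features, then swap its final classifier
# for a small lesion classifier. Details here are placeholders.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

NUM_CLASSES = 2  # e.g. benign vs malignant; real studies use far more labels

# Load a model pre-trained on ImageNet (everyday photographs)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final ImageNet layer with a new, untrained lesion classifier
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# This new layer (or, optionally, the whole network) is then trained on
# biopsy-labelled images with a standard loss and optimiser
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```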

However, this study also highlights one of the real challenges with this technology. Whereas a human may develop their understanding of the appearance of the various skin cancers by seeing on the order of 15,000 lesions over their entire career, it might take these programs more than 100,000 images just to differentiate two types of growths.

That’s 100,000 biopsies taken, and more than 100,000 images created – a frankly gargantuan task when you consider the millions of different disease processes that would need to be catalogued by anyone who seeks to develop an all-purpose medical technology akin to Baymax.

Regardless of the challenges in front of this technology, the writing is on the wall. Automation-based technologies are well and truly on their way into healthcare in Australia and beyond. At the Sunshine Coast University Hospital for example, which opened in 2017, there’s a fleet of 16 self-driven robotic cars that transport food, linen and waste around the 165,000 square-metre facility.

Admittedly, they perform a more routine task than diagnosing skin cancers or measuring blood pressure from afar, but the trend is clear. Robots are on the way.


But what can’t robots do well? Where do humans fit into this futuristic robotic dystopia? Unsurprisingly, when the Sunshine Coast University Hospital announced plans to introduce the $200,000-a-pop robotic cars, many patients reported concerns that they would begin to lose the ‘human touch’ that a twice-daily visit from hospital staff provided.

For a patient enduring a long-term hospital stay under arduous conditions, it’s easy to underestimate the value they might place on these uniquely human interactions. In response to these concerns, the hospital devised a solution that combines the strengths of the humans and the machines: the robotic cars would do most of the ‘heavy lifting’ as far as transport from department to department is concerned, while the actual delivery to patients would be completed by fully real and fully human hospital staff.

The University Hospital wasn’t the first organisation to combine the merits of humans and machines, either. Payment-processing mega-company PayPal famously went public last year with their approach to weeding out the fraudulent transactions among the $11,000 in payments they process every second. The sheer number of payments they handle makes this too great a task for a human team alone. Instead, their fraud analytics team spends its time teaching their AI what sorts of patterns to look for, maximising its effectiveness in detecting fraudulent payments and, importantly, clearing false alarms as well. This approach combines the “brute force” capabilities of the machine with the potent human ability to identify new patterns and correctly interpret the motivations of fellow humans.
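Stripped to its bones, that human-in-the-loop arrangement looks something like the sketch below. The model, features and thresholds are invented for illustration; the point is the loop itself – the machine screens everything, and the analyst’s verdicts on flagged cases become fresh training data.

```python
# Human-in-the-loop sketch: the model does the brute-force screening, and
# analyst decisions on flagged transactions are fed back in as new labels.
# All data, features and thresholds here are invented for illustration.
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
X_labelled = [[20.0, 1], [15.0, 0], [900.0, 1], [5.0, 0]]  # [amount, new_device?]
y_labelled = [0, 0, 1, 0]                                   # 1 = fraud
model.fit(X_labelled, y_labelled)

def screen(transactions, threshold=0.5):
    """Machine pass: flag anything the model thinks is probably fraud."""
    probs = model.predict_proba(transactions)[:, 1]
    return [(tx, p) for tx, p in zip(transactions, probs) if p >= threshold]

# Human pass: an analyst reviews the flagged cases, confirms real fraud and
# clears the false alarms; those verdicts become new training labels.
flagged = screen([[850.0, 1], [12.0, 0]])
for tx, prob in flagged:
    analyst_verdict = 1  # stand-in for the analyst's decision on this case
    X_labelled.append(tx)
    y_labelled.append(analyst_verdict)

model.fit(X_labelled, y_labelled)  # the loop closes: human insight updates the machine
```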

In light of these case studies, it’s very hard to imagine a scenario in which machines alone will grossly outperform a team made up of both humans and machines in clinical medicine. And perhaps this should come intuitively – we know very well that doctors are vulnerable to the challenges of their job: fatigue, extreme stress and the unavoidable biases that come with being human, amongst countless others. Machine learning and AI can help compensate for these shortcomings.

On the other hand, how refined would our technologies have to be before they could perform the distinctively human tasks of compelling a stubborn patient to see the error of their ways, of delivering a terminal or life-changing diagnosis, or of having end-of-life planning conversations with a patient’s loved ones? These are the tough scenarios, inundated with intangibles, that demand a markedly ‘human touch’: something even the most seasoned medical professionals struggle with.

As excited as I am to see the inevitable rise of machines in medicine and the ways they can make our lives easier, until you show me a machine capable of real compassion and empathy, I won’t be packing my bags for another career.

Josh Case is a technology enthusiast and medical student at the University of Queensland, based at the Royal Brisbane and Women’s Hospital, Herston.

The views and opinions expressed in this article are those of the author and do not necessarily represent those of the Doctus Project.