AI is risky but rewarding
Gastein this year has overflowed with optimism about the opportunities presented by the use of AI in healthcare. But many people have deep concerns.
This session, organised by Acumen Public Affairs, was a first for the EHFG: a ‘Cambridge Union’-style debate – there is no need to talk of that other place. This means that two speakers propose a motion, which is voted on by the floor. They then speak in favour for ten minutes, and two other speakers respond and argue against the motion. The debate is then opened to questions and comments from the floor, to which both sides get a chance to respond. Finally, the proposing side get five minutes to give their closing remarks, followed by the opposing side, and there is a second vote.

Brian O’Connor and Rachel Dunscombe proposed the motion: ‘AI in healthcare: the rewards outweigh the risks’. Tamsin Rose and Martin McKee spoke against, and David Rose and his very loud bell kept everyone to time.
Before the debate, there were 28 votes in favour of the motion, and 11 (including me!) against.
The debate was lively and detailed, filled with rhetorical flourishes, impromptu history lessons and – inevitably, given all speakers were from the British Isles – Brexit. It was great fun.
Rachel highlighted the benefits we are already seeing from the use of AI in healthcare. At Salford Royal, the hospital Rachel runs, AI in the form of clinical decision support and triage tools is saving lives and improving quality of life. AI can help staff too, in particular giving clinical staff more time to focus on patients by automating routine tasks.
Brian argued that using AI in healthcare is like crossing a road. Yes, an oncoming car could kill us, but if we are aware and careful then we can clearly cross safely. We need to be aware of the risks of AI, but with careful design and use (and regulation) we can remove or mitigate these risks. And not using AI presents risks too.
Rachel summed up. “We have a duty of care to use AI. To fail to do so is to harm our patients.”
A loud bell rings
Martin suggested we restrict the discussion to highly complex AI: generally black-box systems that aim to do things like diagnose cancer or suggest treatment options. Understandably, given his desire to win the debate, he felt ‘simple’ approaches such as basic algorithms or regression models were not really what people meant by AI.
Tamsin focused on the risks. There are many. In my opinion, the three key issues are:
- Garbage-in, garbage-out. If the data are wrong, the AI will get things wrong. I have personally seen datasets in which one patient has two separate hysterectomies recorded – an impossibility that should worry us all.
- Security. What happens when someone hacks an AI?
- Denial of responsibility. Who is responsible if something goes wrong?
Martin closed in lawyerly fashion. “Back to the motion: ‘AI in healthcare: the rewards outweigh the risks.’ Not ‘might’, not ‘will’. As worded, I put it to you that the motion is not proven.”
A loud bell rings
The motion was carried, with a final vote of 28 for and 20 against. The room had grown during the debate.
Personally, I voted against. I agree with Martin that when we talk about AI we mean things like IBM Watson or Google DeepMind, rather than simpler tools. Yet I felt the room was largely in agreement: we all thought there were risks and benefits, and we all wanted data to be used to help improve healthcare.
I left optimistic about the future of AI in healthcare.
This blog was written by the Young Gasteiner Matthew Barclay