On a sunny day in July in the midst of British summer, I teamed up with the brilliant Dr. Jordan A. Parsons from the University of Birmingham to put together a workshop on Artificial Intelligence (AI) in healthcare, exploring the ethical, legal, and social aspects. The event brought together a diverse group of individuals – from academia, policy, healthcare and the general public – all eager to explore these topics, with a particular emphasis on safeguarding within the framework of the UK’s National Health Service (NHS).
I introduced the workshop and delved into the ethical considerations of AI and robotics in healthcare. As AI becomes more integrated into healthcare, it is important that it remains transparent, accountable, and aligned with human values. This topic resonated with me deeply, as I’ve always been an advocate for fairness and inclusivity in digital health.
“AI’s efficiency can sometimes reduce patient-professional interactions, potentially undermining safeguarding opportunities.”
Dr. Parsons added another layer to the discussion by focusing on safeguarding. He emphasised that safeguarding in healthcare is not just about the technical functionality of AI but also about protecting adults from abuse and neglect. Under the Care Act 2014, safeguarding is a duty for healthcare professionals, local authorities, and other public sector roles. He reminded us that AI’s efficiency can sometimes reduce patient-professional interactions, potentially undermining safeguarding opportunities.
Mary Amanuel from NHS England spoke about the democratisation of AI, which I found particularly inspiring. The NHS Python Community and initiatives like the AI in Health Hackathon 2024 are pioneering efforts to ensure AI development is transparent and collaborative. It’s reassuring to see that the NHS is committed to making AI an augmentative tool rather than a replacement for clinicians.
The sociotechnical perspective shared by Dr. Carrie Heitmeyer from the Government Office for Science was another eye-opener. AI systems, she argued, mirror societal hierarchies and values, and so their development should be inclusive and reflective of broader societal needs.
“It was a thought-provoking exercise to consider both the optimistic and pessimistic outcomes”
During the breakout sessions, we discussed the future of safeguarding within the NHS. It was a thought-provoking exercise to consider both the optimistic and pessimistic outcomes. One key takeaway for me was how important it is to involve diverse stakeholders in these conversations to ensure comprehensive and effective safeguarding measures.
The workshop concluded on a hopeful note, with participants now developing a further funding bid to build a comprehensive understanding of how different stakeholders view the impact of AI in healthcare safeguarding and how to mitigate potential issues. This interdisciplinary approach is integral to creating ethical and effective AI systems. The knowledge gained from hosting the workshop was showcased at the Bath Clinical Advisory Group and the Institute of Medical Ethics National Conference 2024.
Reflecting on the event, one participant’s comment stood out: they noted it was the first time they had considered the need to obtain permission from an adult experiencing abuse before reporting safeguarding concerns. This is a crucial aspect often overlooked in computational models, underscoring the importance of interdisciplinary dialogue.
Overall, the workshop was a powerful reminder of the potential of AI in healthcare, tempered by the necessity of ethical considerations and safeguarding duties to ensure that these advancements benefit everyone.
By Matimba Swana
Matimba is a PhD student in the School of Engineering Mathematics & Technology and the Centre for Ethics in Medicine at the University of Bristol
Artist Hannah Broadway provided live illustrations during the session, capturing the essence and discussions visually.