The new study, published in BMJ Digital Health & AI, adds to a growing body of evidence on public views about health data sharing for AI research. It builds on that literature by placing public voices at its centre, offering a detailed analysis of how people weigh risks, benefits and trust when deciding whether to share their health data.
Lead author Rachel Kuo, NIHR Doctoral Research Fellow, said: 'AI is increasingly embedded in public consciousness, and there is rapid innovation in its use for healthcare. However, developing and testing AI requires access to large volumes of patient data, which raises concerns about confidentiality and security. Our aim was to understand how people think about sharing their data in the context of AI, and whether AI introduces particular fears or perceived benefits that shape those decisions.'
The researchers conducted eight online focus groups with 41 adults from across the UK, selected to reflect a range of ages, ethnicities, health experiences and socioeconomic backgrounds. Participants were invited to discuss realistic scenarios involving health data sharing for AI, including university-led research, large research databases, and projects involving commercial companies.
Perceived risks of health data sharing
Across the discussions, participants expressed cautious and conditional support for health data sharing. Anonymisation was widely seen as essential, but not foolproof, particularly for people with rare conditions or where large datasets are linked together. Many participants accepted that some level of risk was inevitable but wanted greater transparency about how data are protected and what would happen if things went wrong.
Trust varied depending on who was using the data. Universities and the NHS were generally seen as acting in the public interest, while the involvement of commercial organisations prompted greater scepticism. That scepticism softened, however, when commercial involvement was clearly linked to patient benefit and subject to strict oversight.
