27 October, 2025

AI in Healthcare: Building Global Trust Through Smarter Regulation

The conversation around artificial intelligence in healthcare is shifting from fascination with its potential to an urgent need to shape how it is governed. At the recent UNGA 2025 roundtable “AI in Healthcare: Why Regulation Needs a Global Lens,” experts from across sectors explored a central question: what governance models can unlock AI’s potential without undermining clinical standards?

The answer, it turns out, requires moving beyond traditional regulatory frameworks towards anticipatory governance that enables rather than constrains.

AI as a catalyst for better, faster, and fairer healthcare

Across regions, AI is already transforming healthcare delivery. From early detection of breast cancer and diabetic retinopathy, through rapid triage in emergency care, to predictive analytics for epidemic preparedness, AI enhances decision-making across health systems. It enables clinicians to detect disease earlier, design more personalised treatments, and optimise hospital operations, while extending care to underserved populations through telehealth and mobile diagnostics.

Yet these gains are only meaningful if systems are inclusive, validated, and trusted. As one participant noted, ‘AI should augment, not replace, human judgment.’ This framed the day’s discussions: preserving the human touch in a data-driven future.

Why healthcare requires a distinct regulatory lens

Healthcare emerged as a unique regulatory frontier: too high stakes for self-regulation, too dynamic for static rules. Participants identified a growing gap between global AI frameworks and local implementation realities, particularly in low- and middle-income countries.

Three concerns stood out:

1. Data privacy and cross-border flows remain unresolved in many jurisdictions, creating barriers to innovation and collaboration.
2. Algorithmic bias can perpetuate inequities when training data excludes certain populations, undermining the promise of universal healthcare access.
3. Clinician deskilling through overreliance on automated recommendations could erode clinical reasoning over time.

Addressing these challenges requires risk-based, adaptive governance models capable of distinguishing between AI that informs medical judgment and AI that drives it. Decision-support tools, predictive diagnostics, and autonomous systems each carry different risk profiles and demand tailored oversight.

The consensus: regulation must evolve from control to enablement, creating safeguards that build confidence without stifling innovation, while putting patients first at all times.

A global perspective for local realities

AI’s benefits will remain uneven without international cooperation. Participants highlighted the role of organisations like the World Health Organization (WHO) and regional health agencies in setting guiding principles for ethics, safety, and transparency. Frameworks such as WHO’s Ethics and Governance of Artificial Intelligence for Health, the Pan American Health Organization’s (PAHO) Digital Health Strategy, and the EU AI Act serve not as templates to copy, but as anchors for dialogue.

For many emerging economies, the challenge extends beyond regulatory design to institutional capacity, ensuring that health authorities have the tools, skills, and partnerships to evaluate AI solutions rigorously. Development banks and international donors were urged to include digital health governance as a financing priority, recognising that sound regulation is a precondition for sustainable innovation.

From frameworks to foresight

Beyond compliance, discussions turned to adaptive or ‘learning’ AI systems that evolve continuously through exposure to new data. Such models challenge traditional approval processes since performance may change after authorisation.

Participants stressed the need for continuous validation and post-market monitoring, coupled with ethical oversight boards to review real-world performance. Concepts borrowed from trade and diplomacy, such as foresight exercises and scenario planning, offer ways for regulators to anticipate technological shifts instead of merely reacting to them.

This approach moves health regulation towards what one expert called ‘anticipatory governance’: flexible, forward-looking, and grounded in collaboration between policymakers, developers, and clinicians.

Building trust through collaboration

Trust emerged as the currency of AI in healthcare. Participants discussed regulatory sandboxes – controlled environments where innovators and regulators test AI applications together – as practical tools to build mutual understanding and accelerate responsible adoption.

Cross-sectoral and cross-border cooperation were recurring themes. Whether through data-sharing agreements, joint validation studies, or harmonised ethical standards, the message was consistent: no country or company can govern AI in isolation.

Initiatives such as voluntary guidelines, capacity-building programmes, and south-south collaboration platforms were cited as essential for democratising AI benefits. Experimentation, not competition, will define the next phase of governance.

The path forward

The roundtable reaffirmed that AI’s promise in healthcare is real, but its success depends on how wisely it is governed. As systems increasingly rely on algorithms to inform diagnosis, treatment, and resource allocation, transparent, equitable, and human-centred regulation becomes urgent.

Critically, this cannot be achieved in silos. Effective AI governance requires bringing all stakeholders to the table, including pharmaceutical companies, medical device manufacturers, patients and patient advocates, healthcare workers, researchers, payers, and government agencies. Each brings essential perspectives that shape how AI is developed, validated, deployed, and monitored. The roundtable itself demonstrated the value of convening spaces where diverse voices can engage candidly, test assumptions, and co-create solutions. These dialogues are not one-off events but ongoing platforms for shared learning and collaborative problem-solving.

At Access Partnership, we see regulators becoming architects of digital health transformation. Their mandate now extends beyond ensuring safety; they must build ecosystems of trust where innovation and accountability coexist, and where all stakeholders feel invested in shaping outcomes.

Translating this vision into practice requires political will, institutional innovation, and sustained multistakeholder dialogue. Health systems that get this right won’t just adopt AI; they’ll harness it to deliver better outcomes for patients worldwide.


The UNGA roundtable “AI in Healthcare: Why Regulation Needs a Global Lens” was held under the Chatham House Rule. This article draws on themes and insights discussed, without attributing statements to individual participants.

