AI is increasingly being introduced into healthcare systems as a support for clinical care and health management, raising questions about how such technologies should be designed, deployed, and governed. These issues were at the center of a recent AI for Good webinar organized in partnership with the Government of Catalonia and moderated by Noelia Martínez Huerga from the Digital Policies Secretariat (Government of Catalonia), which aimed to share knowledge, raise awareness, and encourage discussion on responsible approaches to AI adoption in healthcare.
Maria Galindo Garcia-Delgado, Secretary for Digital Policies of the Government of Catalonia, opened the session with remarks situating the discussion within a broader public policy context. She highlighted how AI is reshaping economies, societies, and communities, especially through health technology, and framed the regional strategy Catalonia AI 2030 as a guide for ethical and inclusive transformation.
“From Catalonia, we have been launching a new AI strategy, Catalonia AI 2030, with the aim of preserving ethics on AI and transforming our economy and our society, delivering an AI that leaves no one behind and that serves all the sectors of our economy,” Galindo Garcia-Delgado said.
Throughout the discussion, speakers emphasized a care-centered vision of AI in healthcare, grounded in ethical principles that extend beyond efficiency or operational performance to ensure trust and societal impact.
Operationalizing responsible AI in healthcare
Albert Sabater Coll, Associate Professor at the University of Girona and Director of the Observatory for Ethics in Artificial Intelligence of Catalonia, presented the PIO Model as a practical response to the complexity of responsible AI adoption in healthcare.
He explained that ethical and regulatory requirements are often experienced as fragmented and burdensome, particularly for health professionals and organizations deploying AI systems. The PIO Model was introduced as a structured checklist grounded in principles, indicators, and observables, designed to support self-assessment without obscuring complexity.
“The idea here is very much to help users know if the system complies with legal and ethical principles in a simple and straightforward manner, even though some of the things can be very complex,” Sabater Coll explained.
Tailored specifically to the health domain, the model targets professionals considering or already using AI systems. It is built around seven ethical principles: transparency; autonomy; sustainability; responsibility and accountability; privacy; security and non-maleficence; and justice and equity.
Assessment is carried out through more than 90 questions, producing three outputs: a snapshot of alignment with each principle, a risk matrix linking responses to probability and severity in relation to current European AI legislation, and a list of points for improvement justified through legal and ethical references.
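To make the structure of such an assessment concrete, here is a minimal sketch of how questionnaire responses could be aggregated into a per-principle snapshot and a probability-by-severity risk rating. The seven principle names come from the session; the scoring scheme, thresholds, and function names are illustrative assumptions, not the Observatory's actual methodology.

```python
# Illustrative sketch of a PIO-style self-assessment aggregation.
# The seven principles are those named in the webinar; the 0-1 scoring,
# the 1-3 probability/severity scales, and the thresholds are hypothetical.

PRINCIPLES = [
    "transparency", "autonomy", "sustainability",
    "responsibility and accountability", "privacy",
    "security and non-maleficence", "justice and equity",
]

def principle_snapshot(responses):
    """responses: list of (principle, score) pairs, score in [0, 1].
    Returns the mean score per principle (None if unanswered)."""
    grouped = {p: [] for p in PRINCIPLES}
    for principle, score in responses:
        grouped[principle].append(score)
    return {p: sum(s) / len(s) if s else None for p, s in grouped.items()}

def risk_cell(probability, severity):
    """Map a (probability, severity) pair, each rated 1-3,
    to a qualitative risk level for the risk matrix."""
    product = probability * severity
    if product >= 6:
        return "high"
    if product >= 3:
        return "medium"
    return "low"

# Example: a few hypothetical questionnaire answers.
answers = [("transparency", 0.8), ("transparency", 0.6), ("privacy", 0.9)]
scores = principle_snapshot(answers)
```

In a full implementation, each low-scoring answer would also carry the legal and ethical references that justify its appearance in the list of points for improvement.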
Sabater Coll placed particular emphasis on autonomy, arguing that AI systems should reinforce professional judgment rather than replace it. “We want to make sure that those professionals don’t get sidelined by the technology [and that they] acquire greater autonomy,” he said, underlining that responsibility remains firmly human.
Watch the full session to learn more about the PIO Model.
Beyond the checklist, the Observatory also provides supporting resources, including a legal catalog mapping legislation to ethical principles and questions, and contract templates structured around those same principles. These tools aim to make compliance more explicit and manageable, particularly at the contracting stage between providers, deployers, and users.
AI decision support in prehospital stroke care
The session then moved to a concrete implementation, with Juan González Fraile, Head of Innovation at ABCDx and co-founder of PickleTech, presenting a real use case developed in a highly sensitive context: prehospital ambulance care. He began by outlining the clinical challenge addressed by ABCDx:
“12 to 15 million people in the world have a stroke each year, […] a third of them are mortal and another third come with disabilities,” González Fraile explained.
Stroke care is defined by extreme time sensitivity, as severe ischemic strokes require treatment within a narrow window to avoid irreversible damage.
He described the standard stroke care pathway, beginning with emergency assessment and transfer to the nearest hospital with CT imaging, followed in some cases by redirection to a thrombectomy-capable center. The ABCDx solution aims to support earlier decision-making by equipping ambulance teams with a rapid diagnostic test combined with a mobile application integrating AI-based decision support.
The system does not replace imaging or clinical judgment. Instead, it provides additional information to guide early triage decisions while preserving established diagnostic steps. “The potential of the solution is saving time, which is directly translated into improving outcomes and helping a healthcare system to prioritize patients,” González Fraile said.
He then detailed how responsible AI principles directly shape development choices. Unlike data-rich applications, this solution operates under conditions of data scarcity, as datasets originate from clinical studies and novel diagnostic tests. This makes transparency in modeling and full control over scientific building blocks essential, alongside close collaboration with clinical domain experts.
Validation and evidence-building were presented as central requirements. Because the solution functions as an in vitro diagnostic device, deployment differs significantly from fast-updating machine learning systems. Regulatory certification and alignment with the AI Act constrain model evolution and require robust monitoring and human oversight.
User context also plays a decisive role. In ambulance settings, healthcare professionals must focus on patient care under pressure, so interfaces need to minimize interactions, present information clearly, and avoid misleading outputs. These constraints were directly linked to the ethical principles of the PIO Model.
Normalizing responsible AI through collaboration and evidence
Concluding the discussion, speakers reflected on how responsible AI practices can become standard rather than exceptional. Normalizing AI was presented as a matter of standardizing processes and embedding ethical and legal awareness into everyday practice, while ensuring that the tools designed to support responsibility do not create unnecessary burden.
At the same time, the discussion highlighted that evidence creation cannot happen in isolation from healthcare systems, and that regulatory uncertainty can be as challenging as regulation itself. In this context, public-private collaboration was identified as essential for supporting innovation, enabling funding pathways, and sustaining progress toward real-world deployment. Responsible practices were also framed as central to ensuring trust and broad societal benefit, with ethical reflection and compliance with existing legislation presented not as obstacles to innovation, but as elements that ultimately strengthen AI systems.