The rapid evolution of artificial intelligence (AI) is reshaping the global health landscape. Nowhere is this transformation more pressing than in mental health, where care systems face rising demand, constrained resources, and growing societal complexity. In this context, the first edition of the Forum on AI and Mental Health convened cross-sector experts to assess the state of the field, identify systemic gaps, and propose concrete directions for responsible AI integration. The forum's key findings affirm that existing mental health delivery models are ill-suited to the digital era: a coordinated, multi-level response is urgently required to ensure the ethical, effective, and inclusive deployment of AI technologies in mental health care.
AI in Mental Health: No Going Back
Artificial intelligence is rapidly altering the global health landscape, and mental health care stands at the forefront of this transformation. The inaugural edition of the Forum on AI and Mental Health brought together cross-sector experts to examine the implications of AI across policy, clinical practice, and patient engagement. The discussions confirmed what many in the field already sense: the current pace of technological and societal change has rendered past models insufficient. Mental health systems must adapt not incrementally but structurally. This calls for rethinking not only how care is delivered, but also who delivers it, under what frameworks, and with what ethical guardrails.
Throughout the event, participants reflected on a common theme: AI is not a tool operating in isolation; it is a systemic force that challenges the assumptions underpinning traditional care pathways. While digital platforms, teletherapy, and behavioral tracking apps have become increasingly embedded in the mental health ecosystem, their use remains uneven and often unregulated. At the institutional level, discussions revealed growing awareness that implementation cannot proceed without foundational change. Regulatory alignment, validated tools, and coherent governance structures are not optional—they are preconditions for trust, scale, and long-term impact.
Several recent policy initiatives highlight the momentum in this area. At the European level, the EU AI Act is shaping a common regulatory framework, while at the national level, countries like Spain are proposing legislation for responsible AI governance. These developments suggest a growing consensus that innovation must be balanced with oversight. But policy alone is not sufficient. The forum underscored the need for a broader ecosystem approach, one in which insurers, health providers, professional associations, and academic institutions collaborate to design systems that reflect both technological potential and clinical realities.
These realities were explored in depth during the forum's focus on clinical practice. Professionals described an environment marked by fragmentation, uneven regulation, and rapid digital encroachment. In some countries, psychotherapy remains largely unregulated, contributing to heterogeneous standards that make coherent AI integration difficult. Clinicians spoke openly about the pressures they face: on the one hand, being expected to incorporate digital tools into practice; on the other, lacking the training, protocols, or professional support to do so effectively. The demand for new competencies is clear. The discussion pointed to a hybrid role: professionals who are not only trained in therapeutic methodologies but also literate in digital systems, data interpretation, and ethical AI use. This role does not yet exist at scale, but it is urgently needed.
At the same time, many participants voiced concerns about the growing use of AI-enabled mental health applications in the absence of clinical supervision. In settings with high demand and limited access, these tools can offer important support. Yet questions remain regarding accountability, clinical thresholds for escalation, and the ownership and interpretation of sensitive personal data. There is widespread consensus that AI should support—not replace—the therapeutic alliance. In the absence of adequate safeguards, automation risks undermining the trust and nuance that are central to mental health care.
Despite these challenges, the forum showcased promising tools that suggest what a more adaptive system might look like. Examples included platforms for structured self-monitoring, cognitive-behavioral reinforcement apps, and AI-supported triage systems. Some of these tools are already being used by patients independently, reflecting a growing preference for accessible, on-demand support. Yet their effectiveness depends not only on the sophistication of the technology but also on how it is integrated into care models, data systems, and professional workflows. Participants stressed the need for human-in-the-loop approaches that allow clinicians to interpret and act on digital insights within a broader therapeutic context.
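To make the human-in-the-loop principle concrete, the sketch below shows one way an AI-supported triage flow could route screening results to a clinician instead of acting on them automatically. It is a minimal, purely hypothetical illustration in Python: the class names, thresholds, and routing rules are assumptions made for this example, not features of any tool presented at the forum.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    SELF_GUIDED = "self-guided support"      # low risk: app-based resources
    CLINICIAN_REVIEW = "clinician review"    # elevated risk or uncertain model
    URGENT_ESCALATION = "urgent escalation"  # immediate human follow-up


@dataclass
class TriageSignal:
    """Hypothetical output of an AI screening model."""
    risk_score: float  # 0.0 (low risk) to 1.0 (high risk)
    confidence: float  # model's own confidence in that score


def triage(signal: TriageSignal,
           review_threshold: float = 0.4,
           urgent_threshold: float = 0.8,
           min_confidence: float = 0.7) -> Route:
    """Route a screening result while keeping a human in the loop.

    The model never escalates or discharges on its own: any elevated
    score, and any score the model is unsure about, goes to a clinician
    who interprets it within the broader therapeutic context. All
    thresholds here are illustrative placeholders.
    """
    if signal.risk_score >= urgent_threshold:
        return Route.URGENT_ESCALATION
    if signal.risk_score >= review_threshold or signal.confidence < min_confidence:
        return Route.CLINICIAN_REVIEW
    return Route.SELF_GUIDED


if __name__ == "__main__":
    print(triage(TriageSignal(risk_score=0.3, confidence=0.9)))   # Route.SELF_GUIDED
    print(triage(TriageSignal(risk_score=0.3, confidence=0.5)))   # Route.CLINICIAN_REVIEW
    print(triage(TriageSignal(risk_score=0.9, confidence=0.95)))  # Route.URGENT_ESCALATION
```

The design choice worth noting is that low model confidence, not just high risk, triggers clinician review: automation handles only the cases it can handle well, and everything ambiguous defaults to human judgment.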
The event emphasized that the integration of AI will also reshape business models, funding structures, and professional identity. Insurers and providers are beginning to adopt hybrid approaches that embed AI tools within their service offerings, but questions remain about reimbursement, responsibility, and long-term sustainability. To avoid fragmentation, these efforts must be accompanied by systemic coordination and shared standards. Rather than treating innovation as a series of isolated pilots, the sector must move toward platform-level thinking—establishing interoperable, secure, and evidence-based systems that can evolve alongside clinical and societal needs.
The forum concluded with a shared recognition of the need for action. As AI continues to shape behavior, relationships, and service delivery, mental health professionals, institutions, and regulators must engage in a joint effort to define the future of care. This includes developing formal competency frameworks for digital mental health roles, promoting the use of clinically validated platforms, and building mechanisms for cross-sector collaboration. There is a need for shared guidance: a resource that synthesizes ethical considerations, practical applications, and structural challenges into a roadmap that can be used by health systems, professional bodies, and policymakers alike.
What this first edition of the forum made clear is that AI in mental health is not an abstract future—it is a present reality demanding a deliberate and collective response. The question is no longer whether AI will play a role in mental health care, but how to ensure that its integration enhances access, safeguards ethical principles, and strengthens the therapeutic process. In a society where smartphones, wearables, and AI-driven platforms already capture psychological data at scale, the opportunity is immense—but so is the responsibility. As the forum reminded us, AI's roots lie in psychological science, from behavioral conditioning to cognitive modeling. Now, as its influence expands, it is up to the mental health field to reclaim its role—not as a passive recipient of innovation, but as a co-architect of the systems that will define the future of care.
As one of the contributing partners, the Young AI Leaders Madrid Hub was proud to represent the voice of young people in the field of AI and to help facilitate an open, forward-looking dialogue. Our involvement reflects a recognition that today's AI governance questions require perspectives that span generations, disciplines, and professional domains. Mental health, in particular, sits at the intersection of ethics, vulnerability, and digital innovation—making it a defining issue for responsible AI leadership.