Alliance Alert: A new article published in JAMA Psychiatry highlights the growing role that artificial intelligence (AI) may play in the future of mental health services, and the important questions that come with it. As the authors note, AI has the potential to transform how mental health conditions are understood, diagnosed, and treated, but it also raises significant concerns around accuracy, bias, privacy, and accountability.
AI tools such as chatbots, predictive models, and digital monitoring systems could help clinicians better understand patterns in mental health symptoms and integrate large amounts of data, from clinical records to wearable devices, to support wellbeing. However, the article emphasizes that these technologies must be used carefully, particularly when working with vulnerable populations. AI systems trained on incomplete or biased datasets could reinforce disparities in access to services or produce misleading assessments without proper oversight.
The authors also highlight the importance of strong guardrails around how AI is used in mental health, including clear regulatory frameworks, improved transparency in how AI models operate, and safeguards to protect sensitive personal data. As AI becomes more integrated into health systems and everyday technology, it will be essential for policymakers, providers, advocates, and people with lived experience to help shape how these tools are developed and used.
Recognizing the growing impact of AI on behavioral health services and society more broadly, the Alliance’s upcoming Executive Seminar will feature workshops specifically focused on artificial intelligence and mental health. These sessions will explore what system leaders, providers, advocates, and policymakers need to understand as AI becomes more prevalent in both clinical settings and everyday life.
Participants will have the opportunity to discuss the opportunities AI may bring for improving services, as well as the ethical, privacy, and equity challenges that must be addressed to ensure these technologies support recovery-oriented, person-centered systems of support.
We encourage anyone interested in learning more about how emerging technologies like AI are shaping the future of mental health services to register for the Alliance’s upcoming Executive Seminar. The event will provide an important space for leaders across the field to engage in thoughtful conversations about innovation, policy, and the future of mental health services.
Some organizations are already using AI to strengthen compliance, analyze data faster, and anticipate service needs, while others risk falling behind. Across healthcare, leaders are discovering that competitive advantage may soon depend on how thoughtfully technology is introduced into care environments. The future belongs to those who lead innovation rather than react to it. Join us at the Executive Seminar as we confront the tough questions, embrace bold leadership, and help shape what comes next for recovery and behavioral health.
Register Today:

AI in Mental Health Care—Opportunities and Risks Beyond Large Language Models
By Loran Knol, Andre F. Marquand, and Nita Farahany | JAMA Psychiatry | March 11, 2026
Few people disagree that artificial intelligence (AI) will have—or already has had—a profound impact on mental health care. Although AI holds great promise, some express concerns about its accuracy, safety, and privacy that pose fundamental questions about professional liability and the standard of care. These concerns become even more pressing when considering vulnerable groups, such as individuals with mental illness, because publicly available AI systems (eg, chatbots) are not always able to properly assess mental health risks. Maximizing AI’s utility while addressing these concerns will require a multifaceted approach.
First, it is imperative to understand the risks AI poses, as this understanding will form the basis for what uses of AI are deemed acceptable. To this end, it can be helpful to differentiate AI applications by their affordances. This term, which originates in psychology and was subsequently adopted by robotics, refers to the action possibilities in an environment where an agent is deployed. In the context of AI in health care, the term is repurposed to refer to the capabilities that an AI application has in relation to its environment. This action-environment interaction can be characterized by the 5 W’s (who, what, where, when, and why), which prompt the following questions: (1) Who is in the environment? Who are the people the AI interacts with, and how are they affected by it? (2) What actions can the AI take in this environment? (3) When are these actions executed? Only with clinician oversight, or also without? (4) Where is the AI situated? On a centralized server, or perhaps on the patient’s smartphone? (5) Why is the AI taking action? What information does it act on? Stakeholders should jointly answer these questions to determine the risks of AI applications and judge their acceptability.
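To make the framework concrete, a review team could record the 5 W’s for each candidate application as a simple structured checklist. The Python sketch below is purely illustrative: the AffordanceProfile class, its field names, and the chatbot example are hypothetical, not notation from the article.

```python
from dataclasses import dataclass, field

@dataclass
class AffordanceProfile:
    """Hypothetical checklist recording an AI application's 5 W's."""
    who: list[str]           # people the AI interacts with or affects
    what: list[str]          # actions the AI can take in its environment
    when: str                # conditions under which actions execute
    where: str               # where the AI is situated
    why: list[str]           # information the AI acts on
    identified_risks: list[str] = field(default_factory=list)

# Invented example: a general-purpose consumer mental health chatbot.
chatbot = AffordanceProfile(
    who=["general public, including people in crisis"],
    what=["generate free-text advice", "suggest coping strategies"],
    when="any time, without clinician oversight",
    where="on the user's smartphone, via a vendor's cloud service",
    why=["user-entered text only"],
    identified_risks=["may misjudge mental health risk in vulnerable users"],
)
print(chatbot.identified_risks)
```

Writing the answers down in one structured place makes it easier for stakeholders to review them jointly, as the authors recommend.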
Once AI risks have been identified, they can be better managed based on how an AI application is constructed. Using large language models (LLMs), which are known to hallucinate, as an example, an important observation is that current LLMs operate without a world model. World models have long been part of the AI literature, but their exact definitions vary widely. Here, we define a world model as an AI’s internal representation of its environment through symbols. These symbols form the basis of a logical language that allows the AI application to differentiate truth from falsehood in its environment and flag contradictions. Although such reasoning engines have been around for decades, they fell out of fashion due to poor scalability; current approaches (including LLMs) instead extract associations from large datasets through neural networks. More recently, however, interest in symbolic models has been renewed by combining them with the high-throughput capabilities of large neural networks, resulting in so-called neuro-symbolic models. These models promise easier integration of domain knowledge and improved explainability, which are critical steps toward the development of safe and trustworthy AI. If AI is to be incorporated into everyday mental health care, this research direction should be explored further.
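As a loose illustration of the neuro-symbolic idea, and not the authors’ architecture, the following Python sketch pairs a stand-in for a neural component’s scored outputs with a symbolic layer of hard domain rules that flags contradictions for human review. The findings, scores, and rules are all invented.

```python
# Toy neuro-symbolic check: a "neural" component emits candidate findings
# with confidence scores; a symbolic layer holds domain rules and flags
# any set of asserted findings that contradicts them.

neural_output = {            # stand-in for a neural model's predictions
    "reports_low_mood": 0.91,
    "reports_elevated_mood": 0.88,   # contradictory pair, both scored high
    "sleep_disruption": 0.75,
}

# Symbolic world model: invented pairs of findings treated as
# mutually exclusive for the purposes of this illustration.
mutually_exclusive = [
    ("reports_low_mood", "reports_elevated_mood"),
]

def flag_contradictions(predictions, rules, threshold=0.5):
    """Return the rule violations among predictions above the threshold."""
    asserted = {name for name, p in predictions.items() if p >= threshold}
    return [pair for pair in rules if set(pair) <= asserted]

violations = flag_contradictions(neural_output, mutually_exclusive)
if violations:
    print("Contradiction flagged for human review:", violations)
```

The point is the division of labor: the neural part supplies scalable pattern recognition, while the symbolic part encodes domain knowledge that the system is not allowed to violate silently.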
Equally important are the data on which AI is trained. It is well known that societal biases are often reflected in datasets and can be picked up by any AI model, becoming part of its predictions and potentially even being amplified. Although there is extensive literature on fairness in machine learning, several issues require special attention in the context of mental health care. For instance, racial and ethnic minority groups experience increased stigma surrounding psychopathology and therefore face service barriers. This might exacerbate their underrepresentation in the datasets used for training. Moreover, individuals with the most impaired mental health functioning are also the least likely to participate in the studies that contribute to training datasets. More research is needed on how model fairness can be attained in mental health care.
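A simplistic but concrete starting point for the representation problem described above is auditing subgroup counts in a training dataset before any model is fit. The sketch below uses invented groups, counts, and an arbitrary audit threshold.

```python
from collections import Counter

# Invented example: demographic group label for each training record.
training_groups = ["A"] * 900 + ["B"] * 80 + ["C"] * 20

counts = Counter(training_groups)
total = sum(counts.values())

MIN_SHARE = 0.10  # arbitrary threshold chosen for illustration only
for group, n in sorted(counts.items()):
    share = n / total
    status = "UNDERREPRESENTED" if share < MIN_SHARE else "ok"
    print(f"group {group}: {n} records ({share:.1%}) {status}")
```

Such an audit does not guarantee fairness, but it surfaces the underrepresentation that stigma and service barriers can produce before it propagates into a model’s predictions.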
Additionally, the data should be sourced and handled in a safe and ethical manner. There is growing interest in using data collected continuously in naturalistic settings, such as through digital wearables and smartphones, to provide real-time indices of cognition, social behavior, and psychopathology. The primary philosophy is to collect large amounts of multimodal data about single individuals rather than from big cohorts, leading to a narrow but deep focus on the individual. Such dense sampling strategies often yield an abundance of heterogeneous data that are hard to interpret, making AI-powered analysis a helpful tool for deciphering any underlying patterns. However, it is important to note that the sensor data on which such a system would be based are not always well protected by law because they blur the line between consumer and medical grade, calling for revised data protection categories and advanced cybersecurity measures.
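To give a flavor of why such dense, heterogeneous streams call for automated analysis, the sketch below simulates two mismatched sensor streams and aligns them onto a common daily grid, the kind of preprocessing that would precede any AI-powered pattern detection. The timestamps and values are simulated, not real patient data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical raw streams for one week: heart rate sampled every minute,
# plus irregularly timed phone-unlock events. All values are simulated.
minutes = pd.date_range("2026-03-01", periods=7 * 24 * 60, freq="min")
heart_rate = pd.Series(rng.normal(72, 8, len(minutes)), index=minutes)

unlock_times = pd.DatetimeIndex(
    rng.choice(minutes, size=500, replace=False)
).sort_values()
unlocks = pd.Series(1, index=unlock_times)

# Align both streams onto a common daily grid: one row per day.
daily = pd.DataFrame({
    "hr_mean": heart_rate.resample("D").mean(),
    "unlock_count": unlocks.resample("D").sum(),
})
print(daily)
```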
Finally, risks should be managed through regulatory and legal means. Risk-based regulations are already in place in the European Union through its AI Act, which puts greater constraints on systems carrying higher risks and establishes clear obligations for developers, deployers, and users. In the US, stakeholders must currently navigate a complex landscape of state laws, US Food and Drug Administration guidance for software as a medical device, Health Insurance Portability and Accountability Act requirements, and potential tort liability under existing negligence and product liability theories. A prominent example is California’s Transparency in Frontier Artificial Intelligence Act, which was recently signed into law and mandates that large AI developers report critical safety incidents and provide transparency on how they evaluate the risks of their systems. Notably, the bill features provisions that allow federal regulations to stand in for state regulations if they impose comparable or stricter reporting standards, implicitly adding to the call for federal AI regulation.
By integrating the recommendations above, AI can be a safe but powerful tool for mental health care. For instance, consider an AI system with access to both a large corpus of medical knowledge and a narrow but deep set of individualized patient data. Such a model would be well positioned to integrate biological, psychological, and social data into a representation that resembles Engel’s biopsychosocial model. The model that does this integration could be based on LLMs, but a neuro-symbolic model might be better suited to aligning the interpretation of individual-level data with what is already known from the medical corpus. The risks of this scenario could be indexed using the 5 W’s. For example, we could allow both patient and clinician (who) to query the AI application for diagnostic information (what) based on patient data (why) to help them understand the patient’s situation, but restrict access to sessions in which a clinician can supervise and guide the patient’s interpretation (when). That restriction suggests situating the AI on a server rather than on the patient’s smartphone (where). Potential risks then include a misleading diagnosis by the AI application and the transmission of sensitive patient data from the patient’s wearables to the server.
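Using the hypothetical AffordanceProfile sketch from earlier, this worked example could be recorded as follows; the field values simply paraphrase the paragraph above.

```python
# The article's worked example, recorded with the illustrative
# AffordanceProfile class sketched earlier (hypothetical notation,
# not the authors' own).
supervised_assistant = AffordanceProfile(
    who=["patient", "supervising clinician"],
    what=["answer diagnostic queries about the patient's situation"],
    when="only in sessions a clinician can supervise",
    where="centralized server, not the patient's smartphone",
    why=["medical knowledge corpus", "narrow but deep individual data"],
    identified_risks=[
        "misleading diagnosis presented to patient or clinician",
        "sensitive wearable data transmitted to the server",
    ],
)
```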
In summary, although AI can pose serious risks in mental health care depending on the affordances it has, several measures can be taken to manage them. Important measures include the implementation of risk-based AI legislation, embedding guardrails at a fundamental level through a logic-based world model, and ensuring that the data on which AI models are trained are fair. Finally, risks can be mitigated by positioning AI appropriately within clinical practice, with human oversight. Successful risk management will then allow clinicians and patients to reap the benefits of the support AI can offer.
Article Information
Corresponding Author: Alex Leow, MD, PhD, Department of Psychiatry, University of Illinois Chicago, 1601 W Taylor St, Room 584, Chicago, IL 60612 (alexleow@alumni.ucla.edu).
Published Online: March 11, 2026. doi:10.1001/jamapsychiatry.2026.0032
Conflict of Interest Disclosures: Dr Farahany reported being chair of the Uniform Law Mental Privacy, Cognitive Biometrics, and Neural Data Study Committee and serving on the advisory board for OpenBCI. Dr Leow reported receiving funding from the National Institute of Mental Health (R01MH120168); holding equity in KeyWise AI; consulting for Otsuka US; and serving as an advisor to Buoy Health. No other disclosures were reported.
Funding/Support: This article was funded by the European Research Council (101001118).
Role of the Funder/Sponsor: The European Research Council had no role in the preparation, review, or approval of the manuscript or decision to submit the manuscript for publication.
References
1. Angus DC, Khera R, Lieu T, et al; JAMA Summit on AI. AI, health, and health care today and tomorrow: the JAMA Summit report on artificial intelligence. JAMA. 2025;334(18):1650-1664.
2. McBain RK, Cantor JH, Zhang LA, et al. Evaluation of alignment between large language models and expert clinicians in suicide risk assessment. Psychiatr Serv. 2025;76(11):944-950.
3. Şahin E, Çakmak M, Doğar MR, Uğur E, Üçoluk G. To afford or not to afford: a new formalization of affordances toward affordance-based robot control. Adapt Behav. 2007;15(4):447-472.
4. Brooks RA. Elephants don’t play chess. Robotics Autonomous Syst. 1990;6(1):3-15.
5. d’Avila Garcez A, Lamb LC. Neurosymbolic AI: the 3rd wave. Artif Intell Rev. 2023;56(11):12387-12406.
6. Misra S, Jackson VW, Chong J, et al. Systematic review of cultural aspects of stigma and mental illness among racial and ethnic minority groups in the United States: implications for interventions. Am J Community Psychol. 2021;68(3-4):486-512.
7. Onnela JP. Opportunities and challenges in the collection and analysis of digital phenotyping data. Neuropsychopharmacology. 2021;46(1):45-54.
8. Magee P, Ienca M, Farahany N. Beyond neural data: cognitive biometrics and mental privacy. Neuron. 2024;112(18):3017-3028.
9. California Legislative Information. Senate Bill No. 53: artificial intelligence models—large developers. September 29, 2025. Accessed December 18, 2025.
10. Engel GL. The need for a new medical model: a challenge for biomedicine. Science. 1977;196(4286):129-136.