JMIR Mental Health
Internet interventions, technologies, and digital innovations for mental health and behavior change.
JMIR Mental Health is the official journal of the Society of Digital Psychiatry.
Editor-in-Chief:
John Torous, MD, MBI, Harvard Medical School, USA
Impact Factor: 4.8 | CiteScore: 10.8
Recent Articles
The use of digital biomarkers through remote patient monitoring offers valuable and timely insights into a patient's condition, including aspects such as disease progression and treatment response. It serves as a complementary resource to traditional health care settings, leveraging mobile technology to improve scale and reduce latency, cost, and burden.
Anxiety disorders are among the most common mental health conditions in childhood, but most children with anxiety disorders do not access evidence-based interventions. The delivery of therapeutic interventions via digital technologies has been proposed as a way to significantly increase timely access to evidence-based treatment. Lumi Nova is a digital therapeutic intervention designed to deliver evidence-based anxiety treatment to 7- to 12-year-olds through a mobile app that incorporates immersive gaming technology.
Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate.
The field of mental health technology has significant gaps that need addressing, particularly in the domains of daily monitoring and personalized assessment. Current noninvasive devices such as wristbands and smartphones can collect a wide range of data that have not yet been fully leveraged for mental health monitoring.
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a socio-historical perspective for the Theme Issue “Responsible Design, Integration, and Use of Generative AI in Mental Health”. It evaluates ethical considerations in using Generative Artificial Intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, from restricted access to widespread availability driven by the internet, open-source movements, and, most recently, GenAI technologies such as Large Language Models (LLMs).

The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to mental health care, personalized responses, and conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it entails significant risks and challenges that must be carefully addressed.

To navigate these complexities, the paper proposes a strategic questionnaire for assessing AI-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced and ethical approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.

Keywords: Ethics, Generative Artificial Intelligence, Mental Health
Novel technologies, such as ecological momentary assessment (EMA) and wearable biosensor wristwatches, are increasingly being utilized to assess outcomes and mechanisms of change in psychological treatments. However, there is still a dearth of information on the feasibility and acceptability of these technologies and whether they can be reliably used to measure variables of interest.
While the number of digital therapeutics (DTx) has proliferated, there is little real-world research on the characteristics of providers recommending DTx, their recommendation behaviors, or the characteristics of patients receiving recommendations in the clinical setting. Objective: To characterize the clinical and demographic characteristics of patients receiving DTx recommendations, and to describe provider characteristics and behaviors regarding DTx.
Digital mental health is a rapidly growing field with an increasing evidence base, owing to its potential scalability and its impact on access to mental health care. Within underfunded service systems, leveraging personal technologies to deliver or support specialized services has garnered attention as a feasible and cost-effective means of improving access. The relevance of digital health has also grown as technology ownership among individuals with schizophrenia has increased to levels comparable to the general population. However, less digital health research has been conducted in groups with schizophrenia spectrum disorders than in other mental health conditions, and overall feasibility, efficacy, and clinical integration remain largely unknown.
Depression affects 5% of adults and is a major cause of disability worldwide. Digital psychotherapies offer an accessible way to address this issue. This systematic review examines a spectrum of digital psychotherapies for depression, considering both their effectiveness and user perspectives.
Motivational Interviewing (MI) is a therapeutic technique that has been successful in helping smokers reduce smoking, but it has limited accessibility due to the high cost and low availability of clinicians. To address this, the MIBot project has sought to develop a chatbot that emulates an MI session with a client, with the specific goal of moving an ambivalent smoker toward quitting. One key element of an MI conversation is reflective listening, in which a therapist expresses their understanding of what the client has said by uttering a reflection that encourages the client to continue their thought process. Complex reflections link the client’s responses to relevant ideas and facts to enhance this contemplation. Backward-looking complex reflections (BLCRs) link the client’s most recent response to a relevant selection of the client’s previous statements. Our current chatbot can generate complex reflections, but not BLCRs, using large language models (LLMs) such as GPT-2, which allow the generation of unique, human-like messages customized to client responses. Recent advances in these models, such as the introduction of GPT-4, provide a novel way to generate complex text by feeding the models instructions and conversational history directly, making this a promising approach to generating BLCRs.
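The prompting approach described above, feeding an instruction together with the conversational history directly to the model, can be sketched as follows. This is an illustrative assumption, not the MIBot project's actual implementation: the function name, prompt wording, and history format are all hypothetical.

```python
# Hypothetical sketch of assembling a BLCR prompt for a GPT-4-style
# chat model. The instruction text and formatting are illustrative
# assumptions, not the MIBot project's implementation.

def build_blcr_prompt(history, latest_response, instruction=None):
    """Combine an instruction with the conversation history so the
    model can link the client's latest response back to a relevant
    selection of their earlier statements (a backward-looking
    complex reflection)."""
    if instruction is None:
        instruction = (
            "You are a Motivational Interviewing therapist. Write one "
            "backward-looking complex reflection that connects the "
            "client's latest response to their earlier statements."
        )
    lines = [instruction, "", "Conversation so far:"]
    for speaker, utterance in history:
        lines.append(f"{speaker}: {utterance}")
    lines.append(f"Client (latest): {latest_response}")
    lines.append("Therapist reflection:")
    return "\n".join(lines)


# Example usage with a short, invented exchange:
history = [
    ("Client", "I smoke mostly when I'm stressed at work."),
    ("Therapist", "Work pressure is a big trigger for you."),
]
prompt = build_blcr_prompt(history, "But I hate how winded I get on stairs.")
```

The resulting string would be sent to the model as a single prompt; with a chat-style API, the instruction would instead become a system message and the history a sequence of role-tagged messages.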