AI Chatbots for Mental Health Self-Management: Lived Experience–Centered Qualitative Study
Background: Large language models (LLMs) now enable chatbots to engage in sensitive mental health conversations, including depression self-management. Yet their rapid deployment often overlooks how well these tools align with the priorities of people with lived experience, which can introduce harms such as inaccurate information, lack of empathy, or inadequate crisis support.

Objective: This study examines how people with lived experience of depression engage with an LLM-based mental health chatbot in self-management contexts, and what perceived benefits, limitations, and concerns inform harm-mitigating design implications.

Methods: We developed a technology probe (a GPT-4o–based chatbot named Zenny) designed to simulate depression self-management scenarios grounded in prior research. We conducted interviews with 17 individuals with lived experience of depression, each of whom interacted with Zenny during the session. We applied qualitative content analysis to interview transcripts, notes, and chat logs using sensitizing concepts related to values and harms.

Results: We identified 3 themes shaping participants' evaluations: (1) informational accuracy and applicability, including concerns about incorrect or misleading information, vagueness, and fit with personal constraints; (2) emotional support versus the need for human connection, including validation and a judgment-free space alongside the perceived limits of machine empathy; and (3) a personalization-privacy dilemma, in which participants wanted more tailored guidance while withholding sensitive information and using privacy-preserving tactics.

Conclusions: People with lived experience of depression evaluated LLM-based mental health chatbots through the intertwined priorities of actionable information, emotional validation with clear limits, and personalization that does not require unsafe data disclosure. These findings suggest concrete design strategies to mitigate harms and position LLM-based tools as complements to, rather than replacements for, human support and recovery.