
By SUHANA MISHRA
When discussing treatment outcomes, we usually talk about dosage, adherence, and access. Rarely do we speak about algorithms.
Yet as I began working on a scoping review examining misinformation and disinformation in mental health with a team at the Royal College of Psychiatrists led by Dr. Subodh Dave, I realized that some of the most powerful determinants of patient outcomes are not confined to clinics. They live in comment sections, short-form videos, and anonymous threads that shape people’s views of what is “true.” In fact, the New York Post reported that “over half of top TikTok mental health videos contained misleading information.”
I chose to do this research because I’ve seen how a single online post or video can change the way someone thinks about their own mental health. I’ve watched my own family members be discouraged from following a treatment plan because of an inaccurate post shared in a WhatsApp group chat. By examining misinformation in collaboration with experts, I hope to identify practical strategies to help clinicians and public health professionals address these hidden determinants of mental health outcomes.
One of the most striking lessons I’ve learned is that misinformation in psychiatry doesn’t always look like conspiracy. It often looks like comfort. According to a preprint study posted to arXiv, people adopt misinformation because it satisfies psychological and social needs rather than accuracy goals.
A viral post on the subreddit r/antipsychiatry claiming that antidepressants “numb your personality” may be rooted in one person’s difficult experience. A TikTok video discouraging medication in favor of “natural rewiring” may promise autonomy in a system that feels impersonal. These narratives spread not because they are outrageous conspiracy theories, but because they resonate with people.
That resonance has consequences.
In the literature we’ve reviewed so far, exposure to misleading mental health content was associated with lower treatment adherence and increased skepticism toward clinicians. When patients arrive at appointments already convinced that psychiatric medication is inherently harmful or that diagnoses are fabricated labels, trust in the system is lost. Trust, arguably the most essential component of psychiatric care, must be rebuilt before treatment can begin.
Disinformation complicates this further. Unlike misinformation, which is often shared without intent to harm, disinformation is strategic. It exploits uncertainty. It amplifies rare events as if they were common. It reframes evolving guidelines. In doing so, it erodes confidence in treatment, institutions, and healthcare workers. A clear example: when the US Food and Drug Administration required a boxed warning in 2004 about a small increased risk of suicidal thoughts in adolescents starting SSRIs, the guidance was intended to promote monitoring, not to suggest that antidepressants broadly cause suicide. However, as NIH-supported research found, certain advocacy websites and online communities strategically reframed that warning as proof that “antidepressants make people suicidal” in general.
Mental health already carries stigma and vulnerability. A person experiencing depression who reads hundreds of comments insisting that antidepressants “erase your soul” may interpret temporary emotional change as confirmation of harm. Someone with anxiety exposed to viral warnings that medications “create dependency” may avoid the very support that could help them stabilize.
What makes this crisis unique is scale. Social platforms reward emotional intensity and certainty. A 45-second TikTok warning of “hidden dangers” spreads faster than a peer-reviewed meta-analysis. Algorithms privilege relatability over accuracy. Personal testimony, while valid and important, becomes conflated with medical truth.
This research has made me confront the realization that treatment outcomes are no longer determined solely by what happens in a consultation room. They are influenced by what a patient scrolls past at midnight, what they read in a comment section, and how a viral video frames their condition. By the time a clinician discusses risks and benefits, a parallel narrative may already have taken root.
If we want better adherence, better engagement, and better outcomes, we must treat not only symptoms, but the stories patients absorb about those symptoms. In a world where false information can spread faster than evidence, it’s important to safeguard credibility. And that begins with recognizing the algorithms that sit quietly in the exam room.
To address this issue, it’s imperative that we treat misinformation exposure as a clinical determinant of health: clinicians should proactively discuss online mental health content during visits, public health organizations must partner with platforms to elevate evidence-based information through algorithmic transparency and credible creator collaboration, and medical education should train providers in digital health communication. Improving outcomes will require not only prescribing treatments, but actively competing in the information environments where patients form beliefs long before entering the exam room. Ultimately, the future of mental health care depends on meeting patients where they are, which is often online and in the stories they believe, and on ensuring that the truth travels faster than a tweet.
Suhana Mishra is a high school researcher and public health advocate from California’s Central Valley.
