AI Video Analysis: How AI Is Changing Mental Health Care Between Doctor Visits | Loren Larsen
In this episode of the An Hour of Innovation podcast, Vit Lyoshin speaks with Loren Larsen, founder and CEO of Videra Health, about how AI video analysis is changing mental health care between doctor visits.
The conversation explores a critical gap in behavioral health: once patients leave a clinical setting, providers often lose visibility into how they are actually doing. Loren explains how AI-based check-ins, using video, voice, and language analysis, allow patients to share how they feel in their own words, on their own time. These short interactions can surface emotional and behavioral signals that traditional surveys and score-based assessments often miss.
The episode also dives into why patients are sometimes more honest with AI than with clinicians, the risks of poorly tested healthcare AI, and the importance of keeping humans in the loop for interpretation and care decisions. Loren draws on his experience building and deploying video AI at scale to highlight real-world challenges, including bias, model drift, consent, and trust, particularly in sensitive healthcare settings.
Rather than positioning AI as a replacement for clinicians, this episode presents a grounded view of technology as a support system that helps providers understand patients more deeply and intervene at the right time. It’s a thoughtful look at how responsible AI, when designed and deployed carefully, can extend care, reduce blind spots, and improve outcomes in mental health.
Loren Larsen is a longtime builder at the intersection of AI, video, and human decision-making. Before founding Videra Health, he served as CTO of HireVue, deploying video AI at massive scale. That experience matters here: he has navigated bias, ethics, and real-world deployment firsthand, and offers a grounded perspective on what responsible healthcare AI should look like today.
Takeaways
- The most dangerous moment in a mental health patient’s life is right after leaving inpatient care.
- AI check-ins between visits restore visibility into patient wellbeing when clinicians cannot scale human outreach.
- Patients often share more honestly with AI than with therapists because they feel less judged and less pressure to perform.
- Mental health scores without narrative (like the PHQ-9) miss the “why” behind patient distress.
- AI should augment clinical judgment, not replace therapists, especially during high-risk treatment moments.
- Generative AI is not ready to safely conduct therapy, particularly in crises.
- Model drift can occur from unexpected factors, such as medications or cosmetic procedures, not just bad data.
- Poorly built healthcare AI can look legitimate, making it hard for buyers to distinguish safe tools from risky ones.
- Ethical healthcare AI requires clear consent, transparency, and human oversight, not just technical accuracy.
- The biggest challenge in AI healthcare adoption is balancing speed, safety, and trust in a fast-moving market.
Timestamps
00:00 Introduction
01:35 Videra Health Origin Story
03:02 AI Patient Check-Ins Between Doctor Visits
05:33 Why Human Judgment Still Matters in AI Care
08:49 Gaps in Mental Health Patient Care
12:07 AI vs Human Care in Mental Health
13:23 Testing & Validating Healthcare AI Systems
17:16 Edge Cases, Bias, and AI Model Failure
19:29 Ethical AI in Healthcare
23:33 Why Healthcare AI Adoption Is Hard
25:43 Common Myths About AI in Healthcare
30:02 Lessons from Building Video AI at Scale
34:54 Early Warning Signs in AI Systems
38:31 Advice for First-Time Video AI Builders
42:05 Innovation Q&A
Connect with Loren
- Website: https://www.viderahealth.com/
- LinkedIn: https://www.linkedin.com/in/loren-larsen/
This Episode Is Supported By
- Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
- Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
- Monkey Digital: Unbeatable SEO. Outrank your competitors - https://www.monkeydigital.org?ref=110260
For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com
Connect with Vit
- Substack: https://anhourofinnovation.substack.com/
- LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
- X: https://x.com/vitlyoshin
- Website: https://vitlyoshin.com/contact/
Episode References
PHQ-9 (Patient Health Questionnaire-9)
https://www.phqscreeners.com/select-screener
A widely used nine-question clinical survey for screening and rating the severity of depression, cited in the episode as an example of score-based assessments that lack narrative context (see the short scoring sketch after these references).
Large Language Models (LLMs)
https://en.wikipedia.org/wiki/Large_language_model
A class of AI models capable of generating human-like language, discussed in the context of why they should not currently replace human therapists in sensitive mental health situations.
ChatGPT
https://chat.openai.com
An AI conversational tool referenced as an example of how public familiarity and comfort with AI have rapidly increased in the post-ChatGPT era.
HireVue
https://www.hirevue.com
A video interviewing and hiring platform where Loren Larsen previously served as CTO, used as a case study for deploying video AI at massive scale and managing bias and fairness.
Silicon-Based Semiconductors
https://en.wikipedia.org/wiki/Semiconductor
The foundational technology behind modern computing, cited in the episode as one of the most world-changing innovations and as a paradigm that may eventually be disrupted by future approaches to computing.
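PHQ-9 Scoring Sketch
To make the “score-based assessment” point concrete, here is a minimal Python sketch of how PHQ-9 scoring works: nine items, each rated 0 to 3, summed into a 0-27 total that maps to the standard published severity bands. The function name and example inputs below are illustrative assumptions, not part of any tool discussed in the episode.

```python
# Minimal, illustrative PHQ-9 scoring sketch (not clinical software).
# Each of the nine items is rated 0-3, so the total ranges from 0 to 27.

SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_severity(item_scores):
    """Sum nine 0-3 item scores and map the total to a severity band."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 expects exactly nine item scores, each 0-3")
    total = sum(item_scores)
    label = next(name for lo, hi, name in SEVERITY_BANDS if lo <= total <= hi)
    return total, label

# Example: a total of 12 lands in the "moderate" band, yet says nothing
# about *why* the patient is struggling -- the gap the episode highlights.
print(phq9_severity([1, 2, 1, 2, 1, 2, 1, 1, 1]))  # (12, 'moderate')
```

Two patients with the same total can be in very different situations, which is exactly why the episode argues for pairing scores with patient narrative.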