

Conversational AI and AI summaries for mental health patients
Role
Team
Head of Product, Head of Design, Head of AI, CTO, Chief Clinical Officer, Senior PM, Front-end and Back-end Developers
Revising the AI Product Roadmap
I joined HelloSelf, a teletherapy platform connecting patients with licensed therapists, in early 2024, just as OpenAI launched GPT-4o with a large context window, experimental memory, and real-time multimodal capability. It was a signal that AI, as a technology, finally felt mature enough to support real therapeutic workflows. We were eager to introduce AI into the patient experience and proposed goal tracking as the first feature.
However, drawing on prior experience in health-tech AI, I raised concerns about our strategic starting point. I saw two major risks: patients would be reluctant to open up to an unfamiliar AI they had no reason to trust, and we risked launching into a saturated market of generic goal-tracking apps.
Business Objective
Launch AI features quickly and safely to build the foundations for more scalable, cost-efficient therapy
User Problem

Research Process
The biggest barrier to AI adoption wasn’t technology; it was the need to feel understood. To validate the right entry point for AI, I identified three conditions, each building on what already existed:
Trust (Emotional): Patients don’t naturally trust AI, especially when it comes to mental health. It should be introduced through a relationship they already rely on: their therapist;
Effort (Behavioural): Without that trust, asking patients to do more work, like re-explaining their experiences, can feel exhausting. The AI should build on what they’ve already shared, so that they’re more likely to open up; and
Context (Technical): For the AI to respond meaningfully, it should draw on existing information: rich, real-world context that doesn’t rely on patient input.
Our PM and Head of Product revised the product roadmap to shift where the conversational AI should begin:
…from goals, where the user has to bring context to the AI [OLD] ❌
…to session summaries, where rich context already exists [NEW] ✅
This shift made the first AI experience feel more natural: it was grounded in real conversations, where context and trust already existed. Because the AI could draw on that depth, patients didn’t have to explain themselves or invest extra effort. They could simply reflect, and receive something personal and meaningful in return.
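As a rough sketch of what this grounding can look like in practice, the snippet below assembles prior session summaries into the model’s context before a reflection chat. The types, helper names, and prompt wording are hypothetical illustrations, not HelloSelf’s actual pipeline; only the OpenAI client call follows the public SDK.

```typescript
import OpenAI from "openai";

// Hypothetical shape of a stored session summary; the real schema
// is an assumption for illustration.
interface SessionSummary {
  sessionDate: string;
  summaryText: string; // summary generated from the therapy session
}

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ground the model in context the patient has already produced,
// so they can reflect without re-explaining themselves.
async function reflectOnSessions(
  summaries: SessionSummary[],
  patientMessage: string
): Promise<string> {
  const context = summaries
    .map((s) => `Session on ${s.sessionDate}:\n${s.summaryText}`)
    .join("\n\n");

  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You help a therapy patient reflect on their own sessions. " +
          "Draw only on the summaries below; never diagnose or give " +
          "clinical advice.\n\n" + context,
      },
      { role: "user", content: patientMessage },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```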
Job To Be Done
"When I am struggling to remember and understand the details of my therapy sessions, I want to review summaries of my session and be given the opportunity to reflect on them, so that I can take clear action and stay engaged with therapy between sessions."


0-to-1 Designs on Mobile and Web
Desired outcome
Introduce AI to patients at a moment that feels natural, low-effort, and genuinely useful so that it becomes a part of their therapy.

I redesigned the UX/UI of the call station so patients could consent to and activate AI features alongside their therapist, making the experience clear, collaborative, and non-disruptive.
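To make the consent behaviour concrete, here is a minimal sketch of the state such a flow might gate on; the field names and the rule that both parties must opt in are assumptions for illustration, not the production implementation.

```typescript
// Hypothetical consent state for AI features during a call.
// Either party can revoke at any time.
interface AiConsentState {
  patientConsented: boolean;
  therapistConsented: boolean;
  consentedAt?: Date;
}

// AI features (e.g. capturing the session for a summary) run only
// once patient and therapist have both actively opted in.
function aiFeaturesEnabled(state: AiConsentState): boolean {
  return state.patientConsented && state.therapistConsented;
}
```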


I designed the UX/UI for how patients chat with the conversational AI to reflect on their sessions. I also added a feedback mechanism to help the team improve model responses over time, making the experience feel human and safe to use.
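As an illustration of what that feedback loop might capture, the sketch below logs a simple rating against each AI response; the record shape and API route are hypothetical stand-ins for whatever the team actually stores.

```typescript
// Hypothetical feedback record attached to each AI response, reviewed
// by the team to improve model behaviour over time.
interface ResponseFeedback {
  responseId: string;
  rating: "helpful" | "unhelpful";
  comment?: string; // optional free-text note from the patient
  createdAt: string; // ISO timestamp
}

async function logFeedback(feedback: ResponseFeedback): Promise<void> {
  // Illustrative endpoint; the real API route is an assumption.
  await fetch("/api/ai/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
}
```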


Outcome & Feedback
It’s like Gemini or Siri—but for therapy. It gives me something smart I can turn to that actually knows what I’ve been through.
- Patient
It’s a helpful reflection point. I can revisit what I was feeling, see what’s changed, and think about how I want to keep improving.
- Patient
I learned that launching AI in therapy isn’t just about speed; it’s about finding the right moment, building trust, and reducing effort. Introducing AI where context already exists, such as therapy sessions, helps users feel understood without extra input. That’s how we made the first experience feel natural, useful, and worth returning to.
This helped me internalise a key design principle: successful AI adoption comes from timing, trust, and low-effort value, all tied directly to the business goal of launching fast and the user’s need for an AI that fits their real care journey.
Working on HelloSelf’s first AI features taught me how to balance ethical responsibility with product innovation, and how to design AI that supports human care rather than replacing it.