
Kelvin

Role
Team
Head of Product, Head of Design, Head of AI, CTO, Chief Clinical Officer, Senior PM, Frontend and Backend Developers
HelloSelf is a teletherapy startup connecting patients with licensed therapists. When I joined in early 2024, ChatGPT had reached the mainstream, signalling that AI had finally matured enough for real-world use by everyday people. The team was ready to act, proposing an initial product roadmap that introduced AI to patients through goal tracking between sessions.
However, I raised concerns about our strategic starting point: it assumed patients would open up to an unfamiliar AI, and it positioned us against a crowded landscape of non-AI goal-tracking tools.
Business Problem
User Problem

The biggest barrier to AI adoption wasn't technology; it was the need to feel understood. To find the right entry point for AI, I identified three conditions it had to meet, each leveraging what already existed:
Trust (Emotional): Patients don't naturally trust AI, especially when it comes to mental health. It should be introduced through a relationship they already rely on: their therapist;
Effort (Behavioural): Without that trust, asking patients to do more work, like re-explaining their experiences, can feel exhausting. The AI should build on what they've already shared so that they're more likely to open up; and
Context (Technical): For the AI to respond meaningfully, it should draw on rich, real-world context that already exists rather than relying on patient input.
Our PM and Head of Product revised the product roadmap to shift where the conversational AI should begin:
…from goals, where the user has to bring context to the AI [OLD] ❌
…to session summaries, where rich context already exists [NEW] ✅
This shift made the first AI experience feel more natural. It was grounded in real conversations—where context and trust already existed. Because the AI could draw from that depth, patients didn’t have to explain themselves or invest extra effort. They could simply reflect—and receive something personal and meaningful in return.
Job To Be Done
"When I am struggling to remember and understand the details of my therapy sessions, I want to review summaries of my sessions and be given the opportunity to reflect on them, so that I can take clear action and stay engaged with therapy between sessions."


Desired outcome:

I redesigned the UX/UI of the call station so patients could consent to and activate AI features alongside their therapist, making the experience clear, collaborative, and non-disruptive.


I designed the UX/UI for how patients chat with the conversational AI to reflect on their sessions. I also added a feedback mechanism to help the team improve model responses over time—making the experience feel human and safe to use.


I learned that launching AI in therapy isn’t just about speed—it’s about finding the right moment, building trust, and reducing effort. Introducing AI where context already exists—like therapy sessions—helps users feel understood without extra input. That’s how we made the first experience feel natural, useful, and something patients could come back to.
This helped me internalise a key design principle: successful AI adoption comes from timing, trust, and low-effort value. All three tied directly to the business problem of launching fast and the user need for an AI that aligns with real care journeys.
Working on HelloSelf’s first AI features taught me how to balance ethical responsibility with product innovation—and how to design AI that supports human care, not replaces it.