
Kelvin

Helping Therapists embrace and adopt AI Tools that support their clinical workflow
Role
Team
Head of Product, Head of Design, Head of AI, CTO, Chief Clinical Officer, Senior PM, Front and backend Developers
HelloSelf is a teletherapy startup connecting mental health patients with therapists. While designing our first AI summary feature for patients, we hit a critical insight: therapists were essential to its adoption. Without therapist oversight and buy-in, summaries couldn't be safely delivered to patients, and therapists wouldn't trust whatever AI tools we launched next.
This made one thing clear: to introduce AI responsibly, and to lay the foundation for future AI tools, we first had to earn the trust of therapists.
Business Problem
How can we... earn therapist trust and approval so they feel enabled to advocate AI features to their patients?
User Problem
How can we... design AI tools that feel safe, trustworthy, and genuinely useful for therapists supporting their patients?

To gain therapist buy-in, it wasn't enough for AI summaries to benefit patients alone; they had to benefit therapists as well. In other words, therapists needed support with their own clinical responsibilities.
When we looked at where AI could add real value for therapists, note-taking stood out in interviews as one of the biggest pain points, especially for those managing high caseloads. This solution, however, raised ethical concerns: therapists were open to support but wary of AI taking over. They didn't want to lose control or compromise clinical standards.
To gain trust, we had to give therapists practical value—on their terms. That meant two things:
Job To Be Done
"When I am struggling to write detailed clinical notes because of high caseloads, I want to use AI-generated, clinical-grade summaries as a stimulus, so that I can write my notes more productively and maintain momentum between sessions."


Desired Outcome
Provide therapists with a simple, integrated way to manage AI summaries, so the tool enhances their workflow, preserves their clinical authority, and builds their confidence to advocate it to their patients.

I designed the UX/UI for how therapists activate AI summarisation, making the experience clear, collaborative, and easy to manage during live sessions.


What I learnt
I learned that therapist buy-in can’t be assumed—it has to be earned. For AI to reach patients safely, therapists needed to trust it and see direct value in their own work. That meant addressing real pain points, like note-taking, without removing their control. We had to integrate AI into existing workflows in a way that felt helpful, not intrusive.
This grounded my approach in ethical design: trust is built by giving clinicians tools they can rely on, not tools that try to replace them. That connects directly back to our business goal of adoption and our user goal of making AI feel safe and genuinely useful.
I also learned how to work cross-functionally in a high-stakes, fast-moving environment—partnering closely with clinical leads, data scientists, engineers, and leadership to balance speed with safety. Most importantly, I deepened my understanding of how to translate complex AI capabilities into accessible, human-first experiences.