Conversational AI and AI-summaries for mental health patients

Role

Research, UX/UI Design, Product Strategy, Prompt Engineering, Testing Conversational AI

Senior Product Designer,

Mobile and Web

Team

Head of Product, Head of Design, Head of AI, CTO, Chief Clinical Officer, Senior PM, Front and backend Developers

Revising the AI Product Roadmap

When I joined HelloSelf in early 2024—a teletherapy platform connecting patients with licensed therapists—OpenAI launched GPT-4o with a large context window, experimental memory, and real-time multimodal capability. It was a signal that AI, as a technology, finally felt mature enough to support real therapeutic workflows. We were eager to introduce AI into the patient experience and proposed goal tracking as the first feature.


However, based on prior experience in health-tech AI, I raised critical concerns on our strategic starting point. I saw two major risks: patients would be reluctant to open up to an unfamiliar AI without trust, and we risked launching into a saturated space of generic goal-tracking apps.

Business Objective

Launch AI features fast and safely to build foundations for more scalable and cost-efficient therapy

User Problem

Most patients are still unfamiliar with AI, and those who are treat it as a traditional chatbot—functional and transactional—making it hard to open-up or engage in-depth.

Most patients are still unfamiliar with AI; they treat it as a traditional chatbot—functional and transactional—making it hard to open-up.

Most patients are still unfamiliar with AI; they treat it as a traditional chatbot—functional and transactional—making it hard to open-up.

Most patients are still unfamiliar with AI; they treat it as a traditional chatbot—functional and transactional—making it hard to open-up.

Most patients are still unfamiliar with AI; they treat it as a traditional chatbot—functional and transactional—making it hard to open-up.

Research Process

User Interviews: Therapists

...to understand clinical safety standards and needs of patients

User Interviews: Therapists

...to understand clinical safety standards and needs of patients

User Interviews: Therapists

...to understand clinical safety standards and needs of patients

User Interviews: Therapists

...to understand clinical safety standards and needs of patients

User Interviews: Therapists

...to understand clinical safety standards and needs of patients

Competitive Analysis

...to understand market opportunity and winning design patterns

Competitive Analysis

...to understand market opportunity and winning design patterns

Competitive Analysis

...to understand market opportunity and winning design patterns

Competitive Analysis

...to understand market opportunity and winning design patterns

Competitive Analysis

...to understand market opportunity and winning design patterns

Synthetic Data Generation

...to prompt and generate complex synthetic data to help develop the conversational AI

Synthetic Data Generation

...to prompt and generate complex synthetic data to help develop the conversational AI

Synthetic Data Generation

...to prompt and generate complex synthetic data to help develop the conversational AI

Synthetic Data Generation

...to prompt and generate complex synthetic data to help develop the conversational AI

Synthetic Data Generation

...to prompt and generate complex synthetic data to help develop the conversational AI

What I
found

What I found

The biggest barrier to AI adoption wasn’t technology—it was the need to feel understood. To validate the right entry point for AI, I identified three key conditions to leverage what already exists:

  1. Trust (Emotional): Patients don’t naturally trust AI, especially when it comes to mental health. It should be introduced through relationships they already rely on - their Therapist;

  2. Effort (Behavioural): Without that trust, asking patients to do more work—like re-explaining their experiences—can feel exhausting. It should build on what they’ve already shared so that they're more likely to open-up; and

  3. Context (Technical): For AI to respond meaningfully, it should access existing information—rich, real-world context that doesn't rely on patient input

What it meant

What it meant

Our PM and Head of Product revised the product roadmap to shift where the conversational AI should begin:
…from goals, where the user has to bring context to the AI [OLD] ❌
…to session summaries, where rich context already exists [NEW] ✅


This shift made the first AI experience feel more natural. It was grounded in real conversations—where context and trust already existed. Because the AI could draw from that depth, patients didn’t have to explain themselves or invest extra effort. They could simply reflect—and receive something personal and meaningful in return.

Job To Be Done

"When I am struggling to remember and understand the details of my therapy sessions, I want to review summaries of my session and be given the opportunity to reflect on them, so that I can take clear action and stay engaged with therapy between sessions."

Bringing Revised Roadmap To Life

Bringing the product roadmap to life

Sketching user flows

This mapped out the logic and experience of onboarding users to AI

Sketching user flows

This mapped out the logic and experience of onboarding users to AI

Sketching user flows

This mapped out the logic and experience of onboarding users to AI

Sketching user flows

This mapped out the logic and experience of onboarding users to AI

Sketching user flows

This mapped out the logic and experience of onboarding users to AI

UX/UI of AI Summaries

This helped iterate the designs and experience

UX/UI of AI Summaries

This helped iterate the designs and experience

UX/UI of AI Summaries

This helped iterate the designs and experience

UX/UI of AI Summaries

This helped iterate the designs and experience

UX/UI of AI Summaries

This helped iterate the designs and experience

UX/UI of Conversational AI

This helped define where the conversational AI should live within the app’s IA

UX/UI of Conversational AI

This helped define where the conversational AI should live within the app’s IA

UX/UI of Conversational AI

This helped define where the conversational AI should live within the app’s IA

UX/UI of Conversational AI

This helped define where the conversational AI should live within the app’s IA

UX/UI of Conversational AI

This helped define where the conversational AI should live within the app’s IA

Product bumpers

This ensured succesful adoption of both the AI-summary and conversational AI

Product bumpers

This ensured succesful adoption of both the AI-summary and conversational AI

Product bumpers

This ensured succesful adoption of both the AI-summary and conversational AI

Product bumpers

This ensured succesful adoption of both the AI-summary and conversational AI

Product bumpers

This ensured succesful adoption of both the AI-summary and conversational AI

0-to-1 Designs on Mobile and Web

Desired outcome

Introduce AI to patients at a moment that feels natural, low-effort, and genuinely useful so that it becomes a part of their therapy.

New AI-Ready Call Station

New AI-Ready Call Station

I redesigned the UX/UI of the call station so patients could consent-to/activate AI features alongside their therapist—making the experience clear, collaborative, and non-disruptive.

AI-generated Summary

AI-generated Summary

I designed the UX/UI for reviewing session summaries to support memory recall—making the experience simple and easy to return to.

I designed the UX/UI for reviewing session summaries to support memory recall—making the experience simple and easy to return to.

Conversational AI

Conversational AI

I designed the UX/UI for how patients chat with the conversational AI to reflect on their sessions. I also added a feedback mechanism to help the team improve model responses over time—making the experience feel human and safe to use.

LLM Evaluation

LLM Evaluation

Working with Head of AI and Chief Clinical Officer, I helped test and evaluate the quality of AI-conversations based on different synthetic personas I generated.

Working with Head of AI and Chief Clinical Officer, I helped test and evaluate the quality of AI-conversations based on different synthetic personas I generated.

Outcome & Feedback

£7M grant to validate AI in therapy

Won a £7M Wellcome Trust grant to run the UK’s first clinical trial of an AI-supported therapy pathway—paving the way for the first clinically validated AI tool in mental health.

£7M grant to validate AI in therapy

Won a £7M Wellcome Trust grant to run the UK’s first clinical trial of an AI-supported therapy pathway—paving the way for the first clinically validated AI tool in mental health.

£7M grant to validate AI in therapy

Won a £7M Wellcome Trust grant to run the UK’s first clinical trial of an AI-supported therapy pathway—paving the way for the first clinically validated AI tool in mental health.

£7M grant to validate AI in therapy

Won a £7M Wellcome Trust grant to run the UK’s first clinical trial of an AI-supported therapy pathway—paving the way for the first clinically validated AI tool in mental health.

£7M grant to validate AI in therapy

Won a £7M Wellcome Trust grant to run the UK’s first clinical trial of an AI-supported therapy pathway—paving the way for the first clinically validated AI tool in mental health.

30% adoption within 3 months

AI summaries reached 30% adoption within 3 months. When conversational AI launched later, 1 in 3 summary users engaged with it in its first month of launch

30% adoption within 3 months

AI summaries reached 30% adoption within 3 months. When conversational AI launched later, 1 in 3 summary users engaged with it in its first month of launch

30% adoption within 3 months

AI summaries reached 30% adoption within 3 months. When conversational AI launched later, 1 in 3 summary users engaged with it in its first month of launch

30% adoption within 3 months

AI summaries reached 30% adoption within 3 months. When conversational AI launched later, 1 in 3 summary users engaged with it in its first month of launch

30% adoption within 3 months

AI summaries reached 30% adoption within 3 months. When conversational AI launched later, 1 in 3 summary users engaged with it in its first month of launch

4.3/5 user satisfaction rating

Patients rated AI summaries an average of 4.3 out of 5, and conversational check-ins 4.0—highlighting strong perceived value and user trust in a high-sensitivity space.

4.3/5 user satisfaction rating

Patients rated AI summaries an average of 4.3 out of 5, and conversational check-ins 4.0—highlighting strong perceived value and user trust in a high-sensitivity space.

4.3/5 user satisfaction rating

Patients rated AI summaries an average of 4.3 out of 5, and conversational check-ins 4.0—highlighting strong perceived value and user trust in a high-sensitivity space.

4.3/5 user satisfaction rating

Patients rated AI summaries an average of 4.3 out of 5, and conversational check-ins 4.0—highlighting strong perceived value and user trust in a high-sensitivity space.

4.3/5 user satisfaction rating

Patients rated AI summaries an average of 4.3 out of 5, and conversational check-ins 4.0—highlighting strong perceived value and user trust in a high-sensitivity space.

It’s like Gemini or Siri—but for therapy. It gives me something smart I can turn to that actually knows what I’ve been through.

- Patient

It’s a helpful reflection point. I can revisit what I was feeling, see what’s changed, and think about how I want to keep improving.

- Patient

What I learnt

What I learnt

I learned that launching AI in therapy isn’t just about speed—it’s about finding the right moment, building trust, and reducing effort. Introducing AI where context already exists—like therapy sessions—helps users feel understood without extra input. That’s how we made the first experience feel natural, useful, and something patients could come back to.


This helped me internalise a key design principle: successful AI adoption comes from timing, trust, and low-effort value—all directly tied to solving the business problem of launching fast, and the user need for an AI that aligns with real care journeys.


Working on HelloSelf’s first AI features taught me how to balance ethical responsibility with product innovation—and how to design AI that supports human care, not replaces it.