Last week, MedHubAI's co-founders, Dr. Ágnes Riskó, Tamás Csernák, and Bence Csernák, stood in the regional headquarters of one of the world's largest investment banks. We weren't there to pitch fintech or trading algorithms. We were there to demonstrate Anna, our psycho-oncology AI chatbot, to both the bank's Women in IT group and a wider office audience during Breast Cancer Awareness Month.
The presentation connected the Budapest, Glasgow, and Frankfurt offices. It was part of a month-long series of events built around Breast Cancer Awareness Month, through which the bank's leadership wanted to explore this area from a technology perspective and see through the AI hype. The questions came fast: How do you achieve accuracy in healthcare AI? Can this work in finance? What about data privacy? But the most powerful moment wasn't technical at all; it was when one of their Executive Directors, a breast cancer survivor and the event's organizer, shared her journey.
This intersection of AI technology, human vulnerability, and enterprise validation taught us something crucial: when you build AI for sensitive contexts, the technology itself is only half the story.
Here's what 25 years of clinical psycho-oncology practice taught Dr. Riskó: patients and their families need answers at 2 AM. They need guidance when the diagnosis is fresh and terrifying. They need someone to ask about side effects, emotional coping, and whether what they're feeling is normal, all without judgment, without appointments, without the fear of "bothering" their medical team.
Traditional healthcare can't scale to meet this need. Psycho-oncologists are rare specialists. Hospital hours are limited. Even the best clinical teams can't provide 24/7 emotional and informational support to every patient who needs it.
The knowledge exists: Dr. Riskó has spent decades working directly with oncology teams, absorbing not just the medical protocols but the emotional landscape of the cancer journey. She understands what patients ask, what they're really afraid to ask, and how to respond with both clinical accuracy and human warmth.
But that knowledge was trapped in individual practitioners, in scattered research papers, in clinical experiences that could only reach one patient at a time. We needed a way to make decades of psycho-oncology expertise accessible to anyone, anywhere, at any moment they needed it.
That's a fundamentally different problem than most AI companies are solving. We weren't building a chatbot to order pizza or summarize emails. We were creating a system that could sit with someone in their darkest hour and provide genuinely helpful, accurate, compassionate guidance.
No pressure, right?
If you've used ChatGPT, you know it sometimes hallucinates: it makes up nonsense that sounds plausible and authoritative but is completely wrong. In most contexts, that's annoying. In healthcare, it's dangerous.
We needed Anna to achieve 94-97% accuracy consistently. Not because perfection is the goal (human experts aren't perfect either), but because patients deserve reliable information when they're making critical health decisions.
The breakthrough came from retrieval-augmented generation (RAG) technology combined with vector databases. Here's the non-technical explanation: instead of asking a language model to "know" everything about psycho-oncology, we created a system that searches through a curated knowledge base first, finds the most relevant information, and then uses that specific context to formulate responses.
Think of it like the difference between asking someone to recite a textbook from memory versus letting them look up the answer in the actual book. The second approach is far more reliable.
We vectorized 25 years of clinical knowledge: articles, treatment protocols, psychological frameworks, common patient questions, coping strategies. Every piece of information Dr. Riskó and her colleagues had compiled got transformed into a searchable knowledge base exceeding 2 million characters. When a patient asks Anna a question, the system retrieves the most relevant knowledge fragments in seconds, then uses a large language model to craft a response that's both accurate and empathetic.
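To make that concrete, here's a minimal sketch of the chunk-index-retrieve-generate pattern in Python. This illustrates the general RAG technique, not Anna's actual code: the embed() and llm() functions are placeholders for whatever embedding model and language model a real system would plug in.

```python
# Minimal RAG sketch: split source material into chunks, index them as
# vectors, retrieve the best matches for a question, and ground the
# language model's answer in them. embed() and llm() are placeholders.
import numpy as np

def chunk(text: str, size: int = 800) -> list[str]:
    # Overlapping passages so no answer gets cut in half at a boundary.
    return [text[i:i + size] for i in range(0, len(text), size // 2)]

def build_index(chunks: list[str], embed) -> np.ndarray:
    # One vector per chunk; production systems use a vector database.
    return np.array([embed(c) for c in chunks])

def retrieve(question: str, chunks: list[str], index: np.ndarray,
             embed, k: int = 3) -> list[str]:
    q = embed(question)
    # Cosine similarity between the question and every chunk.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[-k:][::-1]]

def answer(question: str, chunks: list[str], index: np.ndarray,
           embed, llm) -> str:
    context = "\n\n".join(retrieve(question, chunks, index, embed))
    prompt = ("Answer using ONLY the context below. If the context "
              "doesn't cover the question, say so rather than guess.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return llm(prompt)
```

The prompt's "only the context" instruction is the accuracy lever: the model answers from retrieved, expert-curated material instead of its own memory.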
The result? Responses that match clinical expert quality while remaining accessible 24/7 to anyone with an internet connection.
But accuracy alone isn't enough. Anna needed to work in multiple languages, respect cultural contexts, and maintain strict anonymity. We launched the Hungarian version in early 2023, shortly after ChatGPT's release. Then came the English global version. Now we're experimenting with an Indian version covering 22 local languages, because cancer doesn't respect language barriers.
The technical architecture matters, but here's what matters more: every conversation Anna has is reviewed by Dr. Riskó in her supervisory capacity. She sees what patients ask, how Anna responds, and where the system could improve. That feedback loop, in which the AI learns from ongoing human expert oversight, is what keeps the system getting smarter and more helpful over time.
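Structurally, a supervision loop like this can be as simple as the following sketch (a simplified illustration under our own assumptions, not MedHubAI's internal tooling): expert reviews of answers flow back into the knowledge base, which is then re-indexed for future retrieval.

```python
# Illustrative sketch of an expert-in-the-loop review cycle: every
# conversation is queued for review, and corrections flow back into
# the knowledge base, which gets re-indexed for future retrieval.
from dataclasses import dataclass, field

@dataclass
class Review:
    question: str
    bot_answer: str
    approved: bool
    correction: str = ""   # expert's improved answer, if any

@dataclass
class KnowledgeBase:
    entries: list[str] = field(default_factory=list)

    def incorporate(self, review: Review) -> None:
        # Rejected answers become new, expert-written entries,
        # so the next retrieval returns the corrected guidance.
        if not review.approved and review.correction:
            self.entries.append(
                f"Q: {review.question}\nA: {review.correction}"
            )

def supervision_pass(reviews: list[Review], kb: KnowledgeBase) -> None:
    for r in reviews:
        kb.incorporate(r)
    # In a live system, this is where the vector index is rebuilt.
```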
This isn't AI replacing human expertise. It's AI extending human expertise to reach more people, more often, when they need it most.
The statistics are striking: in the first year after launch, Anna handled over 2,000 conversations. Usage peaks during work hours, which tells us something important: people are seeking support while managing the stress of daily life, not just in clinical crisis moments.
Most conversations are practical: Where can I find a psycho-oncologist? What are normal side effects? How do I talk to my children about my diagnosis?
But some conversations go much deeper. Patients open up to Anna in ways they don't in hospital settings. There's something about anonymous, judgment-free access that lowers barriers. No appointment needed. No worrying about taking up too much of the doctor's time. No fear of appearing weak or overly emotional.
One pattern emerged consistently: the longest, most in-depth conversations happened at odd hours, late at night or early in the morning, when human support simply isn't available. These are the moments when anxiety peaks, when fear feels overwhelming, when someone desperately needs reassurance that what they're experiencing is normal.
Anna provides that bridge. Not replacing human care, but filling the gaps between clinical appointments, extending support into the hours when patients need it most but have nowhere to turn.
The geographic usage patterns tell another story: while Hungary has the highest usage (expected, since that's where we launched first), we see significant activity in Hungarian-speaking communities in Romania and Slovakia, and from Hungarian expats worldwide. Cancer crosses borders. Support systems should too.
The multilingual capability isn't just translation; it's cultural adaptation. The knowledge base gets adapted to reflect different healthcare systems, local treatment protocols, and regional support resources. A patient in India asking Anna about treatment options needs different information than someone in Hungary, even if the underlying medical science is the same.
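As a hypothetical illustration of how such routing might look in configuration, imagine mapping each locale to its own regionally adapted knowledge base; the index names and locales below are invented for the example, not Anna's real setup.

```python
# Hypothetical sketch: route each user to a locale-specific knowledge
# base so answers reflect local healthcare systems and resources.
LOCALE_KB = {
    "hu": "kb_hungary",    # Hungarian protocols, local support groups
    "en": "kb_global",     # international guidelines
    "hi": "kb_india",      # regional treatment pathways and resources
}

def knowledge_base_for(locale: str) -> str:
    # Fall back to the global base when no regional one exists.
    return LOCALE_KB.get(locale, "kb_global")
```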
We're also developing audio support, which matters more than you might think. Cancer treatments can affect vision. Patients with declining eyesight should still be able to access support. Audio interfaces mean someone could call Anna from a landline and have a conversation, no smartphone or computer required.
The interface itself follows familiar patterns intentionally: it looks and works like Claude.ai, Gemini, ChatGPT, or other mainstream AI chatbots. When you're undergoing chemotherapy or dealing with a fresh diagnosis, you don't want to learn a new interface. Familiarity reduces cognitive load. That's design thinking applied to crisis support.
Every conversation can be downloaded, cleared, or continued across sessions via cookies. Complete anonymity, but with memory: the system remembers previous discussions without knowing who you are. That combination of privacy and continuity matters enormously to patients navigating long treatment journeys.
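One way to sketch that anonymity-with-memory pattern: key each session by a random, opaque token stored in a cookie, so the history persists without any identity attached. This is a minimal illustration, not Anna's implementation; a real deployment would add encryption and retention policies.

```python
# Sketch of anonymity-with-memory: sessions are keyed by a random
# opaque ID stored in a cookie, never by identity. Illustrative only.
import secrets

SESSIONS: dict[str, list[dict]] = {}   # session_id -> message history

def get_or_create_session(cookie_id: str | None) -> str:
    # A fresh random token carries no personal information.
    sid = cookie_id or secrets.token_urlsafe(16)
    SESSIONS.setdefault(sid, [])
    return sid

def remember(sid: str, role: str, text: str) -> None:
    SESSIONS[sid].append({"role": role, "text": text})

def export_transcript(sid: str) -> str:
    # "Download conversation": serialize history, no identity data.
    return "\n".join(f'{m["role"]}: {m["text"]}'
                     for m in SESSIONS.get(sid, []))

def clear(sid: str) -> None:
    SESSIONS.pop(sid, None)
```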
Presenting Anna to a major investment bank's Women in IT group was surreal in the best way. Here we were, discussing psycho-oncology AI in the same building where crucial financial decisions are made. But the questions revealed something fascinating: the challenges we solved for healthcare are the exact challenges other regulated industries face.
Someone from legal and compliance asked the key question: "Can this method be used in finance, where you need 100% accuracy?"
That's the question that reframes everything. Healthcare AI and financial AI share fundamental requirements: high accuracy, audit trails, explainable outputs, regulatory compliance, and zero tolerance for hallucinations.
The techniques we developed (RAG architecture, continuous expert supervision, structured knowledge bases, versioned information sources) aren't specific to healthcare. They're applicable anywhere accuracy matters more than speed, where mistakes have real consequences, where "good enough" isn't good enough.
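As a sketch of what "versioned information sources" can mean in practice (our illustration, with invented field names): each knowledge entry carries its source document, revision, and review metadata, so any AI answer can be traced to a specific, expert-approved version.

```python
# Sketch of a versioned, auditable knowledge entry: every retrieved
# passage carries its source, version, and review date, so an answer
# can be traced back to a specific, expert-approved document revision.
from dataclasses import dataclass

@dataclass(frozen=True)
class KnowledgeEntry:
    text: str
    source: str          # e.g. a protocol or policy document
    version: str         # revision of that document
    reviewed_by: str     # expert who approved this entry
    reviewed_on: str     # ISO date of the last review

def audit_trail(entries: list[KnowledgeEntry]) -> list[str]:
    # What a compliance officer sees next to each AI answer.
    return [f"{e.source} v{e.version} (reviewed {e.reviewed_on} "
            f"by {e.reviewed_by})" for e in entries]
```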
Think about legal research, compliance documentation, financial reporting, regulatory analysis. All contexts where AI could dramatically improve access and efficiency, but only if the accuracy problem gets solved convincingly.
The investment bank audience understood this immediately. They live in a world where a single incorrect calculation or misinterpreted regulation can trigger massive financial and legal consequences. They can't afford AI that "mostly" works. Neither can we.
What surprised them, and validated our approach, was that we achieved enterprise-grade accuracy not through bigger models or more compute power, but through better architecture: human expertise embedded in the knowledge base, continuous supervision loops, retrieval systems that find verified information rather than generating it from scratch.
Similar AI could be especially useful within certain segments of finance, particularly legal and compliance. The challenge? Multiple languages, vast amounts of reports and data, and absolute requirements for accuracy. Sound familiar?
This is where AI stops being a consumer curiosity and becomes enterprise infrastructure.
Here's the lesson that emerged from presenting Anna to enterprise audiences: the future of AI in sensitive contexts isn't about replacing human expertise; it's about extending it.
Dr. Riskó's 25 years of clinical experience can now reach thousands of patients she'll never meet in person. Her knowledge doesn't replace psycho-oncologists or therapists, but it provides guidance in the spaces between formal care. That's the model.
The key is the supervision loop. Anna gets smarter because an expert reviews conversations and feeds improvements back into the system. That's not a nice-to-have; that's the core mechanism that maintains accuracy and relevance over time.
AI without human oversight in sensitive contexts is reckless. AI with expert supervision is transformative.
This also means AI adoption in regulated industries will look different than consumer AI. It's not about moving fast and breaking things, Silicon Valley-style. It's about moving deliberately and building trust: trust that comes from transparency, accuracy, human accountability, and proven results.
The enterprise presentation format itself was significant: the Women in IT group invited us specifically because they understood that AI in healthcare, particularly in emotionally sensitive areas like cancer care, represents a use case worth studying. Not flashy autonomous vehicles or creative text generation, but quiet, profound improvements in human care delivery.
That's the signal for anyone building AI in regulated spaces: focus on accuracy, build in expert oversight, design for the hardest use cases, and the enterprise world will pay attention.
We're expanding Anna's language coverage, developing audio interfaces, and exploring applications beyond psycho-oncology. But the core philosophy remains constant: AI should amplify human expertise, not replace it.
This isn't just our approach; it's what every conversation with enterprise decision-makers reinforces. They don't want AI that makes autonomous decisions in high-stakes contexts. They want AI that helps experts work better, reach more people, and maintain quality at scale.
If you're building AI for healthcare, finance, legal, or any regulated industry, the Anna model offers a template: curated knowledge bases, RAG architecture for accuracy, continuous expert supervision, and ruthless focus on solving real problems for real people.
The investment bank presentation crystallized something Dr. Riskó has always known from clinical practice: technology succeeds when it serves human needs without pretending to replace human wisdom. Anna doesn't replace psycho-oncologists; it extends their reach. That's the difference between AI hype and AI that actually helps.
Cancer patients get 24/7 support. Enterprise teams get a model for AI accuracy in regulated contexts. And Dr. Riskó gets to see her life's work reach people she could never have helped otherwise.
That's when AI meets compassion. That's what happens when you build technology that respects both the power of AI and the irreplaceable value of human expertise.
Try Anna yourself at www.psychooncologyhub.com
If you're working on AI in regulated industries or healthcare, I'd love to hear about the challenges you're facing. The lessons we learned building Anna might apply to your context too.