Responsible innovation

Why AI in healthcare only works when patients trust it

AI can radically improve healthcare. But technology that is not trusted will not be used. And AI that is not used helps no one. Trust is not a byproduct — it is the prerequisite.

By Niels Roest · 8 min read
The story

Miriam and the AI she never uses

Miriam is a nurse in a mental health institution. Her employer recently introduced an AI module that automatically suggests care plans based on client history. The intention: less paperwork, more time for conversations.

Miriam doesn’t use the system. Not because she’s technically illiterate — she’s been working with digital systems for years. But because she doesn’t know how the AI reaches its conclusions. “What if it misses something,” she says. “Then I’ve put my signature under a plan I didn’t fully create myself.”

Miriam is not the exception. Research shows that a significant proportion of healthcare professionals systematically ignore AI recommendations — not due to resistance to technology, but due to an informed lack of trust. And that lack of trust is justified: there have been too many cases of AI systems that were accurate at the population level but failed for specific patient groups that were underrepresented in the training data.

The paradox of useless AI

An AI system that is not trusted will not be used. An AI system that is not used delivers no value — regardless of how accurate it is. Trust is therefore not a soft value, but a hard implementation prerequisite.

The core

What makes AI trustworthy?

Trust in AI is not a feeling. It is a result of concrete technical and organizational choices. Four elements are decisive.

1. Explainability

A healthcare provider must be able to understand why an AI system makes a certain recommendation. Not at the detail level of the calculation, but at the level of reasoning: “Based on comparable client profiles with these characteristics, the system suggests treatment option X.” That explanation enables critical evaluation — and it aligns with the GDPR, which requires that people subject to automated decision-making receive meaningful information about the logic involved.

2. Human oversight as architecture

Human-in-the-loop is not a feature you can turn off — it is an architectural choice. In trustworthy healthcare AI, the professional always decides. The system presents options, provides context and points out risks. The clinician validates, corrects or rejects. That is not a limitation of AI; it is the reason AI can be deployed in a domain where errors affect human lives.

3. Data security and privacy as foundation

Patients trust healthcare institutions with the most sensitive information there is. That data must never be used for purposes the patient has not approved. Privacy-by-design — data protection as a built-in architectural choice, not an afterthought — is the minimum standard. NEN 7510 and ISO 27001 provide the framework.

4. Consistency and reproducibility

Trust grows through repeated positive experiences. An AI system that consistently makes comparable and logical recommendations in similar situations earns professional trust over time. Random or unexplainable variation immediately undermines that trust — and rightly so.

Scientific framework

Safe-by-design: the scientific foundation

Yoshua Bengio, Turing Award winner and one of the most cited scientists in the world, has a clear position: the current generation of frontier AI systems is “opaque and misaligned with human goals.” Through his nonprofit initiative LawZero, he is working on what he describes as a “fundamentally new form of advanced AI, designed to be trustworthy and safe.”

That sounds abstract, but the translation to healthcare is concrete. Safe-by-design means:

No autonomous self-correction

AI systems that correct themselves or reinterpret instructions without human intervention are dangerous in healthcare. Safe AI does not do more than it was designed to do.

Transparent goal structure

The system works toward explicit, verifiable goals — not optimized proxies that may deviate from the real goal. In healthcare: better patient outcomes, not higher billing numbers.

Demonstrable behavior

Every decision is traceable. Not only for the regulator, but also for the healthcare worker who works with it daily.

Correctable by design

When an AI system makes an error, a human can correct it — and that correction is retained. The system learns from human feedback, not just from data.
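A minimal sketch of what "correctable by design" can mean in data terms (all names are illustrative assumptions): a human override is stored as a first-class record that later components must consult, rather than being silently discarded:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class Correction:
    """A human override, stored as data the system must honor."""
    suggestion_id: str
    corrected_by: str
    original: str
    replacement: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class CorrectionLog:
    """Corrections are appended, never overwritten or dropped."""

    def __init__(self) -> None:
        self._entries: list[Correction] = []

    def record(self, correction: Correction) -> None:
        self._entries.append(correction)

    def overrides_for(self, suggestion_id: str) -> list[Correction]:
        """Components consult this before repeating a suggestion."""
        return [c for c in self._entries
                if c.suggestion_id == suggestion_id]
```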

This is not the image the technology sector always promotes. But it is the image healthcare needs. And it is increasingly what regulators require: the European AI Act classifies AI in healthcare as “high risk” and sets high standards for transparency, auditability and human oversight.

Our approach

How CareHub implements this

You don’t build trust with a whitepaper. You build it with choices you make daily in the architecture of your systems. This is how the CareHub ecosystem has embedded those choices.

01

AI only where it fits

Not every care action is suitable for AI support. CareHub distinguishes between processes where AI adds value (administrative routines, data search, pattern recognition) and processes where human judgment is irreplaceable (treatment decisions, risk assessment, client relationships).

02

Explanation with every recommendation

Every AI recommendation in the CareHub platform includes an explanation at an understandable level. No black-box output, but context: what data the suggestion is based on, which factors weigh heavily, and what the confidence level is.

03

Complete audit trail

Every AI interaction is logged: what the system suggested, what the healthcare provider decided, and why. This audit trail is available to the organization, the regulator, and — upon request — to the client themselves.

04

No data export for model training

Client data stored in the CareHub platform is never used for training AI models outside the organization itself. Data sovereignty is a hard requirement, not a marketing promise.
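Points 02 and 03 can be sketched together. This is an illustration only — the field names are assumptions, not CareHub's actual schema — of a recommendation that carries its own explanation, and an audit entry that records both the suggestion and the clinician's decision:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ExplainedRecommendation:
    """A recommendation that carries its own explanation."""
    suggestion: str
    based_on: list[str]            # which data the suggestion used
    key_factors: dict[str, float]  # factor -> relative weight
    confidence: float              # between 0.0 and 1.0


@dataclass
class AuditEntry:
    """What the system suggested, what the clinician decided, and why."""
    recommendation: ExplainedRecommendation
    clinician: str
    decision: str   # "accepted", "corrected", or "rejected"
    reason: str

    def to_log_line(self) -> str:
        """Serialized so organization, regulator, and client can read it."""
        return json.dumps(asdict(self), ensure_ascii=False)
```

Because the explanation travels inside the recommendation and the audit entry embeds both, the trail answers the three questions the article names: what was suggested, what was decided, and why.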

Back to Miriam

If the AI system in her institution had explained how it arrived at its suggestions, had only asked for her signature after explicit validation, and had given her the ability to steer — she probably would have used it. That is the difference safe-by-design makes in practice.

Trust is the real innovation

The technology is there. The challenge is earning trust. Healthcare organizations that implement AI based on transparency, human oversight and demonstrable safety will see their staff embrace it. And that is the moment AI truly starts delivering value.

How does your organization deploy AI responsibly?

Discover how the CareHub ecosystem integrates AI in a way your staff can trust and that protects your clients. Get in touch for a conversation.

Get in touch