The author is a director of the Centre of Artificial Intelligence Interaction Ltd, which is developing reflective AI tools including 3Friends.ai. The views expressed here reflect a broader interest in how AI can be used responsibly within complex human systems.
AI Is Not the Brain. It Is the Mirror.
Artificial Intelligence is rapidly entering healthcare, social care and justice systems.
It promises faster diagnostics, predictive insight, streamlined documentation and smarter decision-making.
But there is a risk we are not talking about enough:
If we add AI onto broken systems, we don’t eliminate dysfunction — we accelerate it.
Complex decisions in healthcare draw on different types of knowledge.
Cognitive scientist John Vervaeke describes four ways of knowing that shape human understanding: propositional (facts and evidence), procedural (skills and diagnostics), perspectival (lived experience and context) and participatory (reflexivity and meaning-making).
AI sits comfortably in the first two.
- It can synthesise evidence.
- It can detect patterns in large datasets.
- It can support diagnostics and flag risk signals.
In healthcare, that matters.
But it cannot replace the latter two.
- A diagnosis without context becomes a label.
- A risk score without a relationship misses the opportunity.
- A pathway without conversation becomes compliance.
The parts of practice that truly shape outcomes, such as trust, interpretation and meaning, remain profoundly human.
The Problem with the Mean
Most AI systems learn from large datasets and optimise toward the statistical average. Yet the outliers are our community, and healthcare has long amplified the mean.
So what happens when:
- the young person does not fit the criteria,
- the patient's trauma changes their presentation,
- the individual's life circumstances fall outside the model?
Evidence-based medicine has delivered extraordinary advances, but genuine insight often comes from understanding the outliers, not just the mean.
If AI simply amplifies the mean in a landscape full of outliers, it risks reinforcing bias rather than reducing it.
AI as Reflective Partner
Used well, AI can strengthen thinking rather than replace it.
This is the philosophy behind tools like 3Friends.ai, which position AI as a reflective partner rather than a decision-maker.
The aim is simple: to help professionals pause, consider different perspectives and avoid narrowing their thinking under pressure.
AI is not the brain. It is the mirror.
It reflects the systems we build and the assumptions we bring.
If we use it wisely, it can illuminate blind spots and support better decisions.
If we use it carelessly, it will simply scale the weaknesses we already have.
The responsibility is not technological but human.