About UNCAPT

We started with a question.

Why does the world’s best expertise remain inaccessible to almost everyone who needs it — even as AI becomes ubiquitous? The answer isn’t the technology. It’s that nobody had built the right infrastructure to capture, govern, and scale expert reasoning.

The team

One mission.
The democratisation of genius.

Built with the best. Available to everyone. Improving with every use.

Ebenezer Eyeson-Annan
Co-Founder & CEO

Built UNCAPT from the conviction that expert knowledge is the most valuable and most under-deployed resource in the world.

Jalal Radwan
Co-Founder & CTO

Twenty years of software engineering experience building complex B2B and B2C platforms used by millions of users, and managing high-performance engineering teams to deliver impactful outcomes.

Lloyd Barrett
Founding Partner

Co-founder focused on product strategy and the applied deployment of expert reasoning systems across high-stakes domains.

Nia Barnabie
Commercial Lead

Leads commercial strategy and partnerships, bringing domain leaders onto the platform.

Catherine Mandungu
Go-to-Market

Leads go-to-market strategy and revenue operations, turning growth goals into clear execution across commercial and enterprise environments.

Anthony Perumal
CFO & Company Secretary

Oversees financial operations, corporate governance, and strategic finance.

Jim Hutchin
Board Advisor

Julian Ward
Board Advisor

Simon Jones
Board Advisor

The founding insight

Every LLM was trained on what experts published.
We needed to capture what never made it there.

Est. 2020 · Sydney

UNCAPT was founded in Sydney in 2020 around a single observation: the world's leading domain experts had knowledge that could improve outcomes for millions of people — but no mechanism to deploy it at scale. Not because the knowledge was secret, but because it lived inside the way they reasoned, not inside any system that could be replicated.

LLMs changed what's possible. But they encode what was documented, not what was reasoned. The best expert judgment was never written down — and even when it was, no model had a mechanism to be corrected by the person who knows better. That's what was missing.

We started in forecasting and retail before the same problem surfaced in mental health — the domain where expertise is most constrained and the stakes are highest. MIA, built with the University of Sydney's Brain & Mind Centre, was the result.

We've since expanded to five products across health and research: CARA with iLA, ARI/Insight with UNSW and USYD, MATILDA with the Royal Flying Doctor Service, and NADIA with the UNSW International Centre for Future Health Systems. Each runs on the same platform. Each gets better with every case it sees.

How we think

Not values on a wall.

These are the constraints we design inside of.

01

Experts stay in the loop.

Every system we build is designed to keep experts involved — not to replace them. The value is in the expert's correction, not in the system's autonomy. We build around that.

02

Uncertainty is information.

When the system doesn't know, it says so — and escalates. We never average over uncertainty or hide it in a confidence score. An honest halt is more valuable than a confident wrong answer.

03

The knowledge belongs to the field.

We co-build with domain leaders, and the knowledge banks we build together belong to the field — versioned, auditable, updateable. We're not building a black box. We're building infrastructure.

04

Defensibility compounds.

Every correction the system captures makes it harder to replicate. We're not trying to build the best model — we're building the deepest data asset in each domain, generated by the experts who understand it best.

05

Healthcare first, not healthcare only.

We chose healthcare because the stakes are highest and the expertise most scarce. The platform is domain-agnostic. Healthcare taught us what it means to build a system where being wrong has consequences.

06

Every answer must be traceable.

Every output can be traced back to the reasoning that produced it, the knowledge that informed it, and the expert who last corrected that stage. Defensibility without explainability isn't defensibility — it's just opacity with a confidence score.

07

The training signal should cost nothing extra.

Every expert correction — reviewing a case, correcting a reasoning step, updating a guideline — becomes training data as a by-product of normal governance. No annotation pipeline. No separate process. The system improves because experts are already doing the job.

08

Knowledge should never be rebuilt from scratch.

Every lab re-contextualises the same literature. Every expert reconstructs the same reasoning. We treat accumulated knowledge as infrastructure — versioned, persistent, owned by the field that built it. The goal is permanence, not repetition.

Working in a domain where expertise is the bottleneck?

Every product on our platform started with a conversation.

Get in touch · How we build together →