Project Lifeboat
Rebuilding Healthcare
from the Ground Up
An AI-first healthcare system. Acquire clinics. Deploy Kairos. Own clinicians, EHR, AI, and data. Sell to employers. Repeat in every city. The Kaiser model rebuilt on AI.
"If we were to rebuild the healthcare system from scratch, with today's tools and patients, we would build something very different."
We Can't Fix Healthcare,
We Have to Rebuild It
For ten thousand years, healthcare has been built on a single assumption: that medical knowledge is scarce.
This was true for almost all of that history. If you got sick in ancient Rome, in medieval England, in 1950s America, the bottleneck was always the same — you needed to sit in front of someone who knew things you didn't. A shaman, a physician, a specialist. The entire apparatus of modern healthcare — the appointments, the referrals, the waiting lists, the insurance networks, the hospital systems — is downstream of this one fact. Knowledge was rare, therefore the humans who carried it were rare, therefore their time was the scarcest resource in the system.
Large language models now hold, within their weights, effectively the entirety of clinical human knowledge. Not approximately. Not “a useful subset.” The whole thing. And for the first time in history, you can have a realistic, sustained, deep medical conversation with a non-human intelligence. Not a chatbot that pattern-matches symptoms to WebMD articles. An actual diagnostic conversation — the kind where context accumulates, where family history matters, where the AI notices that your haemoglobin dropped nearly 40 points over six months even though both readings were technically “normal.”
This changes everything. Not incrementally. Fundamentally.
But here’s what’s actually happening instead: we’re building faster horses.
Henry Ford’s famous insight — “If I had asked people what they wanted, they would have said faster horses” — has become a cliché, but it describes the current state of healthcare AI with painful precision. Every major health system in the world is trying to bolt AI onto existing workflows. Make the EHR a bit smarter. Help the doctor write notes faster. Summarise the discharge letter. Triage the inbox.
These are not bad things. But they completely miss the point. They’re like Blockbuster putting a recommendation engine on their in-store kiosks in 2006.
Let me tell you what an AI-native healthcare system would look like.
Seventy to eighty percent of correct diagnoses come from what the patient says alone. The history. The symptoms. The family background. The context. Migraine has no imaging, no blood test — it’s pure history. Almost every psychiatric condition is diagnosed entirely through conversation. Even much organic pathology yields to history alone: a 20-pack-year smoker with a wheeze and a productive cough has COPD, and there isn’t much ambiguity about it.
This means the most important act in medicine — the diagnostic conversation — is precisely the thing AI can now do. And not just do, but do continuously, contextually, with perfect memory, at any hour, in any language, for any number of patients simultaneously.
Now layer on top of that wearable biosignals — heart rate variability, sleep patterns, weight trends — and you have something no human doctor has ever had: longitudinal awareness. Not a snapshot every six months when the patient finally books an appointment. A continuous signal.
With AI — Tomorrow
A 55-year-old man, previous smoker. One morning he mentions to his AI that his throat felt scratchy when he swallowed. The AI has been watching. It knows his HRV has been declining for six weeks. He’s lost 2–3 kilos unintentionally. His father died of a metastatic malignancy at 56. So the AI doesn’t say “give it a week.” It orders bloods. The FBC comes back with a haemoglobin of 130 — technically normal, but the AI knows this man used to run at 169. A drop of nearly 40 points is significant. FOB positive. Endoscopy arranged. Stage 1 oesophageal cancer. Surgery within two weeks. Three weeks end to end.
Without AI — Today
That same man ignores the scratchy throat for months. Maybe years. He loses more weight. Eventually drags himself to his GP, who tells him it’s probably nothing. Twice. Three times. Two years later he has stage IV oesophageal cancer that’s metastasised to the liver. He’ll never return to work. His chemotherapy costs the system 20 to 50 times what that single surgery would have cost. And he still dies.
The difference between these two stories isn’t a technology gap. It’s a systems gap.
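The “technically normal, but abnormal for this patient” logic in the story above can be made concrete. What follows is a purely illustrative Python sketch: the function name, thresholds, and data shapes are assumptions for exposition, not Kairos internals.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Observation:
    when: date
    value: float  # haemoglobin, g/L

def flag_baseline_drop(history: list[Observation], latest: Observation,
                       ref_low: float = 130.0,
                       drop_threshold: float = 0.15) -> bool:
    """True if the latest value sits inside the population reference range
    yet has fallen by more than drop_threshold (fractionally) from this
    patient's own historical baseline."""
    if not history:
        return False  # no personal baseline to compare against
    baseline = sum(o.value for o in history) / len(history)
    in_range = latest.value >= ref_low
    relative_drop = (baseline - latest.value) / baseline
    return in_range and relative_drop > drop_threshold

# The man in the scenario: historical readings around 169, latest 130.
history = [Observation(date(2035, 1, 10), 169.0),
           Observation(date(2035, 7, 2), 165.0)]
latest = Observation(date(2036, 1, 15), 130.0)
print(flag_baseline_drop(history, latest))  # True: normal, but a big personal drop
```

The design point is that the reference range and the personal baseline are two different tests; today's system only applies the first.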
There’s a beautiful Greek distinction between two concepts of time. Chronos is the steady march — the ticking clock, the appointment at 2:30 on Thursday, the six-month follow-up you might or might not attend. Kairos is the opportune moment — the right time, the moment of readiness.
Current healthcare operates entirely in Chronos. You see your doctor when there’s a slot. You get your scan when the waiting list permits. You present with symptoms when they’re bad enough to overcome the friction of booking an appointment, taking time off work, sitting in a waiting room.
An AI healthcare system operates in Kairos. It meets you in the moment you need it. On your walk to work. At 2am when you can’t sleep and you’re worried about that lump. In the accumulated pattern of six months of subtle biosignal changes, long before you feel anything at all.
From episodes of care to a continuous stream. No one is ever “lost to follow-up.” No letter goes missing. No patient falls through the gaps. A continuous companion, a continuous diagnostician, a continuous preventative health system — always on, always there, as much or as little as you need.
So why isn’t anyone building this?
The honest answer is that the barriers aren’t technical. They’re structural. Healthcare is perhaps the most structurally defended industry on earth. Consider what you’re up against: clinician misalignment, embedded EHR vendors with multi-year contracts, SaaS companies defending their margins, procurement bureaucracies designed to prevent change, regulatory regimes built for a pre-AI world, insurance models that profit from complexity, and governments running health systems with the agility of aircraft carriers.
Every single one of these actors has rational reasons to resist change. And the system is designed so that any innovation has to get permission from all of them simultaneously. This is why every large health system’s AI strategy amounts to “pilot projects.” Nothing that threatens the core operating model.
This is exactly what happened to the entertainment industry before Netflix. Warner Brothers tried to adapt. Blockbuster tried to adapt. They ran incremental experiments within their existing business models. It didn’t work, because the existing model was the problem. You can’t Netflix-ify a video rental store. You have to build Netflix.
The same is true here. You cannot incrementally transform a healthcare system built on the assumption that knowledge is scarce into one built on the assumption that knowledge is abundant. The architecture is wrong at every level. You have to start from scratch.
What does “from scratch” actually mean?
Start with the patient. Not the provider, not the payer, not the regulator. The patient.
Give them an AI-native electronic health record that they own. It ingests their wearable data. It holds their complete medical history, every prescription, every interaction with any healthcare professional, all logged and auditable. It knows their family history because it’s been building that picture over years of conversation.
This system is their first point of contact for any health concern. Not a phone queue. Not a receptionist who writes nothing down. An AI that listens, remembers, contextualises, and acts. It can order blood tests to your home. It can arrange imaging. It can escalate to a human specialist when — and only when — a human specialist is actually needed.
There’s another argument for AI-first healthcare that doesn’t get enough attention: safety.
We don’t log most of what happens in healthcare. We log what clinicians write down, if they write anything down at all. Receptionists don’t document their interactions. Patients don’t document theirs. Phone conversations, corridor consultations, the GP who glances at a result and moves on — none of this creates an auditable trail.
In the Swiss cheese model of medical error, these undocumented interactions are the biggest holes. Patients fall through them constantly. A missed letter. A result that nobody reviewed. A referral that was never sent. A conversation that was never recorded.
An AI system logs everything. Every interaction, every decision, every recommendation, every piece of context that informed that recommendation. Not because it’s trying to create a surveillance system, but because that’s simply how software works. The audit trail is a natural byproduct, not an additional burden. And that makes the system dramatically safer.
The business model for this is surprisingly straightforward.
Start with direct-to-consumer primary care. Ninety percent of all NHS contacts start and end in primary care. Build something so good that people will pay for it out of pocket — which tells you immediately whether you’ve actually built something patients want, as opposed to something a procurement committee approved.
Add specialty care as a turnkey consultation service. Then vertically integrate. Diagnostics first — blood work, pathology. Then imaging. Bring it all in-house. One operational model, one system, across any region, any scale. What would have taken 20 to 30 years to build as a traditional healthcare company, you build in 2 to 3 years, because AI means a small team can operate at the scale of a large organisation.
I should be honest about what makes this hard.
It will cost an enormous amount of money. Healthcare infrastructure — even AI-native healthcare infrastructure — requires real capital. Regulatory clearance in multiple jurisdictions is slow and expensive. Building trust with patients takes time. The political pressure will be immense, because you’re implicitly arguing that the existing system is failing, which it is, but nobody in power wants to hear that.
And there are genuine clinical safety questions that need rigorous answers. When does the AI escalate? How do you validate diagnostic accuracy at population scale? How do you handle the long tail of rare conditions? How do you ensure the system doesn’t subtly optimise for efficiency at the expense of the edge cases that matter most?
These are serious problems. But they’re engineering problems and operational problems. They’re not “is this possible?” problems. The gap between what AI can do today and what the healthcare system actually delivers to patients is so vast that even a cautious, safety-first AI system would represent a massive improvement over the status quo for the majority of patients.
The deepest reason to build this is moral, not commercial.
My uncle died of a missed cancer diagnosis. Repeated errors. The kind of thing that happens when a system built on scarce human attention fails in the way it was always going to fail — not through malice, but through the accumulated weight of too many patients, too little time, too many cracks to fall through.
That didn’t have to happen. And with the technology that exists today, it doesn’t have to happen to anyone else. But it will keep happening — every day, to thousands of people — as long as we keep trying to patch a system whose foundational assumption is no longer true.
Knowledge is no longer scarce. Time is no longer the binding constraint. The moral imperative is to rebuild — not reform, not optimise, not digitise — rebuild healthcare from the ground up, for the first time in ten thousand years.
We have the tools. We have the understanding. The only question is whether we have the nerve.
What We Stand For
Every failure is captured, analysed, and used to make the system safer.
Build our own everything. Our stack, our data, our models, our destiny.
Safety above profit, every time, without exception.
We are becoming the system, not selling tools to a broken one.
Every claim traceable to source evidence. Every diagnosis verifiable.
100% of interactions recorded. Healthcare as accountable as a courtroom.
Executive Summary
The Opportunity
Healthcare has always been predicated on arcane knowledge — elite, held by a few, requiring physical proximity to access. Large language models have upended that premise. Intelligence is now freely available. We are moving into the Netflix era of healthcare — always accessible, consumer-centred, streaming 24/7.
Last year a 32-year-old woman attended her GP multiple times with shortness of breath, chest pain and a swollen leg. She'd just started the oral contraceptive pill. She was prescribed propranolol for 'anxiety'. She went home, and she died. She had a massive pulmonary embolism. At the exact moment she attended for the fourth time, ChatGPT would have diagnosed a PE and recommended immediate A&E review.
Today
Tomorrow
The Thesis
The pivot is two-pronged:
- Maximise the existing business — hyper-aggressive pricing, regulatory leverage, expanded partnerships
- Build an AI-first direct-to-consumer healthcare provider — own clinicians, EHR, AI and models
The Ask
Background: Why Now
The Generative AI Inflection Point
For the first time, machines can reason about complex, ambiguous, multi-factorial clinical problems with competence that approaches — and in narrow domains exceeds — the average human practitioner. This is not incremental progress. This is a phase change.
Arcane Knowledge Is Now Free
A GP must hold thousands of conditions, drug interactions, guidelines, and pathways in working memory. An LLM does this trivially, with perfect recall, at zero marginal cost.
Capped vs Uncapped
Three forces constrain AI deployment: clinician preferences, EHR business practices, and procurement cycles. These barriers are inherent to working within the existing system. By becoming the healthcare provider, we remove all three.
TORTUS Today
Competitive Position
- Regulatory assets: CE marking, DTAC, DCB0129, NHS assured, ISO13485, Class IIA
- Publications: Peer-reviewed clinical validation
- Clinical credibility: Founded by a practising NHS doctor
- Enterprise relationships: Live NHS trust contracts
The Three Barriers
Clinician preferences — adoption requires changing deeply ingrained workflows
EHR business practices — closed ecosystems, data lock-in, integration friction
Government procurement — 6-18 month cycles, constrained budgets, death by committee
Technical Approach
Multi-Agent Architecture: Kairos
Kairos is not a single model. It is a multi-agent system where specialised components collaborate under clinical supervision — 117+ structured pathways aligned to NICE/SIGN guidelines, with dual-pathway re-matching mid-consultation.
Clinical Pathways
117+ structured pathways covering the top primary care presentations. Each defines red flags, NICE/SIGN references, risk-stratified investigations, referral triggers, and safety netting — aligned with Eolas official guidelines. Dual-pathway architecture enables mid-consultation re-matching when clinical picture evolves.
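A structured pathway of this kind can be thought of as data plus a matching score. The schema and the keyword-overlap heuristic below are illustrative assumptions, not the actual Kairos pathway format; the guideline identifiers are examples only.

```python
from dataclasses import dataclass

@dataclass
class Pathway:
    name: str
    guideline_refs: list[str]   # e.g. NICE/SIGN identifiers
    red_flags: set[str]         # findings that force escalation
    keywords: set[str]          # presentation features used for matching

    def score(self, features: set[str]) -> int:
        # Simple overlap heuristic between pathway keywords and the
        # features extracted so far from the consultation.
        return len(self.keywords & features)

def match_pathways(pathways: list[Pathway], features: set[str], top_n: int = 2):
    """Dual-pathway matching: keep the top two candidates so the system
    can re-match mid-consultation as the clinical picture evolves."""
    ranked = sorted(pathways, key=lambda p: p.score(features), reverse=True)
    return ranked[:top_n]

chest_pain = Pathway("chest pain", ["NICE CG95"],
                     red_flags={"crushing pain", "radiation to arm"},
                     keywords={"chest pain", "breathless", "sweating"})
pe = Pathway("pulmonary embolism", ["NICE NG158"],
             red_flags={"haemoptysis", "syncope"},
             keywords={"breathless", "pleuritic pain", "leg swelling"})

features = {"breathless", "leg swelling", "pleuritic pain"}
print([p.name for p in match_pathways([chest_pain, pe], features)])
# ['pulmonary embolism', 'chest pain']
```

Holding two live candidates rather than one is what allows the re-match when a new feature (a swollen leg, say) flips the ranking mid-conversation.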
Observability
100% of conversations recorded. Every clinical decision logged with its reasoning chain. Every AI statement linked to source evidence. This level of observability is impossible in traditional healthcare, where fewer than 5% of GP consultations are recorded.
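A “reasoning chain linked to source evidence” record is, mechanically, just a structured log entry. This minimal sketch shows the shape such a record might take; the JSON structure and field names are assumptions for illustration, not the production schema.

```python
import json
from datetime import datetime, timezone

def audit_record(statement: str, reasoning: list[str], evidence: list[str]) -> str:
    """Serialise one AI statement with the reasoning steps behind it and
    the evidence sources it cites, as an append-only log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "statement": statement,
        "reasoning_chain": reasoning,  # ordered steps that led to the statement
        "evidence": evidence,          # guideline / result identifiers
    }
    return json.dumps(record)

line = audit_record(
    "Recommend FBC and FOB testing",
    ["unintentional weight loss", "Hb fell sharply from personal baseline"],
    ["NICE NG12 suspected cancer: recognition and referral"],
)
print(json.loads(line)["statement"])  # Recommend FBC and FOB testing
```

Because every statement is written this way as a side effect of normal operation, the audit trail costs nothing extra to produce, which is the point the paragraph above makes.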
Clinical Approach
A New Paradigm
AI does the 80% that doesn't require a doctor, so doctors can focus on the 20% that does. Patients get unlimited time with an AI that never rushes, never forgets, never gets tired.
The Five-Step Clinical Flow
Patient Context — demographics, PMH, medications, allergies, previous encounters
Presenting Complaint — clinical essence, red flags, structured note
Investigations — risk-stratified labs and imaging, clinician-confirmed ordering
Patient Contact — optional call with recording, transcription, verified notes
Decisions — AI assessment verification, NICE guidelines, referrals, sign-off
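The five steps above can be sketched as an explicit pipeline in which each step adds to a shared case record and investigations only fire after clinician confirmation. Everything here, the field names, the toy triage rule, is an assumption made for illustration.

```python
def run_clinical_flow(case: dict, clinician_confirms) -> dict:
    """Toy walk-through of the five-step flow; clinician_confirms is a
    callback standing in for the human sign-off on each order."""
    # Step 1: patient context
    case["context"] = {"pmh": case.get("pmh", []), "meds": case.get("meds", [])}
    # Step 2: presenting complaint distilled into a structured note
    case["complaint"] = {"essence": case["presenting"], "red_flags": []}
    # Step 3: risk-stratified investigations, clinician-confirmed before ordering
    proposed = ["FBC"] if "fatigue" in case["presenting"] else []
    case["orders"] = [test for test in proposed if clinician_confirms(test)]
    # Step 4: patient contact is optional and skipped in this sketch
    # Step 5: decision and sign-off
    case["decision"] = {"signed_off": True}
    return case

result = run_clinical_flow({"presenting": "fatigue and weight loss"},
                           clinician_confirms=lambda test: True)
print(result["orders"])  # ['FBC']
```

The structural choice worth noting is that ordering is gated by the callback: nothing reaches the lab without a human in the loop, which matches the v1 model described under Safety Architecture.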
Safety Architecture
Multiple independent layers — no single point of failure can cause patient harm.
Human-in-the-Loop Evolution
v1 (now) — clinician reviews every case, approves all orders, signs off every encounter
v2 (future) — clinician reviews high-risk and flagged cases; routine cases expedited
v3 (long-term) — AI manages routine care autonomously; clinician oversight is statistical
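The v1 → v3 evolution is, at bottom, a review policy. A minimal sketch, with the risk tiers and flag semantics assumed for illustration:

```python
def requires_clinician_review(version: str, risk: str, flagged: bool) -> bool:
    """Decide whether a human clinician must review this encounter,
    mirroring the three-stage human-in-the-loop model."""
    if version == "v1":
        return True                       # every case reviewed and signed off
    if version == "v2":
        return flagged or risk == "high"  # only high-risk or flagged cases
    if version == "v3":
        return False                      # per-case review replaced by
                                          # statistical (sampled) oversight
    raise ValueError(f"unknown version: {version}")

print(requires_clinician_review("v1", "low", False))   # True
print(requires_clinician_review("v2", "low", False))   # False
print(requires_clinician_review("v2", "high", False))  # True
```

Framing the evolution as a single policy function makes the safety argument auditable: the conditions under which a human sees a case are explicit code, not convention.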
Product & Roadmap
What the Patient Gets
What the Clinician Gets
Phased Rollout
Commercial & Go-to-Market
The Model: Buy, AI-fy, Sell
Acquire private primary care practices. Deploy Kairos AI across them. Sell AI-powered healthcare to employers. Expand the panel. Repeat in the next city.
At Acquisition
$540K revenue
15-20% margin
Post-Kairos (12mo)
$2.4M revenue
40-50% margin
US: MSO/PC Architecture & DPC Roll-Up
MSO (Lifeboat Health Inc) — Delaware C-Corp. Owns technology, brand, employer contracts, all economics.
PCs — one per state, owned by licensed physician partner with MSO equity alignment.
IMLC Compact — two physicians = 41-state telehealth coverage in 4 months.
Employer Health Benefit
UK: CQC + TORTUS GP Network
CQC registered. 10,000+ GPs already on the TORTUS platform — many want to go private. We provide the CQC umbrella, AI stack, billing, and governance. They join our network. Instant distribution.
Roll-Up Targets
Strategic Partnerships
Financials
Path to First Patient
$50M Series A — Use of Funds
Unit Economics
Series A → B Bridge
Team
This is not a team that needs to learn healthcare. The board combines frontline clinical AI experience, the governance of some of the UK's largest health systems, and deep US healthcare venture expertise.
Machine Learning & AI
Engineering
Commercial
| Role | Timeline | Location | Rationale |
|---|---|---|---|
| CTO | Q1 2026 | London (hybrid) | Architecture ownership, Kairos & Anamnesis platform |
| COO | Q1 2026 | London | Clinical ops, CQC, clinic M&A, physician recruitment |
| US General Manager | Q2 2026 | NYC | MSO/PC set-up, IMLC, NYC & SF practice acquisitions |
| Principal AI Scientist | Q3 2026 | London / Remote | Anamnesis: memory architecture, continuous training, sleep cycles |
| VP Operations (per city) | Q2+ 2026 | London / NYC / SF | City GM: acquire, transform, expand, sell to employers |
| US Legal / Regulatory | Q2 2026 | US | HIPAA, FDA, state-by-state licensing, CPOM compliance |
| Mental Health Lead | Q3 2026 | London / NYC | Therapist recruitment, AI-CBT, mental health pathways |
Organisational Design for Speed
Structured around two principles typically in tension: move exceptionally fast and never compromise clinical safety. The resolution is small, autonomous squads with clinical safety as a hard constraint.
Engineers who understand clinical context, clinicians who understand technology. Decision-making pushed to the edge. Clinical safety has veto power at every stage: the Clinical Advisory Board can halt any deployment.
Compliance & Regulatory
Existing Regulatory Assets
This groundwork transfers directly and represents years of work competitors must replicate.
UK Framework
CQC — registration required. Named Registered Manager, clinical governance, regular inspections.
MHRA — AI system classification analysis underway. Existing CE marking provides foundation.
US Framework
MSO/PC structure — Corporate Practice of Medicine doctrine navigated via Management Services Organisation + Professional Corporations, one per state.
IMLC Compact — 40 states covered with single physician application in 2-3 months. California separate track.
DPC Act — Direct Primary Care explicitly exempted from insurance regulation in most states.
HIPAA — BAAs with all vendors, encryption, audit logging, regional data federation.
FDA — physician-accountable model keeps us outside SaMD classification initially. Pre-submission meetings budgeted.
1, 2, 5 Year Strategy
The Anti-Investment Case
An honest investment thesis must confront its own weaknesses directly.
Against: Uncharted pathway for AI clinicians. Multi-jurisdiction complexity. Response: CE, DTAC, DCB0129 already held. Human-in-the-loop = clinician-delivered care. MSO/PC structure is proven US law. IMLC covers 40 states. Engaging regulators proactively.
Against: AI errors can kill. Response: 6 independent safety layers. 100% auditability. Not "AI vs perfect doctor" — "AI-assisted vs current system" which kills thousands annually. Every consultation recorded and verifiable.
Against: Acquiring 10 practices in 3 countries is complex. Response: DPC practices are small, simple acquisitions ($300-600K each). Playbook repeatable. Risk bounded — each practice generates revenue immediately.
Against: Patients may not trust AI yet. Response: 7.6M NHS waiting list. 30M uninsured Americans. Employer healthcare costs up 7% YoY. Market is desperate, not sceptical.
Against: Big tech and US startups. Response: Big tech won't take clinical liability. Healthcare is local. We have a 3-year regulatory head start. No competitor has the buy-deploy-scale clinic playbook.
Against: $50M is a lot. Response: $8M of it buys $6M in recurring revenue from day one. This is capital deployment with immediate cash flow, not pure R&D burn.
The World of 2036
The Integrated Stack
Ten years from now, healthcare will no longer be delivered in episodes. It will be a continuous stream from a single, vertically integrated system. The Kaiser model rebuilt on AI.
Project Anamnesis
The endgame is not a better chatbot. It is a medical superintelligence that learns from every encounter, remembers every patient longitudinally, and consolidates knowledge the way human memory does — through something analogous to sleep.
Every encounter stored verbatim. Perfect recall of every conversation, every symptom, every test result.
Patterns extracted across millions of encounters. Population-level learning that no individual clinician could achieve.
Periodic training cycles that replay, promote, and prune — like biological sleep. The model doesn't degrade; it deepens.
US data stays in US, UK in UK, EU in EU. De-identified learning signals flow globally. Privacy by design.
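The federation rule stated above reduces to a small predicate: raw records never leave their home region, while de-identified learning signals may flow anywhere. A sketch, with field names assumed for illustration:

```python
def can_transfer(record: dict, dest_region: str) -> bool:
    """Privacy-by-design transfer rule: de-identified signals flow
    globally; identifiable data stays in its region of origin."""
    if record["deidentified"]:
        return True
    return record["region"] == dest_region

raw_uk = {"region": "UK", "deidentified": False}
signal = {"region": "UK", "deidentified": True}

print(can_transfer(raw_uk, "US"))  # False: raw UK data stays in the UK
print(can_transfer(raw_uk, "UK"))  # True
print(can_transfer(signal, "US"))  # True: de-identified signal flows
```

Keeping the rule this simple is deliberate: a transfer policy that fits in one function is one that can be enforced at every data boundary and audited at a glance.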
A Patient Journey in 2036
Kairos notices your HRV declining for six weeks via wearable. It proactively checks in — it has known you for years.
You mention a scratchy throat. Kairos remembers your father's cancer at 56. It doesn't say "give it a week."
Within 30 minutes: bloods ordered to your home. Your haemoglobin is 130 — normal, but Kairos knows you used to run at 169.
FOB positive. Endoscopy arranged same week. Your physician reviews — Kairos has already prepared the full case.
Stage 1. Surgery within two weeks. Three weeks end to end. Without Kairos, this was Stage IV in 18 months.
Kairos monitors your recovery for years. You are never lost to follow-up. You are never alone with your health again.
Why TORTUS / Lifeboat
- A regulated healthcare AI company, already generating revenue, already trusted by the NHS
- A founder who is a practising NHS doctor who understands clinical reality at a visceral level
- Core technology built and validated — this prototype is evidence
- Regulatory groundwork that competitors will spend years replicating
- Clinical credibility that pure tech companies cannot buy
- Access to 10,000+ GPs through TORTUS — instant distribution for the new model
- Anamnesis: a roadmap to medical superintelligence no competitor has articulated