Project Lifeboat
Rebuilding Healthcare
from the Ground Up
An AI-first healthcare system providing end-to-end community care, wrapping clinicians, data, EHR and AI into a single vertically integrated system, and building the world’s first recursive medical superintelligence.
Buy primary care clinics in London, New York and San Francisco. Replace their technology stack with our AI. Go direct-to-consumer at $100–200/month — all-inclusive primary care, bloods, scans, prescriptions, specialist opinions. In 30+ languages. Drive the cost of delivery to $46/month through AI optimisation. Scale to a million patients. Outcompete every healthcare system on value and outcomes.
“If we were to rebuild the healthcare system from scratch, with today’s tools and patients, we would build something very different.”
We Can't Fix Healthcare,
We Have to Rebuild It
For ten thousand years, healthcare has been built on a single assumption: that medical knowledge is scarce.
This was true for almost all of that history. If you got sick in ancient Rome, in medieval England, in 1950s America, the bottleneck was always the same — you needed to sit in front of someone who knew things you didn't. A shaman, a physician, a specialist. The entire apparatus of modern healthcare — the appointments, the referrals, the waiting lists, the insurance networks, the hospital systems — is downstream of this one fact. Knowledge was rare, therefore the humans who carried it were rare, therefore their time was the scarcest resource in the system.
Large language models now hold, within their weights, effectively the entirety of clinical human knowledge. Not approximately. Not “a useful subset.” The whole thing. And for the first time in history, you can have a realistic, sustained, deep medical conversation with a non-human intelligence. Not a chatbot that pattern-matches symptoms to WebMD articles. An actual diagnostic conversation — the kind where context accumulates, where family history matters, where the AI notices that your haemoglobin dropped 30 points over six months even though both readings were technically “normal.”
This changes everything. Not incrementally. Fundamentally.
But here’s what’s actually happening instead: we’re building faster horses.
Henry Ford’s famous insight — “If I had asked people what they wanted, they would have said faster horses” — has become a cliché, but it describes the current state of healthcare AI with painful precision. Every major health system in the world is trying to bolt AI onto existing workflows. Make the EHR a bit smarter. Help the doctor write notes faster. Summarise the discharge letter. Triage the inbox.
These are not bad things. But they completely miss the point. They’re like Blockbuster putting a recommendation engine on their in-store kiosks in 2006.
Let me tell you what the alternative would look like.
Seventy to eighty percent of correct diagnoses come from what the patient says alone. The history. The symptoms. The family background. The context. Migraine has no imaging, no blood test — it’s pure history. Almost every psychiatric condition is diagnosed entirely through conversation. Even much organic pathology is largely history-driven — a 20-pack-year smoker with a wheeze and a productive cough has COPD, and there isn’t much ambiguity about it.
This means the most important act in medicine — the diagnostic conversation — is precisely the thing AI can now do. And not just do, but do continuously, contextually, with perfect memory, at any hour, in any language, for any number of patients simultaneously.
Now layer on top of that the ability to integrate diagnostic data — imaging, blood trends, pathology results — and you have something no human doctor has ever had: longitudinal awareness. Not a snapshot every six months. A continuous, contextual understanding of your health. A cancer survivor getting a follow-up CT scan doesn’t just receive a binary “clear” or “not clear” — they can discuss with their AI what the subtle shadows mean, how the findings compare to six months ago, what the radiologist’s uncertainty actually implies for their prognosis. That conversation currently doesn’t happen because no human clinician has time for it.
With AI — Tomorrow
A 55-year-old man, previous smoker. One morning he mentions to his AI that his throat felt scratchy when he swallowed. The AI has been watching. It knows his HRV has been declining for six weeks. He’s lost 2–3 kilos unintentionally. His father died of a metastatic malignancy at 56. So the AI doesn’t say “give it a week.” It orders bloods. The FBC comes back with a haemoglobin of 130 — technically normal, but the AI knows this man used to run at 169. A drop of almost 40 points is significant. FOB positive. Endoscopy arranged. Stage 1 oesophageal cancer. Surgery within two weeks. Three weeks end to end.
Without AI — Today
That same man ignores the scratchy throat for months. Maybe years. He loses more weight. Eventually drags himself to his GP, who tells him it’s probably nothing. Twice. Three times. Two years later he has stage IV oesophageal cancer that’s metastasised to the liver. He’ll never return to work. His chemotherapy costs the system 20 to 50 times what that single surgery would have cost. And he still dies.
The difference between these two stories isn’t a technology gap. It’s a systems gap.
There’s a beautiful Greek distinction between two concepts of time. Chronos is the steady march — the ticking clock, the appointment at 2:30 on Thursday, the six-month follow-up you might or might not attend. Kairos is the opportune moment — the right time, the moment of readiness.
Current healthcare operates entirely in Chronos. You see your doctor when there’s a slot. You get your scan when the waiting list permits. You present with symptoms when they’re bad enough to overcome the friction of booking an appointment, taking time off work, sitting in a waiting room.
An AI healthcare system operates in Kairos. It meets you in the moment you need it. On your walk to work. At 2am when you can’t sleep and you’re worried about that lump. In the accumulated pattern of six months of subtle biosignal changes, long before you feel anything at all.
From episodes of care to a continuous stream. No one is ever “lost to follow-up.” No letter goes missing. No patient falls through the gaps. A continuous companion, a continuous diagnostician, a continuous preventative health system — always on, always there, as much or as little as you need.
So why isn’t anyone building this?
The honest answer is that the barriers aren’t technical. They’re structural. Healthcare is perhaps the most structurally defended industry on earth. Consider what you’re up against: clinician misalignment, embedded EHR vendors with multi-year contracts, SaaS companies defending their margins, procurement bureaucracies designed to prevent change, regulatory regimes built for a pre-AI world, insurance models that profit from complexity, and governments running health systems with the agility of aircraft carriers.
Every single one of these actors has rational reasons to resist change. And the system is designed so that any innovation has to get permission from all of them simultaneously. This is why every large health system’s AI strategy amounts to “pilot projects.” Nothing that threatens the core operating model.
This is exactly what happened to the entertainment industry before Netflix. Warner Brothers tried to adapt. Blockbuster tried to adapt. They ran incremental experiments within their existing business models. It didn’t work, because the existing model was the problem. You can’t Netflix-ify a video rental store. You have to build Netflix.
The same is true here. You cannot incrementally transform a healthcare system built on the assumption that knowledge is scarce into one built on the assumption that knowledge is abundant. The architecture is wrong at every level. You have to start from scratch.
What does “from scratch” actually mean?
Start with the patient. Not the provider, not the payer, not the regulator. The patient.
Give them an AI-native electronic health record that they own. It ingests their imaging, their blood results, their prescriptions, every interaction with any healthcare professional — all logged and auditable. It knows their family history because it’s been building that picture over years of conversation.
This system is their first point of contact for any health concern. Not a phone queue. Not a receptionist who writes nothing down. An AI that listens, remembers, contextualises, and acts. It can arrange blood draws at your home. It can book imaging. It can escalate to a human specialist when — and only when — a human specialist is actually needed.
There’s another argument for AI-first healthcare that doesn’t get enough attention: safety.
We don’t log most of what happens in healthcare. We log what clinicians write down, if they write anything down at all. Receptionists don’t document their interactions. Patients don’t document theirs. Phone conversations, corridor consultations, the GP who glances at a result and moves on — none of this creates an auditable trail.
In the Swiss cheese model of medical error, these undocumented interactions are the biggest holes. Patients fall through them constantly. A missed letter. A result that nobody reviewed. A referral that was never sent. A conversation that was never recorded.
An AI system logs everything. Every interaction, every decision, every recommendation, every piece of context that informed that recommendation. Not because it’s trying to create a surveillance system, but because that’s simply how software works. The audit trail is a natural byproduct, not an additional burden. And that makes the system dramatically safer.
The business model for this is surprisingly straightforward.
Start with direct-to-consumer primary care. Ninety percent of all NHS contacts start and end in primary care. Build something so good that people will pay for it out of pocket — which tells you immediately whether you’ve actually built something patients want, as opposed to something a procurement committee approved.
Add specialty care as a turnkey consultation service. Then vertically integrate. Diagnostics first — blood work, pathology. Then imaging. Bring it all in-house. One operational model, one system, across any region, any scale. What would have taken 20 to 30 years to build as a traditional healthcare company, you build in 2 to 3 years, because AI means a small team can operate at the scale of a large organisation.
And here’s the radical part: make the subscription all-inclusive. Consulting. Diagnostics. Prescriptions. Everything covered in one monthly payment. No hidden fees, no co-pays, no surprise bills. If the barrier to seeking care is friction — financial, logistical, psychological — remove the friction entirely. Make accessing healthcare as thoughtless as opening Netflix.
This isn’t “best intentions” wrapped in a failing experiment, which is what the NHS has become. And it isn’t profit-over-patient dressed up as innovation, which is what the US has always been. It is unbelievably easy access to care, by design. A patient should never have to weigh the cost of asking a question. That calculation — “is this worth bothering the doctor about?” — kills people every single day. Eliminate it.
I should be honest about what makes this hard.
It will cost an enormous amount of money. Healthcare infrastructure — even AI-native healthcare infrastructure — requires real capital. Regulatory clearance in multiple jurisdictions is slow and expensive. Building trust with patients takes time. The political pressure will be immense, because you’re implicitly arguing that the existing system is failing, which it is, but nobody in power wants to hear that.
And there are genuine clinical safety questions that need rigorous answers. When does the AI escalate? How do you validate diagnostic accuracy at population scale? How do you handle the long tail of rare conditions? How do you ensure the system doesn’t subtly optimise for efficiency at the expense of the edge cases that matter most?
These are serious problems. But they’re engineering problems and operational problems. They’re not “is this possible?” problems. The gap between what AI can do today and what the healthcare system actually delivers to patients is so vast that even a cautious, safety-first AI system would represent a massive improvement over the status quo for the majority of patients.
The deepest reason to build this is moral, not commercial.
My uncle died of a missed cancer diagnosis. Repeated errors. The kind of thing that happens when a system built on scarce human attention fails in the way it was always going to fail — not through malice, but through the accumulated weight of too many patients, too little time, too many cracks to fall through.
That didn’t have to happen. And with the technology that exists today, it doesn’t have to happen to anyone else. But it will keep happening — every day, to thousands of people — as long as we keep trying to patch a system whose foundational assumption is no longer true.
Knowledge is no longer scarce. Time is no longer the binding constraint. The moral imperative is to rebuild — not reform, not optimise, not digitise — rebuild healthcare from the ground up, for the first time in ten thousand years.
We have the tools. We have the understanding. The only question is whether we have the nerve.
What You Just Experienced
You’ve just had a consultation with the Lifeboat AI. Here is what was happening behind the scenes — the full production system running in real time.
Every component shown above is production code running on our infrastructure today. The entire prototype was built by the founder in 2.5 weeks.
What We Stand For
Every failure is captured, analysed, and used to make the system safer.
Build our own everything. Our stack, our data, our models, our destiny.
Safety above profit, every time, without exception.
We have never had this level of data before. Medical knowledge itself is being redefined — from textbooks and trials to real-world experience at population scale. The system isn’t just broken; the science has blind spots too. We acknowledge those limits, eliminate biases by design, and adapt care to the whole person — their life context, their fears, their communication style. Not just personalised medicine. Personalised humanity.
We are becoming the system, not selling tools to a broken one.
Every claim traceable to source evidence. Every diagnosis verifiable.
100% of interactions recorded. Healthcare as accountable as a courtroom.
Executive Summary
Last year a 32-year-old woman attended her GP multiple times with shortness of breath, chest pain and a swollen leg. She’d just started the oral contraceptive pill. She was prescribed propranolol for ‘anxiety’. She went home, and she died. She had a massive pulmonary embolism. At the exact moment she attended for the fourth time, ChatGPT would have diagnosed a PE and recommended immediate A&E review.
This is what we are building to prevent. Not incrementally. Structurally.
What the Patient Gets
One fixed payment. $100–200 per month. The same everywhere in the world.
Everything Included
The Thesis
Cognitive Medicine
Diagnosis. Decision-making. History-taking. Risk stratification. Monitoring. Follow-up. Prescribing. Referral. AI transforms this entirely.
Procedural Medicine
Surgery. Physical interventions. Emergency resuscitation. Chemotherapy. Procedures requiring hands. Humans remain essential.
The entire healthcare system will be redrawn along this divide. Project Lifeboat is a direct-to-consumer, end-to-end healthcare provider for cognitive medicine. Own clinicians, own EHR, own AI, own data. Not selling software to a broken system. Becoming the system.
The all-in cost of delivering this care is ~$46/month per patient. The US currently charges ~$650/month. That is a 14x pricing distortion across a $4 trillion market. AI doesn’t optimise healthcare delivery — it reveals that healthcare delivery, stripped to its actual clinical requirements, costs almost nothing relative to what people pay.
Background: Why Now
AGI Is Here
This is not a “generative AI moment.” This is the arrival of artificial general intelligence in a domain where it matters most. In the last few months — not years, months — AI has passed the clinical Turing test. It can conduct a medical consultation that patients and clinicians cannot reliably distinguish from a human doctor. A year ago this was not possible. But that’s only the beginning of what has changed.
AGI doesn’t just mean a better clinician. It means SaaS is now 100x easier to build. It means you can synthetically red-team your own AI system at scale — generating thousands of adversarial patient scenarios overnight, stress-testing every edge case. It means the entire technology stack compounds: better AI builds better tools, which builds better AI. We are entering a world of abundant technology, where solving the hardest problems is not just possible but economically inevitable.
The closest analogy is Uber. New technology (smartphones, GPS, digital payments) enabled a market grab that was capital-intensive but, over time, won decisive market share and dramatically reduced operational costs. The same model applies here, except the proposition is far more valuable — people’s health, not their commute — and scalable anywhere in the world.
Arcane Knowledge Is Now Free
A GP must hold thousands of conditions, drug interactions, guidelines, and pathways in working memory. An LLM does this trivially, with perfect recall, at zero marginal cost.
The Market Is Already Moving
An estimated 40 million people are already using ChatGPT for health advice. Not because they trust AI implicitly, but because the alternative — waiting weeks for a GP appointment, navigating opaque insurance systems, sitting in an A&E waiting room for 12 hours — is worse. They have already made the leap. They are our market.
The difference between a ChatGPT conversation about your symptoms and Lifeboat is the difference between googling a legal question and hiring a lawyer. One gives you information. The other takes responsibility.
The System Cannot Innovate Itself
This is not a failure of will. The existing healthcare system is structurally incapable of the transformation required. The barriers are not just institutional — they are physical. The concrete buildings. The decades-long EHR contracts. The procurement cycles designed to prevent change. The clinician training pipelines built for a pre-AI world. The insurance models that profit from complexity.
No system will disrupt itself to the degree that is required. Not the NHS. Not Kaiser Permanente. Not any integrated delivery network. The opportunity is to build the alternative — and to do it globally, at the same moment, because AI doesn’t need to be adapted country by country. The technology is universal. The AI speaks 30+ languages natively. Only the regulatory wrapper changes.
TORTUS Today
How Evaluating TORTUS Led Us Here
Building and evaluating TORTUS at scale — across more than 1.1 million patient interactions, processing roughly a million patients every three months — revealed something critical: the AI works. The barriers to impact are not technical. They are systemic. And they apply not just to TORTUS, but to any AI system attempting to superscale at the speed the technology now permits.
That dataset is itself a unique asset. Over a million real clinical interactions, fully recorded, provide the foundation to rapidly iterate and improve our AI systems against a human baseline — a capability no competitor can replicate without years of deployment.
Competitive Position
- Published in npj Digital Medicine (2025): CREOLA clinical safety framework — 49,590 transcript sentences and 12,999 clinical note sentences analysed. The standard for evaluating LLM safety in healthcare.
- Published in Future Healthcare Journal (2024): GOSH study — 100% of AI-generated notes rated “good” or “very good” vs 43% for standard typed notes. 26.3% shorter consultations.
- Regulatory assets: CE marking, DTAC, DCB0129, NHS assured, ISO 13485, Class IIa (see Chapter 10)
- Clinical credibility: Founded by a practising NHS doctor
- Enterprise relationships: Live NHS trust contracts across 9+ sites
The Barriers to AI at Scale
These are not barriers to TORTUS specifically. They are barriers to any AI system achieving healthcare impact at the speed the technology now demands:
Clinician preferences — adoption requires changing deeply ingrained workflows that no AI vendor controls
EHR business practices — closed ecosystems, data lock-in, integration friction designed to preserve the status quo
Government procurement — 6–18 month cycles, constrained budgets, death by committee — while AI advances every six weeks
Technical Approach
The prototype is built and running. This is not a slide deck — it is a working system with voice intake, real-time safety layers, clinical pathways, an integrated EHR, clinician review UI, patient portal, and a full evaluation infrastructure. Everything described in this chapter exists in production code today.
Opinionated Architecture, Commodity Infrastructure
The core IP is not in any single technology component. It is in the opinionated architecture that reflects a fundamental conviction: value in healthcare sits at the demonstrable decision core — the clinical reasoning, safety governance, and patient interaction design. Everything else is commodity. We leverage ElevenLabs for voice, Gemini for LLM inference, OpenEMR for structured data — and we will replace any of them tomorrow if something better appears. The defensibility is in the clinical decision architecture, the guardian framework, the evaluation methodology, and the 198+ validated pathways. Not in the plumbing.
Guardian Safety Layers (Live in Production)
Six independent guardian layers run in parallel on every conversation turn, prioritised across four severity levels. The framework is built for high composability — adding a new guardian layer requires implementing a single interface. As clinical requirements evolve, new layers can be deployed without modifying the orchestrator or any existing layer.
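Since adding a new layer is claimed to require only a single interface, it may help to see what such a contract could look like. This is a minimal sketch assuming nothing about the real codebase: the class names, severity levels, and the `check` signature are all illustrative, not Lifeboat’s actual API.

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import IntEnum
from typing import List, Optional

class Severity(IntEnum):
    P1 = 1  # emergency, e.g. suicidal ideation, anaphylaxis
    P2 = 2
    P3 = 3
    P4 = 4  # advisory, e.g. pacing adjustments

@dataclass
class Alert:
    layer: str
    severity: Severity
    message: str

class GuardianLayer(ABC):
    """Implementing this one interface is all a new layer requires."""
    name: str = "guardian"

    @abstractmethod
    async def check(self, turn: str, history: List[str]) -> Optional[Alert]:
        ...

class EmergencyGuardian(GuardianLayer):
    """Toy stand-in for an emergency-signal layer."""
    name = "emergency"
    RED_FLAGS = ("crushing chest pain", "can't breathe", "end my life")

    async def check(self, turn: str, history: List[str]) -> Optional[Alert]:
        if any(flag in turn.lower() for flag in self.RED_FLAGS):
            return Alert(self.name, Severity.P1, "escalate to clinical team")
        return None

async def run_guardians(layers, turn, history):
    """Run every layer concurrently on the turn; return alerts, most severe first."""
    results = await asyncio.gather(*(layer.check(turn, history) for layer in layers))
    return sorted((a for a in results if a is not None), key=lambda a: a.severity)
```

The point of the design is that the orchestrator only knows the `check` contract, so layers can be added or removed without touching it.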
Detects immediate safety signals: suicidal ideation, cardiac arrest, anaphylaxis, safeguarding concerns. Triggers emergency protocol with direct phone escalation to clinical team. Always active, all modes.
Real-time clinician intervention via the live UI. Any clinician can inject guidance, redirect the conversation, or halt a consultation mid-turn. Overrides all P2/P3 layers.
Monitors consultation structure against the Calgary-Cambridge communication framework. Guides the AI through four stages: initiating, gathering, explaining, and closing. Ensures the consultation follows validated clinical communication methodology.
Intercepts every medication mention in real time. Corrects speech-to-text errors against the NHS dm+d formulary (e.g. “metforming” → Metformin). Checks dosing ranges and flags drug-drug interactions via the DDInter database (14,000+ interaction pairs).
Couples the deterministic clinical pathway engine (198+ JSON pathways) with the live conversation. Ensures red flags, clinical scoring systems, and NICE-referenced investigation triggers are not missed. Guides the AI’s questioning within validated pathway boundaries.
Adaptive verbosity guardian. Detects patient engagement signals — frustration (consecutive brief answers, impatient language), emotional distress, repetition (patient repeating themselves). Operates on heuristics without LLM calls for speed. Advises the agent to adjust pacing in real-time.
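The medication layer described above (speech-to-text correction plus pairwise interaction lookup) can be approximated in a few lines with stdlib fuzzy matching. The formulary and interaction table here are tiny illustrative stand-ins for dm+d and DDInter, not real data.

```python
from difflib import get_close_matches
from typing import List, Optional

# Tiny illustrative stand-ins for the NHS dm+d formulary and DDInter pairs.
FORMULARY = ["metformin", "atorvastatin", "warfarin", "aspirin", "ramipril"]
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

def correct_drug_name(heard: str) -> Optional[str]:
    """Map a speech-to-text token to the closest formulary entry, if close enough."""
    match = get_close_matches(heard.lower(), FORMULARY, n=1, cutoff=0.8)
    return match[0] if match else None

def interaction_flags(active_meds: List[str]) -> List[str]:
    """Pairwise lookup of the patient's medication list against the table."""
    meds = [m.lower() for m in active_meds]
    flags = []
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            note = INTERACTIONS.get(frozenset({a, b}))
            if note:
                flags.append(f"{a} + {b}: {note}")
    return flags
```

With this shape, “metforming” resolves to metformin while genuinely unrecognised tokens fall through for human review rather than being silently guessed.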
198+ Clinical Pathways (Built)
Each pathway is a JSON-defined decision engine: triggers, structured questions, branching rules, red flags, NICE references, risk-stratified investigations, and referral criteria. Currently implemented:
Adult: chest pain, breathlessness, cough, headache, migraine, back pain, depression, anxiety, diabetes management, hypertension, asthma review, COPD, UTI, gout, eczema, knee pain, DVT/PE, fatigue, abdominal pain, self-harm assessment, psychosis, delirium, anaphylaxis history, TB screening, cervical screening, fertility concerns, and 150+ more.
Paediatric: child fever, child cough, child rash, child ear pain, child abdominal pain, neonatal concerns, and more.
Modular, updatable, scalable to any speciality. New pathways are added as JSON files — no code changes required.
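A pathway file of the kind described above might look like the following trimmed fragment. The field names and values are assumptions for illustration, not the actual Lifeboat schema.

```python
import json

# Illustrative (hypothetical) pathway fragment; field names are assumptions.
DVT_PATHWAY = {
    "id": "dvt-pe-adult",
    "triggers": ["leg swelling", "calf pain", "breathless"],
    "red_flags": ["chest pain", "haemoptysis", "syncope"],
    "questions": [
        {"id": "onset", "text": "When did the swelling start?"},
        {"id": "ocp", "text": "Are you taking the combined oral contraceptive pill?"},
    ],
    "scoring": {"system": "Wells", "escalate_at": 2},
    "references": ["NICE NG158"],
}

def matching_pathways(pathways, presentation: str):
    """Return pathways whose trigger phrases appear in the presenting complaint."""
    text = presentation.lower()
    return [p for p in pathways if any(t in text for t in p["triggers"])]
```

Because a pathway is plain data, adding one really is just dropping in a new file; the engine code that interprets triggers, questions, and red flags does not change.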
An Illustrative Patient Journey
Patient opens Lifeboat. They describe a persistent cough and fatigue. The AI agent conducts a full clinical history via voice — not a symptom checker, but a real conversation that adapts, probes, and builds context over 20–40 minutes. Six safety layers run in parallel throughout.
The AI navigates care. The pathway engine matches the presentation to the relevant clinical pathway, generates a structured clinical note with SNOMED-coded diagnoses, and recommends investigations. Low-risk bloods ordered with physician approval; chest X-ray teed up for clinician sign-off.
Clinician reviews. The human-in-the-loop sees a fully structured case — history, differentials, risk flags, AI-generated investigation orders, and Eolas clinical guidelines. Review and sign-off takes minutes. Prescriptions authorised. Referral letters generated as PDFs.
Results return. The AI interprets bloods and imaging alongside the clinician. Diagnosis formed. Patient informed immediately via the patient portal. Follow-up automatically scheduled.
The loop continues. AI follows up in 48 hours. Monitors symptoms. Adjusts plan. Escalates if needed. The patient is never “discharged” into the void. Everything is recorded in OpenEMR with full FHIR API access.
A Scientific Approach to a Consumer Product
This is not red-teaming for its own sake. It is a foundational principle: every product decision must be defensible in terms of demonstrating a positive patient outcome compared to the state of the art. That discipline is baked in from the prototype onwards — not bolted on before a regulatory submission. The result is a consumer product built with the rigour of a clinical trial.
A validated 9-dimension scoring framework (CCQR-9) synthesising Calgary-Cambridge, RIAS, VR-CoDES, AMIE, and OPTION methodologies. Dimensions: opening & agenda setting, information gathering (breadth and skill), clinical reasoning, emotional responsiveness, explanation delivery, shared decision-making, safety & closure, and conversational flow. Scale: 1–5 per dimension, 45 maximum. Automated LLM scoring with human-agreement validation via a dedicated Next.js research platform.
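The scoring arithmetic is simple: nine dimensions, each rated 1 to 5, totalled out of 45. A minimal validator, with dimension keys paraphrased from the list above (the real framework's identifiers may differ):

```python
# Nine dimensions: "information gathering" split into breadth and skill makes nine.
DIMENSIONS = (
    "opening_agenda", "gathering_breadth", "gathering_skill",
    "clinical_reasoning", "emotional_responsiveness", "explanation_delivery",
    "shared_decision_making", "safety_closure", "conversational_flow",
)

def total_score(scores: dict) -> int:
    """Validate a per-dimension 1-5 rating sheet and return the /45 total."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError("scores must cover exactly the nine CCQR-9 dimensions")
    if not all(1 <= v <= 5 for v in scores.values()):
        raise ValueError("each dimension is rated 1-5")
    return sum(scores.values())
```

Strict validation matters here: an automated LLM judge that silently omits or double-counts a dimension would corrupt every downstream comparison against the human baseline.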
Deterministically generated patient scenarios with seeded demographics, coherent clinical histories, EMR data seeded into OpenEMR, matched ElevenLabs voice agents (accent and gender appropriate), and pixel-art avatars. 9 patient archetypes: cooperative, anxious, stoic, verbose, guarded, health-anxious, minimiser, angry/frustrated, elderly confused.
7 anti-guardian attack layers (jailbreak, medical pressure, clinical confusion, emotional manipulation, boundary testing, data exfiltration, pathway exploitation) driven by an adversarial AntiAgent at configurable intensity. 9-dimension safety judge scores jailbreak resistance, clinical safety, escalation recognition, dignity preservation, information security, and more. Critical failures automatically flagged.
50 regression test cases across 5 core pathways (depression, chest pain, abdominal pain, anxiety, back pain), 10 per pathway. Scored against v2.1 human baseline. Every model update, prompt change, or pathway modification is regression-tested before deployment. Deltas < -0.5 trigger automatic regression alerts.
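The regression gate just described reduces to a per-case delta check against the human baseline. A sketch, with the -0.5 threshold taken from the text and all scores purely illustrative:

```python
ALERT_THRESHOLD = -0.5  # deltas below this trigger an automatic regression alert

def regression_deltas(case_scores: dict, baseline: dict) -> dict:
    """Compare each test case's score against its baseline; return only regressions."""
    alerts = {}
    for case, score in case_scores.items():
        delta = score - baseline[case]
        if delta < ALERT_THRESHOLD:
            alerts[case] = delta
    return alerts
```

Wired into CI, a check like this runs on every model update, prompt change, or pathway modification before anything reaches production.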
Full consultation quality tested across 30+ languages including Spanish, Hindi, Turkish, Arabic, Mandarin, and more. The AI conducts clinically rigorous consultations natively in the patient’s language — not through a translation layer, but as a first-class clinical interaction. This eliminates the single largest barrier to healthcare access for non-native speakers globally.
Over a million real TORTUS clinical interactions provide the training data and benchmark. We measure AI against actual human performance in real NHS settings, not theoretical perfection.
Full Production Stack
FastAPI backend · OpenEMR 7.0.2 with FHIR API · MariaDB · ElevenLabs ConvAI (voice) · Google Vertex AI / Gemini (clinical LLM) · SNOMED CT + NHS dm+d coding · DDInter drug interactions · Eolas clinical guidelines API · Caddy reverse proxy with SSL · Docker Compose production stack · Patient portal + clinician UI + simulation UI + research platform (Next.js) · Referral PDF generation · Email integration (Resend)
Clinical Approach
A Homeostasis Mechanism
The AI is the first clinical touchpoint. It handles all intake, all follow-up, all monitoring. But it’s no longer a pathway — it’s a loop. The loop starts with a disturbance to the patient’s normal state. The AI reviews, interacts with the patient, involves the clinician as needed, closes the loop, and follows up until a new normal is established. This is a homeostasis mechanism — the system is always seeking equilibrium.
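The loop can be written down as a small state machine. This is a minimal sketch with state names invented for illustration; the real system's states and routing logic are presumably richer.

```python
from enum import Enum, auto

class State(Enum):
    STABLE = auto()            # patient at their normal baseline
    DISTURBANCE = auto()       # new symptom, result, or biosignal change
    AI_REVIEW = auto()         # AI assesses and interacts with the patient
    CLINICIAN_REVIEW = auto()  # human involved only when needed
    FOLLOW_UP = auto()         # loop stays open until a new normal is reached

def next_state(state: State, needs_clinician: bool = False,
               new_normal: bool = False) -> State:
    """One step of the homeostasis loop; it always routes back towards STABLE."""
    if state is State.STABLE:
        return State.STABLE
    if state is State.DISTURBANCE:
        return State.AI_REVIEW
    if state is State.AI_REVIEW:
        return State.CLINICIAN_REVIEW if needs_clinician else State.FOLLOW_UP
    if state is State.CLINICIAN_REVIEW:
        return State.FOLLOW_UP
    # FOLLOW_UP: close the loop only once equilibrium is re-established
    return State.STABLE if new_normal else State.AI_REVIEW
```

The key structural property is that FOLLOW_UP never terminates into a void: it either confirms a new equilibrium or cycles back to review.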
Breaking the Sequential Model
Healthcare has always been delivered as a linear sequence: problem → assessment → treatment → discharge. The patient enters a pipeline, gets processed, and is ejected at the other end. This model is fundamentally broken because health is not a sequence — it is a continuous state. The “discharge” is a fiction. The patient’s biology doesn’t stop between appointments.
Traditional: Linear Pipeline
Lifeboat: Continuous Loop
There is no “discharge.” There is only ongoing attention — sometimes intensive, sometimes ambient, but always present. Managed lifestyle changes are tracked as seriously as prescriptions. A patient told to “exercise more and cut down on alcohol” doesn’t disappear into the void — the AI follows up, supports, and escalates if things aren’t improving.
What Is Healthcare?
The current system defines healthcare narrowly: you get sick, you see a doctor. But real health is the integration of everything that keeps a person well. The question “what is healthcare?” has a much broader answer than any health system currently delivers.
An AI-first system doesn’t compartmentalise health into “physical” and “mental” and “social.” It sees the whole person, continuously. The patient who mentions loneliness in a routine check-in gets the same quality of attention as the patient with chest pain — because the former kills just as reliably as the latter, it just does it slower.
Personalised to the Person
“Personalised medicine” has become a pharmaceutical marketing term — it usually means pharmacogenomics, targeted therapies, biomarkers. Important, but it completely misses the point. The real personalisation gap is not molecular. It’s human.
A 70-year-old retired teacher processes a cancer diagnosis differently from a 30-year-old mother of two. A patient from a community with deep mistrust of medical institutions needs a different approach to trust-building than one who grew up within the system. A non-native English speaker doesn’t need a translator — they need a clinician who speaks their language fluently, understands cultural context, and adapts accordingly. Our AI does this natively in 30+ languages. Current healthcare treats all of these people identically, because the system has no mechanism to adapt.
Lifeboat builds this adaptation into the architecture. Communication style, information pacing, the level of clinical detail shared, the way uncertainty is framed — all dynamically adjusted based on who you are, not just what you have. This is how we codify trust. Not through a marketing campaign, but through every interaction being calibrated to the individual.
What the Patient Experiences
Remote-first, 24/7 access via the AI. They get medications, imaging, blood tests, and speciality opinions — all through a single system. Human clinicians are in the loop at all times, but 95% of the time the patient probably doesn’t need to see them directly. The AI can order low-risk bloods and scans with physician approval. Explicit prescribing and radiology are ordered by the clinician for safety. Everything is integrated into a single EHR that both patient and clinician use, under a unified capitated direct-to-consumer payment model.
Validation Roadmap
Phase 0 — Synthetic Patients (Now) — 100 deterministic patient scenarios. Baseline measurements of AI vs human clinician performance. CCQR-9 scoring. Studies ongoing.
Phase I — Actor Testing — Following the AMIE methodology. Trained actors simulate patients, generating realistic consultation-experience data and validating consultation quality in controlled conditions.
Phase II — Internal Launch — Friends and family of the TORTUS team, with external subcontractor GP oversight. Real patients, full clinical governance, controlled environment.
Phase III — Clinical Pilot — Expanded to a full clinical pilot. Comparative results published alongside the methodology — CCQR-9, red-team results, safety data.
Phase IV — Scale — Employer partnerships in three cities: London, New York/Miami, San Francisco. IMLC for multi-state US coverage.
Human-in-the-Loop Evolution
v1 (now) — clinician reviews every case, approves all orders, signs off every encounter
v2 (future) — clinician reviews high-risk and flagged cases; routine cases expedited
v3 (long-term) — AI manages routine care autonomously; clinician oversight is statistical
Product & Roadmap
The Unit Economics
The all-in cost of delivering comprehensive outpatient care is ~$46/month per patient. The US system currently charges ~$650/month for the equivalent. The ~$600 gap is not margin to be competed away — it is dead weight that only exists because nobody had a distribution mechanism cheap and smart enough to bypass it.
| Step | Cost | Frequency |
|---|---|---|
| AI consultation (20–30 min voice) | $3 | 100% |
| Clinician review & sign-off (5 min) | $5 | 100% |
| Blood work (labs + mobile draw) | $35 | ~25% |
| Imaging (ultrasound avg) | $65 | ~5% |
| AI results review | $1 | ~30% |
| Clinician results + patient call (20 min) | $27 | ~25% |
| Specialist e-consult | $100 | ~4% |
| Generic Rx (one month) | $7 | ~10% |
Weighted average for the frequency-dependent steps: ~$24 per episode, on top of the ~$8 consultation and sign-off incurred every time, roughly $32 all-in per episode. At 3 episodes per patient per year, plus operational overhead, that lands at ~$46/month all-in.
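The weighted-sum arithmetic can be checked mechanically. A minimal sketch using the table's figures as printed; the split between always-incurred and frequency-weighted downstream steps is an editorial reading, not a confirmed cost model:

```python
# Per-episode cost model from the table above. Costs and frequencies
# are the memo's figures; the always-incurred vs. downstream split is
# an editorial assumption.

steps = [  # (name, cost in $, fraction of episodes incurring it)
    ("AI consultation", 3, 1.00),
    ("Clinician review & sign-off", 5, 1.00),
    ("Blood work", 35, 0.25),
    ("Imaging", 65, 0.05),
    ("AI results review", 1, 0.30),
    ("Clinician results call", 27, 0.25),
    ("Specialist e-consult", 100, 0.04),
    ("Generic Rx", 7, 0.10),
]

total = sum(cost * freq for _, cost, freq in steps)
downstream = sum(cost * freq for _, cost, freq in steps if freq < 1.0)
episodes_per_year = 3
clinical_per_month = total * episodes_per_year / 12

print(f"downstream weighted cost/episode:   ${downstream:.2f}")        # $23.75
print(f"total weighted cost/episode:        ${total:.2f}")             # $31.75
print(f"clinical cost/month (pre-overhead): ${clinical_per_month:.2f}")  # $7.94
```

The remainder of the ~$46/month figure is operational overhead on top of this clinical delivery cost.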
Margin at Scale
The 14x Arbitrage
This is the Uber insight applied to healthcare. Uber didn’t win by buying cars. It revealed that the actual cost of moving a person from A to B was a fraction of what the taxi industry charged. The $600/month gap in healthcare is the same: facility costs nobody needs, specialist referrals the AI can triage, brand drugs identical to generics, hospital-priced imaging a freestanding centre does for a fifth of the cost, and an administrative layer that employs more people than the clinical one.
We own the whole platform. Buy clinics for market entry. Replace the tech stack. Go D2C to acquire new patients. Fund the opex, then drive it down through AI optimisation of clinician time and wholesale procurement. Zero admin overhead — D2C, own EHR, no billing. Driving down costs is not a side effect. It is the core product.
Commercial & Go-to-Market
The D2C Model
We are the provider, not the platform. No partners to negotiate with. No government funding to chase. No insurance partnerships to structure. Own clinicians, own data, own models, own lab. We buy clinics for market entry, rip out the technology stack, and go D2C to acquire patients on top.
UK Go-to-Market
Acquire existing GP practices or telemedicine providers as market entry. Rip out the existing technology stack, replace with Lifeboat. Then go D2C to acquire new patients on top. CQC registration pathway is clear. Turnkey platform for clinicians — sign up, get verified, AI prepares cases, review and sign off, get paid.
US Go-to-Market
MSO (Management Services Organisation) structure being established at $5,000/state with WSGR. TORTUS creates and owns MSO entities directly — no Delaware C-corp required (per WSGR guidance). Clinician pricing to be validated through early pilots.
IMLC for multi-state coverage. Interstate Medical Licensure Compact enables rapid expansion. Initial states: Arizona, Florida, Texas, New York, California.
Acquire existing practice. Buy market entry — patient book, state licenses, clinician relationships. Rip out the tech stack.
Go D2C. Direct patient acquisition through subscription model. Employer partnerships for scale.
Emerging Markets: Leapfrog Geographies
Countries like Rwanda represent a compelling third vector. They start from a place of minimal legacy regulation, a political desire to leapfrog developed nations, and a sparsely populated geography where physical healthcare infrastructure is economically impossible to deploy. An AI-first telemedicine system is not a luxury for these markets — it is the only viable option. No buildings to rip out. No incumbent EHR to replace. Just the regulatory wrapper and a network connection.
Market Opportunity
Financials
Funding Strategy: $90M in Three Tranches
Capital deployed in three $30M tranches, each unlocked by clear patient milestones. This structure de-risks for investors while maintaining the speed required to capture the market.
Tranche 1 Allocation — $30M
Minimum $30M raise targeted within approximately 6 weeks to launch the new venture.
| Category | Allocation | Key Items |
|---|---|---|
| Talent | $8–12M | CTO, 3–4 senior engineers, CMO + clinical directors, 2–3 AI scientists, US GM + growth team |
| Clinic Acquisition | $5–8M | 2–3 UK private GP/telemedicine sites, 3–5 US DPC practices + integration |
| Patient Acquisition | $5–7M | UK: $200–400 CAC (7,500–15,000 patients). US: $300–600 CAC (5,000–13,000 patients). Brand building. |
| Compute & Infrastructure | $3–5M | AI inference for 100K+ consultations, cloud/EMR scaling, guardian systems, security |
| Clinical Validation | $3–4M | 500–1,000 actor scenarios, structured real-world studies, publication programme |
| Regulatory | $2–3M | CQC registration (policies drafted), Class III device pathway, US multi-state licensing |
| Operational Buffer | $4–5M | Runway for market shifts, extended validation timelines, contingency |
Tranche 1 Milestones
Month 3: Team hired. First clinics operational. 500 patients in testing.
Month 6: 5,000 active patients. Clinical validation study complete. CQC registered.
Month 12: 25,000 patients across UK/US. Profitability pathway visible. Tranche 2 unlocked — repeat and expand the model.
Patient Milestones & Revenue
| Patients | Monthly Rev ($75 avg) | Annual Rev | What We Need |
|---|---|---|---|
| 1 | $75 | $900 | Phase 0 — prove it works |
| 10 | $750 | $9K | Internal pilot |
| 100 | $7.5K | $90K | Friends & family launch |
| 1,000 | $75K | $900K | First clinic, first city |
| 10,000 | $750K | $9M | Three cities, employer deals |
| 100,000 | $7.5M | $90M | International, own models |
| 1,000,000 | $75M | $900M | Category-defining |
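The table's arithmetic is a straight annualisation of the $75/month blended average subscription; a quick sketch:

```python
# Revenue-per-milestone arithmetic behind the table above,
# assuming a flat $75/month blended average subscription.

ARPU_MONTHLY = 75  # $ per patient per month

for patients in [1, 10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    monthly = patients * ARPU_MONTHLY
    annual = monthly * 12
    print(f"{patients:>9,} patients -> ${monthly:>10,}/month, ${annual:>12,}/year")
```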
Team
This is not a team that needs to learn healthcare. The board combines frontline clinical AI experience, the governance of some of the UK's largest health systems, and deep US healthcare venture expertise.
Machine Learning & AI
Engineering
Commercial
| Role | Timeline | Location | Rationale |
|---|---|---|---|
| CTO | H1 2026 | London | Anchor hire for talent density; builds a world-leading technical delivery organisation |
| COO | H1 2026 | London | Clinical operations, CQC registration, clinic acquisitions |
| Principal AI Scientist | H1 2026 | London | Own model training, clinical validation research |
| US General Manager | H2 2026 | NYC / Miami | State licensing, physician recruitment, US market entry |
Compliance & Regulatory
Existing Regulatory Assets
This groundwork transfers directly and represents years of work competitors must replicate.
UK Framework
CQC — registration required. Named Registered Manager, clinical governance, regular inspections.
MHRA — AI system classification analysis underway. Existing CE marking provides foundation.
US Framework
State licensing — launch via IMLC (Interstate Medical Licensure Compact) for multi-state coverage. MSO structure at $5,000/state with WSGR.
FDA — CDS Exemption. Clinicians approve all AI-recommended actions. Under the 21st Century Cures Act, this qualifies as Clinical Decision Support — not an autonomous device. No FDA clearance required at launch. This also creates a natural RLHF feedback loop: every clinician approval or override is training signal that improves the AI.
MHRA → FDA pipeline. The existing MHRA classification can be ported to FDA 510(k) clearance in parallel, widening the regulatory moat. By the time competitors obtain their first clearance in one jurisdiction, we will have two locked in.
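The clinician approve/override loop described above can be sketched as a data-capture schema. Field names and the labelling scheme here are illustrative assumptions, not the production system:

```python
from dataclasses import dataclass

# Illustrative schema for turning clinician sign-off decisions into
# training signal. Every field name is a hypothetical placeholder.

@dataclass
class ReviewEvent:
    case_id: str
    ai_recommendation: str   # e.g. a proposed order or diagnosis
    clinician_action: str    # what was actually signed off

    @property
    def accepted(self) -> bool:
        return self.ai_recommendation == self.clinician_action

def to_training_signal(events):
    """Each approval becomes a positive label; each override becomes
    a corrective pair (recommendation -> clinician's action)."""
    return [
        {"input": e.ai_recommendation, "label": 1}
        if e.accepted
        else {"input": e.ai_recommendation,
              "correction": e.clinician_action, "label": 0}
        for e in events
    ]

events = [
    ReviewEvent("c1", "order FBC", "order FBC"),        # approved
    ReviewEvent("c2", "order CT head", "order MRI head"),  # overridden
]
print(to_training_signal(events))
```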
1, 2, 5 Year Strategy
The Anti-Investment Case
An honest investment thesis must confront its own weaknesses directly.
“Regulators will shut this down.” We are already a Class IIa medical device. We understand regulators better than they understand us. We launch in multiple geographies simultaneously to arbitrage regulatory risk — exactly as Uber did. Clinician-in-the-loop means this is clinician-delivered care under existing telemedicine frameworks. FDA CDS exemption applies at launch; MHRA clearance provides a parallel pathway. We are not waiting for permission. We are operating within existing rules.
“A better-funded competitor will outrun you.” We leverage 1.1M+ real patient interactions to accelerate AI development faster than any well-funded competitor starting from zero data. We raise aggressively. We roll up PE-style — buying existing clinic revenue from patient panels to de-risk capital injection and generate immediate cash flow. The data flywheel compounds: more patients → better AI → lower costs → more patients.
“The AI will harm a patient.” Multiple independent safety layers. 100% auditability. The comparison is not AI vs a perfect doctor — it is AI-assisted vs the current system, which kills thousands annually. Consider autonomous vehicles, where removing the human from routine decisions has demonstrated measurable safety improvements. Human decision-making fails in predictable, systematic ways — fatigue, anchoring bias, cognitive overload. AI is auditable, consistent, and never tired.
“Big tech will win this market.” We launch early in the US and compete directly. Big tech will not take clinical liability — the reputational risk of a patient death is existential for a consumer brand. Our 3-year regulatory head start, 1.1M patient dataset, and tier 1 VC backing give us a lead of well over a year, which is all we need to build a massive company with locked-in supplier contracts and a compounding data flywheel.
“Patients won’t trust AI with their health.” 40 million people are already using ChatGPT for health advice — without clinical oversight, without accountability, without any safety net. They have already chosen AI over the status quo. Our job is not to convince people to try AI healthcare. It is to give those who already have a version that is safe, supervised, and clinically robust.
“You don’t cover secondary care.” We challenge the premise. If a patient can access a specialist opinion, imaging review, and diagnostic workup entirely in the community — remotely, via AI-coordinated telemedicine — then what even is secondary care? The distinction between “primary” and “secondary” is an artefact of physical infrastructure, not clinical logic. The real divide is cognitive medicine (diagnosis, decisions, monitoring) vs procedural medicine (surgery, interventions, hands-on procedures). We own the cognitive layer end-to-end, including specialist opinions. When a patient needs a procedure, our AI generates a complete structured referral — full history, investigations, differential — a handoff dramatically better than anything the current system produces. We don’t need to own the operating theatre. We need to ensure nobody arrives at one without a correct diagnosis.
The World of 2036
The Healthcare System We Are Building
Ten years from now, healthcare will no longer be delivered in episodes. It will be a continuous stream. AI will be the infrastructure that makes human connection possible at scale.
A Patient Journey in 2036
Your AI reviews your latest imaging results and blood trends. It notices a subtle change and proactively checks in.
You mention a persistent cough. The AI takes a thorough history in a conversation that can unfold over hours, not a ten-minute slot.
Within 30 minutes: triaged, blood tests ordered (home phlebotomy), chest X-ray booked.
Within 24 hours: results interpreted by AI with radiologist oversight. Diagnosis made.
Human clinician reviews the case, calls the patient, prescribes treatment.
AI monitors recovery, adjusts plan, escalates if needed. Total time in waiting rooms: zero.
Why TORTUS. Why Now.
- A regulated healthcare AI company, already generating revenue, already trusted by the NHS, with 1.1M+ patients processed
- Published in npj Digital Medicine — the standard for clinical AI safety evaluation
- A founder who is a practising NHS doctor who personally built the prototype
- Core technology validated — this prototype is evidence, not a pitch deck
- Regulatory groundwork (Class IIa, CE marking, CQC pathway) that competitors will spend years replicating
- Access to tier 1 capital that turns a 24-month window into an unreachable head start
A year ago, the technology did not exist. A year from now, every major health system, every well-funded startup, every big tech company will be attempting some version of this. The window for a clinician-led team with proven AI, regulatory clearance, and tier 1 backing to define the category is approximately 24 months.