We Made 10,000 Spine Surgery Patients That Don't Exist. Then We Built an AI From the Answers.

The problem with medical AI is the data. Not the algorithms. Not the compute. The data.
It is private. It is fragmented. It lives in hospital servers behind privacy laws and governance committees. A single surgeon sees a few hundred patients a year. A single institution, a few thousand. Building AI that works across different health systems, surgical cultures, and geographies requires data from all of them — and almost none of it is accessible.
SpineDAO was built to solve this. In the past six months, we have demonstrated the complete pipeline — from real patient to certified synthetic twin to expert-labeled training data to AI product — entirely on-chain. Here is how it works, step by step.
Step 1: Real data, properly governed
SpineBase is our multicenter spine surgery registry. Surgeons contribute anonymized patient outcomes — ODI scores, VAS pain ratings, surgical details, complications, follow-up at 3, 6, 12, and 24 months. Each contribution is tokenized: the surgeon earns $SPINE. Each record is de-identified at source to HIPAA Safe Harbor and GDPR standards. The data stays in our encrypted registry, governed by the DAO.
As of April 2026: 370 patients, 2 surgical campaigns, 13 contributing surgeons across Europe.
Step 2: Synthetic twins — sharing without sharing
From 125 SI fusion registry cases, we trained a GaussianCopula generative model and produced 10,000 synthetic patients. These are not real people. But statistically, they are indistinguishable from real ones.
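As a sketch of how Gaussian-copula synthesis works in principle: rank-transform each column to normal scores, learn their correlation, sample, and invert through each column's empirical quantiles. The production pipeline uses a dedicated generative library and the full registry schema; the columns below are stand-ins.

```python
# Minimal Gaussian-copula synthesizer for numeric columns. Column names
# and the toy data are illustrative, not the registry schema.
import numpy as np
from scipy import stats

def fit_sample_gaussian_copula(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Fit a Gaussian copula to `real` (rows=patients, cols=features) and sample."""
    rng = np.random.default_rng(seed)
    n, d = real.shape
    # 1. Rank-transform each column to (0, 1), then to standard normal scores.
    u = (stats.rankdata(real, axis=0) - 0.5) / n
    z = stats.norm.ppf(u)
    # 2. The copula is the correlation structure of those normal scores.
    corr = np.corrcoef(z, rowvar=False)
    # 3. Sample correlated normals and push them back through the normal CDF...
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = stats.norm.cdf(z_new)
    # 4. ...then invert via each column's empirical quantile function.
    return np.column_stack([np.quantile(real[:, j], u_new[:, j]) for j in range(d)])

# Toy demo: 125 "patients" with two correlated features, expanded to 10,000.
rng = np.random.default_rng(42)
age = rng.normal(55, 10, 125)
odi = 0.5 * age + rng.normal(0, 5, 125)
real = np.column_stack([age, odi])
synth = fit_sample_gaussian_copula(real, 10_000)
print(synth.shape)  # (10000, 2)
```

Because the inversion uses empirical quantiles, every synthetic value stays inside the range the real cohort actually exhibited, while the copula preserves the correlations between columns.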
We validated this across three domains:
- Fidelity: Kolmogorov-Smirnov (KS) p=0.52. The distributions match: age, sex, pain scores, outcomes, complications.
- Privacy: 98.9% of synthetic records have a nearest-neighbor distance ratio (NNDR) above 1.0, and membership inference AUROC is 0.57, barely better than chance. No real patient is identifiable.
- Utility: train-on-synthetic, test-on-real (TSTR) r=0.29 at N=125. At N=1,000, we project r≈0.69.
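Two of these checks can be sketched in a few lines. The NNDR definition below (each synthetic record's distance to its nearest real neighbor, scaled by that real record's own nearest-real distance, so values above 1 mean "farther from any real patient than real patients are from each other") is one common convention and an assumption about the paper's exact metric.

```python
# Hedged sketch of two validation checks: per-column Kolmogorov-Smirnov
# fidelity and a nearest-neighbor distance ratio (NNDR) privacy screen.
# Thresholds and metric definitions are illustrative.
import numpy as np
from scipy import stats
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
real = rng.normal(size=(125, 4))        # stand-in for 125 registry cases
synth = rng.normal(size=(10_000, 4))    # stand-in for the synthetic cohort

# Fidelity: a high KS p-value means the test cannot tell the two
# distributions apart.
ks_p = [stats.ks_2samp(real[:, j], synth[:, j]).pvalue for j in range(real.shape[1])]

# Privacy: distance from each synthetic record to its nearest real
# neighbor, divided by that real record's distance to its own closest
# real peer. Ratios > 1 suggest no synthetic record "copies" a patient.
tree_real = cKDTree(real)
d_sr, idx = tree_real.query(synth, k=1)   # synthetic -> nearest real
d_rr, _ = tree_real.query(real, k=2)      # real -> nearest other real
nndr = d_sr / d_rr[idx, 1]
share_private = float(np.mean(nndr > 1.0))

print(f"min per-column KS p-value: {min(ks_p):.3f}")
print(f"share of synthetic records with NNDR > 1.0: {share_private:.3f}")
```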
The certified dataset was fingerprinted with SHA-256 and anchored on Solana mainnet — the first synthetic medical dataset with immutable blockchain provenance.
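The fingerprint itself is a one-liner; here is a minimal sketch, assuming the dataset is first serialized to a canonical byte string. The on-chain anchoring transaction is outside this sketch.

```python
# Fingerprint a serialized dataset with SHA-256. Anyone holding the same
# bytes can recompute the digest and compare it to the anchored value.
import hashlib

def fingerprint(csv_bytes: bytes) -> str:
    """Return the SHA-256 hex digest that would be anchored on-chain."""
    return hashlib.sha256(csv_bytes).hexdigest()

payload = b"age,odi_24m\n61,22\n54,31\n"   # illustrative rows, not real data
digest = fingerprint(payload)
print(digest)  # 64 hex characters
```

Any single-byte change to the file produces a completely different digest, which is what makes the anchored hash an immutable provenance check.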
Synthetic patients can be distributed freely — to researchers, to annotation pipelines, to AI testing — without privacy risk, without IRB restrictions, without institutional friction.
Step 3: 52 surgeons, 7 countries, $0.97 per expert opinion
Once we had synthetic patients, we sent them to surgeons.
We credentialed 52 spine specialists with non-transferable Soulbound Tokens on Solana. Each token cryptographically proves the holder is a verified specialist — and cannot be transferred, delegated, or faked. We presented them with 463 synthetic low back pain vignettes and asked: what would you do?
The platform collected 2,066 expert reviews in 37 days. Smart contracts paid each reviewer automatically upon completion. Cost: $0.97 per review.
The data that came back was not just labels. It was a map of clinical judgment.
The most important finding: 44% of treatment variability comes from the interaction between a specific clinician and a specific patient — not from patient severity alone, and not from surgeon specialty alone. For 11% of vignettes, the same patient received conservative management from one expert and surgical emergency recommendation from another.
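A toy sketch of the kind of decomposition behind that 44% figure: in a fully crossed design where every reviewer scores every vignette, the classic two-way ANOVA identity splits total variability into vignette, reviewer, and interaction-plus-residual components. The paper's actual model, and its handling of incomplete designs, may differ.

```python
# Two-way crossed variance decomposition on a simulated reviewer-by-
# vignette score grid. The simulation parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_rev, n_vig = 10, 40
scores = (rng.normal(0, 1.0, (1, n_vig))        # vignette severity effect
          + rng.normal(0, 0.5, (n_rev, 1))      # reviewer leniency effect
          + rng.normal(0, 0.8, (n_rev, n_vig))) # reviewer-vignette interaction

grand = scores.mean()
vig_eff = scores.mean(axis=0) - grand
rev_eff = scores.mean(axis=1) - grand
inter = scores - grand - vig_eff[None, :] - rev_eff[:, None]

# Sums of squares: the three components add up exactly to total SS.
ss = {"vignette": n_rev * np.sum(vig_eff**2),
      "reviewer": n_vig * np.sum(rev_eff**2),
      "interaction": np.sum(inter**2)}
total = sum(ss.values())
for name, val in ss.items():
    print(f"{name:<12} {val / total:5.1%} of variability")
```

When the interaction term dominates, as the Spine Reviews data showed, knowing the patient and knowing the surgeon separately is not enough to predict the recommendation.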
A model trained on one surgeon's data will encode that surgeon's biases, not medicine's consensus. To build AI that helps patients across health systems, you need multi-reviewer, multi-specialty, multi-geography labeled data. That is what Spine Reviews produced.
Step 4: The AI — Lamina
Now the synthetic patients serve a third purpose: testing the product that will reach real patients.
Lamina is SpineDAO's spine wellness AI. It answers patient questions, suggests exercises, flags red flags for referral, and guides people through understanding their condition. Before it ever talks to a real patient, it talks to thousands of synthetic ones.
The vignettes reviewed by the 52 experts are the same vignettes used to test Lamina. Each synthetic persona has an identity, a clinical picture grounded in real data, and a set of intents. The expert labels from Spine Reviews define what a correct response looks like. Lamina is benchmarked against the same cases that real surgeons would face.
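Benchmarking against multi-reviewer labels can be sketched as scoring each model answer against the experts' majority recommendation. The label names and the scoring rule below are illustrative, not Lamina's actual rubric.

```python
# Score model answers against the majority expert label per vignette.
# Vignette IDs, labels, and answers are illustrative stand-ins.
from collections import Counter

expert_labels = {  # vignette_id -> recommendations from several reviewers
    "v001": ["conservative", "conservative", "surgery"],
    "v002": ["referral", "referral", "referral"],
    "v003": ["surgery", "conservative", "surgery"],
}
model_answers = {"v001": "conservative", "v002": "referral", "v003": "conservative"}

def majority(labels: list[str]) -> str:
    return Counter(labels).most_common(1)[0][0]

hits = [model_answers[v] == majority(labs) for v, labs in expert_labels.items()]
accuracy = sum(hits) / len(hits)
print(f"majority-label agreement: {accuracy:.2f}")  # 2 of 3 -> 0.67
```

On high-disagreement vignettes (the 11% above), a majority label is a weak target, so a real benchmark would likely also report agreement against the full label distribution rather than a single "correct" answer.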
The complete vertical
Real patient outcomes (SpineBase)
↓ Certified synthetic patients (GaussianCopula + blockchain)
↓ Global expert annotation (Spine Reviews + SBTs + $SPINE rewards)
↓ Model training + benchmarking
↓ Patient-facing AI (Lamina)
Each step is privacy-preserving. Each step is blockchain-anchored. Each step creates $SPINE utility: surgeons earn for contributing data, specialists earn for annotating, researchers pay to access.
This is what a decentralized, clinician-governed AI pipeline looks like. No single institution controls it. No single surgeon's biases dominate it. The data is real, the validation is rigorous, and the provenance is immutable.
What is next
Both papers — Validated Synthetic Data Generation from a Multicenter Spine Surgery Registry and Spine Reviews: Crowdsourcing Global Spine Expert Knowledge via Digital Ledger Technology — are being submitted to medRxiv this week.
As the TLIF-BAYES study reaches 400+ patients and the registry grows, the next synthetic certification cycle will test the utility projection empirically: at N=1,000 training cases, TSTR performance is projected to rise from r=0.29 to r≈0.69.
Join at spinal.science · Contribute data at spinebase.app
Both papers are now on medRxiv:
- Validated Synthetic Data Generation from a Multicenter Spine Surgery Registry: Methodology and Benchmark. Challier V et al., SpineDAO Collaborative Group. doi: 10.64898/2026.04.07.26350316
- Spine Reviews: Crowdsourcing Global Spine Expert Knowledge via Digital Ledger Technology. Diebo B et al., SpineDAO. doi: 10.64898/2026.04.11.26350678
SpineDAO — Building Intelligence Around Spine Care









