The question I get asked most is: why did you leave medicine?
The honest answer is that I didn't. I changed the tools.
A surgeon treats one patient at a time. An AI system can serve millions. Both require understanding disease deeply — the biology, the uncertainty, the stakes. The difference is leverage.
When I was at Harvard Medical School, I started teaching myself to code while working on clinical AI research. Not because I planned to leave surgery, but because I wanted to understand how these systems worked. What I found surprised me: most medical AI was being built by people who had never touched a patient.
That's not a criticism — it's a structural problem. The people who understand disease best (physicians) rarely have the technical skills to build AI. And the people who can build AI (engineers) rarely have the clinical depth to know what matters.
The gap is enormous. And it produces systems that optimize for the wrong things — models that are technically impressive but clinically irrelevant, or worse, clinically dangerous.
The case for physician-builders
Medicine is not a data problem. It's a reasoning problem under uncertainty, with life-or-death consequences. The physician who has stood at a bedside, weighed ambiguous evidence, and made a decision with incomplete information — that person understands something about medical judgment that no dataset captures.
When that person can also write code, something powerful happens. They don't just apply existing AI techniques to medical data. They ask different questions. They design different architectures. They build systems that reflect how medicine actually works, not how it looks in a clean dataset.
What I'm building
At Galen Health, we're building what I believe medicine needs: an autonomous AI system that learns continuously from biomedical data to construct an ever-deepening understanding of cancer biology. Not a chatbot with a medical disclaimer. A system that reasons about cancer the way a physician-scientist does — integrating evidence across domains, generating hypotheses, testing them, and updating its understanding.
The goal is audacious: a cancer superintelligence. An AI that can answer any question a patient or physician could ask about cancer, grounded in the full breadth of biomedical knowledge.
We're not there yet. But every day, the system gets a little smarter. And that's the point: this is not a product launch but a research program. One that will take years, maybe decades. The kind of problem worth spending a life on.
This essay is adapted from my TEDx talk, "Doctors Who Code: Why Physicians Should Build Artificial Intelligence."