I’m a Lead Enterprise Architect working on life sciences software, regulated data platforms, and AI systems that ship.

I started in a QC lab pipetting samples. Today I help unify north of 100 software products across a $100M+ business unit. Before that, I spent five years embedded inside four of the world's largest pharmaceutical companies, where I learned that large-scale software succeeds or fails as much through alignment, incentives, trust, and governance as through technology.

My work centers on systems that hold up in real organizations. The hard part isn’t building AI that demos well. It’s building AI that’s still working a quarter later, after the data has drifted and the team has reshuffled.

I think a lot about evaluation. The model is not the system. The system is everything required to make output selectable, constrainable, auditable, and stoppable. The gap between “the demo worked” and “this is safe to put in front of a regulated customer” is where I do my best work.

Off the clock, I run a 24-container homelab monitored with Prometheus and Grafana. Partly because I enjoy it. Partly because I trust architectural opinions more when they come from people who have had to operate their own systems.

I write about the parts of large-scale software that usually stay hidden: the platforms inside applications, the evaluation systems that decide whether AI is shippable, and the decisions that look obvious in retrospect but felt impossible in the moment.

If you build software in a regulated industry and you’re sorting out where AI fits, you’ve probably had some of the same arguments I have.