12 years of infrastructure. Now leaning into inference.
Networks, identity, fleet, facilities — all built under real constraints with real users. Now applying that operational depth to GPU inference: serving, routing, observability, and the physical layer underneath.
Currently
What I’m building right now.
- Running a LiteLLM model router at an edge node, routing inference requests to GPU workers in the lab over WireGuard. The platform changes constantly — that’s the point.
- Exploring passive RF observation with a Pluto+ SDR and Nordic BLE sensors. Interested in what ML can do with raw signal data — especially at the link layer.
- Most of my non-work time is in the lab. It’s where I stay current.
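For the curious, the router setup above looks roughly like a LiteLLM proxy config of this shape. Everything here — model names, addresses, ports, strategy — is an illustrative placeholder, not the actual lab values:

```yaml
# Sketch of a LiteLLM proxy config for edge -> lab routing.
# All names and addresses below are placeholders.
model_list:
  - model_name: lab-default
    litellm_params:
      # vLLM serves an OpenAI-compatible API on the GPU worker,
      # reached via its WireGuard tunnel address.
      model: hosted_vllm/meta-llama/Llama-3.1-8B-Instruct
      api_base: http://10.8.0.2:8000/v1

router_settings:
  routing_strategy: simple-shuffle  # or least-busy / latency-based-routing
```

The proxy runs at the edge; only the WireGuard interface needs to reach the workers, so models can be swapped behind stable `model_name` aliases without touching clients.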
Essentials
Things that make the long hours better.
- Large format screen. Once you go big, everything else feels like squinting.
- Clickiest mechanical keyboard I can find. Always.
- Old Steelcase chair. Not a Herman Miller person.
In the lab right now
RTX A6000 48GB + RTX 5090 32GB
k3s + vLLM + LiteLLM
100GbE
Prometheus + Grafana
Pluto+ SDR
Agentic tooling
Contact
I’m looking for infrastructure, SRE, or platform roles. Remote, open to structured travel.