What I believe

These are design principles that have survived real-world testing across legal AI, military training, education, emotion detection, and aesthetic matching. Each one shapes how I approach my work.

Amplifying Human Brilliance

AI should make human contribution more valuable, not less necessary.

Replacement treats the human as a cost to be minimised. Amplification treats the human as valuable and works to increase that value. This affects what I measure, how I design interactions, and what domains I choose to work in. I measure qualitative experience, not just task completion. I design for dialogue, not transaction. And I focus on complex, subjective, inherently human problems where getting the interaction right matters most.

AI With Taste

The quality bar should be set by people who can tell the difference.

Most AI output is averaged, safe, and bland. Building for complex and subjective domains requires people who understand quality and who stay close enough to the work to maintain that standard. The distance between technically correct and actually good is where most AI products fail, and it takes hands-on involvement to keep that gap closed.

Complex and Subjective Domains

The most valuable AI problems are the ones that resist simple answers.

Automation handles repetitive, objective tasks well enough. The problems I find worth working on are the ones where human expertise matters: legal reasoning, clinical judgment, aesthetic taste, learning design. These require AI that supports human experts without flattening the nuance they bring, and they reward sustained depth over quick generalisation.

Stay Close to the Work

Taste and judgment atrophy without contact with real implementation.

The gap between strategic vision and a working product is where companies succeed or fail. I reject the standard advice that senior leaders should fully delegate technical work. At a growing AI company, the advantage belongs to leaders who can still build, and whose AI fluency lets them operate across more layers of the stack simultaneously than was previously possible.

Deep Insights Over Surface Metrics

The right question is whether users leave more capable than they arrived.

Engagement metrics like time-on-site, clicks, and session counts measure attention capture, not value delivered. I would rather know whether someone left an interaction more capable, more motivated, or more knowledgeable than when they arrived. That standard is harder to meet, but it produces products that people actually want to return to.