Building AI systems
that know their limits.
I design evaluation, reliability, and governance systems that determine when AI can safely automate complex workflows. My central question: when is AI actually ready to act without supervision?
My work focuses on the infrastructure around AI systems — evaluation frameworks, ground-truth datasets, confidence calibration, and human-in-the-loop decision pipelines.
I currently work on large-scale AI risk and automation systems at Meta, with prior experience building enterprise platforms at Amazon, Hitachi, and Oracle, and as co-founder of an early-stage startup.
I hold an MBA from UC Berkeley's Haas School of Business and a master's degree in computer engineering from the National University of Singapore.
- AI evaluation systems and benchmark design
- Human-in-the-loop workflows for consequential AI decisions
- Safe AI deployment and escalation frameworks
- LLM reliability, confidence scoring, and output validation
- Multi-agent orchestration and workflow automation
- AI governance and policy enforcement infrastructure
- Google Generative AI Leader — Google, 2026
- AWS Certified Solutions Architect – Associate — Amazon Web Services, 2021
- AI Product Management Certificate — Product Faculty, 2024
- Advanced PM Skills (Crystal) — Product Faculty, 2025
- Project Management Professional (PMP) — Project Management Institute, 2022