I’m a Research Engineer at the Woods Hole Oceanographic Institution (WHOI).

I research autonomous partners for scientific exploration of challenging domains. I mostly work with and help deploy the full-ocean depth Sentry AUV.

I’m currently at sea. Expect email delays.

Recent Updates

Bard College Talk

march 2026

I’ll be presenting my recent work in a seminar at Bard College. I’ll post relevant notes and links here.

Symbolic Planning: A gentle introduction

january 2026

An introduction to symbolic planning, AI, and formal logics. Aimed at grad students, professionals, and advanced undergraduates. Also, some code for you.

read more

OCEANS (IEEE/MTS)

october 2025

I helped publish work with Ethan Rowe on mechanical fuses triggered by pressure at depth, enabling ballast drop with no electronic systems.

read more

Mariana Trench Fieldwork

march 2026

I’m at sea until mid-March with the Sentry AUV. We’ll be collecting data to learn about some of the oldest geologic formations on Earth, which informs modern interpretations of geologic data.

read more

OCEANS (IEEE/MTS)

october 2025

A new robot autonomy architecture, built on cognitive-architecture and long-horizon planning principles. I developed and tested it on board Sentry, and a publication is now available.

read more

ICDL

september 2025

Matthias Scheutz (director of the Tufts HRILab) presented our work on generating RL simulations to help robots solve novelties.

read more

About

I work in the Deep Submergence Laboratory (DSL) and National Deep Submergence Facility (NDSF) on the Sentry AUV. Previously, I was a grad student in the Tufts Human-Robot Interaction Lab, where I earned a joint PhD in Computer Science and Human-Robot Interaction.

I’m interested in a handful of research problems, all of which tie back to creating robots that are capable deep-sea exploration partners:

  • How can robots make decisions on behalf of human operators? How do operators know that these decisions are safe and in line with human needs?
  • How can robots communicate their observations and decisions when bandwidth is limited? We see this scenario both when a robot is deployed underwater with no tether, and when autonomous vehicles outpace human cognition.
  • How can robots identify, interpret, and resolve failures autonomously? How can novel failure modes be recognized and communicated?
  • How can machine learning techniques (like the neural net systems that excel at identifying trends) be more tightly integrated into symbolic planning techniques (like the STRIPS- or PDDL-style planning systems that excel at long-horizon planning)?
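To make the last question concrete, here is a minimal sketch of a STRIPS-style forward planner in Python. This is purely an illustration (the domain, action names, and planner are hypothetical, not the system deployed on Sentry): states are sets of facts, actions carry precondition, add, and delete sets, and breadth-first search recovers a shortest plan.

```python
from collections import deque

class Action:
    """A STRIPS-style action: preconditions, an add list, and a delete list."""
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def applicable(self, state):
        # Every precondition fact must hold in the current state.
        return self.pre <= state

    def apply(self, state):
        # Remove deleted facts, then add new ones.
        return (state - self.delete) | self.add

def plan(init, goal, actions):
    """Breadth-first search over fact sets; returns a list of action names,
    or None if the goal is unreachable."""
    goal = frozenset(goal)
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a.name]))
    return None

# Toy AUV survey domain (hypothetical facts and actions).
actions = [
    Action("descend", {"at_surface"}, {"at_depth"}, {"at_surface"}),
    Action("survey", {"at_depth"}, {"data_collected"}, set()),
    Action("ascend", {"at_depth"}, {"at_surface"}, {"at_depth"}),
]

print(plan({"at_surface"}, {"data_collected", "at_surface"}, actions))
# → ['descend', 'survey', 'ascend']
```

Because actions and goals here are symbolic facts, a learned perception system could ground facts like `data_collected` from raw sensor data, which is one way the neural and symbolic layers can meet.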