I work in the Deep Submergence Laboratory (DSL) and National Deep Submergence Facility (NDSF) on the Sentry AUV. Previously, I was a grad student in the Tufts Human-Robot Interaction Lab.

I’m interested in a handful of research problems, all of which tie back to creating robots that are capable deep-sea exploration partners:

  • How can robots make decisions on behalf of human operators? How do operators know that those decisions are safe and aligned with human needs?
  • How can robots communicate their observations and decisions when bandwidth is limited? This scenario arises both when a robot is deployed underwater with no tether and when autonomous vehicles produce more information than human operators can process.
  • How can robots identify, interpret, and resolve failures autonomously? How can the new failure modes that this autonomy introduces be identified and communicated?
  • How can machine learning techniques (like the neural net systems that excel at identifying trends) be more tightly integrated into symbolic planning techniques (like the PDDL-esque planning systems that excel at long-horizon planning)?