J Conroy, C Thierauf, P Rule, E Krause, H Akitaya, A Gonczi, M Korman, M Scheutz. 2021 IEEE International Conference on Robotics and Automation (ICRA).
How do we make a robot that can provably cover a complex region?
You can read the full paper here.
When COVID hit, labs everywhere started looking for ways to contribute. At Tufts, the Computational Geometry group and the Human-Robot Interaction Lab (where I was a grad student at the time) aimed to make autonomous ultraviolet disinfection both affordable and scientifically grounded. Hospitals had been using UV robots for years, but most were prohibitively expensive and not especially intelligent. They relied on fixed lamp positions or heuristic movement patterns with no guarantee that all surfaces were disinfected.
The geometry group devised an algorithmic solution to this challenge: they developed a method to calculate provably sufficient exposure coverage for every visible surface in a room. Meanwhile, my role was to transform that theory into a practical robot platform and help integrate it into the HRILab’s DIARC architecture. The objective was to demonstrate that a rigorous algorithm could effectively operate in the unpredictable physical world.
The problem
UVC light kills pathogens by damaging their DNA, but its intensity falls off roughly with the square of the distance from the lamp, and it requires direct line of sight. A lamp fixed in one position can therefore leave significant areas untreated. To ensure thorough disinfection, a robot must determine where to move, how long to dwell at each stop, and how to minimize total operating time while guaranteeing that every accessible surface receives a sufficient UVC dose. The Computational Geometry team cast this challenge as a linear programming problem: starting from a 2D map of a complex environment, their method identifies the points where the robot should position the disinfection lamp, along with dwell times, and comes with a proof that the entire space will be effectively sanitized.
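To make the coverage constraint concrete, here is a minimal pure-Python sketch, not the paper's implementation, of what the linear program has to guarantee: a candidate plan of lamp waypoints and dwell times must deliver at least a required dose to every sampled surface point, under inverse-square falloff and a line-of-sight test against occluding walls. The constants (`REQUIRED_DOSE`, `LAMP_POWER`) and the point-source falloff model are illustrative assumptions.

```python
import math

# Illustrative constants (not from the paper).
REQUIRED_DOSE = 1.0   # minimum UVC fluence each surface point must receive
LAMP_POWER = 5.0      # lamp intensity constant in the inverse-square model

def _orient(a, b, c):
    """Signed area test: >0 if a->b->c turns left, <0 if right, 0 if collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p, q, a, b):
    """True if segments pq and ab properly intersect (collinear touches ignored)."""
    d1, d2 = _orient(a, b, p), _orient(a, b, q)
    d3, d4 = _orient(p, q, a), _orient(p, q, b)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def fluence_rate(lamp, point, walls):
    """Irradiance at `point` from a lamp at `lamp`: zero if any wall blocks
    the line of sight, otherwise inverse-square falloff."""
    if any(_segments_cross(lamp, point, w0, w1) for w0, w1 in walls):
        return 0.0
    d2 = (lamp[0] - point[0]) ** 2 + (lamp[1] - point[1]) ** 2
    return LAMP_POWER / d2 if d2 > 1e-9 else float("inf")

def fully_covered(surface_points, plan, walls):
    """Check the coverage constraint: every surface point's accumulated dose
    (sum of dwell_time * fluence_rate over all waypoints) meets the target."""
    return all(
        sum(dwell * fluence_rate(wp, pt, walls) for wp, dwell in plan) >= REQUIRED_DOSE
        for pt in surface_points
    )

# A wall at x = 2 shadows the far surface point from a lamp near the origin.
walls = [((2.0, -1.0), (2.0, 1.0))]
plan = [((0.0, 0.0), 2.0)]                       # one waypoint, 2 s dwell
print(fully_covered([(1.0, 0.0)], plan, walls))  # visible point: True
print(fully_covered([(3.0, 0.0)], plan, walls))  # occluded point: False
```

The LP's job is to choose `plan` so that this check passes for every sampled surface point while the total dwell time is minimized.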
Building the platform
While the team focused on developing the algorithm, I engineered the hardware and systems architecture. The robot featured a Jetson Nano for onboard computation, a 17-watt germicidal lamp, and dual Hokuyo lidars providing full 360-degree mapping. I designed the chassis using standard aluminum extrusion, incorporated brushless DC motors with differential drive, and managed all operations through ROS. Additionally, I integrated DIARC for coordination and fault management, enabling the robot to plan, execute, and report within our cognitive framework rather than functioning as an isolated ROS node.
It wasn’t fancy (lockdown conditions limited which parts we could get), but it was functional enough to run our trials.

The Sledbot prototype deployed in the lab.
The algorithm in practice
The pipeline began with mapping, where lidar scans were transformed into a clean polygonal floor plan through morphological filtering and simplification. Next, the algorithm calculated visibility polygons for each potential lamp position, modeled light falloff, and solved a linear program to determine the minimal set of waypoints and dwell times needed to ensure full coverage. Finally, the robot planned an efficient traversal path through these waypoints using a modified traveling salesman heuristic.
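For the final step, a nearest-neighbor tour followed by 2-opt improvement is one common way to build such a traversal; the sketch below illustrates that approach under this assumption rather than reproducing the paper's exact heuristic.

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, tour):
    """Total length of the open path visiting waypoints in `tour` order."""
    return sum(_dist(points[tour[i]], points[tour[i + 1]])
               for i in range(len(tour) - 1))

def nearest_neighbor_tour(points, start=0):
    """Greedy construction: always drive to the closest unvisited waypoint."""
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: _dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(points, tour):
    """Local improvement: reverse any sub-path whose reversal shortens the route."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = points[tour[i - 1]], points[tour[i]]
                c, d = points[tour[j]], points[tour[j + 1]]
                if _dist(a, c) + _dist(b, d) < _dist(a, b) + _dist(c, d) - 1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

# Four waypoints at the corners of a unit square: the shortest open path
# through all of them has length 3.
waypoints = [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
route = two_opt(waypoints, nearest_neighbor_tour(waypoints))
print(round(tour_length(waypoints, route), 3))  # 3.0
```

In practice the distances between waypoints would come from the robot's path planner rather than straight-line geometry, since walls can make the Euclidean shortcut infeasible.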

The top-down 2D map (black) that Sledbot generated of the HRILab multi-room environment, with the interpreted boundaries of that map shown in blue and the simplified boundaries in red.
Testing and results
We conducted tests first in simulation and then in real indoor environments. Compared to a stationary lamp, the robot achieved nearly complete surface exposure while cutting disinfection time roughly in half.
