Robotics technology plays an increasingly important role in search-and-rescue missions. Robots are used to explore areas that are deemed too dangerous or difficult for human teams to access. They can, for example, be used to investigate a hazardous material spill or search for disaster survivors. In the case of a disaster, a robot may save the lives of not only the victims but also the rescue workers who might otherwise place themselves in harm’s way to search for survivors. Because of the life-saving potential of search-and-rescue robots in emergency situations, researchers are investigating better ways to control the robots in stressful and challenging environments.
Robots that are used for search and rescue are essentially an extension of the human rescue team. Cameras, microphones, and other sensors that are attached to the robot transmit critical information to the rescue team, which typically controls the robot’s movement remotely. Until recently, rescuers who managed a search-and-rescue robot had to manipulate complicated joysticks, dials, and switches on an elaborate controller with multiple electro-mechanical parts. As described in our blog entry last year, the robotics lab at the University of Massachusetts, Lowell (UML), has developed a natural user interface (NUI) controller that promises greater finesse and control of robots during search-and-rescue operations.
Today we are pleased to present a new short video that highlights the accomplishment of this work and gives you an update on its status.
Building the DREAM Controller
The Lowell robotics lab takes a NUI approach for the Dynamically Resizing Ergonomic and Multi-Touch (DREAM) Controller, which has been in development since 2008. Two Microsoft technologies underlie the DREAM Controller: the Microsoft Robotics Developer Studio (used for simulation) and Microsoft Surface (the user interface).
The Microsoft Surface is a coffee-table-sized device with a computer inside and a touch-sensitive interface on top. The Surface allows multiple users to interact with the computer simultaneously by using whole-hand or multiple-finger gestures. These gestures enable rescue teams to control robots with greater dexterity than they could with traditional robotics controllers—and precise control of the robots is critical for search-and-rescue efforts. In addition, the Surface permits more than one robot to be controlled simultaneously—previously not possible with a single controller.
To use the DREAM Controller implemented on the Surface, users simply place their hands on the interface. The DREAM Controller identifies the user’s fingers and thumbs and displays a virtual “joystick” beneath each hand. The user then manipulates the virtual joystick with a thumb. Each thumb contributes two degrees of freedom (X and Y), so the two hands together provide up to four, enabling control of four different dimensions at once.
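To make the degrees-of-freedom idea concrete, here is a minimal sketch of how a thumb position on a touch surface might be mapped to normalized joystick axes. All names, coordinates, and ranges are illustrative assumptions, not the DREAM Controller’s actual code or API:

```python
# Illustrative sketch only: mapping a thumb contact point to joystick axes.
# Coordinates, radius, and the drive/camera assignment are assumptions.

def thumb_to_axes(thumb_x, thumb_y, center_x, center_y, radius):
    """Map a thumb position to normalized joystick axes in [-1.0, 1.0],
    measured relative to the virtual joystick's center and clamped to
    its circular range."""
    dx = (thumb_x - center_x) / radius
    dy = (thumb_y - center_y) / radius
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(dx), clamp(dy)

# Two thumbs yield four degrees of freedom: for example, the left thumb
# could drive the robot (forward/turn) while the right aims the camera
# (pan/tilt).
drive, turn = thumb_to_axes(130, 95, 100, 100, 50)   # left thumb
pan, tilt = thumb_to_axes(420, 480, 400, 450, 50)    # right thumb
```

Because the joystick is drawn wherever the hand lands, the same mapping works no matter where on the Surface the user places their hands.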
The Lowell team (Holly Yanco and Mark Micire) is also developing a series of pre-programmed gestures with guidance from expert search-and-rescue volunteers. The goal is to develop code that enables the DREAM Controller to recognize specific gestures that rescue workers make naturally during a search-and-rescue operation, thereby facilitating and accelerating rescue efforts.
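The kind of gesture recognition described above can be sketched, in a much-simplified toy form, as classifying a finger stroke and dispatching it to a robot command. The gesture names, thresholds, and command table below are assumptions for illustration, not the Lowell team’s implementation:

```python
# Toy sketch of gesture-to-command dispatch; names and thresholds are
# illustrative assumptions, not UML's gesture set.

def classify_stroke(points, min_dist=40):
    """Classify a single-finger stroke (a list of (x, y) samples) as a
    directional swipe, or as a 'tap' if the finger barely moved."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < min_dist and abs(dy) < min_dist:
        return "tap"
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# A hypothetical table mapping recognized gestures to robot commands.
COMMANDS = {"swipe_up": "drive_forward", "tap": "stop"}

stroke = [(0, 0), (5, -120)]  # mostly upward motion
command = COMMANDS.get(classify_stroke(stroke), "ignore")
```

The appeal of this approach is that rescuers would not need to learn an artificial control vocabulary; the recognizer is trained on gestures they already make naturally.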
The novel NUI approach to robotics that was employed by the Lowell robotics lab in this socially significant application helped the DREAM Controller project win one of eight grants that Microsoft Research offered under our Social Human Robot Interaction Request for Proposals (RFP). The grant award included financial support, a donated Microsoft Surface, and access to the Microsoft Research team.
I think the DREAM Controller project truly shows what a better first response system—using NUI technology—could look like in the very near future. Check out the video!
—Stewart Tansley, Senior Research Program Manager, Microsoft Research Connections
Watch how the Lowell team from the University of Massachusetts uses Surface and RDS to build a very natural user interface.
Hi research team,
Very cool project!
I like the fact that a single robot could take commands both via a Kinect-like camera/audio sensor mounted on the robot and via a Surface device - giving it the best controllability for each circumstance.
I look forward to seeing what else comes out of this project.