RATAR
Rescuer Assistance Through Autonomous Robotics
 
WHY

The Rescue Paradox

Search and rescue is difficult under the best of circumstances. The search itself is physically arduous, mentally demanding, and often downright dangerous for first responders. Upon entering a disaster area, rescue personnel typically have little to no knowledge of the actual state of the building and only educated guesswork to go on when locating victims.

What RATAR does to help

The RATAR system uses one or more semi-autonomous robotic platforms to explore dangerous areas. Using a technique called SLAM (Simultaneous Localization and Mapping), the robot(s) build a rich three-dimensional map of the building's interior. With this map, rescuers know what kind of situation they are walking into before they put themselves at risk. It also lets additional personnel "explore" the structure from outside without donning safety gear, increasing the workforce that can be applied to search and rescue operations while simultaneously reducing response time.
RATAR not only provides an additional set of eyes inside a compromised structure; it also provides a pair (technically two pairs) of ears. Using a technology known as sound source localization, RATAR listens for victims as it searches the building, so that on-site disaster response personnel can move to their rescue as soon as they enter.

HOW

Mechanics

RATAR makes use of a mobile robotic platform to explore disaster zones. The robot uses a reliable electric four-wheel-drive system that allows it to traverse uneven terrain at high speed. The drive train delivers 1.77 hp (about 1.3 kW), enough for the robot to clear obstacles and push objects out of the way through brute force when necessary.
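
The drive electronics and control scheme are not detailed here, but the following sketch illustrates the general idea of turning a desired body motion into wheel speeds, assuming a skid-steer style four-wheel platform. The track width and wheel radius values are placeholders for illustration, not RATAR specifications.

    # Illustrative sketch only: map a desired forward speed and turn rate to
    # per-side wheel speeds for a skid-steer four-wheel platform.
    TRACK_WIDTH_M = 0.40   # distance between left and right wheels (assumed)
    WHEEL_RADIUS_M = 0.08  # wheel radius (assumed)

    def skid_steer_wheel_speeds(v_forward_mps, yaw_rate_rps):
        """Return (left, right) wheel angular speeds in rad/s."""
        v_left = v_forward_mps - yaw_rate_rps * TRACK_WIDTH_M / 2.0
        v_right = v_forward_mps + yaw_rate_rps * TRACK_WIDTH_M / 2.0
        return v_left / WHEEL_RADIUS_M, v_right / WHEEL_RADIUS_M

    # Example: drive forward at 1 m/s while turning at 0.5 rad/s.
    left_rads, right_rads = skid_steer_wheel_speeds(1.0, 0.5)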

Sensing

The RATAR solution uses a single Microsoft® Kinect® to view its surroundings, along with additional sensors to keep track of its own state. The Kinect® provides a depth map: an image in which each pixel records the distance to the nearest surface, which forms the basis of the three-dimensional map building. The Kinect® can also see in the dark through its IR camera and listen to the environment through an array of four microphones.
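
To give a sense of how a depth map becomes geometry, the sketch below back-projects depth pixels into 3-D points with a standard pinhole camera model. The intrinsic parameters are rough, commonly quoted figures for the original Kinect depth camera and are assumptions here, not calibrated RATAR values.

    import numpy as np

    # Illustrative sketch: back-project a 480x640 depth image (in metres)
    # into an (N, 3) point cloud using pinhole camera intrinsics.
    FX, FY = 594.0, 591.0   # focal lengths in pixels (assumed)
    CX, CY = 320.0, 240.0   # principal point for a 640x480 image (assumed)

    def depth_to_points(depth_m):
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]   # drop pixels with no depth reading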

Brains

The RATAR system uses SLAM to build a three-dimensional map of its surroundings in real time as it explores them under an exploration-based decision-making algorithm. This map can then be provided to on-site rescuers to facilitate safe "virtual" exploration of the disaster area.
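
A full SLAM pipeline is well beyond a short snippet, but the mapping half can be sketched as follows: given a pose estimate and a set of range-bearing measurements, mark the corresponding cells of an occupancy grid as occupied. The grid size and resolution below are assumptions for illustration; a real system would also estimate the pose itself (for example via scan matching or a pose graph), which is omitted here.

    import numpy as np

    # Illustrative sketch of the mapping half of SLAM: fill a 2-D occupancy
    # grid from range-bearing measurements taken at a known robot pose.
    GRID_CELLS = 200      # 200 x 200 cells (assumed)
    CELL_M = 0.05         # 5 cm per cell (assumed)
    grid = np.zeros((GRID_CELLS, GRID_CELLS), dtype=np.uint8)

    def mark_hits(pose, ranges_m, angles_rad):
        """pose = (x, y, heading); mark each measurement endpoint as occupied."""
        x, y, theta = pose
        for r, a in zip(ranges_m, angles_rad):
            col = int((x + r * np.cos(theta + a)) / CELL_M) + GRID_CELLS // 2
            row = int((y + r * np.sin(theta + a)) / CELL_M) + GRID_CELLS // 2
            if 0 <= row < GRID_CELLS and 0 <= col < GRID_CELLS:
                grid[row, col] = 1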

The microphone array provides further information through a process called sound source localization. Using its on-board microphone array, the RATAR robot can determine the approximate direction of any sound that it hears. This lets its search algorithm operate more effectively by zeroing in on human voices and searching areas it suspects contain injured victims more thoroughly. It also allows the robot to "see through" walls, in a manner of speaking: by hearing what is occurring on the other side of a door or wall, the RATAR robot can direct rescue efforts toward where it suspects a person may be trapped.
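
The core of sound source localization with a microphone pair is estimating the time difference of arrival (TDOA) between the two signals and converting it to a bearing. The sketch below does this with a plain cross-correlation; the sample rate, microphone spacing, and choice of correlation method are assumptions for illustration rather than details of the RATAR implementation.

    import numpy as np

    # Illustrative sketch: estimate the bearing of a sound source from one
    # microphone pair via time difference of arrival (TDOA).
    SAMPLE_RATE_HZ = 16000
    MIC_SPACING_M = 0.2     # distance between the two microphones (assumed)
    SPEED_OF_SOUND = 343.0  # metres per second in air

    def estimate_bearing(mic_a, mic_b):
        """Return the bearing in radians relative to the pair's broadside.
        The sign convention depends on which microphone is passed first."""
        corr = np.correlate(mic_a, mic_b, mode="full")
        lag_samples = np.argmax(corr) - (len(mic_b) - 1)
        tdoa_s = lag_samples / SAMPLE_RATE_HZ
        # Clamp to the physically possible range before taking the arcsine.
        return np.arcsin(np.clip(tdoa_s * SPEED_OF_SOUND / MIC_SPACING_M, -1.0, 1.0))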

WHO

Muny Tram

Muny is a mechanical specialist with expertise in robot design.

Derek Chow

Derek is a software specialist with expertise in robot autonomy, perception, and control.

Chris Au

Chris is a software/mechanical generalist with expertise in communication.

Luke Van Oort

Luke is a software/mechanical generalist with expertise in electrical design.
 
HELP