Abstract
Loss due to fire
damage has always been a major area of concern for both industrial and
residential areas. In-depth research identified that the US Navy has an
increased demand for improved fire detection technology in order to reduce
costs incurred from fire damage and false alarms [1]. A formal needs
assessment was carried out to determine
the design requirements of such technology. Through a problem formulation
procedure, it was determined that the use of an autonomous robot equipped
with advanced fire detection technology can minimize costs, reduce false
alarms, and be highly extensible to other industries. The design of this
system was broken down into three main components: navigation, localization
and fire detection.
For the localization process, two techniques were considered for the
Initial Training Phase. The first, Manual Placement, involves recording
Wi-fi Access Point signal strengths and LASER sensor values at locations
spaced at 1 m intervals in both the horizontal and vertical directions of
the area to be mapped, in ‘offline’ mode while no fire detection is taking
place. The second technique, Gaussian Process Regression, involves
computationally heavy stochastic calculations carried out while fire
detection is running, in ‘online’ mode. Since Gaussian Process Regression
requires significant processing power and may interfere with the
fire-detection process, Manual Placement was chosen. For Location
Determination, two algorithms were considered: Euclidean Distance and
Monte Carlo. The Monte Carlo algorithm gives position information within an
accuracy of 0.5 m, as opposed to 2 m for Euclidean Distance. Since the
desired accuracy is 1 m, the Monte Carlo algorithm was chosen.
Navigation
can be subcategorized into Global and Local Path Planning. Global path
planning involves finding the most optimal path (in a known environment) from
one point to another. Local path planning involves performing fast, real-time
obstacle avoidance. The vector field histogram (VFH) algorithm was selected
as the local path planner due to its speed, efficiency and local path
optimality. The Wavefront algorithm was chosen as the global path planner
since it is computationally efficient and offers optimal global paths.
The latest high-end fire detection
technology was researched in order to determine which sensor types produce
the most accurate results. A variety of lower-end and higher-end sensors were
grouped together in sensor packages, and evaluated based on a set of cost and
performance criteria. As a result, a sensor suite was selected that
comprises a high-end NetSafety Triple IR flame detector, a USB night-vision
web camera, a Hamamatsu UVTron flame detector, and a combination
photoelectric-ionization smoke alarm. The web camera is to be used with a
custom built vision system using open source vision libraries. A method of
detecting smoke and flames through video is also described with reference to
recent research done on the matter. These sensor readings are all to be
evaluated in parallel in order to have built-in redundancy when deciding to
sound a fire alert. This enables the reduction of false alarm signals and
allows for the detection of most common fires (class A fires) in the most
common locations. Lastly, this system fell within our cost criterion of
$5000/unit as the total system costs less than $4000.
1
Introduction
1.1
Background
With the rapid development of technology and innovation, there has been
increased focus on the area of fire detection throughout the past few
decades. One of the major applications for such technology is on navy
ships, where the cost of such fires has proven to be disastrous. Losses due
to fires in 2007 were in excess of $50 million, and $70 million in damages
were sustained in May 2008 due to a fire onboard the aircraft carrier USS
George Washington near Japan [1]. Recent findings suggested that 36% of
naval personnel smoked, which can translate to over 1000 smokers on board
[2]. This adds an ever-increasing risk of fire damage and has been one of
the main reasons why the US Navy has invested time and effort in
researching alternative fire detection systems. The results of their
studies have been analysed and show sufficient ability to detect actual
fires but limited success in reducing false alarm rates. Although fire
detection and
prevention is essential for highly equipped navy ships, it can be extended
to other areas as well such as industries and warehouses, which can include
banks and key government buildings; fires in these areas could have
significant immediate financial impacts and loss of highly important
information. Advanced fire detection can also be applied to residential
areas to reduce residential fires if the size and cost of such systems is
miniaturized.
1.2
Needs Assessment
There is a realistic need for fire detection onboard navy ships as well as
in other areas. Fires can occur very easily and incur significant damage
and possibly the loss of lives. Factors such as hazardous environments and
smoking can cause fires. Fire detection will help with preventing fires,
but it also presents the problem of false alarms in detection. In 2007,
13.64% of all detected fires were false alarms, which can be very costly in
terms of personnel and resources. False alarms cause some operations to
shut down and personnel to be relocated and assigned to look after the
situation. The simple alternative of adding more smoke alarms will not
suffice due to cost and lack of reliability. A single smoke detector costs
around $20, giving a total cost of over $1 million for a 3.5-acre aircraft
carrier if this were to be implemented. Not only is this solution very
costly, but it would also significantly increase the number of false
alarms. Also, smoke detectors may only be able to detect some fires after
their initial early stages. Another intuitive approach may be to increase
the number of personnel who roam the ship for possible sources of fire;
however, this requires a large number of human resources to cover the
entire ship, thus increasing cost significantly. Therefore, an incentive is
provided for finding a newer and more efficient solution for fire detection
and prevention, one that must work to lower the cost of fire detection and,
at the same time, be reliable and cut down on the number of false alarms.
1.3
Problem Formulation
1.3.1
Objectives
It follows from the needs assessment that an autonomous fire detection robot
is capable of eliminating the human cost factor, and if equipped with
advanced fire detection technology, can reduce false alarms while being able
to provide fire detection coverage for a larger area with a minimized cost.
The overall goal of this project
is to develop an autonomous early fire detection mobile robot system that is
capable of identifying fires at early stages, alerting fire personnel, and reducing
false alarms. Three subcomponents of this project specify the goals and
objectives with respect to each component.
For the navigation of the robots,
the goal is to allow for full navigation of the given area as quickly as
possible via the optimal path while avoiding obstacles. The objectives
here are to cover 90% of the (accessible portions of the) pre-specified
area, at a minimum average speed of 500 cm/s per robot, and to avoid 100%
of any static and dynamic obstacles.
In terms of localization, the
goal is to design an efficient location determination system to aid with the
navigation aspects. The main objective is to achieve an acceptable level of
accuracy in terms of location estimation.
The last component and arguably
the most important one is fire detection. The main goal here is to develop a
fire detection system that is accurate, minimizes false alarms, and works in
unison with an existing centralized fire alarm system. The objectives are
to have zero missed detections for the most common fire scenarios, to be
functional in a smoke-filled environment, to minimize false alarms to below
25%, and to keep the cost of the detection system itself under $5000.
1.3.2
Constraints
In terms of constraints, the navigation aspect has three major constraints.
To cover 90% of the pre-specified area, a relatively accurate map of the
area must be provided; to sustain an average speed of 500 cm/s, the
traveling surface for the robots should be relatively flat and offer
sufficient traction. Finally, avoiding all static and dynamic obstacles
excludes fast moving objects directed towards the robot.
For communication and
localization, the main constraint is to have the robot always be within range
of at least 4 Wi-Fi Access points so it can pick up at least 4 Wi-Fi signals.
One major constraint for fire detection is that detection is limited to
class “A” fires and small electrical fires involving materials such as
paper, wood and organic material. A second constraint is detection in
common locations such as desk fires, garbage bins, dry storage areas and
control rooms. The last constraint is that false alarms be limited to
certain test cases consisting of cigarette smoking, cooking, welding, and
high temperature operations (i.e. engine rooms).
There are three main criteria for navigation, the first being to keep the
implementation complexity at a minimum to allow for better performance and
faster computational speed. The second criterion is to have smooth obstacle
avoidance, that is to say the robots will smoothly maneuver around an
obstacle without stopping, turning and making jerky movements. The last
criterion is to have path optimality, so that the robot will take the best
path; this could mean the shortest path in some circumstances.
There are two main criteria for localization. The first concerns the
initial training phase technique, which should require minimal manual
overhead and minimal processing, to give higher priority to the fire
detection process; complexity will need to be held at a minimal level for
this to be achieved. For the location determination algorithm, an accuracy
within +/- 1.0 m of the true location is required, along with minimal
computational processing.
The criteria for fire detection include keeping the cost under $5000 per
unit, decreasing the false alarm rate to below 25%, and ease of integration
with respect to power source, signal conditioning, and programming. The
proposed solution should try to minimize the amount of work required in
conditioning the signal output from the sensors of each package; some
sensors can be integrated and operated more easily than others. Finally,
the robot should be able to operate in close proximity to a fire and in
smoke-filled environments such that the sensors still maintain their normal
functionality.
1.4
Patents
The following are a few current patents that relate to different aspects of
the project, showing some of the research and development in recent years
in both fire detection and navigation/localization.
1.4.1
Patent #1 - Fire detection
and extinguishment system
This patent was issued on Jan. 23rd, 1996 to John P. Wehrle, Ernest A.
Dahl, and James R. Lugar, and assigned to The United States of America as
represented by the Secretary of the Navy. It describes an early fire
detection and extinguishment system built from more than one unit. Each
unit is equipped with an extinguishment system and localized to a protected
space, and the data is processed by a central control unit to reduce false
alarms and increase sensitivity. The claims of this patent include a fire
detection and extinguishment system for detecting and extinguishing early
stage fires in a protected space, a system of sensors for fire detection,
and a localized communication system.
1.4.2
Patent #2 - Fire detection
system with IR and UV ratio detector
This patent was issued on June 19th, 1984 to Roger A. Wendt and assigned to
Armtec Industries, Inc. It outlines an automatic fire detection system
using infrared (IR) and ultraviolet (UV) sensors, where the outputs of the
sensors are captured and compared to a predefined ratio to determine the
presence of a fire and generate an alarm signal. Its claims include a means
for automatic fire detection using IR and UV radiation from pre-selected
zones, and comparing the ratio of the outputs to a set of known values to
generate a fire signal if the ratio falls into the range of values that
characterizes a flame.
1.4.3
Patent #3 - System and
method for WLAN signal strength determination
This patent was issued on March 28th, 2006 to Hamid Najafi and Xiping Wang,
and assigned to CSI Wireless LLC. It describes a method for WLAN signal
strength determination that converts WLAN radio frequency (RF) signals to
voltages, compares each voltage to a reference voltage, and outputs the
data if it is greater than the reference voltage. The claims cover a method
of receiving WLAN RF signals, converting them to voltages proportional to
the signals, and outputting data if the converted voltage is greater than
the reference.
1.4.4
Patent #4 - Location of
wireless nodes using signal strength weighting metric
This patent was issued on Oct. 3rd, 2006 to Paul F. Dietrich, Gregg Scott
Davi, and Robert J. Friday, and assigned to Airespace, Inc. It describes a
wireless node location mechanism that uses a signal strength weighting
metric to improve the accuracy of estimating the location of a wireless
node based on signals detected among a plurality of radio transceivers. Its
claims include an RF coverage map characterizing signal strength for
locations in a physical region, as well as computing the estimated location
of wireless nodes based on collected signal strength.
2
Proposed Solution
2.1
Localization
The objective of the Localization system is to provide accurate information
to the robot about its position. The localization process is done first by teaching
the system which values of signal strength correspond to which specific
location. This teaching phase is the ‘Initial Training Phase’. When the
system has learned the mapping between signal strengths and physical
location, each robot enters the ‘Location Determination Phase’. Each robot
has a Wi-fi transceiver connected to it, which is able to determine the
signal strength of all the Wi-fi Access Points (APs) in its detection
range. The robot then runs the appropriate algorithm to translate the
signal strengths of the Wi-fi Access Points into a physical position.
2.1.1
Initial Training Phase
During this phase, a virtual grid is created in the robot’s software, which
contains all the physical coordinates that the robot will visit. These coordinates
are two-dimensional, similar to earth’s latitude and longitude system.
However, these coordinates will have an origin (0,0) datum point, which
will be the location of the Central Master Controller. During this phase,
the robots
learn which locations correspond to what values of signal strength of each of
the known Access Points. There are two techniques to learn the translation of
signal strength to physical location.
2.1.1.1
Manual Placement
In this technique, one robot will be manually placed at intervals of 1m, in
both vertical and horizontal directions. At each point that the robot is
placed, the robot will be turned on and 100 samples of signal strength will
be recorded for all the Wi-fi Access Points. Then, the data for the 4
Access Points with the highest average signal strengths will be kept and
the rest discarded. This is done to limit the amount of data stored and
later analyzed to determine the location. Next, the physical location is
manually entered into the robot: specifically, the ‘x’ or horizontal
coordinate and the ‘y’ or vertical coordinate are entered into the system.
Therefore,
the robot is able to create a physical map of (x,y) coordinates and a signal
strength map with average signal-strength, and standard deviation at that particular
(x,y) coordinate. Finally, after the Access Point signal strengths at the
different coordinates have been recorded, the data will be uploaded to the
Central Master Controller so that it can be given to the other robots as
well. Each robot maps a different area and supplies that information to the
Central Master Controller, which eventually combines the data into a map of
the entire region and supplies the full map back to the robots. In addition
to recording the signal strengths, the robots can similarly create a
virtual map of LASER readings at each of the coordinates where the signal
strength is measured. The map upload, assembly, and download to the robots
can be performed ‘offline’ before the robots start to detect fire, because
it only needs to be done once. This avoids carrying out these computations
while detecting fire, i.e. in ‘online’ mode. This way, the fire-detection
process gets higher priority, which is desirable.
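The per-grid-point bookkeeping described above can be sketched as follows (a minimal illustration, assuming the 100 RSSI samples for a point arrive as a dict keyed by Access Point identifier; `build_fingerprint` and the sample values are hypothetical, not the project's actual code):

```python
import statistics

def build_fingerprint(samples_by_ap, top_k=4):
    """Keep the top_k Access Points by mean signal strength, storing the
    mean and standard deviation of each, as in Manual Placement.
    samples_by_ap: dict mapping AP id -> list of RSSI samples (dBm)."""
    stats = {
        ap: (statistics.mean(vals), statistics.pstdev(vals))
        for ap, vals in samples_by_ap.items()
    }
    # Sort APs strongest-first by mean RSSI and keep only top_k of them.
    kept = sorted(stats, key=lambda ap: stats[ap][0], reverse=True)[:top_k]
    return {ap: stats[ap] for ap in kept}

# Raw samples at one manually entered grid point (illustrative values):
samples = {
    "AP1": [-40, -42, -41],
    "AP2": [-70, -72, -71],
    "AP3": [-55, -54, -56],
    "AP4": [-80, -81, -79],
    "AP5": [-60, -61, -59],
}
fingerprint = {(0, 0): build_fingerprint(samples)}
```

In the full system each robot would accumulate one such entry per 1 m grid point, and the Central Master Controller would merge the per-robot dictionaries into the map of the entire region.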
2.1.1.2
Gaussian Process Regression
The Gaussian Process Regression is a sophisticated stochastic process which
is able to interpolate with very high accuracy. In this case, the robot is
sent along a straight line while recording signal strengths. As each robot
records the signal strengths from the detected Wi-fi Access Points, it also
records the position to which those signal strength values correspond. This
can easily be done by knowing the radius of the wheel and the number of
rotations: the distance travelled is the wheel’s circumference times the
number of rotations made, which can be obtained from the software layer
that deals with the movement of the robot. Thus, when the robot is started
it is told the starting coordinates (x,y) and is set to travel in either
the ‘x’ or the ‘y’ direction, so the coordinates at each rotation of the
wheel have a fixed ‘x’ or ‘y’ value. If the robot is moving in the ‘y’
direction then the ‘x’ value is fixed and the value of ‘y’ will be the
number of wheel rotations times the circumference of the wheel. Therefore,
a rough translation between Wi-fi
Access Point Signal Strengths, LASER readings, and the actual coordinates is
made. The data obtained from all robots is regressed at the Central Master
Controller, and a grid similar to the one obtained by Manual Placement is
produced, but with less accuracy. However, this grid can be made more
accurate, and the grid point intervals can be reduced, e.g. from 1 m to
0.5 m to 0.1 m, as more data is collected [4]. The regression works best as
more data is recorded, so this approach would require the robots to record
signal strength values while they are detecting fires as well, i.e. in
‘online’ mode [4]. Since the Gaussian Process is computationally expensive
and requires significantly larger processing capabilities than Manual
Placement, it would reduce the frequency of fire detection. The details of
the Gaussian Process are outside the scope of this report, but more can be
found in the paper by F. Duvallet and A. D. Tews, “WiFi Position
Estimation in Industrial Environments Using Gaussian Processes” [4].
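The wheel-rotation arithmetic above can be written out directly (a small sketch; `position_after` and the 5 cm wheel radius are illustrative, not the actual PC-Bot geometry):

```python
import math

def position_after(start, axis, rotations, wheel_radius_m):
    """Dead-reckoned coordinate while driving a straight line, as in
    the Gaussian Process training run: distance travelled equals the
    wheel circumference times the number of rotations. One coordinate
    stays fixed; start = (x, y), axis = 'x' or 'y'."""
    distance = 2 * math.pi * wheel_radius_m * rotations
    x, y = start
    return (x + distance, y) if axis == "x" else (x, y + distance)
```

Each recorded signal-strength sample would be tagged with the coordinate returned here, giving the rough (signal strength, position) pairs that the regression consumes.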
2.1.2
Location Determination
Phase
After the signal strengths from the Wi-fi Access Points and the 270-degree
LASER sensor values corresponding to physical locations have been recorded,
these values are used to calculate the position. The position determination
is done during this phase, once the robots are ‘online’ and actively
detecting fire. The position determination is required for the robots to know
where they are when they are trying to move through the actual physical
location and trying to get to a goal from a starting point. The start and
the goal are both positions, therefore it is critical for the robots to
know their own position. The following two approaches were considered for
location determination.
2.1.2.1
Euclidean Distance Location
Determination Algorithm
In this technique, the entire database is scanned and a value, namely the
Euclidean distance El, is calculated for each stored position as

El = sqrt( (x1 - X1)^2 + (x2 - X2)^2 + ... + (xn - Xn)^2 )

where xi is the sensor value stored in the database (either a Wi-fi Access
Point Signal Strength or a LASER sensor reading), Xi is the value currently
being recorded from sensor ‘i’, and ‘n’ is the total number of sensor
readings to be compared. Here, n = 5: the 4 Wi-fi Access Points plus the
LASER sensor.
Therefore, this approach scans the entire database and compares the current
sensor readings to the stored sensor readings for all position values. The
position value with the smallest Euclidean distance is the output of the
algorithm [5]. This approach has been shown to achieve an accuracy of
within 2 m using 4 Wi-fi Access Points [5], meaning the position value it
reports may be off by +/- 2 m from the actual position.
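The scan can be sketched in a few lines (an illustrative implementation; the function name and fingerprint values are hypothetical, and each stored vector holds the four AP signal strengths plus one LASER reading):

```python
import math

def euclidean_lookup(database, reading):
    """Scan the whole fingerprint database and return the (x, y) whose
    stored sensor vector minimizes the Euclidean distance El to the
    current reading. database: {(x, y): [x1..xn]}; reading: [X1..Xn]."""
    def el(stored):
        return math.sqrt(sum((s - r) ** 2 for s, r in zip(stored, reading)))
    return min(database, key=lambda pos: el(database[pos]))

# Illustrative database: 4 AP RSSIs (dBm) plus one LASER range (m).
db = {
    (0, 0): [-40, -55, -60, -70, 1.2],
    (1, 0): [-45, -50, -62, -68, 2.0],
    (0, 1): [-60, -40, -70, -60, 0.8],
}
print(euclidean_lookup(db, [-44, -51, -61, -69, 1.9]))  # nearest is (1, 0)
```

A real implementation would normalize the RSSI and LASER terms so that one sensor's units do not dominate the distance.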
2.1.2.2
Monte Carlo Location
Determination Algorithm
This algorithm applies a sophisticated stochastic process to estimate a
parameter, in our case position, from a set of data points relating that
parameter to other measured parameters, in our case Wi-fi Access Point
Signal Strengths and LASER sensor readings. The algorithm is provided by
the open-source ‘player-stage’ platform already present on the PC-Bots
being used. The accuracy provided by this algorithm is within 0.5 m [6].
The full details of this algorithm are outside the scope of this report,
but can be found in the paper by F. Duvallet and A. D. Tews, “WiFi Position
Estimation in Industrial Environments Using Gaussian Processes” [4].
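The core idea can be shown with a minimal particle-filter update (a toy sketch, not the player-stage implementation: one scalar RSSI observation, a Gaussian likelihood, and resampling; `mcl_step` and all values are illustrative):

```python
import math
import random

def mcl_step(particles, observed_rssi, expected_rssi_at, noise=2.0):
    """One Monte Carlo localization update: weight each candidate
    position by how well the signal strength expected there matches the
    observation, then resample positions in proportion to the weights."""
    weights = []
    for p in particles:
        err = observed_rssi - expected_rssi_at(p)
        # Gaussian measurement likelihood with standard deviation `noise`.
        weights.append(math.exp(-err * err / (2.0 * noise * noise)))
    return random.choices(particles, weights=weights, k=len(particles))
```

In the real system the likelihood would combine all four AP readings and the LASER scan against the trained map, and particles would also be propagated by a motion model between measurement updates.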
Evaluation of Initial Training Phase Techniques

Criteria \ Technique | Manual Placement | Gaussian Process
Manual Overhead | Score: 4. Requires manual placement of robots at 1 m intervals, therefore significant manual overhead. | Score: 7. Requires manual input only once, for the initial position, therefore less manual overhead.
Processing Required | Score: 8. Can be done ‘offline’, so it requires no processing ‘online’ while the robots are also detecting fires. | Score: 2. Requires constant updates to its database, therefore it takes processing time while the fire-detection process is ‘online’.
Total Score | 12 | 9

Table 1 – Evaluation of Initial Training Phase Techniques
Clearly, from the above analysis, the Manual Placement technique is chosen.
Evaluation of Location Determination Algorithms

Criteria \ Algorithm | Euclidean Distance | Monte Carlo
Accuracy | Score: 2. Gives position only within an accuracy of 2 m. | Score: 10. Gives position within an accuracy of 0.5 m.
Processing Required | Score: 6. Looks up the entire database, which is computationally expensive. | Score: 7. Also looks up the entire database and does some further processing on it.
Total Score | 8 | 17

Table 2 – Evaluation of Location Determination Algorithms
2.2
Navigation
One of the main objectives of this project is to have the robots fully
navigate the given area as quickly as possible via the optimal path
while avoiding obstacles. Obstacle avoidance and navigation path optimality
have been classic robotics problems and although many solutions have been
proposed, none have been universally perfect for all applications. As such,
it is important to use appropriate methodologies for each application
independently in order to meet the required specifications.
Navigation can be subcategorized into Global and Local Path
Planning. Global path planning involves finding the most optimal path (in
a known environment) from one point to another. Local path planning involves
performing fast, real-time obstacle avoidance. There are several methods
which are neither global nor local path planners such as the potential
field method. However, such methods are not guaranteed to be
optimal and may fail if the environment contains local minima (i.e. specific
arrangements of obstacles which may cause the robot to become permanently
immobilized). Therefore, such methodologies will be excluded from this design
in order to maintain navigation continuity. Local path planners by themselves
may also suffer from this problem and do not perform well when the goal is
far away; however, implementations which consist of both local and global
planners are often optimal and guarantee continuity. As such, this design
will include a hybrid navigation methodology which will consist of one technique
from each category of path planning. [8]
2.2.1
Global Path Planning
Global path planning methodologies are often computationally expensive and
require a relatively accurate map of the environment in order to determine
the optimal path. The frequency of re-planning is dependent on the efficiency
of the algorithm. Ideally, the global planner should update the environment
in real-time (as obstacles are found by the local planner) and recalculate
the globally optimal path. Some popular techniques to accomplish global path
planning include the Wavefront algorithm, the A* Search
algorithm and having the user manually input a desired global path.
2.2.1.1
Manual Path Input
In terms of determining an acceptable global path, having the end user
manually determine the path is perhaps the simplest methodology. However,
in terms of implementation complexity it is not the best solution because
it requires a relatively complex Graphical User Interface (GUI) to be built
in order to obtain the desired path from the user. Furthermore, manually
inputted paths are not guaranteed to be optimal even though they may be
acceptable to the end user; usually such solutions are offered as a
secondary option.
2.2.1.2
Wavefront Algorithm
The wavefront algorithm (also referred to as Distance Transform Path
Planning) is unique in the way that it determines the optimal path by
traversing backwards from the goal position towards the robot start
position. This method is guaranteed to offer an optimal path provided that
the given environment map is accurate. It is not too difficult to implement
and offers very good computational efficiency.
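The two passes of the wavefront method can be sketched as follows (an illustrative implementation on a 4-connected occupancy grid, not the project's actual code): a breadth-first wave expanded from the goal labels every reachable free cell with its step distance, then the robot follows strictly decreasing labels from its start.

```python
from collections import deque

def wavefront(grid, goal):
    """Distance transform: label each free cell (grid value 0) with its
    step distance to the goal via breadth-first expansion."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in grid]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def follow(dist, start):
    """From the start, repeatedly step to the neighbour with the
    smallest label until the goal (label 0) is reached."""
    path, (r, c) = [start], start
    while dist[r][c] != 0:
        r, c = min(
            ((r + dr, c + dc)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])
             and dist[r + dr][c + dc] is not None),
            key=lambda p: dist[p[0]][p[1]],
        )
        path.append((r, c))
    return path
```

Because every labelled cell has a neighbour whose label is exactly one smaller, the descent always terminates at the goal, which is what makes the resulting path optimal on an accurate map.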
2.2.1.3
A* Search Algorithm
The A* algorithm is a best-first tree search algorithm which uses a
combination of the path cost and a heuristic function to determine the
order in which it visits the tree nodes. The path cost is the cost of
moving from one position to another, and the heuristic function provides an
estimate of
the desirability of visiting a given node. For the purposes of this project,
the heuristic function can be the straight line distance from any given
position to the goal position and the path cost can be 1 per grid move. This
algorithm guarantees to find the optimal path to the goal position if one
exists. Its implementation complexity and time efficiency are both worse than
the wave front algorithm.
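A compact sketch of the search as described, with unit path cost per grid move and the straight-line distance to the goal as the heuristic (illustrative only; the heuristic is admissible, so the returned path is optimal):

```python
import heapq
import math

def a_star(grid, start, goal):
    """Best-first search ordered by f = g + h on a 4-connected grid
    (0 = free, 1 = obstacle); g is moves so far, h the straight-line
    distance to the goal. Returns the optimal path or None."""
    def h(p):
        return math.hypot(p[0] - goal[0], p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this cell more cheaply
        best_g[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

The priority queue is the source of the extra bookkeeping, relative to the wavefront method, that the comparison in Table 3 penalizes.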
As shown in Table 3, the wavefront algorithm is the best method of
determining the optimal global path. It is more computationally efficient and
less complex to implement than the A* search algorithm. It also guarantees to
find the optimal path to the goal if one exists.
Criteria \ Method | WF | A* Search | UD
Time Efficiency | 0.8 | 0.7 | 0.4
Complexity | 0.5 | 0.4 | 0.5
Global Path Optimality | 0.7 | 0.7 | 0.3
Total | 2.0 | 1.8 | 1.2

Table 3 – Comparison of global path planning design alternatives
(WF = Wavefront; UD = user-defined path)
As such, the wavefront algorithm will be incorporated into this project
and discussed in more detail in the following sections.
2.2.2
Local Path Planning
Local path planning (also referred to as Local Navigation and Obstacle
Avoidance) can be performed using a variety of existing methodologies.
Local path planners must be very computationally efficient as they are
required to dynamically detect environmental changes and reactively take
appropriate action all in real-time. A good local path planner will not
collide with any static or moving object and will try to smoothly steer
around obstacles without stopping (given a reasonable speed of movement for
both the robot as well as other objects). Some popular local planners include
the Vector Field Histogram (VFH) Algorithm, Edge Detection
Methods and the Dynamic Window Algorithm. [11]
2.2.2.1
Vector Field Histogram
The VFH algorithm has been recognized by many as the best method of
performing obstacle avoidance in existence today. It uses a polar histogram
of vector forces generated by obstacles and the target. The obstacles have a
different polarity from the target and the sum of the forces causes the robot
to be attracted towards the target and repelled from obstacles. The magnitude
of the forces is determined by many factors including the distance from the
robot to the obstacle/target, the direction of obstacle/target, certainty of
obstacle/target position and the estimated size of the obstacle. This
algorithm offers very fast, optimal and smooth local trajectories and does
not require the robot to stop at any given time. It is computationally
efficient and moderately difficult to implement.
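The histogram construction and sector selection can be shown in reduced form (a toy illustration of the idea only, omitting VFH's certainty grid, histogram smoothing, and robot-geometry handling; `steer_direction` and all thresholds are hypothetical):

```python
import math

def steer_direction(obstacles, target_bearing, sectors=36, threshold=0.5):
    """Reduced VFH sketch: build a polar histogram of obstacle density
    (closer obstacles contribute more), then steer toward the free
    sector closest to the target bearing.
    obstacles: list of (bearing_deg, distance_m); returns a heading in
    degrees, or None if every sector is blocked."""
    width = 360 / sectors
    hist = [0.0] * sectors
    for bearing, dist in obstacles:
        # Density contribution grows as the obstacle gets closer.
        hist[int(bearing % 360 // width)] += 1.0 / max(dist, 0.1)
    free = [i for i, h in enumerate(hist) if h < threshold]
    def angular_gap(i):
        centre = (i + 0.5) * width
        diff = abs(centre - target_bearing)
        return min(diff, 360 - diff)
    return (min(free, key=angular_gap) + 0.5) * width if free else None
```

The full algorithm additionally weights sectors by the certainty of each obstacle reading and smooths the histogram so that narrow gaps the robot cannot fit through are not selected.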
2.2.2.2
Edge Detection
Edge detection methods have been around for a very long time. Upon detecting obstacles
(often using ultrasonic sensors), some variations try to follow a certain
edge of the obstacle until the robot has completely steered around it. Other
variations (also using Ultrasonic sensors) stop and take panoramic scans of
surroundings when an obstacle is detected. The data is filtered and analyzed
and a better direction of movement is determined. The main problems with
edge detection methods include slowness (frequent stopping is required),
high sensitivity to sensor misreadings, and strong dependence on sensor
direction. [11]
2.2.2.3
Dynamic Window
The dynamic window approach is somewhat similar to VFH; however, it is
computationally faster because the search space is dramatically reduced by
only including velocities that are attainable within a short period of time
in the search. [10] Similar to VFH, dynamic window offers smooth obstacle
avoidance trajectories without requiring the robot to stop, takes robot
geometry into account and is fairly robust to sensor noise. However, this
approach is significantly more difficult to implement than VFH and requires
very accurate system modeling.
As shown in Table 4, the VFH algorithm is the best method of performing
obstacle avoidance. It offers very smooth trajectories without requiring the
robot to stop, the generated local paths are optimal, it is computationally
efficient and the complexity of implementation is reasonable for this
project.
Criteria \ Method | Edge Detection | VFH | DWA
Complexity | 0.8 | 0.7 | 0.2
Obs. Avoid. Smoothness (without stopping) | 0.1 | 0.7 | 0.7
Local Path Optimality | 0.3 | 0.6 | 0.4
Efficiency | 0.2 | 0.3 | 0.5
Total | 1.4 | 2.3 | 1.8

Table 4 – Comparison of local path planning design alternatives
As such, the VFH algorithm will be incorporated into this project and
discussed in more detail in the following sections.
2.3
Fire Detection System
As described in the problem
formulation section, the fire detection system must be designed such that it
uses advanced fire detection technology to reduce false alarms. The problem
with choosing a single sensor for fire detection is that there is no single
sensor that is capable of detecting all types of fire and smoke well and
consistently. Conventional point smoke and fire detectors such as ionization
and photoelectric detectors signal alarms because of a single circuit being
closed through the chemical and optical interference of smoke particles. It
is common for these detectors to throw false alarms from everyday activities
such as cooking, smoking, and even due to the fumes of some cleaning
solvents. Therefore they perform differently from one environment to another
due to the addition of potential agitators. Furthermore, these devices are
distance-limited and are rendered ineffective in larger open areas [3].
Several competing technologies were
researched and it was found that the higher-end fire detection systems used a
combination of ultra-violet and infrared sensors and filters to identify
fires. On the basis that flames generate an immense amount of radiation at
specific frequencies in the ultra-violet and infrared region, the sensors are
used to identify when many of the target frequencies are being given off to
signal a fire alarm. These systems often claim significantly reduced false
alarm rates due to their inherent redundancy of using multiple sensors to
generate a “smart” alarm. Specialized electronics in these systems further
process sensor readings for flicker frequency, red vs. blue comparisons, and
energy per unit time comparisons to further improve the detection algorithm.
The flicker frequency is defined as the rate at which a flame is known to
oscillate in perceivable visibility, and is approximated as 10 Hz from
experiment [3]. One major weakness of these types of sensors is their
sensitivity to heat, and the proximity of heat sources such as furnaces and
engines can trigger false alarms.
NetSafety’s Triple IR sensor uses
three infrared sensors to detect three particular frequencies which correlate
to the most common gases in normal-combustible fires. This device also
incorporates many additional features to significantly reduce false alarms
such as advanced signal processing, flicker frequency analysis, and automatic
digital zoom. This system costs $3500 and is the most expensive stand-alone
fire detection unit studied for this project. Omniguard produces a similar
unit called the Omniguard 760 which analyzes five spectral bands in the
infrared region and claims similar performance specifications. This system
has a cost of $2380 but does not use as many digital electronics for added
filtering.
Ultraviolet radiation detection
techniques have been discontinued from mainstream fire-detection practice as
they are highly sensitive to bright light from natural sources (i.e.
sunlight) and industrial practices such as welding. However, they can be used
to detect the presence of erroneous readings if used in conjunction with a
suite of fire detection sensors. For this purpose, a low-cost ultraviolet
sensor was researched. One bare ultraviolet sensing package is the Hamamatsu
UVTron Sensor which retails for approximately $80 CAD. It has a peak spectral
response for a narrow band of ultraviolet radiation (185nm-260nm) and is
insensitive to visible light.
There are also several commercial
vision systems designed for fire and smoke detection. They use wavelet domain
analysis techniques to identify flames and smoke in a camera’s field of view.
However, the costs of these systems are highly restrictive for the purposes
of this project as they go well above $10,000. A lower cost alternative is to
use open-source vision software such as OpenCV and an off-the-shelf video
camera. Several resources are available to assist in the development of fire
detection algorithms through vision and can be leveraged during
implementation in this project.
2.3.1 Assessment
By grouping these sensors into various packages, a decision matrix is made
and is used to evaluate each fire detection system based on the criteria
identified in the Problem Formulation section. These packages are formed in
order to identify whether the performance of the more expensive sensors
justifies their cost.
Package 1               | Package 2               | Package 3
NetSafety Triple IR     | Omniguard IR 760        | USB Night Vision Camera
USB Night Vision Camera | USB Night Vision Camera | Combo Smoke Alarm
Combo Smoke Alarm       | Combo Smoke Alarm       | Hamamatsu UV Sensor
Hamamatsu UV Sensor     | Hamamatsu UV Sensor     |
Table 5 – Sensor Package Alternatives
Criteria (Weight %)         | Weight  | Package 1 | Package 2 | Package 3
Cost (20%)
    Cost                    | 20.00%  | 2.64      | 4.88      | 9.64
Sensor Quality (35%)
    False Alarm Immunity    | 10.00%  | 8         | 6         | 5
    Performance             | 15.00%  | 9         | 7         | 4
    Field of View           | 10.00%  | 8         | 8         | 5
Ease of Integration (30%)
    Power source            | 5.00%   | 6         | 6         | 9
    Signal conditioning     | 10.00%  | 7         | 7         | 9
    Programming             | 15.00%  | 6         | 6         | 2
Robustness (15%)
    Fireproof               | 7.50%   | 9         | 4         | 1
    Operation in Smoke      | 7.50%   | 5         | 5         | 7
Total                       | 100.00% | 6.428     | 6.001     | 5.778
Table 6 – Sensor Package Decision Matrix
Based on the evaluation carried out
in the form of a decision matrix (Table 6), Package 1 is the best
option for this design. The performance of the higher-end triple IR sensor
did, in fact, justify its cost for the design criteria defined by the problem
formulation.
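The weighted totals in Table 6 follow from a simple weighted-sum calculation. The minimal Python sketch below (with criterion names taken from the table) recomputes each package's total from its criterion scores and weights:

```python
# Criterion weights from Table 6 (sum to 100%).
WEIGHTS = {"Cost": 0.20, "False Alarm Immunity": 0.10, "Performance": 0.15,
           "Field of View": 0.10, "Power source": 0.05,
           "Signal conditioning": 0.10, "Programming": 0.15,
           "Fireproof": 0.075, "Operation in Smoke": 0.075}

# Per-package criterion scores from Table 6.
SCORES = {
    "Package 1": {"Cost": 2.64, "False Alarm Immunity": 8, "Performance": 9,
                  "Field of View": 8, "Power source": 6,
                  "Signal conditioning": 7, "Programming": 6,
                  "Fireproof": 9, "Operation in Smoke": 5},
    "Package 2": {"Cost": 4.88, "False Alarm Immunity": 6, "Performance": 7,
                  "Field of View": 8, "Power source": 6,
                  "Signal conditioning": 7, "Programming": 6,
                  "Fireproof": 4, "Operation in Smoke": 5},
    "Package 3": {"Cost": 9.64, "False Alarm Immunity": 5, "Performance": 4,
                  "Field of View": 5, "Power source": 9,
                  "Signal conditioning": 9, "Programming": 2,
                  "Fireproof": 1, "Operation in Smoke": 7},
}

def weighted_total(scores):
    """Weighted sum of criterion scores, as in the Total row of Table 6."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())
```

Running this reproduces the totals of 6.428, 6.001 and 5.778 for Packages 1 through 3.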
3 Design
3.1 914 PC-BOT
The 914 PC-BOT
(shown in Figure 1) from Whitebox Robotics (hereinafter referred to as the PC-BOT)
has been designed predominantly to serve as a development platform for a
variety of applications such as R&D, academic research and proof-of-concept
projects, and offers a great deal of design versatility. The robot consists of
standard PC hardware which can be easily accessed, programmed,
configured and modified. It offers numerous enabling technologies such as
802.11g wireless communication (see Appendix A for detailed specifications).
Many operating systems such as Windows, UNIX and several flavors of Linux
(including Ubuntu) can be deployed on the PC-BOT. In order to utilize several
open source robotics and sensor interfacing projects such as Player/Stage,
Ubuntu Linux was chosen as the main platform for this project [7].
Figure 1 – Whitebox Robotics 914
PC-BOT with and without body cover [7]
3.2 Player/Stage Project
Player/Stage is an open source project initiated by several academics in the
field of robotics. With the help of the open source community, it has now
grown to encompass many other areas such as sensor and actuator interfacing,
wireless control, 2D and 3D simulation environments, camera visualization and
more. It also offers very powerful multi-robot navigation, localization
and path-planning design tools. Player/Stage is compatible with a large
number of sensors including sonar, laser, radar, infrared and pan-tilt-zoom
cameras. Finally, it provides a capable library of reusable
code and serves as a platform to develop robotic systems more quickly and
efficiently.
Based on the widespread usage of
Player/Stage in the field of robotics, the potentially enormous benefits of
the above capabilities, the lack of any other viable competitors and the fact
that it is free, it was chosen as the underlying platform to be used for this
project.
3.3 Localization
3.3.1 Design of Manual Placement Algorithm
The known Access Points are those which have been entered into
the robot’s software. They can be identified by their ‘Service Set
Identifiers’ (SSIDs), which are broadcast by all powered-up Access Points;
by recognizing the known SSIDs, the robots can recognize the known Access
Points. Therefore, to collect the signal strength data, a program will be
created which records 100 samples of signal strength for each detected
Wi-fi Access Point. The structure of the program is shown below:
Program Record_Data
{
    Record_Position();
    for each (Wi-fi Access Point in range)
    {
        Record_AP_signal_strength(100);
    }
    for each (LASER sensor)
    {
        Record_LASER_Sensor_Value(100);
    }
}
The Record_Position() function
will ask the user to input the current position. This is possible because
each robot can be connected to a monitor and a keyboard and used
as a computer, so data can easily be written into the robot.
The Record_AP_signal_strength function will use
Linux’s “wireless-tools” package; specifically, the “iwconfig” utility will be
used to obtain signal strengths.
The Record_LASER_Sensor_Value function will
use the Player/Stage project’s sensor data acquisition service for the LASER
sensor.
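As an illustration of how the signal strengths could be pulled from iwconfig output, the following Python sketch extracts the ESSID and signal level fields. The exact output format varies between wireless drivers and wireless-tools versions, so the sample text and regular expressions here are assumptions for illustration:

```python
import re

def parse_iwconfig(output):
    """Extract (ESSID, signal level in dBm) pairs from iwconfig-style
    output. Assumes the common 'ESSID:"..."' and 'Signal level=-NN dBm'
    fields; real driver output may differ."""
    results = []
    for block in output.split("\n\n"):        # one block per interface
        essid = re.search(r'ESSID:"([^"]*)"', block)
        level = re.search(r'Signal level[=:]\s*(-?\d+)\s*dBm', block)
        if essid and level:
            results.append((essid.group(1), int(level.group(1))))
    return results
```

For example, `parse_iwconfig` applied to two interface blocks reporting "ShipAP1" at -52 dBm and "ShipAP2" at -67 dBm (hypothetical SSIDs) returns those two pairs.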
In a graphical manner, the different mappings and
translations can be put together as in Figure 2 below. Note that the
following graphs are based on fictional data for illustration purposes.
Figure 2 - Mapping between
different layers
In the figure above, the bottommost layer contains the ground coordinates;
the grid marks the locations where values will be recorded. Above that layer
is the Wi-fi Access Point #1 layer, which contains the signal strength data
from Wi-fi Access Point #1; similarly, there is a layer for Wi-fi Access
Point #2. These layers, represented in data form, will be the final product
of the Initial Training Phase.
Finally, the data structure that stores all the data is designed as follows:
structure Mapping_Data
{
    Ground_coordinates[5]
    AP_Signal_Strengths[4][100] : int
    AP_SSID[4] : char
    AP_Channel[4] : int
    LASER_vals[100] : int
}
In the above data structure, Ground_coordinates
records the x and y coordinates of that location.
AP_Signal_Strengths holds the 100 samples for each of the 4 Wi-fi Access
Points. AP_SSID holds the SSID names of those Wi-fi Access Points. AP_Channel
holds the channel on which the signal strength was recorded for each
Wi-fi Access Point. Finally, LASER_vals holds the 100 samples
recorded from the LASER sensor.
Finally, there will be a 2-dimensional array of Mapping_Data
structures to cover the entire area:
structure Map_Base_Data
{
Mapping_Data[MAX_X_COORDINATES][MAX_Y_COORDINATES]
}
This structure holds all the information that the Monte Carlo algorithm will
use to deliver position data.
3.3.2 Design of Interface to Monte Carlo Location Determination Algorithm
Since the Monte Carlo algorithm is already provided by the Player/Stage
project, data will be supplied to it from the Map_Base_Data structure above.
Position data can thus easily be obtained after the initial maps have been
created.
Regarding design feasibility: the signal strength can easily be recorded
via “iwconfig” on the Linux operating system, LASER values can be recorded
by interfacing with the LASER sensor through Player/Stage, and the Monte
Carlo algorithm itself is available from Player/Stage.
As a design review, and as discussed
earlier, the design will meet the objective of an accuracy within 1.0m, with
minimal processing required while the robots are detecting fires, i.e. in
‘online’ mode. This is because the Manual Placement data recordings will be
done in ‘offline’ mode before the robots start detecting fires, and only the
Location Determination will be carried out in ‘online’ mode.
To summarize the design, the Manual
Placement technique will be used to record data, and the Monte Carlo
algorithm to determine location. The expected accuracy is within
0.5m, and processing will only be required by the Monte Carlo
algorithm when the robots need position information.
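As a rough illustration of how the recorded fingerprints support location determination, the following Python sketch weights every trained grid cell by the Gaussian likelihood of an observed signal-strength vector and returns the likelihood-weighted mean position. This is a simplified, deterministic stand-in for the Player/Stage Monte Carlo algorithm; the access point names, noise model, and sigma value are illustrative assumptions:

```python
import math

def estimate_position(observation, fingerprint_map, sigma=4.0):
    """Weight each trained grid cell by the Gaussian likelihood of the
    observed RSSI vector against the cell's stored mean signal strengths,
    then return the likelihood-weighted mean position."""
    total_w = wx = wy = 0.0
    for (x, y), means in fingerprint_map.items():
        # Log-likelihood assuming independent Gaussian noise per access point.
        ll = sum(-((observation[ssid] - mu) ** 2) / (2 * sigma ** 2)
                 for ssid, mu in means.items())
        w = math.exp(ll)
        total_w += w
        wx += w * x
        wy += w * y
    return wx / total_w, wy / total_w
```

With a synthetic 5x5 m map in which each access point's mean strength varies linearly with position, an observation taken at a trained cell is localized to well within the 1 m grid spacing.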
3.4 Navigation
3.4.1 Global Path Planning – Wavefront
Wavefront is a relatively simple yet very powerful global path planning
technique. As mentioned earlier, it uses the unique approach of starting from
the goal position and working towards the start position in order to
determine the optimal path. Initially, the given map of the environment is
discretized into a grid of uniformly sized squares. All obstacles in
the environment are identified and marked as occupied accordingly.
Each square is then assigned a cost value according to its position relative
to the goal square. Squares closer to the goal receive lower values, and
values increase with distance from the goal, so that the grid consists of
many linearly strengthening virtual force fields around the goal square (as
shown in Figure 3). The optimal path from the start node to the goal node is
the lowest cost path, or the path of least resistance. Additional logic can be
added to this algorithm in order to make it more intelligent. For example,
increasing the cost values of squares near obstacles will increase path
smoothness and reduce the chance of undesired collisions.
Figure 3 – Wavefront grid assignments
and linearly strengthening virtual force fields [9]
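The grid cost assignment described above can be sketched as a breadth-first sweep outward from the goal, followed by descent over the resulting costs. The minimal Python illustration below assumes a 4-connected grid with unit costs and no extra padding around obstacles:

```python
from collections import deque

MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))  # 4-connected neighbors

def wavefront(grid, goal):
    """Assign each free cell a cost equal to its grid distance from the
    goal, sweeping outward from the goal breadth-first. Occupied cells
    (grid value 1) are skipped and keep a cost of None."""
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] is None):
                cost[nr][nc] = cost[r][c] + 1
                queue.append((nr, nc))
    return cost

def descend(cost, start):
    """Follow strictly decreasing costs from start down to the goal:
    the lowest-cost path, or path of least resistance."""
    path = [start]
    r, c = start
    while cost[r][c] != 0:
        r, c = min(((r + dr, c + dc) for dr, dc in MOVES
                    if 0 <= r + dr < len(cost) and 0 <= c + dc < len(cost[0])
                    and cost[r + dr][c + dc] is not None),
                   key=lambda p: cost[p[0]][p[1]])
        path.append((r, c))
    return path
```

On a 3x3 grid with a single obstacle in the center, the planner routes around the obstacle and reaches the goal in the minimum number of steps.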
3.4.2 Local Path Planning – VFH
The VFH algorithm has been iteratively improved over many years. The
original approach was called the Virtual Force Field (VFF) and was quite possibly
the first algorithm that offered smooth, high-speed trajectories without
requiring the robot to stop [11]. As shown in Figure 4, each cell in the
field of view of the robot (commonly sensed via sonar) applies a
virtual force to the robot.
Figure 4 – Virtual Force Field (VFF)
algorithm performing a data sweep in real-time [11]
Cells which are not associated with
an obstacle or the target have a force of zero. The resultant force R (Ftarget
- Frepulsive) causes a change in the direction and
speed of the robot in order to smoothly move it around obstacles and towards the
target. Although VFF was revolutionary at the time of its proposal, it
suffered from several problems, including operation in narrow hallways (as
shown in Figure 5): the forces applied by either side of the hall would cause
an unstable oscillatory motion which resulted in collisions. The algorithm also
behaved undesirably in other situations, such as when two obstacles
were very close together and directly in front of the goal [11].
Figure 5 – Unstable oscillatory
motion of the robot using VFF in a narrow hallway [11]
The shortcomings of the VFF
algorithm led to its refinement as the VFH algorithm. This refinement
involved the addition of a one-dimensional polar histogram to the existing
two-dimensional Cartesian histogram grid (as shown in Figure 6).
Figure 6 – VFH algorithm utilizing a
one-dimensional polar histogram [11]
This polar histogram creates a
probability distribution for each sector (of angular width α) based on
the density of obstacles and several other factors. This normalization fixes
the majority of the problems observed in the VFF algorithm. Vector forces are
no longer applied in a single line of action; instead, numerous blobs
of varying strengths push/pull the robot towards a general direction.
Additionally, a reduction in the amount of data leads to an increase in
efficiency in comparison to the VFF algorithm. [11]
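The polar histogram idea can be sketched as follows: obstacle cells are binned into angular sectors, weighted by certainty and inversely by distance, and the robot steers toward the admissible sector closest to the target direction. The weighting function and threshold below are illustrative simplifications of the full VFH formulation in [11], not the exact method:

```python
import math

def polar_histogram(cells, robot, bins=36):
    """Build a 1-D polar obstacle density histogram from occupancy-grid
    cells: each (x, y, certainty) cell contributes its squared certainty,
    weighted inversely by distance, to the angular sector it occupies."""
    hist = [0.0] * bins
    width = 360.0 / bins
    for x, y, certainty in cells:
        dx, dy = x - robot[0], y - robot[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            continue  # skip the robot's own cell
        sector = int((math.degrees(math.atan2(dy, dx)) % 360.0) // width)
        hist[sector] += certainty ** 2 / dist  # nearer obstacles weigh more
    return hist

def best_sector(hist, target_sector, threshold=1.0):
    """Steer toward the sector with obstacle density below the threshold
    that is angularly closest to the target direction."""
    bins = len(hist)
    candidates = [i for i, h in enumerate(hist) if h < threshold]
    return min(candidates,
               key=lambda i: min((i - target_sector) % bins,
                                 (target_sector - i) % bins))
```

With obstacle cells clustered directly ahead (sector 0) and the target also at sector 0, the selected heading shifts to an adjacent free sector rather than driving into the obstacle.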
Although the wavefront and VFH
algorithms can each reach the goal from the start position
individually, this does not guarantee path optimality. VFH
guarantees local but not global path optimality, while wavefront does not
perform real-time obstacle avoidance. This design will involve the
implementation of a wrapper program in the central control system which will
systematically assign goal positions to the wavefront driver in order
to cover the given area in its entirety. The wavefront driver finds the
optimal path from the robot’s current position to the given goal. It then
forwards intermediate goal positions along the optimal path to the VFH driver
in a sequential manner. VFH will in turn perform real-time obstacle avoidance
and drive the robot to the goal positions supplied by wavefront (as shown in
Figure 7).
Figure 7 – Example of the proposed
system in action
The combination of the wavefront and VFH algorithms will offer a highly
optimized hybrid methodology which will provide efficient and rapid
navigation of complex environments while smoothly avoiding obstacles and
guaranteeing local and global path optimality. This solution should meet,
and in fact exceed, all required navigation objectives mentioned in the
previous section.
3.5 Fire Detection System
The proposed fire detection system is made up of the four sensors detailed in
the Proposed Solution section. These sensors are to be mounted on top of the
PC-BOTs and aimed in the forward path direction of the robots. The minimum
viewing angle of all of the sensors, with the exception of the smoke
alarm, is 90°. If these sensors are placed adjacent to one another, a
problem arises with the fields of view of the sensors not lining up
completely. This may cause one sensor to trigger a positive reading before
the others, and may cause a missed positive due to failed redundancy. This
is illustrated in the following figure.
Figure 8 – Alignment issue with sensor field of view
Since the sensors, with the exception of the night vision web camera, cannot
distinguish the region in which a measurement was taken, the robot will have
to stop and pivot in place in order to verify that any readings in the
unreliable region can be confirmed to a certain degree by the remaining
sensors. This will reduce the efficiency of the fire search method, but is
only likely to occur when there is either a large presence of false alarm
stimuli in the same area or a fire. An alternative is to stack the sensors
vertically; however, this creates the need for a complex custom-built housing
to prevent damage to the sensor components when mounting them on top of one
another. Furthermore, it only shifts the same problem to the vertical scale,
as the vertical fields of view would no longer line up. For this reason, the
sensors are to be mounted adjacently, and the robot is designed to pivot in
the presence of only a single positive fire signal.
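The pivot-and-confirm behaviour described above can be sketched as a simple voting rule. The function below is a hypothetical illustration; the sensor names and the two-sensor agreement threshold are assumptions for the sketch, not part of the final design:

```python
def confirm_alarm(readings, pivot_and_resample):
    """Raise an alarm only when at least two sensors agree. A lone
    positive triggers a pivot toward the reading (so the remaining
    sensors cover the unreliable region) followed by a re-sample."""
    positives = sum(readings.values())
    if positives >= 2:
        return True          # redundant confirmation, alarm immediately
    if positives == 1:
        readings = pivot_and_resample()  # re-aim sensors, read again
        return sum(readings.values()) >= 2
    return False
```

In this sketch a lone infrared positive only becomes an alarm if at least one other sensor confirms it after the pivot; otherwise it is discarded as a likely false alarm.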
The integration procedure of
these sensors into the overall robot system is illustrated in the following
figure. The analog outputs of the UVTron and NetSafety Triple IR are connected
directly to the expansion I/O card of the PC-BOT in order to convert the
signals to computer-readable digital signals. The smoke alarm is also
connected to the I/O card after reconfiguring the alarm circuit to send the
alarm signal to the I/O card rather than the speaker. The webcam is connected
directly to the PC-BOT through a USB 2.0 connection.
Figure 9 – Fire Detection Sensor
Integration with Robot System
All of these sensors, with the
exception of the webcam, require basic high-level drivers to be built using
the Player API. Through these drivers, the sensor measurements can be read by
the program running in the Player environment which controls each robot.
Along with the drivers, an entire vision system needs to be written using the
open source OpenCV vision libraries to provide added redundancy in fire and
smoke detection readings.
Several methods exist for the
detection of fire through live video capture. A few of these are explained
in [3], where emphasis is placed on wavelet domain analysis of moving object
contours as the most effective technique for identifying fires with a
minimum number of false alarms. Wavelets are high-pass filtered measurements
of pixel colours. In the wavelet domain, a high-pass filter around 10 Hz
allows randomly varying sources such as fire to pass through due to their
inherent flicker frequency. However, this frequency is not constant for all
fires; it changes with fuel source and environmental conditions, and varies
randomly even for a single fire. This is why a Markov model is used to
analyze the contours of fire-coloured pixels that have already passed through
the filtered wavelet domain. The random flickering of the fire contours is
used to identify fires through video.
Figure 10 – Fire-coloured
object is identified using wavelet domain analysis of moving
contours [3]
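As a toy illustration of why temporal flicker helps separate flames from static fire-coloured objects, the sketch below scores a pixel intensity sequence by the energy of its frame-to-frame differences, a crude high-pass measure. It is not the wavelet/Markov analysis of [3], merely a hint of the underlying idea:

```python
def flicker_energy(samples):
    """Mean energy of the first difference of a pixel intensity sequence.
    The difference filter suppresses static or slowly varying sources but
    passes rapid (~10 Hz) flame flicker."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return sum(d * d for d in diffs) / len(diffs)
```

A synthetic 10 Hz flicker sampled at 30 frames per second produces a large energy score, while a static fire-coloured region scores zero.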
Based on the design objectives
and constraints, the overall fire detection system does indeed meet all of
the requirements. In terms of accuracy and the minimization of false alarms,
four sensors are used to build redundancy checks to ensure that only
fires set off alarms. Furthermore, the inclusion of high-end infrared
detection technology and recent vision algorithms makes this system far less
prone to false alarms. The actual false alarm rate of this system has yet to
be determined. Lastly, the cost objective is also met, as the sensor system
does not cost more than $4000.
Since the fires are constrained to class A fires, the infrared sensor alone
is more than capable of detecting them. Furthermore, the use of a vision
system for added smoke detection redundancy improves the ability of the fire
detection system to pick up desk fires and garbage bin fires, as required by
the design constraints. Lastly, the addition of a UV sensor allows for the
detection of erroneous false alarm stimuli such as welding, sunlight, etc.
4 Schedule and Budget
The chart below shows the scheduling of the project, which is
currently meeting its assigned deadlines (up until task H). The project’s
manufacturing schedule is highlighted in yellow, showing the development and
implementation of the navigation system and obstacle avoidance from weeks 19
to 22. This is done in parallel with the development of possible swarm
communication between the robots and WLAN localization mapping. In week 23,
the sensor packages will be mounted on the robots.
Figure 11 - Schedule for
manufacturing, commissioning, and testing
Task | Task Description                                              | Start (week) | End (week) | Duration (weeks) | Prerequisites
A    | Brainstorming and researching                                 | 1  | 3  | 2 | -
B    | Preliminary Design Presentation                               | 2  | 3  | 1 | A
C    | Fire detection research                                       | 3  | 5  | 2 | A, B
D    | Sensors assessment and selection                              | 3  | 5  | 2 | C
E    | Swarm navigation and WLAN communication research              | 4  | 6  | 2 | B
F    | Design of sensor integration package and navigation algorithm | 5  | 10 | 5 | D
G    | Sensors acquisition                                           | 5  | 13 | 8 | F
H    | Final design presentation and report                          | 10 | 13 | 3 | E, F
I    | Exam weeks and holidays                                       | 13 | 18 | 5 | -
J    | Testing and analysis of PC bots                               | 18 | 19 | 1 | -
K    | Calibration of communication protocols                        | 18 | 20 | 2 | J
L    | Sensor calibration and integration                            | 18 | 21 | 3 | G
M    | Development of obstacle avoidance in navigation               | 19 | 22 | 3 | K
N    | Development of swarm communication                            | 20 | 22 | 2 | K
O    | Development of WLAN localization mapping                      | 20 | 23 | 3 | K
P    | Testing and debugging                                         | 23 | 24 | 1 | M, N, O
Q    | Testing of sensor package for fire detection                  | 21 | 22 | 1 | L
R    | Recalibration of sensors                                      | 22 | 23 | 1 | Q
S    | Mounting of sensors on robots                                 | 23 | 23 | 0 | R
T    | Integration of sensor package with robots                     | 23 | 24 | 1 | S, P
U    | Initial test runs of robots                                   | 23 | 24 | 1 | T
V    | Debugging and optimization                                    | 24 | 26 | 2 | U
W    | Final test runs                                               | 25 | 26 | 1 | V
X    | Design symposium                                              | 27 | 27 | 0 | W
In terms of commissioning, the only components are the tasks highlighted
in green: integration of the sensor packages with the robotic system and
initial test runs of the robots. Testing is broken down into several stages.
First, the robots will be tested in week 18 to make sure they are all up and
running and meeting the required expectations. The next stage of testing
comes in weeks 21-24, with the testing and debugging of the navigation and
localization systems as well as testing of the sensor packages for fire
detection. The last component of testing will be done in weeks 24-26, where
the robots will be debugged for final test runs and optimization.
4.1 Budget
The proposed solution consists of a NetSafety Triple IR sensor for flame
detection, a Hamamatsu UVTron sensor to help identify false sources of fires,
a combination photoelectric/ionization smoke detector, and a USB night vision
camera for fire and smoke detection. The costs are as follows:
Sensor                                  | Price ($)
NetSafety Triple IR                     | 3500
Hamamatsu UVTron                        | 80
Photoelectric/ionization smoke detector | 50
Night vision camera                     | 50
Table 7 – Overall system cost breakdown
As stated earlier, the objective was to keep the cost under $5000 per
detection package. The expected total of $3680 meets this requirement.
5 Conclusions
A multi-sensor package has been selected for the implementation of the fire
detection system. This sensor system satisfies all cost and performance
objectives outlined during problem formulation. False alarm rates are yet to
be confirmed through testing; however, there is a fair amount of certainty
that the overall system will perform adequately due to the built-in
redundancy of the sensor package. For the Initial Training Phase, the Manual
Placement technique will be used; to determine location, the Monte Carlo
algorithm will be used. Overall, the localization system will only use
processing power when determining the location, and will time-share the
processor with the fire-detection process. The system will be able to give
position with an accuracy of within 0.5m. A combination of two path planners
(one global and one local) has been used to obtain a highly optimized hybrid
methodology which will provide efficient and rapid navigation of complex
environments while smoothly avoiding obstacles and guaranteeing local and
global path optimality. The proposed solution meets and/or exceeds all
required objectives mentioned.
6 Recommendations
1. It is recommended that the cost of the sensor package be reduced even
further so that it is easier to fund the purchase of three sets of these
sensor packages.
2. It is recommended that a custom multi-IR sensor be designed using the
basic raw components found in the higher-end devices. The main challenge is
that sourcing infrared sensors that operate in the 1-5µm range is difficult.
3. To increase the accuracy of the localization system, it is recommended
that the initial training be performed at intervals of 0.5m. The new data can
be used by the Monte Carlo algorithm, and the system could thus achieve an
accuracy better than 0.5m.
4. It is further recommended that a modular fire suppression system be
designed following the successful implementation of this autonomous robot
fire detection system.
7 References
[1] Mount, Mike. “U.S. Navy boots captain after fire on carrier,” CNN News,
7/30/2008. <http://www.cnn.com/2008/US/07/30/navy.captain.fired/index.html>
[2] S. Woodruff, T. Conway, C. Edwards, and J. Elder. “The United States navy
attracts young women who smoke,” Tob Control. 1999 June; 8(2): 222–223.
[3] Toreyin, B.U.; Cetin, A.E., "Online Detection of Fire in
Video," Computer Vision and Pattern Recognition, 2007. CVPR '07.
IEEE Conference on , vol., no., pp.1-5, 17-22 June 2007.
[4] F. Duvallet and A. D. Tews, “WiFi Position Estimation in Industrial
Environments Using Gaussian Processes,” 2008 IEEE/RSJ International
Conference on Intelligent Robots and Systems, pp. 2216–2221, September 2008.
Accessed: Nov. 16, 2008.
[5] S. Chantanetral, M. Sangworasilp, and P. Phasukkit, “WLAN Location
Determination Systems,” Faculty of Engineering, Computer Research and Service
Center (CRSC), King Mongkut's Institute of Technology Ladkrabang (KMITL),
Bangkok, Thailand. Accessed: Nov. 16, 2008.
[6] A. Howard, S. Siddiqi and G. S. Sukhatme, “Localization using WiFi
Signal Strength,” http://robotics.usc.edu/~ahoward/projects_wifi.php,
Accessed: Nov. 16, 2008.
[7] “914 PC-BOT Robotics Development Platform – Linux Version”,
Whitebox Robotics, Inc, pp. 2, 2008.
[8] B. Gerkey, “Path Planning vs. Obstacle Avoidance,” Stanford University,
CS225B Lecture Slides, Oct. 2006.
[9] L. C. Wang, L. S. Yong, M. H. Ang, “Hybrid of Global Path Planning and
Local Navigation implemented on a Mobile Robot in Indoor Environment,” Gintic
Institute of Manufacturing Technology, National University of Singapore, pp.
1-3, Singapore, 2001.
[10] D. Fox, W. Burgard, S. Thrun, “The Dynamic Window Approach to Collision
Avoidance,” University of Bonn, pp. 2-6, Germany, 1996.
[11] J. Borenstein, Y. Koren, “The Vector Field Histogram - Fast Obstacle
Avoidance for Mobile Robots,” IEEE Journal of Robotics and Automation,
Vol 7, No 3, pp. 278-288, June 1991.
The following specifications have been provided by Whitebox Robotics, Inc. [1]
· Height: 53.4 cm; Weight: 25 kg
· Payload: up to 5 kg
· Maximum climb slope: 8 degrees
· Differential drive train with independent front suspension, patented self-cleaning roller ball casters and 2 DC stepper motors
· Torso unit containing: 2 foldable side bays (power supply housing/bay 1, main system board/bay 2), 8 x 5.25" bays (5 available to user, 1 used for sensors, 1 used for a 5.25” speaker and 1 used for a Slim DVD/CD-ROM and SATA HDD)
· USB Machine Management Module (M3) – motor controller and I/O board interface
· One I/O board with 8 analog inputs for IR or other sensors, 8 digital outputs, 8 digital inputs and 2 USB ports sourced from the Mini-ITX
· Two M2-ATX power supplies with automatic battery monitoring and auto-shutoff
· Head assembly containing one web camera