Teams of small autonomous UAVs can be used to map and explore unknown environments that are inaccessible to human operators in humanitarian assistance and disaster relief (HA/DR) efforts. In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper presents a hardware platform and software architecture that enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.
Currently fielded small unmanned ground vehicles (SUGVs) are operated via teleoperation. This method of operation
requires a high level of operator involvement within, or near, the robot's line of sight. As advances are made in
autonomy algorithms, capabilities such as automated mapping can be developed to allow SUGVs to be used to provide
situational awareness with an increased standoff distance while simultaneously reducing operator involvement.
In order to realize these goals, it is paramount that the data produced by the robot is not only accurate but also presented in
an intuitive manner to the robot operator. The focus of this paper is how to effectively present map data produced by a
SUGV in order to drive the design of a future user interface. The effectiveness of several 2D and 3D mapping
capabilities was evaluated by presenting a collection of pre-recorded data sets of a SUGV mapping a building in an
urban environment to a user panel of Soldiers. The data sets were presented to each Soldier in several different formats
to evaluate multiple factors, including update frequency and presentation style. Once all of the data sets were presented,
a survey was administered. The questions in the survey were designed to gauge the overall usefulness of the mapping
algorithm presentations as an information-generating tool. This paper presents the development of this test protocol
along with the results of the survey.
Autonomous systems operating in militarily relevant environments are valuable assets due to the increased situational
awareness they provide to the Warfighter. To further advance the current state of these systems, a collaborative
experiment was conducted as part of the Safe Operations of Unmanned Systems for Reconnaissance in Complex
Environments (SOURCE) Army Technology Objective (ATO). We present the findings from this large-scale experiment
which spanned several research areas, including 3D mapping and exploration, communications maintenance, and visual
intelligence.
For 3D mapping and exploration, we evaluated loop closure using Iterative Closest Point (ICP). To improve current
communications systems, the limitations of an existing mesh network were analyzed. Also, camera data from a
Microsoft Kinect was used to test autonomous stairway detection and modeling algorithms. This paper will detail the
experiment procedure and the preliminary results for each of these tests.
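For context, the sketch below shows a minimal point-to-point ICP alignment in NumPy/SciPy, illustrating the scan-registration step that underlies ICP-based loop closure; the convergence threshold and iteration count are illustrative assumptions, not the values used in the experiment.

```python
# A minimal point-to-point ICP sketch (illustrative; not the experiment's
# implementation). Aligns a source point cloud to a target cloud by
# alternating nearest-neighbor matching with a closed-form rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto points B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, max_iters=50, tol=1e-6):
    """Align `source` (N x 3) to `target` (M x 3); returns the moved source."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(max_iters):
        dist, idx = tree.query(src)            # nearest-neighbor correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:          # mean residual stopped improving
            break
        prev_err = err
    return src
```

In a loop-closure setting, the residual alignment error after convergence would typically be compared against a threshold to decide whether two scans were captured at the same place.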
Efficient and accurate 3D mapping is desirable in disaster recovery as well as urban warfare situations. The
speed with which these maps can be generated is vital to provide situational awareness in these situations. A
team of mobile robots can work together to build maps more quickly. We present an algorithm by which a team
of mobile robots can merge 2D and 3D measurements to build a 3D map, together with experiments performed
at a military test facility.
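As an illustration of the kind of measurement fusion involved, the sketch below lifts a planar laser scan into 3D, transforms clouds from several robots into a shared map frame, and voxel-downsamples the merged result. The relative poses are assumed known (e.g., from inter-robot localization); the function names and voxel size are illustrative, not the paper's algorithm.

```python
# A minimal sketch of merging 2D and 3D measurements into one 3D map
# (illustrative assumptions throughout; not the published algorithm).
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment, z=0.0):
    """Lift a planar laser scan into 3D points in the sensor frame."""
    r = np.asarray(ranges)
    angles = angle_min + angle_increment * np.arange(len(r))
    valid = np.isfinite(r)
    return np.column_stack((r[valid] * np.cos(angles[valid]),
                            r[valid] * np.sin(angles[valid]),
                            np.full(valid.sum(), z)))

def transform(points, R, t):
    """Apply a rigid transform (3x3 R, 3-vector t) to an N x 3 cloud."""
    return points @ R.T + t

def merge_into_map(clouds_with_poses, voxel=0.05):
    """Move each (cloud, R, t) into the map frame; keep one point per voxel."""
    merged = np.vstack([transform(c, R, t) for c, R, t in clouds_with_poses])
    keys = np.floor(merged / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(first)]
```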
Currently, the 3000+ robotic systems fielded in theater are entirely teleoperated. This constant dependence on operator
control introduces several problems, including a large cognitive load on the operator and a limited ability for the operator
to maintain an appropriate level of situational awareness of his surroundings. One way to reduce the dependence on
teleoperation is to develop autonomous behaviors for the robot that lessen the strain on the operator.
We consider mapping and navigation to be fundamental to the development of useful field autonomy for small
unmanned ground vehicles (SUGVs). To this end, we have developed baseline autonomous capabilities for our SUGV
platforms, making use of the open-source Robot Operating System (ROS) software from Willow Garage, Inc. Their
implementations of mapping and navigation are drawn from the most successful published academic algorithms in
robotics.
In this paper, we describe how we bridged our previous work with the Packbot Explorer to incorporate a new processing
payload, new sensors, and the ROS system configured to perform the high-level autonomy tasks of mapping and
waypoint navigation. We document our most successful parameter selection for the ROS navigation software in an
indoor environment and present results of a mapping experiment.
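As a concrete illustration of the waypoint-navigation interface, the rospy sketch below sends a goal to the standard ROS navigation stack (move_base) via actionlib; the frame name and coordinates are illustrative assumptions, and the payload's actual configuration is what the paper documents.

```python
#!/usr/bin/env python
# A minimal rospy sketch of commanding a waypoint through move_base
# (illustrative; frame and coordinates are assumptions).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_waypoint(x, y):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'     # pose in the SLAM map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0    # identity orientation

    client.send_goal(goal)
    client.wait_for_result()                     # block until reached/aborted
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('waypoint_client')
    send_waypoint(3.0, 1.5)
```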
Man-portable robots have been fielded extensively on the battlefield to enhance the mission effectiveness of soldiers in
dangerous conditions. The robots that have been deployed to date have been teleoperated. The development of assistive
behaviors for these robots has the potential to alleviate the cognitive load placed on the robot operator. While full
autonomy is the eventual goal, a range of assistive capabilities such as obstacle detection, obstacle avoidance, and waypoint
navigation can be fielded sooner in a stand-alone fashion. These capabilities increase the level of autonomy on the
robots so that the workload on the soldier can be reduced.
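To make the idea concrete, the sketch below implements a simple reactive obstacle detection/avoidance behavior that steers toward the most open sector of a laser scan; the thresholds and gains are illustrative assumptions, not the fielded OD/OA implementation evaluated in the experiments.

```python
# A minimal reactive OD/OA sketch (illustrative assumptions; not the
# fielded behavior). Picks the most open scan sector and steers toward it.
import numpy as np

def avoid(ranges, angles, stop_dist=0.5, n_sectors=9):
    """Return (forward_speed, turn_rate) from one planar laser scan."""
    r = np.nan_to_num(np.asarray(ranges), nan=0.0)   # treat NaN as blocked
    angles = np.asarray(angles)
    sectors = np.array_split(np.arange(len(r)), n_sectors)
    clearance = np.array([r[s].min() for s in sectors])  # worst case per sector
    centers = np.array([angles[s].mean() for s in sectors])

    if clearance.max() < stop_dist:                  # boxed in: stop
        return 0.0, 0.0
    best = clearance.argmax()                        # most open heading
    turn = 1.5 * centers[best]                       # proportional steering
    speed = 0.3 if clearance[n_sectors // 2] > stop_dist else 0.0
    return speed, turn
```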
The focus of this paper is on the design and execution of a series of scientifically rigorous experiments to quantitatively
assess operator performance when controlling a robot equipped with some of these assistive behaviors. The experiments
helped to determine a baseline for teleoperation and to evaluate the benefit of Obstacle Detection and Obstacle
Avoidance (OD/OA) vs. teleoperation and OD/OA with Open Space Planning (OSP) vs. teleoperation. The results of
these experiments are presented and analyzed in the paper.
Large gains in the automation of human detection and tracking techniques have been made over the past several years.
Several of these techniques have been implemented on larger robotic platforms, in order to increase the situational
awareness provided by the platform. Further integration onto a smaller robotic platform that already has obstacle
detection and avoidance capabilities would allow these algorithms to be utilized in scenarios that are not plausible for
larger platforms, such as entering a building and surveying a room for human occupancy with limited operator
intervention.
However, transitioning these algorithms to a man-portable robot imparts several unique constraints, including limited
power availability, size and weight restrictions, and limited processing capability. Many imaging sensors, processing
hardware, and algorithms fail to adequately address one or more of these constraints.
In this paper, we describe the design of a payload suitable for our chosen man-portable robot, the iRobot Packbot. While
the described payload was built for a Packbot, it was carefully designed in order to be platform agnostic, so that it can be
used on any man-portable robot. Implementations of several existing motion and face detection algorithms that have
been chosen for testing on this payload are also discussed in some detail.
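As one representative example of the class of algorithms discussed, the following OpenCV sketch runs Haar-cascade face detection on a camera stream. The camera index and cascade file are assumptions (the cascade path relies on the bundled data in the opencv-python distribution), and the specific detectors evaluated on the payload are those detailed in the paper.

```python
# A minimal Haar-cascade face-detection sketch (illustrative; camera index
# and cascade choice are assumptions, not the payload's configuration).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)                 # payload camera, index assumed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:            # draw a box around each detection
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```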
Large gains have been made in the automation of moving object detection and tracking. As these technologies continue to mature, the size of the field of regard and the range of tracked objects continue to increase. The use of a pan-tilt-zoom (PTZ) camera enables a surveillance system to observe a nearly 360° field of regard and track objects over a wide range of distances. However, use of a PTZ camera also presents a number of challenges. The first challenge is to determine how to optimally control the pan, tilt, and zoom parameters of the camera. The second challenge is to detect moving objects in imagery whose orientation and spatial resolution may vary on a frame-by-frame basis. This paper does not address the first challenge; we assume that the camera parameters are controlled either by an operator or by an automated control process. We address only the second: detecting moving objects in imagery whose orientation and spatial resolution may vary from frame to frame.
We describe a system for detection and tracking of moving objects using a PTZ camera whose parameters are not under our control. A previously published background subtraction algorithm is extended to handle arbitrary camera rotation and zoom changes. This is accomplished by dynamically learning 360°, multi-resolution, background models of the scene. The background models are represented as mosaics on 3D cubes. Tracking of local scale-invariant distinctive image features allows the determination of the camera parameters and the mapping from the current image to the mosaic cube. We describe the real-time implementation of the system and evaluate its performance on a variety of PTZ camera data.
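The core registration idea can be sketched as follows: because a purely rotating and zooming camera relates views of a static scene by a homography, matching local features between the current frame and a stored background view yields that homography, after which the background can be warped into the current frame and differenced. The simplified single-view sketch below stands in for the paper's full mosaic-cube model; the feature detector and thresholds are assumptions.

```python
# A minimal frame-to-background registration and differencing sketch
# (illustrative; the paper's 360° mosaic-cube model is not reproduced).
import cv2
import numpy as np

def moving_object_mask(frame, background, thresh=30):
    """Register `background` to `frame` via a homography, then difference."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(background, None)
    k2, d2 = orb.detectAndCompute(frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # rotation/zoom model

    h, w = frame.shape[:2]
    warped_bg = cv2.warpPerspective(background, H, (w, h))
    diff = cv2.absdiff(frame, warped_bg)                  # residual = motion
    _, mask = cv2.threshold(cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY),
                            thresh, 255, cv2.THRESH_BINARY)
    return mask
```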