Mapping the interior of buildings is of great interest to military forces operating in an urban battlefield. Through-wall
radars have the potential to map interior room layout, including the location of walls, doors and furniture.
They could provide information on in-wall structure, and detect objects of interest concealed in buildings,
such as persons and arms caches. We propose to provide further context to the end user by fusing the
radar data with LIDAR (Light Detection and Ranging) images of the building exterior.
In this paper, we present our system concept of operation, which involves a vehicle driven along a path
in front of a building of interest. The vehicle is equipped with both radar and LIDAR systems, as well as a
motion compensation unit. We describe our ultra-wideband through-wall L-band radar system, which uses stretch
processing techniques to obtain high range resolution and synthetic aperture radar (SAR) techniques to achieve
good azimuth resolution. We demonstrate its current 2-D capabilities with experimental data, and discuss our
progress in using array processing in elevation to provide a 3-D image. Finally, we show preliminary
data fusion of SAR and LIDAR data.
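As a rough illustration of the stretch-processing step mentioned in this abstract, the following sketch dechirps simulated linear-FM echoes and recovers a range profile with an FFT; every waveform parameter and target range is an invented example value, not a figure from the system described.

import numpy as np

# Illustrative dechirp-on-receive (stretch processing) sketch; every
# parameter below is an assumption for the example, not a system value.
c = 3e8                      # speed of light (m/s)
B, T = 1e9, 1e-3             # chirp bandwidth (Hz) and duration (s)
k = B / T                    # chirp rate (Hz/s)
fs = 2e6                     # sample rate of the dechirped signal (Hz)
t = np.arange(0.0, T, 1.0 / fs)

targets = [5.0, 8.5, 12.0]   # hypothetical target ranges (m)
ref = np.exp(1j * np.pi * k * t**2)                    # reference chirp
rx = sum(np.exp(1j * np.pi * k * (t - 2 * R / c)**2) for R in targets)
beat = np.conj(rx) * ref     # beat frequency f_b = 2 k R / c per target

# An FFT of the beat signal is the range profile; bin spacing c/(2B) = 15 cm.
spec = np.abs(np.fft.fft(beat * np.hanning(t.size)))
freqs = np.fft.fftfreq(t.size, 1.0 / fs)
pos = freqs > 0
r = freqs[pos] * c / (2 * k)                           # beat freq -> range
print("strongest return near %.2f m (true ranges: %s)"
      % (r[np.argmax(spec[pos])], targets))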
To improve airborne search and rescue operations and general aerial surveillance efficiency, new concepts of multiple field-of-view and variable-resolution imagers have been developed. These imagers provide surveillance operators with simultaneous wide-area coverage and high-resolution imaging. In recent years, several optical steering systems have been developed and field tested in operational conditions. This experience led us to develop a novel camera steering system using achromatic Risley prisms. This simple and sturdy steering system allows the field of view of an infrared camera to be moved rapidly and precisely. To improve its performance, a three-dimensional (3D) refraction model was applied to enable fast pointing-direction calibration and correction of distorted images. Image deformations were analyzed, and a fast linear image correction based on a homographic transformation and the 3D refraction model is presented. Experimental results of mosaicking applications with real imaging systems in the 3- to 5-µm and 8- to 12-µm infrared and 850- to 860-nm visible bands demonstrate the improvements achieved with the homographic image-distortion correction and the fast pointing calibration procedure.
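As a hedged illustration of the homographic correction idea, the sketch below applies a 3x3 projective transformation to pixel coordinates; the matrix H is an arbitrary example, whereas the paper derives the correction from the 3D refraction model.

import numpy as np

# Minimal sketch of a homographic (projective) image correction. The matrix
# H is an arbitrary illustrative value; in the paper it would be derived
# from the 3D refraction model of the prism pair.
def warp_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of pixel coordinates."""
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]        # perspective divide

H = np.array([[1.02, 0.01, -3.0],
              [0.00, 0.98,  2.5],
              [1e-5, 2e-5,  1.0]])
corners = np.array([[0, 0], [639, 0], [639, 479], [0, 479]], dtype=float)
print(warp_points(H, corners))                 # corrected corner locations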
The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described
by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating
practical, cost effective, autonomous intelligent systems capable of completing military missions in complex
operating environments." The mobility of ground-based systems operating in urban settings
must increase significantly if robotic technology is to augment human efforts in these roles and environments.
The intelligence required for autonomous systems to operate in complex environments demands advances in
many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation,
and navigation, but the problem of locomotion in complex terrain has largely been ignored. To achieve
its objective, the Autonomous Intelligent Systems Section is pursuing research into intelligent
mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning
algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience.
These algorithms seek to exploit available world representations of the environment and the inherent dexterity of
the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The
primary focus of the paper is to present the intelligent mobility research within the framework of the research
methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future
direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical
research gap. This research will create effective intelligence to improve the mobility of ground-based mobile
systems operating in urban settings to assist the Canadian Forces in their future urban operations.
The use of robots for (semi-)autonomous operations in complex terrain such as urban environments poses difficult mobility, mapping, and perception challenges. To work efficiently, a robot should be provided with sensors and software that let it perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has in recent years developed a compact sensor that combines a wide-baseline stereo camera and a laser scanner with a full 360-degree azimuth and 55-degree elevation field of view, allowing the robot to view and manage overhanging obstacles as well as obstacles at ground level. Sensing in 3D is common, but to navigate and work efficiently in complex terrain, the robot should also perceive, decide, and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance, and efficient frontier-based exploration. This paper describes the volumetric sensor concept and its design features, and presents an overview of the 3D software framework that preserves 3D information through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
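To make the mapping step concrete, here is a minimal sketch of a log-odds occupancy update along one sensor ray; a flat Python dict stands in for the paper's multiresolution octree, and the voxel size and update constants are assumptions.

import numpy as np

# Illustrative log-odds occupancy update along one sensor ray. A flat dict
# keyed by voxel index stands in for the multiresolution octree; the voxel
# size and log-odds increments are assumed values, not the paper's.
RES = 0.10                     # voxel edge length (m)
L_FREE, L_OCC = -0.4, 0.85     # log-odds increments for free / occupied
grid = {}                      # voxel index -> accumulated log-odds

def voxel(p):
    return tuple(np.floor(np.asarray(p) / RES).astype(int))

def integrate_ray(origin, hit):
    """Mark voxels along origin->hit as free and the endpoint as occupied."""
    origin = np.asarray(origin, dtype=float)
    hit = np.asarray(hit, dtype=float)
    n = int(np.linalg.norm(hit - origin) / RES) + 1
    for s in np.linspace(0.0, 1.0, n, endpoint=False):   # free-space samples
        v = voxel(origin + s * (hit - origin))
        grid[v] = grid.get(v, 0.0) + L_FREE
    v = voxel(hit)
    grid[v] = grid.get(v, 0.0) + L_OCC                   # occupied endpoint

integrate_ray([0.0, 0.0, 0.5], [2.0, 1.0, 0.5])          # one laser return
occupied = [v for v, l in grid.items() if l > 0.0]
print(len(grid), "voxels touched;", len(occupied), "likely occupied")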
For an Unmanned Ground Vehicle (UGV) to operate effectively, it must be able to perceive its environment in an accurate, robust, and effective manner. This is done by creating a world representation that encompasses all the perceptual information necessary for the UGV to understand its surroundings. These perceptual needs are a function of the robot's mobility characteristics, the complexity of the environment in which it operates, and the mission with which the UGV has been tasked. Most perceptual systems are designed with a predefined vehicle, environment, and mission complexity in mind. This can lead the robot to fail when it encounters a situation it was not designed for, since its internal representation is insufficient for effective navigation. This paper presents a research framework currently being investigated by Defence R&D Canada (DRDC) that will ultimately relieve robotic vehicles of this problem by allowing the UGV to recognize representational deficiencies and change its perceptual strategy to alleviate them. This will allow the UGV to move in and out of a wide variety of environments, from outdoor rural to indoor urban, at run time without reprogramming. We present current sensor and perception work and outline our future research in this area.
The Infrared Eye project was developed at DRDC Valcartier to improve the efficiency of airborne search and rescue operations. A high-performance opto-mechanical pointing system was developed to allow fast positioning of a high-resolution narrow field of view, used for search and detection, within a lower-resolution wide field of view that optimizes area coverage. This system also enables the use of a step-stare technique, which rapidly builds a large-area-coverage image mosaic by step-staring a narrow-field camera and properly tiling the resulting images. The resulting image mosaic covers the wide field of the current Infrared Eye, but with the high resolution of the narrow field. For the intended application, the camera will be fixed to an airborne platform on a stabilized mount, and image positioning in the mosaic will be calculated using flight data provided by an altimeter, a GPS receiver, and an inertial unit. This paper presents a model of the complete system, a dynamic step-stare strategy that generates the image mosaic, a flight-imaging simulator for strategy testing, and some results obtained with this simulator.
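The following sketch suggests how a step-stare tile might be georeferenced from flight data under a flat-ground pinhole assumption; it is illustrative only, and the sign conventions and pitch sequence are not taken from the paper's model or simulator.

import numpy as np

# Flat-ground pinhole sketch: where does the camera boresight hit the ground
# for a given altitude and pointing? The sign conventions and the step-stare
# pitch sequence below are assumptions for illustration only.
def tile_center_on_ground(alt_m, pitch_rad, roll_rad):
    """Ground offset (x forward, y right) of the boresight intersection."""
    d = np.array([np.sin(pitch_rad),                       # unit boresight,
                  -np.sin(roll_rad) * np.cos(pitch_rad),   # nadir at zero
                  -np.cos(roll_rad) * np.cos(pitch_rad)])  # pitch and roll
    t = -alt_m / d[2]                 # scale factor to reach the ground plane
    return d[:2] * t

for step, pitch_deg in enumerate([-10, -5, 0, 5, 10]):     # stepped boresight
    x, y = tile_center_on_ground(300.0, np.deg2rad(pitch_deg), np.deg2rad(2.0))
    print("tile %d: ground offset (%+7.1f, %+6.1f) m" % (step, x, y))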
As part of the Infrared Eye project, this article describes the design of large-deviation achromatic Risley prism scanning systems operating in the 0.5- to 0.92-μm and 8- to 9.5-μm spectral regions. Designing these systems is challenging due to the large deviation required (0 to 25 degrees), the large spectral bandwidth, and the mechanical constraints imposed by the need to rotate the prisms to any position in 1/30 of a second. A design approach making extensive use of the versatility of optical design software is described. Designs consisting of different pairs of optical materials are shown to illustrate the trade-off between chromatic aberration, mass, and vignetting. Control of chromatic aberration with a reasonable prism shape is obtained over the 8- to 9.5-μm band with zinc sulfide and germanium. The design is more difficult for the 0.5- to 0.92-μm band; trade-offs include using sapphire with Cleartran® over a reduced bandwidth (0.75 to 0.9 μm) or acrylic singlets with the Infrared Eye in active mode (0.85 to 0.86 μm). Non-sequential ray tracing is used to study the effects of fresnelizing one element of the achromat to reduce its mass, and to evaluate detector narcissus in the 8- to 9.5-μm region.
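For orientation, a first-order (thin-prism) sketch of Risley-pair pointing is given below: each prism deviates the beam by roughly (n - 1)·alpha toward its apex azimuth, and the pair's net deviation is approximately the vector sum, spanning zero to 2·delta. The refractive index and apex angle here are assumed values chosen so the maximum deviation matches the 25 degrees quoted above.

import numpy as np

# First-order (thin-prism) model of a Risley pair: each prism deviates the
# beam by delta = (n - 1) * alpha toward its apex azimuth, and the net
# deviation is approximately the vector sum. n and alpha are assumed values
# chosen so that the maximum deviation 2*delta equals 25 degrees.
n, alpha = 1.5, np.deg2rad(25.0)
delta = (n - 1.0) * alpha            # single-prism deviation (rad)

def net_deviation(theta1, theta2):
    """Return (magnitude, azimuth) in radians for prism angles theta1, theta2."""
    v = delta * (np.array([np.cos(theta1), np.sin(theta1)]) +
                 np.array([np.cos(theta2), np.sin(theta2)]))
    return np.hypot(v[0], v[1]), np.arctan2(v[1], v[0])

# Aligned prisms give the 25-degree maximum; opposed prisms cancel to zero.
for t1, t2 in [(0.0, 0.0), (0.0, np.pi), (0.0, np.pi / 2.0)]:
    mag, az = net_deviation(t1, t2)
    print("angles (%5.1f, %5.1f) deg -> deviation %5.2f deg at azimuth %6.1f deg"
          % (np.degrees(t1), np.degrees(t2), np.degrees(mag), np.degrees(az)))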
The Infrared (IR) Eye was developed with support from the National Search-and-Rescue Secretariat (NSS), with the aim of improving the efficiency of airborne search-and-rescue operations. The IR Eye concept is based on the human eye and uses two fields of view simultaneously to optimize area coverage and detection capability. It integrates two cameras: the first, with a wide field of view of 40 degrees, is used for search and detection, while the second, with a narrower field of view of 10 degrees for higher resolution and identification, is mobile within the wide field and slaved to the operator's line of sight by means of an eye-tracking system. The images from both cameras are fused and shown simultaneously on a standard high-resolution CRT display unit, interfaced with the eye-tracking unit in order to optimize the man-machine interface. The system was flight tested using the Advanced Systems Research Aircraft (a Bell 412 helicopter) of the Flight Research Laboratory of the National Research Council of Canada. This paper presents some results of the flight tests, indicates the strengths and deficiencies of the system, and suggests future improvements for an advanced system.
The Infrared Eye is a new concept of surveillance system that mimics human eye behavior to improve the detection of small or low-contrast targets. In search and rescue (SAR) operations, a wide-field-of-view (WFOV) IR camera of approximately 20 degrees is used for target detection and then switched to a narrow field of view (NFOV) of approximately 5 degrees for better target identification. In current SAR systems, both FOVs cannot be used concurrently on the same display. The system presented in this paper fuses the high-sensitivity WFOV image and the high-resolution NFOV image, obtained from two IR cameras, on the same high-resolution display. The movement of the NFOV image within the WFOV image is slaved to the operator's eye movement by an eye-tracking device. The operator's central vision always sees the high-resolution IR image of the scene captured by the NFOV camera, while his peripheral vision is filled by the enhanced-sensitivity (but lower-resolution) image of the WFOV camera. This paper describes the operating principle and implementation of the display, including its interface with the eye-tracking system and the opto-mechanical system used to steer the NFOV camera.
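A minimal sketch of the display-composition step follows, assuming the NFOV tile simply overwrites the WFOV frame at the gaze point; array sizes, the gaze value, and the edge-clamping policy are all assumptions rather than details from the paper.

import numpy as np

# Sketch: overwrite the WFOV frame with the NFOV tile centred on the gaze
# point, clamping so the tile stays on screen. Array sizes, the gaze value,
# and the clamping policy are assumptions, not details from the paper.
def composite(wfov, nfov, gaze_xy):
    H, W = wfov.shape
    h, w = nfov.shape
    cx = int(np.clip(gaze_xy[0], w // 2, W - w // 2))   # keep tile inside
    cy = int(np.clip(gaze_xy[1], h // 2, H - h // 2))
    out = wfov.copy()
    out[cy - h // 2: cy + h // 2, cx - w // 2: cx + w // 2] = nfov
    return out

wfov = np.zeros((480, 640), dtype=np.uint8)     # low-resolution wide field
nfov = np.full((120, 160), 255, np.uint8)       # high-detail narrow field
frame = composite(wfov, nfov, gaze_xy=(600, 30))       # gaze near a corner
print(frame.shape, int(frame[30:40, 520:530].mean()))  # tile clamped inside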