Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multispectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence versus 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery, the typical point cloud becomes a rich 3-D scene that is intuitively clear to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated, fused 3-D scenes from an airborne platform. In addition, because the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera, and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud with the same resolution as the context camera, effectively a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary- and fixed-wing aircraft. We conclude with a discussion of future work.
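The HD interpolation step lends itself to a short illustration. The Python sketch below upsamples a coarse Flash LiDAR range image onto the grid of a boresighted, frame-synchronized context camera using bilinear interpolation; the interpolation scheme, array shapes, and function names are illustrative assumptions, not the TotalSight implementation.

```python
# Illustrative sketch only: lifting a low-resolution Flash LiDAR range image
# to the resolution of a boresighted, frame-synchronized context camera.
# Bilinear interpolation is an assumed scheme, not Ball's proprietary method.
import numpy as np

def upsample_range_image(lidar_range: np.ndarray, hd_shape: tuple) -> np.ndarray:
    """Bilinearly interpolate a LiDAR range image up to the context-camera grid."""
    src_h, src_w = lidar_range.shape
    dst_h, dst_w = hd_shape
    # Map each HD pixel center back onto the coarse LiDAR grid.
    ys = np.linspace(0, src_h - 1, dst_h)
    xs = np.linspace(0, src_w - 1, dst_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, src_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, src_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Bilinear blend of the four surrounding LiDAR range samples.
    top = lidar_range[np.ix_(y0, x0)] * (1 - wx) + lidar_range[np.ix_(y0, x1)] * wx
    bot = lidar_range[np.ix_(y1, x0)] * (1 - wx) + lidar_range[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Hypothetical example: a 128x128 Flash LiDAR frame lifted to a 1280x1024 context frame.
hd_range = upsample_range_image(np.random.rand(128, 128) * 100.0, (1024, 1280))
```

Because the two cameras share a boresight and frame clock, each HD pixel maps to a fixed location on the LiDAR grid, which is what makes a simple per-frame interpolation like this plausible in real time.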
Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors and coprocessors and their associated algorithms, which have led to a number of advancements in high-speed wavefront processing, along with additional advances in dynamic camera control and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing, and life-cycle costs can be significantly reduced. This technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced because all points are captured at the same time and are thus correlated; this correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work, including our recent testing at Marshall's Flight Robotics Laboratory.
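To make concrete why a staring TOF array is so efficient to process, the minimal Python sketch below converts a full frame of per-pixel round-trip times to range in a single vectorized pass using r = c t / 2, the kind of per-pixel operation an FPGA pipeline parallelizes. The function name, frame size, and simple conversion are hypothetical illustrations, not the flight pixel-tube algorithm.

```python
# Minimal sketch of the per-pixel time-of-flight to range conversion performed
# by a focal-plane-array ("flash") TOF LADAR. All names and values are
# illustrative assumptions, not Ball's flight processing chain.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(tof_seconds: np.ndarray) -> np.ndarray:
    """Round-trip time of flight -> one-way range: r = c * t / 2."""
    return 0.5 * C * tof_seconds

# Because every pixel is captured on the same laser pulse, the whole frame
# converts in one vectorized pass, with no per-point scan timing to untangle.
tof_frame = np.full((128, 128), 6.67e-6)   # ~1 km round trip at every pixel
range_frame = tof_to_range(tof_frame)      # ~1000 m everywhere
```

Simultaneous capture is what the abstract's correlation argument rests on: all pixels share one pulse time and one platform pose, so per-frame bookkeeping replaces per-point bookkeeping.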