Unsupervised depth estimation methods that can be trained on monocular image data have emerged as a promising tool for vision-based vehicles equipped with neither a stereo camera nor a LIDAR. Depths predicted from single images can be used, for example, to avoid obstacles in autonomous navigation or to improve in-vehicle change detection. We employ a self-supervised depth estimation network to predict depth in monocular image sequences acquired by a military vehicle and a UGV. We trained the models on the KITTI dataset and fine-tuned them on monocular image data from each vehicle. The results show that the estimated depths are visually plausible in on-road as well as off-road environments. We also provide an example application by using the predicted depths to compute stixels, a medium-level representation of traffic scenes for self-driving vehicles.
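To make the stixel application concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of extracting one stixel per image column from a predicted depth map: pixels deviating from an assumed per-row ground-plane depth are treated as obstacle, and the stixel is the vertical span of those pixels with a representative depth. The function name, the ground-plane model, and the tolerance are assumptions for illustration.

```python
import numpy as np

def column_stixels(depth, ground_depth, tol=0.5):
    """Very simplified per-column stixel extraction (illustrative sketch).

    depth        : (H, W) depth map in meters (e.g., from a monocular network)
    ground_depth : (H,) expected ground-plane depth for each image row
    tol          : deviation (m) above which a pixel counts as obstacle
    Returns per column: (base_row, top_row, stixel_depth), or None if the
    column is free space.
    """
    H, W = depth.shape
    stixels = []
    for u in range(W):
        col = depth[:, u]
        # obstacle pixels deviate from the expected ground-plane depth
        obstacle = np.abs(col - ground_depth) > tol
        rows = np.nonzero(obstacle)[0]
        if rows.size == 0:
            stixels.append(None)
            continue
        base, top = rows.max(), rows.min()
        # representative stixel depth: median over the obstacle span
        stixels.append((base, top, float(np.median(col[top:base + 1]))))
    return stixels
```

Real stixel algorithms additionally segment multiple stixels per column and enforce smoothness across columns; this sketch only conveys the basic idea of a column-wise medium-level representation.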
The detection of potential threats such as IEDs is essential for the security of military vehicle convoys. We present an image change detection approach that estimates change probabilities in outdoor routes, using a monocular optical camera system. The camera system is part of a multi-sensor system mounted on the lead vehicle of a vehicle convoy, and the estimated change probabilities of each sensor are exploited by a data fusion algorithm. Our change detection approach is based on illumination-invariant image representations and is thus insensitive to illumination differences (e.g., due to shadows), and detects potential changes of multiple sizes. We introduce a temporal weighting scheme based on intra-sequence homography coherence to reduce false detections due to the parallax effect. For the experimental evaluation we have used image sequences acquired on outdoor routes which include different types of changes, strong illumination differences, and scenes with parallax.
Image change detection methods make it possible to automatically detect changes on outdoor routes with respect to a previous point in time and thereby to improve the identification of potential threats to vehicles (e.g., improvised explosive devices, IEDs). The change detection task is challenging, mainly due to the arbitrary types of changes and to global and local illumination differences. We have developed an illumination-invariant change detection approach based on intrinsic images and differences of Gaussians. The intrinsic images are obtained via the log-chromaticity space of color images and, in conjunction with reflectance images obtained by homomorphic filtering, are used to determine illumination-invariant difference images. We employ differences of Gaussians to detect blobs of multiple sizes in the difference images; these blobs describe detected changes. To reduce false positives we propose to weight the scores of the detected blobs based on normalized cross-correlation, and to enforce temporally consistent changes we employ a temporal filtering scheme. The approach has been successfully applied to image sequences acquired on outdoor routes at different points in time, demonstrating consistent detection of changes over time and invariance to illumination differences, including shadows.
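The multi-scale blob detection step can be sketched as follows. This is an illustrative difference-of-Gaussians (DoG) response, not the authors' code: blobs of different sizes in a difference image produce strong absolute DoG responses at the matching scale, and taking the per-pixel maximum over scales yields a change-strength map. The scale set and the ratio k are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_blob_scores(diff, sigmas=(1.0, 2.0, 4.0), k=1.6):
    """Multi-scale DoG response on a difference image (illustrative sketch).

    diff : (H, W) illumination-invariant difference image
    Returns a (H, W) map of per-pixel change strength.
    """
    responses = []
    for s in sigmas:
        # DoG: band-pass filter tuned to blobs of size ~ sigma
        dog = gaussian_filter(diff, s) - gaussian_filter(diff, k * s)
        responses.append(np.abs(dog))
    # strongest response over all scales approximates blobs of multiple sizes
    return np.max(np.stack(responses), axis=0)
```

In a full pipeline, local maxima of this map would be taken as candidate changes and then re-weighted by normalized cross-correlation between the two time points, as described in the abstract.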
We have evaluated an HDR (high dynamic range) CMOS image sensor with logarithmic response for an image-based navigation system which allows vehicles to autonomously drive on reference trajectories. The goal of the evaluation is to investigate how the advantages (constant image intensities under varying illumination conditions) and the disadvantages (higher noise and artifact levels) of the sensor affect the performance of image-based navigation. We recorded HDR image sequences using a vehicle to evaluate the performance of intra-frame image analysis, which is used to estimate the vehicle motion and the respective trajectories, and of inter-frame image analysis, which is used to align a current trajectory to a reference trajectory. We have also performed a comparison with simultaneously recorded LDR image data on itineraries containing scenes with high dynamic range.
KEYWORDS: Image sensors, High dynamic range image sensors, High dynamic range imaging, Associative arrays, Data acquisition, Sensors, Image processing, Nonuniformity corrections, RGB color model, Image quality
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to the display of images and with respect to image analysis techniques. Regarding the display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
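Two of the pipeline stages named above can be illustrated with minimal sketches, assuming a simple two-point NUC model (per-pixel offset and gain) and a global logarithmic tone-mapping operator. These are generic textbook formulations, not the framework described in the abstract; the function names and parameters are assumptions.

```python
import numpy as np

def nuc(raw, offset, gain):
    """Two-point nonuniformity correction (sketch): remove each pixel's
    fixed-pattern offset, then equalize its responsivity with a gain map."""
    return (raw - offset) * gain

def tonemap_log(hdr, out_max=255.0):
    """Global logarithmic tone mapping (sketch): compress a linear HDR
    frame into the displayable [0, out_max] range."""
    log_img = np.log1p(hdr - hdr.min())
    return out_max * log_img / log_img.max()
```

The offset and gain maps would in practice be calibrated from dark and flat-field frames; cross-talk correction and demosaicing are omitted here for brevity.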
Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. Analyzing and comparing color images that depict the same scene at different points in time requires compensating the color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different points in time, and a comparison with previous Retinex-based approaches has been carried out.
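The two building blocks named in the abstract can be sketched in their standard textbook forms (this is not the authors' SII-based implementation, and the sigma value is an assumption): single-scale center/surround Retinex as the log ratio of a pixel to its Gaussian-blurred surround, and Gray World color constancy as per-channel gains that equalize the channel means.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=30.0, eps=1e-6):
    """Center/surround Retinex (sketch): log(center) - log(surround),
    where the surround is a Gaussian-smoothed version of the channel."""
    return np.log(channel + eps) - np.log(gaussian_filter(channel, sigma) + eps)

def gray_world_gains(img):
    """Gray World hypothesis (sketch): the average scene color is gray,
    so scale each channel so that all channel means become equal."""
    means = img.reshape(-1, 3).mean(axis=0)
    return means.mean() / means
```

The approach in the abstract combines the two outputs through a color processing function and replaces the Gaussian surround with stacked integral images for efficiency; both refinements are omitted here.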
Real-time applications in the field of security and defense use dynamic color camera systems to gain a better understanding of outdoor scenes. To enhance details and improve the visibility in images it is required to perform local image processing, and to reduce lightness and color inconsistencies between images acquired under different illumination conditions it is required to compensate illumination effects. We introduce an automatic hue-preserving local contrast enhancement and illumination compensation approach for outdoor color images. Our approach is based on a shadow-weighted intensity-based Retinex model which enhances details and compensates the illumination effect on the lightness of an image. The Retinex model exploits information from a shadow detection approach to reduce lightness halo artifacts on shadow boundaries. We employ a hue-preserving color transformation to obtain a color image based on the original color information. To reduce color inconsistencies between images acquired under different illumination conditions we process the saturation using a scaling function. The approach has been successfully applied to static and dynamic color image sequences of outdoor scenes and an experimental comparison with previous Retinex-based approaches has been carried out.
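A common way to realize a hue-preserving color transformation of the kind mentioned above is to enhance only the intensity channel and then rescale all three color channels by the same per-pixel factor, which leaves the R:G:B ratios (and hence the hue) unchanged. The following is a minimal sketch of that idea, not the specific transformation used in the paper; the function name is an assumption.

```python
import numpy as np

def hue_preserving_enhance(rgb, enhanced_intensity, eps=1e-6):
    """Apply an enhanced intensity image to an RGB image while preserving
    hue (sketch): all channels are scaled by the same per-pixel factor,
    so the channel ratios, and therefore the hue, stay unchanged."""
    intensity = rgb.mean(axis=2)
    scale = enhanced_intensity / (intensity + eps)
    return rgb * scale[..., None]
```

In the paper's setting, `enhanced_intensity` would come from the shadow-weighted Retinex model, and a separate saturation scaling step would follow; both are outside the scope of this sketch.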