Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186601 (2021) https://doi.org/10.1117/12.2614841
This PDF file contains the front matter associated with SPIE Proceedings Volume 11866 including the Title Page, Copyright information, and Table of Contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186602 (2021) https://doi.org/10.1117/12.2613704
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186603 (2021) https://doi.org/10.1117/12.2599786
Using a state-of-the-art fully digital ROIC topped with an uncooled bolometer, an infrared imaging platform has been developed to meet the expectations of increasingly demanding applications. Cameras based on uncooled IR bolometers are becoming more popular and have been designed to fully meet the requirements of diverse markets, from machine vision applications to very lightweight, low-power drones. Each application has its own unique requirements, which has led to specific interface developments (GigE with the IEEE 1588 Precision Time Protocol, USB, MIPI CSI/DSI, and more) as well as developments in embedded image processing. In this paper, we give an overview of the modular design process that has led to an easy-to-use modular interface. Throughout this development, real IR performance has to be taken into account, for example the capability to achieve an NETD of 50 mK with a scene dynamic range higher than 1000°C without any adjustment settings. Image quality, with and without shutter, is also addressed, paving the way to affordable, powerful thermal imaging modules and cameras.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186604 (2021) https://doi.org/10.1117/12.2600113
In this paper, we study the influence of three different etching depths on the electrical and electro-optical properties of a non-passivated T2SL nBn Ga-free pixel detector with a 5 μm cut-off wavelength at 150 K. The study shows the strong influence of the lateral diffusion length on the shallow-etched pixel properties and, therefore, the need to etch through the absorber layer to avoid the lateral diffusion contribution. The lowest dark current density was recorded for a deep-etched detector, on the order of 1 × 10⁻⁵ A/cm² at 150 K and an operating bias of −300 mV. The quantum efficiency of this deep-etched detector is measured close to 55% at 150 K, without anti-reflection coating. A comparison of the electro-optical performances obtained for the three etching depths demonstrates that etching only through the middle of the absorber layer (mid-etched) eliminates the lateral diffusion contribution while preserving good uniformity between the diodes' performance. Such a result is suitable for the fabrication of IR focal plane arrays (FPAs).
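The dark current density quoted in this abstract is the measured dark current normalized by the pixel's electrical area. As a minimal sketch of that normalization (the pixel pitch and current values below are made-up examples, not parameters from the paper):

```python
# Illustrative sketch: converting a measured dark current into the
# dark current density (A/cm^2) quoted for a pixel detector.
# The pixel pitch and current below are hypothetical example values.

def dark_current_density(dark_current_a, pixel_area_cm2):
    """Dark current density J = I_dark / A, in A/cm^2."""
    return dark_current_a / pixel_area_cm2

pitch_um = 15.0                      # hypothetical pixel pitch
area_cm2 = (pitch_um * 1e-4) ** 2    # pixel area in cm^2
j_dark = dark_current_density(2.25e-11, area_cm2)
print(f"J_dark = {j_dark:.1e} A/cm^2")   # on the order of 1e-5 A/cm^2
```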
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186605 (2021) https://doi.org/10.1117/12.2597969
Beyond today's challenges in contactless measurement of body temperature, the market for uncooled thermal imagers has grown continuously in recent years. The size of the camera core is a parameter that needs to follow the miniaturization of the whole camera body. The state-of-the-art pixel size of microbolometers in uncooled thermal imagers is 10 μm. Pushing the microbolometer size to the optical limit, Fraunhofer IMS provides a manufacturing process for FIR imagers (uncooled thermal imagers) based on a scalable microbolometer technology. Taking this scalable technology as a basis, we introduce a fully implemented uncooled thermal imager with 6 μm pixel size. The 6 μm microbolometers are made with Fraunhofer IMS's manufacturing technology for thermal MEMS isolation realized by vertical nanotubes. The performance of the 6 μm microbolometers is estimated with a 17 μm digital readout integrated circuit in QVGA resolution. Responsivity and the number of electrically defective pixels, as well as NETD, are determined by an electro-optical characterization based on a test setup with a blackbody at two different temperatures. The NETD of the 6 μm microbolometers is estimated to be 611 mK. Supporting the quantitative measurements, FIR test images are presented to demonstrate the microbolometers' functionality in a fully implemented uncooled thermal imager. In summary, a fully implemented uncooled thermal imager with QVGA resolution based on a 6 μm nanotube-microbolometer detector is presented here. Compared with commercially available uncooled thermal imagers, the highly limited absorption area of our microbolometers, with structure sizes below the target wavelength, causes an accordingly higher NETD. Nevertheless, a 6 μm pixel size still shows the capability of absorbing infrared radiation at wavelengths of approximately 10 μm in an uncooled thermal imager.
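The two-blackbody characterization mentioned above is the standard way to estimate NETD: the responsivity is taken from the signal difference between the two blackbody temperatures, and the NETD is the temporal noise divided by that responsivity. A minimal sketch with synthetic frame data (the counts and noise levels are stand-ins, not Fraunhofer IMS measurements):

```python
import numpy as np

# Sketch of a two-blackbody NETD estimate. The frame data are synthetic
# stand-ins chosen to land near the ~600 mK regime discussed above.
rng = np.random.default_rng(0)

t_cold, t_hot = 20.0, 40.0                                # blackbody temps, deg C
frames_cold = 1000.0 + rng.normal(0.0, 3.0, (50, 8, 8))   # counts, 50 frames
frames_hot  = 1100.0 + rng.normal(0.0, 3.0, (50, 8, 8))

# Responsivity: mean signal change per kelvin of scene temperature.
responsivity = (frames_hot.mean() - frames_cold.mean()) / (t_hot - t_cold)

# Temporal noise: std over frames, averaged over pixels.
noise = frames_cold.std(axis=0, ddof=1).mean()

netd_mk = 1000.0 * noise / responsivity
print(f"NETD ~ {netd_mk:.0f} mK")
```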
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186606 (2021) https://doi.org/10.1117/12.2598159
In this communication, we report on electrical and electro-optical characterizations of an InAs/InAsSb Type-II superlattice (T2SL) MWIR photodetector showing a cut-off wavelength at 5 μm. The device, based on a barrier structure in the XBn configuration, was grown by molecular beam epitaxy (MBE) on a GaSb substrate. At 150 K, dark current measurements show a device in the Shockley-Read-Hall (SRH) regime but with an absolute value comparable to the state of the art. A quantum efficiency of 50% at a wavelength of 3 μm for a 3 μm thick absorption layer is found in the single-pass configuration under front-side illumination. Combined with lifetime measurements performed on dedicated samples by the time-resolved photoluminescence (TRPL) technique, the mobility is extracted from these measurements using a theoretical calculation of the quantum efficiency based on Hovel's equations. Such an approach helps us to better understand minority-hole carrier transport in the Ga-free T2SL MWIR XBn detector and therefore to improve its performance.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186607 (2021) https://doi.org/10.1117/12.2600294
Increasing demands on the imaging quality of assembled optical systems require the optimization of the lateral and axial alignment of the individual lenses. An economical way to improve mechanical alignment is step-by-step centration testing of the topmost lens surface during the assembly process. Regardless of the material properties of the lens, measurement equipment operating in the visible spectral range is suitable for this application. When the assembled lens does not meet the expected imaging performance, an in-depth analysis is needed. Focusing electronic autocollimators in combination with high-precision air-bearing spindles are commonly used to analyze the centration of each optical surface and lens element. The well-established TRIOPTICS OptiCentric family can determine the centration of inner surfaces using its MultiLens technique. The obtained measurement data are processed to provide the shift and tilt of individual lenses or groups of lenses with respect to each other or to a freely selectable datum. For IR lenses, a wavelength that can penetrate the lens material is required. Autocollimators for MWIR or LWIR are combined with a VIS measurement head for all measurements on lens surfaces that are directly accessible from the outside; a measurement accuracy of 0.1 μm is reached. The development of the new OptiCentric® IR autocollimation head was mainly driven by optimizing the spot size and hence its accuracy. A centration measurement precision below 0.25 μm for MWIR and LWIR wavelengths was obtained. For the measurement of air spacings and center thicknesses through all IR lens materials, the instrument incorporates a low-coherence interferometer with an accuracy down to 0.15 μm. This contribution describes how an IR lens assembly consisting of several lenses can be fully opto-mechanically characterized in a non-contact and non-destructive fashion. Optimized workflows that streamline the assembly process are also considered, as are prerequisites such as operator skills.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186608 (2021) https://doi.org/10.1117/12.2600314
Based on general properties of the Maxwell equations, we develop simple design rules to modify the dispersion relation of plasmonic resonators fabricated from nanostructured metallic films, to tune their far-field response, and to couple plasmons to phonon polaritons. Applying such rules, a plasmonic trench resonator is designed as an electro-optical biosensor. The resonator is fed by a nanometric slit that can be electrically biased. Light traversing the slit excites surface plasmon polaritons in the resonator that produce high-Q transmission peaks, which are employed for real-time biosensing. By applying an RF electrical bias across the slit, the trench resonator can simultaneously serve as a dielectrophoretic trap able to attract or repel analytes. Trapped analytes are detected in a label-free manner using refractive-index sensing, enabled by interference between surface-plasmon standing waves in the trench and light transmitted through the slit. This active sample concentration mechanism enables detection of nanoparticles and proteins at concentrations as low as 10 pM. The electrically biased split-trench resonator can potentially be applied in optoelectronics and signal processing, as well as to trap quantum emitters, paving the way to the study of strong light-matter interactions, cavity polaritonics, electrical carrier injection, and electroluminescence.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186609 (2021) https://doi.org/10.1117/12.2597079
A phase mask in the aperture stop of an imaging system can enhance its depth of field (DoF). This DoF extension capacity can be maximized by jointly optimizing the phase mask and the digital processing algorithm used to deblur the acquired image. This method, introduced by Cathey and Dowski with a cubic phase mask, has been generalized to different mask models. Among them, annular binary phase masks are easy to manufacture and can be co-optimized with a single, simple Wiener deconvolution filter. Their performance and robustness have been characterized theoretically and experimentally in the case of monochromatic illumination. Here we perform a theoretical and experimental study of co-designed DoF-enhancing binary phase masks in panchromatic imagers. At first glance, this configuration is not optimal for binary phase masks: they are most often manufactured by binary etching of a dielectric plate, so the dephasing depends on the wavelength, and the π-radian dephasing is reached for only one wavelength. How do phase masks optimized for a particular wavelength respond to a wide illumination spectrum? Is it possible to take the illumination spectrum into account in the co-optimization of phase masks? What impact does this have on the result? We analyze the behavior of DoF-enhancing phase masks in panchromatic imagers in terms of the Modulation Transfer Function and of final image quality. The results are validated with imaging experiments carried out with a commercial lens, a Vis-NIR CMOS sensor, and co-optimized phase masks. We study different phase masks co-optimized for different illumination spectra. We show that masks specifically optimized for wide-spectrum illumination perform better under this type of illumination than monochromatically optimized phase masks do under monochromatic illumination, especially when the targeted DoF range is large.
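The wavelength dependence noted above follows directly from the etch geometry: a step of depth h in a material of index n produces a phase step of 2π(n−1)h/λ, so an etch chosen for π radians at one design wavelength under- or over-dephases everywhere else. A minimal sketch (the index value is an assumed, dispersion-free example, e.g. fused-silica-like):

```python
import math

# Sketch of the wavelength dependence of a binary-etched phase mask:
# exactly pi radians of dephasing is obtained at the design wavelength
# only. Index dispersion is ignored; n = 1.46 is an assumed example.

def dephasing_rad(wavelength_nm, design_nm, n=1.46):
    """Phase step of an etch depth chosen to give pi at design_nm."""
    depth_nm = design_nm / (2.0 * (n - 1.0))       # h = lambda0 / (2(n-1))
    return 2.0 * math.pi * (n - 1.0) * depth_nm / wavelength_nm

print(dephasing_rad(550.0, 550.0) / math.pi)   # exactly pi at 550 nm
print(dephasing_rad(800.0, 550.0) / math.pi)   # < 1: under-dephased in NIR
```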
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660A (2021) https://doi.org/10.1117/12.2597122
Co-design consists in optimizing an imaging system by taking into account the scene and image formation model, the imaging system, and the method of information extraction [Stork and Robinson 2008]. For several years, our team has co-designed phase masks to increase the depth of field of optical imaging systems where the end product is a restored image [Diaz et al. 2011, Burcklen et al. 2015, Falcón et al. 2017]. These masks produce a relatively blurred image whose quality is independent of the axial position of the object. It is then possible to reconstruct the object at all depths by applying a unique deconvolution process. This co-optimization approach can be formulated by defining the optimization criterion for the phase function of the mask as the mean square difference between an ideal sharp image and the deconvolved image delivered by the system [Mirani et al. 2005, Robinson and Stork 2006, Mirani et al. 2008]. In general, it is preferable to optimize the masks using a closed-form criterion, since this considerably accelerates optimization; that is the case if the deconvolution is carried out using a Wiener filter. However, nonlinear deconvolution algorithms are known to perform better. The question therefore arises as to whether better imaging performance can be obtained by taking into account a nonlinear deconvolution algorithm instead of a linear one in the optimization criterion. To answer this question, we compare the image qualities obtained with these two approaches. We show that the masks obtained by optimizing criteria based on linear and nonlinear algorithms are identical and propose a conjecture to explain this behavior [Lévêque et al. 2021]. This result is important since it justifies a frequent practice in co-design, which consists in optimizing a system with a simple analytical criterion based on a linear deconvolution and restoring images with a more efficient nonlinear deconvolution algorithm [Portilla and Barbero 2018].
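The closed-form property mentioned above comes from the fact that, for a Wiener filter, the residual mean-square error has an analytic per-frequency expression, so no deconvolution needs to be run inside the optimization loop. A hedged sketch of that criterion (the OTF models and power spectra below are generic illustrative assumptions, not the masks of the paper):

```python
import numpy as np

# Sketch of the closed-form Wiener criterion: with a Wiener filter,
# the deconvolved-image MSE per spatial frequency is
#   Sxx * Snn / (Sxx * |H|^2 + Snn),
# which can be summed over frequencies without explicit deconvolution.
# OTFs and spectra below are illustrative assumptions.

def wiener_mse(otf, s_xx, s_nn):
    """Residual MSE of Wiener deconvolution, summed over frequencies."""
    return np.sum(s_xx * s_nn / (s_xx * np.abs(otf) ** 2 + s_nn))

nu = np.linspace(0.0, 1.0, 256)           # normalized spatial frequency
s_xx = 1.0 / (1.0 + (10.0 * nu) ** 2)     # generic decaying object spectrum
s_nn = 1e-3                               # white noise PSD

otf_sharp = np.clip(1.0 - nu, 0.0, None)        # diffraction-like OTF
otf_blur  = np.clip(1.0 - 2.0 * nu, 0.0, None)  # defocused: earlier cut-off

# The sharper system yields the lower criterion value.
print(wiener_mse(otf_sharp, s_xx, s_nn) < wiener_mse(otf_blur, s_xx, s_nn))
```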
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660B (2021) https://doi.org/10.1117/12.2600179
Raman spectroscopy is an efficient method for the detection of explosives, even in small quantities. A laser can be combined with a Coded Aperture Snapshot Spectral Imaging (CASSI) system to collect Raman spectra from a surface at stand-off distances. The CASSI system decreases the data collection time but increases the reconstruction time for the Raman image. Reconstruction of Raman spectra from an ensemble of compressed-sensing measurements using standard reconstruction methods such as Total Variation (TV) is rather time consuming and limits the application domain of the technique. Novel machine learning approaches such as Deep Learning (DL) have recently been applied to reconstruction problems. We evaluate our earlier developed DL approach for the reconstruction of Raman spectra from an ensemble of measurements, formulated as a regression problem. The DL network is trained by minimizing a loss function composed of two components: a reconstruction error and a re-projection error. The evaluated method is trained on simulated data generated using a transfer function developed to mimic the optical properties of a CASSI system. The DL network has been trained on different training sets with different levels of background noise, different numbers of materials in the scene, and different spatial configurations of the materials. The reconstruction results of the DL network have been qualitatively evaluated on simulated data and compared to the Two-Step Iterative Shrinkage/Thresholding (TwIST) algorithm in terms of reconstruction quality and computation time. The reconstruction time for the DL network is orders of magnitude lower than for TwIST without reducing the quality of the reconstructed Raman spectra.
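The two-term training loss described above combines a direct reconstruction error with a re-projection error that pushes the forward model of the estimate back toward the raw measurement. A minimal sketch under stated assumptions: the linear operator `A` is a random stand-in for the CASSI transfer function, and the weight `w` is an assumed hyperparameter, neither taken from the paper:

```python
import numpy as np

# Sketch of a reconstruction + re-projection loss. A is a random
# stand-in for the CASSI forward (transfer-function) operator.
rng = np.random.default_rng(1)
A = rng.normal(size=(32, 128))          # stand-in sensing matrix

def loss(x_hat, x_true, y_meas, w=0.5):
    recon = np.mean((x_hat - x_true) ** 2)          # reconstruction error
    reproj = np.mean((A @ x_hat - y_meas) ** 2)     # re-projection error
    return recon + w * reproj                       # w: assumed weight

x = rng.normal(size=128)                # "true" spectra vector
y = A @ x                               # noiseless measurement
print(loss(x, x, y))                    # 0.0 for a perfect estimate
print(loss(x + 0.1, x, y) > 0.0)        # any error raises the loss
```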
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660C (2021) https://doi.org/10.1117/12.2600585
The use of data obtained by a group of sensors requires the formation of parallel channels. Using each channel separately requires allocating additional computing resources and assigning time slots (timing) in a single-processor analysis system. Forming a decision rule, and subsequently making decisions based on such data, requires a combined inter-block criterion. This criterion should consider both the possible overlap of the data and the discrepancies associated with using different parameters when processing the same data. Combining the data reduces the computational cost at the decision-making stage, which improves the efficiency of post-processing and visual control systems. With combined stationary systems, it is possible to create template fields from which transformation matrices for a specific space can be formed. If fixed cameras or combined systems in a single body cannot be used, forming stitched images becomes complicated. Combining data into a single information field also increases the operator's efficiency, allowing the entire process to be analyzed as a whole rather than as scattered parts. The paper proposes a technique for forming a stitched thermal image based on combined data analysis. To form the anchor points for stitching images, primary analysis methods with combined processing parameters are used. Images obtained from the outputs of thermal imaging cameras are pre-processed by a filtering method based on a multicriteria function. It automatically divides the image into regions (boundaries, highly detailed areas, and locally stationary areas) to reduce noise while preserving the transition boundaries. To increase the processing speed, a simplification algorithm is applied that maintains the shapes and geometry of objects. This operation includes absorbing small objects and averaging over the ranges of the histograms of color gradients. The analysis of local features and the formation of anchor points are based on correlation analysis. As the method of nonlinear change of color balance, modified alpha-rooting methods are used. As test data, a series of images of one object was obtained from different fixation points in the visible (camera with a resolution of 1920 × 1080 pixels, 8-bit color depth) and far-infrared (thermal images with a resolution of 320 × 240 pixels, grayscale) ranges. The images have at least a 40% overlap of the object. Applications both in industrial production and for the analysis of objects in open areas are considered.
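Correlation-based anchor-point formation, as mentioned above, typically means locating a template patch from one image in the other by maximizing a normalized cross-correlation (NCC) score. A brute-force sketch on synthetic data (not the paper's actual multicriteria pipeline):

```python
import numpy as np

# Sketch of correlation-based anchor matching: a patch from one image
# is located in another by maximizing normalized cross-correlation.
# Images here are synthetic; real stitching adds scale/rotation handling.

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def find_anchor(image, template):
    """Return the (row, col) where template best matches image."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(2)
img = rng.normal(size=(40, 40))
tpl = img[12:20, 25:33].copy()      # patch cut out at (12, 25)
print(find_anchor(img, tpl))        # recovers (12, 25)
```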
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660D (2021) https://doi.org/10.1117/12.2597723
Model-based performance assessment is a valuable approach when designing or comparing electro-optical and infrared imagers, since it alleviates the need for expensive field measurement campaigns. TRM4 serves this purpose and is primarily used to calculate range performance based on the parameters of the imaging system and the environmental conditions. It features a validated approach to account for aliasing in the performance assessment of sampled imagers. This paper highlights new features and major changes in TRM4.v3, to be released in autumn 2021. TRM4.v3 includes the calculation of an image quality metric based on the National Imagery Interpretability Rating Scale (NIIRS). The NIIRS value computation is based on the latest version of the General Image Quality Equation. This extends the performance assessment capability of TRM4, in particular to imagers used for aerial imaging. The three-dimensional target modelling was revised to cope with a wider range of scenarios: from ground imaging of aerial targets against a sky background to aerial imaging of ground targets, including ground-to-ground imaging. For imagers working in the visible to SWIR spectral range, TRM4.v3 not only provides an improved basis for comparison between lab measurements and modelling, but also allows direct integration of measured device data. This is achieved by introducing and computing, in analogy to the Minimum Temperature Difference Perceived used for thermal imagers, the so-called Minimum Contrast Perceived (MCP). This device figure of merit is similar to the Minimum Resolvable Contrast (MRC) but is also applicable at frequencies above the Nyquist frequency. Using measured MCP or MRC data, range performance can be calculated for devices such as cameras, telescopic sights, and night vision goggles. In addition, the intensified camera module introduced in a previous publication was further elaborated, and a comparison to laboratory measurement results is presented. Lastly, the graphical user interface was improved to provide a better user experience; specifically, interactive user assistance in the form of tooltips was introduced.
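To give a concrete sense of a GIQE-style NIIRS computation: the sketch below uses the older, widely published GIQE 4 coefficients for illustration only; TRM4.v3 itself uses the latest version of the General Image Quality Equation, whose form and coefficients differ. All input values are made-up examples.

```python
import math

# Illustrative GIQE 4 NIIRS computation (not the equation used by
# TRM4.v3, which applies the latest GIQE version). Inputs: ground
# sample distance GSD in inches, relative edge response RER, edge
# overshoot H, noise gain G, and signal-to-noise ratio SNR.

def niirs_giqe4(gsd_in, rer, h, g, snr):
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)

# Hypothetical imager: 12 in GSD, good edge response, modest noise.
print(round(niirs_giqe4(gsd_in=12.0, rer=0.95, h=1.0, g=10.0, snr=50.0), 2))
```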
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660E (2021) https://doi.org/10.1117/12.2599731
Triangle Orientation Discrimination (TOD), developed by TNO Human Factors, and Minimum Temperature Difference Perceived (MTDP), developed by Fraunhofer IOSB, are competing measurement methods for the assessment of well-sampled and undersampled thermal imagers. Key differences between the two methods are the targets, bars for MTDP and equilateral triangles for TOD, and the measurement methodology: MTDP is based on a threshold measurement, whereas TOD uses a psychophysical approach. All advantages and disadvantages of the methods trace back to these differences. The European Computer Model for Optronic System Performance Prediction (ECOMOS) includes range performance assessment according to both methods. This triggered work at Fraunhofer IOSB on comparative TOD and MTDP measurements. The idea was to check whether TOD and MTDP curves coincide when the two target-descriptive parameters, reciprocal angular subtense (one over the triangle size expressed in angular units) and spatial frequency, respectively, are transferred into each other using a conversion factor or function. Surprisingly, the literature does not include such a measurement-based comparison to date. Extending IOSB's existing MTDP setup with triangle targets and the associated turntable and shutter enabled the comparative measurements. The applied TOD measurement process follows the guidelines found in the literature, with some necessary adaptations. Both measurements used the same components (blackbody, collimator, monitor, etc.) except for the targets. Additionally, the trained MTDP observer also performed the TOD measurements; thus, only the methods themselves should cause differences in the results. Four thermal imagers with different degrees of undersampling (MTF at the Nyquist frequency of about 8%, 14%, 32%, and 73%) form the basis of the comparison. Their measurements allowed deriving a standard target for triangles according to the process known from target acquisition assessment. These calculations result in 1.5 ± 0.2 line pairs on target. Multiplying the reciprocal angular subtense by this factor gives corresponding MTDP and TOD curves when TOD is based on a 62.5% instead of the standard 75% probability. 62.5% corrected for chance corresponds to 50% probability and is thus consistent with the threshold assumption of the MTDP. Deviations occur when the reciprocal angular subtense is near the cut-off, because of unaccounted sampling effects. The proposed way to overcome this is to normalize the spatial frequency and the reciprocal angular subtense by the full width at half maximum of the camera line spread function. A sigmoidal transition function can describe the resulting connection. This function could be valid for all thermal imagers, as indicated by the assessment of two additional ones. However, as the assessment is based on only six thermal imagers and one observer, further comparative measurements by a larger number of observers or, alternatively, modeling are necessary.
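The conversion derived above can be sketched in one line: multiplying the reciprocal angular subtense (one over the triangle size, in 1/mrad) by the 1.5 line-pairs-on-target factor yields the equivalent MTDP bar-pattern spatial frequency in lp/mrad. A minimal illustration of that mapping (the example triangle size is a made-up value):

```python
# Sketch of the TOD-to-MTDP target-size conversion derived above:
# spatial frequency = LP_ON_TARGET * (1 / triangle size).

LP_ON_TARGET = 1.5   # derived factor, quoted as 1.5 +/- 0.2

def ras_to_spatial_frequency(triangle_size_mrad):
    """Map a TOD triangle size (mrad) to an equivalent bar frequency."""
    return LP_ON_TARGET / triangle_size_mrad

# Example: a 0.5 mrad triangle corresponds to a 3.0 lp/mrad bar pattern.
print(ras_to_spatial_frequency(0.5))
```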
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660F (2021) https://doi.org/10.1117/12.2597227
Target detection presents many challenges for military imaging sensor systems, where there is a strong dependency on camera resolution. From a performance-analysis perspective, the target image is typically considered to be either a fully resolved object or an unresolved point source. However, in many longer-range detection systems, the size of the target's image on the focal plane lies between these two states and often transitions between them during an engagement. Furthermore, the position of the target's image relative to the centre point of a pixel varies with time, and this produces a fluctuation in the measured target signature that affects the peak Signal to Noise Ratio (SNR). The modelling and simulation of an imaging sensor's performance associated with target image sampling by a focal plane array is discussed in terms of three critical factors: geometric resolution, optical blur, and image position on the focal plane. Image sampling factors are introduced to provide a correction to the SNR and the associated detection range equation. In many imaging applications, target detection is limited by scene clutter whose spatial characteristics vary with range. Different approaches for modelling the clutter in the target detection process are considered, and the effect of image sampling on system performance is discussed. A traditional approach for introducing clutter into the calculation of detection range is the use of a Clutter Equivalent Irradiance (CEI) term. In this paper, a correction to the CEI is introduced that compensates for clutter spatial correlation and the effects of sensor processing.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660G (2021) https://doi.org/10.1117/12.2600271
Atmospheric turbulence often limits the performance of long-range imaging systems. Realistic turbulence simulations provide a means to evaluate this effect and assess turbulence mitigation algorithms. Current methods typically use phase screens or turbulent point spread functions (PSFs) to simulate the image distortion and blur due to turbulence. While the former requires long computation times, the latter requires empirical models or libraries of PSF shapes and their associated tip and tilt motion, which might be overly simplistic for some applications. In this work, an approach is evaluated that avoids these issues. Generative neural network models are able to generate extremely realistic imitations of real (image) data with a short calculation time. To treat anisoplanatic imaging for the considered application, the model output is an imitation PSF grid that is applied to the input image to yield the turbulent image. Certain shape features of the model output can be controlled by traversing subsets of the model input space or latent space. The use of a conditional variational autoencoder (cVAE) appears very promising for achieving fast computation times and realistic PSFs and is therefore examined in this work. The cVAE is trained on field-trial camera images of a remote LED array. These images are treated as grids of real PSFs. First, the images are pre-processed and their PSF properties are determined for each frame. The main goal of the cVAE is the generation of PSF grids under conditional properties, e.g. moments of the PSFs. Different approaches are discussed and employed for a qualitative evaluation of the realism of the PSF grids generated by the trained models. A comparison of the required simulation computing time is presented and further considerations regarding the simulation method are discussed.
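The step of applying a PSF grid to an input image to obtain an anisoplanatic turbulent image can be sketched as a nearest-PSF tile approximation: each output pixel is blurred with the PSF of its nearest grid cell. The function name and nearest-neighbour assignment are illustrative assumptions; a real pipeline would interpolate between neighbouring PSFs.

```python
import numpy as np

def apply_psf_grid(image, psf_grid):
    """Blur `image` (H, W) with a grid of local PSFs.
    psf_grid: array (gy, gx, k, k); each PSF blurs one tile of the image
    (nearest-PSF approximation of anisoplanatic turbulence)."""
    H, W = image.shape
    gy, gx, k, _ = psf_grid.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(H):
        for x in range(W):
            # pick the PSF of the grid cell this pixel belongs to
            psf = psf_grid[min(y * gy // H, gy - 1), min(x * gx // W, gx - 1)]
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * psf[::-1, ::-1])  # local convolution
    return out
```

With delta-function PSFs in every grid cell the operation reduces to the identity, which gives a quick sanity check of the indexing.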
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660H (2021) https://doi.org/10.1117/12.2600330
Air turbulence can be a major impairment source for long-range imaging applications. There is great interest in the assessment of turbulence mitigation techniques based on machine learning models. In general, such models require large amounts of image data for robust training and validation. Experimental acquisition of image data in field trials is time-consuming, and environmental conditions such as daytime and weather cannot be specifically controlled. Several methods for turbulence simulation have been proposed in recent years. Many of these are based on phase screens or on modeled turbulent point spread functions (PSFs). Often, simple turbulence models such as the Kolmogorov or von Karman spectrum are used; these methods therefore cannot provide insight into the influence and relevance of other turbulence parameters such as the inner scale and a (non-)Kolmogorov power slope. In this work, a data fitting procedure for the determination of turbulence model parameters from experimental data is shown, using the generalized modified von Karman spectrum (GMVKS). Differential tilt variances (DTV) are calculated from centroid displacements in video sequences of a recorded LED grid. The experimental data are then fitted to theoretical expressions of the DTV by numerical integration over the turbulence model. Image data were acquired in field trials on several days at the same location. A beam propagation method using Markov GMVKS phase screens with the determined model parameters is then used to generate a grid of PSF images representing the degradation for different viewing angles. For validation, DTVs based on centroid displacements of the simulated PSFs are calculated and compared with the corresponding measured LED centroid displacements and with theoretical data. Cumulative distribution functions of the model parameters for all recording dates are provided to show the diversity of turbulence conditions. These can be used as prior knowledge for future turbulence simulations to include various model parameters and hence different conditions of image degradation. Finally, the extensibility of the data fitting approach to other turbulence spectra, e.g. anisotropic spectra, is discussed.
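The differential-tilt quantity at the core of the fitting procedure can be sketched as follows: the tilt of each LED is approximated by its centroid displacement, and the variance of the *difference* of two tracks removes motion common to the whole frame. This is a plausible minimal estimator, not necessarily the exact one used by the authors.

```python
import numpy as np

def differential_tilt_variance(c1, c2):
    """Differential tilt variance between two LED centroid tracks.
    c1, c2: arrays (T, 2) of centroid positions (pixels) over T frames.
    Common (whole-frame) motion cancels in the difference, leaving the
    anisoplanatic tilt component used for the turbulence-model fit."""
    d = (c1 - c1.mean(axis=0)) - (c2 - c2.mean(axis=0))
    return np.mean(np.sum(d**2, axis=1))
```

Identical tracks, or tracks differing only by a constant offset, give zero DTV, so only decorrelated (angle-dependent) motion contributes.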
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660I (2021) https://doi.org/10.1117/12.2600250
Cognitive radar systems adapt processing, receiver, and transmitted waveform parameters by continuously learning from and interacting with the operating environment. IRST systems are passive; as such, no RF emission is involved. Nevertheless, the cognitive paradigm can be applied to passive sensors in order to optimize the choice of operational modes and the platform and processing parameters on the fly. A cognitive IRST, while enhancing the overall performance of the system, would also reduce the crew workload during the mission. In this paper, the steps and challenges toward a cognitive IRST are described, along with a proof-of-concept example of improved tracking capabilities using reinforcement learning methods.
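To give a flavour of the reinforcement-learning idea (this toy problem is entirely hypothetical, not the authors' tracker), a tabular Q-learning loop can learn when to switch a passive sensor between a wide-area "search" mode and a narrow "track" mode, with tracking paying off only when a target is present:

```python
import numpy as np

# Hypothetical toy problem: states 0 = no target, 1 = target present;
# actions 0 = search, 1 = track.
rng = np.random.default_rng(0)
Q = np.zeros((2, 2))            # Q[state, action]
alpha, gamma, eps = 0.2, 0.9, 0.1

def reward(state, action):
    # search yields a small steady reward; track pays off only on a target
    return 1.0 if (state == 1 and action == 1) else (0.2 if action == 0 else 0.0)

state = 0
for step in range(5000):
    # epsilon-greedy action selection
    action = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[state]))
    r = reward(state, action)
    next_state = int(rng.integers(2))   # target appearance is random here
    # standard Q-learning temporal-difference update
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

# The learned greedy policy tracks when a target is present, searches otherwise.
```

In an actual cognitive IRST the state would encode track quality and scene context rather than a binary flag, but the learn-act-observe loop has the same shape.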
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660J (2021) https://doi.org/10.1117/12.2597343
Near-eye displays – displays positioned in close proximity to the observer’s eye – are a technology steadily gaining significance in industrial and defense applications, e.g. for augmented reality and digital night vision. In light of the increasing use of such displays and their ongoing technological development, a specialized measurement setup is designed as a basis for evaluating these types of displays as part of the optoelectronic imaging chain. We developed a prototype measurement setup to analyze different properties of near-eye displays, with our primary focus on the Modulation Transfer Function (MTF) as a first step. The setup consists of an imaging system with a high-resolution CMOS camera and a motorized positioning system. It is intended to run different measurement procedures semi-automatically, performing the desired measurements at specified points on the display. This paper presents a comparison between different MTF measurement methods in terms of their applicability for different pixel structures. As a first step, the measurement setup’s imaging capabilities are determined using a slanted edge target. A commercial virtual reality headset is then used as a sample display to test and compare different standard MTF measurement methods, such as the slanted edge or bar target method. The results are discussed with the goal of finding the best measurement procedures for our setup.
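The core of an edge-based MTF measurement is compact enough to sketch: differentiate the (super-sampled) edge-spread function to obtain the line-spread function, window it, and take the Fourier transform magnitude normalised at DC. The windowing choice and normalisation here are illustrative assumptions, not the standardized slanted-edge procedure in full.

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from an edge-spread function: differentiate to get the
    line-spread function, apply a window to suppress truncation
    artefacts, FFT, and normalise at DC."""
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(lsf.size)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

A perfect step edge yields a flat MTF of 1 at all frequencies, while a blurred (e.g. logistic) edge shows the expected roll-off toward Nyquist.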
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660K (2021) https://doi.org/10.1117/12.2598137
Optical window characterization is performed with a CO2 laser heating the material to understand the optical effects, thermal effects, and temperature dependence of the index of refraction. Distortions in the optical window caused by operating in challenging aerothermal environments can impact an imager’s performance. Uneven heating of the window induces a temperature gradient which, coupled with the temperature dependence of the refractive index, causes a flat sapphire window to act as an imperfect lens. The experimental capability allows multiple sensors and diagnostic equipment to collect synchronized data. A long-wave infrared (LWIR) camera images the sample’s front and back surfaces to measure temperatures and temperature gradients. A transmitted laser probe beam is captured simultaneously by a visible imager and a wavefront sensor. The visible imager captures how a point source appears when observed through the window. A transmitted wavefront is reconstructed from the wavefront sensor data. The reconstructed wavefront includes effects from both dn/dT and mechanical deformation of the window. Using the reconstructed wavefront and the imager optics in Zemax, the point spread function (PSF) of the imager looking through the heated window is generated and compared with the experimentally measured PSF.
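The final step, generating a PSF from a reconstructed wavefront, follows standard Fourier optics: the far-field PSF is the squared magnitude of the Fourier transform of the complex pupil function. The sketch below assumes the optical path difference is given in waves and uses a simple circular pupil; the authors' Zemax model will differ in detail.

```python
import numpy as np

def psf_from_wavefront(opd_waves, pupil):
    """Far-field PSF of a pupil with optical-path-difference map
    `opd_waves` (in waves): PSF = |FFT(P * exp(i*2*pi*W))|^2,
    normalised to unit total energy."""
    field = pupil * np.exp(2j * np.pi * opd_waves)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()
```

A flat wavefront gives the diffraction-limited peak; any aberration (e.g. the thermally induced defocus-like term discussed above) lowers the peak, i.e. reduces the Strehl ratio.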
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660L (2021) https://doi.org/10.1117/12.2596135
We report on the development and field trials of an active polarimetric imager in the SWIR domain. Polarization states are controlled for both emission and analysis. Based on past experience, we focus on Orthogonal State Contrast (OSC) imaging, for which two images with orthogonal polarizations are needed. An important feature of the imager is the use of two InGaAs imaging detectors mounted orthogonally on a polarization beam splitter. This allows synchronous imaging with the two orthogonal polarizations and the real-time acquisition of OSC images at video frame rate without temporal artefacts. The demonstrator has been operated during field trials with static and moving scenes. These trials were mainly aimed at the detection of man-made objects (weapons, vehicles …) in complex scenes at ranges up to a few hundred meters. Along with the presentation of some example results, we discuss different representation modes of the polarimetric information.
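The OSC itself is a simple normalised difference of the two orthogonally polarised images, computed pixel-wise; the small epsilon guarding against division by zero is an implementation detail assumed here.

```python
import numpy as np

def osc(i_par, i_perp, eps=1e-9):
    """Orthogonal State Contrast from two co-registered intensity images
    taken with parallel and perpendicular analyser polarisations.
    Values near 0 indicate depolarising (often natural) surfaces;
    values near +/-1 indicate polarisation-maintaining (often man-made)
    surfaces."""
    return (i_par - i_perp) / (i_par + i_perp + eps)
```

Because the two detectors acquire both polarisations simultaneously through the beam splitter, this ratio can be formed frame by frame at video rate.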
Andreas Peckhaus, Patrick Kuhne, Maike Neuland, Thomas Hall, Carsten Pargmann, Frank Duschek
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660M (2021) https://doi.org/10.1117/12.2599511
The operation of lasers in free space involves the potential risk of unintentionally exposing the human eye and skin to radiation. In addition to direct exposure, indirect scattered radiation of high-power lasers may pose a threat to operators, working personnel, and third parties. Hazard assessments are usually performed based on laser safety standards. However, these standards would have to be extended for outdoor environments and therefore it is advisable to substantiate models and safety calculations with measurements of the absolute scattered radiant flux under realistic conditions. For the quantification of scattered radiation, a radiometric sensor has been developed. The sensor consists of an optical, electronic, and mechanical unit. Two realizations of the optical detection unit with a side-on photomultiplier (PMT) and a photodiode amplifier (PDA) have been built according to German safety policies. The different detector types facilitate the detection of scattered radiation over a wide power range. The electronic unit includes the data acquisition and processing of the optical detection unit and peripheral devices (i.e. environmental sensors and GPS module). A lock-in amplifier is used to reduce the contribution of background radiation. The optical and electronic units are housed separately in a weather-resistant case on a tripod and a mobile container, respectively. Radiometric calibration is performed for each optical detection unit. The calibration involves a two-step procedure allowing for a direct conversion of the output voltage of the lock-in amplifier into an absolute scattered power considering the detector area and collection solid angle of the optical detection unit. 
Goniometer-based reflection measurements of solid surface samples are used to characterize the performance of the optical detection unit in terms of dynamic range, the influence of background noise, accuracy, and repeatability, and contribute to a better understanding of the sensor for future field deployment.
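The two-step radiometric conversion described above can be sketched as follows; the function signature and the specific normalisation (detector power divided by collection area and solid angle) are assumptions for illustration of the calibration chain, not the authors' exact procedure.

```python
def scattered_power(v_lockin, responsivity_v_per_w, area_m2, solid_angle_sr):
    """Convert a lock-in amplifier output voltage to an absolute scattered
    radiometric quantity: first to optical power on the detector via the
    calibrated responsivity, then normalised by the collection area and
    solid angle of the optical detection unit."""
    p_detector = v_lockin / responsivity_v_per_w       # W on the detector
    return p_detector / (area_m2 * solid_angle_sr)     # W / (m^2 sr)
```

Separating the detector calibration (volts per watt) from the geometric factor lets the same electronics serve both the PMT and the photodiode-amplifier detection units.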
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660N (2021) https://doi.org/10.1117/12.2600070
Backscanning step-and-stare imaging is a popular method for widening the coverage of an Electro-Optical/Infra-Red (EO/IR) system. While a high refresh rate of the thermal panoramic scene is a significant factor in the system's false alarm rate for early warning, achieving precise motion of the steering optical component within such short times, on the order of microseconds, is a challenge, especially in high-disturbance environments. Synchronization between the optical movement and the integration point demands strict timing and fine motion during the speed-up, back-scan, and fly-back phases of the fast steering mirror (FSM). This paper presents a high-order trajectory scanning profile design method for the FSM, thereby optimizing the required performance of system components such as the voice coil motor and also improving the overall quality of the fly-back process. Next, a linearization feedback controller and a linear state observer are used to control the system, both tracking the reference trajectory and rejecting disturbances. Finally, a prototype fast steering system is built to apply the new controller algorithms. The simulation and experimental results agree well, showing that the RMS error of the Line of Sight (LOS) can be kept below one hundred microradians during the back-scan process.
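A common family of high-order profiles for this kind of point-to-point mirror motion is the fifth-order (minimum-jerk) polynomial, which has zero velocity and acceleration at both endpoints and therefore bounds the torque demand on the voice coil motor. The paper's actual profile design may differ; this is a representative sketch.

```python
import numpy as np

def minimum_jerk_profile(theta0, theta1, T, n=100):
    """Fifth-order (minimum-jerk) angle profile for an FSM back-scan:
    moves from angle theta0 to theta1 in time T with zero velocity and
    zero acceleration at both endpoints."""
    t = np.linspace(0.0, T, n)
    s = t / T
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5   # s(0)=0, s(1)=1, s'(0)=s'(1)=0
    return t, theta0 + (theta1 - theta0) * shape
```

The smooth start and stop are what make the subsequent fly-back settle quickly enough to synchronize with the next integration window.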
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660O (2021) https://doi.org/10.1117/12.2600218
Polarimetric imaging can be done with a division of focal plane (DoFP) camera. This type of camera uses a grid of superpixels. Each superpixel consists of four neighboring pixels with four polarizers of different orientations in front of them. Thus, this kind of camera makes it possible to estimate the linear Stokes vector in a single acquisition. Full Stokes polarimetric imaging can be realized by adding a retarder in front of the DoFP camera and performing at least two acquisitions with two different values of retarder orientation. The effective retardance of the retarder depends on several parameters such as temperature and wavelength, which are not always controlled when using such a camera in the field. Therefore, this retardance may not be known precisely, and using a retardance value different from the true one will lead to a bias in estimating the Stokes parameter S3, which contains the information about circular polarization. This bias may become greater than the estimation standard deviation due to noise and thus have a significant impact on estimation. We demonstrate that, thanks to measurement redundancy, it is possible to calibrate this retardance directly from the measurements, provided that three acquisitions instead of two are performed and the signal to noise ratio is sufficient. This autocalibration totally cancels the bias and yields a Stokes vector estimation variance identical to that obtained with the true value of the retardance. We study the practical conditions under which this method can be applied, perform experimental validation of its performance, and propose a criterion to decide whether it can be applied depending on the acquired measurements.
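Once the retardance is known (or autocalibrated), the redundant measurements reduce to a linear least-squares problem: each (retarder orientation, polarizer angle) pair contributes one row of a measurement matrix acting on the Stokes vector. The Mueller-matrix convention and the specific three-acquisition geometry below are assumptions for illustration.

```python
import numpy as np

def retarder_mueller(delta, alpha):
    """Mueller matrix of a linear retarder, retardance delta, fast axis at alpha."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [1, 0,                0,                0  ],
        [0, c*c + s*s*cd,     c*s*(1 - cd),    -s*sd],
        [0, c*s*(1 - cd),     s*s + c*c*cd,     c*sd],
        [0, s*sd,            -c*sd,             cd ],
    ])

def stokes_lstsq(intensities, pol_angles, ret_angles, delta):
    """Least-squares full-Stokes estimate from DoFP intensities measured at
    polarizer angles `pol_angles` for each retarder orientation in
    `ret_angles` (retardance `delta` assumed known or autocalibrated)."""
    rows, meas = [], []
    for a_ret, frame in zip(ret_angles, intensities):
        M = retarder_mueller(delta, a_ret)
        for th, I in zip(pol_angles, frame):
            # top row of (polarizer @ retarder) Mueller chain
            analyzer = 0.5 * np.array([1, np.cos(2 * th), np.sin(2 * th), 0])
            rows.append(analyzer @ M)
            meas.append(I)
    S, *_ = np.linalg.lstsq(np.array(rows), np.array(meas), rcond=None)
    return S
```

With three retarder orientations and four polarizer angles, the 12 equations overdetermine the 4 Stokes parameters; it is this redundancy that the autocalibration of the retardance exploits.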
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660P (2021) https://doi.org/10.1117/12.2600231
The development of IRST systems for aircraft is based on strong theoretical foundations in IR physics, on accurate management of each component, and on advanced signal and data processing. Although the expected performance can be analytically estimated using detector and optics data, atmospheric models, and algorithm simulations, verification in a real environment remains essential for the assessment of system behavior. In this paper, we propose an IRST product cycle named M3T which guides the system development up to the final desired performance. The process goes from theory and models to the gathering of real data during flight trials, which are used to tune the signal processing routines and test the system from all angles. The labeling and organisation of recorded data, the calculation of metrics, and the design of tools for replicating the real system behavior on the ground all contribute to minimizing the number of flights necessary to reach the requested level of performance. Moreover, the approach described in this paper can be tailored to user needs, leading to a proactive collaboration between industry and customers.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660S (2021) https://doi.org/10.1117/12.2597554
Unmanned aerial vehicles (UAVs) have become an increasing threat in both civilian and military arenas. While military UAVs are often relatively large and complex, the supply in the civilian hobby market is characterized by small and cheap systems with the capacity to stream high-definition video, carry a variety of other sensors, and transport critical goods (e.g. food or medicine) to hard-to-reach places. The criminal world has quickly realized how UAVs can be used to smuggle weapons or drugs, for example. Militarily, UAVs are established for reconnaissance, fire control, and electronic warfare operations. Laser-guided weapons deployed from a UAV are an example of a widely used system for precision operations in recent conflicts. This paper examines and summarizes various laser functions and their role in detecting, recognizing, tracking, and combating a UAV. The laser can be used as a supporting sensor alongside others, such as radar or IR, to detect, recognize, and track the UAV, and it can dazzle and destroy its optical sensors. A laser can also be used to sense the atmospheric attenuation and turbulence in slant paths, which are critical to the performance of a high-power laser weapon aimed at destroying the UAV.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660T (2021) https://doi.org/10.1117/12.2597192
Sensor-based monitoring of the surroundings of civilian vehicles is primarily relevant for driver assistance in road traffic, whereas in military vehicles, far-reaching reconnaissance of the environment is crucial for accomplishing the respective mission. Modern military vehicles are typically equipped with electro-optical sensor systems for such observation or surveillance purposes. However, especially when the line-of-sight to the onward route is obscured or visibility conditions are generally limited, more enhanced methods for reconnaissance are needed. The obvious benefit of micro-drones (UAVs) for remote reconnaissance is well known. The spatial mobility of UAVs can provide additional information that cannot be obtained on the vehicle itself. For example, the UAV could keep a fixed position in front of and above the vehicle to gather information about the area ahead, or it could fly above or around obstacles to clear hidden areas. In a military context, this is usually referred to as manned-unmanned teaming (MUM-T). In this paper, we propose the use of vehicle-based electro-optical sensors as an alternative way to automatically control (cooperative) UAVs in the vehicle’s vicinity. In its most automated form, the external control of the UAV only requires a 3D nominal position relative to the vehicle or in absolute geocoordinates. The flight path there and the maintaining of this position, including obstacle avoidance, are automatically calculated on board the vehicle and permanently communicated to the UAV as control commands. We show first results of an implementation of this approach using 360° scanning LiDAR sensors mounted on a mobile sensor unit. The control loop of detection, tracking, and guidance of a cooperative UAV in the local environment is demonstrated by two experiments. We show the automatic LiDAR-controlled navigation of a UAV from a starting point A to a destination point B, with and without an obstacle between A and B. The obstacle in the direct path is detected and an alternative flight route is calculated and used.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660U (2021) https://doi.org/10.1117/12.2598151
Two Gated-Viewing instruments of different design but similar mean optical power were compared during a field test. The TRAGVIS sensor is an experimental, scientific development which was designed for the particular needs of maritime search and rescue operations. The instrument uses pulsed VCSELs in the NIR and a CMOS camera in multi-integration mode. Designed for distances < 400 m, it uses a fixed focal length (wide angular FOV of ≈ 9°) and a high repetition rate with low pulse energy. The MODAR is a commercial multi-sensor platform comprising a Gated-Viewing instrument designed for security operations (e.g. police) both at sea and on land. Aiming at distances up to several kilometers, both the camera and the laser illumination are equipped with zoom optics, and the repetition rate is low while the pulse energy is high. In contrast to TRAGVIS, an image intensifier is used. TRAGVIS and MODAR were compared in terms of signal-to-noise ratio (SNR) and image contrast using Lambertian reflectors at different distances. TRAGVIS was found to perform better than MODAR at distances < 350 m, but its performance decreases with distance while MODAR’s performance stays constant as a result of the laser and camera zoom. When used in ungated (continuous exposure) mode, TRAGVIS shows > 5 times larger SNR than in gated mode, and almost one order of magnitude larger SNR than MODAR due to the lack of an image intensifier. This demonstrates the instrument’s ability to be used in both gated-viewing and simple active-illumination modes. However, for the same reason (the image intensifier), MODAR’s shutter suppression, which is crucial for reducing the back-scatter signal and therefore for vision enhancement, was found to be at least 5-6 times better than that of TRAGVIS.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660V (2021) https://doi.org/10.1117/12.2599892
Maritime search and rescue (SAR) operations are highly affected by harsh environmental conditions and darkness (night-time operation). Especially in low-visibility, high-humidity scenarios like fog, mist, or sea spray, gated viewing offers an active-imaging solution to effectively suppress atmospheric back-scatter and enhance target contrast. The presented TRAGVIS gated-viewing system is designed to meet the needs of SAR operations (at least 185 m detection range at a minimum FOV of 7°x6°) and operates in the NIR at an emission wavelength of 804 nm, combining a high repetition rate VCSEL illuminator with an accumulation-mode CMOS camera. The performance of the demonstrator in fog events with a wide range of visibilities and for different sets of system parameters has been evaluated by analysing the target signal, contrast, and signal to noise ratio (SNR) as a function of the optical depth (OD), which was measured by an atmospheric visibility sensor. The back-scattered signal (suppressed by the camera shutter) exceeds the return from a 41% reflectivity target at OD > 4 and was found, together with the low target signal, to be the major reason for the drop in contrast after vision enhancement up to OD ≈ 3. A limit of the system at approximately OD = 5.3 is estimated, where the image still shows a decent contrast of 10%, but at an SNR of only ∼ 2.2. The highest potential for improvement was found in an optimised placement of the illuminator with respect to the receiver and scene geometry.
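The qualitative trend reported here can be sketched with textbook relations: a Beer-Lambert two-way attenuation of the active return and a Michelson-style contrast between target and background pixels. These formulas are generic assumptions for illustration, not the paper's measured relationships.

```python
import numpy as np

def two_way_attenuated(signal0, od):
    """Beer-Lambert two-way attenuation of an active-imaging return
    through a medium of one-way optical depth `od`."""
    return signal0 * np.exp(-2.0 * od)

def target_contrast(target_signal, background_signal):
    """Michelson-style contrast between target and background signals."""
    return (target_signal - background_signal) / (target_signal + background_signal)
```

As OD grows, the exponentially attenuated target return drops toward the residual back-scatter level and the contrast collapses, which is the mechanism behind the reported limit near OD = 5.3.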
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660W (2021) https://doi.org/10.1117/12.2600279
A line scanning ladar can generate detailed three-dimensional images of a scene, so-called point clouds, by emitting individual laser pulses in quick succession in various directions and measuring the time until the return pulses arrive. In a typical mode of operation, the pulses are emitted along horizontal lines, starting from the bottom of the field of view, before gradually increasing the elevation angles of subsequent scanning lines. This paper aims to address an inherent problem with object recognition within point clouds acquired by a line scanning ladar. If some of the scene objects are moving, their position will change slightly between each sweep of a horizontal scanning line. This causes the shape of the moving objects to deform in the resulting point cloud. The problem becomes more severe for wide view angles, slow scan speeds, and fast-moving objects. An object recognition algorithm is proposed that corrects for shape deformations caused by the delay between individual point measurements. In addition, the algorithm is able to estimate the velocity of the recognized object. The algorithm matches observed objects against a 3D model of the object of interest by optimally aligning them with each other while simultaneously estimating the optimal shape deformation caused by motion during acquisition. If the observed object and the 3D model align sufficiently well, according to a certain recognition confidence measure, the observed object is regarded as recognized and its velocity is inferred from the estimated shape deformation. To solve the underlying optimization problem, the "Iterative Closest Point" (ICP) algorithm is modified by incorporating an additional substep, where the shape deformation (and thereby the corresponding velocity) is updated incrementally each iteration. Experiments on simulated and real world data indicate that moving objects can be recognized with high confidence and their velocities can be estimated with high accuracy.
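The extra substep can be illustrated in isolation: given correspondences between model points and observed points with per-point acquisition times, a translation and a constant velocity are jointly estimated by linear least squares. This sketch omits the rotation estimate and the closest-point search of the full modified ICP; the function name is an assumption.

```python
import numpy as np

def fit_translation_and_velocity(model_pts, obs_pts, times):
    """Given point correspondences (model_pts, obs_pts), each (N, 3), and
    per-point acquisition times (N,), jointly estimate a translation b and
    constant velocity v such that obs ~= model + b + v * t.
    This is one linear least-squares substep of a deformation-aware ICP."""
    n = len(times)
    A = np.zeros((3 * n, 6))
    rhs = (obs_pts - model_pts).ravel()
    for i, t in enumerate(times):
        A[3*i:3*i+3, 0:3] = np.eye(3)       # translation columns
        A[3*i:3*i+3, 3:6] = t * np.eye(3)   # velocity columns (scaled by time)
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x[:3], x[3:]
```

Because the acquisition times differ across scan lines, the velocity columns are linearly independent of the translation columns, which is what makes the per-point deformation (and hence the object velocity) observable.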
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660X https://doi.org/10.1117/12.2600348
The operation of a coherent Doppler lidar, developed by NASA for missions to planetary bodies, is analyzed and its projected performance is described. The lidar transmits three laser beams in different but fixed directions and measures line-of-sight range and velocity along each beam. The three line-of-sight measurements are then combined to determine the three components of the vehicle velocity vector and its altitude relative to the ground. Operating from altitudes above five kilometers, the Navigation Doppler Lidar (NDL) provides velocity and range data with precisions of a few cm/s and a few meters, respectively, depending on the vehicle dynamics. This paper explains the sources of measurement error and analyzes the impact of vehicle dynamics on the lidar performance.
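The combination step has a simple linear-algebra core: each beam measures the projection of the velocity vector onto its direction, so three non-coplanar beams give a 3x3 system. A sketch under that assumption (the beam geometry below is invented for illustration, not the NDL's actual configuration):

```python
import numpy as np

def velocity_from_los(beam_dirs, los_speeds):
    """Recover the vehicle velocity vector from three line-of-sight speeds.

    beam_dirs  : (3, 3) array, one beam-direction vector per row
    los_speeds : (3,) measured Doppler speed along each beam
    Solves u_i . v = s_i for v; the three beams must not be coplanar,
    otherwise the system is singular.
    """
    return np.linalg.solve(np.asarray(beam_dirs, float),
                           np.asarray(los_speeds, float))
```

The conditioning of this system is one reason the fixed beam directions matter: nearly coplanar beams would amplify line-of-sight measurement noise into large velocity errors.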
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660Y (2021) https://doi.org/10.1117/12.2598911
There is a need for sensor technologies capable of detecting illegal border crossings through foliage. In this work, we study the use of a novel active hyperspectral sensor (AHS) for remote identification of persons and vehicles through foliage. The AHS is based on a continuously tunable near-infrared supercontinuum light source and a microelectromechanical Fabry-Pérot interferometer for transmission band selection. Real-time spectral detection algorithms identify the targets based on the spectral content of the back-scattered light. Preliminary results from both laboratory and outdoor measurements are presented.
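Identification "based on the spectral content of the back-scattered light" typically reduces to comparing a measured spectrum against reference signatures. As a generic illustration (this is the standard spectral-angle metric, not necessarily the algorithm used by the AHS), one scale-invariant match score is the angle between the two spectra viewed as vectors:

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle in radians between a measured spectrum and a reference.

    Small angles indicate a spectral match; the metric is invariant to
    overall intensity scaling, which helps when illumination or range vary.
    """
    s = np.asarray(spectrum, float)
    r = np.asarray(reference, float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error
```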
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 118660Z (2021) https://doi.org/10.1117/12.2599026
In weapons, the inner surface of the barrel bore is exposed to wear and damage due to harsh operating conditions such as high pressure, high temperature and chemically aggressive propellant combustion products. The paper presents an analysis, based on both numerical simulation and experimental tests, conducted to assess the feasibility of non-destructive testing of gun barrels using eddy current thermography. The obtained results confirm that defects on the barrel bore surface can be detected by means of this method.
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186610 (2021) https://doi.org/10.1117/12.2599680
One of the main challenges in video analysis is the recovery of the original video frames from footage annotated and text-marked in a graphical user interface (GUI) tool, particularly when the original video is not available. Removing annotations from video frames is essential for any kind of algorithm development, such as noise removal, dehazing, object detection, recognition and identification in video, tracking of specific objects in the maritime environment, and further testing. In this work, we developed an algorithm that removes all annotations from any portion of a video frame without affecting the integrity of the original video content. We present a novel technique to remove unnecessary annotations and markers using a progressive switching median filter with wavelet thresholding. Experimental studies show that the annotation-free images generated by the proposed method can be used for the development of any basic algorithm.
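The "switching" idea behind such a filter is that only pixels flagged as corrupted (here, annotation pixels) are replaced, each with the median of already-clean neighbours, sweeping progressively until every flagged pixel is filled. A simplified single-scale sketch of that idea (the paper's method additionally uses wavelet thresholding, which is omitted here, and the detection mask is assumed given):

```python
import numpy as np

def remove_marked_pixels(frame, mask, window=3, max_iters=10):
    """Fill annotation pixels (mask == True) with the median of clean neighbours.

    Only flagged pixels are modified (the 'switching' behaviour); unflagged
    image content passes through untouched. Sweeps repeat so that large
    marked regions are filled progressively from their borders inward.
    """
    out = frame.astype(float).copy()
    clean = ~np.asarray(mask, bool)
    r = window // 2
    for _ in range(max_iters):
        if clean.all():
            break
        for y, x in np.argwhere(~clean):
            y0, y1 = max(0, y - r), min(out.shape[0], y + r + 1)
            x0, x1 = max(0, x - r), min(out.shape[1], x + r + 1)
            nb = out[y0:y1, x0:x1][clean[y0:y1, x0:x1]]
            if nb.size:                      # fill once clean neighbours exist
                out[y, x] = np.median(nb)
                clean[y, x] = True
    return out
```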
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186611 (2021) https://doi.org/10.1117/12.2601697
At present, spectral instruments are widespread in many spheres of human activity; they make it possible to obtain, as a function of wavelength, the distribution of the intensity of radiation arriving at the instrument input from the object under study. One method for obtaining spectral information is hyperspectral imaging. A hyperspectral image is a three-dimensional data array in which each pair of spatial coordinates is associated with a set of values distributed over the spectrum. A hyperspectral camera thus provides complete information about the spatial and spectral structure of the observed object. An important field of application of hyperspectral imaging is remote sensing of the Earth, which is widely used in areas such as surface monitoring, geodetic surveys and agriculture. In addition to spectral instruments, remote sensing of the Earth uses other optical observation methods (such as lidar), as well as radar and acoustic instruments. Unmanned aerial vehicles (UAVs) are used for regular monitoring; their advantage is that unmanned launches are significantly less expensive than manned flights. The main type of UAV currently used for a wide range of tasks is the quadcopter. Weight reduction is one of the key requirements in the development of payload equipment for UAVs. Another factor that determines the suitability of an instrument design for use on an aircraft is its structural rigidity and resistance to the vibrations arising from engine operation. Minimizing mass and increasing the rigidity and vibration resistance of the structure are thus the two main requirements, in addition to meeting the optical design requirements, that must be borne in mind when developing the design of a hyperspectral camera for installation on a quadcopter. The aim of this work is to develop a hyperspectral camera design that allows the system to be installed on a quadcopter.
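The three-dimensional data array described above can be shown directly: with a (rows, cols, bands) memory layout (the axis order and dimensions here are assumptions for illustration), one slice of the cube yields a pixel's full spectrum and another yields a single-band image.

```python
import numpy as np

# Hyperspectral cube: two spatial axes plus one spectral axis,
# stored as (rows, cols, bands) -- axis order chosen for illustration.
rows, cols, bands = 100, 120, 50
cube = np.random.rand(rows, cols, bands)

spectrum = cube[40, 60, :]   # spectrum at one spatial location
band_img = cube[:, :, 25]    # monochromatic image at one wavelength band
```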
Proceedings Volume Electro-Optical and Infrared Systems: Technology and Applications XVIII and Electro-Optical Remote Sensing XV, 1186612 (2021) https://doi.org/10.1117/12.2600499
A characteristic feature of the current stage of development of radio communication systems is the problem of increasing their security and resilience in transmitting information. Requirements for the secrecy of transmitted information are increasing for both military and civilian radio communication systems. Protecting information from unauthorized access and securing the connection rely on many different methods of hiding messages so that they are incomprehensible to an eavesdropper who has intercepted them. The paper presents a method for improved protection of information against unauthorized access by scrambling and descrambling on two levels. The first level acts directly on the primary signal, which carries the information in digital form. The second level performs a controlling function with respect to the random sequences, which change in time according to a given dependency defined by a code combination. The spectrum of the input information is expanded by switching among a number of pseudo-random sequences; the switching is controlled by another, similar sequence whose symbols last much longer. The proposed method has advantages over already-known methods of protection against unauthorized access in spread-spectrum transmission systems. A disadvantage of the system is the need for elements that coordinate and synchronize its information and control components. The method can be applied to modern spread-spectrum telecommunication systems with high requirements for protecting information against unauthorized access; one possible application is remote sensing, where the acquired data require such protection. This work is supported by the Bulgarian National Science Fund under Contract number KP-06-M27/2 (КП-06-М27/2).
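The two-level structure can be sketched in miniature: the data stream is XORed with a fast pseudo-random sequence, and a slower control sequence selects which fast sequence is active for each symbol interval. Because XOR is an involution, the same operation with identical, synchronized sequences descrambles, which also shows why the synchronization elements mentioned above are unavoidable. All sequence values below are illustrative, not taken from the paper:

```python
def scramble(chips, fast_seqs, slow_seq, chips_per_symbol):
    """Two-level scrambling sketch.

    chips            : list of data bits (0/1) at chip rate
    fast_seqs        : candidate fast pseudo-random bit sequences
    slow_seq         : control sequence; one symbol selects the fast
                       sequence for chips_per_symbol consecutive chips
    Applying the same function twice with identical, synchronized
    sequences recovers the original stream (XOR is its own inverse).
    """
    out = []
    for i, c in enumerate(chips):
        symbol = i // chips_per_symbol               # slow index advances per symbol
        seq = fast_seqs[slow_seq[symbol % len(slow_seq)]]
        out.append(c ^ seq[i % len(seq)])
    return out
```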