Cooled infrared detectors are typically characterized by well-known electro-optical parameters: responsivity, noise equivalent temperature difference, shot noise, 1/f noise, and so on. Particularly important for staring arrays is also the residual fixed pattern noise (FPN) that remains after the application of the nonuniformity correction (NUC) algorithm. A direct measure of this parameter is usually hard to define because the residual FPN strongly depends not only on the detector but also on the choice of the NUC algorithm and on the operating scenario. We introduce three measurable parameters: instability, nonlinearity, and the residual after a polynomial fit of the detector response curve, and we demonstrate how they are related to the residual FPN after the application of an NUC (the relationship with three common correction algorithms is discussed). A comparison with experimental data is also presented and discussed.
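As an illustration of the third parameter, the residual after a polynomial fit can be computed per pixel from a set of response measurements at different flux levels. The sketch below is a minimal example using synthetic, assumed numbers (flux levels, gain/offset/nonlinearity spread), not data from any real detector:

```python
import numpy as np

# Hypothetical per-pixel responses (counts) at five blackbody flux
# levels; all numbers are illustrative, not from a real detector.
flux = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rng = np.random.default_rng(0)
n_pix = 16
gain = 1.0 + 0.05 * rng.standard_normal(n_pix)    # pixel-to-pixel gain spread
offset = 0.02 * rng.standard_normal(n_pix)        # pixel-to-pixel offset spread
quad = 0.001 * rng.standard_normal(n_pix)         # mild nonlinearity
counts = np.outer(flux, gain) + offset + np.outer(flux**2, quad)

def poly_residual(flux, counts, degree=2):
    """RMS residual per pixel after fitting each pixel's response
    curve with a polynomial of the given degree."""
    coeffs = np.polyfit(flux, counts, degree)      # shape (degree+1, n_pix)
    fitted = np.vander(flux, degree + 1) @ coeffs  # same coefficient order
    return np.sqrt(np.mean((counts - fitted) ** 2, axis=0))

# Near zero here, since the synthetic response is exactly quadratic.
residual = poly_residual(flux, counts)
```

A pixel with a large residual is one whose response curve cannot be captured by the fitted polynomial, and hence contributes to the FPN left over after a polynomial-based NUC.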
Long-range imaging systems have applications in vessel traffic monitoring, border and coastal observation, and generic surveillance. Often, sign-reading and identification capabilities are required, and medium- or long-wave infrared systems are simply not the best solution for these tasks because of the low scene contrast. Among reflected-light imagers, the short-wave infrared (SWIR) has a competitive advantage over the visible and near-infrared spectrum, being less affected by path attenuation, scattering, and turbulence. However, predicting the long-range performance of a SWIR system still represents a challenge because of the need for accurate atmospheric modelling. In this paper, we present the key factors limiting performance in long-range applications and describe how we used popular atmospheric models to extract the synthetic simulation parameters needed for range performance prediction. We then present a case study for a long-range application, where the main requirement is to read a vessel name at distances greater than 10 km. The results show a significant advantage of SWIR over visible and near-infrared solutions for long-range identification tasks.
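As a simplified illustration of path attenuation, a horizontal-path Beer-Lambert model with a constant extinction coefficient already shows how band choice affects transmission at long range. The extinction values below are assumed for illustration only; they do not come from the atmospheric models used for the actual range prediction:

```python
import math

def path_transmission(extinction_km, range_km):
    """Beer-Lambert transmission over a horizontal path with a
    constant extinction coefficient (1/km)."""
    return math.exp(-extinction_km * range_km)

# Illustrative extinction coefficients (1/km) -- assumed values,
# not taken from any atmospheric database.
bands = {"VIS": 0.30, "NIR": 0.22, "SWIR": 0.12}
for band, k in bands.items():
    t = path_transmission(k, 10.0)   # 10 km path, as in the case study
    print(f"{band}: transmission at 10 km = {t:.3f}")
```

With these assumed coefficients, the SWIR band retains several times more signal over a 10 km path than the visible band, which is the mechanism behind its long-range advantage.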
Owing to the rapid growth of cooled-detector sensitivity in recent years, a 10-20 mK temperature difference between adjacent objects can theoretically be discerned in the image, provided the calibration (NUC) algorithm is capable of taking into account and compensating every spatial noise source. To predict how robust the NUC algorithm is in all working conditions, modeling the flux impinging on the detector becomes a key challenge for controlling and improving the quality of a properly calibrated image in all scene/ambient conditions, including every source of spurious signal. The papers available in the literature deal only with nonuniformity caused by pixel-to-pixel differences in detector parameters and by the difference between the reflection of the detector cold parts and the housing at the operating temperature. These models do not explain the effects on the NUC results due to vignetting, dynamic sources outside and inside the FOV, or reflected contributions from hot spots inside the housing (for example, a thermal reference far from the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are considered separately and represented by two independent transfer functions; 2) on every pixel of the array, the amount of photonic signal coming from the different spurious sources is considered in order to evaluate the effect on residual spatial noise due to dynamic operating conditions. This article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
The raw output of a generic infrared vision system based on staring arrays is spatially nonuniform. This spatial noise
can be much greater than the system NETD and causes a severe drop in system performance.
Therefore we need to model all system non-uniformity (NU) sources to highlight the parameters that should be
controlled by optical and mechanical design, those that depend on the focal plane array, and those that can be corrected
in post-processing.
In this paper, we identify the main NU sources (optical relative irradiance, housing straylight, detector pixel-to-pixel
differences, and nonlinearity), we show how to model these sources, and how they are related to the design and physical
parameters of the system. We then describe the total signal due to these sources at the detector output. By applying different
NUC algorithms to this signal, the final result on the image can be simulated, allowing a proper correction algorithm to be found.
Finally, we show the agreement between the model and experimental data taken on a real system.
By changing a limited set of parameters, this model can be applied to many third-generation thermal imager configurations.
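As a baseline for the correction step, a classic two-point NUC estimates a per-pixel gain and offset from two uniform blackbody reference frames. The sketch below is a minimal illustration with synthetic values, not the model or algorithm developed in the paper:

```python
import numpy as np

def two_point_nuc(raw, ref_cold, ref_hot, t_cold, t_hot):
    """Two-point non-uniformity correction: per-pixel gain and offset
    estimated from two uniform (blackbody) reference frames."""
    gain = (t_hot - t_cold) / (ref_hot - ref_cold)   # per-pixel gain
    offset = t_cold - gain * ref_cold                # per-pixel offset
    return gain * raw + offset

# Toy example: a 4x4 array with assumed pixel-to-pixel gain/offset spread.
rng = np.random.default_rng(1)
g = 1.0 + 0.1 * rng.standard_normal((4, 4))          # true per-pixel gains
o = 5.0 * rng.standard_normal((4, 4))                # true per-pixel offsets
scene = 42.0                                         # uniform scene level
ref_cold, ref_hot = g * 20.0 + o, g * 60.0 + o       # two reference frames
raw = g * scene + o                                  # raw, nonuniform output
corrected = two_point_nuc(raw, ref_cold, ref_hot, 20.0, 60.0)
```

In this idealized linear case the corrected frame is perfectly uniform; the residual NU sources modelled in the paper (straylight, nonlinearity, dynamic scene contributions) are precisely what breaks this ideal behaviour in practice.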
Third-generation thermal cameras have high dynamic range (up to 14 bits) and collect images that are difficult to visualize because their contrast exceeds the range of traditional display devices. Thus, sophisticated techniques are required to adapt the recorded signal to the display, maintaining, and possibly improving, objects' visibility and image contrast. The problem has already been studied in relation to images acquired in the visible spectral region, while it has been scarcely investigated in the infrared. In this work, this latter subject is addressed, and a new method is presented that combines dynamic-range compression and contrast enhancement techniques to improve the visualization of infrared images. The proposed method is designed to meet typical requirements in infrared sensor applications. The performance is studied through experimental data and compared with that yielded by three well-established algorithms. Evaluation is performed through subjective analysis, assigning each algorithm a score on the basis of the average opinion of human observers. The results demonstrate the effectiveness of the proposed technique in terms of perceptibility of details, edge sharpness, robustness against the horizon effect, and presence of very warm objects.
The visualization of IR images on traditional display devices is often complicated by their high dynamic range.
Classical dynamic range compression techniques based on simple linear mapping reduce the perceptibility of small
objects and often prevent the human observer from understanding some of the important details. Thus, more
sophisticated techniques are required to adapt the recorded signal to the monitor maintaining, and possibly
improving, object visibility and image contrast. The problem has already been studied with regard to images
acquired in the visible spectral domain, but it has been scarcely investigated in the IR domain. In this work, we
address this latter subject and propose a new method for IR dynamic range compression which stems from the
lesson learnt from existing techniques. First, we review the techniques proposed in the literature for contrast
enhancement and dynamic range compression of images acquired in the visible domain. Then, we present the new
algorithm which accounts for the specific characteristics of IR images. The performance of the proposed method is
studied on experimental IR data and compared with that yielded by two well-established algorithms.
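One common building block for IR dynamic range compression is plateau histogram equalization, which clips each histogram bin before building the cumulative mapping so that large uniform backgrounds (sky, sea) do not consume most of the output levels. The following sketch (14-bit input, 8-bit output, assumed plateau value) illustrates the idea; it is not the algorithm proposed in the paper:

```python
import numpy as np

def plateau_equalize(img14, plateau=200, out_levels=256):
    """Plateau histogram equalization: clip each histogram bin at
    `plateau` before building the cumulative mapping, limiting the
    output range given to large uniform regions."""
    hist, _ = np.histogram(img14, bins=2**14, range=(0, 2**14))
    hist = np.minimum(hist, plateau)                 # clip dominant bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[img14]                                # look-up per pixel

# Toy 14-bit frame: mostly uniform background plus a few warm pixels.
img = np.full((64, 64), 2000, dtype=np.int64)
img[30:34, 30:34] = 15000
out = plateau_equalize(img)
```

Without the plateau clip, the dominant background bin would absorb nearly the whole output range and the small warm object would end up barely separated from it.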
In this paper we study passive focusing techniques for infrared sensors. We present a survey of existing focus measures,
i.e. functionals that give an estimate of the quality of focus as a function of the lens position. We synthesize the material
proposed in the literature and show that all the approaches share the same general layout, differing only in the choice
of the filtering technique used to extract the image details. We present and discuss experimental results obtained on real
infrared data taken in many operating conditions. The experimental analysis aims at comparing the quality of the focus
measures and at evaluating their impact on the subsequent algorithm that searches for the best focus position of the lens. For
this purpose, we propose a comparative analysis based on three important properties of the focus measure: symmetry,
smoothness, and peakedness.
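A representative instance of the general layout described above is the gradient-energy (Tenengrad-like) focus measure: filter the image to extract detail, then sum the squared filter responses and pick the lens position that maximizes the result. The sketch below simulates defocus with a crude box blur; all values are illustrative:

```python
import numpy as np

def tenengrad(img):
    """Gradient-energy focus measure: sum of squared horizontal and
    vertical first differences. Higher values mean sharper focus."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx**2).sum() + (gy**2).sum())

def box_blur(img, passes):
    """Crude defocus model: repeated 3x3 mean filtering."""
    out = img.astype(np.float64)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

# Synthetic scene: checkerboard target; blur grows with distance
# from the (assumed) best lens position at index 2.
y, x = np.mgrid[0:32, 0:32]
scene = ((x // 4 + y // 4) % 2).astype(np.float64)
measures = [tenengrad(box_blur(scene, abs(pos - 2))) for pos in range(5)]
best = int(np.argmax(measures))
```

On this synthetic curve the measure peaks at the least-blurred position and, since the simulated blur is symmetric, satisfies the symmetry property discussed above; real IR data is where the smoothness and peakedness of different measures start to diverge.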
The development of a compact, high-performance MWIR step-zoom camera based on a 640x480 staring focal
plane array (FPA) is described. The camera has a 20x magnification step zoom ranging from 24°x20° for the wide
field of view to 1.2°x1° for the narrow field of view, with an aperture of F#4. The processing electronics is based on a
flexible and expandable architecture. Special emphasis is placed on the solutions adopted for the design of this high-zoom-ratio,
fast-optics FLIR and on the electronic architecture and algorithms for image processing. An overview of the
performance is given.