Multi-spectral sensor systems that record spatially and temporally registered video imagery have a variety of applications
depending on the spectral band employed and the number of colors available. The colors can be selected to highlight
physically meaningful portions of the image, and the resulting imagery can be used to decode relevant phenomenology.
For example, the images can be in spectral bands that identify materials that are intrinsic to the target while uncommon
in the background, providing an anomaly detection cue. These multi-spectral video sensor engines can also be employed
in conjunction with conventional fore-optics such as astronomical telescopes or microscopes to exploit useful
phenomenology at dissimilar scales. Here we explore the relevance of multi-spectral video in a space application. This
effort coupled a terrestrial multi-spectral video camera to an astronomical telescope. Data from a variety of objects in
Low Earth Orbit (LEO) were collected and analyzed both temporally, using light curves, and spectrally, using principal
component analysis (PCA). We find the spectral information is correlated with temporal information, and that the
spectral analysis adds the most value when the light curve period is long. The value of spectral-temporal signatures,
where the signature is the difference in either the harmonics or phase of the spectral light curves, is investigated with
inconclusive results.
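As an illustration of the spectral analysis step, the sketch below applies PCA to a set of synthetic multi-band light curves. The data layout, variable names, and the plain-SVD formulation are assumptions for illustration, not details taken from the study.

```python
# Minimal PCA sketch for multi-band light curves. Assumes the data form a
# matrix with one row per time sample and one column per spectral band;
# all names here are illustrative.
import numpy as np

def pca_light_curves(curves: np.ndarray, n_components: int = 2):
    """curves: (n_times, n_bands) array of band radiances vs. time."""
    # Remove the per-band mean so PCA captures variation about the mean.
    centered = curves - curves.mean(axis=0, keepdims=True)
    # SVD yields principal directions (rows of vt) and scores (u * s).
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]   # temporal weights
    components = vt[:n_components]                    # spectral shapes
    explained = s**2 / np.sum(s**2)                   # variance fractions
    return scores, components, explained[:n_components]

# Example: four-band light curve sampled at 100 epochs with a shared
# rotation signal, mimicking correlated spectral-temporal behavior.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 100)
base = np.sin(2 * np.pi * t / 4.0)
curves = np.outer(base, [1.0, 0.8, 0.5, 0.3]) + 0.05 * rng.standard_normal((100, 4))
scores, comps, frac = pca_light_curves(curves)
print("variance explained by PC1:", frac[0])
```

When the bands share a common temporal signal, as in the correlated case reported above, the first principal component absorbs most of the variance and its spectral shape summarizes the band-to-band relationship.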
Unlike straightforward registration problems encountered in broadband imaging, spectral imaging in fielded instruments
often suffers from a combination of imaging aberrations that make spatial co-registration of the images a challenging
problem. Depending on the sensor architecture, typical problems to be mitigated include differing focus, magnification,
and warping between the images in the various spectral bands due to optics differences; scene shift between spectral
images due to parallax; and scene shift due to temporal misregistration between the spectral images. However, spectral
images often contain scene commonalities that can be exploited in traditional ways. As a first step toward
automatic spatial co-registration for spectral images, we exploit manually selected scene commonalities to produce
transformation parameters in a four-channel spectral imager. The four bands consist of two mid-wave infrared channels
and two short-wave infrared channels. Each of the four bands is blurred, magnified, warped, and translated differently,
owing to differences in the focal lengths of the imaging optics. Centroid location techniques are applied to the scene
commonalities to generate sub-pixel positions for the fiducial markers used in the transformation polygons, and
conclusions are drawn about the effectiveness of such techniques in spectral imaging
applications.
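To make the registration step concrete, the sketch below fits a least-squares affine transform to matched fiducial centroids. The affine model (covering translation, magnification, and shear) and all names are illustrative assumptions; the paper's transformation polygons may use a different parameterization, and a higher-order polynomial would be needed where warping dominates.

```python
# Sketch: map fiducial centroids in one spectral band onto the reference
# band with an affine transform fit by least squares. Illustrative only.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src, dst: (n, 2) arrays of matched fiducial centroids (x, y).
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1]^T."""
    n = src.shape[0]
    design = np.hstack([src, np.ones((n, 1))])   # homogeneous coordinates
    # Solve the two linear systems (for x' and y') in a least-squares sense.
    coef, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coef.T

def apply_affine(a: np.ndarray, pts: np.ndarray) -> np.ndarray:
    return pts @ a[:, :2].T + a[:, 2]

# Example: a known scale-plus-shift transform recovered from four fiducials.
src = np.array([[10.0, 10.0], [100.0, 12.0], [15.0, 95.0], [98.0, 97.0]])
true_a = np.array([[1.02, 0.00, 3.5],
                   [0.00, 0.98, -2.0]])
dst = apply_affine(true_a, src)
a_hat = fit_affine(src, dst)
print(np.allclose(a_hat, true_a))  # True
```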
Detecting, locating, and identifying unknown, uncued energetic events within a large field of view is a common operational requirement for many staring sensors. The traditional imaging approach involves forming an image of an extended scene and then rejecting background clutter. However, some important targets can be limited to a class of energetic, transient, point-like events, such as explosions, that embed key discriminants within their emitted, temporally varying spectra; for such events it is possible to create an alternative sensor architecture tuned specifically to these objects of interest. The resulting sensor operation, called pseudo imaging, includes: optical components designed to encode the scene information such that the spectral-temporal signature from the event and its location are easily derived; and signal processing intrinsic to the sensor to declare the presence of an event, locate the event, extract the event's spectral-temporal signature, and match the signature to a library in order to identify the event.
This treatise defines pseudo imaging, including formal specifications and requirements. Two examples of pseudo imaging sensors are presented: a sensor based on a spinning prism, and a sensor based on an optical element called a Crossed Dispersion Prism. The sensors are described, including how the sensors fulfill the definition of pseudo imaging, and measured data is presented to demonstrate functionality.
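As a hedged sketch of the final matching step, the snippet below scores an extracted spectral-temporal signature against a small library using normalized correlation. The library layout and the correlation metric are assumptions for illustration; the text does not specify the matching algorithm.

```python
# Sketch: match a spectral-temporal signature to a library by normalized
# correlation. Templates must share the signature's (n_bands, n_times)
# shape. Illustrative, not the sensor's actual algorithm.
import numpy as np

def match_signature(sig: np.ndarray, library: dict[str, np.ndarray]) -> tuple[str, float]:
    """sig: (n_bands, n_times) signature; library maps event name -> template."""
    def normalize(x):
        x = x - x.mean()
        return x / (np.linalg.norm(x) + 1e-12)
    v = normalize(sig).ravel()
    scores = {name: float(np.dot(v, normalize(tmpl).ravel()))
              for name, tmpl in library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Example with a tiny two-entry library of 4-band x 50-sample templates.
rng = np.random.default_rng(1)
lib = {"event_a": rng.random((4, 50)), "event_b": rng.random((4, 50))}
sig = lib["event_b"] + 0.1 * rng.standard_normal((4, 50))
print(match_signature(sig, lib))  # expect ("event_b", high score)
```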
Spectral imaging is the art of quantifying the spectral and spatial characteristics of a scene. The current state of the art in spectral imaging comprises a wide range of applications and sensor designs. At the extremes are spectrometers with high spectral sampling over a limited number of imaging pixels and those with little spectral sampling over a large number of pixels. The predominant technical issue concerns the acquisition of the three-dimensional spectral imagery (X, Y, λ) using an inherently two-dimensional imaging array; consequently, some form of multiplexing must be implemented. This paper will discuss a new class of sensors, broadly referred to as Spectral Temporal Sensors (STS), which capture the position and spectra of uncued point sources anywhere in the optical field. These sensors have large numbers of pixels (>512×512) and colors (>50). They can be used to sense explosions, combustion, rocket plumes, LASERs, LEDs, LASER/LED excitations, and the outputs of fiber optic cables. This paper will highlight recent developments on an STS that operates in a Pseudo-imaging (PI) mode, where the location of an uncued dynamic event and its spectral evolution in time are the data products. Here we focus on the sensor's ability to locate the event to within approximately 1/20th of a pixel; however, we will also discuss its capability to fully characterize an event's spectral-temporal signature at rates greater than 100 Hz over a large field of view (greater than 30°).
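The sub-pixel localization claim can be illustrated with a simple intensity-weighted centroid over a window around the brightest pixel, as sketched below. The window size, background handling, and the centroid estimator itself are assumptions; the sensor's actual estimator (reported to reach roughly 1/20th of a pixel) is not specified here.

```python
# Sketch: sub-pixel event location by intensity-weighted centroiding over
# a small window around the brightest pixel. Illustrative assumptions only.
import numpy as np

def subpixel_centroid(frame: np.ndarray, half: int = 3) -> tuple[float, float]:
    """Return (row, col) of the event centroid with sub-pixel precision."""
    r0, c0 = np.unravel_index(np.argmax(frame), frame.shape)
    top, left = max(r0 - half, 0), max(c0 - half, 0)
    win = frame[top:r0 + half + 1, left:c0 + half + 1].astype(float)
    win = np.clip(win - np.median(win), 0.0, None)  # crude background removal
    rows, cols = np.mgrid[0:win.shape[0], 0:win.shape[1]]
    total = win.sum()
    return (rows * win).sum() / total + top, (cols * win).sum() / total + left

# Example: bright PSF centered at (17.3, 24.6) on a noisy frame.
yy, xx = np.mgrid[0:64, 0:64]
frame = np.exp(-0.5 * (((yy - 17.3) / 1.5) ** 2 + ((xx - 24.6) / 1.5) ** 2))
frame += 0.01 * np.random.default_rng(2).standard_normal(frame.shape)
print(subpixel_centroid(frame))  # close to (17.3, 24.6)
```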
A common technique in the design of a spectral imaging system is the use of a prism as the dispersive element to disperse the colors onto the focal plane. Such a system can serve as a spectrometer for point events with minimal computational load because the spectrum of a point event is spread directly across the FPA. In this case, the fidelity of the spectrum depends on several factors, including the relative orientation between prism and FPA; the relative sizes of the pixel pitch and the sensor point spread function; and the algorithms used to determine spectral calibration and content. In this paper, we elucidate methods for extracting the spectral data from the two-dimensional array of measurements, including the use of radial basis functions, and demonstrate the procedure with data from a spectral imager.
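The radial-basis-function extraction idea can be sketched as follows: pixel samples along the dispersed trace are interpolated with an RBF onto a uniform wavelength grid. The calibration curve, variable names, and the use of scipy's RBFInterpolator are assumptions standing in for whatever RBF formulation the paper employs.

```python
# Sketch: resample intensities measured along a dispersed trace onto a
# uniform wavelength grid using radial basis function interpolation.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Measured samples: position along the dispersion axis (pixels) and intensity.
pos = np.linspace(0.0, 50.0, 51)                      # pixel centers on the trace
intensity = np.exp(-0.5 * ((pos - 20.0) / 4.0) ** 2)  # synthetic point-event spectrum

# Spectral calibration: pixel position -> wavelength (um); a hypothetical
# smooth prism dispersion curve, nonlinear as prism dispersion typically is.
wavelength = 3.0 + 0.02 * pos + 2e-4 * pos**2

# Fit an RBF to intensity as a function of wavelength, then evaluate it
# on a uniform wavelength grid to obtain the extracted spectrum.
rbf = RBFInterpolator(wavelength[:, None], intensity, kernel="thin_plate_spline")
grid = np.linspace(wavelength.min(), wavelength.max(), 200)
spectrum = rbf(grid[:, None])
print(spectrum.shape)  # (200,)
```

The RBF handles the irregular wavelength spacing that a nonlinear dispersion curve produces, which is one reason such interpolants suit prism-based instruments.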