We evaluate recently developed randomized matrix decomposition methods for fast lossless compression and reconstruction of hyperspectral imaging (HSI) data. Simple random projection methods have been shown to be effective for lossy compression without severely affecting the performance of object identification and classification. We build upon these methods to develop a new double-random projection method that may enable secure transmission of the compressed data. For HSI data, the distribution of elements in the resulting residual matrix, i.e., the original data minus its low-rank representation, exhibits low entropy relative to the original data, which favors a high compression ratio. We show both theoretically and empirically that randomized methods combined with residual-coding algorithms can lead to effective lossless compression of HSI data. Numerical tests on large-scale real HSI data show promising results. In addition, we show that the randomized techniques are suitable for encoding on resource-constrained on-board sensor systems, where the core matrix-vector multiplications can be easily implemented on computing platforms such as graphics processing units or field-programmable gate arrays.
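As a rough illustration of the ingredients named above (not the authors' implementation), the sketch below builds a rank-k basis from a single Gaussian random projection, rounds the low-rank part, and measures the empirical entropy of the integer residual that a residual coder would compress. The toy data, the choice of rank k, and the entropy estimate are all illustrative assumptions.

```python
import numpy as np

def randomized_lossless_split(A, k, seed=0):
    """Split integer data A into a rank-k sketch (Q, B) and an integer
    residual R so that A = rint(Q @ B) + R exactly (lossless)."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], k))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ omega)                 # orthonormal range basis
    B = Q.T @ A                                    # small k x n core matrix
    R = A - np.rint(Q @ B)                         # integer residual to entropy-code
    return Q, B, R

def empirical_entropy_bits(x):
    """Shannon entropy (bits per element) of the integer-valued array x."""
    _, counts = np.unique(x.astype(np.int64), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy stand-in for an HSI matrix (pixels x bands): an approximately rank-8
# nonnegative mixture of spectral signatures plus small integer noise.
rng = np.random.default_rng(1)
A = (np.rint(rng.random((2000, 8)) @ rng.random((8, 128)) * 200)
     + rng.integers(0, 3, (2000, 128)))

Q, B, R = randomized_lossless_split(A, k=8)
assert np.array_equal(np.rint(Q @ B) + R, A)       # exact reconstruction
print("entropy of original:", empirical_entropy_bits(A))
print("entropy of residual:", empirical_entropy_bits(R))
```

On approximately low-rank integer data of this kind, the printed residual entropy comes out well below that of the original matrix, which is the property the lossless scheme exploits.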
Nonnegative matrix factorization and its variants are powerful techniques for the analysis of hyperspectral images (HSI). Nonnegative matrix underapproximation (NMU) is a recent, closely related model that imposes additional underapproximation constraints, enabling the extraction of features (e.g., abundance maps in HSI) in a recursive way while preserving nonnegativity. We propose to further improve NMU by using spatial information: we incorporate into the model the fact that neighboring pixels are likely to contain the same materials. This approach thus incorporates structural and textural information from neighboring pixels. We use an ℓ1-norm penalty term, which is better suited to preserving sharp changes, and solve the corresponding optimization problem using iteratively reweighted least squares. The effectiveness of the approach is illustrated on the real-world Cuprite dataset.
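As a minimal sketch of the optimization machinery mentioned above (iteratively reweighted least squares for an ℓ1-norm penalty on differences between neighboring entries), the code below solves a generic 1-D problem. It deliberately omits the nonnegativity and underapproximation constraints of the full NMU model, and the operator sizes, penalty weight, and toy data are assumptions.

```python
import numpy as np

def irls_l1_smooth(A, b, lam=1.0, n_iter=30, eps=1e-6):
    """Solve  min_x ||A x - b||^2 + lam * ||D x||_1  by IRLS,
    where D is the first-difference operator linking neighboring entries
    (a 1-D stand-in for the neighboring-pixel coupling)."""
    n = A.shape[1]
    D = (np.eye(n, k=1) - np.eye(n))[:-1]    # forward differences, (n-1) x n
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)      # reweighting: |t| ~ w * t^2
        H = A.T @ A + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, A.T @ b)
    return x

# Toy example: recover a piecewise-constant "abundance" profile from noisy
# random measurements; the l1 difference penalty preserves the sharp jump.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(50), np.ones(50)])
A = rng.standard_normal((80, 100))
b = A @ x_true + 0.05 * rng.standard_normal(80)
x_hat = irls_l1_smooth(A, b, lam=2.0)
print("max abs error:", np.abs(x_hat - x_true).max())
```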
This work describes numerical methods for the joint reconstruction and segmentation of spectral images taken by compressive-sensing coded aperture snapshot spectral imagers (CASSI). In a snapshot, a CASSI captures a two-dimensional (2D) array of measurements that is an encoded representation of both the spectral information and the 2D spatial information of a scene, resulting in significant savings in acquisition time and data storage. The double-disperser coded aperture snapshot spectral imager (DD-CASSI) captures an encoded image from which the original hyperspectral cube is recovered by solving a highly underdetermined inverse problem with regularization terms such as total variation minimization. The reconstruction process decodes the 2D measurements to render a three-dimensional spatio-spectral estimate of the scene and is therefore an indispensable component of the spectral imager. In this study, we seek a particular form of the compressed sensing solution that assumes spectrally homogeneous segments in the two spatial dimensions, which greatly reduces the number of unknowns. The proposed method generalizes popular active-contour segmentation algorithms such as the Chan-Vese model and enables one to jointly estimate both the segmentation membership functions and the spectral signatures of each segment. The results are illustrated on a simulated Hubble Space Telescope hyperspectral dataset, a real urban hyperspectral dataset, and a real DD-CASSI microscopy image.
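To make the reduction in unknowns concrete, the sketch below parameterizes a tiny synthetic cube as segment memberships times per-segment spectral signatures and recovers the signatures from compressive measurements by least squares. A Gaussian matrix stands in for the DD-CASSI encoding, the memberships are held fixed rather than jointly estimated as in the paper, and all dimensions are illustrative assumptions.

```python
import numpy as np

# Tiny synthetic scene: H x W pixels, L spectral bands, K spectrally
# homogeneous segments (all sizes are illustrative).
H, W, L, K = 16, 16, 24, 3
rng = np.random.default_rng(0)

# Ground truth: hard segment labels (three roughly equal bands of pixels)
# and one spectral signature per segment.
labels = np.arange(H * W) * K // (H * W)
U_true = np.eye(K)[labels]                           # (HW, K) memberships
S_true = rng.random((K, L))                          # (K, L) signatures
X = U_true @ S_true                                  # (HW, L) hyperspectral cube

# Compressive measurements through a generic linear operator Phi
# (a Gaussian matrix standing in for the DD-CASSI encoding).
M = 4 * H * W                                        # far fewer than HW*L unknowns
Phi = rng.standard_normal((M, H * W * L)) / np.sqrt(M)
y = Phi @ X.ravel()

# With memberships assumed known, the cube is parameterized by only K*L
# spectral unknowns: vec(X) = kron(U, I_L) @ vec(S) in row-major order.
A = Phi @ np.kron(U_true, np.eye(L))                 # (M, K*L) reduced system
S_hat = np.linalg.lstsq(A, y, rcond=None)[0].reshape(K, L)
print("signature error:", np.abs(S_hat - S_true).max())
```

With K segments and L bands the spectral unknowns number only K*L (72 here) instead of H*W*L (6144 here), which is the reduction the abstract refers to; the full method additionally evolves the memberships.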
With a multi-lenslet camera, we can capture multiple low-resolution (LR) images of the same scene and use them to reconstruct a high-resolution (HR) image. For this purpose, two major computational problems need to be solved: image registration and super-resolution (SR) reconstruction. For the first, one major hurdle is estimating spatially variant shifts: objects in a scene are often at different depths and, due to parallax, shifts between imaged objects often vary on a pixel basis. This poses a great computational challenge, as the problem is NP-complete. A multi-lenslet camera with a single focal plane provides a unique opportunity to take advantage of the parallax phenomenon and to relate object depths directly to their shifts. This essentially reduces the parameter space from a two-dimensional (x, y) space to a one-dimensional depth space, which greatly reduces the computational cost. As a result, not only are the LR images registered, but the estimated depth map can also be valuable for other applications. After registration, the LR images along with the estimated shifts can be used to reconstruct an HR image. A previously developed algorithm is employed to efficiently compute a large HR image of size 1024×1024.
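The sketch below illustrates the one-dimensional depth search described above under a simplified pinhole-parallax model (shift proportional to baseline over depth, integer pixel rolls, a per-pixel photometric-variance cost). The baselines, focal scaling, and toy scene are assumptions and are not taken from the paper.

```python
import numpy as np

def register_by_depth(images, baselines, depths, focal=1.0):
    """Pick, per pixel, the depth whose parallax-induced shifts make the
    lenslet images most consistent. The shift of lenslet j at depth d is
    modeled as focal * baseline_j / d (simplified pinhole parallax)."""
    ref = images[0]
    best_cost = np.full(ref.shape, np.inf)
    best_depth = np.zeros(ref.shape)
    for d in depths:                                   # 1-D search over depth only
        stack = []
        for img, b in zip(images, baselines):
            shift = focal * b / d                      # horizontal parallax (pixels)
            stack.append(np.roll(img, int(np.rint(shift)), axis=1))
        cost = np.var(np.stack(stack), axis=0)         # photometric consistency
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth

# Toy test: a textured scene viewed by three lenslets with different
# baselines; objects at depth 5 shift by baseline/5 pixels between views.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
baselines = [0.0, 30.0, 60.0]
views = [np.roll(scene, -int(round(b / 5.0)), axis=1) for b in baselines]
depth_map = register_by_depth(views, baselines, depths=np.arange(2, 10, 0.5))
print("median estimated depth:", np.median(depth_map))
```

Real LR images would need subpixel interpolation rather than integer rolls, and the cost would normally be aggregated over small blocks instead of single pixels.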
An important aspect of spectral image analysis is identification of materials present in the object or scene being
imaged. Enabling technologies include image enhancement, segmentation, and spectral trace recovery. Since multi-spectral or hyperspectral imagery generally has low spatial resolution, individual pixels in the image may contain several materials. Also, noise and blur can present significant data analysis problems. In this paper,
we first describe a variational fuzzy segmentation model coupled with a denoising/deblurring model for material
identification. A statistical moving average method for segmentation is also described. These new approaches
are then tested and compared on hyperspectral images associated with space object material identification.
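As a generic stand-in for fuzzy segmentation of pixel spectra (not the variational model of the paper, and without the coupled denoising/deblurring), the sketch below runs plain fuzzy c-means on a toy mixed-pixel dataset; all sizes and parameters are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, K, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on the rows of X (pixels x bands).
    Returns soft memberships U (pixels x K) and cluster spectra C (K x bands)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], K))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to one
    for _ in range(n_iter):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted cluster spectra
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U = 1.0 / d2 ** (1.0 / (m - 1))               # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, C

# Toy mixed-pixel test: two materials with distinct spectra plus noise.
rng = np.random.default_rng(1)
spectra = rng.random((2, 30))
labels = rng.integers(0, 2, 500)
X = spectra[labels] + 0.02 * rng.standard_normal((500, 30))
U, C = fuzzy_cmeans(X, K=2)
print("hard-label agreement:", max(
    (U.argmax(1) == labels).mean(), (U.argmax(1) != labels).mean()))
```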
An integrated array computational imaging system, dubbed PERIODIC, is presented that is capable of exploiting diverse optical information, including sub-pixel displacements, phase, polarization, intensity, and wavelength. Several applications of this technology are presented, including digital superresolution, enhanced dynamic range, and multi-spectral imaging. Other applications include polarization-based dehazing, extended depth of field, and 3D imaging. The optical hardware system and software algorithms are described, and sample results are
shown.
Analytical approximations of translational subpixel shifts in both signal and image registration are derived by setting the derivatives of a normalized cross-correlation function to zero and solving them. Without the need for iterative searching, this method achieves a complexity of only O(mn) for an image of size m × n. Because no upsampling is required, computational memory is also saved. Tests using simulated signals and images show good results.
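A hedged sketch of the idea: compute the normalized cross-correlation at integer lags, then obtain the subpixel part in closed form by setting the derivative of a quadratic fitted through the peak to zero. The quadratic-fit variant is an assumption standing in for the paper's exact derivation, and the test signal is synthetic.

```python
import numpy as np

def subpixel_shift_1d(a, b, max_lag=10):
    """Estimate the translational shift of b relative to a.
    Integer part: peak of the normalized cross-correlation over integer lags.
    Subpixel part: closed-form zero of the derivative of a quadratic fitted
    through the peak and its two neighbors."""
    def ncc(u, v):
        u = (u - u.mean()) / u.std()
        v = (v - v.mean()) / v.std()
        return (u * v).mean()
    lags = np.arange(-max_lag, max_lag + 1)
    c = np.array([ncc(a[max_lag:-max_lag], np.roll(b, -k)[max_lag:-max_lag])
                  for k in lags])
    i = int(np.argmax(c))
    # Quadratic through (i-1, i, i+1): its derivative vanishes at the vertex.
    delta = 0.5 * (c[i - 1] - c[i + 1]) / (c[i - 1] - 2 * c[i] + c[i + 1])
    return lags[i] + delta

# Toy test: a smooth signal shifted by 3.3 samples (linear interpolation).
x = np.linspace(0, 8 * np.pi, 400)
a = np.sin(x) + 0.3 * np.sin(2.7 * x)
true_shift = 3.3
b = np.interp(np.arange(400) - true_shift, np.arange(400), a)
print("estimated shift:", subpixel_shift_1d(a, b))
```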