The Hyperspectral Image Analysis Toolbox (HIAT) is a collection of algorithms that extend the capability of
the MATLAB numerical computing environment for the processing of hyperspectral and multispectral imagery.
The purpose of the Toolbox is to provide a suite of information extraction algorithms to users of hyperspectral
and multispectral imagery. HIAT has been developed as part of the NSF Center for Subsurface Sensing and
Imaging (CenSSIS) Solutionware that seeks to develop a repository of reliable and reusable software tools that can be shared by researchers across research domains. HIAT provides easy access to feature extraction/selection,
supervised and unsupervised classification algorithms, unmixing, and visualization tools developed at the Laboratory of Remote Sensing and Image Processing (LARSIP). This paper presents an overview of the tools and applications available in HIAT, using an AVIRIS image as an example. In addition, we present the new HIAT developments: unmixing, a new oversampling algorithm, true-color visualization, a crop tool, and GUI enhancements.
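As a rough illustration of the kind of visualization HIAT offers, the sketch below builds a true-color composite by picking the bands nearest to nominal red, green, and blue wavelengths and stretching them for display. It is a minimal Python stand-in, not HIAT's MATLAB implementation; the wavelength targets and the percentile stretch are assumptions.

```python
# Minimal sketch of a true-color composite from a hyperspectral cube,
# similar in spirit to HIAT's true-color visualization (not HIAT's actual code).
# Assumes `cube` is a (rows, cols, bands) float array and `wavelengths`
# lists each band's center wavelength in nanometers.
import numpy as np

def true_color(cube, wavelengths, rgb_nm=(640.0, 550.0, 470.0)):
    wavelengths = np.asarray(wavelengths, dtype=float)
    # Pick the band closest to each target wavelength.
    idx = [int(np.argmin(np.abs(wavelengths - w))) for w in rgb_nm]
    rgb = cube[:, :, idx].astype(float)
    # 2%-98% percentile stretch per channel for display.
    for c in range(3):
        lo, hi = np.percentile(rgb[:, :, c], (2, 98))
        rgb[:, :, c] = np.clip((rgb[:, :, c] - lo) / (hi - lo + 1e-12), 0, 1)
    return rgb
```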
A texture-based method for classification of land cover and benthic habitats from hyperspectral and multispectral images is presented. The features considered in this work are a set of statistical and multiresolution texture features, including a 2-level wavelet transform that uses an orthonormal Daubechies filter. These features are computed over spatial extents from each band of the image. A stepwise sequential feature selection process is applied that results in the selection of optimal features from the original feature set. A supervised classification is performed with a distance metric. Results with AVIRIS hyperspectral and IKONOS multispectral images show that texture features perform well under different land cover scenarios and are effective in characterizing the texture information at different wavelengths. Results over coastal regions show that wavelet texture features computed over the reflectance spectrum can accurately detect the benthic classes.
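A minimal sketch of the kind of per-band texture features described above, combining simple window statistics with subband energies from a 2-level orthonormal Daubechies wavelet decomposition. The window size, the 'db4' filter, and the energy summaries are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of per-band texture features: local statistics plus energies of a
# 2-level Daubechies wavelet decomposition, computed over a spatial window
# (e.g. a 32x32 patch taken from one band of the image).
import numpy as np
import pywt

def texture_features(window):
    """window: 2-D array taken from one band of the image."""
    feats = [window.mean(), window.std()]           # simple statistical features
    coeffs = pywt.wavedec2(window, 'db4', level=2)  # 2-level Daubechies transform
    # Energy of each detail subband (cH, cV, cD at levels 2 and 1).
    for detail in coeffs[1:]:
        feats.extend(np.mean(np.square(d)) for d in detail)
    return np.array(feats)
```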
Benthic habitats are the different bottom environments as defined by distinct physical, geochemical, and biological
characteristics. Remote sensing is increasingly being used to map and monitor the complex dynamics associated with
estuarine and nearshore benthic habitats. Advantages of remote sensing technology include both the qualitative benefits
derived from a visual overview, and more importantly, the quantitative abilities for systematic assessment and
monitoring. Advancements in instrument capabilities and analysis methods are continuing to expand the accuracy and
level of effectiveness of the resulting data products. Hyperspectral sensors in particular are rapidly emerging as a more
complete solution, especially for the analysis of subsurface shallow aquatic systems. The spectral detail offered by
hyperspectral instruments facilitates significant improvements in the capacity to differentiate and classify benthic
habitats. This paper reviews two techniques for mapping shallow coastal ecosystems that both combine the retrieval of
water optical properties with a linear unmixing model to obtain classifications of the seafloor. Example output using
AVIRIS hyperspectral imagery of Kaneohe Bay, Hawaii, is employed to demonstrate the application potential of the two
approaches and compare their respective results.
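The linear unmixing step common to both techniques can be sketched as follows: once a bottom-reflectance spectrum has been retrieved, it is decomposed into fractions of benthic endmembers by non-negative least squares. The endmember library, the class names, and the sum-to-one normalization are assumptions for illustration, not the exact formulation reviewed in the paper.

```python
# Minimal sketch of the linear unmixing step: a retrieved bottom-reflectance
# spectrum is decomposed into fractions of benthic endmembers (e.g. coral, sand,
# algae) by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def unmix_bottom(r_bottom, endmembers):
    """r_bottom: (n_bands,) retrieved bottom reflectance.
    endmembers: (n_bands, n_classes) library of benthic spectra."""
    fractions, residual = nnls(endmembers, r_bottom)
    total = fractions.sum()
    if total > 0:
        fractions = fractions / total   # normalize to fractional abundances
    return fractions, residual
```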
A fundamental challenge to Remote Sensing is mapping the ocean floor in coastal shallow waters where variability, due to the interaction between the coast and the sea, can bring significant disparity in the
optical properties of the water column. The objects to be detected (coral reefs, sand, and submerged aquatic vegetation) have weak signals with temporal and spatial variation. In real scenarios, the absorption and backscattering coefficients vary spatially due to different sources of variability (river discharge, different depths of shallow waters, water currents) and exhibit temporal fluctuations. This paper presents the development of algorithms for retrieving information and their application to the recognition, classification, and mapping of objects under coastal shallow waters. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium, and the sensor. The retrieval of information requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction, and detection. The algorithms developed were applied to one set of remotely sensed data: a high-resolution HYPERION hyperspectral image. An inverse problem arises as this spectral data is used for mapping the floor of shallow ocean waters. The Tikhonov regularization method was used in the inversion process to estimate the bottom albedo of the ocean floor using a priori information in the form of stored, previously measured spectral signatures of objects of interest, such as sand, corals, and sea grass.
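A minimal sketch of a Tikhonov-regularized inversion toward an a priori spectrum, in the spirit of the bottom-albedo retrieval described above. The linearized forward operator A, the prior x0 (a stored library spectrum such as sand or coral), and the regularization parameter lam are illustrative assumptions.

```python
# Hedged sketch of Tikhonov-regularized inversion toward a prior spectrum:
# minimize ||A x - b||^2 + lam^2 ||x - x0||^2, solved in closed form.
import numpy as np

def tikhonov_inverse(A, b, x0, lam=0.1):
    """A: (n_obs, n_bands) linearized forward operator; b: observed signal;
    x0: a priori library spectrum used to bias the solution."""
    n = A.shape[1]
    lhs = A.T @ A + (lam ** 2) * np.eye(n)
    rhs = A.T @ (b - A @ x0)
    return x0 + np.linalg.solve(lhs, rhs)
```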
This paper presents a method for combining spectral and spatial features to perform hyperspectral image classification. Texture-based spatial features computed from statistics, wavelet multiresolution analysis, the Fourier spectrum, and Gabor filters are considered. A stepwise feature selection method selects an optimal set of features from the combined feature set. A comparison of the different spatial features for improving hyperspectral image classification is presented. The results show that wavelet-based features and statistical features perform best. The effect of band subset selection on the combined feature set, using information-based subset selection methods, is also presented. Several results with hyperspectral images show the efficacy of utilizing spatial features.
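As one example of the spatial feature families compared, the sketch below computes Gabor-filter responses for a single band and summarizes each response by its mean and standard deviation. The frequencies and orientations are arbitrary illustrative choices, not the paper's parameters.

```python
# Illustrative sketch of Gabor-filter spatial features for one band.
import numpy as np
from skimage.filters import gabor

def gabor_features(band, frequencies=(0.1, 0.25), n_orientations=4):
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(band, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)        # response magnitude
            feats.extend([magnitude.mean(), magnitude.std()])
    return np.array(feats)
```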
The Hyperspectral Image Analysis Toolbox (HIAT) is a collection of algorithms that extend the capability of the MATLAB numerical computing environment for the processing of hyperspectral and multispectral imagery. The purpose of the HIAT Toolbox is to provide information extraction algorithms to users of hyperspectral and multispectral imagery in environmental and biomedical applications. HIAT has been developed as part of the NSF Center for Subsurface Sensing and Imaging (CenSSIS) Solutionware that seeks to develop a repository of reliable and reusable software tools that can be shared by researchers across research domains. HIAT provides easy access to supervised and unsupervised classification algorithms developed at LARSIP over the last 8 years.
This paper evaluates the performance of five cluster validity indices, previously presented in the literature, for the Fuzzy C-Means (FCM) clustering algorithm. The first two indices, the Fuzzy Partition Coefficient (PC) and the Fuzzy Partition Entropy Coefficient (PEC), select the number of clusters for which the fuzzy partition is most "crisp-like", or least fuzzy. The other three indices, the Fuzzy Davies-Bouldin Index (FDB), the Xie-Beni Index (XB), and Index I, choose the number of clusters that maximizes the inter-cluster separation and minimizes the within-cluster scatter. A modification to these three indices is proposed based on the Bhattacharyya distance between clusters. The results show that this modification improves upon the performance of Index I. On the data sets presented in this paper, the modified FDB and XB indices performed adequately.
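For reference, the Xie-Beni index evaluated here can be computed from an FCM run as the fuzzy within-cluster scatter divided by n times the minimum squared distance between centers; a lower value indicates a better partition. The sketch below shows the standard index only, not the proposed Bhattacharyya-based modification.

```python
# Minimal sketch of the Xie-Beni (XB) validity index from an FCM run.
import numpy as np

def xie_beni(X, U, V, m=2.0):
    """X: (n, d) data; U: (c, n) fuzzy memberships; V: (c, d) cluster centers."""
    n = X.shape[0]
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)   # (c, n) squared distances
    compactness = np.sum((U ** m) * d2)                       # fuzzy within-cluster scatter
    center_d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(center_d2, np.inf)
    separation = center_d2.min()                              # closest pair of centers
    return compactness / (n * separation)
```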
Hyperspectral imagery provides high spectral and spatial resolution that can be used to discriminate between objects and clutter occurring in subsurface remote sensing for applications such as environmental monitoring and biomedical imaging. We look at using a noncausal autoregressive Gauss-Markov Random Field (GMRF) model to model clutter produced by a scattering medium for subsurface estimation, classification, and detection problems. The GMRF model has the advantage that the clutter covariance depends on only 4 parameters regardless of the number of bands used. We review the model and parameter estimation methods using least squares and approximate maximum likelihood. Experimental and simulation model identification results are presented. Experimental data are generated using a subsurface testbed where an object is placed at the bottom of a fish tank filled with water mixed with TiO2 to simulate a mild to high scattering environment. We show that, for the experimental data, least-squares estimates produce good models for the clutter. When used in a subsurface classification problem, the GMRF model results in better broad classification, with some loss of spatial structure detail, when compared to spectral-only classification.
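The least-squares fit mentioned above can be illustrated, for a single band and a first-order noncausal neighborhood, by regressing each pixel on the sums of its horizontal and vertical neighbors. This is a simplified sketch of the estimation idea only, not the paper's exact multiband 4-parameter model.

```python
# Illustrative least-squares fit of a first-order noncausal GMRF to one band:
# x(s) ~ bh*(left + right) + bv*(up + down) + e(s).
import numpy as np

def gmrf_ls_fit(img):
    x = img - img.mean()                            # zero-mean field
    center = x[1:-1, 1:-1].ravel()
    horiz = (x[1:-1, :-2] + x[1:-1, 2:]).ravel()    # left + right neighbors
    vert = (x[:-2, 1:-1] + x[2:, 1:-1]).ravel()     # up + down neighbors
    Q = np.column_stack([horiz, vert])
    beta, *_ = np.linalg.lstsq(Q, center, rcond=None)
    resid = center - Q @ beta
    sigma2 = resid.var()                            # driving-noise variance estimate
    return beta, sigma2
```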
Hyperspectral Remote Sensing has the potential to be used as an effective coral monitoring system from either space or airborne sensors. The problems to be addressed in hyperspectral imagery of coastal waters are related to the medium, which presents high scattering and absorption, and to the object to be detected. The object to be detected, in this case coral reefs or different types of ocean floor, has a weak signal as a consequence of its interaction with the medium. The retrieval of information about these targets requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction, and detection. This paper presents the development of algorithms that do not use labeled samples to detect coral reefs under coastal shallow waters. Synthetic data was generated to simulate data gathered using a high-resolution imaging spectrometer (hyperspectral) sensor. A semi-analytic model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium, and the sensor. The Tikhonov regularization method was used as a starting point to arrive at an inverse formulation that incorporates a priori information about the target. This expression is used in an inversion process on a pixel-by-pixel basis to estimate the ocean floor signal. The a priori information is in the form of previously measured spectral signatures of objects of interest, such as sand, corals, and sea grass.
Feature extraction, implemented as a linear projection from a higher dimensional space to a lower dimensional subspace, is a very important issue in hyperspectral data analysis. This reduction must be done in a manner that minimizes redundancy while maintaining the information content. This paper proposes methods for feature extraction and band subset selection based on relative entropy criteria. The main objective of the feature extraction and band selection methods implemented is to reduce the dimensionality of the data while maintaining the capability of discriminating objects of interest from the cluttered background. These methods accomplish this goal by maximizing the difference between the data distribution of the lower dimensional subspace and the standard Gaussian distribution. The difference between the low dimensional space and the Gaussian distribution is measured using relative entropy, also known as information divergence. An unsupervised Projection Pursuit algorithm based on optimization of the relative entropy is presented, along with an unsupervised version for selecting bands in hyperspectral data. The relative entropy criterion measures the information divergence between the probability density function of the feature subset and the Gaussian probability density function. This augments the separability of the unknown clusters in the lower dimensional space. One advantage of these methods is that no labeled samples are used. The methods were tested using simulated data as well as remotely sensed data.
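A hedged sketch of the relative-entropy projection index: project the data onto a unit vector, standardize, and measure the information divergence of a histogram density estimate from the standard Gaussian. The random search below is only a stand-in for the optimization algorithm used in the paper; bin counts and trial counts are arbitrary.

```python
# Sketch of a relative-entropy (information-divergence) projection index and a
# naive random search over one-dimensional projections.
import numpy as np

def relative_entropy_index(X, w, bins=30):
    z = X @ w
    z = (z - z.mean()) / (z.std() + 1e-12)                    # standardized projection
    hist, edges = np.histogram(z, bins=bins, range=(-4, 4), density=True)
    width = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-0.5 * centers ** 2) / np.sqrt(2 * np.pi)  # standard Gaussian density
    mask = hist > 0
    return np.sum(hist[mask] * np.log(hist[mask] / gauss[mask]) * width)

def best_random_projection(X, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    best_w, best_val = None, -np.inf
    for _ in range(n_trials):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)
        val = relative_entropy_index(X, w)
        if val > best_val:
            best_w, best_val = w, val
    return best_w, best_val
```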
The main challenge in retrieving information with hyperspectral sensors is that, due to the high dimensionality they provide, there is typically not enough a priori data to produce well-estimated parameters for the detection problem. This lack of sufficient a priori information yields a rank-deficient estimation problem. As a consequence, it leads to an increase in the probability of false alarm and in the probability of miss throughout the classification process. An approach based on a regularization technique applied to the data collected from the hyperspectral sensor is used to simultaneously minimize the probabilities of false alarm and miss. This procedure is implemented using algorithms that apply regularization by biasing the covariance matrix, which enables the simultaneous reduction of the probability of false alarm and of the probability of miss, thus enhancing the Maximum Likelihood parameter estimation.
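One common way to bias the covariance matrix as described above is to shrink the sample covariance toward a scaled identity before it is used in Maximum Likelihood classification. The shrinkage form and the weight gamma below are illustrative assumptions, not necessarily the exact regularizer used in the paper.

```python
# Minimal sketch of covariance regularization by shrinkage toward a scaled identity,
# repairing a rank-deficient sample covariance estimate.
import numpy as np

def shrink_covariance(samples, gamma=0.1):
    """samples: (n_samples, n_bands) training pixels for one class."""
    S = np.cov(samples, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)   # scaled identity target
    return (1.0 - gamma) * S + gamma * target
```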
A segmentation algorithm for underwater multispectral images based on the Hough transform (HT) is presented. The segmentation algorithm consists of three stages. The first stage computes the HT of the original image and segments the desired object along its boundary. The HT has several known challenges, such as the end-point (infinite lines) and connectivity problems, which lead to false contours; most of these problems are resolved in the next two stages. The second stage starts by clustering the original image: the Fuzzy C-Means clustering segmentation technique is used to capture the local properties of the desired object. In the third stage, the edges of the clustering segmentation are extended to the closest HT-detected lines. The boundary information (HT) and local properties (Fuzzy C-Means) of the desired object are fused together and false contours are eliminated. The performance of the segmentation algorithm is demonstrated on laboratory-generated underwater multispectral images containing known objects of varying size and shape.
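The first stage can be sketched with standard routines: compute an edge map, take its Hough transform, and keep the peak lines as boundary candidates. The Canny edge detector and the peak-selection defaults are assumptions; the fusion with the Fuzzy C-Means clusters (stages two and three) is not shown.

```python
# Illustrative sketch of the Hough-transform stage only: detect straight
# boundary candidates from an edge map of one band.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

def detect_boundary_lines(band):
    edges = canny(band)                                   # binary edge map
    hspace, angles, dists = hough_line(edges)
    _, best_angles, best_dists = hough_line_peaks(hspace, angles, dists)
    return list(zip(best_angles, best_dists))             # (theta, rho) per detected line
```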
Hyperspectral Remote Sensing has the potential to be used as an effective coral monitoring system from space. The problems to be addressed in hyperspectral imagery of coastal waters are related to the medium, the clutter, and the object to be detected. In coastal waters, the variability due to the interaction between the coast and the sea can bring significant disparity in the optical properties of the water column and the sea bottom. In terms of the medium, there is high scattering and absorption. The clutter includes the ocean floor, dissolved salts and gases, and dissolved organic matter. The object to be detected, in this case the coral reefs, has a weak signal with temporal and spatial variation. In real scenarios, the absorption and backscattering coefficients vary spatially due to different sources of variability (river discharge, different depths of shallow waters, water currents) and exhibit temporal fluctuations.
The retrieval of information about an object beneath a medium with high scattering and absorption properties requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction, and detection. This paper presents the development of algorithms for retrieving information and their application to the recognition and classification of coral reefs under water containing particles that produce high absorption and scattering. The data was gathered using a high-resolution imaging spectrometer (hyperspectral) sensor. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium, and the sensor. The Tikhonov regularization method was used in the inversion process to estimate the bottom albedo, ρ, of the ocean floor using a priori information. The a priori information is in the form of measured spectral signatures of objects of interest, such as sand, corals, and sea grass.
Feature extraction, implemented as a linear projection from a higher dimensional space to a lower dimensional subspace, is a very important issue in hyperspectral data analysis. The projection must be done in a manner that minimizes redundancy while maintaining the information content. In hyperspectral data analysis, a relevant objective of feature extraction is to reduce the dimensionality of the data while maintaining the capability of discriminating objects of interest from the cluttered background. This paper presents a comparative study of different unsupervised feature extraction mechanisms and shows their effects on unsupervised detection and classification. The mechanisms implemented and compared are an unsupervised SVD-based band subset selection mechanism, Projection Pursuit, and Principal Component Analysis. To validate the unsupervised methods, supervised mechanisms such as Discriminant Analysis and a supervised band subset selection using the Bhattacharyya distance were implemented, and their results were compared with those of the unsupervised methods. Unsupervised band subset selection based on SVD automatically chooses the most independent set of bands. The Projection Pursuit-based feature extraction algorithm automatically searches for projections that optimize a projection index. The projection index optimized here measures the information divergence between the probability density function of the projected data and the Gaussian probability density function. This produces a projection where the probability density function of the whole data set is multi-modal rather than a uni-modal Gaussian, which augments the separability of the unknown clusters in the lower dimensional space. Finally, the methods were compared with the widely used Principal Component Analysis. The methods were tested using synthetic data as well as remotely sensed data obtained from AVIRIS and LANDSAT, and were compared using unsupervised classification methods in a known ground-truth area.
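The supervised baseline can be illustrated by the Bhattacharyya distance between two classes restricted to a candidate band subset; a greedy forward search (not shown) would repeatedly add the band that most increases this distance. The Gaussian class assumption and the helper below are illustrative, not the paper's implementation.

```python
# Hedged sketch of the Bhattacharyya distance between two Gaussian classes
# restricted to a candidate subset of bands.
import numpy as np

def bhattacharyya(X1, X2, bands):
    """X1, X2: (n_i, n_bands) training pixels of two classes; bands: index list."""
    A, B = X1[:, bands], X2[:, bands]
    m1, m2 = A.mean(axis=0), B.mean(axis=0)
    S1 = np.atleast_2d(np.cov(A, rowvar=False))
    S2 = np.atleast_2d(np.cov(B, rowvar=False))
    S = 0.5 * (S1 + S2)
    diff = m1 - m2
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)           # mean-difference term
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_S1 = np.linalg.slogdet(S1)
    _, logdet_S2 = np.linalg.slogdet(S2)
    term2 = 0.5 * (logdet_S - 0.5 * (logdet_S1 + logdet_S2))  # covariance term
    return term1 + term2
```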
This paper presents a two-stage optimal band selection algorithm for hyperspectral imagery. The algorithm seeks the subset of bands closest, in the canonical correlation sense, to the principal components. The first stage computes an initial guess for the closest bands using matrix-factorization-based band subset selection. The second stage refines the subset of bands using a steepest ascent algorithm. Experimental results using AVIRIS imagery from the Cuprite Mining District are presented.
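The closeness criterion can be sketched as follows: the canonical correlations between a candidate band subset and the leading principal components are the singular values of the product of their orthonormalized, centered data matrices. The function below is an illustrative assumption rather than the paper's implementation, and the steepest ascent refinement stage is not shown.

```python
# Minimal sketch of measuring closeness between a band subset and the leading
# principal components via canonical correlations.
import numpy as np

def canonical_correlations(X, band_idx, n_pcs):
    """X: (n_pixels, n_bands) image matrix; band_idx: candidate band subset."""
    Xc = X - X.mean(axis=0)
    # Leading principal component scores of the full data.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = U[:, :n_pcs] * s[:n_pcs]
    Qb, _ = np.linalg.qr(Xc[:, band_idx])    # orthonormal basis of the band subset
    Qp, _ = np.linalg.qr(pcs)                # orthonormal basis of the PC scores
    return np.linalg.svd(Qb.T @ Qp, compute_uv=False)   # canonical correlations
```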
Hyperspectral images contain a great amount of information in hundreds of narrowband channels. This should lead to better parameter estimation and to more accurate classifications. However, traditional classification methods based on multispectral analysis fail to work properly on this type of data. The high dimensional space poses a difficulty in obtaining accurate parameter estimates and, as a consequence, makes unsupervised classification a challenge that requires new techniques. Thus, alternative methods are needed to take advantage of the information provided by the hyperdimensional data. Data fusion is one alternative for dealing with such large data sets in order to improve classification accuracy. Data fusion is an important process in the areas of environmental systems, surveillance, automation, medical imaging, and robotics, and its uses in remote sensing have recently been expanding. A relevant task is to adapt data fusion approaches for use on hyperspectral imagery, taking into consideration the special characteristics of such data. This paper presents a scheme that integrates information from most of the hyperspectral narrow bands in order to increase the discrimination accuracy of unsupervised classification.
Hyperspectral imaging sensors provide high-spectral-resolution images of natural phenomena in hundreds of bands. High storage and transmission requirements, computational complexity, and statistical modeling problems motivate the idea of dimension reduction using band selection. The optimal band selection problem can be formulated as a combinatorial optimization problem in which p bands from a set of n bands are selected such that some measure of information content is maximized. Potential applications for automated band selection include classifier feature extraction, and band location in sensor design and in the programming of reconfigurable sensors. The computational requirements for standard search algorithms to solve the optimal band selection problem are prohibitive. In this paper, we present the use of singular value and rank-revealing QR matrix factorizations for band selection. These matrix factorizations can be used to determine the most independent columns of a matrix; the selected columns represent the most independent bands, which contain most of the spatial information. It can be shown that, under certain circumstances, the bands selected using these matrix factorizations are good approximations to the principal components explaining most of the image spatial variability. The advantage of matrix factorizations over the combinatorial optimization approach is that they take polynomial time, and robust, proven numerical routines for their computation are readily available from many sources. In the paper, we present results comparing the performance of the algorithms using AVIRIS and LANDSAT imagery. The algorithms are compared in their computational requirements, their capacity to approximate the principal components, and their performance as an automated feature extraction processor in a classification algorithm. Preliminary results show that, under certain circumstances, the selected bands can have over 90% correlation with the principal components, and classifiers using these algorithms for feature extraction can outperform spectral angle classifiers.
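A minimal sketch of the factorization-based selection: QR with column pivoting (a practical stand-in for a rank-revealing QR) applied to the pixels-by-bands matrix, keeping the first p pivoted columns as the most independent bands. The centering step and the use of SciPy's pivoted QR are assumptions for illustration.

```python
# Hedged sketch of band selection by QR factorization with column pivoting.
import numpy as np
from scipy.linalg import qr

def select_bands(X, p):
    """X: (n_pixels, n_bands) matrix with one column per band; returns p band indices."""
    Xc = X - X.mean(axis=0)
    _, _, piv = qr(Xc, mode='economic', pivoting=True)
    return np.sort(piv[:p])   # first p pivoted columns = most independent bands
```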
Unsupervised classification algorithms are techniques to extract information from remote sensing imagery by machine computation without prior knowledge of labeled samples. Most current unsupervised algorithms use only the spectral response as information. Clustering algorithms that take spatial information into consideration face a trade-off between being accurate but time consuming, or fast but losing relevant details in the spatial mapping. This paper presents an unsupervised classification system developed to extract information from both multispectral and hyperspectral data, considering the spectral response, the characteristics of hyperdimensional data, and the spatial context of the pixel to be classified. The algorithm constructs local spatial neighborhoods in order to measure their degree of homogeneity, and resembles the supervised version of the ECHO classifier. An advantage of this mechanism is that the mathematical development used to estimate the degree of homogeneity enables implementations based on statistical pattern recognition. The clustering algorithm is fast, and its results have shown superiority in recognizing objects in multispectral and hyperspectral data over other known mechanisms.
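The spatial idea can be sketched as follows: pixels are grouped into small blocks, and a homogeneity test decides whether a block is classified as a unit or pixel by pixel against the current cluster centers. The variance-threshold test and the block size below are illustrative assumptions, not the statistical homogeneity measure developed in the paper.

```python
# Illustrative ECHO-like sketch: block-wise assignment when a block is homogeneous,
# per-pixel assignment otherwise. Border pixels not covered by a full block are
# left unlabeled (label 0) for brevity.
import numpy as np

def block_cluster(cube, centers, block=2, var_thresh=1e-3):
    """cube: (rows, cols, bands); centers: (n_clusters, bands)."""
    rows, cols, bands = cube.shape
    labels = np.zeros((rows, cols), dtype=int)
    for i in range(0, rows - block + 1, block):
        for j in range(0, cols - block + 1, block):
            patch = cube[i:i + block, j:j + block].reshape(-1, bands)
            d = ((patch[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            if patch.var(axis=0).mean() < var_thresh:
                # Homogeneous block: assign the whole block to one cluster.
                labels[i:i + block, j:j + block] = d.sum(axis=0).argmin()
            else:
                # Heterogeneous block: fall back to per-pixel assignment.
                labels[i:i + block, j:j + block] = d.argmin(axis=1).reshape(block, block)
    return labels
```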
Recent developments of more sophisticated sensors enable the measurement of radiation in many more spectral intervals at a higher spectral resolution than previously possible. As the number of bands in high-spectral-resolution data increases, the capability to detect more objects and the detection accuracy should increase as well. Most of the detection techniques presently used on hyperspectral data require spectral libraries that contain information on the specific objects to be detected. An example of one technique used for detection purposes in hyperspectral imagery is the spectral angle approach, based on the Euclidean inner product of the spectral signatures. This method performs well on objects whose spectral signatures differ sufficiently. This paper presents a partially supervised detection approach that uses previously measured spectral responses as inputs and is capable of differentiating objects that have similar spectral signatures. Two versions are presented: one based on statistical pattern recognition and another based on fuzzy pattern recognition. The detection mechanisms are tested with objects of very similar spectral signatures, and the detection results are compared with those from the spectral angle approach.
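For reference, the spectral angle baseline compared against here is simply the angle between a pixel spectrum and a library signature, with a small angle indicating a match; the proposed statistical and fuzzy detectors are not reproduced in this sketch.

```python
# Minimal sketch of the spectral-angle baseline between a pixel and a library signature.
import numpy as np

def spectral_angle(pixel, signature):
    cos = np.dot(pixel, signature) / (np.linalg.norm(pixel) * np.linalg.norm(signature) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))   # radians; smaller means more similar
```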