Given the unprecedented size of ultraspectral sounder data, radiance thinning is routinely applied during assimilation to reduce the data volume with minimal loss of atmospheric information. Considering the potential correlation between the data selected by radiance thinning and the unselected data, a lossless compression method for ultraspectral sounder data is proposed based on key information extraction and spatial–spectral prediction. Sensitive channels are first selected by stepwise iteration based on information entropy to preserve critical atmospheric information, and auxiliary channels are then selected based on information content and correlation constraints to facilitate prediction. All selected channels are spatially thinned to generate the key information, which is used to predict the original ultraspectral sounder data by spatial bicubic interpolation and spectral sparse reconstruction. The residual errors are processed by least-squares linear prediction to further reduce data redundancy. Together with the key information, the final residual errors are fed into a range coder after positive mapping and histogram packing. Experimental results on IASI-L1C data show that the proposed method achieves an average compression ratio of 2.68, which is 4.7% higher than that of typical methods, including JPEG-LS, JPEG 2000, M-CALIC, CCSDS-122.0, CCSDS-123.0, and HEVC.
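A minimal sketch of the spatial-prediction and residual-mapping steps described above, assuming the key information is a spatially thinned (subsampled) channel image; the function name, thinning factor, and the specific positive mapping are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: bicubic spatial prediction from thinned key information,
# followed by a standard positive (zigzag) mapping of signed residuals.
import numpy as np
from scipy.ndimage import zoom

def predict_and_map(full_channel, thin_factor=4):
    """Predict a full-resolution channel from its spatially thinned version
    by bicubic interpolation, then map the signed residuals to non-negative
    integers suitable for entropy coding."""
    # Spatial thinning: keep every thin_factor-th pixel (key information).
    thinned = full_channel[::thin_factor, ::thin_factor]
    # Bicubic interpolation (order=3) back toward the original resolution.
    predicted = zoom(thinned, thin_factor, order=3)
    predicted = predicted[:full_channel.shape[0], :full_channel.shape[1]]
    residual = full_channel.astype(np.int64) - np.rint(predicted).astype(np.int64)
    # Positive mapping: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    mapped = np.where(residual >= 0, 2 * residual, -2 * residual - 1)
    return thinned, mapped
```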
To accelerate the coding of massive remote sensing images (RSIs) in a ground service-oriented remote sensing system, this study proposes three-level (i.e., tree-level, bit-plane-level, and byte-level) parallel set partitioning in hierarchical trees (TP-SPIHT) coding on a collaborative central processing unit and graphics processing unit (CPU and GPU), parallelizing SPIHT by optimizing its dynamic linked-list processing. Basic parallel SPIHT coding is presented with preprocessing, tree-level parallel coding, and bit-stream organization, using three kinds of static marker matrices in place of the dynamic linked lists to remove the data dependencies of the original SPIHT. The bit-stream organization is implemented on the CPU, and the other processes are implemented on the GPU using GPU streams. The bit-stream organization is further divided into bit-plane-level parallel bit-plane stream extraction and a final bit-stream organization on a multicore CPU. Because no dependencies exist between the different byte operations in the final bit-stream organization, this step is further accelerated by byte-level parallelization on the GPU. Experimental results with RSIs of different sizes show that TP-SPIHT takes 292.03 ms to code a 2048×2048 image, a 6.27× speedup over an optimized CPU implementation; the speedup ratio improves as the image size increases from 256×256 to 2048×2048.
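A minimal sketch of the marker-matrix idea that underlies the parallelization above: instead of SPIHT's dynamic linked lists, each coefficient's state can be recorded in a static matrix computed independently per coefficient, which removes the data dependency that blocks GPU parallelism. The specific matrix used here (first significant bit-plane per coefficient) is an illustrative assumption, not the paper's exact three-matrix scheme.

```python
# Hedged sketch: a static significance-marker matrix replaces list insertion,
# so every tree and bit-plane could be processed independently in parallel.
import numpy as np

def significance_markers(coeffs):
    """Mark, for each wavelet coefficient, the highest bit-plane at which it
    becomes significant; -1 means it is never significant."""
    mags = np.abs(coeffs).astype(np.int64)
    n_planes = int(mags.max()).bit_length()
    marker = np.full(coeffs.shape, -1, dtype=np.int32)
    for plane in range(n_planes - 1, -1, -1):
        newly = (marker == -1) & (mags >= (1 << plane))
        marker[newly] = plane  # static record instead of a linked-list entry
    return marker

coeffs = np.array([[7, -2], [0, 13]])
print(significance_markers(coeffs))  # [[ 2  1] [-1  3]]
```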
Traditional remote sensing change-detection algorithms generate only a change-detection map and a few quantitative evaluation indicators; they cannot provide a comprehensive analysis or deeper understanding of the detected changes. Aiming to assess regional development, we develop a comprehensive analysis method for human-driven environmental change from multitemporal remote sensing images. To adapt the analysis to multiple time-varying changed objects, an observed-object-specified dynamic Bayesian network (OOS-DBN) is first proposed to adjust the DBN structure and variables. Using semantic analysis of the relationship between the multiple changed objects and regional development, situations at all levels and evidence (i.e., detected attributes of changed objects) are extracted as hidden and observed variables and input to the OOS-DBN. Conditional probabilities are then computed by level and time slice in the OOS-DBN, yielding the comprehensive analysis results. Experiments on the coastal region of Huludao, China, from 2003 to 2014 show that the comprehensive analysis reflects that reclamation, infrastructure construction, and the New Huludao port contributed to regional development. Over the four time slices, the region experienced rapid and then medium-speed development, with corresponding probabilities of 0.90, 0.87, 0.41, and 0.54, consistent with our field surveys.
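A minimal sketch of inference over time slices in the spirit of the OOS-DBN above: a hidden "development level" belief is propagated slice by slice and reweighted by evidence likelihoods from detected changed objects. All states, transition probabilities, and likelihood values here are illustrative placeholders, not the paper's learned parameters.

```python
# Hedged sketch: forward filtering over time slices of a dynamic Bayesian
# network with a single hidden development-level node (hypothetical values).
import numpy as np

states = ["rapid", "medium", "slow"]
prior = np.array([0.4, 0.4, 0.2])        # initial belief over levels
trans = np.array([[0.7, 0.2, 0.1],       # row i: P(level_t | level_{t-1}=i)
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
# One column per time slice: P(observed evidence | level), placeholders.
likelihoods = np.array([[0.9, 0.8, 0.5, 0.6],
                        [0.3, 0.4, 0.7, 0.6],
                        [0.1, 0.1, 0.4, 0.2]])

belief = prior
for t in range(likelihoods.shape[1]):
    belief = trans.T @ belief             # predict across the time slice
    belief = belief * likelihoods[:, t]   # weight by the slice's evidence
    belief /= belief.sum()                # normalize to a distribution
    print(f"slice {t}: P({states[0]}) = {belief[0]:.2f}")
```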
The simplex volume algorithm (SVA) [1] is an endmember extraction algorithm based on the geometrical properties of a simplex in the feature space of a hyperspectral image. By exploiting the relation between a simplex volume and the volume of its corresponding parallelohedron in high-dimensional space, the algorithm extracts endmembers directly from the original hyperspectral image without dimension reduction. It thus avoids the drawback of the N-FINDR algorithm, which requires the data dimension to be reduced to one less than the number of endmembers. In this paper, we take advantage of the large-scale parallelism of CUDA (Compute Unified Device Architecture) to accelerate the computation of SVA on an NVIDIA GeForce 560 GPU. The time for computing a simplex volume grows with the number of endmembers. Experimental results show that the proposed GPU-based SVA achieves a 112.56× speedup for extracting 16 endmembers compared with its CPU-based single-threaded counterpart.
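A minimal sketch of the simplex-volume computation SVA relies on, working directly in the full spectral space without dimension reduction: the p-simplex volume follows from the Gram determinant of its edge vectors, which also gives the volume of the corresponding parallelohedron. A CUDA port would parallelize the many candidate-volume evaluations; the function name below is illustrative.

```python
# Hedged sketch: simplex volume via the Gram determinant of edge vectors,
# valid for (p+1) vertices embedded in any dimension n >= p.
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Volume of the simplex spanned by the rows of `vertices`,
    computed as sqrt(det(E E^T)) / p! with E the edge-vector matrix;
    sqrt(det(E E^T)) alone is the parallelohedron volume."""
    edges = vertices[1:] - vertices[0]   # p x n edge matrix
    gram = edges @ edges.T               # p x p Gram matrix
    p = edges.shape[0]
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(p)

# Three points in 5-D span a triangle; legs of length 1 and 2 give area 1.
pts = np.array([[0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 2, 0, 0, 0]], float)
print(simplex_volume(pts))  # 1.0
```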
KEYWORDS: Principal component analysis, Hyperspectral imaging, Feature extraction, Signal to noise ratio, Data processing, Image processing, Dimension reduction, Parallel computing, Data compression, Denoising
The PCA (principal component analysis) algorithm is the most fundamental dimension-reduction method for high-dimensional data [1] and plays a significant role in hyperspectral data compression, decorrelation, denoising, and feature extraction. With the development of imaging technology, the number of spectral bands in a hyperspectral image keeps growing and the data cube keeps getting larger, so dimension reduction has become increasingly time-consuming. Fortunately, GPU-based high-performance computing has opened a new approach for hyperspectral data processing [6]. This paper focuses on the two main processes in hyperspectral image feature extraction: (1) calculation of the transformation matrix and (2) transformation along the spectral dimension, which are computationally intensive and data-intensive, respectively. Through GPU parallel computing, an algorithm combining PCA transformation based on eigenvalue decomposition (EVD) [8] and feature-matching identification is implemented, aiming to explore the characteristics of GPU parallel computing and the prospects of GPU applications in hyperspectral image processing by analyzing the thread invocation and speedup of the algorithm. Experimental results show that the algorithm achieves a 12× overall speedup, with certain steps reaching speedups of up to 270×.
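A minimal sketch of the two processes named above for a hyperspectral cube of shape (rows, cols, bands): building and eigendecomposing the band covariance matrix, then projecting every spectrum onto the leading components. In a GPU version, the projection step is the data-parallel spectral transformation; the function name and cube shape are illustrative.

```python
# Hedged sketch: PCA via eigenvalue decomposition (EVD) of the band
# covariance matrix, followed by the spectral-dimension transformation.
import numpy as np

def pca_evd(cube, n_components):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)                    # center each spectral band
    cov = X.T @ X / (X.shape[0] - 1)       # bands x bands covariance matrix
    vals, vecs = np.linalg.eigh(cov)       # EVD of the symmetric covariance
    order = np.argsort(vals)[::-1]         # sort by descending eigenvalue
    W = vecs[:, order[:n_components]]      # transformation matrix (step 1)
    return (X @ W).reshape(rows, cols, n_components)  # projection (step 2)

cube = np.random.rand(64, 64, 128)
reduced = pca_evd(cube, n_components=10)   # shape (64, 64, 10)
```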