Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms in which the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, or mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
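As a rough illustration of the mutual-information matching criterion mentioned in this abstract (a generic histogram-based estimate and exhaustive integer-offset search, not the authors' algorithms), a minimal Python sketch could look as follows; the function names, bin count and search radius are illustrative assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two equally sized image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(reference, target, max_shift=3):
    """Pick the integer shift of `target` that maximizes MI with `reference`."""
    scores = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            scores[(dy, dx)] = mutual_information(reference, shifted)
    return max(scores, key=scores.get)
```

In practice the comparison would be restricted to the valid overlap region and refined to sub-pixel precision, e.g. by interpolating the MI surface around the best integer shift.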
The presented work aims to automatically register high-resolution polarimetric SAR images with each other and with other types of images. A digital topographic map is used as an aid for the registration. SAR images are very different from visual or infrared images. The idea is to identify, for each type of image, objects present on the map and easily detectable in the image. Detecting these objects in the image and matching them between map and image provides a first registration. Several object detectors were developed for the subsequent stages of the registration. Each of these detectors is briefly described. The actual registration uses a hierarchical method. First the SAR image is converted into ground range. Then a rough registration between image and map is obtained based on the position of forests and/or built-up areas. A voting method is used to find the parameters of a simple transformation model and to match the objects between map and image. The third step finds the parameters of an affine transformation based on the objects matched by the voting method. To improve the registration, objects with low 3D structure, e.g. roads and rivers, are used. The method for detecting these in SAR images yields incomplete results, leading to ambiguities in the optimal local displacement. Optimisation methods are used to overcome this problem and yield the parameters of a global transformation model. The accuracy of the registration is then within the accuracy of the map. Once the different images are registered with the map, the results of edge detectors are used to refine the registration between them.
In this paper a method for joint segmentation and compression of remotely sensed images is described. The segmentation task, which is the main topic of this paper, is especially tailored for the identification of Objects Of Interest (OOIs), also called Foreground (FGND) Objects, placed over a non-interesting and homogeneous Background (BGND). These images, collected by satellites or high-altitude platforms, are of particular interest in scientific applications, such as space-borne image analysis, sea observation, regional public services for agriculture, hydrology, fire protection, and so forth. In the case presented here, a suitable compression scheme is then applied to each data stream coming out of the segmentation block, depending upon its relevance, in order to obtain a selective lossless image compression. Of course, the same segmentation technique can also be a component of many other image processing schemes. An interesting feature of the suggested segmentation method is its versatility and reduced complexity, due to the implementation of the segmentation on the basis of a weighted graph representing chromatic and morphological features of the regions into which the image is partitioned. The segmentation is based on a step-wise optimization performed with a data-driven decomposition of the image, and it is achieved through a region-growing approach based upon the fusion of the best neighbor nodes in the graph. Another important aspect of the proposed technique is its robustness to the variation of represented subjects: no hypotheses or restrictions are formulated on the properties of the OOIs, because the segmentation procedure identifies the BGND by using its homogeneity. Therefore the method can be considered almost application-independent. Practical applications of the suggested method shown in this paper will demonstrate its effectiveness. Moreover, the improvement in Compression Ratio achievable with the proposed technique with respect to classical lossless image compression schemes will be shown on the basis of results obtained on a corpus of images.
This paper proposes a scale correlation-based edge detection scheme. A scale correlation function is defined as the product of the detection filter's responses at two scales. With a proper choice of detection filter, such as the first derivative of Gaussian, the scale correlation magnifies edge structures and suppresses noise. Unlike many multiscale techniques that first form edge maps at several scales and then synthesize them together, in our scheme edges are determined directly as the local maxima of the correlation function. The detection and localization criteria of the scale correlation are defined. It is shown that, with little loss in the detection criterion, much improvement is gained in the localization criterion. Using scale correlation, the dislocation of neighboring edges is also improved when the width of the detection filter is set large to smooth noise.
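A minimal numerical sketch of the scale-correlation idea (product of first-derivative-of-Gaussian responses at two scales, then local-maximum selection), shown here in 1-D for brevity; the scales and the 1-D formulation are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_correlation_edges(signal, s1=1.0, s2=2.0):
    """Edges as local maxima of the product of derivative-of-Gaussian
    responses at two scales (1-D illustration)."""
    r1 = gaussian_filter1d(signal, sigma=s1, order=1)  # fine-scale gradient
    r2 = gaussian_filter1d(signal, sigma=s2, order=1)  # coarse-scale gradient
    corr = r1 * r2                                     # scale correlation
    mag = np.abs(corr)
    # edge structures persist across scales and are amplified in the product,
    # while noise, being largely uncorrelated across scales, is suppressed
    maxima = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
    return np.where(maxima)[0] + 1
```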
Wavelets have become a popular tool in many research areas because of the combination of a solid theoretical foundation and promising applications. The theoretical foundation reveals new insights and has thrown a new light on several application areas. One of the applications is speckle reduction and enhancement of synthetic aperture radar (SAR) images. The use of wavelet thresholding as a noise reduction method is based on the following properties: the wavelet transform creates a sparse signal (because of the decorrelation property of the transform); noise is spread out equally over all wavelet coefficients; and the noise level is not too high, so signal wavelet coefficients can be recognized. In this paper, we review the use of wavelet, translation-invariant wavelet, almost translation-invariant wavelet (complex wavelet), and multi-wavelet transformations in speckle reduction of SAR images. Several nonlinear thresholding functions, i.e., hard, soft, adaptive sigmoid, and a function based on generalized cross validation, are investigated and compared in experiments.
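As an illustration of the basic thresholding step reviewed in this abstract, a minimal sketch using an ordinary orthogonal wavelet, a log transform of the speckled image, and the universal soft threshold is given below; the wavelet choice, threshold rule and MAD noise estimate are common conventions assumed here, not details taken from the paper.

```python
import numpy as np
import pywt

def wavelet_despeckle(image, wavelet="db4", level=3):
    """Soft-threshold wavelet shrinkage applied to the log of a speckled image."""
    log_img = np.log(image + 1.0)                      # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    # noise sigma from the finest diagonal subband (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(log_img.size))  # universal threshold
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return np.exp(pywt.waverec2(shrunk, wavelet)) - 1.0
```

Hard, sigmoid or cross-validated thresholds, as well as translation-invariant or complex transforms, would replace the `mode="soft"` shrinkage and the `wavedec2` call in this skeleton.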
This paper presents an original application of fuzzy logic to the restoration of interferometric phase images from IFSAR, which are affected by zero-mean uncorrelated noise whose variance depends on the underlying coherence, thus resulting in a nonstationary random process. Spatial filtering of the phase noise is recommended, either before phase unwrapping is accomplished or simultaneously with it. In fact, phase unwrapping basically relies on a smoothness constraint on the phase field, which is severely hampered by the noise. Space-varying linear MMSE estimation is stated as a matching-pursuit problem, in which the estimator is obtained as an expansion in series of a finite number of prototype estimators fitting the spatial features of the different statistical classes encountered, e.g., fringes and steep-slope areas. Such estimators are calculated in a fuzzy fashion through an automatic training procedure. The space-varying coefficients of the expansion are stated as degrees of fuzzy membership of a pixel to each of the estimators. Neither a priori knowledge of the noise variance is required nor is a particular signal model assumed; in addition, a performance comparison on simulated noisy images highlights the advantages of the proposed approach. Results on simulated noisy versions of Lenna show a steady SNR improvement of almost 3 dB over Kuan's LLMMSE filtering, irrespective of noise model and intensity. Applications of the proposed filter to interferometric phase images demonstrate a superior ability to preserve fringe discontinuities, together with an effective smoothing performance, irrespective of local coherence characteristics.
The automatic analysis of Ground Penetrating Radar (GPR) images is an interesting topic in remote sensing image processing, since it involves the use of pre-processing, detection and classification tools with the aim of near-real-time, or at least very fast, data interpretation. However, current chains of pre-processing tools for GPR images do not usually include denoising, essentially because most of the subsequent data interpretation is based on single radar trace analysis. So far, no speckle noise analysis and denoising has been attempted, perhaps on the assumption that this point is immaterial for the subsequent interpretation or detection tools. We instead expect that speckle denoising procedures would help. In this paper we address this problem, providing a detailed and exhaustive comparison of many of the statistical algorithms for speckle reduction proposed in the literature, i.e. the Kuan, Lee, Median, Oddy and wavelet filters. For the comparison, we use the Equivalent Number of Looks (ENL) and the Variance Ratio (VR). Moreover, we validate the denoising results by applying an interpretation step to the pre-processed data. We show that a wavelet denoising procedure results in a large improvement in both ENL and VR. Moreover, it also allows the neural detector to identify more targets and fewer false positives in the same GPR data set.
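For reference, the Equivalent Number of Looks used in the comparison has a simple definition over a (supposedly homogeneous) region: the squared mean divided by the variance. A small sketch, with the window choice left to the user:

```python
import numpy as np

def enl(region):
    """Equivalent Number of Looks of a homogeneous image region:
    a higher ENL after filtering indicates stronger speckle suppression."""
    region = np.asarray(region, dtype=float)
    return region.mean() ** 2 / region.var()

# Compare ENL before and after a denoising filter on the same homogeneous patch:
# print(enl(raw[100:140, 200:240]), enl(filtered[100:140, 200:240]))
```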
The success of the Richardson-Lucy (RL) algorithm lies in the fact that it forces the restored image to be non-negative and to conserve global flux at each iteration. The problem with the RL algorithm is that it produces solutions that are highly unstable, with high peaks and deep valleys. Our aim is to modify the RL algorithm in order to regularize it while preserving positivity and total photometry as far as possible. Data instances that are not compatible with others can cause singularities in the restoration solution. We therefore have an ill-posed problem, and a regularization method is needed to replace it with a well-posed one. The regularization approach overcomes this difficulty by choosing, among the possible objects, a 'smooth' one that approximates the data. The basic underlying idea in most regularization approaches is the incorporation of 'a priori' knowledge into the restoration. In this article we present a simple method of spatial regularization derived from the RL algorithm in order to overcome the problem of noise amplification during the image reconstruction process. It is very important in astronomy and remote sensing to regularize images while keeping their photometric behavior under control. We propose a new reconstruction method preserving both the global photometry and local photometric aspects.
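For context, the baseline (un-regularized) RL iteration that the paper sets out to modify can be written in a few lines; this sketch shows only the standard multiplicative update, which preserves non-negativity and, for a normalized PSF, global flux. The regularization proposed by the authors is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Baseline Richardson-Lucy deconvolution (multiplicative update)."""
    psf = psf / psf.sum()                              # normalized PSF conserves flux
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)     # data / current model
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        # non-negativity is preserved automatically by the multiplicative form
    return estimate
```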
Many approaches to short-term forecasting of the motion of rain structures rely widely on correlation between radar rain maps using local rain intensities. A different approach can be taken by considering rain structures as the basis for analysis, while local rain intensities only serve the purpose of detecting, locating and shaping them. Radial Basis Function (RBF) Neural Networks (NN) provide a means of implementing such an approach. Submitting rain maps to an RBF NN for training turns them into sets of parameters describing the observed rain structures. Reiterating the training on a time series of maps results in time series of parameters possibly depicting typical trends. Forecasting such parameters and translating the forecasted values back into maps should provide a forecast of the rain distribution in the near future. We found the best forecasting strategy to be a mix where some of the parameters are forecasted linearly and others using further RBF NNs. We obtained further improvement by using Gradient RBF (GRBF) networks in place of RBF networks in the forecasting phase, and by making the synthesis phase more stable and reliable through some novelties introduced into the algorithm. In this paper we explain the technique we developed and evaluate the results we obtained.
This algorithm can be used for the creation of an optimal electronic information system, for instance a locator, a connection line, or a measuring system.
The problem of detecting target signals in clutter arises in various applications, such as radar, sonar, communication, and active or passive electro-optical sensors. In many instances, the signals or objects are dim or partially obscured in a severe clutter environment that can vary widely. The inherent difficulties of such a detection process are the limited prior information about the target signal and the statistical properties of the clutter. In this paper, the signal detection problem is reduced to the problem of detecting a change point in a sequence of GLRT statistics. A change point is defined to be an index τ in a sequence x_1, x_2, ..., x_T of GLRT statistics such that x_1, x_2, ..., x_τ have a common distribution F_0(x) and x_{τ+1}, ..., x_T have a common distribution F_1(x), where F_0(x) ≠ F_1(x). Note that there is no change point if τ = T. Many authors have presented approaches to solving this problem. These include tests for a change in mean level, likelihood ratio tests, Bayesian approaches to inference about τ, and distribution-free approaches. In order to solve the change point problem, i.e., to determine whether or not a change point exists in a sequence of GLRT statistics, we use a method that makes no assumption about F_0(x) and F_1(x). Essentially, there are two problems associated with change point detection: detecting the change and making inferences about the change point. For solving these problems, a non-parametric technique is proposed. The test of the null hypothesis of 'no change' (clutter alone) against the alternative of 'change' (signal present) is based on a version of the van der Waerden statistic. Estimating the change point is based on a version of the Mann-Whitney statistic. The proposed procedure can be used for segmentation of non-stationary signals into 'homogeneous' parts. The problem of segmenting the homogeneous parts of a digital signal, or detecting abrupt changes in a signal, is a key point that frequently arises in various application areas where modeling and processing of non-stationary digital signals is required. The results of computer simulations confirm the validity of the theoretical predictions of the performance of the proposed technique.
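The change-point estimation step based on the Mann-Whitney statistic can be sketched as follows: for every candidate split point, compute the standardized Mann-Whitney statistic between the two segments and keep the split where it is most extreme. This is a generic illustration of the idea, not the exact statistic variant used in the paper.

```python
import numpy as np
from scipy.stats import rankdata

def mann_whitney_change_point(x):
    """Candidate change point maximizing the standardized Mann-Whitney
    statistic between the two segments of the sequence x."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    ranks = rankdata(x)
    best_tau, best_z = None, -np.inf
    for tau in range(1, T):                            # split after index tau-1
        n1, n2 = tau, T - tau
        u = ranks[:tau].sum() - n1 * (n1 + 1) / 2      # Mann-Whitney U of first segment
        mean_u = n1 * n2 / 2.0
        std_u = np.sqrt(n1 * n2 * (T + 1) / 12.0)
        z = abs(u - mean_u) / std_u
        if z > best_z:
            best_tau, best_z = tau, z
    return best_tau, best_z
```

A test of 'no change' against 'change' would compare `best_z` (or the corresponding van der Waerden-type statistic) with a critical value before trusting the estimated split.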
GIS data may include optical and radar imagery, and categorical information such as soils maps and land planning strategies, all of which can assist thematic mapping. We now have several decades of experience with thematic mapping from spectral data alone. We also have experience with the analysis of radar imagery, while hyperspectral thematic mapping techniques are now also becoming feasible. However, successful machine-assisted analysis of mixed optical and radar data is not straightforward, and it is complicated further when categorical data are also involved. Often simplistic methods involving stacked vectors of all the available data are used, but the incommensurate data types mean that a single analytical procedure, even if acceptable, will often yield poor results. Methods commonly used for mixed image data analysis are reviewed, and a set of desirable criteria for an operational method for thematic mapping from disparate data types is presented. Finally we propose a fusion strategy based on (i) analysing each data type with procedures most suited to its particular characteristics and (ii) fusing at the class level, involving combination rules that work with labels rather than measurement vectors. The method is proposed as suited to GIS analysis, particularly when the data sets are distributed and thus accessed over a network.
We propose an automatic classification procedure for multichannel remote sensing data. The method consists of several stages. An important stage is the correction of misclassifications based on a nonlinear graph-based estimation technique recently introduced by us. The misclassification correction method is optimized by means of a training-based framework using genetic algorithms. It is shown that this provides a considerable improvement in classification accuracy. After primary local recognition and misclassification correction of all component images, an approach to further use of the obtained data is considered. At this joint classification stage we introduce novel subclasses such as 'common homogeneous region', 'common edge', 'small-sized object in one or two components', etc. Numerical simulation data as well as real image processing results are presented to confirm the basic steps of remote sensing data classification and the efficiency of the proposed approach.
In this paper, a new independent component analysis (ICA) method is proposed that makes use of higher-order statistics. We name it the joint cumulant ICA (JC-ICA) algorithm. It can be implemented efficiently by a neural network. Its applications to AVIRIS data and change detection are discussed. The results show its potential for image processing problems.
This paper presents a multidimensional nonlinear data projection method applied to the dimensionality reduction of hyperspectral images. The method, called Curvilinear Component Analysis (CCA), consists of reproducing as faithfully as possible the topology of the joint distribution of the data in a projection subspace whose dimension is lower than that of the initial space, thus preserving a maximum amount of information. Curvilinear Distance Analysis (CDA) is an improvement of CCA that allows data with high nonlinearities to be projected. Its usefulness for reducing the dimension of hyperspectral images is shown. The results are presented on real hyperspectral images and compared with usual linear projection methods.
A study is presented in which several different representations of polarimetric SAR data for visual interpretation are evaluated. Using a group of observers, the tasks 'land use classification' and 'object detection' were examined. For the study, polarimetric SAR data with a resolution of 3 meters were used. These data were obtained with the Dutch PHARUS sensor from two test areas in the Netherlands. The land use classes consisted of bare soil, water, grass, urban and forest. The objects were farmhouses. It was found that people are reasonably successful in performing land use classification using SAR data. Multi-polarized data are required, but these data need not be fully polarimetric, since the best results were obtained with the hh- and hv-polarization combinations displayed in the red and green color channels. Detection of objects in SAR imagery by visual inspection is very difficult. Most representations gave minimal results; only when the hh- and hv-polarization combinations were displayed in the red and green channels were somewhat better results obtained. Comparison with an automatic classification procedure showed that land use classification by visual inspection appears to be the more effective. Automatic detection of objects gave better results than visual inspection, but many 'false' objects were also detected.
Hyperspectral images contain a great amount of information in terms of hundreds of narrowband channels. This should lead to better parameter estimation and to more accurate classifications. However, traditional classification methods based on multispectral analysis fail to work properly on this type of data. High dimensional space poses a difficulty in obtaining accurate parameter estimates, and as a consequence unsupervised classification becomes a challenge that requires new techniques. Thus, alternative methods are needed to take advantage of the information provided by the high-dimensional data. Data fusion is an alternative when dealing with such large data sets in order to improve classification accuracy. Data fusion is an important process in the areas of environmental systems, surveillance, automation, medical imaging, and robotics. Its use in remote sensing has recently been expanding. A relevant application is to adapt data fusion approaches for use on hyperspectral imagery, taking into consideration the special characteristics of such data. This paper presents a scheme that integrates information from most of the hyperspectral narrow bands in order to increase the discrimination accuracy in unsupervised classification.
The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
Nowadays, studies of terrestrial dynamics are more and more often performed with the help of satellite sensors. Usually, vegetation cover surveys are performed with wide field of view sensors because of their high temporal resolution. However, a high spatial resolution would be valuable to distinguish each component in a landscape. We propose to create merged images combining both sensors; our fusion method is based on the theories of pyramid algorithms and mathematical morphology. Let HR (resp. BR) denote the spatial resolution of the high-resolution (resp. coarse-resolution) sensor image, for example SPOT 4 HRVIR and VEGETATION. The principle is: 1) to decompose the high-resolution image into a low-frequency image and several high-frequency images (HFI); 2) to perform the inverse transform on the HFI images and the coarse-resolution sensor data to produce the merged image. Consequently, from a temporal set of VEGETATION data and from a few HRVIR scenes, we are able to create 20 m (or finer) resolution synthetic data having the temporal repetitivity of the VEGETATION data set.
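A crude single-level version of the two-step principle above (split the HR image into low- and high-frequency parts, then recombine the high frequencies with the coarse-sensor data) can be sketched in a few lines. The Gaussian low-pass, bilinear upsampling and resolution ratio are placeholder assumptions; the authors use a morphological pyramid instead.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def fuse(high_res, low_res, ratio=4, sigma=2.0):
    """Single-level fusion: inject the high-frequency detail of the HR image
    into the coarse-resolution image resampled onto the HR grid."""
    detail = high_res - gaussian_filter(high_res, sigma)   # high-frequency part of HR
    upsampled = zoom(low_res, ratio, order=1)               # coarse image at the HR grid
    h = min(upsampled.shape[0], high_res.shape[0])
    w = min(upsampled.shape[1], high_res.shape[1])
    return upsampled[:h, :w] + detail[:h, :w]               # merged image
```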
An unsupervised change detection problem can be viewed as a classification problem with only two classes, corresponding to the change and no-change areas, respectively. Due to its simplicity, image differencing represents a popular approach for change detection. It is based on the idea of generating a difference image that represents the modulus of the spectral change vector associated with each pixel in the study area. To separate the change and no-change classes in the difference image, a simple thresholding-based procedure can be applied. However, the selection of the best threshold value is not a trivial problem. In the present work, several simple thresholding methods are investigated and compared. The combination of the Expectation-Maximization algorithm with a thresholding method is also considered, with the aim of achieving a better estimation of the optimal threshold value. For experimental purposes, a study area affected by a forest fire is considered. Two Landsat TM images of the area acquired before and after the event are utilized to reveal the burned zones and to assess and compare the above-mentioned unsupervised change detection methods.
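A compact sketch of the differencing-plus-EM idea: model the difference-image values as a two-component Gaussian mixture (no-change and change), estimate its parameters with EM, and threshold between the two modes. The initialization, the fixed iteration count and the final decision rule below are simplified assumptions.

```python
import numpy as np

def em_two_gaussians(d, iters=50):
    """Fit a 2-component Gaussian mixture to difference values d with EM."""
    d = np.asarray(d, dtype=float).ravel()
    mu = np.array([d.min(), d.max()])        # crude initialization of the two means
    var = np.array([d.var(), d.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of the no-change (0) and change (1) classes
        lik = w * np.exp(-(d[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        w = r.mean(axis=0)
        mu = (r * d[:, None]).sum(axis=0) / r.sum(axis=0)
        var = np.maximum((r * (d[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0), 1e-12)
    return w, mu, var

# difference = np.abs(after - before)         # modulus of the spectral change
# w, mu, var = em_two_gaussians(difference)
# change_mask = difference > mu.mean()        # or solve for the exact Bayes threshold
```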
The problem of detecting land-cover transitions in multitemporal remote-sensing images by exploiting the Bayes rule for compound classification is addressed. In particular, a novel method is proposed, which is based on the use of radial basis function (RBF) neural networks. The proposed method is composed of three main steps: i) the statistical terms involved in the compound classification rule are estimated separately on each single image by using the available training set and a standard procedure for the training of RBF neural networks; ii) the joint class probabilities are roughly initialised according to a simple estimation procedure; iii) the estimates of all parameters of the networks are iteratively improved by formulating the estimation process in terms of the expectation-maximisation algorithm. The main advantages of the proposed approach with respect to our previous work on this topic are the following: i) to jointly perform and optimise the estimation process of all the required statistical terms; ii) to use both labelled and unlabelled samples in the estimation of all the required statistical terms (this is made possible by the use of a specific RBF architecture); iii) to better model the temporal correlation between classes in multitemporal images by replacing the class conditional independence assumption with a less restrictive one. Experiments, carried out on a multitemporal remote-sensing data set, confirm the effectiveness of the proposed automatic approach.
The recognition and classification of urban structures from SAR observations is a particularly complex task. In this article we present a new concept aimed at the accurate and detailed classification of city scenes observed with metric-resolution SAR sensors. SAR images of built-up areas at a resolution of 2-3 meters are characterized by strong patterns induced by the geometry of buildings and the phenomenology of the scattering of the radar signals, resulting in highly complex images. The accuracy of image interpretation relies on the descriptive power of the low-level image information extraction. The article presents a method based on Bayesian concepts. A hierarchical three-layer model is used for the SAR observations. The first layer describes the speckle effect as a Gamma distribution; the second models the cross-section as a Gibbs Random Field (GRF); in the third layer the parameters of the Gibbs random field are given a Jeffreys prior. The GRF describes the cross-section structures induced by the geometry of the buildings. The model is non-stationary; its parameters adapt locally to the image structures.
The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structures in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: the visualization of the query results positioned on the geographical map, a quick-looks gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.
In this work, near-lossless compression yielding a strictly bounded reconstruction error is proposed for high-quality compression of remote sensing images. A space-varying linear-regression prediction is obtained through fuzzy-logic techniques as a matching-pursuit problem, in which a different predictor for every pixel is obtained as an expansion in series of a finite number of prototype nonorthogonal predictors, which are themselves calculated in a fuzzy fashion. To enhance entropy coding, the spatial prediction is followed by context-based statistical modeling of the prediction errors. Performance comparisons with JPEG 2000 and previous works by the authors highlight the advantages of the proposed fuzzy approach to data compression.
In this paper we describe efforts toward a hyperspectral land remote sensing data analysis procedure that would be maximally effective for use by a broad community of future users. Though the performance achieved depends on the spectral subspace and the classification algorithm used, it depends most strongly on how well the user quantitatively defines the desired classes. The aim is thus to measure this dependence for typical users and to introduce means that mitigate the problem of class and training definition.
We present a method for calculating the relative density of atmospheric gases at each spatial location of a hyperspectral image (HSI) based on a simple model of gas attenuation. These gas-density maps are used to search for gas plumes and to improve atmospheric correction of the HSI. Our premise is that the attenuation spectra of gases typically vary much more rapidly with color than the reflectance of surface materials. For a set of gas attenuation spectra, we seek the corresponding densities that provide the smoothest restored spectrum at each pixel. The attenuation spectra of common gases can be estimated based on tabulated values for color-calibrated sensors. Our method infers the attenuation spectra of gases directly from the HSI, requiring minimal accuracy in color calibration. These inferred spectra can be compared to tabulated values to refine the color calibration of the sensor. We seek attenuation spectra that are consistent with physical models and improve the smoothness of the average spectrum. This method can also be used to search for novel gases to be recognized based on the inferred attenuation spectrum. Results are presented for AVIRIS and simulated HIRIS imagery. Removing this strongly nonlinear effect leads to significant improvements in subsequent processing.
One of the most widely used approaches to analyze hyperspectral data is pixel unmixing, which relies on the identification of the purest spectra from the data cube. Once these elements, known as 'endmembers', are extracted, several methods can be used to map their spatial distributions, associations and abundances. A large variety of methodologies has recently been proposed with the purpose of extracting endmembers from hyperspectral data. Nevertheless, most of them rely only on the spectral response; spatial information has not yet been fully exploited, especially in unsupervised classification. The integration of both spatial and spectral information is becoming more relevant as sensors tend to increase their spatial/spectral resolution. Mathematical morphology is a non-linear image analysis and pattern recognition technique that has proved to be especially well suited to segmenting images with irregular and complex shapes, but it has rarely been applied to the classification/segmentation of multivariate remote sensing data. In this paper we propose a completely automated method, based on mathematical morphology, which allows us to integrate spectral and spatial information in the analysis of hyperspectral images. The accuracy of the proposed algorithm is tested by its application to real hyperspectral data, and the results are compared to those found using other existing endmember extraction algorithms.
At the beginning of next year (2002), CNES will launch the SPOT 5 satellite. Before the launch, CNES has to validate the choices made for the sensors with the use of simulated images. Our study was carried out in the context of using SPOT 5 panchromatic images for urban area analysis. The objective of this study is to extract built-up urban areas and urban thoroughfares from simulated images. Examination of SPOT 5 panchromatic simulated images shows that the grey level is not a good characteristic for discriminating the buildings of urban areas from other types of regions such as thoroughfares or bare ground. At the same time, buildings define heterogeneous areas, whereas thoroughfares and bare ground are homogeneous regions. So, in order to classify built-up urban areas and thoroughfares, we suggest using the directional variance of the image. To this end, we define a new texture analysis operator and propose to combine it with classification algorithms and edge extraction operators. In order to validate our method, we present some results obtained on the city of Strasbourg, using simulated SPOT 5 images at 5 m per pixel resolution, provided by the CNES agency.
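The directional-variance idea can be illustrated with a simple operator: compute, in a sliding window, the local variance of pixel differences taken along a few directions; heterogeneous built-up areas give high values in every direction, whereas thoroughfares and bare ground stay low. The window size and the four directions below are arbitrary choices, not the operator defined in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def directional_variance(image, size=9):
    """Per-pixel local variance of directional differences (0, 45, 90, 135 degrees)."""
    image = np.asarray(image, dtype=float)
    maps = []
    for dy, dx in [(0, 1), (1, 1), (1, 0), (1, -1)]:
        diff = image - np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        mean = uniform_filter(diff, size)
        mean_sq = uniform_filter(diff ** 2, size)
        maps.append(np.maximum(mean_sq - mean ** 2, 0.0))  # local variance of the difference
    return np.stack(maps, axis=0)                           # shape (4, H, W), one map per direction
```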
The accurate knowledge of the shoreline position is of fundamental importance in several applications, such as cartography and ship positioning. Moreover, the coastline can be seen as a relevant parameter for monitoring coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach based on the concepts of fuzzy connectivity and texture feature extraction has been developed for locating the shoreline. The system has been tested on several kinds of images, such as SPOT and Landsat, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough consists in the fact that coastline detection is seen as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast can be seen as a boundary line beyond which the data (those representing the sea) are not significant. The processing for the digital elevation model can then be refined by considering only the in-land data.
Conventional methods for cartographic shape representation of objects from satellite images are usually inaccurate and will provide only a rough shape description if they are to work in a fully automated mode. For example, existing algorithms for skeletal thinning fail to provide a correctly shaped skeleton if the input images contain noise or the objects of interest are sparse and exhibit discontinuities. The proposed method for extraction of skeletons of 2-D objects is based on an efficient algorithm for multi-scale structural analysis of images obtained from satellite data. The form and topology of hydrological objects, such as rivers and lakes, can be extracted by applying a multi-scale relevance function in a quick, reliable and scale-independent way. The description of objects is obtained in the form of piecewise linear skeletons (multi-scale structural graph) and includes local scales at graph vertices, which correspond to local maxima of the relevance function. The experimental test results using Landsat-7 images show good accuracy of the relevance function approach and its potential for fully automated hydrographic mapping.
A variety of techniques exist for change detection in multitemporal remotely sensed satellite data. The Intensity-Hue-Saturation (IHS) color space is very useful for image processing because it separates color information in ways that correspond to the human visual system's response. In this study, a novel approach emphasizing the use of the hue component of the IHS transformations of Landsat data is proposed and examined for multitemporal change detection. Two Landsat Thematic Mapper (TM) scenes acquired in 1987 and 1997, covering the western part of the El-Fayoum area and the El-Rayan lakes in Egypt, have been processed (geometrically corrected and radiometrically balanced) and transformed to the IHS space. The results of using the hue component to detect the changes are very promising. A number of changed areas, including water and agricultural land, were successfully detected. The color theme print used, which displays the spatial pattern of change in map form, was of great significance in interpreting the environmental changes, and a statistical estimation of these changes has been carried out as well.
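The use of the hue component can be sketched as follows: convert each co-registered date to a hue plane (here via an HSV approximation of the IHS transform), subtract the planes taking the circular wrap-around of hue into account, and threshold. The band-to-RGB assignment, the HSV stand-in for IHS and the threshold value are illustrative assumptions.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_change(rgb_t1, rgb_t2, threshold=0.15):
    """Change mask from the circular difference of the hue planes of two dates.
    Inputs are float RGB composites scaled to [0, 1] (e.g. TM bands mapped to R, G, B)."""
    h1 = rgb_to_hsv(rgb_t1)[..., 0]
    h2 = rgb_to_hsv(rgb_t2)[..., 0]
    diff = np.abs(h1 - h2)
    diff = np.minimum(diff, 1.0 - diff)   # hue is periodic on [0, 1)
    return diff > threshold
```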
It is of great importance in image restoration to remove noise while preserving and enhancing edges. This paper presents a spatial correlation thresholding scheme for image restoration. The dyadic wavelet transform, which acts as a Canny edge detector, is employed here to characterize the significant structures, which are strongly correlated along the wavelet scales. A correlation function is defined as the multiplication of two adjacent wavelet subbands, with a translation to maximize the mathematical expectation. In the correlation function, edge structures are more discriminable because they are amplified while noise is diluted. Unlike most traditional schemes that threshold the wavelet coefficients directly, the proposed scheme applies thresholding to the correlation function to better preserve edges while suppressing noise. A robust threshold is presented, and the experiments show that the proposed scheme outperforms traditional thresholding schemes not only in SNR but also in edge preservation.
Often in remote sensing applications, when using statistical classifiers, it is implicitly assumed that the training samples used to train a classifier represent the true classes. This assumption may not be valid for several reasons. First, it is unlikely that enough training samples will be available to accurately estimate the parameters of the class density functions. Second, samples from one class often come from adjacent areas on the ground, and so they are spectrally more similar than would be expected for the class as a whole. Third, in the case where the classes are multimodal, non-parametric density techniques must be used, which require many more training samples than parametric ones. Fourth, there is some evidence that the accuracy of a classifier tends to decrease near the boundaries between two classes, where the sample is a mixture of two or more classes. The purpose of this paper is to propose a new method, based on evidential reasoning, in which the classification can be more reliable and accurate despite insufficient and poor-quality training data. Using real data, the results obtained are satisfactory. The scheme outperforms conventional statistical classifiers. It may be used for various applications with multisource data.
The identification and tracking of mid-latitude weather systems in satellite imagery is explored using an automatic and objective technique based on Fourier shape detectors. The method, which originates from medical image processing, combines model-based information with a boundary-finding process for continuously deformable objects. In order to implement the method, a preliminary approach has been to test it on synthetic storms. In this paper we also present a first attempt on a real image of a mid-latitude storm that occurred over Scandinavia on 15 December 1999. Further work will be to analyze IR and visible daily composites since 1978 from the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA polar-orbiting satellites.
Observations from hyperspectral imaging sensors lead to high dimensional data sets from hundreds of images taken at closely spaced narrow spectral bands. High storage and transmission requirements, computational complexity, and statistical modeling problems motivate the idea of data reduction. A standard approach for data reduction is principal component (PC) analysis. A well-known fact for hyperspectral images (HSI) is that most of the spatial information content is summarized by the first few principal components. A disadvantage of this approach is the inherent transformation of the original HSI into linear combinations of bands with no physical relation to the spectral information content of the original image. An alternative data reduction approach is band subset selection where a subset of bands that will summarize most of the information contained in the original HSI is selected. Many approaches presented in the literature try to select bands that are a good approximation to PC because of their optimality under several criteria. This paper presents a comparison between several of these methods in terms of how well the selected bands approximate the principal components. The conditions under which good approximation of the first few principal components using a subset of bands can be achieved are presented.
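One way to score how well a selected band subset approximates the leading principal components, in the spirit of the comparison described above, is to regress the first few PC scores onto the selected bands and measure the explained energy; a least-squares sketch follows (the band-selection step itself is left to whichever method is being evaluated, and this particular score definition is an assumption of the example):

```python
import numpy as np

def pc_approximation_score(X, band_idx, n_pc=3):
    """Fraction of the energy of the first n_pc principal components of X
    (pixels x bands) captured by a linear combination of the selected bands."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCs of the centered data
    pcs = Xc @ Vt[:n_pc].T                              # scores of the first n_pc components
    B = Xc[:, band_idx]                                 # selected band subset
    coef, *_ = np.linalg.lstsq(B, pcs, rcond=None)      # regress PC scores onto the bands
    residual = pcs - B @ coef
    return 1.0 - (residual ** 2).sum() / (pcs ** 2).sum()
```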
The signal-to-noise ratio is one of the parameters checked in flight to assess the image quality of remote sensing satellites. A simple method to estimate this parameter consists of selecting large snowy areas. As the landscape is nearly uniform, a correct estimation of the standard deviation of the noise can be made by calculating the standard deviation of the signal. In order to avoid viewing such specific scenes, we suggest two different approaches. The first one is restricted to additive noise. As there is little correlation between the noise and the landscape, images can be decomposed into an image considered as pure landscape and an image of noise, where the signal-to-noise ratio is estimated by using a block computation method. Different simulations show that the assessment errors are less than 10% and usually near 5%. The second one is a particular application of a general approach to image quality assessment. It can be applied to any kind of noise model. It is based on the use of artificial neural networks. The principle is to train an artificial neural network on the signal-to-noise ratio of simulated or perfectly known images, then use it to assess the signal-to-noise ratio of unknown images. The assessment errors are near 10%.
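A minimal sketch of the block-computation idea behind the first approach (additive noise): tile the image, compute the standard deviation inside each block, and take a low quantile of those values as the noise estimate, on the assumption that the flattest blocks contain mostly noise. The block size, the quantile and the SNR definition as mean signal over noise standard deviation are assumptions of this sketch, not the authors' settings.

```python
import numpy as np

def block_snr(image, block=16, quantile=0.05):
    """Estimate SNR as the mean signal over the std of the flattest blocks."""
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    stds = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            stds.append(image[i:i + block, j:j + block].std())
    noise_std = np.quantile(stds, quantile)   # flattest blocks ~ noise only
    return image.mean() / noise_std
```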
During the MINEO flight campaign in summer 2000, HyMap data were recorded for the test site Kirchheller Heide, north of the Ruhr district. The aim of this project is to use hyperspectral data to detect environmental and ecological changes. These changes are caused by a disturbed hydrological balance due to deep hard-coal mining. Dynamic mining demands regular updates of the spatial information. These data will become part of an environmental monitoring system in which the analysis of hyperspectral data shall be an important constituent. As an essential preprocessing step for vegetation studies and multitemporal analyses, ATCOR-4 was used for atmospheric correction. Additionally, atmospheric data from the Deutscher Wetterdienst (DWD) and ground reflectance spectra were recorded as essential inputs for the ATCOR-4 model. The field spectra were used first to check the accuracy of the standard calibration files provided with the hyperspectral data. This workflow directly provides accurate results for wavelengths shorter than 1 micrometer. In an interactive manner, the in-flight calibration module of ATCOR-4 allows new calibration files suitable for all wavelengths to be built up and adapted. The first calculated atmospheric lookup tables and this final calibration file were used to perform the atmospheric correction for the HyMap scene.
Image enhancement and evaluation play an important role in modern information and measurement technologies. Because of physical and technical limitations, the images obtained are often degraded by noise and low resolution. Image enhancement provided by conventional linear filtering and known standard smoothing methods can be ineffective because of the reduction in contrast. Image enhancement can be provided effectively by adaptive nonlinear algorithms such as the local gray-level histogram modification method recently investigated with application to a special class of images such as moiré fringe patterns. This method was developed for noise suppression without decreasing the image contrast. It can be used for solving image enhancement and restoration problems with application to the different kinds of images inherent in remote sensing technologies.
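As a generic illustration of local gray-level histogram modification (not the specific method investigated by the authors), a tile-wise histogram equalization can be written compactly; the tile size and bin count are arbitrary, and a practical implementation would interpolate between tiles to avoid blocking artifacts.

```python
import numpy as np

def local_histogram_equalize(image, tile=64, nbins=256):
    """Equalize the gray-level histogram independently within each tile."""
    image = np.asarray(image, dtype=float)
    out = np.empty_like(image)
    for i in range(0, image.shape[0], tile):
        for j in range(0, image.shape[1], tile):
            block = image[i:i + tile, j:j + tile]
            hist, edges = np.histogram(block, bins=nbins)
            cdf = hist.cumsum() / hist.sum()                      # normalized cumulative histogram
            out[i:i + tile, j:j + tile] = np.interp(block, edges[:-1], cdf)
    return out                                                     # values mapped to [0, 1]
```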
We present a method of image description based on sequential approximation. The spectrum of each pixel is represented as a sum of generalized reference spectra. When designing a new reference spectrum, the as-yet-undescribed spectral components are modified using a predefined group of transformations in order to bring them closer to each other. The reference spectrum is defined as the mean of the modified spectra of the pixels. Spectra that are poorly approximated by this scheme are assumed to be anomalous with respect to their surroundings. Pixels are eliminated from the design of the reference spectrum once the given accuracy of their spectral description is reached. We can interrupt the image coding at any time and transmit the abnormal pixels without distortion, whereas the normal pixels are described with the predefined accuracy. If the main task is to detect and classify pixels with abnormal spectra, the 'normal' pixels can be transmitted roughly or ignored. In this way we can compress the image without losing information important for the application.
Targets usually have relatively steady geometrical forms. Backgrounds are characterized by uncontrollable variations of brightness. These characteristics are taken into account in the model of the form of the targets and in the model of the background. If all the models are randomized, then we can distinguish the targets by an effective maximum-likelihood principle of distinction. If the model of the geometrical form of the targets is deterministic and the background is additive Gaussian noise, then the morphological principle of distinction of the targets is effective. The distinction of targets is not effective in Cartesian space, for example when the brightness of the target images on a uniform background is inverted. In radio-vision systems it is necessary to take the antenna pattern into account for effective distinction of targets. The local-linear method of super-resolution is included in the distinction and indication problems. The main idea of the new super-resolution method lies in the resolving function. The local resolving functions are calculated stably in the local domains where the antenna pattern is defined. The local resolving functions preserve the mean brightness in the local domains. This property can be used to suppress the uncontrollable background. These and other questions are considered in the paper using examples of model images of targets and backgrounds.
The local-linear method of super-resolution was developed for compensating (resolving) PSF distortions at low signal-to-noise ratios. The resolving function, the noise factor and the value of the increased resolution are introduced. The mathematical aspects of applying the local-linear super-resolution method and the problem of parallel computation are considered.