Edge-Directed Radial Basis Functions (EDRBF) are used to compute a super resolution (SR) image from a given set of low resolution (LR) images differing in subpixel shifts. The algorithm is tested on remote sensing images and compared for accuracy with other well-known algorithms such as Iterative Back Projection (IBP), the Maximum Likelihood (ML) algorithm, interpolation of scattered points using Nearest Neighbor (NN) and Inverse Distance Weighted (IDW) interpolation, and Radial Basis Functions (RBF). The accuracy of SR depends on several factors besides the algorithm: (i) the number of subpixel-shifted LR images, (ii) the accuracy with which the LR shifts are estimated by registration algorithms, and (iii) the targeted spatial resolution of SR. In our studies, the accuracy of EDRBF is compared with the other algorithms while keeping these factors constant. The algorithm has two steps: (i) registration of the low resolution images and (ii) estimation of the pixels in the High Resolution (HR) grid using EDRBF. Experiments are conducted by simulating LR images with different subpixel shifts from an input HR image. The reconstructed SR image is compared with the input HR image to measure the accuracy of the algorithm using the sum of squared errors (SSE). The algorithm outperformed all of the algorithms mentioned above. It is robust and not overly sensitive to registration inaccuracies.
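As a point of reference, the sketch below projects registered LR samples onto an HR grid and interpolates the scattered samples with radial basis functions using SciPy's RBFInterpolator. It is a minimal sketch under stated assumptions: the edge-directed weighting that distinguishes EDRBF is not reproduced, and the function name and parameters are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_super_resolve(lr_images, shifts, scale=2):
    """Illustrative sketch: scatter subpixel-shifted LR samples onto an HR
    grid and interpolate with plain (not edge-directed) radial basis functions.

    lr_images : list of 2-D arrays, all the same shape
    shifts    : list of (dy, dx) subpixel shifts in LR pixel units
    scale     : SR magnification factor
    """
    coords, values = [], []
    for img, (dy, dx) in zip(lr_images, shifts):
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        # place each LR sample at its shifted position on the HR grid
        coords.append(np.column_stack([(yy + dy).ravel() * scale,
                                       (xx + dx).ravel() * scale]))
        values.append(img.ravel())
    coords = np.vstack(coords)
    values = np.concatenate(values)

    H, W = lr_images[0].shape[0] * scale, lr_images[0].shape[1] * scale
    gy, gx = np.mgrid[0:H, 0:W]
    grid = np.column_stack([gy.ravel(), gx.ravel()])

    # thin-plate-spline RBF over the scattered LR samples
    interp = RBFInterpolator(coords, values, kernel='thin_plate_spline',
                             neighbors=32)
    return interp(grid).reshape(H, W)
```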
In this paper, we demonstrate simple algorithms that project low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithms are effective in both accuracy and time efficiency. A number of spatial interpolation techniques, including nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), are used in the projection. Reconstructing an SR image at a magnification factor of two with the best accuracy requires four LR images with independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP) and Maximum Likelihood (ML) algorithms. The algorithms are robust and not overly sensitive to registration inaccuracies.
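A minimal sketch of the simulation and evaluation setup described above, assuming SciPy's ndimage.shift for the subpixel shifts; the function names and the MSE helper are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def simulate_lr_images(hr, shifts, factor=2):
    """Simulate LR images by subpixel-shifting and subsampling an HR image.

    hr     : 2-D array, original high resolution image
    shifts : list of (dy, dx) subpixel shifts in LR pixel units
    factor : subsampling factor
    """
    lr_images = []
    for dy, dx in shifts:
        # shift in HR pixel units, then decimate to LR resolution
        shifted = subpixel_shift(hr, (dy * factor, dx * factor), order=3)
        lr_images.append(shifted[::factor, ::factor])
    return lr_images

def mse(a, b):
    """Mean squared error between the original and reconstructed HR images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))
```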
Linear features in airport images correspond to runways, taxiways, and roads. Detecting runways helps pilots focus on runway incursions in poor visibility conditions. In this work, we attempt to detect linear features from a LiDAR swath in near real time using a parallel implementation on a G5-based Apple cluster called Xseed. Data from the LiDAR swath are converted into a uniform grid with nearest neighbor interpolation. The edges and gradient directions are computed using standard edge detection algorithms such as Canny's detector. Edge linking and the detection of straight-line features are described. Preliminary results on data from the Reno, Nevada airport are included.
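A rough illustration of the gridding and edge-detection steps, assuming SciPy's griddata for nearest-neighbor resampling and scikit-image's Canny detector; the parallel Xseed implementation, edge linking, and straight-line extraction are not shown.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.feature import canny

def grid_and_detect_edges(points, elevations, cell=1.0, sigma=2.0):
    """Resample scattered LiDAR returns to a uniform grid with nearest
    neighbor interpolation and run Canny edge detection on the result.

    points     : (n, 2) array of (x, y) coordinates
    elevations : (n,) array of return elevations or intensities
    cell       : grid cell size in the same units as the coordinates
    sigma      : Gaussian smoothing used by the Canny detector
    """
    x, y = points[:, 0], points[:, 1]
    gx = np.arange(x.min(), x.max(), cell)
    gy = np.arange(y.min(), y.max(), cell)
    gxx, gyy = np.meshgrid(gx, gy)
    # nearest neighbor interpolation onto the uniform grid
    grid = griddata(points, elevations, (gxx, gyy), method='nearest')
    edges = canny(grid, sigma=sigma)
    return grid, edges
```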
An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of vector quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks that have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are generated internally using mean-removed error and human visual system (HVS) models. The error model assumed is the Laplacian distribution with mean λ, computed from a sample of the input image. A Laplacian distribution with mean λ is generated with a uniform random number generator. These random numbers are grouped into vectors. The vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying them by a weight matrix found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
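The following sketch imitates the codebook construction described above under stated assumptions: Laplacian samples are drawn by inverse-transforming uniform random numbers, grouped into vectors, and shaped by weighting their DCT coefficients. The actual HVS weight matrix from the paper is not reproduced, so a uniform placeholder is used, and the interpretation of λ as a Laplacian scale parameter is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def model_based_codebook(lam, num_vectors=256, block=4, weights=None, seed=0):
    """Sketch of an MVQ-style codebook generated without training images:
    Laplacian(0, lam) samples from a uniform RNG, grouped into block x block
    vectors and perceptually shaped by weighting their DCT coefficients."""
    rng = np.random.default_rng(seed)
    # inverse-transform sampling of the Laplacian from uniform random numbers
    u = rng.uniform(-0.5, 0.5, size=(num_vectors, block, block))
    u = np.clip(u, -0.5 + 1e-12, 0.5 - 1e-12)
    vectors = -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))
    if weights is None:
        # placeholder for the HVS-optimal weight matrix used in the paper
        weights = np.ones((block, block))
    coeffs = dctn(vectors, axes=(1, 2), norm='ortho') * weights
    return idctn(coeffs, axes=(1, 2), norm='ortho')
```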
Managing massive databases of scientific images requires new techniques that address indexing visual content, providing adequate browse capabilities, and facilitating querying by image content. Subband decomposition of image data using wavelet filters is offered as an aid to solving each of these problems. It is fundamental to a visual indexing scheme that constructs a pruned tree of significant subbands as the first level of the index. Significance is determined by feature vectors that include Markov random field statistics, in addition to more common measures of energy and entropy. Features are retained at the nodes of the pruned subband tree as a second level of the index. Query images, indexed in the same manner as database images, are compared as closely as desired to database indexes. Browse images for matching images are transmitted to the user in the form of subband coefficients, which constitute the third level of the index. These coefficients, chosen for their unique significance to the indexed image, are likely to contain valuable information for the subject area specialist. This paper presents the indexing scheme in detail and reports some preliminary results of selecting subbands for reconstruction as browse images based on their significance for indexing purposes.
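A simplified sketch of the first level of such an index, assuming PyWavelets for the subband decomposition; only energy and entropy features are computed here, and the Markov random field statistics used in the paper are omitted.

```python
import numpy as np
import pywt

def subband_features(image, wavelet='db2', levels=3):
    """Decompose an image into wavelet subbands and compute simple
    energy and entropy features per subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    subbands = [('LL', coeffs[0])]
    for lev, (lh, hl, hh) in enumerate(coeffs[1:], start=1):
        subbands += [(f'LH{lev}', lh), (f'HL{lev}', hl), (f'HH{lev}', hh)]

    features = {}
    for name, band in subbands:
        energy = float(np.mean(band ** 2))
        hist, _ = np.histogram(band, bins=64)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = float(-np.sum(p * np.log2(p)))
        features[name] = {'energy': energy, 'entropy': entropy}
    return features

def prune_by_significance(features, energy_threshold):
    """Retain only subbands whose energy exceeds a threshold,
    standing in for the paper's significance test."""
    return {k: v for k, v in features.items() if v['energy'] >= energy_threshold}
```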
Traditionally, the distribution of the prediction error has been treated as a single-parameter Laplacian distribution, and based on this assumption one can design a set of Huffman codes selected through an estimate of the parameter. More recently, the prediction error distribution has been compared to a zero-mean Gaussian distribution when the variance is relatively high. However, when nearly quantized prediction errors are used in the context model, the relatively high variance case is seen to merge conditional distributions surrounding both positive and negative edges. Edge information is available from large negative or positive prediction errors in the neighboring pixel positions. In these cases, the mean of the distribution is usually not zero. By separating these two cases, making appropriate assumptions about the mean of the context-dependent error distribution, and applying other techniques, additional cost-effective compression can be achieved.
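A toy sketch of the context-separation idea, with a hypothetical edge_threshold parameter and a single neighboring error standing in for the context; it only illustrates estimating a nonzero per-context mean, not the full coding scheme.

```python
import numpy as np

def context_error_model(errors, neighbor_errors, edge_threshold=8):
    """Split prediction errors into positive-edge, negative-edge, and flat
    contexts using the neighboring prediction error, then estimate a
    per-context mean and Laplacian scale instead of assuming zero mean."""
    contexts = {'pos_edge': [], 'neg_edge': [], 'flat': []}
    for e, n in zip(errors, neighbor_errors):
        if n > edge_threshold:
            contexts['pos_edge'].append(e)
        elif n < -edge_threshold:
            contexts['neg_edge'].append(e)
        else:
            contexts['flat'].append(e)

    model = {}
    for name, errs in contexts.items():
        if errs:
            arr = np.asarray(errs, dtype=float)
            mean = arr.mean()
            scale = np.abs(arr - mean).mean()  # Laplacian scale estimate
            model[name] = (float(mean), float(scale))
    return model
```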