The present work aims to improve on the existing solutions for inverting the discrete Radon transform (DRT) by using less data, reducing computational cost, and ensuring well-conditioned and stable algorithms for the inversion.
An analytical framework and a heuristic for finding possible inverse algorithms have been proposed. The study suggests an approach for finding a fast algorithm with a complexity of O(N² log₂ N) by analyzing operation trees for consecutive input sizes.
The study also discusses the impact of noise on the proposed solutions, showing that the proposed algorithms lead to a better approximation than one iteration of Press’ inversion for added random error up to 40% of the signal’s magnitude. However, restricting the number of quadrants used in the algorithm leads to increased error.
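For reference, the quantity that these fast O(N² log₂ N) algorithms compute, and that the proposed inverses undo, can be written down directly. The sketch below (Python/NumPy; the digital-line convention is an assumption made for illustration) evaluates one quadrant of the DRT by brute-force summation. It is an O(N³) reference definition only, not the multiscale algorithm discussed above.

    import numpy as np

    def drt_quadrant(img):
        """Naive reference for one quadrant (slopes in [0, 1]) of the 2-D DRT.

        drt[h, s] = sum over x of img[x, h + round(s * x / (N - 1))], counting
        only samples that fall inside the image.  This direct O(N^3) summation
        only defines the transform; it is not the fast multiscale algorithm.
        """
        N = img.shape[0]                      # assumes a square N x N image, N >= 2
        out = np.zeros((2 * N - 1, N))        # rows: intercept h, columns: slope index s
        for s in range(N):                    # slope = s / (N - 1), from 0 to 1
            for h in range(-(N - 1), N):      # integer intercept
                total = 0.0
                for x in range(N):
                    y = h + int(round(s * x / (N - 1)))
                    if 0 <= y < N:
                        total += img[x, y]
                out[h + N - 1, s] = total
        return out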
Two algorithms are introduced for the computation of discrete integral transforms with a multiscale approach operating on discrete three-dimensional (3-D) volumes while considering their real-time implementation. The first algorithm, referred to as the 3-D discrete Radon transform of planes, computes the sums of values lying on discrete planes in a cube, imitating, for discrete data, the integrals over two-dimensional planes in a 3-D volume performed by the continuous Radon transform. The normals of these planes, equispaced in ascents, cover a quadrilateralized hemisphere and comprise 12 dodecants. The second proposed algorithm, referred to as the 3-D discrete John transform of lines, sums elements lying on discrete 3-D lines, imitating the behavior of the continuous John, or X-ray, transform on 3-D volumes. These discrete integral transforms do not perform interpolation on input or intermediate data, and they can be computed using only integer arithmetic with linearithmic complexity, thus outperforming the methods based on the Fourier slice-projection theorem for real-time applications. We briefly prove that these transforms have fast inversion algorithms that are exact for discrete inputs.
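To make the kind of sums involved concrete (a naive illustration only, not the multiscale, linearithmic algorithm of the paper), the following Python sketch accumulates a cubic volume over one family of discrete planes z = round(a·x + b·y) + d for a single normal direction; the indexing convention and the slope range are assumptions made for the example.

    import numpy as np

    def plane_sums(vol, a, b):
        """Sum a cubic volume over the discrete planes z = round(a*x + b*y) + d.

        vol is an N x N x N array; a, b in [0, 1] fix one normal direction and d
        indexes the parallel planes.  Every voxel is visited exactly once and only
        integer arithmetic touches the indices -- but this is direct summation for
        a single direction, not the fast multiscale transform of the abstract.
        """
        N = vol.shape[0]
        dmin, dmax = -2 * (N - 1), N - 1        # possible plane offsets d
        out = np.zeros(dmax - dmin + 1)
        for x in range(N):
            for y in range(N):
                z0 = int(round(a * x + b * y))  # height of the d = 0 plane at (x, y)
                for z in range(N):
                    out[z - z0 - dmin] += vol[x, y, z]
        return out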
Refocusing a plenoptic image by digital means after the exposure has been thoroughly studied in recent years, but few efforts have been made towards real-time implementation in a constrained environment such as that provided by current mobile phones and tablets. In this work we address this challenge, demonstrating that a complete focal stack, comprising 31 refocused planes from a (256×16)² plenoptic image, can be achieved within seconds on a current SoC mobile phone platform. The choice of an appropriate algorithm is the key to success. In a previous work we developed an algorithm, the fast approximate 4D:3D discrete Radon transform, that performs this task with linear time complexity where others obtain quadratic or linearithmic time complexity. Moreover, that algorithm does not require complex-number transforms, trigonometric calculus, or even multiplications or floating-point numbers. Our algorithm has been ported to a multi-core ARM chip on an off-the-shelf tablet running Android. A careful implementation exploiting parallelism at several levels has been necessary. The final implementation takes advantage of multi-threading in native code and NEON SIMD instructions. As a result, our current implementation completes the refocusing task within seconds for a 16-megapixel image, much faster than previous attempts running on powerful PC platforms or dedicated hardware. The times consumed by the different stages of the digital refocusing are given and the strategies to achieve this result are discussed. Time results are given for a variety of environments within the Android ecosystem, from the weaker/cheaper SoCs to the top of the line for 2013.
We develop a new algorithm that extends the bidimensional fast digital Radon transform of Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4-D lightfield into a 3-D volume of photographic planes, as previously done by Ng et al. (2005), but with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N⁴) to achieve a volume consisting of 2N photographic planes focused at different depths from an N⁴ plenoptic image. This reduced complexity allows for the acquisition and processing of a plenoptic sequence with the purpose of estimating 3-D shape at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm to deal with domains of sizes different from a power of two is proposed.
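As an illustration of why only sums are needed, the sketch below performs naive shift-and-add refocusing of a lightfield L[u, v, x, y]: each refocused plane is Σ over (u, v) of L[u, v, x + q·u, y + q·v] for an integer slope q. The array layout and the cyclic shifts are assumptions made for the example, and this naive version recomputes every plane from scratch, whereas the algorithm summarized above reuses partial sums to obtain all 2N planes in O(N⁴).

    import numpy as np

    def refocus_naive(L, slopes):
        """Naive shift-and-add refocusing of a 4-D lightfield L[u, v, x, y].

        For each integer slope q in `slopes`, every sub-aperture image L[u, v]
        is shifted by (q*u, q*v) pixels (cyclically, for simplicity) and the
        shifted images are summed: additions and integer shifts only, no
        multiplications on the data.  Each plane costs O(N^4), so the whole
        stack is O(N^5); the fast algorithm shares partial sums instead.
        """
        nu, nv, nx, ny = L.shape
        stack = []
        for q in slopes:
            plane = np.zeros((nx, ny), dtype=L.dtype)
            for u in range(nu):
                for v in range(nv):
                    plane += np.roll(L[u, v], shift=(q * u, q * v), axis=(0, 1))
            stack.append(plane)
        return np.stack(stack)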
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning, allowing focal-stack capture from a single shot. But the data recorded by this kind of sensor can also be used to extract the wavefront phases associated with the atmospheric turbulence in an astronomical observation. The terrestrial atmosphere degrades the telescope images due to the refractive index changes associated with the turbulence. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, taking advantage of the two principal characteristics of plenoptic sensors at the same time: 3D scanning and wavefront sensing. Plenoptic sensors can therefore be studied and used as an alternative wavefront sensor for adaptive optics, particularly relevant now that Extremely Large Telescope projects are being undertaken. In this paper we present the first wavefront phases extracted from real astronomical observations, using point-like and extended objects, and we show that the restored wavefronts match Kolmogorov atmospheric turbulence.
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to mitigate the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper we present our own implementations related to the aforementioned aspects, but also two new developments: a portable plenoptic objective to transform any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades the telescope images due to the refractive index changes associated with the turbulence. These changes require high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, as many authors have advocated.
In this work we develop a new algorithm that extends the bidimensional Fast Digital Radon transform of Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4D light field into a 3D volume of photographic planes, as previously done by Ren Ng et al. (2005), but with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N⁴) to achieve a volume consisting of 2N photographic planes focused at different depths from an N⁴ plenoptic image. This reduced complexity allows for the acquisition and processing of a plenoptic sequence with the purpose of estimating 3D shape at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm to deal with domains of sizes different from a power of two is proposed.
ELT laser guide star wavefront sensors are expected to handle an overwhelmingly large amount of data (1600×1600 pixels at 700 fps). Given the computations involved, the solutions must consider running on specialized hardware such as Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), among others. If a Shack-Hartmann wavefront sensor is finally selected, the wavefront slopes can be computed using centroid or correlation algorithms. Most developments are designed using centroid algorithms, but precision ought to be taken into account too, and then correlation algorithms become really competitive. This paper presents an FPGA-based wavefront slope implementation, capable of handling the sensor output stream in a massively parallel approach, using a correlation algorithm previously tested and compared against the centroid algorithm. Processing-time results are shown, and they demonstrate the suitability of FPGA integer arithmetic for solving AO problems. The selected architecture is based on today's commercially available FPGAs, which have a very limited amount of internal memory. This limits the dimensions used in our implementation, but it also means that there is a lot of margin to move real-time algorithms from conventional processors to future FPGAs, benefiting from their flexibility, speed and intrinsically parallel architecture.
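For context, the correlation approach referred to above amounts to locating the peak of the cross-correlation between each subaperture image and a reference. The following NumPy sketch (FFT-based correlation, integer-pixel peak only, none of the FPGA fixed-point details) shows the idea; the array shapes and the choice of reference image are assumptions made for illustration.

    import numpy as np

    def slopes_by_correlation(subaps, reference):
        """Shack-Hartmann slope estimation by cross-correlation with a reference.

        subaps:    array of shape (n_subap, h, w), one image per subaperture.
        reference: array of shape (h, w), e.g. the average subaperture image.
        Returns the integer (dy, dx) displacement of the correlation peak per
        subaperture; a production pipeline would add sub-pixel interpolation.
        """
        h, w = reference.shape
        ref_f = np.conj(np.fft.fft2(reference))
        shifts = np.zeros((subaps.shape[0], 2))
        for i, img in enumerate(subaps):
            corr = np.real(np.fft.ifft2(np.fft.fft2(img) * ref_f))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # map the circular peak coordinates to signed displacements
            dy = peak[0] if peak[0] <= h // 2 else peak[0] - h
            dx = peak[1] if peak[1] <= w // 2 else peak[1] - w
            shifts[i] = (dy, dx)
        return shifts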
Large degree-of-freedom, real-time adaptive optics control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. Poyneer et al. [J. Opt. Soc. Am. A 19, 2100–2111 (2002)] have shown that wavefront reconstruction with the use of the fast Fourier transform (FFT) and spatial filtering is computationally tractable and sufficiently accurate for use in large Shack–Hartmann-based adaptive optics systems (up to 10,000 actuators). We show here that by the use of graphics processing units (GPUs), specialized hardware capable of performing FFTs on large sequences almost 5 times faster than a high-end CPU, a problem of up to 50,000 actuators can already be handled within a 6-ms limit. We give the method to adapt the FFT in an efficient way to the underlying architecture of GPUs.
Large degree-of-freedom, real-time adaptive optics control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. Lisa Poyneer (2002) has shown that wavefront reconstruction with the use of the fast Fourier transform (FFT) and spatial filtering is computationally tractable and sufficiently accurate for use in large Shack-Hartmann-based adaptive optics systems (up to 10,000 actuators). We show here that by the use of Graphics Processing Units (GPUs), specialized hardware capable of performing FFTs on large sequences almost 7 times faster than a high-end CPU, a problem of up to 50,000 actuators can already be handled within a 6 ms limit. The method to adapt the FFT in an efficient way to the underlying architecture of GPUs is given.
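At its core, the FFT reconstruction used in both versions of this work is a pointwise division in the Fourier domain. A minimal NumPy sketch of the standard periodic least-squares Fourier reconstructor is given below; it assumes the slope maps are phase derivatives on a square grid and omits the boundary handling and filtering of Poyneer's method.

    import numpy as np

    def fft_reconstruct(sx, sy, d=1.0):
        """Least-squares wavefront reconstruction from slope maps via the FFT.

        sx, sy: x- and y-slope maps (d(phi)/dx, d(phi)/dy) on an N x N grid with
        sample spacing d; x runs along the columns, y along the rows.  Periodic
        boundaries are assumed and the piston (DC) term is set to zero.
        """
        n = sx.shape[0]
        k = 2.0j * np.pi * np.fft.fftfreq(n, d=d)    # i * spatial frequency
        KX, KY = np.meshgrid(k, k, indexing="xy")    # KX varies along columns (x)
        denom = (KX * np.conj(KX) + KY * np.conj(KY)).real
        denom[0, 0] = 1.0                            # avoid dividing by zero at DC
        phi_hat = (np.conj(KX) * np.fft.fft2(sx)
                   + np.conj(KY) * np.fft.fft2(sy)) / denom
        phi_hat[0, 0] = 0.0                          # piston is undetermined
        return np.real(np.fft.ifft2(phi_hat))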
We have developed a Shack-Hartmann sensor simulation, propagating the complex amplitude of the electromagnetic field using Fast Fourier Transforms. The Shack-Hartmann sensor takes as input the atmospheric wavefront frames generated by the Roddier algorithm and provides, as output, the subpupil images. The centroids and the wavefront phase maps are computed combining GPU and CPU.
The algorithms used on the GPU are written in NVIDIA's C for Graphics (Cg) language and run on a CineFX graphical engine. Such a graphical engine provides computational power several times greater than the usual CPU-FPU combination, at a reduced cost. Any algorithm implemented on these engines must first be adapted from its original form to fit the pipeline capabilities. To verify that the performance is optimal, we compare the results of the same algorithms implemented on GPU and on CPU.
We present here, for the first time, preliminary results on wavefront phase recovery using a GPU. We have chosen a zonal algorithm, which fits better the stream paradigm of GPUs. The results show a 10x speedup for the GPU centroid algorithm implementation and a 2x speedup for the phase recovery, compared with the same algorithms on the CPU.
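For reference, the centroid stage that exhibits the 10x speedup is, per subaperture, a weighted mean of pixel coordinates; a minimal NumPy version is sketched below (thresholding and windowing, used in practice, are omitted), while the Cg implementation streams the same computation over all subapertures in parallel.

    import numpy as np

    def centroids(subaps):
        """Centre-of-mass centroid of each subaperture image.

        subaps: array of shape (n_subap, h, w).  Returns (cy, cx) in pixels per
        subaperture.  Thresholding and windowing, used in practice to reject the
        sky background, are omitted from this sketch.
        """
        n, h, w = subaps.shape
        ys, xs = np.mgrid[0:h, 0:w]
        flux = subaps.reshape(n, -1).sum(axis=1)
        cy = (subaps * ys).reshape(n, -1).sum(axis=1) / flux
        cx = (subaps * xs).reshape(n, -1).sum(axis=1) / flux
        return np.stack([cy, cx], axis=1)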
The asymmetry factor, the hemispheric backscattering to total scattering ratio and the backscatter fraction are key parameters for climate model calculations. The inversion method of King et al. has been modified to retrieve these key parameters from scattering coefficients at four wavelengths. The inversion of multiwavelength aerosol optical properties is an alternative way to obtain the asymmetry factor, the hemispheric backscattering to total scattering ratio and the backscatter fraction, using the retrieved size distributions as an intermediate step.
This work is a preliminary study of the viability of retrieving macrophysical and microphysical cloud parameters from nighttime radiances provided by the MODIS sensor, onboard the Terra spacecraft. It is based on the analysis of the sensitivity of every MODIS IR band to each of the parameters that describe the different layers composing the earth-cloud-atmosphere system. In order to perform this analysis, an atmospheric radiative transfer model that makes use of the discrete ordinates method (DISORT) is employed. Multiple simulations are performed for a great variety of clouds and atmospheric conditions, taking into account the main absorbers in each band. As a first result, the most adequate bands for our purpose are selected and, using these channels, the proposed method extracts the parameters characterizing the different layers through a numerical inversion of the radiative model based on an evolutionary method for solving optimization problems called scatter search. In addition, a sensitivity analysis is carried out in order to estimate the impact of the uncertainties in model inputs and assumptions on the retrieved values.
The Optical System for Imaging and low Resolution Integrated Spectroscopy (OSIRIS) will be a Day-One instrument of the Spanish 10.4 m telescope Gran Telescopio Canarias (GTC), whose first light is planned for 2002. GTC will be installed at the Observatorio del Roque de los Muchachos in La Palma, Spain. The three primary OSIRIS modes are imaging and low-resolution long-slit and multiple-object spectroscopy. The instrument is designed to operate from 365 to 1000 nm with a field of view of 7 by 7 arcminutes and a maximum spectral resolution of 5000. Among the main OSIRIS features are the use of tunable filters for direct imaging, the use of Volume Phase Holographic Gratings as dispersive elements for spectroscopy, and the implementation of an articulated camera to provide maximum spectroscopic efficiency and versatility. Here we present a general description and an overview of the main instrument characteristics.
In this work, a method for the retrieval of droplet radius, temperature and optical thickness of oceanic stratocumulus is developed. It is based on night imagery obtained from the NOAA-AVHRR IR channels and an atmospheric radiative transfer model that makes use of the discrete ordinates method DISORT. Using this model, we have simulated the theoretical radiance that reaches the satellite, assuming a planar homogeneous cloud layer. The stratocumulus clouds are assumed to be composed of spherical water droplets with a gamma size distribution that provides a particular effective radius. The single-scattering parameters are deduced from Mie theory. Once the model behavior has been evaluated, we must invert a nonlinear system of three equations to obtain the cloud parameters from the channel 3, 4 and 5 brightness temperatures. The main problem is the behavior of the radiative parameters when the effective radius is varied, because several values provide the same temperatures. This implies that the system does not have a unique solution and, in order to avoid this problem, we propose an optimal radius discretization on the basis of the above-mentioned microphysical features.
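Operationally, the inversion step amounts to solving the three-equation nonlinear system T_i^obs = F_i(r_eff, T_cloud, τ), i = 3, 4, 5, for the three cloud parameters. The sketch below shows one way such a fit could be posed with scipy.optimize.least_squares; forward_model is a hypothetical stand-in for the DISORT-based simulation, and the radius discretization proposed in the text would supply the first guess so that the fit stays on a monotonic branch.

    import numpy as np
    from scipy.optimize import least_squares

    def forward_model(params):
        """Hypothetical stand-in for the DISORT-based simulation: maps
        (effective radius, cloud temperature, optical thickness) to the
        brightness temperatures of AVHRR channels 3, 4 and 5."""
        raise NotImplementedError  # to be replaced by the radiative-transfer model

    def retrieve_cloud_params(t_obs, first_guess):
        """Invert the three-channel system by nonlinear least squares.

        t_obs:       observed brightness temperatures of channels 3, 4 and 5.
        first_guess: (r_eff, T_cloud, tau), e.g. chosen from the radius
                     discretization so that the fit stays on a monotonic branch.
        """
        def residuals(p):
            return forward_model(p) - np.asarray(t_obs)
        return least_squares(residuals, x0=np.asarray(first_guess)).x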
In this work, a method to estimate the emissivity distribution of stratocumulus clouds is developed, paying special attention to the water vapor above these clouds. The main aim is to obtain an approximate distribution of effective radius for optically thick stratocumulus clouds. The method is based on night imagery obtained from the NOAA-AVHRR IR channels and an atmospheric radiative transfer model that makes use of the discrete ordinates method DISORT. We have solved the problem of the local estimation, for the Canary Islands, of the influence of the water vapor above the stratocumulus level. Finally, we associate the emissivity distribution with the distribution of the droplet effective radius. This allows us to estimate a unique effective radius for the cloud, taking advantage of statistics over the image in order to avoid the non-monotonic behavior of the emissivity with the droplet effective radius in the 3.7 μm band. The results are compared with satellite data from NOAA-14.