Significance: Three-dimensional (3D) imaging and object tracking are critical for medical and biological research and can be achieved by multifocal imaging with diffractive optical elements (DOEs) that convert depth (z) information into a modification of the two-dimensional image. Physical insight into DOE designs will spur this expanding field.
Aim: To precisely track microscopic fluorescent objects in biological systems in 3D with a simple, low-cost DOE system.
Approach: We designed a multiring spiral phase plate (SPP) generating a single-spot rotating point spread function (SS-RPSF) in a microscope. Our simple, analytically transparent design process uses Bessel beams to avoid rotational ambiguities and achieve a significant depth range. The SPP was inserted into the Nomarski prism slider of a standard microscope. Performance was evaluated using fluorescent beads and in live cells expressing a fluorescent chromatin marker.
Results: Bead localization precision was <25 nm in the transverse dimensions and ≤70 nm along the axial dimension over an axial range of 6 μm. Higher axial precision (≤50 nm) was achieved over a shallower focal depth of 2.9 μm. 3D diffusion constants of chromatin matched expected values.
Conclusions: Precise 3D localization and tracking can be achieved with an SS-RPSF SPP in a standard microscope with minor modifications.
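To illustrate how an SS-RPSF encodes depth: the single spot revolves about a reference center as the emitter moves through focus, so the measured rotation angle maps to z through a calibration curve. The sketch below assumes a purely linear angle-to-depth calibration; `deg_per_um` is a hypothetical calibration slope, not a value from this work.

```python
import numpy as np

def rpsf_z_from_angle(spot_xy, center_xy, deg_per_um, z0=0.0):
    # The rotating spot's angle about the calibrated reference center,
    # divided by an assumed linear angle-vs-depth slope, gives z.
    dx = spot_xy[0] - center_xy[0]
    dy = spot_xy[1] - center_xy[1]
    angle_deg = np.degrees(np.arctan2(dy, dx))
    return z0 + angle_deg / deg_per_um

# Example: a spot at 45 degrees with a 30 deg/um calibration slope
z = rpsf_z_from_angle((1.0, 1.0), (0.0, 0.0), deg_per_um=30.0)
```

In practice the angle-to-depth relation is measured with a bead z-stack and need not be linear; the transverse position comes from the spot centroid as usual.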
KEYWORDS: Polarization, Point spread functions, Imaging systems, 3D image processing, Polarimetry, 3D acquisition, Wavefronts, Scanning probe microscopy, Sensors, 3D modeling
The present work generalizes the theoretical model of the rotating PSF imaging based three-dimensional (3D) localization of point sources to high numerical aperture (NA) microscopy for which non-paraxial propagation of the imaging beam and the associated nontrivial vector character of light fields must be properly accounted for. Our analysis supports the prospects of simultaneous acquisition of the state of polarization and 3D location of a point source with high-NA objectives. A second approach for doing joint polarimetry and localization using a specialized birefringent plate without the need for high-NA objectives is also discussed briefly.
This work describes numerical methods for the joint reconstruction and segmentation of spectral images
taken by compressive sensing coded aperture snapshot spectral imagers (CASSI). In a snapshot, a CASSI
captures a two-dimensional (2D) array of measurements that is an encoded representation of both spectral
information and 2D spatial information of a scene, resulting in significant savings in acquisition time and data
storage. The double disperser coded aperture snapshot imager (DD-CASSI) is able to capture a hyperspectral
image from which a highly underdetermined inverse problem is solved for the original hyperspectral cube
with regularization terms such as total variation minimization. The reconstruction process decodes the
2D measurements to render a three-dimensional spatio-spectral estimate of the scene, and is therefore an
indispensable component of the spectral imager. In this study, we seek a particular form of the compressed
sensing solution that assumes spectrally homogeneous segments in the two spatial dimensions, and greatly
reduces the number of unknowns. The proposed method generalizes popular active contour segmentation
algorithms such as the Chan-Vese model and also enables one to jointly estimate both the segmentation
membership functions and the spectral signatures of each segment. The results are illustrated on a simulated
Hubble Space Telescope hyperspectral dataset, a real urban hyperspectral dataset, and a real DD-CASSI image
in microscopy.
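The joint estimation idea above can be caricatured as alternating between segment membership and per-segment spectral signatures. The toy sketch below omits both the CASSI measurement operator and the active-contour spatial regularization of the actual model, keeping only the alternating-update structure.

```python
import numpy as np

def segment_spectra(cube, k, iters=20):
    # Toy joint estimation of segment labels and per-segment mean spectra,
    # alternating assignment and spectral-signature updates. The model in
    # the text adds Chan-Vese-style spatial regularization, omitted here.
    H, W, B = cube.shape
    pix = cube.reshape(-1, B).astype(float)
    means = pix[np.linspace(0, len(pix) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pix), dtype=int)
    for _ in range(iters):
        d = ((pix[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)          # nearest-signature assignment
        for j in range(k):
            if np.any(labels == j):
                means[j] = pix[labels == j].mean(axis=0)
    return labels.reshape(H, W), means
```

Assuming spectrally homogeneous segments collapses the unknowns from one spectrum per pixel to one spectrum per segment plus a label map, which is the source of the dimensionality reduction described above.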
The solar-reflected brightness distribution of a man-made space object has regions of spatially uniform brightness
and spectral content that are interrupted only by boundaries separating one material region from another.
The relatively simple structure of this distribution permits, as we demonstrate here, spectral-correlation-based
strategies to extract information about the boundaries and material constituents of the segments of the object
surface. Still simpler compressive-sensing (CS) based approaches that require no specific spectral analysis can
also efficiently perform such information extraction, which is a critical task of any space-object identification
(SOI) system. We analyze here these latter approaches by means of statistical information theory (IT) in the
context of a highly idealized satellite model with rectilinear material boundaries and quasi-one-dimensional (1D)
brightness distribution. Our analysis includes spectrally dependent diffractive blur as well as detector noise
against which we optimize, via our IT calculations, the choice of the CS mask set, the bandwidth of the spectral
measurements, and the minimum number of measurements needed for extracting information about the boundary
locations and material identities.
An important aspect of spectral image analysis is identification of materials present in the object or scene being
imaged. Enabling technologies include image enhancement, segmentation and spectral trace recovery. Since
multi-spectral or hyperspectral imagery is generally low resolution, it is possible for pixels in the image to
contain several materials. Also, noise and blur can present significant data analysis problems. In this paper,
we first describe a variational fuzzy segmentation model coupled with a denoising/deblurring model for material
identification. A statistical moving average method for segmentation is also described. These new approaches
are then tested and compared on hyperspectral images associated with space object material identification.
An integrated array computational imaging system, dubbed PERIODIC, is presented which is capable of exploiting a
diverse variety of optical information including sub-pixel displacements, phase, polarization, intensity, and
wavelength. Several applications of this technology will be presented including digital superresolution, enhanced
dynamic range and multi-spectral imaging. Other applications include polarization based dehazing, extended depth of
field and 3D imaging. The optical hardware system and software algorithms are described, and sample results are
shown.
Digital super-resolution refers to computational techniques that exploit the generalized sampling theorem to
extend image resolution beyond the pixel spacing of the detector, but not beyond the optical limit (Nyquist
spatial frequency) of the lens. The approach to digital super-resolution taken by the PERIODIC multi-lenslet
camera project is to invert a forward model that describes the effects of sub-pixel shifts, optical blur, and
detector sampling as a product of matrix factors. The associated system matrix is often ill-conditioned, and
convergence of iterative methods to solve for the high-resolution image may be slow.
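The matrix-factor forward model can be sketched in one dimension. The version below uses integer circular shifts for simplicity (real sub-pixel shifts would make S an interpolation matrix), a circulant blur B, and a decimation matrix D; stacking channels with distinct shifts makes the combined system solvable.

```python
import numpy as np

def forward_matrix(n_hi, factor, shift, kernel):
    # One low-resolution channel's system matrix as the product D @ B @ S:
    # S = circular shift, B = circulant blur, D = decimation by `factor`.
    S = np.roll(np.eye(n_hi), shift, axis=1)
    B = np.zeros((n_hi, n_hi))
    c = len(kernel) // 2
    for i in range(n_hi):
        for j, w in enumerate(kernel):
            B[i, (i + j - c) % n_hi] += w
    D = np.eye(n_hi)[::factor]
    return D @ B @ S

# Stack `factor` channels with distinct sub-pixel shifts and invert
n, f = 8, 2
A = np.vstack([forward_matrix(n, f, s, [0.2, 0.6, 0.2]) for s in range(f)])
x = np.arange(n, dtype=float)
x_hat, *_ = np.linalg.lstsq(A, A @ x, rcond=None)
```

With an invertible blur kernel and shifts covering all sub-pixel phases, the stacked matrix is well conditioned; ill-conditioning of the kind mentioned above arises when the blur attenuates frequencies near the cutoff or the shifts are clustered.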
We investigate the use of pupil phase encoding in a multi-lenslet camera system as a means to physically
precondition and regularize the computational super-resolution problem. This is an integrated optical-digital
approach that has been previously demonstrated with cubic type and pseudo-random phase elements. Traditional
multi-frame phase diversity for imaging through atmospheric turbulence uses a known smooth phase perturbation
to help recover a time series of point spread functions corresponding to random phase errors. In the context of a
multi-lenslet camera system, a known pseudo-random or cubic phase error may be used to help recover an array
of unknown point spread functions corresponding to manufacturing and focus variations among the lenslets.
In any image post-processing system, an imperfect knowledge of the system point-spread function (PSF) leads to
errors in the estimation of the truth image from the image data. Here we treat the problem of PSF uncertainty
resulting from either uncorrected or partially corrected atmospheric turbulence related phase errors and its
impact on image estimation in the presence of Poisson count statistics.
Digital superresolution (DSR) is the process of improving image resolution by overcoming the sampling limit
of an imaging sensor, while optical superresolution (OSR) is the recovery of object spatial frequencies with
magnitude higher than the diffraction limit of the imaging optics. This paper presents an integrated,
Fisher-information-based analysis of the two superresolution (SR) processes applied to a sequence of sub-pixel
shifted images of an object whose support is precisely known. As we shall see, prior information about the
object support makes it possible to achieve OSR whose fidelity in fact improves with increasing size of the image
sequence. The interplay of the two kinds of SR is further explored by varying the ratio of the detector sampling
rate to the Nyquist rate.
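For a linear Gaussian observation model, the Fisher-information bookkeeping behind such an analysis reduces to a matrix computation. The sketch below assumes y = A x + n with white Gaussian noise, which is a simplification of the imaging model above.

```python
import numpy as np

def crb_diag(A, sigma):
    # For y = A x + n, n ~ N(0, sigma^2 I), the Fisher information matrix
    # is A.T @ A / sigma**2; the diagonal of its inverse gives the
    # Cramer-Rao lower bound on the variance of each parameter estimate.
    F = A.T @ A / sigma**2
    return np.diag(np.linalg.inv(F))
```

Stacking additional shifted frames adds rows to A and shrinks the bound, mirroring the observation above that SR fidelity improves with the size of the image sequence.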
We investigate the use of a novel multi-lens imaging system in the context of biometric identification, and more
specifically, for iris recognition. Multi-lenslet cameras offer a number of significant advantages over standard
single-lens camera systems, including thin form-factor and wide angle of view. By using appropriate lenslet spacing
relative to the detector pixel pitch, the resulting ensemble of images implicitly contains subject information
at higher spatial frequencies than those present in a single image. Additionally, a multi-lenslet approach enables
the use of observational diversity, including phase, polarization, neutral density, and wavelength diversities. For
example, post-processing multiple observations taken with differing neutral density filters yields an image having
an extended dynamic range. Our research group has developed several multi-lens camera prototypes for the
investigation of such diversities.
In this paper, we present techniques for computing a high-resolution reconstructed image from an ensemble of
low-resolution images containing sub-pixel level displacements. The quality of a reconstructed image is measured
by computing the Hamming distance between the Daugman iris code of a conventional reference iris image,
and the iris code of a corresponding reconstructed image. We present numerical results concerning the effect of
noise and defocus blur in the reconstruction process using simulated data and report preliminary work on the
reconstruction of actual iris data obtained with our camera prototypes.
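The quality metric above is the fractional Hamming distance between binary iris codes, counted only over bits that are valid in both occlusion masks. A minimal sketch (the bit layout is illustrative, not an actual iris encoding):

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a=None, mask_b=None):
    # Daugman-style comparison score: fraction of differing bits among
    # those marked valid in both masks (masks optional in this sketch).
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    if mask_a is None or mask_b is None:
        valid = np.ones(a.shape, dtype=bool)
    else:
        valid = np.asarray(mask_a, dtype=bool) & np.asarray(mask_b, dtype=bool)
    return np.count_nonzero((a ^ b) & valid) / np.count_nonzero(valid)
```

Lower scores indicate better agreement; a reconstruction is judged successful when its code's distance to the reference falls below the recognition threshold.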
KEYWORDS: Imaging systems, Signal to noise ratio, Modulation transfer functions, Image quality, Personal protective equipment, Lawrencium, Monochromatic aberrations, Digital image processing, Image restoration, Sensors
Appropriate modifications of the image-capture process in a modern imaging system can potentially enhance the digital restorability of the collected image data and thus lead to improved final-image quality. Examples of such modification are insertion of a phase mask in the pupil that encodes depth dependent intensity distribution in the wavefront, insertion of a specific defocus phase in one of the two arms of a conventional phase-diverse speckle imaging system, and use of progressively larger sub-pixel tip-tilts in the otherwise identical low-resolution image channels of an array imaging system. In each case, the final reconstructed image has a higher quality than the intermediate raw image(s) recorded by the sensor. This paper discusses the application of Fisher information to characterize the
performance of these three model imaging systems that exploit optical preconditioning to improve the digital restorability and thus the quality of the final image.
The insertion of a suitably designed phase plate in the pupil of an imaging system makes it possible to encode the depth dimension of an extended three-dimensional scene by means of an approximately shift-invariant PSF. The so-encoded image can then be deblurred digitally by standard image recovery algorithms to recoup the depth dependent detail of the original scene. A similar strategy can be adopted to compensate for certain monochromatic aberrations of the system. Here we consider two approaches to optimizing the design of the phase plate that are somewhat complementary - one based on Fisher information that attempts to reduce the sensitivity of the phase encoded image to misfocus and the other based on a minimax formulation of the sum of singular values of the system blurring matrix that attempts to maximize the resolution in the final image. Comparisons of these two optimization approaches are discussed. Our preliminary demonstration of the use of such pupil-phase engineering to successfully control system aberrations, particularly spherical aberration, is also presented.
Computational imaging systems are modern systems that consist of generalized aspheric optics and image processing capability. These systems can be optimized to greatly increase the performance above systems consisting solely of traditional optics. Computational imaging technology can be used to advantage in iris recognition applications. A major difficulty in current iris recognition systems is a very shallow depth-of-field that limits system usability and increases system complexity. We first review some current iris recognition algorithms, and then describe computational imaging approaches to iris recognition using cubic phase wavefront encoding. These new approaches can greatly increase the depth-of-field over that possible with traditional optics, while keeping sufficient recognition accuracy. In these approaches the combination of optics, detectors, and image processing all contribute to the iris recognition accuracy and efficiency. We describe different optimization methods for designing the optics and the image processing algorithms, and provide laboratory and simulation results from applying these systems and results on restoring the intermediate phase encoded images using both direct Wiener filter and iterative conjugate gradient methods.
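The direct Wiener-filter restoration step mentioned above can be sketched in the frequency domain. This assumes the PSF is centered, the same shape as the image, and that the noise-to-signal power ratio is approximated by a constant tuning parameter `nsr`.

```python
import numpy as np

def wiener_restore(image, psf, nsr=1e-2):
    # Direct Wiener deconvolution: H is the PSF's transfer function;
    # the filter conj(H) / (|H|^2 + nsr) inverts the blur where the
    # signal dominates and attenuates noise-dominated frequencies.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(image)))
```

For the cubic-phase systems described above, `psf` would be the (approximately misfocus-invariant) encoded PSF, so a single filter restores objects across the extended depth of field.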
Automated iris recognition is a promising method for noninvasive verification of identity. Although it is noninvasive, the procedure requires considerable cooperation from the user. In typical acquisition systems, the subject must carefully position the head laterally to make sure that the captured iris falls within the field-of-view of the digital image acquisition system. Furthermore, the need for sufficient energy at the plane of the detector calls for a relatively fast optical system which results in a narrow depth-of-field. This latter issue requires the user to move the head back and forth until the iris is in good focus. In this paper, we address the depth-of-field problem by studying the effectiveness of specially designed aspheres that extend the depth-of-field of the image capture system. In this initial study, we concentrate on the cubic phase mask originally proposed by Dowski and Cathey. Laboratory experiments are used to produce representative captured irises with and without cubic asphere masks modifying the imaging system. The iris images are then presented to a well-known iris recognition algorithm proposed by Daugman. In some cases we present unrestored imagery and in other cases we attempt to restore the moderate blur introduced by the asphere. Our initial results show that the use of such aspheres does indeed relax the depth-of-field requirements even without restoration of the blurred images. Furthermore, we find that restorations that produce visually pleasing iris images often actually degrade the performance of the algorithm. Different restoration parameters are examined to determine their usefulness in relation to the recognition algorithm.
A novel and successful optical-digital approach for removing certain
aberrations in imaging systems involves placing an optical mask between an image-recording device and an object to encode the wavefront phase before the image is recorded, followed by digital image deconvolution to decode the phase. We have observed that when appropriately engineered, such an optical mask can also act as a form of preconditioner for certain deconvolution algorithms. It can boost information in the signal before it is recorded well above the noise level, enabling digital restorations of very high quality. In this paper, we 1) examine the influence that a phase mask has on the incoming signal and how it subsequently affects the performance of restoration algorithms, and 2) explore the design of optical masks, a difficult nonlinear optimization problem with multiple design parameters, for removing certain aberrations and for maximizing
restorability and information in recorded images.
The year 2003 marks the 20th anniversary of introducing interdisciplinary graduate optics education at the University of New Mexico. The Ph.D. program in Optical Sciences has produced over 75 graduates. A new M.S. program in Optical Science and Engineering, introduced in Fall 2002, is rapidly gaining popularity. This paper reviews both programs, focusing on their unique features.
University of New Mexico has developed a comprehensive plan for a new B.S. degree in Optical Science and Engineering, accompanied by teacher training and enhancement of K-12 optics education. The plan incorporates curriculum development, creation of new laboratories, development of optics courses for teachers, creation of outreach programs, and involvement of industry and government laboratories.
By suitably phase-encoding optical images in the pupil plane and then digitally restoring them, one can greatly improve their quality. The use of a cubic phase mask originated by Dowski and Cathey to enhance the depth of focus in the images of 3-d scenes is a classic example of this powerful approach. By using the Strehl ratio as a measure of image quality, we propose tailoring the pupil phase profile by minimizing the sensitivity of the quality of the phase-encoded image of a point source to both its lateral and longitudinal coordinates. Our approach ensures that the encoded image will be formed under a nearly shift-invariant imaging condition, which can then be digitally restored to a high overall quality nearly free from the aberrations and limited depth of focus of a traditional imaging system. We also introduce an alternative measure of sensitivity that is based on the concept of Fisher information. In order to demonstrate the validity of our general approach, we present results of computer simulations that include the limitations imposed by detector noise.
In optical synthesis imaging, the incomplete sampling of the
complex visibility function, atmospherically induced phase
perturbations, detector noise, and low light levels all limit
one's ability to produce high-fidelity images. One improves upon
these limitations by using iterative non-linear deconvolution and
self-calibration techniques to produce image models that are
consistent with the data. The question of image fidelity, or how
well these models faithfully represent the true object, is an
important one. A possible answer can be formulated
using the concepts of statistical information theory. We apply
the concepts of Shannon Information to an optical interferometer
and propose methods for monitoring the information content of
images as a measure of image quality.
The next generation of stellar interferometer arrays will develop methods for observing fainter sources with greater resolution and create synthesized images at optical and IR wavelengths like those obtained from radio telescope arrays. Techniques using single-mode (SM) fiber optics offer significant advantages for future ground and space-based interferometers.
SM fibers and integrated optics components can be used for nearly lossless transport and combination of light beams in a stellar interferometer, avoiding the multiple lossy reflective surfaces that must be kept in precise alignment with conventional optics. Furthermore, SM fibers spatially filter light corrupted by atmospheric seeing fluctuations or optical aberrations, increasing the fringe visibility and potentially improving measurement accuracy for bright sources by an order of magnitude. Controlling the dispersion and polarization properties of SM fibers is possible, but the poor coupling efficiency to aberrated light is a major limitation.
Multiple-core (MC) fibers consist of a symmetrical arrangement of SM fiber cores inside a common cladding. One MC fiber is placed at the focus of each telescope, where light couples into the fiber and propagates through the cores for a short distance. Each MC fiber is then drawn apart into individual single-core fibers, and the resulting SM fiber beams are interfered pair-wise with beams from other telescopes. MC fibers are predicted to have an improved coupling efficiency over standard SM fibers, and the MC fiber geometry is well suited for transporting and combining light beams with minimal losses in an interferometer array.
Computer simulations of fiber-linked interferometer arrays were performed to evaluate the performance of MC and standard SM fibers with different conditions of atmospheric, photon and detector noise. The effects of waveguide and material dispersion over a broad band at visible wavelengths are also included. The simulations determine the fiber modes, calculate the light coupling, propagate light through the fibers, and measure the beam correlations. Photon and detector noise are added and noisy estimates of the fringe power and bispectrum are found for the interferometer baselines. Statistics are then calculated over large ensembles and the measurements are processed to reconstruct an image of the source object.
MC fibers are found to have greatly improved coupling efficiency over conventional SM fibers to aberrated light, and the noise sensitivity of visibility measurements and images also improves under certain conditions. Simulated images are shown and attempts to make MC fibers are discussed.
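The per-core quantity these simulations evaluate is the overlap-integral coupling efficiency between the focal-plane field and the fiber mode. A minimal discretized sketch:

```python
import numpy as np

def coupling_efficiency(field, mode):
    # Fraction of focal-plane field power coupled into one fiber mode:
    # |<mode, field>|^2 / (<mode, mode> <field, field>), with the
    # overlap integral discretized on the sampling grid.
    num = np.abs(np.vdot(mode, field)) ** 2
    return num / (np.vdot(mode, mode).real * np.vdot(field, field).real)
```

Atmospheric aberrations distort the focal spot and reduce its overlap with any single Gaussian-like mode; an MC fiber samples the distorted spot with several cores, which is why its total coupled power can exceed that of a single SM core.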
KEYWORDS: Microelectromechanical systems, Deconvolution, Image restoration, Signal to noise ratio, Spatial frequencies, Data processing, Imaging systems, Information theory, Point spread functions, Distance measurement
A crucial step in image restoration involves deconvolving the true object from noisy and often poorly sampled image data. Deconvolution under these conditions represents an ill-posed inversion problem, in that no unique computationally stable solution exists. We propose a statistical-information-based approach to regularize the deconvolution process. Using Shannon Information, one monitors the information about the object that is processed during the deconvolution in order to obtain an optimal stopping criterion and hence the "best" solution to the inversion problem. The optimal stopping criterion is based on how Shannon Information changes in the spatial frequency domain as the deconvolution proceeds. We present results for the Maximum Entropy Method (MEM) and Richardson-Lucy (RL) non-linear deconvolution techniques.
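The RL iteration mentioned above is a short multiplicative update. The sketch below uses FFT-based circular convolution and a fixed iteration count; in the scheme described above, that count would instead be chosen by the Shannon-information stopping criterion.

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=25):
    # Multiplicative Richardson-Lucy update: at each step, the current
    # estimate is reblurred, compared ratio-wise against the data, and
    # corrected by the back-projected ratio (psf centered, image-sized).
    H = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x, Hf: np.real(np.fft.ifft2(np.fft.fft2(x) * Hf))
    est = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        ratio = image / np.maximum(conv(est, H), 1e-12)
        est = est * conv(ratio, np.conj(H))
    return est
```

Run too long on noisy data, the iteration amplifies noise; run too short, it leaves the image under-restored, which is exactly the trade-off an information-based stopping rule aims to balance.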
The optics faculty at the University of New Mexico (UNM) are proposing to create a Master's degree program in Optical Science and Engineering. A natural complement to the highly successful Ph.D. program in optics over the past 15 years at UNM, the Master's program, unlike the Ph.D. program, will be a multiple-option program that will serve the educational, research, and training needs of an entire spectrum of students, professionals, and institutions. Though currently only a well-developed proposal, it has garnered wide support from industry, academia, and government laboratories in the State of New Mexico, and is on track for an expected implementation in Fall 2000.
We present results of extensive simulation of the performance of an optical interferometer array based on a new fiber concept with closely spaced multiple cores symmetrically arranged inside a common cladding. While sharing the principal advantages of single-mode (SM) fibers, including spatial filtering of phase-corrupted light and lossless propagation, the multiple-core (MC) fibers are predicted to have an enhanced coupling efficiency and comparable noise sensitivity for typical observing conditions of low light levels and moderate to strong turbulence. Moreover, MC fibers have unique practical advantages: by presenting a larger face, they permit relatively easy focusing of light into the fiber, and they are well suited to purely fiber-based beam splitting and beam recombination, making it possible to construct an all-fiber interferometer with reduced complexity and cost. Our simulations of the MC fiber-linked interferometer array encompass a variety of conditions of atmospheric turbulence and photon-counting noise, for observations of a monochromatic point source. We compare the results from simulation to predictions from previous detailed theoretical calculations. Comparisons are made to an interferometer linked with standard SM fibers. We find that the coupling efficiency and sensitivity of the simulated interferometer using MC fibers generally agree with theoretical predictions. Efforts to manufacture MC fibers are also reviewed.
Principally because single-mode (SM) fibers are perfect spatial filters, they have great potential for ground-based interferometry. The author of this paper has recently shown how a novel multiple-core design can enhance the overall efficiency of an SM fiber many-fold when the fiber is coupled to atmospherically degraded wavefronts. We present theoretical results on the possible enhancement of signal-to-noise ratios in image reconstruction with interferometers using such fibers with a multiple-core geometry. Our results have strong implications for the feasibility of an all-fiber interferometry concept, both on the ground and in space.
KEYWORDS: Telescopes, Interferometers, Wavefronts, Signal to noise ratio, Space telescopes, Image restoration, Single mode fibers, Sensors, Photodetectors, Point spread functions
We discuss the performance of a fiber-coupled image-plane interferometer in which one transports individual image-plane wavefronts point by point by means of bundles of single-mode fibers to a beam-combining station where such wavefronts are cross-correlated. We compute the SNR of image reconstruction under low-light conditions where the principal source of noise is the shot noise of photodetection.
We review our previous work in which we have shown that imaging by an ideal optical interferometric array, which suffers only from the Poisson noise of photoelectron counting, is essentially insensitive to how light beams are split and recombined. Any physically large, monolithic array such as those planned for space will however also suffer from mechanical noise in its structure. We show that inclusion of this technical noise amounts to an effective decorrelation that degrades sensitivity. We calculate this decorrelation factor for an otherwise ideal nC2 array and make some pertinent comments about its effect on other sorts of arrays as well. Finally, we review our work on ground-based arrays which suffer sharp reductions in the sensitivity compared to ideal arrays, particularly at low count rates.