We developed and tested a system for measuring the through-focus point spread function (PSF) of intraocular lenses (IOLs) and converting it to the modulation transfer function (MTF). The system consists of a light source, an eye model, a test IOL, a 10X magnifier, and a 16-bit CCD camera. By capturing images through a range of focus positions, the PSF can be found and then converted to the MTF. Unlike basic monofocal lenses, multifocal IOLs can focus at two or more positions, while extended depth of focus (EDOF) IOLs provide a continuum of foci. These advanced IOLs are beneficial as they more closely resemble the natural range of focus of the eye. The MTF is a standard approach for IOL characterization, but existing MTF measurement methodology is not optimized for multifocal or EDOF IOLs. As IOLs continue to evolve, using the MTF to predict image quality is vital to implanting the most appropriate lens in a patient's eyes.
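As a concrete illustration of the PSF-to-MTF conversion described above, a minimal numpy sketch follows (the function name and the effective pixel-pitch parameter are illustrative assumptions, not details from the paper):

```python
import numpy as np

def psf_to_mtf(psf, pixel_pitch_um):
    """Convert a measured 2D PSF to the MTF via the Fourier transform.

    psf: background-subtracted irradiance image (square array assumed).
    pixel_pitch_um: camera pixel pitch divided by the magnifier power,
                    i.e., the effective sample spacing at the IOL focus.
    """
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf) / np.abs(otf).max()        # normalize so MTF(0) = 1
    # Spatial-frequency axis in cycles/mm (1000 um per mm).
    freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[0],
                                           d=pixel_pitch_um / 1000.0))
    return freqs, mtf
```

Repeating this conversion at each position of the captured focus stack yields the through-focus MTF.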
The effects of the misalignment of a test surface in a deflectometry system were explored. A Stewart platform hexapod stage was used to create the intentional misalignment, rotating the test surface about the X and Y axes as well as displacing its Z location. The measured surface maps were analyzed by fitting them to Zernike polynomials. The resulting Zernike coefficients were used to show the relationship between the induced pose changes and the reconstructed surface map. Such an understanding of how errors in location and orientation affect deflectometry measurement results will be beneficial in future measurements of ophthalmic optics with a deflectometry system.
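A minimal sketch of the surface-map-to-Zernike fit used in this kind of analysis (the term list and ordering here are one convenient choice, not necessarily the paper's):

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike terms (unnormalized, one convenient
    ordering) evaluated on the unit disk; enough to capture the
    pose-induced terms (tilt, defocus, astigmatism, coma)."""
    return np.stack([
        np.ones_like(rho),                        # piston
        rho * np.cos(theta),                      # x tilt
        rho * np.sin(theta),                      # y tilt
        2 * rho**2 - 1,                           # defocus
        rho**2 * np.cos(2 * theta),               # astigmatism 0/90
        rho**2 * np.sin(2 * theta),               # astigmatism 45
        (3 * rho**3 - 2 * rho) * np.cos(theta),   # x coma
        (3 * rho**3 - 2 * rho) * np.sin(theta),   # y coma
    ], axis=-1)

def fit_zernike(surface, rho, theta):
    """Least-squares fit of a measured surface map to Zernike terms.
    surface, rho, theta: 1D arrays of valid (in-aperture) samples."""
    A = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, surface, rcond=None)
    return coeffs
```

Tracking how the tilt, defocus, and coma coefficients change with the commanded hexapod pose then exposes the misalignment sensitivities.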
Augmented reality (AR) is emerging as an innovative frontier in medical applications. The ability to provide preoperative training, thorough explanation and demonstration of procedures to patients, and intraoperative information and navigation are among many valuable use cases. Head-mounted display (HMD) based AR devices are particularly promising for use in medical settings due to their hands-free capabilities. HMD AR devices have shown immense promise in the effort to produce better-trained surgeons and to pioneer new risk-reduced surgical protocols to advance modern medicine. Current studies on the benefit of HMD AR in medical applications have been limited to off-the-shelf devices that are not specifically tailored for their use cases. Most FDA-approved devices for AR in medical applications are off-the-shelf as well. However, tailoring to a specific use case may further improve HMD AR devices for medical settings. This study proposes an HMD AR device tailored for use in medical applications. The proposed design contains the expected hallmarks of a user-friendly HMD AR device, such as lightweight design and high see-through transmittance, with particular attention paid to facets that affect medical applications: battery longevity (for continued use during surgery) and display brightness (for use in bright operating room environments).
A system has been developed to measure spherocylindrical spectacle lenses. The global pandemic limited access to regular eye exams; the system bypasses this limitation by providing at-home prescription measurements. The power and orientation of the spectacle lenses are obtained with a cell phone camera, a displayed or printed target, and a magnetic stripe card. The spatially varying magnification of the lenses is calculated by examining the image of the target captured through the lens at a fixed distance. This information is then used to calculate a clinical prescription, i.e., Sph/Cyl × Axis. The system is tested with 48 single-power spherocylindrical lenses spanning −11.25 to +4.25 D in Sph, −5.25 to −0.25 D in Cyl, and the full range of Axis values. The results are plotted against reference prescriptions from a commercially available lensmeter and show good agreement. The image processing and clinical prescription calculation are discussed here.
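Once a per-meridian power has been recovered from the measured magnification (that mapping depends on the exact capture geometry), converting the resulting power cross to a clinical prescription is a standard calculation; a hedged sketch:

```python
def power_cross_to_rx(p1, p2, theta1_deg):
    """Principal-meridian powers -> minus-cylinder Sph/Cyl x Axis.

    p1: power (D) along the meridian at theta1_deg.
    p2: power (D) along the perpendicular meridian (theta1_deg + 90).
    In minus-cylinder form, the sphere is the more positive meridian
    power and the axis lies along that meridian.
    """
    if p1 >= p2:
        sph, cyl, axis = p1, p2 - p1, theta1_deg % 180
    else:
        sph, cyl, axis = p2, p1 - p2, (theta1_deg + 90) % 180
    return sph, cyl, axis

# Example: -2.00 D measured at 30 deg and -4.50 D at 120 deg
# gives Sph -2.00, Cyl -2.50, Axis 30.
print(power_cross_to_rx(-2.00, -4.50, 30))
```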
Guest editors Daniel Malacara-Hernández, Alfredo Dubra, Jim Schwiegerling, Pablo Artal, Yobani Mejía, and Eva Acosta Plaza introduce the Special Section on Advances in Optical Measurements and Instrumentation for Ophthalmology and Optometry.
A deflectometry simulation system for measuring and generating the surface profiles of freeform optical elements was designed. Unlike alternative optical metrology methods, deflectometry systems utilize pixel-to-pixel point mapping to measure a specular optical surface. The in-lab setup uses readily available materials such as an LCD monitor, a CMOS camera, and basic lab items such as optical posts and post holders. Software employing phase unwrapping derives the incident and reflected light vectors from the surface under test. These vectors provide slope information, which can then be integrated into a surface reconstruction. This allows for a non-contact surface reconstruction method as well as a simulation to help calculate the best respective placements of the monitor, camera, and test surface. This type of system is useful for measuring more challenging optical surfaces such as freeforms and convex optics. Augmenting the system with distance-sensing lasers could enable a very user-friendly and intuitive experience for creating surface reconstruction profiles, with much more reliable system geometry information and reduced stray light and scattering noise. Multiple rotation stages can also be used to adjust the tip and tilt angles of test surfaces.
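For the slope-integration step, one common non-contact reconstruction approach is Fourier-domain (Frankot-Chellappa) least-squares integration; a minimal sketch under the assumption of uniformly sampled rectangular slope maps:

```python
import numpy as np

def integrate_slopes(sx, sy, dx=1.0):
    """Frankot-Chellappa integration: reconstruct a surface z(x, y)
    from slope maps sx = dz/dx, sy = dz/dy by solving the least-squares
    integrability condition in the Fourier domain."""
    ny, nx = sx.shape
    fx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    fy = np.fft.fftfreq(ny, d=dx) * 2 * np.pi
    FX, FY = np.meshgrid(fx, fy)
    denom = FX**2 + FY**2
    denom[0, 0] = 1.0                    # avoid divide-by-zero at DC
    Z = (-1j * FX * np.fft.fft2(sx) - 1j * FY * np.fft.fft2(sy)) / denom
    Z[0, 0] = 0.0                        # piston term is unrecoverable
    return np.real(np.fft.ifft2(Z))
```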
The intraocular lens (IOL) industry is continuously evolving with more complex surface designs that require affordable and timely surface measurements of extended depth of focus (EDOF) and diffractive multifocal lenses. Current systems for measuring grating profiles, such as AFM, SEM, or optical profilometers, are expensive, require intensive training, and are sometimes destructive. Furthermore, the fields of view of these systems are typically limited, so measuring the full aperture of the lens requires repeated measurements and stitching of the results. The system developed here allows for quick profile measurements with easy-to-obtain equipment that will help examine and develop diffractive multifocal IOLs. This study integrates a 3D GelSight camera (GelSight, Inc., Waltham, Massachusetts), a stepper motor, and an Arduino board with a driver board to automate the measurement of IOL gratings. The GelSight 1.0X camera has a height resolution of 1.0 μm and is provided with software used to select and export data from areas of interest. Post-processing is required to analyze the data from the GelSight, but it can be customized to the user's needs. Using simple Arduino code and a stepper motor to move the camera onto the sample allows for a hands-free measurement technique that promotes accuracy and repeatability. Automation allows beginners to quickly use the newly proposed system for many profile measurement applications with little setup time. The system will benefit the development of IOLs as a quick and easy check in the production process of these advanced lens designs.
A system is developed for simulating the image quality and dysphotopsia of multifocal lenses. To achieve this, the simulation modifies a high dynamic range (HDR) photograph by blurring it with the lens' point spread function in MATLAB. Dysphotopsias are instances of unwanted or missing light within the eye. Common forms of dysphotopsia include glare, starburst (radial lines emanating from bright sources), and halo (rings of light surrounding bright sources), with the latter two typically occurring at night or in other high-contrast settings. Dysphotopsia is considered the most common complaint of patients after successful cataract surgery and has thus earned significant attention in the context of intraocular lenses (IOLs). Fewer studies have examined multifocal contact lens dysphotopsia, despite the documented impact dysphotopsia has on the image quality of multifocal lenses. This simulation is the first handling of dysphotopsia that combines HDR images and the specifics of the lens design to predict how the dysphotopsia will appear to patients. Being able to show patients accurate simulations of dysphotopsia has the benefit of setting proper patient expectations before they begin using multifocal lenses. Furthermore, these simulated images can also potentially help diagnose patient problems by giving patients an accurate baseline for comparison.
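The core blurring operation can be sketched in a few lines (shown here in Python rather than the MATLAB used in the paper; names are illustrative). The HDR input matters because halo and starburst arise from faint PSF tails around sources that would clip in a standard 8-bit photograph:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def blur_hdr(hdr, psf):
    """Blur a linear (not gamma-encoded) HDR image with a lens PSF via
    FFT convolution; tone mapping for display happens only afterwards.
    hdr: (H, W, 3) linear radiance image; psf: 2D kernel, psf <= image."""
    k = psf / psf.sum()                          # conserve total energy
    pad = np.zeros(hdr.shape[:2])
    ky, kx = k.shape
    pad[:ky, :kx] = k
    pad = np.roll(pad, (-(ky // 2), -(kx // 2)), axis=(0, 1))  # center it
    K = fft2(pad)
    out = np.empty_like(hdr)
    for c in range(hdr.shape[-1]):               # per color channel
        out[..., c] = np.real(ifft2(fft2(hdr[..., c]) * K))
    return np.clip(out, 0.0, None)
```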
A system for measuring the through-focus point spread function (PSF) of intraocular lenses (IOLs) and converting it to the modulation transfer function (MTF) is developed. The system consists of a light source, eye model, IOL, magnifier, and CCD camera. By capturing the resulting image through a range of focus positions, the PSF is found and converted to the MTF. The MTF displays differences in the depth of focus for monofocal, multifocal, and extended depth of focus (EDOF) IOLs. As multifocal and EDOF IOLs evolve, using the MTF to predict image quality is vital to implanting the most appropriate lens in a patient's eyes.
The SpectRx system has been developed to measure sphero-cylindrical spectacle lens power as an alternative to clinical lensmeters. This work was inspired by the ongoing global pandemic, which limited physical access to eye care facilities for regular eye exams. The SpectRx system aims to bypass this limitation by providing at-home prescription measurements. The power and orientation of the spectacle lenses are obtained using readily available objects: a cell phone camera, a displayed or printed target, and a fixed-dimension magnetic stripe card. The magnification of the lenses can be calculated by examining the image of the target captured through the lens at a fixed distance. The magnification may be spatially varying due to the cylinder component of the lens. Processing the pictures captured with a cell phone camera is done automatically with standard image processing algorithms. The processed images, in turn, are used to calculate a clinical prescription, i.e., Sph/Cyl × Axis. The SpectRx may expand access to quality eye care not only in the current pandemic situation but also in locations where eye care may not be easily accessible, such as some rural or remote areas. The image processing and clinical prescription calculation are discussed here.
The diffraction efficiency of conventional diffractive lenses is typically analyzed using the complex Fourier series expansion coefficients. While conventional diffractive lenses typically target high diffraction efficiency in a single diffractive order, applications such as multifocal intraocular lenses seek high diffraction efficiency in multiple diffractive orders. Here, the complex Fourier series technique is generalized to handle these multifocal lenses, and applied to a novel trifocal intraocular lens design.
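A minimal numerical sketch of the underlying Fourier-series calculation (illustrative code, not the paper's generalized formulation): sample one period of the phase profile, and the squared magnitudes of the Fourier coefficients of the complex transmittance give the per-order efficiencies.

```python
import numpy as np

def diffraction_efficiencies(phase, orders):
    """Diffraction efficiency per order from one period of a phase
    profile, via complex Fourier series coefficients.

    phase: phase delay (radians) sampled uniformly over one period.
    orders: iterable of integer diffraction orders m.
    Returns eta_m = |c_m|^2, with c_m the Fourier coefficient of
    the complex transmittance exp(i*phase).
    """
    n = phase.size
    x = np.arange(n) / n                 # normalized position in period
    t = np.exp(1j * phase)               # complex transmittance
    return {m: abs(np.sum(t * np.exp(-2j * np.pi * m * x)) / n) ** 2
            for m in orders}

# Sanity check: a full 2*pi blazed sawtooth puts ~100% into order m = 1.
phase = 2 * np.pi * np.arange(512) / 512
print(diffraction_efficiencies(phase, [0, 1, 2]))
```

For a multifocal design, the step height is chosen so that useful energy lands in two or more orders (for example, a roughly half-wave step splits light between orders 0 and 1).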
A system for measuring the orientation and power of sphero-cylindrical lenses has been developed. The system minimizes the need for specialized equipment and training, relying instead on the ubiquitous cell phone camera, a magnetic stripe card, and a target pattern. By capturing an image of the target through the lenses under test and analyzing the distortion in the resulting image, the orientation and powers of sphero-cylindrical lenses can be determined. In modern eye clinics, sphero-cylindrical spectacle lenses are readily measured with a lensmeter. However, there are many situations where this measurement is not feasible. These include remote or rural locations where access to eye care may not exist or may require impractical travel. Furthermore, the ongoing global pandemic has often put restrictions on contact between the patient and the eye care provider. Telemedicine, which can connect patients to eye care providers, lacks physical access to the spectacles for measurement. The system developed in this effort overcomes this limitation by allowing remote measurement of the lenses with items found in most households. Such a system would benefit often underserved populations and expand access to quality eye care.
The Shack-Hartmann wavefront sensor was adapted to measure the aberrations of the human eye in the 1990s. The ability to rapidly and accurately measure ocular aberrations unleashed a flurry of activity aimed at understanding the dynamics of the eye's aberrations, as well as the development of a wide array of technologies to correct these aberrations on an individual basis. This paper describes some of the adaptations necessary to enable the Shack-Hartmann sensor to work with the eye and illustrates several different form factors and novel techniques that have been used to expand the dynamic range of the sensor. Furthermore, some of the revelations of population-based studies of ocular aberrations are reviewed, including insights into the optical design of the eye. Finally, various means of correcting the measured aberrations, including laser refractive surgery, custom contact lenses, and even spectacle lenses, are described to illustrate current capabilities of ocular wavefront correction and potential pitfalls associated with the various modalities.
Extended Depth of Focus (EDOF) systems have a broad set of applications in optics, including enhancement of microscopy images and the treatment of presbyopia in the aging eye. The goal of EDOF systems is to axially elongate the region of focus for the optical system while simultaneously keeping the transverse dimension of the focus small to ensure resolution. The pinhole effect of reducing the size of the aperture is a well-known means of extending the depth of focus. The pinhole effect, however, has the drawback of reducing the light entering the system and reducing the resolution of the image. Alternatively, phase masks can be placed in the pupil to enhance depth of focus, maintain light levels, and improve resolution relative to the pinhole system. Here, a technique for decomposing these phase masks into a set of quadratic phase factors is explored. Each term in the set acts like a lens, and the foci of these lenses add coherently to give the overall focus profile of the system. This technique can give insight into existing EDOF techniques and be used to create novel EDOF phase masks.
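A hedged sketch of the idea: the on-axis through-focus response of a rotationally symmetric phase mask can be computed directly, and a mask built from quadratic phase terms produces a coherent sum of displaced foci (the parameter values below are arbitrary, for illustration only):

```python
import numpy as np

rho = np.linspace(0.0, 1.0, 2000)        # normalized pupil radius
psi = np.linspace(-30.0, 30.0, 601)      # defocus parameter (radians)

def axial_response(pupil_phase):
    """On-axis through-focus field for a rotationally symmetric pupil:
    U(psi) = int_0^1 exp(i*phase(rho)) exp(i*psi*rho^2) 2*rho d(rho)."""
    integrand = (np.exp(1j * pupil_phase)[None, :]
                 * np.exp(1j * np.outer(psi, rho**2))
                 * 2.0 * rho[None, :])
    return np.trapz(integrand, rho, axis=1)

# Each quadratic term a_k*exp(i*alpha_k*rho^2) acts as a lens whose
# focus sits near psi = -alpha_k, and the foci add coherently. A
# two-term, phase-only mask therefore yields two distinct axial peaks:
alphas, amps = [-8.0, 8.0], [1.0, 1.0]
mask = np.angle(sum(a * np.exp(1j * al * rho**2)
                    for a, al in zip(amps, alphas)))
I_axial = np.abs(axial_response(mask))**2    # peaks near psi = +/-8
```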
The light field describes the radiance at a given point provided by a ray coming from a particular direction. Integrating the light field for all possible rays passing through that point gives total irradiance. For a static scene, the light field is unique. Cameras act as integrators of the light field. Previously, it was demonstrated that freeware rendering software can be used to simulate the light field entering an arbitrary camera lens. This is accomplished by placing an array of ideal pinhole cameras at the entrance pupil location and rendering. The pinhole camera images encode the ray directions for rays passing through the pinholes. The set of images from this array then describes the light field. Images for real camera lenses with different types of aberrations are then simulated directly from the light field. The advantage of this technique is that the light field only needs to be calculated once for a given scene. Calculation of the light field is computationally expensive and the practicality of implementing high resolution light field simulations on a desktop computer is limited. However, cloud-based rendering services with arrays of CPUs and GPUs are now readily available and affordable. These services enable more realistic simulations and different scenes to be rapidly created. Here, the techniques are demonstrated for different real lens aberration forms.
Wavefront coding refers to the use of a phase-modulating element in conjunction with deconvolution to extend the depth of focus of an imaging system. The coding element is an asymmetrical phase plate, for most applications in the form of a trefoil or a cubic polynomial. Phase plates with a trefoil shape generate not only the desired amount of trefoil aberration but also spherical aberration. It has recently been shown that a wavefront-coding-based optical system shows high tolerance to spherical aberration for monochromatic images; however, the depth of focus is considerably shortened for color images. In this work, we show how to modify the shape of a phase plate in order to optimize its performance for color imaging. The design parameters of the phase plate are obtained by minimizing a merit function by means of genetic algorithms developed for this purpose. The optical characteristics of the phase plates are evaluated in Zemax, providing feedback to the optimization algorithm. Results are illustrated by numerical simulations of color images.
The light field describes the radiance at a given point from a ray coming from a particular direction. Integrating over all rays passing through the point gives the total irradiance. For a static scene, the light field is unique. Cameras integrate the light field: each camera pixel sees the integration of the light field over the entrance pupil for ray directions associated with lens aberrations. Images of a scene for any lens can then be simulated if the light field is known at the lens' entrance pupil. Freeware rendering software was used to create a scene's light field, and images for real camera lenses with different aberrations are simulated.
Realistic image simulation is useful for understanding artifacts introduced by lens aberrations. Simple simulations that convolve the system point spread function (PSF) with a scene are typically fast because fast Fourier transform (FFT) techniques are used to calculate the convolution in the Fourier domain. This technique, however, inherently assumes that the PSF is shift invariant. In general, optical systems have a shift-variant PSF, and the speed of the FFT is lost. To simulate shift-variant cases, the scene is often broken down into a set of sub-regions over which the PSF is approximately isoplanatic. The FFT methods can then be employed over each sub-region, and the sub-regions are recombined to create the image simulation. There is an obvious tradeoff between the number of sub-regions and the fidelity of the final image. This fidelity depends on how quickly the PSF changes between adjacent sub-regions. Here, a different strategy is employed: PSFs at different points in the field are sampled and decomposed into a novel set of basis functions. The PSF at locations between the sampled points is estimated by interpolating the expansion coefficients of the decomposition. In this manner, the image simulation is built up by combining the interpolated PSFs across the scene. The technique is verified by generating a grid of PSFs in raytracing software and determining the interpolated PSFs at various points in the field. These interpolated PSFs are compared to PSFs calculated directly for the same field points.
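The paper's basis set is its own contribution; as a stand-in, the same workflow can be sketched with an SVD-derived (PCA) basis and bilinear interpolation of the coefficients:

```python
import numpy as np

def build_basis(psf_grid, n_modes=20):
    """Derive an orthonormal PSF basis from the field samples via SVD.
    psf_grid: (ny, nx, h, w) PSFs sampled on a regular field grid."""
    ny, nx, h, w = psf_grid.shape
    M = psf_grid.reshape(ny * nx, h * w)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    basis = Vt[:n_modes]                         # (n_modes, h*w)
    coeffs = (M @ basis.T).reshape(ny, nx, n_modes)
    return basis, coeffs

def psf_at(fx, fy, basis, coeffs, shape):
    """Bilinearly interpolate expansion coefficients at fractional grid
    position (fx, fy), then resynthesize the local PSF."""
    x0, y0 = int(fx), int(fy)
    tx, ty = fx - x0, fy - y0
    c = ((1 - tx) * (1 - ty) * coeffs[y0, x0]
         + tx * (1 - ty) * coeffs[y0, x0 + 1]
         + (1 - tx) * ty * coeffs[y0 + 1, x0]
         + tx * ty * coeffs[y0 + 1, x0 + 1])
    return (c @ basis).reshape(shape)
```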
Quadratic pupils representing Gaussian apodization and defocus are expanded into Zernike polynomials. Combinations of the pupil expansion coefficients are used, in turn, to expand the optical transfer function into a novel set of basis functions.
Phoropters are the most common instrument used to detect refractive errors. During a refractive exam, lenses are flipped in front of the patient, who looks at the eye chart and tries to read the symbols. The procedure is fully dependent on the cooperation of the patient, provides only a subjective measurement of visual acuity, and can at best provide a rough estimate of the patient's vision. Phoropters are difficult to use for mass screenings, require a skilled examiner, and are hard to use with young children and the elderly. We have developed a simplified, lightweight automatic phoropter that can measure the optical error of the eye objectively without requiring the patient's input. The automatic holographic adaptive phoropter is based on a Shack-Hartmann wavefront sensor and three computer-controlled fluidic lenses. The fluidic lens system is designed to provide power and astigmatic corrections over a large range, without the need for verbal feedback from the patient, in less than 20 seconds.
A technique for decomposing the Optical Transfer Function (OTF) into a novel set of basis functions has been developed. The decomposition provides insight into the performance of optical systems containing both wavefront error and apodization, as well as the interactions between the various components of the pupil function. Previously, this technique has been applied to systems with circular pupils with both uniform illumination and Gaussian apodization. Here, systems with annular pupils are explored. In cases of annular pupil with simple defocus, analytic expressions for the OTF decomposition coefficients can be calculated. The annular case is not only applicable to optical systems with central obscurations, but the technique can be extended to systems with multiple ring structures. The ring structures can have constant area as is often found in zone plates and diffractive lenses or the rings can have arbitrary areas. Analytic expressions for the OTF decomposition coefficients again can be determined for ring structures with constant and quadratic phase variations. The OTF decomposition provides a general tool to analyze and compare a diverse set of optical systems.
The Zernike polynomials provide a generalized framework for analyzing the aberrations of non-rotationally symmetric optical systems with circular pupils. Even when systems are designed to be rotationally symmetric, fabrication and alignment errors lead to non-rotationally symmetric aberrations. The properties of the Zernike polynomials are reviewed. Different indexing, normalization, and ordering schemes are found in the literature and in commercial software. Here, the schemes are compared to demonstrate some of the potential pitfalls of comparing Zernike polynomial results from different sources.
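As one example of the indexing pitfalls, a short sketch converting between the OSA/ANSI single index and the (n, m) double index (other schemes, such as Noll's, start at j = 1 and order and normalize the terms differently):

```python
import math

def osa_to_nm(j):
    """OSA/ANSI single index -> (n, m) double index."""
    n = int(math.ceil((-3 + math.sqrt(9 + 8 * j)) / 2))
    m = 2 * j - n * (n + 2)
    return n, m

def nm_to_osa(n, m):
    return (n * (n + 2) + m) // 2

# j = 0..5 -> piston, y tilt, x tilt, oblique astig., defocus, astig.
assert [osa_to_nm(j) for j in range(6)] == [(0, 0), (1, -1), (1, 1),
                                            (2, -2), (2, 0), (2, 2)]
```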
The Point Spread Function (PSF) indirectly encodes the wavefront aberrations of an optical system and therefore is a metric of the system performance. Analysis of the PSF properties is useful in the case of diffractive optics where the wavefront emerging from the exit pupil is not necessarily continuous and consequently not well represented by traditional wavefront error descriptors such as Zernike polynomials. The discontinuities in the wavefront from diffractive optics occur in cases where step heights in the element are not multiples of the illumination wavelength. Examples include binary or N-step structures, multifocal elements where two or more foci are intentionally created or cases where other wavelengths besides the design wavelength are used. Here, a technique for expanding the electric field amplitude of the PSF into a series of orthogonal functions is explored. The expansion coefficients provide insight into the diffraction efficiency and aberration content of diffractive optical elements. Furthermore, this technique is more broadly applicable to elements with a finite number of diffractive zones, as well as decentered patterns.
The Optical Transfer Function (OTF) and its modulus, the Modulation Transfer Function (MTF), are metrics of optical system performance. However, in system optimization, calculation times for the OTF are often substantially longer than for more traditional optimization targets such as wavefront error or transverse ray error. The OTF is typically calculated as either the autocorrelation of the complex pupil function or the Fourier transform of the Point Spread Function. We recently demonstrated that the on-axis OTF can be represented as a linear combination of analytical functions where the weighting terms are directly related to the wavefront error coefficients and apodization of the complex pupil function. Here, we extend this technique to the off-axis case. The expansion technique offers the potential for accelerating OTF optimization in lens design, as well as insight into the interaction of aberrations with components of the OTF.
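A minimal sketch of the conventional autocorrelation route to the OTF, the baseline against which such expansion techniques are compared:

```python
import numpy as np

def otf_from_pupil(pupil):
    """OTF as the normalized autocorrelation of the complex pupil
    function, via the Wiener-Khinchin identity
    corr(P, P) = IFFT(|FFT(P)|^2).
    The pupil array is zero-padded so the autocorrelation cannot wrap."""
    n = pupil.shape[0]
    P = np.zeros((2 * n, 2 * n), dtype=complex)
    P[:n, :n] = pupil
    acf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(P)) ** 2))
    return acf / acf[n, n]      # OTF(0) = 1; the MTF is np.abs of this
```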
KEYWORDS: Point spread functions, Cameras, Scene simulation, Binary data, Convolution, 3D acquisition, Device simulation, Optical simulations, 3D image processing, Photography
Simulated images provide insight into optical system performance. If the Point Spread Function (PSF) is assumed spatially invariant, then the simulated image is the convolution of the PSF with the object scene. If the PSF varies across the field, multiple field points are sampled and interpolated to give the PSF. Simulated images are then assembled from the individual PSFs. These techniques assume a 2D planar object. We extend the image simulations to 3D scenes. The PSF now depends on the field point location and distance in object space. We investigate 3D scene simulation and examine approximations that reduce computation time.
Early diagnosis of glaucoma, a leading cause of visual impairment, is critical for successful treatment. It has been shown that imaging polarimetry has advantages in early detection of structural changes in the retina. Here, we theoretically and experimentally present a snapshot Mueller matrix polarimeter fundus camera, which has the potential to record the polarization-altering characteristics of the retina with a single snapshot. It is made by incorporating polarization gratings into a fundus camera design. Complete Mueller matrix data sets can be obtained by analyzing the polarization fringes projected onto the image plane. In this paper, we describe the experimental implementation of the snapshot retinal imaging Mueller matrix polarimeter (SRIMMP), highlight issues related to calibration, and provide preliminary images acquired with the camera.
This book connects the dots between geometrical optics, interference and diffraction, and aberrations to illustrate the development of an optical system. It focuses on initial layout, design and aberration analysis, fabrication, and, finally, testing and verification of the individual components and the system performance. It also covers more specialized topics such as fitting Zernike polynomials, representing aspheric surfaces with the Forbes Q polynomials, and testing with the Shack-Hartmann wavefront sensor. These techniques are developed to the point where readers can pursue their own analyses or modify them for their particular situations.
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
The Shack Hartmann wavefront sensor is a technology that was developed at the Optical Sciences Center at the University of Arizona in the late 1960s. It is a robust technique for measuring wavefront error that was originally developed for large telescopes to measure errors induced by atmospheric turbulence. The Shack Hartmann sensor has evolved to become a relatively common non-interferometric metrology tool in a variety of fields. Its broadest impact has been in the area of ophthalmic optics where it is used to measure ocular aberrations. The data the Shack Hartmann sensor provides enables custom LASIK treatments, often enhancing visual acuity beyond normal levels. In addition, the Shack Hartmann data coupled with adaptive optics systems enables unprecedented views of the retina. This paper traces the evolution of the technology from the early use of screen-type tests, to the incorporation of lenslet arrays and finally to one of its modern applications, measuring the human eye.
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well-corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
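A hedged sketch of the connection (one common sign and normalization convention; the paper's modified equations are not reproduced here):

```python
import numpy as np

# For a ray through normalized exit-pupil coordinates (u, v), a common
# form of the transverse ray error relative to the ideal image point is
#   eps_x = -(R / r_p) * dW/du,   eps_y = -(R / r_p) * dW/dv,
# with W(u, v) the wavefront error in length units, r_p the pupil
# radius, and R the exit-pupil-to-image distance (system in air). Each
# plenoptic sample (x, y, u, v) then corresponds to a ray landing at
# (x + eps_x, y + eps_y), which ties the 4D light field to W.

def transverse_error(W, du, R, r_p):
    """Finite-difference transverse ray error maps from sampled W(u, v).
    W: 2D array of wavefront error; du: pupil sample spacing."""
    dWdv, dWdu = np.gradient(W, du)      # rows index v, columns index u
    return -(R / r_p) * dWdu, -(R / r_p) * dWdv
```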
Capturing light field data with a plenoptic camera has been discussed extensively in the literature. However, recent improvements in digital imaging have made demonstration and commercialization of plenoptic cameras feasible. The raw images obtained with plenoptic cameras consist of an array of small circular images, each of which captures local spatial and trajectory information regarding the light rays incident on that point. Here, we seek to develop techniques for representing such images with a natural set of basis functions. In doing so, reconstruction of slices through the light field data, as well as image compression, can be easily achieved.
In designing optical systems where the eye serves as the final detector, assumptions are typically made regarding the optical quality of the eye system. Often, the aberrations of the eye are ignored or minimal adjustments are built into the system under design to handle variations in defocus found within the human population. In general, the eye contains aberrations that vary randomly from person to person. In this investigation, a general technique for creating a random set of aberrations consistent with the statistics of the human eye is developed. These aberrations in turn can be applied to a schematic eye model and their effect on the combined visual instrument/eye system can be determined. Repeated application of different aberration patterns allows for tolerance analysis of performance metrics such as the modulation transfer function (MTF).
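A minimal sketch of the generation step, under the stated assumption that the population statistics are adequately captured by a mean vector and covariance matrix of Zernike coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_eyes(mean, cov, n_eyes=100):
    """Draw Zernike coefficient vectors consistent with assumed
    population statistics (mean and covariance of the coefficients,
    e.g., estimated from published population data). Assumes the
    coefficients are adequately modeled as multivariate normal.
    Returns an (n_eyes, n_terms) array of coefficients."""
    return rng.multivariate_normal(mean, cov, size=n_eyes)

# Applying each row to a schematic eye model and re-evaluating the
# combined instrument/eye system yields a Monte Carlo distribution of
# metrics such as the MTF at a chosen spatial frequency.
```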
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images inexpensively and without major modifications to current cameras is uncommon. Our goal is to create a modification to a common commercial camera that allows a three-dimensional reconstruction. We desire such an imaging system to be inexpensive and easy to use. Furthermore, we require that any three-dimensional modification to a camera does not reduce its resolution.

Here we present a possible solution to this problem. A commercial digital camera is used with a projector system with astigmatic focus to capture images of a scene. By using an astigmatically projected pattern, we can create two different focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. This projector could be integrated into the flash unit of the camera. By carefully choosing a pattern, we are able to exploit this differential focus in image processing. Wavelet transforms that pick out the projected pattern are performed on the image. By taking ratios of certain wavelet coefficients, we are able to correlate the contrast ratios with the distance from the camera of an object at a particular transverse position.

We present details regarding construction, calibration, and images produced by this system. The nature of linking the projected pattern design and the image processing algorithms is discussed.
Non-mechanical variable lenses are important for creating compact imaging devices. Various methods employing dielectrically actuated lenses, membrane lenses, and/or liquid crystal lenses have previously been proposed [1-4]. Here we present tunable-focus flat liquid crystal diffractive lenses (LCDL) employing binary Fresnel zone electrodes fabricated on indium tin oxide using conventional micro-photolithography. The phase levels can be adjusted by varying the effective refractive index of a nematic liquid crystal sandwiched between the electrodes and a reference substrate. Using a proper voltage distribution across the various electrodes, the focal length can be changed. Electrodes are shunted such that the correct phase retardation step sequence is achieved. If the number of 2π zone boundaries is increased by a factor of m, the focal length is changed from f to f/m based on the digitized Fresnel zone equation f = r_m^2/(2mλ), where r_m is the mth zone radius and λ is the wavelength.

The lenses operate at very low voltage levels (±2.5 V AC input), exhibit fast switching times (20-150 ms), can have large apertures (>10 mm) and a small form factor, and are robust and insensitive to vibrations, gravity, and capillary effects that limit membrane and dielectrically actuated lenses. Several tests were performed on the LCDL, including diffraction efficiency measurement, switching dynamics, and hybrid imaging with a refractive lens. Negative focal lengths are achieved by adjusting the voltages across the electrodes. Using these lenses in combination, the magnification can be changed and zoom lenses can be formed. These promising results make the LCDL a good candidate for non-mechanical auto-focus and zoom lenses.
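The zone geometry behind the f-to-f/m scaling can be sketched directly from the quoted equation:

```python
import numpy as np

def zone_radii(f_mm, wavelength_mm, n_zones):
    """Full-wave Fresnel zone radii from f = r_m^2 / (2*m*lambda),
    i.e., r_m = sqrt(2*m*lambda*f)."""
    m = np.arange(1, n_zones + 1)
    return np.sqrt(2.0 * m * wavelength_mm * f_mm)

# Example: f = 500 mm at 555 nm; r_1 is about 0.745 mm.
r = zone_radii(500.0, 555e-6, 10)
# Relabeling the same boundaries with m' = 2m (twice as many 2*pi
# zones within a given radius) halves the focal length: f -> f/2.
```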
We demonstrate that, by using a circular array of electrode patterns and applying multi-level phase modulation in each zone, a high-efficiency switchable electro-optic diffractive lens using liquid crystal as the active medium can be produced as switchable eyewear. The lens is flat, and the thickness of the liquid crystal is 5 μm. Two different designs are presented. In one design, all the patterned electrodes are distributed in one layer with a 1-μm gap between the electrodes. In the other design, the odd- and even-numbered electrodes are separately patterned in two layers without any lateral gaps between the electrodes. In both cases, vias are made for interconnection between the electrodes and the conductive wires. With the one-layer electrode design, both 1-diopter and 2-diopter 8-level lenses are demonstrated with an aperture of 10 mm. With the two-layer electrode design, a 2-diopter, 15-mm, 4-level lens is demonstrated. The diffraction efficiency of the 8-level lens can be higher than 90%. The ON and OFF states of the electrically controlled lens allow near and distance vision, respectively, for presbyopic eyes. The focusing power of the lens can be adjusted to be either positive or negative, and the 8-level lens can be adjusted for near, intermediate, and distance vision. The lens is compact and easy to operate, with fast response time, low voltages, and low power dissipation. This is the first demonstration of switchable lenses that nearly meet the requirements for spectacle lenses.
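The dependence of diffraction efficiency on the number of phase levels can be sketched with the standard staircase-quantization formula, which is consistent with the >90% reported for the 8-level design:

```python
import math

def quantized_efficiency(n_levels, order=1):
    """First-order diffraction efficiency of an N-level staircase
    approximation to the ideal blazed phase profile:
    eta_m = [sin(pi*m/N) / (pi*m/N)]^2."""
    x = math.pi * order / n_levels
    return (math.sin(x) / x) ** 2

print(quantized_efficiency(4))   # ~0.81 for the 4-level design
print(quantized_efficiency(8))   # ~0.95, consistent with >90% measured
```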
Liquid crystal spatial light modulators, lenses, and bandpass filters are becoming increasingly capable as material and electronics development continues to improve device performance and reduce fabrication costs. These devices are being utilized in a number of imaging applications in order to improve the performance and flexibility of the system while simultaneously reducing the size and weight compared to a conventional lens. We will present recent progress at Sandia National Laboratories in developing foveated imaging, active optical (aka nonmechanical) zoom, and enhanced multi-spectral imaging systems using liquid crystal devices.
An optical testbed has been developed for the comparative analysis of wavefront sensors based on a modified Mach-Zehnder interferometer design. This system provides simultaneous measurements from the wavefront sensors on the same camera by using a common aberrator. The initial application for this testbed was to evaluate Shack-Hartmann and phase diversity wavefront sensors referenced to a Mach-Zehnder interferometer. In the current configuration of the testbed, aberrations are controlled using a liquid crystal spatial light modulator and corrected using a deformable mirror. This testbed has the added benefit of being able to train the deformable mirror against the spatial light modulator and evaluate its ability to compensate for the spatial light modulator. In this paper, we present results from the wavefront sensors in the optical testbed.
Visual optics requires an understanding of both biology and optical engineering. This Field Guide assembles the anatomy, physiology, and functioning of the eye, as well as the engineering and design of a wide assortment of tools for measuring, photographing, and characterizing properties of the surfaces and structures of the eye. Also covered are the diagnostic techniques, lenses, and surgical techniques used to correct and improve human vision.
Purpose: To measure ocular aberrations before and at several time periods after LASIK surgery to determine the change to the aberration structure of the eye. Methods: A Shack-Hartmann wavefront sensor was used to measure 88 LASIK patients pre-operatively and at 1 week and 12 months following surgery. Reconstructed wavefront errors are compared to look at induced differences. Manifest refraction was measured at 1 week, 1 month, 3 months, 6 months and 12 months following surgery. Sphere, cylinder, spherical aberration, and pupil diameter are analyzed. Results: A dramatic elevation in spherical aberration is seen following surgery. This elevation appears almost immediately and remains for the duration of the study. A temporary increase in pupil size is seen following surgery. Conclusions: LASIK surgery dramatically reduces defocus and astigmatism in the eye, but simultaneously increases spherical aberration levels. This increase occurs at the time of surgery and is not an effect of the healing response.