In digital holography, knowing the reconstruction distance is critical for reconstructing a multi-sectional object. This autofocusing problem is traditionally solved by reconstructing a stack of in-focus and out-of-focus images and applying a focus metric, such as entropy or variance, to measure the sharpness of each reconstructed image; the distance corresponding to the sharpest image is then taken as the focal position. This method is effective but computationally demanding and time-consuming: an accurate estimate requires reconstructing many images, and a coarse search often has to be followed by a refinement step. To overcome this problem, we propose to use deep learning, i.e., a convolutional neural network (CNN). Autofocusing is cast as a classification problem in which the true distance is encoded as a label, so estimating the distance amounts to labeling a hologram correctly. To train the network, a total of 1000 holograms are captured under identical conditions (exposure time, incident angle, object), varying only the distance, which takes one of 5 values corresponding to 5 labels. These data are randomly split into three sets to train, validate, and test a CNN. Experimental results show that the trained network can predict the distance without performing any reconstruction or knowing any physical parameters of the setup. The prediction time of this method is far shorter than that of traditional autofocusing methods.
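A minimal sketch of this classification formulation is given below. The abstract does not specify the network architecture, input size, data split, or training hyperparameters, so the small two-layer CNN, the 64x64 inputs, the 700/150/150 split, and the Adam optimizer are illustrative assumptions only (Python/PyTorch).

# Hedged sketch: all sizes and hyperparameters below are assumptions,
# not the paper's actual configuration.
import torch
import torch.nn as nn

NUM_DISTANCES = 5  # one class label per recording distance

class SimpleCNN(nn.Module):
    """Hypothetical CNN mapping a single-channel hologram to a distance label."""
    def __init__(self, num_classes=NUM_DISTANCES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 feature map with 32 channels after two poolings
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

# Dummy data standing in for the 1000 captured holograms; the abstract only
# says the split into train/validation/test sets is random.
holograms = torch.rand(1000, 1, 64, 64)
labels = torch.randint(0, NUM_DISTANCES, (1000,))
train_x, train_y = holograms[:700], labels[:700]

model = SimpleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):  # a real run would use many more epochs
    optimizer.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Predicting a distance class for a new hologram is a single forward pass;
# no numerical reconstruction or knowledge of the optical setup is required.
predicted_label = model(holograms[900:901]).argmax(dim=1)

Because the recording distances are restricted to five discrete values, cross-entropy classification is a natural fit here; a regression output would be the alternative if the distance varied continuously.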
Digital holography is a well-known method for three-dimensional imaging that records the wavefront of the light originating from the object. Not only the intensity but also the phase distribution of the wavefront can then be computed from the recorded hologram in the numerical reconstruction process. However, reconstructions obtained with traditional methods suffer from artifacts caused by the twin image, the zero-order term, and sensor noise. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram, and we show experimentally that these artifacts are effectively suppressed. Moreover, our network needs no preprocessing for initialization and is comparably fast to train and test relative to a recently published learning-based method. In addition, we show that a further performance improvement can be achieved by introducing a sparsity prior.
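To make the end-to-end idea concrete, the sketch below shows a network that takes an intensity-only hologram and outputs two channels, one for intensity and one for phase, trained with an added L1-style penalty standing in for a sparsity prior. The architecture, loss weighting, and exact form of the prior are not given in the abstract; everything here is an illustrative assumption (Python/PyTorch).

# Hedged sketch: the tiny encoder and the 1e-3 sparsity weight are placeholders.
import torch
import torch.nn as nn

class HoloNet(nn.Module):
    """Hypothetical hologram-to-(intensity, phase) mapping."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # channel 0: intensity, channel 1: phase
        )

    def forward(self, hologram):
        return self.net(hologram)

model = HoloNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy training pair standing in for (recorded hologram, ground-truth intensity+phase).
hologram = torch.rand(1, 1, 64, 64)
target = torch.rand(1, 2, 64, 64)

optimizer.zero_grad()
pred = model(hologram)
fidelity = ((pred - target) ** 2).mean()   # data-fidelity term
sparsity = pred.abs().mean()               # assumed L1-style sparsity prior
loss = fidelity + 1e-3 * sparsity          # weighting is an arbitrary choice
loss.backward()
optimizer.step()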
In this paper, we present a new design for lightfield acquisition. Compared with conventional lightfield acquisition techniques, the key characteristic of our system is its ability to achieve a higher-resolution lightfield with a fixed sensor. In particular, the system architecture places two attenuation masks at the aperture stop and in the optical path of the camera, respectively, so that the four-dimensional (4D) lightfield spectrum is encoded and sampled by a two-dimensional (2D) camera sensor in a single snapshot. In post-processing, by exploiting the coherence embedded in a lightfield, we retrieve the desired higher-resolution 4D lightfield using inverse imaging. We demonstrate the performance of the proposed method with simulations based on a real lightfield dataset.
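As a rough illustration of the inverse-imaging step, the sketch below models the 2D sensor measurement as a linear, mask-dependent projection of the flattened 4D lightfield and recovers the lightfield by regularized least squares. The actual forward model, dimensions, and regularizer are not specified in the abstract; the random projection matrix and the Tikhonov damping used here are placeholders (Python/NumPy/SciPy).

# Hedged sketch: sizes, the random A, and the damping value are assumptions.
import numpy as np
from scipy.sparse.linalg import lsqr

n_lightfield = 4096   # flattened 4D lightfield samples (assumed size)
n_sensor = 1024       # flattened 2D sensor pixels (assumed size)

rng = np.random.default_rng(0)
A = rng.standard_normal((n_sensor, n_lightfield))  # stand-in for the mask-encoded projection
x_true = rng.standard_normal(n_lightfield)         # stand-in for the unknown lightfield
y = A @ x_true                                     # single-snapshot sensor measurement

# Recover x by damped least squares (Tikhonov regularization). The system is
# underdetermined, so a real reconstruction would rely on a stronger prior
# that exploits the coherence/redundancy of the lightfield.
x_hat = lsqr(A, y, damp=1e-2)[0]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))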