Open Access
11 September 2017 Sparse superresolution phase retrieval from phase-coded noisy intensity patterns
Vladimir Y. Katkovnik, Karen Egiazarian
Author Affiliations +
Abstract
We consider a computational superresolution inverse diffraction problem: phase retrieval from phase-coded intensity observations. The optical setup includes a thin lens and a spatial light modulator for phase coding. The designed algorithm targets an optimal solution for Poissonian noisy observations. One of the essential instruments of this design is complex-domain sparsity applied to the complex-valued object (phase and amplitude) to be reconstructed. Simulation experiments demonstrate that good quality imaging can be achieved at a high superresolution factor of 32, meaning that the pixel of the reconstructed object is 32 times smaller than the sensor's pixel. This superresolution corresponds to an object pixel as small as a quarter of the wavelength.

1.

Introduction

In modern science and technology, phase and wavefield imaging are popular and well-established techniques for high-accuracy measuring, recording, and reconstruction of two- (2-D) and three-dimensional (3-D) objects. The areas of application vary from astronomy and engineering to medicine and biology.1,2 In engineering, phase and wavefield sensing methods serve for nondestructive testing/control and precise measurements (e.g., see Refs. 3 and 4). In medicine and biology, phase measurements are exploited in microscopy and coherent tomography.

Phase imaging is a unique instrument for studying details of the internal structure of transparent or semitransparent specimens. Since only the intensity of light fields can be measured, visualization of phase from intensity observations is an important issue. In phase contrast microscopy, methods of wavefront modulation in the Fourier plane have been developed to resolve the visualization problem (Frits Zernike, 1930s, Nobel prize 1953). Despite the revolutionary success of these methods, only qualitative visualization of phase can be achieved in this way, where even visible features of specimens may be so distorted that accurate measurements and even proper interpretation can be problematic.

Quantitative visualization targets direct phase imaging and precise phase measurement. Roughly speaking, there are two ways to achieve this goal. The first is holography, with measurements given as intensities of the sums of reference and object beams. The second is phase retrieval, treated as inverse diffraction imaging, an essential alternative to holography that does not require a reference beam. In modern science and technology, quantitative phase imaging techniques are fundamentally based on digital data processing.

Let us start from the following general formalization of the phase retrieval problem:

Eq. (1)

$$y_s = |P_s\{u_o\}|^2, \quad s = 1,\ldots,L,$$
where $u_o \in \mathbb{C}^{N\times N}$ is an $N\times N$ complex-valued 2-D image of an object (specimen); $P_s: \mathbb{C}^{N\times N} \to \mathbb{C}^{M\times M}$ is a complex-valued operator of wavefront propagation from the object to the sensor plane; and $y_s \in \mathbb{R}_+^{M\times M}$ are $M\times M$ intensity images of the wavefronts at the sensor plane.

L experiments are assumed in Eq. (1), where s indicates the result for each of them. Equation (1) defines relations between the complex-valued wavefronts at the object plane and the power of the wavefronts at the sensor plane. It is convenient to also introduce a notation for the complex-valued wavefront at the sensor plane

Eq. (2)

$$u_s = P_s\{u_o\}, \quad s = 1,\ldots,L.$$

The observations for the noiseless case corresponding to Eq. (1) are of the form

Eq. (3)

$$y_s = |u_s|^2, \quad s = 1,\ldots,L,$$
and for noisy observations

Eq. (4)

$$z_s = \mathcal{G}\{|u_s|^2\}, \quad s = 1,\ldots,L,$$
where $\mathcal{G}$ stands for a generator of random observations.

In this paper, we assume that the observations have a Poissonian distribution typical for optics with observations based on photon counting.

Reconstruction of the complex-valued object $u_o$ (phase and amplitude) from noiseless or noisy observations is the phase retrieval problem. Here, "phase" emphasizes that the object phase is the variable of first priority, while the object amplitude is treated as an auxiliary variable, often useful only to improve phase imaging.

Note that the term phase retrieval originates from the following mathematical problem. Let $u_s$ be the Fourier transform (FT) of $u_o$, $u_s = \mathcal{F}(u_o)$. If $u_s$ is given, $u_o$ can be precisely calculated as the inverse FT, $u_o = \mathcal{F}^{-1}(u_s)$. Now let the absolute value $|u_s|$ of $u_s$ be given while the phase of $u_s$ is unknown. Is it possible to reconstruct the phase of $u_s$, and in this way the original $u_o$, from the amplitude of the FT, $|u_s|$? In general, the answer is negative; it is positive only for special classes of $u_o$, in particular for the so-called minimum phase signals, or provided some restrictions. In this phase retrieval formulation, the term phase points to the phase of the FT $u_s$.

In optics, the priority is different. The phase to be retrieved is the phase of the object $u_o$. The image formation operators $P_s$ in Eq. (1) depend on the optical setup. Various methods have been developed to make these $P_s$ sufficiently different for different $s$ and so gain observation diversity, enabling recovery of $u_o$ from the observations $\{y_s\}_1^L$. Defocusing of the registered images is one of the popular instruments to obtain sufficient phase diversity.5–8 In a recent development, a spatial light modulator (SLM) is exploited for defocusing (e.g., see Refs. 9 and 10). The 4f-optical configuration with the SLM in the Fourier plane for defocus imitation is proposed in Ref. 11 and further studied in Ref. 12.

Random phase modulation of the wavefront is another tool to achieve a desirable phase diversity. It results in observations known as coded diffraction patterns (e.g., see Refs. 13–15)

Eq. (5)

$$y_s = |P\{M_s \cdot u_o\}|^2, \quad s = 1,\ldots,L,$$
where $P: \mathbb{C}^{N\times N} \to \mathbb{C}^{M\times M}$ denotes the propagation operator from the object to the sensor plane, fixed for all $L$ experiments, and the items of the phase masks $M_s \in \mathbb{C}^{N\times N}$ are complex exponents $M_s(k,l) = \exp[j\phi_{k,l}(s)]$.

The phases $\phi_{k,l}(s)$ in $M_s$ can be generated as deterministic or random. The phase modulation is able to dramatically change the diffraction pattern of $P\{u_o\}$ by redistributing the observed intensities from low to high frequencies.
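As a rough numerical illustration of this redistribution effect, the following Python sketch (not part of the paper's MATLAB codes; the object, sizes, and plain-FFT propagation model are assumptions for illustration) generates one coded diffraction pattern of Eq. (5) and compares how much spectral energy stays at low frequencies with and without the mask:

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_diffraction_pattern(u_o, phase_std=np.pi / 4, rng=rng):
    """Intensity |F{M_s * u_o}|^2 for one random phase mask M_s (cf. Eq. 5),
    with the propagation operator P taken as a plain 2-D FFT."""
    phi = rng.normal(0.0, phase_std, size=u_o.shape)   # random phases phi_{k,l}(s)
    M_s = np.exp(1j * phi)                             # complex-exponent mask
    return np.abs(np.fft.fft2(M_s * u_o)) ** 2

# A smooth phase object: without modulation its energy sits at low frequencies.
n = 64
x = np.linspace(-1, 1, n)
u_o = np.exp(1j * 0.5 * np.pi * np.exp(-(x[:, None] ** 2 + x[None, :] ** 2)))

y_plain = np.abs(np.fft.fft2(u_o)) ** 2        # no mask
y_coded = coded_diffraction_pattern(u_o)       # with mask

# Fraction of energy in the lowest-frequency 8x8 corner of the spectrum
low = (slice(0, 8), slice(0, 8))
frac_plain = y_plain[low].sum() / y_plain.sum()
frac_coded = y_coded[low].sum() / y_coded.sum()
print(frac_plain > frac_coded)  # mask spreads energy to high frequencies
```

Since $|M_s \cdot u_o| = |u_o|$, the total intensity is preserved; only its distribution over spatial frequencies changes.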

1.1.

Phase Retrieval Algorithms

There is a growing flow of publications on phase retrieval. Various versions of the Gerchberg–Saxton (GS) techniques are quite universal and applicable for different setups (e.g., see Refs. 7, 8, 16–18). These algorithms, based on alternating projections between the object and observation planes, allow the incorporation of any information available for the variables in these planes. Recent developments in this area, as well as a review, can be seen in Ref. 19.

Contrary to this type of intuitive heuristic algorithm, the variational formulations have a strong mathematical background and lead to algorithms solving optimization problems. In particular, in Ref. 20 one can find constraints sufficient for uniqueness of the solution and algorithms that are very different from GS, such as the semidefinite programming phase lifting using matrix completion (PhaseLift algorithm)21 and the greedy sparse phase retrieval (GESPAR algorithm).22

There are many publications on revisions of the GS techniques using optimization formulations. In particular, the links between the conventional GS and variational formulations are studied in Refs. 23 and 24.

Concerning variational formulations for algorithm design, we wish to note the recent Wirtinger flow and truncated Wirtinger flow (TWF) algorithms.14,25 Methodologically, these algorithms are developed for the Poissonian likelihood criterion, i.e., for Poissonian noisy observations. Simulation experiments confirm that these algorithms work precisely provided nearly noiseless observations. However, they are not as efficient for noisy observations.26

Phase retrieval from coded diffraction patterns of the type (5) is of special interest in recent publications (e.g., see Refs. 14 and 27). The uniqueness of the solution for this scenario is proved in the latter paper.

A new variational algorithm for phase retrieval from noisy data, based on transform-domain sparsity for the object phase and amplitude, is developed in Ref. 26. Simulation experiments demonstrate that this algorithm achieves accuracy identical to that of the TWF algorithm for noiseless data and a principal advantage for noisy data. The sparsity concept as a tool for phase retrieval is the topic of Ref. 28, where an original sparsifying learning transform is developed.

The spatial resolution in phase retrieval is limited by two principal factors: low-pass filtering by the propagation operator $P$ and the pixel size in the pixelated discrete sensor and modulation phase masks. Due to these factors, high-frequency spatial information is lost in the intensity observations, which can be treated as observations of a subsampled true object $u_o$. Various methods for superresolution imaging allowing compensation of these subsampling effects are of special interest. One of the straightforward approaches to overcome the pixel size limitations is to use a sequence of laterally shifted holograms (e.g., see Refs. 29–31). Compressed sensing (CS), or sparse imaging, is a computational approach for the restoration of subsampled data based on a special mathematical modeling of $u_o$. Applications of this sort of technique in optics can be seen in Refs. 32–36.

Other factors limiting the spatial resolution concern observation errors. First, we need to mention that Poissonian noise appears due to the measurement process in optics, which counts the photons hitting the sensor. Second, the use of a digital camera introduces readout noise, usually modeled by a Gaussian distribution, and quantization errors. The latter can be modeled as uniformly distributed random variables. The quantization effects for phase retrieval are studied in Ref. 37, where it is shown that low-bit quantization may seriously diminish the accuracy of the phase retrieval.

1.2.

Contribution and Structure of This Paper

In this paper, we consider phase retrieval from Poissonian noisy phase-coded diffraction patterns with the optical setup shown in Fig. 1. The complex-valued object $u_o$ to be reconstructed is placed against the SLM applied for phase modulation of the wavefront. The distance between the object and the digital sensor is equal to $2f$, where $f$ is the focal length of the lens located midway between the object and the sensor. The system is illuminated by a uniform monochromatic coherent laser beam of wavelength $\lambda$ and intensity equal to 1. The forward propagation operator $P$ in Eq. (5) is calculated as a rescaled FT, provided that the axial Fresnel approximation for the propagation is assumed. In this setup, the object, the SLM, and the lens are considered as phase transformers of the wavefronts. The superresolution sparse phase and amplitude reconstruction (SR-SPAR) algorithm proposed in this paper is designed for superresolution phase/amplitude imaging and is optimal for Poissonian observations. It is shown by computational experiments that high-accuracy superresolution reconstructions can be achieved with spatial resolution going up to a quarter of the wavelength. The superresolution GS (SR-GS) algorithm is introduced as a simplified and faster version of SR-SPAR, efficient for noiseless data. For the latter case, both SR-GS and SR-SPAR demonstrate identical accuracy. The SR-SPAR design is based on the methodology developed in Ref. 26 for pixel-resolution phase retrieval (SPAR algorithm).

Fig. 1

A sketch of single lens optical setup: object (o), SLM, lens, and sensor (s).

OE_56_9_094103_f001.png

This paper is organized as follows. In Secs. 1.3 and 1.4, image formation and noisy observation modeling are presented. The sparsity in the complex domain and the SR-SPAR and SR-GS algorithms are given in Sec. 2. Section 3 concerns the experimental study of the proposed algorithms.

1.3.

Image Formation

For wavefront propagation from the object to the sensor plane, we use the paraxial Fresnel modeling. It gives the following link between the object wavefront $u_o$ and the wavefront at the sensor plane $u_s$ [Ref. 38, Eq. (5.19)]:

Eq. (6)

$$u_s(\xi,\eta) = \mu \iint u_o(x,y)\, M_s(x,y)\, \exp\!\left[-j\frac{2\pi}{\lambda f}(x\xi + y\eta)\right] \mathrm{d}x\, \mathrm{d}y,$$
where $\mu = 1/(j\lambda f)$.

Here, $u_o(x,y)$ and $u_s(\xi,\eta)$ are complex-valued distributions of the wavefronts at the object plane [lateral coordinates $(x,y)$] and the sensor plane [lateral coordinates $(\xi,\eta)$]. $M_s(x,y)$ is a complex-valued transmission function of the SLM. Using the FT, the input–output model (6) can be given in the form

Eq. (7)

$$u_s(\xi,\eta) = \mu\, \mathcal{F}\{u_o \cdot M_s\}\!\left(\frac{\xi}{\lambda f}, \frac{\eta}{\lambda f}\right),$$
where $\mathcal{F}\{u_o \cdot M_s\}$ stands for the FT of the product $u_o(x,y) M_s(x,y)$.

Equations (6) and (7) define the forward propagation operator $P_s$ in Eq. (2). Note that the model (6), as used in Eq. (5), is discrete-continuous, with continuous $u_o$ and a physical discretization imposed on $u_s(\xi,\eta)$ by the pixelated sensor and on $M_s$ by the pixelated SLM.

It is useful to mention that the forward propagation [Eq. (6)] is also valid for a single-lens system, provided that the lens is located in the object/SLM plane and the distance between this plane and the sensor is equal to $f$.
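A minimal discrete sketch of the Fourier-transform relation of Eq. (7) can make this concrete (Python, illustrative only; the values of $\lambda$, $f$, and the sampling period are borrowed from Sec. 3, and the mask is dropped): a uniform plane wavefront is mapped to a single bright DC bin, as expected for a lens performing an optical FT.

```python
import numpy as np

# Discrete sketch of Eq. (7): the single-lens system maps the masked object
# wavefront to its (scaled) Fourier transform at the sensor plane.
wavelength = 632.8e-9      # lambda (value from Sec. 3, assumed here)
f = 54.7e-3                # focal length (value from Sec. 3, assumed here)
delta_o = 5.2e-6           # sampling period

n = 128
mu = 1.0 / (1j * wavelength * f)

def propagate(u_masked):
    """u_s = mu * F{u_o * M_s}, evaluated on the FFT grid."""
    return mu * np.fft.fft2(u_masked) * delta_o ** 2

# A uniform (plane) wavefront focuses to a single bright pixel (the DC bin).
u_o = np.ones((n, n), dtype=complex)
u_s = propagate(u_o)
intensity = np.abs(u_s) ** 2
print(np.unravel_index(intensity.argmax(), intensity.shape))  # (0, 0)
```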

1.4.

Poissonian Observations

The measurement process in optics amounts to counting the photons hitting the sensor. This process is well modeled by independent Poisson random variables in the following form:

Eq. (8)

$$p(z_s[l] = k) = \frac{(y_s[l]\chi)^k \exp(-y_s[l]\chi)}{k!},$$
where $p(z_s[l]=k)$ is the probability that the random observation $z_s[l]$ takes the integer value $k \geq 0$, and $y_s[l]$ is the intensity of the wavefront at pixel $l$ defined by Eq. (1).

The parameter $\chi > 0$ in Eq. (8) is the scaling factor of the Poisson distribution, which can be interpreted as an exposure time. Recall that the mean and the variance of the Poisson random variable $z_s$ are equal and are given by $y_s\chi$, i.e., $E\{z_s\} = \mathrm{var}\{z_s\} = y_s\chi$. Defining the observation signal-to-noise ratio (SNR) as the ratio between the square of the mean and the variance of $z_s$, we have $SNR = E^2\{z_s\}/\mathrm{var}\{z_s\} = y_s\chi$. It follows that the relative noisiness of the observations becomes stronger as $\chi \to 0$ ($SNR \to 0$) and approaches zero as $\chi \to \infty$ ($SNR \to \infty$). The latter case corresponds to the noiseless scenario, i.e., $z_s/\chi \to y_s$ with probability 1.

The scale parameter $\chi$ is of particular importance for modeling, as it allows the level of noise in the observations to be controlled.
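The role of $\chi$ is easy to check numerically. In this hedged Python sketch (the function name observe is ours, not from the paper), observations are drawn from the Poisson model of Eq. (8), and the empirical mean and variance of $z_s/\chi$ are compared with the true intensity:

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(y, chi, rng=rng):
    """Poissonian observations z_s with mean and variance y*chi (Eq. 8)."""
    return rng.poisson(y * chi)

y = 0.8 * np.ones(200_000)          # constant true intensity
for chi in (1.0, 1000.0):
    z = observe(y, chi)
    # Empirically E{z} = var{z} = y*chi, so SNR = y*chi grows with exposure,
    # and z/chi -> y as chi -> infinity (the nearly noiseless regime).
    print(chi, z.mean() / chi, z.var() / chi)
```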

2.

Superresolution Sparse Phase Retrieval

2.1.

Sparse Wavefront Representations

The sparse approximation and CS techniques developed over the past 10 to 15 years state that a signal/image can be sampled at a rate much smaller than what is commonly prescribed by Shannon–Nyquist. The sampling of a signal can indeed be performed as a function of its "intrinsic dimension" rather than according to its cutoff frequency. The modern advanced techniques are able to design parametric regression models for small patches of images and combine them into high-quality reconstructions. This paper develops this line of thinking for the complex domain, in particular for superresolution in phase retrieval.

Image sparsity follows from the commonly observed self-similarity of small fragments (patches) of images, meaning that similar features can be found in patches located in different parts of the image. It follows that an image may admit sparse representations: it can be well approximated by linear combinations of a small number of basic functions. Sparsity has been a hot topic in recent years for various imaging problems (e.g., see Ref. 39).

For complex-domain images, such as the object $u_o = B_o \exp(j\varphi_o)$, sparse modeling can be presented in a number of ways essentially different from the standard methods for real-valued variables. This principal difference starts from the fact that $u_o$ is defined by two variables: phase $\varphi_o$ and amplitude $B_o$.

The sparse representation can be imposed on complex-valued uo directly using complex-valued basic functions or on the following pairs of real-valued variables:

  • 1. Phase φ (interferometric or absolute) and amplitude Bo;

  • 2. Real and imaginary parts of uo.

Remember that an interferometric (wrapped) phase is restricted to the interval $[-\pi,\pi)$, whereas an absolute (unwrapped) phase differs from it by an integer number of $2\pi$. In what follows, we denote the interferometric (wrapped) phase of the object as $\varphi_o$ and the corresponding absolute phase as $\varphi_{o,abs}$. We introduce the phase-wrap operator $\mathcal{W}: \mathbb{R} \to [-\pi,\pi)$, linking the absolute and principal phases as $\varphi_o = \mathcal{W}(\varphi_{o,abs})$. We also define the unwrapped phase as $\varphi_{o,abs} = \mathcal{W}^{-1}(\varphi_o)$. Notice that $\mathcal{W}^{-1}$ is not a true inverse operator for $\mathcal{W}$, because the latter is highly nonlinear, and for signals of dimension two and higher there is no one-to-one relation between $\varphi_{o,abs}$ and $\varphi_o$.

In principle, the absolute phase can always be reconstructed from the interferometric one with the application of unwrapping as postprocessing. However, for objects with phase varying beyond the interval $[-\pi,\pi)$, absolute phase sparse modeling brings an essential advantage. This is because wrapped phases are complicated by fringes, which make images more difficult to approximate sparsely.
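The wrap operator $\mathcal{W}$ can be sketched in a few lines of Python (the modulo formula below is a standard implementation, assumed rather than taken from the paper); it shows both that wrapping removes only integer multiples of $2\pi$ and that $\mathcal{W}$ is many-to-one, so $\mathcal{W}^{-1}$ cannot be a pointwise inverse:

```python
import numpy as np

def wrap(phi_abs):
    """Phase-wrap operator W: R -> [-pi, pi)."""
    return (phi_abs + np.pi) % (2 * np.pi) - np.pi

phi_abs = np.array([0.5, 3.5, -4.0, 12.0])
phi = wrap(phi_abs)

# Wrapping only subtracts integer multiples of 2*pi ...
k = (phi_abs - phi) / (2 * np.pi)
print(np.allclose(k, np.round(k)))
# ... and W is many-to-one: two absolute phases 2*pi apart wrap identically.
print(np.isclose(wrap(0.5), wrap(0.5 + 2 * np.pi)))
```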

The success of the sparsity approach depends on how rich and redundant the used dictionaries/transforms (sets of basic functions) are. In this paper, the sparsity analysis and synthesis are based on the recent, and proved to be very efficient, block-matching 3-D (BM3D) denoising algorithm.40

Let us mention the basic steps of this advanced technique, known as nonlocal self-similarity sparsity. At the first stage, the image is partitioned into small overlapping square patches. For each patch, a group of similar patches is collected, and these are stacked together to form a 3-D array (group). This stage is called grouping. The entire 3-D group-array is projected onto a 3-D predefined transform basis. The spectral coefficients obtained as a result of this transform are hard-thresholded (small coefficients are zeroed), and the inverse 3-D transform gives the filtered patches, which are returned to the original positions of these patches in the image. This stage is called collaborative filtering. This process is repeated for all patches of the entire image, and the obtained overlapping filtered patches are aggregated into the final image estimate. This last stage is called aggregation. The details of this algorithm can be seen in Ref. 40.
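The three stages can be illustrated by a deliberately simplified toy version of this scheme (Python; exhaustive patch search, an FFT in place of the predefined 3-D transform, non-overlapping patches, and a crude threshold rule are our simplifications, far from the real BM3D of Ref. 40):

```python
import numpy as np

def collaborative_filter(img, patch=4, group_size=8, thr=0.5):
    """Toy sketch of grouping / collaborative filtering / aggregation."""
    h, w = img.shape
    ph = patch
    # --- grouping: collect patches and, for each, its most similar patches
    coords = [(i, j) for i in range(0, h - ph + 1, ph)
                     for j in range(0, w - ph + 1, ph)]
    patches = np.array([img[i:i + ph, j:j + ph] for i, j in coords])
    flat = patches.reshape(len(patches), -1)
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for idx in range(len(coords)):
        d = ((flat - flat[idx]) ** 2).sum(axis=1)      # patch distances
        members = np.argsort(d)[:group_size]           # nearest patches
        group = patches[members]                       # 3-D group array
        # --- collaborative filtering: 3-D transform + hard thresholding
        spec = np.fft.fftn(group)
        spec[np.abs(spec) < thr * np.abs(spec).max()] = 0.0
        filtered = np.real(np.fft.ifftn(spec))
        # --- aggregation: return filtered patches to their positions
        for m, member in enumerate(members):
            mi, mj = coords[member]
            out[mi:mi + ph, mj:mj + ph] += filtered[m]
            weight[mi:mi + ph, mj:mj + ph] += 1.0
    return out / np.maximum(weight, 1.0)

rng = np.random.default_rng(2)
clean = np.tile(np.kron(np.eye(2), np.ones((4, 4))), (4, 4))  # self-similar pattern
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = collaborative_filter(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

On a self-similar test pattern, even this toy filter reduces the mean squared error, which is the essence of why the joint processing of similar patches helps.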

The links of the BM3D algorithm with the general sparsity concept are revealed in Ref. 41, where it is shown that the grouping operations define data-adaptive analysis and synthesis image transforms (frames), and these transforms combined with the thresholding define the thresholding stage of the BM3D algorithm. It is emphasized that sparsity is achieved mainly due to the grouping, which allows the joint analysis of similar patches and, in this way, guarantees sparsity (self-similarity of patches), at least for each of the 3-D groups.

Note that the standard BM3D algorithm, as it is presented in Ref. 40, is composed of two successive stages: thresholding and Wiener filtering. In this paper, we use a simplified version of BM3D, as it is introduced in Ref. 41, including grouping, transforms, and thresholding without Wiener filtering.

In what follows, we exploit, for the complex-valued $u_o$, sparsity imposed on phase and amplitude. The variational formulation for the reconstruction of a complex-valued $u_o$, optimal for noisy data, results in likelihood-type criteria and optimization with constraints imposed by sparsity. It has been shown for various optical problems (e.g., see Refs. 42–44) that the resulting algorithms are iterative, with sparsity appearing as BM3D filtering applied separately to estimates of phase and amplitude.

This filtering can be represented in the form

Eq. (9)

$$\hat{\varphi}_o = \mathrm{BM3D}_{\mathrm{phase}}(\varphi_o, th_\varphi),$$

Eq. (10)

$$\hat{B}_o = \mathrm{BM3D}_{\mathrm{ampl}}(B_o, th_B),$$
where the filters are applied to the phase and amplitude of uo.

Here, $\hat{\varphi}_o$ and $\hat{B}_o$ are sparse approximations of $\varphi_o$ and $B_o$; the subscripts phase and ampl of BM3D are used to emphasize that the parameters of BM3D can be different for phase and amplitude; and $th_\varphi$ and $th_B$ are threshold parameters of the algorithms. The phase in Eq. (9) can be interferometric or absolute, depending on the sparsity formulation.

The implementation of the sparsity hypothesis in the form of the special filters of Eqs. (9)–(10) is in line with the recent plug-and-play concept,45–47 stating that any efficient filter can serve as a potentially good prior and efficient regularizer in the variational design of data processing algorithms.

2.2.

Superresolution SPAR Algorithm

2.2.1.

Discretization

The computational wavefront restoration goes from the continuous-domain wavefront propagation [Eq. (7)] to the corresponding discrete model based on pixelation of the object $u_o$; thus, we arrive at a discrete model of the system linking the discrete values of the sensor output (observations) with the discrete values of the object $u_o$.

Conventionally, the pixels are square, of size $\Delta_{SLM} \times \Delta_{SLM}$ and $\Delta_s \times \Delta_s$ for the SLM and sensor, respectively. In what follows, for simplicity, $\Delta_{SLM} = \Delta_s$. A continuous object is discretized by pixels of size $\Delta_o \times \Delta_o$. This discretization is necessary both for digital data processing and for modeling of wavefront propagation and image formation. Contrary to the pixels of the SLM and sensor, defined by the corresponding optical-electronic devices, the object pixels are computational and may be taken arbitrarily small.

Assuming for a moment that $\Delta_o = \Delta_s = \Delta_{SLM}$, the reconstruction of $u_o$ from the observations $\{z_s\}$ is the standard phase retrieval problem, with an object resolution dictated by the pixel size of the sensor and the SLM.

Let us term this case pixel-resolution imaging.

If $\Delta_o < \Delta_s$, we arrive at a much more challenging problem of pixel superresolution or subpixel resolution imaging. Further, if $\Delta_o$ is so small that $\Delta_o \simeq \lambda$, then we speak of wavelength resolution. Going further to $\Delta_o < \lambda$, we arrive at subwavelength resolution or wavelength superresolution. Superresolution phase retrieval with small and very small $\Delta_o$, as compared with $\Delta_s$ and $\lambda$, is the goal of this paper.

It is convenient to assume that $\Delta_s = r_s \cdot \Delta_o$, where $r_s \geq 1$ is an integer pixel superresolution factor. In this case, the SLM pixel $\Delta_{SLM} \times \Delta_{SLM}$ covers $r_s^2$ computational object pixels and provides the same modulation phase-shift to all object pixels in this group.

Using the fast Fourier transform (FFT) for the calculations, we arrive at the discrete analog of Eq. (7), within an invariant factor $\mu$, in the form

Eq. (11)

$$u_s[k,l] = \mathrm{FFT}\{u_o[s,t] \cdot M_s[s,t]\}\,\frac{\Delta_o^2}{\lambda f},$$

Eq. (12)

$$\Delta_s \Delta_o = \frac{\lambda f}{N_D},$$
where FFT stands for the 2-D FFT, and $N_D$ is the side length of a square support for $[k,l]$ and $[s,t]$.

Here, the variables $u_s[k,l]$ and $u_o[s,t] \cdot M_s[s,t]$ are sampled with the computational period $\Delta_o$. In particular, the modulation function $M_s[s,t]$ is then piecewise invariant, with $r_s \times r_s$ squares of invariant values covering the corresponding pixels of the SLM.

The constraint (12) is typical for the use of the FFT for the calculation of the discrete FT. All functions and FFTs in Eq. (11) are calculated for the square support $N_D \times N_D$, where $N_D$ is always larger (even much larger) than the pixelated sizes of the object, the SLM, and the sensor.

According to Eq. (5), the discrete diffraction pattern is calculated as $y_s[k,l] = |u_s[k,l]|^2$, with the noisy observations obtained according to the Poissonian distribution (8). Note that these computational $y_s[k,l]$ are given with the computational period $\Delta_o$, while the observations are registered with the sampling period $\Delta_s$. Equations (11)–(12) define the discrete forward propagation model of the system shown in Fig. 1. To simplify the presentation, we preserve the notation $P$ for this discrete model, initially introduced for the continuous-domain variables.
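The discrete forward model of Eqs. (11)–(12) can be sketched as follows (Python; the sizes are illustrative, much smaller than those of Sec. 3, and the invariant scale factor $\Delta_o^2/\lambda f$ is omitted). Note how the SLM phase is replicated over $r_s \times r_s$ computational pixels and the masked object is zero-padded to the $N_D \times N_D$ support before the FFT:

```python
import numpy as np

rng = np.random.default_rng(3)

N_D = 256                # side of the square computational support
n_obj = 64               # object size in computational pixels
r_s = 4                  # superresolution factor: Delta_s = r_s * Delta_o

# SLM mask: one random phase per SLM pixel, replicated over r_s x r_s
# computational object pixels (piecewise-invariant modulation)
phi_slm = rng.normal(0.0, np.pi / 4, size=(n_obj // r_s, n_obj // r_s))
M_s = np.exp(1j * np.kron(phi_slm, np.ones((r_s, r_s))))

u_o = np.exp(1j * rng.uniform(0, np.pi / 2, size=(n_obj, n_obj)))  # phase object

# Embed the masked object in the larger N_D x N_D support and propagate
field = np.zeros((N_D, N_D), dtype=complex)
field[:n_obj, :n_obj] = u_o * M_s
u_s = np.fft.fft2(field)         # scale factor Delta_o^2 / (lambda f) omitted
y_s = np.abs(u_s) ** 2           # discrete diffraction pattern

print(u_s.shape)
```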

2.2.2.

SR-SPAR Algorithm

The presented SR-SPAR algorithm is derived from the variational formulation introduced for the optimal reconstruction of $u_o$ from the Poissonian observations $\{z_s[k,l]\}$. The corresponding minus log-likelihood for Poissonian observations according to Eq. (8) is as follows:

Eq. (13)

$$\mathcal{L} = \sum_{s=1}^{L} \sum_{k,l} \left[\,|u_s[k,l]|^2 \chi - z_s[k,l] \log(|u_s[k,l]|^2 \chi)\,\right].$$

This criterion should be minimized with respect to $u_o[k,l]$, subject to Eq. (11) linking $u_o$ and $u_s$ and to the restrictions imposed by the sparsity requirements.

The derivation of the algorithm is similar to the technique developed in Ref. 26 for pixel-resolution phase retrieval. The difference mainly concerns the sampling rates: $\Delta_o = \Delta_s$ in Ref. 26, whereas in this paper $\Delta_o = \Delta_s/r_s$, meaning that the observations should be upsampled by the factor $r_s$.

We present the SR-SPAR algorithm in the form given in Table 1. It is emphasized that SR-SPAR, being based on the minimization of Eq. (13), is optimal, in the statistical sense, for Poissonian observations.

Table 1

SR-SPAR phase retrieval algorithm.

Input: $\{\tilde{z}_s\}$, $s = 1,\ldots,L$; initial guess $x^1$.

For $t = 1,\ldots,T$:

1. Forward propagation: $v_s^t = P\{M_s \cdot x^t\}$, $v_s^t \in S_D$, $s = 1,\ldots,L$;

2. Poissonian noise suppression:
$$u_s^t = \begin{cases} b_s^t \exp[j \cdot \mathrm{angle}(v_s^t)], & v_s^t \in S_S,\\ v_s^t, & v_s^t \in S_D \setminus S_S,\end{cases} \quad s = 1,\ldots,L;$$

3. Backward propagation: $x^t = \frac{1}{L}\sum_{s=1}^{L} M_s^* \cdot P^{-1}\{u_s^t\}$;

4. Phase unwrapping: $\varphi_{abs}^t = \mathcal{W}^{-1}(\mathrm{angle}(x^t))$;

5. Sparse phase and amplitude filtering: $\varphi_{abs}^{t+1} = \mathrm{BM3D}_{\mathrm{phase}}(\varphi_{abs}^t, th_\varphi)$, $B^{t+1} = \mathrm{BM3D}_{\mathrm{ampl}}(\mathrm{abs}(x^t), th_a)$;

6. Object wavefront update: $x^{t+1} = B^{t+1}\exp(j\varphi_{abs}^{t+1})$;

Output: $\hat{\varphi}_{o,abs} = \varphi_{abs}^{T+1}$, $\hat{B}_o = B^{T+1}$.

The inputs $\tilde{z}_s$ in this algorithm are the observations $z_s$ upsampled by the factor $r_s$. We use zero-order upsampling, giving $\tilde{z}_s$ as a piecewise invariant function with invariant values for the computational pixels corresponding to each of the larger pixels of the sensor. All calculations in the SR-SPAR algorithm are produced for high-resolution variables with the sampling $\Delta_o$.
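Zero-order upsampling itself is a one-line operation; a hedged Python illustration (np.kron replicates each sensor value over the $r_s \times r_s$ computational pixels it covers):

```python
import numpy as np

def upsample_zero_order(z, r_s):
    """Zero-order upsampling: each sensor pixel value is replicated over
    the r_s x r_s computational pixels it covers."""
    return np.kron(z, np.ones((r_s, r_s)))

z = np.array([[1.0, 2.0],
              [3.0, 4.0]])
z_tilde = upsample_zero_order(z, 3)
print(z_tilde.shape)   # (6, 6)
```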

At step 1, the object wavefront estimate $x^t$ is multiplied by the phase mask $M_s$ and propagated by the operator $P$ to the sensor plane, with the result denoted as $v_s^t$. These wavefronts are calculated for the diffraction area $N_D \times N_D$, denoted as $S_D$.

At step 2, the wavefront is updated to the variable $u_s^t$ by filtering the amplitude of $v_s^t$ according to the given observations $\tilde{z}_s$. The following formula, as derived in Ref. 26, defines the rule by which the updated amplitude $b_s$ is calculated:

Eq. (14)

$$b_s = \frac{|v_s| + \sqrt{|v_s|^2 + 4\tilde{z}_s \gamma_1 (1 + \gamma_1 \chi)}}{2(1 + \gamma_1 \chi)}.$$

These calculations are pixelwise; $\gamma_1 > 0$ is a parameter of the algorithm. This update is produced provided known observations $\tilde{z}_s$, i.e., for the pixels belonging to the sensor area $S_S$, $v_s \in S_S$. In our modeling, the computational diffraction area $S_D$ is always equal to or larger than the sensor area, $S_S \subseteq S_D$. For the area outside the sensor, the wavefront values are preserved, $u_s = v_s$ for $v_s \in S_D \setminus S_S$.
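The update of Eq. (14) is easy to verify numerically. The following Python sketch (with illustrative values; $\gamma_1 = 1$ is an arbitrary choice) checks the noiseless limit: for large $\chi$ the rule approaches the GS-style amplitude replacement $b_s = \sqrt{\tilde{z}_s/\chi}$:

```python
import numpy as np

def amplitude_update(v_abs, z_tilde, chi, gamma1=1.0):
    """Amplitude update b_s of Eq. (14) for pixels inside the sensor area."""
    denom = 2.0 * (1.0 + gamma1 * chi)
    return (v_abs + np.sqrt(v_abs ** 2
                            + 4.0 * z_tilde * gamma1 * (1.0 + gamma1 * chi))) / denom

v_abs = np.full(4, 0.7)                      # |v_s| at four pixels
z_tilde = np.array([10.0, 40.0, 90.0, 160.0])

chi = 1e6                                     # nearly noiseless regime
print(amplitude_update(v_abs, z_tilde, chi))
print(np.sqrt(z_tilde / chi))                 # GS-style replacement
```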

At step 3, the estimates $\{u_s^t\}$ are backpropagated to the object plane and update the object wavefront estimate $x^{t+1}$. Here, $M_s^*$ means the complex conjugate of $M_s$, and $P^{-1}\{u_s^t\} = \mathrm{FFT}^{-1}\{u_s^t[k,l]\}\,\lambda f/\Delta_o^2$.

The sparsification (filtering on the basis of sparse approximations) is produced at step 5. The unwrapping of the phase, with reconstruction of the absolute phase at step 4, is necessary only if the range of the object phase goes beyond $2\pi$.

Following Ref. 26, we also introduce a simplified version of SR-SPAR, named SR-GS. It differs from SR-SPAR in Table 1 in two points: the phase unwrapping and BM3D filtering (steps 4 and 5) are omitted, and the Poissonian filtering at step 2 is replaced by the rule $b_s = \sqrt{\tilde{z}_s/\chi}$, which corresponds to the amplitude update standard for GS-style algorithms. This latter rule follows from the optimal solution (14) provided that the data are noiseless, i.e., $\chi$ is very large.

It was demonstrated in Ref. 42 that using the update shown at step 2, different for $v_s^t \in S_D \setminus S_S$ and $v_s^t \in S_S$, allows an improvement in the accuracy of the wavefront reconstruction. In Ref. 48, this effect is interpreted as a self-extrapolation of holograms applied for resolution enhancement.

We make publicly available the MATLAB® demo codes49 of the developed SR-GS and SR-SPAR algorithms, which can be used to reproduce the experiments presented in this paper as well as for further tests.

3.

Numerical Experiments

Both algorithms, SR-SPAR and SR-GS, were tested for various models of $u_o$. In what follows, we are restricted mainly to $256 \times 256$ phase objects of invariant amplitude and three types of varying phase: the test-image Lena normalized to the interval $[0, \pi/2]$; a Gaussian-shaped absolute phase (phase range 50 rad); and a discontinuous shear plane (phase range 65 rad). We treat the experiments with Lena as interferometric phase imaging, as they do not require phase unwrapping, and the experiments with the Gaussian and shear plane as absolute phase imaging, requiring the unwrapping operation in SR-SPAR. The PUMA algorithm50 is used for phase unwrapping at step 4 of SR-SPAR.

The fixed parameters of the experiments are: $\Delta_s = \Delta_{SLM} = 5.2\ \mu\mathrm{m}$, $\lambda = 632.8\ \mathrm{nm}$, sensor size of $4096 \times 4096$ pixels, and computational diffraction area $S_D$ of size $5120 \times 5120$. The main varying parameters are the computational sampling period $\Delta_o$, the Poissonian noise parameter $\chi$, and the computational resolution factor calculated with respect to the sensor as $r_s = \Delta_s/\Delta_o$. It is assumed that $r_s$ is an integer taking the values $r_s = 1$, 8, 16, and 32. The value $r_s = 1$ corresponds to pixel resolution, and larger $r_s$ mean superresolution of higher order. A sensor of much larger size than the object is taken in order to enable a good quality of superresolution for large values of the superresolution factor $r_s$.

It is natural to also measure the superresolution with respect to the wavelength $\lambda$ via the ratio $r_\lambda = \Delta_o/\lambda = \Delta_s/(r_s \lambda)$. Then, $r_s = 1$ gives the pixel resolution with $r_\lambda = 8.21$, i.e., the sensor and SLM pixels are about eight times larger than the wavelength $\lambda$. For $r_s = 8$, we obtain the wavelength resolution with $r_\lambda = 1.03$; the higher values $r_s = 16$, 32 correspond to subwavelength resolution with computational pixels $\Delta_o$ smaller than the wavelength, with wavelength resolution factors $r_\lambda = 0.515$ and 0.257.
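These factors are simple arithmetic and can be recomputed directly (Python; small differences in the last digit with respect to the quoted values are rounding):

```python
# Resolution factors: r_lambda = Delta_o / lambda = Delta_s / (r_s * lambda)
delta_s = 5.2e-6          # sensor/SLM pixel, m
wavelength = 632.8e-9     # lambda, m

for r_s in (1, 8, 16, 32):
    r_lambda = delta_s / (r_s * wavelength)
    print(r_s, round(r_lambda, 3))
```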

Note that, according to the restriction (12), a smaller $\Delta_o$ (larger $r_s$) assumes that a lens with a smaller focal distance should be used in the considered optical setup. For the introduced set of $r_s$, we obtain the focal distances $f = [54.7, 6.8, 3.4, 1.7]\ \mathrm{mm}$, respectively.

In our experiments, the phase modulation masks $M_s(k,l) = \exp[j\phi_{k,l}(s)]$ are random, with Gaussian independent zero-mean phase values, $\phi_{k,l}(s) \sim \mathcal{N}(0, \pi/4)$.

The accuracy of the wavefront reconstruction is characterized by RMSE criteria calculated independently for amplitude and phase. The object phase image can be estimated at best within an invariant global phase shift $\varphi_{shift}$, which is estimated using the phase of the true object as a reference. This correction of the phase is done only for the calculation of the criteria and for imaging of the results; it is not used in the algorithm iterations.

In what follows, we produce calculations for noisy and nearly noiseless data with the Poissonian scale parameter $\chi$ taking values in the interval [1, 1000]. The smallest $\chi$ results in the noisiest observations. The corresponding SNR is calculated in dB as

Eq. (15)

$$SNR = 10\log_{10}\!\left(\chi^2 \sum_{s=1}^{L} \|y_s\|_F^2 \Big/ \sum_{s=1}^{L} \|y_s \chi - z_s\|_F^2\right)\ \mathrm{dB}.$$
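Equation (15) can be evaluated on synthetic Poissonian data with a few lines of Python (illustrative object intensities; the function name snr_db is ours). For Poissonian noise, the expected denominator equals $\chi\sum_s\sum_l y_s[l]$, so this SNR grows linearly with $\chi$, i.e., by 30 dB per factor of 1000:

```python
import numpy as np

rng = np.random.default_rng(4)

def snr_db(ys_list, zs_list, chi):
    """Empirical observation SNR of Eq. (15), in dB."""
    num = chi ** 2 * sum(np.sum(y ** 2) for y in ys_list)
    den = sum(np.sum((y * chi - z) ** 2) for y, z in zip(ys_list, zs_list))
    return 10.0 * np.log10(num / den)

L = 3
ys = [rng.uniform(0.5, 1.5, size=(64, 64)) for _ in range(L)]  # true intensities
snrs = {}
for chi in (1.0, 1000.0):
    zs = [rng.poisson(y * chi).astype(float) for y in ys]      # Poissonian data
    snrs[chi] = snr_db(ys, zs, chi)
    print(chi, round(snrs[chi], 1))
```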

For the superresolution experiments, we use objects with a fixed number of computational pixels of size $\Delta_o = \Delta_s/r_s$; thus, larger $r_s$ means a smaller physical size of the object. Successful superresolution imaging, in particular wavelength resolution, requires a sensor size much larger than the object size.

3.1.

Modulation Phase Mask and Sparsity

Let us start with qualitative observations concerning the effects of the basic ingredients of the considered setup and the developed algorithm. Figure 2 shows the reconstruction of the object phase for nearly noiseless data ($\chi = 1000$), provided that only a single experiment is produced, $L = 1$. The left image shows the true phase. The next, completely destroyed, image in Fig. 2 is obtained from the experiment with no phase modulation and without the sparse modeling for phase and amplitude. In this case, the diffraction pattern is the squared amplitude of the FT of the object complex exponent. The third image is obtained from a single experiment where the phase modulation is employed and no BM3D filters are used (SR-GS algorithm). This modulation makes the main features of the phase distribution at least visible, but quite noisy. This noise is a result of the used phase modulation. The fourth image is obtained by the SR-SPAR algorithm, i.e., with phase modulation and BM3D filtering for the phase and amplitude. It shows a nearly perfect reconstruction of the object phase.

Fig. 2

Phase reconstructions from left to right: (a) true Lena image, (b) reconstruction without phase modulation and BM3D filtering, (c) reconstruction with phase modulation but without BM3D filtering (SR-GS), (d) reconstruction with phase modulation and with BM3D filtering (SR-SPAR), L=1, χ=1000.


For a larger number of experiments (L>1), the accuracy of the phase reconstruction with phase modulation improves quickly, both for SR-SPAR and for SR-GS. Figure 2 and the above comments concern pixel-resolution imaging, rs=1, with nearly noiseless observations.
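The benefit of multiple coded experiments can be illustrated with a Gerchberg-Saxton-style loop. This is a simplified stand-in for SR-GS, not the paper's algorithm: a plain FT model with no lens, binning, or superresolution, and with function names of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(1)

def gs_coded(patterns, masks, n_iter=50):
    """Alternating projections over L phase-coded intensity patterns."""
    u = np.ones_like(masks[0], dtype=complex)  # flat initial guess
    for _ in range(n_iter):
        for I, m in zip(patterns, masks):
            v = np.fft.fft2(u * np.exp(1j * m))
            v = np.sqrt(I) * np.exp(1j * np.angle(v))  # impose measured amplitude
            u = np.fft.ifft2(v) * np.exp(-1j * m)      # back to the object plane
    return u

def misfit(u, patterns, masks):
    return sum(np.linalg.norm(np.abs(np.fft.fft2(u * np.exp(1j * m))) - np.sqrt(I)) ** 2
               for I, m in zip(patterns, masks))

true_u = np.exp(1j * rng.uniform(0.0, 1.0, (32, 32)))
masks = [rng.uniform(-np.pi, np.pi, (32, 32)) for _ in range(8)]
patterns = [np.abs(np.fft.fft2(true_u * np.exp(1j * m))) ** 2 for m in masks]

est = gs_coded(patterns, masks, n_iter=100)
start = misfit(np.ones((32, 32), dtype=complex), patterns, masks)
print(misfit(est, patterns, masks) / start)  # data misfit drops sharply
```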

3.2.

Superresolution for Interferometric Phase

The reconstruction results for the Lena phase test image with superresolution factor rs=8 are shown in Figs. 3 and 4. The cross sections of phase and amplitude are shown for the middle horizontal lines, where the red (solid) and blue (dotted) curves correspond to the reconstructions and true images, respectively. The reconstructions in Fig. 3 are highly accurate; they are obtained for low-level noise (χ=1000, SNR=60  dB). The results in Fig. 4 are much worse, but the observations there are very noisy (χ=1, SNR=30.5  dB) for superresolution with factor rs=8; thus, these results can be regarded as of acceptable quality. Note that the low-noise reconstruction is visually nearly perfect, with good accuracy for both phase and amplitude. Experiments at higher resolution factors (not shown) demonstrate a noticeable degradation of the results for rs=16 and complete failure for rs=32.

Fig. 3

SR-SPAR, Lena phase image and amplitude reconstructions: superresolution with rs=8, for nearly noiseless observations, χ=1000. In the cross sections (middle horizontal line), the solid (red) curves are for the reconstructions and the dotted (blue) ones for the true data.


Fig. 4

SR-SPAR, Lena phase image and amplitude reconstructions: superresolution with rs=8, noisy observations, χ=1. In the cross sections (middle horizontal line), the solid (red) curves are for the reconstructions and the dotted (blue) ones for the true data.


3.3.

Absolute Phase Imaging

SR-SPAR phase imaging for the shear plane phase distribution (256×256), with a maximum value of about 65 rad, is shown in Figs. 5–7. In Fig. 5, the results are shown for resolution factor rs=8. Visually, the obtained 3-D surface is so close to the true one that showing the latter is unnecessary. The 2-D images in these figures show the wrapped phase and amplitude reconstructions. As seen from the wrapped phase, errors in the reconstruction can be noticed; nevertheless, the quality of this superresolution reconstruction is very good. Figure 6 shows similar results for the much higher resolution factor rs=16. The 3-D surface is of quite acceptable quality, while the wrapped phase and amplitude reconstructions, shown as 2-D images, are clearly of much lower quality than those achieved for rs=8. Finally, Fig. 7 shows the attempt at reconstruction for superresolution factor rs=32; these results are definitely negative, and the phase reconstruction failed.
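The wrapped-phase displays can be reproduced from an absolute phase by the standard wrapping operator, sketched below; the inverse step, unwrapping, is what the SR-SPAR variant with phase unwrapping (Sec. 3.5) performs.

```python
import numpy as np

def wrap(phi):
    """Wrap absolute phase into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

# The shear plane reaches about 65 rad, so it wraps roughly 65/(2*pi) ~ 10 times.
plane = np.tile(np.linspace(0.0, 65.0, 256), (256, 1))  # 256 x 256 shear plane
wrapped = wrap(plane)
```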

Fig. 5

SR-SPAR, shear plane phase image, phase and amplitude reconstructions: nearly noiseless data, χ=1000. The superresolution reconstruction is produced for rs=8. The 3-D image is very close to the true phase image. The 2-D images are given for the wrapped phase and amplitude reconstructions.


Fig. 6

SR-SPAR, shear plane phase image, phase and amplitude reconstructions: nearly noiseless data, χ=1000. The superresolution reconstruction is produced for rs=16. The 2-D images are given for the wrapped phase and amplitude reconstructions. The 3-D surface is covered by clearly visible 16×16 square blocks; this discretization of the surface is due to the SLM pixels, each of size 16×16 in computational pixels.


Fig. 7

SR-SPAR, shear plane phase image, phase and amplitude reconstructions: nearly noiseless data, χ=1000. The superresolution reconstruction is produced for rs=32. The 2-D images are given for the wrapped phase and amplitude reconstructions. The algorithm failed.


Phase imaging for the Gaussian phase distribution (256×256), with a maximum value of about 50 rad, is shown in Figs. 8 and 9. The reconstructions are obtained for quite noisy data (χ=100) and superresolution factors rs=8 and rs=16. For rs=8 (Fig. 8), the quality of the reconstruction is very good, and the 3-D phase reconstruction is very close to the true Gaussian phase object. The situation becomes much worse for the superresolution factor rs=16 (Fig. 9): the errors in the phase reconstruction are obvious and quite large. The attempt at reconstruction for superresolution factor rs=32 failed, and we do not show these images.

Fig. 8

SR-SPAR, Gaussian plane phase image, phase and amplitude reconstructions: noisy data, χ=100. The superresolution reconstruction is produced for rs=8. The 2-D images are given for the wrapped phase and amplitude reconstructions.


Fig. 9

SR-SPAR, Gaussian plane phase image, phase and amplitude reconstructions: noisy data, χ=100. The superresolution reconstruction is produced for rs=16. The 2-D images are given for the wrapped phase and amplitude reconstructions.


To conclude this section, we note that these are very high levels of pixel superresolution: rs=8 and rs=16 correspond to wavelength and half-wavelength resolution, respectively.

3.4.

More on Subwavelength Resolution

Let us demonstrate a few interesting tests of subwavelength imaging with rs=32, corresponding to rλ=0.257, for an object of size 128×128.

Figures 10 and 11 show the reconstructions for the two-peak phase object. The four squares clearly seen in the amplitude reconstructions are the patterns of four SLM pixels, each covering 32×32 computational pixels of the object. These images confirm that both developed algorithms, SR-GS and SR-SPAR, are able to resolve two pointwise phase peaks separated by a distance equal to 0.257λ, a demonstration of subwavelength resolution.
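The geometry of this test, SLM pixels covering 32×32 computational pixels and two point-like peaks one computational pixel (0.257λ) apart, can be sketched as follows. The sizes follow the text; the mask values and peak height are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rs = 128, 32

# Each SLM pixel covers rs x rs computational pixels of the object:
slm = rng.uniform(-np.pi, np.pi, (n // rs, n // rs))  # 4 x 4 grid of SLM pixels
mask = np.kron(slm, np.ones((rs, rs)))                # 128 x 128 computational grid

# Two point-like phase peaks, one computational pixel apart:
phase = np.zeros((n, n))
phase[n // 2, n // 2 - 1] = phase[n // 2, n // 2] = 2.0  # illustrative peak height
```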

Fig. 10

Two-peak reconstructions, SR-GS algorithm: distance between peaks 0.257λ, χ=1000. The four 32×32 squares clearly seen in the amplitude reconstruction correspond to four SLM pixels.


Fig. 11

Two-peak reconstructions, SR-SPAR algorithm: distance between peaks 0.257λ, χ=1000. The four 32×32 squares clearly seen in the amplitude reconstruction correspond to four SLM pixels.


3.5.

Parameters of the SR-SPAR Algorithm

The performance of the SR-SPAR algorithm essentially depends on its parameters. Optimization could be performed for each magnitude/phase distribution and noise level; in our experiments, however, the parameters are fixed. The image patches in BM3D are of size 8×8, the group size is limited to 39 patches, and the step between neighboring patches is equal to 3. The DCT (for patches) and Haar (along the group length) transforms are used for the 3-D group data processing in BM3D. In the shown results, as an initial guess for the iterative SR-GS and SR-SPAR algorithms, we use an image with invariant amplitude equal to 1.3 and zero phase.

The parameters defining the iterations of the algorithm are as follows: γ1=1/χ, tha=4.0, and thφ=4.0. The number of iterations is fixed at 50.
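The thresholds tha and thφ govern the hard thresholding inside the sparsity filtering of amplitude and phase. BM3D itself groups similar patches and thresholds the joint 3-D spectrum; as a much simplified single-transform stand-in (not the algorithm's actual filter), hard thresholding of a 2-D spectrum at th·σ illustrates the role these parameters play:

```python
import numpy as np

rng = np.random.default_rng(3)

def ht_filter(img, th, sigma):
    """Transform-domain hard thresholding: zero out spectrum entries
    below th * sigma (scaled for the unnormalized FFT)."""
    spec = np.fft.fft2(img)
    spec[np.abs(spec) < th * sigma * np.sqrt(img.size)] = 0
    return np.real(np.fft.ifft2(spec))

# A smooth (spectrally sparse) image survives; the noise is suppressed.
clean = np.cos(2 * np.pi * np.arange(64) / 8)[None, :] * np.ones((64, 1))
noisy = clean + 0.1 * rng.standard_normal((64, 64))
filtered = ht_filter(noisy, th=4.0, sigma=0.1)  # th matches tha = thphi = 4.0
```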

For our experiments, we use MATLAB R2015a and a computer with an Intel Core i7-4800MQ processor at 2.7 GHz.

The complexity of the algorithm is characterized by the processing time. For 50 iterations, L=12, and 256×256 images, the times are approximately as follows: SR-GS, 2500 s; SR-SPAR without phase unwrapping, 3300 s; SR-SPAR with phase unwrapping, 1500 s.

4.

Conclusion

Computational superresolution phase retrieval is considered for phase-coded intensity observations. The proposed algorithm is derived as an optimal solution for Poissonian noisy observations. One of its essential instruments is a sparsity hypothesis applied to both phase and amplitude. The efficiency of the algorithm is confirmed by simulation experiments. It is shown that high-level superresolution can be achieved with a pixel superresolution factor of up to 32, i.e., the pixel size of the reconstructed object is 32 times smaller than the pixel size of the sensor and the SLM. Relative to the wavelength, superresolution down to one-quarter of the wavelength is demonstrated.

Acknowledgments

This work was supported by Academy of Finland, project no. 287150, 2015-2019.

References

1. R. K. Tyson, Principles of Adaptive Optics, CRC Press, Taylor & Francis Group (2015).

2. L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging, John Wiley & Sons, Chichester (2012).

3. B. C. Kress and P. Meyrueis, Applied Digital Optics: From Micro-optics to Nanophotonics, John Wiley & Sons, Chichester (2009).

4. J. Gluckstad and D. Palima, Generalized Phase Contrast: Applications in Optics and Photonics, Springer (2009).

5. D. Misell, "An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. Test calculations," J. Phys. D: Appl. Phys. 6(18), 2200–2216 (1973). http://dx.doi.org/10.1088/0022-3727/6/18/305

6. W. Saxton, "Correction of artifacts in linear and non-linear high-resolution electron-micrographs," J. Microsc. Spectrosc. Electron. 5, 665–674 (1980).

7. G. Pedrini, W. Osten and Y. Zhang, "Wave-front reconstruction from a sequence of interferograms recorded at different planes," Opt. Lett. 30(8), 833–835 (2005). http://dx.doi.org/10.1364/OL.30.000833

8. P. Almoro, G. Pedrini and W. Osten, "Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field," Appl. Opt. 45(34), 8596–8605 (2006). http://dx.doi.org/10.1364/AO.45.008596

9. C. Kohler, F. Zhang and W. Osten, "Characterization of a spatial light modulator and its application in phase retrieval," Appl. Opt. 48(20), 4003–4008 (2009). http://dx.doi.org/10.1364/AO.48.004003

10. L. Camacho et al., "Quantitative phase microscopy using defocusing by means of a spatial light modulator," Opt. Express 18(7), 6755–6766 (2010). http://dx.doi.org/10.1364/OE.18.006755

11. C. Falldorf et al., "Phase retrieval by means of a spatial light modulator in the Fourier domain of an imaging system," Appl. Opt. 49(10), 1826–1830 (2010). http://dx.doi.org/10.1364/AO.49.001826

12. V. Katkovnik and J. Astola, "Phase retrieval via spatial light modulator phase modulation in 4f optical setup: numerical inverse imaging with sparse regularization for phase and amplitude," J. Opt. Soc. Am. A 29(1), 105–116 (2012). http://dx.doi.org/10.1364/JOSAA.29.000105

13. P. F. Almoro et al., "Enhanced wavefront reconstruction by random phase modulation with a phase diffuser," Opt. Lasers Eng. 49(2), 252–257 (2011). http://dx.doi.org/10.1016/j.optlaseng.2010.09.012

14. E. J. Candes, X. Li and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmonic Anal. 39(2), 277–299 (2015). http://dx.doi.org/10.1016/j.acha.2014.09.004

15. P. F. Almoro et al., "Enhanced intensity variation for multiple-plane phase retrieval using a spatial light modulator as a convenient tunable diffuser," Opt. Lett. 41(10), 2161–2164 (2016). http://dx.doi.org/10.1364/OL.41.002161

16. P. F. Almoro, A. M. S. Maallo and S. G. Hanson, "Fast-convergent algorithm for speckle-based phase retrieval and a design for dynamic wavefront sensing," Appl. Opt. 48(8), 1485–1493 (2009). http://dx.doi.org/10.1364/AO.48.001485

17. R. W. Gerchberg, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik 35, 237–246 (1972).

18. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21(15), 2758–2769 (1982). http://dx.doi.org/10.1364/AO.21.002758

19. C. Guo, S. Liu and J. T. Sheridan, "Iterative phase retrieval algorithms. I: optimization," Appl. Opt. 54(15), 4698–4708 (2015). http://dx.doi.org/10.1364/AO.54.004698

20. Y. Shechtman et al., "Phase retrieval with application to optical imaging: a contemporary overview," IEEE Signal Process. Mag. 32(3), 87–109 (2015). http://dx.doi.org/10.1109/MSP.2014.2352673

21. E. J. Candes et al., "Phase retrieval via matrix completion," SIAM Rev. 57(2), 225–251 (2015). http://dx.doi.org/10.1137/151005099

22. Y. Shechtman, A. Beck and Y. C. Eldar, "GESPAR: efficient phase retrieval of sparse signals," IEEE Trans. Signal Process. 62(4), 928–938 (2014). http://dx.doi.org/10.1109/TSP.2013.2297687

23. J. R. Fienup, "Phase-retrieval algorithms for a complicated optical system," Appl. Opt. 32(10), 1737–1746 (1993). http://dx.doi.org/10.1364/AO.32.001737

24. H. H. Bauschke, P. L. Combettes and D. R. Luke, "Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization," J. Opt. Soc. Am. A 19(7), 1334–1345 (2002). http://dx.doi.org/10.1364/JOSAA.19.001334

25. Y. Chen and E. Candes, "Solving random quadratic systems of equations is nearly as easy as solving linear systems," in Advances in Neural Information Processing Systems, 739–747 (2015).

26. V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude," (2016). http://www.cs.tut.fi/lasip/DDT/index3.html

27. A. Fannjiang, "Absolute uniqueness of phase retrieval with random illumination," Inverse Probl. 28(7), 075008 (2012). http://dx.doi.org/10.1088/0266-5611/28/7/075008

28. Q. Lian, B. Shi and S. Chen, "Transfer orthogonal sparsifying transform learning for phase retrieval," Digital Signal Process. 62, 11–25 (2017). http://dx.doi.org/10.1016/j.dsp.2016.10.014

29. W. Bishara et al., "Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array," Lab Chip 11(7), 1276–1279 (2011). http://dx.doi.org/10.1039/c0lc00684j

30. K. Guo et al., "Optimization of sampling pattern and the design of Fourier ptychographic illuminator," Opt. Express 23(5), 6171–6180 (2015). http://dx.doi.org/10.1364/OE.23.006171

31. A. Greenbaum et al., "Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy," Sci. Rep. 3, 1717 (2013). http://dx.doi.org/10.1038/srep01717

32. S. Gazit et al., "Super-resolution and reconstruction of sparse sub-wavelength images," Opt. Express 17(26), 23920–23946 (2009). http://dx.doi.org/10.1364/OE.17.023920

33. M. Marim et al., "Off-axis compressed holographic microscopy in low-light conditions," Opt. Lett. 36(1), 79–81 (2011). http://dx.doi.org/10.1364/OL.36.000079

34. Y. Rivenson, A. Stern and B. Javidi, "Compressive Fresnel holography," J. Disp. Technol. 6(10), 506–509 (2010). http://dx.doi.org/10.1109/JDT.2010.2042276

35. Y. Rivenson, A. Stern and J. Rosen, "Compressive multiple view projection incoherent holography," Opt. Express 19(7), 6109–6118 (2011). http://dx.doi.org/10.1364/OE.19.006109

36. J. Song et al., "Sparsity-based pixel super resolution for lens-free digital in-line holography," Sci. Rep. 6, 24681 (2016). http://dx.doi.org/10.1038/srep24681

37. A. M. S. Maallo, P. F. Almoro and S. G. Hanson, "Quantization analysis of speckle intensity measurements for phase retrieval," Appl. Opt. 49(27), 5087–5094 (2010). http://dx.doi.org/10.1364/AO.49.005087

38. J. W. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers, Englewood (2005).

39. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, New York (2010).

40. K. Dabov et al., "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process. 16(8), 2080–2095 (2007). http://dx.doi.org/10.1109/TIP.2007.901238

41. A. Danielyan, V. Katkovnik and K. Egiazarian, "BM3D frames and variational image deblurring," IEEE Trans. Image Process. 21(4), 1715–1728 (2012). http://dx.doi.org/10.1109/TIP.2011.2176954

42. V. Katkovnik and J. Astola, "High-accuracy wave field reconstruction: decoupled inverse imaging with sparse modeling of phase and amplitude," J. Opt. Soc. Am. A 29(1), 44–54 (2012). http://dx.doi.org/10.1364/JOSAA.29.000044

43. V. Katkovnik and J. Astola, "Compressive sensing computational ghost imaging," J. Opt. Soc. Am. A 29(8), 1556–1567 (2012). http://dx.doi.org/10.1364/JOSAA.29.001556

44. V. Katkovnik and J. Astola, "Sparse ptychographical coherent diffractive imaging from noisy measurements," J. Opt. Soc. Am. A 30(3), 367–379 (2013). http://dx.doi.org/10.1364/JOSAA.30.000367

45. F. Heide et al., "FlexISP: a flexible camera image processing framework," ACM Trans. Graphics 33(6), 231 (2014). http://dx.doi.org/10.1145/2661229

46. S. Sreehari et al., "Plug-and-play priors for bright field electron tomography and sparse interpolation," IEEE Trans. Comput. Imaging 2(4), 408–423 (2016). http://dx.doi.org/10.1109/TCI.2016.2599778

47. C. A. Metzler, A. Maleki and R. G. Baraniuk, "BM3D-PRGAMP: compressive phase retrieval based on BM3D denoising," in IEEE Int. Conf. on Image Processing (ICIP), 2504–2508 (2016). http://dx.doi.org/10.1109/ICMEW.2016.7574718

48. T. Latychevskaia and H.-W. Fink, "Resolution enhancement in digital holography by self-extrapolation of holograms," Opt. Express 21(6), 7726–7733 (2013). http://dx.doi.org/10.1364/OE.21.007726

49. K. Egiazarian and V. Katkovnik, "Sparse complex domain wave field imaging," (2017). http://www.cs.tut/sgn/imaging/sparse

50. J. M. Bioucas-Dias and G. Valadao, "Phase unwrapping via graph cuts," IEEE Trans. Image Process. 16(3), 698–709 (2007). http://dx.doi.org/10.1109/TIP.2006.888351

Biography

Vladimir Katkovnik received his PhD and DSc degrees in technical cybernetics from Leningrad Polytechnic Institute (LPI) in 1964 and 1974, respectively. From 1964 to 1991, he was an associate professor and then a professor in the Department of Mechanics and Control Processes, LPI. Since 2003, he has been with the Department of Signal Processing, Tampere University of Technology, Finland. He has published six books and over 350 refereed journal and conference papers. His research interests include stochastic image/signal processing, nonparametric estimation, computational imaging, and computational phase imaging.

Karen Egiazarian received his MSc degree from Yerevan State University in 1981, his PhD from Moscow State University, Russia, in 1986, and the DTech degree from Tampere University of Technology (TUT), Finland, in 1994. He is a professor leading the Computational Imaging Group, Signal Processing Laboratory, TUT. He has authored about 650 refereed journal and conference papers. His research interests include computational imaging, sparse coding, and image and video restoration. He serves as an associate editor for the IEEE Transactions on Image Processing and is the editor-in-chief of the Journal of Electronic Imaging.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Vladimir Y. Katkovnik and Karen Egiazarian "Sparse superresolution phase retrieval from phase-coded noisy intensity patterns," Optical Engineering 56(9), 094103 (11 September 2017). https://doi.org/10.1117/1.OE.56.9.094103
Received: 27 February 2017; Accepted: 16 August 2017; Published: 11 September 2017
KEYWORDS: Super resolution, Phase retrieval, Sensors, Spatial light modulators, Phase shift keying, Wavefronts, Algorithm development