Accurate models of the X-ray attenuation process are required for quantitative estimation of iodine concentration with model-based reconstruction methods. The choice of model is driven not only by the accuracy sought but also by the growing complexity as more free parameters must be reconstructed. We investigated the applicability of three attenuation models in a single-pixel problem using either two or three monochromatic beams near the K-edge energy of iodine. We found that an empirical five-component model proposed by Midgley leads to the lowest error when modeling iodine-free materials and a small error in estimating iodine concentration (0.1% and 3.39%, respectively), whereas decomposition into contributions from the photoelectric effect and incoherent scatter estimates the iodine concentration more accurately (0.72%) but has a larger error (8.9%) when reconstructing iodine-free materials. Decomposition into base materials performs worst on both objectives (8.9% and 62%).
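As a rough illustration of the base-material decomposition evaluated above, the sketch below solves the single-pixel, two-energy problem as a 2x2 linear system for water and iodine fractions. All attenuation values are hypothetical placeholders, not tabulated coefficients, and the water/iodine basis pair is chosen only for illustration.

```python
import numpy as np

# Minimal sketch of two-energy basis-material decomposition at the iodine
# K-edge. The attenuation values below are hypothetical placeholders, NOT
# tabulated data; substitute real attenuation coefficients for a
# quantitative result.

# Basis-material linear attenuation [1/cm] at E1 < K-edge < E2 (hypothetical).
mu_water = np.array([0.35, 0.30])    # water at (E1, E2)
mu_iodine = np.array([8.0, 30.0])    # iodine jumps up across its K-edge

# Measured attenuation of the mixed pixel at the two energies (hypothetical).
mu_meas = np.array([0.40, 0.55])

# mu_meas = A @ [c_water, c_iodine]; solve the 2x2 system for the fractions.
A = np.column_stack([mu_water, mu_iodine])
c_water, c_iodine = np.linalg.solve(A, mu_meas)
print(f"water fraction: {c_water:.4f}, iodine fraction: {c_iodine:.6f}")
```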
Purpose: To evaluate whether combining a polychromatic reconstruction algorithm for breast CT with projection data acquired using alternating high- and low-energy spectra allows a significant dose reduction while maintaining image quality. Materials and Methods: A breast phantom was scanned on a clinical breast CT scanner using the exposure selected by the automatic exposure control, once at the regular spectrum with a tube voltage of 49 kV and a 1.576 mm aluminum filter, and once with a second, higher-energy spectrum created by adding a 0.254 mm copper filter. An acquisition with spectrum switching was simulated by interleaving projections from the standard and high-energy data sets, and a previously developed polychromatic reconstruction algorithm was modified to reconstruct the breast CT images. Image quality was assessed using the signal difference-to-noise ratio (SDNR) of a high- and a low-contrast target present in the phantom. A Monte Carlo simulation was performed to determine the mean glandular dose (MGD) of each scan. Results: Acquisition of the simulated scan with spectrum switching would result in an MGD of 6.57 mGy, compared to the standard acquisition MGD of 10.4 mGy, a reduction of 37%. At the same time, the measured SDNR of the mixed-spectrum reconstructions was slightly higher than that of the standard acquisition, with an increase in SDNR of 6.6% (p < 0.01) for the high-contrast target and 5.3% (p = 0.12) for the low-contrast target. Conclusion: Our approach combining a polychromatic reconstruction algorithm for breast CT with an advanced acquisition protocol using alternating high- and low-energy spectra can lower dose by at least a third without loss of target SDNR.
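For reference, the sketch below computes an SDNR measurement under the common definition: the absolute difference of ROI means divided by the background standard deviation. The image, ROI locations, and contrast level are synthetic stand-ins, not the phantom data used in the study.

```python
import numpy as np

# Minimal sketch of the SDNR figure of merit, assuming the common definition
# SDNR = |mean(target ROI) - mean(background ROI)| / std(background ROI).
# The image and ROI slices below are hypothetical placeholders.

def sdnr(image, target_roi, background_roi):
    """Signal difference-to-noise ratio between two rectangular ROIs."""
    target = image[target_roi]
    background = image[background_roi]
    return abs(target.mean() - background.mean()) / background.std()

# Usage on a synthetic reconstruction slice with an embedded contrast target:
rng = np.random.default_rng(0)
image = rng.normal(100.0, 5.0, size=(256, 256))
image[100:120, 100:120] += 15.0
print(sdnr(image,
           (slice(100, 120), slice(100, 120)),     # target ROI
           (slice(10, 60), slice(10, 60))))        # background ROI
```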
We propose the use of an aperture-diverse imaging system for high-resolution imaging through strong atmospheric turbulence. The system has two channels. One channel partitions the aperture into a set of annular apertures that provide a set of images of the target at different spatial resolutions. The other channel feeds an imaging Shack-Hartmann wavefront sensor with a small number of sub-apertures. The combined imagery from this setup is processed using a blind restoration algorithm that captures the inherent temporal correlations in the observed atmospheric wavefronts. This approach shows significant promise for providing high-fidelity imagery for observations acquired through strong atmospheric turbulence. The approach also allows for the separation of the phase perturbations from different layers of the atmosphere. This characteristic offers potential for the accurate restoration of images with fields of view substantially larger than the isoplanatic angle.
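As a rough sketch of one ingredient of such a system, the code below reconstructs a wavefront from Shack-Hartmann slope measurements by least squares on a finite-difference model. The grid size and test wavefront are hypothetical, and a real sensor would additionally need pupil masking and noise weighting.

```python
import numpy as np

# Minimal sketch of least-squares wavefront reconstruction from
# Shack-Hartmann slopes on an n x n grid (phase flattened row-major).

n = 8

def diff_ops(n):
    """Forward-difference operators mapping phase -> x and y slopes."""
    D = (np.eye(n, k=1) - np.eye(n))[:-1]   # (n-1) x n forward differences
    I = np.eye(n)
    Dx = np.kron(I, D)    # differences along x (within each row)
    Dy = np.kron(D, I)    # differences along y (across rows)
    return Dx, Dy

Dx, Dy = diff_ops(n)
A = np.vstack([Dx, Dy])

# Synthetic "true" wavefront (tilt plus defocus) and its measured slopes.
y, x = np.mgrid[0:n, 0:n]
phi_true = (0.3 * x + 0.1 * (x - n / 2) ** 2 + 0.1 * (y - n / 2) ** 2).ravel()
slopes = A @ phi_true

# Least-squares reconstruction; piston (the constant mode) is unobservable,
# so compare after removing the mean.
phi_hat, *_ = np.linalg.lstsq(A, slopes, rcond=None)
err = (phi_hat - phi_hat.mean()) - (phi_true - phi_true.mean())
print("rms reconstruction error:", np.sqrt(np.mean(err ** 2)))
```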
We investigate the use of a novel multi-lens imaging system in the context of biometric identification, and more specifically, for iris recognition. Multi-lenslet cameras offer a number of significant advantages over standard single-lens camera systems, including thin form-factor and wide angle of view. By using appropriate lenslet spacing relative to the detector pixel pitch, the resulting ensemble of images implicitly contains subject information at higher spatial frequencies than those present in a single image. Additionally, a multi-lenslet approach enables the use of observational diversity, including phase, polarization, neutral density, and wavelength diversities. For example, post-processing multiple observations taken with differing neutral density filters yields an image having an extended dynamic range. Our research group has developed several multi-lens camera prototypes for the investigation of such diversities.
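As a minimal sketch of the neutral-density diversity mentioned above, the code below fuses frames taken through different (hypothetical) ND filters into a single extended-dynamic-range image by averaging the unsaturated radiance estimates.

```python
import numpy as np

# Minimal sketch of extended dynamic range from ND-diverse observations.
# The transmittances and the simulated scene are hypothetical placeholders.

full_well = 1.0                                      # detector saturation level
nd_transmittance = np.array([1.0, 0.25, 0.0625])     # three hypothetical ND filters

# Simulate a scene whose brightest regions saturate the unfiltered frame.
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 8.0, size=(64, 64))
frames = np.clip(scene[None] * nd_transmittance[:, None, None], 0, full_well)

# Each unsaturated pixel gives a radiance estimate scene ~ frame / T; average
# the valid estimates, weighting by T (brighter frames are less noisy).
valid = frames < 0.99 * full_well
estimates = frames / nd_transmittance[:, None, None]
weights = valid * nd_transmittance[:, None, None]
fused = (weights * estimates).sum(0) / np.maximum(weights.sum(0), 1e-12)
print("max relative error:", np.abs(fused - scene).max() / scene.max())
```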
In this paper, we present techniques for computing a high-resolution reconstructed image from an ensemble of low-resolution images containing sub-pixel level displacements. The quality of a reconstructed image is measured by computing the Hamming distance between the Daugman iris code of a conventional reference iris image and the iris code of a corresponding reconstructed image. We present numerical results concerning the effect of noise and defocus blur in the reconstruction process using simulated data, and report preliminary work on the reconstruction of actual iris data obtained with our camera prototypes.
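A minimal sketch of the simplest member of this family of techniques, shift-and-add onto a finer grid assuming known sub-pixel shifts, together with the fractional Hamming distance used as the quality metric. The frames, shifts, and codes are stand-ins; real data would also require registration and deblurring.

```python
import numpy as np

# Minimal shift-and-add sketch: place each low-resolution pixel at its
# sub-pixel location on a high-resolution grid and average overlaps.

def shift_and_add(frames, shifts, factor):
    """Reconstruct a HR image from LR frames with known (dy, dx) shifts."""
    h, w = frames[0].shape
    hr_sum = np.zeros((h * factor, w * factor))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        rows = (np.arange(h) * factor + round(dy * factor)) % (h * factor)
        cols = (np.arange(w) * factor + round(dx * factor)) % (w * factor)
        hr_sum[np.ix_(rows, cols)] += frame
        hr_cnt[np.ix_(rows, cols)] += 1
    return hr_sum / np.maximum(hr_cnt, 1)

def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two binary iris codes."""
    return np.mean(code_a != code_b)

# Usage on placeholder frames with quarter- and half-pixel displacements:
frames = [np.ones((16, 16)), np.ones((16, 16))]
print(shift_and_add(frames, [(0.0, 0.0), (0.25, 0.5)], factor=4).shape)
```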
Many iterative methods that are used to solve $Ax=b$ can be derived as quasi-Newton methods for minimizing the quadratic function $\frac{1}{2}x^TA^TAx - x^TA^Tb$. In this paper, several such methods are considered, including conjugate gradient least squares (CGLS), Barzilai-Borwein (BB), residual norm steepest descent (RNSD) and Landweber (LW). Regularization properties of these methods are studied by analyzing the so-called "filter factors". The algorithm proposed by Barzilai and Borwein is shown to have very favorable regularization and convergence properties. We also find that preconditioning can result in much better convergence properties for these iterative methods.
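A minimal sketch of the Barzilai-Borwein iteration applied to this quadratic, with early stopping acting as the regularizer; the test matrix and data are hypothetical, and the filter-factor analysis itself is not reproduced here.

```python
import numpy as np

# Minimal sketch of the Barzilai-Borwein (BB) iteration for the quadratic
# f(x) = 1/2 x^T A^T A x - x^T A^T b, whose gradient is g = A^T (A x - b).
# Early stopping (a fixed iteration budget) plays the role of regularization.

def bb_least_squares(A, b, iters=50):
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # safe first step length
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)              # BB1 step length
        x, g = x_new, g_new
    return x

# Usage on a hypothetical overdetermined test problem:
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 40))
x_true = rng.normal(size=40)
b = A @ x_true + 0.01 * rng.normal(size=100)
print(np.linalg.norm(bb_least_squares(A, b) - x_true))
```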
Serra-Capizzano recently introduced anti-reflective boundary conditions (AR-BC) for blurring models; the idea seems promising from both the computational and the approximation viewpoint. The key point is that, under certain symmetry conditions, the AR-BC matrices can be essentially simultaneously diagonalized by the (fast) sine transform DST-I and, moreover, $C^1$ continuity at the border is guaranteed in the 1D case. Here we give more details for the 2D case and perform extensive numerical simulations which illustrate that the AR-BC can be superior to Dirichlet, periodic and reflective BCs in certain applications.
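A minimal 1D sketch of the four boundary rules being compared, showing that the anti-reflective extension $x_{-j} = 2x_0 - x_j$ leaves no jump in the first difference at the edge; the test signal is hypothetical.

```python
import numpy as np

# Minimal sketch of the four boundary extensions in 1-D. The anti-reflective
# rule reflects through the boundary VALUE, which preserves first-order
# (C^1-like) behaviour at the edge.

def extend_left(x, p, bc):
    """Extend signal x by p samples past its left edge under boundary rule bc."""
    mirrored = x[1:p + 1][::-1]
    if bc == "zero":                    # Dirichlet: pad with zeros
        left = np.zeros(p)
    elif bc == "periodic":              # wrap around from the right end
        left = x[-p:]
    elif bc == "reflective":            # mirror across the edge
        left = mirrored
    else:                               # anti-reflective: x[-j] = 2 x[0] - x[j]
        left = 2 * x[0] - mirrored
    return np.concatenate([left, x])

x = np.linspace(0.0, 1.0, 16) ** 2      # smooth test signal
for bc in ["zero", "periodic", "reflective", "antireflective"]:
    e = extend_left(x, 3, bc)
    jump = abs((e[4] - e[3]) - (e[3] - e[2]))   # slope change across the edge
    print(f"{bc:>15}: slope jump at boundary = {jump:.4f}")
```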
A flexible preconditioning approach based on Kronecker product and singular value decomposition (SVD) approximations is presented. The approach can be used with a variety of boundary conditions, depending on what is most appropriate for the specific deblurring application. It is shown that, regardless of the imposed boundary condition, SVD approximations can be used effectively in filtering methods, such as the truncated SVD, as well as in designing preconditioners for iterative methods.
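A minimal sketch of the idea, assuming a separable (Kronecker) approximation is already in hand: the SVD of $A_r \otimes A_c$ is assembled from the SVDs of the small factors and used as a truncated-SVD filter without ever forming the large matrix. The Gaussian factors and test image are hypothetical.

```python
import numpy as np

# Minimal sketch of SVD filtering through a Kronecker approximation
# A ~ Ar (x) Ac: the SVD of the big operator comes from the small SVDs,
# since Ar (x) Ac = (Ur (x) Uc) diag(sr (x) sc) (Vr (x) Vc)^T.

def kron_tsvd_deblur(Ar, Ac, B, tol=1e-3):
    """TSVD solution of (Ar kron Ac) vec(X) = vec(B) via the small SVDs."""
    Uc, sc, Vct = np.linalg.svd(Ac)
    Ur, sr, Vrt = np.linalg.svd(Ar)
    S = np.outer(sc, sr)                 # singular values of Ar (x) Ac
    Bt = Uc.T @ B @ Ur                   # U^T vec(B), via the small factors
    F = np.where(S > tol * S.max(), 1.0 / np.where(S > 0, S, 1.0), 0.0)
    return Vct.T @ (F * Bt) @ Vrt        # V applied to filtered coefficients

# Usage: the blur B = Ac @ X @ Ar.T corresponds to (Ar kron Ac) vec(X).
n = 32
r = np.arange(n)
Ac = np.exp(-0.5 * ((r[:, None] - r[None, :]) / 2.0) ** 2)  # Gaussian factor
Ar = Ac.copy()
rng = np.random.default_rng(2)
X = rng.uniform(size=(n, n))
B = Ac @ X @ Ar.T
Xhat = kron_tsvd_deblur(Ar, Ac, B)
print(np.linalg.norm(Ac @ Xhat @ Ar.T - B) / np.linalg.norm(B))
```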
In image restoration and reconstruction applications, unconstrained Krylov subspace methods represent an attractive approach for computing approximate solutions. They are fast, but unfortunately they do not produce approximate solutions that preserve nonnegativity. As a consequence, the error of the computed approximate solution can be large. Enforcing a nonnegativity constraint can produce much more accurate approximate solutions, but can also be computationally expensive. This paper considers a nonnegativity-constrained minimization algorithm which is a variant of an algorithm proposed by Kaufman. Numerical experiments show that the algorithm can be more accurate than, and computationally competitive with, unconstrained Krylov subspace methods.
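For illustration, a generic projected-gradient sketch of the nonnegativity constraint on $\frac{1}{2}\|Ax-b\|^2$; this is not the Kaufman-style algorithm studied in the paper, and the test problem is hypothetical.

```python
import numpy as np

# Minimal sketch of enforcing nonnegativity by projected gradient descent on
# 1/2 ||Ax - b||^2. This is a generic illustration of the constraint, NOT the
# Kaufman-variant algorithm considered in the paper.

def projected_gradient_nn(A, b, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L for this quadratic
    for _ in range(iters):
        x = np.maximum(x - step * (A.T @ (A @ x - b)), 0.0)  # step + projection
    return x

# Usage on a hypothetical nonnegative test problem:
rng = np.random.default_rng(3)
A = rng.uniform(size=(80, 40))
x_true = np.maximum(rng.normal(size=40), 0.0)      # nonnegative ground truth
b = A @ x_true + 0.01 * rng.normal(size=80)
x_nn = projected_gradient_nn(A, b)
print("negative entries:", (x_nn < 0).sum(),
      " rel. error:", np.linalg.norm(x_nn - x_true) / np.linalg.norm(x_true))
```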
In image restoration, a separable, spatially variant blurring function has the form $k(x, y; s, t) = k_1(x, s)\,k_2(y, t)$. If this kernel is known, then discretizations lead to a blurring matrix which is a Kronecker product of two matrices of smaller dimension. If $k$ is not known precisely, such a discretization is not possible. In this paper we describe an interpolation scheme to construct a Kronecker product approximation to the blurring matrix from a set of observed point spread functions for separable, or nearly separable, spatially variant blurs. An approximate singular value decomposition is then computed from this Kronecker factorization.
Keywords: image restoration, interpolation, Kronecker product, spatially variant blur, SVD
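A minimal sketch of the Kronecker structure in question: the discretized separable blur satisfies $(A_1 \otimes A_2)\,\mathrm{vec}(X) = \mathrm{vec}(A_2 X A_1^T)$ with column-major vec, so the large matrix never needs to be formed. The small matrices here are random stand-ins.

```python
import numpy as np

# Minimal sketch of the separable-blur structure: if k(x,y;s,t) =
# k1(x,s) k2(y,t), then (A1 kron A2) vec(X) = vec(A2 @ X @ A1.T)
# (column-major vec), so the Kronecker matrix need not be formed.

rng = np.random.default_rng(4)
A1, A2 = rng.normal(size=(5, 5)), rng.normal(size=(6, 6))
X = rng.normal(size=(6, 5))                    # rows match A2, columns match A1

big = np.kron(A1, A2) @ X.ravel(order="F")     # explicit Kronecker product
small = (A2 @ X @ A1.T).ravel(order="F")       # equivalent small-matrix form
print(np.allclose(big, small))                 # True
```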
We describe how to efficiently apply a spatially variant blurring operator using linear interpolation of measured point spread functions. Numerical experiments illustrate that substantially better resolution can be obtained at very little additional cost compared to piecewise constant interpolation.
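A minimal sketch of one common way to realize this: convolve the image with each measured PSF and blend the results with spatially varying interpolation weights, $Ax \approx \sum_i D_i H_i x$. The two Gaussian PSFs and the left-to-right linear weights are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal sketch of a spatially variant blur built from a few "measured"
# PSFs: convolve with each PSF, then blend with interpolation weights.

def gaussian_psf(sigma, size=9):
    r = np.arange(size) - size // 2
    g = np.exp(-0.5 * (r[:, None] ** 2 + r[None, :] ** 2) / sigma ** 2)
    return g / g.sum()

rng = np.random.default_rng(5)
x = rng.uniform(size=(64, 64))
psfs = [gaussian_psf(1.0), gaussian_psf(3.0)]   # "measured" at left/right edges

# Linear interpolation weights across columns: w -> 1 - w, left to right.
w = np.linspace(1.0, 0.0, x.shape[1])[None, :]
weights = [w, 1.0 - w]

blurred = sum(wi * convolve(x, pi, mode="reflect")
              for wi, pi in zip(weights, psfs))
print(blurred.shape)
```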
This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Removing a linear, shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. Theoretical results are established to show that fast convergence for our iterative algorithm can be expected, and test results are reported for a ground-based astronomical imaging problem.
An algorithm for computing solutions to ill-conditioned banded Toeplitz least squares problems by a rank-revealing URV factorization is considered. The factorization is computed in $O(\beta n \log n + \beta n^2)$ operations, where $\beta$ is the bandwidth of the coefficient matrix. An approximate solution to ill-conditioned banded Toeplitz systems, in the presence of noise, is then obtained by truncating the factorization. Numerical results are provided which illustrate that the truncated URV can compute solutions comparable to those of the more expensive truncated singular value decomposition.
Pure nuclear quadrupole resonance (NQR) of ¹⁴N nuclei is quite promising as a method for detecting explosives such as RDX and contraband narcotics such as cocaine and heroin in quantities of interest. Pure NQR is conducted without an external applied magnetic field, so potential concerns about damage to magnetically encoded data or exposure of personnel to large magnetic fields are not relevant. Because the NQR frequencies of different compounds are quite distinct, we do not encounter false alarms from the NQR signals of other benign materials. We have constructed a proof-of-concept NQR explosives detector which interrogates a volume of 300 liters (10 ft³). With minimal modification to the existing explosives detector, we can detect operationally relevant quantities of (free base) cocaine within the 300-liter inspection volume in 6 seconds. We are presently extending this approach to the detection of heroin base and also examining ¹⁴N and ³⁵,³⁷Cl pure NQR for detection of the hydrochloride forms of both materials. An adaptation of this NQR approach may be suitable for scanning personnel for externally carried contraband and explosives. We first outline the basics of the NQR approach, highlighting strengths and weaknesses, and then present representative results for RDX and cocaine detection. We also present a partial compendium of relevant NQR parameters measured for some materials of interest.
Discretized 2-D deconvolution problems arising, e.g., in image restoration and seismic tomography, can be formulated as least squares computations, $\min_x \|b - Tx\|_2$, where $T$ is often a large-scale rectangular Toeplitz-block matrix. We consider solving such block least squares problems by the preconditioned conjugate gradient algorithm using square nonsingular circulant-block and related preconditioners, constructed from the blocks of the rectangular matrix $T$. Preconditioning with such matrices allows efficient implementation using the 1-D or 2-D Fast Fourier Transform (FFT). It is well known that the resolution of ill-posed deconvolution problems can be substantially improved by regularization to compensate for their ill-posed nature. We show that regularization can easily be incorporated into our preconditioners, and we report on numerical experiments on a Cray Y-MP. The experiments illustrate the good convergence properties of these FFT-based preconditioned iterations.
We study fast preconditioned conjugate gradient (PCG) methods for solving least squares problems $\min_x \|b - Tx\|_2$, where $T$ is an $m \times n$ Toeplitz matrix of rank $n$. Two circulant preconditioners are suggested: one, denoted by $P$, is based on a block partitioning of $T$ and the other, denoted by $N$, is based on the displacement representation of $T^TT$. Each is obtained without forming $T^TT$. We prove formally that for a wide class of problems the PCG method with $P$ converges in a small number of iterations independent of $m$ and $n$, so that the computational cost of solving such Toeplitz least squares problems is $O(m \log n)$. Numerical experiments using both $P$ and $N$ are reported, indicating similar good convergence properties for each preconditioner.
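A minimal sketch of circulant preconditioning in this spirit, using a Strang-type circulant built from the central diagonals of a symmetric positive definite Toeplitz matrix and inverted in $O(n \log n)$ via the FFT. This toy is not the $P$ or $N$ preconditioner of the paper, and the test matrix is hypothetical.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

# Minimal sketch of circulant-preconditioned CG for an SPD Toeplitz system
# (the normal-equations setting behind Toeplitz least squares). The circulant
# is diagonalized by the FFT, so applying its inverse costs O(n log n).

n = 256
col = 0.5 ** np.arange(n)                 # decaying first column: SPD Toeplitz
T = toeplitz(col)
b = np.ones(n)

# Strang-type circulant: keep the central diagonals of T, wrap them around.
c = col.copy()
c[n // 2 + 1:] = col[1:n // 2][::-1]
eig = np.fft.fft(c).real                  # circulant eigenvalues via the FFT

M = LinearOperator((n, n),
                   matvec=lambda r: np.real(np.fft.ifft(np.fft.fft(r) / eig)))

for label, prec in [("cg ", None), ("pcg", M)]:
    iters = []
    x, info = cg(T, b, M=prec, callback=lambda xk: iters.append(1))
    print(label, "iterations:", len(iters))
```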