Dual-energy computed tomography (CT) has the potential to decompose tissues into different materials. However, the classic direct inversion (DI) method for multimaterial decomposition (MMD) cannot accurately separate more than two basis materials due to the ill-posedness of the problem and the amplified image noise. We propose an integrated MMD method that exploits the piecewise smoothness and intrinsic sparsity of the decomposition images. The proposed MMD was formulated as an optimization problem including a quadratic data fidelity term, an isotropic total variation term that encourages image smoothness, and a nonconvex penalty function that promotes sparsity of the decomposition images. The mass and volume conservation rule was formulated as a probability simplex constraint. An accelerated primal-dual splitting approach with line search was applied to solve the optimization problem. The proposed method with different penalty functions was compared against DI on a digital phantom, a Catphan® 600 phantom, a quantitative imaging phantom, and a pelvis patient. The proposed framework distinctly separated the CT image into up to 12 basis materials plus air with high decomposition accuracy. Cross talk between different materials was substantially reduced, as shown by the decreased nondiagonal elements of the normalized cross correlation (NCC) matrix. The mean square error of the measured electron densities was reduced by 72.6%. Across all datasets, the proposed method improved the average volume fraction accuracy from 61.2% to 99.9% and increased the diagonality of the NCC matrix from 0.73 to 0.96. Compared with DI, the proposed MMD framework improved decomposition accuracy and material separation.
1. Introduction

Conventional x-ray computed tomography (CT) projections are acquired with a single-energy spectrum. The reconstructed single-energy CT (SECT) provides the linear attenuation coefficients (LACs) of the scanned object, but the LACs depend on both the effective atomic number and the electron density, making them insufficient to determine the material components. Dual-energy CT (DECT) is acquired with two distinct energy spectra that are attenuated differently by the tissues. Therefore, DECT provides enhanced information to better differentiate and quantify material compositions. DECT shows promise in many clinical applications, including virtual unenhancement imaging,1,2 liver lesion characterization,3 kidney stone characterization,4 oncologic imaging,1,5 bone removal,1 etc. Despite the potential of using DECT for multimaterial decomposition (MMD), the problem is ill-posed for decomposing more than two materials without additional assumptions.6,7

Existing DECT MMD approaches can be classified into three categories: projection-based methods, integrated methods, and image-based methods. The projection-based method decomposes the two independent sinograms into two basis components by interpolating a lookup table and then performs separate image reconstructions.8 The integrated method incorporates the DECT acquisition into the forward projection model and reconstructs the basis material images directly from the DECT projections.9 The projection-based and integrated methods are robust to beam hardening artifacts,8,9 but they have several limitations. First, the projection-based method is limited to decomposing two basis materials; it is neither straightforward nor desirable to introduce additional constraints in the projection domain. Second, the projection-based method requires strict spatial and temporal consistency between the high-energy (HE) and low-energy (LE) acquisitions. This condition is not met by many DECT systems using dual-source or single-source fast kilovoltage-switching designs. Third, the integrated method is computationally expensive due to the repeated projection and backprojection during the reconstruction of the basis image components. Fourth, the coupling between reconstruction and image decomposition in the integrated method, especially with multiple basis materials, leads to decomposition results that are sensitive to parameter tuning.

Alternatively, image-based decomposition methods have been investigated, which perform basis material decomposition on the HE and LE CT images. This approach is applicable to all DECT systems, straightforward, and computationally tractable. Mendonça et al.10 proposed an image-based MMD method that introduces two assumptions: mass and volume conservation, and that no more than three materials are present in each voxel. For each voxel, the method loops over a material triplet library, identifies the best-fit triplet, and decomposes the voxel into the three materials via direct matrix inversion. The method is termed direct inversion (DI), and it demonstrates the capability of decomposing CT images into more than three basis materials. However, the decomposition accuracy is sensitive to the selection of the material triplet library, which limits the flexibility of MMD, and the resultant images suffer from substantially amplified noise.

In this study, we propose a DECT MMD method that decomposes the CT volume into multiple basis materials accurately while suppressing the decomposition image noise.
Instead of rigidly confining each voxel to contain at most three basis materials, a nonconvex sparsity term is introduced to penalize the number of materials that are simultaneously present in the same voxel.

2. Method

2.1. Formulation

The image-domain DECT MMD problem is formulated as

$$\min_{X} \; \frac{1}{2}\|AX-Y\|_F^2 + \lambda \sum_{m=1}^{M} \|X_m\|_{\mathrm{TV}} + \mu \sum_{m=1}^{M}\sum_{n=1}^{N} |x_{mn}|^{\alpha} \quad \text{subject to} \quad X_{:,n} \in \Delta_M, \;\; n = 1,\dots,N, \tag{1}$$

where the optimization variable $X$ is the volume fraction (VF) matrix of dimension $M$ by $N$. The voxels on the component image are indexed by $n = 1,\dots,N$, and the decomposition materials are indexed by $m = 1,\dots,M$. $x_{mn}$ is the VF of the $m$'th material for the $n$'th voxel. $A$ is the component-to-attenuation transformation matrix defined as

$$A = \begin{bmatrix} \mu_1^H & \mu_2^H & \cdots & \mu_M^H \\ \mu_1^L & \mu_2^L & \cdots & \mu_M^L \end{bmatrix}, \tag{2}$$

where $\mu_m^H$ and $\mu_m^L$ are the LACs of the $m$'th material at HE and LE, respectively. Matrix $A$ transforms the VF into the LACs at HE and LE for each voxel. $Y$ is the attenuation image at HE and LE from measurements, defined as

$$Y = \begin{bmatrix} y_1^H & y_2^H & \cdots & y_N^H \\ y_1^L & y_2^L & \cdots & y_N^L \end{bmatrix},$$

where $y_n^L$ and $y_n^H$ denote the LAC of the $n$'th voxel on the CT image at LE and HE, respectively. $\|X_m\|_{\mathrm{TV}}$ is the isotropic total variation (TV) regularization of the $m$'th component image (the $m$'th row of $X$ arranged on the image grid), defined as

$$\|X_m\|_{\mathrm{TV}} = \sum_{n=1}^{N} \sqrt{(D_r X_m)_n^2 + (D_c X_m)_n^2},$$

where $D_r$ and $D_c$ are the derivative matrices along the row and column directions, respectively. $\Delta_M$ is the probability simplex set that enforces the mass and volume conservation rule on each voxel, such that each column of the VF matrix satisfies the sum-to-one and non-negative constraints:

$$\Delta_M = \left\{ x \in \mathbb{R}^M : \sum_{m=1}^{M} x_m = 1, \; x_m \ge 0 \right\}.$$

The first term is the quadratic data fidelity term that minimizes the difference between the measured DECT image $Y$ and the estimated DECT image $AX$ calculated from the VF matrix $X$. The second term is the isotropic TV regularization term, applied to each component image to encourage image smoothness while preserving image edges. The last term is the penalty function in the form of $|x|^{\alpha}$ ($0 \le \alpha \le 1$), which promotes sparsity in the number of materials that are simultaneously present in the same voxel. In this study, we specifically focus on the four cases $\alpha \in \{0, 1/2, 2/3, 1\}$. In the case of $\alpha = 1$, the sparsity term reduces to a constant under the mass and volume conservation constraint. In the other cases, the penalty functions are nonconvex, with the nonconvexity increasing as $\alpha$ goes to 0.

2.2. Algorithm

This study utilizes an accelerated primal-dual splitting approach with line search for both convex and nonconvex problems,11,12 which solves optimization problems of the form

$$\min_{x} \; F(Kx) + G(x), \tag{3}$$

where $G$ is convex, $F$ is possibly nonconvex, and $K$ is a linear operator. The optimization problem in Eq. (1) is formulated into the canonical form shown in Eq. (3) by defining $K$ as the stacked linear operator composed of $A$, the derivative operators $D_r$ and $D_c$, and the identity, such that $F(KX)$ is equivalent to the objective function in Eq. (1). $G$ is an indicator function that equals 0 if every column of $X$ lies in $\Delta_M$ and infinity otherwise, which enforces the sum-to-one and non-negative constraints.

The accelerated primal-dual algorithm with line search12 is presented in Algorithm 1, where the key steps are the evaluations of the proximal operators of the functions $F$ and $G$. The proximal operator of a function $f$ with step size $t$ is defined as follows:13

$$\mathrm{prox}_{tf}(x) = \arg\min_{u} \; f(u) + \frac{1}{2t}\|u - x\|_2^2. \tag{4}$$

Following the definition, the proximal operator of $G$ reduces to the projection onto the probability simplex $\Delta_M$, which can be efficiently solved by sorting and thresholding the input vector, as presented in Algorithm 2.14

Algorithm 1: Accelerated primal-dual algorithm with line search.
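To make the iteration structure concrete, the sketch below implements the basic fixed-step PDHG update that Algorithm 1 builds upon; the line search and acceleration of Ref. 12 are omitted for brevity, and the callables `K`, `Kt`, `prox_F`, and `prox_G` are illustrative placeholders rather than the paper's implementation. The dual step uses the Moreau decomposition, so only $\mathrm{prox}_{tF}$ itself is required.

```python
import numpy as np

def prox_conjugate(prox_f, y, sigma):
    """Prox of the scaled convex conjugate sigma * F*, via the Moreau
    decomposition: prox_{sigma F*}(y) = y - sigma * prox_{F/sigma}(y/sigma)."""
    return y - sigma * prox_f(y / sigma, 1.0 / sigma)

def pdhg(K, Kt, prox_F, prox_G, x0, tau, sigma, n_iter=500):
    """Basic fixed-step PDHG (Chambolle-Pock form) for min_x F(Kx) + G(x).
    K / Kt: callables applying the linear operator and its adjoint.
    prox_F(v, t), prox_G(v, t): proximal operators with step size t."""
    x = x0.copy()
    y = np.zeros_like(K(x))
    x_bar = x.copy()
    for _ in range(n_iter):
        # dual ascent step on y, using the conjugate prox of F
        y = prox_conjugate(prox_F, y + sigma * K(x_bar), sigma)
        # primal descent step on x; for Eq. (1), prox_G projects each
        # column of the VF matrix onto the probability simplex (Algorithm 2)
        x_new = prox_G(x - tau * Kt(y), tau)
        # over-relaxation with theta = 1
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

In the convex case, convergence of this fixed-step variant requires $\tau\sigma\|K\|^2 \le 1$; the line search in Algorithm 1 adapts the step sizes automatically instead of fixing them in advance.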
Algorithm 2: Proximal operator evaluation of $G$ ($\mathrm{prox}_{tG}(x)$).
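For concreteness, a minimal NumPy sketch of the sort-and-threshold projection in Algorithm 2 (following Ref. 14) is shown below; variable names are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} by sorting and thresholding (cf. Ref. 14)."""
    u = np.sort(v)[::-1]                     # sort in descending order
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    # largest index (0-based) with u_k + (1 - cumsum_k) / k > 0
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)   # common shift applied to all entries
    return np.maximum(v + theta, 0.0)
```

Because $\mathrm{prox}_{tG}$ is the projection onto a set, it is independent of the step size $t$; in Eq. (1), the projection is applied independently to each column of the VF matrix at every iteration.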
With the separable sum rule, the evaluation of the proximal operator of $F$ reduces to evaluating the proximal operator with respect to each variable:

$$\mathrm{prox}_{tF}(z_1, z_2, z_3) = \left(\mathrm{prox}_{tF_1}(z_1), \; \mathrm{prox}_{tF_2}(z_2), \; \mathrm{prox}_{tF_3}(z_3)\right).$$

Following the definition of the proximal operator in Eq. (4), the proximal operators of the quadratic data fidelity term $F_1$ and the isotropic TV term $F_2$ are

$$\mathrm{prox}_{tF_1}(z) = \frac{z + tY}{1 + t}, \qquad \left[\mathrm{prox}_{tF_2}(z)\right]_n = z_n \max\left(0, \; 1 - \frac{t\lambda}{\|z_n\|_2}\right).$$

Due to the separability, the proximal operator of $F_3$ reduces to the pointwise proximal operator evaluation of $\mu|x|^{\alpha}$, as follows:

$$\mathrm{prox}_{t\mu|\cdot|^{\alpha}}(y) = \arg\min_{x} \; \mu|x|^{\alpha} + \frac{1}{2t}(x - y)^2. \tag{5}$$

Exact analytic solutions to Eq. (5) exist for the scenarios considered in this study, $\alpha \in \{0, 1/2, 2/3, 1\}$, which can be found in Sec. 6.1. For other values of $\alpha$, the proximal operators could be evaluated numerically with an iterative approach, such as Newton's method.15

With the evaluations of the proximal operators of the functions $F$ and $G$, the DECT MMD optimization problem in Eq. (1) can be solved following Algorithm 1.

2.3. Evaluation

The proposed framework with different sparsity parameters $\alpha$ in the penalty functions was evaluated on a digital phantom, a Catphan® 600 phantom, a quantitative imaging phantom, and a pelvis patient, and compared with the DI method.10 The quantitative imaging phantom data were acquired on a Siemens SOMATOM Force DECT. The Catphan and pelvis patient data were acquired on a Siemens SOMATOM Definition Flash. For the pelvis patient data, the mAs and CT dose index for the [LE, HE] acquisitions were [170 mAs, 131 mAs] and [4.92 mGy, 3.66 mGy], respectively. The LE and HE CT images, as shown in Figs. 1, 3, 5, and 7, were reconstructed using standard filtered backprojection (FBP). The HE and LE LACs of each basis material were computed as the average HE and LE LAC values within the regions of interest (ROIs) shown on the LE CT images for all cases. These LAC values are presented in Table 4 in Sec. 6.

For quantitative evaluation of the material decomposition accuracy, the mean and standard deviation (STD) of the electron density, as well as the VF, were computed within each ROI on the decomposition component images. The electron density at voxel $n$ is calculated by

$$\rho_n = \sum_{m=1}^{M} x_{mn} \rho_m,$$

where $\rho_m$ is the electron density of the $m$'th material. The VF accuracy in a uniform ROI is defined in terms of $\bar{x}$, the mean material component vector over all voxels within the uniform ROI, and $x^{\mathrm{gt}}$, the material component vector of the ground truth decomposition.

To quantify the amount of overlap between different material decompositions across the whole image, the normalized cross correlation (NCC) coefficients at zero lag are evaluated for every pair of materials. The NCC coefficient at zero lag of materials $i$ and $j$ is defined as follows:

$$\mathrm{NCC}(i,j) = \frac{\sum_{n} x_{in} x_{jn}}{\sqrt{\sum_{n} x_{in}^2}\,\sqrt{\sum_{n} x_{jn}^2}},$$

which equals 0 if the two materials are completely separated and equals 1 if they have identical distributions on the whole image. The NCC matrix is defined as the $M \times M$ matrix with entries $\mathrm{NCC}(i,j)$. The diagonal elements of the NCC matrix are equal to 1 by definition, and the other elements are between 0 and 1, indicating the extent of material separation. If all materials are completely separated for all voxels, then all the off-diagonal elements should be 0. The diagonality of the NCC matrix is computed using the Pearson correlation coefficient between the row and column indices, weighted by the matrix entries, which equals 1 for a diagonal matrix, $-1$ for an antidiagonal matrix, and 0 for a uniform matrix. In the case of every voxel on the CT image being composed of only a single material, the diagonality of the NCC matrix is 1. On the other hand, in the case where every voxel contains an equal amount of all basis materials, the diagonality of the NCC matrix is 0.
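For reference, a compact NumPy sketch of the two separation metrics, under the NCC and diagonality definitions given above, is shown below; it is an illustrative sketch rather than the evaluation code used in this study, and all names are placeholders.

```python
import numpy as np

def ncc_matrix(X):
    """NCC at zero lag between every pair of component images.
    X is the M-by-N VF matrix, one row (component image) per material."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.maximum(norms, 1e-12)   # guard against all-zero components
    return Xn @ Xn.T                    # entries lie in [0, 1] for non-negative VFs

def diagonality(M):
    """Pearson correlation between row and column indices, weighted by the
    (square) matrix entries: 1 for diagonal, -1 for antidiagonal, 0 for uniform."""
    idx = np.arange(M.shape[0], dtype=float)
    w = M / M.sum()
    mu_r = (w.sum(axis=1) * idx).sum()  # weighted mean row index
    mu_c = (w.sum(axis=0) * idx).sum()  # weighted mean column index
    cov = ((idx[:, None] - mu_r) * (idx[None, :] - mu_c) * w).sum()
    sig_r = np.sqrt(((idx - mu_r) ** 2 * w.sum(axis=1)).sum())
    sig_c = np.sqrt(((idx - mu_c) ** 2 * w.sum(axis=0)).sum())
    return cov / (sig_r * sig_c)
```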
The diagonality of the NCC matrix summarizes the decomposition separability over the whole image and across all materials.

3. Results

3.1. Digital Phantom

Figure 1 shows the LE and HE CT images of the digital phantom. This simple digital phantom is made up of four basis materials, including bone, iodine, water, and air, corresponding to the four ROIs indicated by the rectangular areas. Figure 2 shows the decomposition results on the digital phantom using the proposed framework with different $\alpha$ values and the classic DI method. In the cases when $\alpha < 1$, the proposed framework distinctly separated all basis materials with low noise, achieving a clear border between different materials. On the contrary, neither DI nor the proposed framework with $\alpha = 1$ achieved the desired sparsity or assigned the correct material to each ROI. The material separation capability of the proposed framework with $\alpha < 1$ is further confirmed by the NCC map in Fig. 2, showing little or no cross talk between materials. With DI, the iodine component is mixed up with the bone and water. In the case of $\alpha = 1$, none of the components is separated.

3.2. Catphan® 600 Phantom

Figure 3 shows the LE and HE CT images of the Catphan® 600 phantom with contrast rods, which are made of six basis materials, including Teflon, Delrin, an iodine solution, polymethyl pentene (PMP), inner soft tissue, and air, corresponding to the six ROIs on the LE CT image [Fig. 3(a)]. The materials in the labeled rods on the HE CT [Fig. 3(b)] are Teflon, Delrin, an iodine solution, polystyrene, low-density polyethylene, PMP, and a second iodine solution of a different concentration, respectively. The VF accuracy was evaluated on the six ROIs, and the electron density was evaluated on the seven contrast rods. Figure 4 shows the decomposition images and the NCC map. The proposed framework with $\alpha < 1$ successfully separated the phantom into the six basis components with minimal cross talk between different materials. With the classical DI method, the off-diagonal elements of the NCC matrix are up to 0.36, showing that the two corresponding basis materials, Delrin and PMP, are not well separated. The proposed method with $\alpha = 1$ is unable to achieve material separation at all. Table 1 shows the evaluated electron densities for the seven contrast rods. The mean square error of the electron density was reduced by 72.6% for the proposed framework compared with DI.

Table 1: The electron densities measured on the Catphan contrast rods labeled in Fig. 3(b). The RMSE is evaluated for each method over the seven rods.
3.3. Quantitative Imaging Phantom

Figure 5 shows the DECT images of the quantitative imaging phantom, which consists of 12 basis materials, including three iodine solutions of different concentrations (ROI1 to ROI3), iodine solution (PMP) (ROI4), three calcium solutions of different concentrations (ROI5 to ROI7), HE blood 70 (ROI8), HE blood 100 (ROI9), adipose (ROI10), water (ROI11), and brain (ROI12). In this phantom study, to avoid being trapped in undesired local minima with the increased number of basis materials, the algorithm was initialized by setting $x_{mn} = 1$ for the basis material closest to each pixel. Figure 6 shows the decomposition images with the corresponding NCC map. The proposed framework with $\alpha < 1$ decomposed the DECT images into 12 different basis components plus air and improved the VF accuracy from 51% using the DI method to 100%. The nondiagonal entries of the NCC map are close to 0 for the proposed framework with $\alpha < 1$, showing clean separation of the basis materials. The off-diagonal entries are close to 1 for the $\alpha = 1$ scenario. The classical DI method is unable to separate similar materials. For example, HE blood 100 (ROI9), adipose (ROI10), and water (ROI11) have similar LAC values, resulting in substantially nonzero NCC elements using DI.

3.4. Pelvis Patient

Figures 7 and 8 show the DECT images and the decomposition images with the corresponding NCC map, respectively, for the pelvis patient. The proposed framework with $\alpha < 1$ decomposed the DECT images into bone (ROI1), iodine (ROI2), muscle (ROI3), fat (ROI4), and air, and achieved an NCC matrix with nondiagonal coefficients close to 0. Table 2 summarizes the VF accuracy and the diagonality of the NCC matrix for all datasets. The proposed method with $\alpha < 1$ achieves a significantly higher VF accuracy and diagonality than the comparison methods, approaching nearly perfect material decomposition. Across all datasets, the proposed method improved the average VF accuracy from 61.2% to 99.9% and increased the diagonality of the NCC matrix from 0.73 to 0.96.

Table 2: The VF accuracy and diagonality of the NCC matrix for all datasets.
Table 3 presents the runtime and the hyperparameters used in all cases. Although our algorithm used MATLAB built-in GPU computing tools for acceleration, the proposed framework takes 12 min on average, which is slower than the DI method, requiring only 3 min on a CPU. The hyperparameters were tuned case by case to achieve visually desirable sparseness and smoothness. The sparseness and the smoothness can be promoted by increasing $\mu$ and $\lambda$, respectively. The remaining hyperparameters are related to the step sizes in the algorithm and were tuned by trial and error for faster and more stable convergence.

Table 3: The runtime and the hyperparameters used in all cases.
Figure 9 shows the convergence plots for different $\alpha$ values on the Catphan and the pelvis patient cases. The objective values for different $\alpha$ cases are not directly comparable since different hyperparameter values were used. However, it is worth noting that the plots show different convergence patterns. For the convex case $\alpha = 1$, the objective decreases smoothly with a convex-shaped convergence curve, showing stable and robust convergence. For the nonconvex case $\alpha = 0$, the curve decreases with sudden changes. For the $\alpha$ values in between, the curves exhibit intermediate behavior, generally closer to the smooth convex pattern but with occasional irregular and bumpy segments.

4. Discussion

The standard DI method for DECT-based MMD imposes a sparsity constraint by enforcing a hard ceiling of three on the number of materials in each voxel and then solves for the basis material components by direct matrix inversion. However, the three-material constraint is arbitrary and unrealistically rigid. Moreover, the direct matrix inversion inevitably amplifies the image noise, as shown in previous publications10 and the current study. Our proposed DECT MMD framework utilizes a TV regularization that regulates the decomposition image noise and a sparsity regularization that penalizes the number of materials simultaneously present at the same voxel. The soft sparsity regularization allows the number of basis materials to vary across voxels.

The sparsity term is of the form $|x|^{\alpha}$ ($0 \le \alpha \le 1$), where $\alpha \in \{0, 1/2, 2/3, 1\}$ were specifically studied. The sparsity term reduces to the L1 norm of the VF matrix with $\alpha = 1$, which further reduces to a constant under the mass and volume conservation constraint. Therefore, the sparsity term with $\alpha = 1$ does not promote material sparsity despite the desirable mathematical property of being convex. When $\alpha \in \{0, 1/2, 2/3, 1\}$, the corresponding sparsity term has a closed-form proximal operator, which could be difficult to evaluate for other values of $\alpha$. The differences between $\alpha = 0$, $1/2$, and $2/3$ are subtle with respect to the decomposition results. The material penalty term with $\alpha = 0$ is also referred to as the counting norm and is a mathematically rigorous description of the material sparsity. This is reflected in the best quantitative performance achieved with this norm. However, due to its extreme nonconvexity, it is more challenging to tune the parameters for convergence to an acceptable local minimum. The sparsity terms with $\alpha = 1/2$ and $2/3$ are better-behaved nonconvex functions, showing more robust performance with respect to optimization parameter selection.

The nonconvex optimization problem in this study was solved using an accelerated primal-dual algorithm with line search proposed in Ref. 12. The primal-dual algorithm reduces to the primal-dual hybrid gradient (PDHG) method16,17 when applied to convex optimization problems. PDHG is one of the first-order algorithms that are well suited for large-scale optimization problems. Other first-order algorithms, such as the fast iterative shrinkage-thresholding algorithm,18 which achieves a convergence rate of $O(1/k^2)$, substantially faster than the $O(1/k)$ rate of PDHG, will be investigated in future studies on the nonconvex optimization problem.

5. Conclusions

The proposed method accurately decomposed the DECT phantom and patient images into up to 12 basis materials, markedly reduced cross talk among materials, suppressed decomposition image noise, and retained image spatial resolution. The proposed method using a nonconvex material sparsity penalty outperforms the convex penalty and the standard DI method.
6. Appendix

6.1. Evaluating the Proximal Operator of $\mu|\cdot|^{\alpha}$ for Different $\alpha$ Values

For $\alpha = 0$, the proximal operator in Eq. (5) reduces to a hard thresholding operation:

$$\mathrm{prox}_{t\mu|\cdot|^0}(y) = \begin{cases} y, & |y| > \sqrt{2t\mu} \\ 0, & \text{otherwise.} \end{cases}$$

For $\alpha = 1/2$ and $y \ge 0$, the first-order optimality condition of Eq. (5) is the cubic equation $v^3 - yv + \tfrac{1}{2}t\mu = 0$ with $v = x^{1/2}$, whose roots admit the simple transcendental expressions proposed by McKelvey19 (the case $y < 0$ follows by symmetry).

For $\alpha = 2/3$ and $y \ge 0$, the solution of the proximal operator is a root of the quartic polynomial

$$u^4 - yu + \frac{2}{3}t\mu = 0, \qquad u = x^{1/3}, \tag{7}$$

for which the analytic solution exists and can be evaluated following the work in Ref. 20. The details of the proximal operator evaluation for $\alpha = 2/3$ are shown in Algorithm 3.

Algorithm 3: Proximal operator evaluation ($\alpha = 2/3$), the solution to Eqs. (5) and (7).
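To make these cases concrete, the sketch below evaluates Eq. (5) pointwise. The $\alpha = 0$ branch implements the hard thresholding above and the $\alpha = 1$ branch implements the soft thresholding given in the next paragraph; for $\alpha = 1/2$ and $2/3$, a simple numerical search is substituted for the cubic and quartic root formulas of Refs. 19 and 20, so this is an illustrative sketch rather than a reproduction of Algorithm 3.

```python
import numpy as np

def prox_power(y, t, mu, alpha):
    """Pointwise prox of mu * |x|**alpha with step t:
    argmin_x  mu * |x|**alpha + (x - y)**2 / (2 * t), applied elementwise."""
    y = np.asarray(y, dtype=float)
    if alpha == 1.0:   # soft thresholding
        return np.sign(y) * np.maximum(np.abs(y) - t * mu, 0.0)
    if alpha == 0.0:   # hard thresholding with threshold sqrt(2 * t * mu)
        return np.where(np.abs(y) > np.sqrt(2.0 * t * mu), y, 0.0)

    def scalar_prox(yi):
        s, a = np.sign(yi), abs(yi)
        if a == 0.0:
            return 0.0
        # the nonzero minimizer (if any) lies in (0, a); a dense search
        # stands in for the closed-form cubic/quartic roots
        xs = np.linspace(a / 2000.0, a, 2000)
        obj = mu * xs**alpha + (xs - a) ** 2 / (2.0 * t)
        i = np.argmin(obj)
        # x = 0 is always a candidate minimizer for nonconvex alpha < 1
        return s * xs[i] if obj[i] < a**2 / (2.0 * t) else 0.0

    return np.vectorize(scalar_prox)(y)
```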
For $\alpha = 1$, the proximal operator results in soft thresholding:

$$\mathrm{prox}_{t\mu|\cdot|}(y) = \mathrm{sign}(y)\max(|y| - t\mu, \; 0).$$

6.2. LAC Values

Table 4 presents the HE and LE LAC values of the basis materials, which were computed as the average HE and LE LAC values within the ROIs shown on the LE CT images in Figs. 1, 3, 5, and 7.

Table 4: The HE and LE LAC values of the basis materials in all cases.
Acknowledgments

This research was supported by the DOE under Grant Nos. DE-SC0017057 and DE-SC0017687, and by the NIH under Grant Nos. R44CA183390, R01CA188300, R01CA230278, R21CA228160, and R43CA183390. We would like to thank Shuai Leng at the Mayo Clinic for providing the quantitative imaging phantom DECT data. An earlier version of this work was published in the conference proceedings of SPIE Medical Imaging 2019.21

References

1. M. D. Agrawal et al., "Oncologic applications of dual-energy CT in the abdomen," RadioGraphics 34(3), 589–612 (2014). https://doi.org/10.1148/rg.343135041
2. H. A. Lee et al., "Comparison of virtual unenhanced images derived from dual-energy CT with true unenhanced images in evaluation of gallstone disease," Am. J. Roentgenol. 206(1), 74–80 (2016). https://doi.org/10.2214/AJR.15.14570
3. Q. Wang et al., "Quantitative analysis of the dual-energy CT virtual spectral curve for focal liver lesions characterization," Eur. J. Radiol. 83(10), 1759–1764 (2014). https://doi.org/10.1016/j.ejrad.2014.07.009
4. D. T. Boll et al., "Renal stone assessment with dual-energy multidetector CT and advanced postprocessing techniques: improved characterization of renal stone composition—pilot study," Radiology 250(3), 813–820 (2009). https://doi.org/10.1148/radiol.2503080545
5. C. N. De Cecco et al., "Dual-energy CT: oncologic applications," Am. J. Roentgenol. 199(Suppl. 5), S98–S105 (2012). https://doi.org/10.2214/AJR.12.9207
6. R. E. Alvarez and A. Macovski, "Energy-selective reconstructions in x-ray computerized tomography," Phys. Med. Biol. 21, 733–744 (1976). https://doi.org/10.1088/0031-9155/21/5/002
7. A. Macovski et al., "Energy dependent reconstruction in x-ray computerized tomography," Comput. Biol. Med. 6, 325–334 (1976). https://doi.org/10.1016/0010-4825(76)90069-X
8. C. H. McCollough et al., "Dual- and multi-energy CT: principles, technical approaches, and clinical applications," Radiology 276(3), 637–653 (2015). https://doi.org/10.1148/radiol.2015142631
9. Y. Long and J. A. Fessler, "Multi-material decomposition using statistical image reconstruction for spectral CT," IEEE Trans. Med. Imaging 33(8), 1614–1626 (2014). https://doi.org/10.1109/TMI.2014.2320284
10. P. R. S. Mendonça, P. Lamb, and D. V. Sahani, "A flexible method for multi-material decomposition of dual-energy CT images," IEEE Trans. Med. Imaging 33(1), 99–116 (2014). https://doi.org/10.1109/TMI.2013.2281719
11. T. Moellenhoff et al., "Low rank priors for color image regularization," Lect. Notes Comput. Sci. 8932, 126–140 (2014). https://doi.org/10.1007/978-3-319-14612-6
12. Y. Malitsky and T. Pock, "A first-order primal-dual algorithm with linesearch," SIAM J. Optim. 28(1), 411–432 (2018). https://doi.org/10.1137/16M1092015
13. N. Parikh and S. Boyd, "Proximal algorithms," Found. Trends Optim. 1(3), 123–231 (2013). https://doi.org/10.1561/2400000003
14. W. Wang and M.-Á. Carreira-Perpiñán, "Projection onto the probability simplex: an efficient algorithm with a simple proof, and an application," (2013).
15. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge (2004).
16. T. Pock et al., "An algorithm for minimizing the Mumford-Shah functional," in Proc. IEEE Int. Conf. Comput. Vision (2009). https://doi.org/10.1109/ICCV.2009.5459348
17. A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," J. Math. Imaging Vision 40(1), 120–145 (2011). https://doi.org/10.1007/s10851-010-0251-1
18. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci. 2(1), 183–202 (2009). https://doi.org/10.1137/080716542
19. J. P. McKelvey, "Simple transcendental expressions for the roots of cubic equations," Am. J. Phys. 52(3), 269–270 (1984). https://doi.org/10.1119/1.13706
20. D. Krishnan and R. Fergus, "Fast image deconvolution using hyper-Laplacian priors," in Proc. 22nd Int. Conf. Neural Inf. Process. Syst., 1033–1041 (2009).
21. Q. Lyu et al., "Image-domain multi-material decomposition for dual-energy CT with non-convex sparsity regularization," Proc. SPIE 10949, 1094903 (2019). https://doi.org/10.1117/12.2508037
Biography

Qihui Lyu received her BS degree in physics from Nanjing University in Nanjing, China. She is currently a PhD candidate in the Physics and Biology in Medicine Program at the University of California, Los Angeles. She is interested in advanced optimization algorithms for inverse treatment planning, medical image reconstruction, and dual-energy CT.

Daniel O’Connor received his PhD in mathematics from the University of California, Los Angeles. He is an assistant professor at the University of San Francisco. His postdoctoral research in the UCLA Department of Radiation Oncology focused on applications of optimization in radiation therapy, medical imaging, and machine learning. His research interests are in large-scale optimization modeling and algorithms.

Tianye Niu received his PhD in physics and electronics from the University of Science and Technology of China. He is a professor at the Institute of Translational Medicine at Zhejiang University. His research interests include compressed sensing, low-dose reconstruction, dual-energy CT imaging, spectral CT imaging, IGRT, proton therapy, radiomics, and cone-beam CT instrumentation.

Ke Sheng received his PhD in medical physics from the University of Wisconsin–Madison. He is a professor and an associate vice chair of radiation oncology at the University of California, Los Angeles. His main research areas include inverse optimization, 4π noncoplanar radiotherapy, image-guided radiotherapy, biological outcome modeling, and small animal irradiation.