Purpose: Proton radiation therapy may achieve precise dose delivery to the tumor while sparing non-cancerous surrounding tissue, owing to the distinct Bragg peaks of protons. Aligning the high-dose region with the tumor requires accurate estimates of the proton stopping power ratio (SPR) of patient tissues, commonly derived from computed tomography (CT) image data. Photon-counting detectors for CT have demonstrated advantages over their energy-integrating counterparts, such as improved quantitative imaging, higher spatial resolution, and filtering of electronic noise. We assessed the potential of photon-counting computed tomography (PCCT) for improving SPR estimation by training a deep neural network on a domain transform from PCCT images to SPR maps.
Approach: The XCAT phantom was used to simulate PCCT images of the head with CatSim, as well as to compute corresponding ground truth SPR maps. The tube current was set to 260 mA, the tube voltage to 120 kV, and the number of view angles to 4000. The CT images and SPR maps were used as input and labels for training a U-Net.
Results: Prediction of SPR with the network yielded average root mean square errors (RMSE) of 0.26% to 0.41%, an improvement on the RMSE of methods based on physical modeling developed for single-energy CT (0.40% to 1.30%) and dual-energy CT (0.41% to 3.00%) when applied to the simulated PCCT data.
Conclusions: These early results show promise for using a combination of PCCT and deep learning for estimating SPR, which in extension demonstrates potential for reducing the beam range uncertainty in proton therapy.
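The RMSE figures above compare voxel-wise SPR predictions against the ground truth maps. As a minimal sketch of the metric (normalizing by the mean ground-truth SPR is an assumption; the study's exact masking and normalization are not specified here):

```python
import numpy as np

def spr_rmse_percent(pred, truth):
    """Relative root mean square error (in %) between a predicted SPR map
    and its ground truth. Normalizing by the mean ground-truth SPR is an
    assumption; the study's exact masking/normalization may differ."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return 100.0 * np.sqrt(np.mean((pred - truth) ** 2)) / np.mean(truth)
```

For example, a prediction that is uniformly 1% above a constant ground truth yields an RMSE of 1%.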
In CT radiomics, numerical parameters extracted from CT images are analyzed to identify biomarkers. Because these parameters can vary with imaging settings, acquisition protocols need to be optimized for radiomics. In this work, we investigate the effect of deep-learning-based image reconstruction on the accuracy of radiomic parameters of tumors. We imaged a 3D-printed lung phantom containing four tumors (ellipsoidal, lobulated, spherical, and spiculated), using the CAD model as ground truth. The phantom was 3D printed using fused deposition modeling with a PLA filament and an 80% fill rate in a gyroidal pattern to mimic soft tissue. CT images of the phantom and tumors were acquired with a GE Revolution scanner at 120 kVp and 213 mAs. We reconstructed images using filtered back-projection (FBP) and a vendor-supplied deep learning image reconstruction (DLIR) method (TrueFidelity, GE HealthCare). We also applied 24 custom convolutional neural network denoisers with a U-Net architecture, trained on the AAPM-Mayo Clinic Low Dose CT dataset. After segmentation, 14 radiomic features were extracted using SlicerRadiomics. The vendor-supplied DLIR gave a smaller relative error than FBP for 87% of the radiomic features. Eight of the 24 custom denoisers yielded a smaller error than FBP in 50% or more of the radiomic measurements. One denoiser (VGG16 + L1 loss, 32 features, batch size 16) outperformed FBP in 84% of measurements and the vendor-supplied DLIR in 63% of measurements. In conclusion, our results demonstrate that deep-learning-based denoising has the potential to improve the accuracy of CT radiomics.
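The percentage comparisons above count, per reconstruction method, how many radiomic features land closer to the CAD ground truth. A minimal sketch of that bookkeeping (function names are illustrative; counting ties as losses is an assumption):

```python
import numpy as np

def relative_error(measured, ground_truth):
    """Relative error of radiomic feature values against the CAD ground truth."""
    measured = np.asarray(measured, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return (measured - ground_truth) / ground_truth

def win_fraction(err_a, err_b):
    """Fraction of features for which method A has a strictly smaller
    absolute relative error than method B (ties count as losses,
    which is an assumption)."""
    err_a = np.abs(np.asarray(err_a, dtype=float))
    err_b = np.abs(np.asarray(err_b, dtype=float))
    return float(np.mean(err_a < err_b))
```

Applied to the 14 extracted features, `win_fraction` with DLIR errors as the first argument and FBP errors as the second would reproduce the 87%-style figures quoted above.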
Proton radiation therapy has the potential to achieve precise dose delivery to the tumor while sparing non-cancerous surrounding tissue, owing to the sharp Bragg peaks of protons. Aligning the high-dose region with the tumor requires accurate estimates of the proton stopping power ratio (SPR) of patient tissues, commonly derived from computed tomography (CT) image data. Photon-counting detectors for CT have demonstrated advantages over their energy-integrating counterparts, such as improved quantitative imaging, higher spatial resolution, and filtering of electronic noise. In this study, the potential of photon-counting computed tomography for improving SPR estimation was assessed by training a deep neural network on a domain transform from photon-counting CT images to SPR maps. XCAT phantoms of the head were generated and used to simulate photon-counting CT images with CatSim, as well as to compute corresponding ground truth SPR maps. The CT images and SPR maps were then used as input and labels for a neural network. Prediction of SPR with the network yielded average root mean square errors (RMSE) of 0.26% to 0.41%, an improvement on errors reported for methods based on dual-energy CT (DECT). These early results show promise for using a combination of photon-counting CT and deep learning for predicting SPR, which in extension demonstrates potential for reducing the beam range uncertainty in proton therapy.
Photon-counting detectors are greatly improving the resolution and image quality of computed tomography (CT). The drawback, however, is that reconstruction becomes more challenging: the amount of data to process increases considerably due to the multiple energy bins and materials in the reconstruction, as well as the improved resolution. Computationally efficient material decomposition and reconstruction methods, in turn, tend to generate noisy images that do not fully meet the expected image quality. Therefore, there is a need for efficient denoising of the resulting material images. We present a new and fast denoiser based on a linear minimum mean square error (LMMSE) estimator. The LMMSE is very fast to compute but not commonly used for CT image denoising, probably because of its inability to adapt the amount of denoising to different parts of the image and the difficulty of deriving accurate statistical properties from the CT data. To overcome these problems, we propose a model-based deep learning strategy: a deep neural network that preserves the LMMSE structure (model-based), providing more robustness to unseen data as well as good interpretability of the result. In this way, the solution adapts to the anatomy at every point of the image and to the noise properties at that particular location. To assess the performance of the new method, we compare it both to a conventional LMMSE estimator and to a “black-box” CNN in a simulation study with anthropomorphic phantoms.
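For reference, the conventional LMMSE baseline mentioned above can be sketched with global statistics. This makes the stated limitation concrete: a single gain is applied everywhere, with no spatial adaptation (the proposed model-based network replaces this fixed gain with learned, location-dependent statistics):

```python
import numpy as np

def lmmse_denoise(y, noise_var):
    """Conventional (global) LMMSE denoiser: x_hat = mu + g * (y - mu)
    with gain g = var_x / (var_x + var_n). The signal variance var_x is
    estimated as the noisy-image variance minus the known noise variance,
    assuming independent additive noise."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    var_x = max(y.var() - noise_var, 0.0)
    denom = var_x + noise_var
    gain = var_x / denom if denom > 0 else 0.0
    return mu + gain * (y - mu)
```

Every pixel is shrunk toward the global mean by the same factor; keeping this mu + g * (y - mu) structure while learning mu and g per location is the essence of the model-based strategy described above.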
Next-generation X-ray computed tomography, based on photon-counting detectors, is now clinically available. These new detectors come with the promise of higher contrast-to-noise ratio, higher spatial resolution, and improved low-dose imaging. However, the multi-bin nature of photon-counting detectors makes the image reconstruction problem more difficult. Common approaches, such as the two-step projection-based approach, may result in material basis images with an excessive degree of noise, which limits the clinical usefulness of the images. One possible solution is to “assist” the conventional image reconstruction by post-processing the reconstructed images using deep learning. Such networks are often trained using a pixel-wise loss, such as the mean squared error. This low-level per-pixel comparison is known to lead to over-smoothing and a loss of the fine-grained details that are important to the perceptual quality and clinical usefulness of the image. In this abstract, we propose to tackle this issue by including an adversarial loss based on the Wasserstein generative adversarial network with gradient penalty. The adversarial loss encourages the distribution of the processed images to be similar to that of the ground truth. This helps prevent over-smoothing and ensures that the ground truth texture is well preserved. In particular, we train a U-Net using a combination of the mean absolute error and an adversarial loss to correct for noise in the material basis images. We demonstrate that the proposed method can produce denoised virtual monoenergetic images, with realistic texture, at a range of energy levels.
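As a toy illustration of the loss combination described above, the generator minimizes a weighted sum of mean absolute error and a Wasserstein adversarial term, while the critic minimizes the WGAN-GP objective. A linear critic is used here only so the gradient penalty has a closed form without autograd; the actual method would use a CNN critic trained with automatic differentiation, and the loss weights below are illustrative assumptions:

```python
import numpy as np

def mae(a, b):
    """Pixel-wise mean absolute error between images."""
    return np.mean(np.abs(a - b))

def critic(x, w, b):
    """Toy linear critic D(x) = <w, x> + b over a batch of images.
    For a linear critic the gradient with respect to the input is w
    everywhere, which gives the closed-form penalty below."""
    return np.sum(w * x, axis=tuple(range(1, x.ndim))) + b

def generator_loss(fake, real, w, b, lam_adv=1e-3):
    """MAE plus Wasserstein adversarial term: the generator is rewarded
    for raising the critic's score on its outputs (lam_adv is an
    illustrative weight)."""
    return mae(fake, real) - lam_adv * np.mean(critic(fake, w, b))

def critic_loss(fake, real, w, b, lam_gp=10.0):
    """WGAN-GP critic objective; the gradient norm at every interpolate
    between real and fake images is simply ||w|| for this linear critic."""
    gp = (np.linalg.norm(w) - 1.0) ** 2
    return (np.mean(critic(fake, w, b)) - np.mean(critic(real, w, b))
            + lam_gp * gp)
```

The gradient penalty term drives the critic toward unit gradient norm, enforcing the 1-Lipschitz constraint that the Wasserstein formulation requires.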
Photon-counting spectral CT is a novel technology with great promise. However, a common issue is detector inhomogeneity, which results in streak artifacts in the sinogram domain and ring artifacts in the image domain. These rings are very conspicuous and limit the clinical usefulness of the images. We propose a deep-learning-based image processing technique for ring artifact correction in the sinogram domain. In particular, we train a U-Net with a perceptual loss function, using VGG16 as the feature extractor, to remove streak artifacts in the basis sinograms. Our results show that this method can successfully produce ring-corrected virtual monoenergetic images at a range of energy levels.
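To make the streak/ring relationship above concrete: a fixed per-channel error in the sinogram appears as a vertical streak, which backprojects to a ring in the image. The toy below injects such an error and removes it with a classical running-median baseline (the abstract's method trains a U-Net instead; sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sinogram: rows are view angles, columns are detector channels.
views, channels = 360, 64
sino = rng.normal(100.0, 1.0, size=(views, channels))

# Detector inhomogeneity: one miscalibrated channel adds a constant
# offset to every view, i.e. a vertical streak in the sinogram
# (a ring after reconstruction).
offsets = np.zeros(channels)
offsets[20] = 5.0
corrupted = sino + offsets

# Classical baseline correction: estimate each channel's streak as the
# deviation of its mean profile from a running median across channels.
col_mean = corrupted.mean(axis=0)
half = 2
smooth = np.array([np.median(col_mean[max(0, i - half):i + half + 1])
                   for i in range(channels)])
corrected = corrupted - (col_mean - smooth)
```

A learned sinogram-domain correction replaces the running-median estimate with a mapping from corrupted to clean basis sinograms, which can also handle view-dependent (non-constant) detector errors that this simple baseline cannot.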