Data scarcity and data imbalance are two major challenges in training deep learning models on medical images such as brain tumor MRI data. Recent advances in generative artificial intelligence have opened new possibilities for synthesizing MRI data, including brain tumor MRI scans, offering a potential way to mitigate data scarcity and enhance training data availability. This work adapts 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data conditioned on a tumor mask. The framework comprises two components: a 3D autoencoder for perceptual compression and a conditional 3D Diffusion Probabilistic Model (DPM) that generates high-quality, diverse multi-contrast brain tumor MRI samples guided by the conditioning tumor mask. Unlike existing works that generate either 2D multi-contrast or 3D single-contrast MRI samples, our models generate 3D multi-contrast samples. We also integrated a conditional module within the UNet backbone of the DPM to capture the semantic, class-dependent data distribution driven by the provided tumor mask, so that samples are generated for a specific brain tumor mask. We trained our models on two brain tumor datasets: the public Cancer Genome Atlas (TCGA) dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models generated high-quality 3D multi-contrast brain tumor MRI samples with tumor locations aligned to the input condition mask. Image quality was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models that rely on brain tumor MRI data.
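The abstract's conditional module feeds the tumor mask to the DPM's denoising UNet. One common way to do this, sketched below under assumptions (the paper does not specify the mechanism), is to resize the mask to the latent resolution and concatenate it as an extra input channel; `concat_mask_condition` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def concat_mask_condition(latent, mask):
    """Nearest-neighbor resize a binary tumor mask to the latent grid and append
    it as an extra channel of the denoiser input.
    latent: (C, D, H, W) latent volume; mask: (D0, H0, W0) binary tumor mask.
    Illustrative sketch of mask conditioning, not the paper's exact module."""
    d, h, w = latent.shape[1:]
    zi = np.arange(d) * mask.shape[0] // d   # nearest-neighbor source indices
    yi = np.arange(h) * mask.shape[1] // h
    xi = np.arange(w) * mask.shape[2] // w
    m = mask[np.ix_(zi, yi, xi)].astype(latent.dtype)
    return np.concatenate([latent, m[None]], axis=0)
```

The concatenated tensor would then be the input to the conditional UNet at every denoising step, letting the network learn the mask-dependent data distribution.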
The quality of brain MRI volumes is often compromised by motion artifacts arising from respiratory patterns and involuntary head movements, which manifest as blurring and ghosting that markedly degrade image quality. In this study, we introduce a 3D deep learning framework to restore brain MR volumes afflicted by motion artifacts. The framework integrates a densely connected 3D U-net augmented by generative adversarial network (GAN)-informed training with a novel volumetric reconstruction loss function tailored to the 3D GAN. Our methodology is substantiated through comprehensive experiments on a diverse set of motion-affected MR volumes. After correction, the restored high-quality MR volumes exhibit volumetric signatures comparable to those of motion-free MR volumes. This underscores the potential of this 3D deep learning system to rectify motion artifacts in brain MR volumes, highlighting a promising avenue for advanced clinical applications.
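The abstract does not give the form of its volumetric reconstruction loss; a common generator objective in GAN-based restoration, shown here purely as a generic sketch, combines a voxel-wise L1 term over the whole 3D volume with a non-saturating adversarial term. The weighting `lam` and the scalar `disc_score` (discriminator probability that the restored volume is real) are assumptions for illustration.

```python
import numpy as np

def generator_loss(restored, target, disc_score, lam=100.0):
    """Generic GAN-restoration generator objective (not the paper's exact loss):
    voxel-wise L1 reconstruction over the 3D volume plus a non-saturating
    adversarial term -log D(G(x)); lam trades reconstruction vs. realism."""
    recon = np.mean(np.abs(restored - target))        # volumetric L1
    adv = -np.log(np.clip(disc_score, 1e-8, 1.0))     # non-saturating GAN term
    return lam * recon + adv
```

A perfect restoration that fully fools the discriminator drives both terms to zero, while either residual voxel error or an unconvinced discriminator increases the loss.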
Purpose: Deep learning has shown promise for predicting the molecular profiles of gliomas using MR images. Prior to clinical implementation, ensuring robustness to real-world problems, such as patient motion, is crucial. The purpose of this study is to perform a preliminary evaluation on the effects of simulated motion artifact on glioma marker classifier performance and determine if motion correction can restore classification accuracies. Approach: T2w images and molecular information were retrieved from the TCIA and TCGA databases. Simulated motion was added in the k-space domain along the phase encoding direction. Classifier performance for IDH mutation, 1p/19q co-deletion, and MGMT methylation was assessed over the range of 0% to 100% corrupted k-space lines. Rudimentary motion correction networks were trained on the motion-corrupted images. The performance of the three glioma marker classifiers was then evaluated on the motion-corrected images. Results: Glioma marker classifier performance decreased markedly with increasing motion corruption. Applying motion correction effectively restored classification accuracy for even the most motion-corrupted images. Motion correction of uncorrupted images exceeded the original performance of the network. Conclusions: Robust motion correction can facilitate highly accurate deep learning MRI-based molecular marker classification, rivaling invasive tissue-based characterization methods. Motion correction may be able to increase classification accuracy even in the absence of a visible artifact, representing a new strategy for boosting classifier performance.
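The motion simulation described above (corrupting k-space lines along the phase-encoding direction) can be sketched as follows. The details are assumptions for illustration: here rigid in-plane translation is modeled as a linear phase ramp applied to a randomly chosen fraction of phase-encoding lines, which may differ from the paper's exact corruption model.

```python
import numpy as np

def simulate_motion(image, corrupt_frac, rng=None):
    """Corrupt a fraction of phase-encoding k-space lines of a 2D image with
    random linear phase ramps (i.e., random in-plane shifts), producing
    ghosting/blurring. Illustrative sketch, not the paper's exact method."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.fft.fftshift(np.fft.fft2(image))
    n_pe = k.shape[0]                       # phase-encoding axis assumed = 0
    lines = rng.choice(n_pe, size=int(corrupt_frac * n_pe), replace=False)
    freqs = np.fft.fftshift(np.fft.fftfreq(k.shape[1]))
    for i in lines:
        shift = rng.uniform(-5, 5)          # random shift in pixels
        k[i, :] *= np.exp(-2j * np.pi * freqs * shift)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

Sweeping `corrupt_frac` from 0.0 to 1.0 reproduces the 0%–100% corruption range over which classifier performance was assessed.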
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
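The subject-separation safeguard described above, keeping all slices from one subject in the same fold so slice-wise randomization cannot leak a subject across train and test, can be sketched as below. `subject_wise_folds` is a hypothetical helper (one `subject_ids` entry per 2D slice), not the authors' code.

```python
import numpy as np

def subject_wise_folds(subject_ids, n_folds=5, seed=0):
    """Assign each *subject* (not each slice) to a fold, so every slice from a
    given subject lands in the same fold and no subject is duplicated across
    train/test splits. Hypothetical sketch of leakage-free randomization."""
    unique = np.unique(subject_ids)
    shuffled = np.random.default_rng(seed).permutation(unique)
    fold_of_subject = {s: i % n_folds for i, s in enumerate(shuffled)}
    return np.array([fold_of_subject[s] for s in subject_ids])
```

Randomizing at the slice level instead would scatter near-identical adjacent slices of one tumor across train and test sets, inflating reported accuracy.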