Including uncertainty information in the assessment of a segmentation of pathologic structures on medical images offers the potential to increase trust in deep learning algorithms for the analysis of medical imaging. Here, we examine options to extract uncertainty information from deep learning segmentation models and the influence of the choice of cost function on these uncertainty measures. To this end, we train conventional UNets without dropout, deep UNet ensembles, and Monte-Carlo (MC) dropout UNets to segment lung nodules on low-dose CT using either soft Dice or weighted categorical cross-entropy (wcc) as loss functions. We extract voxel-wise uncertainty information from UNet models based on softmax maximum probability, and from deep ensembles and MC dropout UNets using mean voxel-wise entropy. Upon visual assessment, areas of high uncertainty are localized in the periphery of segmentations and are in good agreement with incorrectly labelled voxels. Furthermore, we evaluate how well uncertainty measures correlate with segmentation quality (Dice score). Mean uncertainty over the segmented region (U_labelled) derived from conventional UNet models does not show a strong quantitative relationship with the Dice score (Spearman correlation coefficients of -0.45 for the soft Dice and -0.64 for the wcc model, respectively). By comparison, image-level uncertainty measures derived from soft Dice as well as wcc MC UNet and deep UNet ensemble models correlate well with the Dice score. In conclusion, using uncertainty information offers ways to assess segmentation quality fully automatically without access to ground truth. Models trained using weighted categorical cross-entropy offer more meaningful uncertainty information at the voxel level.
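The abstract does not include code; the following Python sketch illustrates one plausible way to compute mean voxel-wise entropy from repeated softmax outputs (MC-dropout samples or ensemble members), derive the image-level measure U_labelled, and relate it to the Dice score. Function names, array shapes, and the use of scipy.stats.spearmanr are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def voxelwise_entropy(prob_stack):
    """Voxel-wise predictive entropy from T stochastic forward passes
    (MC-dropout samples or ensemble members).
    prob_stack: softmax outputs of shape (T, C, X, Y, Z)."""
    mean_prob = prob_stack.mean(axis=0)                        # (C, X, Y, Z)
    return -np.sum(mean_prob * np.log(mean_prob + 1e-12), axis=0)

def mean_uncertainty_over_label(entropy_map, predicted_mask):
    """Image-level measure: mean voxel-wise entropy over the predicted nodule."""
    voxels = entropy_map[predicted_mask > 0]
    return float(voxels.mean()) if voxels.size else 0.0

def correlation_with_dice(uncertainties, dice_scores):
    """Spearman correlation between per-case uncertainty and Dice score."""
    rho, _ = spearmanr(uncertainties, dice_scores)
    return rho
```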
Several digital reference objects (DROs) for DCE-MRI have been created to test the accuracy of pharmacokinetic modeling software under a variety of different noise conditions. However, there are few DROs that mimic the anatomical distribution of voxels found in real data, and similarly few DROs that are based on both malignant and normal tissue. We propose a series of DROs for modeling Ktrans and Ve derived from a publicly available RIDER DCE-MRI dataset of 19 patients with gliomas. For each patient's DCE-MRI data, we generate Ktrans and Ve parameter maps using an algorithm validated on the QIBA Tofts model phantoms. These parameter maps are denoised and then used to generate noiseless time-intensity curves for each of the original voxels. This is accomplished by reversing the Tofts model to generate concentration-time curves from Ktrans and Ve inputs, and subsequently converting those curves into intensity values by normalizing to each patient's average pre-bolus image intensity. The result is a noiseless DRO in the shape of the original patient data with known ground-truth Ktrans and Ve values. We make this dataset publicly available for download for all 19 patients of the original RIDER dataset.
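As a rough illustration of the forward step described above, the Python sketch below generates a tissue concentration curve from Ktrans and Ve under the standard Tofts model and maps it to signal intensity relative to the pre-bolus baseline. The discrete convolution and the linear concentration-to-intensity mapping are simplifying assumptions; the paper's exact conversion is not reproduced here.

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts model: tissue concentration C_t(t) from a plasma input
    function cp(t), Ktrans, and Ve, via a discrete convolution.
    t  : time points in minutes, shape (N,)
    cp : plasma concentration at those time points, shape (N,)"""
    kep = ktrans / ve
    dt = np.gradient(t)
    ct = np.zeros_like(t, dtype=float)
    for i in range(len(t)):
        # convolve cp with the exponential residue function up to time t[i]
        kernel = np.exp(-kep * (t[i] - t[:i + 1]))
        ct[i] = ktrans * np.sum(cp[:i + 1] * kernel * dt[:i + 1])
    return ct

def concentration_to_intensity(ct, s_pre, scale=1.0):
    """Map concentration to signal intensity relative to the average pre-bolus
    intensity s_pre (simplified linear assumption, not the paper's exact model)."""
    return s_pre * (1.0 + scale * ct)
```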
Retinopathy of prematurity (ROP) is a disease that affects premature infants, in which abnormal growth of the retinal blood vessels can lead to blindness if not treated in time. Infants considered at risk of severe ROP are monitored for symptoms of plus disease, characterized by arterial tortuosity and venous dilation at the posterior pole, with a standard photographic definition. Disagreement among ROP experts in diagnosing plus disease has driven the development of computer-based methods that classify images based on hand-crafted features extracted from the vasculature. However, most of these approaches are semi-automated, making them time-consuming and subject to variability. In contrast, deep learning is a fully automated approach that has shown great promise in a wide variety of domains, including medical genetics, informatics, and imaging. Convolutional neural networks (CNNs) are deep networks that learn rich representations of disease features that are highly robust to variations in acquisition and image quality. In this study, we used a U-Net architecture to perform vessel segmentation, followed by a GoogLeNet to perform disease classification. The classifier was trained on 3,000 retinal images and validated on an independent test set of patients with different observed progressions and treatments. We show that our fully automated algorithm can be used to monitor the progression of plus disease over multiple patient visits with results that are consistent with the experts' consensus diagnosis. Future work will aim to further validate the method on larger cohorts of patients to assess its applicability within the clinic as a treatment monitoring tool.
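A minimal PyTorch sketch of the two-stage pipeline (vessel segmentation followed by classification) might look as follows. The UNet module, the three-class output, and the way the vessel map is fed to GoogLeNet are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torchvision

class PlusDiseasePipeline(torch.nn.Module):
    """Hypothetical two-stage pipeline: a U-Net produces a vessel map,
    which is then classified by a GoogLeNet-style network."""

    def __init__(self, unet, num_classes=3):
        super().__init__()
        self.unet = unet  # placeholder: any U-Net returning a 1-channel logit map
        self.classifier = torchvision.models.googlenet(
            weights=None, num_classes=num_classes,
            aux_logits=False, init_weights=True)

    def forward(self, fundus_image):
        vessel_map = torch.sigmoid(self.unet(fundus_image))   # (B, 1, H, W)
        # GoogLeNet expects 3 input channels; replicate the vessel map
        x = vessel_map.repeat(1, 3, 1, 1)
        return self.classifier(x)
```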
In the last five years, advances in processing power and computational efficiency of graphical processing units have catalyzed dozens of deep neural network segmentation algorithms for a variety of target tissues and malignancies. However, few of these algorithms incorporate any prior biological context about the tissues they segment, instead relying on the neural network's optimizer to develop such associations de novo. We present a novel method for applying deep neural networks to the problem of glioma tissue segmentation that takes into account the structured nature of gliomas: edematous tissue surrounding mutually exclusive regions of enhancing and non-enhancing tumor. We trained separate deep neural networks with a 3D U-Net architecture in a tree structure to create segmentations for edema, non-enhancing tumor, and enhancing tumor regions. Specifically, training was configured such that the whole tumor region including edema was predicted first, and its output segmentation was fed as input into separate models to predict enhancing and non-enhancing tumor. We trained our model on publicly available pre- and post-contrast T1 images, T2 images, and FLAIR images, and validated the trained model on patient data from an ongoing clinical trial.
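The tree-structured inference could be sketched as below, with whole_tumor_net, enhancing_net, and nonenhancing_net standing in for the trained 3D U-Nets. The predict interface, thresholding, and the rule for resolving overlapping subregions are assumptions made for illustration only.

```python
import numpy as np

def cascade_segmentation(mri_volume, whole_tumor_net, enhancing_net,
                         nonenhancing_net, threshold=0.5):
    """Predict the whole tumor (including edema) first, then pass that mask
    as an extra input channel to the subregion models.
    mri_volume: multi-channel volume of shape (C, X, Y, Z);
    each *_net exposes predict(volume) -> probability map of shape (X, Y, Z)."""
    whole_prob = whole_tumor_net.predict(mri_volume)
    whole_mask = (whole_prob > threshold).astype(np.float32)

    # Stack the whole-tumor mask with the original channels for the child models
    conditioned = np.concatenate([mri_volume, whole_mask[np.newaxis]], axis=0)

    enhancing = (enhancing_net.predict(conditioned) > threshold) & (whole_mask > 0)
    nonenhancing = (nonenhancing_net.predict(conditioned) > threshold) & (whole_mask > 0)

    # Enforce mutual exclusivity: enhancing tumor takes precedence where both fire
    nonenhancing = nonenhancing & ~enhancing
    edema = (whole_mask > 0) & ~enhancing & ~nonenhancing
    return edema, nonenhancing, enhancing
```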
The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, namely the visual content of lung CT scans of a large number of patients. For the data set described here, only little annotation is available: all patients were part of an ongoing screening program, and besides age and gender no information on the patients or their findings was available for this work. This is a scenario that occurs regularly, as image data sets are produced and become available in increasingly large quantities, while manual annotations are often missing and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms were able to find the left and right lung and had a Dice coefficient above 0.95 were analyzed. This ensures that only segmentations of good quality were used to extract features of the lung. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of lung texture, size, and density with age. The experiments allow us to study the evolution of the lung, and the results show that even with limited metadata we can extract interesting information from large-scale visual data. These age-related changes (for example of the lung volume or the density histogram of the tissue) can also be taken into account in the interpretation of new cases. The database used includes patients who had suspicious findings on a chest X-ray, so it is not a group of healthy people, and only tendencies, not a model of a healthy lung at a specific age, can be derived.
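A minimal sketch of the segmentation quality-control step described above (keeping only scans where both algorithms found both lungs and agree with a Dice coefficient above 0.95) could look like this; the function names and the emptiness check are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary lung masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 0.0

def keep_case(mask_algo1, mask_algo2, min_dice=0.95):
    """Retain a scan only if both segmentation algorithms produced a non-empty
    lung mask and agree with a Dice coefficient above the threshold."""
    both_nonempty = mask_algo1.any() and mask_algo2.any()
    return both_nonempty and dice_coefficient(mask_algo1, mask_algo2) > min_dice
```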