Computed tomography (CT) is a widely available, low-cost neuroimaging modality primarily used for visual assessment of structural brain integrity in neurodegenerative diseases such as dementia disorders. In this study, we developed a deep learning model to expand the applications of CT to morphological brain segmentation and volumetric extraction. We trained densely connected 3D convolutional neural network variants of the U-Net architecture to segment intracranial volume (ICV), grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF). Dice similarity scores and volumetric Pearson correlations were used as evaluation metrics. Our pilot study created a model that enables automated segmentation in CT with results comparable to magnetic resonance imaging.
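The two evaluation metrics named above can be computed directly from binary masks and per-subject volumes; here is a minimal NumPy sketch (the masks and volume values are illustrative toy data, not results from the study):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D masks standing in for a predicted and a reference ICV segmentation.
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True          # 8 reference voxels
pred = truth.copy()
pred[1, 1, 1] = False                # prediction misses one voxel

print(dice_score(pred, truth))       # 2*7 / (7+8) ≈ 0.933

# Volumetric agreement across subjects: Pearson r between predicted
# and reference volumes (hypothetical values, in mL).
pred_vol = np.array([1450.0, 1380.0, 1520.0, 1410.0])
ref_vol = np.array([1460.0, 1375.0, 1500.0, 1425.0])
r = np.corrcoef(pred_vol, ref_vol)[0, 1]
```

Dice rewards voxel-wise overlap, while the Pearson correlation checks whether extracted volumes track the reference across subjects; the two can disagree, which is why both are reported.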
A robust model for the automatic segmentation of human brain images into anatomically defined regions across the
human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related
changes. We have developed a new method, based on established algorithms for automatic segmentation of
young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into
83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) from 20 to 90 years, with equal
numbers of men, women; data from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases
was registered to each target MR image. By using additional information from segmentation into tissue classes (GM,
WM and CSF) to initialise the warping based on label consistency similarity before feeding this into the previous
normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy
and ventricular enlargement with age. The final segmentation was obtained by combination of the 30 propagated atlases
using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example
linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76 and r_female = 0.58, and, for hippocampal volume, r_male = -0.6 and r_female = -0.4 (all p < 0.01).
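The decision-fusion step described above combines the 30 propagated label sets into one final segmentation; a minimal sketch, assuming simple per-voxel majority voting (the label maps below are toy data, not the study's atlases):

```python
import numpy as np

def vote_fusion(propagated_labels):
    """Per-voxel majority vote over label maps propagated from N atlases.

    propagated_labels: integer array of shape (n_atlases, *volume_shape).
    Returns the modal (most frequently assigned) label at each voxel.
    """
    stacked = np.asarray(propagated_labels)
    n_labels = int(stacked.max()) + 1
    # For each candidate label, count how many atlases assigned it per voxel.
    counts = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)

# Three toy "propagated" 1D label maps (labels: 0 = background, 1, 2).
atlases = np.array([[0, 1, 2, 2],
                    [0, 1, 1, 2],
                    [1, 1, 2, 2]])
print(vote_fusion(atlases))   # [0 1 2 2]
```

Voting suppresses the idiosyncratic registration errors of individual atlases, which is why fusing many propagated atlases is more robust than relying on any single one.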
Tissue classifications of MRI brain images can be obtained either by segmenting the images directly or by propagating the segmentations of an atlas to the target image. This paper compares the classification results of the direct segmentation method using FAST with those of the segmentation propagation method using nreg and the MNI Brainweb phantom images. The direct segmentation is carried out by extracting the brain and classifying the tissues with FAST. The segmentation propagation is carried out by registering the Brainweb atlas image to the target images by affine registration, followed by non-rigid registration at different control-point spacings, then transforming the PVE (partial volume effect) fuzzy membership images of cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) of the atlas image into the target space. We compared the running time, reproducibility, and global and local differences between the two methods. Direct segmentation is much faster. There is no significant difference in reproducibility between the two techniques. There are significant global volume differences for some tissue types between them. Visual inspection was used to localize these differences. This study had no gold-standard segmentations with which to compare the automatic segmentation solutions, but the global and local volume differences suggest that the most appropriate algorithm is likely to be application dependent.
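Global tissue volumes of the kind compared above can be read off a PVE fuzzy-membership image by integrating the per-voxel memberships rather than hard-thresholding them; a small illustrative sketch (the membership values and voxel size are invented):

```python
import numpy as np

def tissue_volume_ml(membership, voxel_dims_mm):
    """Tissue volume from a PVE fuzzy-membership image.

    Summing the per-voxel memberships (each in 0..1) and scaling by the
    voxel volume credits partial-volume voxels proportionally, instead
    of counting them as all-or-nothing.
    """
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return membership.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL

# Hypothetical GM membership map: two pure-GM voxels and two half-GM
# voxels, with 2 x 2 x 2 mm voxels (8 mm^3 each).
gm = np.array([1.0, 1.0, 0.5, 0.5])
print(tissue_volume_ml(gm, (2.0, 2.0, 2.0)))   # 3.0 * 8 / 1000 = 0.024 mL
```

The same computation applies whether the membership image comes from FAST directly or from a propagated, resampled atlas membership image, which is what makes the two pipelines' global volumes directly comparable.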
Background: In order to perform statistical analysis of cohorts based on images, reliable methods for automated anatomical segmentation are required. Label propagation (LP) from manually segmented atlases onto newly acquired images is a particularly promising approach. Methods: We investigated LP on a set of 6 three-dimensional T1-weighted magnetic resonance data sets of the brains of normal individuals. For each image, a manually prepared segmentation of 67 structures was available. Each subject image was used in turn as an atlas and registered non-rigidly to each other subject's image. The resulting transformations were applied to the label sets, yielding five different generated segmentations for each subject, which we compared with the native manual segmentations using an overlap measure (similarity index, SI). We then visually reviewed the LP results for five structures with varied anatomical and label characteristics to determine how the registration procedure had affected the delineation of their boundaries. Results: The majority of structures propagated well as measured by SI (SI > 70 in 80% of measurements). Boundaries that were marked in the atlas image by definite intensity differences were congruent, with good agreement between the manual and the generated segmentations. Some boundaries in the manual segmentation were defined as planes marked by landmarks; such boundaries showed greater mismatch. In some cases, the proximity of structures with similar intensity distorted the LP results: e.g., parts of the parahippocampal gyrus were labeled as hippocampus in two cases. Conclusion: The size and shape of anatomical structures can be determined reliably using label propagation, especially where boundaries are defined by distinct differences in grey-scale image intensity. These results will inform further work to evaluate potential clinical uses of information extracted from images in this way.
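Applying a recovered transformation to a label set, as in the LP pipeline above, requires nearest-neighbour sampling so that the discrete structure labels are not blended into meaningless intermediate values; a toy NumPy sketch, with an integer-shift displacement field standing in for a genuine non-rigid warp:

```python
import numpy as np

def propagate_labels(atlas_labels, displacement):
    """Pull-back warp of a discrete 2D label map under a per-voxel
    displacement field, using nearest-neighbour sampling so label IDs
    stay discrete (linear interpolation would invent blended labels)."""
    rows, cols = atlas_labels.shape
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Sample each target voxel from the nearest source voxel, clamped
    # to the image bounds.
    src_i = np.clip(np.rint(ii - displacement[0]).astype(int), 0, rows - 1)
    src_j = np.clip(np.rint(jj - displacement[1]).astype(int), 0, cols - 1)
    return atlas_labels[src_i, src_j]

# Toy atlas: a block of label 1 on background 0; the "transformation"
# shifts everything down by one row.
atlas = np.zeros((6, 6), dtype=np.int32)
atlas[1:4, 1:4] = 1
disp = (np.ones((6, 6)), np.zeros((6, 6)))
target = propagate_labels(atlas, disp)
```

In a real pipeline the displacement field comes from the non-rigid registration of the atlas image to the target image; the nearest-neighbour sampling shown here is what keeps the 67 structure labels valid after warping.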