KEYWORDS: Digital breast tomosynthesis, Computer aided diagnosis and therapy, Reconstruction algorithms, 3D image processing, Computer-aided diagnosis, Detection and tracking algorithms, 3D image reconstruction, 3D image enhancement, Breast, Digital mammography, Deep learning, Convolutional neural networks, Mammography, 3D displays, Image enhancement
In a typical 2D mammography workflow, a computer-aided detection (CAD) algorithm is used as a second reader, producing marks for a radiologist to review. In the case of 3D digital breast tomosynthesis (DBT), displaying CAD detections at multiple reconstruction heights would increase image browsing and interpretation time. We propose an alternative approach in which an algorithm automatically identifies suspicious regions of interest in the 3D reconstructed DBT slices and merges the findings into the corresponding 2D synthetic projection image, which is then reviewed. The resulting enhanced synthetic 2D image combines the benefits of a familiar 2D breast view with the superior appearance of suspicious locations taken from the 3D slices. Moreover, clicking a suspicious location in the 2D image brings up the corresponding 3D region of the DBT volume, allowing navigation between the 2D and 3D images. We explored the use of these enhanced synthetic images in a concurrent-read paradigm by conducting a study with 5 readers and 30 breast exams. We observed that the introduction of the enhanced synthetic view reduced the radiologists' average interpretation time by 5.4%, increased sensitivity by 6.7%, and increased specificity by 15.6%.
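As an illustration of the enhancement step, the sketch below alpha-blends each CAD-detected region from its in-focus DBT slice onto the synthetic 2D image. The data layout, the `detections` structure, and the `blend` weight are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
# A minimal sketch (not the authors' implementation) of merging CAD findings
# from reconstructed DBT slices into a 2D synthetic image. All names
# (synthetic2d, volume, detections) are hypothetical placeholders.
import numpy as np

def enhance_synthetic(synthetic2d, volume, detections, blend=0.7):
    """Paste each suspicious ROI from its best DBT slice onto the 2D view.

    synthetic2d : (H, W) float array, conventional synthetic projection
    volume      : (Z, H, W) float array, reconstructed DBT slices
    detections  : list of dicts with keys 'slice', 'y0', 'y1', 'x0', 'x1'
                  produced by a 3D CAD algorithm (assumed given)
    """
    enhanced = synthetic2d.copy()
    for d in detections:
        z, y0, y1, x0, x1 = d['slice'], d['y0'], d['y1'], d['x0'], d['x1']
        patch = volume[z, y0:y1, x0:x1]  # in-focus 3D appearance of the ROI
        # Alpha-blend the sharp 3D patch over the flatter 2D projection
        enhanced[y0:y1, x0:x1] = blend * patch + (1 - blend) * enhanced[y0:y1, x0:x1]
    return enhanced
```

Keeping the `slice` index with each detection is also what enables the click-through navigation described above: a click inside a blended 2D region can be mapped back to the DBT slice it came from.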
KEYWORDS: Digital breast tomosynthesis, Reconstruction algorithms, Computer aided diagnosis and therapy, Tissues, Computer-aided diagnosis, Neural networks, Mammography, Digital mammography, Detection and tracking algorithms, Evolutionary algorithms, Deep learning, Convolutional neural networks, Breast, Image segmentation, Medical imaging
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may affect radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Following this approach, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammograms and reconstructed DBT volumes, and compared its performance to a conventional CAD algorithm based on the computation and classification of hand-engineered features. Detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft-tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.
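For readers unfamiliar with the setup, the following is a minimal PyTorch sketch of a patch-level convolutional classifier of the general kind described above. The architecture, patch size, and training details are illustrative assumptions, not the network evaluated in the paper.

```python
# A minimal sketch of a binary patch classifier for lesion vs. normal
# tissue; architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Classifies 64x64 grayscale patches sampled from mammograms/DBT slices."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, 1),  # logit: suspicious vs. normal tissue
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One illustrative training step on a placeholder batch of sampled patches
model = PatchCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()
patches = torch.randn(8, 1, 64, 64)            # placeholder patch batch
labels = torch.randint(0, 2, (8, 1)).float()   # 1 = lesion, 0 = normal
opt.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
opt.step()
```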
One widely accepted anatomical division of the prostate is into the central gland (CG) and the peripheral zone (PZ). In some clinical applications, separating the CG and PZ from the whole prostate is useful. For instance, in prostate cancer detection, radiologists want to know in which zone a cancer occurs. Another application is multiparametric MR tissue characterization. In prostate T2 MR images, automated differentiation of the CG and PZ is difficult due to the high intensity variation between them. Previously, we developed an automated prostate boundary segmentation system that was tested on large datasets and showed good performance. Building on the pre-segmented prostate boundary, in this paper we propose an automated CG segmentation algorithm based on Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS). The designed LOGISMOS model encodes both shape and topology information during deformation. We generated graph costs using trained classifiers and applied a coarse-to-fine search. The LOGISMOS framework guarantees a solution that is optimal with respect to the cost function and shape constraints. A five-fold cross-validation approach was applied to a training dataset of 261 images to optimize system performance and to compare against a voxel-classification-based reference approach. With the best parameter settings, the system was tested on a separate dataset of another 261 images. The mean Dice similarity coefficient (DSC) of 0.81 on the test set indicates that our approach is promising for automated CG segmentation. The system runs in about 15 seconds per case.
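LOGISMOS itself solves a multi-surface minimum s-t cut problem. As a much-simplified illustration of graph-search segmentation under a shape (smoothness) constraint, the sketch below finds a single optimal radial contour in 2D by dynamic programming, with per-node costs assumed to come from classifier outputs; this is a toy analogue, not LOGISMOS.

```python
# Simplified 2D analogue of graph-search segmentation: find the radial
# contour r(t) minimizing total cost under a hard smoothness constraint
# |r(t+1) - r(t)| <= delta. The cost array would come from classifier
# probabilities in the approach described above (assumption).
import numpy as np

def optimal_radial_contour(cost, delta=2):
    """cost: (T, R) array, cost[t, r] = cost of placing the boundary at
    radius index r along angular column t; returns optimal radius per column."""
    T, R = cost.shape
    dp = cost.copy()                       # dp[t, r]: best path cost ending at (t, r)
    back = np.zeros((T, R), dtype=int)     # back-pointers for path recovery
    for t in range(1, T):
        for r in range(R):
            lo, hi = max(0, r - delta), min(R, r + delta + 1)
            best = lo + int(np.argmin(dp[t - 1, lo:hi]))  # smoothness constraint
            back[t, r] = best
            dp[t, r] += dp[t - 1, best]
    # Trace back the globally optimal path
    radii = np.zeros(T, dtype=int)
    radii[-1] = int(np.argmin(dp[-1]))
    for t in range(T - 2, -1, -1):
        radii[t] = back[t + 1, radii[t + 1]]
    return radii
```

Unlike LOGISMOS, this toy version handles a single surface in 2D and does not enforce contour closure or inter-surface topology.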
Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need to precisely target the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multiparametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. A robust, automated, whole-prostate segmentation system is therefore desirable. In this paper, we present an automated prostate segmentation system for 3D MR images. The system segments the prostate in two steps: the prostate location and size are first detected, and the boundary is then refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation; it is fast, robust to intensity variation, and accurate enough to initialize a prostate mean shape model. The refinement model is based on a graph-search framework that encodes both shape and topology information during deformation. We generated the graph costs using trained classifiers, combined with a coarse-to-fine search and region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. The mean DSC ranged from 0.89 to 0.91 depending on the evaluation subset, demonstrating state-of-the-art performance. The system runs in about 20 to 40 seconds per case depending on image size and resolution.
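The Dice similarity coefficient (DSC) quoted above, and in the previous abstract, is the standard volume-overlap measure; for reference, a minimal implementation for binary masks:

```python
# Standard Dice similarity coefficient for binary (e.g., 3D) masks:
# DSC = 2 * |A intersect B| / (|A| + |B|)
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks -> perfect match
```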
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provide a prostate contour for MR-ultrasound (or CT) image fusion in computer-assisted image-guided biopsy or therapy planning, facilitate reporting, and enable direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variation of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail.
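A minimal sketch of the core idea follows, assuming 3D NumPy arrays and an illustrative regularization constant `eps`; it shows why the similarity is insensitive to intensity scaling and bias, but it is not the paper's exact formulation.

```python
# Sketch of normalized gradient fields (NGF) cross-correlation for
# template-based localization; eps and the scoring are illustrative
# assumptions. Assumes 3D arrays, with the template smaller than the image.
import numpy as np
from scipy.signal import fftconvolve

def ngf(image, eps=1e-2):
    """Normalized gradient field: gradients scaled to (near-)unit length,
    making the similarity insensitive to intensity scaling and bias."""
    grads = np.gradient(image.astype(float))
    mag = np.sqrt(sum(g ** 2 for g in grads) + eps ** 2)
    return [g / mag for g in grads]

def ngf_cross_correlation(image, template):
    """Score map: sum over axes of the correlation of NGF components.
    The maximum of the map gives the candidate prostate location."""
    ni, nt = ngf(image), ngf(template)
    score = np.zeros(image.shape)
    for gi, gt in zip(ni, nt):
        # Correlation = convolution with the flipped template component
        score += fftconvolve(gi, gt[::-1, ::-1, ::-1], mode='same')
    return score
```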
The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm placed the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
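For reference, a mean ± half-width 95% confidence interval of the kind quoted above is conventionally computed with the normal approximation; the sketch below assumes that convention and is not a statement about the paper's exact statistics.

```python
# 95% CI for the mean of per-case centroid errors (normal approximation)
import numpy as np

def mean_ci95(errors):
    errors = np.asarray(errors, dtype=float)
    mean = errors.mean()
    half = 1.96 * errors.std(ddof=1) / np.sqrt(errors.size)  # z * standard error
    return mean, half  # report as mean ± half
```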