Deviations from MR acquisition guidelines can lead to images with serious quality concerns, such as incompletely imaged anatomies, which may require re-examination and can result in missed pathologies. In this paper, we propose a deep learning method to automatically estimate the coverage of the target anatomy and to predict the extent of an anatomy outside the present field-of-view (FOV). For this purpose, we employed a 3D fully-convolutional neural network operating at multiple resolution levels. The proposed solution could be used to suggest a corrected FOV setting in case of organ-coverage issues while the patient is still on the table, and could also be incorporated as a retrospective tool for quality monitoring and staff training. Our method was evaluated for four abdominal organs (liver, spleen, and left and right kidneys) in 40 magnetic resonance (MR) images from the publicly available Combined Healthy Abdominal Organ Segmentation (CHAOS) dataset. We obtained median extent-detection errors of 5.5–7.3 mm, or 3–4 voxels, at the superior or inferior organ boundary in a dataset with average anatomical clippings of 24.8–43.6 mm for the four organs partially missing from the given FOV.
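To illustrate the reported error metric, the superior/inferior extent error could be computed from a predicted and a ground-truth organ mask roughly as sketched below (a minimal sketch, not the paper's implementation; the function name, the assumption that the z-axis is the first array axis, and the spacing argument are illustrative):

```python
# Minimal sketch: superior/inferior extent error in millimetres from two masks.
import numpy as np

def extent_error_mm(pred_mask: np.ndarray, gt_mask: np.ndarray,
                    spacing_z: float) -> tuple:
    """Return (superior, inferior) boundary errors along z in mm.

    Assumes axis 0 is the superior-inferior direction.
    """
    pred_z = np.where(pred_mask.any(axis=(1, 2)))[0]  # slices containing the organ
    gt_z = np.where(gt_mask.any(axis=(1, 2)))[0]
    sup_err = abs(pred_z.min() - gt_z.min()) * spacing_z
    inf_err = abs(pred_z.max() - gt_z.max()) * spacing_z
    return sup_err, inf_err
```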
In order to reduce the risk of radiation-induced cataract in patients undergoing head CT examinations, the guidelines published by the American Association of Physicists in Medicine (AAPM) link the optimal scan angle to particular anatomical landmarks in the skull. In this paper, we investigated the use of a foveal fully-convolutional neural network (F-Net) for the segmentation-based detection of three head CT landmarks, with the final objective of automatic scan quality control. Three individual networks were trained using ground truth (GT) from three different readers to investigate the detection accuracy relative to each reader. The experiments were performed on 119 head CT scans using a three-fold cross-validation setup. For the evaluation, two performance measures were employed: the Euclidean distance between the detected landmarks and the GT, and the distance of the detected landmarks to the plane generated from the GT landmark positions. Across the three readers, the median Euclidean and point-to-plane distances obtained with F-Net were in the range of 1.3–2.8 mm and 0.3–0.8 mm, respectively. The presented method outperformed a previously published approach based on image registration and achieved results comparable to the inter-observer variability between the three readers. Further improvements were achieved by training a similar network that combined the GT information from all readers.
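The point-to-plane measure described above can be made concrete: the plane is spanned by the three GT landmark positions, and the distance of a detected landmark to that plane is reported. A minimal sketch (variable names are illustrative):

```python
# Distance of a detected landmark to the plane through the three GT landmarks.
import numpy as np

def point_to_plane_distance(detected: np.ndarray,
                            gt_landmarks: np.ndarray) -> float:
    """detected: (3,) point; gt_landmarks: (3, 3) array, one GT point per row."""
    p0, p1, p2 = gt_landmarks
    normal = np.cross(p1 - p0, p2 - p0)   # plane normal from two in-plane vectors
    normal /= np.linalg.norm(normal)
    return abs(np.dot(detected - p0, normal))
```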
Most fully automatic segmentation approaches target a single anatomical structure in a specific combination of image modalities and are often difficult to extend to other modalities, protocols, or segmentation tasks. More recently, deep learning-based approaches promise to be readily adaptable to new applications as long as a suitable training set is available, although most deep learning architectures are still tuned towards a specific application and data domain. In this paper, we propose a novel fully convolutional neural network architecture for image segmentation and show that the same architecture with the same learning parameters can be used to train models for 20 different organs on two different imaging protocols, while still achieving segmentation accuracy on par with the state of the art. In addition, the architecture was designed to minimize the amount of GPU memory required for processing large images, which facilitates the application to full-resolution whole-body CT scans. We evaluated our method on the publicly available dataset of the VISCERAL multi-organ segmentation challenge and compared its performance with those of the challenge participants and two recently proposed deep learning-based approaches. In this cross-comparison, we achieved the highest Dice similarity coefficients for 17 out of 20 organs on the contrast-enhanced CT scans and for 10 out of 20 organs on the non-enhanced CT scans.
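For reference, the Dice similarity coefficient used in the cross-comparison is the standard overlap measure, shown here for binary masks (an illustrative sketch, not code from the paper):

```python
# Dice similarity coefficient for two binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)
```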
Most radiologists prefer an upright orientation of the anatomy in a digital X-ray image for consistency and quality reasons. In almost half of all clinical cases, the anatomy is not upright oriented, which is why the images must be digitally rotated by radiographers. Earlier work has shown that automated orientation detection achieves small error rates but requires specially designed algorithms for individual anatomies. In this work, we propose a novel approach that avoids time-consuming feature engineering by means of Residual Neural Networks (ResNets), which extract generic low-level and high-level features and provide promising solutions for medical imaging. Our method uses the learned representations to estimate the orientation via linear regression and can be further improved by fine-tuning selected ResNet layers. The method was evaluated on 926 hand X-ray images and achieves a state-of-the-art mean absolute error of 2.79°.
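The described setup, a pretrained ResNet as generic feature extractor with a linear regression head for the angle, and fine-tuning of selected layers, could look roughly as follows (a hedged sketch; the specific backbone, layer choice, and training details are assumptions, not taken from the paper):

```python
# Sketch: ResNet backbone with a single-output regression head for the angle,
# fine-tuning only the last residual block and the head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # regress one angle value

# Freeze everything except the last residual block and the regression head.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))
```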
In recent years, Model-Based Segmentation (MBS) techniques have been used in a broad range of medical applications. In clinical practice, such techniques are increasingly employed for diagnostic purposes and treatment decisions. However, it is not guaranteed that a segmentation algorithm will converge towards the desired solution. In specific situations, such as the presence of rare anatomical variants (which cannot be represented by the underlying model) or images of extremely low quality, a meaningful segmentation might not be feasible. At the same time, an automated estimate of segmentation reliability is commonly not available. In this paper we present an approach for the identification of segmentation failures using concepts from the field of outlier detection. The approach is validated on a comprehensive set of Computed Tomography Angiography (CTA) images by means of Receiver Operating Characteristic (ROC) analysis. Encouraging results, with an Area Under the ROC Curve (AUC) of up to 0.965, were achieved.
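The ROC evaluation amounts to scoring each segmentation result with an outlier score and comparing it against a binary failure label, for example (toy data; variable names are illustrative, not from the paper):

```python
# ROC AUC for outlier-score-based failure detection (toy example).
from sklearn.metrics import roc_auc_score

failed = [0, 0, 1, 0, 1]                     # 1 = segmentation judged a failure
outlier_scores = [0.1, 0.3, 0.9, 0.2, 0.7]   # higher = more outlying
print(roc_auc_score(failed, outlier_scores))  # 1.0 for this toy example
```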
Dynamic contrast-enhanced breast MRI (DCE BMRI) has emerged as a powerful tool in the diagnostic work-up of breast
cancer. While DCE BMRI is very sensitive, specificity remains an issue. Consequently, there is a need for features
that support the classification of enhancing lesions as benign or malignant. Traditional features include the
morphology and the texture of a lesion, as well as the kinetic parameters of the time-intensity curves, i.e., the temporal
change of image intensity at a given location. The kinetic parameters include initial contrast uptake of a lesion and the
type of the kinetic curve. The curve type is usually assigned to one of three classes: persistent enhancement (Type I),
plateau (Type II), and washout (Type III). While these curve types show a correlation with the tumor type (benign or
malignant), only a small sub-volume of the lesion is taken into consideration and the curve type will depend on the
location of the ROI that was used to generate the kinetic curve. Furthermore, it has been shown that the curve type
significantly depends on which MR scanner was used as well as on the scan parameters.
Recently, it was shown that the heterogeneity of a given lesion with respect to spatial variation of the kinetic curve type
is a clinically significant indicator of tumor malignancy. In this work we compare four quantitative measures of the
degree of heterogeneity of the signal enhancement ratio in a tumor and evaluate their ability to predict whether the
tumor is benign or malignant. Each individual feature achieves an area under the ROC curve between 0.63 and 0.78.
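The three curve types named above are conventionally assigned by thresholding the late-phase signal change relative to the initial uptake. A minimal sketch (the 10% threshold is a common convention from the clinical literature, not a value from this work, and the sampling assumptions are illustrative):

```python
# Assign a kinetic curve to Type I/II/III from its late-phase behaviour.
import numpy as np

def curve_type(signal: np.ndarray) -> str:
    """signal: time-intensity values; signal[0] is the pre-contrast baseline,
    signal[1] the early post-contrast value, signal[-1] the last time point."""
    initial = signal[1]
    late = signal[-1]
    rel_change = (late - initial) / initial
    if rel_change > 0.10:
        return "Type I (persistent enhancement)"
    if rel_change < -0.10:
        return "Type III (washout)"
    return "Type II (plateau)"
```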
We present a simple rendering scheme for thoracic CT datasets which yields a color coding based on local differential
geometry features rather than Hounsfield densities. The local curvatures are computed on several resolution scales and
mapped onto different colors, thereby enhancing nodular and tubular structures. The rendering can be used as a
navigation device to quickly access points of possible chest anomalies, in particular lung nodules and lymph nodes. The
underlying principle is to use the nodule-enhancing overview as a possible alternative to classical CAD approaches by
avoiding explicit graphical markers. For the performance evaluation we used the LIDC-IDRI lung nodule database.
Our results indicate that the nodule-enhancing overview correlates well with the projection images produced from the
IDRI expert annotations, and that we can use this measure to optimize the combination of differential geometry filters.
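The underlying idea, that local curvature at a given scale separates nodular from tubular structures, can be sketched via the eigenvalues of the scale-normalized Hessian (a rough illustration of the principle, not the authors' exact filter; scale handling and the color mapping are simplified):

```python
# Hessian eigenvalues per voxel at one scale; blobs vs. tubes differ in their
# eigenvalue pattern (blobs: all three strongly negative for bright structures;
# tubes: two strongly negative, one near zero).
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(vol: np.ndarray, sigma: float) -> np.ndarray:
    H = np.zeros(vol.shape + (3, 3))
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d = gaussian_filter(vol, sigma, order=order) * sigma ** 2  # scale-normalized
        H[..., i, j] = H[..., j, i] = d
    return np.linalg.eigvalsh(H)  # sorted eigenvalues for every voxel
```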
Visualization of multi-dimensional data sets has become a critical and significant area in modern medical image processing. To analyze such high-dimensional data, novel nonlinear embedding approaches are becoming increasingly important for showing dependencies among these data in a two- or three-dimensional space. This paper investigates the potential of novel nonlinear dimensionality reduction techniques and compares their results with those of proven nonlinear techniques when applied to the differentiation of malignant and benign lesions described by high-dimensional data sets arising from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Two important visualization modalities in medical imaging are presented: the mapping onto a lower-dimensional data manifold and image fusion.
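As a generic illustration of such a mapping, high-dimensional kinetic signals can be embedded into two dimensions with an off-the-shelf manifold learner; locally linear embedding stands in here for the techniques compared in the paper (toy data, not the study's algorithms or dataset):

```python
# Embed per-voxel time courses into 2-D for visualization (toy stand-in).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.rand(500, 6)  # 500 voxels, 6 time points each (toy data)
embedding = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_2d = embedding.fit_transform(X)  # 2-D coordinates for the manifold plot
```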
In recent years, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has become a powerful
complement to X-ray mammography in breast cancer diagnosis and monitoring. In DCE-MRI, the temporal evolution
of the signal intensity after the administration of a contrast agent can provide valuable information
about tissue characteristics at the pixel level. The integration of this information constitutes an important
step in the analysis of DCE-MRI data.
In this contribution we investigate the applicability of three different approaches from the field of independent
component analysis (ICA) for feature extraction and image fusion in the context of DCE-MRI data. Besides
FastICA, Tree-Dependent Component Analysis and Topographic ICA are applied to twelve clinical cases
from breast cancer research with a histopathologically confirmed diagnosis. The outcome of all algorithms is
quantitatively evaluated by means of Receiver Operating Characteristic (ROC) statistics. Additionally, the
estimated components are discussed by way of example and the corresponding data are visualized.
The study suggests that all of the employed algorithms show some potential for the purposes of lesion detection
and subclassification and are rather robust with respect to their parameterization. However, Tree-Dependent
Component Analysis tends to outperform the other algorithms, both in terms of the ROC analysis and in the
consistency of its results.
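The FastICA variant of this pipeline can be sketched as follows: each voxel's time-intensity curve is one observation, the estimated sources yield per-voxel amplitudes, and these are mapped back into image space as feature images for fusion (shapes and the component count are illustrative assumptions):

```python
# FastICA on DCE-MRI time courses; source amplitudes become feature images.
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.rand(10000, 6)        # toy data: 10000 voxels x 6 time points
ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(X)            # per-voxel amplitudes of the 4 sources
component_images = S.reshape(100, 100, 4)  # back to image space for fusion
```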
As a complement to model-based approaches for the analysis of functional magnetic resonance imaging (fMRI)
data, methods of exploratory analysis offer interesting options. While unsupervised clustering techniques can
be employed for the extraction of signal patterns and segmentation purposes, topographic mapping techniques
such as the Self-Organizing Map (SOM) and the Topographic Mapping of Proximity Data (TMP)
additionally provide a structured representation of the data.
In this contribution we investigate the applicability of two recently proposed variants of these algorithms which
make use of concepts from non-Euclidean geometry for the analysis of fMRI data. Compared to standard methods,
both approaches provide more freedom for the representation of complex relationships in low-dimensional
mappings while they offer a convenient interface for the visualization and exploration of high-dimensional data
sets. Based on data from fMRI experiments, the application of these techniques is discussed and the results are
quantitatively evaluated by means of ROC statistics.
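For orientation, the basic (Euclidean) SOM that the studied variants generalize can be written in a few lines; this is a minimal online sketch on toy fMRI-like time courses, not the non-Euclidean variants investigated in the paper:

```python
# Minimal online SOM: prototypes on an 8x8 grid trained on toy time courses.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 20))                 # toy fMRI time courses
grid = np.array([(i, j) for i in range(8) for j in range(8)])  # node positions
W = rng.random((64, 20))                   # one prototype vector per node

for t, x in enumerate(X):
    lr = 0.5 * np.exp(-t / 500)            # decaying learning rate
    radius = 3.0 * np.exp(-t / 500)        # decaying neighborhood radius
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # grid distance to the BMU
    h = np.exp(-d2 / (2 * radius ** 2))           # neighborhood function
    W += lr * h[:, None] * (x - W)                # pull prototypes toward x
```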
The exploration and categorization of large, unannotated image collections is a challenging task in the field of image retrieval as well as in the generation of appearance-based object representations. In this context, the Self-Organizing Map (SOM) has been shown to be an efficient and scalable tool for the analysis of image collections based on low-level features. In addition to commonly employed visualization methods, clustering techniques have recently been considered for the aggregation of SOM nodes into groups in order to facilitate category-specific data exploration. In this paper, spectral clustering based on graph-theoretic concepts is employed for SOM-based data categorization. The results are compared with those of the Neural Gas algorithm and hierarchical agglomerative clustering. Using SOMs trained on an eigenspace representation of the Columbia Object Image Library (COIL-20), the correspondence of the clustered data to a semantic reference grouping is calculated. Based on the Adjusted Rand Index, it is shown that, independent of the number of selected clusters, spectral clustering achieves a significantly higher correspondence to the reference grouping than any of the other methods.
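The evaluation described here, spectral clustering of SOM node prototypes scored against a reference grouping with the Adjusted Rand Index, can be sketched with standard tooling (toy prototypes and labels; cluster count and affinity are illustrative choices):

```python
# Spectral clustering of SOM prototypes, scored with the Adjusted Rand Index.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score

prototypes = np.random.rand(100, 16)        # toy SOM node prototype vectors
reference = np.random.randint(0, 5, 100)    # toy semantic reference grouping

labels = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                            random_state=0).fit_predict(prototypes)
print(adjusted_rand_score(reference, labels))
```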
In this paper we apply multiscale entropy (MSE) analysis to data
obtained from magnetic resonance imaging of the female breast.
All cases include lesions that were histologically proven malignant tumors. Our results indicate that multiscale
entropy analysis can play an important role in the detection of tumor tissue when applied to individual datasets,
but it does not allow the computation of universal morphological features. The performance of MSE was compared
with that of traditional features such as difference imaging.
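The standard MSE recipe computes the sample entropy of progressively coarse-grained versions of a signal; a compact sketch follows (the parameters m = 2 and r = 0.15·std are common defaults from the MSE literature, not values taken from this paper):

```python
# Multiscale entropy: sample entropy of coarse-grained series at each scale.
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = None) -> float:
    r = 0.15 * x.std() if r is None else r
    def count(mm):                                    # matching template pairs
        tm = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(tm[:, None] - tm[None, :]).max(axis=2)  # Chebyshev distance
        return (d <= r).sum() - len(tm)               # exclude self-matches
    return -np.log(count(m + 1) / count(m))

def multiscale_entropy(x: np.ndarray, scales: int = 5):
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in range(1, scales + 1)]
```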
Due to the rapid progress in medical imaging technology, analysis of multivariate image data is receiving increased interest. However, their visual exploration is a challenging task since it requires the integration of information from many different sources which usually cannot be perceived at once by an observer.
Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device-derived color scales. With respect to the psychophysical aspects of visualization, more sophisticated color mapping techniques based on device-independent (and perceptually uniform) color spaces such as CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions.
In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. In order to overcome problems of consistent image presentation and color space transformations, we propose perceptually optimized color scales based on CIELUV in combination with the sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable viewing conditions, without complex transformations or additional information. In the experimental section we demonstrate the advantages of our approach by applying these techniques to the visualization of DCE-MRI images from breast cancer research.
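The general idea of such a color scale can be sketched as sampling uniformly in CIELUV and converting the result to sRGB for direct display; the hue and chroma values below are arbitrary examples, not the scales proposed in the paper:

```python
# Lightness ramp defined in CIELUV, converted to displayable sRGB.
import numpy as np
from skimage.color import luv2rgb

L = np.linspace(10, 90, 256)                     # uniform lightness ramp
u = 40 * np.ones_like(L)                         # fixed chromaticity (example)
v = 20 * np.ones_like(L)
luv = np.stack([L, u, v], axis=-1)[None, ...]    # image-shaped array (1, 256, 3)
srgb = np.clip(luv2rgb(luv), 0, 1)               # ready for display as sRGB
```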