KEYWORDS: Data modeling, Computed tomography, Performance modeling, Blood, Feature extraction, Deep learning, Process modeling, Image classification, Deep convolutional neural networks
Accurately predicting the clinical outcome of patients with aneurysmal subarachnoid hemorrhage (aSAH) presents notable challenges. This study sought to develop and assess a Computer-Aided Detection (CAD) scheme employing a deep-learning classification architecture, utilizing brain Computed Tomography (CT) images to forecast the prognosis of aSAH patients. A retrospective dataset encompassing 60 aSAH patients was collated, each with two CT images: one acquired upon admission and one acquired 10 to 14 days after admission. An existing CAD scheme was utilized for preprocessing and data curation based on the presence of blood clots in the cisternal spaces. Two pre-trained architectures, DenseNet-121 and VGG16, were chosen as convolutional bases for feature extraction. A Convolutional Block Attention Module (CBAM) was added atop each pre-trained architecture to enhance focus learning. Employing five-fold cross-validation, the developed prediction model assessed three clinical outcomes following aSAH, and its performance was evaluated using multiple metrics. A comparison was conducted to analyze the impact of CBAM. The prediction model trained using CT images acquired at admission demonstrated higher accuracy in predicting short-term clinical outcomes, whereas the model trained using CT images acquired 10 to 14 days after admission more accurately predicted long-term clinical outcomes. Notably, for short-term outcomes, high sensitivities (0.87 and 0.83) were achieved using the first scan, compared with sensitivities of 0.65 and 0.75 using the last scan, demonstrating the viability of predicting the prognosis of aSAH patients using novel deep learning-based quantitative image markers. The study demonstrated the potential of integrating deep-learning architectures with attention mechanisms to optimize predictive capabilities in identifying clinical complications among patients with aSAH.
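The following is a minimal sketch, not the authors' implementation, of attaching a CBAM block atop a frozen pre-trained DenseNet-121 base in Keras. The 224×224 three-channel input shape, the reduction ratio of 8, and the binary outcome head are illustrative assumptions rather than the paper's reported settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def cbam_block(x, ratio=8):
    """CBAM: channel attention followed by spatial attention."""
    ch = x.shape[-1]
    # Channel attention: a shared MLP over global average- and max-pooled maps.
    shared_mlp = models.Sequential([
        layers.Dense(ch // ratio, activation="relu"),
        layers.Dense(ch),
    ])
    avg = shared_mlp(layers.GlobalAveragePooling2D()(x))
    mx = shared_mlp(layers.GlobalMaxPooling2D()(x))
    ca = layers.Activation("sigmoid")(layers.Add()([avg, mx]))
    x = layers.Multiply()([x, layers.Reshape((1, 1, ch))(ca)])
    # Spatial attention: a 7x7 conv over channel-wise average/max maps.
    avg_sp = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_sp = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    sa = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_sp, max_sp]))
    return layers.Multiply()([x, sa])

# Frozen DenseNet-121 base used as a feature extractor, with CBAM on top.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

inputs = layers.Input((224, 224, 3))  # CT slice replicated to 3 channels (assumed)
feats = cbam_block(base(inputs, training=False))
outputs = layers.Dense(1, activation="sigmoid")(
    layers.GlobalAveragePooling2D()(feats))  # binary outcome probability

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(), "accuracy"])
```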
Radiomics and deep transfer learning have been attracting broad research interest in developing and optimizing CAD schemes for medical images. However, these two technologies are typically applied in separate studies using different image datasets, and the advantages and potential limitations of applying them in CAD applications have not been well investigated. This study aims to compare and assess these two technologies in classifying breast lesions. A retrospective dataset including 2,778 digital mammograms is assembled, in which 1,452 images depict malignant lesions and 1,326 images depict benign lesions. Two CAD schemes are developed to classify breast lesions. First, one scheme is applied to segment lesions and compute radiomics features, while the other applies a pre-trained residual network (ResNet50) as a transfer learning model to extract automated features. Next, the same principal component analysis (PCA) algorithm is used to process both the initially computed radiomics features and the automated features, creating optimal feature vectors by eliminating redundant features. Then, several support vector machine (SVM)-based classifiers are built using the optimized radiomics or automated features. Each SVM model is trained and tested using a 10-fold cross-validation method. Classification performance is evaluated using the area under the ROC curve (AUC). The two SVMs trained using radiomics and automated features yield AUCs of 0.77±0.02 and 0.85±0.02, respectively. In addition, an SVM trained using the fused radiomics and automated features does not yield a significantly higher AUC. This study indicates that (1) using deep transfer learning yields higher classification performance, and (2) radiomics and automated features contain highly correlated information for lesion classification.
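Below is a minimal sketch of the deep transfer learning arm under stated assumptions: a frozen ResNet50 extracts automated features from lesion ROIs, PCA eliminates redundant components, and an SVM is scored with 10-fold cross-validated AUC. The small random arrays stand in for the 2,778 preprocessed mammogram ROIs, and the RBF kernel and 95% variance threshold are illustrative choices, not the paper's settings.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Tiny random placeholders standing in for the real preprocessed ROIs.
rois = np.random.rand(64, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 2, size=64)  # 1 = malignant, 0 = benign

# Frozen ResNet50 used as a fixed feature extractor (2048-D pooled vectors).
extractor = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
feats = extractor.predict(
    tf.keras.applications.resnet50.preprocess_input(rois), batch_size=16)

# PCA removes redundant components; the SVM is scored by 10-fold CV AUC.
clf = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
auc = cross_val_score(clf, feats, labels, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```

The same PCA-plus-SVM tail can be reused unchanged on the radiomics feature matrix, which is what makes the two arms of the comparison directly comparable.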
Computer-aided detection and/or diagnosis (CAD) schemes typically include machine learning classifiers trained using handcrafted features. The objective of this study is to investigate the feasibility of identifying and applying a new quantitative imaging marker to predict the survival of gastric cancer patients. A retrospective dataset including CT images of 403 patients is assembled; among them, 162 patients survived more than 5 years. A CAD scheme is applied to segment gastric tumors depicted in multiple CT image slices. After gray-level normalization of each segmented tumor region to reduce image value fluctuation, we use the publicly available PyRadiomics software package to compute 103 features. To identify an optimal approach to predict patient survival, we investigate two logistic regression model (LRM)-generated imaging markers. The first fuses image features computed from one CT slice, and the second fuses the weighted average of image features computed from multiple CT slices. The two LRMs are trained and tested using a leave-one-case-out cross-validation method. Using the LRM-generated prediction scores, receiver operating characteristic (ROC) curves are computed, and the area under the ROC curve (AUC) is used as the index to evaluate performance in predicting patients' survival. Study results show that the case-based prediction AUC values are 0.70 and 0.72 for the two LRM-generated image markers fused with image features computed from a single CT slice and multiple CT slices, respectively. This study demonstrates that (1) radiomics features computed from CT images carry valuable discriminatory information to predict the survival of gastric cancer patients and (2) fusion of quasi-3D image features yields higher prediction accuracy than using simple 2D image features.
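A minimal sketch of the single-slice marker follows, assuming one image/mask pair per case readable by PyRadiomics (e.g., NIfTI). The names `cases` and `survival` are hypothetical placeholders for the real inputs, and the authors' exact feature set and multi-slice weighting are not reproduced here.

```python
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

extractor = featureextractor.RadiomicsFeatureExtractor(normalize=True)

def case_features(image_path, mask_path):
    """Compute PyRadiomics features for one segmented tumor region."""
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values only; drop the diagnostic metadata.
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# Assumed inputs: `cases` is a list of (ct_path, tumor_mask_path) pairs and
# `survival` holds one label per case (1 = more than 5-year survival).
X = np.vstack([case_features(img, msk) for img, msk in cases])
y = np.asarray(survival)

scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):  # leave-one-case-out CV
    lrm = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores[test] = lrm.predict_proba(X[test])[:, 1]
print(f"Leave-one-case-out AUC: {roc_auc_score(y, scores):.2f}")
```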
Computer-Aided Diagnosis (CAD) schemes used to classify suspicious breast lesions typically include machine learning classifiers trained using features computed from either the segmented lesions or fixed regions of interest (ROIs) covering the lesions. Both methods have advantages and disadvantages. In this study, we investigate a new approach to training a machine learning classifier that fuses image features computed from both the segmented lesions and the fixed ROIs. We assembled a dataset of 2,000 mammograms. An ROI centered on the lesion is extracted from each image; 1,000 ROIs depict verified malignant lesions and the rest depict benign lesions. An adaptive multilayer region growing algorithm is applied to segment suspicious lesions. Several sets of statistical features, texture features based on GLRLM, GLDM, and GLCM, wavelet-transformed features, and shape-based features are computed from the original ROI and the segmented lesion, respectively. Three support vector machines (SVMs) are trained using features computed from the original ROIs, the segmented lesions, and the fusion of both, respectively, using a 10-fold cross-validation method embedded with a feature reduction method, namely a random projection algorithm (see the sketch below). Applying the area under the ROC curve (AUC) as the evaluation index, our results reveal no significant difference between the AUC values computed using classification scores generated by the two SVMs trained with features from the original ROIs or the segmented lesions. However, using the fused features, the AUC of the SVM increases by more than 10% (p<0.05). This study demonstrates that image features computed from the segmented lesions and the fixed ROIs contain complementary discriminatory information; thus, fusing these features can significantly improve CAD performance.
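A minimal sketch of the fusion idea, using GLCM texture features as a stand-in for the paper's full feature set: features computed from the fixed ROI and from the segmented lesion are concatenated, reduced by random projection, and fed to an SVM. The random patches, masks, labels, and the 16-component projection are illustrative placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_features(patch):
    """GLCM texture descriptors at four orientations for one 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

def fused_features(roi, lesion_mask):
    # Features from the fixed ROI plus features from the segmented lesion.
    lesion_only = np.where(lesion_mask, roi, 0).astype(np.uint8)
    return np.concatenate([glcm_features(roi), glcm_features(lesion_only)])

# Random placeholders standing in for real ROIs, segmentation masks, labels.
rng = np.random.default_rng(0)
roi_mask_pairs = [(rng.integers(0, 256, (64, 64), dtype=np.uint8),
                   rng.random((64, 64)) > 0.5) for _ in range(40)]
labels = rng.integers(0, 2, size=40)  # 1 = malignant, 0 = benign

X_fused = np.vstack([fused_features(r, m) for r, m in roi_mask_pairs])
clf = make_pipeline(GaussianRandomProjection(n_components=16), SVC())
clf.fit(X_fused, labels)
```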