Early detection of breast cancer is important to reduce morbidity and mortality. Access to breast imaging is limited in low- and middle-income countries compared to high-income countries, which contributes to advanced-stage breast cancer presentation with poor survival. Pocket-sized portable ultrasound devices, also known as point-of-care ultrasound (POCUS), aided by decision support using deep learning-based algorithms for lesion classification, could be a cost-effective way to enable access to breast imaging in low-resource settings. A previous study, in which convolutional neural networks (CNNs) were used to classify breast cancer in conventional ultrasound (US) images, showed promising results. The aim of the present study is to classify POCUS breast images. A POCUS data set containing 1100 breast images was collected. To increase the size of the data set, a Cycle-Consistent Generative Adversarial Network (CycleGAN) was trained on US images to generate synthetic POCUS images. A CNN was implemented, trained, validated and tested on POCUS images. To improve performance, the CNN was trained on different combinations of data consisting of POCUS images, US images, CycleGAN-generated POCUS images and spatial augmentation. The best result was achieved by a CNN trained on a combination of POCUS images, CycleGAN-generated POCUS images and spatial augmentation, which achieved a 95% confidence interval for the AUC of 93.5%–96.6%.
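The spatial augmentation mentioned above can be sketched with random flips and quarter-turn rotations. This is an illustrative numpy-only example; the function name and the exact set of transformations are assumptions, not taken from the study:

```python
import numpy as np

def spatial_augment(image, rng):
    """Apply random flips and 90-degree rotations to one grayscale image.

    A simple stand-in for spatial augmentation; real pipelines often
    add random crops and elastic deformations as well.
    """
    if rng.random() < 0.5:
        image = np.fliplr(image)   # horizontal flip
    if rng.random() < 0.5:
        image = np.flipud(image)   # vertical flip
    k = rng.integers(0, 4)         # 0-3 quarter turns
    return np.rot90(image, k)

rng = np.random.default_rng(0)
img = np.arange(64, dtype=np.float32).reshape(8, 8)  # toy 8x8 "image"
aug = spatial_augment(img, rng)
```

Because flips and rotations only permute pixels, every augmented image contains exactly the same intensity values as the original.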
Breast cancer is the most common type of cancer globally. Early detection is important for reducing the morbidity and mortality of breast cancer. The aim of this study was to evaluate the performance of different machine learning models for classifying malignant or benign lesions in breast ultrasound images. Three different convolutional neural network approaches were implemented: (a) a simple convolutional neural network, (b) transfer learning using pre-trained InceptionV3, ResNet50V2, VGG19 and Xception, and (c) deep feature networks based on combinations of the four transfer networks in (b). The data consisted of two breast ultrasound image data sets: (1) an open, single-vendor data set collected by Cairo University at Baheya Hospital, Egypt, consisting of 437 benign lesions and 210 malignant lesions, where 10% was set aside as a test set and the rest was used for training and validation (development), and (2) an in-house, multi-vendor data set collected at Unilabs Mammography Unit, Skåne University Hospital, Sweden, consisting of 13 benign lesions and 265 malignant lesions, which was used as an external test set. Both test sets were used for evaluating the networks. The performance measures used were area under the receiver operating characteristic curve (AUC), sensitivity, specificity and weighted accuracy. Holdout, i.e. splitting the development data into training and validation sets just once, was used to find a model with as good a performance as possible. 10-fold cross-validation was also performed to provide uncertainty estimates. For the transfer networks obtained with holdout, Gradient-weighted Class Activation Mapping was used to generate heat maps indicating which parts of the image contributed to the network's decision. For 10-fold cross-validation it was possible to achieve a mean AUC of 92% and a mean sensitivity of 95% for the transfer network based on Xception when testing on the first data set.
When testing on the second data set it was possible to obtain a mean AUC of 75% and mean sensitivity of 86% for the combination of ResNet50V2 and Xception.
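The 10-fold cross-validation used above for uncertainty estimates amounts to shuffling the development set once and rotating which fold is held out for validation. A minimal numpy sketch, where the function name is hypothetical and 647 matches the size of the first data set (437 benign + 210 malignant):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices once, then yield (train, val) index splits
    for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)         # k nearly equal-sized folds
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

splits = list(kfold_indices(647, 10))
```

Each sample appears in exactly one validation fold, so per-fold metrics can be averaged into the mean AUC and sensitivity reported above.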
Myocardial perfusion scintigraphy, a non-invasive imaging technique, is one of the most common cardiological examinations performed today and is used for the diagnosis of coronary artery disease. Currently, the analysis is performed visually by physicians, which is both time consuming and subjective. These are two of the motivations for why an automatic decision-support tool would be useful. We have developed a deep neural network which predicts the occurrence of obstructive coronary artery disease in each of the three major arteries, as well as left bundle branch block. Since any number of these, including none, could have a defect, this is treated as a multi-label classification problem. Due to the highly imbalanced labels, the training loss is weighted accordingly. The prediction is based on two polar maps, captured during stress in upright and supine position, together with additional information such as BMI and angina symptoms. The polar maps are constructed from myocardial perfusion scintigraphy examinations conducted in a dedicated Cadmium-Zinc-Telluride cardio camera (D-SPECT, Spectrum Dynamics). The study includes data from 759 patients. Using 5-fold cross-validation, we achieve an area under the receiver operating characteristic curve of 0.89 on average at the per-vessel level for the three major arteries, 0.94 at the per-patient level and 0.82 for left bundle branch block.
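Weighting the training loss against label imbalance, as described above, can be illustrated with a positively weighted multi-label binary cross-entropy. A numpy sketch in which the label order and the weight values are assumptions for illustration only:

```python
import numpy as np

def weighted_multilabel_bce(y_true, y_prob, pos_weight):
    """Binary cross-entropy per label, with positive examples up-weighted
    to counter class imbalance; averaged over samples and labels."""
    eps = 1e-7
    y_prob = np.clip(y_prob, eps, 1 - eps)   # avoid log(0)
    loss = -(pos_weight * y_true * np.log(y_prob)
             + (1 - y_true) * np.log(1 - y_prob))
    return float(loss.mean())

# Four labels: the three major arteries plus left bundle branch block
# (label order and weights are hypothetical).
y_true = np.array([[1, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
y_prob = np.array([[0.8, 0.1, 0.2, 0.7], [0.3, 0.2, 0.6, 0.1]])
w = np.array([3.0, 3.0, 3.0, 8.0])  # e.g. inverse-frequency weights
loss = weighted_multilabel_bce(y_true, y_prob, w)
```

The weights make a missed positive label cost more than a false alarm, which is what pushes the network to detect the rarer defects.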
When training a deep learning model, the dataset used is of great importance to make sure that the model learns relevant features of the data and that it will be able to generalize to new data. However, it is typically difficult to produce a dataset without some bias toward any specific feature. Deep learning models used in histopathology have a tendency to overfit to the stain appearance of the training data: if the model is trained on data from one lab only, it will usually not be able to generalize to data from other labs. The standard technique to overcome this problem is to use color augmentation of the training data, which artificially generates more variations for the network to learn. In this work we instead test the use of a so-called domain-adversarial neural network, which is designed to prevent the model from being biased towards features that in reality are irrelevant, such as the origin of an image. To test the technique, four datasets from different hospitals for Gleason grading of prostate cancer are used. We achieve state-of-the-art results for these particular datasets, and furthermore, for two of our three test datasets the approach outperforms the use of color augmentation.
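The core building block of a domain-adversarial neural network is the gradient reversal layer: it passes features through unchanged in the forward pass but flips the sign of the gradient flowing back from the domain classifier, so the feature extractor is trained to confuse it and thus learns stain-invariant features. A minimal numpy sketch; the class name and interface are illustrative, not a real framework API:

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; negates (and scales by lambda) the
    gradient in the backward pass, so the feature extractor maximizes
    the domain classifier's loss while the rest of the network
    minimizes the task loss."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task and domain objectives

    def forward(self, x):
        return x  # features are passed through untouched

    def backward(self, grad_from_domain_head):
        return -self.lam * grad_from_domain_head  # reversed gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
out = grl.forward(x)
grad = grl.backward(np.array([0.2, 0.4, -0.6]))
```

In a full DANN, this layer sits between the shared feature extractor and the domain-classification head, while the Gleason-grading head receives ordinary gradients.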
There are several different approaches to treating prostate cancer, depending on the age and general health of the patient but also on how severe the cancer is. To determine the latter, Gleason grading is used. The grade is determined by a pathologist, based on structures in histology samples from prostate biopsies. To determine the diagnosis, both the most common Gleason grade and the highest Gleason grade occurring are used. Since tumours typically fragment the more malignant they become, single cells of Gleason grade 5, the highest and most malignant Gleason grade, can occur intermingled with benign tissue. Therefore, it is of great importance to find even very small areas of the highest grade. This is what we aim to do automatically in this work. We have trained a convolutional neural network, with a ResNet design, to classify small areas of tissue at high magnification as either Gleason 5 or non-Gleason 5. The dataset used is generated from whole slide images from Skåne University Hospital, and consists in total of 19,680 small images of size 128×128 pixels at 40X magnification. We try to make the algorithm more robust to stain variations, a common issue for this type of data, by using colour augmentation. The best accuracy we achieve for classification of Gleason 5 versus non-Gleason 5 images is 92%.
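Colour augmentation of the kind mentioned above can be as simple as randomly scaling and shifting each colour channel of a tissue patch to imitate stain variation. A numpy sketch with an assumed jitter strength; the function name and parameters are illustrative:

```python
import numpy as np

def colour_augment(image, rng, sigma=0.05):
    """Randomly scale and shift each RGB channel independently, a simple
    form of colour augmentation to imitate stain variation between labs.
    Expects a float image with values in [0, 1]."""
    scale = 1.0 + rng.normal(0.0, sigma, size=3)  # per-channel gain
    shift = rng.normal(0.0, sigma, size=3)        # per-channel offset
    out = image * scale + shift
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
patch = rng.random((128, 128, 3))  # a 128x128 RGB tissue patch
aug = colour_augment(patch, rng)
```

More faithful variants jitter in a stain-specific colour space (e.g. haematoxylin-eosin decomposition) rather than raw RGB, but the principle is the same.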
Prostate cancer is the most commonly diagnosed cancer in men. The diagnosis is confirmed by pathologists based on ocular inspection of prostate biopsies in order to classify them according to Gleason score. The main goal of this paper is to automate the classification using convolutional neural networks (CNNs). The introduction of CNNs has broadened the field of pattern recognition. It replaces the classical approach of designing and extracting hand-crafted features for classification with the substantially different strategy of letting the computer itself decide which features are of importance.
For automated prostate cancer classification into the classes benign and Gleason grade 3, 4 and 5, we propose a CNN with small convolutional filters that has been trained from scratch using stochastic gradient descent with momentum. The input consists of microscopic images of haematoxylin and eosin stained tissue; the output is a coarse segmentation into regions of the four different classes. The dataset used consists of 213 images, each considered to be of one class only. Using four-fold cross-validation we obtained an error rate of 7.3%, which is significantly better than the previous state of the art on the same dataset. Although the dataset was rather small, good results were obtained. From this we conclude that CNNs are a promising method for this problem. Future work includes obtaining a larger dataset, which could potentially reduce the error rate further.
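Reporting an error rate from four-fold cross-validation amounts to averaging the per-fold misclassification rates. A small illustrative helper; the function name and the toy data are hypothetical, not from the paper:

```python
import numpy as np

def cross_validated_error(y_true, y_pred, fold_ids):
    """Mean per-fold misclassification rate across cross-validation folds."""
    errors = []
    for f in np.unique(fold_ids):
        mask = fold_ids == f                      # samples in this fold
        errors.append(np.mean(y_true[mask] != y_pred[mask]))
    return float(np.mean(errors))

# Toy example: 8 samples split over 4 folds of 2 samples each.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 0])
folds  = np.array([0, 0, 1, 1, 2, 2, 3, 3])
err = cross_validated_error(y_true, y_pred, folds)  # 2 of 8 wrong -> 0.25
```

Averaging per fold (rather than pooling all predictions) also yields a spread across folds, which gives a rough uncertainty estimate for the reported error.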