This paper presents an automated real-time method for detecting esophageal achalasia (achalasia) for esophagoscopy assistance. Achalasia is a well-recognized primary esophageal motor disorder of unknown etiology. To diagnose achalasia, endoscopic evaluation of the esophagus and stomach is recommended to rule out a malignancy causing the disease or an esophageal squamous cell carcinoma complicating achalasia. However, esophagoscopy has low sensitivity in early-stage achalasia: only about half of patients with early-stage achalasia can be identified. A quantitative detection system for real-time esophagoscopy video is therefore needed to assist achalasia diagnosis. This paper proposes using a convolutional neural network (CNN) to detect all achalasia frames in esophagoscopy videos. Because the features of achalasia are not easily distinguished, we introduce dense pooling connections and dilated convolutions into the CNN to better extract features from esophagoscopy frames. We trained and evaluated our network on an original dataset extracted from several esophagoscopy videos of achalasia patients. Furthermore, we developed a real-time achalasia detection computer-aided diagnosis (CAD) system with the trained network. The CAD system processes each frame of the input esophagoscopy video with only 0.1 milliseconds of delay. The real-time achalasia detection system achieved 0.872 accuracy and a 0.943 AUC score on our dataset.
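To illustrate the kind of feature extractor the abstract describes, the following is a minimal PyTorch sketch of a densely connected block that uses dilated convolutions. The channel counts, growth rate, and dilation schedule are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a densely connected block with dilated convolutions, in the
# spirit of the feature extractor described above. Growth rate, layer
# count, and dilation rates are assumptions for illustration.
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous feature maps
    (dense connectivity) and applies a dilated 3x3 convolution to
    enlarge the receptive field without additional pooling."""
    def __init__(self, in_channels: int, growth_rate: int = 32,
                 num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(num_layers):
            dilation = 2 ** i  # 1, 2, 4, 8: progressively wider context
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=dilation, dilation=dilation, bias=False),
            ))
            channels += growth_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

if __name__ == "__main__":
    block = DilatedDenseBlock(in_channels=64)
    frame_features = torch.randn(1, 64, 56, 56)  # e.g. early CNN features
    print(block(frame_features).shape)           # (1, 64 + 4*32, 56, 56)
```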
We propose a method for three-dimensional reconstruction from a single-shot colonoscopic image. Extracting a three-dimensional colon structure is an important task in colonoscopy. However, a colonoscope captures only two-dimensional information in the form of colonoscopic images, so estimating three-dimensional information from two-dimensional images is in high demand. In this paper, we integrate deep-learning-based depth estimation into three-dimensional reconstruction. This approach removes the inaccurate correspondence matching required by conventional three-dimensional reconstruction procedures. We experimentally demonstrated accurate reconstruction by comparing polyp sizes measured in the three-dimensional reconstruction against an endoscopist's measurements.
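The core step such a pipeline needs after depth estimation is back-projection of the predicted depth map into a 3D point cloud. Below is a minimal sketch under a pinhole camera model; the intrinsics (fx, fy, cx, cy) are placeholder values standing in for the calibrated colonoscope parameters, and the constant depth map stands in for a network prediction.

```python
# Sketch of the back-projection step: given a per-pixel depth map
# predicted by a learned model, recover 3D points with the pinhole
# camera model. Intrinsics here are placeholders, not calibrated values.
import numpy as np

def backproject_depth(depth: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """Convert an (H, W) depth map into an (H*W, 3) point cloud:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    # Stand-in for a CNN-predicted depth map of a colonoscopic frame.
    depth = np.full((480, 640), 50.0)  # e.g. 50 mm everywhere
    cloud = backproject_depth(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
    print(cloud.shape)  # (307200, 3)
```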
Endoscopic submucosal dissection is a minimally invasive treatment for early gastric cancer. In endoscopic submucosal dissection, a physician directly removes the mucosa around a lesion under endoscopic observation using a flush knife. However, the flush knife may accidentally pierce the colonic wall and generate a perforation. If a physician overlooks even a small perforation, the patient may need emergency open surgery, since a perforation can easily cause peritonitis. A computer-aided diagnosis system is therefore in demand to prevent perforations from being overlooked. We believe an automatic perforation detection and localization function would be very useful for analyzing endoscopic submucosal dissection videos in the development of such a computer-aided diagnosis system. At the current stage, research on perforation detection and localization is progressing slowly, and automatic image-based perforation detection remains very challenging. We therefore devote this work to the detection and localization of perforations in colonoscopic videos. In this paper, we propose a supervised-learning method for perforation detection and localization in colonoscopic videos. The method replaces the residual units in YOLO-v3 with dense layers and uses a combination of binary cross entropy and generalized intersection over union loss as the loss function during training. As an initial study, this method achieved 0.854 accuracy and a 0.850 AUC score for perforation detection, and 0.884 mean average precision for localization.
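The abstract names the training loss explicitly, so a sketch of it is straightforward. Below, boxes are (x1, y1, x2, y2) corner coordinates; the relative weighting of the two terms is an assumption, not the paper's reported setting.

```python
# Sketch of the combined training loss: binary cross entropy for the
# classification term plus a generalized IoU (GIoU) term for box
# regression, as named in the abstract.
import torch
import torch.nn.functional as F

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """GIoU loss = 1 - GIoU, where GIoU = IoU - |C \\ (A u B)| / |C|
    and C is the smallest box enclosing both boxes A and B."""
    # Intersection of the two boxes
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)
    # Smallest enclosing box C
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    area_c = ((cx2 - cx1) * (cy2 - cy1)).clamp(min=1e-7)
    giou = iou - (area_c - union) / area_c
    return (1.0 - giou).mean()

def detection_loss(cls_logits, cls_targets, box_pred, box_target,
                   box_weight: float = 1.0):
    # box_weight is an assumed hyperparameter, not the paper's value.
    bce = F.binary_cross_entropy_with_logits(cls_logits, cls_targets)
    return bce + box_weight * giou_loss(box_pred, box_target)
```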
KEYWORDS: Visualization, Image classification, Visual analytics, Image filtering, Convolution, Data modeling, Solid modeling, Feature extraction, Medical research, RGB color model
The purpose of this paper is to present a method for visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endocytoscopic images. An endocytoscope enables direct observation of cells and their nuclei on the colon wall at up to 500-times ultramagnification. For this new modality, a computer-aided pathological diagnosis (CAD) system is strongly required to support non-expert physicians. To develop a CAD system, we adopt a convolutional neural network (CNN) as the classifier of endocytoscopic images. In addition to this classification function, we develop a filter function, based on an analysis of the CNN weights, that visualises the decision-reasoning regions on classified images. This visualisation function helps novice endocytoscopists develop their understanding of the pathological patterns in endocytoscopic images for accurate endocytoscopic diagnosis. In numerical experiments, our CNN model achieved 90% classification accuracy. Furthermore, experimental results show that the decision-reasoning regions suggested by our filter function contain pit patterns that are characteristic in real endocytoscopic diagnosis.
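One standard weight-based visualisation of this kind is class activation mapping (CAM), where the final fully connected weights re-weight the last convolutional feature maps; the paper's filter function may differ in detail, so the sketch below only illustrates the general idea.

```python
# Sketch of a weight-based visualisation (class activation mapping):
# the classifier's final linear weights highlight which spatial regions
# of the last conv features support the predicted class. This is a
# standard technique; the paper's exact filter function may differ.
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps: torch.Tensor,
                         fc_weights: torch.Tensor,
                         class_idx: int) -> torch.Tensor:
    """feature_maps: (C, H, W) output of the last conv layer.
    fc_weights: (num_classes, C) weights of the final linear layer
    applied after global average pooling. Returns an (H, W) heat map."""
    w = fc_weights[class_idx]                      # (C,)
    cam = torch.einsum("c,chw->hw", w, feature_maps)
    cam = F.relu(cam)                              # keep positive evidence
    cam -= cam.min()
    cam /= cam.max().clamp(min=1e-7)               # normalise to [0, 1]
    return cam

if __name__ == "__main__":
    feats = torch.randn(512, 14, 14)  # last conv features of one image
    weights = torch.randn(2, 512)     # e.g. a two-class diagnosis head
    heat = class_activation_map(feats, weights, class_idx=1)
    print(heat.shape, float(heat.max()))  # (14, 14), 1.0
```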
Measuring polyp size is an essential task in colon cancer screening, since polyp-size information plays a critical role in clinical decisions during colonoscopy. However, estimating a polyp's size from a single colonoscopic view without a measurement device is quite difficult, even for expert physicians. To overcome this difficulty, automated size-estimation techniques are desirable in clinical scenes. This paper presents a polyp-size classification method that uses a single colonoscopic image. Our proposed method estimates depth information from a single colonoscopic image with a trained model and utilises the estimated information for the classification. In our method, the depth-estimation model is obtained by deep learning on colonoscopic videos. Experimental results show that binary and trinary polyp-size classification are achieved with 79% and 74% accuracy, respectively, from a single still image of a colonoscopic movie.
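As a rough illustration of why estimated depth enables size classification, the sketch below back-projects a polyp's pixel extent to a physical size under a pinhole model and applies common clinical cut-offs. The focal length and the 6 mm / 10 mm thresholds are assumptions for illustration; the paper's classifier is learned from the estimated depth, not a fixed threshold rule.

```python
# Sketch: under a pinhole model, physical size is approximately
# pixel extent * depth / focal length. Focal length and size cut-offs
# below are assumptions, not the paper's method.
def classify_polyp_size(pixel_extent: float, depth_mm: float,
                        focal_px: float = 500.0,
                        trinary: bool = True) -> str:
    size_mm = pixel_extent * depth_mm / focal_px  # back-projected extent
    if trinary:
        if size_mm < 6.0:
            return "diminutive (<6 mm)"
        if size_mm < 10.0:
            return "small (6-10 mm)"
        return "large (>=10 mm)"
    return "large (>=10 mm)" if size_mm >= 10.0 else "small (<10 mm)"

if __name__ == "__main__":
    # A polyp spanning 120 px at an estimated depth of 35 mm -> 8.4 mm.
    print(classify_polyp_size(120.0, 35.0))  # small (6-10 mm)
```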
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic modality that enables both conventional endoscopic observation and ultramagnified observation at the cellular level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis from endoscopic views of polyps alone during colonoscopy. However, endocytoscopic image diagnosis demands extensive experience from physicians. An automated pathological diagnosis system is required to prevent neoplastic lesions from being overlooked in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that distinguishes neoplastic from non-neoplastic endocytoscopic images. The method consists of two classification steps. In the first step, we classify an input image with a support vector machine and forward the image to the second step if the confidence of this first classification is low. In the second step, we classify the forwarded image with a convolutional neural network and reject the input image if the confidence of the second classification is also low. We experimentally evaluated the classification performance of the proposed method, using about 16,000 colorectal endocytoscopic images as training data and about 4,000 as test data. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
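The two-stage cascade with rejection described above maps directly onto a small piece of control flow. The sketch below uses stand-in models and assumed confidence thresholds; the paper's tuned thresholds and model interfaces are not specified in the abstract.

```python
# Sketch of the two-stage cascade with rejection: an SVM answers first;
# low-confidence images go to a CNN; if the CNN is also unsure, the
# image is rejected for physician review. Thresholds are assumptions.
import numpy as np

SVM_THRESHOLD = 0.9  # assumed confidence cut-offs,
CNN_THRESHOLD = 0.8  # not the paper's tuned values

def classify_with_rejection(image, svm, cnn):
    """Returns (label, deciding_stage); label may be 'rejected'."""
    probs = svm.predict_proba(image)          # stage 1: fast SVM
    if probs.max() >= SVM_THRESHOLD:
        return ("neoplastic" if probs.argmax() == 1
                else "non-neoplastic", "svm")
    probs = cnn.predict_proba(image)          # stage 2: CNN on hard cases
    if probs.max() >= CNN_THRESHOLD:
        return ("neoplastic" if probs.argmax() == 1
                else "non-neoplastic", "cnn")
    return ("rejected", "cnn")                # defer to the physician

if __name__ == "__main__":
    class Dummy:  # stand-in models returning fixed confidences
        def __init__(self, p): self.p = np.array(p)
        def predict_proba(self, image): return self.p
    image = np.zeros((224, 224, 3))
    print(classify_with_rejection(image, Dummy([0.6, 0.4]),
                                  Dummy([0.1, 0.9])))
    # -> ('neoplastic', 'cnn'): SVM was unsure, CNN was confident
```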