Significance: Endoscopic optical coherence tomography (OCT) enables real-time optical biopsy of human organs. Endoscopic probes require miniaturized optics, which in turn limits the field of view. When larger imaging areas are needed, such as in the gastrointestinal tract, the operator must manually scan the probe over the tissue to extend the field of view, often resulting in an imperfect scanning pattern and an increased risk of missing lesions. Automatic scanning has the potential to extend the field of view of OCT, allowing the user to focus on image interpretation during real-time observation.
Aim: This work proposes automatic scanning using a steerable OCT catheter integrated with a robotized interventional flexible endoscope. The aim is to extend the field of view of a low-profile OCT probe while improving scanning accuracy and maintaining a stable endoscope position during minimally invasive treatment of colorectal lesions.
Approach: A geometrical model of the steerable OCT catheter was developed to estimate the volume of the accessible workspace. Experimental validation was performed using electromagnetic tracking of the catheter's positions. An exemplary scanning path was then selected within the available workspace to evaluate motion performance with the robotized steerable OCT catheter. Automatic scanning was compared with teleoperated scanning and with manual scanning using a nonrobotized flexible endoscope. Spectral arc length, scanned area, spacing between scan trajectories, and completion time were the metrics used to quantify performance.
Results: The available scanning workspace was experimentally estimated to be 255 cm³. The automatic scanning mode provided the highest accuracy and smoothness of motion, with a spectral arc length of −3.18, a covered area of 10.11 cm², a spacing of 1.54 mm between 15 sweep trajectories, a maximum translation of 27.99 mm, and a completion time of 3.11 s.
Conclusions: The automatic modality improved the accuracy of scanning within a large workspace. The robotic capability gave the user better control over the spacing resolution of the scanning patterns.
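Spectral arc length, the smoothness metric reported above, measures the arc length of the normalized frequency spectrum of a speed profile; values closer to zero indicate smoother motion. A minimal sketch of this metric, assuming a uniformly sampled speed signal (function name, cutoff, and padding defaults are illustrative, not the authors' implementation):

```python
import numpy as np

def spectral_arc_length(speed, fs, fc=10.0, pad_level=4):
    """Spectral arc length (SPARC) smoothness metric.

    speed : uniformly sampled speed profile
    fs    : sampling rate in Hz
    fc    : frequency cutoff in Hz (assumed default)
    Returns a negative number; closer to zero means smoother motion.
    """
    # zero-pad to the next power of two (plus pad_level) for a dense spectrum
    n = 2 ** (int(np.ceil(np.log2(len(speed)))) + pad_level)
    freq = np.arange(n) * fs / n
    mag = np.abs(np.fft.fft(speed, n))
    mag = mag / mag.max()                  # normalize the magnitude spectrum
    keep = freq <= fc                      # restrict to the band below cutoff
    f, m = freq[keep] / fc, mag[keep]      # normalize the frequency axis
    # negative arc length of the normalized spectrum curve
    return -float(np.sum(np.sqrt(np.diff(f) ** 2 + np.diff(m) ** 2)))
```

A jerky trajectory spreads energy toward higher frequencies, lengthening the spectrum curve and making the metric more negative, which is why the automatic mode's value of −3.18 indicates the smoothest motion of the three modalities.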
Guiqiu Liao, Fernando Gonzalez Herrera, Zhongkai Zhang, Ameya Pore, Luca Sestini, Sujit Kumar Sahu, Oscar Caravaca-Mora, Philippe Zanne, Benoit Rosa, Diego Dall’Alba, Paolo Fiorini, Michel de Mathelin, Florent Nageotte, Michalina J. Gora
Optical Coherence Tomography (OCT) can acquire cross-sectional images beneath tissue surfaces in real time, providing real-time tissue characterization. One possible application is improving the accuracy of white-light-based diagnosis in the digestive system. In this work, we integrate OCT with a robotic surgical endoscope, using local OCT scanning to provide information complementary to the white-light endoscopic camera; in turn, the white-light camera can roughly guide the OCT probe toward a potentially pathological area. To accelerate local scanning, we explore different volumetric scanning strategies to find a good trade-off between efficiency and 3D reconstruction quality. Based on stabilization and segmentation of the OCT images using deep learning techniques, navigation information is extracted for autonomous control, together with tactile information. The proposed method increases the field of view (FoV) of OCT imaging in large lumens under the dynamic displacement caused by tissue motion.
Optical Coherence Tomography (OCT) enables real-time diagnosis in external environments and in small internal lumens, but it has not been applied to larger internal lumens such as the colon or stomach because of the difficulty of scanning. In this work, we use a robotized helical scanning probe to explore large lumens. Based on segmentation of the stabilized OCT images, highly accurate distance measurements and quantitative contact feedback are obtained for the robotic scan. The proposed distance/contact feedback method is robust on both phantoms and real deformable colon tissue. The robotic scanning is demonstrated on a soft phantom.
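The distance/contact feedback described above can be sketched as follows, assuming the segmentation yields one radial boundary distance per A-line; the function name, units, and contact margin are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def distance_contact_feedback(boundary_r_mm, probe_radius_mm, contact_margin_mm=0.1):
    """From a segmented lumen boundary expressed as one radial distance per
    A-line (mm, measured from the catheter axis), derive the minimum tissue
    clearance and a per-A-line contact mask for the robotic controller."""
    clearance = np.asarray(boundary_r_mm, dtype=float) - probe_radius_mm
    contact = clearance <= contact_margin_mm  # tissue touching the sheath
    return float(clearance.min()), contact

# Example: a 1.1 mm-radius catheter sitting eccentrically inside a lumen.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
boundary = 1.1 + 0.5 * (1.0 + np.cos(theta))
d_min, touching = distance_contact_feedback(boundary, 1.1)
```

A controller can then servo on `d_min` to keep the probe at a target standoff, and treat any `True` entry in `touching` as quantitative contact feedback.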
Side-viewing catheter-based medical imaging modalities produce cross-sectional images underneath tissue surfaces. Mainstream side-viewing catheters are based on Optical Coherence Tomography (OCT) or ultrasound and are typically applied in luminal environments. Automatic lumen segmentation provides geometric information for tasks such as robotic control and lumen assessment for real-time diagnosis with side-viewing catheters. In this work, we propose a novel lumen segmentation deep neural network based on explicit coordinate encoding, named CE-net. CE-net is computationally efficient and produces clean segmentations by explicitly encoding the boundary coordinates in one shot. Experimental evaluation shows a processing time of approximately 8 ms per frame while maintaining robustness. We also propose a data generation method that improves CE-net generalization, achieving considerable performance after training on only a small dataset.
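Explicit coordinate encoding means the network regresses one boundary depth per A-line instead of a dense pixel mask, which is why a clean segmentation is obtained in one shot. A minimal NumPy sketch (illustrative of the representation only, not the CE-net architecture) of converting such a coordinate output back into a binary lumen mask:

```python
import numpy as np

def coords_to_mask(boundary_rows, n_depth):
    """Turn a one-shot boundary prediction (one row index per A-line of a
    polar OCT frame) into a binary mask: pixels shallower than the boundary
    are lumen (1), deeper pixels are tissue (0)."""
    rows = np.arange(n_depth)[:, None]  # depth index; one column per A-line
    return (rows < np.asarray(boundary_rows)[None, :]).astype(np.uint8)

# Example: a tiny 6 x 3 polar frame with boundary depths 3, 5, and 4.
mask = coords_to_mask([3, 5, 4], n_depth=6)
```

Because the output is one coordinate per A-line, the boundary is guaranteed to be single-valued along depth, avoiding the speckled or disconnected regions that per-pixel classification can produce.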
Endoscopic submucosal dissection (ESD) is a minimally invasive treatment for early-stage colorectal cancer that can be performed in teleoperation with a robotized flexible interventional endoscope. However, the tissue elevation step, which requires submucosal needle insertion, is still performed manually. In this work, we present image-guided robotic needle placement that uses white-light camera images to control the alignment of the needle and the OCT catheter, while OCT images are used to determine the position of the needle tip during insertion. The procedure is experimentally tested in an optical phantom that simulates the tissue layers of the colon.
The rotational distortion of endoscopic Optical Coherence Tomography (OCT) is caused by friction on the optical fiber and by motor instabilities. Online compensation of this rotational distortion is essential for providing real-time feedback. We propose a new method that integrates a Convolutional Neural Network-based warping-parameter prediction algorithm to correct the azimuthal position of each image line. The method solves the drift problem of iterative processing by predicting an overall shift parameter with the network, achieving a processing time of 145 ms per frame and a variation reduction of 88.9% on data obtained in ex-vivo and in-vivo experiments.
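Once the network has predicted the overall shift parameter, applying it amounts to circularly shifting the frame's A-lines along the angular axis. A hedged sketch of that correction step (the global roll below stands in for the full per-line warping described in the abstract, and the function name is an assumption):

```python
import numpy as np

def apply_azimuthal_shift(frame, shift_alines):
    """Correct a polar OCT frame (depth x A-lines) by circularly rolling it
    along the angular axis by the predicted shift, expressed in A-lines."""
    return np.roll(frame, -int(round(shift_alines)), axis=1)

# Example: a frame whose A-lines drifted by 5 positions is realigned.
reference = np.arange(4 * 16).reshape(4, 16).astype(float)
drifted = np.roll(reference, 5, axis=1)
restored = apply_azimuthal_shift(drifted, 5.0)
```

Predicting one absolute shift per frame, rather than accumulating frame-to-frame corrections, is what prevents the drift that iterative registration schemes suffer from.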