Here we present the design, construction, and evaluation of a 20G vertically inserted razor edge cannula (VIREC) robotic device guided by optical coherence tomography (OCT) for pneumatic dissection. The fiber sensor was glued inside the needle at a fixed offset of ~500 µm. During the experiment, the robotic needle driver precisely moved the VIREC based on the surgeon's input, which was closely monitored by the M-mode OCT system. Once the needle was inserted to the desired depth, air was injected by the surgeon to separate the stroma from Descemet's membrane (DM). In an in vivo study (N=8), the "big bubble" was successfully generated in six of eight eyes tested, and the DM was perforated in two eyes. These results demonstrate the reliability and effectiveness of VIREC for "big bubble" DALK.
Real-time fringe projection profilometry (FPP) was developed as a 3D vision system to plan and guide autonomous robotic intestinal suturing. Conventional FPP requires sinusoidal patterns with multiple frequencies and phase shifts to generate tissue point clouds, resulting in a slow frame rate. Therefore, although FPP can reconstruct dense and accurate tissue point clouds, it is often too slow for dynamic measurements. To address this problem, we propose a deep learning-based single-shot FPP algorithm that reconstructs tissue point clouds from a single sinusoidal pattern using a Swin-Unet. With this approach, we achieved an FPP imaging frame rate of 50 Hz while maintaining high point-cloud measurement accuracy. The network was trained and evaluated on both synthetic and experimental datasets, achieving an overall relative error of 1–3%.
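To make the conventional multi-pattern baseline concrete: standard phase-shifting FPP projects several phase-shifted sinusoids and recovers the wrapped phase per pixel, which is why multiple frames are needed per reconstruction. Below is a minimal sketch of the textbook four-step phase-shifting computation; it illustrates the conventional approach that the single-shot Swin-Unet replaces, and is not the authors' implementation.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shifting: I_k = A + B*cos(phi + k*pi/2),
    k = 0..3. Returns the wrapped phase in (-pi, pi]."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic demonstration with a known phase map (no wrapping needed)
phi = np.linspace(-3.0, 3.0, 256)
A, B = 0.5, 0.4
I1, I2, I3, I4 = (A + B * np.cos(phi + k * np.pi / 2) for k in range(4))
est = wrapped_phase(I1, I2, I3, I4)  # recovers phi exactly here
```

In practice the wrapped phase must still be unwrapped (hence the multiple frequencies mentioned above) and converted to depth via calibration, which is what makes the pipeline slow for dynamic tissue.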
In vivo 3D OCT imaging of live animals generally suffers from motion artifacts due to involuntary tissue movement. Here, we propose a real-time 3D OCT imaging approach that uses a convolutional neural network (CNN)/regression-based algorithm to correct tissue motion in vivo. The system first scans four reference images along the slow axis, within a millisecond-scale acquisition time, before acquiring a C-mode image. The algorithm recognizes the tissue surface with the CNN, then uses the segmentation result together with the reference images to compensate for lateral and axial motion. We evaluated the system performance using a fish-eye model.
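One common way to recover lateral displacement from detected surfaces, as in the reference-image comparison described above, is 1-D cross-correlation of surface-height profiles. The sketch below is an illustrative stand-in for that step, not the authors' exact algorithm; the Gaussian surface feature and pixel units are assumptions.

```python
import numpy as np

def lateral_shift(ref_profile, cur_profile):
    """Estimate the lateral displacement (in pixels) between two
    detected surface-height profiles via 1-D cross-correlation.
    Illustrative stand-in for a reference-image comparison step."""
    corr = np.correlate(cur_profile, ref_profile, mode="full")
    return int(corr.argmax()) - (len(ref_profile) - 1)

# Synthetic check: a surface feature displaced by 7 pixels
x = np.arange(128)
ref = np.exp(-(x - 50.0) ** 2 / 40.0)  # Gaussian surface feature
cur = np.roll(ref, 7)                  # laterally shifted copy
shift = lateral_shift(ref, cur)        # -> 7
```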
Significance: Optical coherence tomography (OCT) allows high-resolution volumetric three-dimensional (3D) imaging of biological tissues in vivo. However, 3D-image acquisition can be time-consuming and often suffers from motion artifacts due to involuntary and physiological movements of the tissue, limiting the reproducibility of quantitative measurements.
Aim: To achieve real-time 3D motion compensation for corneal tissue with high accuracy.
Approach: We propose an OCT system for volumetric imaging of the cornea, capable of compensating both axial and lateral motion with micron-scale accuracy and millisecond-scale processing time based on higher-order regression. Specifically, the system first scans three reference B-mode images along the C-axis before acquiring a standard C-mode image. The reference and volumetric images are then compared using a surface-detection algorithm and higher-order polynomials to deduce 3D motion and remove motion-related artifacts.
Results: System parameters are optimized, and performance is evaluated using both phantom and corneal (ex vivo) samples. An overall motion-artifact error of <4.61 μm and a processing time of about 3.40 ms per B-scan were achieved.
Conclusions: Higher-order regression achieved effective and real-time compensation of 3D motion artifacts during corneal imaging. The approach can be expanded to 3D imaging of other ocular tissues. Implementing such motion-compensation strategies has the potential to improve the reliability of objective and quantitative information that can be extracted from volumetric OCT measurements.
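A minimal sketch of the higher-order-regression step described in the Approach, assuming surface depths have already been detected per A-scan: the per-B-scan difference between the current and reference surface is fitted with a higher-order polynomial, yielding a smooth axial-motion estimate that can be subtracted from the scan. The polynomial order and synthetic surfaces are illustrative, not the paper's parameters.

```python
import numpy as np

def axial_motion_estimate(ref_surface, scan_surface, order=3):
    """Estimate the axial displacement of one B-scan relative to a
    reference by fitting a higher-order polynomial to the
    surface-height difference. Inputs are 1-D arrays of detected
    surface depths (pixels) along the fast axis."""
    x = np.arange(len(ref_surface))
    diff = scan_surface - ref_surface
    coeffs = np.polyfit(x, diff, order)   # higher-order regression
    return np.polyval(coeffs, x)          # smooth motion estimate

# Synthetic check: a bulk axial shift plus tilt is recovered
x = np.arange(200)
ref = 50 + 0.001 * (x - 100) ** 2         # curved corneal surface
motion = 5.0 + 0.02 * x                   # axial shift + tilt
est = axial_motion_estimate(ref, ref + motion, order=3)
```

Fitting a smooth polynomial rather than using the raw per-column difference makes the correction robust to segmentation noise in individual A-scans.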
Optical coherence tomography (OCT) has evolved into a powerful imaging technique that allows high-resolution visualization of biological tissues. However, most in vivo OCT systems for real-time volumetric (3D) imaging suffer from image distortion due to motion artifacts induced by involuntary and physiological movements of the living tissue, such as the eye, which is constantly in motion. While several methods have been proposed to account for and remove motion artifacts during OCT imaging of the retina, fewer works have focused on motion-compensated OCT-based measurements of the cornea. Here, we propose an OCT system for volumetric imaging of the cornea, capable of compensating both axial and lateral motion with micron-scale accuracy and millisecond-scale processing time based on higher-order regression. System performance was evaluated during volumetric imaging of corneal phantom and bovine (ex vivo) samples positioned in the palm of a hand to simulate involuntary 3D motion. An overall motion-artifact error of less than 4.61 μm and a processing time of about 3.40 ms per B-scan were achieved.
KEYWORDS: Optical coherence tomography, Real time imaging, Tissues, Natural surfaces, Motion analysis, In vivo imaging, Imaging systems, Detection and tracking algorithms, Cornea
Optical coherence tomography (OCT) has evolved into a powerful clinical tool, with a wide range of applications in ophthalmology. However, most OCT systems for real-time volumetric (3D) in vivo imaging suffer from image distortion due to motion artifacts induced by involuntary and physiological movements of the living tissue. Several methods have been proposed to obtain motion-free images, yet they are generally limited in their applicability due to long acquisition times, requiring multiple volumes [1], and/or the need for additional hardware [2]. Here we propose and analyze a motion-compensated 3D-OCT imaging system that uses higher-order regression analysis and show that it can effectively correct motion artifacts in the 0–5 Hz range in real time without requiring additional hardware.
In partial cornea transplant surgery, a procedure known as "big bubble" is used, which requires precise needle detection and tracking. To accomplish this goal, we used traditional image segmentation methods and trained a convolutional neural network (CNN) model to track the needle during cornea transplant surgery guided by OCT B-scan imaging. The dataset was generated with our laboratory OCT system, and the images were classified into three categories. The network architecture is based on U-Net and modified to avoid overfitting. Based on these results, we are able to track the needle and measure the distance between the needle tip and the bottom corneal layer.
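Once the needle and the bottom corneal layer are segmented, the tip-to-layer distance reduces to simple mask geometry. The sketch below shows one plausible post-processing step on binary segmentation masks; the tip-finding heuristic and the `axial_res_um` pixel spacing are assumptions for illustration, not values from the work above.

```python
import numpy as np

def tip_to_layer_distance(needle_mask, layer_mask, axial_res_um=3.5):
    """Locate the needle tip as the deepest needle pixel in a binary
    B-scan mask (rows = depth, cols = lateral) and return the axial
    distance (um) to the detected bottom layer in the same column.
    axial_res_um is an assumed pixel spacing, not a measured value."""
    rows, cols = np.nonzero(needle_mask)
    tip_row, tip_col = rows.max(), cols[rows.argmax()]
    layer_rows = np.nonzero(layer_mask[:, tip_col])[0]
    below = layer_rows[layer_rows > tip_row]
    if below.size == 0:
        return None  # layer not visible beneath the tip
    return (below.min() - tip_row) * axial_res_um

# Toy masks: needle reaching row 39, layer at row 70
needle = np.zeros((100, 60), dtype=bool)
needle[10:40, 20:23] = True
layer = np.zeros_like(needle)
layer[70, :] = True
d = tip_to_layer_distance(needle, layer)  # (70 - 39) * 3.5 = 108.5 um
```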