Real-time fringe projection profilometry (FPP) is developed as a 3D vision system to plan and guide autonomous robotic intestinal suturing. Conventional FPP requires sinusoidal patterns with multiple frequencies and phase shifts to generate tissue point clouds, resulting in a slow frame rate; although it can reconstruct dense and accurate tissue point clouds, it is often too slow for dynamic measurements. To address this problem, we propose a deep learning-based single-shot FPP algorithm that reconstructs tissue point clouds from a single sinusoidal pattern using a Swin-Unet. With this approach, we achieved an FPP imaging frame rate of 50 Hz while maintaining high point cloud measurement accuracy. The network was trained and evaluated on both synthesized and experimental datasets, achieving an overall relative error of 1-3%.
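To illustrate the single-shot idea, the minimal PyTorch sketch below maps one fringe image to a phase/depth map with a toy encoder-decoder trained against reference phase maps from conventional multi-frequency phase-shifting FPP. The abstract's actual network is a Swin-Unet; the class name, channel counts, and the random tensors here are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SingleShotFPPNet(nn.Module):
    """Toy encoder-decoder mapping one fringe image to a phase/depth map.
    A simplified stand-in for the Swin-Unet described in the abstract."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, fringe):
        return self.decoder(self.encoder(fringe))

# One training step against phase maps obtained offline with
# conventional multi-frequency phase-shifting FPP (hypothetical data).
model = SingleShotFPPNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
fringe = torch.rand(4, 1, 256, 256)    # single sinusoidal pattern images
gt_phase = torch.rand(4, 1, 256, 256)  # reference phase / depth maps
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(fringe), gt_phase)
loss.backward()
optimizer.step()
```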
Fringe projection profilometry (FPP) is being developed as a 3D vision system to assist robotic surgery and autonomous suturing. Conventionally, fluorescent markers are placed on the target tissue to indicate suturing landmarks, which not only increases system complexity but also raises safety concerns. To address these problems, we propose a numerical landmark detection algorithm based on deep learning. A landmark heatmap is regressed by an adapted U-Net from the four-channel data generated by the FPP system. A Markov random field that leverages the structural prior is then used to select the correct set of landmarks from the heatmap. The accuracy of the proposed method is verified through ex vivo porcine intestine landmark detection experiments.
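To make the heatmap-plus-structure-prior step concrete, the NumPy sketch below extracts the top-k candidate peaks from each landmark heatmap and then picks one candidate per landmark so that the pairwise distances best match a prior distance matrix. The exhaustive search is a simplified stand-in for the MRF inference described in the abstract, and the function names, candidate count, and prior matrix are all hypothetical.

```python
import itertools
import numpy as np

def topk_peaks(heatmap, k=3):
    """Return (row, col) of the k highest responses in one landmark heatmap."""
    flat = np.argsort(heatmap.ravel())[::-1][:k]
    return np.stack(np.unravel_index(flat, heatmap.shape), axis=1)

def select_landmarks(heatmaps, prior_dist, k=3):
    """Brute-force stand-in for MRF inference: choose one candidate per
    landmark so pairwise distances best match the structural prior."""
    candidates = [topk_peaks(h, k) for h in heatmaps]
    best, best_cost = None, np.inf
    for combo in itertools.product(*[range(k) for _ in candidates]):
        pts = np.array([candidates[i][c] for i, c in enumerate(combo)], float)
        cost = 0.0
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                cost += abs(np.linalg.norm(pts[i] - pts[j]) - prior_dist[i, j])
        if cost < best_cost:
            best, best_cost = pts, cost
    return best

# Hypothetical example: 4 landmark heatmaps and a uniform prior distance matrix.
rng = np.random.default_rng(0)
heatmaps = rng.random((4, 64, 64))
prior = np.full((4, 4), 20.0)
print(select_landmarks(heatmaps, prior))
```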
We developed a fully automated abdominal tissue classification algorithm for swept-source OCT imaging using a hybrid multilayer perceptron (MLP) and convolutional neural network (CNN) classifier. For the MLP, we extracted an extensive set of features and selected a subset to improve network efficiency. For the CNN, we designed a three-channel model that combines intensity information with the depth-dependent optical properties of the tissue. A rule-based decision fusion approach was applied to select the more reliable prediction between the two classifiers. The model was trained on ex vivo porcine samples (~200 B-mode images, ~200,000 A-line signals) and evaluated on a hold-out dataset. Compared with other algorithms, our classifier achieved the highest accuracy (0.9114) and precision (0.9106). These promising results demonstrate its feasibility for real-time abdominal tissue sensing during OCT-guided robot-assisted laparoscopic surgery.
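The sketch below shows one possible form of rule-based decision fusion between per-A-line class probabilities from the MLP and CNN branches: keep the class the two classifiers agree on, otherwise trust the branch whose top-class confidence exceeds the other by a margin. The agreement/margin rule, function names, and example data are assumptions for illustration, not the authors' actual fusion rule.

```python
import numpy as np

def fuse_predictions(mlp_probs, cnn_probs, margin=0.1):
    """Hypothetical rule-based fusion of per-A-line class probabilities:
    keep agreed predictions; on disagreement, prefer the MLP only if its
    top-class confidence beats the CNN's by `margin`, else use the CNN."""
    fused = []
    for p_mlp, p_cnn in zip(mlp_probs, cnn_probs):
        cls_mlp, cls_cnn = int(np.argmax(p_mlp)), int(np.argmax(p_cnn))
        if cls_mlp == cls_cnn:
            fused.append(cls_mlp)
        elif p_mlp[cls_mlp] > p_cnn[cls_cnn] + margin:
            fused.append(cls_mlp)
        else:
            fused.append(cls_cnn)
    return np.array(fused)

# Hypothetical example: 5 A-lines, 3 tissue classes.
rng = np.random.default_rng(1)
mlp_probs = rng.dirichlet(np.ones(3), size=5)
cnn_probs = rng.dirichlet(np.ones(3), size=5)
print(fuse_predictions(mlp_probs, cnn_probs))
```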