Purpose: Providing manual formative feedback to trainees in minimally invasive surgery is time-consuming and requires observation by experts, whose availability is often limited. Existing automatic assessment methods often provide a coarse, unidimensional evaluation that lacks formative value. Method: We trained a multi-layer perceptron to establish the multiple non-linear regression mapping a set of kinematic metrics to five scores representing six surgical technical criteria. We tested our method on two datasets: a new in-house laparoscopy dataset and a well-known robotic laparoscopy dataset. Results: Our results show that a simple deep learning model using motion metrics as inputs outperforms the state-of-the-art method based on a more elaborate end-to-end neural network. We also show that our method is suitable for both robotic and non-robotic laparoscopic skills assessment. Moreover, the provided assessment is formative, since it rates specific technical skills and shows students where to direct their training efforts. Conclusions: We developed a relatively simple method for the automatic assessment not only of the global surgical technical level but also of specific surgical technical skills. Our method shows that descriptive motion metrics combined with a simple deep learning model are powerful enough to capture and extract high-level concepts such as surgical flow of operation or tissue handling. Such a method can simplify access to formative surgical training in both robotic and non-robotic laparoscopy.
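As a minimal sketch of this kind of pipeline, the snippet below fits a multi-output MLP regressor mapping per-trial motion metrics to per-criterion scores. The metric count, layer sizes, and score ranges are illustrative assumptions, not the configuration used in the study.

```python
# Illustrative sketch only: the study's exact architecture, metric set and
# score dimensionality are assumptions here, chosen for demonstration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per trial, columns = kinematic metrics (e.g. path length,
# mean velocity, smoothness...); y: one column per technical criterion.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # 12 hypothetical motion metrics
y = rng.uniform(1, 5, size=(200, 6))    # hypothetical per-criterion scores

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:1]))             # one score per technical criterion
```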
Cone-Beam CT (CBCT) is a valuable imaging modality for the intraoperative localization of pulmonary nodules during Video-Assisted Thoracoscopic Surgery (VATS). However, inferring the nodule position from the CBCT into the operative field remains challenging and could greatly benefit from computer-aided guidance. As a first step towards an Augmented Endoscopy guidance system, we propose to register 2D monocular endoscopic views into the 3D CBCT space. Ribs and wound protectors are segmented in both imaging modalities and then registered using an image-to-cloud Iterative Closest Point (ICP) variant. The method is evaluated qualitatively on clinical VATS video sequences from three patients. The promising results validate this first step towards seamless monocular VATS navigation.
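For readers unfamiliar with ICP, the sketch below shows a basic point-to-point variant; the actual method is an image-to-cloud variant matching 2D image segmentations to the 3D CBCT cloud under camera projection, which this toy version does not reproduce.

```python
# Minimal point-to-point ICP sketch (not the paper's image-to-cloud variant).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation/translation mapping src points onto dst."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=50):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)        # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Toy demo: recover a small known shift of a random cloud.
pts = np.random.default_rng(0).normal(size=(500, 3))
aligned = icp(pts + [0.05, -0.05, 0.02], pts)
print(np.abs(aligned - pts).mean())
```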
The accuracy of biopsy sampling and the related tumor localization are major issues for prostate cancer diagnosis and therapy. However, the ability to navigate accurately to biopsy targets faces several difficulties stemming from both the properties of transrectal ultrasound (TRUS) image guidance and prostate motion or deformation. To reduce inaccuracy and exam duration, the main objective of this study is to develop real-time navigation assistance that provides the current probe position and orientation with respect to the deformable organ and to the next biopsy targets. We propose a real-time deep learning 2D/3D registration method based on Convolutional Neural Networks (CNNs) to localize the current 2D US image relative to the available 3D TRUS reference volume. We experimented with several scenarios combining different input data, including a pair of successive 2D US images, the optical flow between them, and the current probe tracking information. The main novelty of our study is to take prior navigation trajectory information into account by introducing the previous registration result. The model is evaluated on clinical data through simulated biopsy trajectories. The results highlight a significant improvement when exploiting trajectory information, especially through prior registration results and probe tracking parameters. With such trajectory information, we achieve an average registration error of 2.21 ± 2.89 mm. The network demonstrates efficient generalization to new patients and new trajectories, which is promising for continuous tracking during biopsy procedures.
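The sketch below illustrates one plausible way to structure such a CNN regressor, fusing the image pair and optical flow with probe tracking and the prior registration result; the class name, channel layout, and pose parameterization are assumptions, not the architecture reported in the study.

```python
# Hedged sketch of the kind of CNN regressor described; the actual network
# in the study may differ in every detail.
import torch
import torch.nn as nn

class USRegNet(nn.Module):
    def __init__(self, n_pose=6):
        super().__init__()
        # 4 input channels: previous frame (1) + current frame (1)
        # + 2-channel optical flow between them.
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse image features with probe tracking and the prior registration.
        self.head = nn.Sequential(
            nn.Linear(128 + n_pose + n_pose, 64), nn.ReLU(),
            nn.Linear(64, n_pose),   # pose of the 2D slice in the 3D volume
        )

    def forward(self, frames_and_flow, probe_pose, prior_reg):
        f = self.features(frames_and_flow).flatten(1)
        return self.head(torch.cat([f, probe_pose, prior_reg], dim=1))

net = USRegNet()
out = net(torch.randn(1, 4, 128, 128), torch.randn(1, 6), torch.randn(1, 6))
print(out.shape)  # torch.Size([1, 6])
```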
Osteoarthritis involves the progressive degeneration of the cartilage surface, which leads to joint pain and dysfunction. Arthroscopy is the standard procedure for monitoring cartilage degeneration in situ. However, this evaluation depends on the surgeon's subjective interpretation and may therefore lack reliability. Full-field optical coherence tomography (FFOCT) is a promising technique that could enable micrometer-scale cartilage evaluation in situ.
A preliminary study was carried out by the Grenoble-Alpes University Hospital, in collaboration with the TIMC-IMAG laboratory and LLTech, to evaluate the ability of the FFOCT microscope commercialized by LLTech to assess cartilage quality. FFOCT images were acquired, and matching histology sections prepared, for 33 ex vivo cartilage samples. A strong and significant correlation was found between the histological and FFOCT evaluations of cartilage quality, both qualitatively and quantitatively.
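Purely as an illustration of how such agreement can be quantified, the snippet below computes a rank correlation between paired grades; the grading scales and data are made up for the example, not taken from the study.

```python
# Illustrative only: hypothetical grades on a 0-3 scale for 33 samples.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
histology_grade = rng.integers(0, 4, size=33)
ffoct_grade = np.clip(histology_grade + rng.integers(-1, 2, size=33), 0, 3)

rho, p = spearmanr(histology_grade, ffoct_grade)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```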
In order to use FFOCT for the evaluation of cartilage quality at the micrometer scale in situ, LLTech has developed an endomicroscope version of the FFOCT microscope. When held in contact with the tissue to be imaged, the rigid FFOCT endomicroscope acquires a micron-resolution virtual optical slice at a depth of 20 microns below the surface, showing the tissue architecture at this depth in an 'en face' view of 1 mm diameter.
This FFOCT endomicroscope has been assessed by acquiring images of fresh ex vivo human cartilage samples with both the microscope and the endomicroscope; chondrocytes were identified in both sets of images. Furthermore, a localization environment is under development at the TIMC-IMAG laboratory to track the movement of the endomicroscope so that multiple endomicroscope images can be mosaicked together, as illustrated in the sketch below.
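A toy version of such pose-based mosaicking is sketched below; it assumes each 'en face' tile comes with a tracked in-plane pose (x, y, theta), which is a simplification of whatever the localization environment will actually provide.

```python
# Toy mosaicking sketch: place tracked tiles into a global canvas and
# average the overlaps. Poses, scale and canvas size are assumptions.
import numpy as np
import cv2

def mosaic(tiles, poses, canvas_px=2000, px_per_mm=500):
    canvas = np.zeros((canvas_px, canvas_px), np.float32)
    weight = np.zeros_like(canvas)
    for tile, (x_mm, y_mm, theta) in zip(tiles, poses):
        h, w = tile.shape
        # Rigid transform from tile pixels to canvas pixels.
        M = cv2.getRotationMatrix2D((w / 2, h / 2), np.degrees(theta), 1.0)
        M[0, 2] += x_mm * px_per_mm + canvas_px / 2 - w / 2
        M[1, 2] += y_mm * px_per_mm + canvas_px / 2 - h / 2
        canvas += cv2.warpAffine(tile.astype(np.float32), M, canvas.shape[::-1])
        weight += cv2.warpAffine(np.ones_like(tile, np.float32), M, canvas.shape[::-1])
    return canvas / np.maximum(weight, 1e-6)   # average in overlaps

# Toy demo: two overlapping synthetic 1 mm tiles, 0.5 mm apart.
tile = np.random.default_rng(2).random((500, 500)).astype(np.float32)
m = mosaic([tile, tile], [(0.0, 0.0, 0.0), (0.5, 0.0, 0.1)])
```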