Multimodal endoscopy using fluorescence molecular probes is a promising method for surveying the entire esophagus to detect cancer progression. The ratio of target fluorescence to the surrounding background provides a quantitative measure of progression from Barrett's esophagus to high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC). However, fluorescence images are currently quantified only after the endoscopic procedure. We developed a Chan–Vese-based algorithm to segment fluorescence targets, followed by morphological operations to generate the surrounding background, and thus calculate target/background (T/B) ratios that can provide real-time guidance for biopsy and endoscopic therapy. With an initial processing speed of 2 fps and a T/B ratio calculated for each frame, our method provides quasi-real-time quantification of molecular probe labeling to the endoscopist. Furthermore, an automatic computer-aided diagnosis algorithm can be applied to the recorded endoscopic video, yielding an overall T/B ratio for each patient. A receiver operating characteristic curve was used to determine the classification threshold for HGD/EAC with leave-one-out cross-validation. Achieving 92% sensitivity and 75% specificity in classifying HGD/EAC, our automatic algorithm shows promise for surveillance procedures to help manage esophageal cancer and other cancers inspected by endoscopy.
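As a concrete illustration of the segmentation-plus-background pipeline described above, the following is a minimal sketch (not the authors' code) using scikit-image's morphological Chan–Vese variant in place of the paper's level-set implementation; `frame`, `iters`, and `ring_px` are assumed names and parameters:

```python
# Minimal sketch of a T/B pipeline: Chan-Vese segmentation of the fluorescent
# target, then a morphological ring around it as the background estimate.
import numpy as np
from skimage.segmentation import morphological_chan_vese
from skimage.morphology import binary_dilation, disk

def target_background_ratio(frame, iters=100, ring_px=15):
    """Segment fluorescent target(s) and return the mean target/background ratio."""
    # Chan-Vese evolves a contour toward regions of homogeneous intensity.
    target = morphological_chan_vese(frame, iters).astype(bool)
    # Background: a ring of pixels surrounding the target, obtained by
    # dilating the target mask and excluding the target itself.
    background = binary_dilation(target, disk(ring_px)) & ~target
    return frame[target].mean() / frame[background].mean()
```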
For patients with malignant brain tumors (glioblastomas), safe maximal resection of the tumor is critical for increased survival. However, complete resection is difficult to achieve because these tumors are invasive: their margins blur from frank tumor into more normal brain tissue that single cells or clusters of malignant cells may nonetheless have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative contrast-enhanced visualization of tumor during neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resection of such tumors. Real-time tumor margin identification in NIR imaging could benefit both surgeons and patients by reducing the operating time and space required by other imaging modalities, such as intraoperative MRI, and has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed to identify tumor boundaries in an ex vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed from a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. Results demonstrated the feasibility of detecting brain tumor margins in quasi-real-time, with the potential to yield more precise brain tumor resection techniques or even robotic interventions in the future.
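As context for the energy minimization described above, the standard Chan-Vese functional in level-set form is shown below; the specific weights used in this work may differ. Here $I$ is the image, $\phi$ the level set function (positive inside the contour), $H$ the Heaviside function, $\delta$ its derivative, and $c_1, c_2$ the mean intensities inside and outside the contour:

$$E(c_1, c_2, \phi) = \mu \int_\Omega \delta(\phi)\,|\nabla\phi|\,dx\,dy + \nu \int_\Omega H(\phi)\,dx\,dy + \lambda_1 \int_\Omega |I - c_1|^2\,H(\phi)\,dx\,dy + \lambda_2 \int_\Omega |I - c_2|^2\,\bigl(1 - H(\phi)\bigr)\,dx\,dy$$

The first term penalizes contour length, and the $\nu$ term penalizes the area inside the contour; the latter corresponds to the user-adjustable area penalty mentioned in the following abstract.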
Multimodal endoscopy, with fluorescence-labeled probes binding to overexpressed molecular targets, is a promising technology for visualizing early-stage cancer. The T/B ratio is the quantitative measure used to correlate fluorescent regions with cancer. Currently, the T/B ratio is calculated in post-processing and provides no real-time feedback to the endoscopist. To achieve real-time computer-aided diagnosis (CAD), we established image processing protocols for calculating the T/B ratio and locating high-risk fluorescence regions to guide biopsy and therapy in Barrett's esophagus (BE) patients.
Methods: The Chan-Vese algorithm, an active contour model, was used to segment high-risk regions in fluorescence videos. A semi-implicit gradient descent method was applied to minimize the energy function of this model and evolve the segmentation (the update equation is sketched below this abstract). The surrounding background was then identified using morphological operations. The average T/B ratio was computed, and regions of interest were highlighted based on a user-selected threshold. Evaluation was conducted on 50 fluorescence videos acquired from clinical recordings made with a custom multimodal endoscope.
Results: At a processing speed of 2 fps on a laptop computer, we obtained segmentations of high-risk regions that were judged accurate by expert review. For each case, the clinical user could refine the target boundary by changing the penalty on the area inside the contour.
Conclusion: An automatic, real-time procedure for calculating the T/B ratio and identifying high-risk regions of early esophageal cancer was developed. Future work will increase the processing speed above 5 fps, refine the clinical interface, and extend the approach to additional GI cancers and fluorescence peptides.
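For reference, under the standard Chan-Vese formulation, gradient descent on the energy shown earlier evolves the level set $\phi$ by

$$\frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi)\left[\mu\,\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) - \nu - \lambda_1\,(I - c_1)^2 + \lambda_2\,(I - c_2)^2\right],$$

and a semi-implicit scheme treats the curvature (divergence) term at the new time step, yielding a linear system per iteration that remains stable at larger time steps than a fully explicit update. The discretization details given here are generic, not necessarily those used in the paper.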
KEYWORDS: Cameras, 3D metrology, 3D image processing, Clouds, Image registration, X-rays, 3D modeling, 3D image reconstruction, Panoramic photography, Reconstruction algorithms
Rapid improvements in the performance of sophisticated optical components, digital image sensors, and computing power, along with decreasing costs, have enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid acquisition, and suitability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suited to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach based on machine vision is proposed for 3-D profile measurement of small, complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of the threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated by multiview stereo vision using linear motion of the test piece, and this is repeated after a rotation to form additional point clouds. These point clouds are registered into a complete reconstruction using a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of the proposed method for future robotically driven industrial 3-D inspection.
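As one concrete piece of the feature-based 3-D registration step described above, the sketch below shows the standard SVD (Kabsch) solution for the rigid transform between matched 3-D feature pairs; this is a generic illustration, not the paper's algorithm, and `P`, `Q`, and `rigid_align` are assumed names:

```python
# Rigid alignment of matched 3-D feature pairs P -> Q via the Kabsch method.
import numpy as np

def rigid_align(P, Q):
    """Return R (3x3), t (3,) minimizing sum ||R @ P_i + t - Q_i||^2 over matches."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)        # 3x3 cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice this closed-form step would be wrapped in an outer loop (e.g., RANSAC or ICP-style iteration) to reject mismatched features before merging the point clouds.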
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which had been reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the pose measured with a micro-positioning stage. In these preliminary results, the computational efficiency of the MATLAB implementation is near real-time (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis yielded an average 3-mm position error and 2.5-degree orientation error. These errors arise from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of the endoscope's intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
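As an illustration of the pose-recovery step, the sketch below estimates camera pose from 2-D/3-D correspondences with OpenCV's PnP solver; the paper's constrained bundle adjustment additionally refines structure, so treat this as a simplified stand-in. `model_pts`, `image_pts`, `K`, and `estimate_pose` are assumed names:

```python
# Camera pose from 3-D model points matched to 2-D endoscope image features.
import numpy as np
import cv2

def estimate_pose(model_pts, image_pts, K, dist=None):
    """model_pts: Nx3 points on the 3-D virtual model; image_pts: Nx2 matches;
    K: 3x3 intrinsic matrix from prior camera calibration."""
    ok, rvec, tvec = cv2.solvePnP(
        model_pts.astype(np.float64), image_pts.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)   # iterative reprojection refinement
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)    # rotation vector -> 3x3 rotation matrix
    return R, tvec.ravel()        # world-to-camera transform
```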
Inherited mutations in BRCA1 and BRCA2 lead to a 20-50% lifetime risk of ovarian, tubal, or peritoneal carcinoma. Clinical recommendations for women with these genetic mutations include prophylactic removal of the ovaries and fallopian tubes by age 40, after child-bearing. Recent findings suggest that many presumed ovarian or peritoneal carcinomas arise in the fallopian tube epithelium. Although the survival rate exceeds 90% when ovarian cancer is detected early (Stage I), 70% of women present with advanced disease (Stage III/IV), when survival is less than 30%. Over the years, effective early detection of ovarian cancer has remained elusive, possibly because screening techniques have mistakenly focused on the ovary as the origin of ovarian carcinoma. Unlike the ovaries, the fallopian tubes are amenable to direct visual imaging without invasive surgery, using access through the cervix. To develop future screening protocols, we investigated using our 1.2-mm-diameter, forward-viewing scanning fiber endoscope (SFE) to image the luminal surfaces of the fallopian tube before laparoscopic surgical removal. Three anesthetized human subjects participated in our protocol development, which eventually led to 70-80% of the length of the fallopian tubes being imaged in scanning reflectance using red (632 nm), green (532 nm), and blue (442 nm) laser light. A hysteroscope with saline uterine distention was used to locate the tubal ostia. To facilitate passage of the SFE through the interstitial portion of the fallopian tube, an introducer catheter was inserted 1 cm through each ostium. During insertion, saline was flushed to reduce friction and provide clearer viewing. This is likely the first high-resolution intraluminal visualization of fallopian tubes.
KEYWORDS: Tumors, Luminescence, 3D modeling, 3D image processing, Reconstruction algorithms, 3D image reconstruction, Reflectivity, Brain, Surgery, 3D acquisition
Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor decreases survival, while removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with high dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) that acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal, physically sized model of a debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using the known trajectories of the robot arm, and the error of the reconstructed phantom averages within 0.67 mm relative to the model design.
Endoscopic visualization in brain tumor removal is challenging because tumor tissue is often visually indistinguishable from healthy tissue. Fluorescence imaging can improve tumor delineation, though it impairs reflectance-based visualization of gross anatomical features. To accurately navigate and resect tumors, we created an ultrathin, flexible scanning fiber endoscope (SFE) that acquires reflectance and fluorescence wide-field images at high resolution. Furthermore, our miniature imaging system is affixed to a robotic arm providing programmable motion of the SFE, from which we generate multimodal surface maps of the surgical field.
To test this system, synthetic phantoms of a debulked brain tumor were fabricated with fluorescent spots representing residual tumor. Three-dimensional (3D) surface maps of this surgical field are produced by moving the SFE over the phantom during concurrent reflectance and fluorescence imaging (30 Hz video). SIFT-based feature matching between reflectance images is used to select a subset of key frames (see the sketch following this abstract), which are then reconstructed in 3D by bundle adjustment. The resulting reconstruction yields a multimodal 3D map of the tumor region that can improve visualization and robotic path planning.
Efficiency in creating these maps is important, as they are generated multiple times during tumor margin clean-up. By using pre-programmed vector motions of the robot arm holding the SFE, the computer vision algorithms are optimized for efficiency by reducing search times. Preliminary results indicate that the time for creating these 3D multimodal maps of the surgical field can be reduced to one-third by using the known trajectories of the surgical robot moving the image-guided tool.
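As a concrete illustration of the SIFT-based key-frame selection mentioned above, here is a minimal sketch (assumed logic, not the authors' code): a new key frame is declared when feature overlap with the previous key frame drops below a threshold, preserving parallax for bundle adjustment while discarding redundant frames.

```python
# Key-frame selection by SIFT feature overlap between reflectance frames.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def is_new_keyframe(prev_gray, cur_gray, min_ratio=0.6):
    """True when too few features of the previous key frame survive matching."""
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test rejects ambiguous matches.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) < min_ratio * min(len(kp1), len(kp2))
```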
We present a high-resolution, high-speed three-dimensional (3-D) shape measurement technique that can reach the speed limit of a digital fringe projection system without significantly increasing the system cost. Instead of generating sinusoidal fringe patterns with a computer directly, they are produced by defocusing binary ones. By this means, with a relatively inexpensive camera, the 3-D shape measurement system can double the previously achievable maximum speed and reach the refresh rate of a digital-light-processing projector: 120 Hz.
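To illustrate why defocused binary patterns work, here is a minimal sketch (assumed fringe pitch and blur level) modeling projector defocus as a Gaussian blur: the blur suppresses the square wave's higher harmonics, so the projected pattern behaves like the sinusoid that phase-shifting algorithms expect.

```python
# Defocusing a binary fringe: Gaussian blur suppresses square-wave harmonics.
import numpy as np
from scipy.ndimage import gaussian_filter1d

N, pitch = 1024, 32                              # pattern width, fringe period (px)
x = np.arange(N)
binary = (np.sin(2 * np.pi * x / pitch) > 0).astype(float)   # 0/1 square fringe
defocused = gaussian_filter1d(binary, sigma=6, mode='wrap')  # defocus model

spec = np.abs(np.fft.rfft(defocused - defocused.mean()))
f1 = N // pitch                                  # fundamental frequency bin
print("3rd-harmonic leakage:", spec[3 * f1] / spec[f1])      # ~0.1% vs 33% unblurred
```

Because phase-shifting algorithms are insensitive to fringe amplitude, it is this suppression of the harmonics, rather than matching the sinusoid's amplitude, that preserves measurement accuracy.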
This paper presents a real-time 3-D, or 4-D, shape measurement technique that can reach the speed limit of a digital fringe projection system without significantly increasing the system cost. Instead of generating sinusoidal fringe patterns with a computer directly, they are produced by defocusing binary structured patterns. By this means, with a relatively inexpensive camera, the 3-D shape measurement system can double the previously achievable maximum speed and reach the refresh rate of a digital-light-processing (DLP) projector: 120 Hz.