Purpose: We present a markerless vision-based method for on-the-fly three-dimensional (3D) pose estimation of a fiberscope instrument to target pathologic areas in the endoscopic view during exploration.
Approach: A 2.5-mm-diameter fiberscope is inserted through the endoscope's operating channel and connected to an additional camera to provide complementary observation of a targeted area, acting as a multimodal magnifier. The 3D pose of the fiberscope is estimated frame by frame by maximizing the similarity between its silhouette (automatically detected in the endoscopic view using a deep neural network) and a cylindrical shape bound to a kinematic model reduced to three degrees of freedom. An alignment of the cylinder axis, based on Plücker coordinates computed from the straight edges detected in the image, makes convergence faster and more reliable.
Results: The method was validated on simulations with a virtual trajectory mimicking endoscopic exploration and on real images of a chessboard pattern acquired under different endoscopic configurations. The experiments demonstrated good accuracy and robustness, with errors of 0.33 ± 0.68 mm in position and 0.32 ± 0.11 deg in axis orientation, outperforming previous approaches. This allows multimodal image registration with sufficient accuracy (below 3 pixels).
Conclusion: Our pose estimation pipeline was evaluated on simulations and real patterns; the results demonstrate the robustness of the method and the potential of image-based tracking of fiber-optic instruments for pose estimation and multimodal registration. Because it is implemented entirely in software, it can be easily integrated into a routine clinical environment.
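To illustrate the geometry behind the axis-alignment step, the Python sketch below (function names are illustrative, not taken from the paper) shows how the two straight silhouette edges of a cylinder constrain its axis under a pinhole camera with intrinsics K: each image edge back-projects to a plane through the camera centre tangent to the cylinder, each such plane contains a generator line parallel to the axis, so the axis direction is orthogonal to both plane normals.

```python
import numpy as np

def backproject_line(K, l):
    """Normal of the plane through the camera centre that back-projects
    the homogeneous image line l (pixels u on the line satisfy l @ u = 0)."""
    n = K.T @ l
    return n / np.linalg.norm(n)

def axis_direction_from_silhouette(K, l_left, l_right):
    """Each silhouette edge back-projects to a plane tangent to the cylinder;
    both planes contain a generator parallel to the axis, so the axis
    direction is orthogonal to both plane normals."""
    n1, n2 = backproject_line(K, l_left), backproject_line(K, l_right)
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

def plucker(point, direction):
    """Plücker coordinates (d, m) of the line through `point` with `direction`."""
    d = direction / np.linalg.norm(direction)
    return d, np.cross(point, d)
```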
Color, shape (size and volume), and temperature are important clinical features for chronic wound monitoring that can impact diagnosis and treatment. Noninvasive 3D measurements are more accurate than 2D ones, but expensive equipment and the complexity of the setup prevent their use in hospitals. The use of affordable, lightweight devices with a straightforward acquisition protocol is therefore fundamental to provide a functional and useful evaluation of the wound. In this work, an automated methodology to generate color and thermal 3D models is presented using portable devices: a commercial mobile device with a connected portable thermal camera. The 3D model of the wound surface is estimated from a series of color images using structure-from-motion (SfM), while thermal information is overlaid onto the ulcer's relief for multimodal wound evaluation. The proposed methodology provides a proof of concept for multimodal wound monitoring in the hospital environment with a simple hand-held shooting protocol. The system was used successfully with five patients on wounds of various sizes and types.
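A minimal sketch of the thermal-overlay step, assuming the thermal camera pose relative to the SfM model is known (e.g. from calibration against the color camera); it projects mesh vertices into the thermal image and samples one temperature per vertex, ignoring occlusions. All names are illustrative.

```python
import numpy as np

def sample_thermal(vertices, K, R, t, thermal_img):
    """Project (N, 3) mesh vertices into the thermal camera and sample one
    temperature per vertex (no occlusion handling)."""
    cam = vertices @ R.T + t                 # world -> thermal camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective division
    h, w = thermal_img.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    temps = thermal_img[v, u].astype(float)
    temps[cam[:, 2] <= 0] = np.nan           # vertex behind the camera: no data
    return temps
```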
Computer-aided orthodontic treatment planning requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusal surface are first extracted by image segmentation and 3D curvature analysis. Then an iterative registration process is conducted, during which feature positions are refined, guided by the previously detected anatomic edges. Occlusal edge detection in the image is improved by an original algorithm that follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03 deg.
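The inner step of such an iterative registration is a least-squares rigid fit between matched feature points. A standard Kabsch/Procrustes solution is sketched below; the paper's full pipeline additionally refines the feature positions between iterations, which this sketch does not cover.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping (N, 3) src points onto
    dst, via the Kabsch algorithm."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```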
A significant recent breakthrough in medical imaging is the development of a new non-invasive modality based on multispectral and hyperspectral imaging that can be easily integrated into the operating room. This technology consists of collecting series of images at wavelength intervals of only a few nanometers, in which each pixel carries spectral information relevant to the scene under observation. Before becoming of practical interest to the clinician, such a system should meet important requirements. First, it should enable true reflectance measurements and high-quality images, so as to provide valuable physical data after spatial and spectral calibration. Second, fast band-pass scanning and a smart interface are needed for intra-operative use. Finally, experimentation is required to develop expert knowledge for hyperspectral image interpretation and result display on RGB screens, to assist the surgeon with tissue detection and diagnosis during an intervention. This paper focuses mainly on the first two specifications, applied to a liquid crystal tunable filter (LCTF) based visible and near-infrared spectral imaging system. The system consists of an illumination unit and a spectral imager that includes a monochrome camera, two LCTFs, and a fixed focal lens, together with a computer running the data acquisition software. The system can capture hyperspectral images in the 400–1100 nm spectral range. Preclinical experiments indicated that anatomical tissues can be distinguished, especially in the near-infrared bands, which suggests a strong potential of hyperspectral imaging to assist surgeons efficiently.
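For the reflectance-measurement requirement, a common per-band calibration converts raw counts to reflectance using white-reference and dark-current acquisitions. The sketch below shows this standard flat-field correction; the system's actual calibration may include further spatial and spectral corrections beyond it.

```python
import numpy as np

def reflectance_cube(raw, white, dark):
    """Per-band flat-field correction: convert raw counts to reflectance
    using white-reference and dark-current cubes of shape (H, W, bands)."""
    denom = np.clip(white - dark, 1e-6, None)   # avoid division by zero
    return np.clip((raw - dark) / denom, 0.0, None)
```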
In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scanner or a laser scanner, the resulting 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to compute the registration automatically from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix and the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
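A hedged sketch of the reprojection-error minimization at the core of this visual registration: a 6-DoF rigid pose is optimized so that matched 3D model points project onto their 2D detections. Names and the parameterization are illustrative; the paper additionally optimizes the projection matrix and the 2D/3D point positions jointly.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, pts3d, pts2d, K):
    """Stacked 2D residuals for a 6-DoF pose: 3 rotation-vector
    components followed by 3 translation components."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = pts3d @ R.T + params[3:]        # model -> camera frame
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]     # perspective division
    return (proj - pts2d).ravel()

# pts3d: singular points on the mandible model; pts2d: their 2D detections.
# pose = least_squares(reprojection_residuals, np.zeros(6), args=(pts3d, pts2d, K))
```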
Accurate wound assessment is a critical task for patient care and health cost reduction at the hospital, and even more so in the context of clinical studies in the laboratory. This task, entirely devoted to nurses, still relies on manual and tedious practices. Wound shape is measured with rulers, tracing paper, or more rarely with alginate castings and serum injection. The proportions of wound tissues are also estimated by a qualitative visual assessment based on the red-yellow-black code. Following our previous work on complete 3D wound assessment using a simple freehand digital camera, we explore here the adaptation of this tool to wounds artificially created for experimental purposes. It turns out that tissue uniformity and flatness allow a simplified approach but require multispectral imaging for enhanced wound delineation. We demonstrate that, in this context, a simple active contour method can successfully replace more complex tools such as SVM supervised classification, since no training step is required and a single shot is enough to cope with perspective projection errors. Moreover, using the full spectral response of the tissue, rather than only the RGB components, provides higher discrimination for separating healed epithelial tissue from granulation tissue. This research is part of a comparative preclinical study on healing wounds, which aims to compare the efficiency of specific medical honeys with classical pharmaceuticals for wound care. Results revealed that medical honey competes with more expensive pharmaceuticals.
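As a sketch of the simplified delineation approach, the snippet below uses scikit-image's snake (active contour) implementation on a single spectral band, initialized with a circle around the wound. The band choice, initialization, and parameter values are illustrative, not the study's settings.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def delineate_wound(band_img, center, radius):
    """Fit a closed snake to the wound border on one spectral band,
    starting from a circle around the wound centre ((row, col) pairs)."""
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.stack([center[0] + radius * np.sin(s),
                     center[1] + radius * np.cos(s)], axis=1)
    smooth = gaussian(band_img, sigma=3, preserve_range=True)  # denoise first
    return active_contour(smooth, init, alpha=0.015, beta=10, gamma=0.001)
```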
In many machine vision applications, it is important that recorded colors remain constant with respect to the real-world scene, even under changes of illuminants and cameras. Contrary to the human visual system, a machine vision system exhibits inadequate adaptability to variations in lighting conditions. The automatic white balance control available in commercial cameras is not sufficient to provide reproducible color classification. We address this problem of color constancy on a large image database acquired with varying digital cameras and lighting conditions. A device-independent color representation can be obtained by applying a chromatic adaptation transform, derived from a calibrated color checker pattern included in the field of view. Instead of using the standard Macbeth color checker, we suggest selecting judicious colors to design a customized pattern from contextual information. A comparative study demonstrates that this approach ensures stronger constancy of the colors of interest before vision control, thus enabling a wide variety of applications.
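A minimal sketch of deriving a correction from a checker pattern: a 3×3 linear transform is fitted by least squares between the measured and reference patch colors. This is a simplified stand-in; true chromatic adaptation transforms (e.g. von Kries) scale channels in a cone-response space rather than raw RGB.

```python
import numpy as np

def color_correction_matrix(measured, reference):
    """3x3 linear transform fitted by least squares over the checker
    patches, mapping measured patch RGBs (N, 3) to reference values (N, 3)."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M.T                                   # apply as M @ rgb_column

def correct_image(img, M):
    """Apply the correction to an (H, W, 3) image with values in [0, 1]."""
    return np.clip(img.reshape(-1, 3) @ M.T, 0.0, 1.0).reshape(img.shape)
```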
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple freehand digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labelings, and segmentation-driven classification based on support vector machines. The tool thus developed remains stable under changes of lighting conditions, viewpoint, and camera, achieving accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3D and color wound assessment system, significantly improves the monitoring of the healing process: it achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference built from image labeling by a college of experts.
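A skeletal view of the SVM classification stage with scikit-learn; the feature dimensions, placeholder data, and hyperparameters are illustrative only, standing in for the tool's color and texture descriptors fitted against the merged expert labeling.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder descriptors: one row of color/texture features per region.
rng = np.random.default_rng(0)
X_train = rng.random((200, 16))             # e.g. color histograms + texture stats
y_train = rng.integers(0, 3, 200)           # e.g. granulation / slough / necrosis

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
labels = clf.predict(rng.random((20, 16)))  # label newly segmented regions
```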
KEYWORDS: Image segmentation, Tissues, RGB color model, 3D modeling, Databases, Image classification, 3D metrology, Light sources and illumination, Data modeling, Natural surfaces
This work is part of the ESCALE project, dedicated to the design of a complete 3D and color wound assessment tool using a simple hand-held digital camera. The first part was concerned with the computation of a 3D model for wound measurements using uncalibrated vision techniques. This article presents the second part, which deals with the color classification of wound tissues, a prerequisite before combining shape and color analysis in a single tool for real tissue surface measurements. We have adopted an original approach based on unsupervised segmentation prior to classification, to improve the robustness of the labeling stage. A database of different tissue types is first built, and a simple but efficient color correction method is applied to reduce color shifts due to uncontrolled lighting conditions. A ground truth is provided by the fusion of several clinicians' manual labelings. Then, color and texture descriptors are extracted from the tissue regions of the image database for the learning stage of an SVM region classifier, with the aid of this ground truth. The output of this classifier provides a prediction model, later used to label the segmented regions of the database. Finally, we apply unsupervised color region segmentation on wound images and classify the tissue regions. Compared to the ground truth, automatic segmentation-driven classification provides an overlap score (66% to 88%) on tissue regions higher than that obtained by clinicians.
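Such overlap scores are commonly computed as an intersection-over-union between the predicted and ground-truth masks of each tissue class; the sketch below uses that assumption (the paper may define the score differently).

```python
import numpy as np

def overlap_score(pred, gt, label):
    """Overlap between predicted and ground-truth label maps for one tissue
    class, here as intersection over union (Jaccard index)."""
    a, b = pred == label, gt == label
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else float("nan")
```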