We propose a novel and efficient initialization method for generalized facial landmark localization, based on an unsupervised roll-angle estimation with B-spline models. We first show that the roll angle is crucial for accurate landmark localization. We therefore develop an unsupervised roll-angle estimation by adopting a joint 1st-order B-spline model, which is robust to intensity variations and generic enough to be combined with various face detectors. The method consists of three steps. First, the scale-normalized Laplacian of Gaussian operator is applied to a bounding box generated by a face detector to extract facial feature segments. Second, a joint 1st-order B-spline model is fitted to the extracted facial feature segments, using an iterative optimization method. Finally, the roll angle is estimated from the aligned segments. We evaluate four state-of-the-art landmark localization schemes with the proposed roll-angle estimation as initialization on a benchmark dataset. The proposed method boosts the performance of landmark localization in general, especially for cases with large head poses. Moreover, the proposed unsupervised roll-angle estimation outperforms standard supervised methods, such as random forest and support vector regression, by 41.6% and 47.2%, respectively.
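As an illustration of the first step, the sketch below applies a scale-normalized Laplacian of Gaussian at a few scales inside a face crop and binarizes the response into candidate facial feature segments. It is a minimal sketch assuming NumPy/SciPy; the scales, the threshold, and the helper names (`scale_normalized_log`, `feature_segments`) are illustrative and not the paper's settings.

```python
# Minimal sketch of scale-normalized LoG feature-segment extraction,
# assuming a grayscale face crop `face` from any face detector.
import numpy as np
from scipy import ndimage

def scale_normalized_log(face, sigmas=(2.0, 3.0, 4.0)):
    """Per-pixel maximum of sigma^2 * |LoG| over several scales."""
    face = face.astype(np.float32)
    responses = [
        (s ** 2) * np.abs(ndimage.gaussian_laplace(face, sigma=s))
        for s in sigmas
    ]
    return np.max(responses, axis=0)

def feature_segments(face, rel_threshold=0.5):
    """Binarize the LoG response into candidate facial feature segments."""
    response = scale_normalized_log(face)
    mask = response > rel_threshold * response.max()
    labels, n_segments = ndimage.label(mask)   # connected components
    return labels, n_segments
```

The B-spline model would then be fitted to the coordinates of these labeled segments in a subsequent, separate step.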
Discomfort detection for infants is essential in the healthcare domain, since infants lack the ability to verbalize their pain and discomfort. In this paper, we propose a robust and generic discomfort detection method for infants, exploiting a novel and efficient initialization method for facial landmark localization based on an unsupervised roll-angle estimation. The roll angle is estimated by fitting a 1st-order B-spline model to facial features obtained with the scale-normalized Laplacian of Gaussian operator. The proposed method can be adopted for both daylight and infrared-light images and supports real-time implementation. Experimental results show that the proposed method improves the performance of discomfort detection by 6.0% and 4.2% in AUC and AP, respectively, for daylight images, and by 6.9% and 3.8% for infrared-light images.
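The reported AUC and AP figures are standard ranking metrics for a binary discomfort classifier; a minimal sketch of how such scores can be computed with scikit-learn is shown below, using toy placeholder labels and scores rather than any real data.

```python
# Hedged sketch of the AUC/AP evaluation for a binary discomfort classifier.
from sklearn.metrics import roc_auc_score, average_precision_score

y_true = [0, 0, 1, 1, 0, 1]                  # 1 = discomfort, 0 = comfort (toy data)
y_score = [0.1, 0.4, 0.8, 0.65, 0.2, 0.9]    # classifier confidence per frame

auc = roc_auc_score(y_true, y_score)
ap = average_precision_score(y_true, y_score)
print(f"AUC = {auc:.3f}, AP = {ap:.3f}")
```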
Ultrasound (US) has been increasingly used during interventions, such as cardiac catheterization. To accurately identify the catheter inside US images, extra training for physicians and sonographers is needed. Consequently, automated segmentation of the catheter in US images, combined with an optimized presentation of the view to the physician, can improve the efficiency, safety, and outcome of interventions. For cardiac catheterization, three-dimensional (3-D) US imaging is potentially attractive because it is a radiation-free modality and provides richer spatial information. However, due to the limited spatial resolution of 3-D cardiac US and the complex anatomical structures inside the heart, image-based catheter segmentation is challenging. We propose a cardiac catheter segmentation method for 3-D US data based on image processing techniques. Our method first applies a voxel-based classification using newly designed multi-scale and multi-definition features, which provide a robust catheter voxel segmentation in 3-D US. Second, a modified catheter model fitting is applied to segment the curved catheter in 3-D US images. The proposed method is validated with extensive experiments on different in-vitro, ex-vivo, and in-vivo datasets. It segments the catheter with an average tip-point error smaller than the catheter diameter (1.9 mm) in the volumetric images. Based on automated catheter segmentation combined with optimal viewing, physicians no longer have to interpret US images themselves and can focus on the procedure, improving the quality of the cardiac intervention.
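To illustrate the model-fitting step, the sketch below fits a low-order 3-D polynomial curve to classified catheter voxels, parametrized along the principal axis of the voxel cloud. This is a hedged approximation of the idea, not the paper's modified fitting algorithm; the coordinate layout, polynomial order, and helper name `fit_catheter_curve` are assumptions.

```python
# Sketch: fit a curved catheter model (3-D polynomial) to classified voxels.
import numpy as np

def fit_catheter_curve(voxels, order=2, samples=50):
    """voxels: (N, 3) array of z, y, x coordinates of catheter-labeled voxels."""
    voxels = np.asarray(voxels, dtype=np.float64)
    # Parametrize the points along the principal axis of the voxel cloud.
    centered = voxels - voxels.mean(axis=0)
    principal = np.linalg.svd(centered, full_matrices=False)[2][0]
    t = centered @ principal
    # Fit each coordinate as a polynomial of the parameter t.
    coeffs = [np.polyfit(t, voxels[:, d], order) for d in range(3)]
    ts = np.linspace(t.min(), t.max(), samples)
    return np.stack([np.polyval(c, ts) for c in coeffs], axis=1)  # (samples, 3)
```

In practice, a robust variant (e.g., fitting on inlier subsets) would be preferred, since the classifier output typically contains outlier voxels.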
The usage of three-dimensional ultrasound (3D US) during image-guided interventions, e.g., cardiac catheterization, has increased recently. To accurately and consistently detect and track catheters or guidewires in the US image during the intervention, additional training of the sonographer or physician is needed. As a result, image-based catheter detection can help the sonographer interpret the position and orientation of a catheter in the 3D US volume. However, due to the limited spatial resolution of 3D cardiac US and the complex anatomical structures inside the heart, image-based catheter detection is challenging. In this paper, we study 3D image features for image-based catheter detection using supervised learning methods. To better describe the catheter in 3D US, we extend the Frangi vesselness feature into a multi-scale Objectness feature and a Hessian element feature, which extract more discriminative information about catheter voxels in a 3D US volume. In addition, we introduce a multi-scale statistical 3D feature to enrich and enhance the information for voxel-based classification. Extensive experiments on several in-vitro and ex-vivo datasets show that the proposed features improve the precision to at least 69% when compared to the traditional multi-scale Frangi features (from 45% to 76% at a high recall rate of 75%). With respect to clinical application, the high accuracy of the voxel-based classification enables more robust catheter detection in complex anatomical structures.
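The Frangi-style and Hessian-element features are built on the scale-normalized Hessian eigenvalues of the volume; the sketch below shows one plausible way to compute them per voxel at several scales with NumPy/SciPy. The scales and the finite-difference Hessian construction are illustrative assumptions, not the paper's exact feature definitions.

```python
# Sketch: multi-scale Hessian eigenvalue features for a 3-D US volume.
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(volume, sigma):
    """Scale-normalized Hessian eigenvalues per voxel at one scale."""
    vol = ndimage.gaussian_filter(volume.astype(np.float32), sigma)
    grads = np.gradient(vol)
    H = np.empty(volume.shape + (3, 3), dtype=np.float32)
    for i, gi in enumerate(grads):
        second = np.gradient(gi)            # second derivatives along each axis
        for j in range(3):
            H[..., i, j] = (sigma ** 2) * second[j]
    return np.linalg.eigvalsh(H)            # (D, H, W, 3), sorted eigenvalues

def multiscale_hessian_feature(volume, sigmas=(1.0, 2.0, 3.0)):
    # Stack eigenvalues over scales into a per-voxel feature vector.
    feats = [hessian_eigenvalues(volume, s) for s in sigmas]
    return np.concatenate(feats, axis=-1)   # (D, H, W, 3 * len(sigmas))
```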
3D ultrasound (US) transducers will improve the quality of image-guided medical interventions if automated detection of the needle becomes possible. Image-based detection of the needle is challenging due to the presence of other echogenic structures in the acquired data, the inconsistent visibility of needle parts, and the low image quality of US. Since the currently applied approaches for needle detection classify each voxel individually, they do not consider the global relations between voxels. In this work, we introduce coherent needle labeling by applying dense conditional random fields over a volume, along with 3D space-frequency features. The proposed approach includes long-distance dependencies between voxel pairs according to their similarity in feature space and their spatial distance. This post-processing stage leads to a better label assignment of the volume voxels and a more compact and coherent segmented region. Our ex-vivo experiments, based on measuring the F-1, F-2, and IoU scores, show that the performance improves by a significant 10-20% compared with using only a linear SVM as the baseline for voxel classification.
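A possible realization of the dense-CRF post-processing with the pydensecrf package is sketched below, assuming per-voxel needle probabilities (e.g., from the linear-SVM baseline). The kernel widths, compatibility weight, and the restriction to a spatial smoothness kernel are simplifying assumptions; the feature-similarity kernel described in the paper could be added analogously via `create_pairwise_bilateral`.

```python
# Sketch: dense-CRF refinement of per-voxel needle probabilities.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax, create_pairwise_gaussian

def refine_labels(prob, n_iters=5):
    """prob: (D, H, W) needle probability per voxel in [0, 1]."""
    softmax = np.stack([1.0 - prob, prob], axis=0)        # (2, D, H, W)
    unary = unary_from_softmax(softmax)                   # (2, n_voxels)
    crf = dcrf.DenseCRF(int(np.prod(prob.shape)), 2)
    crf.setUnaryEnergy(unary)
    # Spatial smoothness kernel: nearby voxels prefer the same label.
    pairwise = create_pairwise_gaussian(sdims=(3, 3, 3), shape=prob.shape)
    crf.addPairwiseEnergy(pairwise, compat=3)
    q = np.array(crf.inference(n_iters))                  # (2, n_voxels)
    return q.argmax(axis=0).reshape(prob.shape)           # refined label volume
```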
Ultrasound imaging is employed for needle guidance in various minimally invasive procedures, such as biopsy guidance, regional anesthesia, and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation on several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), while better robustness and confidence were confirmed in practical experiments.
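As an illustration of the multi-resolution Gabor analysis, the sketch below builds complex 3-D Gabor kernels (a Gaussian envelope modulated by a plane wave) and collects their magnitude responses over several frequencies. The frequencies, bandwidth, kernel size, and orientation are illustrative and not the combination identified in the paper.

```python
# Sketch: multi-resolution 3-D Gabor responses of an ultrasound volume.
import numpy as np
from scipy import ndimage

def gabor_kernel_3d(frequency, direction, sigma, size=15):
    """Complex 3-D Gabor kernel: Gaussian envelope times a plane wave."""
    half = size // 2
    z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
    direction = np.asarray(direction, dtype=np.float64)
    direction /= np.linalg.norm(direction)
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    phase = 2.0 * np.pi * frequency * (x * direction[2] +
                                       y * direction[1] +
                                       z * direction[0])
    return envelope * np.exp(1j * phase)

def gabor_responses(volume, frequencies=(0.05, 0.1, 0.2),
                    direction=(0.0, 0.0, 1.0), sigma=3.0):
    vol = volume.astype(np.float32)
    # Magnitude response per frequency forms the multi-resolution feature set.
    return [np.abs(ndimage.convolve(vol, np.real(k)) +
                   1j * ndimage.convolve(vol, np.imag(k)))
            for k in (gabor_kernel_3d(f, direction, sigma)
                      for f in frequencies)]
```

In a full detector, several orientations would be combined and the responses fed into the voxel classifier.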
KEYWORDS: Wavelets, Ultrasonography, 3D modeling, Detection and tracking algorithms, 3D image processing, Transducers, Breast, Visibility, 3D acquisition, Visualization
Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, e.g., for biopsy guidance, regional anesthesia, or brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to poor needle visibility and a limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analysis. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing a Gabor transformation. Both algorithms use supervised classification to pre-select candidate needle voxels in the volume and then fit a needle model to the selected voxels. The major differences between the two approaches lie in the extraction of the feature vectors for classification and the selection of the fitting criterion. We evaluate the performance of the two techniques using manually annotated ground truth in several ex-vivo situations of different complexity, containing three different needle types at various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, guiding the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better at distinguishing needle voxels in all datasets. Moreover, the complete processing chain of the Gabor-based method outperforms line filtering in the accuracy and stability of the detection results.
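The voxel-level comparison relies on precision and recall against the manually annotated ground truth; a minimal sketch of such a benchmark score with scikit-learn is shown below, with the mask arrays as placeholders.

```python
# Sketch: voxel-level precision/recall of a needle detector against ground truth.
import numpy as np
from sklearn.metrics import precision_score, recall_score

def voxel_scores(pred_mask, gt_mask):
    """Both masks are boolean 3-D arrays of the same shape."""
    y_pred = pred_mask.ravel().astype(int)
    y_true = gt_mask.ravel().astype(int)
    return (precision_score(y_true, y_pred, zero_division=0),
            recall_score(y_true, y_pred, zero_division=0))
```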
One of the major challenges for applications dealing with the 3D concept is the real-time execution of the algorithms. Besides this, for indoor environments, perceiving the geometry of the surrounding structures plays a prominent role in application performance. Since indoor structures mainly consist of planar surfaces, fast and accurate detection of such features has a crucial impact on the quality and functionality of 3D applications, e.g., decreasing model size (decimation), enhancing localization, mapping, and semantic reconstruction. The available planar-segmentation algorithms are mostly based on surface normals and/or curvatures, which makes them computationally expensive and challenging to run in real time. In this paper, we introduce a fast planar-segmentation method for depth images that avoids surface-normal calculations. First, the proposed method searches for 3D edges in a depth image and finds the lines between the identified edges. Second, it merges all the points on each pair of intersecting lines into a plane. Finally, various enhancements (e.g., filtering) are applied to improve the segmentation quality. The proposed algorithm is capable of handling VGA-resolution depth images at a 6-FPS frame rate with a single-threaded implementation. Furthermore, due to the multi-threaded design of the algorithm, we achieve a factor of 10 speedup when deploying a GPU implementation.
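As a sketch of the first step, the snippet below marks depth discontinuities by thresholding the gradient magnitude of a depth image, avoiding any surface-normal computation; the threshold and the gradient-based edge definition are illustrative assumptions rather than the paper's exact edge search.

```python
# Sketch: 3-D edge search in a depth image via depth-discontinuity detection.
import numpy as np

def depth_edges(depth, threshold=0.05):
    """depth: (H, W) array in meters; returns a boolean edge mask."""
    gy, gx = np.gradient(depth.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# The subsequent steps (finding lines between edges and merging points on
# intersecting line pairs into planes) would operate on this edge mask.
```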