This article deals with machine vision techniques for detecting timber singularities relevant to grading. Timber used for architectural purposes must satisfy certain mechanical requirements and must therefore be mechanically graded to assure the manufacturer that the product complies with these requirements. However, timber contains many singularities, such as knots, cracks, and juvenile wood, which influence its mechanical behavior, so identifying these singularities is of great importance. We address the problem of timber defect segmentation and classification and propose a method to detect defects such as cracks and knots using a bag-of-words approach. Extensive experimental results show that the proposed methods are efficient and can improve the performance of grading machines. We also propose an automated method for detecting transverse knots, which allows the computation of knot depth ratio (KDR) images. Finally, we propose a method for detecting juvenile wood regions based on tree ring detection and estimation of the tree's pith. The experimental results show that the proposed methods achieve excellent results for knot detection, with recalls of 0.94 and 0.95 on two datasets, as well as for KDR image computation and juvenile wood detection.
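Below is a minimal sketch of a bag-of-words defect classifier of the kind this abstract describes. The abstract does not specify the descriptor, vocabulary size, or classifier, so SIFT, a 200-word vocabulary, and a linear SVM are assumptions for illustration only.

```python
# Hedged sketch: bag-of-words classification of wood patches (knot vs. clear wood).
# SIFT, the 200-word vocabulary, and the linear SVM are assumed, not confirmed.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

N_WORDS = 200  # assumed vocabulary size

def sift_descriptors(gray):
    """Extract SIFT descriptors from a grayscale board image."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bow_histogram(desc, kmeans):
    """Quantize descriptors against the vocabulary and return an
    L1-normalized word-occurrence histogram."""
    if len(desc) == 0:
        return np.zeros(kmeans.n_clusters)
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

def train_bow_classifier(train_imgs, train_labels):
    """train_imgs: grayscale patches; train_labels: 1 = defect, 0 = clear wood."""
    all_desc = np.vstack([sift_descriptors(im) for im in train_imgs])
    kmeans = MiniBatchKMeans(n_clusters=N_WORDS, random_state=0).fit(all_desc)
    X = np.array([bow_histogram(sift_descriptors(im), kmeans) for im in train_imgs])
    return kmeans, LinearSVC().fit(X, train_labels)
```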
Malignant melanoma is the most dangerous type of skin cancer, yet it is also among the most treatable, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, computer-aided diagnosis (CAD) systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi in dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires tuning a set of parameters, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, so errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding that does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forest classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with a sensitivity of 100% and a specificity of 90.3%, using a dictionary of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with a sensitivity of 100% and a specificity of 71.3%, for a smaller dictionary of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
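A minimal sketch of the sparse-coding pipeline follows. The dictionary size (800 atoms) and sparsity level (2) match the reported best SIFT setting, but the max-pooling of per-descriptor codes into one image vector is an assumption, since the abstract does not restate the pooling scheme.

```python
# Hedged sketch: sparse coding of local descriptors + Random Forest classification.
# Max pooling of the sparse codes is assumed; only the 800-atom / sparsity-2
# setting is taken from the abstract.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import RandomForestClassifier

def fit_dictionary(descriptor_stack, n_atoms=800, sparsity=2):
    """Learn a dictionary and configure OMP coding at the given sparsity level."""
    return MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity,
        random_state=0,
    ).fit(descriptor_stack)

def encode_image(dico, descriptors):
    """Sparse-code one image's local descriptors and max-pool them into a
    single fixed-length vector (assumed pooling step)."""
    codes = dico.transform(descriptors)  # (n_descriptors, n_atoms)
    return np.abs(codes).max(axis=0)     # (n_atoms,)

def train(per_image_descs, labels):
    """per_image_descs: list of descriptor arrays; labels: 1 = melanoma, 0 = nevus."""
    dico = fit_dictionary(np.vstack(per_image_descs))
    X = np.array([encode_image(dico, d) for d in per_image_descs])
    return dico, RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```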
Diabetic Macular Edema (DME) is the leading cause of blindness among diabetic patients worldwide. It is characterized by an accumulation of fluid in the macula, leading to swelling. Early detection of the disease helps prevent further loss of vision, so automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, this paper proposes a pipeline for detecting DME in OCT volumes. The method is based on anomaly detection using a Gaussian Mixture Model (GMM). It starts by pre-processing the B-scans: resizing, flattening, filtering, and feature extraction. Both intensity and Local Binary Pattern (LBP) features are considered, and the dimensionality of the extracted features is reduced using PCA. In the last stage, a GMM is fitted to features from normal volumes. During testing, features extracted from the test volume are evaluated against the fitted model, and the volume is classified based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets, achieving a sensitivity and specificity of 80% and 93% on the first dataset and 100% and 80% on the second. Moreover, experiments show that the proposed method achieves better classification performance than other recently published works.
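The anomaly-detection stage can be sketched as below. The number of mixture components, the likelihood threshold, and the outlier-count rule are placeholders, not the paper's values.

```python
# Hedged sketch: PCA-reduced B-scan features from normal volumes fit a GMM;
# a test volume is flagged as DME when too many B-scans score as outliers.
# n_components, ll_threshold, and max_outliers are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_normal_model(normal_features, n_components=4, var_keep=0.95):
    """Fit PCA + GMM on features extracted from healthy B-scans."""
    pca = PCA(n_components=var_keep).fit(normal_features)  # keep 95% variance
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(pca.transform(normal_features))
    return pca, gmm

def classify_volume(pca, gmm, volume_features, ll_threshold, max_outliers=5):
    """Score each B-scan of a test volume; declare DME when the number of
    low-likelihood B-scans exceeds max_outliers."""
    ll = gmm.score_samples(pca.transform(volume_features))
    n_outliers = int((ll < ll_threshold).sum())
    return n_outliers > max_outliers, n_outliers
```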
Prostate cancer has been reported as the second most frequently diagnosed cancer in men worldwide. In recent decades, new MRI-based imaging techniques have been developed to improve the diagnostic task of radiologists. In practice, diagnosis can be affected by multiple factors that reduce the chance of detecting potential lesions. Computer-aided detection and computer-aided diagnosis systems have been designed to address these needs and assist radiologists in their daily duties. In this study, we propose an automatic method to detect prostate cancer in a per-voxel manner using 3T multi-parametric Magnetic Resonance Imaging (MRI) and a gradient boosting classifier. The best performance is obtained using all multi-parametric information together with zonal information, yielding a sensitivity of 94.7%, a specificity of 93.0%, and an Area Under the Curve (AUC) of 0.968.
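A minimal sketch of per-voxel gradient boosting follows. The feature columns stand in for the multi-parametric MRI channels plus zonal information; the hyper-parameters are illustrative defaults, not the study's tuned values.

```python
# Hedged sketch: per-voxel prostate cancer classification with gradient boosting.
# Hyper-parameters are illustrative; feature columns are assumed stand-ins for
# the multi-parametric MRI channels plus a zone encoding.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def train_and_score(X_train, y_train, X_test, y_test):
    """X: (n_voxels, n_features) of MRI-derived values + zone encoding;
    y: (n_voxels,) labels, 1 = cancer, 0 = healthy."""
    clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.1,
                                     max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    prob = clf.predict_proba(X_test)[:, 1]   # per-voxel cancer probability
    return clf, roc_auc_score(y_test, prob)  # AUC, the metric reported above
```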
Wood singularity detection is a primary step in improving wood grading. Our approach is purely machine-vision based: the main objective is to compute physical properties such as density, modulus of elasticity (MOE), and modulus of rupture (MOR) from wood surface images. Knots are among the main singularities that directly affect wood strength. Hence, our target is to detect knots and classify them into transverse and non-transverse ones; the Knot Depth Ratio (KDR) is then computed from the detected transverse knots and used to improve the wood mechanical model. Our technique is based on colour image analysis, where knots are detected by means of contrast intensity transformation and morphological operations. KDR is then computed from the transverse knots and clear-wood densities, and finally MOE and MOR are computed using the KDR images. The accuracy of the number of knots found, their locations, and the MOE and MOR values has been validated on a dataset of 252 images for which these values were calculated manually. To the best of our knowledge, our approach is the first purely machine-vision-based method to compute KDR, MOE, and MOR.
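A minimal sketch of the knot-detection stage is given below, assuming contrast stretching plus Otsu thresholding and morphological cleaning; the kernel sizes and threshold rule are assumptions, and the KDR/MOE/MOR formulas are not reproduced since the abstract does not state them.

```python
# Hedged sketch: contrast intensity transformation + morphological operations to
# find dark knot-like blobs, as the abstract outlines. All parameters are assumed.
import cv2
import numpy as np

def detect_knots(bgr_image, min_area=50):
    """Return (x, y, w, h) bounding boxes of knot-like blobs on a board image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)  # contrast transform
    # Knots appear darker than clear wood: invert, then threshold and clean.
    _, mask = cv2.threshold(255 - stretched, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```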
Diabetic retinopathy (DR) and age-related macular degeneration (ARMD) are among the major causes of visual impairment worldwide. DR is mainly characterized by small red spots, namely microaneurysms, and bright lesions, specifically exudates, whereas ARMD is mainly identified by tiny yellow or white deposits called drusen. Since exudates might be the only visible signs of early diabetic retinopathy, there is an increasing demand for automatic diagnosis of retinopathy. Exudates and drusen may share similar appearances; as a result, discriminating between them plays a key role in improving screening performance. In this research, we investigate the role of the bag-of-words approach in the automatic diagnosis of diabetic retinopathy. Initially, the color retinal images are preprocessed in order to reduce intra- and inter-patient variability. Subsequently, SURF (Speeded Up Robust Features), HOG (Histogram of Oriented Gradients), and LBP (Local Binary Patterns) descriptors are extracted. We propose single-based and multiple-based methods to construct the visual dictionary, in the latter case combining the histograms of word occurrences from each dictionary into a single histogram. Finally, this histogram representation is fed into a support vector machine with a linear kernel for classification. The introduced approach is evaluated for the automatic diagnosis of normal and abnormal color retinal images with bright lesions such as drusen and exudates, on 430 color retinal images drawn from six publicly available datasets and one local dataset. The mean accuracies achieved are 97.2% and 99.77% for the single-based and multiple-based dictionaries, respectively.
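The multiple-dictionary representation can be sketched as follows: one visual vocabulary per descriptor type, with the per-dictionary word histograms concatenated into a single vector for a linear SVM. The vocabulary size is an assumption.

```python
# Hedged sketch: multiple-dictionary bag-of-words with histogram concatenation,
# feeding a linear-kernel SVM as described above. n_words is assumed.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

def build_dictionaries(desc_sets, n_words=300):
    """desc_sets: {'surf': ..., 'hog': ..., 'lbp': ...} stacked training
    descriptors; returns one k-means vocabulary per descriptor type."""
    return {name: MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(d)
            for name, d in desc_sets.items()}

def multi_histogram(dictionaries, image_descs):
    """Concatenate per-dictionary word-occurrence histograms into one vector."""
    parts = []
    for name, km in dictionaries.items():
        words = km.predict(image_descs[name])
        h = np.bincount(words, minlength=km.n_clusters).astype(np.float64)
        parts.append(h / max(h.sum(), 1.0))  # L1-normalize each histogram
    return np.concatenate(parts)

# Downstream classification: SVC(kernel='linear') fitted on the concatenated
# histograms, matching the linear-kernel SVM described in the abstract.
```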
Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and appear as round dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique based on Singular Value Decomposition (SVD) of the fundus image. A Hessian-based candidate selection algorithm is then applied to extract image regions that are most likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity-Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection technique against state-of-the-art methods, as well as the promise of the proposed descriptors for localizing MAs in fundus images.
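As an illustration of Hessian-based candidate selection, the sketch below uses scikit-image's Determinant-of-Hessian blob detector as a stand-in; the paper's own candidate selector, its SVD contrast enhancement, and the SURF/Radon descriptors are not reproduced, and the sigma range and threshold are assumptions.

```python
# Hedged sketch: Determinant-of-Hessian blob detection as a stand-in for the
# paper's Hessian-based MA candidate selection. Parameters are assumed.
import numpy as np
from skimage.feature import blob_doh

def ma_candidates(green_channel):
    """Detect small round blobs (MA candidates) in the green channel of a
    fundus image. MAs are dark on a bright background, so invert first."""
    img = 1.0 - green_channel.astype(np.float64) / 255.0
    blobs = blob_doh(img, min_sigma=1, max_sigma=6, threshold=0.002)
    return blobs  # one (row, col, sigma) triple per candidate region
```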
Diabetic macular edema (DME), characterized by discrete white-yellow lipid deposits due to vascular leakage, is one of the most severe complications seen in diabetic patients, causing vision loss in the affected areas. Such vascular leakage can be treated by laser surgery, and regular follow-up with laser photocoagulation can reduce the risk of blindness by 90%. In an automated retina screening system, it is thus crucial to segment such hard exudates accurately and to register images taken over time to a reference coordinate system so that the necessary follow-ups are more precise. We introduce a novel ethnicity-based statistical atlas for exudate segmentation and follow-up. Ethnic background plays a significant role in the retinal pigment epithelium, the visibility of the choroidal vasculature, and the overall retinal luminance of patients and their retinal images. Such a statistical atlas can thus help to provide a solution, simplify the image processing steps, and increase the detection rate. In this paper, bright lesion segmentation is investigated and experimentally verified against a gold standard built from African American fundus images.
Forty automatically generated landmark points on the major vessel arches, together with the macula and optic disc centers, are used to warp the retinal images. PCA is used to obtain a mean shape of the major retinal arches (both lower and upper). The mean coordinates of the macula and optic disc center are then added, resulting in 42 landmark points that together provide a reference coordinate frame (the atlas coordinate frame) for the images. Retinal fundus images of an ethnic group without any artifacts or lesions are warped to this reference coordinate frame, from which we obtain a mean image representing a statistical measure of the chromatic distribution of the eye pigments of that particular ethnic group.
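A minimal sketch of this atlas construction, assuming a piecewise-affine landmark warp (the abstract does not name the warping model) and upstream landmark extraction:

```python
# Hedged sketch: warp each lesion-free fundus image onto the atlas frame using
# the 42 landmarks, then average into the mean chromatic image. The
# piecewise-affine model is an assumption.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_to_atlas(image, patient_landmarks, atlas_landmarks, out_shape):
    """Warp one fundus image into the atlas frame. skimage's warp expects the
    inverse map, so the transform is estimated from atlas (x, y) coordinates
    (output) to patient (x, y) coordinates (input)."""
    tform = PiecewiseAffineTransform()
    tform.estimate(atlas_landmarks, patient_landmarks)
    return warp(image, tform, output_shape=out_shape)

def build_mean_atlas(images, landmark_sets, atlas_landmarks, out_shape):
    """Average all warped, lesion-free images of one ethnic group."""
    warped = [warp_to_atlas(im, lm, atlas_landmarks, out_shape)
              for im, lm in zip(images, landmark_sets)]
    return np.mean(warped, axis=0)
```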
400 images of African American eyes have been used to build the gold standard for this ethnic group. Any test image from a patient of that ethnic group is first warped to the reference frame, and a distance map with respect to the mean image is then computed. Finally, post-processing schemes are applied to the distance map to enhance the edges of the exudates; multi-scale, multi-directional steerable filters along with the Kirsch edge detector were found to be promising. Experiments on the publicly available HEI-MED dataset showed the good performance of the proposed method: we achieved a lesion localization fraction (LLF) of 82.5% at a 35% non-lesion localization fraction (NLF) on the FROC curve.
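The distance-map and Kirsch steps can be sketched as below; the steerable-filter bank is omitted, the Euclidean color distance is an assumed choice, and the eight kernels are the standard Kirsch compass masks.

```python
# Hedged sketch: per-pixel color distance to the atlas mean, edge-enhanced with
# the eight standard Kirsch compass masks. The steerable filters are omitted.
import numpy as np
from scipy.ndimage import convolve

def kirsch_kernels():
    """Generate the eight 3x3 Kirsch compass masks by rotating the run of
    three 5s around the kernel border."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring, vals[shift:] + vals[:shift]):
            k[r, c] = v
        kernels.append(k)
    return kernels

def exudate_distance_map(warped_test, atlas_mean):
    """Euclidean color distance to the atlas mean, followed by the maximum
    absolute Kirsch response over all eight compass directions."""
    dist = np.linalg.norm(warped_test.astype(np.float64) - atlas_mean, axis=-1)
    return np.max([np.abs(convolve(dist, k)) for k in kirsch_kernels()], axis=0)
```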
This paper presents a method based on shape context and statistical measures to match an interventional 2D Trans-Rectal Ultrasound (TRUS) slice acquired during prostate biopsy to a 2D Magnetic Resonance (MR) slice of a pre-acquired prostate volume. Accurate biopsy tissue sampling requires translating the MR slice information onto the TRUS-guided biopsy slice. However, this translation, or fusion, requires knowledge of the spatial position of the TRUS slice, which is normally only possible with an electro-magnetic (EM) tracker attached to the TRUS probe. Since EM trackers are not common in clinical practice and 3D TRUS is not used during biopsy, we propose an analysis based on shape and information theory to get close enough to the actual MR slice, as validated by experts. The Bhattacharyya distance is used to find point correspondences between the shape-context representations of the prostate contours. Thereafter, the Chi-square distance is used to identify the MR slices whose prostate contours closely match that of the TRUS slice. Normalized Mutual Information (NMI) values between the TRUS slice and each of the axial MR slices are computed after rigid alignment, and a strategic elimination based on a set of rules combining the Chi-square distances and the NMI values leads to the required MR slice. We validated our method on TRUS axial slices of 15 patients: 11 results matched at least one expert's validation, and the remaining 4 were at most one slice away from the expert validations.
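Two of the measures used above can be written compactly; the shape-context extraction, Bhattacharyya matching, and the rule-based elimination are not reproduced here.

```python
# Hedged sketch: chi-square distance between shape-context histograms and
# NMI between rigidly aligned slices, computed from the joint intensity histogram.
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from the joint intensity
    histogram of two aligned image arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy
```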
In this paper, we present a novel method for generating a background model from a sequence of images containing moving objects. Our approach is based on non-parametric statistics and robust mode estimation using the quasi-continuous histograms (QCH) framework. The proposed method allows the generation of very clean backgrounds without blur effects and is robust to noise and small variations. Experimental results on real sequences demonstrate the effectiveness of our method.
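The QCH machinery itself is not reimplemented here; as a stand-in, the sketch below estimates the per-pixel temporal mode from a discretized histogram, which illustrates why mode-based background estimation avoids the blur that a temporal mean produces when objects move through the scene.

```python
# Hedged sketch: per-pixel temporal mode as a simple stand-in for QCH-based
# robust mode estimation. Bin count and the brute-force loop are illustrative.
import numpy as np

def mode_background(frames, bins=64):
    """frames: (T, H, W) uint8 grayscale stack; returns an (H, W) background
    where each pixel takes its most frequent (modal) value over time."""
    T, H, W = frames.shape
    quant = (frames.astype(np.int64) * bins) // 256  # discretize to `bins` levels
    bg = np.zeros((H, W), dtype=np.float64)
    for r in range(H):                                # per-pixel histogram mode
        for c in range(W):
            counts = np.bincount(quant[:, r, c], minlength=bins)
            bg[r, c] = (np.argmax(counts) + 0.5) * 256.0 / bins
    return bg
```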