Open Access
Fast retinal layer segmentation of spectral domain optical coherence tomography images
21 September 2015
Tianqiao Zhang, Zhangjun Song, Xiaogang Wang, Huimin Zheng, Fucang Jia, Jianhuang Wu, Guanglin Li, Qingmao Hu
Abstract
An approach to segmenting macular layer boundaries and thicknesses from spectral domain optical coherence tomography is proposed. The main contribution is to decrease computational cost while maintaining high accuracy by exploiting Kalman filtering, a customized active contour, and curve smoothing. Validation on 21 normal volumes shows that 8 layer boundaries can be segmented within 5.8 s with an average layer boundary error below 2.35 μm. Compared with state-of-the-art methods on both normal and age-related macular degeneration cases, the approach yields similar or significantly better accuracy and is 37 to 162 times faster. The proposed method could be a potential tool to clinically quantify the retinal layer boundaries.

1.

Introduction

Optical coherence tomography (OCT)1 is a noninvasive optical signal acquisition and processing modality for imaging subsurface tissue structures. OCT can attain images of the internal structures of the eye with higher spatial resolution (several micrometers) than other imaging modalities, such as ultrasound, x-ray, and magnetic resonance imaging.

Because human eyes are optically penetrable, OCT has recently become an important technique that is revolutionizing clinical imaging of the retina.2 Quantities derived from the segmented retinal layers help doctors better understand the anatomy of the human retina and quantify the extent of abnormalities. The layers of interest are the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), inner segment (IS), outer segment (OS), and retinal pigment epithelium (RPE) (Fig. 1).

Fig. 1

Layers of a healthy retina centered at the macula.

JBO_20_9_096014_f001.png

There are efforts to segment layer boundaries from OCT retinal images based on grayscale variation and grayscale thresholding.3–10 Unfortunately, these methods are sensitive to noise. A Markov random field model for extracting the inner and outer retinal borders was introduced in Ref. 11. Its extension was adopted for optic nerve head12 and outer retinal layer13 segmentation. The autoregressive model was shown to be more robust than those based on grayscale thresholding and grayscale variation. Its main problem was to find reliable “seed” points for OCT images with an abnormal retina. In addition, special rules needed to be applied for correcting errors since the model depended on the connection of one-dimensional (1-D) points. Active contour models were employed to segment retinal layers including macula and optic nerve regions.14–18 However, they required good initialization and had high computational costs. Methods based on graph theory could accurately segment retinal layer boundaries in normal adult eyes,19–29 but had high computational complexity. Recently, machine learning approaches, including support vector machines,30 random forests,31 and Bayesian artificial neural networks,32 have attracted much attention. As pointed out in Ref. 10, most of the reported segmentation methods on two-dimensional (2-D) and three-dimensional (3-D) OCT data are not practical for general clinical use due to their high computational costs. However, segmentation is particularly valuable in fields such as computer-assisted surgery, where real-time visualization of the anatomy is a crucial component. As quantitative changes in different retinal layers correlate well with changes in visual function and may serve as surrogate indicators of glaucoma,33,34 age-related macular degeneration (AMD),35 type 1 diabetes,36 multiple sclerosis,37–39 Alzheimer’s disease,40 and Parkinson’s disease,41 it is of great value to quantify the thickness of different retinal layers with speed and accuracy. Our objective is to decrease computational costs while preserving reliable and accurate segmentation for spectral-domain OCT (SD-OCT) images. The main points of this study are listed below.

First, Kalman filtering42 to model the correlation between adjacent image frames is adopted, which, to the best of our knowledge, has not yet been addressed in existing methods. It has the advantages of avoiding the initialization of contours from the second frame and enhancing the robustness in the presence of retinal blood vessel shades and other artifacts of motion or uneven illumination. Second, segmentation of 2-D OCT images is converted to 1-D positioning by exploiting the layered structures of the retina. Finally, a customized active contour model with only an image energy term inspired by Jacob et al.43 is proposed to roughly localize the retina followed by smoothing based on the Savitzky–Golay algorithms44 to yield fast and accurate layer boundaries.

The rest of the paper is organized as follows. In Sec. 2, the proposed macular layer boundary segmentation and measurement techniques are presented. Experimental results are then described in Sec. 3, whereas discussion and conclusion are given in Sec. 4.

2.

Proposed Method

An OCT imaging system can obtain 2-D images (i.e., B-scans, where the number of A-scans is the width of a 2-D image) and 3-D volumes (volumetric data containing spatially aligned multiple B-scans). Because the information in adjacent images is correlated, for volumetric data, the first frame and the subsequent frames are addressed separately. If volumetric data are not available, individual B-scans are processed in the same way as the first frame of a volumetric data. Section 2 is divided into eight subsections, which deal, respectively, with image projection to get the region of interest (ROI), directional filtering, multiresolution estimation of vitreous-NFL and IS/OS, a customized active contour model for coarse segmentation, curve smoothing, image flattening and unflattening, Kalman filtering, and segmentation of the other layer boundaries. The flowcharts in Fig. 2 outline the main steps for individual B-scans or the first frame of a volumetric data, and for the nonfirst frames of a volumetric data, respectively.

Fig. 2

Flowcharts of the proposed method: (a) flowchart for individual B-scans or the first frame of a volumetric data where initialization is achieved from image projection and prior knowledge and (b) flowchart for nonfirst frame of a volumetric data where initialization is attained from Kalman filtering.

JBO_20_9_096014_f002.png

2.1.

Image Projection to Get Region of Interest

In OCT images, the data are substantially redundant in the axial direction. Searching for the ROI is essential to decrease the computational cost and to make the delineation more reliable. Once the ROI is determined, all subsequent processing is performed within it. Image projection is employed to attain the ROI.

First, each row of the B-scan to be processed is projected along the lateral direction:

Eq. (1)

$$g(y_j)=\frac{1}{M}\sum_{i=0}^{M-1} f(x_i,y_j),$$
where $f(x_i,y_j)$ is the grayscale of pixel $(x_i,y_j)$, $i$ and $j$ are, respectively, the horizontal (lateral) and vertical (axial) positions, and $M$ is the width of the image. The highest and the second highest peaks of $g$ correspond, respectively, to the center line of the RPE-IS/OS complex and the center line of the NFL. Here, a center line is a horizontal line whose Y coordinate is the average Y coordinate of the corresponding layer boundaries.

Second, according to prior knowledge, the distance between the center line of the NFL (center line of the RPE-IS/OS complex) and the top (bottom) border of the ROI can be determined; it correlates with the height of the B-scan and can be set as $\rho H$, where $\rho$ is a predefined constant and $H$ is the height of the B-scan. In this study, we set $\rho=0.1$. Figure 3 shows the derived ROI (the rectangle formed by the magenta and yellow lines).

Fig. 3

An optical coherence tomography (OCT) retinal B-scan and its image projection, where green and blue lines are the approximated center lines of the nerve fiber layer (NFL) and retinal pigment epithelium (RPE)-inner segment (IS)/outer segment (OS) complex, magenta and yellow lines are the top and bottom borders of the region of interest (ROI), respectively: (a) an original OCT macular B-scan with colored lines for the ROI and initialized center lines and (b) the average grayscales of all rows of Fig. 3(a).

JBO_20_9_096014_f003.png

Note that image projection to derive the ROI is only applied for processing individual B-scans or the first frame of a volumetric data. For the nonfirst frame of a volumetric data, Kalman filtering (see Sec. 2.7) is utilized to avoid redundant calculation of image projections frame by frame.
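For concreteness, the projection of Eq. (1) and the peak search can be sketched in a few lines of Python/NumPy. This is a minimal sketch, not the authors' C implementation: the function name, the use of scipy.signal.find_peaks, and the minimum peak separation of H//8 rows are our illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def find_roi(bscan, rho=0.1):
    """Return (top, bottom) rows of the ROI for one B-scan [Eq. (1)]."""
    H = bscan.shape[0]
    g = bscan.mean(axis=1)                 # Eq. (1): lateral average of each row
    # Keep well-separated peaks; the H//8 separation is an assumption.
    peaks, _ = find_peaks(g, distance=max(H // 8, 1))
    two = peaks[np.argsort(g[peaks])[-2:]]  # rows of the two highest peaks
    nfl, rpe = two.min(), two.max()        # NFL center line lies above RPE-IS/OS
    margin = int(rho * H)                  # prior-knowledge margin, rho = 0.1
    return max(nfl - margin, 0), min(rpe + margin, H - 1)
```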

2.2.

Directional Gaussian Filtering

Because the layer boundaries are nearly horizontal, the original B-scan is smoothed with a 1-D Gaussian filter along the lateral direction, which suppresses noise without blurring the layer boundaries. Figure 4 shows an original image and the image smoothed along the lateral direction with a standard deviation (SD) $\sigma$ of 5.0 and a mean of 0.

Fig. 4

One-dimensional Gaussian filtering along the lateral direction: (a) an original OCT retinal B-scan and (b) the filtered B-scan.

JBO_20_9_096014_f004.png
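This directional smoothing amounts to a single library call; a minimal sketch with SciPy follows, where gaussian_filter1d is the library routine and the wrapper name is our illustrative choice.

```python
from scipy.ndimage import gaussian_filter1d

def smooth_lateral(bscan, sigma=5.0):
    """1-D Gaussian smoothing along the lateral direction (axis=1) only,
    so the nearly horizontal layer boundaries are not blurred axially."""
    return gaussian_filter1d(bscan.astype(float), sigma=sigma, axis=1)
```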

2.3.

Multiresolution to Estimate Vitreous-Nerve Fiber Layer and Inner Segment/Outer Segment

The two most prominent layer boundaries in an SD-OCT retinal B-scan are the vitreous-NFL and the IS/OS. As such, these two boundaries are extracted first. To be robust to noise and to reduce computational cost, a multiresolution approach is adopted in four steps.

First, downsampling is performed. Suppose the original image is denoted as $f_{H_0W_0}(x,y)$; downsampling by a factor of 2 vertically then gives $f_{H_1W_0}(x,y)$.

Eq. (2)

$$f_{H_1W_0}(x,y)=\tfrac{1}{2}\left[f_{H_0W_0}(x,2y)+f_{H_0W_0}(x,2y+1)\right].$$

This downsampling is repeated, first along the Y direction and then along the X direction:

Eq. (3)

$$f_{H_2W_0}(x,y)=\tfrac{1}{2}\left[f_{H_1W_0}(x,2y)+f_{H_1W_0}(x,2y+1)\right],$$

Eq. (4)

$$f_{H_3W_0}(x,y)=\tfrac{1}{2}\left[f_{H_2W_0}(x,2y)+f_{H_2W_0}(x,2y+1)\right],$$

Eq. (5)

$$f_{H_4W_0}(x,y)=\tfrac{1}{2}\left[f_{H_3W_0}(x,2y)+f_{H_3W_0}(x,2y+1)\right],$$

Eq. (6)

$$f_{H_4W_1}(x,y)=\tfrac{1}{2}\left[f_{H_4W_0}(2x,y)+f_{H_4W_0}(2x+1,y)\right],$$

Eq. (7)

$$f_{42}(x,y)=f_{H_4W_2}(x,y)=\tfrac{1}{2}\left[f_{H_4W_1}(2x,y)+f_{H_4W_1}(2x+1,y)\right].$$

Second, the grayscale gradient magnitude $g(x,y)$ is calculated at the coarse scale.

Eq. (8)

$$g_x(x,y)=2f_{42}(x,y)+f_{42}(x,y-1)+f_{42}(x,y+1)-2f_{42}(x-1,y)-f_{42}(x-1,y-1)-f_{42}(x-1,y+1),$$

Eq. (9)

$$g_y(x,y)=2f_{42}(x,y)+f_{42}(x-1,y)+f_{42}(x+1,y)-2f_{42}(x,y-1)-f_{42}(x-1,y-1)-f_{42}(x+1,y-1),$$

Eq. (10)

$$g(x,y)=\left[g_x^2(x,y)+g_y^2(x,y)\right]^{1/2}.$$

Third, the two initial contours are composed of the pixels whose gradient magnitudes are the maximum and the second maximum in each column.

Finally, the edge pixels at the coarse scale are converted to pixels of the original image by multiplying x coordinates by 4 and y coordinates by 16 to form the initial contours (Fig. 5).
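A sketch of this multiresolution estimation is given below, assuming even image dimensions (true for the 2048×2048 B-scans used here) and array indexing [y, x]; the periodic boundary handling via np.roll and the helper names are our simplifications, not the authors' implementation.

```python
import numpy as np

def halve(a, axis):
    """Average adjacent pixel pairs along one axis [Eqs. (2)-(7)]."""
    return 0.5 * (a[0::2, :] + a[1::2, :]) if axis == 0 \
        else 0.5 * (a[:, 0::2] + a[:, 1::2])

def coarse_two_boundaries(bscan):
    f42 = bscan.astype(float)
    for _ in range(4):                       # Eqs. (2)-(5): 2x in Y, four times
        f42 = halve(f42, axis=0)
    for _ in range(2):                       # Eqs. (6)-(7): 2x in X, twice
        f42 = halve(f42, axis=1)
    r = lambda a, dy, dx: np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    gx = (2 * f42 + r(f42, 1, 0) + r(f42, -1, 0)            # Eq. (8)
          - 2 * r(f42, 0, 1) - r(f42, 1, 1) - r(f42, -1, 1))
    gy = (2 * f42 + r(f42, 0, 1) + r(f42, 0, -1)            # Eq. (9)
          - 2 * r(f42, 1, 0) - r(f42, 1, 1) - r(f42, 1, -1))
    g = np.hypot(gx, gy)                                    # Eq. (10)
    top2 = np.argsort(g, axis=0)[-2:, :]     # two strongest edges per column
    upper, lower = top2.min(axis=0), top2.max(axis=0)
    # Map back to full resolution: y coordinates x16; lateral positions map
    # as x*4 when the contours are drawn on the original image.
    return upper * 16, lower * 16            # vitreous-NFL rows, IS/OS rows
```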

Fig. 5

Estimation of the two prominent boundaries from Fig. 4(a), with green line segments for vitreous-NFL and blue for IS/OS.

JBO_20_9_096014_f005.png

Note that multiresolution for estimating vitreous-NFL and IS/OS is only employed for individual B-scans or the first frame of a volumetric data. For the nonfirst frame of a volumetric data, Kalman filtering (see Sec. 2.7) is employed instead.

2.4.

Customized Active Contour Model for Coarse Positioning of Layer Boundaries

A customized active contour model is proposed to cater to the layered structures in OCT macular images. An active contour45–47 is traditionally modeled as

Eq. (11)

$$E=\int\left[\alpha(s)E_{\text{continuity}}+\beta(s)E_{\text{curvature}}+\gamma(s)E_{\text{image}}\right]ds,$$
where $\alpha(s)$, $\beta(s)$, and $\gamma(s)$ are weights. $E_{\text{continuity}}$ is the continuity term that encourages even spacing of the contour points and can be calculated as $\|v_i-v_{i-1}\|$, with $v_i$ the $i$'th contour point. $E_{\text{curvature}}$ is the curvature term that penalizes rapid curve bending and can be calculated as $|v_{i-1}-2v_i+v_{i+1}|$. $E_{\text{image}}$ is the image term that drives the contour to converge to edges and can be calculated as $(g_{\text{cur}}-g_{\min})/(g_{\max}-g_{\min})$, with $g_{\text{cur}}$, $g_{\max}$, and $g_{\min}$ being, respectively, the gradient magnitude of the current pixel and the maximum and minimum gradient magnitudes of the current neighborhood. In this study, the following constraints are imposed for each layer boundary:

  • 1. There is only one contour point at each column.

  • 2. At each iteration, the contour point at each column could only move vertically, being either one pixel up or down.

  • 3. The contour point at each column should be adjacent to the contour point of the previous and next columns (eight neighbors).

With these constraints, there are only four configuration types for three neighboring contour pixels (Fig. 6).

Fig. 6

Four configuration types for three adjacent contour points. (a) Horizontal, (b) inflectional, (c) V-shaped, and (d) diagonal.

JBO_20_9_096014_f006.png

Here are some observations to justify the simplification of Eq. (11).

  • 1. There are only two cases for the distance between two adjacent contour points, with $E_{\text{continuity}}$ being either 1 or $\sqrt{2}$.

  • 2. The curvature term $E_{\text{curvature}}$ is 0 for the horizontal and diagonal cases, 1 for the inflectional case, and 2 for the V-shaped case.

Because the constraints already restrict the continuity and curvature terms to a few nearly constant values, Eq. (11) can be simplified to include only the third term. In this study, the third term is defined locally to reduce the computational cost.

Eq. (12)

$$E_{\text{image}}=\sum_{j=k+1}^{k+n} f(x_i,y_j)-\sum_{j=k-n}^{k-1} f(x_i,y_j),$$
where $k$ is the position of the current pixel and $n$ is the neighborhood size in the vertical direction. Figure 7 shows the initial contours of vitreous-NFL and IS/OS derived by applying Eq. (12) to Fig. 5.
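A greedy one-pixel-per-iteration update under the three constraints can be sketched as follows. The sign convention (maximizing Eq. (12), which suits boundaries with brighter tissue below; the sign flips for the opposite polarity), the iteration count, and all names are illustrative assumptions rather than the authors' exact scheme.

```python
import numpy as np

def image_energy(img, x, y, n=3):
    """Eq. (12): sum of the n pixels below (x, y) minus the n pixels above."""
    return img[y + 1:y + n + 1, x].sum() - img[y - n:y, x].sum()

def evolve_contour(img, ys, n=3, iterations=20):
    """Greedy evolution; ys[x] is the single contour row at column x."""
    H, W = img.shape
    ys = np.asarray(ys, dtype=int).copy()
    for _ in range(iterations):
        for x in range(W):
            left, right = ys[max(x - 1, 0)], ys[min(x + 1, W - 1)]
            lo, hi = max(left, right) - 1, min(left, right) + 1  # 8-connected
            valid = [y for y in (ys[x] - 1, ys[x], ys[x] + 1)    # one-pixel move
                     if n <= y < H - n and lo <= y <= hi]
            if valid:  # pick the position maximizing the image term
                ys[x] = max(valid, key=lambda y: image_energy(img, x, y))
    return ys
```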

Fig. 7

Coarse segmentation of Fig. 5 using customized active contour model from Eq. (12), with green curve for vitreous-NFL and blue curve for IS/OS.

JBO_20_9_096014_f007.png

2.5.

Curve Smoothing

As the customized active contour model [Eq. (12)] has not taken into account continuity and curvature terms, the derived contour could be jagged. To make it smooth, curve smoothing is employed.

Savitzky–Golay smoothing filters44 have been widely used for 1-D filtering. The fundamental idea is to fit a polynomial to the data surrounding each data point. The difference between the general least-squares polynomial smoothing filters and the Savitzky–Golay smoothing filters lies in the fact that the latter introduces tables of convolution weights for the center-point, which could substantially enhance the speed.48 In this study, Savitzky–Golay smoothing filters are employed for curve smoothing of the layer boundaries with the order of the polynomial being 4 and a neighborhood size from 20 to 30 pixels for different layer boundaries.
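With SciPy, this smoothing is a single call; a minimal sketch follows. The window length must be odd for savgol_filter, so 21 is used here as a representative value from the 20 to 30 pixel range above; the wrapper name is ours.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_boundary(ys, window=21, order=4):
    """Savitzky-Golay smoothing of a per-column boundary curve."""
    return savgol_filter(np.asarray(ys, dtype=float),
                         window_length=window, polyorder=order)
```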

2.6.

Image Flattening and Unflattening

To handle possible deformation of layer boundaries due to pathology, the image is flattened with respect to the extracted IS/OS layer boundary. Specifically, the flattening is carried out as follows: find the maximum y coordinate of the IS/OS layer boundary (denoted $\max_y$); each column $j$ is then shifted downward by $(\max_y - y_j)$, with $y_j$ being the y coordinate of the IS/OS boundary pixel in column $j$. In this way, the extracted IS/OS becomes a horizontal line. Figure 8 shows the flattened image of Fig. 9. The other six layer boundaries are all extracted from the flattened image, and the 8 layer boundaries are then obtained by reversing the flattening process. It should be noted that these flattening and unflattening procedures are only applied to individual B-scans or the first frame of a volumetric data.
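A sketch of the flattening and its inverse is given below. Using np.roll wraps pixels around the column ends, which is harmless only under the assumption that the retina sits away from the image borders (the ROI of Sec. 2.1 makes this plausible); the function names are ours.

```python
import numpy as np

def flatten(img, isos_ys):
    """Shift column j down by (max_y - y_j) so IS/OS becomes horizontal."""
    shifts = int(np.max(isos_ys)) - np.asarray(isos_ys, dtype=int)
    out = np.empty_like(img)
    for j, s in enumerate(shifts):
        out[:, j] = np.roll(img[:, j], s)   # positive shift moves pixels down
    return out, shifts

def unflatten(ys_flat, shifts):
    """Map a boundary found in the flattened image back to the original."""
    return np.asarray(ys_flat) - shifts
```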

Fig. 8

Flattened image of Fig. 9 by shifting each column such that the IS/OS points lie on a horizontal line, with green curve for vitreous-NFL and blue line for IS/OS.

JBO_20_9_096014_f008.png

Fig. 9

Contours of macular layer boundaries including vitreous-NFL and IS/OS from Fig. 7, with green curve for vitreous-NFL and blue for IS/OS.

JBO_20_9_096014_f009.png

2.7.

Kalman Filtering

As prior knowledge, for a volumetric data, the positions of layer boundaries in adjacent images (frames) are similar, so the coarse layer boundaries of the current frame can be estimated from the layer boundaries of the previous frame. We employ Kalman filtering42 to track coarse layer boundaries frame by frame.

To approximate the layer boundaries, the Kalman filter is formulated with a state equation [Eq. (13)] and a measurement equation [Eq. (14)]:

Eq. (13)

$$\varphi_k=A\varphi_{k-1}+Bu_{k-1}+w_k=A\varphi_{k-1}+w_k,$$

Eq. (14)

$$z_k=H\varphi_k+v_k,$$
where $\varphi=[y\;\;v_y]^T$ is a 2-D state vector, with $y$ the Y coordinate of the tracked layer boundary and $v_y$ its velocity across adjacent frames; $w_k$ is the process noise, $v_k$ the measurement noise, $k$ the current frame, and $k-1$ the previous frame. $A$ is the state transition matrix, $A=\begin{pmatrix}1 & dt\\ 0 & 1\end{pmatrix}$, with $dt$ being the time interval from the previous frame to the current frame. $B$ is the control input matrix and takes no effect when there is no control input (here, we assume $u=0$). As there is no control input, the velocity cannot be measured, so $z=z_y$ and $H=[1\;\;0]$.

The state φ and variance P could be predicted as follows:

Eq. (15)

$$\varphi_k^-=A\varphi_{k-1},$$

Eq. (16)

$$P_k^-=AP_{k-1}A^T,$$
where the superscript $-$ denotes a predicted value and $T$ denotes the transpose.

The correction equation is

Eq. (17)

$$K_k=P_k^-H^T\left(HP_k^-H^T+R\right)^{-1},$$
where $R$ is the measurement noise variance. $K_k$ adapts to $R$: its components become smaller when $R$ is large, and larger otherwise.

Then, the state $\varphi$ is updated:

Eq. (18)

$$\varphi_k=\varphi_k^-+K_k\left(z_k-H\varphi_k^-\right).$$

Finally, the variance $P$ is updated:

Eq. (19)

$$P_k=(I-K_kH)P_k^-.$$

To summarize, the two phases of Kalman filtering [prediction by Eqs. (15) and (16) and updating by Eqs. (17)–(19)] are applied to attain the state estimate with minimum variance.
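The two phases reduce to a few lines of 2×2 matrix algebra, sketched below for one boundary Y coordinate tracked across frames. The noise parameters q and r are illustrative assumptions, and the process-noise covariance Q is added in the prediction step to account for the $w_k$ term of Eq. (13); the paper does not specify these values.

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=1.0, dt=1.0):
    """Track one boundary Y coordinate across frames [Eqs. (13)-(19)]."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix
    H = np.array([[1.0, 0.0]])              # only y is measured [Eq. (14)]
    Q = q * np.eye(2)                       # process noise covariance for w_k
    phi = np.array([measurements[0], 0.0])  # initial state [y, v_y]
    P = np.eye(2)
    predictions = []
    for z in measurements[1:]:
        phi = A @ phi                       # Eq. (15): predict the state
        P = A @ P @ A.T + Q                 # Eq. (16), with Q added for w_k
        predictions.append(phi[0])          # coarse boundary for this frame
        K = (P @ H.T) / (H @ P @ H.T + r)   # Eq. (17): Kalman gain
        phi = phi + K.ravel() * (z - (H @ phi)[0])  # Eq. (18): correction
        P = (np.eye(2) - K @ H) @ P         # Eq. (19): update the variance
    return predictions
```

In the full method, the prediction initializes the customized active contour for the new frame, and the refined boundary is fed back as the measurement $z_k$; running one such tracker per boundary point is a plausible arrangement, though the paper does not spell out the granularity.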

2.8.

Segmentation of Other Layer Boundaries

Once the two most prominent layer boundaries are segmented, they can be used as constraints to facilitate the segmentation of the other layer boundaries. Figure 10 illustrates the flowchart of segmentation for the other layer boundaries.

Fig. 10

Flowchart to segment the other layer boundaries after IS/OS and vitreous-NFL are segmented.

JBO_20_9_096014_f010.png

The segmentation of other layer boundaries consists of the same three steps: initialization, coarse segmentation by the customized active contour method [Eq. (12)], and curve smoothing through the Savitzky–Golay filtering described in Sec. 2.5.

There are two ways of initialization. The first uses prior knowledge and applies to individual B-scans or the first frame of a volumetric data: OS-RPE is initialized 70 μm below IS/OS, RPE-choroid 105 μm below OS-RPE, OPL-ONL 105 μm above IS/OS, INL-OPL 105 μm above OPL-ONL, IPL-INL 105 μm above INL-OPL, and NFL-GCL as the vertical average of vitreous-NFL and IPL-INL. These offsets are calibrated for our customized OCT imaging equipment with 2048×2048 pixel images (see the sketch below).
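A sketch of turning these prior-knowledge offsets into pixel rows follows. The 3.5-μm axial pixel size is taken from our system description in Sec. 3.1 and, like every identifier name here, is an assumption of this sketch rather than a value fixed by the method.

```python
AXIAL_UM_PER_PIXEL = 3.5          # from Sec. 3.1; adjust for other scanners

# (anchor boundary, signed offset in um); negative means above the anchor.
OFFSETS_UM = {
    "OS-RPE":      ("IS/OS",    +70),
    "RPE-choroid": ("OS-RPE",  +105),
    "OPL-ONL":     ("IS/OS",   -105),
    "INL-OPL":     ("OPL-ONL", -105),
    "IPL-INL":     ("INL-OPL", -105),
}
# NFL-GCL is instead initialized as the vertical average of the
# vitreous-NFL and IPL-INL boundaries.

def offset_rows(um, um_per_px=AXIAL_UM_PER_PIXEL):
    """Convert a micrometer offset to whole pixel rows."""
    return round(um / um_per_px)
```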

The second way of initialization is for the nonfirst frame of a volumetric data using the Kalman filter described in Sec. 2.7, by employing the fact that layer boundaries between consecutive frames are similar.

3.

Experiments

In this section, experiments carried out to validate the proposed method are described. Validation was first performed on 21 SD-OCT volumes collected from 21 healthy Chinese subjects (one volume per subject); comparisons were then made with state-of-the-art methods on common datasets for both normal eyes and pathological eyes with AMD. Finally, additional experiments varied components of the algorithm for comparison.

For each layer of a B-scan, the thickness between neighboring boundaries was computed and then the average difference in layer thickness between the automatic and manual delineation was calculated for quantification. The average difference in layer thickness between two manual delineations was also calculated to reflect the possible uncertainty of measuring the layer thickness, even when done by experts.

Figure 11 shows the segmentation for a B-scan of a normal right eye.

Fig. 11

Segmentation of retinal layer boundaries for a normal adult: (a) retinal thickness maps covering a 6×6 mm² area and (b) segmentation for the 66th frame of a volumetric data; OD, oculus dexter; T, temporal; N, nasal.

JBO_20_9_096014_f011.png

Figure 12 illustrates that the proposed method can handle images with heavy noise and blood vessel artifacts.

Fig. 12

Segmentation on B-scans with (a) blood vessel artifacts and (b) heavy noise; the original image of (b), with small deposits of drusen, is from Ref. 26.

JBO_20_9_096014_f012.png

3.1.

Validation on Our Clinical Data

A customized SD-OCT system was adopted for 3-D imaging. It used a superluminescent diode with a full-width at half-maximum spectral bandwidth of 50 nm and a center wavelength of 840 nm, and a linear CCD camera (Aviiva-SM2-CL-2014, e2V) with a 14-μm pixel size operating in 11-bit mode. The optical power of the beam on the cornea was not greater than 750 μW. Images were obtained with a depth of 1.68 mm and an axial resolution of 3.5 μm.

A dataset consisting of 21 SD-OCT volumes (128×2048×2048 voxels, 6×6×2.5 mm³ macular regions, and an in-plane pixel size of 3.5 μm), one volume per subject, was collected from 21 healthy Chinese subjects at the Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine. One eye from each subject was selected randomly. The protocol of the research was approved by the Institutional Review Board of the hospital. All subjects gave written consent and provided permission for educational and scientific purposes.

The automated detection of macular layer boundaries was performed successfully on all 21 volumes. Segmenting 8 layer boundaries took an average of 5.8 s per volume (128 B-scans per volume) and 95 ms per B-scan on a personal computer (C program running on a 3.2 GHz Intel Core i5-3470 CPU). These data were also segmented manually by two eye specialists as the reference for comparison with the automatic segmentation.

The time distribution of the proposed method was recorded. For segmenting a B-scan, the time (95 ms) distribution was: image projection to get the ROI (2 ms), 1-D Gaussian filtering (5 ms), multiresolution estimation of the two boundaries (5 ms), image flattening and unflattening (3 ms), coarse segmentation (8 ms × 8 = 64 ms), and curve smoothing (2 ms × 8 = 16 ms). The factor ×8 accounts for the eight boundaries of each B-scan. For processing a volumetric data excluding the first frame, the time (5715 ms) distribution was: 1-D Gaussian filtering (5 ms × 127 = 635 ms), Kalman filtering (1 ms × 8 × 127 = 1016 ms), coarse segmentation (2 ms × 8 × 127 = 2032 ms), and curve smoothing (2 ms × 8 × 127 = 2032 ms). The factor ×127 accounts for the 127 B-scans of each volume after the first frame.

Table 1 shows the absolute mean difference and SD over the 21 volumes.

Table 1

Retinal layer thickness differences (in μm) over the 21 volumes.

Retinal layer | Two eye specialists: mean difference ± SD | Two eye specialists: max error | Manual vs. automatic: mean difference ± SD | Manual vs. automatic: max error
NFL | 3.05±1.58 | 5.1 | 1.62±1.20 | 5.4
GCL-IPL | 1.86±1.68 | 5.5 | 1.34±1.21 | 5.2
INL | 3.89±2.28 | 6.1 | 1.57±1.02 | 4.8
OPL | 3.32±2.70 | 5.5 | 2.03±1.41 | 4.6
ONL-IS | 2.87±2.10 | 5.4 | 2.37±1.73 | 5.3
OS | 1.96±1.54 | 4.1 | 1.55±1.20 | 3.5
RPE | 3.89±2.10 | 5.6 | 1.75±1.54 | 4.0
Total retina | 3.89±1.75 | 5.5 | 2.35±1.32 | 5.5
Note: SD, standard deviation; Max, maximum; NFL, nerve fiber layer; GCL, ganglion cell layer; IPL, inner plexiform layer; INL, inner nuclear layer; OPL, outer plexiform layer; ONL, outer nuclear layer; IS, inner segment; OS, outer segment; RPE, retinal pigment epithelium.

3.2.

Comparison with a State-of-the-Art Method for Normal Eyes

This experiment compares the proposed method with the method of Chiu et al.23 using their data and their manual segmentation as the reference for normal eyes. The 10 volumes were acquired with Bioptigen, Inc. (Research Triangle Park, North Carolina) imaging systems with an axial spacing of 3.23 μm. Only 100 B-scans from the 10 volumes were manually delineated by two experts (one junior and one senior), with the senior expert's delineation taken as the reference; 29 of these B-scans were used for interexpert comparison.

The average times to derive the 8 layer boundaries of a B-scan for the proposed method and Chiu’s method23 were, respectively, 60 and 9740 ms.

The comparison results between the two experts, as well as between the proposed method and Ref. 23, are summarized in Table 2. Paired t-tests show significant differences in the accuracy of INL (p<0.004), OPL (p<0.007), ONL-IS (p<0.019), OS (p<0.019), RPE (p<0.0001), and total retina (p<0.019), while the errors of NFL and GCL-IPL do not differ significantly.

Table 2

Comparison with Ref. 23 for thickness measurement of retinal layers of 10 volumes of normal eyes. All entries are mean/SD in voxels, with a voxel size of 3.23 μm.

Retinal layer | Two manual graders, 29 B-scans (Column I) | Chiu's method, 29 B-scans (Column II) | Proposed, 29 B-scans (Column III) | Chiu's method, 100 B-scans (Column IV) | Proposed, 100 B-scans (Column V)
NFL | 1.73/0.90 | 0.99/0.76 | 0.84/0.70 | 0.88/0.68 | 0.89/0.57
GCL-IPL | 1.06/0.95 | 0.57/0.48 | 0.51/0.25 | 0.77/0.65 | 0.73/0.63
INL | 2.22/1.30 | 1.07/0.87 | 0.93/0.48 | 0.98/0.74 | 0.83/0.55*
OPL | 1.90/1.53 | 1.64/1.08 | 1.03/1.00 | 1.48/1.05 | 1.21/1.03*
ONL-IS | 1.63/1.19 | 1.40/1.06 | 1.30/1.03 | 1.20/0.92 | 1.10/0.75*
OS | 1.11/0.88 | 0.92/0.80 | 0.90/0.90 | 0.88/0.73 | 0.78/0.45*
RPE | 2.21/1.20 | 0.96/0.69 | 0.82/0.54 | 0.99/0.87 | 0.73/0.53*
Total retina | 2.22/1.00 | 0.92/0.83 | 0.80/0.71 | 0.94/0.82 | 0.72/0.65*
Note: * indicates a significant difference.

3.3.

Comparison with a State-of-the-Art Method for Eyes with Age-Related Macular Degeneration

AMD is a leading cause of vision loss worldwide.49 To validate the applicability of the proposed method to data with pathology, the second direct comparison was made with the method of Chiu et al.26 using their data and their manual segmentation as the reference for AMD patients.

In the study of Chiu et al.26 for SD-OCT images of eyes containing drusen and geographic atrophy (GA), segmentation was performed for only three retinal layer boundaries [the inner internal limiting membrane (ILM), the inner RPE drusen complex, and the outer Bruch's membrane defined in Ref. 26 correspond, respectively, to vitreous-NFL, OS-RPE, and RPE-choroid in Fig. 1]. Figure 13 shows the segmentation of a B-scan of an AMD patient from Ref. 26.

Fig. 13

Segmentation of a B-scan from an age-related macular degeneration patient with serious deformation of layer boundaries due to drusen. Red: internal limiting membrane (ILM); cyan: IS/OS; green: RPE drusen complex (RPEDC); blue: outer Bruch’s membrane.

JBO_20_9_096014_f013.png

In Chiu’s study, 220 B-scans across 20 patients with AMD (one volume per patient) were acquired with Bioptigen, Inc. (Research Triangle Park, North Carolina) imaging systems, with an axial pixel resolution ranging from 3.06 to 3.24 μm per pixel (Devers Eye Institute, 3.21 μm; Duke Eye Center, 3.23 μm; Emory Eye Center, 3.06 μm; and the National Eye Institute, 3.24 μm). All the SD-OCT images were manually delineated by two certified experts (one junior and one senior), with the senior expert's delineation taken as the reference; all of the images were used for interexpert comparison.

The comparison results between the proposed method and Ref. 26 are summarized in Table 3. Paired t-tests show significant differences in the accuracy of the total retina for drusen cases with low image quality (p<0.004), for GA cases with high quality (p<0.0001), for GA cases with low quality (p<0.032), and for the drusen and GA images combined (p<0.0001), while the remaining errors do not differ significantly.

Table 3

Comparison with Ref. 26 for retinal layers of 20 volumes of patients with age-related macular degeneration. The mean and SD are in pixels, with the pixel size ranging from 3.06 to 3.24 μm. Each cell gives the mean error ± SD (pixels), followed by the maximum error (μm) and the percentage of errors > 5 pixels.

Pathology | Quality | Volume group | Layer boundary | Two manual graders | Chiu's method | Proposed
Drusen (110 images) | High | 1 | Total retina | 1.01±0.73; 13; 5.5 | 0.78±0.55; 17; 2.5 | 0.83±0.60; 17; 4.2
Drusen (110 images) | High | 1 | RPEDC | 1.31±0.99; 16; 5.8 | 0.94±0.67; 17; 3.2 | 0.91±0.56; 13; 3.1
Drusen (110 images) | Low | 2 | Total retina | 1.04±0.71; 22; 10.0 | 1.17±0.75; 22; 7.6 | 0.91±0.69*; 19; 6.5
Drusen (110 images) | Low | 2 | RPEDC | 1.43±1.11; 23; 12.5 | 0.90±0.65; 23; 7.8 | 0.84±0.52; 21; 7.3
GA (110 images) | High | 3 | Total retina | 1.45±1.27; 32; 12.8 | 1.55±0.92; 32; 11.4 | 1.15±1.05*; 29; 10.5
GA (110 images) | High | 3 | RPEDC | 1.52±1.34; 31; 13.3 | 1.29±1.07; 28; 11.3 | 1.03±1.03; 23; 8.4
GA (110 images) | Low | 4 | Total retina | 0.84±0.73; 23; 10.6 | 1.73±0.94; 30; 10.6 | 1.43±1.21*; 29; 11.7
GA (110 images) | Low | 4 | RPEDC | 1.38±1.00; 22; 13.4 | 0.92±0.68; 31; 7.8 | 0.89±0.60; 26; 7.5
Total | | | Total retina | 1.09±0.91; 32; 9.7 | 1.31±0.88; 32; 8.0 | 1.08±0.73*; 29; 8.2
Total | | | RPEDC | 1.41±1.16; 31; 11.3 | 1.01±0.80; 31; 7.5 | 0.92±0.83; 26; 6.6
Note: SD, standard deviation; GA, geographic atrophy; RPEDC, retinal pigment epithelium drusen complex. Total retina is the distance from the inner ILM to the inner RPEDC; * indicates a significant difference.

3.4.

Additional Experiments

To assess the comparative performance of different initializations for a volumetric data from the second frame onward (with the subsequent processing steps unchanged), two variants were devised. Method A uses the segmented layer boundaries of the previous frame directly as the initial layer boundaries of the current frame. Method B ignores the spatial correlation and derives initial layer boundaries independently, in the same way as for the first frame. Table 4 summarizes the performance of the original method and the two variants.

Table 4

Comparison among the proposed method (using Kalman filtering), method A (replacing Kalman filtering with direct casting from the previous frame), and method B (initializing boundary layers independently).

Dataset | Proposed: time (ms) | Method A: time (ms) | Method B: time (ms) | Proposed: mean difference ± SD (μm) | Method A: mean difference ± SD (μm) | Method B: mean difference ± SD (μm)
Our clinical data | 45.3 | 44.2 | 95.2 | 2.35±1.32 | 3.21±2.01 | 2.76±1.67
Chiu's normal data (Ref. 23) | 32.3 | 31.7 | 60.3 | 2.33±2.10 | 3.32±2.57 | 2.83±2.31
Chiu's pathological data (Ref. 26) | 23.1 | 22.6 | 45.5 | 3.41±2.61 | 4.05±3.02 | 3.51±3.03
Time is the average for one B-scan of a volumetric data.

In refining the boundary layers from the coarse segmentation, the proposed method employs a combination of the customized/simplified active contour model and the Savitzky–Golay smoothing filter. A variant using the full active contour model was also tested: the active contour model in Ref. 46 was implemented for comparison and is denoted as method C. The comparative performance is summarized in Table 5.

Table 5

Comparison between the proposed method (using customized active contour plus Savitzky–Golay smoothing filter) and method C (using the active contour model in Ref. 46).

Dataset | Proposed: time (ms) | Method C: time (ms) | Proposed: mean difference ± SD (μm) | Method C: mean difference ± SD (μm)
Our clinical data | 45.3 | 63.2 | 2.35±1.32 | 3.41±2.47
Chiu's normal data (Ref. 23) | 32.3 | 44.7 | 2.33±2.10 | 3.53±2.76
Chiu's pathological data (Ref. 26) | 23.1 | 34.3 | 3.41±2.61 | 4.15±3.16
Time is the average for one B-scan of a volumetric data.

4.

Discussion and Conclusion

4.1.

Sensitivity to Parameters

The parameter $\rho$ determines the size of the ROI. Varying $\rho$ from 0.05 to 0.40 yields similar segmentation accuracy (the total retinal thickness differs by 0.43±0.52 μm) but different time consumption (the average time ranges from 71 to 116 ms). A value of $\rho=0.1$ is a good trade-off between segmentation accuracy and time consumption.

The parameter $\sigma$ controls the degree of Gaussian smoothing. As changing $\sigma$ within [1.0, 10.0] yields almost the same segmentation, a value of 5.0 is chosen for this study.

4.2.

Comparison with Existing Methods

Existing methods are compared with the proposed method in terms of computational cost and accuracy.

Segmentation of the nerve fiber layer for a single image (with 1000 A-scans) could take as long as 62 s.14 It took 24 s to segment seven layer boundaries in a 1024×512-pixel image.5 RPE segmentation took 8.3 min for a volume with 60 images (1024×1000 pixels).50 For normal OCT volumetric data, the segmentation time for RPE and vitreous-NFL on 320×1024×138 volumetric data was 37 s,10 and for 11 layer boundaries on 200×1024×200 volumetric data it was 70 s.22 The average computation time was 107 s/volume (400×1000×11 voxels) for 8 layer boundaries.23 The fastest approach took 16 s to detect nine layer boundaries per volume (480×512×128 voxels) with a commercial system, the Topcon 3-D OCT-1000 (Topcon Medical Systems, Oakland, New Jersey).24 The proposed method segments 8 layer boundaries within 5.8 s for a volumetric data of similar spatial resolution (128×2048×2048 voxels) and within 95 ms for an individual B-scan, being substantially faster than these existing methods.

Garvin et al.19,20 reported overall mean absolute boundary differences of 6.10±2.90 μm19 and 5.69±2.41 μm.20 Better accuracies, in the range of 3 to 4 μm, have been reported: 3.39±0.96 μm,24 3.04±2.65 μm,23 and 3.38±4.09 μm.31 The proposed method yields a comparable largest mean absolute boundary difference of 2.35±1.32 μm (Table 1). As these accuracies come from segmenting different data, it is inherently difficult to judge which method yields the best accuracy: segmentation accuracy depends on factors beyond the segmentation method, such as the imaging type [time-domain OCT (TD-OCT) or SD-OCT], the image quality, and the axial and lateral resolutions.

We have tried to make direct comparisons using common datasets with existing methods.

On the common dataset with normal eyes,23 the proposed method has been compared with Chiu's method.23 From Table 2, the proposed method yields accuracy similar to that of Ref. 23 (all eight quantities of the proposed method have a smaller average error than Ref. 23 for the 29 B-scans, and seven of the eight have a smaller average error for the 100 B-scans), and all the mean differences of the proposed method are smaller than those between the two eye specialists' references. For the segmentation of 8 layer boundaries of the 100 normal B-scans from Ref. 23, the proposed method has significantly higher accuracy than Ref. 23 for INL, OPL, ONL-IS, OS, RPE, and total retina, while the errors of NFL and GCL-IPL do not differ significantly. The proposed method is 162 (9740/60) times faster than Chiu's method23 at deriving the 8 layer boundaries of a B-scan.

On the common dataset of patients with AMD,26 the proposed method has been compared with Chiu's method.26 From Table 3, the proposed method yields segmentation accuracy similar to that of Ref. 26 on the pathological data (9 out of 10 quantities of the proposed method have a smaller average error than Chiu's method26). For the segmentation of three layer boundaries of 110 B-scans with drusen and 110 B-scans with GA,26 the proposed method has significantly higher accuracy than Ref. 26 for the total retina of drusen cases with low quality, the total retina of GA cases with high quality, the total retina of GA cases with low quality, and the total retina for the drusen and GA images combined, while the remaining errors do not differ significantly. For these pathological data, the proposed method is 37 (1700/45) times faster than Chiu's method26 at deriving the three layer boundaries of a B-scan.

4.3.

Advantages

The recent evolution of OCT from the time domain to the spectral domain has substantially increased imaging speed and made the acquisition of large volumetric datasets feasible. Fast, automatic layer segmentation and thickness measurement for OCT data have therefore become more important than ever. Our purpose is to minimize processing time while maintaining reliable segmentation results. The proposed method has the following advantages.

  • 1. Fast. The average time cost of the proposed method to extract 8 layer boundaries is 5.8 s for a volumetric data and 95 ms for a B-scan, while existing methods can take several minutes.10 The low computational cost can be attributed mainly to the framework of the proposed method: image projection to initialize the two most prominent layers (RPE-IS/OS and NFL), 1-D Gaussian filtering along the lateral direction instead of 2-D Gaussian filtering, approximation of the RPE-IS/OS and NFL from the grayscale gradient magnitude at low spatial resolution (one-sixteenth of the original resolution in Y and one-quarter in X), a customized active contour model (calculating only local grayscale differences) to refine the layer boundaries, Savitzky–Golay smoothing with precalculated weights, and Kalman filtering with low-order (2×2) matrix manipulation to initialize layer boundaries from the previous frame of volumetric data.

  • 2. Accurate. For both normal eyes and eyes with AMD, the proposed method yields similar or higher accuracy compared with the state-of-the-art methods of Ref. 23 (Table 2) and Ref. 26 (Table 3). This can be ascribed to the core components of the proposed method: prior knowledge to derive the ROI, Kalman filtering that performs well even in the presence of heavy noise and artifacts, and the customized active contour plus Savitzky–Golay smoothing that balances convergence to edges against smoothness.

  • 3. Able to handle pathologies. In our methodology, pathology influences only the initialization of layer boundaries for the first frame of a volumetric data, as the initialization for subsequent frames is cast from the previous frame. Due to their edge prominence, the vitreous-NFL and IS/OS can be extracted irrespective of pathologies. To remove the influence of possible pathological deformation of layer boundaries in an individual B-scan or the first frame of a volumetric data, the image is flattened so that the extracted IS/OS boundary becomes a horizontal line; the other six layer boundaries are then segmented from the flattened image, taking the horizontal IS/OS line as the first reference, with each newly extracted boundary possibly serving as a reference for those extracted subsequently. The method has been validated to perform well on pathological eyes with drusen and GA (Fig. 13 and Table 3).

4.4.

Contributions and Limitations

From Table 4, initialization of layer boundaries through Kalman filtering yields more accurate segmentation than both direct casting and independent initialization. This feature of Kalman filtering may also be employed to initialize other structures in volumetric data, such as the brain in 3-D computed tomography or magnetic resonance images, once the brain in one 2-D slice is available.

From Table 5, the proposed combination of a customized/simplified active contour model and Savitzky–Golay smoothing outperforms the active contour method46 in terms of both segmentation accuracy and computational cost. This combination may be explored as one more option to replace active contour methods.

The novelties of the study are fourfold: determination of the ROI from grayscale projection and prior knowledge, Kalman filtering to initialize layer boundaries from the previous frame, customized active contour for coarse delineation of layer boundaries, and curve smoothing with precalculated weights. All these contribute to substantial reduction of computational cost while preserving segmentation accuracy.

The study is not without limitations. In particular, this study focuses on methodology, and validation has only been carried out for retinal layer boundaries in OCT images of normal eyes and eyes with AMD. In the future, we will handle more pathological cases.

The proposed method is based on the assumption that the layer boundaries are visible in the images and that different layer boundaries do not intersect. We believe the proposed method will be applicable to pathologies as long as they do not invalidate this assumption.

Existing methods segment different numbers of layer boundaries (and/or surfaces): 2 boundaries (Ref. 9, SD-OCT, posterior retinal layers of human eyes; Ref. 10, SD-OCT, AMD of human eyes; Ref. 11, SD-OCT, total retinal thickness of human eyes; Ref. 12, TD-OCT, total retinal thickness of human eyes; Ref. 13, TD-OCT, optic nerve head of human eyes; Ref. 14, SD-OCT, NFL thickness of human eyes; Ref. 15, SD-OCT, optic nerve head of human eyes), 3 boundaries (Ref. 26, SD-OCT, AMD of human eyes), 4 boundaries (Ref. 25, SD-OCT, patients with retinitis pigmentosa), 5 boundaries (Ref. 4, TD-OCT, retinal layers of human eyes; Ref. 30, SD-OCT, retinal layers of human eyes), 6 boundaries (Ref. 18, SD-OCT, intraretinal layers of rodent eyes; Ref. 29, SD-OCT, retinal layers of human eyes), 7 boundaries (Ref. 5, TD-OCT, retinal layers of human eyes; Ref. 8, TD-OCT, retinal layers of human eyes), 8 boundaries (Ref. 23, SD-OCT, retinal layers of human eyes; Ref. 32, TD-OCT, retinal layers of diabetic eyes), 9 boundaries (Ref. 24, SD-OCT, retinal layers of human eyes; Ref. 31, SD-OCT, retinal layers of human eyes), 10 boundaries (Ref. 17, SD-OCT, intraretinal layers of rat retinas; Ref. 28, SD-OCT, mouse retina), and 11 boundaries (Ref. 22, SD-OCT, retinal layers of human eyes), varying in OCT type (TD-OCT or SD-OCT), purpose (total retinal thickness, optic nerve head, intraretinal, outer retinal, or pathology), and subjects (human or animal). Among the methods segmenting more layer boundaries than the proposed method, Refs. 17 and 28 focus on rat (or mouse) retinas, which differ from the human retina; Refs. 22, 24, and 31 adopted the Cirrus HD-OCT (Carl Zeiss Meditec, Inc., Dublin, California), the Topcon 3-D OCT-1000 (Topcon Medical Systems, Oakland, New Jersey), and the Spectralis OCT system (Heidelberg Engineering, Heidelberg, Germany), respectively, all commercial systems with better signal-to-noise ratios than our customized system. In this study, as the experts could reliably delineate only 8 layer boundaries given the image quality, we opted to segment these boundaries to demonstrate the effectiveness of the methodology. We believe the proposed methodology can readily be extended to extract more layers once the imaging quality is comparable with that of existing commercial systems.

To conclude, in this paper, an approach for automatic measurement of macular layer thickness from SD-OCT has been proposed and validated. A quantitative evaluation was performed on 21 volumetric datasets covering the usually observed variability, comparing the results with those obtained manually by two experts. For segmenting 8 layer boundaries on each B-scan, the average layer boundary error is below 2.35 μm, smaller than the difference between the two eye experts' manual delineations, with an average processing time of 5.8 s for a volumetric data (128×2048×2048 voxels) and 95 ms for an individual B-scan. The algorithm has been compared with state-of-the-art methods (Refs. 23 and 26) on a common dataset of normal eyes23 and a common dataset of AMD patients26, yielding similar or higher accuracy (Tables 2 and 3) while being 37 to 162 times faster at deriving the 8 or 3 layer boundaries of a B-scan. The proposed method could be a potential tool for quantifying the retinal layer boundaries of normal subjects and AMD patients from SD-OCT. In the future, we plan to extend the methodology to more layer boundaries on one hand, and to more eye diseases, such as diabetic macular edema,51 for computer-assisted diagnosis on the other hand, from SD-OCT images.

Acknowledgments

We would like to thank Dr. Stephanie J. Chiu for providing the datasets introduced in Ref. 23 (normal) and Ref. 26 (pathological). The work was supported by the Key Joint Program of the National Natural Science Foundation and Guangdong Province of China (U1201257), the Shenzhen Key Technical Development Grant (No. CXZZ20140610151856719), and the Introduced Innovative R&D Teams of Guangdong Province of China programs “Robot and Intelligent Information Technology” (Grant No. 201001D0104648280) and “Technologies for Image-guided Bio-simulation Radiotherapy Machinery” (Grant No. 2011S013).

References

1. 

D. Huang et al., “Optical coherence tomography,” Science, 254 (5035), 1178 –1181 (1991). http://dx.doi.org/10.1126/science.1957169 SCIEAS 0036-8075 Google Scholar

2. 

J. G. Fujimoto et al., “Optical coherence tomography (OCT) in ophthalmology: introduction,” Opt. Express, 17 (5), 3978 –3979 (2009). http://dx.doi.org/10.1364/OE.17.003978 OPEXFF 1094-4087 Google Scholar

3. 

M. Shahidi, Z. Wang and R. Zelkha, “Quantitative thickness measurement of retinal layers imaged by optical coherence tomography,” Am. J. Ophthalmol., 139 (6), 1056 –1061 (2005). http://dx.doi.org/10.1016/j.ajo.2005.01.012 AJOPAA 0002-9394 Google Scholar

4. 

H. Ishikawa et al., “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Vis. Sci., 46 (6), 2012 –2017 (2005). http://dx.doi.org/10.1167/iovs.04-0335 IOVSDA 0146-0404 Google Scholar

5. 

D. Cabrera Fernández, H. M. Salinas and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express, 13 (25), 10200 –10216 (2005). http://dx.doi.org/10.1364/OPEX.13.010200 OPEXFF 1094-4087 Google Scholar

6. 

M. Szkulmowski et al., “Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies,” J. Biomed. Opt., 12 (4), 041207 (2007). http://dx.doi.org/10.1117/1.2771569 JBOPFO 1083-3668 Google Scholar

7. 

O. Tan et al., “Mapping of macular substructures with optical coherence tomography for glaucoma diagnosis,” Ophthalmology, 115 (6), 949 –956 (2008). http://dx.doi.org/10.1016/j.ophtha.2007.08.011 OPANEW 0743-751X Google Scholar

8. 

M. Bagci et al., “Thickness profiles of retinal layers by optical coherence tomography image segmentation,” Am. J. Ophthalmol., 146 (5), 679 –687 (2008). http://dx.doi.org/10.1016/j.ajo.2008.06.010 AJOPAA 0002-9394 Google Scholar

9. 

C. Ahlers et al., “Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography,” Br. J. Ophthalmol., 92 197 –203 (2008). http://dx.doi.org/10.1136/bjo.2007.120956 BJOPAL 0007-1161 Google Scholar

10. 

T. Fabritius et al., “Automated segmentation of the macula by optical coherence tomography,” Opt. Express, 17 (18), 15659 –15669 (2009). http://dx.doi.org/10.1364/OE.17.015659 OPEXFF 1094-4087 Google Scholar

11. 

D. Koozekanani, K. L. Boyer and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging, 20 (9), 900 –916 (2001). http://dx.doi.org/10.1109/42.952728 ITMID4 0278-0062 Google Scholar

12. 

K. L. Boyer, A. Herzog and C. Roberts, “Automatic recovery of the optic nervehead geometry in optical coherence tomography,” IEEE Trans. Med. Imaging, 25 (5), 553 –570 (2006). http://dx.doi.org/10.1109/TMI.2006.871417 ITMID4 0278-0062 Google Scholar

13. 

V. J. Srinivasan et al., “Characterization of outer retinal morphology with high-speed, ultrahigh-resolution optical coherence tomography,” Invest. Ophthalmol. Vis. Sci., 49 (4), 1571 –1579 (2008). http://dx.doi.org/10.1167/iovs.07-0838 IOVSDA 0146-0404 Google Scholar

14. 

M. Mujat et al., “Retinal nerve fiber layer thickness map determined from optical coherence tomography images,” Opt. Express, 13 (23), 9480 –9491 (2005). http://dx.doi.org/10.1364/OPEX.13.009480 OPEXFF 1094-4087 Google Scholar

15. 

D. C. Fernández, “Delineating fluid-filled region boundaries in optical coherence tomography images of the retina,” IEEE Trans. Med. Imaging, 24 (8), 929 –945 (2005). http://dx.doi.org/10.1109/TMI.2005.848655 ITMID4 0278-0062 Google Scholar

16. 

A. Mishra et al., “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express, 17 (26), 23719 –23728 (2009). http://dx.doi.org/10.1364/OE.17.023719 OPEXFF 1094-4087 Google Scholar

17. 

A. Yazdanpanah et al., “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging, 30 (2), 484 –496 (2011). http://dx.doi.org/10.1109/TMI.2010.2087390 ITMID4 0278-0062 Google Scholar

18. 

M. A. Mayer et al., “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express, 1 (5), 1358 –1383 (2010). http://dx.doi.org/10.1364/BOE.1.001358 BOEICL 2156-7085 Google Scholar

19. 

M. K. Garvin et al., “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging, 27 (10), 1495 –1505 (2008). http://dx.doi.org/10.1109/TMI.2008.923966 ITMID4 0278-0062 Google Scholar

20. 

M. K. Garvin et al., “Automated 3D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging, 28 (9), 1436 –1447 (2009). http://dx.doi.org/10.1109/TMI.2009.2016958 ITMID4 0278-0062 Google Scholar

21. 

K. Lee et al., “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imaging, 29 (1), 159 –168 (2010). http://dx.doi.org/10.1109/TMI.2009.2031324 ITMID4 0278-0062 Google Scholar

22. 

G. Quellec et al., “Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging, 29 (6), 1321 –1330 (2010). http://dx.doi.org/10.1109/TMI.2010.2047023 ITMID4 0278-0062 Google Scholar

23. 

S. J. Chiu et al., “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express, 18 (18), 19413 –19428 (2010). http://dx.doi.org/10.1364/OE.18.019413 OPEXFF 1094-4087 Google Scholar

24. 

Q. Yang et al., “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Opt. Express, 18 (20), 21293 –21307 (2010). http://dx.doi.org/10.1364/OE.18.021293 OPEXFF 1094-4087 Google Scholar

25. 

Q. Yang et al., “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express, 2 (9), 2493 –2503 (2011). http://dx.doi.org/10.1364/BOE.2.002493 BOEICL 2156-7085 Google Scholar

26. 

S. J. Chiu et al., “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci., 53 (1), 53 –61 (2012). http://dx.doi.org/10.1167/iovs.11-7640 IOVSDA 0146-0404 Google Scholar

27. 

X. Chen et al., “3D segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut,” IEEE Trans. Med. Imaging, 31 (8), 1521 –1531 (2012). http://dx.doi.org/10.1109/TMI.2012.2191302 ITMID4 0278-0062 Google Scholar

28. 

P. P. Srinivasan et al., “Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology,” Biomed. Opt. Express, 5 (2), 348 –365 (2014). http://dx.doi.org/10.1364/BOE.5.000348 BOEICL 2156-7085 Google Scholar

29. 

P. A. Dufour et al., “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging, 32 (3), 531 –543 (2012). http://dx.doi.org/10.1109/TMI.2012.2225152 ITMID4 0278-0062 Google Scholar

30. 

K. A. Vermeer et al., “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express, 2 (6), 1743 –1756 (2011). http://dx.doi.org/10.1364/BOE.2.001743 BOEICL 2156-7085 Google Scholar

31. 

A. Lang et al., “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express, 4 (7), 1133 –1152 (2013). http://dx.doi.org/10.1364/BOE.4.001133 BOEICL 2156-7085 Google Scholar

32. 

G. M. Somfai et al., “Automated classifiers for early detection and diagnosis of retinopathy in diabetic eyes,” BMC Bioinf., 15 (106), 1 –10 (2014). http://dx.doi.org/10.1186/1471-2105-15-106 BBMIC4 1471-2105 Google Scholar

33. 

D. S. Greenfield, H. Bagga and R. W. Knighton, “Macular thickness changes in glaucomatous optic neuropathy detected using optical coherence tomography,” Arch. Ophthalmol., 121 41 –46 (2003). http://dx.doi.org/10.1001/archopht.121.1.41 AROPAW 0003-9950 Google Scholar

34. 

V. Guedes et al., “Optical coherence tomography measurement of macular and nerve fiber layer thickness in normal and glaucomatous human eyes,” Ophthalmology, 110 177 –189 (2003). http://dx.doi.org/10.1016/S0161-6420(02)01564-6 OPANEW 0743-751X Google Scholar

35. 

R. D. Jager, W. F. Mieler and J. W. Miller, “Age-related macular degeneration,” N. Engl. J. Med., 358 2606 –2617 (2008). http://dx.doi.org/10.1056/NEJMra0801537 NEJMBH Google Scholar

36. 

H. W. van Dijk et al., “Selective loss of inner retinal layer thickness in type 1 diabetic patients with minimal diabetic retinopathy,” Invest. Ophthalmol. Visual Sci., 50 3404 –3409 (2009). http://dx.doi.org/10.1167/iovs.08-3143 IOVSDA 0146-0404 Google Scholar

37. 

E. M. Frohman et al., “Optical coherence tomography: a window into the mechanisms of multiple sclerosis,” Nat. Clin. Pract. Neurol., 4 664 –675 (2008). http://dx.doi.org/10.1038/ncpneuro0950 Google Scholar

38. 

S. Saidha et al., “Primary retinal pathology in multiple sclerosis as detected by optical coherence tomography,” Brain, 134 518 –533 (2011). http://dx.doi.org/10.1093/brain/awq346 BRAIAK 0006-8950 Google Scholar

39. 

S. Saidha et al., “Microcystic macular oedema, thickness of the inner nuclear layer of the retina, and disease characteristics in multiple sclerosis: a retrospective study,” Lancet Neurol., 11 963 –972 (2012). http://dx.doi.org/10.1016/S1474-4422(12)70213-2 Google Scholar

40. 

S. Kirbas et al., “Retinal nerve fiber layer thickness in patients with Alzheimer disease,” J. Neuroophthalmol., 33 58 –61 (2013). http://dx.doi.org/10.1097/WNO.0b013e318267fd5f Google Scholar

41. 

M. E. Hajee et al., “Inner retinal layer thinning in Parkinson disease,” Arch. Ophthalmol., 127 737 –741 (2009). http://dx.doi.org/10.1001/archophthalmol.2009.106 AROPAW 0003-9950 Google Scholar

42. 

R. E. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng., 82D 35 –45 (1960). http://dx.doi.org/10.1115/1.3662552 JBAEAI 0021-9223 Google Scholar

43. 

M. Jacob, T. Blu and M. Unser, “Efficient energies and algorithms for parametric snakes,” IEEE Trans. Image Process., 13 (9), 1231 –1244 (2004). http://dx.doi.org/10.1109/TIP.2004.832919 IIPRE4 1057-7149 Google Scholar

44. 

A. Savitzky and M. J. E. Golay, “Smoothing and differentiation of data by simplified least squares procedures,” Anal. Chem., 36 (8), 1627 –1639 (1964). http://dx.doi.org/10.1021/ac60214a047 ANCHAM 0003-2700 Google Scholar

45. 

M. Kass, A. Witkin and D. Terzopoulos, “Snakes: active contour models,” Int. J. Comput. Vision, 1 321 –331 (1988). http://dx.doi.org/10.1007/BF00133570 IJCVEQ 0920-5691 Google Scholar

46. 

D. J. Williams and M. Shah, “A fast algorithm for active contours and curvature estimation,” CVGIP Image Understand., 55 (1), 14 –26 (1992). http://dx.doi.org/10.1016/1049-9660(92)90003-L Google Scholar

47. 

K. M. Lam and H. Yan, “Fast greedy algorithm for active contours,” Electron. Lett., 30 (1), 21 –23 (1994). http://dx.doi.org/10.1049/el:19940040 ELLEAK 0013-5194 Google Scholar

48. 

P. A. Gorry, “General least-squares smoothing and differentiation by the convolution,” Anal. Chem., 62 (6), 570 –573 (1990). http://dx.doi.org/10.1021/ac00205a007 ANCHAM 0003-2700 Google Scholar

49. 

L. S. Lim et al., “Age-related macular degeneration,” Lancet, 379 (9827), 1728 –1738 (2012). http://dx.doi.org/10.1016/S0140-6736(12)60282-7 LANCAO 0140-6736 Google Scholar

50. 

E. Götzinger et al., “Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography,” Opt. Express, 16 (21), 16410 –16422 (2008). http://dx.doi.org/10.1364/OE.16.016410 OPEXFF 1094-4087 Google Scholar

51. 

S. J. Chiu et al., “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express, 6 (4), 1172 –1194 (2015). http://dx.doi.org/10.1364/BOE.6.001172 BOEICL 2156-7085 Google Scholar

Biography

Tianqiao Zhang has been a PhD student at the Shenzhen Institutes of Advanced Technology since 2011. He received his master's degree from Guangxi Normal University of China in 2006. He then worked as a software engineer for Huawei Technologies Co., Ltd., and as a senior image analysis engineer and team leader at Shenzhen Moptim Imaging Technique Co., Ltd. His research interests include medical image processing and machine vision.

Zhangjun Song received his PhD from Tsinghua University in 2007. He then worked as an assistant researcher and was promoted to principal investigator at the Shenzhen Institutes of Advanced Integration Technology, Chinese Academy of Sciences. He also worked as a research fellow at the National University of Singapore for more than one year. His research interests include machine vision and motion control of mobile robots.

Xiaogang Wang has been a resident of ophthalmology at the Shanxi Eye Hospital since 2014. He obtained his MD degree from Shanghai Jiao Tong University. His research activities mainly concern ophthalmic imaging technologies, especially optical coherence tomography and Scheimpflug imaging.

Huimin Zheng is a research assistant at Shenzhen Institutes of Advanced Technology. She received her BS degree in information and computing technology from Guilin University of Electronic Technology in 2010. Her current research interest is medical image processing.

Fucang Jia received his PhD in computer application technology from the Institute of Computing Technology, Chinese Academy of Sciences, in 2004. He then worked as an engineer and senior engineer at Shenzhen Anke High-Tech Co., Ltd. In 2008, he joined the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, as a principal investigator. His research interests include medical image processing and computer-assisted surgery.

Jianhuang Wu received his PhD from Shenyang Institute of Automation, Chinese Academy of Sciences, in 2007. He is currently an associate professor at the Research Laboratory for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. His research interests include virtual surgery, medical visualization, and computer graphics.

Guanglin Li received his PhD from Zhejiang University in 1997. He was a research associate at University of Illinois at Chicago. Then he joined the BioTechPlex Corporation as a senior scientist, and later became a senior research scientist in the Neural Engineering Center for Artificial Limbs at the Rehabilitation Institute of Chicago. He is currently a professor with Shenzhen Institutes of Advanced Technology. His current research interests include neural engineering and computational biomedical engineering.

Qingmao Hu has been a professor at the Shenzhen Institutes of Advanced Technology since 2007. He received his PhD from Huazhong University of Science and Technology in 1990. He then worked as a lecturer and associate professor at Southern Medical University of China, a guest scientist at the University of Bern in Switzerland, and a senior scientist at A*STAR in Singapore. His research interests include medical image processing, pattern recognition, and computer-aided diagnosis.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Tianqiao Zhang, Zhangjun Song, Xiaogang Wang, Huimin Zheng, Fucang Jia, Jianhuang Wu, Guanglin Li, and Qingmao Hu "Fast retinal layer segmentation of spectral domain optical coherence tomography images," Journal of Biomedical Optics 20(9), 096014 (21 September 2015). https://doi.org/10.1117/1.JBO.20.9.096014
Published: 21 September 2015
KEYWORDS: image segmentation, optical coherence tomography, retina, filtering (signal processing), electronic filtering, image filtering, eye