Multiscale vision model for event detection and reconstruction in two-photon imaging data
Alexey R. Brazhe, Claus Mathiesen, Barbara Lind, Andrey Rubin, Martin Lauritzen
Abstract
Reliable detection of calcium waves in multiphoton imaging data is challenging because of the low signal-to-noise ratio and the unpredictability of the time and location of these spontaneous events. This paper describes our approach to calcium wave detection and reconstruction based on a modified multiscale vision model, an object detection framework that thresholds wavelet coefficients, builds hierarchical trees of significant coefficients, and then performs nonlinear iterative partial object reconstruction, applied to two-photon calcium imaging data. The framework is discussed in the context of detection and reconstruction of intercellular glial calcium waves. We extend the framework with a different decomposition algorithm and iterative reconstruction of the detected objects. Comparison with several popular state-of-the-art image denoising methods shows that the multiscale vision model performs similarly in denoising, but provides a better segmentation of the image into meaningful objects, whereas the other methods need to be combined with dedicated thresholding and segmentation utilities.

1. Introduction

Modern imaging techniques, including two-photon microscopy, produce in vivo experimental data of great diversity and volume. New experimental possibilities, such as optogenetics, bring about new scientific questions to address and consequently generate a torrent of data. One of the existing problems is the compromise between a demand for fast sampling to resolve rapid transient processes on one hand and sufficient spatial resolution on the other hand. Although imaging data tend to be collected in high detail, a large portion of it contains background or noise and is discarded downstream in the processing pipeline, while a small fraction is considered the useful “signal.” It is important to robustly identify the useful parts of data in low signal-to-noise ratio (SNR) measurements.

An important feature of the analyzed data is its sparseness or compressibility under some transform, i.e., whether the data can be decomposed in some basis, such as Fourier or wavelet, in which only a few coefficients are significantly different from zero. The notion of sparseness is essential for noise suppression. If transformed data are sparse, one can legitimately assume that only the few large coefficients contain information about the underlying signal, whereas the small-valued coefficients can be attributed to noise. This leads to the idea of thresholding in a transform space for image enhancement.1,2 Because the wavelet transform provides a nearly sparse representation of piecewise smooth images,3 this transform and its variants have become popular in image denoising and signal detection in the transform domain,4–8 although other transforms, such as singular value decomposition and the Radon transform, can also provide sparse representations. In short, the wavelet transform is a multiscale representation of the input data obtained via iterative application of band-pass filters. Wavelet coefficients capture the signal features at different locations and hierarchical spatial resolutions. The multiscale property is particularly useful for denoising because a typical useful signal is sparse (concentrated in a few coefficients at several scales), while noise is distributed homogeneously.

Calcium signaling in astrocytes takes versatile forms, one of which is spatially patterned spreading Ca2+ signals that pervade astroglial networks. Intercellular Ca2+ waves in the astrocytic syncytium can be triggered by electrical and mechanical stimulation,9 by local elevation of extracellular adenosine triphosphate (ATP) levels,10 or by neuronal activity in situ.11 Spontaneous glial calcium signaling is reported to guide axonal growth and cell migration in the developing brain,12–14 and calcium waves may represent a reaction to local tissue damage or other pathology. For instance, the incidence of spontaneous Ca2+ waves is increased with aging and low-oxygen conditions.15 Detection and reconstruction of Ca2+ waves in fluorescence imaging recordings pose challenges for data analysis. The key problem is identifying transient, low-contrast events in large series of images at a low SNR. Conventional data analysis methods can be loosely categorized into region of interest (ROI) analyses, pixel thresholding, statistical component analyses, and multiscale (usually wavelet-based) analyses. ROI analysis and pixel thresholding work particularly well with evoked responses, easily recognizable cellular correlates, relatively low noise, and small datasets. An ROI-based approach is popular primarily because it is often simple to select the ROIs corresponding to neuronal bodies and then process the ROI-averaged fluorescence traces. However, it becomes unwieldy for the analysis of sparse spontaneous events in large datasets with high noise levels, or where a priori selection of regions of interest is impossible, which is the case for Ca2+ waves in the molecular layer of the cerebellum. Independent component analysis (ICA)16 is capable of processing large datasets with sparse spontaneous events, but has some limitations. Specifically, the output of ICA relies on the independence of the analyzed signals and does not preserve the relative amplitude or sign of the detected components; in application to frame series, it does not directly take advantage of local correlations in pixel intensities. Spontaneously occurring glial Ca2+ waves (GCWs) are thus hard to detect and quantify by ROI analysis, pixel thresholding, or ICA.

In this paper, we present an example of a multiscale approach to image and signal processing. We discuss a framework for the multiscale vision model (MVM)17,18 in application to the problem of detection and reconstruction of spontaneous intercellular glial Ca2+ waves. The technique relies on the sparsifying property of the wavelet transform outlined above. Due to the nature of the transform used, MVM is best suited to detecting faint, nearly round structures in noisy images or bright blobs in three-dimensional (3-D) data. Because glial calcium waves are manifested as expanding, nearly isotropic transient fluorescence elevations, MVM performs well in pursuing such structures frame after frame and then stitching overlapping objects together as snapshots of a single GCW event.

2. Multiscale Vision Model for Detection of Glial Calcium Waves

Recently, we adapted a wavelet-based framework, the multiscale vision model, to detect and analyze spontaneous intercellular calcium waves in mouse cerebellum glial cells in vivo.18 In short, this framework amounts to noise rejection by thresholding the wavelet coefficients of the input image, followed by establishing interscale relationships between significant coefficients at different scales of decomposition and partial iterative reconstruction of the detected objects. Originally, this framework was suggested for the analysis of astronomy images19 and employed a redundant wavelet transform with a B-spline basis, also called the "starlet" transform.2 Here, we report two improvements to our previous MVM implementation:18 (1) we increase the tolerance to outlier samples by using a mixed multiscale median/starlet transform, and (2) we implement an iterative partial reconstruction algorithm. To illustrate this improved framework, we use several phantom patterns [Fig. 1(a)] of different shapes, contaminated with additive Gaussian white noise (AGWN) plus a few bright "hot-spot" pixels (amplitude 50× the noise variance, in 0.05% of pixels) [Fig. 1(b)]. The latter can also be described as a small amount of "salt noise" (SN). The framework is summarized in Fig. 4 using an experimental two-photon recording of a spontaneous glial Ca2+ wave in the mouse cerebellum as an example. The MVM framework is compared to other denoising techniques in Fig. 6 for phantoms and in Fig. 7 for the experimental recording.

Fig. 1

Test phantom patterns and their decompositions. (a) Pure patterns, patterns corrupted with additive Gaussian white noise (+AGWN), patterns corrupted with additive Gaussian white noise and salt noise (+AGWN+SN), (b) wavelet coefficients after the five-level starlet transform for the noisy patterns (+AGWN+SN), (c) coefficients of the multiscale median transform of the same pattern, (d) coefficients of the mixed multiscale median/starlet transform of the same pattern.


2.1. Starlet and Multiscale Median Transforms

The starlet transform (also known as the à trous transform) decomposes an original image $I(x,y)$ into a set $\{w_j(x,y)\}$ of two-dimensional (2-D) image details at different scales $j$ (wavelet coefficients) and a smoothed approximation $c_N(x,y)$ at the largest scale:

Eq. (1)

$$ I(x,y) = c_N(x,y) + \sum_{j=1}^{N} w_j(x,y), $$
where $j = 1, \ldots, N$ indexes the decomposition levels corresponding to a hierarchy of spatial scales. As $j$ increases, the coefficient images $w_j$ represent coarser and coarser features of the original image $I$. The starlet decomposition of a noisy phantom image is illustrated in Fig. 1. Wavelet coefficients at consecutive levels are obtained iteratively. First, the original image is taken as the approximation at level 0, $c_0(x,y) = I(x,y)$. Then, the smoothed approximation at level $n+1$ is obtained by convolving the approximation $c_n$ at level $n$ with a low-pass filter:

Eq. (2)

$$ c_{n+1}(x,y) = (c_n \ast h_{n+1})(x,y), $$
and the wavelet coefficients (“details”) are defined as the difference between the subsequent approximations:

Eq. (3)

$$ w_{n+1}(x,y) = c_n(x,y) - c_{n+1}(x,y). $$

The low-pass filter is zero-upsampled by a factor of 2 at each level, leading to interlaced image convolution. In the starlet transform, a discrete filter based on a cubic B-spline is used. In one dimension (1-D), this filter takes the form $h_1 = [1/16, 1/4, 3/8, 1/4, 1/16]$. For 2-D images, one can either use the outer product of two 1-D filters, $h_2 = h_1 \otimes h_1$, or process each dimension separately with the 1-D filter.
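
To make the decomposition concrete, a minimal Python sketch of the à trous starlet transform described above is given below. The function name, default level count, and mirror boundary handling are our own choices rather than details of our implementation; later sketches in this section reuse this helper and numpy.

import numpy as np
from scipy.ndimage import convolve1d

def starlet_transform(image, n_levels=5, first_level=0):
    # A trous (starlet) decomposition; returns [w_1, ..., w_N, c_N].
    # first_level lets the filter start already upsampled (used later when
    # the decomposition is continued from a coarser scale).
    h = np.array([1/16., 1/4., 3/8., 1/4., 1/16.])   # cubic B-spline filter h_1
    c = image.astype(float)
    coeffs = []
    for j in range(first_level, first_level + n_levels):
        step = 2**j                                  # zero-upsampling factor
        hj = np.zeros((len(h) - 1) * step + 1)
        hj[::step] = h
        c_next = convolve1d(c, hj, axis=0, mode='mirror')
        c_next = convolve1d(c_next, hj, axis=1, mode='mirror')
        coeffs.append(c - c_next)                    # detail coefficients w_{j+1}
        c = c_next
    coeffs.append(c)                                 # coarsest approximation c_N
    return coeffs

Summing all returned arrays recovers the input image exactly, which is the reconstruction property expressed by Eq. (1).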

A known drawback of the starlet transform is that a very bright point structure will have responses at many scales instead of being captured only by the finest scale.2 Such small bright structures can be outliers, like occasional "hot pixels" of the charge-coupled device matrix or salt noise. This is illustrated in the starlet decomposition of the phantom image mixed with AGWN and salt noise [Fig. 1(b)], where the bright outlier points influence coefficients up to the fourth level of decomposition. This problem does not exist in the multiscale median transform [Fig. 1(c)]. This transform is organized in the same iterative way as the starlet transform, but the role of low-pass filtering is played by the median filter, with the window size doubling at each level of decomposition. In two dimensions, the median filter of an image $f$ with window size $L \times L$ can be defined as $\mathrm{Med}(f,L): f_{i,k} \mapsto Q_2\{f_{i\pm L, k\pm L}\}$, i.e., each pixel of the image $f$ is replaced by the sample median of the pixel values in the $L \times L$ neighborhood of $f_{i,k}$. The window size changes with the decomposition level $j$ as $L = 4j+1$. The median filter is nonlinear and provides robust smoothing, i.e., it attenuates the effect of outlier samples.2 Hence, one advantage of the multiscale median transform is a better separation of structures across scales, whereas the starlet transform provides a more robust noise estimation at different scales. As a compromise, one can mix the two transforms and use the median or the starlet transform depending on the amplitude of the coefficient. In the original merged median/starlet transform (MST),2 at level $j+1$ one first median-filters the approximation $c_j$ with a window of size $4j+1$:

$$ c_{j+1} = \mathrm{Med}(c_j, 4j+1). $$
Next, the temporary median-transform coefficients are obtained as $w_{j+1} = c_j - c_{j+1}$. The high-intensity coefficients are then defined as those for which $|w_{j+1}| > \tau\,\mathrm{MAD}(w_{j+1})/0.6745$, where MAD stands for the median absolute deviation and serves as a robust estimator of the noise standard deviation, and the threshold $\tau$ is chosen high enough to avoid false detections, usually $\tau = 5$. All high-intensity coefficients in $w_{j+1}$ are set to zero, and a version of $c_j$ with the high-intensity structures removed is formed: $\tilde{c}_j = \tilde{w}_{j+1} + c_{j+1}$, where $\tilde{w}_{j+1}$ denotes the zeroed coefficients. Next, the starlet transform of $\tilde{c}_j$ is computed at $j+1$ scales [Eq. (1)], and its last approximation is finally used as $c_{j+1}$. Thus, $\tilde{c}_j$ has been smoothed with wavelets after all strong features have been removed by median filtering. Because median filtering is computationally intensive and the outlying structures are usually small and thus most prominent at small scales, we simplified the original algorithm by performing the mixed decomposition only at the first two scales and then continuing with the usual starlet transform. Coefficients of such a decomposition for the phantom image are shown in Fig. 1(d): here all the hot-spot pixels are confined to the smallest-scale coefficients, and the structure of the patterns is better preserved. Another example is shown in Fig. 2 for a noisy 1-D signal, which contains a Gaussian with $\sigma = 10$ at $t = 512$ and an outlier sample with an amplitude of 100× the noise standard deviation at $t = 485$. The representation of the Gaussian object in the starlet coefficients at scale $j = 5$ is affected by this sample, while the multiscale median and median/starlet transforms are free from this spurious influence and represent only the object of interest.
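
A sketch of this simplified mixed decomposition (median-based detail extraction at the first two scales, plain starlet afterwards) could look as follows. It reuses starlet_transform from the previous sketch; the threshold value, the number of mixed scales, and the median-window schedule (a doubling window is assumed here) are illustrative choices rather than the exact settings of our implementation.

from scipy.ndimage import median_filter

def mad_sigma(w):
    # robust noise-std estimate via the median absolute deviation
    return np.median(np.abs(w - np.median(w))) / 0.6745

def mixed_mst(image, n_levels=5, n_mixed=2, tau=5.0):
    # Mixed median/starlet transform: median-based details at the first
    # n_mixed scales, ordinary starlet transform for the remaining scales.
    c = image.astype(float)
    details = []
    for j in range(n_mixed):
        L = 4 * 2**j + 1                      # growing median window (assumed schedule)
        c_med = median_filter(c, size=L)
        w_tmp = c - c_med                     # temporary median-transform details
        strong = np.abs(w_tmp) > tau * mad_sigma(w_tmp)
        c_clean = np.where(strong, c_med, c)  # c_j with strong features removed
        # smooth the cleaned approximation with the starlet transform
        c_next = starlet_transform(c_clean, n_levels=j + 1)[-1]
        details.append(c - c_next)            # w_{j+1}
        c = c_next
    # continue with the ordinary starlet transform at the coarser scales
    rest = starlet_transform(c, n_levels=n_levels - n_mixed, first_level=n_mixed)
    return details + rest                     # [w_1, ..., w_N, c_N]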

Fig. 2

Illustration of the influence of outlying samples on the detection of objects in one-dimensional (1-D) signals. From top to bottom: analyzed signal containing white noise, a Gaussian with σ=10 at t=512, and an outlier with an amplitude 100× the noise standard deviation at t=485; starlet coefficients at the fifth level of decomposition; multiscale median transform coefficients at the fifth level; mixed median/starlet transform coefficients at the fifth level. The outlier distorts the representation of the object of interest in the starlet transform, but has no effect on the multiscale median and mixed transforms.


2.2. Significant Wavelet Coefficients

Detection of structures of interest that are significantly brighter than the background should be based on knowledge of the statistical distribution of the wavelet coefficients $\{w_j(x,y)\}$ in the background. We assumed stationary Gaussian white noise to set the significance levels. The statistics of the wavelet coefficients at each level were estimated with Monte Carlo simulations: we obtained the standard deviations $\sigma_1(j)$ of the wavelet coefficients of unit-variance noise at each decomposition level $j$ (the index 1 in $\sigma_1$ denotes unit variance). The obtained $\sigma_1(j)$ values were then used as weighting factors for the arbitrary noise variance of the analyzed images. More details can be found in Ref. 17 and in Chapter 2 of Ref. 20.
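
The Monte Carlo estimation of the per-level noise standard deviations can be sketched as follows, reusing starlet_transform from above; the image size and the number of trials are arbitrary choices.

def noise_levels(n_levels=5, shape=(512, 512), n_trials=10):
    # std of wavelet coefficients of unit-variance white Gaussian noise
    # at each decomposition level, averaged over n_trials realizations
    sigmas = np.zeros(n_levels)
    for _ in range(n_trials):
        noise = np.random.randn(*shape)
        coeffs = starlet_transform(noise, n_levels)
        sigmas += [w.std() for w in coeffs[:-1]]
    return sigmas / n_trials                  # sigma_1(j), j = 1, ..., N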

Knowing the noise standard deviations at different spatial scales allows the definition of significant coefficients for an image $I(k,l)$ by thresholding at $k\sigma(j)$ at each level: if $w_j(x,y) > k\sigma(j)$, the coefficient is considered significant and possibly belonging to a bright object. Because we are interested in elevations of [Ca2+]i, only positive coefficients, corresponding to luminous sources, are tested. The choice of $k$ can be varied for optimal performance; in the figures below we used $k = 3.3$, which corresponds to the 99.95th percentile of a Gaussian distribution.
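
Given these weights, per-scale significance masks can be obtained as in the following sketch (reusing the helpers above). How the overall noise level of a particular image is estimated is not fixed by the description above, so the MAD-based estimate from the finest-scale coefficients used here is our assumption.

def significant_masks(image, n_levels=5, k=3.3, sigma1=None):
    # boolean masks of significant (positive) coefficients at each level
    coeffs = starlet_transform(image, n_levels)
    if sigma1 is None:
        sigma1 = noise_levels(n_levels, image.shape)
    # robust estimate of the image noise std from the finest-scale details
    sigma_img = mad_sigma(coeffs[0]) / sigma1[0]
    return [w > k * s * sigma_img for w, s in zip(coeffs[:-1], sigma1)]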

2.3. Interscale Relationship and Object Reconstruction

Image $I(x,y)$ can be modeled as a composition of $N_o$ objects $O_i$, a smooth background $B$, and noise $N$:

Eq. (4)

$$ I(x,y) = \sum_{i=1}^{N_o} O_i(x,y) + B(x,y) + N(x,y). $$

To recover the objects $O_i(x,y)$ in a given image, we use contiguous regions of significant coefficients at each scale and establish their interscale connectivity relationships. Let a structure $S_{j,l}$ be a set of $p$ connected significant wavelet coefficients at scale $j$:

Eq. (5)

$$ S_{j,l} = \{w_j(x_1,y_1), w_j(x_2,y_2), \ldots, w_j(x_p,y_p)\}, $$
where $(x_i, y_i)$ are the coordinates of the $i$'th coefficient included in the structure. An example of such structures is given in Fig. 3(a), where the positions of significant median/starlet coefficients from Fig. 1(d) are shown in red. An object can be defined via a set of $n$ structures at several different levels:

Eq. (6)

$$ O_i^w = \{S_{j_1,l_1}, \ldots, S_{j_n,l_n}\}, $$
and the real object $O_i$ can be reconstructed from its wavelet representation $O_i^w$. The structures in this set are hierarchically connected. Two structures at successive levels, $S_{j,k}$ and $S_{j+1,m}$, are connected if the position $(x_m, y_m)$ of the maximum wavelet coefficient of $S_{j,k}$ is also contained in $S_{j+1,m}$. Some significant structures can also result from noise. Such structures are typically isolated, i.e., not connected to any structure at a lower or higher level, and are discarded by the algorithm. Indeed, any structure can be connected to only one structure at the next higher level but possibly to several structures at the lower level, which results in a branched, tree-like connectivity graph. Such connectivity trees are illustrated in Fig. 3(b), where each tree is shown in a different color and coefficients at different scales are overlaid. We refer to these connectivity trees by their root nodes, i.e., the structures at the largest scale.
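
A sketch of how this interscale connectivity can be established from the per-scale significance masks is given below, using scipy.ndimage.label for the contiguous structures and reusing numpy from the sketches above; the function and variable names are ours.

from scipy import ndimage

def interscale_links(coeffs, masks):
    # Label contiguous significant structures at each scale and connect a
    # structure S_{j,k} to the structure at scale j+1 that contains the
    # position of its maximum wavelet coefficient.
    labels = [ndimage.label(m)[0] for m in masks]
    links = {}
    for j in range(len(masks) - 1):
        for lab in range(1, labels[j].max() + 1):
            inside = labels[j] == lab
            w_masked = np.where(inside, coeffs[j], -np.inf)
            x, y = np.unravel_index(np.argmax(w_masked), w_masked.shape)
            parent = labels[j + 1][x, y]
            if parent > 0:                    # connected at the coarser scale
                links[(j, lab)] = (j + 1, parent)
    return labels, links

Structures that end up with no link to any coarser or finer structure are the isolated ones mentioned above and are discarded; the remaining links form the trees used for object definition and deblending.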

Fig. 3

Iterative reconstruction of the detected structures. (a) Masks of significant (shown in red) starlet/wavelet coefficients for the noisy image shown in Fig. 1, (b) labeled interscale trees of the significant starlet/wavelet coefficients, (c) $L_2$ norm of the difference between the significant coefficients and the decomposition of the reconstructed image, $E = \sum_i \|M_i(O_i^w - T_w X)\|_2^2$, for decomposition and reconstruction using the starlet transform (black circles), decomposition and reconstruction using the combined median/starlet transform (MST, blue diamonds), and decomposition using MST with reconstruction using the starlet transform (green triangles), (d) noisy image (+AGWN+SN), reconstructions using the different variants shown in (c), and the residuals after subtracting the reconstructed image from the input image.


When there are several close objects, or when a smaller object is overlaid on a bigger one, the objects can become entangled in one connectivity tree and should be deblended. A structure $S_{j,k}$ is detached from the tree as a new root node if there exists at least one other structure at the same level belonging to the same tree and the following condition is fulfilled: $w_{j-1}^m < w_j^m > w_{j+1}^m$. Here, $w_j^m$ is the maximum wavelet coefficient of $S_{j,k}$; $w_{j-1}^m = \max\{S_{j-1,l}\}$, where $S_{j-1,l}$ is the structure connected to $S_{j,k}$ whose maximum wavelet coefficient lies closest to the maximum of $S_{j,k}$ (if $S_{j,k}$ is not connected to any structure at scale $j-1$, then $w_{j-1}^m = 0$); and $w_{j+1}^m = \max\{w_{j+1}(x,y) \,|\, w_j(x,y) \in S_{j,k}\}$, i.e., the maximum wavelet coefficient at scale $j+1$ whose location belongs to $S_{j,k}$. Recursive application of this deblending procedure to the connectivity tree yields objects as independent structures of significant wavelet coefficients. Partial reconstruction of these objects as images is a nontrivial task. The simplest solution is to perform the inverse wavelet transform for each object, setting all wavelet coefficients not belonging to the object to zero.

In our previous study,18 we looked for a compromise between computational speed and reconstruction accuracy. For the sake of computational speed, we reconstructed objects simply as the inverse transform $R_w$ of the wavelet coefficients representing the object. Here, we improve our framework with an iterative reconstruction scheme.2 The idea is to search for an image that will, under the multiscale transform (e.g., starlet), produce coefficients as close as possible to those of $O_i^w$. Thus, we minimize $E = \|M_i(O_i^w - T_w X)\|_2^2$ by varying the reconstruction image $X$, where $T_w$ is the wavelet transform operator and $M_i$ is the multiresolution support of $O_i^w$, i.e., it is one where $O_i^w$ is nonzero and zero otherwise. The solution is found iteratively:

Eq. (7)

$$ X_{n+1} = X_n + \alpha\, R_w M_i (O_i^w - T_w X_n), $$
where $R_w$ is the inverse transform operator, $X_1 = R_w O_i^w$, and the step size $\alpha$ can be damped at each iteration to ensure stability of the solution. Reconstruction convergence can be seen from the decay of $E$ with the iteration number in Fig. 3(c) for the starlet transform (black points) and the median/starlet transform (blue diamonds). It is implied that the transform operator $T_w$ used for reconstruction in Eq. (7) is the same as the one used to obtain $O_i^w$ and $M_i$, but this is not necessary. Noting that the median/starlet transform is computationally slower than the starlet transform and must be applied multiple times in the iterative reconstruction algorithm, we tried using the starlet transform as $T_w$ in Eq. (7) (the reconstruction operator $R_w$ is the same for the two transforms). This trick is justified by the observation that the starting image for reconstruction, $X_1 = R_w O_i^w$, is unlikely to contain any salt noise or outlier point structures, as these should have been rejected at the MST decomposition stage. This scheme led to surprisingly good results, shown in Fig. 3(c) (green triangles): the convergence is faster and better than for the starlet-starlet decomposition/reconstruction pair, and the reconstruction process is much less computationally intensive than for the MST-MST pair.
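
A sketch of the iterative reconstruction of Eq. (7) is given below, assuming the starlet transform for $T_w$ and reusing starlet_transform from above. For the redundant transforms used here, the inverse $R_w$ reduces to a sum over scales; the number of iterations, the fixed step size, and the positivity projection are illustrative assumptions rather than the exact settings of our implementation.

def reconstruct_object(obj_coeffs, support, n_levels=5, n_iter=20, alpha=1.0):
    # obj_coeffs: array (n_levels, H, W) of the object's wavelet coefficients
    # (zero outside the object); support: boolean multiresolution support M_i
    def R_w(coeffs):
        return np.sum(coeffs, axis=0)         # inverse transform: sum over scales

    X = R_w(obj_coeffs)                       # starting image X_1 = R_w O_i^w
    for _ in range(n_iter):
        TX = np.asarray(starlet_transform(X, n_levels)[:-1])   # T_w X (details)
        residual = support * (obj_coeffs - TX)
        X = X + alpha * R_w(residual)
        X[X < 0] = 0                          # optional positivity constraint (assumed)
    return X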

The reconstruction results for the three decomposition/reconstruction pairs are shown in Fig. 3(d). Note that each pattern was reconstructed independently, but all individual object reconstructions are summed into one image for clarity of presentation. The starlet-starlet variant retains the point structures along with the patterns, while the schemes involving the median/starlet transform reject this type of noise. Although the MST-MST scheme enhances the object contrast, the residual image (input minus reconstruction) shows that it does not preserve the initial object intensity, while the residual image for the MST-starlet scheme contains almost exclusively noise and little if any trace of the original patterns. This, together with the lower computational load, suggests the MST-starlet decomposition/reconstruction pair as the scheme of choice for the task of glial Ca2+ wave detection and reconstruction.

The full MVM framework with the mixed median/starlet decomposition and iterative reconstruction, used to detect and recover spontaneous glial calcium waves, is summarized in Fig. 4. For each frame of the normalized imaging data, we perform the median/starlet decomposition and, after thresholding, find contiguous areas of significant coefficients. These structures are linked into interscale connectivity trees (connected structures belonging to one object are shown in the same color in the figure). Based on these connectivity trees, iterative reconstruction using the starlet transform is done individually for each tree according to Eq. (7) (shown in different colors as "significant structures in the kth frame" in the figure). After this procedure has been carried out for all frames, the individual objects contained in each frame are stored. Overlapping objects in neighboring frames are then linked into an XYT 3-D representation of the evolution of distinct Ca2+ signaling events, shown as colored 3-D blobs in the scheme. Clearly, not all Ca2+ signaling events are glial Ca2+ waves, but after the detection and 3-D reconstruction stages, one can identify events of different types manually or with any automated sorting scheme.
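
The linking of per-frame objects into XYT events can be sketched with a simple union-find over spatially overlapping masks; the data layout (a list of boolean object masks per frame) and the function names are our own choices.

import numpy as np

def link_events(frame_masks):
    # frame_masks: list over frames, each a list of boolean 2-D object masks;
    # two objects in neighboring frames belong to the same event if they overlap
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path compression
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    for t, objs in enumerate(frame_masks):
        for k in range(len(objs)):
            parent[(t, k)] = (t, k)
    for t in range(len(frame_masks) - 1):
        for k, m in enumerate(frame_masks[t]):
            for k2, m2 in enumerate(frame_masks[t + 1]):
                if np.any(m & m2):            # spatial overlap => same event
                    union((t, k), (t + 1, k2))
    events = {}
    for key in parent:                        # group objects by their root
        events.setdefault(find(key), []).append(key)
    return list(events.values())              # each event: list of (frame, object)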

Fig. 4

Scheme of the MVM protocol for a frame sequence: significant bright objects are detected and reconstructed in each frame using the 2-D MVM, followed by linking of the objects in neighboring frames; finally, the time evolution of each detected Ca2+ signaling event is reconstructed independently.


2.4. Comparison to Other Denoising Techniques

We compared the MVM framework with our modifications to other modern denoising techniques, although an exhaustive survey of denoising and object detection techniques is beyond the scope of this study. Figure 5(a) shows how the PSNR of the denoised images changes with the PSNR of the noise-corrupted images. Here, we compare the MVM, two algorithms for total variation (TV) denoising, which searches for an image approximation with minimal TV (the integral of the absolute gradient), and a bilateral filter.21 TV denoising was used in two variants: the Chambolle algorithm22 and the split-Bregman algorithm.23 The TV and bilateral filtering implementations were taken from the open-source scikit-image Python library.24 Because the maximum value of the pure phantom image was 1, the PSNR was calculated as $20\log_{10}(1/\sqrt{\mathrm{MSE}})$, where MSE is the mean square error between the tested image $f$ and the pure phantom image $g$, each of size $N \times N$: $\mathrm{MSE} = (1/N^2)\sum_{k,l}[f(k,l) - g(k,l)]^2$. With AGWN, the MVM resulted in a better PSNR of the recovered images at very low SNRs of the input images and performed similarly [Fig. 5(a), left pane] to the TV/Chambolle algorithm at higher input PSNRs, although the MVM displayed a much higher performance scatter than the other algorithms in that regime. The better performance of the MVM at very low PSNR of the noisy images likely results from background rejection in the MVM and from the exclusive reconstruction of only the detected structures. The addition of a small amount of salt noise (outlier pixels) did not affect the MVM performance, but deteriorated the performance of the other algorithms [Fig. 5(a), right pane]. While these denoising algorithms are not optimized for outlier measurements, preprocessing the input images with a filter dedicated to salt-noise removal, e.g., an adaptive median filter, would have reverted the results to the simple AGWN case. Figure 5(b) visualizes the difference between the denoising provided by the TV/Chambolle and the MVM approaches at different PSNR values of the degraded phantom image. Comparison of the images denoised by the two algorithms supports the observation that the higher PSNR values achieved by the MVM are due to setting the background to zero.
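
The PSNR used for this comparison can be computed as in the following sketch. The scikit-image calls shown are the standard denoise_tv_chambolle and denoise_bilateral functions, but the weight and sigma values are illustrative, not the parameters used for Fig. 5, and the variables noisy and phantom are assumed to hold the degraded and the clean test images.

import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_bilateral

def psnr(denoised, reference, max_val=1.0):
    # peak signal-to-noise ratio in dB for images scaled to [0, max_val]
    mse = np.mean((denoised - reference)**2)
    return 20 * np.log10(max_val / np.sqrt(mse))

# example usage on a noisy phantom (variables noisy and phantom assumed)
print(psnr(denoise_tv_chambolle(noisy, weight=0.1), phantom))
print(psnr(denoise_bilateral(noisy, sigma_spatial=3), phantom))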

Fig. 5

Performance of the MVM and other state-of-the-art denoising techniques at different PSNR levels of the degraded phantom image shown in Fig. 1(a). (a) Resulting PSNR of images denoised with the MVM (green), two total variation (TV) techniques with the Chambolle (blue) and Bregman (yellow) algorithms, and a bilateral filter (pink), as a function of the PSNR of the input (degraded) images; input images are mixed with AGWN (left pane) or AGWN+SN (right pane). Shaded areas around the curves span the 0.05 to 0.95 quantiles of the observed PSNR values, and continuous lines are median values (measured over 200 runs), (b) visual comparison between the TV/Chambolle and the MVM denoising performance at different levels of input PSNR (after mixing with AGWN). Grayscale values are rescaled to the [0,1] interval for each image independently.


An important feature of the MVM is its built-in ability to detect and segment structures in an input image. Figure 6(a) presents the results of denoising the noisy test phantom with TV/Chambolle, TV/Bregman, and the SURE-LET wavelet-based algorithm.25 SURE-LET denoising was done using the MATLAB® code provided by Luisier and colleagues.26 To detect individual objects in the output of the alternative denoising algorithms, we performed Otsu thresholding27 and labeled the resulting contiguous areas as individual objects. Because these denoising techniques did not effectively suppress the salt noise present in the input phantom, we had to apply adaptive median filtering to the denoised images prior to Otsu segmentation. The results are presented in Fig. 6(b), where each object is shown in a different color. The object separation in the MVM matches the ground-truth object composition, while the alternative methods tend to split some of the patterns into multiple objects; some false-positive objects are also detected. It is likely that the thresholding and segmentation steps for the other denoising methods could be tuned to produce more satisfactory results, but in our view, the power of the MVM is that this object separation comes out of the box. TV/Chambolle produced the best results among the alternative denoising approaches and was chosen for further comparison of algorithm performance.
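
The Otsu-based segmentation applied to the outputs of the other denoising methods amounts to the following sketch; threshold_otsu and label are standard scikit-image/scipy calls, and the wrapper function name is ours.

from skimage.filters import threshold_otsu
from scipy import ndimage

def segment_objects(denoised):
    # binarize with Otsu's threshold and label contiguous regions as objects
    mask = denoised > threshold_otsu(denoised)
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects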

Fig. 6

Comparison of the performance of the MVM and other denoising techniques on the noisy test phantom shown in Fig. 1(a). Denoising results for two total variation (TV) minimization techniques with the Chambolle and Bregman algorithms, a bilateral filter, and the SURE-LET wavelet-based algorithm (a), and object segmentation and labeling results for the corresponding algorithms (b); segmentation is part of the MVM, while for the other algorithms it was performed by labeling contiguous areas after binarization with automatic Otsu thresholding.


Figure 7 shows the results of TV/Chambolle denoising and of the MVM for experimental data containing several glial Ca2+ waves co-occurring in the field of view (FOV). In short, we imaged spontaneous glial Ca2+ waves in mouse cerebellum in vivo under ketamine anesthesia using a two-photon microscope and the Oregon Green BAPTA-1/AM dye. In the top row, every fourth frame of the original normalized frame sequence is shown (the time interval between the shown frames is 2.9 s). Fluorescence values in each pixel were normalized to their respective standard deviation after subtraction of the mean value. Experimental details are given in Refs. 15 and 18. Because we do not have ground truth for the events in the experimental data, we compared the denoising and detection results to operator-detected events. The locations of the five glial Ca2+ waves in the FOV, as detected by the operator in the ΔF/σF data, are shown as circles in the fourth column of images. TV/Chambolle denoising followed by Otsu thresholding and zeroing of pixel values below the threshold (middle row) clearly improves the contrast of the images; however, there are also numerous above-threshold pixels that do not belong to any wave visible in the FOV. In contrast, the MVM (bottom row) provides a cleaner view of the data, recovering the areas of elevated Ca2+ corresponding to glial Ca2+ waves, while rejecting background and providing for the recognition of individual objects.
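
The pixelwise normalization used for the data in Fig. 7 corresponds to the following sketch, assuming the frame sequence is held in a T × H × W array; the function name is ours.

def normalize_stack(frames):
    # (F - mean) / std per pixel over time, i.e., the dF/sigma_F representation
    mu = frames.mean(axis=0)
    sd = frames.std(axis=0)
    return (frames - mu) / sd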

Fig. 7

Normalized fluorescence data (top row), results of TV/Chambolle denoising followed by Otsu thresholding and rejection of pixels below threshold (middle row), and results of the MVM reconstruction (bottom row). Objects overlapping in neighboring frames are attributed to one signaling event and are shown in the same color code. Shown is every fourth frame of the original sequence (same data as in Fig. 4); there are several co-occurring calcium waves in the field of view (FOV).


3. Conclusion

Our aim in this work was to illustrate the utility of multiscale transforms for the extraction of useful signals from two-photon laser scanning microscopy imaging data. Our main application was the detection and reconstruction of spontaneous glial Ca2+ waves using the multiscale vision model approach. In comparison to our previously reported MVM implementation,18 we extended the technique by using the more robust mixed multiscale median/starlet transform for image decomposition and implemented an iterative scheme for nonlinear image restoration from the wavelet coefficients representing an object of interest.

With this technique we were able to detect and reconstruct areas of Ca2+ elevation corresponding to glial calcium waves and other forms of Ca2+ signaling in each of the recorded image frames. Performed over all frames, this procedure allows us to recover the whole dynamics of individual calcium waves by linking overlapping objects in neighboring frames. The mixed multiscale median/starlet transform allowed for suppression of both Gaussian noise and outlying samples such as salt noise, while iterative object reconstruction from the significant wavelet coefficients and the multiscale support allowed for a nearly perfect representation of the patterns, with the difference between the source image and the object reconstruction containing almost nothing but noise.

With only minor modifications, MVM should be extendable to other applications, such as quantification of blood vessel cross-sectional areas, since penetrating vessels also appear as nearly circular structures in typical two-photon brain imaging data. There is obviously more to neuroimaging data than roundish calcium waves or vessel cross sections, and some native structures, such as Purkinje cell dendrites, are markedly anisotropic. Representation of such structures should benefit from using, for example, curvelet transforms.6 In general, it is probably a winning strategy to not limit the analysis to a single transform, but to combine the significant coefficients from several types of transforms, creating over-redundant dictionaries for morphological component analysis,2 which would allow for high flexibility in data representation.

Acknowledgments

We thank Micael Lønstrup for providing excellent surgical assistance. This study was supported by the NORDEA Foundation/Center for Healthy Aging, the Lundbeck Foundation via the Lundbeck Foundation Center for Neurovascular Signaling (LUCENS), the NOVO-Nordisk Foundation, and the Danish Medical Research Council.

References

1. F. Luisier, T. Blu, and M. Unser, "Image denoising in mixed Poisson-Gaussian noise," IEEE Trans. Image Process. 20(3), 696–708 (2011). http://dx.doi.org/10.1109/TIP.2010.2073477
2. J.-L. Starck, F. Murtagh, and M.-J. Fadili, Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press, Cambridge, UK (2010).
3. S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, 3rd ed., Academic Press, Burlington, MA (2008).
4. B. Bathellier et al., "Wavelet-based multi-resolution statistics for optical imaging signals: application to automated detection of odour activated glomeruli in the mouse olfactory bulb," Neuroimage 34(3), 1020–1035 (2007). http://dx.doi.org/10.1016/j.neuroimage.2006.10.038
5. J.-L. Starck, M. Elad, and D. Donoho, "Redundant multiscale transforms and their application for morphological component separation," Adv. Imaging Electron Phys. 132, 287–348 (2004). http://dx.doi.org/10.1016/S1076-5670(04)32006-9
6. J.-L. Starck, E. J. Candès, and D. L. Donoho, "The curvelet transform for image denoising," IEEE Trans. Image Process. 11(6), 670–684 (2002). http://dx.doi.org/10.1109/TIP.2002.1014998
7. F. V. Wegner et al., "Fast XYT imaging of elementary calcium release events in muscle with multifocal multiphoton microscopy and wavelet denoising and detection," IEEE Trans. Med. Imaging 26(7), 925–934 (2007). http://dx.doi.org/10.1109/TMI.2007.895471
8. T. Blu and F. Luisier, "The SURE-LET approach to image denoising," IEEE Trans. Image Process. 16(11), 2778–2786 (2007). http://dx.doi.org/10.1109/TIP.2007.906002
9. E. Scemes and C. Giaume, "Astrocyte calcium waves: what they are and what they do," Glia 54(7), 716–725 (2006). http://dx.doi.org/10.1002/(ISSN)1098-1136
10. T. M. Hoogland et al., "Radially expanding transglial calcium waves in the intact cerebellum," Proc. Natl. Acad. Sci. U. S. A. 106(9), 3496–3501 (2009). http://dx.doi.org/10.1073/pnas.0809269106
11. J. W. Dani, A. Chernjavsky, and S. J. Smith, "Neuronal activity triggers calcium waves in hippocampal astrocyte networks," Neuron 8(3), 429–440 (1992). http://dx.doi.org/10.1016/0896-6273(92)90271-E
12. J. Hung and M. A. Colicos, "Astrocytic Ca2+ waves guide CNS growth cones to remote regions of neuronal activity," PLoS One 3(11), e3692 (2008). http://dx.doi.org/10.1371/journal.pone.0003692
13. K. Kanemaru et al., "Regulation of neurite growth by spontaneous Ca2+ oscillations in astrocytes," J. Neurosci. 27(33), 8957–8966 (2007). http://dx.doi.org/10.1523/JNEUROSCI.2276-07.2007
14. T. A. Weissman et al., "Calcium waves propagate through radial glial cells and modulate proliferation in the developing neocortex," Neuron 43(5), 647–661 (2004). http://dx.doi.org/10.1016/j.neuron.2004.08.015
15. C. Mathiesen et al., "Spontaneous calcium waves in Bergman glia increase with age and hypoxia and may reduce tissue oxygen," J. Cereb. Blood Flow Metab. 33(2), 161–169 (2013). http://dx.doi.org/10.1038/jcbfm.2012.175
16. E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer, "Automated analysis of cellular signals from large-scale calcium imaging data," Neuron 63(6), 747–760 (2009). http://dx.doi.org/10.1016/j.neuron.2009.08.009
17. F. Rue and A. Bijaoui, "A multiscale vision model to analyse field astronomical images," Exp. Astron. 7(3), 129–160 (1997). http://dx.doi.org/10.1023/A:1007984321129
18. A. Brazhe, C. Mathiesen, and M. Lauritzen, "Multiscale vision model highlights spontaneous glial calcium waves recorded by 2-photon imaging in brain tissue," Neuroimage 68, 192–202 (2013). http://dx.doi.org/10.1016/j.neuroimage.2012.11.024
19. A. Bijaoui and F. Rué, "A multiscale vision model adapted to the astronomical images," Signal Process. 46(3), 345–362 (1995). http://dx.doi.org/10.1016/0165-1684(95)00093-4
20. J.-L. Starck and F. Murtagh, Astronomical Image and Data Analysis, Springer, Berlin (2006).
21. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. Int. Conf. Computer Vision, 839–846 (1998).
22. A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis. 20, 89–97 (2004). http://dx.doi.org/10.1023/B:JMIV.0000011320.81911.38
23. T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM J. Imaging Sci. 2(2), 323–343 (2009). http://dx.doi.org/10.1137/080725891
24. S. van der Walt et al., "scikit-image: image processing in Python," PeerJ 2, e453 (2014). http://dx.doi.org/10.7717/peerj.453
25. F. Luisier and T. Blu, "SURE-LET multichannel image denoising: interscale orthonormal wavelet thresholding," IEEE Trans. Image Process. 17(4), 482–492 (2008). http://dx.doi.org/10.1109/TIP.2008.919370
26. F. Luisier and T. Blu, "SURE-LET Wavelet Denoising," http://bigwww.epfl.ch/demo/suredenoising/ (accessed August 2014).
27. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). http://dx.doi.org/10.1109/TSMC.1979.4310076

Biography

Alexey Brazhe is a senior researcher in the Department of Biophysics, Biological faculty, Moscow State University. His research interests include mathematical modeling and multiscale signal and image analysis tools in application to neuroscience. He received his PhD degree in 2006 from Moscow State University.

Claus Mathiesen is a postdoctoral fellow at Translational Neurobiology, University of Copenhagen. He received his PhD degree in 1990 from the University of Copenhagen and studies blood-brain-barrier mechanism, vascular and metabolic coupling to astrocytes, and neuronal activity in the aging brain.

Barbara Lykke Lind is a postdoctoral fellow in the Department of Neuroscience and Pharmacology at the University of Copenhagen, Denmark. She is working in Martin Lauritzen’s group studying intracellular Ca2+ activities in astrocytes and neurons and the implications for the neurovascular coupling. She completed her PhD degree in 2014 at the University of Copenhagen.

Andrey Rubin has been the head of the Department of Biophysics, Biological faculty, Moscow State University since 1976, and chair of the National Committee for Biophysics, Russian Academy of Science.

Martin Lauritzen is a professor of clinical neurophysiology and head of the Department of Clinical Neurophysiology at Glostrup Hospital, and professor of translational neurobiology at the Department of Neuroscience and Pharmacology at the University of Copenhagen, Denmark. His research interests focus on the mechanisms coupling neuroglial activity to cerebral blood flow and energy metabolism, the underlying mechanisms for brain imaging, cortical spreading depression, and mechanisms of aging.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Alexey R. Brazhe, Claus Mathiesen, Barbara Lind, Andrey Rubin, and Martin Lauritzen "Multiscale vision model for event detection and reconstruction in two-photon imaging data," Neurophotonics 1(1), 011012 (27 August 2014). https://doi.org/10.1117/1.NPh.1.1.011012
Published: 27 August 2014
KEYWORDS: Calcium, Data modeling, Wavelets, Denoising, Visual process modeling, Reconstruction algorithms, Two photon imaging
