We examine the question of how a population of independently noisy sensory neurons should be configured to
optimize the encoding of a random stimulus into sequences of neural action potentials. For the case where firing
rates are the same in all neurons, we consider the problem of optimizing the noise distribution for a known
stimulus distribution, and the converse problem of optimizing the stimulus for a given noise distribution. This
work is related to suprathreshold stochastic resonance (SSR). It is shown that, for a large number of neurons,
the SSR model is equivalent to a single rate-coding neuron with multiplicative output noise.
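The pooled encoding described here is simple enough to reproduce numerically. The sketch below is a minimal Monte Carlo illustration under assumed parameters (Gaussian stimulus, zero-mean Gaussian noise in each neuron, a common threshold; constants such as N_NEURONS are illustrative), not the paper's exact calculation: it estimates the mutual information between the stimulus and the pooled spike count.

```python
# Minimal sketch of an SSR / pooling model: N identical threshold neurons with
# independent noise, whose binary outputs are summed into a count n.
import numpy as np
from scipy.stats import binom, norm

rng = np.random.default_rng(0)
N_NEURONS = 63        # population size (illustrative)
SIGMA_NOISE = 1.0     # std of the independent noise in each neuron
THETA = 0.0           # common threshold, identical across the population
N_SAMPLES = 50_000

x = rng.normal(0.0, 1.0, N_SAMPLES)                    # stimulus samples

# Output entropy H(n): n counts how many noisy inputs exceed the threshold.
noise = rng.normal(0.0, SIGMA_NOISE, (N_SAMPLES, N_NEURONS))
n = (x[:, None] + noise > THETA).sum(axis=1)
p_n = np.bincount(n, minlength=N_NEURONS + 1) / N_SAMPLES
H_n = -np.sum(p_n[p_n > 0] * np.log2(p_n[p_n > 0]))

# Conditional entropy H(n|x): given x the count is binomial with firing
# probability P(x + noise > theta), averaged over the stimulus samples.
p_fire = norm.sf(THETA - x, scale=SIGMA_NOISE)
k = np.arange(N_NEURONS + 1)
pmf = binom.pmf(k[None, :], N_NEURONS, p_fire[:, None])
with np.errstate(divide="ignore", invalid="ignore"):
    H_n_given_x = -np.where(pmf > 0, pmf * np.log2(pmf), 0.0).sum(axis=1).mean()

print(f"I(X; n) ~ {H_n - H_n_given_x:.3f} bits at noise std {SIGMA_NOISE}")
```

Sweeping SIGMA_NOISE in this sketch shows the characteristic SSR behaviour: the estimated information peaks at a non-zero noise level.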
KEYWORDS: Neurons, Monte Carlo methods, Stochastic processes, Sensors, Interference (communication), Data processing, Error analysis, Signal processing, Quantization, Psychophysics
Pooling networks are composed of independent neurons that all noisily process the same information in
parallel. The output of each neuron is summed into a single overall output by a fusion center. In this paper we study
such a network in a detection or discrimination task. It is shown that if the network is not properly matched to
the symmetries of the detection problem, the internal noise may at least partially restore some form of optimality.
This is shown both for (i) noisy threshold neuron models and (ii) Poisson neuron models. We also study an
optimized version of the network, mimicking the notion of excitation/inhibition. We show that, when properly
tuned, the network may reach optimality in a very robust way. Furthermore, we find in this optimization that
some neurons remain inactive. The pattern of inactivity is organized in a strange branching structure, the
meaning of which remains to be elucidated.
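To make the noise-restored-optimality idea concrete, here is a minimal sketch assuming two equiprobable signal levels, Gaussian internal noise in each threshold neuron, and a simple count-threshold fusion rule; the mismatched threshold value and other constants are illustrative, and this is not the paper's exact network.

```python
# Detection by a pooling network whose common threshold is mismatched
# (set above both signal levels): error probability versus internal noise.
import numpy as np
from scipy.stats import binom, norm

N = 15                      # neurons in the pool
s0, s1 = 0.0, 1.0           # the two signal levels to discriminate
theta = 2.0                 # mismatched threshold, above both signal levels

def error_prob(sigma):
    p0 = norm.sf(theta - s0, scale=sigma)     # firing probability under H0
    p1 = norm.sf(theta - s1, scale=sigma)     # firing probability under H1
    best = 1.0
    for kt in range(N + 1):                   # decide H1 when the count exceeds kt
        pe = 0.5 * binom.sf(kt, N, p0) + 0.5 * binom.cdf(kt, N, p1)
        best = min(best, pe)
    return best

for sigma in (0.01, 0.3, 1.0, 3.0):
    print(f"internal noise sigma = {sigma:4.2f}:  P(error) ~ {error_prob(sigma):.3f}")
```

With the threshold above both signal levels, the error probability is close to chance at very low noise and passes through a minimum at an intermediate noise level, i.e. the internal noise partially restores performance.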
Cochlear implants are prosthetic devices used to provide hearing to people who would otherwise be profoundly deaf.
The deliberate addition of noise to the electrode signals could increase the amount of information transmitted, but
standard cochlear implants do not replicate the noise characteristics of normal hearing, because adding noise in an
uncontrolled manner with a limited number of electrodes will almost certainly degrade performance. Mechanisms such
as suprathreshold stochastic resonance can be effective only if partially independent stochastic activity can be
achieved in each nerve fibre.
We are investigating the use of stochastic beamforming to achieve greater independence. The strategy involves
presenting each electrode with a linear combination of independent Gaussian noise sources. Because the cochlea is filled
with conductive salt solutions, the noise currents from the electrodes interact and the effective stimulus for each nerve
fibre will therefore be a different weighted sum of the noise sources. To some extent, then, the effective stimulus for
a nerve fibre will be independent of that for neighbouring fibres.
For a particular patient, the electrode position and the amount of current spread are fixed. The objective is therefore to
find the linear combination of noise sources that leads to the greatest independence between nerve discharges. In this
theoretical study we show that it is possible to get one independent point of excitation (one null) for each electrode and
that stochastic beamforming can greatly decrease the correlation between the noise exciting different regions of the
cochlea.
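A toy numerical illustration of the idea, under an assumed exponential current-spread model and a pseudo-inverse choice of mixing weights (both assumptions made for illustration, not the procedure used in this study): mixing the independent noise sources before they reach the electrodes can decorrelate the effective noise at one chosen cochlear site per electrode.

```python
# Stochastic-beamforming sketch: decorrelate the effective noise at selected
# nerve-fibre sites given a known (assumed) current-spread matrix.
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_sites, n_samples = 8, 64, 50_000

# Assumed exponential current spread from each electrode to each cochlear site.
site_pos = np.linspace(0.0, 1.0, n_sites)
elec_pos = np.linspace(0.1, 0.9, n_electrodes)
spread = np.exp(-np.abs(site_pos[:, None] - elec_pos[None, :]) / 0.1)   # (sites, electrodes)

sources = rng.normal(size=(n_electrodes, n_samples))     # independent noise sources

# Unmixed case: each electrode carries one raw source.
naive_sites = spread @ sources

# Beamformed case: target one site per electrode and invert the spread there,
# so the effective noise at the target sites reproduces the independent sources.
targets = np.argmax(spread, axis=0)                      # site nearest each electrode
weights = np.linalg.pinv(spread[targets, :])             # (electrodes, electrodes)
beam_sites = spread @ (weights @ sources)

def offdiag_corr(x):
    c = np.corrcoef(x)
    return np.abs(c - np.diag(np.diag(c))).max()

print("max |corr| at target sites, unmixed   :", offdiag_corr(naive_sites[targets]))
print("max |corr| at target sites, beamformed:", offdiag_corr(beam_sites[targets]))
```

The unmixed case inherits strong correlations from current spread, while at the targeted sites the beamformed noise is essentially as independent as the underlying sources.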
We have investigated how optimal coding for neural systems changes with the time available for decoding.
Optimization was in terms of maximizing information transmission. We have estimated the parameters for
Poisson neurons that optimize Shannon transinformation with the assumption of rate coding. We observed a
hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding
with two, three and more quantization levels for larger decoding times. We postulate that the presence of
subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme
and we use the mammalian auditory system as an example.
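The flavour of this result can be reproduced with a crude sketch: for a single Poisson neuron whose rate is restricted to a few discrete levels, the mutual information of a binary code is competitive for short decoding windows, while additional levels only pay off once the window is long enough to resolve them. The maximum rate, the count truncation and the coarse probability grid below are illustrative assumptions, not the paper's optimization.

```python
# Mutual information of a Poisson neuron with M discrete rate levels in
# [0, r_max], as a function of the decoding window T.
import numpy as np
from itertools import product
from scipy.stats import poisson

R_MAX = 100.0            # maximum firing rate (spikes/s), an assumption
K_MAX = 400              # truncation of the spike-count distribution

def mutual_info(levels, probs, T):
    k = np.arange(K_MAX + 1)
    pk_given_x = np.array([poisson.pmf(k, r * T) for r in levels])   # (M, K+1)
    pk = probs @ pk_given_x
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(pk_given_x > 0, pk_given_x / pk, 1.0)
        return float((probs[:, None] * pk_given_x * np.log2(ratio)).sum())

def best_info(levels, T, grid=11):
    # Coarse grid search over the input probabilities (crude but sufficient here).
    best = 0.0
    for w in product(np.linspace(0.05, 0.95, grid), repeat=len(levels) - 1):
        if sum(w) < 1.0:
            p = np.array(list(w) + [1.0 - sum(w)])
            best = max(best, mutual_info(levels, p, T))
    return best

for T in (0.01, 0.05, 0.2):
    i2 = best_info([0.0, R_MAX], T)
    i3 = best_info([0.0, R_MAX / 2, R_MAX], T)
    print(f"T = {T:5.2f} s:  binary ~ {i2:.3f} bits,  ternary ~ {i3:.3f} bits")
```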
Pooling networks of noisy threshold devices are good models for natural networks (e.g. neural networks in some
parts of sensory pathways in vertebrates, networks of mossy fibers in the hippocampus, . . . ) as well as for
artificial networks (e.g. digital beamformers for sonar arrays, flash analog-to-digital converters, rate-constrained
distributed sensor networks, . . . ). Such pooling networks exhibit the curious effect of suprathreshold stochastic
resonance, which means that an optimal stochastic control of the network exists.
Recently, some progress has been made in understanding pooling networks of identical, but independently
noisy, threshold devices. One aspect concerns the behavior of information processing in the asymptotic limit of
large networks, which is a limit of high relevance for neuroscience applications. The mutual information between
the input and the output of the network has been evaluated, and its extremization has been performed. The
aim of the present work is to extend these asymptotic results to study the more general case when the threshold
values are no longer identical. In this situation, the values of thresholds can be described by a density, rather
than by exact locations. We present a derivation of Shannon's mutual information between the input and output
of these networks. The result is an approximation that relies on a weak version of the law of large numbers, and a
version of the central limit theorem. Optimization of the mutual information is then discussed.
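As a rough numerical counterpart to such a derivation, the sketch below applies the same two ingredients, a law-of-large-numbers/central-limit Gaussian approximation of the pooled output conditioned on the input, to a population whose thresholds are drawn from an assumed Gaussian density; all distributions and constants are illustrative.

```python
# Large-N Gaussian approximation of the mutual information for a pooling
# network with non-identical thresholds drawn from a density.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N = 500                      # number of threshold devices
SIGMA_NOISE = 1.0
SIGMA_THRESH = 0.5           # width of the threshold density (0 gives identical thresholds)
thresholds = rng.normal(0.0, SIGMA_THRESH, N)

x = rng.normal(0.0, 1.0, 5_000)                                    # stimulus samples
p = norm.sf(thresholds[None, :] - x[:, None], scale=SIGMA_NOISE)   # P(device i on | x)
mean = p.sum(axis=1)                                               # E[Y | x]
var = (p * (1 - p)).sum(axis=1)                                    # Var[Y | x], devices independent

# Conditional entropy: average Gaussian entropy of Y given x (the CLT step), in bits.
h_y_given_x = 0.5 * np.mean(np.log2(2 * np.pi * np.e * var))

# Output entropy: histogram estimate from samples of the resulting mixture for Y.
y = rng.normal(mean, np.sqrt(var))
pdf, edges = np.histogram(y, bins=200, density=True)
dy = edges[1] - edges[0]
h_y = -np.sum(pdf[pdf > 0] * np.log2(pdf[pdf > 0])) * dy

print(f"I(X;Y) ~ {h_y - h_y_given_x:.2f} bits for N = {N} devices")
```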
Consider a quantization scheme which has the aim of quantizing a signal into N+1 discrete output states. The specification of such a scheme has two parts. Firstly, in the encoding stage, the specification of N unique threshold values is required. Secondly, the decoding stage requires specification of N+1 unique reproduction values. Thus, in general, 2N+1 unique values are required for a complete specification. We show in this paper how noise can be used to reduce the number of unique values required in the encoding stage. This is achieved by allowing the noise to effectively make all thresholds independent random variables, the end result being a stochastic quantization. This idea originates from a form of stochastic resonance known as suprathreshold stochastic resonance. Stochastic resonance occurs when noise in a system is essential for that system to provide its optimal output, and can only occur in nonlinear systems; neurons are one prime example. The use of noise entails a tradeoff in performance; however, we show that even very low signal-to-noise ratios can provide a reasonable average performance for a substantial reduction in complexity, and that high signal-to-noise ratios can also provide a reduction in complexity for only a negligible degradation in performance.
KEYWORDS: Nerve, Electrodes, Ear, Interference (communication), Bandpass filters, Stochastic processes, Systems modeling, Action potentials, Signal to noise ratio, Data modeling
We have previously advocated the deliberate addition of noise to cochlear implant signals to enhance the speech comprehension of cochlear implant users. The function of the additive noise is to mimic noise sources that are present in a healthy ear (originating, for example, from Brownian motion of the hair cells and the fluctuations induced by the opening and closing of ion channels) but are largely absent in a deafened ear where the hair cells have been damaged or destroyed. The normal ear, however, also contains multiplicative noise sources that result from the quantal nature of synaptic transmission between the inner hair cells and the cochlear nerve. These synaptic noise sources are also largely absent in the deafened ear. Given that previous studies suggest that additive noise can enhance information coding by sensory systems, we have investigated whether multiplicative noise also enhances coding in a model of electrical stimulation of the cochlear nerve by a cochlear implant. The model was based on leaky integrate-and-fire dynamics and modelled refractory and accommodation effects by a threshold dependency derived from the sodium-inactivation dynamics of the Frankenhaeuser-Huxley equations for myelinated nerves. We show that multiplicative noise leads to a fundamental change in the coding mechanism and can lead to a marked increase in the transmitted information compared with additive noise or a control condition with no noise. These results suggest that multiplicative noise in the normal auditory system might have a functional role.
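For orientation only, here is a toy leaky integrate-and-fire simulation contrasting additive noise with multiplicative (signal-scaled) noise in the input current. It is a minimal sketch with arbitrary parameters and omits the refractory and accommodation mechanisms (and the Frankenhaeuser-Huxley-derived threshold dependency) of the model described above.

```python
# Toy LIF neuron driven by a stimulus plus either additive or multiplicative noise.
import numpy as np

def lif_spike_count(stim, noise_std, multiplicative, dt=1e-4, tau=5e-3, v_th=1.0, seed=0):
    """Euler-Maruyama simulation; returns the number of threshold crossings."""
    rng = np.random.default_rng(seed)
    v, count = 0.0, 0
    for s in stim:
        # Multiplicative noise scales with the instantaneous stimulus amplitude.
        sigma = noise_std * abs(s) if multiplicative else noise_std
        v += dt * (-v / tau + s) + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            count += 1
            v = 0.0               # hard reset; no explicit refractory period in this toy
    return count

t = np.arange(0.0, 0.5, 1e-4)
stim = 300.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 20 * t))     # arbitrary drive (arbitrary units)
print("additive noise      :", lif_spike_count(stim, noise_std=5.0, multiplicative=False), "spikes")
print("multiplicative noise:", lif_spike_count(stim, noise_std=0.02, multiplicative=True), "spikes")
```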
KEYWORDS: Distortion, Signal to noise ratio, Quantization, Interference (communication), Stochastic processes, Computer programming, Neurons, Signal processing, Sensors, Detection theory
It is shown that Suprathreshold Stochastic Resonance (SSR) is
effectively a way of using noise to perform quantization or lossy
signal compression with a population of identical threshold-based
devices. Quantization of an analog signal is a fundamental
requirement for its efficient storage or compression in a digital
system. This process will always result in a loss of quality,
known as distortion, in a reproduction of the original signal. The
distortion can be decreased by increasing the number of states
available for encoding the signal (measured by the rate, or mutual
information). Hence, designing a quantizer requires a tradeoff
between distortion and rate. Quantization theory has recently been
applied to the analysis of neural coding, and here we examine
whether SSR is a possible mechanism used by populations
of sensory neurons to quantize signals. In particular, we analyze
the rate-distortion performance of SSR for a range of input SNRs
and show that both the optimal distortion and the optimal rate occur
for an input SNR of about 0 dB, which is a biologically plausible
situation. Furthermore, we relax the constraint that all
thresholds are identical, and find the optimal threshold values
for a range of input SNRs. We find that for sufficiently small
input SNRs, the optimal quantizer is one in which all thresholds
are identical, that is, the SSR situation is optimal in this case.
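A Monte Carlo sketch of the kind of rate-distortion evaluation described here, under assumed Gaussian signal and noise and illustrative constants: the pooled SSR output is decoded with conditional-mean reproduction values, and the resulting mean-squared-error distortion, together with the output entropy as a simple upper bound on the rate, is reported for a few input SNRs.

```python
# Rate-distortion sketch for the SSR quantizer with identical thresholds.
import numpy as np

rng = np.random.default_rng(3)
N, SAMPLES = 31, 100_000
x = rng.normal(0.0, 1.0, SAMPLES)             # unit-variance Gaussian signal

for snr_db in (-10, 0, 10):
    sigma = 10 ** (-snr_db / 20)              # input SNR = signal power / noise power
    noise = rng.normal(0.0, sigma, (SAMPLES, N))
    n = (x[:, None] + noise > 0.0).sum(axis=1)        # all thresholds at the signal mean

    # Conditional-mean (MMSE) reproduction value for each of the N+1 output states.
    recon = np.array([x[n == k].mean() if np.any(n == k) else 0.0 for k in range(N + 1)])
    mse = np.mean((x - recon[n]) ** 2)

    p_n = np.bincount(n, minlength=N + 1) / SAMPLES
    H_n = -np.sum(p_n[p_n > 0] * np.log2(p_n[p_n > 0]))          # rate upper bound
    print(f"input SNR {snr_db:+3d} dB:  MSE distortion ~ {mse:.3f},  H(n) ~ {H_n:.2f} bits")
```

At high input SNR the output collapses toward a binary code and the distortion rises again, so in this sketch the distortion is minimised at an intermediate SNR, consistent with the optimum near 0 dB reported above.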
We have investigated information transmission in an array of threshold units with multiplicative noise that have a common input signal. We demonstrate a phenomenon similar to stochastic resonance with additive noise, and show that information transmission can be enhanced by a non-zero multiplicative noise level. Given that sensory neurons in the nervous system have multiplicative as well as additive noise sources, and they act approximately like threshold units, our results suggest that multiplicative noise might be an essential part of neural coding.
KEYWORDS: Signal to noise ratio, Distortion, Quantization, Stochastic processes, Interference (communication), Analog electronics, Computer programming, Detection theory, Neurons, Signal processing
We present an analysis of the use of suprathreshold stochastic resonance for analog to digital conversion. Suprathreshold stochastic resonance is a phenomenon where the presence of internal or input noise provides the optimal response from a system of identical parallel threshold devices such as comparators or neurons. Under the conditions where this occurs, such a system is effectively a non-deterministic analog to digital converter. In this paper we compare the suprathreshold stochastic resonance effect to conventional analog to digital conversion by analysing the rate-distortion trade-off of each.
Cochlear implants are used to restore functional hearing to people with profound deafness. Success, as measured by speech intelligibility scores, varies greatly amongst patients; a few receive almost no benefit while some are able to use a telephone under favourable listening conditions. Using a novel nerve model and the principles of suprathreshold stochastic resonance, we demonstrate that the rate of information transfer through a cochlear implant system can be globally maximized by the addition of noise. If this additional information could be used by the brain then it would lead to greater speech intelligibility, which is important given that the intelligibility of all cochlear implant recipients is poorer than that of people with normal hearing, particularly in adverse listening conditions.
KEYWORDS: Stochastic processes, Signal to noise ratio, Diffusion, Systems modeling, Interference (communication), Oscillators, Neurons, Correlation function, Signal processing, Switching
The problem of estimating periodic properties of periodically non-stationary stochastic processes is studied. A recently introduced
measure, the measure of periodicity (MP), of stochastic oscillations is discussed. The MP estimates the "periodicity level" of the
oscillations, i.e. the ratio of the periodic to the non-periodic components of the stochastic process. The MP differs fundamentally from the traditional measure, the SNR, in that it also allows the value of the oscillation period to be estimated. The MP is particularly useful in systems that display the stochastic synchronisation phenomenon, where the ratio of the periods of the
external force and the response of the studied system is m:n, with m and n positive integers. The MP is used to study synchronisation in two different systems, a bistable system and a neuronal model driven by noise and a sinusoidal signal. The dependence of the MP on parameters is compared with the behaviour of the cross-correlation coefficient and the effective diffusion
coefficient. The influence of asymmetry in the bistable system is also studied. In the autonomous neuronal model it is shown that the coherence resonance phenomenon is well described by the MP.
We consider the application of Gaussian channel theory (GCT) to the problem of estimating the rate of information transmission through a nonlinear channel such as a neural element. We suggest that, contrary to popular belief, GCT can be applied to neural systems even when the dynamics are highly nonlinear. We show that, under suitable conditions, the Gaussianity of the response is not compromised and hence GCT can be usefully applied. Using the GCT approach we develop a new method for estimating information rates in the time domain. Finally, using this new method, we show that a recently introduced form of stochastic resonance, termed suprathreshold stochastic resonance, is also displayed by the information rate.
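For reference, the conventional frequency-domain Gaussian-channel estimate that such time-domain methods are usually compared against can be computed directly from measured spectra; the toy signal, noise level and Welch parameters below are assumptions for illustration, and the paper's time-domain estimator is not reproduced here.

```python
# Standard Gaussian-channel information rate R = integral of log2(1 + S(f)/N(f)) df,
# estimated from Welch spectra of a toy response and its noise.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

rng = np.random.default_rng(4)
fs = 1000.0
t = np.arange(0, 30, 1 / fs)
signal = np.sin(2 * np.pi * 17 * t) + 0.5 * np.sin(2 * np.pi * 41 * t)   # toy stimulus
noise = rng.normal(0.0, 0.8, t.size)                                     # assumed channel noise
response = signal + noise

f, S_noise = welch(noise, fs=fs, nperseg=1024)
_, S_resp = welch(response, fs=fs, nperseg=1024)
snr_f = np.maximum(S_resp / S_noise - 1.0, 0.0)       # estimated signal-to-noise spectrum
rate = trapezoid(np.log2(1.0 + snr_f), f)             # bits per second
print(f"Gaussian-channel information rate ~ {rate:.1f} bits/s")
```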
KEYWORDS: Neurons, Energy efficiency, Interference (communication), Stochastic processes, Quantization, Brain, Signal processing, Signal to noise ratio, Complex systems, Sensors
Suprathreshold Stochastic Resonance (SSR) is a recently discovered
form of stochastic resonance that occurs in populations of neuron-like devices. A key feature of SSR is that all devices in the population possess identical threshold nonlinearities. It has
previously been shown that information transmission through such a
system is optimized by nonzero internal noise. It is also clear
that it is desirable for the brain to transfer information in an
energy-efficient manner. In this paper we discuss the energy-efficient maximization of information transmission for the case of
variable thresholds with constraints imposed on the energy available to the system, as well as the minimization of energy for a fixed information rate. We aim to demonstrate that, under certain conditions, the SSR configuration in which all devices have identical thresholds is optimal. The novel feature of this work is that the optimization is performed by finding the optimal threshold settings for the population of devices, which is equivalent to solving a noisy optimal quantization problem.
KEYWORDS: Signal to noise ratio, Binary data, Interference (communication), Stochastic processes, Data processing, Complex systems, Systems modeling, Signal processing, Information theory, Signal detection
The data processing inequality of information theory states that given random variables X, Y and Z which form a Markov chain in the order X-->Y-->Z, the mutual information between X and Y is greater than or equal to the mutual information between X and Z. That is, I(X;Y) >= I(X;Z). In practice, this means that no more information can be obtained from a set of data than was there to begin with; in other words, there is a bound on how much can be accomplished with signal processing. However, in the field of stochastic resonance, it has been reported that a signal-to-noise ratio gain can occur in some nonlinear systems due to the addition of noise. Such an observation appears to contradict the data processing inequality. In this paper, we investigate this question by using an example model system.
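As a worked reminder of the inequality itself, consider the simplest Markov chain of binary variables in which each arrow is a binary symmetric channel; the crossover probabilities below are arbitrary.

```python
# Data processing inequality for X -> Y -> Z with two cascaded binary
# symmetric channels and a uniform binary input X.
import numpy as np

def h2(p):                        # binary entropy in bits
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p_flip_xy, p_flip_yz = 0.1, 0.2
# Cascade of two binary symmetric channels: effective crossover probability.
p_flip_xz = p_flip_xy * (1 - p_flip_yz) + (1 - p_flip_xy) * p_flip_yz

I_xy = 1.0 - h2(p_flip_xy)        # uniform binary X gives I(X;Y) = 1 - H2(p)
I_xz = 1.0 - h2(p_flip_xz)
print(f"I(X;Y) = {I_xy:.3f} bits >= I(X;Z) = {I_xz:.3f} bits")
```

Cascading the two channels pushes the effective crossover probability toward 1/2, so I(X;Z) can never exceed I(X;Y); any reported SNR gain therefore has to be reconciled with this bound rather than overturn it.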
In this article we discuss the possible use of a novel form of stochastic resonance, termed suprathreshold stochastic resonance (SSR), to improve signal encoding/transmission in cochlear implants. A model, based on the leaky integrate-and-fire (LIF) neuron, has been developed from physiological data and used to model information flow in a population of cochlear nerve fibers. It is demonstrated that information flow can, in principle, be enhanced by the SSR effect. Furthermore, SSR was found to enhance information transmission for signal parameters that are commonly encountered in cochlear implants. This, therefore, gives hope that SSR may be implemented in cochlear implants to improve speech comprehension.
Consider an array of parallel comparators (threshold devices) receiving the same input signal, but subject to independent noise, where the output from each device is summed to give an overall output. Such an array is a good model of a number of nonlinear systems including flash analogue to digital converters, sonar arrays and parallel neurons. Recently, this system was analysed by Stocks in terms of information theory, who showed that under certain conditions the transmitted information through the array is maximised for non-zero noise. This phenomenon was termed Suprathreshold Stochastic Resonance (SSR). In this paper we give further results related to the maximisation of the transmitted information in this system.
The responses of an all-optical bistable system and an analog model of Brownian motion in the symmetric Duffing potential to a weak periodic force in the presence of noise are investigated. The appearance of stochastic resonance in both cases is explained within the framework of linear response theory.