Denoising of event-based sensors with deep neural networks
Zhihong Zhang, Jinli Suo, Qionghai Dai
Conference Poster + Paper, 9 October 2021
Abstract
As a novel asynchronous imaging sensor, the event camera features low power consumption, low temporal latency, and high dynamic range, but its output is contaminated by abundant noise. In real applications, it is essential to suppress the noise in the output event sequences before subsequent analysis. However, the event camera outputs data in address-event representation (AER), which calls for new denoising techniques rather than conventional frame-based image denoising methods. In this paper, we propose two learning-based methods for denoising event-based sensor measurements: a convolutional denoising auto-encoder (ConvDAE) and a sequence-fragment recurrent neural network (SeqRNN). The former converts the event sequence into 2D images before denoising, making it compatible with existing deep denoisers and high-level vision tasks. The latter exploits the strengths of recurrent neural networks in handling time series to realize online denoising while keeping the events' original AER representation. Experiments on real data demonstrate the effectiveness and flexibility of the proposed methods.
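The abstract does not include implementation details, but the ConvDAE pipeline presupposes an event-to-frame conversion step. The sketch below illustrates one common way to do this: accumulating an AER stream into fixed-duration 2D frames by summing signed polarities per pixel. The function name, the (x, y, t, p) event layout, and the 10 ms window are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def events_to_frames(events, height, width, window_us=10_000):
    """Accumulate an AER event stream into a stack of 2D frames.

    Assumed layout (not from the paper): `events` is an (N, 4) array of
    (x, y, t, p) rows, with timestamps t in microseconds and polarity
    p in {-1, +1}. Events inside each `window_us` slice are summed per
    pixel, so opposite polarities at the same pixel partially cancel.
    """
    t0, t1 = events[:, 2].min(), events[:, 2].max()
    n_frames = int((t1 - t0) // window_us) + 1
    frames = np.zeros((n_frames, height, width), dtype=np.float32)

    # Map each event to its frame index and pixel coordinates.
    idx = ((events[:, 2] - t0) // window_us).astype(int)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3].astype(np.float32)

    # Scatter-add so repeated events at the same pixel accumulate.
    np.add.at(frames, (idx, y, x), p)
    return frames
```

The resulting frames can then be fed to any frame-based deep denoiser, which is the compatibility advantage the abstract claims for ConvDAE; SeqRNN, by contrast, operates on the raw event sequence directly and needs no such conversion.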
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhihong Zhang, Jinli Suo, and Qionghai Dai "Denoising of event-based sensors with deep neural networks", Proc. SPIE 11897, Optoelectronic Imaging and Multimedia Technology VIII, 1189713 (9 October 2021); https://doi.org/10.1117/12.2602742
KEYWORDS
Denoising, Cameras, Neural networks, Sensors, Video, Image processing, Convolutional neural networks
