In this work, we train deep models on a combination of real and synthetic IR data, and we evaluate model performance on real IR data. We focus on the tasks of vehicle and person detection, object identification, and vehicle parts segmentation. We find that for both detection and object identification, training on a combination of real and synthetic data outperforms training on real data alone. This improvement demonstrates an advantage to using synthetic data for computer vision. Furthermore, we believe that the utility of synthetic data, when combined with real data, will only increase as the realism gap closes.
Advances in camera design have resulted in the development of next-generation "event-based" imaging sensors. These sensors provide very high temporal resolution in individual pixels but pick up only changes in the scene. This enables interesting new capabilities, such as low-power computer vision for bullet tracking and hostile-fire detection, and their low power consumption makes them well suited to edge systems.
However, computer vision algorithms require massive amounts of data for system training and development, and these data collections are expensive and time-consuming; it is unlikely that future event-based computer vision development efforts will want to re-collect the quantities of data already captured and curated. It is therefore of interest to explore whether and how existing data can be modified to simulate event-based images for training and evaluation. In this work, we present results from training and testing CNN architectures on both simulated and real event-based imaging sensor systems.
Relative performance as a function of various simulated event-based sensing parameters is presented, along with comparisons between approaches.
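The abstract does not specify how conventional frames are converted into simulated event-based data, but the standard event-camera model fires a per-pixel event whenever the log intensity changes by more than a contrast threshold. The sketch below illustrates that idea; the function name, threshold value, and output format are illustrative assumptions, not the authors' method.

```python
import numpy as np

def simulate_events(prev_frame, curr_frame, threshold=0.2, eps=1e-6):
    """Illustrative contrast-threshold event model (an assumption,
    not the paper's pipeline): emit a polarity event at each pixel
    whose log-intensity change between two frames exceeds `threshold`.
    Returns a list of (row, col, polarity) tuples.
    """
    # Log-intensity difference between consecutive frames;
    # `eps` avoids log(0) on dark pixels.
    d = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    pos = np.argwhere(d > threshold)   # brightness increased -> +1 event
    neg = np.argwhere(d < -threshold)  # brightness decreased -> -1 event
    events = [(int(r), int(c), +1) for r, c in pos]
    events += [(int(r), int(c), -1) for r, c in neg]
    return events

# Toy example: one pixel brightens, one dims, the rest are unchanged.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9   # should fire a +1 event
curr[3, 0] = 0.2   # should fire a -1 event
evts = simulate_events(prev, curr)
```

In practice, frame-based simulation of this kind only approximates a real event sensor's microsecond-scale timing, which is one reason comparing models trained on simulated versus real event data is informative.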