Paper
8 November 2024
Driver visual attention model based on top-down mechanism
Qinghao Li, Han Liu, Xing Tang, Yan Su
Proceedings Volume 13416, Fourth International Conference on Advanced Algorithms and Neural Networks (AANN 2024); 134162G (2024) https://doi.org/10.1117/12.3049562
Event: 2024 4th International Conference on Advanced Algorithms and Neural Networks, Qingdao, China
Abstract
Through their visual fixation behaviours, experienced drivers selectively focus on specific areas or objects within the scene, enabling driving tasks to be performed safely. Modeling drivers' visual fixation behaviour is therefore crucial for the development of autonomous driving systems (ADS). Research indicates that drivers' visual fixation is determined by both top-down and bottom-up mechanisms. This paper proposes a driver visual attention model that incorporates both. We treat expectancy, effort, and value as top-down factors and salience as the bottom-up factor. The DFF model is developed to integrate the top-down factors, while existing models are used to represent the bottom-up factor. A fusion strategy then integrates all features to generate a driver visual attention map. The proposed model was trained on the DR(eye)VE dataset and compared with state-of-the-art (SOTA) models. The results show that the proposed model achieves improved performance on the Pearson correlation coefficient (CC) and similarity (SMI) metrics, with gains of 22.4% and 26.7%, respectively. The trained model was also tested on the BDDA dataset to assess its cross-dataset generalisation capability.
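The abstract evaluates the model with the Pearson correlation coefficient (CC) and a similarity (SMI) metric. As a point of reference only, the sketch below shows how these two standard saliency-evaluation quantities are commonly computed between a predicted attention map and a ground-truth fixation density map; it assumes SMI corresponds to the conventional histogram-intersection similarity used in saliency benchmarks, and the array names and map sizes are illustrative, not taken from the paper.

```python
import numpy as np

def pearson_cc(pred, gt):
    """Pearson correlation coefficient between two attention maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def similarity(pred, gt):
    """Histogram-intersection similarity: sum of element-wise minima
    after each map is normalised to sum to 1."""
    p = pred / (pred.sum() + 1e-8)
    g = gt / (gt.sum() + 1e-8)
    return float(np.minimum(p, g).sum())

# Illustrative usage with random maps of matching spatial size.
rng = np.random.default_rng(0)
pred_map = rng.random((112, 112))   # hypothetical model output
gt_map = rng.random((112, 112))     # hypothetical fixation density map
print(pearson_cc(pred_map, gt_map), similarity(pred_map, gt_map))
```

Both metrics range over [-1, 1] and [0, 1] respectively, with higher values indicating closer agreement between the predicted and ground-truth attention maps.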
Qinghao Li, Han Liu, Xing Tang, and Yan Su "Driver visual attention model based on top-down mechanism", Proc. SPIE 13416, Fourth International Conference on Advanced Algorithms and Neural Networks (AANN 2024), 134162G (8 November 2024); https://doi.org/10.1117/12.3049562
KEYWORDS
Visualization, Visual process modeling, RGB color model, Feature extraction, Data modeling, Education and training, Information visualization