Eye-tracking holds great promise for improving the mixed reality experience. While eye-tracking devices are capable of accurate gaze mapping on 2D surfaces, estimating the depth of gaze points remains a challenging problem. Most gaze-based interaction applications rely on estimation techniques that map gaze data to corresponding targets on a 2D surface. This approach inevitably biases the outcome, because the object nearest along the line of sight tends to be selected as the target of interest. One viable solution is to estimate gaze as a 3D coordinate (x, y, z) rather than the traditional 2D coordinate (x, y). This article first introduces a new, comprehensive 3D gaze dataset collected in a realistic scene setting using a head-mounted eye-tracker and a depth camera. Next, we present a novel depth estimation model, trained on the new gaze dataset, that accurately predicts gaze depth from calibrated gaze vectors. This method could help develop a mapping between gaze and 3D objects in 3D space. The presented model improves the reliability of measuring the depth of visual attention in real scenes as well as the accuracy of depth-based interaction in virtual reality environments. Improving situational awareness using 3D gaze data will benefit several domains, particularly human-vehicle interaction, autonomous driving, and augmented reality.
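The abstract does not specify the architecture of the learned depth estimation model, so the sketch below is only a minimal geometric baseline for the underlying idea: recovering a 3D gaze point (x, y, z) from calibrated gaze vectors. It assumes per-eye gaze origins and direction vectors from a head-mounted eye-tracker and triangulates depth from binocular vergence; the function and variable names are hypothetical and this is not the authors' trained model.

```python
import numpy as np

def triangulate_gaze_depth(left_origin, left_dir, right_origin, right_dir, eps=1e-9):
    """Estimate a 3D gaze point as the midpoint of closest approach between
    the two calibrated eye gaze rays; returns (gaze_point, depth)."""
    p1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
    p2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)

    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # approaches 0 when the rays are near-parallel
    if denom < eps:
        return None, None            # vergence too small to recover depth reliably

    t1 = (b * e - c * d) / denom     # distance along the left gaze ray
    t2 = (a * e - b * d) / denom     # distance along the right gaze ray
    gaze_point = 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
    return gaze_point, gaze_point[2]  # depth taken along the z-axis of the head frame

if __name__ == "__main__":
    # Eyes 64 mm apart, both fixating a target 600 mm straight ahead.
    target = np.array([0.0, 0.0, 600.0])
    left_eye, right_eye = np.array([-32.0, 0.0, 0.0]), np.array([32.0, 0.0, 0.0])
    point, depth = triangulate_gaze_depth(
        left_eye, target - left_eye, right_eye, target - right_eye
    )
    print(point, depth)  # ~[0, 0, 600], ~600 mm
```

The midpoint of closest approach is used because, with noisy gaze estimates, the two eye rays rarely intersect exactly; a learned model such as the one described in the article can be expected to outperform this purely geometric estimate, particularly at larger viewing distances where vergence angles become small.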