The increasing number of mobile and wearable devices is dramatically changing the way we collect data about a person’s life. These devices allow us to record our daily activities and behavior in several forms, e.g., text, images, bio-signals, or video. However, the collected data often includes low-quality or irrelevant content, feeding lifelogging applications with huge amounts of data and creating computational challenges for pattern identification. In this paper, we propose a fast image analysis approach to automatically select relevant images from lifelog data. Using intrinsic image information, such as scenes and objects, we manually curated two datasets, one with relevant content and another with non-relevant content. We then applied supervised learning algorithms based on low-level image features, namely blur and focus, to find the binary model that best discriminates between the two classes. The binary models were compared using learning curves and F1-scores, with the best model achieving an F1-score of 95.4%. By reducing the number of images in the lifelog data, we were able to save computational time without losing images with relevant content.
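The abstract describes filtering lifelog images with low-level blur and focus features before any supervised classification. A common focus measure of this kind is the variance of the Laplacian, which is high for sharp images and low for blurred ones. The sketch below is a minimal, hedged illustration of that idea using only NumPy; the function names, the wrap-around convolution via `np.roll`, and the fixed `threshold` are assumptions for demonstration, not the authors' actual implementation.

```python
import numpy as np

def laplacian_variance(img):
    """Blur/focus measure: variance of a 4-neighbor Laplacian response.
    Higher values indicate sharper (more in-focus) images.
    Edges wrap around via np.roll, which is acceptable for a sketch."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def is_relevant(img, threshold=0.1):
    """Keep an image only if its focus score clears a (hypothetical)
    threshold; in the paper this decision is learned, not hand-set."""
    return laplacian_variance(img) >= threshold

# Demo: a sharp random texture vs. a smoothed (blurred) copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = sharp.copy()
for _ in range(3):  # repeated 4-neighbor averaging acts as a box-like blur
    blurred = (blurred
               + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
               + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0

score_sharp = laplacian_variance(sharp)
score_blurred = laplacian_variance(blurred)
```

In a full pipeline such scores would serve as input features to a supervised binary classifier rather than a hand-tuned threshold, which is closer to the learning-based selection the paper reports.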