Human activity recognition (HAR) has gained great interest in current research, especially with regard to demographic change. In particular, when complex activities have to be recognized, HAR systems often rely on multiple sensors that must be worn by the user. In this work, we propose a novel approach that combines a segmented optical receiver with a single IMU device. By fusing real-world experimental IMU data with precise optical simulations of a segmented optical receiver, we can determine not only the activity of the user, including complex movements like walk-up and walk-down, but also the user's location.
Fingerprinting-based visible light positioning is a promising candidate for large-scale indoor positioning tasks. In fingerprinting, signal characteristics are stored in a fingerprint map together with their respective locations inside the indoor environment. By comparing live signal measurements with the fingerprint map, the closest match is selected as the current position estimate. However, the fingerprint map must be generated beforehand in the so-called offline phase, a time-consuming process in which the environment where positioning is desired is sampled for signal characteristics. Here, we propose a fingerprinting-based positioning approach that mitigates the need for the offline phase by taking advantage of the VLC data transmission capabilities of the LED luminaires that provide the obligatory room lighting. Based on the room and luminaire configuration data transmitted to the receiving device, the illumination characteristics in the room can be calculated with simplified analytical formalisms, substituting the experimentally measured offline phase. We demonstrate the effectiveness of our approach with ray-tracing simulations, under the assumption that the receiving device is equipped with an angular sectored receiver. The results of the ray-tracing simulations mimic real-world measurements with the receiver in the online phase. We show that such an approach achieves decimeter-level down to centimeter-level accuracies.
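The matching step of the fingerprinting approach described above can be sketched as a nearest-neighbor search over the fingerprint map. The map values, sector count, and function name below are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

def fingerprint_match(live_rss, fingerprint_map, positions):
    """Return the position whose stored RSS vector is closest
    to the live measurement (Euclidean distance)."""
    distances = np.linalg.norm(fingerprint_map - live_rss, axis=1)
    return positions[np.argmin(distances)]

# Illustrative map: 3 reference points, 4 angular receiver sectors each
fp_map = np.array([[0.9, 0.2, 0.1, 0.3],
                   [0.1, 0.8, 0.3, 0.2],
                   [0.2, 0.1, 0.9, 0.4]])
positions = np.array([[0.5, 0.5], [2.0, 0.5], [2.0, 2.5]])  # (x, y) in meters

# A live measurement closest to the second reference point
estimate = fingerprint_match(np.array([0.15, 0.75, 0.25, 0.2]),
                             fp_map, positions)
print(estimate)  # closest stored fingerprint decides the position estimate
```

In the approach proposed above, the `fp_map` entries would not be measured in an offline phase but computed from the room and luminaire configuration data received via VLC.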
KEYWORDS: Visible radiation, Receivers, Machine learning, Light sources and illumination, Transmitters, Sensors, Received signal strength, RGB color model, Photodiodes
Obtaining precise position information for a subject without changing the luminaire infrastructure is a major challenge in visible light positioning. High positioning accuracy at the centimeter scale is usually achieved by implementing complex receiver designs or by adapting the existing luminaires. In this context, we suggest a visible-light-positioning-based approach that can determine the position of a person in certain areas of a room without any modification of the lighting infrastructure. With this approach, one can identify the position using only the existing luminaires installed for the obligatory room lighting. The receiver, an RGB-sensitive photodiode, is positioned in an optimized way in order to support both the positioning task and the comfort of the user. Based on received-signal-strength measurements in the red, green, and blue channels, we accomplish the positioning task by segmenting the room into different areas according to the respective impinging light and by utilizing machine-learning clustering. Our results show the influence of different segmentation strategies and parameters on the number and size of the distinguishable areas inside the room. We then demonstrate the achievable accuracy of our approach in real-world experiments. Our results show that such light-based positioning data can be fused with IMU sensor data for human activity recognition.
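The clustering idea described above — grouping RGB received-signal-strength measurements into distinguishable room areas — can be sketched with k-means as one possible clustering choice. The channel values and cluster count are synthetic assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic RGB RSS samples: two room areas dominated by
# differently colored impinging light (one row per measurement)
rng = np.random.default_rng(0)
area_a = rng.normal([0.8, 0.2, 0.1], 0.02, size=(50, 3))
area_b = rng.normal([0.2, 0.7, 0.3], 0.02, size=(50, 3))
rss = np.vstack([area_a, area_b])

# Cluster the measurements into distinguishable areas of the room
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rss)
labels = km.labels_
```

A live RGB measurement can then be assigned to an area with `km.predict`; the number of clusters corresponds to the number of distinguishable areas produced by a given segmentation strategy.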
Recently, indoor activity monitoring of human beings has gained more and more relevance. In particular, determining the spatial and temporal context of a user is of utmost importance in many applications such as monitoring or safety. In this paper, we present a framework that can identify what activity a user is performing, where, and for how long, using a low-cost and low-complexity system. Our system comprises only a single inertial measurement unit and a single RGB-sensitive photodiode, with no prerequisite for infrastructural modifications. By using independent decision trees, the training effort can also be kept minimal. Additionally, we experimentally verify the optimal set of features for the framework. Overall, we achieve above 90% correct determinations of the room the user is in, the activity the user is performing, and the direction in which the activity is undertaken.
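One of the independent decision trees mentioned above can be sketched as follows. The feature choice (IMU acceleration variance plus two RGB channel strengths), the toy samples, and the labels are illustrative assumptions, not the paper's verified feature set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy feature vectors: [IMU acceleration variance, red RSS, green RSS]
X = np.array([[0.05, 0.8, 0.2],   # stationary under a red-dominant luminaire
              [0.90, 0.8, 0.2],   # walking in the same area
              [0.05, 0.2, 0.7],   # stationary in another area
              [0.95, 0.2, 0.7]])  # walking in that area
y = ["sit", "walk", "sit", "walk"]

# One independent tree handles the activity decision;
# further trees could decide room and direction analogously
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
prediction = clf.predict([[0.85, 0.25, 0.65]])
print(prediction)  # a high-variance sample resembling the walking examples
```

Keeping the trees independent means each decision (activity, room, direction) can be trained and retrained separately, which keeps the overall training effort low.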
Conference Committee Involvement (3)
14th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems.
8 September 2010
Knowledge-Based and Intelligent Information and Engineering Systems
8 September 2010
12th Portuguese Conference on Artificial Intelligence - EPIA 2005