We have developed a diver-robot empathetic communication system that allows a diver to feel disturbances around a robot and to control the robot remotely using hand gestures. The underwater robot is fitted with soft dielectric elastomer (DE) sensors that sense the direction and amplitude of disturbances in its surroundings, defined as the physical indentation of its eye sensors. The direction and intensity of a disturbance are communicated to the user remotely via an array of vibrotactile actuators worn as a bracelet, so the wearer feels what the robot is experiencing through different vibration intensities and patterns. The smart glove employs five dielectric elastomer sensors to capture finger motion and implements a machine-learning classifier in its onboard electronics to recognize gestures, allowing the wearer to send hand-gesture commands that correct the underwater robot's posture. The system will be tested in a user study to determine its performance improvement over a traditional robotic control interface. Our work demonstrates the capability of DE sensing for advanced human-machine interaction.
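As a minimal sketch of the gesture-recognition step, the snippet below classifies a five-channel capacitance reading from the glove. The abstract states only that an onboard machine-learning classifier is used; the choice of a k-nearest-neighbours model, the gesture set, and all capacitance values here are illustrative assumptions.

```python
# Illustrative gesture classification from five DE sensor capacitances.
# Classifier choice, gesture names, and capacitance values are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

GESTURES = ["stop", "ascend", "descend", "turn_left"]

def simulate(gesture_id, n=50):
    """Synthetic training data: bending one finger stretches its sensor,
    raising that channel's capacitance (pF) above the relaxed baseline."""
    base = np.full((n, 5), 40.0)          # relaxed-hand capacitance
    base[:, gesture_id] += 15.0           # bent finger for this gesture
    return base + rng.normal(0, 0.5, (n, 5))

X = np.vstack([simulate(i) for i in range(len(GESTURES))])
y = np.repeat(np.arange(len(GESTURES)), 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Classify a new reading from the glove (second finger bent).
reading = np.array([[40.1, 55.3, 39.8, 40.4, 39.9]])
print("Recognised gesture:", GESTURES[clf.predict(reading)[0]])
```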
The buddy system, in which a pair of divers look out for one another, is used by the diving community to mitigate danger. Buddies inspect each other's breathing apparatus, monitor remaining air supplies and health status, and can provide emergency support during a dive. However, when no buddy is available, some divers dive solo, forgoing the safety of the buddy system. We propose a dedicated dive-buddy robot as a solution to this problem. The robot, an autonomous underwater vehicle, could operate as an assistant controlled by the diver through hand-gesture-based communication, a method commonly used amongst divers. To capture the gestures, we have developed a smart dive glove integrated with five dielectric elastomer strain sensors. The capacitance of each sensor was measured with on-board electronics, translated into a command using machine learning, and transmitted underwater using acoustics. Due to travel restrictions relating to the COVID-19 pandemic, a demonstration with the diver and vehicle in the same pool was not possible. Therefore, here we present a demonstration with the diver performing gestures in a pool in Auckland, New Zealand, sending commands to the robot in a pool in Zagreb, Croatia. The commands were sent through acoustics to a computer in Auckland, over cellular internet to a computer in Zagreb, which then relayed instructions to the robot using acoustics. The robot was sent four commands and successfully completed all manoeuvres. The performance of the communication with regard to time delays is assessed and future improvements are discussed.
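The end-to-end command latency of this relay is the sum of the serial hop delays (acoustic uplink, internet link, acoustic downlink). The sketch below models that accounting; the hop names and latency figures are placeholders, not measured values from the paper.

```python
# Sketch of delay accounting for the Auckland-to-Zagreb command relay.
# Hop latencies are placeholder assumptions, not reported measurements.
from dataclasses import dataclass

@dataclass
class Hop:
    name: str
    latency_s: float  # one-way latency contributed by this hop

PIPELINE = [
    Hop("glove -> acoustic modem (Auckland pool)", 0.5),
    Hop("Auckland computer -> cellular internet -> Zagreb computer", 0.2),
    Hop("Zagreb computer -> acoustic modem -> robot", 0.5),
    Hop("robot decodes command and begins manoeuvre", 0.1),
]

def total_delay(pipeline):
    """Serial relay: total command delay is the sum of hop latencies."""
    return sum(h.latency_s for h in pipeline)

for h in PIPELINE:
    print(f"{h.name:58s} {h.latency_s:5.2f} s")
print(f"{'total command delay':58s} {total_delay(PIPELINE):5.2f} s")
```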
To swim well, a fish points towards the oncoming flow. This action, termed rheotaxis, is partially enabled by flow-sensitive neuromasts on the skin of the fish. To mimic this, we have fitted an elasto-tensegrity, fish-like robot, Robowahoo, with piezoresistive electroactive polymer sensors and placed it in a flow-controlled water-flume tank. Signals were recorded as the head was slowly turned in yaw, demonstrating real-time measurement of head alignment to the flow. Such cyber-rheotaxis sensors can be linked directly to tail actuators in closed-loop control, bringing us closer to the goal of accurate and efficient robotic fish-like swimming.
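One way such a closed loop could look is sketched below: a yaw-misalignment estimate from differential flow-sensor signals drives a proportional tail offset. The sensor model, gain, and saturation limit are assumptions for illustration; the paper demonstrates the sensing, not this controller.

```python
# Illustrative closed-loop rheotaxis step: null the head-to-flow yaw error
# with a proportional tail bias. Gains and sensor scaling are assumptions.
def flow_misalignment(left_signal, right_signal, scale=0.01):
    """Estimate yaw error (rad) from differential flow-sensor signals."""
    return scale * (left_signal - right_signal)

def tail_bias(yaw_error, k_p=2.0, limit=0.5):
    """Proportional tail-offset command (rad), saturated to +/- limit."""
    u = k_p * yaw_error
    return max(-limit, min(limit, u))

# One control step: head turned slightly to the right of the oncoming flow.
err = flow_misalignment(left_signal=120.0, right_signal=95.0)
print(f"yaw error {err:+.3f} rad -> tail bias {tail_bias(err):+.3f} rad")
```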
The superior swimming ability of fish has encouraged the development of fish-like robots. To fully capture fish swimming kinematics, a continuum under-actuated robot can be used, and there are many examples of such robots in the literature. For realistic fish-like swimming in strong currents, however, such robots will benefit from closed-loop feedback. We demonstrate how this can be achieved underwater using a stretchy neoprene sensing skin with embedded, discrete dielectric elastomer stretch sensors. The latest prototype skin, with eight sensors, four on each side, is currently being evaluated on an under-actuated tensegrity fish-like robot driven by a stepper motor. Carangiform movement of the body was characterized using cameras, and this data was compared with a virtual model of the robot that uses the sensor input to compute its kinematics in real time. Four angles along the body defined the body shape. A root-mean-square error between the model angles and the true camera angles of less than 3° was calculated for realistic carangiform motion at a 0.5 Hz tail frequency.
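The model-versus-camera comparison reduces to a root-mean-square error over each of the four body angles across a tail-beat cycle. The snippet below shows that computation on synthetic angle traces; the amplitudes, phases, and noise level are assumptions, with only the less-than-3° target and 0.5 Hz frequency taken from the abstract.

```python
# RMSE between model-estimated and camera-measured body angles.
# Angle traces below are synthetic stand-ins for the real data.
import numpy as np

def rmse_deg(model_angles, camera_angles):
    """Root-mean-square error between two angle series (degrees)."""
    diff = np.asarray(model_angles) - np.asarray(camera_angles)
    return np.sqrt(np.mean(diff ** 2))

# Four angles along the body over one tail-beat cycle (2 s at 0.5 Hz).
t = np.linspace(0, 2.0, 100)
camera = np.stack([10 * np.sin(2 * np.pi * 0.5 * t + p)
                   for p in (0.0, 0.4, 0.8, 1.2)])
model = camera + np.random.default_rng(1).normal(0, 1.5, camera.shape)

for i in range(4):
    print(f"angle {i + 1}: RMSE = {rmse_deg(model[i], camera[i]):.2f} deg")
```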
Virtual reality allows users to immerse themselves in an alternate reality. Science fiction books and films, like Ready Player One, have depicted interaction with virtual reality as full body-and-mind immersion, where the user is able to move freely, interact with objects, and feel the surrounding environment as in the real world. Although we are not there yet, advances in technology are bringing us closer to this vision. One of these is the advancement of dielectric elastomer sensors (DES). By integrating DES into a wetsuit to capture shoulder and elbow motion, we are able to replicate the movement in a virtual-reality humanoid avatar. Compared to the elbow, which can be modelled with extension and flexion motions alone, the shoulder joint is much more complex, with a greater number of degrees of freedom. In this paper we present a wetsuit with five dielectric elastomer sensors that captures elbow flexion/extension, shoulder flexion/extension, abduction/adduction, and rotation with RMSEs of 11.8°, 5.2°, 5.4° and 7.9°, respectively.
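Mapping a DES reading to a joint angle requires a calibration model. The sketch below assumes a simple per-joint linear fit of angle against capacitance; the paper reports only the resulting RMSE per joint, so the linear form and all calibration values here are illustrative.

```python
# Illustrative DES-to-joint-angle calibration via a linear least-squares fit.
# The linear model and the synthetic calibration data are assumptions.
import numpy as np

# Synthetic calibration sweep: capacitance (pF) vs elbow flexion angle (deg).
angles = np.linspace(0, 140, 15)
capacitance = 38.0 + 0.12 * angles \
    + np.random.default_rng(2).normal(0, 0.3, 15)

# Fit angle = a * C + b from the calibration sweep.
a, b = np.polyfit(capacitance, angles, 1)

def angle_from_capacitance(c_pf):
    """Estimate joint angle (deg) from a capacitance reading (pF)."""
    return a * c_pf + b

print(f"reading 45.0 pF -> {angle_from_capacitance(45.0):.1f} deg flexion")
```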
Hand gesture recognition algorithms require information from the physical world to be converted into digital data. In this paper we present an analysis of dielectric elastomer sensors for hand gesture recognition. A glove with five dielectric elastomer sensors was used to collect motion data from the hand. The capacitance value of each sensor was read and analysed for a total of 24 participants. The study shows that the sensors provide enough information to differentiate the gestures of each individual participant; however, the maximum capacitance value varied between participants, making gesture recognition across all participants difficult. Data processing resolved this problem.
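One plausible form of that processing step is per-participant min-max normalisation, shown below, which places every participant's sensor channels on a common 0-to-1 scale regardless of their absolute capacitance range. The abstract does not name the method, so this particular choice is an assumption.

```python
# Per-participant min-max normalisation: an assumed (not confirmed)
# processing step to remove between-participant capacitance range variation.
import numpy as np

def normalise_per_participant(X):
    """Scale each sensor channel to [0, 1] using that participant's own
    minimum (relaxed) and maximum (fully bent) capacitance values."""
    x_min = X.min(axis=0, keepdims=True)
    x_max = X.max(axis=0, keepdims=True)
    return (X - x_min) / (x_max - x_min)

# Two participants performing the same gesture with different sensor ranges.
p1 = np.array([[40.0, 40.0], [55.0, 41.0]])   # smaller capacitance range
p2 = np.array([[42.0, 42.0], [70.0, 44.0]])   # larger range, same gesture
print(normalise_per_participant(p1))
print(normalise_per_participant(p2))          # comparable after scaling
```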