Robots are ideal surrogates for performing tasks that are dull, dirty, and dangerous. To fully achieve this ideal, a robotic teammate should be able to autonomously perform human-level tasks in unstructured environments where we do not want humans to go. In this paper, we take a step toward realizing that vision by integrating state-of-the-art advancements in intelligence, perception, and manipulation on the RoMan (Robotic Manipulation) platform. RoMan comprises two 7-degree-of-freedom (DoF) limbs connected to a 1-DoF torso and mounted on a tracked base. Multiple lidars are used for navigation, and a stereo depth camera provides point clouds for grasping. Each limb has a 6-DoF force-torque sensor at the wrist, with a dexterous three-finger gripper on one limb and a stronger four-finger claw-like hand on the other. Tasks begin with an operator specifying a mission type, a desired final destination for the robot, and a general region where the robot should look for grasps. All other portions of the task are completed autonomously: navigation, object identification and pose estimation (if the object is known) via deep learning or perception through search, fine maneuvering, grasp planning via a grasp library, arm motion planning, and manipulation planning (e.g., dragging if the object is deemed too heavy to lift freely). Finally, we present initial test results on two notional tasks: clearing a road of debris, such as a heavy tree or a pile of unknown light debris, and opening a hinged container to retrieve a bag inside it.
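The pipeline above proceeds sequentially from operator tasking to manipulation. As a rough illustration only, the sketch below arranges those stages in order with a drag fallback for heavy objects; the MissionSpec fields, robot methods, and payload limit are hypothetical placeholders, not RoMan's actual software interface.

```python
# Hypothetical sketch of the sequential autonomy pipeline described above.
# Class, method, and parameter names are illustrative, not RoMan's actual API.

from dataclasses import dataclass


@dataclass
class MissionSpec:
    mission_type: str      # e.g. "clear_debris" or "open_container"
    destination: tuple     # desired final robot pose (x, y, heading)
    grasp_region: tuple    # rough region in which to search for grasps


MAX_LIFT_KG = 20.0         # assumed payload limit for a free lift (placeholder)


def execute_mission(robot, spec: MissionSpec):
    robot.navigate_to(spec.destination)              # lidar-based navigation
    target = robot.detect_object(spec.grasp_region)  # deep-learned ID/pose, or search
    robot.fine_maneuver(target)                      # position the base near the target
    grasp = robot.grasp_library.best_grasp(target)   # grasp planning from a library
    robot.plan_and_move_arm(grasp)                   # arm motion planning
    if target.estimated_mass_kg > MAX_LIFT_KG:       # manipulation planning:
        robot.drag(target, spec.destination)         # drag objects too heavy to lift
    else:
        robot.lift_and_carry(target, spec.destination)
```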
In December of 2017, members of the Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) conducted an experiment to evaluate the progress of research on robotic grasping of occluded objects. This experiment used the Robotic Manipulator (RoMan) platform, equipped with an Asus Xtion, to identify an object on a table cluttered with other objects and to grasp and pick up the target object. The identification and grasping were conducted with varying input-factor assignments following a formal design of experiments; the factors were target size, target orientation, the number and positions of objects occluding the target from view, and lighting level. Grasping succeeded in 18 of 23 runs (a 78% success rate). The grasping action was conducted within constraints placed on the position and orientation of the RoMan with respect to the table of target objects. The statistical approach of a 'deterministic' design, together with odds ratio analysis, was applied to the task.
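To illustrate the odds ratio analysis mentioned above, the sketch below computes an odds ratio from a 2x2 table of grasp successes and failures for one two-level factor such as lighting; the counts are invented for the example and are not the experiment's data.

```python
# Illustrative odds-ratio calculation for one two-level factor (e.g. lighting).
# The counts below are made up for illustration; they are not the experiment's data.

def odds_ratio(success_a, failure_a, success_b, failure_b):
    """Odds of success at factor level A divided by odds of success at level B."""
    odds_a = success_a / failure_a
    odds_b = success_b / failure_b
    return odds_a / odds_b


# Hypothetical 2x2 table: grasp outcomes under bright vs. dim lighting.
ratio = odds_ratio(success_a=10, failure_a=2, success_b=8, failure_b=3)
print(f"odds ratio (bright vs. dim): {ratio:.2f}")  # > 1 favors bright lighting
```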
The Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) is a program intended to change robots from tools that soldiers use into teammates alongside which soldiers can work. This requires the integration of fundamental and applied research in robotic perception, intelligence, manipulation, mobility, and human-robot interaction. In this paper, we present the results of assessments conducted in 2016 to evaluate the capabilities of a new robot, the Robotic Manipulator (RoMan), and of a cognitive architecture (ACT-R). The RoMan platform was evaluated on its ability to conduct a search-and-grasp task under a variety of conditions: it was required to search for and recognize a gas can placed on the floor and then pick it up. The RoMan showed the potential to be a good platform for autonomous manipulation, but the autonomy used in these experiments will require improvement to make full use of the platform's capabilities. The cognitive architecture was evaluated on how well it could learn to select an appropriate set of features for a classification task. The task was to classify emotions encoded using the Facial Action Coding System (FACS), with ACT-R learning to select the most effective set of features for correct classification. ACT-R learned rules that required it to observe about half of the available features to make a decision, and the resulting decisions had an accuracy ranging from 76% to 93%, depending on the emotion.
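The feature-selection behavior described for ACT-R can be illustrated loosely with a conventional machine-learning analogue. The sketch below performs greedy forward selection of roughly half the features on synthetic data standing in for FACS-coded action units; it is not ACT-R, and the dataset, feature count, and resulting accuracy are placeholders.

```python
# A rough analogy (not ACT-R itself): greedy forward selection of a feature
# subset for classification, keeping roughly half of the available features.
# Uses scikit-learn on synthetic data standing in for FACS-coded action units.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 20 features, 6 classes (one per emotion).
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

base = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(base, n_features_to_select=10,
                                     direction="forward", cv=5)
selector.fit(X, y)

chosen = selector.get_support(indices=True)
acc = cross_val_score(base, X[:, chosen], y, cv=5).mean()
print(f"selected features: {chosen}")
print(f"cross-validated accuracy with half the features: {acc:.2f}")
```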