Operational cleaning robots face numerous challenges when attempting to grasp objects deftly and steadily in cluttered scenes, owing to factors such as confined spaces, stacked items, and restricted sensor perception. To address this problem, we propose CR-Graspnet, a six-degree-of-freedom (6-DoF) grasp generation network that exploits point cloud contact features. The approach decouples the high-dimensional grasp pose by defining contact points, enabling joint learning of contact point sampling, grasp parameter regression, and grasp quality classification. Experimental results demonstrate the effectiveness and feasibility of the method, which achieves a 93% success rate in single-target grasping scenarios.
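Since the network details are not reproduced in the abstract, the following is only a minimal PyTorch sketch of what a contact-based grasp head with the three jointly learned outputs (contact sampling, grasp parameter regression, quality classification) might look like. The module name, feature dimension, and grasp parameterization (approach axis, baseline axis, width, depth) are assumptions for illustration, not CR-Graspnet's actual design.

```python
# Hypothetical sketch of a contact-based grasp head, assuming a PointNet-style
# backbone produces per-point features. All names and the exact grasp
# parameterization are assumptions, not the paper's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContactGraspHead(nn.Module):
    """Joint heads for contact sampling, grasp parameter regression, and quality."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Contact-point sampling: per-point probability of being a valid contact.
        self.contact_head = nn.Conv1d(feat_dim, 1, kernel_size=1)
        # Grasp parameters: 3D approach axis + 3D baseline axis + width + depth.
        self.param_head = nn.Conv1d(feat_dim, 8, kernel_size=1)
        # Grasp quality classification (graspable vs. not), per contact candidate.
        self.quality_head = nn.Conv1d(feat_dim, 1, kernel_size=1)

    def forward(self, point_feats: torch.Tensor):
        # point_feats: (B, feat_dim, N) per-point features from the backbone.
        contact_logits = self.contact_head(point_feats).squeeze(1)   # (B, N)
        params = self.param_head(point_feats)                        # (B, 8, N)
        approach = F.normalize(params[:, 0:3], dim=1)                # unit approach axis
        baseline = F.normalize(params[:, 3:6], dim=1)                # unit baseline axis
        width = torch.sigmoid(params[:, 6])                          # normalized gripper width
        depth = torch.sigmoid(params[:, 7])                          # normalized grasp depth
        quality_logits = self.quality_head(point_feats).squeeze(1)   # (B, N)
        return contact_logits, approach, baseline, width, depth, quality_logits


if __name__ == "__main__":
    head = ContactGraspHead(feat_dim=256)
    feats = torch.randn(2, 256, 1024)   # 2 point clouds, 1024 points each
    outputs = head(feats)
    print([o.shape for o in outputs])
```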
Compared with flat floors, objects in indoor environments present a variety of curved surfaces, and cleaning these surfaces is a higher-level task than traditional ground cleaning. To accomplish this task, we propose a new method that wipes an object's curved surface with the end tool of a robotic arm. Our method consists of two parts: an attention-based feature fusion network (AFFNet) for RGB-D semantic segmentation, which effectively fuses encoder and decoder features to improve segmentation accuracy; and a point-cloud-based path planning algorithm, which autonomously generates the robotic arm's operation path. Experimental results show that AFFNet achieves 46.24% mIoU on the SUN RGB-D dataset, and the robotic arm completes the curved surface cleaning operation along a continuous and smooth path.
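As an illustration of fusing encoder and decoder features with attention, the following is a minimal sketch of one possible fusion block. The channel-attention gating scheme and all names are assumptions made for this sketch and do not reproduce AFFNet's published architecture.

```python
# Hypothetical attention-based encoder/decoder feature fusion block, in the
# spirit of the AFFNet description; the gating design is an assumption.
import torch
import torch.nn as nn


class AttentionFeatureFusion(nn.Module):
    """Fuse an encoder skip feature with a decoder feature via channel attention."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Squeeze-and-excitation style gate computed from the concatenated streams.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat, dec_feat: (B, C, H, W) with matching spatial size.
        fused = self.proj(torch.cat([enc_feat, dec_feat], dim=1))
        attn = self.gate(fused)   # (B, C, 1, 1) per-channel weights
        # The gate decides how much each stream contributes per channel.
        return attn * enc_feat + (1.0 - attn) * dec_feat


if __name__ == "__main__":
    block = AttentionFeatureFusion(channels=64)
    enc = torch.randn(1, 64, 60, 80)
    dec = torch.randn(1, 64, 60, 80)
    print(block(enc, dec).shape)   # torch.Size([1, 64, 60, 80])
```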
Fabric defect segmentation is an important part of ensuring product quality: fabrics with surface defects degrade the quality and reputation of the products made from them. In previous studies, model compression methods have helped deploy semantic segmentation models on resource-limited devices, but reducing model capacity usually degrades detection performance. We propose a knowledge distillation method that combines the traditional KD loss with contrastive relational distillation, so that student models learn discriminative representations of the various defect types while receiving knowledge transferred from the teacher model. We validate the method using DeepLabV3+ and PSPNet student models with MobileNetV2 backbones. Experiments show that our method outperforms direct training and traditional knowledge distillation on the DAGM and AITEX datasets, enabling lightweight models to achieve higher performance on fabric defect segmentation tasks.
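To illustrate how a traditional KD loss can be paired with a contrastive relational term, the sketch below combines temperature-scaled soft-label distillation with an InfoNCE-style loss between student and teacher embeddings. The loss weights, temperatures, and function names are assumptions rather than the paper's exact formulation.

```python
# Hypothetical combined distillation objective: supervised CE + soft-label KD
# (temperature-scaled KL divergence) + a contrastive relational term that pulls
# each student embedding toward its teacher counterpart and away from other
# samples in the batch. Weights and temperatures are illustrative assumptions.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T: float = 4.0) -> torch.Tensor:
    """Traditional soft-target KD loss."""
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def contrastive_relational_loss(student_emb, teacher_emb, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: the matching teacher embedding is the positive,
    all other samples in the batch act as negatives."""
    s = F.normalize(student_emb, dim=1)   # (B, D)
    t = F.normalize(teacher_emb, dim=1)   # (B, D)
    logits = s @ t.t() / tau              # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)


def total_loss(student_logits, teacher_logits, student_emb, teacher_emb,
               labels, alpha: float = 0.5, beta: float = 0.1) -> torch.Tensor:
    """Supervised CE + soft-label KD + contrastive relational distillation."""
    ce = F.cross_entropy(student_logits, labels)
    return (ce + alpha * kd_loss(student_logits, teacher_logits)
               + beta * contrastive_relational_loss(student_emb, teacher_emb))


if __name__ == "__main__":
    B, C, D = 8, 6, 128
    s_logits, t_logits = torch.randn(B, C), torch.randn(B, C)
    s_emb, t_emb = torch.randn(B, D), torch.randn(B, D)
    labels = torch.randint(0, C, (B,))
    print(total_loss(s_logits, t_logits, s_emb, t_emb, labels).item())
```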