A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes

Zhang, Yang and Xie, Lihua and Li, Yuheng and Li, Yuan (2023) A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes. Frontiers in Computational Neuroscience, 17. ISSN 1662-5188

Text: pubmed-zip/versions/1/package-entries/fncom-17-1110889/fncom-17-1110889.pdf - Published Version (Download, 1MB)

Abstract

Object detection and grasp detection are essential for unmanned systems working in cluttered real-world environments. Detecting a grasp configuration for each object in the scene would enable reasoning about manipulation. However, finding the relationships between objects and grasp configurations remains a challenging problem. To address this, we propose a novel neural learning approach, named SOGD, which predicts the best grasp configuration for each detected object from an RGB-D image. The cluttered background is first filtered out via a 3D-plane-based approach. Two separate branches then detect objects and grasp candidates, respectively. The relationship between object proposals and grasp candidates is learned by an additional alignment module. A series of experiments is conducted on two public datasets (the Cornell Grasp Dataset and the Jacquard Dataset), and the results demonstrate the superior performance of our SOGD against state-of-the-art (SOTA) methods in predicting reasonable grasp configurations from a cluttered scene.
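The abstract does not specify how the 3D-plane-based background filtering is implemented. A minimal sketch of one common way to realize such a step is shown below: RANSAC plane segmentation with Open3D, assuming the RGB-D frame has already been converted to a point cloud. The function name, parameters, and thresholds here are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: RANSAC-based dominant-plane removal, one common way to
# realize the "3D-plane-based" background filtering described in the abstract.
# Assumes Open3D is installed and the depth image has been back-projected
# into an (N, 3) array of XYZ points.
import numpy as np
import open3d as o3d

def remove_background_plane(points_xyz: np.ndarray,
                            distance_threshold: float = 0.01) -> np.ndarray:
    """Fit the dominant plane (e.g., a tabletop) and return only the points
    that lie off the plane, i.e., candidate foreground/object points."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # segment_plane returns the plane coefficients (a, b, c, d) and the
    # indices of points within distance_threshold of that plane.
    _, plane_inliers = pcd.segment_plane(distance_threshold=distance_threshold,
                                         ransac_n=3,
                                         num_iterations=1000)

    # Keep everything that is NOT on the plane: these are the foreground
    # points that would be passed on to the object and grasp detection branches.
    foreground = pcd.select_by_index(plane_inliers, invert=True)
    return np.asarray(foreground.points)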

Item Type: Article
Subjects: Open Research Librarians > Medical Science
Depositing User: Unnamed user with email support@open.researchlibrarians.com
Date Deposited: 27 Mar 2023 09:02
Last Modified: 04 Apr 2024 09:24
URI: http://stm.e4journal.com/id/eprint/468
