Abstract:
To become generally useful in human life and work, robots must first master the skill of grasping objects. However, most current robot grasping decision-making methods suffer from a shallow understanding of object features, a lack of prior grasping knowledge, and poor task compatibility. Inspired by the functional partitioning of regions in the brain, this paper proposes a decision model that integrates object perception, prior knowledge, and the grasping task. The model consists of three parts: a convolutional perception network, a memory graph network, and a Bayesian decision network, which realize functional affordance extraction from objects, reasoning over and association of prior grasping knowledge, and decision-making through information fusion, respectively. The three networks were trained on the UMD part affordance dataset, a self-built common-sense graph, and a self-built decision dataset, respectively. Tests of the cognitive model verified its good performance, with an accuracy of 99.8%. The results show that the model can make reasonable decisions, including judging whether an object belongs to the current task scene and determining whether and where to grasp it, which can help improve the robot's usability in real applications.