TY - GEN
T1 - Model-free and learning-free grasping by Local Contact Moment matching
AU - Adjigble, Maxime
AU - Marturi, Naresh
AU - Ortenzi, Valerio
AU - Rajasekaran, Vijaykumar
AU - Corke, Peter
AU - Stolkin, Rustam
PY - 2019/1/7
Y1 - 2019/1/7
N2 - This paper addresses the problem of grasping arbitrarily shaped objects, observed as partial point-clouds, without requiring models of the objects, physics parameters, training data, or other a priori knowledge. A grasp metric is proposed based on the Local Contact Moment (LoCoMo). LoCoMo combines zero-moment shift features of both hand and object surface patches to determine local similarity. This metric is then used to search for a set of feasible grasp poses with associated grasp likelihoods. LoCoMo overcomes some limitations of both classical grasp planners and learning-based approaches. Unlike force-closure analysis, LoCoMo does not require knowledge of physical parameters such as friction coefficients, and it avoids assumptions about fingertip contacts, instead enabling robust contact over large areas of the hand and object surfaces. Unlike more recent learning-based approaches, LoCoMo requires no training data and no prototype grasp configurations taught by kinesthetic demonstration. We present results of real-robot experiments grasping 21 different objects observed by a wrist-mounted depth camera. All objects are grasped successfully when presented to the robot individually. The robot also successfully clears cluttered heaps of objects by sequentially grasping and lifting them until none remain.
KW - Grasping
KW - Robots
KW - Measurement
KW - Grippers
KW - Shape
KW - Three-dimensional displays
KW - Training data
UR - http://www.scopus.com/inward/record.url?scp=85062289798&partnerID=8YFLogxK
U2 - 10.1109/IROS.2018.8594226
DO - 10.1109/IROS.2018.8594226
M3 - Conference contribution
SN - 978-1-5386-8095-7 (PoD)
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 2933
EP - 2940
BT - 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -