In this paper we present a multi-modal framework for offline learning of generative models of object deformation under robotic pushing. The model is multi-modal in that it integrates force and visual information. The framework consists of several sub-models that are independently calibrated from the same data. These component models can be sequenced to provide many-step prediction and classification. When presented with a test example – a robot finger pushing a deformable object made of an unidentified, but previously learned, material – the predictions of the modules for the different materials are compared so as to classify the unknown material. Our approach, which combines offline learning with multiple composable models, goes beyond previous techniques by enabling i) prediction over many steps, ii) learning of plastic and elastic deformation from real data, iii) prediction of the forces experienced by the robot, iv) classification of materials from both force and visual data, and v) prediction of object behaviour after the robot's contact terminates. While previous work on deformable object behaviour in robotics has offered one or two of these features, none has offered a way to achieve them all, and none has offered classification from a generative model. We do so through separately learned models which can be combined in different ways for different purposes.
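The classification scheme described above can be illustrated with a minimal sketch: each material's learned one-step model is rolled out to give a many-step prediction, and the material whose prediction best matches the observed trajectory is selected. All names here (`rollout`, `classify_material`, the toy linear models) are hypothetical illustrations, not the paper's actual models, which are learned from force and visual data.

```python
import numpy as np

def rollout(model, state, n_steps):
    """Sequence a learned one-step model to obtain a many-step prediction."""
    states = [state]
    for _ in range(n_steps):
        state = model(state)
        states.append(state)
    return np.stack(states)

def classify_material(models, observed_traj):
    """Compare each material's predicted trajectory against the observed
    one and return the label of the best-matching material."""
    n_steps = len(observed_traj) - 1
    errors = {}
    for material, model in models.items():
        pred = rollout(model, observed_traj[0], n_steps)
        errors[material] = np.mean((pred - observed_traj) ** 2)
    return min(errors, key=errors.get)

# Toy demo: two stand-in "materials" modelled as linear decays with
# different rates (a crude proxy for elastic vs. plastic response).
models = {
    "sponge": lambda s: 0.5 * s,
    "plasticine": lambda s: 0.9 * s,
}
true_traj = rollout(models["plasticine"], np.array([1.0]), 5)
print(classify_material(models, true_traj))  # → plasticine
```

The same comparison-of-predictions idea extends to likelihood-based scoring when the per-material models are generative, as in the paper.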
Pages (from-to): 153–169
Number of pages: 17
Journal: IEEE Transactions on Cognitive and Developmental Systems
Early online date: 3 Feb 2017
Publication status: Published - Jun 2017
- deformable objects