A Multi-Modal Model of Object Deformation under Robotic Pushing

Veronica E. Arriola-Rios, Jeremy L. Wyatt

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)
247 Downloads (Pure)


In this paper we present a multi-modal framework for offline learning of generative models of object deformation under robotic pushing. The model is multi-modal in that it integrates force and visual information. The framework consists of several sub-models that are independently calibrated from the same data. These component models can be sequenced to provide many-step prediction and classification. When presented with a test example (a robot finger pushing a deformable object made of an unidentified, but previously learned, material), the predictions of the modules for different materials are compared so as to classify the unknown material. Our approach, which consists of offline learning and the combination of multiple models, goes beyond previous techniques by enabling: i) prediction over many steps; ii) learning of plastic and elastic deformation from real data; iii) prediction of the forces experienced by the robot; iv) classification of materials from both force and visual data; and v) prediction of object behaviour after the robot's contact terminates. While previous work on deformable object behaviour in robotics has offered one or two of these features, none has offered a way to achieve them all, and none has offered classification from a generative model. We do so through separately learned models which can be combined in different ways for different purposes.
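The abstract's classification scheme (compare the many-step predictions of per-material generative models against the observed behaviour, and pick the best-matching material) can be sketched as follows. This is a minimal illustration only: the one-step "elastic" and "plastic" dynamics, the push sequence, and all names are hypothetical stand-ins, not the paper's learned models.

```python
# Hypothetical sketch: classify an unknown material by comparing the
# many-step prediction error of per-material generative models.
# The dynamics below are toy stand-ins for the paper's learned sub-models.
import numpy as np

def rollout(step_fn, state, pushes):
    """Sequence a one-step model over many steps (open-loop prediction)."""
    states = [state]
    for u in pushes:
        state = step_fn(state, u)
        states.append(state)
    return np.array(states)

# Toy one-step deformation models for two "materials":
# elastic deformation partially recovers; plastic deformation accumulates.
elastic = lambda s, u: 0.5 * s + u
plastic = lambda s, u: s + u

pushes = np.array([1.0, 1.0, 0.0, 0.0])    # finger push inputs
observed = rollout(plastic, 0.0, pushes)    # the "unknown" object is plastic

# Classify as the material whose rollout best explains the observations.
models = {"elastic": elastic, "plastic": plastic}
errors = {name: float(np.sum((rollout(f, 0.0, pushes) - observed) ** 2))
          for name, f in models.items()}
best = min(errors, key=errors.get)
print(best)  # the plastic model matches the observed trajectory exactly
```

Because the models are generative, the same rollout machinery serves both prediction (forecasting deformation and force over many steps) and classification (scoring each material hypothesis against the data).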
Original language: English
Pages (from-to): 153–169
Number of pages: 17
Journal: IEEE Transactions on Cognitive and Developmental Systems
Issue number: 2
Early online date: 3 Feb 2017
Publication status: Published - Jun 2017


Keywords:
  • deformable objects
  • learning
  • prediction
  • classification

