Uncertainty Averse Pushing with Model Predictive Path Integral Control

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Planning robust robot manipulation requires good forward models that enable robust plans to be found. This work shows how to achieve this using a forward model learned from robot data to plan push manipulations. We explore learning methods (Gaussian Process Regression, and an Ensemble of Mixture Density Networks) that give estimates of the uncertainty in their predictions. These learned models are used by a model predictive path integral (MPPI) controller to plan how to push the box to a goal location. The planner avoids regions of high predictive uncertainty in the forward model. This includes both inherent uncertainty in the dynamics, and meta-uncertainty due to limited data. Thus, pushing tasks are completed in a fashion that is robust with respect to the estimated uncertainty in the forward model, and without the need for differentiable cost functions. We demonstrate the method on a real robot, and show that learning can outperform physics simulation. Using simulation, we also show the ability to plan uncertainty averse paths.
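The uncertainty-averse MPPI update described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the dynamics stand-in, the variance model, and all parameter names here are hypothetical, and a real system would use the learned GP or MDN-ensemble forward model in place of `forward_model`. The key idea shown is the path-integral control update — sample noisy control sequences, roll them out, and average them with exponentiated-cost weights — with predictive variance added as a cost term so the planner avoids uncertain regions.

```python
import numpy as np

def forward_model(state, control):
    """Hypothetical stand-in for a learned forward model. A real GP or
    MDN-ensemble model returns a predicted next state plus a predictive
    variance; here both are fabricated toy quantities."""
    next_state = state + 0.1 * control           # toy push dynamics
    variance = 0.01 + 0.1 * np.sum(control ** 2)  # uncertainty grows with effort
    return next_state, variance

def mppi_plan(state0, goal, horizon=10, n_samples=200,
              noise_std=0.5, lam=1.0, uncertainty_weight=5.0, seed=0):
    """One MPPI update step. Penalising predictive variance alongside the
    distance-to-goal cost makes the resulting plan uncertainty averse."""
    rng = np.random.default_rng(seed)
    dim = state0.shape[0]
    nominal = np.zeros((horizon, dim))            # nominal control sequence
    noise = rng.normal(0.0, noise_std, size=(n_samples, horizon, dim))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        state = state0.copy()
        for t in range(horizon):
            u = nominal[t] + noise[k, t]
            state, var = forward_model(state, u)
            # running cost: distance to goal + predictive-uncertainty penalty
            costs[k] += np.sum((state - goal) ** 2) + uncertainty_weight * var
    # exponentiated-cost (softmax) weighting: the path-integral update
    beta = costs.min()                            # for numerical stability
    weights = np.exp(-(costs - beta) / lam)
    weights /= weights.sum()
    # weighted average of the sampled perturbations updates the plan
    return nominal + np.einsum("k,ktd->td", weights, noise)

controls = mppi_plan(np.zeros(2), np.array([1.0, 1.0]))
```

Because the weighting favours low-cost rollouts, samples that both approach the goal and stay in low-variance regions of the model dominate the update; raising `uncertainty_weight` trades progress toward the goal for reduced model uncertainty along the path.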


Original language: English
Title of host publication: Proceedings of 2017 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2017)
Publication status: Published - 15 Nov 2017
Event: 2017 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2017) - Birmingham, United Kingdom
Duration: 15 Nov 2017 - 17 Nov 2017


Conference: 2017 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2017)
Country: United Kingdom


Keywords: Cost function, Data models, Uncertainty, Predictive models, Planning, Robots, Trajectory