Robot plan execution for information gathering tasks with resource constraints

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Partially observable Markov decision processes (POMDPs) have been widely used to model real-world problems because of their ability to capture uncertainty in states, actions and observations. In robotics, problems are also subject to constraints, such as time constraints or resource constraints on executing actions. In this work, we address planning in the presence of both uncertainty and constraints. Constrained POMDPs extend general POMDPs by explicitly representing constraints in the goal conditions. Our approach is to use a translation-based method to generate an MDP policy off-line, and to apply a value-of-information calculation on-line to stochastically select observation actions, taking into account both the information they gain and their resource usage. This on-line selection scheme was evaluated in a number of scenarios and simulations, and the preliminary results show that our approach achieves better performance than deterministic schemes.
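The on-line step described in the abstract can be illustrated with a small sketch. The following is not the authors' implementation: it assumes a discrete belief over states, a set of candidate observation actions each with a known observation model and resource cost, approximates value of information as expected entropy reduction, and samples an action via a softmax over VOI-per-cost ratios (the function names and the temperature parameter are illustrative).

```python
import math
import random

def entropy(belief):
    """Shannon entropy of a discrete belief over states."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def expected_posterior_entropy(belief, obs_model):
    """Expected belief entropy after taking an observation action.

    obs_model[s][o] = probability of observation o when the state is s.
    """
    n_obs = len(obs_model[0])
    exp_h = 0.0
    for o in range(n_obs):
        # P(s, o) for each state, then normalise to the posterior P(s | o).
        joint = [belief[s] * obs_model[s][o] for s in range(len(belief))]
        p_o = sum(joint)
        if p_o > 0:
            posterior = [j / p_o for j in joint]
            exp_h += p_o * entropy(posterior)
    return exp_h

def select_observation_action(belief, actions, temperature=1.0, rng=random):
    """Stochastically select an observation action, weighting each action
    by its value of information (entropy reduction) per unit resource cost.

    actions: list of (name, obs_model, cost) tuples -- all illustrative.
    """
    scores = []
    for name, obs_model, cost in actions:
        voi = entropy(belief) - expected_posterior_entropy(belief, obs_model)
        scores.append(voi / cost)
    # Softmax over VOI/cost ratios, then sample proportionally.
    weights = [math.exp(s / temperature) for s in scores]
    r = rng.random() * sum(weights)
    for (name, _, _), w in zip(actions, weights):
        r -= w
        if r <= 0:
            return name
    return actions[-1][0]
```

Under this sketch, a sensor that sharply discriminates states yields a larger entropy reduction and is therefore sampled more often, while an expensive or uninformative sensor is down-weighted rather than excluded outright, matching the stochastic (as opposed to deterministic) flavour of the selection scheme described above.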


Original language: English
Title of host publication: European Conference on Mobile Robots
Publication status: Accepted/In press - 1 Sep 2015
Event: European Conference on Mobile Robots, 7th - Lincoln, United Kingdom
Duration: 2 Sep 2015 - 4 Sep 2015


Conference: European Conference on Mobile Robots, 7th
Country: United Kingdom