Partially observable Markov decision processes (POMDPs) have been widely used to model real-world problems because of their ability to capture uncertainty in states, actions, and observations. In robotics, problems also come with constraints, such as time or resource constraints on executing actions. In this work, we address planning in the presence of both uncertainty and constraints. Constrained POMDPs extend general POMDPs by explicitly representing constraints in the goal conditions. Our approach uses a translation-based method to generate an MDP policy off-line, and then applies a value-of-information calculation on-line to stochastically select observation actions, taking into account both the information they gain and their resource usage. This on-line selection scheme was evaluated in a number of scenarios and simulations, and the preliminary results show that our approach achieves better performance than deterministic schemes.
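The abstract's on-line selection step can be illustrated with a minimal sketch. This is not the paper's implementation; the scoring rule (information gain minus resource cost, turned into a softmax selection distribution) and all names (`select_observation_action`, `info_gain`, `resource_cost`, `temperature`) are assumptions chosen to show one plausible way to trade off gain against usage stochastically.

```python
import math
import random

def select_observation_action(actions, info_gain, resource_cost,
                              temperature=1.0, rng=random):
    """Stochastically select an observation action (illustrative sketch).

    Each action is scored by expected information gain minus resource
    cost; a softmax over the scores gives a selection distribution, so
    higher-value actions are more likely but not chosen deterministically.
    """
    scores = [info_gain[a] - resource_cost[a] for a in actions]
    m = max(scores)  # subtract max for numerical stability
    weights = [math.exp((s - m) / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw one action according to the softmax distribution.
    r = rng.random()
    cum = 0.0
    for a, p in zip(actions, probs):
        cum += p
        if r < cum:
            return a
    return actions[-1]

# Hypothetical example: two sensing actions with different gain/cost.
actions = ["cheap_sensor", "rich_sensor"]
gain = {"cheap_sensor": 0.3, "rich_sensor": 1.0}
cost = {"cheap_sensor": 0.1, "rich_sensor": 0.5}
choice = select_observation_action(actions, gain, cost)
```

Lowering `temperature` makes the scheme approach a deterministic argmax over the scores, so the deterministic baselines mentioned in the abstract can be seen as a limiting case of this selection rule.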
Title of host publication: European Conference on Mobile Robots
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication status: Accepted/In press - 1 Sep 2015
Event: European Conference on Mobile Robots, 7th - Lincoln, United Kingdom
Duration: 2 Sep 2015 → 4 Sep 2015