Abstract
Partially observable Markov decision processes (POMDPs) have been widely used to model real-world problems because of their ability to capture uncertainty in states, actions and observations. In robotics, problems are often subject to additional constraints, such as time or resource constraints on executing actions. In this work, we address the problem of planning in the presence of both uncertainty and constraints. Constrained POMDPs extend general POMDPs by explicitly representing constraints in the goal conditions. Our approach uses a translation-based method to generate an MDP policy off-line, and applies a value-of-information calculation on-line to stochastically select observation actions, taking into account both the information they gain and their resource usage. This on-line selection scheme was evaluated in a number of scenarios and simulations, and the preliminary results show that our approach achieves better performance than deterministic schemes.
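To illustrate the kind of on-line selection the abstract describes, below is a minimal sketch of stochastically choosing an observation action weighted by its value of information net of resource cost. This is not the authors' implementation; the helper names `voi`, `cost` and the `temperature` parameter are assumptions introduced purely for illustration.

```python
# Illustrative sketch (not the paper's implementation): pick an observation action
# with probability proportional to exp((VOI - cost) / temperature), so that
# informative but cheap actions are favoured while others are still sampled.
import math
import random

def select_observation_action(belief, obs_actions, voi, cost, temperature=1.0):
    """Stochastically select one observation action given the current belief.

    voi(belief, action)  -> estimated value of the information the action gains
    cost(action)         -> resource usage of executing the action
    (both are assumed, caller-supplied functions)
    """
    scores = [(voi(belief, a) - cost(a)) / temperature for a in obs_actions]
    max_score = max(scores)                        # subtract max for numerical stability
    weights = [math.exp(s - max_score) for s in scores]
    threshold = random.random() * sum(weights)
    cumulative = 0.0
    for action, w in zip(obs_actions, weights):
        cumulative += w
        if threshold <= cumulative:
            return action
    return obs_actions[-1]
```

A lower `temperature` makes the choice closer to deterministically picking the highest net-value observation; a higher one spreads probability more evenly across the observation actions.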
Original language | English |
---|---|
Title of host publication | European Conference on Mobile Robots |
Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
ISBN (Print) | 978-1-4673-9163-4 |
Publication status | Accepted/In press - 1 Sept 2015 |
Event | European Conference on Mobile Robots, 7th - Lincoln, United Kingdom. Duration: 2 Sept 2015 → 4 Sept 2015 |
Conference
Conference | European Conference on Mobile Robots, 7th |
---|---|
Country/Territory | United Kingdom |
City | Lincoln |
Period | 2/09/15 → 4/09/15 |