Robot plans execution for information gathering tasks with resources constraints

Minlue Wang, Richard Dearden, Nick Hawes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Partially observable Markov decision processes (POMDPs) have been widely used to model real-world problems because of their ability to capture uncertainty in states, actions and observations. In robotics, problems are also subject to constraints, such as time or resource limits on executing actions. In this work, we address planning in the presence of both uncertainty and constraints. Constrained POMDPs extend general POMDPs by explicitly representing constraints in the goal conditions. The approach we take in this paper is to use a translation-based method to generate an MDP policy off-line, and then to apply a value-of-information calculation on-line to stochastically select observation actions, taking into account both the information they gain and their resource usage. This on-line selection scheme was evaluated in a number of scenarios and simulations, and the preliminary results show that our approach achieves better performance than deterministic schemes.
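The on-line step described in the abstract — scoring each observation action by the information it is expected to gain minus its resource cost, then selecting stochastically — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the discrete belief representation, the use of expected entropy reduction as the value of information, the softmax weighting, and all function names are assumptions introduced here for illustration.

```python
import math
import random

def entropy(belief):
    """Shannon entropy of a discrete belief distribution over states."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def expected_posterior_entropy(belief, obs_model):
    """Expected belief entropy after an observation action.

    obs_model[o][s] = P(observation o | state s).
    """
    exp_h = 0.0
    for o_probs in obs_model:
        # P(o) = sum_s P(o | s) * b(s)
        p_o = sum(po_s * b for po_s, b in zip(o_probs, belief))
        if p_o == 0:
            continue
        # Bayes update of the belief for this observation.
        posterior = [po_s * b / p_o for po_s, b in zip(o_probs, belief)]
        exp_h += p_o * entropy(posterior)
    return exp_h

def stochastic_select(belief, actions, temperature=1.0):
    """Sample an observation action with probability increasing in
    (expected information gain - resource cost).

    actions: list of (name, obs_model, resource_cost) tuples.
    """
    h0 = entropy(belief)
    scores = [h0 - expected_posterior_entropy(belief, m) - c
              for _, m, c in actions]
    # Softmax over scores (shifted by the max for numerical stability).
    mx = max(scores)
    weights = [math.exp((s - mx) / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices([name for name, _, _ in actions], probs)[0]

# Example: an accurate but costly sensor vs. a cheap noisy one.
belief = [0.5, 0.5]
actions = [
    ("accurate_sensor", [[0.9, 0.1], [0.1, 0.9]], 0.2),
    ("cheap_sensor",    [[0.6, 0.4], [0.4, 0.6]], 0.05),
]
chosen = stochastic_select(belief, actions)
```

The softmax keeps the choice stochastic, as in the abstract, rather than always committing to the single highest-scoring action; a deterministic scheme would instead take the argmax of the same scores.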
Original language: English
Title of host publication: European Conference on Mobile Robots
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISBN (Print): 978-1-4673-9163-4
Publication status: Accepted/In press - 1 Sept 2015
Event: European Conference on Mobile Robots, 7th - Lincoln, United Kingdom
Duration: 2 Sept 2015 - 4 Sept 2015


Conference: European Conference on Mobile Robots, 7th
Country/Territory: United Kingdom


