Abstract
We investigate the problem of learning to control small groups of units in combat situations in Real Time Strategy (RTS) games. AI systems may acquire such skills by observing and learning from expert players, or from other AI systems performing those tasks. However, access to training data may be limited, and representations based on metric information (position, velocity, orientation, etc.) may be brittle, difficult for learning mechanisms to work with, and generalise poorly to new situations. In this work we apply qualitative spatial relations to compress such continuous, metric state-spaces into symbolic states, and show that this makes the learning problem easier and allows for more general models of behaviour. Models learnt from this representation are used to control situated agents, imitating the observed behaviour of both synthetic (pre-programmed) agents and human-controlled agents on a number of canonical micromanagement tasks. We show how a Monte-Carlo method can be used to decompress qualitative data back into quantitative data for practical use in our control system. We present our work applied to the popular RTS game StarCraft.
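The abstract's two key steps can be sketched in code. Below is a minimal, hypothetical illustration (not the paper's actual relational calculus): metric unit offsets are compressed into symbolic (distance, direction) relations, and a simple Monte-Carlo rejection sampler "decompresses" a symbolic state back into a concrete metric offset consistent with it. The bin boundaries and sector labels are assumptions chosen for the sketch.

```python
import math
import random

# Hypothetical qualitative discretisation: three distance bins and eight
# 45-degree direction sectors, standing in for the paper's qualitative
# spatial relations.
DIST_BINS = [("near", 0.0, 5.0), ("mid", 5.0, 15.0), ("far", 15.0, float("inf"))]
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def qualitative_state(dx, dy):
    """Compress a metric offset (dx, dy) into a (distance, direction) symbol."""
    r = math.hypot(dx, dy)
    dist = next(name for name, lo, hi in DIST_BINS if lo <= r < hi)
    # Shift by half a sector so "E" is centred on 0 degrees.
    sector = int(((math.degrees(math.atan2(dy, dx)) + 360 + 22.5) % 360) // 45)
    return dist, DIRECTIONS[sector]

def sample_metric(dist, direction, max_far=30.0, rng=random):
    """Monte-Carlo decompression: rejection-sample a metric offset whose
    qualitative state matches the given (distance, direction) symbol."""
    while True:
        dx = rng.uniform(-max_far, max_far)
        dy = rng.uniform(-max_far, max_far)
        if qualitative_state(dx, dy) == (dist, direction):
            return dx, dy
```

For example, `qualitative_state(3.0, 0.0)` yields `("near", "E")`, and any point returned by `sample_metric("far", "SW")` compresses back to `("far", "SW")`; this round-trip property is what lets a controller plan over symbols yet act in metric space.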
Original language | English |
---|---|
Title of host publication | Proceedings of the 10th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2014 |
Publisher | AAAI Press |
Pages | 195-201 |
Number of pages | 7 |
ISBN (Print) | 9781577356813 |
Publication status | Published - 2014 |
Event | 10th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2014 - Raleigh, United States |
Duration | 3 Oct 2014 → 7 Oct 2014 |
Conference
Conference | 10th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2014 |
---|---|
Country/Territory | United States |
City | Raleigh |
Period | 3/10/14 → 7/10/14 |
ASJC Scopus subject areas
- Artificial Intelligence
- Visual Arts and Performing Arts