TY - JOUR
T1 - A Study of Structural and Parametric Learning in XCS
AU - Kovacs, Tim
AU - Kerber, Manfred
PY - 2006/3/1
Y1 - 2006/3/1
N2 - The performance of a learning classifier system is due to its two main components. First, it evolves new structures by generating new rules in a genetic process; second, it adjusts parameters of existing rules, for example rule prediction and accuracy, in an evaluation step, which is not only important for applying the rules, but also for the genetic process. The two components interleave and in the case of XCS drive the population toward a minimal, fit, non-overlapping population. In this work we attempt to gain new insights as to the relative contributions of the two components. We find that the genetic component has an additional role when using the train/test approach which is not present in online learning. We compare XCS to a system in which the rule set is restricted to the initial random population (XCS-NGA, that is, XCS No Genetic Algorithm). For small Boolean functions we can give XCS-NGA all possible rules of a particular condition length. In online learning, XCS-NGA can, given sufficiently many rules, achieve a surprisingly high classification accuracy, comparable to that of XCS. In a train/test approach, however, XCS generalises better than XCS-NGA and there seem to be limitations of XCS-NGA which cannot be overcome simply by increasing the population size. This illustrates that the requirements of a function approximator tend to differ between reinforcement learning (which is typically online) and concept learning (which is typically train/test).
AB - The performance of a learning classifier system is due to its two main components. First, it evolves new structures by generating new rules in a genetic process; second, it adjusts parameters of existing rules, for example rule prediction and accuracy, in an evaluation step, which is not only important for applying the rules, but also for the genetic process. The two components interleave and in the case of XCS drive the population toward a minimal, fit, non-overlapping population. In this work we attempt to gain new insights as to the relative contributions of the two components. We find that the genetic component has an additional role when using the train/test approach which is not present in online learning. We compare XCS to a system in which the rule set is restricted to the initial random population (XCS-NGA, that is, XCS No Genetic Algorithm). For small Boolean functions we can give XCS-NGA all possible rules of a particular condition length. In online learning, XCS-NGA can, given sufficiently many rules, achieve a surprisingly high classification accuracy, comparable to that of XCS. In a train/test approach, however, XCS generalises better than XCS-NGA and there seem to be limitations of XCS-NGA which cannot be overcome simply by increasing the population size. This illustrates that the requirements of a function approximator tend to differ between reinforcement learning (which is typically online) and concept learning (which is typically train/test).
UR - http://www.scopus.com/inward/record.url?scp=33646101262&partnerID=8YFLogxK
U2 - 10.1162/evco.2006.14.1.1
DO - 10.1162/evco.2006.14.1.1
M3 - Article
C2 - 16536886
SN - 1530-9304
VL - 14
SP - 1
EP - 19
JO - Evolutionary Computation
JF - Evolutionary Computation
IS - 1
ER -