Boosting Relational Sequence Alignments

Andreas Karwath, Kristian Kersting, Niels Landwehr

Research output: Contribution to conference (unpublished) · Paper · peer-review

18 Citations (Scopus)


The task of aligning sequences arises in many applications. Classical dynamic programming approaches require explicit state enumeration in the reward model. This is often impractical: the number of states grows very quickly with the number of domain objects and relations among these objects. Relational sequence alignment aims at exploiting symbolic structure to avoid the full enumeration. This comes at the expense of a more complex reward model selection problem: virtually infinitely many abstraction levels have to be explored. In this paper, we apply gradient-based boosting to tackle this problem. Specifically, we show how to reduce the learning problem to a series of relational regression problems. The main benefit of this is that interactions between state variables are introduced only as needed, so that the potentially infinite search space is not explicitly considered. As our experimental results show, this boosting approach can significantly improve upon established results in challenging applications.
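The reduction sketched in the abstract follows the general functional-gradient-boosting recipe: at each stage, a simple regression model is fitted to the pointwise negative gradients of the loss (for squared loss, the residuals), and the stage models are summed into an additive predictor. The minimal sketch below illustrates this generic recipe with one-dimensional threshold stumps; the paper's setting uses relational regression models instead, and all names and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Generic functional gradient boosting with squared loss (a sketch only):
# each round fits a regression stump to the current residuals, i.e. the
# pointwise negative gradients, and adds it to the ensemble.

def fit_stump(xs, residuals):
    """Fit a one-split regression stump minimising squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=20, lr=0.5):
    """Additive model F(x) = sum_i lr * stump_i(x): one regression per round."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        # Residuals = negative gradient of squared loss at current predictions.
        residuals = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Usage: approximate a step function with a series of stump regressions.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
F = boost(xs, ys)
```

The key property the abstract highlights carries over from this sketch: the model grows greedily, one weak regressor at a time, so structure (here, split points; in the relational case, interactions between state variables) is introduced only as the gradients demand it, and the full hypothesis space is never enumerated.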
Original language: English
Publication status: Published - 2008


  • inductive logic programming, machine learning, relational learning, scientific knowledge


