Adaptive incremental learning for statistical relational models using gradient-based boosting

Yulong Gu, Paolo Missier

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider the problem of incrementally learning models from relational data. Most existing learning methods for statistical relational models use batch learning, which becomes computationally expensive and eventually infeasible for large datasets. The majority of previous work on relational incremental learning assumes that the model's structure is given and that only the model's parameters need to be learned. In this paper, we propose algorithms that incrementally learn the model's parameters and structure simultaneously. These algorithms build on the successful formalisation of relational functional gradient boosting (RFGB), and extend classical propositional ensemble methods to relational learning for handling evolving data streams.
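The core idea can be illustrated with a minimal sketch, assuming a heavily simplified propositional setting: each arriving mini-batch computes the pointwise functional gradient of the log-likelihood (the classic "y minus predicted probability" gradient used in RFGB) and fits one new weak learner to those gradients, so the ensemble grows incrementally as data streams in. The paper's method learns relational regression trees over relational data; the 1-D regression stumps and the `IncrementalBooster` class below are hypothetical stand-ins for illustration only.

```python
# Hedged sketch: incremental functional-gradient boosting on a stream.
# The paper's RFGB variant uses relational regression trees; here a
# depth-1 stump over a single numeric feature stands in for clarity.
import math

class Stump:
    """Depth-1 regression tree: one threshold, two leaf values."""
    def __init__(self, threshold, left, right):
        self.threshold, self.left, self.right = threshold, left, right

    def predict(self, x):
        return self.left if x < self.threshold else self.right

def fit_stump(xs, grads):
    """Least-squares fit of a stump to the pointwise gradients."""
    best = None
    for t in sorted(set(xs)):
        left = [g for x, g in zip(xs, grads) if x < t]
        right = [g for x, g in zip(xs, grads) if x >= t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = (sum((g - lv) ** 2 for g in left)
               + sum((g - rv) ** 2 for g in right))
        if best is None or err < best[0]:
            best = (err, Stump(t, lv, rv))
    mean = sum(grads) / len(grads)
    return best[1] if best else Stump(0.0, mean, mean)

class IncrementalBooster:
    def __init__(self):
        self.trees = []

    def score(self, x):
        # psi(x): the sum of all weak learners fitted so far
        return sum(t.predict(x) for t in self.trees)

    def prob(self, x):
        # P(y=1 | x) via the sigmoid of the additive score
        return 1.0 / (1.0 + math.exp(-self.score(x)))

    def update(self, xs, ys):
        # Functional gradient of the log-likelihood at each example:
        # y - P(y=1 | x), fitted by one new regression stump per batch
        grads = [y - self.prob(x) for x, y in zip(xs, ys)]
        self.trees.append(fit_stump(xs, grads))

booster = IncrementalBooster()
# Two arriving mini-batches of a simple threshold concept (x > 0.5 -> 1)
for batch in ([(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)],
              [(0.3, 0), (0.7, 1), (0.6, 1), (0.4, 0)]):
    xs, ys = zip(*batch)
    booster.update(list(xs), list(ys))

assert booster.prob(0.9) > 0.5 > booster.prob(0.1)
```

Each call to `update` adds exactly one tree, so the model never revisits past batches; handling concept drift (e.g. via Hoeffding-bound tests, one of the paper's keywords) would additionally require pruning or replacing stale trees, which this sketch omits.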

Original language: English
Pages (from-to): 22-26
Number of pages: 5
Journal: CEUR Workshop Proceedings
Volume: 2085
Publication status: Published - 2018
Event: Late Breaking Papers of the 27th International Conference on Inductive Logic Programming, LBP-ILP 2017 - Orleans, France
Duration: 4 Sept 2017 - 6 Sept 2017

Bibliographical note

Publisher Copyright:
© by the paper's authors.

Keywords

  • Concept drift
  • Ensemble methods
  • Gradient-based boosting
  • Hoeffding bound
  • Incremental learning
  • Relational regression tree
  • Statistical relational learning

ASJC Scopus subject areas

  • General Computer Science
