Verifying reinforcement learning up to infinity

Edoardo Bacci, Mirco Giacobbe, David Parker

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Formally verifying that reinforcement learning systems act safely is increasingly important, but existing methods only verify over finite time. This is of limited use for dynamical systems that run indefinitely. We introduce the first method for verifying the time-unbounded safety of neural networks controlling dynamical systems. We develop a novel abstract interpretation method which, by constructing adaptable template-based polyhedra using MILP and interval arithmetic, yields sound (safe and invariant) overapproximations of the reach set. This provides stronger safety guarantees than previous time-bounded methods and shows whether the agent has generalised beyond the length of its training episodes. Our method supports ReLU activation functions and systems with linear, piecewise linear and non-linear dynamics defined with polynomial and transcendental functions. We demonstrate its efficacy on a range of benchmark control problems.
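To illustrate the interval-arithmetic ingredient the abstract mentions, here is a minimal, hypothetical sketch (not the paper's implementation) of propagating a box of states through a small ReLU controller to obtain a sound overapproximation of its outputs. The network weights and box bounds are invented for illustration.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W @ x + b.
    # Splitting W into positive and negative parts pairs each output
    # bound with the correct end of the input interval.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so applying it elementwise to the bounds
    # yields a sound (possibly loose) overapproximation of the image.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy controller: one hidden ReLU layer, scalar output (weights are arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), np.zeros(4)
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)

# A box of initial states around the origin.
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # bounds that bracket every possible controller output
```

Any concrete state in the initial box yields a controller output inside `[lo, hi]`; the paper tightens such overapproximations with template polyhedra and MILP rather than plain boxes.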
Original language: English
Title of host publication: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Subtitle of host publication: Montreal, 19-27 August 2021
Editors: Zhi-Hua Zhou
Publisher: International Joint Conferences on Artificial Intelligence Organization (IJCAI)
Pages: 2154-2160
Number of pages: 7
ISBN (Electronic): 9780999241196
Publication status: Published - 27 Aug 2021
Event: 30th International Joint Conference on Artificial Intelligence (IJCAI-21)
Duration: 21 Aug 2021 - 26 Aug 2021

Conference

Conference: 30th International Joint Conference on Artificial Intelligence (IJCAI-21)
Abbreviated title: IJCAI-21
Period: 21/08/21 - 26/08/21

Keywords

  • Machine Learning: Deep Reinforcement Learning
  • Multidisciplinary Topics and Applications: Validation and Verification
  • Robotics: Learning in Robotics
