Probabilistic Guarantees for Safe Deep Reinforcement Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep reinforcement learning has been successfully applied to many control tasks, but the application of such controllers in safety-critical scenarios has been limited due to safety concerns. Rigorous testing of these controllers is challenging, particularly when they operate in probabilistic environments due to, for example, hardware faults or noisy sensors. We propose MOSAIC, an algorithm for measuring the safety of deep reinforcement learning controllers in stochastic settings. Our approach is based on the iterative construction of a formal abstraction of a controller’s execution in an environment, and leverages probabilistic model checking of Markov decision processes to produce probabilistic guarantees on safe behaviour over a finite time horizon. It produces bounds on the probability of safe operation of the controller for different initial configurations and identifies regions where correct behaviour can be guaranteed. We implement and evaluate our approach on controllers trained for several benchmark control problems.
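The guarantees the abstract describes rest on probabilistic model checking of Markov decision processes over a finite time horizon. The sketch below is illustrative only (the toy MDP, its states, actions, and probabilities are invented for this example, not taken from the paper): it shows the core backward-induction computation such model checkers perform, namely the maximum probability, over all action choices, of reaching an unsafe state within a fixed horizon. One minus this value is a lower bound on the probability of safe operation from each state.

```python
# Hypothetical toy MDP (illustrative numbers, not from the paper).
# transitions[s][a] = list of (next_state, probability) pairs.
transitions = {
    0: {"left": [(0, 0.9), (2, 0.1)], "right": [(1, 0.8), (2, 0.2)]},
    1: {"left": [(0, 1.0)], "right": [(1, 0.7), (2, 0.3)]},
}
UNSAFE = {2}   # state 2 models an unsafe configuration
HORIZON = 5    # finite time horizon (number of steps)

def max_unsafe_prob(transitions, unsafe, horizon):
    """Backward induction: worst-case (max over actions) probability
    of reaching an unsafe state within `horizon` steps."""
    states = set(transitions) | unsafe
    # p[s] = probability of reaching unsafe within the remaining steps;
    # unsafe states are absorbing with probability 1.
    p = {s: (1.0 if s in unsafe else 0.0) for s in states}
    for _ in range(horizon):
        p = {
            s: 1.0 if s in unsafe else max(
                sum(prob * p[t] for t, prob in succ)
                for succ in transitions[s].values()
            )
            for s in states
        }
    return p

probs = max_unsafe_prob(transitions, UNSAFE, HORIZON)
# 1 - probs[s] lower-bounds the probability of safe operation from s.
print({s: round(1 - q, 5) for s, q in probs.items()})
```

In a tool like PRISM this corresponds to a bounded reachability query of the form `Pmax=? [ F<=k "unsafe" ]`; MOSAIC combines such queries with an iteratively refined abstraction of the controller's execution, which the toy example above does not attempt to model.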

Details

Original language: English
Title of host publication: Proceedings of the 18th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS 2020)
Editors: Nathalie Bertrand, Nils Jansen
Publication status: Accepted/In press - 29 Jun 2020
Event: 18th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS 2020) - Virtual Event
Duration: 1 Sep 2020 – 3 Sep 2020

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 18th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS 2020)
City: Virtual Event
Period: 1/09/20 – 3/09/20