Autonomous Recovery from Hostile Code Insertion using Distributed Reflection

Catriona Kennedy, Aaron Sloman

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

In a hostile environment, an autonomous cognitive system requires a reflective capability to detect problems in its own operation and recover from them without external intervention. We present an architecture in which reflection is distributed so that components mutually observe and protect each other, and where the system has a distributed model of all its components, including those concerned with the reflection itself. Some reflective (or ‘meta-level’) components enable the system to monitor its execution traces and detect anomalies by comparing them with a model of normal activity. Other components monitor ‘quality’ of performance in the application domain. Implementation in a simple virtual world shows that the system can recover from certain kinds of hostile code attacks that cause it to make wrong decisions in its application domain, even if some of its self-monitoring components are also disabled.
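The mutual-observation scheme described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation: component names, the trace representation, and the repair step are all invented for illustration. Each component holds a model of normal activity, watches a peer's execution trace for anomalies, and restores the peer from the shared model when a deviation is found.

```python
# Hypothetical sketch of distributed reflection: components mutually
# observe each other's execution traces and repair anomalies.
# All names and structures are illustrative, not from the paper.

NORMAL_TRACE = ("sense", "decide", "act")  # shared model of normal activity

class Component:
    def __init__(self, name):
        self.name = name
        self.trace = list(NORMAL_TRACE)   # current execution trace
        self.peer = None                  # component this one observes

    def run_cycle(self):
        # A healthy cycle emits the normal trace; hostile code
        # insertion would perturb it.
        return tuple(self.trace)

    def observe_peer(self):
        # Meta-level monitoring: compare the peer's trace against the
        # model of normal activity; any deviation is an anomaly.
        return self.peer.run_cycle() != NORMAL_TRACE

    def repair_peer(self):
        # Recovery: restore the peer's behaviour from the shared model.
        self.peer.trace = list(NORMAL_TRACE)

def monitoring_round(components):
    """One round of mutual observation and repair; returns names repaired."""
    repaired = []
    for c in components:
        if c.observe_peer():
            c.repair_peer()
            repaired.append(c.peer.name)
    return repaired

a, b = Component("A"), Component("B")
a.peer, b.peer = b, a   # mutual observation: each watches the other

# Simulate hostile code insertion into B's decision step
b.trace = ["sense", "hostile_decide", "act"]

print(monitoring_round([a, b]))        # A detects and repairs B
print(b.run_cycle() == NORMAL_TRACE)   # B's behaviour is restored
```

Because observation is mutual rather than hierarchical, there is no single unprotected "top-level" monitor: in this sketch, corrupting either component's trace would still be caught by the other, matching the abstract's claim that recovery works even when some self-monitoring components are disabled.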
Original language: English
Pages (from-to): 89-117
Number of pages: 29
Journal: Cognitive Systems Research
Volume: 4
Issue number: 2
Early online date: 11 Feb 2003
DOIs
Publication status: Published - 1 Jun 2003
