We propose a neural network model to explore how humans learn and accurately retrieve temporal sequences, such as melodies, movies, or other dynamic content. We identify target memories by their neural oscillatory signatures, as shown in recent human episodic memory paradigms. Our model comprises three plausible components for the binding of temporal content, each of which imposes unique limitations on the encoding and representation of that content. A cortical component actively represents sequences through the disruption of an intrinsically generated alpha rhythm, where desynchronisation marks information-rich operations, as the literature predicts. A binding component converts each event into a discrete index, enabling repetitions through a sparse encoding of events. A timing component – consisting of an oscillatory “ticking clock” made up of hierarchical synfire chains – discretely indexes a moment in time. By encoding the absolute timing between discretised events, we show how cortical desynchronisations can be used to dynamically detect unique temporal signatures as they are reactivated in the brain. We validate this model by simulating a series of events in which sequences are uniquely identifiable through the analysis of phasic information, as several recent EEG/MEG studies have shown. As such, we show how one can encode and retrieve complete episodic memories whose quality is modulated by alpha gatekeepers to content representation, binding limitations that induce a blink in temporal perception, and nested oscillations that provide preferential learning phases for temporally sequencing events.
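To make the timing component concrete, the following is a minimal toy sketch (not the authors' implementation) of an oscillatory "ticking clock" built from hierarchical chains: each full cycle of a fast chain advances the next, slower chain by one step, so the joint state of all chains discretely indexes a moment in time, and events can be bound to these absolute time indices. The class name, chain lengths, and event bindings are illustrative assumptions.

```python
# Illustrative sketch only: a hierarchical "ticking clock" in the spirit of
# hierarchical synfire chains. Each chain is abstracted to its currently
# active position; slower chains tick once per full cycle of the chain below.

class SynfireClock:
    def __init__(self, chain_lengths):
        # chain_lengths[0] is the fastest chain; later entries are slower,
        # nested chains (hypothetical sizes, chosen for illustration).
        self.chain_lengths = chain_lengths
        self.positions = [0] * len(chain_lengths)

    def tick(self):
        # Advance the fastest chain; carry wrap-arounds upward to slower
        # chains, like the hands of a clock.
        for level, length in enumerate(self.chain_lengths):
            self.positions[level] = (self.positions[level] + 1) % length
            if self.positions[level] != 0:
                break  # no wrap-around, so slower chains are unchanged

    def time_index(self):
        # The joint state across chains is a unique discrete index until the
        # slowest chain wraps (capacity = product of the chain lengths).
        return tuple(self.positions)


clock = SynfireClock([10, 10, 10])  # capacity: 1000 distinct moments
events = {}
for step in range(25):
    if step in (3, 17):  # bind two hypothetical events to absolute times
        events[clock.time_index()] = f"event@{step}"
    clock.tick()
```

Because each event is stored against an absolute time index rather than its position relative to other events, repetitions of the same content at different moments remain distinguishable, which is the role the binding and timing components play together in the model.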
Early online date: 24 Apr 2021
Publication status: Published - 30 Jul 2021
- Brain oscillations
- Attentional blink
- Episodic memory model
- Temporal sequence model