The attempt to understand the cerebellum has for years been dominated by supervised learning models. The central idea is that a learning algorithm modifies transmission strength at repeatedly co-active synapses, creating memories stored as finely calibrated synaptic weights. As a result, Purkinje cells, usually the de facto output cells of these models, acquire a modified response to input in a remembered pattern. This paper proposes an alternative model of pattern memory in which the function of a match is permissive, allowing but not driving output, and accordingly controlling the timing of output but not the firing rate of Purkinje cells. Learning does not result in graded synaptic weights. There is no supervised learning algorithm and no memory of individual patterns, which, like graded weights, are unnecessary to explain the evidence. Instead, patterns are classed simply as either known or not, at the level of input to a functional population of hundreds of Purkinje cells (a microzone). The standard is strict: if even a handful of Purkinje cells receive a mismatch, output of the whole circuit is blocked. Only if there is a full and accurate match are projection neurons in the deep nuclei, which carry the output of most circuits, released from default inhibitory restraint. Purkinje cell firing at those times is a linear function of input rates. Modification of synaptic transmission has no effect except to allow or block output.
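The all-or-nothing gating described in the abstract can be illustrated with a minimal sketch. This is an interpretive toy model, not code from the paper: the function name, the boolean per-cell match signals, and the `gain` parameter are all assumptions introduced for illustration.

```python
def microzone_output(matches, input_rates, gain=1.0):
    """Toy sketch of the proposed permissive gating (illustrative only).

    matches: per-Purkinje-cell booleans, True where the stored pattern
             matches the current input.
    input_rates: per-cell input firing rates (arbitrary units).
    """
    # Strict standard: a mismatch at even a handful of cells blocks
    # output of the whole circuit (deep-nuclei projection neurons
    # remain under default inhibitory restraint).
    if not all(matches):
        return None
    # Full, accurate match: output is released, and Purkinje cell
    # firing is a linear function of input rates.
    return [gain * rate for rate in input_rates]
```

In this reading, the match signal controls only *whether* and *when* output occurs; the firing rate itself is carried linearly by the input, consistent with the claim that synaptic modification does nothing except allow or block output.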
Early online date: 19 Jan 2021
Publication status: E-pub ahead of print - 19 Jan 2021
Keywords:
- Parallel fibres
- Purkinje cells