How language users become able to produce forms they have never encountered in the input is central to our understanding of language cognition. A range of models, including rule-based, analogy-based, and stochastic models, has been proposed to account for this ability. Although all three types of model are reasonably successful, we argue that productivity is more accurately captured through learnability than through rules or probabilities. Using a combination of computational modelling and behavioural experimentation, we show that the basic principle of error-driven learning allows language users to extract the relevant patterns. These patterns are found at a level that cuts across phonology and morphology and is not considered by mainstream approaches to language. Our findings thus highlight how a learning-based approach constrains our inferences about the types of structures that should be targeted by a cognitively realistic account of language representation.
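The basic principle of error-driven learning referred to above can be sketched as a Rescorla-Wagner (delta-rule) update, in which cues that are present on a learning event share the prediction error for an outcome. The cue and outcome labels below are hypothetical illustrations, not the study's actual stimuli or model.

```python
# Minimal sketch of error-driven (Rescorla-Wagner / delta-rule) learning.
# Cue names like "stem-a" and the learning parameters are illustrative
# assumptions, not taken from the study itself.

def update_weights(weights, cues, outcome_present, eta=0.1, lam=1.0):
    """One delta-rule update: active cues share the prediction error."""
    prediction = sum(weights.get(c, 0.0) for c in cues)
    target = lam if outcome_present else 0.0
    error = target - prediction
    for c in cues:
        weights[c] = weights.get(c, 0.0) + eta * error
    return weights

# Toy training: cue "stem-a" reliably predicts the outcome,
# cue "stem-b" never does.
weights = {}
for _ in range(100):
    update_weights(weights, ["stem-a"], outcome_present=True)
    update_weights(weights, ["stem-b"], outcome_present=False)

# After training, "stem-a" carries a strong positive weight while
# "stem-b" stays near zero, so the learner discriminates the two cues.
```

Because the update is driven by prediction error rather than by stored rules or frequency counts, the weights converge on whatever cue-outcome regularities the input supports, which is the sense in which productive patterns are learnable rather than rule-governed.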