The sociolinguistic foundations of language modeling

Jack Grieve*, Sara Bartl, Matteo Fuoli, Jason Grafmiller, Weihang Huang, Alejandro Napolitano Jawerbaum, Akira Murakami, Marcus Perlman, Dana Roemling, Bodo Winter

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review
Abstract

In this article, we introduce a sociolinguistic perspective on language modeling. We claim that language models in general are inherently modeling varieties of language, and we consider how this insight can inform the development and deployment of language models. We begin by presenting a technical definition of the concept of a variety of language as developed in sociolinguistics. We then discuss how this perspective could help us better understand five basic challenges in language modeling: social bias, domain adaptation, alignment, language change, and scale. We argue that to maximize the performance and societal value of language models, it is important to carefully compile training corpora that accurately represent the specific varieties of language being modeled, drawing on theories, methods, and descriptions from the field of sociolinguistics.
Original language: English
Article number: 1472411
Number of pages: 18
Journal: Frontiers in Artificial Intelligence
Volume: 7
DOIs
Publication status: Published - 13 Jan 2025
