Abstract
Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How do the prompt, the input, and the functional requirements of a task influence task-based linguistic performance? This question is vital for making large-scale task-based corpora fruitful for second language acquisition research. We explore the issue through an analysis of selected tasks in EFCAMDAT and the complexity and accuracy of the language they elicit.
| Original language | English |
|---|---|
| Pages (from-to) | 180-208 |
| Journal | Language Learning |
| Volume | 67 |
| Issue number | S1 |
| DOIs | — |
| Publication status | Published - Jun 2017 |