Abstract
We present an overview of the shared task for spoken CALL. Groups competed on a prompt-response task using English-language data collected through an online CALL game from Swiss German teenagers in their second and third years of learning English. Each item consists of a written German prompt and an audio file containing a spoken response. The task is to accept linguistically correct responses and reject linguistically incorrect ones, with “linguistically correct” defined by a gold standard derived from human annotations; scoring was performed using a metric defined as the ratio of the relative rejection rates on incorrect and correct responses. The task received twenty entries from nine different groups. We present the task itself, the results, a tentative analysis of what makes items challenging, a comparison between different metrics, and suggestions for a continuation.
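The abstract describes the scoring metric only informally, as the ratio of the rejection rates on incorrect and correct responses. The Python sketch below shows one way such a quantity could be computed from system decisions and gold-standard labels; the function name, label strings, and the absence of any weighting for different error types are assumptions made for illustration, not the official task definition.

```python
# Minimal sketch of the scoring metric as described in the abstract:
# score = (rejection rate on incorrect responses) / (rejection rate on correct responses).
# Label names and the lack of error-type weighting are assumptions; the official
# shared-task metric may differ in detail.

def rejection_rate_ratio(decisions, gold_labels):
    """decisions: list of 'accept'/'reject' system outputs.
    gold_labels: list of 'correct'/'incorrect' gold-standard judgements."""
    rejected_incorrect = sum(1 for d, g in zip(decisions, gold_labels)
                             if d == "reject" and g == "incorrect")
    total_incorrect = sum(1 for g in gold_labels if g == "incorrect")

    rejected_correct = sum(1 for d, g in zip(decisions, gold_labels)
                           if d == "reject" and g == "correct")
    total_correct = sum(1 for g in gold_labels if g == "correct")

    rejection_rate_incorrect = rejected_incorrect / total_incorrect
    rejection_rate_correct = rejected_correct / total_correct
    # Higher is better: reject many incorrect responses and few correct ones.
    return rejection_rate_incorrect / rejection_rate_correct


# Toy example: one of two incorrect responses rejected (rate 0.5),
# one of three correct responses rejected (rate 1/3), ratio = 1.5.
decisions = ["reject", "accept", "reject", "accept", "accept"]
gold = ["incorrect", "incorrect", "correct", "correct", "correct"]
print(rejection_rate_ratio(decisions, gold))  # 1.5
```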
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 7th ISCA Workshop on Speech and Language Technology in Education |
| Editors | O Engwall, J Lopes, I Leite |
| Place of Publication | Stockholm, Sweden |
| Number of pages | 8 |
| Publication status | Published - 25 Aug 2017 |
Keywords
- CALL
- Shared Task
- Automatic speech recognition
- Metrics