Abstract
We extend previous work on applying computational linguistics to the understanding of distributed sensemaking. In an experiment, teams of three responded to incidents in the C3Fire simulation under different levels of shared information, and their radio communications were automatically transcribed (with an overall accuracy of around 80% for these recordings). The transcriptions were then analyzed automatically to classify speech acts. We compared heuristics and regular expressions, supervised machine learning (using a linear SVM), and unsupervised learning (using BERT). For this small corpus of utterances, the SVM provided acceptable performance (around 79% accuracy) with minimal computational demand compared with the other approaches. In terms of team communication, differences between information conditions were identified, particularly in speech acts relating to statements about “fire” and “rescue,” and statements about “reasoning and planning.” The study demonstrates the potential of automated analysis of team communications and indicates when teams might struggle with sensemaking.
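The heuristics-and-regular-expressions baseline mentioned in the abstract can be sketched as a small rule table mapping utterance patterns to speech-act labels. The categories and patterns below are illustrative assumptions for a C3Fire-like domain (fire and rescue talk), not the coding scheme used in the study:

```python
import re

# Hypothetical regex rules; first match wins. Labels and patterns are
# illustrative assumptions, not the study's actual speech-act scheme.
RULES = [
    ("question", re.compile(r"\?$|^(where|what|who|how|can you)\b", re.I)),
    ("command", re.compile(r"^(go|move|send|refill|drop)\b", re.I)),
    ("statement_fire", re.compile(r"\bfire\b", re.I)),
    ("statement_rescue", re.compile(r"\b(rescue|evacuate)\b", re.I)),
]

def classify(utterance: str) -> str:
    """Return the first matching speech-act label, or 'other'."""
    text = utterance.strip()
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "other"
```

Such rules are cheap and transparent but brittle on transcription errors, which is one reason the study also evaluates supervised (linear SVM) and unsupervised (BERT) classifiers.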
| Original language | English |
|---|---|
| Pages (from-to) | 130-136 |
| Number of pages | 7 |
| Journal | Proceedings of the Human Factors and Ergonomics Society |
| Volume | 68 |
| Issue number | 1 |
| Early online date | 29 Aug 2024 |
| DOIs | |
| Publication status | Published - 30 Sept 2024 |
| Event | 68th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2024, Phoenix, United States, 9 Sept 2024 → 13 Sept 2024 |
Bibliographical note
Publisher Copyright: © 2024 Human Factors and Ergonomics Society.
Keywords
- C3Fire
- machine learning
- sensemaking
- speech acts
- teams
ASJC Scopus subject areas
- Human Factors and Ergonomics