Optimising transparency, reliability and replicability: annotation principles and inter-coder agreement in the quantification of evaluative expressions

Matteo Fuoli, Charlotte Hommerberg

Research output: Contribution to journal › Article › peer-review

Abstract

Manual corpus annotation facilitates exhaustive and detailed corpus-based analyses of evaluation that would not be possible with purely automatic techniques. However, manual annotation is a complex and subjective process. Most studies adopting this approach have paid insufficient attention to the methodological challenges involved in manually annotating evaluation, in particular concerning transparency, reliability and replicability. This article illustrates a procedure for annotating evaluative expressions in text that facilitates more transparent, reliable and replicable analyses. The method is demonstrated through a case study analysis of APPRAISAL (Martin and White, 2005) in a small-size specialized corpus of CEO letters published by the British energy company BP and four competitors before and after the Deepwater Horizon oil spill of 2010. Drawing on Fuoli and Paradis’ (2014) model of trust-repair discourse, it examines how ATTITUDE and ENGAGEMENT resources are strategically deployed by BP’s CEO in the attempt to repair stakeholders’ trust after the accident.
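As an illustration of the kind of measure the article's title refers to (not code taken from the article itself), the minimal Python sketch below computes Cohen's kappa, a standard chance-corrected index of inter-coder agreement, for two hypothetical coders who have each assigned one ATTITUDE label to the same ten evaluative expressions. All names and data are invented for illustration, and kappa is only one of several common agreement coefficients; it may not be the specific statistic reported in the article.

# Illustrative sketch only: Cohen's kappa for two coders' category labels.
# The labels and data below are hypothetical, not drawn from the article.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(coder_a) == len(coder_b) and coder_a, "need paired annotations"
    n = len(coder_a)
    # Observed agreement: proportion of items given the same label by both coders.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical ATTITUDE annotations for ten evaluative expressions.
coder_a = ["affect", "judgement", "judgement", "appreciation", "affect",
           "judgement", "appreciation", "affect", "judgement", "appreciation"]
coder_b = ["affect", "judgement", "appreciation", "appreciation", "affect",
           "judgement", "appreciation", "judgement", "judgement", "appreciation"]

print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # about 0.70

Values close to 1 indicate near-perfect agreement beyond chance, while values near 0 indicate agreement no better than chance; how such figures should be interpreted depends on the annotation scheme and task.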
Original language: English
Pages (from-to): 315-349
Journal: Corpora
Volume: 10
Issue number: 3
DOIs
Publication status: Published - 2015

Keywords

  • evaluation
  • APPRAISAL theory
  • manual corpus annotation
  • inter-coder agreement
  • reliability
  • transparency
  • replicability
  • trust-repair
  • BP Deepwater Horizon oil spill
