Abstract
Despite a growing awareness of methodological issues, the literature on APPRAISAL has not so far provided adequate answers to some of the key challenges involved in reliably identifying and classifying evaluative language expressions. This article presents a step-wise method for the manual annotation of APPRAISAL in text that is designed to optimize reliability, replicability and transparency. The procedure consists of seven steps, from the creation of a context-specific annotation manual to the statistical analysis of the quantitative data derived from the manually performed annotations. By presenting this method, the article pursues the twofold purpose of (i) providing a practical tool that can facilitate more reliable, replicable and transparent analyses, and (ii) fostering a discussion of the best practices that should be observed when manually annotating APPRAISAL.
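The procedure's final step involves statistical analysis of the annotation data, and the keywords point to inter-coder and intra-coder agreement. The abstract does not specify which agreement statistic the method uses, but a common chance-corrected measure for two coders is Cohen's kappa. The sketch below is a minimal illustration only, assuming hypothetical ATTITUDE labels (affect, judgement, appreciation, following Martin & White's APPRAISAL framework) and scikit-learn's `cohen_kappa_score`; it is not the paper's own procedure.

```python
# Hypothetical sketch: quantifying inter-coder agreement on APPRAISAL
# annotations with Cohen's kappa. The paper does not prescribe this exact
# measure or tooling; the labels and data below are illustrative only.
from sklearn.metrics import cohen_kappa_score

# Two coders' ATTITUDE labels for the same ten evaluative spans.
coder_a = ["affect", "judgement", "appreciation", "affect", "judgement",
           "appreciation", "affect", "judgement", "appreciation", "affect"]
coder_b = ["affect", "judgement", "appreciation", "judgement", "judgement",
           "appreciation", "affect", "affect", "appreciation", "affect"]

# Kappa corrects raw percentage agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

For more than two coders, or for datasets with missing annotations, Krippendorff's alpha is a common alternative; intra-coder agreement can be computed the same way by comparing one coder's annotations across two passes over the same text.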
| Original language | English |
| --- | --- |
| Pages (from-to) | 229-258 |
| Number of pages | 30 |
| Journal | Functions of Language |
| Volume | 25 |
| Issue number | 2 |
| Publication status | Published - Jan 2018 |
Keywords
- reliability
- replicability
- transparency
- inter-coder agreement
- intra-coder agreement
- challenges in analyzing APPRAISAL