All evidence is equal: the flaw in statistical reasoning

Stephen Gorard

    Research output: Contribution to journal › Article › peer-review

    25 Citations (Scopus)
    438 Downloads (Pure)

    Abstract

    In the context of existing ‘quantitative’/‘qualitative’ schisms, this paper briefly reminds readers of the current practice of testing for statistical significance in social science research. This practice is based on a widespread confusion between two conditional probabilities. A worked example and other elements of logical argument demonstrate the flaw in statistical testing as currently conducted, even when strict protocols are met. Assessment of significance cannot be standardised and requires knowledge of an underlying figure that the analyst does not generally have and cannot usually know. Therefore, even if all assumptions are met, the practice of statistical testing in isolation is futile. The question many people then ask in consequence is: what should we do instead? This is, perhaps, the wrong question. Rather, the question could be: why should we expect to treat randomly sampled figures differently from any other kinds of numbers, or any other forms of evidence? What we could do ‘instead’ is use figures in the same way as we would most other data, with care and judgement. If all such evidence is equal, the implications for research synthesis and the way we generate new knowledge are considerable.
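
    The conditional-probability confusion the abstract refers to is the mix-up between the probability of the data given the null hypothesis (what a significance test reports) and the probability of the null hypothesis given the data (what the analyst usually wants to know). The following is a minimal sketch in Python, using illustrative values for the prior probability of the null and for statistical power (neither taken from the paper), showing how the two quantities diverge and why the ‘underlying figure’ the analyst would need is the prior itself:

```python
# Two conditional probabilities that are commonly conflated:
#   P(significant result | H0 true)  -- the significance level, alpha
#   P(H0 true | significant result)  -- what the analyst actually wants
# The second cannot be obtained from the first without a prior, P(H0 true),
# which is the figure the analyst does not generally have.
# All numbers below are illustrative assumptions, not values from the paper.

alpha = 0.05   # P(significant | H0 true): conventional threshold
power = 0.80   # P(significant | H0 false): assumed statistical power

for prior_h0 in (0.5, 0.9):  # assumed prior probability that H0 is true
    # Total probability of observing a "significant" result
    p_significant = alpha * prior_h0 + power * (1 - prior_h0)
    # Bayes' theorem: probability H0 is still true despite significance
    p_h0_given_significant = alpha * prior_h0 / p_significant
    print(f"prior P(H0) = {prior_h0:.1f} -> "
          f"P(H0 | significant) = {p_h0_given_significant:.3f}")

# With a prior of 0.5 the posterior is about 0.059; with a prior of 0.9 it
# is about 0.36. The same 5% threshold carries very different weight, so
# "significance" cannot be assessed without the missing prior.
```

    The specific priors and power used here are hypothetical; the point they illustrate is only that the posterior probability of the null varies with a quantity that significance testing, taken in isolation, never supplies.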
    Original language: English
    Pages (from-to): 63-77
    Number of pages: 15
    Journal: Oxford Review of Education
    Volume: 36
    Issue number: 1
    DOIs
    Publication status: Published - 1 Feb 2010
