TY - UNPB
T1 - Auto-regressive Rank Order Similarity (aros) test
AU - Clausner, Tommy
AU - Gentili, Stefano
PY - 2022/6/17
Y1 - 2022/6/17
N2 - In the present paper we propose a non-parametric statistical test procedure for interval-scaled, paired-samples data that circumvents the multiple comparison problem (MCP) by relating the data to the rank order of its group averages. Using an auto-regressive procedure, a single test statistic for multiple groups is obtained that allows for qualitative statements about whether multiple group averages are in fact different and how they can be sorted. The presented procedure outperforms classical tests, such as pairwise conducted t-tests and ANOVA, in some circumstances. Furthermore, the test is robust against noise and does not require the data to follow any particular distribution. If A is a data matrix containing N observations for k groups, then the test statistic η can be computed by η = (1/N) Σ_{i=1}^{N} f(A_i, s), where s is a vector of length k containing the average for each group, transformed into unique rank values. This statistic is compared to the distribution D, obtained by Monte Carlo sampling from the permutation distribution. It will be demonstrated that D can be described by a normal distribution for a variety of input data distributions and choices of f, as long as a set of criteria is met. Comparing η to the permutation distribution controls the false alarm (FA) rate sufficiently, since the exact p-value can be estimated [1]. Multiple examples of possible choices for f will be discussed, as well as detailed descriptions of the underlying test assumptions, possible interpretations and use cases. All mathematical derivations are supported by a set of simulations, written in Python, which can be downloaded from https://gitlab.com/TommyClausner/aros-test together with an implementation of the test itself.
U2 - 10.1101/2022.06.15.496113
DO - 10.1101/2022.06.15.496113
M3 - Preprint
BT - Auto-regressive Rank Order Similarity (aros) test
PB - bioRxiv
ER -