Abstract
Background: Fairness testing for deep learning systems has become increasingly important. However, much existing work assumes an ideal context provided by the other parts of the pipeline: well-tuned hyperparameters for accuracy, rectified bias in the data, and mitigated bias in the labeling. Yet these conditions are often difficult to achieve in practice due to their resource- and labour-intensive nature.
Aims: In this paper, we aim to understand how varying contexts affect fairness testing outcomes.
Method: We conduct an extensive empirical study, covering a range of cases, to investigate how contexts can change model-level fairness testing results compared with the existing assumptions. We also study why the observed outcomes arise through the lens of correlation and fitness landscape analysis.
Results: Our results show that different context types and settings generally have a significant impact on the testing outcomes, which is mainly caused by shifts in the fitness landscape under varying contexts.
Conclusions: Our findings provide key insights for practitioners when evaluating test generators and point to future research directions.
| Original language | English |
|---|---|
| Title of host publication | 2024 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) |
| Publisher | IEEE |
| Publication status | Published - 26 Oct 2024 |
| Event | 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, Barcelona, Spain. Duration: 20 Oct 2024 → 25 Oct 2024 |
Publication series

| Name | Proceedings of the ACM-IEEE International Symposium on Empirical Software Engineering and Measurement |
|---|---|
| ISSN (Print) | 1949-3770 |
| ISSN (Electronic) | 1949-3789 |
Exhibition

| Exhibition | 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement |
|---|---|
| Abbreviated title | ESEM 2024 |
| Country/Territory | Spain |
| City | Barcelona |
| Period | 20/10/24 → 25/10/24 |