Abstract
Predicting the performance of highly configurable software systems is the foundation for performance testing and quality assurance. To that end, recent work has been relying on machine/deep learning to model software performance. However, a crucial yet unaddressed challenge is how to cater for the sparsity inherent in the configuration landscape: the influence of configuration options (features) and the distribution of data samples are highly sparse.
In this paper, we propose an approach based on the concept of “divide-and-learn”, dubbed DaL. The basic idea is that, to handle sample sparsity, we divide the samples from the configuration landscape into distant divisions, for each of which we build a regularized Deep Neural Network as the local model to deal with the feature sparsity. A newly given configuration is then assigned to the model of the right division for the final prediction.
Experimental results from eight real-world systems and five sets of training data reveal that, compared with the state-of-the-art approaches, DaL performs no worse than the best counterpart on 33 out of 40 cases (within which 26 cases are significantly better) with up to 1.94× improvement in accuracy; requires fewer samples to reach the same or better accuracy; and incurs acceptable training overhead. Practically, DaL also considerably improves different global models when using them as the underlying local models, which further strengthens its flexibility. To promote open science, all the data, code, and supplementary figures of this work can be accessed at our repository: https://github.com/ideas-labo/DaL.
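The divide-and-learn workflow summarized in the abstract (divide samples into divisions, fit one regularized local model per division, route a new configuration to its division) can be illustrated with a minimal sketch. This is not the paper's implementation: the synthetic data, the single-option split used as the divider, and ridge regression standing in for the regularized Deep Neural Network are all simplifying assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic configuration data: 5 binary options; performance depends
# sparsely on a few options, mimicking feature sparsity.
X = rng.integers(0, 2, size=(120, 5)).astype(float)
y = 10.0 * X[:, 0] + 5.0 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.1, 120)

# Step 1 (divide): partition samples into divisions. A single split on one
# option is a toy stand-in for DaL's division of the landscape.
split_opt = 0
divisions = {0: X[:, split_opt] == 0, 1: X[:, split_opt] == 1}

# Step 2 (learn): fit one regularized local model per division.
# Ridge regression (L2 penalty lam) stands in for the regularized DNN.
def fit_ridge(Xd, yd, lam=1e-2):
    Xb = np.hstack([Xd, np.ones((len(Xd), 1))])  # append bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ yd)

models = {k: fit_ridge(X[m], y[m]) for k, m in divisions.items()}

# Step 3 (route): a new configuration is assigned to its division's model.
def predict(x):
    k = int(x[split_opt] == 1)
    return float(np.append(x, 1.0) @ models[k])

x_new = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
print(round(predict(x_new), 1))
```

The point of the sketch is the structure, not the models: each local model only ever sees the samples of its own division, so it can specialize on a less sparse sub-landscape than a single global model would face.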
Original language | English |
---|---|
Title of host publication | ESEC/FSE 2023 |
Subtitle of host publication | Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering |
Editors | Satish Chandra, Kelly Blincoe, Paolo Tonella |
Publisher | Association for Computing Machinery (ACM) |
Pages | 858–870 |
Number of pages | 13 |
ISBN (Electronic) | 9798400703270 |
DOIs | |
Publication status | Published - 30 Nov 2023 |
Event | 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, San Francisco, United States (3 Dec 2023 → 9 Dec 2023) |
Publication series
Name | FSE: Foundations of Software Engineering |
---|---|
Conference
Conference | 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering |
---|---|
Abbreviated title | ESEC/FSE '23 |
Country/Territory | United States |
City | San Francisco |
Period | 3/12/23 → 9/12/23 |
Keywords
- Configurable System
- Machine Learning
- Deep Learning
- Performance Prediction
- Performance Learning
- Configuration Learning