Abstract
Just-in-Time Software Defect Prediction (JIT-SDP) can be seen as an online learning problem where additional software changes produced over time may be labeled and used to create training examples. These training examples form a data stream that can be used to update JIT-SDP models in an attempt to avoid models becoming obsolete and performing poorly. However, the labeling procedures adopted in existing online JIT-SDP studies implicitly assume that practitioners would not inspect software changes upon a defect-inducing prediction, delaying the production of training examples. This is inconsistent with a real-world scenario where practitioners would adopt JIT-SDP models and inspect certain software changes predicted as defect-inducing to check whether they really induce defects. Such inspection means that some software changes would be labeled much earlier than assumed in existing work, potentially leading to different JIT-SDP models and performance results. This paper aims to formulate a more practical human labeling procedure that takes into account the adoption of JIT-SDP models during the software development process. It then analyses whether and to what extent this procedure would impact the predictive performance of JIT-SDP models. We also propose a new method to target the labeling of software changes with the aim of saving human inspection effort. Experiments based on 14 GitHub projects revealed that adopting a more realistic labeling procedure led to significantly higher predictive performance than delaying the labeling process, meaning that existing work may have been underestimating the performance of JIT-SDP. In addition, our proposed method to target the labeling process was able to reduce human effort while maintaining predictive performance by recommending that practitioners inspect software changes that are more likely to induce defects. We encourage the adoption of more realistic human labeling methods in research studies to obtain an evaluation of JIT-SDP predictive performance that is closer to reality.
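As a rough illustration of the difference between the two labeling procedures, consider the Python sketch below. It is a simplification under stated assumptions, not the authors' implementation: `model` is a hypothetical online classifier exposing `predict`/`train`, each change carries an arrival day and a ground-truth `induces_defect` flag (standing in for whatever human inspection or a later defect report would reveal), and `W` is an assumed fixed waiting time after which changes not linked to any defect are labeled clean.

```python
from collections import deque

# Minimal sketch (not the authors' implementation) of the labeling
# procedures contrasted in the paper. Assumes changes arrive in
# chronological order of their "days" field.

W = 90  # assumed waiting time (days) before an uninspected change
        # not linked to any defect is labeled "clean"

def realistic_labeling(changes, model):
    """Label changes predicted as defect-inducing immediately via
    human inspection; all other changes wait W days before labeling."""
    pending = deque()  # changes still waiting for a delayed label
    for change in changes:
        if model.predict(change["features"]) == "defect-inducing":
            # Realistic procedure: the practitioner inspects the change
            # now, so its true label becomes a training example early.
            model.train(change["features"], change["induces_defect"])
        else:
            # Conventional procedure assumed in prior studies: every
            # change, including predicted defect-inducing ones, waits here.
            pending.append(change)
        # Emit delayed training examples whose waiting time has elapsed.
        while pending and change["days"] - pending[0]["days"] >= W:
            old = pending.popleft()
            model.train(old["features"], old["induces_defect"])
```

The only difference from the delayed procedure assumed in prior studies is the first branch: predicted defect-inducing changes produce a training example at prediction time rather than after the waiting time.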
Original language | English |
---|---|
Title of host publication | ESEC/FSE 2023 |
Subtitle of host publication | Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering |
Editors | Satish Chandra, Kelly Blincoe, Paolo Tonella |
Publisher | Association for Computing Machinery (ACM) |
Pages | 605–617 |
Number of pages | 13 |
ISBN (Electronic) | 9798400703270 |
DOIs | |
Publication status | Published - 30 Nov 2023 |
Event | 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering - San Francisco, United States. Duration: 3 Dec 2023 → 9 Dec 2023 |
Publication series
Name | FSE: Foundations of Software Engineering |
---|---|
Conference
Conference | 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering |
---|---|
Abbreviated title | ESEC/FSE '23 |
Country/Territory | United States |
City | San Francisco |
Period | 3/12/23 → 9/12/23 |
Bibliographical note
Acknowledgments: This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62002148 and 62250710682, the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant No. 2017ZT07X386, the Guangdong Provincial Key Laboratory under Grant No. 2020B121201001, and the Research Institute of Trustworthy Autonomous Systems (RITAS).
Keywords
- Just-in-time software defect prediction
- online learning
- verification latency
- waiting time
- human labeling
- human inspection