On the Validity of Retrospective Predictive Performance Evaluation Procedures in Just-In-Time Software Defect Prediction

Liyan Song, Leandro Minku*, Xin Yao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Just-In-Time Software Defect Prediction (JIT-SDP) is concerned with predicting whether software changes are defect-inducing or clean. It operates in scenarios where labels of software changes arrive over time with delay, part of which corresponds to the time we wait before labelling software changes as clean (waiting time). However, clean labels decided based on the waiting time may differ from the true labels of software changes, i.e., there may be label noise. This typically overlooked issue has recently been shown to affect the validity of continuous performance evaluation procedures used to monitor the predictive performance of JIT-SDP models during the software development process. It is still unknown whether this issue could also affect evaluation procedures that rely on retrospective collection of software changes, such as those adopted in JIT-SDP research studies, which would affect the validity of the conclusions of a large body of existing work. We conduct the first investigation of the extent to which the choice of waiting time and its corresponding label noise affect the validity of retrospective performance evaluation procedures. Based on 13 GitHub projects, we found that the choice of waiting time did not have a significant impact on validity, and that even small waiting times resulted in high validity. Therefore, (1) the predictive performances estimated in JIT-SDP studies are likely reliable in view of different waiting times, and (2) future studies can make use of not only larger (5k+ software changes) but also smaller (1k software changes) projects for evaluating the performance of JIT-SDP models.
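The waiting-time labelling mechanism the abstract refers to can be made concrete with a small sketch. The code below is not from the paper: the Change fields, the day counts, and the 90-day waiting time are hypothetical, chosen only to show how a waiting-time-based "clean" label can disagree with a change's true label (label noise).

```python
# A minimal sketch (assumed, not from the paper) of waiting-time-based
# labelling in JIT-SDP and the label noise it can introduce.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Change:
    commit_day: int                  # day the software change was committed
    defect_found_day: Optional[int]  # day a defect was linked to it; None if truly clean


def observed_label(change: Change, eval_day: int, waiting_time: int) -> Optional[str]:
    """Label a change as seen on eval_day under a given waiting time (in days).

    - "defect-inducing" once a defect has been linked to the change;
    - "clean" once waiting_time days have passed with no linked defect;
    - None while the label is still unknown (verification latency).
    """
    if change.defect_found_day is not None and change.defect_found_day <= eval_day:
        return "defect-inducing"
    if eval_day - change.commit_day >= waiting_time:
        return "clean"  # possibly noisy: a defect may still surface later
    return None


# Hypothetical data: the second change's defect only surfaces on day 120.
changes = [
    Change(commit_day=0, defect_found_day=None),  # truly clean
    Change(commit_day=0, defect_found_day=120),   # truly defect-inducing
]

# Evaluated on day 90 with a 90-day waiting time, both changes are labelled
# "clean", so the second one is mislabelled (label noise).
for c in changes:
    print(observed_label(c, eval_day=90, waiting_time=90))
```

A longer waiting time would reduce this noise (the defect on day 120 would eventually be caught) but would also delay how soon labels become available, which is the trade-off whose effect on retrospective evaluation the paper investigates.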
Original language: English
Article number: 124
Number of pages: 33
Journal: Empirical Software Engineering
Volume: 28
Issue number: 5
Publication status: Published - 18 Sept 2023

Keywords

  • Just-in-time software defect prediction
  • Performance evaluation procedures
  • Verification latency
  • Online learning
  • Concept drift
  • Label noise

