Quantitative verification tools compute probabilities, expected rewards, or steady-state values for formal models of stochastic and timed systems. Exact results often cannot be obtained efficiently, so most tools use floating-point arithmetic in iterative algorithms that approximate the quantity of interest. Correctness is thus defined relative to the desired precision, and that precision in turn determines performance. In this paper, we report on the experimental evaluation of these trade-offs performed in QComp 2020, the second friendly competition of tools for the analysis of quantitative formal models. We survey the precision guarantees offered by the nine participating tools, ranging from exact rational results to statistical confidence statements. These guarantees gave rise to a performance evaluation using five tracks with varying correctness criteria, of which we present the results.
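To illustrate the kind of iterative, precision-driven algorithm the abstract refers to, the following sketch runs plain value iteration for a reachability probability on a small, hypothetical discrete-time Markov chain (the model and the stopping threshold are illustrative assumptions, not taken from any QComp tool or benchmark). Note that stopping when successive iterates change by less than a threshold does not by itself bound the error of the result, which is precisely why participating tools differ in the soundness of the guarantees they offer.

```python
# Hypothetical 4-state Markov chain: states 0 and 1 are transient,
# state 2 is the goal, state 3 is an absorbing sink.
# P maps each transient state to its list of (successor, probability) pairs.
P = {
    0: [(0, 0.5), (1, 0.3), (3, 0.2)],
    1: [(2, 0.6), (0, 0.4)],
}
GOAL, SINK = 2, 3

def reach_prob(eps=1e-6):
    """Iterate x_s = sum_t P(s, t) * x_t until the largest per-state
    change drops below eps. Floating-point value iteration: the loop
    count, and hence the runtime, grows as eps shrinks."""
    x = {0: 0.0, 1: 0.0, GOAL: 1.0, SINK: 0.0}
    while True:
        diff = 0.0
        for s, succ in P.items():
            new = sum(p * x[t] for t, p in succ)
            diff = max(diff, abs(new - x[s]))
            x[s] = new
        if diff < eps:
            return x

probs = reach_prob()
# probs[0] and probs[1] approximate the exact rational values
# 9/19 and 15/19 up to the chosen precision.
```

Tightening `eps` raises the number of iterations, which is the correctness-versus-performance trade-off the competition's tracks vary.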
|Name||Lecture Notes in Computer Science|
|Conference||9th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation (ISoLA'20)|
|Period||20/10/20 → 30/10/20|