Design-time evaluation is essential for building the initial software architecture to be deployed. However, assumptions made by experts at design time are unlikely to remain true indefinitely in systems characterized by scale, hyperconnectivity, dynamism and operational uncertainty (e.g. IoT). Experts' design-time decisions can therefore be challenged at run-time. A continuous architecture evaluation that systematically assesses and intertwines design-time and run-time decisions is thus necessary. This paper proposes the first proactive approach to continuous architecture evaluation that leverages the support of simulation. The approach evaluates software architectures not only by tracking their performance over time, but also by forecasting their likely future performance through machine learning on simulated instances of the architecture. This enables architects to make cost-effective, informed decisions on potential changes to the architecture. We perform an IoT case study to show how machine learning on simulated architecture instances can fundamentally guide the continuous evaluation process and influence the outcome of architecture decisions. A series of experiments demonstrates the applicability and effectiveness of the approach. We also provide the architect with recommendations, grounded in experimentation and evidence, on how to best benefit from the approach through the choice of learners and input parameters.
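The core forecasting idea described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes simulated response-time samples of one architecture instance and uses a simple least-squares trend as the "learner" to predict whether a performance threshold will be breached. All names, numbers and the 150 ms threshold are illustrative.

```python
def fit_linear_trend(samples):
    """Ordinary least-squares fit of y = a + b*t over sample indices."""
    n = len(samples)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(samples) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, samples))
    var = sum((t - mean_t) ** 2 for t in ts)
    b = cov / var
    a = mean_y - b * mean_t
    return a, b

def forecast(samples, horizon):
    """Extrapolate the fitted trend `horizon` steps past the last sample."""
    a, b = fit_linear_trend(samples)
    return a + b * (len(samples) - 1 + horizon)

# Simulated response times (ms) of an architecture instance, drifting upward.
simulated = [110, 112, 115, 113, 118, 121, 124, 123, 128, 131]
predicted = forecast(simulated, horizon=5)

# Proactive decision signal: flag the architecture for change *before*
# the (illustrative) 150 ms threshold is actually violated at run-time.
threshold_ms = 150
needs_change = predicted > threshold_ms
```

In the paper's setting, the samples would come from simulated instances of candidate architectures and the learner would be chosen per the authors' recommendations; a richer time-series model could replace the linear trend without changing the decision logic.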
| Number of pages | 54 |
| Journal | ACM Transactions on Software Engineering and Methodology |
| Early online date | 15 Mar 2022 |
| Publication status | Published - Jul 2022 |
- Continuous evaluation
- Software architecture evaluation
- Time series forecasting