Abstract
Machine learning is key to the automated detection of malicious network activity, ensuring that computer networks and organizations are protected against cyber security attacks. Recently, there has been growing interest in adversarial machine learning, which explores how a machine learning model can be compromised by an adversary, causing it to produce misclassified output. Whilst most attention to date has been given to visual domains, the challenge is present in any application of machine learning where a malicious attacker would want to cause unintended functionality, including cyber security and network traffic analysis. We first present a study of adversarial attacks against a well-trained network traffic classification model. We show how well-crafted adversarial examples can be constructed so that known attack types are misclassified by the model as benign activity. To combat this, we present a novel defensive strategy based on hierarchical learning that helps reduce the attack surface an adversarial example can exploit within the constraints of the parameter space of the intended attack. Our results show that our defensive learning model can withstand crafted adversarial attacks and achieves classification accuracy in line with the original model when not under attack.
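
The sketch below gives a minimal, hypothetical illustration of the hierarchical classification idea mentioned in the abstract: a two-stage model in which the first stage separates benign from malicious flows and the second stage assigns an attack family only to flows already flagged as malicious. It is not the authors' implementation; the tabular flow features, the `"benign"` label string, and the use of scikit-learn random forests are assumptions made purely for the example.

```python
# Illustrative sketch only -- not the method described in the paper.
# Assumes X is a NumPy array of tabular flow features and y is a NumPy array
# of string labels, where "benign" marks normal traffic and any other value
# names an attack family.

import numpy as np
from sklearn.ensemble import RandomForestClassifier


class HierarchicalTrafficClassifier:
    """Two-stage (hierarchical) traffic classifier sketch."""

    def __init__(self):
        # Stage 1: binary benign-vs-malicious decision.
        self.stage1 = RandomForestClassifier(n_estimators=100, random_state=0)
        # Stage 2: attack-family decision, used only for malicious flows.
        self.stage2 = RandomForestClassifier(n_estimators=100, random_state=0)

    def fit(self, X, y):
        # Train stage 1 on the coarse benign/malicious split.
        is_malicious = (y != "benign").astype(int)
        self.stage1.fit(X, is_malicious)
        # Train stage 2 only on flows that are actually attacks.
        mask = y != "benign"
        self.stage2.fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Default every flow to benign, then refine the malicious subset.
        preds = np.full(len(X), "benign", dtype=object)
        malicious = self.stage1.predict(X).astype(bool)
        if malicious.any():
            preds[malicious] = self.stage2.predict(X[malicious])
        return preds
```

Restricting the fine-grained decision to flows the first stage already judges malicious is one way to narrow the region of the decision surface a perturbed input can reach, in the spirit of the attack-surface reduction the abstract describes; the paper's actual hierarchy, feature constraints, and base learners may differ.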
| Original language | English |
|---|---|
| Article number | 103398 |
| Number of pages | 14 |
| Journal | Journal of Information Security and Applications |
| Volume | 72 |
| Early online date | 17 Dec 2022 |
| DOIs | |
| Publication status | Published - Feb 2023 |
Keywords
- Adversarial learning
- Hierarchical classification
- Network traffic analysis
- Functionality preservation
- Machine learning
- Model robustness