A Deep Reinforcement Learning Approach to Modelling an Intrusion Detection System Using the Asynchronous Advantage Actor-Critic (A3C) Algorithm

Citation:

Yego, J.K., Kiget, N.K. & Samoei, D., 2022. A Deep Reinforcement Learning Approach to Modelling an Intrusion Detection System Using the Asynchronous Advantage Actor-Critic (A3C) Algorithm. Journal of Research Innovation and Implications in Education, 6(1), pp. 441-452.

Abstract:

The growth and widespread use of the internet have been accompanied by evolving attacks, with novel attacks of devastating effect now being witnessed. Intrusion Detection Systems (IDS) have yet to achieve maximum success owing to false positives and low detection rates. The purpose of this study was to model an intrusion detection system using the Asynchronous Advantage Actor-Critic (A3C) algorithm. The paper addresses the following objectives: (i) to evaluate the machine learning techniques currently used in IDS, (ii) to determine the effectiveness of the Asynchronous Advantage Actor-Critic algorithm in anomaly detection, and (iii) to select an appropriate training dataset and prepare it for use with A3C. A conceptual study was conducted around these objectives. The UNSW_TRAIN and UNSW_TEST samples were selected by purposive sampling from the full UNSW-NB15 dataset, and analysis of the dataset was done using Python. The key findings were that the anomaly detection approach is the most suitable owing to its ability to detect novel attacks, and that continued research on intrusion detection is needed to improve solutions to the problem of false positives and to fully optimize accuracy. Because the UNSW-NB15 dataset is comprehensive, all attack types should be used so that intrusions are accurately depicted; where only selected attack types are used, feature selection should be carried out carefully so as to reflect modern attack types.
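To make the setup concrete, the sketch below shows one way the UNSW-NB15 partitions could be loaded in Python and fed to the kind of shared actor-critic network an A3C worker uses, treating each flow record as a state and a binary "normal vs. attack" decision as the action space. This is a minimal illustrative sketch only, not the authors' implementation: the CSV file names, the choice of numeric features, and the network sizes are assumptions introduced here for illustration.

```python
# Illustrative sketch: load hypothetical UNSW-NB15 train/test CSV exports and
# define an actor-critic network (policy head + value head) as used by an A3C worker.
import pandas as pd
import torch
import torch.nn as nn

# Hypothetical file names for the sampled training/testing partitions.
train_df = pd.read_csv("UNSW_NB15_training-set.csv")
test_df = pd.read_csv("UNSW_NB15_testing-set.csv")

# Use the numeric flow features as the state; "label" is the 0/1 attack indicator.
numeric_features = train_df.select_dtypes(include="number").drop(columns=["label"])
states = torch.tensor(numeric_features.values, dtype=torch.float32)
labels = torch.tensor(train_df["label"].values, dtype=torch.long)

class ActorCritic(nn.Module):
    """Shared trunk with a policy head (actor) and a value head (critic)."""
    def __init__(self, n_features: int, n_actions: int = 2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.policy_head = nn.Linear(64, n_actions)  # action logits: normal vs. attack
        self.value_head = nn.Linear(64, 1)           # state-value estimate

    def forward(self, x):
        h = self.trunk(x)
        return self.policy_head(h), self.value_head(h)

model = ActorCritic(n_features=states.shape[1])
logits, value = model(states[:32])  # one mini-batch of flow records
print(logits.shape, value.shape)
```

In full A3C, several such workers would run asynchronously, each computing policy and value gradients from its own rollouts and pushing them to a shared global network; the sketch above only fixes the per-worker network structure.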