Bridging the Interpretability Gap: A SHAP-Enhanced Framework for Intrusion Detection in Cybersecurity
Abstract
As cyber threats grow increasingly sophisticated, transparent intrusion detection systems (IDS) built on explainable artificial intelligence (XAI) have become a necessity in cybersecurity. The opacity of traditional black-box machine learning models is the main reason they are rarely deployed in high-risk environments. This paper proposes a novel SHAP (SHapley Additive exPlanations)-enhanced framework for network intrusion detection that leverages explainable machine learning to provide actionable insights and inform decision-making in cybersecurity. We train and evaluate machine learning and deep learning models, including Random Forest, XGBoost, and Convolutional Neural Networks (CNNs), on the NSL-KDD dataset. SHAP is applied to analyze feature importance, yielding interpretable insights into each model's predictions. The proposed framework addresses model transparency, feature selection, and class imbalance in IDS datasets. Our findings show that SHAP not only clarifies black-box model predictions but also identifies key feature interactions that help distinguish normal network activity from malicious traffic. Furthermore, we compare SHAP-based explanations with rule-based decision tree methods, highlighting the benefits of post-hoc interpretability in sensitive cybersecurity scenarios. The results underscore the need to integrate XAI techniques into IDS so that threat detection remains reliable, transparent, and efficient. This work contributes to the broader field of machine learning interpretability in cybersecurity and offers practical guidance for organizations seeking to deploy SHAP-enhanced IDS within their existing security infrastructure.
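The following Python snippet is a minimal sketch of the pipeline outlined in the abstract: training a tree-based classifier (here XGBoost) on NSL-KDD-style features and producing post-hoc SHAP explanations. The file name, the binary "label" column, and the preprocessing assumptions are illustrative placeholders, not the paper's exact implementation.

```python
# Illustrative sketch only: assumes a numerically encoded NSL-KDD export
# ("nsl_kdd_preprocessed.csv") with a binary "label" column (0 = normal, 1 = attack).
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Load the (assumed) preprocessed NSL-KDD data and split it.
df = pd.read_csv("nsl_kdd_preprocessed.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Train one of the models named in the abstract; scale_pos_weight is one
# common way to account for class imbalance in the training data.
model = XGBClassifier(
    n_estimators=300,
    max_depth=6,
    eval_metric="logloss",
    scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum(),
)
model.fit(X_train, y_train)

# Post-hoc interpretability: TreeExplainer computes SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global view of which features drive the normal-vs-attack decision.
shap.summary_plot(shap_values, X_test)
```

A summary plot of this kind is one way to surface the key feature interactions the abstract refers to; per-sample force or waterfall plots can then explain individual alerts to analysts.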