Explainable AI in Machine/Deep Learning for Intrusion Detection in Intelligent Transportation Systems for Smart Cities
Published in Mohamed Lahby, Utku Kose, Akash Kumar Bhoi, Explainable Artificial Intelligence for Smart Cities, 2021
Andria Procopiou, Thomas M. Chen
Without doubt, extensive and remarkable work has been conducted on detecting malicious traffic in Road ITS networks. A plethora of machine learning and statistical techniques have been adopted to detect various types of network attacks. Analysing the studies’ reported results, we observe that machine learning proved successful, with detection rates of at least approximately 89% regardless of the attack. Furthermore, the numerous studies investigated various types of attacks, all of which were detected successfully by machine learning. Based on these findings, it is further supported that machine learning algorithms can be an ideal solution to cybersecurity attacks regardless of their type, the TCP/IP layer at which they are conducted, and any evasion techniques that might be used.
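As a minimal sketch of the kind of supervised detector surveyed above, the following Python snippet trains a classifier on labelled traffic features and reports the detection rate (recall on the attack class). The synthetic data is a stand-in for real ITS flow features, and the specific model and parameters are illustrative assumptions, not those of any particular study.

```python
# Illustrative only: synthetic data replaces real labelled ITS traffic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Imbalanced binary problem: label 1 = attack traffic, label 0 = benign traffic.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# "Detection rate" here corresponds to recall on the attack class.
print("detection rate:", recall_score(y_te, clf.predict(X_te)))
```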
Enhanced Defensive Model Using CNN against Adversarial Attacks for Medical Education through Human Computer Interaction
Published in International Journal of Human–Computer Interaction, 2023
Different types of attacks have already been identified in this domain, such as poisoning attacks and evasion attacks. In a poisoning attack, poisoned samples are inserted into the training data, so attacks of this nature are carried out during training. Evasion attacks, by contrast, are mounted at test time: inputs are crafted with noise so that the model fails to predict correctly, while the perturbation remains undetected by a human. Evasion attack methods include FGSM, BIM, PGD, Carlini and Wagner, DeepFool, NewtonFool, the Elastic-Net attack, universal perturbations, and others.

Various threat models are emerging in the area of adversarial machine learning. Their main aim is to identify where adversaries can attack (Li et al., 2021). If the deployed model is a black box, the only thing adversaries can focus on is the deployed model’s behaviour. Adversaries may also try to access some of the training data, or they may try to introduce an attack in the early stages of the AI life cycle. Adversarial attacks may be white-box or black-box. In a white-box attack, the adversaries have thorough knowledge of the model and its hyperparameters, so it is easy to include perturbations. In a black-box attack, adversaries do not have detailed knowledge of the model and its hyperparameters; instead, they concentrate on the model’s behaviour and the domain of deployment. In this scenario, if attackers gain access to the data required to train the model, the model becomes increasingly susceptible from the start of the AI life cycle onwards.

Most attackers focus on the decision boundaries in classification problems. In every classification model there is a boundary condition through which the classifier assigns data either to class A or to class B. Attackers try to exploit this boundary region to introduce misclassification behaviour into the model. Adversaries also try to exploit the linearity hypothesis: neural network outputs extrapolate linearly as a function of their inputs, and the main goal of these adversarial attackers is to push the deep neural network outside this operating boundary. A DNN will still accept adversarial samples as inputs, but it does not have the capacity to discriminate them from the actual input images.
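To make the evasion setting concrete, the sketch below crafts an FGSM perturbation in PyTorch. It is an illustration of the general technique, not the defensive model or attack implementation from the article: the input is nudged in the direction of the sign of the loss gradient, following the linearity intuition described above, and the classifier (assumed here to be any trained model with a standard forward pass returning logits) is pushed across its decision boundary.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    The perturbation follows the sign of the loss gradient with respect to the
    input, exploiting the roughly linear behaviour of the network around x.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixel values valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage on a trained image classifier `model`:
# x_adv = fgsm_attack(model, images, labels, epsilon=8 / 255)
```

In a white-box setting the gradient is taken directly from the deployed model, as above; in a black-box setting an attacker would instead have to estimate it, for example through a surrogate model or query-based probing of the model’s behaviour.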