Feature Selection and Evaluation
Published in Guozhu Dong, Huan Liu, Feature Engineering for Machine Learning and Data Analytics, 2018
Pattern recognition and machine learning techniques have been increasingly applied to information security areas such as spam, intrusion, and malware detection. This makes it necessary to analyze the security of machine learning itself [5], which is the concern of adversarial machine learning. Adversarial machine learning is the design of machine learning algorithms that can resist sophisticated attacks, together with the study of the capabilities and limitations of attackers [31]. Such attacks include evading detection, causing benign input to be classified as attack input, launching focused or targeted attacks, and probing a classifier to find blind spots in the algorithm.
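One of the goals listed above, causing benign input to be classified as attack input, can be achieved with a causative (training-time) attack. The following minimal sketch uses entirely synthetic data and a 1-nearest-neighbor classifier chosen purely for illustration: a few mislabeled points injected into the training set flip the classification of a chosen benign input.

```python
import numpy as np

# Causative (training-time) poisoning sketch: injecting a few mislabeled
# points makes a chosen benign input classify as "attack".
# Model: 1-nearest-neighbor; all data is synthetic.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),    # benign traffic
               rng.normal([3, 3], 0.5, (50, 2))])   # attack traffic
y = np.array(["benign"] * 50 + ["attack"] * 50)

def nn_classify(x, X, y):
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

target = np.array([0.1, 0.2])              # benign input the adversary targets
print(nn_classify(target, X, y))           # "benign"

# Poisoning: insert points labeled "attack" right next to the target.
X_poisoned = np.vstack([X, target + rng.normal(0, 0.01, (3, 2))])
y_poisoned = np.append(y, ["attack"] * 3)
print(nn_classify(target, X_poisoned, y_poisoned))  # "attack"
```

The same principle underlies poisoning attacks on more complex learners, where the injected points must additionally evade any data-sanitization defenses.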
Machine Learning – Supervised Learning
Published in Rakesh M. Verma, David J. Marchette, Cybersecurity Analytics, 2019
Rakesh M. Verma, David J. Marchette
Adversarial machine learning is a subfield of machine learning in which the robustness of machine learning models is investigated using synthetic attacks. It began in 2004 with the work of Dalvi et al. and of Lowd et al. in the context of defeating spam filters [40]. They showed that linear classifiers can be tricked by making simple changes to spam emails without significantly affecting readability. Barreno et al. [28] developed an initial taxonomy of attacks (e.g., training time versus testing time, dictionary-based) and suggested defenses against them. A couple of books have been published on this topic [222, 464].
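The attack on linear spam filters can be illustrated concretely. The sketch below uses an invented six-word vocabulary and hand-picked weights (assumptions for illustration, not taken from [40]): because the classifier's score is linear in the word counts, appending a few "good" words with negative weights pushes a spam message below the decision threshold without touching its payload.

```python
import numpy as np

# Toy linear spam filter: score = w . x + b, flagged as spam if score > 0.
# Vocabulary and weights are invented for illustration.
vocab = ["viagra", "winner", "free", "meeting", "report", "thanks"]
w = np.array([2.0, 1.5, 1.0, -1.2, -1.0, -0.8])   # hypothetical learned weights
b = -0.5

def score(counts):
    return float(w @ counts) + b

spam = np.array([1, 0, 1, 0, 0, 0])     # message containing "viagra", "free"
print(score(spam))                      # 2.5 -> flagged as spam

# Good-word attack: append benign words without altering the payload.
evasive = spam + np.array([0, 0, 0, 1, 1, 1])
print(score(evasive))                   # -0.5 -> passes as ham
```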
Security Challenges and Solutions in IoT Networks for the Smart Cities
Published in Mohammad Ayoub Khan, Internet of Things, 2022
Another limitation observed is that none of the presented studies consider any DDoS attack on the application layer. Application-layer DDoS attacks are particularly effective due to their stealthiness and great similarity to legitimate traffic. Finally, as machine learning is heavily used as a defense mechanism, adversarial machine learning should be taken into consideration. Adversarial machine learning consists of malicious acts performed by adversaries to mislead machine learning defense mechanisms and eventually bypass them.
Artificial Intelligence and Cyber Defense System for Banking Industry: A Qualitative Study of AI Applications and Challenges
Published in Cybernetics and Systems, 2022
Khalifa AL-Dosari, Noora Fetais, Murat Kucukvar
Adversarial machine learning is an ML area aimed at influencing the output of a trained system by feeding it specific inputs (Kaloudi and Li 2020). Considering the number of AI-powered systems employed in the banking industry, adversarial ML could become a major threat (Geluvaraj, Satwik, and Ashok Kumar 2019). A generative adversarial network (GAN) is an ML configuration in which one ML system is trained to find flaws in the output generated by another ML system. The development of GANs has improved the capabilities of AI to generate convincing artificial content, or "deep fakes" (Caldwell et al. 2020). A related ML method, adversarial perturbation, aims to exploit the decision boundaries of existing ML systems: the attacker forces the system to produce a wrong output by making small changes to the inputs (Kaloudi and Li 2020).
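The fast gradient sign method (FGSM) is a standard instance of such a perturbation. The sketch below applies it to a logistic-regression classifier; the weights, input, and budget eps are all invented for illustration.

```python
import numpy as np

# FGSM-style adversarial perturbation against a logistic-regression model.
# Weights, bias, and the input are synthetic; eps is the attacker's budget.
rng = np.random.default_rng(0)
w = rng.normal(size=20)                 # hypothetical trained weights
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(class = 1)

x = rng.normal(size=20)
y = 1 if predict(x) > 0.5 else 0        # take the model's answer as the label

# For logistic regression with cross-entropy loss, dL/dx = (p - y) * w.
grad = (predict(x) - y) * w
eps = 0.25                              # small per-feature perturbation budget
x_adv = x + eps * np.sign(grad)         # step that maximally increases the loss

print(f"clean: P(1) = {predict(x):.3f}  adversarial: P(1) = {predict(x_adv):.3f}")
```

Because each feature moves by at most eps, the perturbation stays small even though the predicted probability is pushed across the decision boundary.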
Anti-malware engines under adversarial attacks
Published in International Journal of Computers and Applications, 2022
Shymalagowri Selvaganapathy, Sudha Sadasivam
Adversarial machine learning has emerged as a serious threat to cyber-security applications that deploy learning-based techniques. Techniques that generate adversarial samples in the malware domain are valid only if the samples retain their malicious functionality. This work considers Android apps and builds a feature space effective for malware detection by deploying a deep neural network. The efficacy of the deployed attacks brings to light the vulnerabilities of the learner under consideration. These threats can be addressed by a proactive defense-by-design approach that incorporates Kerckhoffs's principle: everything about the model may be revealed to the attacker, and yet the model must remain robust. Developing adaptive defenses against the various possible attacks is a possible research direction. Bridging the gap between the considered attacks and defenses in the theoretical feature space and the practical real-life problem space is a challenge to be addressed in future work.
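A common way to keep a feature-space adversarial sample functional, not taken from this article but widely used for binary Android features, is to restrict the attack to adding features (e.g., unused permissions or API calls) and never removing any. The sketch below applies this add-only constraint to an invented linear surrogate detector; the weights and the app's feature vector are assumptions.

```python
import numpy as np

# Add-only evasion in a binary malware feature space: features may be set
# to 1 (adding unused permissions/API calls) but never to 0, so the app's
# malicious behaviour is preserved. The detector is a synthetic linear
# surrogate; a real attack would target or approximate the deployed DNN.
rng = np.random.default_rng(1)
n = 30
w = rng.normal(size=n)                  # hypothetical detector weights
malware = lambda x: w @ x > 0           # flagged if score is positive

x = np.zeros(n)
x[np.argsort(w)[-5:]] = 1.0             # app activating five high-weight features
x_adv, budget = x.copy(), 10            # at most 10 added features

while malware(x_adv) and budget > 0:
    addable = np.where((x_adv == 0) & (w < 0))[0]   # absent, score-lowering features
    if addable.size == 0:
        break
    x_adv[addable[np.argmin(w[addable])]] = 1.0     # add the most helpful one
    budget -= 1

print("flagged before:", bool(malware(x)), " after:", bool(malware(x_adv)))
```

Mapping such feature-space changes back to a working APK (the problem space) is precisely the feature-space-versus-problem-space challenge the excerpt highlights.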