Adapting Our Reference Framework to Your Environment
Published in James F. Ransome, Anmol Misra, Mark S. Merkow, Practical Core Software Security, 2023
James F. Ransome, Anmol Misra, Mark S. Merkow
Threat modeling includes determining the attack surface of the software by examining its functionality for trust boundaries, entry points, data flows, and exit points. Threat models become useful only once the design documentation describing the application’s overall architecture is largely finalized, so that the model reflects the software as it is actually intended to be built. Threat modeling helps ensure that the design supports the security objectives, informs trade-off and prioritization-of-effort decisions, and reduces the risk of security issues during development and operations. Risk assessments of the software can be accomplished by ranking the threats as they pertain to your organization’s business objectives, compliance and regulatory requirements, and security exposures. Once the newly uncovered threats are understood and prioritized, they can feed into a phase of planning countermeasures and/or changing the design to remove defects.
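The ranking step described above can be sketched as a simple risk-scoring exercise: assign each threat a likelihood and an impact, multiply, and sort. A minimal illustration in Python; the threat names and weights are hypothetical, and real programs typically use a richer scheme such as DREAD or CVSS.

```python
# Minimal sketch of threat prioritization: risk = likelihood x impact,
# then sort so countermeasure planning starts with the highest risks.
# All threats, likelihoods, and impacts are hypothetical examples.
threats = [
    {"name": "SQL injection at login entry point", "likelihood": 4, "impact": 5},
    {"name": "Unencrypted data flow to analytics service", "likelihood": 3, "impact": 3},
    {"name": "Weak session handling at trust boundary", "likelihood": 2, "impact": 4},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

# Highest-risk threats first, to drive countermeasure planning.
ranked = sorted(threats, key=lambda t: t["risk"], reverse=True)
for t in ranked:
    print(f'{t["risk"]:>2}  {t["name"]}')
```

The point of the exercise is not the exact scores but the ordering, which ties the countermeasure-planning phase back to business objectives and compliance requirements.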
Failure Detection Application in Autonomous Vehicles
Published in Diego Galar, Uday Kumar, Dammika Seneviratne, Robots, Drones, UAVs and UGVs for Operation and Maintenance, 2020
Diego Galar, Uday Kumar, Dammika Seneviratne
A connected AV is subject to cyberattacks through its various network interfaces to the public network infrastructure, as well as through its direct exposure to the open physical environment. The attack surface of a system is the sum of its attack vectors, that is, the different points where an attacker can attempt to inject data into or extract data from the system in order to compromise the security controls of the AV. Figure 9.15 depicts the typical attack surfaces of an AV (Intel IoT, 2016) and potential attack sources. As the figure shows, the attack sources are typically external agents/events, but they can also be internal components with malicious intent that attempt to compromise the expected autonomy functionality of the AV. For example, the Bluetooth interface of the AV shown in Figure 9.15 can be considered a potential attack surface that can be compromised by connecting malicious devices (attack sources) to this communication channel (Chattopadhyay & Lam, 2018).
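The "sum of attack vectors" view lends itself to a simple enumeration: list each externally reachable interface with its exposure and likely attack source, and treat the attack surface as their union. A minimal sketch; the interface names loosely follow the AV example, but the entries and attributes are hypothetical.

```python
# Sketch: the attack surface as the union of reachable attack vectors.
# Each entry records how the interface is exposed and what kind of
# attack source could exploit it. All entries are illustrative.
attack_vectors = {
    "bluetooth": {"exposure": "short-range wireless", "source": "malicious paired device"},
    "cellular":  {"exposure": "public network",       "source": "remote attacker"},
    "obd_ii":    {"exposure": "physical port",        "source": "plugged-in device"},
    "gps":       {"exposure": "RF reception",         "source": "external spoofing transmitter"},
}

# The attack surface is the union of these vectors; reducing it means
# removing, disabling, or hardening individual entries.
attack_surface = set(attack_vectors)
print(sorted(attack_surface))
```

An inventory like this is the starting point for the kind of figure referenced above: it makes explicit which channels an attacker can reach and therefore which ones need hardening first.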
A Model for and Inventory of Cybersecurity Values: Metrics and Best Practices
Published in Natalie M. Scala, James P. Howard, Handbook of Military and Defense Operations Research, 2020
Natalie M. Scala, Paul L. Goethals
Another method used to assess risk in systems is the attack surface, first introduced by Howard (2003) to address vulnerabilities in computer software. While several different definitions of the attack surface exist, it is generally used to describe the internal and external accesses or privileges via hardware or software; the union of system components, features, and services; and the protocols established for a given organization (Theisen et al., 2018). At the macroscopic level, an attack surface may be used to evaluate vulnerabilities or identify attack vectors across the physical architecture of an organization, including its servers, routers, and other devices connected to the network. It can also capture very fine-grained constructs, such as specific application vulnerabilities, the interfaces between email and the internet, an individual’s network behavior, and data storage modules. Illustrations like Figure 14.2 are often created to give decision-makers an increased awareness of high-risk areas within their security environment. The objective is then to strengthen system security by focusing resources on minimizing the organization’s attack surface.
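One well-known way to turn the attack surface into a comparable number is Manadhata and Wing's measurement, which sums a damage-potential-to-attacker-effort ratio over the system's entry points, channels, and untrusted data items. A minimal sketch under that formulation; the resources and weights below are hypothetical, and real measurements partition resources into the three dimensions rather than a single list.

```python
# Hedged sketch of an attack-surface measurement in the spirit of
# Manadhata and Wing: sum damage-potential / attacker-effort ratios
# over the system's exposed resources. All names and weights are
# hypothetical examples.
resources = [
    # (resource, damage potential, attacker effort)
    ("public REST endpoint", 5, 1),
    ("admin SSH channel",    4, 4),
    ("uploaded file store",  3, 2),
]

measurement = sum(damage / effort for _, damage, effort in resources)
print(round(measurement, 2))  # 7.5
```

Minimizing the attack surface then has a concrete reading: remove resources, or raise the attacker effort required to reach them, until the measurement drops.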
Explainable and secure artificial intelligence: taxonomy, cases of study, learned lessons, challenges and future directions
Published in Enterprise Information Systems, 2023
Khalid A. Eldrandaly, Mohamed Abdel-Basset, Mahmoud Ibrahim, Nabil M. Abdel-Aziz
Adversarial capability is a significant criterion to recognise in security analysis, as it expresses the power of an adversary to compromise the security of the system. Generally speaking, an adversary can be regarded as strong or weak according to the extent of its knowledge of, or access to, system information. This criterion indicates how, and which kinds of, attacks an attacker could initiate, through which attack vector and against which attack surface. The attack can be performed in either the training or the inference phase. Attacks at training time seek to tamper with the underlying model and affect the associated learning process; two types of attack can be initiated in this phase: first, injection of adversarial examples into the training data; second, direct modification of the training data. Adversarial attacks at inference time, by contrast, are usually regarded as exploratory attacks that do not alter the ML/DL model; rather, they merely fool it into generating incorrect predictions.
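The second training-phase attack type, direct modification of the training data, can be illustrated with a label-flipping poisoning sketch: an adversary with write access to the training set flips a fraction of the labels before the model is fit. Everything below is a toy illustration; the dataset, flip fraction, and seed are hypothetical.

```python
# Sketch of a training-time poisoning attack via direct label
# modification (label flipping). Purely illustrative toy data.
import random

random.seed(0)
X = [[float(i)] for i in range(20)]   # 20 one-feature samples
y = [0] * 10 + [1] * 10               # clean binary labels

flip_fraction = 0.3
n_flips = int(flip_fraction * len(y))
poisoned = list(y)
for idx in random.sample(range(len(y)), n_flips):
    poisoned[idx] = 1 - poisoned[idx]  # flip the chosen label

changed = sum(a != b for a, b in zip(y, poisoned))
print(changed)  # 6 labels flipped
```

A model trained on `poisoned` instead of `y` learns a corrupted decision boundary, which is exactly the training-time tampering the passage contrasts with inference-time (exploratory) attacks that leave the model untouched.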
Explainable AI for Security of Human-Interactive Robots
Published in International Journal of Human–Computer Interaction, 2022
Antonio Roque, Suresh K. Damodaran
The third kill chain phase to consider is Exploitation (Execution in the ATT&CK matrix), in which the system is subverted to perform the attacker’s wishes. Example techniques are a systems security analysis of a deployed industrial robot, defining an attacker model and showing how the attacker can exploit software vulnerabilities (Quarta et al., 2017); describing the attack surface of industrial robots, focusing on the networked interface, the operator interface, and features in domain-specific programming languages (Pogliani et al., 2019); attacks on teleoperated surgical robots that inject control commands into the robot’s control system, with a model-based analysis as well as experiments on an actual robot that quantify the extent to which the model can mitigate attacks (Alemzadeh et al., 2016); and attacks on an Amigobot robot (such as the false data injection attack of Sabaliauskaite et al. (2017)) by creating false signals for its sonar sensors, with tests of the cumulative sum technique for attack detection (Sabaliauskaite et al., 2015).
Technical debt as an indicator of software security risk: a machine learning approach for software development enterprises
Published in Enterprise Information Systems, 2022
Miltiadis Siavvas, Dimitrios Tsoukalas, Marija Jankovic, Dionysios Kehagias, Dimitrios Tzovaras
Several directions for future work can be identified. First of all, the present study was based on open-source software applications written in the Java programming language. To investigate the generalisability of our results, we are planning to replicate the present work with software applications written in programming languages other than Java, and to consider the case of commercial software applications as well. In addition, in the present study the SAVD metric was used as a measure of software security risk, and the SonarQube static analysis platform was used for its quantification. In the future, we are planning to repeat the present analysis using other open-source or commercial static code analysers for quantifying SAVD, and to consider other software security risk indicators such as the Attack Surface (Howard 2007; Manadhata and Wing 2011). Finally, if the results of the present study generalise, we are planning to implement our models in the form of individual tools (or as part of common IDEs or software quality platforms), which will facilitate decision making during the overall SDLC by helping developers and project managers identify and mitigate security risks early in the development process.