Explore chapters and articles related to this topic
Loopholes in IoT Security Services
Published in Syed Rameem Zahra, Mohammad Ahsan Chishti, Security and Privacy in the Internet of Things, 2020
Shafalika Vijayal, Salim Qureshi
The attacker can extract a user's private data from the information gathered and leaked by IoT devices. The attacker can breach privacy using the following threats:

Keystroke inference attack: such attacks affect not only the target device but also devices placed near it. The attack exploits input devices such as touchpads and keyboards to determine the usernames and passwords entered by the user; the data is recovered from the deviations that changes in the device's orientation cause on its motion sensors.

Task inference attack: this attack discovers information about the tasks currently being carried out on a user's smart device so that the device's state can be duplicated. It reveals which applications are running on a user's device connected over the network.

Eavesdropping: some attacks install a malicious program that extracts the content of conversations from the audio sensors of AI speakers without the user's knowledge. Malware that embeds itself in a voice assistant application can carry out a range of malicious activities, such as voice duplication to commit financial fraud over the phone.

Location inference attack: this attack uses a side channel against IoT devices to discover personal information such as a home or work address.
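The keystroke inference idea above can be sketched as a toy classifier: a hypothetical attacker who has profiled the average motion-sensor deviation produced when keys in different regions of the keyboard are pressed can classify new readings by nearest centroid. The profile values and readings below are illustrative stand-ins, not real sensor data.

```python
import math

# Hypothetical per-region centroids of (x_tilt, y_tilt) sensor deviations,
# built by the attacker during a profiling phase.
PROFILES = {
    "left":   (-0.8,  0.1),
    "center": ( 0.0,  0.0),
    "right":  ( 0.9, -0.1),
}

def infer_key_region(reading):
    """Return the profiled keyboard region whose centroid is closest."""
    return min(
        PROFILES,
        key=lambda region: math.dist(reading, PROFILES[region]),
    )

# A burst of readings captured while the victim types.
captured = [(-0.7, 0.2), (0.85, -0.05), (0.1, 0.02)]
print([infer_key_region(r) for r in captured])  # → ['left', 'right', 'center']
```

A real attack would use many more sensor features and a trained model, but the principle is the same: keystrokes leave a measurable fingerprint on nearby motion sensors.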
Privacy and Anonymity in Mobile Ad Hoc Networks
Published in Yang Xiao, Security in Distributed, Grid, Mobile, and Pervasive Computing, 2007
Besides locations, changes of location, i.e., the nodes' motion patterns, are very important information. For example, a network mission may require a set of legitimate nodes to move in the same direction or toward a specific spot. Any inference of the motion pattern effectively exposes the outline of the mission and may ultimately cause the mission to fail. Ensuring privacy for mobile nodes' motion patterns is therefore a new requirement. If the network fails to ensure topological location privacy, a mobile node's motion pattern can be inferred by a dense grid of traffic analysts, or even by a sparse set of node intruders under certain conditions [13], e.g., intruders capable of knowing their neighbors' relative positions (clockwise or counterclockwise) and of overhearing or receiving the route replies (RREPs) of on-demand routing.

Example 8.6 (Motion pattern inference attack: dense mode) The goal of this passive attack is to infer (possibly imprecise) motion patterns of mobile nodes. In Figure 8.1, the omnipresent colluding intruders can monitor wireless transmissions into and out of a specific mobile node; by combining the intercepted data, they can trace the motion pattern of the node at the granularity of a cell.

Example 8.7 (Motion pattern inference attack: sparse mode) When node intruders are sparse in the network, they may still be able to infer motion patterns from ongoing routing events, though the information gathered may be imprecise. Here we describe a probabilistic H(op)-clique attack. Figure 8.2 depicts the situation in which a node intruder X learns from routing packets that its next hop toward node Y has switched from node V1 to V2 (both are X's neighbors). With high probability, this routing event indicates that either the target node Y (left figure) or some intermediate forwarding node (right figure) has moved along the direction V1 → V2 (clockwise).
We assume that a node intruder is furnished with basic ad hoc localization techniques (e.g., Angle-of-Arrival, Received Signal Strength Indicator). An H-clique comprises a single node intruder and its unsuspecting neighbors. Through collusion, multiple H-cliques can combine their knowledge to obtain more precise information on a motion pattern. Figure 8.3 shows that a mobile node cutting through two H-cliques is detectable by the adversary; Figure 8.4 shows the case of three H-cliques. Therefore, a few node intruders can effectively launch motion pattern inference attacks against the entire network. Both proactive routing schemes and on-demand schemes are vulnerable to such passive attacks.
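The next-hop-switch observation at the heart of the H-clique attack can be sketched in a few lines: an intruder X that knows its neighbors' angular positions (e.g., from Angle-of-Arrival measurements) infers the direction of motion when the next hop toward a target switches. The neighbor bearings below are hypothetical, not taken from the chapter's figures.

```python
# Hypothetical bearings (degrees) of intruder X's neighbors, measured
# clockwise from a fixed reference direction.
NEIGHBOR_BEARING = {"V1": 40, "V2": 75, "V3": 200}

def inferred_direction(old_hop, new_hop):
    """Infer clockwise vs. counterclockwise motion from a next-hop switch.

    A switch whose angular offset is under 180 degrees suggests the target
    (or an intermediate forwarder) moved clockwise around X.
    """
    delta = (NEIGHBOR_BEARING[new_hop] - NEIGHBOR_BEARING[old_hop]) % 360
    return "clockwise" if delta < 180 else "counterclockwise"

print(inferred_direction("V1", "V2"))  # → clockwise
print(inferred_direction("V2", "V1"))  # → counterclockwise
```

Colluding H-cliques would combine many such local inferences to reconstruct a node's trajectory across the network.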
Empirical study of privacy inference attack against deep reinforcement learning models
Published in Connection Science, 2023
Huaicheng Zhou, Kanghua Mo, Teng Huang, Yongjin Li
Machine learning research has experienced rapid growth, leading to significant advances in image recognition (He et al., 2016), natural language processing (Vaswani et al., 2017) and robotic control (Lillicrap et al., 2015). However, the application of these technologies carries a significant risk of data privacy breaches. When deep learning models are deployed, they provide a new means of accessing information from the training dataset, which can reveal private information that attackers may exploit. Thus, the deployment of deep learning models poses a significant security risk that must be addressed. For example, membership inference attacks (Shokri et al., 2017) allow querying a model to determine whether a sample is in the training set, and Ganju et al. (2018) present an inference attack to extract dataset attributes. In some cases, attempts are even made to reconstruct the entire training set, resulting in severe privacy leaks in publicly used models. The issue of privacy arises in both supervised and reinforcement learning (RL). The study by Pan et al. (2019) emphasises that policy models in RL are at risk of leaking information about the environment. RL models in various domains, such as healthcare (Esteva et al., 2019), contain sensitive data that may be exploited, leading to privacy disclosure. Therefore, further research into the privacy concerns of reinforcement learning is necessary.
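The membership inference attack mentioned above can be illustrated with a minimal sketch of its simplest variant, a confidence-thresholding attack: models are typically more confident on samples they were trained on, so the attacker guesses membership when the queried model's top predicted probability is unusually high. The model query and threshold below are illustrative stand-ins, not the shadow-model construction of Shokri et al. (2017).

```python
def predict_confidence(sample):
    # Stand-in for querying a deployed model's API; in a real attack this
    # would return the model's top-class probability for the sample.
    return sample["model_confidence"]

def is_member(sample, threshold=0.9):
    """Guess 'in the training set' when the model is unusually confident."""
    return predict_confidence(sample) >= threshold

queries = [
    {"id": "a", "model_confidence": 0.98},  # likely a memorised training sample
    {"id": "b", "model_confidence": 0.55},  # likely an unseen sample
]
print([is_member(q) for q in queries])  # → [True, False]
```

Stronger attacks calibrate the decision with shadow models trained on similar data, but the leakage channel is the same: the gap between a model's behaviour on members and non-members.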
Secure data outsourcing in presence of the inference problem: issues and directions
Published in Journal of Information and Telecommunication, 2021
Adel Jebali, Salma Sassi, Abderrazak Jemai
According to Farkas and Jajodia (2002), there are three types of inference attack: statistical attacks, semantic attacks, and inference due to data mining. For each of these attack types, researchers have devoted considerable effort to dealing with the inference problem. Against statistical attacks, techniques such as anonymization and data perturbation have been developed to protect data from indirect access. Against security threats based on data mining, privacy-preserving data mining and privacy-preserving data publishing techniques were developed. Furthermore, many works have investigated semantic attacks (Brodsky et al., 2000; Chen & Chu, 2006; Su & Ozsoyoglu, 1991).
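The anonymization countermeasure mentioned above can be sketched with a toy generalization step: a quasi-identifier (here, age) is coarsened into ranges so that released records become indistinguishable within a group, blunting statistical inference. The records and bucket width are hypothetical, for illustration only.

```python
def generalise_age(age, width=10):
    """Replace an exact age with its enclosing range, e.g. 34 -> '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

records = [
    {"age": 34, "diagnosis": "flu"},
    {"age": 37, "diagnosis": "asthma"},
    {"age": 52, "diagnosis": "flu"},
]

# The released view keeps the sensitive attribute but coarsens the
# quasi-identifier, so the first two records share an age group.
released = [{"age": generalise_age(r["age"]), "diagnosis": r["diagnosis"]}
            for r in records]
print([r["age"] for r in released])  # → ['30-39', '30-39', '50-59']
```

Production schemes (e.g., k-anonymity with suppression, or data perturbation via added noise) are more involved, but all trade data utility for resistance to indirect inference.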