Challenges of the digital public space
Published in Naomi Jacobs, Rachel Cooper, Living in Digital Worlds, 2018
With that in mind, are there ways in which information can be evaluated and verified? The persistence of information in digital public space means that incorrect information can still be delivered even after it has been discredited. The ‘right to be forgotten’ ruling, as discussed in Chapter 6, means that individuals have the right to ask search engines to remove information about them from search results ‘if the information is inaccurate, inadequate, irrelevant or excessive’. Does this count as creating bias in the system, or simply correcting for bias that already exists? If something ‘wrong’ is put online and spreads, it is very difficult to correct that information. Of course this is also true in the non-digital sphere, as anyone who has tried to quash a rumour spreading through a community knows. But it can be particularly frustrating if one is using the digital public space to try to find the objective truth.
Privacy and data protection
Published in Eduard Fosch-Villaronga, Robots, Healthcare, and the Law, 2019
The Right to Be Forgotten has risen to prominence alongside the growing importance of privacy law in general, particularly as codified in regulations like the GDPR. The right to be forgotten is essentially the concept that individuals have the right to request that their data (collected by others) be deleted. This concept of “data deletion” has come to the forefront of many juridical discussions of the right to be forgotten. While “data deletion” may seem a straightforward topic to many regulators, this seemingly simple issue poses many practical problems in machine learning environments. In fact, “data deletion” requirements can be considered to border on impossibility (Fosch-Villaronga, Kieseberg, & Li, 2018).
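The practical problem the excerpt alludes to can be illustrated with a deliberately simplified sketch (hypothetical data and a toy "model", not from the cited work): deleting a record from the stored dataset does not remove its influence from parameters that were fit before the deletion.

```python
# Minimal sketch: why "data deletion" is hard once a model is trained.
# Removing a record from the raw dataset does not undo its influence
# on parameters that were computed before the deletion request.

def train(dataset):
    """A trivially simple 'model': the mean of the observed values."""
    return sum(dataset) / len(dataset)

dataset = [10.0, 20.0, 90.0]  # hypothetical records; 90.0 belongs to the data subject
model = train(dataset)        # the fitted parameter now encodes the 90.0 record

dataset.remove(90.0)          # honour the deletion request on the stored data

print(model)                  # 40.0 -- stale parameter still reflects the deleted record
print(train(dataset))         # 15.0 -- only retraining removes its influence
```

Real machine learning systems aggregate each record into millions of parameters, so "removing" one record without full retraining is far harder than in this toy case, which is the difficulty the excerpt highlights.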
‘A right to be forgotten’: retrospective privacy concerns in social networking services
Published in Behaviour & Information Technology, 2023
Yun Zhang, Chuan Luo, Hongyan Wang, Yang Chen, Yue Chen
As time passes, users’ posted content accumulates in social networking services (SNSs) and can easily be revisited by others at any time. This may complicate users’ self-presentation, as they must manage both past and present versions of the self (Brandtzaeg and Lüders 2018). For instance, many users reported deleting or hiding old posts that they considered childish or silly in order to maintain their current self-presentation, especially after changes in their life (Huang, Vitak, and Tausczik 2020). This reflects that data permanence may raise privacy concerns, because past posts can introduce uncertainty into future interactions (Ayalon and Toch 2017). In this sense, some users switch to ephemeral applications, such as Snapchat, which automatically delete messages shortly after they are sent. Likewise, WeChat Moments (abbreviated as ‘Moments’), the most popular social media platform in China, has launched a Time Limit setting that allows users to choose an expiry time (three days, one month, or six months) for their posts, after which content is viewable by the poster alone. These features imply that individuals have a growing need to manage their past content in SNSs (Bayer et al. 2016; Choi and Sung 2018; Chen and Cheung 2019). The European Union has also proposed a general data protection regulation based on ‘the right to be forgotten’, reflecting the importance of temporal privacy (EU Commission 2018). People have always managed the temporal aspects of their privacy, but the issue is challenged and highlighted by the affordances of SNSs.
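The expiry mechanism described above (a post becomes viewable only by its poster after a chosen window) can be sketched in a few lines of Python. The function and window names are illustrative assumptions, not WeChat's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical expiry options mirroring the Time Limit setting described above.
TIME_LIMITS = {
    "three_days": timedelta(days=3),
    "one_month": timedelta(days=30),
    "six_months": timedelta(days=180),
}

def visible_to(viewer, poster, posted_at, limit_key, now):
    """After the chosen expiry window, a post is viewable only by its poster."""
    if viewer == poster:
        return True  # the poster can always see their own past content
    return now - posted_at <= TIME_LIMITS[limit_key]

posted = datetime(2023, 1, 1)
# A friend can see the post within three days, but not afterwards;
# the poster retains access indefinitely.
print(visible_to("friend", "alice", posted, "three_days", datetime(2023, 1, 2)))   # True
print(visible_to("friend", "alice", posted, "three_days", datetime(2023, 1, 10)))  # False
print(visible_to("alice", "alice", posted, "three_days", datetime(2023, 1, 10)))   # True
```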
Blockchain for the supply chain of the Italian craft beer sector: tracking and discount coupons
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
Lorenzo Ariemma, Niccolò De Carlo, Diego Pennino, Maurizio Pizzonia, Andrea Vitaletti, Marco Zecchini
A well-known technical approach to protecting the privacy of users (even in regular centralized systems) is to separate the personal data that have to be processed in an untrusted environment (e.g. a database or, in our case, a blockchain) from the name and other data that directly identify the data owner. These two kinds of data are linked by some sort of anonymous identifier. We can proceed in a similar way by forcing each coupon to be associated with a new public key, even if the coupons belong to the same beerlover. This is similar to the approach used in many blockchains (e.g. Bitcoin) to protect the privacy of cryptocurrency owners. Note that for certain functionalities, such as searching for all coupons belonging to a beerlover, it is necessary to access both kinds of data. In our specific case, a subject called the managing operator (see Section ‘Objectives: from centralized to blockchain-based’) has a special role and may be assigned the controller role (in GDPR terms). It can be in charge of handling the identity-related data off-chain and in compliance with privacy regulation (e.g. regarding confidentiality or the right to be forgotten). Note that this approach leverages a centralized aspect that was already present in the system while keeping intact most of the advantages associated with the adoption of a blockchain, in particular the guarantee of the integrity of the processing and the limitation of the responsibility of the managing operator.
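The separation described above can be sketched as two stores: pseudonymous coupon records (on-chain) and the managing operator's private identity mapping (off-chain). All names and the use of random tokens in place of real key pairs are illustrative assumptions, not the paper's implementation.

```python
import secrets

# On-chain: coupons keyed by fresh pseudonymous identifiers (public, append-only in practice).
onchain_coupons = {}
# Off-chain: the managing operator's private mapping from identity to identifiers.
offchain_identity = {}

def issue_coupon(beerlover, coupon_data):
    pseudo_id = secrets.token_hex(16)  # stands in for a fresh public key per coupon
    onchain_coupons[pseudo_id] = coupon_data
    offchain_identity.setdefault(beerlover, []).append(pseudo_id)
    return pseudo_id

def coupons_of(beerlover):
    # Linking coupons to an identity requires BOTH stores, as noted above.
    return [onchain_coupons[p] for p in offchain_identity.get(beerlover, [])]

def forget(beerlover):
    # Right to be forgotten: the operator deletes only the off-chain link;
    # on-chain records persist but can no longer be attributed to anyone.
    offchain_identity.pop(beerlover, None)

issue_coupon("alice", {"discount": "10%"})
issue_coupon("alice", {"discount": "20%"})
print(len(coupons_of("alice")))  # 2
forget("alice")
print(coupons_of("alice"))       # []
print(len(onchain_coupons))      # 2 -- data persists on-chain, now unlinkable
```

Deleting only the off-chain mapping is what lets the immutable blockchain coexist with deletion requests: the surviving on-chain records carry no direct identity.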
Online Privacy Breaches, Offline Consequences: Construction and Validation of the Concerns with the Protection of Informational Privacy Scale
Published in International Journal of Human–Computer Interaction, 2020
Eric Durnell, Karynna Okabe-Miyamoto, Ryan T. Howell, Martin Zizi
To demonstrate the utility of the CPIP, we used an ABA design to measure people’s emotions before and after they read text about privacy rights. These texts described how various domestic and foreign documents address the right to privacy (e.g., the 1st, 4th, 6th, and 9th Amendments of the U.S. Constitution, the Right to be Forgotten, and the Health Insurance Portability and Accountability Act [HIPAA]; see Appendix A), and were intended to make salient people’s right to privacy. The participants first rated their current emotions using the PANAS-X (Watson, 1988). Next, they read about how the various documents guaranteed a right to privacy. This allowed us to measure how participants felt about a loss of privacy. Then they rated their current emotions again using the same PANAS-X survey. We were particularly interested in the change in emotions among those who scored high and low on general concern for the protection of informational privacy.