Neuroprivacy and Cognitive Liberty
Published in L. Syd M Johnson, Karen S. Rommelfanger (eds.), The Routledge Handbook of Neuroethics, 2017
The ideas of cognitive liberty and neuroprivacy are the results of neuroscientific advances that will give us unprecedented power to peer into brain processes and manipulate brain function. Society is beginning a serious conversation about what kinds of access the state should have to the thought processes of its citizens and to what degree we should allow citizens unfettered rights to alter their own brain chemistries. The questions are not simple. The state has always retained rights to violate personal privacy in the face of threats, and society has always tempered individual rights to some degree in service of the communal good.
Artificial Intelligence in Clinical Neuroscience: Methodological and Ethical Challenges
Published in AJOB Neuroscience, 2020
Marcello Ienca, Karolina Ignatiadis
Predictive AI software that can inferentially reveal sensitive information (e.g. probabilities of risk, signatures of disease, or cues of personal preferences) about a person's neurocognitive domain raises privacy concerns. This subdomain of privacy considerations can be termed "neuroprivacy" (Wolpe 2017), as it involves the private status of neural data as well as of the inferences that can be made from those data or from proxy information (e.g. secondary data or indirect measurements). Neuroprivacy issues are manifold and depend on various features inherent in AI systems. An obvious one is the inferential potential of intelligent algorithms, which can infer and generate sensitive information (e.g. responsiveness to cognitive tasks) about individuals or groups from seemingly non-sensitive data (e.g. EEG recordings). Furthermore, AI methods could be used to identify or re-identify people (e.g. research participants or neurological patients) who wish to remain anonymous, because de-identified data may become re-identifiable when triangulated with other datasets (Gymrek et al. 2013; Price and Cohen 2019; Schwarz et al. 2019). In addition, the increasing availability of consumer neurotechnology, mental health apps, sensor-equipped wearable systems, and other AI-driven consumer products generates vast amounts of data (Ienca et al. 2018). In most cases, these data are collected without the explicit consent, or even the knowledge, of the data subject. Neuroprivacy concerns are exacerbated if relevant sensitive information (e.g. neural signatures of disease or simple statistical probabilities of developing a mental illness) falls into the hands of third parties, i.e. actors other than the original data collectors. For example, if a participant's early neural signatures of Alzheimer's disease are incidentally revealed during a research project, that information could lead to discrimination if accessed by employers or health insurance providers. Finally, neuroprivacy risks are not limited to individual privacy but scale to group-level privacy, as AI methods can be used to profile people based on population-scale data (which are typically anonymized). For example, anonymous EEG data from consumer BCI applications, even if rendered immune to individual re-identification, can nonetheless be used to profile groups of users based on ancillary information such as location (as many consumer neurodevices enable smartphone-mediated location tracking), age, and sex.
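To make the triangulation risk concrete, the following is a minimal sketch of a classic linkage attack in Python with pandas. All datasets, names, and column labels are hypothetical illustrations, not drawn from the article: a de-identified neural dataset that still carries quasi-identifiers (age, sex, city) is joined against an identified auxiliary dataset, and any record whose quasi-identifier combination is unique is re-identified.

```python
# Minimal sketch of a linkage (triangulation) attack.
# All data, names, and columns below are hypothetical.
import pandas as pd

# De-identified EEG-derived records from a hypothetical consumer BCI app.
# Identities are stripped, but quasi-identifiers remain.
eeg_records = pd.DataFrame({
    "age":        [34, 34, 58],
    "sex":        ["F", "M", "M"],
    "city":       ["Basel", "Basel", "Zurich"],
    "risk_score": [0.12, 0.81, 0.44],  # sensitive inferred probability
})

# Hypothetical auxiliary dataset with identities (e.g. a public
# profile list) sharing the same quasi-identifiers.
public_profiles = pd.DataFrame({
    "name": ["A. Keller", "B. Meier", "C. Weber", "D. Huber"],
    "age":  [34, 34, 34, 58],
    "sex":  ["F", "M", "M", "M"],
    "city": ["Basel", "Basel", "Basel", "Zurich"],
})

# Joining on quasi-identifiers links sensitive scores to names.
linked = eeg_records.merge(public_profiles, on=["age", "sex", "city"])

# A record is exactly re-identified when its quasi-identifier
# combination matches a single named individual; ambiguous matches
# (here, the two 34-year-old men in Basel) are filtered out.
reidentified = linked.groupby(["age", "sex", "city"]).filter(lambda g: len(g) == 1)
print(reidentified[["name", "risk_score"]])
```

Note that the same ancillary fields (location, age, sex) that drive this individual-level linkage also support the group-level profiling described above, even when no single record can be uniquely re-identified.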