Privacy and Ethics in Brain–Computer Interface Research
Published in Chang S. Nam, Anton Nijholt, Fabien Lotte, Brain–Computer Interfaces Handbook, 2018
But we have had to set aside several important debates. One is that different modes of brain data collection (ECoG, EEG, and intracortical electrodes) currently yield brain data of different quality, quantity, cost, and reproducibility. While these differences may have privacy implications, it is difficult to anticipate how technological developments will affect them, so for simplicity we have treated them together as a single source of BCI data. Another is the philosophical question of whether decoding neural activity to infer mental states is conceptually coherent or misguided. There is a rich debate in the philosophy of mind about the relation between neural activity and mental states, including the theory of the extended mind, which also has implications for BCI (Heersmink 2013). While the extension of one’s mind into a neuroprosthetic, for instance, may have implications for privacy (is confiscating a prosthetic an invasion of physical privacy?), we think such questions can be tabled for now. Last, BCI raises concerns not only about privacy but also about security (or “neurosecurity”) and the hacking of data and devices (Bonaci et al. 2014; Denning et al. 2009; Ienca and Haselager 2016). That is clearly an important and far-reaching issue; we have focused on privacy here because security matters either for reasons independent of privacy (e.g., harms to patients or compromise of devices) or precisely because privacy matters. In other words, we take security to be significant in part because of the importance of privacy, and we assume that to the extent that privacy is valuable, security is all the more important.
The future of neuromodulation: smart neuromodulation
Published in Expert Review of Medical Devices, 2021
Dirk De Ridder, Jarek Maciaczyk, Sven Vanneste
But, importantly, it is also evident that this technology may create novel opportunities for misuse, i.e., have dual use [84], and therefore a discussion including not only philosophers and neuroethicists but also a larger societal representation needs to be initiated to guide the development of this promising technology. A neurosecurity framework has been developed that involves calibrated regulation, (neuro)ethical guidelines, and awareness-raising activities within the scientific community [84]. It is paramount that this research be guided by representatives of society, both ethicists and politicians; yet prohibiting it would only drive the work to uncontrollable places, with potentially devastating consequences for society. After a publication in which three people played Tetris together using only brain-to-brain interfaces [75], a call was raised to start a multidisciplinary discussion on a number of currently unresolved ethical issues related to multi-person brain-to-brain interfaces, including autonomy, privacy, agency, accountability, and identity [85]. These overlap with an earlier inventory of the major ethical implications of brain-hacking, should brain–computer interfaces be hacked [86–88]. Furthermore, the dangers of AI in the setting of neuromodulation need to be seen in the larger picture of AI in society, in which AI may pose risks through:

1. Automation, resulting in job loss, weapons taking autonomous decisions, and brain stimulators activating and deactivating outside human control;
2. Invasion of privacy, not only by medical personnel but also by insurance companies, politicians, etc.;
3. Deep fakes used to manipulate society’s opinions;
4. Data quality: garbage in, garbage out, misalignment between human and AI goals, and discrimination.