Computational Neuroscience and Compartmental Modeling
Published in Bahman Zohuri, Patrick J. McDaniel, Electrical Brain Stimulation for the Treatment of Neurological Disorders, 2019
Another understandable doubt is that it is hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans, that 0s and 1s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of the self are difficult to explain, and perhaps currently beyond the reach of science; it is called the hard problem of consciousness for a reason. But if consciousness is an emergent property, the result of a billion-year evolutionary process starting from the first self-replicating molecules, which themselves arose from the molecular motions of inanimate matter, then computer consciousness does not seem so far-fetched. If we, who emerged from a soup of inanimate atoms, cannot believe that inanimate 0s and 1s could give rise to consciousness no matter how intricate the arrangement, we should try telling that to the atoms. Machine intelligence is, at bottom, a switch of hardware from organic tissue to the much faster and more efficient silicon and metal. If consciousness can emerge on one medium, why could it not emerge on another?
The physician’s understanding of the patient’s bodily meaning
Published in Rolf Ahlzén, Martyn Evans, Pekka Louhiala, Raimo Puustinen, Medical Humanities Companion, 2018
The operation of mirror neurons thus gives a biological clue to intersubjectivity and empathy. As metaphors, the magnetic resonance images of the activated clusters of neurons are quite strong, and accordingly we may anticipate an increasing interest in empathy in medicine. Obviously, what is shown in these images is not understanding in itself. The ‘hard problem of consciousness’, the explanatory gap between consciousness and nature [17], has not been solved, and most probably never will be. Experience is only to be understood as experience. However, experience and scientific explanations are two versions of one and the same reality.
In defence of the conscious mind
Published in Christopher Dowrick, Person-centred Primary Care, 2017
The focus of this chapter is on the nature of human consciousness. Our problem is that our experience of our own consciousness does not fit well with what we know through science, including neuroscience (indeed, especially neuroscience). Surely this is a problem? The philosopher David Chalmers has called this the ‘hard problem of consciousness’ (Chalmers, 1995). To be a good physicist you do not really need to think much about the nature of consciousness; just do the science. But good doctors should pay some attention to it, or they may find themselves tripped up by relying exclusively on biomedical models that do not fit the needs of real people.
Artificial Consciousness Is Morally Irrelevant
Published in AJOB Neuroscience, 2023
There are formidable barriers to demonstrating machine consciousness. First, we are unsure whether it is possible. Cartesian dualism is unpopular, but it still has its defenders, and it implies that our consciousness is a nonphysical substance, whereas machine consciousness implies that consciousness is purely a result of physical processes. Moreover, several philosophical arguments claim that true machine consciousness is not possible. There are many aspects to human consciousness, but three crucial components are understanding, intentionality and subjective experience. John Searle’s (2008) Chinese Room thought experiment argued that machines merely manipulate symbols, lacking any understanding of what they are doing. They also lack intentionality: the property mental states have of being about things. Further, the “hard” problem of consciousness, popularized by David Chalmers (2010), contends that physical processes cannot give rise to subjective experience. For example, Frank Jackson’s (1982) knowledge argument holds that subjective experience is something over and above the physical facts. These arguments are still widely debated.
From Quantum Physics to Quantum Hypnosis: A Quantum Mind Perspective
Published in International Journal of Clinical and Experimental Hypnosis, 2020
Consciousness is defined as a “quality of mind that is generally associated with subjective experience, self-awareness, feeling, cognition, free will and perception of relationships between us and our environment” (Edelman, 2005, p. 8). In consciousness research, there are “easy” and “hard” problems (Chalmers, 1996). The easy problems concern the main functions of consciousness, such as attention, awareness, perception, feeling, and so on. Neural studies (both electrophysiological and neuroimaging) have provided neural correlates of discrete components of conscious experience (Edelman, 2005). The hard problem of consciousness is to explain consciousness itself, in its ontological sense, in terms of neural correlates (Chalmers, 1996). In a nutshell, the hard problem is explaining how the biological brain generates the subjective, inner world of experience; on this view, consciousness cannot be explained purely in terms of its neural correlates (Chalmers, 1996).
Ernst Brücke and Sigmund Freud: Physiological roots of psychoanalysis
Published in Journal of the History of the Neurosciences, 2022
The hard problem of consciousness is a problem in the philosophy of mind. It refers to the difficulty of explaining subjective experience, which remains even after the easy problems, such as the focus of attention, categorization, and so on, have been addressed. Although the easy problems are explainable by neurological or computational methods, the hard problem appears immune to being grasped by the sum total of the easy problems. Therefore, even if we explain every easy problem with absolute certainty, we still cannot explain the hard problem (Chalmers 1995, 1–3).