Exploring the Illusion of Consciousness Transfer: A Critical Analysis

[Image: The complexity of consciousness and its biological roots.]

The Hard Problem of Consciousness

As the popular saying goes, “everyone wants to be on a postage stamp, but nobody wants to die.” Recently, discussions around mind uploading—transferring consciousness to machines—have gained traction among those seeking eternal life. However, similar to previous immortality concepts, this idea is fraught with significant issues.

The essential point to grasp is that consciousness, the only kind of “mind” that could feasibly be transferred for true immortality, is fundamentally a bodily function. It’s an activity performed by our bodies. While we still lack complete understanding of this process, it is clear that our brains generate what neuroscientists term percepts (and philosophers refer to as qualia). These form the essence of our conscious experiences, encompassing sensations like sound, color, smell, taste, pain, and bodily awareness, all of which our minds integrate to create our perception of reality—whether during wakefulness, dreaming, or hallucinations.

This raises the question of what it would even mean to transfer a bodily function to a machine. How, for instance, could one capture breathing, digestion, or circulation and “upload” that function into a mechanical device in a way that keeps it yours? The reality is that this is infeasible. Even a perfect atom-for-atom replica of these biological systems would take time to construct, and by the time the copy began operating, the originals would already have changed; the two could never be identical.

The Misconception of the Computer Mind

Some argue that consciousness differs from other bodily functions, positing that the brain operates like an organic computer. According to this view, consciousness is akin to a program running on the brain’s hardware, with personal identity merely a collection of memories accessed by this program. Thus, if one could replicate the brain's operating program and transfer memories to a machine, one’s mind could live on in a robotic form, similar to characters in "Westworld."

At first glance, this perspective appears valid. Our continuous identity is indeed tied to our memories; losing them would result in a loss of self, regardless of the brain's ability to generate real-time percepts. You might wake up able to sense your environment, but without memories, you would feel like a new individual.

However, the comparison of the brain to a computer hinges on how we define “computer” and “program.”

Computational devices have a long history, with the abacus being a well-known example. While useful for calculations, such devices were not programmable; a human operator had to carry out each step. The first programmable machines performed physical tasks such as playing music or weaving, while the Analytical Engine, conceived in the early 1800s, was the first design to combine computation with programmable physical operation. Alan Turing later described a universal machine capable of carrying out any calculation that a human following explicit rules could perform, laying the theoretical foundation for contemporary computers.
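
To make the idea concrete, here is a toy sketch of a Turing-style machine in Python: a finite rule table driving a read/write head along a tape. The particular table below, which merely inverts a binary string, is my own illustrative example rather than anything from Turing's paper.

```python
# A toy Turing-style machine: a finite rule table driving a read/write
# head over a tape. The rules below simply invert a binary string and
# then halt; the state labels and table are illustrative only.

def run_machine(tape, rules, state="scan", pos=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"  # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: (state, symbol) -> (symbol to write, head move, next state)
invert_rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),  # ran off the input: stop
}

print(run_machine("10110", invert_rules))  # -> 01001
```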

Both programmable and non-programmable devices yield physical and symbolic outputs. The final state of a device represents concepts that can be interpreted by humans. For instance, the positioning of beads on an abacus holds no meaning for those unfamiliar with its use. Similarly, patterns displayed on a screen or printed page become significant only to those who understand the symbols. The machine itself does not perform calculations; it merely executes motions. The interpretation of its output requires human involvement, as only humans can ascribe meaning to the results.
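
A small sketch makes the point concrete: the same four bytes in memory can be read as an integer, a floating-point number, or text. The machine holds the bits either way; the “meaning” belongs to whoever chooses the reading.

```python
import struct

# Four bytes, three readings. The machine stores the same bits in every
# case; which interpretation is "correct" is entirely up to the reader.
raw = b"\x48\x45\x41\x54"

print(struct.unpack("<I", raw)[0])  # read as a little-endian unsigned integer
print(struct.unpack("<f", raw)[0])  # read as a 32-bit floating-point number
print(raw.decode("ascii"))          # read as text: "HEAT"
```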

While it is possible to integrate computing devices with other machines to perform physical tasks—like robots painting cars—this form of computation does not concern mathematical calculations. Instead, it relates to physical iterations based on a set of rules (the program). In his 2002 work, A New Kind of Science, Stephen Wolfram proposed that the universe itself functions as a computer, with every change of state representing a computation governed by physical laws.
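
Wolfram's signature illustration of this rule-driven picture is the elementary cellular automaton, where each cell's next state is fixed by a small rule table applied to its neighborhood. Below is a minimal sketch of his Rule 30; the system is not “calculating” anything, one configuration simply succeeds another.

```python
# Wolfram-style elementary cellular automaton: each cell's next state is
# fixed by a rule table applied to its three-cell neighborhood. Rule 30
# is one of the examples from A New Kind of Science.

RULE = 30  # the rule number encodes the output for each of 8 neighborhoods

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```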

Moreover, these laws do not exist separately from the physical systems; they are abstractions derived from observation. Newton’s calculations regarding gravity were based on observed relationships, but he could not explain the underlying reasons. It took Einstein to elucidate gravity as a distortion of spacetime, demonstrating how those values arise from universal physics. The same principle applies to computers today.
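
Newton's inverse-square law makes the point plain. For reference, in its usual form:

$$ F = G\,\frac{m_1 m_2}{r^2} $$

The quantities on the right-hand side are measured, not explained; the equation captures the regularity without accounting for it.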

[Image: The historical development of computing devices.]

Two centuries ago, the Analytical Engine was clearly a physical device with no independent software. However, contemporary programming languages and interfaces may lead us to mistakenly believe that software exists apart from hardware. In reality, the laws of physics constitute the only programming, intertwined with the hardware itself. When programming computers, we merely adjust their physical states to achieve desired outcomes, whether visual displays or mechanical movements.
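
One concrete illustration, assuming the standard CPython interpreter: even a “high-level” program bottoms out in plain machine state. A Python function's compiled body is literally a bytes object, stored in memory like any other data.

```python
# In CPython, a function's "software" is a bytes object: physical state
# in memory, not an immaterial layer hovering above the hardware.

def add(a, b):
    return a + b

print(type(add.__code__.co_code))   # <class 'bytes'>
print(add.__code__.co_code.hex())   # the raw bytecode as hexadecimal digits
```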

It's easy to assume that computers process information the way a food-processing plant processes corn, which leads to the misconception that information exists independently inside machines. This perception extends to our brains as well, casting them as information processors containing tangible data. Yet, like physical laws, information is an abstraction: an immensely useful one, but still a construct. The universe comprises space, time, energy, and matter; information cannot exist as a standalone entity.

Misinterpretations of science sometimes treat information as a tangible substance, suggesting that the “contents” of our minds can be transferred from one information processor to another. This notion is fundamentally flawed.

Consciousness is not merely information; it is a biological phenomenon. For a machine to replicate consciousness, it would need to physically emulate the brain's functions, something general-purpose computers are not designed to accomplish. While we understand the basic mechanics of bodily movements, we lack comprehension of how the brain generates conscious percepts, making it impossible to create a conscious machine.

The Turing Fallacy

Another common misconception in computing circles stems from Turing's universal-machine result, which leads some to conclude, erroneously, that Turing demonstrated that a general-purpose computer could replicate any function of the human brain. This belief fosters the idea that, if adequately programmed, computers could become conscious, thereby making mind uploading seem plausible.

As summarized by the Stanford Encyclopedia of Philosophy, a myth has arisen regarding Turing’s 1936 paper, which purportedly addressed the limits of mechanistic computation. This misunderstanding has permeated the fields of philosophy, psychology, computer science, and artificial intelligence, with detrimental effects.

Consciousness is not a calculated end-state but a physically computed one, in Wolfram's sense. To produce real percepts, a computing device would have to operate in conjunction with other physical machinery. Just as a computer cannot by itself perform digestion or circulation, it cannot by itself produce conscious awareness. This is not merely a matter of technology needing to advance; it is a fundamental impossibility.

Significant strides have been made in understanding the sequential processes in the brain. For example, when skin contacts a hot surface, nerve cells initiate a biochemical reaction that sends signals to the spinal column, reaching the brain and triggering muscle responses to withdraw from the heat. This is a simplified overview of the complex interactions involved, but it illustrates the primary function of nervous systems.
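
As a toy model only, that chain can be sketched as a pipeline of stages; every name, value, and return type below is illustrative rather than physiological.

```python
# A grossly simplified toy model of the withdrawal reflex as a chain of
# stages. Real nociception involves populations of cells and continuous
# signalling; this sketch only mirrors the sequence described above.

HEAT_THRESHOLD_C = 43.0  # roughly where human heat nociceptors begin to fire

def skin_receptor(temp_c):
    """Transduce skin temperature into a firing / not-firing signal."""
    return temp_c >= HEAT_THRESHOLD_C

def spinal_relay(firing):
    """Relay the signal; the withdrawal loop short-circuits at the spine."""
    return "withdraw" if firing else "rest"

def muscle(command):
    return "arm pulls back" if command == "withdraw" else "arm stays put"

for temp in (25.0, 60.0):
    print(f"{temp:.0f} C -> {muscle(spinal_relay(skin_receptor(temp)))}")
```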

At some point, the brains of advanced organisms evolved a qualitatively distinct capability: the production of percepts. This process cannot be achieved through simple sequential actions. Our brains developed additional mechanisms necessary for creating sensory experiences, and research continues to uncover the specifics of this machinery. However, studying the functioning brain in living organisms poses significant challenges and often requires indirect methods.

One promising theory in consciousness research is CEMI, or conscious electromagnetic information field theory, which posits that evolution utilized the “junk” electromagnetic fields generated by neural activity to facilitate the production of conscious experiences. While CEMI is still in its early stages, it offers potential insights into the mystery of consciousness. Simply following the daisy-chain of neural processes may not yield fruitful results.

The Illusion of Replicants

Another concept for achieving eternal consciousness involves constructing synthetic components that mimic neuronal functions and assembling them appropriately. Although the challenges of such endeavors are staggering, it becomes more conceivable if we view consciousness as merely “information” and neurons as simple switches in a sequence. This perspective allows us to overlook the brain's intricate physical structure, imagining that conscious components could be assembled without considering their arrangement or processing speed.
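
The “simple switch” picture has a canonical form: the McCulloch-Pitts threshold unit of 1943, which reduces everything a neuron does to a weighted sum and a cutoff. The sketch below shows how much of the brain's physical reality that reduction discards.

```python
# The "neuron as switch" abstraction in its classic form: a McCulloch-
# Pitts threshold unit. Inputs are weighted, summed, and compared with a
# cutoff; timing, chemistry, geometry, and fields all drop out.

def threshold_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Wired as a two-input AND gate, one of the 1943 paper's examples:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron((a, b), (1, 1), 2))
```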

However, such oversimplifications are often misleading. If CEMI proves accurate, the notion that physical structure is irrelevant to consciousness will be debunked. The idea that the brain's physical architecture is inconsequential so long as the flow of information is preserved is known as “opportunistic minimalism”, a stance reminiscent of the old “spherical cow” joke about theoretical physicists. In 2009, Ned Block highlighted the conflict between the biological and minimalist perspectives, observing that while we may not yet know how to build a thinking machine, the obstacles are matters of engineering rather than of consciousness itself. The biological perspective holds that only machines with the appropriate biological makeup can be conscious, placing it squarely at odds with the more abstract, nonbiological theories.

The biological perspective is further supported by Daniel Dennett, who notes that the recent advancements in neuroscience underscore the importance of specific connectivity, neuromodulators, architecture, and rhythmic patterns in understanding how the mind operates. Many of the optimistically simplistic views of minimalists have been discredited, as omitting critical components makes it impossible to explain mental functions. Although Dennett is reluctant to draw the obvious conclusion, evidence suggests that the biological makeup of the brain is crucial to consciousness—at least the type we experience.

Additionally, another obstacle to mechanical replication arises from the body map. Each brain contains a neurological representation of the body it inhabits. Without a corresponding body map, the brain cannot function effectively. Therefore, even if one could create a synthetic brain that replicates its biological counterpart, it would not operate correctly within an artificial body. Past instances of body-map discrepancies in humans reveal that such a condition would lead to a profoundly distressing conscious experience, with constant alarms signaling severe disarray.

Considering that the human brain contains roughly as many neurons as there are stars in our galaxy (on the order of 100 billion), and that the synaptic connections among those neurons number in the hundreds of trillions, the ambition of brain replication appears nearly unattainable, even if we could devise viable synthetic neurons, which remains uncertain.
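
The rough arithmetic behind that claim, using commonly cited figures, runs as follows:

$$ \sim 8.6\times10^{10}\ \text{neurons} \;\times\; 10^{3} \text{ to } 10^{4}\ \text{synapses per neuron} \;\approx\; 10^{14} \text{ to } 10^{15}\ \text{connections} $$

which is several orders of magnitude more than the roughly $10^{11}$ stars in the Milky Way.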

This is not to suggest that the secrets of mental immortality will never be unraveled. Personally, I remain skeptical, but the future is unpredictable. What is certain, however, is that mind uploading, as currently conceived, cannot work: it rests on a mistaken picture of what minds are.