How to Live Forever

Illustration by Boyoun Kim

Part IV in a series on technological evolution. Part I was “If a Time Traveller Saw a Smartphone.” Part II was “As Technology Gets Better, Will Society Get Worse?” Part III was “The Problem With Easy Technology.”

Could technology help to make our minds last forever? Consider the following parable, about a very wealthy man I’ll call Nicolas Flamel.

As he grew older, Flamel became fixated on the idea that he didn’t want to die. After considering the problem for a long time, he figured that what he needed to do was move the contents of his mind into a receptacle more stable than a human head. Flamel was an engineer who made his fortune in networks, and he felt confident that what we think of as our brains—and as ourselves—was really nothing more than a combination of electrical pathways. Surely these could be copied and stored somewhere safe. The task would be daunting but not impossible: there are eighty-five billion neurons in the average brain, and mapping them seemed to be a problem not unlike mapping the Internet. Flamel liked to tell his friends, “One day, you’ll start reading e-mails from me, and wonder where I went.”

Flamel dedicated his fortune to the brain-uploading project, and over the years came to realize that he’d be able to do what he wanted—with one rather important catch. Transferring the information contained in his physical brain would require the brain’s destruction. But, at the age of eighty-eight, after testing his technology on rats, he eventually decided to go forward. He would submit to his own procedure.

Flamel remained awake for his surgery, and as he lay on the hospital table his brain was picked apart, its information transferred to a computer one neural connection at a time. At first, he felt nothing, but eventually he experienced a sense of fading, as though he were falling asleep. And then something unexpected happened. The computer said to him, distinctly, “I am awake.” But Flamel observed that he was still lying on the table. And then he understood that, whatever might happen to the computer, he was about to die.

The story of Flamel is just a parable, but uploading the brain, or achieving “whole brain emulation,” has in recent years become something of a cause célèbre among certain scientists and entrepreneurs. “It’s theoretically possible to copy the brain onto a computer, and so provide a form of life after death,” Stephen Hawking said last year. Ray Kurzweil, the author of a series of books about what he calls the Singularity, has declared that we may be uploading our brains by the twenty-thirties. Currently, the best-known effort to develop brain uploads is something called the 2045 Initiative, founded by Dmitry Itskov, a Russian billionaire. His goal is to enable “the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.”

Assume, along with Hawking and Kurzweil, that it is plausible for the information in our heads to be digitized and stored somewhere else. And assume, as scientists now tend to do, that our minds are actually stored in our physical brains. (Descartes, on the other hand, thought that the mind resided in the pineal gland.) As the story of Nicolas Flamel suggests, it’s still not at all clear what uploading the brain would mean. What if what’s created, even if it has a copy of your brain, just isn’t you?

Some people don’t consider that a problem. After all, if a copy thinks it is you, perhaps that would be good enough. David Chalmers, a philosopher at the Australian National University, points out that we lose consciousness every night when we go to sleep. When we regain it, we think nothing of it. “Each waking is really like a new dawn that’s a bit like the commencement of a new person,” Chalmers has said. “That’s good enough…. And if that’s so, then reconstructive uploading will also be good enough.”

Maybe that is all that matters, particularly if you think that our sense of self is illusory. Many Buddhists take something close to this position: they regard our entire sense of self as a product of mistaking memories, thoughts, or emotions for something more than fleeting sensations. If the self has no meaning, its death has less significance; if the computer thinks it’s you, then maybe it really is. The philosopher Derek Parfit captures this idea when he says that “my death will break the more direct relations between my present experiences and future experiences, but it will not break various other relations. This is all there is to the fact that there will be no one living who will be me.”

I suspect, however, that most people seeking immortality rather strongly believe that they have a self, which is why they are willing to spend so much money to keep it alive. They wouldn’t be satisfied knowing that a copy of their brain lives on without them, like a clone. This is the self-preserving, or selfish, version of everlasting life, in which we seek to be absolutely sure that immortality preserves a sense of ourselves, operating from a particular point of view.

The fact that we cannot agree on whether our sense of self would survive copying is a reminder that our general understanding of consciousness and self-awareness is incredibly weak and limited. Scientists can’t define it, and philosophers struggle, too. Giulio Tononi, a theorist based at the University of Wisconsin, defines consciousness simply as “what fades when we fall into dreamless sleep.” In recent years, he and other scientists, like Christof Koch, at Caltech, have made progress in understanding when consciousness arises, namely from massive complexity and linkages between different parts of the brain. “To be conscious,” Koch has written, “you need to be a single, integrated entity with a large repertoire of highly differentiated states.” That is pretty abstract. And it still gives us little to no sense of what it would mean to transfer ourselves to some other vessel.

With just an uploaded brain and no body, would you even be conscious in a meaningful sense? Not according to Alva Noë, the author of a book called “Out of Our Heads: Why You Are Not Your Brain.” Noë argues that our sense of self does not arise simply from having a brain. It requires having a body and living in a world. “Meaningful thought arises only for the whole animal dynamically engaged with its environment,” he writes. What we call consciousness, according to Noë, is actually “an achievement of the whole animal in its environmental context.” By this measure, serious, conscious immortality would require not just an electronic brain but a fancy robot body to go with it, one with enough nerves to be capable of sensing what’s happening around it.

Personally, I tend to wonder if our powers of duplication have distorted our thinking in this area. We are capable of making copies of things that our ancestors might have thought of as ineffable, like Bach’s cantatas or images of the moment of birth. Perhaps this ability is what has given us the idea that we can copy other things that seem ethereal—like our minds. But, of course, achieving immortality will surely be much harder than backing up your hard drive.

Perhaps a better approach for future Nicolas Flamels—or Ray Kurzweils or Dmitry Itskovs—is not copying our brains but, rather, trying to migrate the self to a new physical host. Immortality may not really be about copying ourselves but about creating a process in which, like a hermit crab seeking a new shell, we slowly leave our current, biological homes behind and move somewhere more durable, a point made by Steven Novella, a clinical neurologist and an assistant professor at Yale.

How might this work? In the past two decades, scientists have gained a better understanding of neuroplasticity, or the idea that the brain is continually rewiring itself. Stroke victims, for example, sometimes recover lost functions after their brains reallocate control of certain actions from a damaged area. The idea would be to encourage the brain’s activities to slowly begin migrating to a massively interconnected electronic brain. Over time, if things went well, our intelligence and identity might be coaxed into leaving behind the old brain and taking refuge in a more durable unit (which would be attached to the robot body mentioned earlier).

But it won’t necessarily work. After all, the real Nicolas Flamel was a French bookseller in the fourteenth century who practiced alchemy and was widely believed to have discovered the elixir of life and the philosopher’s stone. He died in 1418 and was buried in Paris.
