Two essays about the future of minds, written by people more rigorous and educated than I am, both make a mistake (at least what I perceive as a mistake) that seems like a very strange one for such intelligent people to make. My hypothesis is that I'm missing something. Maybe explaining why I think they're wrong will lead one of you to point out what I'm missing.

Note: "artificial intelligence" is usually a pretty broad term, but in this essay take it to mean "conscious intelligence housed in a non-human, non-flesh substrate".

The first of the puzzling essays was written by Scott Aaronson, a quantum computing theorist who is a professor at MIT, soon to become a professor at UT Austin instead. He wrote Quantum Computing since Democritus, published by Cambridge University Press.

Most of Aaronson's relevant post is about quantum physics' implications for the nature of consciousness, which I thoroughly do not understand. But nested within that larger context is an idea that seems easy to refute.

Image of digital clones via Ian Hughes.

Aaronson explains at length that a computer couldn't fully replicate a brain because there's no way to fully replicate the initial conditions. The argument rests on quantum states, but it also squares with common sense if you grant that element of it. He continues:

“This picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity — something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.”

To refute this, let’s conduct a thought experiment. Pretend that you can copy a human brain. There are ten copies of me. They are all individually conscious — perfect replicas that only diverge after the point when replication happened. Is it okay to kill five of these copies? No, of course not! Each one is a self-aware, intelligent mind, human in everything but body. The identicalness doesn’t change that.

Why would this be any different when it comes to an artificial intelligence? I suppose if the AI has no survival drive then terminating it would be okay, but then the question becomes whether the boundary of murder is eliminating a survival drive — in which case stepping on bugs would qualify — or eliminating a consciousness.

Earlier in the essay, Aaronson poses this question:

“Could we teleport you to Mars by ‘faxing’ you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the ‘original’ copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?”

No, of course I wouldn't agree to being euthanized after a copy of me was faxed to Mars! That would be functionally the same as writing down what I consist of, killing me, and then reconstructing me. Except that the reconstructed person wouldn't be me, because I am not the clone; the clone merely happens to be a replica of me.

My own individual consciousness is gone, and a new one with the same memories and personality is created. The break in continuity of self means that there are actually two selves. Each feels its own pain and joy, and each has its own fierce desire to survive.

Aaronson goes on:

“There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?”

Commenter BLANDCorporatio expressed much the same point that I want to make:

“My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.”

So that’s that.

The second instance of this fallacy, which might be called the cloned-consciousness-as-continuous-consciousness (CCaCC) fallacy, comes from an essay that Robin Hanson wrote in 1994. (Per Slate Star Codex, "He's obviously brilliant — a PhD in economics, a masters in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute.") You may be familiar with Hanson as the speculative economist who wrote The Age of Em. His instance of the CCaCC fallacy emerges from a different angle (remember the hyper-specific definition of "artificial intelligence" that I mentioned in the beginning):

“Imagine […] that we learn how to take apart a real brain and to build a total model of that brain — by identifying each unit, its internal state, and the connections between units. […] if we implement this model in some computer, that computer will ‘act’ just like the original brain, responding to given brain inputs with the same sort of outputs. […] Yes, recently backed-up upload soldiers needn’t fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills.”

But… no! By the same argument I used to refute Aaronson, when an “upload” soldier dies, that is still a death. Reverting to a previous copy is not the same as continuing to live.

This seems really simple and obvious to me. So what am I missing?


Hat tip to the reader who recommended that I check out Hanson’s work — I can’t remember which one of you it was, but I appreciate it.

If you’re interested in further discussion, there are thoughtful comments on this page (just scroll down a bit), on Facebook, and on Hacker News. I particularly like what HN user lhankbhl said, because it expresses the problem so succinctly:

"You are placed in a box. Moments later, you are told, 'We have successfully made a copy of you. We are sending it home now. You must be disposed of.'

"Will you allow them to dispose of you?

"This is the question being posed, not whether a copy will have no idea if it is the original. The point is that it isn't relevant if one is a copy. No one was moved, it's only that a second person now exists and killing either is murder of a unique person.

"(Again, uniqueness is not a question of whether these people will think or react to situations in the same way, but rather that there are two different consciousnesses at play.)"

One of the commenters below recommended this video that investigates the Star Trek angle: