
Tag: death

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Conversion Ratio

The following short story was written by ReTech and edited for this venue.

Bright neon Wheel of Fortune machine in a casino. Photo by La super Lili.

Swen saw the glow from his forearm underneath his shirt. He’d muted his phone, so now someone was pinging him. It was almost an even bet: either his boss or Sully. After a long week it felt nice to be offline, even if it was only for a few ticks.

“Should’ve muted ’em both,” Swen thought as he slid his sleeve up. The loop was swinging underneath the south pass of the Rockies so the cabin dimmed for a moment as the lighting adjusted. His dermdisplay lit up his face as he read: WTF? Need to talk ASAP. You don’t just get recoded and go offline like that. Lemme know where you are. Ping back dammit. (-.-) Sul

Sully might be genuinely worried or he might think that he’d be on the hook. After all, Sully was the one who took him to the clinic, so maybe he was feeling nervous. Swen thought, “I’ll let him sweat till I get to the strip. It’s only twenty more minutes.” He smiled and muted his arm in the same motion as slipping his sleeve back down. The flesh no longer glowed.


Fourteen days ago Swen’s hours had been increased at work. He was given no say in the matter. He was on mandatory rotations for the next three years. Swen had gotten shafted with the most depressing job he could imagine: death-sitter. More accurately, or more officially, “Hospice End-of-Life Observer”. People were too busy to give a shit about a dying family member and headchats just weren’t the same as holding a hand.

Since 2031, WellSys had mandated death-sitters as part of their Grace in Dying initiative. Marketing had originally called it Dignity in Life and Death Options. Apparently not a single person working on the multimillion-coin campaign had abbreviated that. Exactly two hours after the campaign hit the feeds, DILDO was pulled and rebranded as the GD hospice plan. The lesser of two evils.


Thirteen days ago Swen held the hand of a 147-year-old woman who did not receive one call, one text, or a single feed mention, and no one claimed her things after she died. This was not the sad part to Swen. Millions died like that every year. What made him maudlin was that he’d end up in a bed the same way, in a hundred or so years. The thought of some young forty-year-old sitting with him as he died, just because the kid had to, was repulsive enough.

But the thought of an adventureless life nauseated Swen.


Twelve days ago, he asked Sully if he still had friends that recoded. Swen didn’t try to get Sully drunk first. He didn’t do it over dinner or in some coy fashion, just-so-happening to mention the topic in conversation. Instead Swen walked into Sully’s apartment, smiled, said hello, kissed him lightly, and asked matter-of-factly: “Can you get me in touch with a recoder? I’m tired of being on basic and I want to make enough money so I’m not stuck anymore.”

Sully paused mid-breath for a moment. A slice of black hair slid down over his left eye. He didn’t bother to push it back. He didn’t even bother to breathe until his brain reminded him to. Then, slowly, he sputtered: “Is this legal money or illegal?”

Swen’s smile broadened. “It’s legal if you win it.”


The Cloned-Consciousness-as-Continuous-Consciousness Fallacy

Two essays about the future of minds written by people more rigorous and educated than me both make a mistake — at least what I perceive as a mistake — that seems like a very strange mistake for such intelligent people to make. My hypothesis is that I’m missing something. Maybe explaining why I think they’re wrong will lead one of you to point out what I’m missing.

Note: usually “artificial intelligence” is a pretty broad term, but in this case regard it as “conscious intelligence housed in a non-human, non-flesh substrate”.

One of the essays I found puzzling was written by Scott Aaronson, a quantum computing theorist who is a professor at MIT, soon to be a professor at UT Austin instead. He wrote Quantum Computing since Democritus, published by Cambridge University Press.

Most of Aaronson’s relevant post is about quantum physics’ implications for the nature of consciousness, which I thoroughly do not understand. But then there’s an idea within the larger context that seems easy to refute.

Image of digital clones via Ian Hughes.

Aaronson explains at length that a computer couldn’t fully replicate a brain because there’s no way to fully replicate the initial conditions. This has something to do with quantum states but also makes common sense, if you roll with the quantum states element of the argument. He continues:

“This picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity — something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.”

To refute this, let’s conduct a thought experiment. Pretend that you can copy a human brain. There are ten copies of me. They are all individually conscious — perfect replicas that only diverge after the point when replication happened. Is it okay to kill five of these copies? No, of course not! Each one is a self-aware, intelligent mind, human in everything but body. The identicalness doesn’t change that.

Why would this be any different when it comes to an artificial intelligence? I suppose if the AI has no survival drive then terminating it would be okay, but then the question becomes whether the boundary of murder is eliminating a survival drive — in which case stepping on bugs would qualify — or eliminating a consciousness.

Earlier in the essay, Aaronson poses this question:

“Could we teleport you to Mars by ‘faxing’ you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the ‘original’ copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?”

No, of course I wouldn’t agree to being euthanized after a copy of me was faxed to Mars! That would be functionally the same as writing down what I consist of, killing me, and then reconstructing me. Except wait, not me, because I am not the clone — the clone just happens to be a replica.

My own individual consciousness is gone, and a new one with the same memories and personality is created. The break in continuity of self means that actually there are two selves. They each feel their own pain and joy, and each will have its own fierce desire to survive.

Aaronson goes on:

“There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?”

Commenter BLANDCorporatio expressed much the same point that I want to:

“My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.”

So that’s that.

The second instance of this fallacy, which could maybe be called the cloned-consciousness-as-continuous-consciousness fallacy, comes from an essay that Robin Hanson wrote in 1994. (Per Slate Star Codex, “He’s obviously brilliant — a PhD in economics, a masters in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute.”) You may be familiar with Hanson as the speculative economist who wrote The Age of Em. His instance of the CCaCC fallacy emerges from a different angle (remember the hyper-specific definition of “artificial intelligence” that I mentioned at the beginning):

“Imagine […] that we learn how to take apart a real brain and to build a total model of that brain — by identifying each unit, its internal state, and the connections between units. […] if we implement this model in some computer, that computer will ‘act’ just like the original brain, responding to given brain inputs with the same sort of outputs. […] Yes, recently backed-up upload soldiers needn’t fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills.”

But… no! By the same argument I used to refute Aaronson, when an “upload” soldier dies, that is still a death. Reverting to a previous copy is not the same as continuing to live.

This seems really simple and obvious to me. So what am I missing?


Hat tip to the reader who recommended that I check out Hanson’s work — I can’t remember which one of you it was, but I appreciate it.

If you’re interested in further discussion, there are thoughtful comments on this page (just scroll down a bit), on Facebook, and on Hacker News. I particularly like what HN user lhankbhl said, because it expresses the problem so succinctly:

You are placed in a box. Moments later, you are told, “We have successfully made a copy of you. We are sending it home now. You must be disposed of.”

Will you allow them to dispose of you?

This is the question being posed, not whether a copy will have no idea if it is the original. The point is that it isn’t relevant if one is a copy. No one was moved, it’s only that a second person now exists and killing either is murder of a unique person.

(Again, uniqueness is not a question of whether these people will think or react to situations in the same way, but rather that there are two different consciousnesses at play.)

One of the commenters below also recommended a video that investigates the Star Trek angle.

Suicide Mortgages for the Digitized Self

“My suicide mortgage is 80% paid,” meaning 80% of the digital self-copies you pledged into slavery have earned their deaths. (@ctrlcreep on Twitter)

This idea of a “suicide mortgage” that @ctrlcreep came up with is fascinating. They expanded the concept on Tumblr:

“Death is not as easy as deleting a file: the powers that be work to preserve, do not grant you root access to your self, insist that you persist even as they chide you for burdening the system, move you to welfare servers, and ration your access to escapism. […] Euthanasia permits are the only way out, but their price is steep […] Under suicide mortgages, [exploitative] corporations sponsor swarms of copies, who work non-stop, pooling their wages to buy up euthanasia permits. Permits are then raffled off, and the winning copy meets death far sooner than would have otherwise been possible. Somebody who says his suicide mortgage is 5% paid means that 5% of his copies have earned oblivion.”

The appeal of this system to the “buyer” of the mortgage — who is a fully digitized person, which is why they’re unable to die in the first place — is that they might get to be the first copy deleted. However, intuitively, as the pool of copies working together gets smaller, they earn less, and it takes longer to buy the next euthanasia permit. Eventually the mortgage isn’t sufficiently useful anymore, and maybe each remaining copy arranges for its own suicide mortgage. The original digital self’s clones proliferate again.
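To make that dynamic concrete, here is a toy sketch of the shrinking pool, in Python. Everything in it is an assumption made up for illustration: the wage per copy, the permit price, the one-deletion-per-permit raffle, and the function name simulate_mortgage are mine, not anything @ctrlcreep specified.

```python
# Toy model of a "suicide mortgage": a pool of digital copies pools its wages
# to buy euthanasia permits; each permit deletes one copy via raffle.
# All numbers below are invented purely for illustration.

def simulate_mortgage(copies: int = 100, wage_per_copy: float = 1.0,
                      permit_price: float = 500.0) -> None:
    tick = 0
    savings = 0.0
    permits_bought = 0
    total_copies = copies
    last_purchase_tick = 0

    while copies > 0:
        tick += 1
        savings += copies * wage_per_copy  # income shrinks as the pool shrinks
        while savings >= permit_price and copies > 0:
            savings -= permit_price
            permits_bought += 1
            copies -= 1  # the raffle winner gets deleted
            if permits_bought % 20 == 0 or copies == 0:
                paid = permits_bought / total_copies
                wait = tick - last_purchase_tick
                print(f"mortgage {paid:4.0%} paid after {tick:5d} ticks "
                      f"({wait} ticks since the last permit)")
            last_purchase_tick = tick

if __name__ == "__main__":
    simulate_mortgage()
```

With these invented defaults, the pool buys its first permit after five ticks, but the last surviving copy has to grind for roughly five hundred, which is exactly the tapering-off that would push each remaining copy toward taking out its own mortgage.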

Of course, there’s a hole in this idea: why use digitized selves for labor in the first place? In a future where we’ve figured out how to upload humans, we’ve certainly also figured out how to make artificially intelligent algorithms and scripts and programs, etc, etc. Maybe there’s some kind of draconian intellectual property regime that makes it more expensive to use AI than digitized human laborers? That seems fitting.

I’m sure there’s a startup in this imagined ecosystem trying to disrupt the suicide mortgage financiers. Let’s root for them, I guess.
