
Tag: morality

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

We Appear to Be Globally Heated

The Computing Generation and all subsequent generations will have to cope with climate change (née global warming). That’s my own demographic cohort and probably yours as well: those of us who grew up with laptop keyboards and seemingly instantaneous information transfers. It’s important to remember that global warming is only bad insofar as it affects human beings.

Nature doesn’t give a shit, inherently. Global warming is fine as far as Nature is concerned — the key point is that Nature isn’t concerned at all. Wild flora and fauna constitute a vast assortment of interlocking systems, not a single entity with agency. Events like mass extinctions are only “bad” because human beings want to exploit biodiversity. Moral rectitude or lack thereof is in the eye of the beholder.

I find this revelation both comforting and terrifying. On the one hand, I needn’t feel guilty about hurting Gaia. She doesn’t care. On the other hand, will I live long enough for none of this to matter?

The Cloned-Consciousness-as-Continuous-Consciousness Fallacy

Two essays about the future of minds, written by people more rigorous and educated than I am, both make what I perceive as a mistake — and a very strange mistake for such intelligent people to make. My hypothesis is that I’m missing something. Maybe explaining why I think they’re wrong will lead one of you to point out what I’m missing.

Note: usually “artificial intelligence” is a pretty broad term, but in this case regard it as “conscious intelligence housed in a non-human, non-flesh substrate”.

One of the essays I found puzzling was written by Scott Aaronson, a quantum computing theorist who is a professor at MIT, soon to be a professor at UT Austin instead. He wrote Quantum Computing since Democritus, published by Cambridge University Press.

Most of Aaronson’s relevant post is about quantum physics’ implications for the nature of consciousness, which I thoroughly do not understand. But within that larger context there’s an idea that seems easy to refute.

Image of digital clones via Ian Hughes.


Aaronson explains at length that a computer couldn’t fully replicate a brain because there’s no way to fully replicate the initial conditions. This has something to do with quantum states but also makes common sense, if you roll with the quantum states element of the argument. He continues:

“This picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity — something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.”

To refute this, let’s conduct a thought experiment. Pretend that you can copy a human brain. There are ten copies of me. They are all individually conscious — perfect replicas that only diverge after the point when replication happened. Is it okay to kill five of these copies? No, of course not! Each one is a self-aware, intelligent mind, human in everything but body. The identicalness doesn’t change that.

Why would this be any different when it comes to an artificial intelligence? I suppose if the AI has no survival drive then terminating it would be okay, but then the question becomes whether the boundary of murder is eliminating a survival drive — in which case stepping on bugs would qualify — or eliminating a consciousness.

Earlier in the essay, Aaronson poses this question:

“Could we teleport you to Mars by ‘faxing’ you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the ‘original’ copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?”

No, of course I wouldn’t agree to being euthanized after a copy of me was faxed to Mars! That would be functionally the same as writing down what I consist of, killing me, and then reconstructing me. Except wait, not me, because I am not the clone — the clone just happens to be a replica.

My own individual consciousness is gone, and a new one with the same memories and personality is created. The break in continuity of self means that there are actually two selves. Each feels its own pain and joy, and each will have its own fierce desire to survive.
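A loose programming analogy (mine, not Aaronson’s) for the copy-versus-continuity distinction: in Python, a deep copy is equal to the original by content, yet it is a distinct object, and the two diverge independently from the moment of copying. Equality of contents never makes them the same thing.

```python
import copy

# A crude stand-in for a mind's state at the moment of replication.
original = {"memories": ["first day of school"], "mood": "calm"}
clone = copy.deepcopy(original)

# Immediately after copying, the two are indistinguishable by content...
assert clone == original
# ...but they are not the same object: two distinct loci of state.
assert clone is not original

# From the moment of divergence, each accumulates its own history.
clone["memories"].append("waking up on Mars")
assert original["memories"] == ["first day of school"]
```

Destroying `original` after the copy would not be a no-op just because `clone == original` held at the instant of replication; one of the two distinct states would be gone.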

Aaronson goes on:

“There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?”

Commenter BLANDCorporatio expressed much the same point that I want to:

“My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.”

So that’s that.

The second instance of this fallacy, which could maybe be called the cloned-consciousness-as-continuous-consciousness fallacy, comes from an essay that Robin Hanson wrote in 1994. (Per Slate Star Codex, “He’s obviously brilliant — a PhD in economics, a masters in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute.”) You may be familiar with Hanson as the speculative economist who wrote The Age of Em. His instance of the CCaCC fallacy emerges from a different angle (remember the hyper-specific definition of “artificial intelligence” that I mentioned in the beginning):

“Imagine […] that we learn how to take apart a real brain and to build a total model of that brain — by identifying each unit, its internal state, and the connections between units. […] if we implement this model in some computer, that computer will ‘act’ just like the original brain, responding to given brain inputs with the same sort of outputs. […] Yes, recently backed-up upload soldiers needn’t fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills.”

But… no! By the same argument I used to refute Aaronson, when an “upload” soldier dies, that is still a death. Reverting to a previous copy is not the same as continuing to live.

This seems really simple and obvious to me. So what am I missing?


Hat tip to the reader who recommended that I check out Hanson’s work — I can’t remember which one of you it was, but I appreciate it.

If you’re interested in further discussion, there are thoughtful comments on this page (just scroll down a bit), on Facebook, and on Hacker News. I particularly like what HN user lhankbhl said, because it expresses the problem so succinctly:

You are placed in a box. Moments later, you are told, “We have successfully made a copy of you. We are sending it home now. You must be disposed of.”

Will you allow them to dispose of you?

This is the question being posed, not whether a copy will have no idea if it is the original. The point is that it isn’t relevant if one is a copy. No one was moved, it’s only that a second person now exists and killing either is murder of a unique person.

(Again, uniqueness is not a question of whether these people will think or react to situations in the same way, but rather that there are two different consciousnesses at play.)

One of the commenters below recommended a video that investigates the Star Trek angle.

Pornbots Lacking Self & Gender

Warnings: 1) Could be NSFW if you work somewhere stodgy. 2) Discusses cissexism and sexual assault.

Image of a gynoid via Mona Eberhardt.


Wikipedia says of the gynoid, “A fembot is a humanoid robot that is gendered feminine. It is also known as a gynoid, though this term is more recent.” (Hold on, I’m going somewhere with this.) The article elaborates:

“A gynoid is anything that resembles or pertains to the female human form. Though the term android refers to robotic humanoids regardless of apparent gender, the Greek prefix ‘andr-’ refers to man in the masculine gendered sense. Because of this prefix, many read Android as referring to male-styled robots.” [Emphasis in original.]

I disagree with the Wikipedia editors’ conflation of “female” and “has tits and a vagina” but I must leave the depth of that argument for another day. Suffice it to say that a gynoid is an android — a robot designed to mimic Homo sapiens — that has tits and a vagina. Its overall appearance matches the shapes we code as “womanly” (or, disturbingly, “girlish”).

But a gynoid with no self-awareness, no sentience, cannot have a gender. Gender is an inner experience that may be communicated to the world, not something that outside observers can impose on a body, however much they might try.

Screenshot of a gynoid by Sophrosyne Stenvaag.


Is it wrong to fetishize gynoids and treat them as fucktoys? If the gynoid has consciousness then yes, it’s just as immoral as any other sexual abuse. But if the robot is simply a well-engineered physical manifestation of porn? Can you rape a souped-up Fleshlight?

I think not. There’s no self in that container to traumatize. So it wouldn’t be wrong because of any harm done to the device — a gynoid with no mind or soul is a gadget like your phone or your Roomba — but could be wrong because of the effect on humans who also have bodies coded as feminine.

If someone gets into the habit of treating a gynoid as a sexual object, will they pattern-match and treat people they perceive as women with the same violence and disrespect? It is by no means conclusive that regular pornography has the common-sense effect of making viewers more sexually violent. There’s no consensus on whether video games encourage IRL aggression either.

I’m sure we’ll find out eventually. For better or for worse.


(I told my boyfriend that I was going to write a thinkpiece about gynoids instead of a political thinkpiece and he said, “The lady robots?!”)

© 2019 Exolymph. All rights reserved.
