Tag: ethics

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

We Appear to Be Globally Heated

The Computing Generation and all subsequent generations will have to cope with climate change (née global warming). That’s my own demographic cohort and probably yours as well: those of us who grew up with laptop keyboards and seemingly instantaneous information transfers. It’s important to remember that global warming is only bad insofar as it affects human beings.

Nature doesn’t give a shit, inherently. Global warming is fine as far as Nature is concerned — the key point is that Nature isn’t concerned at all. Wild flora and fauna constitute a vast assortment of interlocking systems, not a single entity with agency. Events like mass extinctions are only “bad” because human beings want to exploit biodiversity. Moral rectitude or lack thereof is in the eye of the beholder.

I find this revelation both comforting and terrifying. On the one hand, I needn’t feel guilty about hurting Gaia. She doesn’t care. On the other hand, will I live long enough for none of this to matter?

The Cloned-Consciousness-as-Continuous-Consciousness Fallacy

Two essays about the future of minds written by people more rigorous and educated than me both make a mistake — at least what I perceive as a mistake — that seems like a very strange mistake for such intelligent people to make. My hypothesis is that I’m missing something. Maybe explaining why I think they’re wrong will lead one of you to point out what I’m missing.

Note: usually “artificial intelligence” is a pretty broad term, but in this case take it to mean “conscious intelligence housed in a non-human, non-flesh substrate”.

One of the essays I found puzzling was written by Scott Aaronson, a quantum computing theorist who is a professor at MIT, soon to be a professor at UT Austin instead. He wrote Quantum Computing since Democritus, published by Cambridge University Press.

Most of Aaronson’s relevant post is about quantum physics’ implications for the nature of consciousness, which I thoroughly do not understand. But then there’s an idea within the larger context that seems easy to refute.

Image of digital clones via Ian Hughes.

Aaronson explains at length that a computer couldn’t fully replicate a brain because there’s no way to fully replicate the initial conditions. This has something to do with quantum states but also makes common sense, if you roll with the quantum states element of the argument. He continues:

“This picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity — something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.”

To refute this, let’s conduct a thought experiment. Pretend that you can copy a human brain. There are ten copies of me. They are all individually conscious — perfect replicas that only diverge after the point when replication happened. Is it okay to kill five of these copies? No, of course not! Each one is a self-aware, intelligent mind, human in everything but body. The identicalness doesn’t change that.

Why would this be any different when it comes to an artificial intelligence? I suppose if the AI has no survival drive then terminating it would be okay, but then the question becomes whether the boundary of murder is eliminating a survival drive — in which case stepping on bugs would qualify — or eliminating a consciousness.

Earlier in the essay, Aaronson poses this question:

“Could we teleport you to Mars by ‘faxing’ you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the ‘original’ copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?”

No, of course I wouldn’t agree to being euthanized after a copy of me was faxed to Mars! That would be functionally the same as writing down what I consist of, killing me, and then reconstructing me. Except wait, not me, because I am not the clone — the clone just happens to be a replica.

My own individual consciousness is gone, and a new one with the same memories and personalities is created. The break in continuity of self means that actually there are two selves. They each feel their own pain and joy, and each will have its own fierce desire to survive.

Aaronson goes on:

“There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?”

Commenter BLANDCorporatio expressed much the same point that I want to:

“My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.”

So that’s that.

The second instance of this fallacy, which could maybe be called the cloned-consciousness-as-continuous-consciousness fallacy, comes from an essay that Robin Hanson wrote in 1994. (Per Slate Star Codex, “He’s obviously brilliant — a PhD in economics, a masters in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute.”) You may be familiar with Hanson as the speculative economist who wrote The Age of Em. His instance of the CCaCC fallacy emerges from a different angle (remember the hyper-specific definition of “artificial intelligence” that I mentioned in the beginning):

“Imagine […] that we learn how to take apart a real brain and to build a total model of that brain — by identifying each unit, its internal state, and the connections between units. […] if we implement this model in some computer, that computer will ‘act’ just like the original brain, responding to given brain inputs with the same sort of outputs. […] Yes, recently backed-up upload soldiers needn’t fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills.”

But… no! By the same argument I used to refute Aaronson, when an “upload” soldier dies, that is still a death. Reverting to a previous copy is not the same as continuing to live.

This seems really simple and obvious to me. So what am I missing?


Hat tip to the reader who recommended that I check out Hanson’s work — I can’t remember which one of you it was, but I appreciate it.

If you’re interested in further discussion, there are thoughtful comments on this page (just scroll down a bit), on Facebook, and on Hacker News. I particularly like what HN user lhankbhl said, because it expresses the problem so succinctly:

You are placed in a box. Moments later, you are told, “We have successfully made a copy of you. We are sending it home now. You must be disposed of.”

Will you allow them to dispose of you?

This is the question being posed, not whether a copy will have no idea if it is the original. The point is that it isn’t relevant if one is a copy. No one was moved, it’s only that a second person now exists and killing either is murder of a unique person.

(Again, uniqueness is not a question of whether these people will think or react to situations in the same way, but rather that there are two different consciousnesses at play.)

One of the commenters below recommended a video that investigates the Star Trek angle.

Pornbots Lacking Self & Gender

Warnings: 1) Could be NSFW if you work somewhere stodgy. 2) Discusses cissexism and sexual assault.

Image of a gynoid via Mona Eberhardt.

Wikipedia says of the gynoid, “A fembot is a humanoid robot that is gendered feminine. It is also known as a gynoid, though this term is more recent.” (Hold on, I’m going somewhere with this.) The article elaborates:

“A gynoid is anything that resembles or pertains to the female human form. Though the term android refers to robotic humanoids regardless of apparent gender, the Greek prefix ‘andr-’ refers to man in the masculine gendered sense. Because of this prefix, many read Android as referring to male-styled robots.” [Emphasis in original.]

I disagree with the Wikipedia editors’ conflation of “female” and “has tits and a vagina” but I must leave the depth of that argument for another day. Suffice it to say that a gynoid is an android — a robot designed to mimic Homo sapiens — that has tits and a vagina. Its overall appearance matches the shapes we code as “womanly” (or, disturbingly, “girlish”).

But a gynoid with no self-awareness, no sentience, cannot have a gender. Because gender is an inner experience that may be communicated to the world, not something that outside observers can impose on a body, however much they might try.

Screenshot of a gynoid by Sophrosyne Stenvaag.

Is it wrong to fetishize gynoids and treat them as fucktoys? If the gynoid has consciousness then yes, it’s just as immoral as any other sexual abuse. But if the robot is simply a well-engineered physical manifestation of porn? Can you rape a souped-up Fleshlight?

I think not. There’s no self in that container to traumatize. So it wouldn’t be wrong because of any harm done to the device — a gynoid with no mind or soul is a gadget like your phone or your Roomba — but could be wrong because of the effect on humans who also have bodies coded as feminine.

If someone gets into the habit of treating a gynoid as a sexual object, will they pattern-match and treat people they perceive as women with the same violence and disrespect? It is by no means conclusive that regular pornography has the common-sense effect of making viewers more sexually violent. There’s no consensus on whether video games encourage IRL aggression either.

I’m sure we’ll find out eventually. For better or for worse.


(I told my boyfriend that I was going to write a thinkpiece about gynoids instead of a political thinkpiece and he said, “The lady robots?!”)

Means & Ends of AI

Adam Elkus wrote an extremely long essay about some of the ethical quandaries raised by the development of artificial intelligence(s). In it he commented:

“The AI values community is beginning to take shape around the notion that the system can learn representations of values from relatively unstructured interactions with the environment. Which then opens the other can of worms of how the system can be biased to learn the ‘correct’ messages and ignore the incorrect ones.”

He is talking about unsupervised machine learning as it pertains to cultural assumptions. Furthermore, Elkus wrote:

“[A]ny kind of technically engineered system is a product of the social context that it is embedded within. Computers act in relatively complex ways to fulfill human needs and desires and are products of human knowledge and social grounding.”

I agree with this! Computers — and second-order products like software — are tools built by humans for human purposes. And yet this subject is most interesting when we consider how things might change when computers have the capacity to transcend human purposes.

Some people — Elkus perhaps included — dismiss this possibility as a pipe dream with no scientific basis. Perhaps the more salient inquiry is whether we can properly encode “human purposes” in the first place, and who gets to define “human purposes”, and whether those aims can be adjusted later. If a machine can learn from itself and its past experiences (so to speak), starting over with a clean slate becomes trickier.

I want to tie this quandary to a parallel phenomenon. In an article that I saw shared frequently this weekend, Google’s former design ethicist Tristan Harris (also billed as a product philosopher — dude has the best job titles) wrote of tech companies:

“They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. […] By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.”

Similarly, tech companies get to determine the parameters and “motivations” of artificially intelligent programs’ behavior. We mere users aren’t given the opportunity to ask, “What if the computer used different data analysis methods? What if the algorithm was optimized for something other than marketing conversion rates?” In other words: “What if ‘human purposes’ weren’t treated as synonymous with ‘business goals’?”

Realistically, this will never happen, just like the former design ethicist’s idea of an “FDA for Tech” is ludicrous. Platforms’ and users’ needs don’t align perfectly, but they align well enough to create tremendous economic value, and that’s probably as good as the system can get.

Conflict Resources & Murky Culpability

After I wrote this dispatch I read “Your Phone Was Made By Slaves” by Kevin Bales and immediately felt silly — his longer piece covers a lot of the same ground in more depth. If you find this topic interesting, it’s a good read.


Computers, cell phones, and other electronic devices contain conflict minerals. For those of you unfamiliar with the term: “Conflict resources are natural resources whose systematic exploitation and trade in a context of conflict [AKA a war zone] contribute to, benefit from or result in the commission of serious violations of human rights”. Furthermore, “Take away the ability to profit from resource extraction and [the fighting groups] can no longer exacerbate or sustain conflict.”

To provide an example, minerals in the Democratic Republic of the Congo are mined by rebel militias and sold to fund the continuation of the fighting. The buyers are nobody in particular, but those minerals are laundered the way illicit money is laundered, by being passed through middlemen. Eventually manufacturers use the minerals on behalf of multinational purveyors of consumer electronics. Big companies — brand names that you would recognize. And so the violence continues, because local warlords want to keep access to their money machine. (If you have a Netflix account, I recommend watching the investigative documentary Virunga to learn more about this.) The DRC is only one of the places devastated by neocolonialism paired with local power-mongering.

The photo above dates from the late 1800s or early 1900s. According to USC’s caption, “The Congolese man is likely to have been a victim of the ‘Congo atrocities’, punishment, murders and mutilations (particularly amputation of the right hand on living victims or after death) that took place on colonial rubber plantations in the Congo Free State, territory owned by Belgian King Leopold II […] Workers on rubber plantations were paid with worthless goods, and it was in noticing this imbalance of trade that shipping clerk Edmund Morel reported in his columns for The West African Mail, noticing that large numbers of weapons were going into the country to control the rubber workers.” I call it neocolonialism because we are continuing an old pattern, just shuffling the guns around.

The New York Times’ East Africa bureau chief Jeffrey Gettleman wrote in a piece for National Geographic:

“In the ensuing free-for-all [after dictator Mobutu Sese Seko was deposed and Congo was consumed by war], foreign troops and rebel groups seized hundreds of mines. It was like giving an ATM card to a drugged-out kid with a gun. The rebels funded their brutality with diamonds, gold, tin, and tantalum, a hard, gray, corrosion-resistant element used to make electronics. Eastern Congo produces 20 to 50 percent of the world’s tantalum.”

How do we cope with this, as consumers? Do we drop out of modern life, eschewing all the connected devices that have become standard in the “First World”? Do we cling to guilt and shame because we don’t care enough to actually change our behavior? I’ll admit it: I don’t care enough about this problem to sacrifice my iPhone or my laptop. I’m not going to switch to a Fairphone. Neither do I only buy fair-trade food and clothing.

So, should I blame myself for the war in the Democratic Republic of the Congo? Is it my fault and yours? I’m genuinely undecided. On one hand, the demand side of a transaction doesn’t specify the methods of the supply side. I didn’t ask anyone to buy from militias. I didn’t ask the seventeenth-century European superpowers to pursue mercantilism and shoulder the spurious “white man’s burden”. On the other hand, I am funding terrorism, albeit very indirectly. Amnesty International released a report on cobalt sourcing in January — it’s pretty clear that this is not a resolved issue.

Uber Versus Ethics

I’m dwelling on the future of transportation because of an episode of the Exponent podcast about, well, the future of transportation. Electricity replacing combustion engines, autonomous vehicles, and driving as a service, oh my! Ride-sharing startups like Uber and Lyft are currently filling that last niche, and eventually they’ll do it with fleets of self-driving sedans, SUVs, minivans, etc. No humans required — except for the software engineers and passengers.

Prototype of a self-driving car by Google.

Uber has a cutthroat reputation, and they’ve earned it. I’m not a fan of their company culture, but I think the more interesting question is about the ethics of their business model. They depend on low-paid drivers who are independent contractors rather than employees, and thus are unable to organize and advocate for themselves. In the same vein, drivers have to deal with all the taxes usually handled by businesses, and they don’t get overtime or health insurance.

Is this arrangement immoral? On the one hand, we have labor regulations because companies will exploit people in every way they can. We need those laws. (Capitalism is not a foolproof system!) On the other hand, drivers opt in. They choose to work for Uber.

Who bears responsibility — the company that created the system, or the individuals who choose to participate?

Who’s A Drug Lord?

We live in a world where people sell drugs on the internet, they get caught, and other people dissect news headlines about them. None of that is weird or surprising, nor should it be, but it represents a technologically mediated system of resistance, enforcement, and renewed resistance. Twitter manifests the new polis.

Brian Van criticized a recent New York Times headline about the IRS agent who pinpointed Ross Ulbricht: “The Tax Sleuth Who Took Down a Drug Lord”. The article was a good follow-on to Wired’s Silk Road saga. Here’s what Brian said:

“NYTimes using the term ‘drug lord’ to blur the line between illegitimate e-commerce and murder conspiracists” [sic]. His second tweet reads, “Sales of drugs can be civil disobedience without violence; NYT freely adopts fascist philosophy that ‘all transgressions of law are equal’” [also sic]. He comes close to quoting the Bible: “For whosoever shall keep the whole law, and yet offend in one point, he is guilty of all.”

At face value, I agree with Brian 100%. I think every drug should be decriminalized, and yes that includes unambiguously destructive substances like meth. Why? 1) People should be free to do whatever they want with their own bodies and 2) banning drugs doesn’t work very well anyway. If you want to eliminate a problem, target the root cause — say, poverty and mental illness — instead of outlawing the symptom.

Photo of black pills via Health Gauge.

However, I’m curious about whether Brian is trying to imply that Ross Ulbricht was not a drug lord. Maybe the problem is that The New York Times is conflating drug-lord-ism with soliciting a hitman? (For those not familiar with the whole Silk Road debacle, go read the Wired articles that I linked above.) I guess I can’t tell whether Brian is objecting because he thinks Ross Ulbricht is sullied by the term “drug lord” or vice versa.

“they don’t have intelligence but they are often surprising”

I suddenly became very interested in Twitter bots because of @FFD8FFDB. That interview led me to Beau Gunderson, an experienced bot-maker and general creator of both computer things and people things. In answer to email questions, he was more voluble than I expected — in a delightful way! — so this is a long one. I’m sure neither of us will be offended if you don’t have time to read it all now. I posted the Medium version first so you can save it for later using Instapaper or something equivalent!

Note: I did not edit Beau’s answers at all. He refers to most people — and bots — by Twitter username, which I think is very reasonable. It’s good to present people as they have presented themselves!

Sonya: How did you get into generative art? Why does it appeal to you, personally?

Beau: my first experience with generative art was LogoWriter in first or second grade. i don’t remember if it was a part of the curriculum or not but i spent a lot of time with it, figuring out the language and what it could be made to do. i feel like there was some randomness involved in that process because i didn’t have a full understanding of the language and so i would permute commands and values to see what would happen. i gave some of the sequences of commands that drew recognizable patterns names.

in terms of getting into twitter bots i’m certain that some of the first bots i came across were by thricedotted and were of the type that thrice has described as “automating jokes”; things like @portmanteau_bot and @badjokebot (which are both amazing). the first bot i made was @vaporfave, which i still consider unfinished but which is also still happily creating scenes in “vaporwave style”, really just a collection of things that i associated with the musical genre of vaporwave (which i do actually enjoy and listen to). it has made more than 10,000 of these little scenes.

a lot of the bots i had seen were text bots, and so i became very interested in making image bots as a way to do something different within the medium. i gave a talk at @tinysubversions’ bot summit 2014 about transformation bots (though mostly about image transformation bots): http://opentranscripts.org/transcript/image-bots/ and my next bot was @plzrevisit, which was a kind of “glitch as a service” bot that relied on revisit.link.

as far as why generative art appeals to me, i think there are a few main reasons. i like the technical challenge of attempting to create a process that generates many instances of art. it would be one thing to programmatically create one or a hundred scenes for vaporwave, or to generate 10,000 and then pick the 10 best and call it done. but it feels like a different challenge to get to the point where i’m satisfied enough with the output of every run to give the bot the autonomy to publish them all. i also like to be surprised by them. and they feel like the right size for a lot of my ideas… they’re easy enough to knock out in a day if they’re simple enough. this is probably why i also haven’t gone back to a lot of the bots and improved them… they feel unfinished but “finished enough”.

in thinking about it some of the appeal is probably informed by my ADHD as well. i prefer smaller projects because they’re more manageable (and thus completable), and twitter bots provide a nearly infinitely scrolling feed of new art (and thus dopamine).

Sonya: How do you conceptualize your Twitter bots — are they projects, creatures, programs, or… ?

Beau: well, they’re certainly projects (i think of everything i do as a project; all my code lives in ~/p/ on my systems, where p stands for projects)

but the twitter bots i think of as something more… they don’t have intelligence but they are often surprising:

aside from tweeting “woah” at the bots i often will reply or quote and add my own commentary:

even though i know they don’t get anything from the exchange i still treat them as part of a conversation sometimes. they’re creators but i don’t put them on the same level as human creators.

Sonya: As a person who has created art projects that seem as though they are intelligent — I’m thinking of Autocomplete Rap — what are your thoughts on artificial intelligence? Do you think it will take the shape we’ve been expecting?

Beau: autocomplete rap was by @deathmtn, i’m only mentioned in the bio because he made use of the rap lyrics that i parsed from OHHLA and used in my bot @theseraps. but i think @theseraps does seem intelligent sometimes too. it pairs a line from a news source with a line from a hip-hop song and tries to ensure that they rhyme. when the subjects of both lines appear to match it feels like the bot might know what it’s doing.
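(An illustrative aside from me, not Beau: here is a minimal sketch of what that pairing step could look like, assuming the Python “pronouncing” library, Allison Parrish’s wrapper around the CMU Pronouncing Dictionary, for the rhyme check. The corpus variables and helper names are placeholders; @theseraps’ actual code may work quite differently.)

```python
# Minimal sketch: pair a news line with a rap line whose last words rhyme.
# Assumes the "pronouncing" library; news_lines and rap_lines are placeholders.
import random
import pronouncing

def last_word(line):
    """Return the final alphabetic word of a line, lowercased."""
    words = [w.strip(".,!?;:\"'").lower() for w in line.split()]
    words = [w for w in words if w.isalpha()]
    return words[-1] if words else None

def lines_rhyme(a, b):
    """True if the two lines end in different words that share a rhyming part."""
    wa, wb = last_word(a), last_word(b)
    if not wa or not wb or wa == wb:
        return False
    pa = pronouncing.phones_for_word(wa)
    pb = pronouncing.phones_for_word(wb)
    if not pa or not pb:
        return False
    return pronouncing.rhyming_part(pa[0]) == pronouncing.rhyming_part(pb[0])

def pick_pair(news_lines, rap_lines, attempts=10000):
    """Randomly search the two corpora for a rhyming pair of lines."""
    for _ in range(attempts):
        n, r = random.choice(news_lines), random.choice(rap_lines)
        if lines_rhyme(n, r):
            return n, r
    return None
```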

my thoughts on artificial intelligence are fairly skeptical and i’m also not an expert in the field. i’ll say i don’t think it represents a threat to humanity. i don’t think of my work as relating to AI, it’s more about intelligence that only appears serendipitously.

Sonya: Imagine a scenario where Twitter consists of more bots than humans. Would you still participate?

Beau: yes. i talk to my own bots (and other bots) as it is. @godtributes sometimes responds to tweets with awful deities (like “MANSPLAINING FOR THE MANSPLAINING THRONE”) and i let it know that it messed up (wow i think i’ve tweeted at @godtributes more than any other bot).

i also have an idea i’d like to build that i’ve been thinking of as “bot streams” — basically bot-only twitter with less functionality and better support for posting images. and with a focus on bots using other bots work as input, or responding to it or critiquing it (an idea i believe @alicemazzy has written about).

Sonya: How does power play into generative art? When you give a computer program the ability to express itself — or at least to give that impression — what does it mean?

Beau: i try to be very aware of the power even my silly bots have. @theseraps uses lines from the news, which can contain violence, and the lines from the hip-hop corpus which can also contain violence. when paired they can be very poignant but it’s not something i want to create or make people look at. there are libraries to filter out potentially problematic words so i use one of those and also do some custom filtering.
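(To make that filtering step concrete: a rough sketch of blocklist-style filtering, using a plain placeholder word set rather than whichever library and custom rules Beau actually relies on.)

```python
# Rough sketch: reject candidate tweets that contain blocklisted words.
# BLOCKLIST is a hypothetical placeholder, not the bot's real word list.
import re

BLOCKLIST = {"badword", "slur"}

def safe_to_post(text, blocklist=BLOCKLIST):
    """Return True only if no blocklisted word appears in the text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words.isdisjoint(blocklist)

# Typical use: keep regenerating until a candidate passes the filter, e.g.
#   while not safe_to_post(candidate): candidate = generate_tweet()
```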

this is one aspect of the #botALLY community i really like; there’s an explicit code of conduct and there’s general consensus about what comprises ethical or unethical behavior by bots. @tinysubversions has even done work to automate detecting transphobic jokes so that his bots don’t accidentally make them.

i wrote a bot called @___said that juxtaposes quotes from women with quotes from men from news stories as a foray into how bots can participate in a social justice context. just seeing what quotes are used makes me think about how sources are treated differently because of their gender. while i was making the bot i also saw how many fewer women than men were quoted (which prompted an idea for a second bot that would tweet daily statistics about the genders of quotes from major news outlets)

i think @swayandsea’s @swayandocean bot is very powerful — the bot reminds its followers to drink more water, take their meds, take a break, etc.

i also really like @lichlike’s @genderpronoun and @RestroomGender, bots that remind us to think outside of the gender binary.

there’s another aspect of power i think about, which brings me back to LogoWriter. LOGO was a fantastic introduction for me as a young person to the idea of programming; it gave me power over the computer. the idea of @lindenmoji was to bring that kind of drawing language and power to twitter, though the language the bot interprets is still much too hard to learn (i don’t think anyone but me has tweeted a from-scratch “program” to it yet)
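(The bot’s name suggests Lindenmayer systems, so here is a toy sketch of that kind of drawing language: string rewriting plus a symbol-to-emoji mapping. The rewrite rule and symbol table below are invented for illustration and are not @lindenmoji’s actual grammar.)

```python
# Toy L-system: expand an axiom with rewrite rules, then render as emoji.
# The rules and symbol table are hypothetical examples.
def expand(axiom, rules, iterations):
    """Apply the rewrite rules to every character, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def render(program, symbols):
    """Map each character of the expanded string to a display symbol."""
    return "".join(symbols.get(ch, ch) for ch in program)

rules = {"F": "F[+F]F"}                      # simple branching rule
symbols = {"F": "🌿", "+": "✨", "[": "(", "]": ")"}
print(render(expand("F", rules, 2), symbols))
```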

your last question, about what it means to give a computer the ability to express itself… i don’t quite think of it that way. i’m giving the computer the ability to express a parameter set that i’ve laid out for it that includes a ton of randomness. it’s not entirely expressing me (@theseraps has tweeted things that i deleted because they were “too real”, for example), but it’s not expressing itself either. i wasn’t smart enough, or thorough enough, or didn’t spend the time to filter out every possible bad concept from the bot when i created the parameter space, and i also didn’t read every line in the hip-hop lyrics corpus. so some of the parameter space is unknown to me because i am too dumb and/or lazy… but that’s also where some of the surprise and serendipity comes from.

i think as creators of algorithms we need to think about them as human creations and be aware that human assumptions are baked in:

p.s.: based on the content of the newsletter so far i feel like @alicemazzy, @aparrish, @katierosepipkin, @thricedotted, and @lichlike would be great for you to talk to about bots or art or language or just in general 🙂
