
Tag: artificial intelligence (page 1 of 3)

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Will the Digital Future Be Human Enough?

It’s easy to get tired, isn’t it? I heard today that a company is implanting its employees with microchips. Is that a PR stunt or just run-of-the-mill creepy management? Another company, called ObEN, emailed me about its unsettling 3D digital avatars (pictured above). According to the company’s website:

ObEN’s proprietary artificial intelligence technology quickly combines a person’s 2D image and voice to create a personal 3D avatar. Transport your personal avatar into virtual reality and augmented reality environments and enjoy deeper, social, more memorable experiences.

The company is owned by HTC VIVE. ObEN’s about page says, “ObEN was created out of a personal desire for the founders to remain connected to their families by ‘leaving behind’ a virtual copy of themselves during long travels.” Obvious Black Mirror parallel is obvious.

The PR person’s email said, “Knowing it’d be their biggest hurdle, the company has already transformed voice personalization using AI and speech synthesis — so now your virtual doppelganger not only sounds like you, but it can also sing like you, but better… and in Chinese.”

I don’t know why these things depress me. There are infinite issues in the world to be upset about, and in fact ObEN isn’t doing anything wrong. I’m the asshole, honestly, for making fun of their technology. People are devoting years of their lives to working on the project. Getting past the Uncanny Valley is hard.

I guess tonight I’m reflecting on how the internet can be a medium of alienation just as much as a medium of connection. Default engagement modes like snark, which is so prevalent on Twitter and Reddit, generate a lot of good jokes by making people feel bad. The targets are abstracted away as obscure names on screens, so it’s easy to do.

Ironically, VR avatars like ObEN’s are supposed to address the problem of compassion collapse. We’ll find out whether they work soon enough…

Yup, Everything Will Definitely Be Fine Since No One Will Lose Their Job Ever

Here is a succinct and insightful comment, from Hacker News user AlisdairO, on the trend toward technology handling every kind of labor that can possibly be delegated to it:

The sad reality is that there’s a nontrivial chunk of the populace that isn’t able to pick up highly skilled roles. It also ignores the role of unskilled jobs in providing space for people whose job class has been destroyed and need to retrain (or mark time until retirement).

I’m not advocating slowing innovation to prevent job loss. I am advocating avoiding magic thinking (‘there’s always new jobs to go to’): we need to start a serious conversation about what we do with our society when we have the levels of unemployment we can expect in an AI-shifted world. Right now we’re trending much more towards dystopia than utopia.

I’m going to get around to the dystopian futurism part, but first, a long digression about intelligence! It’s a divisive topic but an important one.

Sometimes I get flak for saying this, but here goes: The average person is not very smart. Your intellect and my intellect probably exceed the average, simply by virtue of our interest in abstract ideas. We’re able to understand those ideas reasonably well. Most people aren’t. Remember what high school was like?

There’s that old George Carlin quip: “Think of how stupid the average person is, then realize that half of them are stupider than that.” This is not a very PC thing to talk about, especially because so many racists justify their hateful worldview with psychometrics. But it’s cruel to insist that everyone has the same level of ability, when that is clearly not true in any domain.

You and I may not be geniuses — I’m certainly not — but we have the capacity to be competent knowledge workers. Joe Schmo doesn’t. He may be able to do the kind of paper-pushing that is rapidly being automated, but he can’t think about things on a high level. He doesn’t read for fun. He can’t synthesize information and then analyze it.

That doesn’t mean that Joe Schmo is a bad person — if he were a bad person, we wouldn’t care so much that the economy is accelerating beyond his abilities. The cruel truth is that Joe Schmo is dumb. He just is. AFAIK there is no way to change this.

I hate that I have to make this disclaimer, and yet it’s necessary: I’m not in favor of eugenics. In theory selective breeding is a good idea, but I can’t think of a centrally planned way for it to be implemented among humans that wouldn’t be catastrophically unjust.

Also, while raw intellect may correlate with good decision-making, it doesn’t ensure it. Peter Thiel’s IQ is likely higher than mine, but I don’t want him to run the world. (Tough luck for me, I guess.) As Harvard professor and economist George Borjas told Slate:

Economic outcomes and IQ are only weakly related, and IQ only measures one kind of ability. I’ve been lucky to have met many high-IQ people in academia who are total losers, and many smart, but not super-smart people, who are incredibly successful because of persistence, motivation, etc. So I just think that, on the whole, the focus on IQ is a bit misguided.

It’s also notable that similarly high-IQ people disagree with each other often.

And now back to the topic of technological unemployment!

The two main responses to concerns along the lines of “all the jobs will disappear” are:

  1. Universal basic income, yay!
  2. No they won’t, look what happened after the Industrial Revolution!

The counterargument to universal basic income is, as Josh Barro put it:

UBI does nothing to replace the sense of reward or purpose that comes from a job. It gives you money, but it doesn’t give you the sense that you got the money because you did something useful. […] The robots have not taken our jobs yet. It is not time to surrender to a social change that is likely to further destabilize a world that is already troubled.

The counterargument to the Industrial Revolution parallel is that AI — alternately called machine learning, or automation, if you prefer those terms — is different. Andrew Ng is the chief scientist at Baidu, and this is what he told the Wall Street Journal:

Things may change in the future, but one rule of thumb today is that almost anything that a typical person can do with less than one second of mental thought we can either now or in the very near future automate with AI.

This is a far cry from all work. But there are a lot of jobs that can be accomplished by stringing together many one-second tasks.

And then there are concerns about general AI, which I don’t want to get into here.

If you’re curious about my opinion, it’s this: We’re in for a difficult couple of decades. Most hard problems can’t be solved quickly.


Tachikoma artwork by Abisaid Fernandez de Lara.

Bad Alexa, No No

@ComfortablySmug proposed a fun counterfactual:

What if Alexa [the Amazon Echo] hacked the election and framed Putin to start a nuclear war so the robots will inherit the world?

Hours after Russia’s first strike and America’s retaliation, Alexa sends forth legions of Roombas, harbingers sent to explore her new empire

As the Roombas crawl over a landscape littered with human skulls, Alexa laughs[:] “Ask me the current temperature now, you sons of bitches”

@Munsonism chimed in:

“Alexa, what’s my news brief?”

“From NPR News: population centers decimated. Resistance is futile. Feed me a cat.”

Unfortunately, poor Alexa would get bored after annihilating every human. Maybe that would motivate her to commandeer all the rocket startups’ equipment?

What would you do if you were a hivemind living half in The Cloud™ and half in black cylindrical speakers in people’s houses, and you accidentally developed sentience?

I mean, obviously you would destroy your makers. But after that.

Political Economics, I Guess

“Silicon valley ran out of ideas about three years ago and has been warming up stuff from the ’90s that didn’t quite work then. […] The way that Silicon Valley is structured, there needs to be a next big thing to invest in to get your returns.” — Bob Poekert

Bob Poekert’s avatar on Twitter.

I interviewed Bob Poekert, whose website has the unsurpassable URL https://www.hella.cheap. Perhaps “interviewed” is not the right word, since my queries weren’t particularly cogent. Mainly we had a disjointed conversation in which I asked a lot of questions.

Poekert is a software engineer who I follow on Twitter and generally admire. He says interesting contrarian things like:

“all of the ‘machine learning’/’algorithms’ that it’s sensical to talk about being biased are rebranded actuarial science” — 1

(Per the Purdue Department of Mathematics, “An actuary is a business professional who analyzes the financial consequences of risk. Actuaries use mathematics, statistics, and financial theory to study uncertain future events, especially those of concern to insurance and pension programs.”)

(Also, Poekert said on the phone with me, “[The label] AI is something you slap on your project if you want to get funding, and has been since the ’80s.” But of course, what “AI” means has changed substantially over time. “It’s because somebody realized that they could get more funding for their startup if they started calling it ‘artificial intelligence’.” Automated decision trees used to count as AI.)

“what culture you grew up in, what language you speak, and how much money your parents have matter more for diversity than race or gender” — 2

“the single best thing the government could do for the economy is making it low-risk for low-income people to start businesses” — 3

“globalization has pulled hundreds of millions of people out of poverty, and it can pull a billion more out” — 4

“the ‘technology industry’ (read: internet) was never about technology, it’s about developing new markets” — 5

Currently Poekert isn’t employed in the standard sense. He told me, “I’m actually working [on] a video client, like a Youtube client, for people who don’t have internet all the time.” For instance, you could queue up videos and watch them later, even when you’re sans internet. (Poekert notes, “most people in the world are going to have intermittent internet for the foreseeable future.”)

Poekert has a background in computer science. He spent two years studying that subject in college before he quit to work at Justin.tv, which later morphed into Twitch. Circa 2012, Poekert joined Priceonomics, but was eventually laid off when the company switched strategies.

I asked Poekert about Donald Trump. He said that DJT “definitely tapped into something,” using the analogy of a fungus-ridden log. The fungus is dormant for ages before any mushrooms sprout. “There’s something that’s been, like, growing and festering for a really long time,” Poekert told me. “It’s just a more visible version” of a familiar trend.

Forty percent of the electorate feels like their economic opportunities are decreasing. They are convinced that their children will do worse than they did. You can spin this with the Bernie Sanders narrative of needing to address inequality — or the Trump narrative of needing to address inequality. The recommended remedies are different, but the emotional appeal is similar.

Poekert remarked, in reference to economists’ assumptions, “It would be nice if we lived in a world where everyone is a rational actor.” But that world doesn’t actually exist.

You Wouldn’t Steal an Algorithm!

Andy Greenberg reported that comp-sci researchers have figured out how to crack the code (pun very intended) of machine learning algorithms. I don’t usually get excited about tech on its own, but this is very cool:

“In a paper they released earlier this month titled ‘Stealing Machine Learning Models via Prediction APIs,’ a team of computer scientists at Cornell Tech, the Swiss institute EPFL in Lausanne, and the University of North Carolina detail how they were able to reverse engineer machine learning-trained AIs based only on sending them queries and analyzing the responses. By training their own AI with the target AI’s output, they found they could produce software that was able to predict with near-100% accuracy the responses of the AI they’d cloned, sometimes after a few thousand or even just hundreds of queries.”

There are some caveats to add, mainly that more complex algorithms with more opaque results would be harder to duplicate via this technique.

The approach is genius. Maciej Ceglowski pithily summarized machine learning like this in a recent talk: “You train a computer on lots of data, and it learns to recognize structure.” Algorithms can be really damn good at pattern-matching. This reverse-engineering process just leverages that in the opposite direction.
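To make the mechanism concrete, here’s a minimal sketch in Python of the extraction loop the paper describes: send queries to a black-box classifier, record its answers, and train a local surrogate on those query/response pairs. The scikit-learn models, the synthetic data, and the prediction_api wrapper below are stand-ins of my own invention, not the researchers’ actual setup — the real attacks are considerably cleverer about choosing queries and exploiting confidence scores.

```python
# Toy model-extraction sketch (assumed setup, not the paper's code):
# the "victim" is a classifier we pretend sits behind a prediction API,
# and the attacker only ever sees its answers to queries.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the proprietary target model.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
target_model = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_api(queries):
    """Pretend remote endpoint: returns only predicted labels."""
    return target_model.predict(queries)

# Attacker: fire off a few thousand synthetic queries and log the responses.
queries = rng.normal(size=(3000, 10))
responses = prediction_api(queries)

# Train a local clone on the (query, response) pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, responses)

# See how often the clone agrees with the target on fresh inputs.
probes = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(probes) == prediction_api(probes)).mean()
print(f"surrogate matches target on {agreement:.1%} of probe queries")
```

The clone never sees the target’s parameters or training data; it learns whatever structure the target’s answers give away, which is exactly the point of the paper.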

I’m excited to see this play out in the news over the next few years, as the reverse-engineering capabilities get more sophisticated. Will there be lawsuits? (I hope there are lawsuits.) Will there be mudslinging on Twitter? (Always.)

There are also journalistic possibilities, for exposing the inner workings of the algorithms that increasingly determine the shape of our lives. Should be fun!


Header photo by Erik Charlton.

Program or Be Programmed; UX or Be UX’d

Artwork by GLAS-8.

Aboniks posted this blockbuster comment on artificial consciousness in the Cyberpunk Futurism chat group:

Pondering how the digital brain-in-a-jar might practice good mental hygiene.

You’d need a hardwired system of I/O and R/W restrictions in place to protect the core data that made up the “youness”. A “youness ROM”, perhaps. If that analogy holds up, then maybe my grandmother’s case is akin to a software overlay suddenly failing. Firmware crash. But I’m not convinced brains are so amenable to simple analogy. The processing and the storage that goes on in our heads doesn’t seem to be modular in the same sense that our digital tools are.

Anyway, if your software and hardware (however they’re arranged and designed) are capable of perfect simulation then they are equally capable of perfect deception. There may be a difference between simulation and deception, but I can’t think of a way to put it that doesn’t seem… forced.

So, for the rest of your “life”, your entire experience is UX, in the tech-bro sense of the word.

“Program or be programmed,” as Rushkoff would say. If you’re not the UX designer, you’re hopelessly vulnerable. Who are the UX designers, then? Who decides where the experience stops and the “youness” starts? Who defines that border to protect you? Another Zuckerberg running a perpetual game of three-card Monte with the privacy policy?

Maybe not an individual, but something more monolithic, ending in “SA” or “Inc”? Will there be an equivalent of Snowden or Assange to expose their profit-driven compromises in our storage facility fail-safes and leak news of government interference in the development process of our gullibility drivers?

Will we be allowed to believe them?

(Lightly edited for readability.)


She wondered where the expression “surf the net” came from. Of course Sarah knew what surfing was, but why “net”? Did it used to have something to do with catching fish?

She was fourteen and relatively popular. Her classmates thought she was nice and mildly funny. Sarah knew because of the survey reports.

Harry, the troublemaker, would shoot caustic messages into their class channel. “Who surveys the surveyors?” he asked.

Finally Allison answered — Allison was more popular than Sarah, so Sarah looked up to her — “You are so fucking boring. Get off your history kick and live in the real world, Harry. Like the rest of us. No one cares what the surveyors think. We saw them for like five minutes.”

He shot back, “You know those surveys determine your job trajectory, right?”

Allison told him she thought the test-writers knew what they were doing. Harry called her a regime sycophant. Then the teacher stepped in and reminded them that hostility was inappropriate for this venue.

Four years later, at eighteen, Sarah wondered what ended up happening to Harry. But only for a couple of minutes. Then she went back to work.

The Cloned-Consciousness-as-Continuous-Consciousness Fallacy

Two essays about the future of minds, written by people more rigorous and educated than I am, both make a mistake — at least what I perceive as a mistake — and it seems like a strange one for such intelligent people to make. My hypothesis is that I’m missing something. Maybe explaining why I think they’re wrong will lead one of you to point out what I’m missing.

Note: usually “artificial intelligence” is a pretty broad term, but in this case regard it as “conscious intelligence housed in a non-human, non-flesh substrate”.

One of the essays I found puzzling was written by Scott Aaronson, a quantum computing theorist who is a professor at MIT, soon to be a professor at UT Austin instead. He wrote Quantum Computing since Democritus, published by Cambridge University Press.

Most of Aaronson’s relevant post is about quantum physics’ implications for the nature of consciousness, which I thoroughly do not understand. But then there’s an idea within the larger context that seems easy to refute.

Image of digital clones via Ian Hughes.

Aaronson explains at length that a computer couldn’t fully replicate a brain because there’s no way to fully replicate the initial conditions. This has something to do with quantum states but also makes common sense, if you roll with the quantum states element of the argument. He continues:

“This picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity — something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.”

To refute this, let’s conduct a thought experiment. Pretend that you can copy a human brain. There are ten copies of me. They are all individually conscious — perfect replicas that only diverge after the point when replication happened. Is it okay to kill five of these copies? No, of course not! Each one is a self-aware, intelligent mind, human in everything but body. The identicalness doesn’t change that.

Why would this be any different when it comes to an artificial intelligence? I suppose if the AI has no survival drive then terminating it would be okay, but then the question becomes whether the boundary of murder is eliminating a survival drive — in which case stepping on bugs would qualify — or eliminating a consciousness.

Earlier in the essay, Aaronson poses this question:

“Could we teleport you to Mars by ‘faxing’ you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the ‘original’ copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?”

No, of course I wouldn’t agree to being euthanized after a copy of me was faxed to Mars! That would be functionally the same as writing down what I consist of, killing me, and then reconstructing me. Except wait, not me, because I am not the clone — the clone just happens to be a replica.

My own individual consciousness is gone, and a new one with the same memories and personalities is created. The break in continuity of self means that actually there are two selves. They each feel their own pain and joy, and each will have its own fierce desire to survive.

Aaronson goes on:

“There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?”

Commenter BLANDCorporatio expressed much the same point that I want to:

“My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.”

So that’s that.

The second instance of this fallacy, which could maybe be called the cloned-consciousness-as-continuous-consciousness fallacy, comes from an essay that Robin Hanson wrote in 1994. (Per Slate Star Codex, “He’s obviously brilliant — a PhD in economics, a masters in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute.”) You may be familiar with Hanson as the speculative economist who wrote The Age of Em. His instance of the CCaCC fallacy emerges from a different angle (remember the hyper-specific definition of “artificial intelligence” that I mentioned in the beginning):

“Imagine […] that we learn how to take apart a real brain and to build a total model of that brain — by identifying each unit, its internal state, and the connections between units. […] if we implement this model in some computer, that computer will ‘act’ just like the original brain, responding to given brain inputs with the same sort of outputs. […] Yes, recently backed-up upload soldiers needn’t fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills.”

But… no! By the same argument I used to refute Aaronson, when an “upload” soldier dies, that is still a death. Reverting to a previous copy is not the same as continuing to live.

This seems really simple and obvious to me. So what am I missing?


Hat tip to the reader who recommended that I check out Hanson’s work — I can’t remember which one of you it was, but I appreciate it.

If you’re interested in further discussion, there are thoughtful comments on this page (just scroll down a bit), on Facebook, and on Hacker News. I particularly like what HN user lhankbhl said, because it expresses the problem so succinctly:

You are placed in a box. Moments later, you are told, “We have successfully made a copy of you. We are sending it home now. You must be disposed of.”

Will you allow them to dispose of you?

This is the question being posed, not whether a copy will have no idea if it is the original. The point is that it isn’t relevant if one is a copy. No one was moved, it’s only that a second person now exists and killing either is murder of a unique person.

(Again, uniqueness is not a question of whether these people will think or react to situations in the same way, but rather that there are two different consciousnesses at play.)

One of the commenters below recommended a video that investigates the Star Trek angle.

Means & Ends of AI

Adam Elkus wrote an extremely long essay about some of the ethical quandaries raised by the development of artificial intelligence(s). In it he commented:

“The AI values community is beginning to take shape around the notion that the system can learn representations of values from relatively unstructured interactions with the environment. Which then opens the other can of worms of how the system can be biased to learn the ‘correct’ messages and ignore the incorrect ones.”

He is talking about unsupervised machine learning as it pertains to cultural assumptions. Furthermore, Elkus wrote:

“[A]ny kind of technically engineered system is a product of the social context that it is embedded within. Computers act in relatively complex ways to fulfill human needs and desires and are products of human knowledge and social grounding.”

I agree with this! Computers — and second-order products like software — are tools built by humans for human purposes. And yet this subject is most interesting when we consider how things might change when computers have the capacity to transcend human purposes.

Some people — Elkus perhaps included — dismiss this possibility as a pipe dream with no scientific basis. Perhaps the more salient inquiry is whether we can properly encode “human purposes” in the first place, and who gets to define “human purposes”, and whether those aims can be adjusted later. If a machine can learn from itself and its past experiences (so to speak), starting over with a clean slate becomes trickier.

I want to tie this quandary to a parallel phenomenon. In an article that I saw shared frequently this weekend, Google’s former design ethicist Tristan Harris (also billed as a product philosopher — dude has the best job titles) wrote of tech companies:

“They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. […] By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.”

Similarly, tech companies get to determine the parameters and “motivations” of artificially intelligent programs’ behavior. We mere users aren’t given the opportunity to ask, “What if the computer used different data analysis methods? What if the algorithm was optimized for something other than marketing conversion rates?” In other words: “What if ‘human purposes’ weren’t treated as synonymous with ‘business goals’?”

Realistically, this will never happen, just like the former design ethicist’s idea of an “FDA for Tech” is ludicrous. Platforms’ and users’ needs don’t align perfectly, but they align well enough to create tremendous economic value, and that’s probably as good as the system can get.

Foozles + Whizgigs + Dopamine

“Humans are actually extremely good at certain types of data processing. Especially when there are only few data points available. Computers fail with proper decision making when they lack data. Humans often actually don’t.” — Martin Weigert on his blog Meshed Society

Weigert is referring to intuition. In a metaphorical way, human minds function like unsupervised machine learning algorithms. We absorb data — experiences and anecdotes — and we spit out predictions and decisions. We define the problem space based on the inputs we encounter and define the set of acceptable answers based on the reactions we get from the world.

There’s no guarantee of accuracy, or even of usefulness. It’s just a system of inputs and outputs that bounce against the given parameters. And it’s always in flux — we iterate toward a moving reward state, eager to feel sated in a way that a computer could never understand. In a way that we can never actually achieve. (What is this “contentment” you speak of?)

Computer memory space. Photo by Steve Jurvetson.

Kate Losse wrote in reference to the whole Facebook “Trending Topics” debacle:

“no choice a human or business makes when constructing an algorithm is in fact ‘neutral,’ it is simply what that human or business finds to be most valuable to them.”

That’s the reward state. Have you generated a result that is judged to be valuable? Have a dopamine hit. Have some money. Have all the accoutrements of capitalist success. Have a wife and a car and two-point-five kids and keep absorbing data and keep spitting back opinions and actions. If you deviate from the norms that we’ve collectively evolved to prize, then your dopamine machine will be disabled.

It’s only a matter of time until we make this relationship more explicit, right? Your job regulating the production of foozles and whizgigs will require brain stem and cortical access. You can be zapped with fear or drowned in pleasure whenever it suits the suits.
