
Tag: technological unemployment

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Yup, Everything Will Definitely Be Fine Since No One Will Lose Their Job Ever

Here is a succinct and insightful comment, from Hacker News user AlisdairO, on the trend toward technology handling every kind of labor that can possibly be delegated to it:

The sad reality is that there’s a nontrivial chunk of the populace that isn’t able to pick up highly skilled roles. It also ignores the role of unskilled jobs in providing space for people whose job class has been destroyed and need to retrain (or mark time until retirement).

I’m not advocating slowing innovation to prevent job loss. I am advocating avoiding magic thinking (‘there’s always new jobs to go to’): we need to start a serious conversation about what we do with our society when we have the levels of unemployment we can expect in an AI-shifted world. Right now we’re trending much more towards dystopia than utopia.

I’m going to get around to the dystopian futurism part, but first, a long digression about intelligence! It’s a divisive topic but an important one.

Sometimes I get flak for saying this, but here goes: The average person is not very smart. Your intellect and my intellect probably exceed the average, simply by virtue of being interested in abstract ideas. We’re able to understand those ideas reasonably well. Most people aren’t. Remember what high school was like?

There’s that old George Carlin quip: “Think of how stupid the average person is, then realize that half of them are stupider than that.” This is not a very PC thing to talk about, especially because so many racists justify their hateful worldview with psychometrics. But it’s cruel to insist that everyone has the same level of ability, when that is clearly not true in any domain.

You and I may not be geniuses — I’m certainly not — but we have the capacity to be competent knowledge workers. Joe Schmo doesn’t. He may be able to do the kind of paper-pushing that is rapidly being automated, but he can’t think about things on a high level. He doesn’t read for fun. He can’t synthesize information and then analyze it.

That doesn’t mean that Joe Schmo is a bad person — if he were a bad person, we wouldn’t care so much that the economy is accelerating beyond his abilities. The cruel truth is that Joe Schmo is dumb. He just is. AFAIK there is no way to change this.

I hate that I have to make this disclaimer, and yet it’s necessary: I’m not in favor of eugenics. In theory selective breeding is a good idea, but I can’t think of a centrally planned way for it to be implemented among humans that wouldn’t be catastrophically unjust.

Also, while raw intellect may correlate with good decision-making, it doesn’t ensure it. Peter Thiel’s IQ is likely higher than mine, but I don’t want him to run the world. (Tough luck for me, I guess.) As Harvard professor and economist George Borjas told Slate:

Economic outcomes and IQ are only weakly related, and IQ only measures one kind of ability. I’ve been lucky to have met many high-IQ people in academia who are total losers, and many smart, but not super-smart people, who are incredibly successful because of persistence, motivation, etc. So I just think that, on the whole, the focus on IQ is a bit misguided.

It’s also notable that similarly high-IQ people often disagree with each other.

And now back to the topic of technological unemployment!

The two main responses to concerns along the lines of “all the jobs will disappear” are:

  1. Universal basic income, yay!
  2. No they won’t, look what happened after the Industrial Revolution!

The counterargument to universal basic income is, as Josh Barro put it:

UBI does nothing to replace the sense of reward or purpose that comes from a job. It gives you money, but it doesn’t give you the sense that you got the money because you did something useful. […] The robots have not taken our jobs yet. It is not time to surrender to a social change that is likely to further destabilize a world that is already troubled.

The counterargument to the Industrial Revolution parallel is that AI — also called machine learning, or automation, if you prefer those terms — is different. Andrew Ng is the chief scientist at Baidu, and this is what he told the Wall Street Journal:

Things may change in the future, but one rule of thumb today is that almost anything that a typical person can do with less than one second of mental thought we can either now or in the very near future automate with AI.

This is a far cry from all work. But there are a lot of jobs that can be accomplished by stringing together many one-second tasks.

And then there are concerns about general AI, which I don’t want to get into here.

If you’re curious about my opinion, it’s this: We’re in for a difficult couple of decades. Most hard problems can’t be solved quickly.


Tachikoma artwork by Abisaid Fernandez de Lara.

It Shouldn’t Be Easy to Understand

Mathias Lafeldt writes about complex technical systems. For example, on finding root causes when something goes wrong:

One reason we tend to look for a single, simple cause of an outcome is because the failure is too complex to keep it in our head. Thus we oversimplify without really understanding the failure’s nature and then blame particular, local forces or events for outcomes.

I think this is a fractal insight. It applies to software, it applies to individual human decisions, and it applies to collective human decisions. We look for neat stories. We want to pinpoint one factor that explains everything. But the world doesn’t work that way. Almost nothing works that way.

In another essay, Lafeldt wrote, “Our built-in pattern detector is able to simplify complexity into manageable decision rules.” Navigating life without heuristics is too hard, so we adapted. But using heuristics — or really any kind of abstraction — means losing some of the details. Or a lot of the details, depending on how far you abstract.

That said, here’s Alice Maz with an incisive explanation of why everything is imploding:

Automation is transforming bell curve to power law, hollowing out the middle class as only a minority can leverage their labor to an extreme degree. Cosmopolitan egalitarianism for the productive elite, nationalism and demagoguery for the masses. For what it’s worth, I consider this a Bad Outcome, but it is one of the least bad ones I have been able to come up with that is mid-term realistic.

Which corporation will be the first to issue passports?

Rushkoff argued that programming was the new literacy, and he was right, but the specifics of his argument get lost in the retelling. The way he saw it, this was the start of the third epoch, the preceding two ushered in by 1) the invention of writing, 2) the printing press.

Writing broke communal oral tradition and replaced it with record-keeping and authoritative narration by the literate minority to the masses. Only the few could produce texts, and the many depended on them to recite and interpret. This is the frame (pre-V2 maybe) that Catholicism inhabits.

The printing press led to mass literacy. This is the frame of Protestantism: the idea is for each man to read and interpret for himself. But after a brief spate of widely-accessible press (remember Paine’s Common Sense? very dangerous!) access tightened up. Hence mass media as gatekeeper, arbiter of consensus reality.

The few report, and the many receive. Not that journalists were ever the elite, any more than the Egyptian scribes were. They were the priestly class, Weber’s “new middle”. (Also lawyers. Remember the backwoods lawyer? Used to be all you needed was the books and a good head. Before credentialism ate the field.)

The internet killed consensus reality. Now anyone can trivially disseminate arbitrary text. But the platforms on which those texts are seen are controlled by the new priests, line programmers, who determine how information flows. This is what critics of “the Facebook algorithm” et al. are groping at. The many can create, but the few craft the landscape that hosts creation.

It’s still early. Remains to be seen if we can keep relatively open platforms (like Twitter circa 2010; open in the unimpeded sense). Or if the space narrows, new gatekeepers secure hold. But that will be determined by programmers. (Maybe lawmakers.) Rest along for the ride.

That’s all copy-pasted from Twitter and then lightly edited to be more readable in this format.

I included the opening quote about complex systems because although this neat narrative holds more truth than some others, it’s still a neat narrative. Don’t forget that. Reality is multi-textured.


Header photo by kev-shine.

“There is an error with my dependencies”

Exolymph reader Set Hallström, AKA Sakrecoer, sent me an original song called “Dependency” — these are the lyrics:

sudo apt update
sudo apt upgrade

There is an error with my dependencies
Consultd 1.2 and emplyomentd 1.70
I cannot pay my rent without their libraries
And to install i need to share my salary

Where do i fit in this society?
The more i look and the less i see
They want no robots nor do they want me.
work is a point in the agenda of the party

sudo apt update
sudo apt upgrade

My liver isn’t black market worthy
And my master degree from a street university
My ambitions are low and i am debt free
There is no room in the industry for robots like me

Don’t get me wrong i would also like to be
Installed and running and compatible with society
But i am running a different library
Because my kernel is still libre and free.

All unedited. Another thing — Craig Lea Gordon’s novella Hypercage is available on Amazon for zero dollars. Review coming soon, but I wanted to let you know now!

Software Meets Capitalism: Interview with Steve Klabnik

Old woman working at a loom. Photo by silas8six.


I interviewed Steve Klabnik via email. If you’re part of the open-source world, you might recognize his name. Otherwise I’ll let him introduce himself. We discussed economics, technological unemployment, and software.

Exolymph: The initial reason I reached out is that you’re a technologist who tweets about labor exploitation and other class issues. I’m currently fascinated by how tech and society influence each other, and I’m particularly interested in the power jockeying within open-source communities. You seem uniquely situated to comment on these issues.

Originally I planned to launch right into questions in this email, but then I started opening your blog posts in new tabs, and now I need a little more time still. But! Here’s a softball one for starters: How would you introduce yourself to an oddball group of futurists (which is my readership)?

Steve Klabnik: It’s funny that you describe this one as a softball, because it should be, yet I think it’s actually really tough. I find it really difficult to sum up a person in a few words; there’s just so much you miss out on. Identity is a precarious and complex topic.

I generally see myself as someone who’s fundamentally interdisciplinary. I’m more about the in-betweens than I am about any specific thing. The discipline that I’m most trained in is software; it’s something I’ve done for virtually my entire life, and I have a degree in it. But software by itself is not that interesting to me. It’s the stuff that you can do with software, the impact that it has on our world, questions of ethics, of social interaction. This draws a connection to my second favorite thing: philosophy. I’m an amateur here, unfortunately. I almost got a higher degree in this stuff, but life has a way of happening. More specifically, I’m deeply enthralled with the family of philosophy that is colloquially referred to as “continental” philosophy, though I’m not sure I find that distinction useful. My favorites in this realm are Spinoza, Nietzsche, Marx, and Deleuze. I find that their philosophical ideas can have deep implications for software, its place in the world, and society at large.

Since we live under capitalism, “who are you” is often conflated with “what do you do for work”. As far as that goes, I work for Mozilla, the company that makes Firefox. More specifically, I write documentation for Rust, a programming language that we and a broader community have developed. I literally wrote the book on it 🙂 Mozilla has a strong open-source ethic, and that’s one of the reasons I’ve ended up working there; I do a lot of open-source work. On GitHub, a place where open-source developers share their code, this metric says that I’m the twenty-ninth most active contributor, with 4,362 contributions in the last 365 days. Before Rust, I was heavily involved with the Ruby on Rails community, and the broader Ruby community at large. I still maintain a few packages in Ruby.

Exolymph: To be fair, I described it as a softball question precisely because of the capitalist shortcut you mentioned, although I’m not sure I would have articulated it like that. Darn predictable social conditioning.

What appeals to you about open source? What frustrates you about open source?

Steve Klabnik: I love the idea of working towards a commons. I’d prefer to write software that helps as many people as possible.

What frustrates me is how many people can’t be paid to do this kind of work. I’ve been lucky to have been able to feed myself while working on open source. Very, very lucky. But for most, it’s doing your job without pay. If we truly want a commons, we have to figure out how to fund it.

Exolymph: I’ve been reading a bunch of your blog posts. I’m curious about how you feel about working in an industry — and perhaps doing work personally — that obviates older jobs that people used to count on.

Steve Klabnik: It is something that I think about a lot. This is something that’s a fundamental aspect of capitalism, and has always haunted it: see the Luddites, for example. This problem is very complex, but here’s one aspect of it: workers don’t get to capture the benefits of increased productivity, at least not directly. Let’s dig into an example to make this more clear.

Let’s say that I’m a textile worker, like the Luddites. Let’s make up some numbers to make the math easy: I can make one yard of fabric per hour with my loom. But here’s the catch: I’m paid by the hour, not by the amount of fabric I make. This is because I don’t own the loom; I just work here. So, over the course of a ten hour day, I make ten yards of fabric, and am paid a dollar for this work.

Next week, when I come to work, a new Loom++ has been installed in my workstation. I do the same amount of work, but can produce two yards of fabric now. At the end of my ten hour day, I’ve made twenty yards of fabric: a 2x increase! But I’m still only being paid my dollar. In other words, the owner of the factory gets twice as much fabric for the same price, but I haven’t seen any gain here.

(Sidebar: There’s some complexity in this that does matter, but this is an interview, not a book 🙂 So for example, yes, the capitalist had to pay for the Loom++ in the first place. This is a concept Marx calls “fixed versus variable capital”, and this is a long enough answer already, so I’ll just leave it at that.)

Now, the idea here is that the other factories will also install Loom++s as well, and at least one of the people who’s selling the cloth will decide that 1.75x as much profit is better, so they’ll undercut the others, and eventually, the price of cloth will fall in half, to match the new productivity level. Now, as a worker, I have access to cheaper cloth. But until that happens, I’m not seeing a benefit, yet the capitalist is collecting theirs. Until they invest in a Loom2DX, with double the productivity of the Loom++, and the cycle starts anew.
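The arithmetic in this loom story is easy to sketch in code. Here is a minimal Python version; the dollar-a-day wage and the yards-per-hour figures come from the example above, while the $0.20-per-yard starting price for cloth is an assumed number of mine, chosen only to make the shares round:

```python
# A sketch of the loom example: the worker's share of each day's
# cloth revenue, before and after the Loom++ is installed.
# The $0.20/yard starting price is assumed, not from the interview.

def wage_share(yards_per_hour, price_per_yard, wage=1.00, hours=10):
    """Fraction of a day's cloth revenue paid out as the worker's wage."""
    revenue = yards_per_hour * hours * price_per_yard
    return wage / revenue

before = wage_share(1, 0.20)   # old loom: the worker gets half the revenue
after = wage_share(2, 0.20)    # Loom++, prices unchanged: the share halves
settled = wage_share(2, 0.10)  # competition halves cloth prices: share recovers
```

Until the price of cloth actually falls, the gap between `before` and `after` is the extra surplus the factory owner pockets, which is the whole point of the example.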

Yet we, as workers, haven’t actually seen the benefits work out the way they should. There’s nothing that guarantees that it will, other than the religion of economists. And the working class has seen their wages stagnate, while productivity soars, especially recently. Here is a study that gets cited a lot, in articles like this one.

“From 1973 to 2013, hourly compensation of a typical (production/nonsupervisory) worker rose just 9 percent while productivity increased 74 percent. This breakdown of pay growth has been especially evident in the last decade, affecting both college- and non-college-educated workers as well as blue- and white-collar workers. This means that workers have been producing far more than they receive in their paychecks and benefit packages from their employers.”

We haven’t been really getting our side of the deal.

Anyway.

So, this is a futurist blog, yet I’ve just been talking about looms. Why? Well, two reasons: First, technologists are the R&D department that takes the loom, looks at it, and makes the Loom++. It’s important to understand this, and to know in our heart of hearts that under capitalism, yes, our role is to automate people out of jobs. Understanding a problem is the first step towards solving it. But second, it’s to emphasize that this isn’t something that’s specific to computing or anything. It’s the fundamental role of technology. We like to focus on the immediate benefit (“We have Loom++s now!!!”) and skip over the societal effects (“Some people are going to make piles of money from this and others may lose their jobs”). Technologists need to start taking the societal effects more seriously. After all, we’re workers too.

I’m at a technology conference in Europe right now, and on the way here, I watched a movie, The Intern. The idea of the movie is basically, “Anne Hathaway runs Etsy (called About the Fit in the movie), and starts an internship program for senior citizens. Robert De Niro signs up because he’s bored with retirement, and surprise! Culture clash.” It was an okay movie. But one small bit of backstory of De Niro’s character really struck me. It’s revealed that before he retired, he used to work in literally the same building as About the Fit is in now. He worked for a phone book company. It’s pretty obvious why he had to retire. The movie is basically a tale of what we’re talking about here.

Exolymph: I’m also curious about what you’d propose to help society through the Computing Revolution (if you will) and its effect on “gainful employment” opportunities.

Steve Klabnik: Okay, so, I’m not saying that we need to keep phone books around so that De Niro can keep his job. I’m also not saying that we need to smash the looms. What I am saying is that in a society which is built around the idea that you have to work to live, and that also rapidly makes people’s jobs obsolete, is a society in which a lot of people are going to be in a lot of pain. We could be taking those productivity benefits and using them to invest back in people. It might be leisure time, it might be re-training; it could be a number of things. But it’s not something that’s going to go away. It’s a question that we as society have to deal with.

I don’t think the pursuit of profits over people is the answer.


Go follow Steve on Twitter and check out his website.

Cricket Compliance: Producing Food without the Humans Who Eat It

Photo by _paVan_.


Lacy was bored. She was proud to work in food production — Mama’s reaction made the drudgery feel worth it when Lacy got home — but the low buzz of the drone and the sameness of the landscape lulled her toward sleep. She was sure that some of her colleagues gave up and drowsed. Lacy wasn’t sure yet how she felt about the group. It was a mixed bag — of races, genders, and hygiene standards — but at least a couple of them seemed nice. Lacy didn’t mind the diversity, per se, but she was uncomfortable around strangers and their strange habits. On the first day another girl had said, “You’ll be broken in quick,” but the routine still felt unfamiliar.

Lacy glanced out the drone’s windshield at the cricket fields in front of her. The creatures teemed on the ground, bouncing and burrowing and fucking and killing each other and feeding voraciously on their synthetic pasture. She looked back over her shoulder to check that the pheromone broadcast was working. A swarm of late-stage adult crickets rolled forward in the wake of the drone.

Lacy gripped her knees and swallowed nausea. She hated the insects. The protein was vital, of course. Mama wouldn’t have brought them to the city otherwise. Accessing the resource density of the metropolis changed their survival baseline. Lacy had gained fifteen pounds in a couple of months. Her little sister’s teeth were sound in her gums, and she could run so far on the game tread. Sometimes when Lacy got home from work, she loaded up Cath’s saved worlds, wandering through fairylands that were like hyper-saturated versions of the home she remembered as a little kid.

They had lived by a river.

Crickets didn’t need rivers. They just needed space, sprinklers, and miscellaneous foodstuffs hauled in from other fields where other workers got bored in the drones. Or did anyone watch those farms? Lacy wasn’t stupid. She knew that this job was provisional — it would only last until the FDA regulation changed in a matter of months. Lacy was a Compliance Technician, according to her contract. When her supervisor interviewed Lacy for the position, he explained that a remote observer system was being put in place. He went over the automated footage analysis (assigned to a certified third party) that would ensure production was up to code. Then he sighed and admitted that he didn’t know where the company was going to move him after there weren’t any workers to interview, train, fire, interview, train, and fire again.

Lacy’s drone beeped softly and the computer’s androgynous voice intoned, “We are approaching the docking station. Initiate the checklist process.” Lacy leaned forward in her seat and started reviewing the figures on the dashboard screen. Number of crickets. Estimated protein values — both nutritional and market. Toxicity and contamination. The numbers always hit their targets.

Career X-Risk: The Legitimate Reason To Fear Computers

Aeon published a long reflection on the possibilities of emergent consciousness, written by George Musser. In the essay he noted:

“Even systems that are not designed to be adaptive do things their designers never meant. […] A basic result in computer science is that self-referential systems are inherently unpredictable: a small change can snowball into a vastly different outcome as the system loops back on itself.”

It’s the butterfly effect, in other words. A small change within a complex system will cause a cascade of new small changes that quickly add up to large changes. Thus computer programs can surprise their designers. Often they’re just buggy, but at other times they develop capabilities that are difficult not to anthropomorphize. Either computers are messing up — cute, maybe frustrating — or they’re stumping us with semblances of creativity. It’s a human impulse to ascribe intent and meaning to any output composed of symbols (for example, text or numbers).
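That unpredictability is easy to demonstrate. The sketch below is not from Musser’s essay; it’s the textbook logistic map, about the simplest system that loops back on itself, where a disturbance in the ninth decimal place snowballs into a completely different trajectory:

```python
# The logistic map x -> r * x * (1 - x): each output is fed back in
# as the next input, so tiny differences compound step by step.

def logistic_orbit(x0, r=4.0, steps=100):
    """Iterate the map `steps` times and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)  # nudge the ninth decimal place

gaps = [abs(x - y) for x, y in zip(a, b)]
# For the first handful of steps the two runs are indistinguishable;
# the gap then grows roughly exponentially, and well before step 100
# the trajectories bear no resemblance to each other.
```

The same feedback structure is what makes self-referential programs hard to reason about: the system’s output becomes its own next input.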

I’m not a mathematician, an engineer, or a scientist. Like most of us, I don’t have the training to understand rudimentary AI. (I don’t have the aptitude either, but that’s a separate discussion.) It’s starting to scare me more and more. I’m still skeptical of x-risk, so that’s not my worry. To be honest, I’m anxious about becoming obsolete. It’ll be a long time before the kind of work that I do can be fully automated / algorithmized, but maybe humans who understand computers better than I do will be able to glue different smart programs together and perform my job with less human labor.

The economy is a complex system. Small gains in efficiency can ripple out to transform entire industries.

© 2019 Exolymph. All rights reserved.
