


Alternate Computer Universes

The following is a guest dispatch written by John Ohno, AKA @enkiv2. His musings on the world that might have been were lightly edited for this context.


For me, the idea of cyberpunk is tied tightly to the assumptions and aesthetics of the early ’80s. And, unlike today, the early ’80s saw the peak of a Cambrian explosion in home computer diversity. Only later were the pathways culled: in the mid-to-late ’80s, as GUI machines like the Macintosh, Amiga, and Atari ST pushed out the 8-bit micros, and in the early ’90s, as poor marketing and business decisions killed the Amiga and left Atari a shell of its former self.

When Neuromancer was published in 1984, comparing home computers on merit was very hard: all of them were dysfunctional in strange ways. (The Apple II line began selling in 1977, but it wasn’t until 1983 that the first Apple II-compatible machine capable of typing lowercase letters was released; the Sinclair machines were so strapped for RAM that they would delete portions of numbers that were too big as the user typed them.) The lineages that survived were arbitrary. Minor changes to history would have produced completely distinct computer universes, alien to our eyes.

In this essay, I’d like to tell you about a specific fork in computer history — one that, if handled differently, would have replaced an iconic and influential machine with one radically different. I’d like to talk about the Macintosh project before Steve Jobs.

In 1983, Apple released the Lisa. It was a flop. Among the first commercial machines with a PARC-style GUI and a mouse, it was too slow to use. At a price point of just under $10,000 (about $24,000 today), and all but requiring a hard disk add-on that cost about as much as the computer itself, very few people were willing to pay as much for a flashy but unusable toy as they would for a car. It only sold 100,000 units.

The Lisa was Jobs’ baby, literally and figuratively: it was named after his daughter, but it was also heavily under his control, based on extrapolations from his limited understanding of a demo of the Alto at PARC. By the time it was released, however, he had already jumped ship. In 1982, realizing that the Lisa would flop, Jobs distanced himself from it and took over Jef Raskin’s Macintosh project, turning it into a budget version of the Lisa (with most of the interesting features removed, and with all development moved from Pascal to assembler in the name of efficiency).

This part of the story is generally pretty well known. It’s part of the Jobs myth: a setback that forces him to reconsider what’s really important and leads to the creation of the Macintosh. What doesn’t get factored into the myth is that Raskin’s original plan for the Macintosh was both more revolutionary and more practical than the Macintosh that actually shipped.

The Macintosh began as a variant on the dedicated word processor, with a few interesting twists. At the time, it was under the direction of Jef Raskin, previously of SAIL and PARC.

The Macintosh, as designed at the time, would use a light pen (rather than a mouse) to select and manipulate buttons — in other words, you’d use it like a stylus-based touchscreen device — but the primary means of navigation would be “LEAP keys”: a modifier key that switched the behavior of typing from insertion to incremental search. Raskin claimed this navigation scheme was up to three times faster than using a mouse, and considering the limits of scrolling speed on the Lisa and similar problems with all of Apple’s bitmapped-display machines at the time, that may even be an underestimate: for long documents, a quick text search would be much faster.
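
Here’s a minimal sketch of the LEAP idea in Python — the function name and buffer model are my own invention, not Raskin’s implementation. While a LEAP modifier is held, each keystroke extends a search pattern and jumps the cursor instead of inserting text:

```python
def leap(buffer: str, anchor: int, pattern: str, forward: bool = True) -> int:
    """Jump to the next/previous occurrence of `pattern`, starting from `anchor`.

    Models LEAP navigation: while the modifier is held, typing extends
    `pattern` and moves the cursor rather than inserting characters.
    """
    hit = buffer.find(pattern, anchor + 1) if forward else buffer.rfind(pattern, 0, anchor)
    return hit if hit != -1 else anchor  # no match: cursor stays put

doc = "The Macintosh began as a variant on the dedicated word processor."
anchor = 0
for partial in ("d", "de", "ded"):   # user types d, e, d with LEAP held
    pos = leap(doc, anchor, partial)
print(doc[pos:pos + 9])  # -> dedicated
```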

While in normal operation the unit would act like a dedicated word processor, it would in fact be a general-purpose, programmable computer. The normal way to program it would be to write code directly into your text document and highlight it — upon which the language would be identified, the code compiled, and the highlighted span turned into a clickable button that executes when clicked. In other words, a system optimized for “literate programming.”
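
As a toy Python sketch of that pattern (my own illustration, not the Swyft’s actual mechanism — I’m marking code spans with [[...]] for want of highlighting):

```python
# Code living inside an ordinary text document becomes an executable "button".
document = """Quarterly notes: revenue is up.
[[print(sum(x * x for x in range(4)))]]
Remember to file the report by Friday."""

def make_buttons(text: str):
    """Compile every [[...]] span and return one click handler per span."""
    buttons = []
    for line in text.splitlines():
        if line.startswith("[[") and line.endswith("]]"):
            code = compile(line[2:-2], "<document>", "exec")
            buttons.append(lambda c=code: exec(c))  # the "clickable" closure
    return buttons

for click in make_buttons(document):
    click()  # -> 14
```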

The proposal at the time of the project’s takeover was a little more ambitious, with support for a dial-up service for access to databases (something more like Minitel or Prodigy than today’s web) and an APL-derived language. When control over the project was taken away from Raskin, however, the core ideas mentioned above migrated to the project’s heirs: an Apple II add-on called the SwyftCard and, later, a dedicated word processor called the Canon Cat.

Ultimately, a few things killed the original Macintosh project. First, Raskin and Jobs were both abrasive people with big egos, and Raskin had circumvented Jobs (rather than convincing him) in order to get the Macintosh project approved, which made him and his project an easy target later (“The Mac and Me,” pp. 19–21).

Second, Raskin loudly criticized the Lisa project for exactly the problems it would later turn out to have (by not considering cost or speed, it became a slow, expensive machine). But his criticism did nothing to make the Lisa faster, even as he supported other tech (the widespread use of Pascal in system software, high-res bitmapped displays, multithreading) that was blamed for some of the Lisa’s bloat.

In other words, it’s possible (and maybe even straightforward) to claim that Raskin is partially to blame for the Lisa’s failure (despite not working on that project directly) and fully to blame for making his Macintosh project a juicy target for takeover.

The SwyftCard implemented many of the planned features, but (from the limited information I can find) it looks like it didn’t sell well — after all, it was an add-on for the Apple II released shortly after the Macintosh launched, and the computer landscape by that point had changed.

The Macintosh project under Jobs was in many ways a product of spite: an attempt to prove that a Lisa clone could be made with the budget of a dedicated word processor project in only two years, but also an attempt to demonstrate that such a project needed to reject Pascal, structured programming, and all the elements of good design that Raskin championed.

Nevertheless, it incorporated some aspects of Raskin’s worldview (like being heavily driven by cost concerns and trying to avoid having multiple distinct idiomatic ways of performing tasks). The result was a project that was less impressive than the Lisa on all fronts except for speed and marketing.

By the time 1985 rolled around, the Amiga and the Atari ST had come out, positioned as direct competition to the Macintosh. While these machines were both cheaper and technically superior (supporting color and multithreading, with twice the RAM and a CPU double the speed), Apple had already won the marketing war with its Super Bowl ad. And while the Macintosh took another decade to start selling well, its design assumptions heavily influenced all GUI machines that appeared later.

Raskin licensed the SwyftCard designs to Canon, which produced the Canon Cat in 1987 (the same year as Windows 2.0 — in other words, the year the IBM PC clone world adopted Apple’s assumptions). The Canon Cat cost about $1,500 (more than $3,000 in today’s money), more than many people would pay for a more capable machine at the time. Marketing slip-ups at Canon resulted in further poor sales:

Raskin claimed that its failure was due in some part to Steve Jobs, who successfully pitched Canon on the NeXT Computer at about the same time. It has also been suggested that Canon canceled the Cat due to internal rivalries among its divisions. (After running a cryptic full-page advertisement in the Wall Street Journal announcing that the “Canon Cat is coming” months before it was available, Canon failed to follow through: it never aired the completed TV commercial when the Cat went on sale, only allowed the Cat to be sold by its typewriter salespeople, and prevented Raskin from selling the Cat directly with a TV demonstration of how easy it was to use.)

Shortly thereafter, the stock market crash of 1987 so panicked Information Appliance’s venture capitalists that they drained millions of dollars from the company, depriving it of the capital it needed to manufacture and sell the Swyft.

In the end, Raskin’s Macintosh exerted very little influence on the landscape of computer interfaces, while Jobs’ Macintosh, a nearly unrelated project, has had enormous ramifications. GUI machines prior to 1984 were considered toys (and typically were) — pointing devices and high-resolution graphics were associated with video games, and business machines maintained a “professional” image by avoiding mice and graphics. The Macintosh and its competitors changed this permanently, and ideas popularized by the Macintosh team (like hiding complexity, avoiding configurability, and omitting expansion ports) have had a huge impact on the way user interfaces are designed.

A world based on Raskin’s Macintosh would be very different: a world optimized for fast text editing, where programs were distributed as source inside text documents and both documents and user interfaces were designed for quick keyword-search-based navigation. Only a handful of systems like this exist today, although incremental search has become common in web browsers in the past decade and template languages like Ruby on Rails, Ren’Py, and JSF (along with notebook interfaces like Jupyter) have some resemblance to the Swyft UI.

Raskin continued playing with UI ideas until his death in 2005; his last big project was Archy.


Again: “Alternate Computer Universes” was written by John Ohno / @enkiv2. Header photo by Ismael Villafranco.

Controlling the Opposition to Some Extent

This quote is often attributed to Vladimir Lenin: “The best way to control the opposition is to lead it ourselves.” He speaks of puppet movements and useful idiots. (The latter term is also attributed to Lenin, as it happens.) There is a less-popular companion statement, which seems to have bubbled up from the frustrated id of anonymous extremists:

"All opposition is controlled opposition." Made with Buffer's Pablo.

“All opposition is controlled opposition.” Made with Buffer’s Pablo.

The idea behind this maxim is that the state allows a certain amount of opposition to exist, and often infiltrates protest movements or steers them from afar. (Anarchist groups have developed what they call “security culture” as a way to guard against this.)

Dissidents are permitted to bleed off tension without actually endangering the regime. People with the savvy and energy to organize real trouble are swallowed up by doomed groups fighting for doomed causes.

For example, the “controlled opposition” interpretation of the #NoDAPL protests would be: The activists feel like they’ve won a victory, but the pipeline will just be slightly rerouted, built eventually, and imperil the groundwater in due time. The tribe’s supposed success serves to placate the public. Behind the scenes, the state and its capitalist cronies do whatever they want.

Some observers interpret mainstream political parties as controlled opposition en masse. Show contests orchestrated by the deep state in order to keep the voters occupied. Wars are engineered by corporate interests. According to this paradigm, we don’t just swoop in and crush ISIS because the military-industrial complex thrives on hot wars.

I think “all opposition is controlled opposition” is a bit like “what doesn’t kill you makes you stronger”. Both sayings are nonsense when interpreted literally, but they’re catchy ways to encapsulate an emotionally compelling idea.

Yes, clearly controlled opposition does exist. But genuinely disruptive fringe groups also exist. The British government didn’t benefit from the IRA, and the French Revolution managed to behead a couple of monarchs (plus many unfortunate members of the aristocracy). Mao Zedong’s rise to power was not controlled opposition.

In general, I think people tend to see conspiracies where there are actually incentive structures. Of course the state has to strike a balance between crushing dissent entirely and allowing it to enter society’s memetic bloodstream. If the politicians and bureaucrats err too far in either direction, the state loses its power.


Header photo via the euskadi 11.

It Shouldn’t Be Easy to Understand

Mathias Lafeldt writes about complex technical systems. For example, on finding root causes when something goes wrong:

One reason we tend to look for a single, simple cause of an outcome is because the failure is too complex to keep it in our head. Thus we oversimplify without really understanding the failure’s nature and then blame particular, local forces or events for outcomes.

I think this is a fractal insight. It applies to software, it applies to individual human decisions, and it applies to collective human decisions. We look for neat stories. We want to pinpoint one factor that explains everything. But the world doesn’t work that way. Almost nothing works that way.

In another essay, Lafeldt wrote, “Our built-in pattern detector is able to simplify complexity into manageable decision rules.” Navigating life without heuristics is too hard, so we adapted. But using heuristics — or really any kind of abstraction — means losing some of the details. Or a lot of the details, depending on how far you abstract.

That said, here’s Alice Maz with an incisive explanation of why everything is imploding:

Automation is transforming bell curve to power law, hollowing out the middle class as only a minority can leverage their labor to an extreme degree. Cosmopolitan egalitarianism for the productive elite, nationalism and demagoguery for the masses. For what it’s worth, I consider this a Bad Outcome, but it is one of the least bad ones I have been able to come up with that is mid-term realistic.

Which corporation will be the first to issue passports?

Rushkoff argued that programming was the new literacy, and he was right, but the specifics of his argument get lost in the retelling. The way he saw it, this was the start of the third epoch, the preceding two ushered in by 1) the invention of writing, 2) the printing press.

Writing broke communal oral tradition and replaced it with record-keeping and authoritative narration by the literate minority to the masses. Only the few could produce texts, and the many depended on them to recite and interpret. This is the frame (pre-Vatican II, maybe) that Catholicism inhabits.

The printing press led to mass literacy. This is the frame of Protestantism: the idea is for each man to read and interpret for himself. But after a brief spate of widely accessible press (remember Paine’s Common Sense? very dangerous!), access tightened up. Hence mass media as gatekeeper, arbiter of consensus reality.

The few report, and the many receive. Not that journalists were ever the elite — no more than the Egyptian scribes were. They were the priestly class, Weber’s “new middle”. (Also lawyers. Remember the backwoods lawyer? It used to be that all you needed was the books and a good head. Before credentialism ate the field.)

The internet killed consensus reality. Now anyone can trivially disseminate arbitrary text. But the platforms on which those texts are seen are controlled by the new priests — programmers — who determine how information flows. This is what critics of “the Facebook algorithm” et al. are groping at. The many can create, but the few craft the landscape that hosts creation.

It’s still early. It remains to be seen whether we can keep relatively open platforms (like Twitter circa 2010 — open in the unimpeded sense), or whether the space narrows and new gatekeepers secure their hold. But that will be determined by programmers. (Maybe lawmakers.) The rest of us are along for the ride.

That’s all copy-pasted from Twitter and then lightly edited to be more readable in this format.

I included the opening quote about complex systems because although this neat narrative holds more truth than some others, it’s still a neat narrative. Don’t forget that. Reality is multi-textured.


Header photo by kev-shine.

The Strategic Subjects List

Detail of a satirical magazine cover for All Cops Are Beautiful, created by Krzysztof Nowak.


United States policing is full of newspeak, the euphemistic language that governments use to reframe their control of citizens. Take “officer-involved shooting”, a much-maligned term that police departments and then news organizations use to flatten legitimate self-defense and extrajudicial executions into the same type of incident.

And now, in the age of algorithms, we have Chicago’s “Strategic Subjects List”:

Spearheaded by the Chicago Police Department in collaboration with the Illinois Institute of Technology, the pilot project uses an algorithm to rank and identify people most likely to be perpetrators or victims of gun violence based on data points like prior narcotics arrests, gang affiliation and age at the time of last arrest. An experiment in what is known as “predictive policing,” the algorithm initially identified 426 people whom police say they’ve targeted with preventative social services. […]

A recently published study by the RAND Corporation, a think tank that focuses on defense, found that using the list didn’t help the Chicago Police Department keep its subjects away from violent crime. Neither were they more likely to receive social services. The only noticeable difference it made was that people on the list ended up arrested more often.

WOW, WHAT A WEIRD COINCIDENCE! The “strategic subjects” on the list were subjected, strategically, to increased police attention, and I’m sure they were all thrilled by the Chicago Police Department’s interest in their welfare.

Less than fifty years ago, the Chicago Police Department literally tortured black men in order to coerce “confessions”. None of that is euphemism. A cattle prod to the genitals — but maybe it ought to be called “officer-involved agony”?

I get so worked up about language because language itself can function as a predictive model. The words people use shape how they think, and thoughts have some kind of impact on actions. Naturally, the CPD officers who carried out the torture called their victims the N-word.

I wonder what proportion of the Strategic Subjects List is black? Given “data points like prior narcotics arrests [and] gang affiliation”, an algorithm can spit out the legacy of 245 years of legal slavery more efficiently than a human. But torture in Chicago is still handcrafted by red-blooded American men. Trump would be proud.
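
To make the mechanism concrete, here’s a deliberately oversimplified Python sketch of how a “heat list” ranking works. The features and weights are invented for illustration — the actual Strategic Subjects List model hasn’t been published in this form — but the structure shows how bias in past arrests flows straight into the score:

```python
# Deliberately simplified sketch of a "heat list" ranking.
# Features and weights are invented; they are not the CPD's actual model.

def risk_score(person: dict) -> float:
    weights = {
        "prior_narcotics_arrests": 2.0,
        "gang_affiliation": 3.0,
        "age_at_last_arrest": -0.1,  # younger -> higher score
    }
    return sum(w * person[feature] for feature, w in weights.items())

people = [
    {"name": "A", "prior_narcotics_arrests": 3, "gang_affiliation": 1,
     "age_at_last_arrest": 19},
    {"name": "B", "prior_narcotics_arrests": 0, "gang_affiliation": 0,
     "age_at_last_arrest": 40},
]

# The "strategic subjects" are simply the top of this sorted list.
# Note the feedback loop: arrests raise the score, the score draws
# police attention, and attention produces more arrests.
for p in sorted(people, key=risk_score, reverse=True):
    print(p["name"], round(risk_score(p), 1))  # -> A 7.1, B -4.0
```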

The Elites and the Random Schmucks

In the 1940s, while England was being terrorized by Nazis, George Orwell wrote this:

“An army of unemployed led by millionaires quoting the Sermon on the Mount — that is our danger. But it cannot arise when we have once introduced a reasonable degree of social justice. The lady in the Rolls-Royce car is more damaging to morale than a fleet of Goering’s bombing planes.”

The message hasn’t expired. Orwell’s lengthy essay (which he actually refers to as a book) is particularly relevant in light of Brexit and the rise of Donald Trump.

After conducting a broad ethnography of the different demographic factions, Orwell excoriates capitalism and entreats Britain to adopt democratic socialism. With hindsight, we can see that he is extremely wrong about a couple of things — particularly the supposed “efficiency” of nationalized economies. Orwell repeatedly asserts that England is incapable of defeating Hitler without a revolution, which… no.

However, I have a lot of sympathy for Orwell’s overall position. He condemns the status quo government of his day because it does not represent the regular citizens, nor does its design promote their wellbeing. Sound familiar?

"The Maunsell Sea Forts, part of London's World War II anti-aircraft defences." Photo by Steve Cadman.

“The Maunsell Sea Forts, part of London’s World War II anti-aircraft defences.” Photo by Steve Cadman.

Considering that I live in a democratic republic, and most of my readers live in democratic republics, it seems appropriate to ask — isn’t it weird that “populism” is a dirty word? Aren’t related phrases like “the common people” supposed to be the mainstays of representative governments?

Veteran financial journalist Felix Salmon wrote in response to Brexit:

“If you move from a democracy of the elites to a pure democracy of the will of the people, you will pay a very, very heavy price. Governing is a complicated and difficult job — it’s not something which can helpfully be outsourced to the masses, especially when the people often base their opinions on outright lies.”

That’s a pretty compelling argument. People are idiots with no awareness of history (myself included, often).

The problem with true, unfettered democracy is that it erodes the ground on which we build our Schelling fences. The will of the people, en masse, is not compatible with the Bill of Rights. Quinn Norton tweetstormed on this topic:

“Human rights are not democratic. Rather, they are limits placed on democracy. […] If you all get together and vote to have me for dinner, my right to not be eaten is meant to trump your democratic will. […] So when people exclaim human rights democracy blah blah blah, please remember, our rights are there to beat democracy back with a stick.”

My tentative conclusion is that successful governments figure out a balance of power not just between the executive, legislative, and judicial branches of government, but also between the elites and the random schmucks. Of course, that heavily depends on who gets to define “successful”…

Cryptocurrencies Aren’t Fake, They’re Just Libertarian


Bitcoin-themed coaster. Photo by pinguino k.

A headline from the Miami Herald: “Bitcoin not money, Miami judge rules in dismissing laundering charges” — c’mon! Bitcoin is clearly money. I have mixed feelings about how cryptocurrencies should be regulated, but they are obviously currencies. The judge’s rubric for this question was weird and ahistorical:

“Miami-Dade Circuit Judge Teresa Mary Pooler ruled that Bitcoin was not backed by any government or bank, and was not ‘tangible wealth’ and ‘cannot be hidden under a mattress like cash and gold bars.’

‘The court is not an expert in economics; however, it is very clear, even to someone with limited knowledge in the area, the Bitcoin has a long way to go before it [is] the equivalent of money,’ Pooler wrote in an eight-page order.”

Most mainstream currencies are backed by governments, but that’s not an inherent feature of money, just a modern quirk. How do people think money got started? It grew out of bartering, and for a very long time it wasn’t regulated or centrally controlled at all. [Edited to add: David Graeber’s Debt asserts that money actually emerged before bartering. Does not change my larger point. See the note at the end.] Just as an example, per the Federal Reserve Bank of San Francisco:

“Between 1837 and 1866, a period known as the ‘Free Banking Era,’ lax federal and state banking laws permitted virtually anyone to open a bank and issue currency. Paper money was issued by states, cities, counties, private banks, railroads, stores, churches, and individuals.”

And that’s relatively recent! John Lanchester wrote a truly excellent overview of what money actually is and how it functions for the London Review of Books, and I wish I could make this judge read it.

Granted, legal definitions exist in a parallel reality, so maybe there’s some legislative reason why the US government can’t bestow official currency status on non-state-sponsored currencies. They’d certainly have to step up their game when it comes to regulating them, which would be a lot of work since so far their game has been practically nonexistent.

Just to top off the ridiculousness, Tim Maly drew my attention to this bit from the Miami Herald article: “‘Basically, it’s poker chips that people are willing to buy from you,’ said Evans, a virtual-currency expert who was paid $3,000 in Bitcoins for his defense testimony.”

As Maly quipped on Twitter, “Bitcoin isn’t money laundering because bitcoin isn’t money says bitcoin expert paid in bitcoin.”

Is this merely a question of semantics? Yes. But I’ve always come down on the side that language is important — it’s both my first love and my livelihood, after all — and it bothers me to see foundational economic concepts misapplied. Let’s at least describe our brave new world accurately.


Note on the origins of money: Facebook commenter Greg Shuflin mentioned David Graeber’s book Debt: The First 5,000 Years and its assertion that bartering came after money. It doesn’t change my larger point, but here’s the relevant Wikipedia passage:

“The author claims that debt and credit historically appeared before money, which itself appeared before barter. This is the opposite of the narrative given in standard economics texts dating back to Adam Smith. To support this, he cites numerous historical, ethnographic and archaeological studies. He also claims that the standard economics texts cite no evidence for suggesting that barter came before money, credit and debt, and he has seen no credible reports suggesting such. […] He argues that credit systems originally developed as means of account long before the advent of coinage, which appeared around 600 BC. Credit can still be seen operating in non-monetary economies. Barter, on the other hand, seems primarily to have been used for limited exchanges between different societies that had infrequent contact and often were in a context of ritualized warfare.”

Sounds like an interesting book!

The Mad Monk in a New Century

Cyber portrait of Rasputin. Artwork by ReclusiveChicken.


Some men gain their reputation and influence through sheer charisma, perhaps with a dash of self-engineered notoriety:

“I realized, of course, that a lot of the talk about him was petty, foolish invention, but nonetheless I felt there was something real behind all these tales, that they sprang from some weird, genuine, living source. […] After all, what didn’t they say about Rasputin? He was a hypnotist and a mesmerist, at once a flagellant and a lustful satyr, both a saint and a man possessed by demons. […] With the help of prayer and hypnotic suggestion he was, apparently, directing our military strategy.” — Teffi (Nadezhda Lokhvitskaya)

Now imagine if Rasputin had deep learning at his disposal — a supercomputer laden with neural nets and various arcane algorithms. What would Rasputin do with Big Data™? Perhaps the Rasputin raised on video games and fast food would be entirely different from the Rasputin who rose up from the Siberian peasantry.

Which rulers would a modern Rasputin seek to enchant? Russia has fallen from its once formidable greatness, and I don’t think Vladimir Putin is as gullible as the Tsar was. China is the obvious choice, but Xi Jinping similarly seems too savvy. Somehow I doubt that Rasputin, the charlatan Mad Monk, could gain much traction in a first-class military power these days. Would he be drawn to the turmoil of postcolonial Africa?

Maybe Rasputin would be a pseudonymous hacker, frequenting cryptocurrency collectives and illicit forums. Would that kind of power suffice? Would he be willing to undo corporate and governmental infrastructure without receiving credit? Would he have the talent for it, anyway? Not everyone can become a programmer. Maybe he’d flourish on Wall Street instead.

What I’m really wondering is whether Rasputin’s grand influence was a result of being in the right place at the right time. Would he have been important no matter when he was born? You can ask this question about any historical figure, of course, but I want to ask it about Rasputin because he’s cloaked in mysticism. I can imagine him drawing a literal dark cloak around himself, shielding his body from suspicions that he was just a regular human.

You’ve probably heard the rumors about how hard Rasputin was to kill. Who is the Mad Monk’s modern counterpart? Which person who wields the proverbial power behind the throne will be very hard to disappear when it comes time for a coup?

Software Meets Capitalism: Interview with Steve Klabnik


Old woman working at a loom. Photo by silas8six.

I interviewed Steve Klabnik via email. If you’re part of the open-source world, you might recognize his name. Otherwise I’ll let him introduce himself. We discussed economics, technological unemployment, and software.

Exolymph: The initial reason I reached out is that you’re a technologist who tweets about labor exploitation and other class issues. I’m currently fascinated by how tech and society influence each other, and I’m particularly interested in the power jockeying within open-source communities. You seem uniquely situated to comment on these issues.

Originally I planned to launch right into questions in this email, but then I started opening your blog posts in new tabs, and now I need a little more time still. But! Here’s a softball one for starters: How would you introduce yourself to an oddball group of futurists (which is my readership)?

Steve Klabnik: It’s funny that you describe this one as a softball, because it should be, yet I think it’s actually really tough. I find it really difficult to sum up a person in a few words; there’s just so much you miss out on. Identity is a precarious and complex topic.

I generally see myself as someone who’s fundamentally interdisciplinary. I’m more about the in-betweens than I am about any specific thing. The discipline that I’m most trained in is software; it’s something I’ve done for virtually my entire life, and I have a degree in it. But software by itself is not that interesting to me. It’s the stuff that you can do with software, the impact that it has on our world, questions of ethics, of social interaction. This draws a connection to my second favorite thing: philosophy. I’m an amateur here, unfortunately. I almost got a higher degree in this stuff, but life has a way of happening. More specifically, I’m deeply enthralled with the family of philosophy that is colloquially referred to as “continental” philosophy, though I’m not sure I find that distinction useful. My favorites in this realm are Spinoza, Nietzsche, Marx, and Deleuze. I find that their philosophical ideas can have deep implications for software, its place in the world, and society at large.

Since we live under capitalism, “who are you” is often conflated with “what do you do for work”. As far as that goes, I work for Mozilla, the company that makes Firefox. More specifically, I write documentation for Rust, a programming language that we and a broader community have developed. I literally wrote the book on it 🙂 Mozilla has a strong open-source ethic, and that’s one of the reasons I’ve ended up working there; I do a lot of open-source work. On GitHub, a place where open-source developers share their code, this metric says that I’m the twenty-ninth most active contributor, with 4,362 contributions in the last 365 days. Before Rust, I was heavily involved with the Ruby on Rails community, and the broader Ruby community at large. I still maintain a few packages in Ruby.

Exolymph: To be fair, I described it as a softball question precisely because of the capitalist shortcut you mentioned, although I’m not sure I would have articulated it like that. Darn predictable social conditioning.

What appeals to you about open source? What frustrates you about open source?

Steve Klabnik: I love the idea of working towards a commons. I’d prefer to write software that helps as many people as possible.

What frustrates me is how many people can’t be paid to do this kind of work. I’ve been lucky to have been able to feed myself while working on open source. Very, very lucky. But for most, it’s doing your job without pay. If we truly want a commons, we have to figure out how to fund it.

Exolymph: I’ve been reading a bunch of your blog posts. I’m curious about how you feel about working in an industry — and perhaps doing work personally — that obviates older jobs that people used to count on.

Steve Klabnik: It is something that I think about a lot. This is something that’s a fundamental aspect of capitalism, and has always haunted it: see the Luddites, for example. This problem is very complex, but here’s one aspect of it: workers don’t get to capture the benefits of increased productivity, at least not directly. Let’s dig into an example to make this more clear.

Let’s say that I’m a textile worker, like the Luddites were. Let’s make up some numbers to make the math easy: I can make one yard of fabric per hour with my loom. But here’s the catch: I’m paid by the hour, not by the amount of fabric I make. This is because I don’t own the loom; I just work here. So, over the course of a ten-hour day, I make ten yards of fabric, and am paid a dollar for this work.

Next week, when I come to work, a new Loom++ has been installed in my workstation. I do the same amount of work, but can produce two yards of fabric now. At the end of my ten hour day, I’ve made twenty yards of fabric: a 2x increase! But I’m still only being paid my dollar. In other words, the owner of the factory gets twice as much fabric for the same price, but I haven’t seen any gain here.

(Sidebar: There’s some complexity in this that does matter, but this is an interview, not a book 🙂 So for example, yes, the capitalist had to pay for the Loom++ in the first place. This is a concept Marx calls “fixed versus variable capital”, and this is a long enough answer already, so I’ll just leave it at that.)

Now, the idea here is that the other factories will also install Loom++s, and at least one of the people selling the cloth will decide that 1.75x as much profit is better, so they’ll undercut the others, and eventually the price of cloth will fall in half, to match the new productivity level. Now, as a worker, I have access to cheaper cloth. But until that happens, I’m not seeing a benefit, yet the capitalist is collecting theirs. Until they invest in a Loom2DX, with double the productivity of the Loom++, and the cycle starts anew.
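
Run end to end, Klabnik’s made-up numbers look like this in Python (the $1-per-yard selling price is my assumption; the interview doesn’t specify one):

```python
# The loom scenario with an invented price: the worker's pay is fixed per
# hour, so doubled productivity accrues entirely to the owner until
# competition eventually pushes the price of cloth down.
hours = 10
wage_per_hour = 0.10      # $1 per ten-hour day, as in the example
price_per_yard = 1.00     # assumed; not specified in the interview

for loom, yards_per_hour in [("Loom", 1), ("Loom++", 2)]:
    yards = yards_per_hour * hours
    pay = wage_per_hour * hours
    revenue = yards * price_per_yard
    print(f"{loom}: {yards} yd, worker earns ${pay:.2f}, owner keeps ${revenue - pay:.2f}")
# Loom: 10 yd, worker earns $1.00, owner keeps $9.00
# Loom++: 20 yd, worker earns $1.00, owner keeps $19.00
```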

Yet we, as workers, haven’t actually seen the benefits work out the way they should. There’s nothing that guarantees that they will, other than the religion of economists. And the working class has seen its wages stagnate while productivity soars, especially recently. Here is a study that gets cited a lot, in articles like this one.

“From 1973 to 2013, hourly compensation of a typical (production/nonsupervisory) worker rose just 9 percent while productivity increased 74 percent. This breakdown of pay growth has been especially evident in the last decade, affecting both college- and non-college-educated workers as well as blue- and white-collar workers. This means that workers have been producing far more than they receive in their paychecks and benefit packages from their employers.”

We haven’t really been getting our side of the deal.
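
Spelled out as average annual rates, the gap in those EPI figures looks like this:

```python
# Convert the study's cumulative 1973-2013 figures to average annual rates.
years = 2013 - 1973
for label, cumulative in [("compensation", 0.09), ("productivity", 0.74)]:
    annual = (1 + cumulative) ** (1 / years) - 1
    print(f"{label}: {annual:.2%} per year")
# compensation: 0.22% per year
# productivity: 1.39% per year
```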

Anyway.

So, this is a futurist blog, yet I’ve just been talking about looms. Why? Well, two reasons: First, technologists are the R&D department that takes the loom, looks at it, and makes the Loom++. It’s important to understand this, and to know in our heart of hearts that under capitalism, yes, our role is to automate people out of jobs. Understanding a problem is the first step towards solving it. But second, it’s to emphasize that this isn’t something that’s specific to computing or anything. It’s the fundamental role of technology. We like to focus on the immediate benefit (“We have Loom++es now!!!”) and skip over the societal effects (“Some people are going to make piles of money from this and others may lose their jobs”). Technologists need to start taking the societal effects more seriously. After all, we’re workers too.

I’m at a technology conference in Europe right now, and on the way here, I watched a movie, The Intern. The idea of the movie is basically, “Anne Hathaway runs Etsy (called About the Fit in the movie), and starts an internship program for senior citizens. Robert De Niro signs up because he’s bored with retirement, and surprise! Culture clash.” It was an okay movie. But one small bit of backstory of De Niro’s character really struck me. It’s revealed that before he retired, he used to work in literally the same building as About the Fit is in now. He worked for a phone book company. It’s pretty obvious why he had to retire. The movie is basically a tale of what we’re talking about here.

Exolymph: I’m also curious about what you’d propose to help society through the Computing Revolution (if you will) and its effect on “gainful employment” opportunities.

Steve Klabnik: Okay, so, I’m not saying that we need to keep phone books around so that De Niro can keep his job. I’m also not saying that we need to smash the looms. What I am saying is that a society built around the idea that you have to work to live, and that also rapidly makes people’s jobs obsolete, is a society in which a lot of people are going to be in a lot of pain. We could be taking those productivity benefits and using them to invest back in people. It might be leisure time, it might be re-training; it could be a number of things. But it’s not something that’s going to go away. It’s a question that we as a society have to deal with.

I don’t think the pursuit of profits over people is the answer.


Go follow Steve on Twitter and check out his website.

Very Basic Climate Change Reflections

I put climate change on my list of cyberpunk-adjacent topics, but this is the first time I’ve written about it. Here’s my take: Anthropogenic global warming is bad in terms of its effect on human environments and our access to natural resources. It’s not intrinsically bad because intrinsic badness doesn’t exist; ecosystems change over time and that includes extinction events.


Illustration by JD Reeves.

However, climate change could drastically change the trajectory of the next century or three. We might have to drop everything — especially industrialization — and rebuild from scratch. Maybe it’ll look like Snowpiercer (which I haven’t actually watched). Or maybe we’ll figure out an alternate fuel source quickly and things will stay relatively close to the way they are now. I don’t have the domain knowledge to make a confident guess.

I consider climate change a cyberpunk topic because I see cyberpunk as part of a long timeline of humans using technology to leverage economic and sociopolitical change. Or, to look at it from a different angle, a long timeline of humans inventing technologies that cause economic and sociopolitical change regardless of the inventors’ intent. I remain fascinated with the way human nature flows into whatever container it’s given.
