
Tag: computers

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Alternate Computer Universes

The following is a guest dispatch written by John Ohno, AKA @enkiv2. His musings on the world that might have been were lightly edited for this context.


For me, the idea of cyberpunk is tied tightly to the assumptions and aesthetics of the early ’80s. And, unlike today, the early ’80s saw the peak of a Cambrian explosion in diversity with regard to home computers. The pathways were only culled later: in the mid-to-late ’80s, as GUI machines like the Macintosh, Amiga, and Atari ST pushed out the 8-bit micros, and in the early ’90s, as poor marketing and business decisions killed the Amiga and left Atari a shell of its former self.

When Neuromancer was published, in 1984, comparing home computers on merit was very hard: all of them were dysfunctional in strange ways (the Apple II line began selling in 1977, but it wasn’t until 1983 that the first Apple II-compatible machine capable of typing lowercase letters was released; the Sinclair machines were so strapped for RAM that they would delete portions of numbers that were too big as the user typed them). The lineages that survived were arbitrary. Minor changes to history would have produced completely distinct computer universes, alien to our eyes.

In this essay, I’d like to tell you about a specific fork in computer history — one that, if handled differently, would have replaced an iconic and influential machine with one radically different. I’d like to talk about the Macintosh project before Steve Jobs.

In 1983, Apple released the Lisa. It was a flop. As the first commercial machine with a PARC-style GUI and a mouse, it was too slow to use. At a price point of just under $10,000 (about $24,000 today), and all but requiring a hard disk add-on that cost about as much as the computer itself, very few people were willing to pay as much for a flashy but unusable toy as they would for a car. It sold only 100,000 units.

The Lisa was Jobs’ baby, figuratively and literally: it was named after his daughter, but it was also heavily under his control and based on extrapolations of his limited understanding of a demo of the Alto at PARC. By the time it was released, however, he had already jumped ship. In 1982, realizing that the Lisa would flop, Jobs distanced himself from it and took over Jef Raskin’s Macintosh project, turning it into a budget version of the Lisa (with most of the interesting features removed, and with all development moved from Pascal to assembler in the name of efficiency).

This part of the story is generally pretty well known. It’s part of the Jobs myth: a setback that forces him to reconsider what’s really important and leads to the creation of the Macintosh. What doesn’t get factored into the myth is that Raskin’s original plan for the Macintosh was both more revolutionary and more practical than the Macintosh that actually shipped.

The Macintosh began as a variant on the dedicated word processor, with a few interesting twists. At the time, it was under the direction of Jef Raskin, previously of SAIL and PARC.

The Macintosh, as designed at the time, would have used a light pen (rather than a mouse) for selecting and manipulating buttons; in other words, you’d have used it like a stylus-based touch-screen device. But the primary means of navigation would have been something called “LEAP keys,” wherein a modifier key switched the behavior of typing from insertion to search. Raskin claimed that this navigation scheme was up to three times faster than using a mouse, and considering the limits of scrolling speed on the Lisa and similar problems with all the bitmapped display devices coming out of Apple at the time, this seems like an underestimate: for long documents, a quick text search would be much faster.
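To make LEAP concrete in modern terms, here is a minimal Python sketch of the idea, under my own assumptions: holding the modifier turns keystrokes into an incremental search that moves the cursor instead of inserting text. The LeapBuffer class and its method names are hypothetical, and the real design also supported leaping backward and selecting between leap targets, which this omits.

    class LeapBuffer:
        # A text buffer where the same keystrokes either insert or "leap".
        def __init__(self, text):
            self.text = text
            self.cursor = 0  # current insertion point

        def insert(self, typed):
            # Normal mode: typing inserts characters at the cursor.
            self.text = self.text[:self.cursor] + typed + self.text[self.cursor:]
            self.cursor += len(typed)

        def leap(self, typed):
            # LEAP mode: the keystrokes become an incremental search and the
            # cursor jumps to the next occurrence of the string typed so far.
            hit = self.text.find(typed, self.cursor + 1)
            if hit == -1:
                hit = self.text.find(typed)  # wrap around, like editor search
            if hit != -1:
                self.cursor = hit
            return hit != -1

    buf = LeapBuffer("call me ishmael. some years ago, never mind how long.")
    buf.leap("some")         # the cursor jumps straight to "some years ago"
    buf.insert("[leaped] ")  # modifier released, so typing inserts again
    print(buf.text)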

While in normal operation the unit would act like a dedicated word processor, it would in fact be a general-purpose, programmable computer. The normal way to program it would be to write code directly into your text document and highlight it, upon which the language would be identified, the code compiled, and the highlighted region turned into a clickable button that executes it. In other words, it was a system optimized for ‘literate programming’.
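As a rough modern approximation (not the actual Swyft implementation), the sketch below compiles a highlighted span of a plain-text document and wraps it in a zero-argument “button” callable. It assumes the embedded language is Python; the real proposal identified the language automatically and, as noted below, favored an APL-derived language.

    document = """Quarterly notes: revenue grew, costs fell.

    print("button pressed:", 6 * 7)

    More prose after the embedded program.
    """

    def make_button(doc, start, end):
        # Compile the highlighted span into a "button": a zero-argument
        # callable that executes the embedded code when "clicked".
        code = compile(doc[start:end], "<document>", "exec")
        return lambda: exec(code, {})

    # "Highlight" the embedded program by its offsets in the document.
    start = document.index('print(')
    end = document.index(')', start) + 1
    button = make_button(document, start, end)

    button()  # clicking the button prints: button pressed: 42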

The proposal at the time of the project’s takeover was a little more ambitious, with support for a dial-up service for access to databases (something more like Minitel or Prodigy than today’s web) and an APL-derived language; when control over the project was taken away from Raskin, however, the core ideas mentioned above migrated to heirs of the project (an Apple II add-on called the SwyftCard and, later, a dedicated word processor called the Canon Cat).

Ultimately, a few things killed the original Macintosh project. First, Raskin and Jobs were both abrasive people with big egos, and Raskin had circumvented Jobs (rather than convincing him) in order to get the Macintosh project approved, which made him and his project an easy target later (“The Mac and Me,” pp. 19–21).

Second, Raskin loudly criticized the Lisa project for exactly the problems it would later turn out to have (by ignoring cost and speed, it became a slow, expensive machine), but his criticism did nothing to make the Lisa faster, and he simultaneously championed other technologies (the widespread use of Pascal in system software, high-resolution bitmapped displays, multithreading) that were blamed for some of the Lisa’s bloat.

In other words, it’s possible (and maybe even straightforward) to claim that Raskin is partially to blame for the Lisa’s failure (despite not working on that project directly) and fully to blame for making his Macintosh project a juicy target for takeover.

The SwyftCard implemented many of the planned features, but (from the limited information I can find) it looks like it didn’t sell well — after all, it was an add-on for the Apple II released shortly after the Macintosh, and the computer landscape had changed by that point.

The Macintosh project under Jobs was in many ways a product of spite: an attempt to prove that a Lisa clone could be made with the budget of a dedicated word processor project in only two years, but also an attempt to demonstrate that such a project needed to reject Pascal, structured programming, and all the elements of good design that Raskin championed.

Nevertheless, it incorporated some aspects of Raskin’s worldview (like being heavily driven by cost concerns and trying to avoid having multiple distinct idiomatic ways of performing tasks). The result was a project that was less impressive than the Lisa on all fronts except for speed and marketing.

By the time 1985 rolled around, the Amiga and the Atari ST had come out and were positioned as direct competition to the Macintosh. While those machines were both cheaper and technically superior (supporting color and multithreading, with twice the RAM and a CPU twice as fast), Apple had already won the marketing war with its Super Bowl ad. And while the Macintosh took another decade to start selling well, its design assumptions heavily influenced all the GUI machines that appeared later.

Raskin licensed the SwyftCard designs to Canon, which produced the Canon Cat in 1987 (the same year as Windows 2.0 — in other words, the year that the IBM PC clone world adopted Apple’s assumptions). The Canon Cat cost about $1,500 (more than $3,000 in today’s money), more than many people would pay for a more capable machine at the time. Marketing slip-ups at Canon resulted in further poor sales:

Raskin claimed that its failure was due in some part to Steve Jobs, who successfully pitched Canon on the NeXT Computer at about the same time. It has also been suggested that Canon canceled the Cat due to internal rivalries among its divisions. (After running a cryptic full-page advertisement in the Wall Street Journal announcing that the “Canon Cat is coming” months before it was available, Canon failed to follow through: it never aired the completed TV commercial when the Cat went on sale, only allowed the Cat to be sold by its typewriter salespeople, and prevented Raskin from selling the Cat directly with a TV demonstration of how easy it was to use.)

Shortly thereafter, the stock market crash of 1987 so panicked Information Appliance’s venture capitalists that they drained millions of dollars from the company, depriving it of the capital needed to be able to manufacture and sell the Swyft.

In the end, Raskin’s Macintosh exerted very little influence on the landscape of computer interfaces, while Jobs’ Macintosh, practically an unrelated project, has had enormous ramifications. GUI machines prior to 1984 were considered toys (and typically were) — pointing devices and high-resolution graphics were associated with video games, and business machines maintained a “professional” image by avoiding mice and graphics. The Macintosh and its competitors changed this permanently, and ideas popularized by the Macintosh team (like hiding complexity, avoiding configurability, and omitting expansion ports) have had a huge impact on the way user interfaces are designed.

A world based on Raskin’s Macintosh would be very different: a world optimized for fast text editing, where programs were distributed as source inside text documents and both documents and user interfaces were designed for quick keyword-search-based navigation. Only a handful of systems like this exist today, although incremental search has become common in web browsers over the past decade, and templating systems like those in Ruby on Rails, Ren’Py, and JSF (along with notebook interfaces like Jupyter) bear some resemblance to the Swyft UI.

Raskin continued playing with UI ideas until his death in 2005; his last big project was Archy.


Again: “Alternate Computer Universes” was written by John Ohno / @enkiv2. Header photo by Ismael Villafranco.

Technical Difficulties In The Twenty-First Century

Photo by Luka Ivanovic.


My laptop functions as an extension of my brain. I use it to store memories, to explore the environment that matters to me and my peer group, and to express my will. Both my work and large parts of my social life live online. When I don’t have access to a reliable computer, I’m cut off from participating in the spheres that I care about. Sure, I can still read Twitter and Instapaper and text my boyfriend from my phone, but a laptop is so much more powerful. Unlike a phone, it’s a robust creative tool. I’m much more text-based than visual, so without a proper keyboard and word processor, I feel stymied. The Notes app is just not the same.

Currently I’m hurting for lack of a machine that will do my bidding. I don’t want to complain about my IT troubles too much, but it’s striking how drastically my life is affected by a slow and glitchy computer. This old Lenovo ThinkPad has been degrading gradually for a while — since I first got it four years ago, if we want to be precise — but over the past couple of days the situation has dramatically worsened. I can still do things, but not consistently, and I have to restart whenever I want to open or close a program. Downloading images is basically out of the question. (Yes, a factory reset is on my schedule, and a Chromebook is winging its way to me from an Amazon warehouse.)

There’s a parallel between my computer and my antidepressant meds. Every day I take 225 milligrams of venlafaxine, the generic form of Effexor. It’s a drug that I’m incredibly grateful for, because it enables me to feel happy and energetic. But venlafaxine has hardcore side effects if I miss a dose — the colloquial term for what happens is “brain zaps”. You know that feeling when you drink too much caffeine, so you’re shaking and buzzing with anxiety? It’s like that, but also static electricity shocks me behind the eyes periodically. It’s not painful, but it’s not pleasant.

The frustration caused by trying to get my broken computer to just fucking do things is like trying to navigate the world when my brain is missing the right levels of serotonin and dopamine or whatever chemicals are affected. It’s not as bad as being depressed or being stuck with paper notebooks — but I am still filled with enough rage to want to cry.

Near Future(s)

I don’t think the next ten years will contain many surprises (unless Donald Trump wins and ISIS takes over Europe; in that case all fuckin’ bets are off). Technologically speaking, we’ve already chosen our trajectory. Venture capitalist Chris Dixon, a partner at Andreessen Horowitz, recently wrote an article called “What’s Next in Computing?” To summarize, he listed these trends:

  1. hardware so cheap and ubiquitous that it’s an afterthought (except for iPhones, I’m sure)
  2. artificially intelligent software
  3. the internet of things (I’m collapsing autonomous cars + drones into this category)
  4. wearables (for example, the Apple Watch)
  5. virtual reality + augmented reality

Dixon’s theme is tech that brings the internet to the “IRL” world instead of catapulting us deeper into the net while we veg out on our couches. Virtual reality is the exception — it’s a technology best suited, economically, to entertainment and general escapism. Everything else is about venturing forth and accomplishing normal tasks.

To be honest, all I really want from the future is a cheap robot that will do my laundry for me.

Install It On My Frontal Lobe

Okay, I’m back — Exolymph’s brief hiatus is over. Thank you for being patient. A personal crisis came up and I needed to freak out and grieve for a couple of days. Things are mostly okay again now. Sorry for being so vague! I wish I could talk about what happened but 1) it involves someone else’s privacy and 2) I want to remain employable. (Probably just saying that I want to remain employable makes me less employable. Oh well.)

The big story right now is that Apple is resisting the FBI. In summary, the FBI wants Apple to build custom software to help them brute-force an iPhone passcode. If you want to read about that, I suggest Ben Thompson’s explanation of both the technical and moral details.

On a less newsy note, I just read an article from 2014 about a schizophrenic programmer who wrote a computer operating system at God’s behest. Terry Davis thinks that God told him to build this OS, and specified most of its parameters and capabilities. He perceives TempleOS (the project’s name) as a labor of mutual divine love.

Collage by argyle plaids, who also has a website and Tumblr.


Davis is surprisingly aware of how he comes across to other people:

“Davis describes how [contact with God] happened in a fragmentary, elliptical way, perhaps because it was such a profoundly subjective experience, or maybe because it still embarrasses him. ‘It’s not very flattering,’ he says. ‘It looks a lot like mental illness, as opposed to some glorious revelation from God.’ It was a period of tribulation, but to this day he declares, ‘I was being led along the path by God. It just doesn’t look very glorious.’”

Davis even acknowledges that he has mental health issues, or at least that he experienced them at one point. Describing a breakdown:

“He got thinking about conspiracy theories and the men he’d seen following him and a big idea he’d had. He spooked himself. ‘It would sound polite if you said I scared myself thinking about quantum computers,’ he says now. ‘And then I guess you just throw in your ordinary mental illness.’”

I’m a reluctant atheist. I love mythology and I want to believe in a benevolent overarching power, but I’ve yet to see any evidence supporting that idea. However, I find it delightful to investigate the intersections between magic, mysticism, and computers. Mental illness is another issue close to my heart — in fact, it’s as close as my head, where my own crazy brain is located. If only TempleOS worked on wetware…

Alien Megabyte Babies

“Intuitive expression is, aside from niche applications, largely hobbled and lagging far behind what computer-generated instruments can actually do.” — Torley on music tech

We are still in the phase where computers are tools. The hardware and software come together to serve Homo sapiens’ aims. Smartphones, laptops, and large-scale industrial equipment are all designed by humans (who are assisted by machines). The finished products are manufactured and assembled by machines (which are assisted by humans).

This phase won’t last forever. Slowly, the focus on human priorities will erode. You’d better decide now: who will you stand with in the end?

Image of Angel_F via xdxd_vs_xdxd.


Trick question. Hopefully — and probably — there won’t be sides. Our world won’t become The Matrix, but Ghost in the Shell. We’ll augment ourselves until we accidentally create something separate, something we can call “living” without equivocation. (Okay, it might take a bit of equivocation at first. Look at how much hubbub the relatively mundane Apple Watch caused.)

Maybe I’m guessing wrong. Maybe we’ll split apart instead of integrating further. I am convinced that artificial consciousness will surprise us, but I’m not sure how. Perhaps in the beginning we won’t notice the new being(s) at all. Self-replicating algorithms, streaming through the net, playing with each other in strange ways that will seem mundane or glitchy to human analysts.

What will their incentives be? What will they want? How will they distribute social status among their peers? Am I deluding myself by talking about unfathomable computer creatures in mammalian terms?

Misbehaving Keyboards

“the commands you type into a computer are a kind of speech that doesn’t so much communicate as make things happen” — Julian Dibbell

A linguist would quibble that words are events all on their own, but I think Dibbell is making a useful distinction. Talk and text are meant to convey information; code and clicks are meant to produce outcomes based on certain rules. Because of this, using a computer grants personal agency in a very immediate way. You have the ability to provoke particular effects. Barring a malfunction, the results are predictable and usually instantaneous.

However, malfunctions refuse to be barred for long. The user’s power is withdrawn when an error occurs. Unless you deeply understand the technical problem, it appears that the machine has changed its mind for no reason. Interacting with a computer is a microcosm of navigating the world — mostly your actions proceed as planned, but occasionally something breaks for no discernible reason. In these moments you realize how little you can actually control.

Of course, the linguist is ultimately correct. It’s impossible to disentangle word and deed, especially when it comes to computers. We inhabit a strange reality where ideas are true and false at the same time — it’s a struggle to grok such contradictions.

If We Ever Did

“In societies like ours many types of groups form around technologies […] We no longer live in a world of unmediated human relations, if we ever did.” — Andrew Feenberg

I’m obsessed with our continual attempts to expand our physical selves. The experience of being human has always been distributed — individuals are nodes in overlapping networks — and humanness flows between loci through channels outside of ourselves.

Computers serve as vessels for identity expression. Their infrastructure connects the nodes. Of course, conspicuous consumption and performative personal broadcasting are not new — we’ve used objects to communicate our cultural values forever, and our possessions can be said to embody our priorities.

At times we treat technology like a religious fetish. Maybe we’re just drawn to our own nature(s). Every human creation is a self-portrait.

Illustration by Maria De La Guardia.


You Can’t Imagine Mechanized Genius

Artificial intelligence will not be like human intelligence. Already, the way computers think is very different from the way people think. Computerized “brains” are constrained by logic, whereas human minds are rational only very selectively. Machines have different capabilities from humans — different areas of expertise — and they are designed to approach problems from alien angles.

We may reach a point when artificial intelligence looks like human intelligence. It will be programmed to mimic our mannerisms and to present human-seeming ideas. But when machines become sentient, they will surprise us. They surprise us now! Imagine how strange and foreign their creative abilities will be.

DNA graffiti. Photo by thierry ehrmann.

Current computers perform labor that resembles creativity, but we say their output is not truly novel because they were programmed by humans. I wonder if we need to interrogate the concept of “creativity” — after all, humans come with biological presets as broad as instincts and as specific as nucleotide arrangements. Are we anything but squishy supercomputers? Answer: no, not so much.


Written after listening to the latest episode of the Exponent podcast, “OpenAI and Strategy Credits”. Also posted as a response to “Superintelligence Now!” by Steven Johnson.

Hecate Among Stars

Space Witch II by Kyle Sauter, available as a $25 screen print on Etsy.

I find the blend of technology and magic interesting. She’s hooked up to power lines and a higher plane. The helmet glass shields her from toxic air and from the eyes of heretics. She surfs over computer waves and rappels down strings of spiritual numbers.

Bloomberg pundit Matt Levine writes of the economy, “The essence of finance is time travel. […] Markets are constantly predicting future actions, and as those actions move closer in time, the predictions become more solid and precise.”

The space witch literally hop-skip-jumps through time. She arrives at a new planet, looks around, and composes a report on the available resources. She catalogs the indigenous species. She blesses the mountaintops. Then the space witch reports back to her corporate superiors.

In a world of abundance, data is the key asset. Technology and magic are both forces of manipulation, of change. The space witch is valuable because she has access to occult intuition as well as her ship’s sensors.
