
Author: Sonya Mann

Nootropics, Outrage, and Neuroticism

Two things:

1) I published an essay called “Practical Nootropics; Political Brainhacking” on my personal website. I posted it there instead of sending it to you, dear readers, because the post was sponsored and I didn’t realize until after arranging everything that Exolymph would be a more appropriate venue. (My very first sponsored post! Are you proud of me, or shaking your fist because I’m a sellout?)

If you’re interested in applied transhumanism, the essay is up your alley. Don’t worry, it’s not a long ad disguised as a blog post — I thank the sponsor at the top and bottom, that’s all.

2) I was a little bit manic from too much caffeine last night so I wrote a long tweetstorm about the pervasive bad faith that poisons so much of online discourse. It ties together the prisoner’s dilemma, hyper-scalable media distribution (AKA cheap virality), and the incentives of tribalism.

And now, a prose poem about the interminability of sentience. Because why not, I can be angsty and avant-garde too. Tumblr-era Sonya would be so proud.

Glass Cacophony

Artwork via (by?) the Twitter bot @youtubeartifact.


“The algorithm has been kind, has granted me a strong body and a violent disposition.” — @ctrlcreep

You have been yourself the entire time that you’ve been alive. Layer on layer on layer, like stacked panes of glass. Each scribbled all over with black marker.

The stack becomes murkier as it rises, when viewed from the top. It’s the same stack all the way up and down.

Despite the persistence of yourself, the entire-time-ness of it, you struggle to define your own substance. Observers list the ways of knowing what you are. You must not despise yourself. It is unseemly.

You are bothered by wanting an identity, by wanting to reduce yourself. The nebulousness is an itch. The need for a coherent mind is an itch. Counting breaths to fall asleep is an itch.

Each moment more you-ness accretes. Black marker skids on the glass; fills up the clear space. Pile on a fresh one. The ink dries gummy. It peels instead of smearing.

The very best feeling, you think, is to realize that you’ve driven home on autopilot. You didn’t need to be present. A respite from the consciousness of consciousness of consciousness that fills the mind, that is the mind, that spills into your hands and none of the onlookers can help you hold it.

Still you are yourself and still the complexity assaults you.

Alternate Computer Universes

The following is a guest dispatch written by John Ohno, AKA @enkiv2. His musings on the world that might have been were lightly edited for this context.


For me, the idea of cyberpunk is tied tightly to the assumptions and aesthetics of the early ’80s. And, unlike today, the early ’80s saw the peak of a Cambrian explosion in home computer diversity. Only later were the pathways culled: in the mid-to-late ’80s, as GUI machines like the Macintosh, Amiga, and Atari ST pushed out the 8-bit micros, and in the early ’90s, as poor marketing and business decisions killed the Amiga and left Atari a shell of its former self.

When Neuromancer was published, in 1984, comparing home computers based on merit was very hard: all of them were dysfunctional in strange ways (the Apple II line began selling in 1977, but it wasn’t until 1983 that the first Apple II-compatible machine capable of typing lowercase letters was released; the Sinclair machines were so strapped for RAM that they would delete portions of numbers that were too big as the user typed them). The lineages that survived were arbitrary. Minor changes to history would produce completely distinct computer universes, alien to our eyes.

In this essay, I’d like to tell you about a specific fork in computer history — one that, if handled differently, would have replaced an iconic and influential machine with one radically different. I’d like to talk about the Macintosh project before Steve Jobs.

In 1983, Apple released the Lisa. It was a flop. One of the first commercial machines with a PARC-style GUI and a mouse, it was too slow to use. At a price point of just under $10,000 (about $24,000 today) and all but requiring a hard disk add-on that cost about as much as the computer itself, very few people were willing to pay as much for a flashy but unusable toy as they would for a car. It sold only 100,000 units.

The Lisa was Jobs’ baby (figuratively and literally — it was named after his daughter, but it also was heavily under his control and based on extrapolations of his limited understanding of a demo of the Alto at PARC); however, by the time it was released, he had already jumped ship on that project and taken over the Macintosh project. In 1982, realizing that the Lisa would flop, Jobs had distanced himself from it and taken over Jef Raskin’s Macintosh project, turning it into a budget version of the Lisa (with most of the interesting features removed, and with all development moved from Pascal to assembler in the name of efficiency).

This part of the story is generally pretty well known. It’s part of the Jobs myth: a setback that forces him to reconsider what’s really important and leads to the creation of the Macintosh. What doesn’t get factored into the myth is that Raskin’s original plan for the Macintosh was both more revolutionary and more practical than the Macintosh was.

The Macintosh began as a variant on the dedicated word processor, with a few interesting twists. At the time, it was under the direction of Jef Raskin, previously of SAIL and PARC.

The Macintosh, as designed at the time, would use a light pen (rather than a mouse) for selection and manipulation of buttons (in other words, you’d use it like a stylus-based touch screen device), but the primary means of navigation would be something called “LEAP Keys,” wherein a modifier key would switch the behavior of typing from insertion to search. Raskin has claimed that this navigation scheme is up to three times faster than using a mouse, and considering the limits of scrolling speed on the Lisa and similar problems with all bitmapped display devices coming out of Apple at the time, this seems like an underestimate: for long documents, a quick text search would be much faster.
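The LEAP mechanism described above can be modeled in a few lines: while a modifier key is held, keystrokes become an incremental forward search instead of insertions. This is a toy sketch with invented names, meant only to make the mode-switch concrete; the real Swyft/Cat firmware of course worked nothing like this.

```python
class LeapBuffer:
    """Toy model of Raskin-style LEAP navigation. All names here are
    illustrative, not taken from any actual Swyft or Canon Cat source."""

    def __init__(self, text=""):
        self.text = text
        self.cursor = len(text)
        self.leaping = False
        self.pattern = ""

    def leap_down(self):
        # User presses and holds the LEAP key: typing now searches.
        self.leaping = True
        self.pattern = ""

    def leap_up(self):
        # User releases the LEAP key: typing inserts again.
        self.leaping = False

    def key(self, ch):
        if self.leaping:
            # Each keystroke extends an incremental forward search.
            self.pattern += ch
            hit = self.text.find(self.pattern, self.cursor)
            if hit == -1:                     # wrap around to the top
                hit = self.text.find(self.pattern)
            if hit != -1:
                self.cursor = hit
        else:
            # Ordinary typing: insert the character at the cursor.
            self.text = self.text[:self.cursor] + ch + self.text[self.cursor:]
            self.cursor += 1
```

Holding the LEAP key and typing "br" in the buffer `"the quick brown fox"` jumps the cursor straight to "brown"; releasing it and typing inserts at that spot. For long documents this is why a quick search beats scrolling a slow bitmapped display.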

While in normal operation the unit would act like a dedicated word processor, it would in fact be a general-purpose, programmable computer. The normal way to program it would be to write code directly into your text document and highlight it — upon which the language would be identified, the code compiled, and the highlighted region turned into a clickable button that executes the code when clicked. In other words, it was a system optimized for ‘literate programming’.
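The "highlighted code becomes a clickable button" idea can be sketched loosely: pull the selected span out of the document, compile it, and wrap it in a callable. Everything here (the plain-string document, `make_button`, the `result` convention) is invented for illustration; it only gestures at Raskin's design, which also had to detect the language.

```python
def make_button(document, start, end):
    """Compile the highlighted span of a text document into a
    zero-argument callable -- the 'clickable button' of the design."""
    source = document[start:end]
    code = compile(source, "<document>", "exec")

    def button():
        scope = {}
        exec(code, scope)      # run the embedded program when 'clicked'
        return scope.get("result")

    return button

# Code living inside ordinary prose, as on the Swyft:
doc = "Quarterly notes...\nresult = sum(range(10))\n...more prose."
start = doc.index("result")
end = start + len("result = sum(range(10))")
click = make_button(doc, start, end)
print(click())   # -> 45
```

The point is that the document is the program: there is no separate IDE or file format, just text plus a gesture that promotes part of it to executable.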

The proposal at the time of the project’s takeover was a little more ambitious, with support for a dial-up service for access to databases (something more like Minitel or Prodigy than today’s web) and an APL-derived language; when control over the project was taken away from Raskin, however, the core ideas mentioned above migrated to heirs of the project (an Apple II add-on called the SwyftCard and, later, a dedicated word processor called the Canon Cat).

Ultimately, a few things killed the original Macintosh project. First, Raskin and Jobs were both abrasive people with big egos, and Raskin had circumvented Jobs (rather than convincing him) in order to get the Macintosh project approved, which made him and his project an easy target later (“The Mac and Me,” pp. 19–21).

Second, Raskin loudly criticized the Lisa project for exactly the problems it would later turn out to have (specifically, that by disregarding cost and speed it would become a slow, expensive machine), but his criticism did nothing to make the Lisa faster, and he kept championing other technologies (the widespread use of Pascal in system software, high-res bitmapped displays, multithreading) that were blamed for some of the Lisa’s bloat.

In other words, it’s possible (and maybe even straightforward) to claim that Raskin is partially to blame for the Lisa’s failure (despite not working on that project directly) and fully to blame for making his Macintosh project a juicy target for takeover.

The SwyftCard implemented many of the planned features, but (from the limited information I can find) it looks like it didn’t sell well — after all, it was an add-on for the Apple II released shortly before the Macintosh, and the computer landscape by that point had changed.

The Macintosh project under Jobs was in many ways a product of spite: an attempt to prove that a Lisa clone could be made with the budget of a dedicated word processor project in only two years, but also an attempt to demonstrate that such a project needed to reject Pascal, structured programming, and all the elements of good design that Raskin championed.

Nevertheless, it incorporated some aspects of Raskin’s worldview (like being heavily driven by cost concerns and trying to avoid having multiple distinct idiomatic ways of performing tasks). The result was a project that was less impressive than the Lisa on all fronts except for speed and marketing.

By the time 1985 rolled around, the Amiga and the Atari ST had come out and were positioned as direct competition to the Macintosh. While these machines were both cheaper and technically superior (supporting color and multithreading, with twice the RAM and a CPU double the speed), Apple had already won the marketing war with its Super Bowl ad. And while the Macintosh took another decade to start selling well, its design assumptions heavily influenced all GUI machines that appeared later.

Raskin licensed the SwyftCard designs to Canon, which produced the Canon Cat in 1987 (the same year as Windows 2.0 — in other words, the period when the IBM PC clone world adopted Apple’s assumptions). The Canon Cat cost about $1,500 (more than $3,000 in today’s money), more than many people would pay for a more capable machine at the time. Marketing slip-ups at Canon resulted in further poor sales:

Raskin claimed that its failure was due in some part to Steve Jobs, who successfully pitched Canon on the NeXT Computer at about the same time. It has also been suggested that Canon canceled the Cat due to internal rivalries among its divisions. (After running a cryptic full-page advertisement in the Wall Street Journal announcing that the “Canon Cat is coming” months before it was available, Canon failed to follow through: it never aired the completed TV commercial when the Cat went on sale, only allowed the Cat to be sold by its typewriter salespeople, and prevented Raskin from selling the Cat directly with a TV demonstration of how easy it was to use.)

Shortly thereafter, the stock market crash of 1987 so panicked Information Appliance’s venture capitalists that they drained millions of dollars from the company, depriving it of the capital needed to be able to manufacture and sell the Swyft.

In the end, Raskin’s Macintosh exerted very little influence on the landscape of computer interfaces, while Jobs’ Macintosh, a nearly unrelated project, has had enormous ramifications. GUI machines prior to 1984 were considered toys (and typically were) — pointing devices and high-resolution graphics were associated with video games, and business machines maintained a “professional” image by avoiding mice and graphics. The Macintosh and its competitors changed this permanently, and ideas popularized by the Macintosh team (like hiding complexity, avoiding configurability, and omitting expansion ports) have had a huge impact on the way user interfaces are designed.

A world based on Raskin’s Macintosh would be very different: a world optimized for fast text editing, where programs were distributed as source inside text documents and both documents and user interfaces were designed for quick keyword-search-based navigation. Only a handful of systems like this exist today, although incremental search has become common in web browsers in the past decade and template languages like Ruby on Rails, Ren’Py, and JSF (along with notebook interfaces like Jupyter) have some resemblance to the Swyft UI.

Raskin continued playing with UI ideas until his death in 2005; his last big project was Archy.


Again: “Alternate Computer Universes” was written by John Ohno / @enkiv2. Header photo by Ismael Villafranco.

You Ain’t Seen the Last of Me Yet

The responses to my recent ennui-fueled dispatch were very encouraging and helpful. Ironically, having a bunch of you tell me that it’s okay to take a break made me feel more energetic and enthusiastic. I’m gonna play it by ear. Thank you for bearing with me.

Another thing that surprised me is how many people said they’d still be interested if I took a more link-focused approach. I guess y’all don’t subscribe to 2342424314134 other newsletters like I do?

If you like links, I wanna take the opportunity to plug Glitchet, which is curated by my friend Way Spurr-Chen. You might also like Exponential View by Azeem Azhar or the Meshed Society newsletter. Part of the reason why I’ve always shied away from focusing on links is that others are out there doing a killer job already.

Lastly, here’s a relevant article that I wrote for work: Kik, the chat app, is launching its own cryptocurrency.

I Don’t Know How, or Whether, to Keep Going

Perhaps you’ve noticed the radio silence. I apologize for not sending anything last week.

I keep hoping that my verve will come back. So far it hasn’t. I thought changing Exolymph’s editorial direction would do the trick! But no dice. Whenever I sit down to write this newsletter, I just feel depleted. I feel like I have nothing to say.

My best guess is that I’ve been using up all my creative energy at work. Having a full-time journalism job means writing way more frequently than I ever did when I was freelancing, plus other responsibilities. So I think all my energy is going to that.

I’m not sure about how best to proceed. I feel like the most practical thing is to go on hiatus. I already paused my Patreon — those of you who support it won’t get charged when July rolls around. I set a calendar reminder to re-pause it (because of course I have to do that) before August starts.

I could turn Exolymph into a “here’s what I wrote at work plus some interesting links” newsletter, but that feels wrong. That wasn’t what I set out to do, nor is it what you signed up for.

So… hiatus? What do you think? How would you prefer for me to cope with my dearth of inspiration and drive?

Always the Object

There’s an intriguing subreddit called /r/trashyboners. (Today you should assume that every link is NSFW.) The tagline is “Maybe a true hot mess?” and the featured content is basically what you’d expect it to be: photos and videos of attractive women who are considered trashy. Think of the stereotypes evoked by the terms “trailer trash” and “white trash” (although women of color do show up occasionally) with a splash of “party girl.”

The sub is classist by default — “trashy” connotes undignified poverty — and often exploitative. You see girls who are passed-out drunk or out of their minds on drugs. You might see a police officer displaying a woman’s genitals under unclear circumstances. (The only reportage on that incident comes from the untrustworthy Daily Mail.) Power imbalances abound.

Interestingly, some readers will defend the women they ogle. In early May there was a topic titled “nasty whores drinking beer off each other.” The commenters complained about this derogatory phrasing. One person wrote, “We know nothing of their background, can’t we just enjoy without the shit talk?”

Not long ago I posted an Instagram snapshot of Bella Hadid and the readers downvoted me for saying that her outfit was trashy. They disapproved because I was perceived as puritanical. Overall, more comments than you might expect are about whether the woman in a given photo actually is trashy. (Here’s an example from a recent post.) It’s an inherently subjective judgment, of course.

By contrast, here’s another thread where the woman in the photo faces constant derision, this time with no protest from the readers. I’m not sure what determines which reaction will dominate. It may be that young, conventionally attractive women are more likely to be championed, but I’m not positive about that being a meaningful trend. There is also a “walking the fine line” element that’s very difficult to articulate. When is skimpy clothing just skimpy, and when is it trashy? It’s the kind of nuance that you can only intuit, not teach.

Either way, /r/trashyboners is a good companion to the Slate Star Codex essay about how class is just as cultural as it is financial. Consider:

[S]uppose a lady comes in with really over-permed dyed curly hair wearing several rings, bracelets, and necklaces. Her name is Sherri and she calls you “darling”; she’s also carrying her lunch, which is KFC plus a Big Gulp. Without knowing anything else about her, you can peg her as working class. Maybe she won the lottery ten years ago and is now the richest person in your state. It doesn’t matter. She’s still working class.

Or suppose a thin 25-year-old man comes in wearing glasses, a small close-cropped beard, and a Led Zeppelin t-shirt. His name is Alex and he apologizes for being three minutes late. This guy is probably middle-to-upper-middle-class and college educated, maybe not a great college but still college-educated. And maybe he’s fallen on hard times and doesn’t have a dollar to his name. It still doesn’t matter. He’s still middle-to-upper-middle class.

/r/trashyboners is dedicated to both shaming and celebrating the slutty versions of Sherri.

I’m some flavor of feminist, so you might expect me to be opposed to this subreddit. I have mixed feelings about it. The content is aesthetically fascinating — it’s an inversion of crazy girl chic. And I find the community puzzling, as you may have gathered. I’m not above participating. My opinion isn’t settled yet, I suppose. What do you think?


Header photo by MarkScottAustinTX.

Snoop Unto Them As They Snoop Unto Us

Abruptly returning to a previous topic, here’s a guest dispatch from Famicoman (AKA Mike Dank) on surveillance and privacy. Back to the new focus soon.


The letter sat innocently in a pile of mail on the kitchen table. A boring envelope, nondescript at a glance, that would become something of a Schrödinger’s cat before the inevitable unsealing. The front of it bore the name of the sender in bold, black letters — “U.S. Department of Justice — Federal Bureau of Investigation.” This probably isn’t something that most people would ever want to find in their mailbox.

For me, the FBI still conjures up imagery straight out of movies, like the bumbling group in 1995’s Hackers, wrongfully pursuing Dade Murphy and his ragtag team of techno-misfits instead of the more sinister Plague. While this reference is dated, I still feel like there is a certain stigma placed upon the FBI, especially by the technophiles who understand there is more to computing than web browsers and document editing. As laws surrounding computers become more sophisticated, we can see them turn draconian. Pioneers, visionaries, and otherwise independent thinkers can be reduced to little more than a prisoner number.

Weeks earlier, I had submitted a Privacy Act inquiry through the FBI’s Freedom of Information Act service. For years, the FBI and other three-letter agencies have allowed people to openly request information on a myriad of subjects. I was never particularly curious about the outcome of a specific court case or what information The New York Times has requested for articles; my interests were a bit more selfish.

Using the FBI’s eFOIA portal through their website, I filled out a few fields and requested my own FBI file. Creating a FOIA request is greatly simplified these days, and you can even use free services, such as getmyfbifile.com, to generate forms that can be sent to different agencies. I only opted to pursue the FBI at this time, but could always query other agencies in the future.

The whole online eFOIA process was painless, taking maybe two minutes to complete, but I had hesitations as my cursor hovered over the final “Submit” button. Whether or not I actually went through with this, whatever information the FBI already had on me wouldn’t change. They either have something or they don’t, and I was ready to find out. With this rationalization, I decided to submit — in more ways than one.

The following days went by slowly and my mind seemed to race. I had read anecdotes from people who had requested their FBI file, and knew the results could leave me with more questions than answers. I read one account of someone receiving a document with many redactions, large swathes of blacked-out text, giving a minute-by-minute report of his activities with a collegiate political group. A few more accounts mentioned documents of fully-redacted text, pages upon pages of black lines and nothing else.

What was I in store for? It truly astonishes me that a requester would get back anything at all, even a simple acknowledgement that some record exists. In today’s society where almost everyone has a concern about their privacy, or at least an acknowledgement that they are likely being monitored in some way, the fact that I could send a basic request for information about myself seems like a nonsensical loophole in our current cyberpolitical climate. You would never see this bureaucratic process highlighted in the latest technothriller.

About two weeks after my initial request, there I was, staring at the letter sticking out from the mail stack on the kitchen table. All at once, it filled me with both gloom and solace. This was it, I was going to see what it spelled out, for better or worse. Until I opened it, the contents would remain both good and bad news. After slicing the envelope, I unfolded the two crisp pieces of paper inside, complete with FBI letterhead and a signature from the Record/Information Dissemination Section Chief. As I ingested the first paragraph, I found the line that I hoped I would, “We were unable to identify main records responsive to the FOIA.”

Relief washed over me, and any images I had of suited men arriving in black vans to take me away subsided (back down to the normal levels of paranoia, at least). It was the best information I could have received, but not at all what I had expected. For over ten years, I have been involved in several offbeat Internet subcultures and groups, and more than a few sound like reason enough to land me on someone’s radar. I was involved with a popular Internet-based hacking video show, held a role in a physical hacking group/meeting, hosted a Tor relay, experimented openly with alternative, secure mesh networks, sysop’d a BitTorrent tracker, and did a few other nefarious things here and there.

I always tried to stay on the legal side of things, but that doesn’t mean that I don’t dabble with technologies that could be used for less than savory purposes. In some cases, just figuring out how something can be done was more rewarding than the thought of using it to commit an act or an exploit. Normal people (like friends and coworkers) might call me “suspicious” or tell me I was “likely on a list,” but I didn’t seem to be from what I could gather from the response in front of me.

When I turned back to read the second paragraph, I eyed an interesting passage, “By standard FBI practice and pursuant to FOIA exemption… and Privacy Act exemption… this response neither confirms or denies the existence of your subject’s name on any watch lists.” So maybe I was right to be worried. Maybe I am being watched. I would have no way of knowing. This “neither confirms or denies” response is called a Glomar, which means my information has the potential to be withheld as a matter of national security, or over privacy concerns.

Maybe they do have information on me after all. Even if I received a flat confirmation that there is nothing on me, would I believe it? What is to prevent a government organization from lying to me for “my own good”? How can I be expected to show any semblance of trust at face value? Now that all is said and done, I don’t know much more than I did when I started, and have little to show for the whole exchange besides an official request number and a few pieces of paper with boilerplate, cover-your-ass language.

If we look back at someone like Kevin Mitnick, the cunning social engineer who received a fateful knock on his hotel door right before being arrested in early 1995, we see a prime example of law enforcement pursuing someone not only for the actions they took, but for the skills and knowledge they possessed. Echoing Operation Sundevil only five years prior, government agencies wanted to make examples out of their targets, using scare tactics to keep others in line.

I can’t help but think of “The Hacker Manifesto,” written by The Mentor (an alias used by Loyd Blankenship) in 1986. “We explore… and you call us criminals. We seek knowledge… and you call us criminals,” Blankenship writes shortly after being arrested himself. Even if I received a page of blacked-out text in the mail, would I be scared and change my habits? What if I awoke to a hammering on my door in the middle of the night? I still don’t know what to make of my response, but maybe I’ll submit another request again next year.

Knock, knock.


Header artwork by Matt Brown.

The Copy-Paste Guillotine

For those of you who aren’t familiar, the Death of the Author is a theory that can be aggressively summarized like this: A creator’s interpretation of their own work is not definitive, but rather one among many interpretations generated by the people who experience the creation.

Therefore, a given piece of art doesn’t “mean” anything in a be-all-end-all sense. The meaning resides with the viewer. A sufficiently compelling interpretation may end up dominating public discourse, but whether it’s “true” is beside the point. “True” is not always a relevant or adaptive quality for an idea. Information spreads based on other factors.

"Pepe the Frog was already a perfect demonstration of The Death of the Author, but now it's even funnier"

Tweet by @drethelin.

The average person who recognizes Pepe’s woeful face has no idea that he was first drawn by Matt Furie for a comic called Boy’s Club. The internet is practically an author-killing machine, since it’s so easy for content to be pulled out of its original context. That’s how memes work — both the image-plus-caption kind and Dawkins’ original formulation.

Now Furie has symbolically slaughtered Pepe, and I really mean symbolically, since the rest of the Pepe-using internet will ignore this. (Except… it’s possible that Blepe will become a robust replacement? I dunno, y’all, 4chan has unpredictable whims.) To his credit, Furie does seem to understand how it works:

Before he got wrapped up in politics, Pepe was an inside-joke and a symbol for feeling sad or feeling good and many things in between. I understand that it’s out of my control, but in the end, Pepe is whatever you say he is, and I, the creator, say that Pepe is love.

I think the phenomenon actually goes further than the Death of the Author. First the author dies and then their body is resurrected and defiled, or spliced into a Frankenstein-style army. (Sorry, I’m butchering the metaphor, and yes, pun intended.)

Alt-right jokesters aren’t passing around the original “feels good man” reaction image anymore. Well, they are, but it’s not the dominant use-case for Pepe. He’s been remixed and morphed beyond what Furie created — internet denizens took the idea and ran with it, like they did with Slenderman. (As far as I know, Pepe hasn’t motivated a murder yet.)

I mentioned in my previous missive that I want to investigate how information flows up and down the cultural stack, from subcultures to the mainstream and back again. The process that Pepe underwent — is undergoing — constitutes one pattern. I suspect that authors who make meme-able content will lose bets on their own survival.


Header artwork by Matt Furie for The Nib.

A Shift in the Wind

I think that Exolymph is ready to change. In retrospect, I’ve been getting bored with my “dystopia is real and we’re living it” thesis since writing “The Cyberpunk Sensibility” last October. (Luckily I didn’t call the project A Cyberpunk Newsletter, so the inscrutable name will stay. Besides, it’s more like I want to zoom in on a particular niche topic, not ditch everything.)

“Cyberpunk is now” was an exciting revelation a year ago — at least to me, although I certainly didn’t come up with the idea. Now it feels banal. The mainstream press is covering cyberpunk themes more and more, and other blogs are doing my schtick better than me. I talked around this when I made a list of cyberpunk content sources back in February.

Most of the publications that I mentioned then don’t delve into the sociopolitics of cyberpunk, but anecdotally the topic is more prevalent than it used to be. Today someone posted on Hacker News, “Would you be interested in a ‘cyberpunk’ inspired news site?” In the comments people pointed out that Wired covers a lot of this territory, as do fringe outlets like those I listed months ago, and N O D E.

So anyway. Like I said, the dissatisfaction has been simmering in my head for months. But reading David Auerbach’s latest essay on the Trump regime is what flipped the switch and made me realize that I need to change Exolymph’s editorial mandate. (No, I’m not going to join #TheResistance and write about Trump all the time — let me explain before you roll your eyes.)

In his essay, Auerbach laid out the relationship(s) between the American overculture (his preferred term) and the country’s surging undercultures. “If you went on 4chan in 2016, you were part of the underculture. If you read about 4chan in the news and believed what you read, you were part of the overculture,” Auerbach quipped.

As it happens, I tend to bounce between these realms more than the average person. Subcultures have long fascinated me, since I’m an incorrigible drama voyeur (like any good journalist). That’s what I want to concentrate on now: How subcultures relate to the mainstream in the twenty-first century.

The internet has transformed the way that social information (memes, if you will) travels up and down between subculture and mainstream. Traditionally, the elites of the mainstream directed the grand narrative. The geeks, MOPs, and sociopaths of subcultures provided the components that were used to compile that grand narrative. But now the elite gatekeepers have lost so much of their power — not all of it, but enough for Auerbach’s underculture to shake things up.

These are the questions that I want to explore:

  • Who is able to travel up and down the cultural stack?
  • What do they bring with them?
  • Do the messages that they carry change along the way?
  • How much do they change?
  • Is it on purpose?
  • When are travelers able to make the trip safely, and when are they hijacked?
  • How do the different levels govern themselves?
  • How do they govern each other?
  • Which factions are able to go vertical, encompassing cross-sections of multiple strata?

On a concrete level, the newsletter probably won’t feel very different. For example, here’s an issue that I would have covered before that will be even more relevant given the new focus.

And now, an abrupt ending! I have an early flight tomorrow and honestly that’s all I have to say.


Header artwork by Albert Ramon Puig.

Bitcoin Buzz: How Does It Actually Affect the Price?

The following is an article about cryptocurrencies that I wrote back in February, intended for Mattermark. Right around the same time, Inc. hired me and my editor Alex Wilhelm left Mattermark, so the story got swallowed in the upheaval. I think it’s a decent fit for Exolymph, although the tone is much more impersonal and newsy than my usual dispatches. Anyway, I hope you enjoy this or find it thought-provoking.


Let’s say there’s a cryptocurrency called ExampleCoin, abbreviated as EXC. What would you expect to happen when stories about EXC are published in CoinDesk and the mainstream financial press? You might say it depends on the tenor of the stories — are they positive or negative? That should determine how the publicity affects EXC’s price. Or you might say that raising awareness of EXC will be good for the price regardless, because some people who didn’t know it existed will find out, or some people who weren’t paying attention will start.

The influence that media attention has on real cryptocurrencies is less straightforward, regardless of which EXC hypothesis you find most convincing, according to a recent study of five cryptocurrencies. Authors Jean-Philippe Vergne and Sha Wang are associated with the Ivey Business School and economics department at Ontario’s Western University, respectively. Their research was supported by the Scotiabank Digital Banking Lab. Vergne and Wang suggest that media hype may depress the price of Bitcoin and the four other cryptocurrencies they examined.

The researchers explain, “While it has often been assumed that greater visibility in the public sphere, including in the media, would create a buzz affecting cryptocurrency prices positively, our models do not support this idea. To the contrary, we find that a one [standard deviation] increase in public interest […] corresponds to a 10% decrease in returns” (the term “public interest” is quantified in the study). This finding emerged when the researchers controlled for other variables, namely supply growth and liquidity, in an attempt to isolate the effect of media attention. Furthermore, Vergne and Wang write, “[W]e do not find any evidence that bad press affects price.”
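If the “one standard deviation” framing feels abstract, here’s a back-of-the-envelope sketch of what a standardized coefficient means in practice. The buzz numbers below are made up for illustration — they are not the study’s data, and this is not the study’s actual model:

```python
# Toy illustration of a standardized regressor: hypothetical weekly
# media-mention counts for a cryptocurrency.
buzz = [40, 55, 70, 45, 90, 60]

mean = sum(buzz) / len(buzz)
sd = (sum((x - mean) ** 2 for x in buzz) / len(buzz)) ** 0.5

# The paper's reported effect: -10% weekly returns per +1 SD of public interest.
coef = -0.10

# So a jump in buzz equal to one standard deviation implies:
delta_returns = coef * 1.0
print(f"mean buzz = {mean:.1f}, sd = {sd:.1f}")
print(f"+1 SD of public interest -> {delta_returns:.0%} change in returns")
```

The point of standardizing is that the coefficient is expressed in units of “typical variation” rather than raw mention counts, which is what lets the authors compare the effect of buzz against the effects of supply growth and liquidity.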

By contrast, Vergne and Wang found that ongoing technological development positively correlates with cryptocurrency returns. The authors hypothesize that greater security, new features, and evidence of a robust technical community that will continue to deliver both are what actually increase the expected practical value of a given cryptocurrency — and thus drive up its price. Vergne and Wang summarize thus:

[T]he innovation potential embedded in technological upgrades is the most important factor associated (positively) with cryptocurrency returns. By contrast, we find that, after controlling for a variety of factors, such as supply growth and liquidity, the buzz surrounding cryptocurrencies is negatively associated with weekly returns.

In a phone interview, Jean-Philippe Vergne pointed out that it doesn’t make sense to expect a cryptocurrency to be valued in the same way as a fiat currency or a commodity like gold. What the US dollar does, in a concrete sense, hasn’t changed in a very long time, and there’s no reason to expect the dollar to develop more “features,” so to speak. Similarly, gold is gold — we can figure out new ways to use it as a material, but the substance itself remains the same. Not so with cryptocurrencies, the structural capabilities of which are always being extended by dedicated development teams.

Vergne acknowledged that this paradigm would lead us to predict that newer cryptocurrencies, with innovative technical approaches and new features, will eventually outpace Bitcoin. He pointed to Ethereum’s trajectory as an example. “People were saying, ‘Okay, in a few months the price of Ethereum will be higher than the price of Bitcoin,’ in terms of the total market cap. A lot of people started to believe that Bitcoin was dead and Ethereum was gonna be the new Bitcoin. Because of its more advanced technology team, it had more potential for future improvement.”

But then a high-profile project called The DAO got hacked, exposing Ethereum’s fundamental technical weaknesses, and the dream came crashing down. (It took a year for Ethereum to reclaim the highs it climbed to in early 2016, which it has now exceeded.) “The code underlying Ethereum was so complex that it had many more flaws than what people imagined, and it was not ready yet for large-scale implementation,” Vergne explained.

Presumably a new cryptocurrency that can excite investors and prove out its potential will not be subject to the same boom-bust oscillation, although Bitcoin’s first-mover advantage is formidable. Bitcoin has the largest number of miners and developers, providing improved cybersecurity and greater liquidity. Its name recognition also far and away outstrips that of its competitors. Up-and-coming cryptocurrencies will be hard-pressed to battle that reputation.

Tony Arcieri, a software engineer at the blockchain network company Chain, discussed the study via Twitter DM. He hopes that Bitcoin and blockchain buzzwords “are past the peak of the media hype cycle.” If so, “true technical merit should hopefully start dominating the reasoning and conversation.” Arcieri emphasized that Bitcoin’s stability, both technically and as a community, will be key to its long-term success, alluding to recent contention over a large technical update.

Joon Ian Wong, a reporter for Quartz who formerly worked at CoinDesk, was a little more skeptical of Vergne and Wang’s conclusions. “I think it is accurate to say technical developments increase the value of a crypto in the long run — but its price is still driven by speculators, and media buzz plays a big part in that,” he said in an email.

“It’s analogous to the fundamentals of say a publicly traded company. Ultimately if a company has for instance a strong balance sheet, good cash flows, and strict cost controls of course it’s worth more in the long run. But its stock price is still determined by the vagaries of market rumours, trends among hedge fund managers, and the news cycle.”

In their paper, Vergne and Wang propose that the perception that publicity encourages speculators may actually be what drives reduced returns, writing, “[A] sudden increase in the ‘buzz’ surrounding a cryptocurrency could be interpreted as a signal of increasing volatility. If market participants are risk-averse, given the same expected mean returns, they would be less willing to hold the cryptocurrency if future volatility increases, which would drive prices down and affect returns negatively.”

AngelList partner Parker Thompson remarked on the state of various non-Bitcoin blockchain projects, such as Ethereum and Zcash, “These use cases are still very speculative, and these projects don’t have the maturity of Bitcoin, but my belief is that the market cap of BTC is small enough that it could be wiped out in six months by a true consumer-facing killer app built on top of one of the blockchains I mentioned, or one that does not yet exist.”

Gwern Branwen, an eclectic researcher who has studied Bitcoin in the past, was unimpressed by Vergne and Wang’s study. Branwen responded to a request for comment via a Reddit comment:

[T]o sum up my problems with this analysis, the big ones are that it uses an unrepresentative and redundant set of cryptocurrencies, over a short and unrepresentative time period, to investigate a model which ignores all feedbacks and interactions between variables and returns […] to make causal claims which are not and cannot be supported by the model and data, in support of an interpretation […] which lack[s] any face validity[.] Maybe buzz and hype and the media matter a lot less than most people think to Bitcoin’s growth. But this paper doesn’t affect my beliefs on the matter one bit.

Regardless of whether you agree with how Vergne and Wang have manipulated and interpreted the data, it’s important to remember that Bitcoin and its ilk are in fact technologies. Cryptocurrencies resemble standard money — “currency” is right there in the name! — but there’s a lot going on in the code itself, and the community developing that code, that influences how the market will behave.


Header photo by BTC Keychain.

© 2017 Exolymph. All rights reserved.
