Tag: machine learning

Political Economics, I Guess

“Silicon valley ran out of ideas about three years ago and has been warming up stuff from the ’90s that didn’t quite work then. […] The way that Silicon Valley is structured, there needs to be a next big thing to invest in to get your returns.” — Bob Poekert

Bob Poekert’s avatar on Twitter.

I interviewed Bob Poekert, whose website has an unsurpassable URL. Perhaps “interviewed” is not the right word, since my queries weren’t particularly cogent. Mainly we had a disjointed conversation in which I asked a lot of questions.

Poekert is a software engineer who I follow on Twitter and generally admire. He says interesting contrarian things like:

“all of the ‘machine learning’/’algorithms’ that it’s sensical to talk about being biased are rebranded actuarial science” — 1

(Per the Purdue Department of Mathematics, “An actuary is a business professional who analyzes the financial consequences of risk. Actuaries use mathematics, statistics, and financial theory to study uncertain future events, especially those of concern to insurance and pension programs.”)

(Also, Poekert said on the phone with me, “[The label] AI is something you slap on your project if you want to get funding, and has been since the ’80s.” But of course, what “AI” means has changed substantially over time. “It’s because somebody realized that they could get more funding for their startup if they started calling it ‘artificial intelligence’.” Automated decision trees used to count as AI.)

“what culture you grew up in, what language you speak, and how much money your parents have matter more for diversity than race or gender” — 2

“the single best thing the government could do for the economy is making it low-risk for low-income people to start businesses” — 3

“globalization has pulled hundreds of millions of people out of poverty, and it can pull a billion more out” — 4

“the ‘technology industry’ (read: internet) was never about technology, it’s about developing new markets” — 5

Currently Poekert isn’t employed in the standard sense. He told me, “I’m actually working on a video client, like a Youtube client, for people who don’t have internet all the time.” For instance, you could queue up videos and watch them later, even when you’re sans internet. (Poekert notes, “most people in the world are going to have intermittent internet for the foreseeable future.”)

Poekert has a background in computer science. He spent two years studying that subject in college before he quit to work at Justin.tv, which later morphed into Twitch. Circa 2012, Poekert joined Priceonomics, but was eventually laid off when the company switched strategies.

I asked Poekert about Donald Trump. He said that DJT “definitely tapped into something,” using the analogy of a fungus-ridden log. The fungus is dormant for ages before any mushrooms sprout. “There’s something that’s been, like, growing and festering for a really long time,” Poekert told me. “It’s just a more visible version” of a familiar trend.

Forty percent of the electorate feels like their economic opportunities are decreasing. They are convinced that their children will do worse than they did. You can spin this with the Bernie Sanders narrative of needing to address inequality — or the Trump narrative of needing to address inequality. The recommended remedies are different, but the emotional appeal is similar.

Poekert remarked, in reference to economists’ assumptions, “It would be nice if we lived in a world where everyone is a rational actor.” But that world doesn’t actually exist.

Subscribe to Sonya Mann's updates newsletter using the form below.

You Wouldn’t Steal an Algorithm!

Andy Greenberg reported that comp-sci researchers have figured out how to crack the code (pun very intended) of machine learning algorithms. I don’t usually get excited about tech on its own, but this is very cool:

“In a paper they released earlier this month titled ‘Stealing Machine Learning Models via Prediction APIs,’ a team of computer scientists at Cornell Tech, the Swiss institute EPFL in Lausanne, and the University of North Carolina detail how they were able to reverse engineer machine learning-trained AIs based only on sending them queries and analyzing the responses. By training their own AI with the target AI’s output, they found they could produce software that was able to predict with near-100% accuracy the responses of the AI they’d cloned, sometimes after a few thousand or even just hundreds of queries.”

There are some caveats to add, mainly that more complex algorithms with more opaque results would be harder to duplicate via this technique.

The approach is genius. Maciej Ceglowski pithily summarized machine learning like this in a recent talk: “You train a computer on lots of data, and it learns to recognize structure.” Algorithms can be really damn good at pattern-matching. This reverse-engineering process just leverages that in the opposite direction.
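The mechanism is simple enough to sketch in a few lines. The toy below is my own illustration, not the researchers’ code: a scikit-learn logistic regression stands in for the model behind a prediction API, and the “attacker” trains a clone purely on the target’s answers to generated queries — the models, dataset, and query counts are invented for the example.

```python
# Minimal sketch of model extraction: train a "clone" only on the
# target model's answers to our queries, never on the real labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "victim": a model sitting behind a hypothetical prediction API.
target = LogisticRegression(max_iter=1000).fit(X, y)

# The attacker sends a few thousand queries and records the responses.
queries = rng.normal(size=(3000, 10))
responses = target.predict(queries)

# Fit the clone on (query, response) pairs.
clone = DecisionTreeClassifier(random_state=0).fit(queries, responses)

# Measure how often the clone agrees with the target on fresh inputs.
fresh = rng.normal(size=(1000, 10))
agreement = (clone.predict(fresh) == target.predict(fresh)).mean()
print(f"clone agrees with target on {agreement:.1%} of fresh queries")
```

The clone never sees the target’s training data or parameters — only its outputs — which is exactly why the attack works through an ordinary API.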

I’m excited to see this play out in the news over the next few years, as the reverse-engineering capabilities get more sophisticated. Will there be lawsuits? (I hope there are lawsuits.) Will there be mudslinging on Twitter? (Always.)

There are also journalistic possibilities, for exposing the inner workings of the algorithms that increasingly determine the shape of our lives. Should be fun!

Header photo by Erik Charlton.

Means & Ends of AI

Adam Elkus wrote an extremely long essay about some of the ethical quandaries raised by the development of artificial intelligence(s). In it he commented:

“The AI values community is beginning to take shape around the notion that the system can learn representations of values from relatively unstructured interactions with the environment. Which then opens the other can of worms of how the system can be biased to learn the ‘correct’ messages and ignore the incorrect ones.”

He is talking about unsupervised machine learning as it pertains to cultural assumptions. Furthermore, Elkus wrote:

“[A]ny kind of technically engineered system is a product of the social context that it is embedded within. Computers act in relatively complex ways to fulfill human needs and desires and are products of human knowledge and social grounding.”

I agree with this! Computers — and second-order products like software — are tools built by humans for human purposes. And yet this subject is most interesting when we consider how things might change when computers have the capacity to transcend human purposes.

Some people — Elkus perhaps included — scoff at this possibility as a pipe dream with no scientific basis. Perhaps the more salient inquiry is whether we can properly encode “human purposes” in the first place, and who gets to define “human purposes”, and whether those aims can be adjusted later. If a machine can learn from itself and its past experiences (so to speak), starting over with a clean slate becomes trickier.

I want to tie this quandary to a parallel phenomenon. In an article that I saw shared frequently this weekend, Google’s former design ethicist Tristan Harris (also billed as a product philosopher — dude has the best job titles) wrote of tech companies:

“They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. […] By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.”

Similarly, tech companies get to determine the parameters and “motivations” of artificially intelligent programs’ behavior. We mere users aren’t given the opportunity to ask, “What if the computer used different data analysis methods? What if the algorithm was optimized for something other than marketing conversion rates?” In other words: “What if ‘human purposes’ weren’t treated as synonymous with ‘business goals’?”

Realistically, this will never happen, just like the former design ethicist’s idea of an “FDA for Tech” is ludicrous. Platforms’ and users’ needs don’t align perfectly, but they align well enough to create tremendous economic value, and that’s probably as good as the system can get.

Foozles + Whizgigs + Dopamine

“Humans are actually extremely good at certain types of data processing. Especially when there are only few data points available. Computers fail with proper decision making when they lack data. Humans often actually don’t.” — Martin Weigert on his blog Meshed Society

Weigert is referring to intuition. In a metaphorical way, human minds function like unsupervised machine learning algorithms. We absorb data — experiences and anecdotes — and we spit out predictions and decisions. We define the problem space based on the inputs we encounter and define the set of acceptable answers based on the reactions we get from the world.

There’s no guarantee of accuracy, or even of usefulness. It’s just a system of inputs and outputs that bounce against the given parameters. And it’s always in flux — we iterate toward a moving reward state, eager to feel sated in a way that a computer could never understand. In a way that we can never actually achieve. (What is this “contentment” you speak of?)
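The metaphor can be made concrete with a toy unsupervised learner: unlabeled data in, structure out, with no one telling the algorithm what the groups mean. This is a sketch using scikit-learn’s k-means as a stand-in; the dataset is invented for illustration.

```python
# Toy version of "absorb data, spit out structure": k-means clustering
# finds groups in unlabeled points without being told what they mean.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)
# Two blobs of unlabeled "experience".
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# The algorithm invents its own categories; whether they are useful
# is judged only by how well they match the world afterward.
print("cluster centers:\n", model.cluster_centers_)
```

Like the intuition the metaphor describes, nothing in the procedure guarantees the discovered categories are accurate or useful — only that they fit the inputs it happened to see.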

Computer memory space. Photo by Steve Jurvetson.

Kate Losse wrote in reference to the whole Facebook “Trending Topics” debacle:

“no choice a human or business makes when constructing an algorithm is in fact ‘neutral,’ it is simply what that human or business finds to be most valuable to them.”

That’s the reward state. Have you generated a result that is judged to be valuable? Have a dopamine hit. Have some money. Have all the accoutrements of capitalist success. Have a wife and a car and two-point-five kids and keep absorbing data and keep spitting back opinions and actions. If you deviate from the norms that we’ve collectively evolved to prize, then your dopamine machine will be disabled.

It’s only a matter of time until we make this relationship more explicit, right? Your job regulating the production of foozles and whizgigs will require brain stem and cortical access. You can be zapped with fear or drowned in pleasure whenever it suits the suits.

Whence Came the Intruder?

This dispatch is a followup to “Unsolved Appearance of a Virus”. You don’t need to read the first installment to understand or enjoy this one.

“Flora had done the original forensic trace of Sam’s last actions and cried furiously when she couldn’t find enough information to explain anything.”

Flora missed Sam. They were never involved romantically, but she was always relieved to spend time with him. He did silly things like learn new typing schemes while they were supposed to be working, and then get yelled at because it slowed down his refactoring project and raised his bug ratio. Now it didn’t matter. The “only use Dvorak when you’re on the clock” rule was obsolete.

Flora was what you call “high strung” — she didn’t tolerate other humans particularly well. Management had special mood supplements just for her because everyone else’s bog-standard doses made her too jittery to work. She would sit in her room playing with old neural networks that they’d discarded more than a year ago, fine-tuning them for no particular purpose that she could name. That habit almost got her fired, in fact, but the second-level supervisor stepped in and suggested trying a different supplement combo first.

Management made their thoughts about “tetchy geniuses” known, but Flora stayed on, and her ROI as an employee was sufficient.

When Sam’s body was found, no one told Flora for several hours. She was plugged into her analytics trance, and the second-level supervisor insisted that they shouldn’t interrupt her. When Flora reemerged, everyone was gathered in the kitchen, looking at their shoes. She walked in, tugging off her rumpled sweatshirt, and stopped short when she saw their faces. “What’s going on?”

“Sam is dead. Greg found him. He was… sort of wedged into the H6 stack.”

“You didn’t need to tell her that,” Melanie hissed.

The eng lead took a step forward, raising his hand like he might touch Flora’s shoulder. “It’s so awful… I’m sorry. I know you were close.”

Without saying anything, Flora turned around and walked back up the stairs to her room. She flinched when she passed the H level, feeling the sudden pain of a cramp in her gut.

“When you maintain a computer the size of a house, the project consumes all of you. Really, the computer was a house. All the onsite engineers lived there, in the bowels of the machine. Its metal and silicon body occupied the tall column of emptiness that had been retrofitted into the building’s structure.”

Flora flipped open her laptop and powered it on. Five hours later, she was still reviewing logs, intermittently sobbing with frustration and punching the bed. She could tell that something was wrong — there was plenty of evidence at every level — but she couldn’t tell why. She felt like her brain was shaking inside her skull.

Unlike the rest of the team, Flora navigated the world by intuition. In some ways she was ill-suited to programming, even though she had the rules in her memory like anyone else. When exploring new territory, she pattern-matched without always being able to identify why the pattern was important. But she was so familiar with this system — so intimately connected to the way everything was supposed to be arranged — that the dissonance was obvious as soon as she examined layers beneath their usual dev tools.

And yet the source of the dissonance remained un-obvious. Flora reviewed the tampering that she’d found with the rest of the team, and then the second-level supervisor passed it up the chain. Management had invested too much to trash the project, and they weren’t pleased about rolling back to the version before any of this started.

“The machine was not sentient. No one ever thought that — Silicon Valley had given up on true artificial intelligence decades ago. Rather, the provocation uploaded through Sam’s brainlink was sabotage. Someone had been monkeying around with the firmware, and then the layer on top of that, and then even the UI. It was silly to mess with the interface — who cared about that part, right? This was an enterprise API endeavor, not a goddam web app.”

After a few months, they did end up letting Flora go. She couldn’t stop obsessing about what had happened to Sam, and management suggested to the supervisor that this was suspicious. He was gently reprimanded for Flora’s lapses. His file was tagged with a note that his promotion should be delayed.

After Flora was fired, the project lost momentum and started to drift off of management’s radar. The sales staff wasn’t getting good results from this prospect, so the budget was slashed. The second-level supervisor was transferred to headquarters. After another year, the project was officially shut down and the devs dispersed.

The company still owned the building and the massive machine it contained, but they used that infrastructure to remotely process other work. Aside from the maintenance crew that swapped out components every couple of weeks, the place was left alone.

Software Is Hungry

You may have heard that DeepMind’s machine-learning program AlphaGo beat reigning world champion Lee Sedol in the ancient and complex game of Go. (Technically, AlphaGo has only won two of five matches, but the writing on the wall is clear.) More and more lately, artificial intelligence is in the news, gaining on the analogue world by leaps and bounds. I’m glad of this, despite the accompanying proliferation of media fear-mongering. Hardworking programmers and data scientists are accelerating the future; they deserve recognition. (Shoutout to Francis Tseng!)

Illustration by Michele Rosenthal.

Unfortunately the present — I know Exolymph’s gimmick is the future-present, but in this case I mean the past-present — consists of tediously logging back in on website after website. Daily life is so mundane compared to the cutting edge. I restored my laptop to factory defaults, which is great because it’s not broken anymore, but I had to reenter my username and password(s) all over the place. It was a little disturbing to realize how many companies have dossiers of data about me. I don’t expect anything bad to happen to that information, but it’s an inherent vulnerability. What if I had a stalker? What if I want to pursue investigative journalism at some point?

The connecting thread between AlphaGo’s prowess and the way privacy keeps slipping away from individuals is that software is eating the world. We’re subsumed by technology, by the math that powers flashing lights behind screens. I’m okay with it. Human nature is fundamentally the same — all that’s changed is the conduit.

Statuses To Update

Tonight I’m reading up on how machine learning actually works. To be honest, I don’t understand the concrete mechanisms by which computers do intelligence-y things. I know some of the keywords — “big data” pops into my head — and I have a general idea of how they interact, but it doesn’t go deeper than “general idea”. So I’m seeking more information! This is very mundane, but it constantly amazes me that I have access to just about everything people know about any technical topic.

Cyberpunk rabbit by Vojtěch Lacina.

That reminds me of a line I read in an article criticizing San Francisco as a putrid dystopia: “After all, technology is social before it is technical.” When software developers make comments like that, it gives me a little hope for myself in the tech world. I love this industry — it fascinates and infuriates me — but I don’t have any of the requisite skills to participate in the normatively valued ways. I can’t write code. I can’t build databases or even make websites from scratch. But I’m okay when it comes to wrangling humans. I’m a decent communicator.

In this capitalist hellscape we inhabit, do you make time to appreciate yourself? Do you allow yourself a little vanity? I do, but mostly because I can’t help it.

Tonight I made a Slack discussion group called Cyberpunk Futurism. For those who are unfamiliar with Slack, it’s basically a group chat forum. If you want to participate, click here and sign up. I’m not sure how many people will be interested, but I figured it was worth a try 🙂

© 2018 Exolymph. All rights reserved.
