
Tag: Adam Elkus

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

The Internet of LOUD

On the way home from dinner, I wondered, “What am I gonna write about tonight?” Then I opened Twitter and faced this headline: “Hacker breaches the US agency that certifies voting machines” (only semi-confirmed).

So, ah, there’s that.

Cybersecurity is vital but hard, and the most important institutions seem to ignore it. Great!

Also, Adam Elkus said something funny:

This is 2016, so I should be able to back a secessionist kickstarter with bitcoins sent via virtual reality

It’s kinda possible if you donate to Liberland. Apparently a lot of their funds come through bitcoin.

Avalanche in progress. Photo by Sean Gillies.

Anyway.

What I really want to talk about is something else. I feel angsty. It’s a result of the cacophony: the unfettered flow of information that we’ve set up for ourselves, where people’s opinions about the news go straight into my face for hours every day. (What? I could choose not to do this? Preposterous.)

I like keeping track of what’s going on. But I hate putting up with the constant ambient wrongness.

Now, I’m a reasonable person, so I know that I’m not right about everything. I have natural biases, delusions engendered by tribalism, and often I must draw conclusions based on incomplete information. Some of these flaws will be discovered and fixed at some point, but others will continue to taint how I perceive and analyze the world. Just another stellar perk of being human!

Since I am human, even though I intellectually know that I’m wrong about some things, on an emotional level I think that all of my firm opinions are correct. It is extremely grating that everyone goes around disagreeing with me all the time. Especially since I have an agenda — a way that I want the world to proceed — and pesky other people never stop working against it.

This isn’t new, of course, but I can’t help thinking that the volume has increased. There is so much of it. In the “olden days,” did people with opinions have to restrain themselves from starting arguments left and right?

(Pun intended.)

Means & Ends of AI

Adam Elkus wrote an extremely long essay about some of the ethical quandaries raised by the development of artificial intelligence(s). In it he commented:

“The AI values community is beginning to take shape around the notion that the system can learn representations of values from relatively unstructured interactions with the environment. Which then opens the other can of worms of how the system can be biased to learn the ‘correct’ messages and ignore the incorrect ones.”

He is talking about unsupervised machine learning as it pertains to cultural assumptions. Furthermore, Elkus wrote:

“[A]ny kind of technically engineered system is a product of the social context that it is embedded within. Computers act in relatively complex ways to fulfill human needs and desires and are products of human knowledge and social grounding.”

I agree with this! Computers — and second-order products like software — are tools built by humans for human purposes. And yet this subject is most interesting when we consider how things might change when computers have the capacity to transcend human purposes.

Some people (Elkus perhaps included) dismiss this possibility as a pipe dream with no scientific basis. Perhaps the more salient inquiry is whether we can properly encode “human purposes” in the first place, who gets to define “human purposes”, and whether those aims can be adjusted later. If a machine can learn from itself and its past experiences (so to speak), starting over with a clean slate becomes trickier.
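Elkus’s can-of-worms point can be made concrete with a toy sketch. To be clear, this isn’t any real system: the action names and especially the feedback_signal function below are invented for illustration. The point is that whoever writes the feedback signal has already decided which messages are “correct,” and that what the system learns accumulates over time, which is why starting over with a clean slate means discarding everything it has absorbed.

```python
import random

# Hypothetical: whoever writes this function decides what counts as a
# "correct" message. That design choice is the encoding of "human purposes."
def feedback_signal(action: str) -> float:
    approved = {"share": 1.0, "hoard": -1.0}  # invented value judgments
    return approved.get(action, 0.0)

class ValueLearner:
    """Toy agent that learns value estimates from environmental feedback."""

    def __init__(self, actions, learning_rate=0.1):
        self.estimates = {a: 0.0 for a in actions}  # learned "values"
        self.lr = learning_rate

    def step(self):
        action = random.choice(list(self.estimates))
        reward = feedback_signal(action)
        # Incremental update: past experience persists in the estimates.
        # A true clean slate means zeroing all of this out, not just
        # changing the feedback signal going forward.
        self.estimates[action] += self.lr * (reward - self.estimates[action])

agent = ValueLearner(["share", "hoard", "wait"])
for _ in range(500):
    agent.step()
print(agent.estimates)  # mirrors whatever feedback_signal rewards
```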

I want to tie this quandary to a parallel phenomenon. In an article that I saw shared frequently this weekend, Google’s former design ethicist Tristan Harris (also billed as a product philosopher — dude has the best job titles) wrote of tech companies:

“They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. […] By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.”

Similarly, tech companies get to determine the parameters and “motivations” of artificially intelligent programs’ behavior. We mere users aren’t given the opportunity to ask, “What if the computer used different data analysis methods? What if the algorithm was optimized for something other than marketing conversion rates?” In other words: “What if ‘human purposes’ weren’t treated as synonymous with ‘business goals’?”
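To illustrate how small that swap is, here’s a hypothetical sketch; every field name is invented, and real recommender systems are vastly more complicated. The pipeline is identical in both cases; only the objective changes, and a human chose it.

```python
# Invented data: each item has a predicted conversion probability and
# some (contested, hard-to-measure) estimate of benefit to the user.
items = [
    {"title": "A", "p_conversion": 0.30, "user_benefit": 0.2},
    {"title": "B", "p_conversion": 0.05, "user_benefit": 0.9},
    {"title": "C", "p_conversion": 0.20, "user_benefit": 0.5},
]

# The platform's choice: rank by marketing conversion rate.
by_business_goal = sorted(items, key=lambda i: i["p_conversion"], reverse=True)

# The road not taken: rank by benefit to the user. Defining and measuring
# "user_benefit" is exactly the part nobody lets us ask about.
by_human_purpose = sorted(items, key=lambda i: i["user_benefit"], reverse=True)

print([i["title"] for i in by_business_goal])  # ['A', 'C', 'B']
print([i["title"] for i in by_human_purpose])  # ['B', 'C', 'A']
```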

Realistically, that will never happen; it’s as ludicrous as the former design ethicist’s idea of an “FDA for Tech.” Platforms’ and users’ needs don’t align perfectly, but they align well enough to create tremendous economic value, and that’s probably as good as the system can get.

