
Tag: tradeoffs

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Wanted: Efficacious Heuristics

A system is an arrangement of interlocking elements or moving parts that all affect each other, sometimes recursively. The world is full of them (and in fact each of those systems is itself an element of the larger system that comprises the whole universe).

Mathias Lafeldt wrote about how the human mind copes with this:

“Systems are invisible to our eyes. We try to understand them indirectly through mental models and then perform actions based on these models. […] Our built-in pattern detector is able to simplify complexity into manageable decision rules.”

Lafeldt’s explanation reminds me of the saying that the map is not the territory. Reality (which is a system) is the territory. Our mental models are maps that help us navigate that territory.

Artwork by Kyung-Min Chung.

But no map is a 1:1 representation of reality — that would be a duplicate, or a simulation. Rather, our maps give us heuristics for interpreting the lay of the land, so to speak, and rules for how to react to what we encounter. Maps are produced by fallible humans, so they contain inaccuracies. Often they don’t handle edge cases well (or at all).

Nevertheless, I like mental models. They cut through all the epistemological bullshit. Instead of optimizing a mental model to be true, you optimize it to be useful. An effective mental model is one that helps you be, well, more effective.

This is why Murphy’s Law is so popular despite being incorrect much of the time. Some plans do go off without a hitch. But expecting the chaotic worst is a socioeconomically adaptive behavior, so we keep the idea around.

My personal favorite mental model is a simple one: “There are always tradeoffs.” One of the tradeoffs of using mental models at all is that you sacrifice understanding the full complexity of a situation. Mental models, like maps, hide the genuine texture of the ground. In return they give you efficiency.
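
To make that tradeoff concrete, here’s a toy sketch in Python; every name and number in it is invented for illustration, not drawn from anything above. The full evaluation walks the whole territory; the heuristic consults a cheap map.

```python
import random

# Toy illustration: estimating total travel time across a route of
# road segments. full_evaluation inspects the whole "territory";
# heuristic_evaluation applies a one-line "map".

def full_evaluation(segment_times):
    """Sum every segment: accurate, but cost grows with the system."""
    return sum(segment_times)

def heuristic_evaluation(segment_count, typical_time=1.25):
    """Decision rule: assume every segment is typical. Cheap, blind to edge cases."""
    return typical_time * segment_count

random.seed(0)
route = [random.uniform(0.5, 2.0) for _ in range(1000)]
print(full_evaluation(route))            # the territory: roughly 1250
print(heuristic_evaluation(len(route)))  # the map: exactly 1250.0, until a bridge is out
```

The heuristic is close enough to act on and costs almost nothing; it just has no idea when the ground has shifted under it.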

Therapy Bots and Nondisclosure Agreements

Two empty chairs. Image via R. Crap Mariner.

Let’s talk about therapy bots. I don’t want to list every therapy bot that’s ever existed — and there are a few — so I’ll just trust you to Google “therapy bots” if you’re looking for a survey of the efforts so far. Instead I want to discuss the next-gen tech. There are ethical quandaries.

If (when) effective therapy bots come onto the market, it will be a miracle. Note the word “effective”: they have to actually help people deal with their angst and self-loathing and grief and resentment. Maybe the tech will be 3D facial models in VR with machine learning on the backend, or maybe it will take some manifestation I can’t come up with. It doesn’t really matter.

Any therapy bots that can do the job will do a tremendous amount of good. Not because I think they’ll be more skilled than human therapists (who knows) but because they’ll be more broadly available.

Software is an order of magnitude cheaper than human employees, so currently underserved demographics may gain greater access to professional mental healthcare than they have ever had before. Obviously the situation for rich people will still be better, but it’s preferable to be a poor person with a smartphone in a world where rich people have laptops than it is to be a poor person without a smartphone in a world where no one has a computer of any size.

Here’s the thing. Consider the data-retention policies of the companies that own the therapy bots. Of course all the processing power and raw data will live in the cloud. Will access to that information be governed by the same strict nondisclosure laws as human therapists? To what extent will HIPAA and equivalent non-USA privacy requirements apply?

Now, I don’t know about you, but if my current Homo sapiens therapist asked if she could record audio of our sessions, I would say no. I’m usually pretty blasé about privacy, and I’m somewhat open about being mentally ill, but the actual content of my conversations with my therapist is very serious to me. I trust her, but I don’t trust technology. All kinds of companies get breached.

Information on anyone else’s computer — that includes the cloud, which is really just a rented datacenter somewhere — is information that you don’t control, and information that you don’t control has a way of going places you don’t expect it to.
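
One partial mitigation, and this is my sketch rather than anything an actual therapy-bot vendor has announced, is end-to-end encryption: encrypt the transcript on the user’s device so the cloud only ever stores ciphertext. A minimal example using Python’s cryptography library:

```python
from cryptography.fernet import Fernet

# Hypothetical sketch: the session transcript is encrypted on the
# user's device, and only ciphertext is uploaded. The key never
# leaves the device, so a breach of the vendor's cloud exposes
# nothing readable.

key = Fernet.generate_key()   # stays on the user's device
cipher = Fernet(key)

transcript = "Session notes: patient discussed grief.".encode("utf-8")
ciphertext = cipher.encrypt(transcript)   # this is all the cloud sees

# Only the keyholder can recover the plaintext.
assert cipher.decrypt(ciphertext) == transcript
```

Of course, this limits what the vendor’s backend can do, which is exactly the tradeoff: the machine learning lives in the cloud, and it can’t learn from ciphertext.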

Here’s something I guarantee would happen: An employee at a therapy bot company has a spouse who uses the service. That employee is abusive. They access their spouse’s session data. What happens next? Who is held responsible?
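
At a minimum you’d want the vendor to enforce access controls and keep an audit log, so that kind of lookup is blocked when the conflict is known and leaves a trail when it isn’t. A hypothetical sketch, reflecting no real product:

```python
import datetime

# Hypothetical sketch: deny lookups where the employee has a declared
# relationship with the user, and log every attempt either way. A real
# system would also need out-of-band conflict detection, since an
# abuser is unlikely to self-report.

RECORDS = {"patient-17": "<encrypted session blob>"}
CONFLICTS = {"employee-4": {"patient-17"}}   # e.g., employee-4 is married to patient-17
AUDIT_LOG = []

def fetch_session(employee_id, patient_id):
    allowed = patient_id not in CONFLICTS.get(employee_id, set())
    AUDIT_LOG.append((datetime.datetime.now(datetime.timezone.utc),
                      employee_id, patient_id, "GRANTED" if allowed else "DENIED"))
    if not allowed:
        raise PermissionError("conflict of interest on record")
    return RECORDS[patient_id]
```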

I’m not saying that therapy bots are an inherently bad idea, or that the inevitable harm to individuals would outweigh the benefits to lots of other individuals. I’m saying that we have a hard enough time with sensitive data as it is. And I believe that collateral damage is a bug, not a feature.


Great comments on /r/DarkFuturology.

Release the Panama Papers, Please

Jack Smith IV on Twitter, linking to an article that I quote further below.

The massive corpus of documents called the Panama Papers has been reported on, selectively, but not released for public review. I have a problem with this. I don’t think the journalists involved are malicious, but I also don’t trust them, for all the regular reasons why I don’t trust people who control the flow of information. Craig Murray’s excoriation of “western corporate media” overstates the case a bit, but he does a good job of summarizing the obvious concerns.

The International Consortium of Investigative Journalists coordinated this shindig. In response to criticism like Murray’s, the head of ICIJ told Wired “that the media organizations have no plans to release the full dataset, WikiLeaks-style, which he argues would expose the sensitive information of innocent private individuals along with the public figures on which the group’s reporting has focused.” Again, I don’t doubt that the ICIJ is a reputable organization, and their data analysis sounds quite rigorous. But this gated approach is fundamentally dangerous.

I summed up my initial thoughts on Twitter: “Yes, opening up the data would compromise the privacy of innocents. I think transparency is worth the collateral damage. Tradeoffs!” Also: “True accountability doesn’t take place in silos.” I am by no means an open-data absolutist, but I think this is a case where the benefits outweigh the costs. Cybersecurity reporter JM Porup went further in The Daily Dot:

“Whoever funds the investigation affects what gets covered — and what gets emphasized — and what doesn’t. As Wikileaks pointed out, USAID funded the attack story on Russian President Vladimir Putin. What stories did not get funded because they might make America and its allies look bad? This is a subtle form of economic censorship, but censorship all the same.”

Given all of this, I’m annoyed that the Columbia Journalism Review posted a sanctimonious article condemning critics of the ICIJ’s closed-data scheme as conspiracy theorists, lumping all of us in with Alex Jones. Jack Murtha wrote, “What’s most striking is how a misunderstanding of how the news media works can simultaneously condemn proven muckrakers and empower state-run propaganda arms.” Uh, no. I don’t think anyone misunderstands how the news media works — journalism is actually very straightforward at an object level. We just disagree with your methods, sir.

As LA Times reporter Matt Pearce quipped on Twitter, “Nobody loves a gatekeeper.”

Cybersecurity Tradeoffs & Risks

Kevin Roose hired a couple of high-end hackers to penetration-test his personal cybersecurity setup. It did not go well, unless you count “realizing that you’re incredibly vulnerable” as “well”. In his write-up of the exercise, Roose mused:

“The scariest thing about social engineering is that it can happen to literally anyone, no matter how cautious or secure they are. After all, I hadn’t messed up — my phone company had. But the interconnected nature of digital security means that all of us are vulnerable, if the companies that safeguard our data fall down on the job. It doesn’t matter how strong your passwords are if your cable provider or your utility company is willing to give your information out over the phone to a stranger.”

There is a genuine tradeoff between safety and convenience when it comes to customer service. Big companies typically err on the side of convenience. That’s why Amazon got in trouble back in January. Most support requests are legitimate, so companies practice lax security and let the malicious needles in the haystack slip through their fingers (to mix metaphors egregiously). If a business like Amazon enacts rigorous security protocols and makes employees stick to them, the average user with a real question is annoyed. Millions of average users’ mild discomfort outweighs a handful of catastrophes.
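
The underlying arithmetic is easy to sketch. With invented numbers (nothing here comes from Amazon), the lax policy looks cheaper from the company’s chair, because the cost of friction is spread across everyone while the cost of breaches lands on a few victims:

```python
# Back-of-the-envelope tradeoff with invented numbers.

requests_per_year = 10_000_000
attack_fraction = 1e-6          # one malicious request per million
friction_cost = 0.50            # dollars of goodwill lost per annoyed caller
breach_cost = 100_000           # cost of one successful account takeover

strict_policy = requests_per_year * friction_cost
lax_policy = requests_per_year * attack_fraction * breach_cost

print(f"strict: ${strict_policy:,.0f}")  # $5,000,000 in accumulated annoyance
print(f"lax:    ${lax_policy:,.0f}")     # $1,000,000, mostly borne by the victims
```

Tweak any of those assumptions and the conclusion flips; the point is only that the company’s ledger and the victim’s ledger are not the same ledger.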

Artwork by Michael Mandiberg.

In semi-related commentary, Linux security developer Matthew Garrett said on Twitter (regarding the Apple-versus-FBI tussle):

“The assumption must always be that if it’s technically possible for a company to be compelled to betray you, it’ll happen. No matter how trustworthy the company [seems] at present. No matter how good their PR. If the law ever changes, they’ll leak your secrets. It’s important that we fight for laws that respect privacy, and it’s important that we design hardware on the assumption we won’t always win.”

Although Garrett is commenting on a different issue within a different context, I think these two events are linked. The basic idea is that when you trust third parties to protect your privacy (including medical data and financial access), you should resign yourself to being pwned eventually. Perhaps with the sanction of your government.
