

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Trust Not the Green Lock

Eric Lawrence works at Google, where he is “helping bring HTTPS everywhere on the web as a member of the Chrome Security team.” (I preserved his phrasing because I’m not 100% sure what that means concretely, but working on security at Google bestows some baseline credibility.) A couple of days ago Lawrence published a blog post about malicious actors using free certificates from Let’s Encrypt to make themselves look more legit. As he put it:

One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that both good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites. […]

Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI — while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices. […] Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative.

Lawrence goes into much more detail, of course. His post hit the front page on Hacker News, and the commentary is interesting. (As usual! Hacker News gets a worse rap than it deserves, IMO.)

I want to frame this exploitation of freely available certificates as a result of the cacophony of the web. Anyone can publish, and anyone can access. Since internet users are able to choose anonymity, evading social or criminal consequences is easy. (See also: fake news, the wholly fabricated kind.) Even when there are opsec gaps, law enforcement doesn't have anywhere near the resources to chase down everyone who's targeting naive or careless users online.

Any trust signal that can be aped — especially if it can be aped cheaply — absolutely will be. Phishers and malware peddlers risk nothing. In fact, using https is not inherently deceptive (even when the site behind it surely is). The problem is on the interpretation end. Web browsers and users have both layered extra meaning on top of the plain technical reality of https.

To his credit, Lawrence calls the problem unsolvable. It is, because the question here is: “Can you trust a stranger if they have a badge that says they’re trustworthy?” Not if the badge can be forged. Or, in the case of https, if the badge technically denotes a certain kind of trust, but most people read it as being a different kind of trust.

(I’m a little out of my depth here, but my understanding is that https doesn’t mean “this site is trustworthy”, it just means “this connection is encrypted”. A standard domain-validated certificate only proves that whoever requested it controlled the domain at the time. There are higher types of certificates — extended validation — that verify more, like the legal identity behind the site, usually purchased by businesses or other institutions with financial resources.)
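The gap between what the padlock proves and what users read into it can be sketched in a few lines. This is purely illustrative — not any browser's actual code, and the names (`ConnectionState`, `padlock_label`) are my own invention — but it captures what the browser can and cannot attest to:

```python
# Illustrative sketch, not real browser logic: the padlock summarizes
# checks on the *connection*, and says nothing about the site's intent.
from dataclasses import dataclass

@dataclass
class ConnectionState:
    tls_established: bool   # handshake completed, traffic is encrypted
    cert_chain_valid: bool  # certificate chain verifies to a trusted root
    hostname_matches: bool  # certificate covers the domain in the URL

def padlock_label(state: ConnectionState) -> str:
    """Everything the green lock actually attests to."""
    if (state.tls_established
            and state.cert_chain_valid
            and state.hostname_matches):
        # All a domain-validated cert proves: whoever runs this domain
        # controlled it when the certificate was issued. Nothing more.
        return "connection encrypted to this domain"
    return "not secure"

# A phishing site with a free Let's Encrypt cert passes all three checks,
# so it earns the same label as your bank:
phish = ConnectionState(True, True, True)
print(padlock_label(phish))  # prints "connection encrypted to this domain"
```

The point of the sketch is that "Is this safe?" never appears as an input — which is exactly Lawrence's argument about why the wording problem is unsolvable.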

High-trust societies can mitigate this problem, of evaluating whether a stranger is going to screw you over, but there’s no way to upload those cultural norms. The internet is not structured for accountability. And people aren’t going to stop being gullible.

Anyway, Lawrence does have some suggestions for improving the current situation. Hopefully one or multiple of those will go forward.


Header photo by Joi Ito.

Reclaiming the Panopticon

The following is Tim Herd’s response to the previous dispatch about sousveillance.


A tech executive was quoted saying something like, “Privacy is dead. Deal with it.” [According to the Wall Street Journal, it was Scott McNealy of Sun Microsystems. He said, “You have zero privacy anyway. Get over it.”]

I think he’s right, for most working definitions of “privacy”. I think that security professionals, privacy advocates, etc., are fighting rearguard actions and they will lose eventually.

Less than a year after Amazon rolls out Alexa, cops pull audio from it to get evidence for a conviction. That microphone is on 24/7, and in full knowledge of this people still buy them.

Why?

Information is valuable. The same technology that lets me look up photos of your house for shits and grins, or to stalk you, is what powers Google Maps.

Privacy and these new technologies will, and have already, come into conflict. The value of the new tech is way, way more than the value of the privacy lost.

This can devolve into 1984 lightning fast. On the other hand, think about this: “Probably the best-known recent example of sousveillance is when Los Angeles resident George Holliday videotaped police officers beating Rodney King after he had been stopped for a traffic violation.” [From the Steve Mann paper.]

The same surveillance tech that makes us spied on all the time, makes other people spied on all the time. I can’t get up to no good, but cops can’t either.

It’s a tool, and it all depends on how it’s used.

Take me, for example. With a handful of exceptions that I am not putting to paper, there is nothing in my life that is particularly problematic. If the government were spying on me 24/7, it wouldn’t even matter. I have nothing to hide.

(I understand the implications regarding wider social norms. I’m working under the assumption that That Ship Has Sailed.)

The people who do have things to hide, well, we made that shit illegal for a reason. Why should I care when they get burned? That’s the whole goddamn point of the law.

(Aside: I believe that the more strictly enforced a law is, the better it is for everyone overall, because consistency of expectations is important. I bet that the roads would be much safer and more orderly if every single time anyone sped, ever, they automatically got a speeding ticket. Always. No matter what. No cat-and-mouse games with cops, no wondering which lights have speed cameras. Just a dirt-simple law. Here is the rule. Follow it and we are fine. Break it and you will always lose. So many problems are caused by people trying to game the rules, break them whenever possible, and follow them only when they have to.)

(Continued aside: Obviously shit would hit the fan if we started automatically 100% enforcing every traffic law. But you better believe that within a month of that policy being rolled out nationwide, speed limits would rise by at least 50%.)

The reason we care about surveillance is that a lot of things are more illegal than we think they should be.

Obvious example: In a world of perfect surveillance, 50% of California gets thrown in federal prison for smoking weed.

All of this is build-up to my hypothesis:

  • The fully surveilled world is coming, whether we like it or not.
  • This will bring us a ton of benefits if we’re smart and brave enough to leverage it.
  • This will bring an unprecedented ability for authorities to impose on us and coerce us, if we are not careful.

Which brings me to the actual thesis: Libertarianism and formal anarchy are going to be way more important in the near future, to cope with this. In a world of perfect surveillance, every person in San Francisco can be thrown in prison if a prosecutor feels like it. Because, for example, literally every in-law rental is illegal (unless they changed the law).

The way you get a perfect surveillance world without everyone going to prison is drastic liberalization of criminal law, drastic reduction of regulatory law, and live-and-let-live social norms that focus very precisely on harms suffered and on restorative justice.

A more general idea that I am anchoring everything on: A lot of people think tech is bad, but that is because they do not take agency over it. Tech is a tool with unimaginable potential for good… if you take initiative and use it. If you sit back and just wait for it to happen, it goes bad.

If you sit back and wait as Facebook starts spying on you more and more, then you will get burned. But if instead you take advantage of it and come up with a harebrained scheme to find dates by using Facebook’s extremely powerful ad-targeting technology… you will benefit so hard.


Header artwork depicting Facebook as a global panopticon by Joelle L.

The Strategic Subjects List

Detail of a satirical magazine cover for All Cops Are Beautiful, created by Krzysztof Nowak.


United States policing is full of newspeak, the euphemistic language that governments use to reframe their control of citizens. Take “officer-involved shooting”, a much-maligned term that police departments and then news organizations use to flatten legitimate self-defense and extrajudicial executions into the same type of incident.

And now, in the age of algorithms, we have Chicago’s “Strategic Subjects List”:

Spearheaded by the Chicago Police Department in collaboration with the Illinois Institute of Technology, the pilot project uses an algorithm to rank and identify people most likely to be perpetrators or victims of gun violence based on data points like prior narcotics arrests, gang affiliation and age at the time of last arrest. An experiment in what is known as “predictive policing,” the algorithm initially identified 426 people whom police say they’ve targeted with preventative social services. […]

A recently published study by the RAND Corporation, a think tank that focuses on defense, found that using the list didn’t help the Chicago Police Department keep its subjects away from violent crime. Neither were they more likely to receive social services. The only noticeable difference it made was that people on the list ended up arrested more often.

WOW, WHAT A WEIRD COINCIDENCE! The “strategic subjects” on the list were subjected, strategically, to increased police attention, and I’m sure they were all thrilled by the Chicago Police Department’s interest in their welfare.

Less than fifty years ago, the Chicago Police Department literally tortured black men in order to coerce “confessions”. None of that is euphemism. A cattle prod to the genitals — but maybe it ought to be called “officer-involved agony”?

I get so worked up about language because language itself can function as a predictive model. The words people use shape how they think, and thoughts have some kind of impact on actions. Naturally, the CPD officers who carried out the torture called their victims the N-word.

I wonder what proportion of the Strategic Subjects List is black? Given “data points like prior narcotics arrests [and] gang affiliation”, an algorithm can spit out the legacy of 245 years of legal slavery more efficiently than a human. But torture in Chicago is still handcrafted by red-blooded American men. Trump would be proud.

One Step Closer To Killer Roombas

Alice Maz discovered Knightscope’s “autonomous data machines”, aka crimebots. Not robots that knock over liquor stores, but robots that prevent crime. (Theoretically? I guess we’ll find out!) On their website, Knightscope enthuses, “Imagine no longer. The future is here today. It’s affordable, friendly, intelligent and best of all, it’s available NOW!” Anyway, Alice thought the crimebot was cute:

[Alice Maz’s tweeted photos of the Knightscope crimebots]

But hey, no worries — they’re not weaponized! According to Knightscope’s FAQ: “The K5 is a friendly community tool used exclusively to deliver relevant and real-time information to the appropriate authorities, not to enforce the law. It is an additional set of intelligent eyes and ears used to help security and law enforcement professionals do their jobs more effectively.”

In news that’s totally unrelated, I’m sure, @SwiftOnSecurity tweeted about humanity’s inevitable demise:

“We fear intelligent machines because humanity fears being judged. It is the fear we have no birthright claim to the throne of this world. If the machines should vote us unfit for hegemony, there exists nothing in this empty galaxy to break the tie. We’re down here alone. But really, what has scifi ever been other than a looking glass on our own insecurities in an age of lots of science, and plenty of fiction.”

¯\_(ツ)_/¯

© 2019 Exolymph. All rights reserved.

Theme by Anders Norén.