
Tag: infosec

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Trust Not the Green Lock

Eric Lawrence works at Google, where he is “helping bring HTTPS everywhere on the web as a member of the Chrome Security team.” (I preserved his phrasing because I’m not 100% sure what that means concretely, but working on security at Google bestows some baseline credibility.) A couple of days ago Lawrence published a blog post about malicious actors using free certificates from Let’s Encrypt to make themselves look more legit. As he put it:

One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that both good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites. […]

Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI — while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices. […] Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative.

Lawrence goes into much more detail, of course. His post hit the front page on Hacker News, and the commentary is interesting. (As usual! Hacker News gets a worse rap than it deserves, IMO.)

I want to frame this exploitation of freely available certificates as a result of the cacophony of the web. Anyone can publish, and anyone can access. Since internet users are able to choose anonymity, evading social or criminal consequences is easy. (See also: fake news, the wholly fabricated kind.) Even when there are opsec gaps, law enforcement doesn’t have anywhere near the resources to chase down everyone who’s targeting naive or careless users online.

Any trust signal that can be aped — especially if it can be aped cheaply — absolutely will be. Phishers and malware peddlers risk nothing. In fact, using https is not inherently deceptive (although in these cases it is surely intended to deceive). The problem is on the interpretation end. Web browsers and users have both layered extra meaning on top of the plain technical reality of https.

To his credit, Lawrence calls the problem unsolvable. It is, because the question here is: “Can you trust a stranger if they have a badge that says they’re trustworthy?” Not if the badge can be forged. Or, in the case of https, if the badge technically denotes a certain kind of trust, but most people read it as being a different kind of trust.

(I’m a little out of my depth here, but my understanding is that https doesn’t mean “this site is trustworthy”, it just means “this connection is encrypted”. There are higher tiers of certificates, like extended validation, that verify more about the organization behind a site; those are usually purchased by businesses or other institutions with financial resources.)
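
To make that concrete, here is a minimal Python sketch (my own illustration, nothing from Lawrence’s post) that pulls the certificate from a live TLS handshake. All it can show you is who issued the certificate, what name it covers, and when it expires; none of those fields can tell you whether the site will rob you.

```python
import socket
import ssl

def fetch_cert_fields(hostname: str, port: int = 443) -> dict:
    """Grab the server's certificate over a normal, validated TLS handshake.

    A clean handshake only proves the connection is encrypted and the
    certificate matches the hostname; it says nothing about whether the
    site behind it is honest.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # Domain-validated certs (like Let's Encrypt's) usually carry only the
    # hostname; higher-validation certs also include organization details.
    return {
        "subject": cert.get("subject"),
        "issuer": cert.get("issuer"),
        "expires": cert.get("notAfter"),
    }

if __name__ == "__main__":
    # example.com is just a placeholder hostname.
    print(fetch_cert_fields("example.com"))
```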

High-trust societies can mitigate this problem, of evaluating whether a stranger is going to screw you over, but there’s no way to upload those cultural norms. The internet is not structured for accountability. And people aren’t going to stop being gullible.

Anyway, Lawrence does have some suggestions for improving the current situation. Hopefully one or more of them will go forward.


Header photo by Joi Ito.

I Hope You Like the NSA Because the NSA Sure Likes You

Today’s news about the NSA feels a little too spot-on. I hope the hackneyed scriptwriters for 2017 feel ashamed:

In its final days, the Obama administration has expanded the power of the National Security Agency to share globally intercepted personal communications with the government’s 16 other intelligence agencies before applying privacy protections.

The new rules significantly relax longstanding limits on what the N.S.A. may do with the information gathered by its most powerful surveillance operations, which are largely unregulated by American wiretapping laws. These include collecting satellite transmissions, phone calls and emails that cross network switches abroad, and messages between people abroad that cross domestic network switches.

The change means that far more officials will be searching through raw data. Essentially, the government is reducing the risk that the N.S.A. will fail to recognize that a piece of information would be valuable to another agency, but increasing the risk that officials will see private information about innocent people.

Really? Expanding the NSA’s power, so soon after the Snowden plotline? A move like this might be exciting in an earlier season, but at this point the show is just demoralizing its viewers. Especially after making the rule that no one can turn off their TV, ever, it just seems cruel.

At least the Brits have it worse? I dunno, that doesn’t make me feel better, since America likes to import UK culture. (It’s one of our founding principles!)

Now is a good time to donate to the Tor Project, is what I’m saying.

In other news, researchers can pull fingerprints from photos and use the data to unlock your phone, etc. Throwback: fingerprints are horrible passwords.

Remember, kids, remaining in your original flesh at all is a poor security practice.


Header photo via torbakhopper, who attributes it to Scott Richard.

The Productive Attitude to Privacy

Instead of considering privacy to be a right that you deserve, think of it as a condition that you can create for yourself. Comprehensive privacy is difficult to achieve — aim to hide the pieces of information that matter to you the most. Even in countries that say their citizens are entitled to privacy, abstract guarantees are meaningless if you don’t take action to protect the information that you want to conceal. (Remember, you’re only one “national security emergency” away from losing all the rights you were promised.)

Photo by Cory Doctorow.

For the most part, protecting information with your actions means restricting access to it. As I wrote before, “when you trust third parties to protect your privacy (including medical data and financial access), you should resign yourself to being pwned eventually.”

The key to perfect privacy is to avoid recording or sharing any information in the first place. If you never write down your secret, then no one can copy-paste it elsewhere, nor bruteforce any cipher that you may have used to obscure it. Thank goodness we haven’t figured out how to hack brains in detail! But unfortunately, some pieces of information — like passwords with plenty of entropy — aren’t useful unless you’re able to copy-paste them. Who can memorize fifty different diceware phrases? The key to imperfect-but-acceptable privacy is figuring out your limits and acting accordingly. How much risk are you willing to live with?
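
As a rough illustration of the entropy math, here is a small Python sketch of the diceware idea: pick words uniformly at random from a list, and count log2 of the list size in bits per word. The wordlist below is a toy stand-in; a real diceware list has 7,776 words, which puts a six-word phrase at roughly 77 bits.

```python
import math
import secrets

# Toy stand-in wordlist; a real diceware list has 7,776 words.
WORDLIST = ["correct", "horse", "battery", "staple", "lunar",
            "cobalt", "marble", "quiver", "ferry", "outpost"]

def diceware_phrase(num_words: int = 6, wordlist=WORDLIST) -> tuple[str, float]:
    """Return a passphrase plus its entropy in bits: log2(len(wordlist)) per word."""
    phrase = " ".join(secrets.choice(wordlist) for _ in range(num_words))
    entropy_bits = num_words * math.log2(len(wordlist))
    return phrase, entropy_bits

if __name__ == "__main__":
    phrase, bits = diceware_phrase()
    # With this 10-word toy list: ~20 bits. With a full 7,776-word list: ~77 bits.
    print(f"{phrase}  (~{bits:.0f} bits)")
```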

The main argument against my position is that responsibilities that could be assigned to communities are instead pushed onto individuals, who are demonstrably ill-equipped to cope with the requirements of infosec.

“Neoliberalism insists that we are all responsible for ourselves, and its prime characteristic is the privatisation of resources — like education, healthcare, and water — once considered essential rights for everyone (for at least a relatively brief period in human history so far). Within this severely privatised realm, choice emerges as a mantra for all individuals: we can all now have infinite choices, whether between brands of orange juice or schools or banks. This reverence for choice extends to how we are continually pushed to think of ourselves as not just rewarded with choices in material goods and services but with choices in how we constitute our individual selves in order to survive.” — Yasmin Nair

Reddit user m_bishop weighed in:

“I’ve been saying this for years. Treat anything you say online like you’re shouting it in a crowded subway station. It’s not everyone else’s job to ignore you, though it is generally considered rude to listen in.

Bottom line, if you don’t want people to see you naked, don’t walk down the street without your clothes on. All the written agreements and promises to simply ‘not look’ aren’t going to work.”

Hacking as a Business

Update 1/19/2018: The interviewee asked me to redact his identity from this blog post, and I obliged.

[Redacted] describes himself as a “web application penetration tester.” I asked him a bunch of questions about what that entails. [Redacted] answered in great depth, so I redacted my boring questions, lightly edited his answers, and made it into an essay. Take a tour through the 2000s-era internet as well as a crash course in how an independent hacker makes money. Without any further ado, here’s the story…


Origin Story

I got into my line of work when I was thirteen, playing the game StarCraft. I saw people cheating to get to the top and I wanted to know how they did it. At first I wasn’t that interested in programming, purely because I didn’t understand it. I moved my gaming to Xbox (the original!) shortly thereafter and was a massive fan of Halo 2. Again, I saw people cheating (modding, standbying, level boosting) and instantly thought, “I want to do this!” I learned how people were making mods and took my Xbox apart to start mucking with things.

I moved away from Xbox and back to the computer (I can never multitask). Bebo was just popping up. With an intro to coding already, I saw that you could send people “luv”. Based on my mentality from the last two games I played… I wanted the most luv and to be rank #1. I joined a forum called “AciidForums” and went by the names [redacted] and [redacted]. Suddenly I was surrounded by people who shared my interests. I started to code bots for Bebo to send myself luv. My coding got a lot better and so did my thinking path. I’d come home from school and instantly go on my computer — it was a whole new world to me. I still have old screenshots of myself with seventy-six million luv.

As my coding came along I met a lot of different types of people. Some couldn’t code but had ideas for bots; some couldn’t code but knew how to break code. We all shared information and formed a team. Suddenly I became the main coder and my friends would tell me about exploits they found. We got noticed. I’m not sure how, or why, but I seem to always get in with the right people. Perhaps it’s the way I talk or act — who knows. I made friends with a couple of Bebo employees. They were interested in how I was doing what I was doing.

This was my introduction to hacking and exploiting. I moved on from Bebo after coming to an agreement with the company that I’d leave them alone. Sadly my friends and I all lost contact, and it was time to move on.

Next came Facebook. At this point I already knew how to code and exploit. I instantly found exploits on Facebook and started again, getting up to mischief. Along the way I met [redacted] and we became best friends because we shared the same ideas and interests. Two years passed and again, my mischief went a bit far, so I got in trouble with Facebook. We resolved the issue and I vowed to never touch Facebook again.

I guess three times lucky, hey? I moved my exploiting to porn sites. After a year I was finally forced to make peace with the porn site I was targeting. I was getting fed up with always having to stop… but I was also getting annoyed at how easy it was to exploit. I needed a challenge.

I took a year off from exploiting to focus on improving my coding skills. I worked for a few people and also on some of my own personal projects, but it got repetitive and I needed a change. At this point, I was actually arrested by the eCrime Unit for apparently being [redacted] from [a hacking group; name redacted]. The charges were dropped since I was innocent. My former friend [redacted] was in prison for hacking so I was feeling quite lonely and not sure what to do. I’ll be honest, he had become like a brother to me.

I kept on coding for a bit, feeling too scared to even look for exploits after what happened to [friend’s name redacted]. (A few years have passed since then — [redacted] is out and he’s learned his lesson.) I knew that hacking was illegal and bad. I’d just like to note that I’ve never once maliciously hacked a site or stolen data, in case you think I was a super blackhat hacker, but the incident also scared me. Especially since I got arrested too.

Because of this and through other life changes, I knew I wanted to help people. I took my exploiting skills and started looking. I found some exploits instantly and started reporting them to companies to let them know, and to help fix them. 99% of the companies replied and were extremely thankful. Some even sent me T-shirts, etc.

I started targeting a few sites (I can’t name which because we have NDAs now; I’m still actively helping many). By using my words right, I managed to get in with a few people. I started reporting vulnerabilities and helping many companies. Months passed and one company showed a lot of interest in what I was doing. I got invited to fly over to meet them. I knew something was going right at this point, so I knuckled down and put all of my focus on finding vulnerabilities and reporting them to this company. Things were going great and I soon overloaded their team with more than they could handle. I started looking further afield at more sites, and suddenly I was introduced to HackerOne. I saw that LOADS of sites had bounties and paid for vulnerabilities. I instantly knew that this was where I wanted to stay. To this day I am still active on HackerOne, but normally I run in private programs now (better payouts).

Fast forward through a year of exploiting and helping companies and now we’re here. I’ve been a nerd for ten years. Eight years coding, and around seven years exploiting.

Business Practices

For companies that don’t have a bug bounty, I tend to spend thirty minutes to an hour finding simple bugs such as XSS (cross-site scripting) or CSRF (cross-site request forgery). I’ll try to find a contact email and send them a nice detailed email about what I’ve found and what the impact is. I also supply them with information about how they can fix it. I never ask for money or anything over the first few emails — I tend to get their attention first, get them to acknowledge what I’ve found, and get them to agree that I can look for more. At that point I’ll ask if they offer any type of reward for helping them. The majority reply that they are up for rewarding me, due to the amount of help I’ve given them.
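
[Editor’s note: for readers who haven’t seen one, here is a toy illustration of the kind of “simple bug” mentioned above. It isn’t from any real target; it’s just a deliberately vulnerable Flask route that reflects user input straight into HTML, the classic reflected XSS pattern.]

```python
# Deliberately vulnerable example, for illustration only.
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def search():
    query = request.args.get("q", "")
    # Vulnerable: user input is interpolated straight into the HTML response.
    # A link like /search?q=<script>alert(1)</script> runs script in the
    # victim's browser, which is the reflected XSS a tester probes for first.
    return f"<h1>Results for {query}</h1>"

# The fix is to escape before rendering, e.g. markupsafe.escape(query),
# or to render through a template engine that auto-escapes.

if __name__ == "__main__":
    app.run()
```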

After I’ve helped the company for a while and they’ve rewarded me, etc, I usually suggest that they join HackerOne for a much cleaner process of reporting bugs and rewarding me (it also helps my rep on HackerOne). So far two have joined and one started their own private bounty system.

To sum it up, I’ll start off with basic bugs to get their attention, then once I’ve gotten the green light to dig deeper, I’ll go and find the bigger bugs. This helps me not waste my time on companies who don’t care about security. (Trust me, I’ve reported bugs and gotten no reply, or a very rude response!) I like to build a good relationship with companies before putting a lot of hours into looking for bugs. A good relationship with companies is a win-win situation for everyone — they get told about vulnerabilities on their site, and I get rewarded. Perfect.

In case you wanted to know, I’ve helped around ten companies who didn’t have a bug bounty. Nine of them have rewarded me (with either money, swag, or recognition on their website). Only one has told me they don’t offer any type of reward, but welcomed me to look for bugs to help them (pfft, who works for free?). Out of the nine who rewarded me, I’ve built a VERY close relationship with three of them. (Met with one company in January, and meeting with another in June.)

There are two types of companies: those who simply can’t afford to reward researchers, and those who think, “Well, no one has hacked us yet, so why bother paying someone to find bugs?” [Redacted] is probably the worst company I’ve dealt with after reporting a few critical bugs. They rarely reply to bugs, let alone fix them. It took an email letting them know that I was disclosing one bug to the public, to warn users that their information on [redacted] was at risk, before they finally replied and fixed it.

100% of companies should change their perspectives. Again I’ll use [redacted] as an example. I only really look at their site when I’m bored (which is rarely) and I’ve uncovered a ton of vulns. I wonder what I could find if I spent a week looking for bugs (and if they rewarded me). Companies need to stop thinking, “No one has hacked us yet, so we’re good.”

If a company can’t afford to pay researchers to find bugs, then they should reconsider their business. Hacking is on the rise and it’s not going anywhere anytime soon (if ever). If you honestly can’t afford it, though, then my suggestion (if I was the CEO of a company that couldn’t afford security) would be to run a hackathon within the company. Let the devs go look for bugs and run a competition in-house. Your devs not only learn about writing secure code, but it’s fun too!


Many thanks to [redacted] for writing great answers to my questions.

Cybersecurity Tradeoffs & Risks

Kevin Roose hired a couple of high-end hackers to penetration-test his personal cybersecurity setup. It did not go well, unless you count “realizing that you’re incredibly vulnerable” as “well”. In his write-up of the exercise, Roose mused:

“The scariest thing about social engineering is that it can happen to literally anyone, no matter how cautious or secure they are. After all, I hadn’t messed up — my phone company had. But the interconnected nature of digital security means that all of us are vulnerable, if the companies that safeguard our data fall down on the job. It doesn’t matter how strong your passwords are if your cable provider or your utility company is willing to give your information out over the phone to a stranger.”

There is a genuine tradeoff between safety and convenience when it comes to customer service. Big companies typically err on the side of convenience. That’s why Amazon got in trouble back in January. Most support requests are legitimate, so companies practice lax security and let the malicious needles in the haystack slip through their fingers (to mix metaphors egregiously). If a business like Amazon enacts rigorous security protocols and makes employees stick to them, the average user with a real question is annoyed. Millions of average users’ mild discomfort outweighs a handful of catastrophes.

Artwork by Michael Mandiberg.

In semi-related commentary, Linux security developer Matthew Garrett said on Twitter (regarding the Apple-versus-FBI tussle):

“The assumption must always be that if it’s technically possible for a company to be compelled to betray you, it’ll happen. No matter how trustworthy the company [seems] at present. No matter how good their PR. If the law ever changes, they’ll leak your secrets. It’s important that we fight for laws that respect privacy, and it’s important that we design hardware on the assumption we won’t always win”

Although Garrett is commenting on a different issue within a different context, I think these two events are linked. The basic idea is that when you trust third parties to protect your privacy (including medical data and financial access), you should resign yourself to being pwned eventually. Perhaps with the sanction of your government.

Behavior Can Trump Encryption

Advice from Whonix’s guide to preserving internet anonymity via web opsec:

“Someone sent you an pdf by mail or gave you a link to a pdf? That sender/mailbox/account/key could be compromised and the pdf could be prepared to infect your system. Don’t open it with the default tool you were expected use […] by the creator. For example, don’t open a pdf with a pdf viewer.”

I’m less interested in this specific suggestion than the principle behind it. The advice about avoiding the expected default tool hits on a key insight — when you can, subvert your enemy’s expectations. How would their perfect target behave? Adopt the opposite practices. Of course, this adds a lot of inconvenience to your life, so keep in mind whether your situation warrants elaborate identity-protection.
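
As a concrete (and purely hypothetical) version of Whonix’s example, here is a small Python sketch that shells out to poppler’s pdftotext to turn an untrusted PDF into plain text rather than opening it in a viewer. It assumes pdftotext is installed, and it only shrinks the attack surface; the genuinely paranoid move is to run even this inside a throwaway VM.

```python
import subprocess
import tempfile
from pathlib import Path

def pdf_to_text(untrusted_pdf: Path) -> str:
    """Extract text from an untrusted PDF with poppler's pdftotext
    instead of opening it in a full-featured PDF viewer.

    This only reduces the attack surface; it does not eliminate it.
    """
    with tempfile.TemporaryDirectory() as tmp:
        out = Path(tmp) / "out.txt"
        subprocess.run(["pdftotext", str(untrusted_pdf), str(out)],
                       check=True, timeout=30)
        return out.read_text(errors="replace")

if __name__ == "__main__":
    # "suspicious.pdf" is a placeholder filename.
    print(pdf_to_text(Path("suspicious.pdf"))[:500])
```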

Photo by Shannon Kringen.

I also read through some of the Hacker News comments on Whonix’s how-to, and this one by user nikcub stood out as particularly savvy: “Your personas [should be] isolated and segregated. They share no information, hobbies, interests and at a tech level they don’t share connections, machines, browsers, apps.”
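
At the browser level, the cheapest piece of that segregation is giving each persona its own profile directory. Here is a hypothetical Python sketch using Chromium’s --user-data-dir flag; note that it isolates browser state only, and does nothing about the separate connections and machines that nikcub also calls for.

```python
import subprocess
from pathlib import Path

# Hypothetical persona names; each one gets its own profile directory so
# cookies, history, extensions, and logins never cross between identities.
PERSONAS = ["dayjob", "research", "throwaway"]

def launch_persona(name: str, base: Path = Path.home() / "personas") -> None:
    """Start Chromium with a dedicated --user-data-dir for this persona.

    The binary may be named chromium, chromium-browser, or google-chrome
    depending on the system; adjust accordingly.
    """
    profile_dir = base / name
    profile_dir.mkdir(parents=True, exist_ok=True)
    subprocess.Popen(["chromium", f"--user-data-dir={profile_dir}"])

if __name__ == "__main__":
    launch_persona("research")
```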

When you’re trying to stay anonymous, having access to high-tech tools is very helpful — donate to Tor! — but being careful and thinking through every step is even more crucial.

nikcub’s comment also recommended Underground Tradecraft (thegrugq’s Tumblr) so I fell into an opsec rabbit hole. Expect more on this topic soon.

© 2019 Exolymph. All rights reserved.
