
Tag: cybersecurity (page 1 of 2)

No Escape from the Dreaded Content

When I started Exolymph, I thought about making it a links newsletter instead of a random-reflections newsletter. I decided not to do that for two reasons:

  1. There are already tons of links newsletters, and far fewer newsletters that offer a five- or ten-minute shot of ideas. (Glitchet is an excellent links newsletter that also features weird net art.)
  2. As a person who subscribes to many links newsletters, I know that they can be stressful. There are more interesting articles than I have time to read.

However. I’ve come across so many incredible stories over the past forty-eight hours that I can’t narrow it down. (I did limit the Trump content.) Not all of these articles were published recently, but they’re all indicative of The State of the World, Cyber Edition.

Don’t click on anything that doesn’t truly grab you, just let the deluge of headlines keep flowing…

“Who is Anna-Senpai, the Mirai Worm Author?”

Brian Krebs, a respected cybersecurity journalist, investigated the botnet that knocked his site down with a massive DDoS attack last September. The result is a bizarre real-life whodunnit that takes place almost entirely online, replete with braggadocious shitposting on blackhat forums and the tumbling of shaky Minecraft empires. SO GOOD. (Also, buy his book!)

“Security Economics”

Spammers and hackers are just in it to get rich, or whatever the Eastern European equivalent is. (That stereotype exists for a reason. Again, buy Krebs’ book!) This is a quick overview of the players’ financial motives from an industry participant.

“Scammers Say They Got Uber to Pay Them With Fake Rides and Drivers”

The headline sums it up pretty well. Bonus: identity-theft slang!

“Doomsday Prep for the Super-Rich”

Both hilarious and depressing, my favorite combo. Silicon Valley billionaires and multimillionaires are buying up land in New Zealand, stockpiling weapons, and getting surgery to fix their eyesight. Their paranoia — or is it pragmatism? — is framed as a reaction to Trump’s election. Here’s a more explicitly political companion piece, if you want that.

“This Team Runs Mark Zuckerberg’s Facebook Page”

As the wise elders have counseled us, “He who leads Brand… must become Brand.” Zuck is taking that ancient adage seriously. The kicker: “There are more than a dozen Facebook employees writing Mark Zuckerberg’s posts or scouring the comments for spammers and trolls.” MORE THAN TWELVE HUMAN BEINGS.

“Advanced Samizdat Techniques: Scalping Millennials”

Warning: authored by a notorious neo-Nazi. Everything weev does is evil. But also brilliant. Here we have an example of both, which is funny if you’re able to momentarily suspend your sense of decency. (I didn’t cloak the link, because it leads to Storify rather than a Nazi-controlled website.)

“World’s main list of science ‘predators’ vanishes with no warning”

Either someone is suing the poor guy who compiled it, or… threatening his family? Let’s hope the situation isn’t that sinister.

“Dictators use the Media Differently than Narcissists and Bullies”

Guess which self-obsessed politician this is about? (Granted, all politicians are more self-obsessed than the average person. But the MAGNITUDE, my friends, the magnitude!)

“RAND’s Christopher Paul Discusses the Russian ‘Firehose of Falsehood’”

A counterpoint to the previous link.

“How Casinos Enable Gambling Addicts”

Modern slot machines are expertly engineered to trick players and engender addiction. (The writer strongly implies a regulatory solution, which I don’t endorse, but the gambling industry is definitely diabolical.)

Lastly — most crucially — Ted Cruz totally clobbered Deadspin on Twitter. Aaand that’s it. Enjoy your Wednesday.


Header artwork by Emre Aktuna.

Trust Not the Green Lock

Eric Lawrence works at Google, where he is “helping bring HTTPS everywhere on the web as a member of the Chrome Security team.” (I preserved his phrasing because I’m not 100% sure what that means concretely, but working on security at Google bestows some baseline credibility.) A couple of days ago Lawrence published a blog post about malicious actors using free certificates from Let’s Encrypt to make themselves look more legit. As he put it:

One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that both good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites. […]

Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI — while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices. […] Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative.

Lawrence goes into much more detail, of course. His post hit the front page on Hacker News, and the commentary is interesting. (As usual! Hacker News gets a worse rap than it deserves, IMO.)

I want to frame this exploitation of freely available certificates as a result of the cacophony of the web. Anyone can publish, and anyone can access. Since internet users are able to choose anonymity, evading social or criminal consequences is easy. (See also: fake news, the wholly fabricated kind.) Even when there are opsec gaps, law enforcement doesn’t have anywhere near the resources to chase down everyone who’s targeting naive or careless users online.

Any trust signal that can be aped — especially if it can be aped cheaply — absolutely will be. Phishers and malware peddlers risk nothing. In fact, using https is not inherently deceptive (although in these cases the intent surely is). The problem is on the interpretation end. Web browsers and users have both layered extra meaning on top of the plain technical reality of https.

To his credit, Lawrence calls the problem unsolvable. It is, because the question here is: “Can you trust a stranger if they have a badge that says they’re trustworthy?” Not if the badge can be forged. Or, in the case of https, if the badge technically denotes a certain kind of trust, but most people read it as being a different kind of trust.

(I’m a little out of my depth here, but my understanding is that https doesn’t mean “this site is trustworthy”, it just means “this site is encrypted”. There are higher tiers of certificates that validate more (organization validation and extended validation), usually purchased by businesses or other institutions with financial resources.)
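To make that distinction concrete, here’s a minimal sketch in Python (the certificate dict and the phishy domain are invented for illustration) showing what a browser can and can’t learn from a domain-validated certificate. The dict mirrors the shape that `ssl.SSLSocket.getpeercert()` returns:

```python
def summarize_cert(cert: dict) -> dict:
    """Summarize a certificate dict shaped like ssl.SSLSocket.getpeercert() output."""
    # subject/issuer are tuples of RDNs; each RDN is a tuple of (key, value) pairs
    subject = dict(rdn[0] for rdn in cert.get("subject", ()))
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return {
        # Any valid cert proves control of the domain name at issuance time...
        "domain": subject.get("commonName"),
        # ...but only OV/EV certs carry a vetted real-world organization identity.
        "organization": subject.get("organizationName", "(none; domain-validated only)"),
        "issuer": issuer.get("organizationName"),
    }

# A typical free, domain-validated certificate, abridged; the domain is made up.
dv_cert = {
    "subject": ((("commonName", "paypal-secure-login.example"),),),
    "issuer": ((("organizationName", "Let's Encrypt"),),),
}
print(summarize_cert(dv_cert))
```

Note what’s missing from the output: nothing in a DV cert ties the domain to a real-world entity, which is exactly why the green lock can’t answer “is this safe?”
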

High-trust societies can mitigate this problem, of evaluating whether a stranger is going to screw you over, but there’s no way to upload those cultural norms. The internet is not structured for accountability. And people aren’t going to stop being gullible.

Anyway, Lawrence does have some suggestions for improving the current situation. Hopefully one or more of them will be adopted.


Header photo by Joi Ito.

I Hope You Like the NSA Because the NSA Sure Likes You

Today’s news about the NSA feels a little too spot-on. I hope the hackneyed scriptwriters for 2017 feel ashamed:

In its final days, the Obama administration has expanded the power of the National Security Agency to share globally intercepted personal communications with the government’s 16 other intelligence agencies before applying privacy protections.

The new rules significantly relax longstanding limits on what the N.S.A. may do with the information gathered by its most powerful surveillance operations, which are largely unregulated by American wiretapping laws. These include collecting satellite transmissions, phone calls and emails that cross network switches abroad, and messages between people abroad that cross domestic network switches.

The change means that far more officials will be searching through raw data. Essentially, the government is reducing the risk that the N.S.A. will fail to recognize that a piece of information would be valuable to another agency, but increasing the risk that officials will see private information about innocent people.

Really? Expanding the NSA’s power, so soon after the Snowden plotline? A move like this might be exciting in an earlier season, but at this point the show is just demoralizing its viewers. Especially after making the rule that no one can turn off their TV, ever, it just seems cruel.

At least the Brits have it worse? I dunno, that doesn’t make me feel better, since America likes to import UK culture. (It’s one of our founding principles!)

Now is a good time to donate to the Tor Project, is what I’m saying.

In other news, researchers can pull fingerprints from photos and use the data to unlock your phone, etc. Throwback: fingerprints are horrible passwords.

Remember, kids, remaining in your original flesh at all is a poor security practice.


Header photo via torbakhopper, who attributes it to Scott Richard.

Cyber Arms Racing

Cybersecurity researcher Bruce Schneier published a provocatively titled blog post — “Someone Is Learning How to Take Down the Internet” — which can either be interpreted as shocking or blasé, depending on your perspective. The gist is that sources within high-level web infrastructure companies told Schneier that they’re facing increasingly sophisticated DDoS attacks:

“These attacks are significantly larger than the ones they’re used to seeing. They last longer. They’re more sophisticated. And they look like probing. One week, the attack would start at a particular level of attack and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, along those lines, as if the attacker were looking for the exact point of failure.”
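The signature in that description is easy to state precisely. Here’s a hypothetical sketch (the traffic numbers are invented) of flagging the “each attack resumes near last week’s ceiling and keeps climbing” pattern in weekly peak-traffic samples:

```python
def looks_like_probing(weekly_peaks: list[list[float]]) -> bool:
    """Flag the ramp-and-resume pattern: each week's attack starts near the
    previous week's peak and climbs from there."""
    for prev, curr in zip(weekly_peaks, weekly_peaks[1:]):
        starts_higher = curr[0] >= max(prev) * 0.9  # resumes near last week's ceiling
        ramps_up = curr[-1] > curr[0]               # and keeps climbing
        if not (starts_higher and ramps_up):
            return False
    return len(weekly_peaks) >= 2  # need at least two weeks to call it a pattern

# Gbps peaks sampled across three consecutive weeks of attacks
print(looks_like_probing([[100, 200, 300], [290, 400, 500], [480, 600, 650]]))
```
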

Schneier goes on to speculate that the culprit is a state actor, likely Russia or China. So, I have a few reactions:

1) I would be very surprised in the opposite case, if Schneier asserted that no one was trying to figure out how to take down the internet. Just like the executives of public companies have a fiduciary duty to be as evil as possible in order to make money for their shareholders, government agencies have a mandate to be as evil as possible in order to maintain global power.

When I say “evil” I don’t mean that they’re malicious. I mean they end up doing evil things. And then their adversaries do evil things too, upping the ante. Etc, etc.

2) Schneier’s disclosure may end up in the headlines, but the disclosure itself is not a big deal in the grand scheme of things. Venkatesh Rao said (in reference to Trump, but it’s still relevant), “It takes very low energy to rattle media into sound and fury, ‘break the Internet’ etc. Rattling the deep state takes 10,000x more energy.”

3) I don’t expect whoever is figuring out how to “DDoS ALL THE THINGS!” to actually do it anytime soon. Take this with a grain of salt, since I’m not a NatSec expert by any means, but it would be counterproductive for China, Russia, or the United States itself to take the internet offline under normal circumstances. “Normal circumstances” is key — the expectations change if an active physical conflict breaks out, as some Hacker News commenters noted.

I suspect that being able to take down the internet is somewhat akin to having nukes — it’s a capability that you’d like your enemies to be aware of, but not necessarily one that you want to exercise.

I also like what “Random Guy 17” commented on Schneier’s original post: “An attack on a service is best done by an attacker that doesn’t need that service.”

Therapy Bots and Nondisclosure Agreements

Two empty chairs. Image via R. Crap Mariner.

Let’s talk about therapy bots. I don’t want to list every therapy bot that’s ever existed — and there are a few — so I’ll just trust you to Google “therapy bots” if you’re looking for a survey of the efforts so far. Instead I want to discuss the next-gen tech. There are ethical quandaries.

If (when) effective therapy bots come onto the market, it will be a miracle. Note the word “effective”. Maybe it’ll be 3D facial models in VR, and machine learning for the backend, but it might be some manifestation I can’t come up with. Doesn’t really matter.

They have to actually help people deal with their angst and self-loathing and grief and resentment. Any therapy bots that can do that will do a tremendous amount of good — not because I think they’ll be more skilled than human therapists (who knows), but because they’ll be more broadly available.

Software is an order of magnitude cheaper than human employees, so currently underserved demographics may have greater access to professional mental healthcare than they ever have before. Obviously the situation for rich people will still be better, but it’s preferable to be a poor person with a smartphone in a world where rich people have laptops than it is to be a poor person without a smartphone in a world where no one has a computer of any size.

Here’s the thing. Consider the data-retention policies of the companies that own the therapy bots. Of course all the processing power and raw data will live in the cloud. Will access to that information be governed by the same strict nondisclosure laws as human therapists? To what extent will HIPAA and equivalent non-USA privacy requirements apply?

Now, I don’t know about you, but if my current Homo sapiens therapist asked if she could record audio of our sessions, I would say no. I’m usually pretty blasé about privacy, and I’m somewhat open about being mentally ill, but the actual content of my conversations with my therapist is very serious to me. I trust her, but I don’t trust technology. All kinds of companies get breached.

Information on anyone else’s computer — that includes the cloud, which is really just a rented datacenter somewhere — is information that you don’t control, and information that you don’t control has a way of going places you don’t expect it to.

Here’s something I guarantee would happen: An employee at a therapy bot company has a spouse who uses the service. That employee is abusive. They access their spouse’s session data. What happens next? Who is held responsible?

I’m not saying that therapy bots are an inherently bad idea, or that the inevitable harm to individuals would outweigh the benefits to lots of other individuals. I’m saying that we have a hard enough time with sensitive data as it is. And I believe that collateral damage is a bug, not a feature.


Great comments on /r/DarkFuturology.

Two Announcements + One Cyber Link

Artwork by Eduarda Mariz.

First off, my email provider told me that I misconfigured sonya@exolymph.news, so if you contacted me within the last week or so and didn’t get a response, I may not have received your email. It’s fixed now, so hit me up again.

Secondly, I’m going backpacking in Desolation Wilderness this week, so Exolymph is on hold. Instead of sending you the usual dispatches, I’ll send you article suggestions.

Today’s recommended reading is “Cyber Security Motivations Guessing Game” by The Grugq, an infamous infosec researcher and exploit broker. (Well, at least he’s infamous on Twitter.)

“If you’re from the ‘killing bugs makes the internet inherently safer’ camp, then Chinese companies are clearly doing more to secure the Internet than any European company. [However, it might be] a strategic cyber operation to deny Chinese adversaries access to critical resources. For example, if your cyber program doesn’t need unpatched vulnerabilities as a critical component but your adversary’s does, you may invest in disclosing vulnerabilities.”

Enjoy!

Hacking as a Business: Interview with Sean Roesner

Sean Roesner describes himself on Twitter as a “web application penetration tester.” I asked him a bunch of questions about what that entails. Sean answered in great depth, so I redacted my boring questions, lightly edited Sean’s answers, and made it into an essay. Take a tour through the 2000s-era internet as well as a crash course in how an independent hacker makes money. Without any further ado, Sean Roesner…


Origin Story

I got into my line of work when I was thirteen, playing the game StarCraft. I saw people cheating to get to the top and I wanted to know how they did it. At first I wasn’t that interested in programming, purely because I didn’t understand it. I moved my gaming to Xbox (the original!) shortly thereafter and was a massive fan of Halo 2. Again, I saw people cheating (modding, standbying, level boosting) and instantly thought, “I want to do this!” I learned how people were making mods and took my Xbox apart to start mucking with things.

I moved away from Xbox and back to the computer (I can never multitask). Bebo was just popping up. With an intro to coding already, I saw that you could send people “luv”. Based on my mentality from the last two games I played… I wanted the most luv and to be rank #1. I joined a forum called “AciidForums” and went by the names “DCH SlayeR” and “SlayeR”. Suddenly I was surrounded by people who shared my interests. I started to code bots for Bebo to send myself luv. My coding got a lot better and so did my thinking path. I’d come home from school and instantly go on my computer — it was a whole new world to me. I still have old screenshots of myself with seventy-six million luv.

Bebo screenshot from back in the day — check out the luv stats.

As my coding came along I met a lot of different types of people. Some couldn’t code but had ideas for bots; some couldn’t code but knew how to break code. We all shared information and formed a team. Suddenly I became the main coder and my friends would tell me about exploits they found. We got noticed. I’m not sure how, or why, but I seem to always get in with the right people. Perhaps it’s the way I talk or act — who knows. I made friends with a couple of Bebo employees, “Andy Cutright” and “Brian” (never did know his last name). They were interested in how I was doing what I was doing.

This was my introduction to hacking and exploiting. I moved on from Bebo after coming to an agreement with the company that I’d leave them alone. Sadly my friends and I all lost contact, and it was time to move on.

Next came Facebook. At this point I already knew how to code and exploit. I instantly found exploits on Facebook and started again, getting up to mischief. Along the way I met James Jeffery and we became best friends because we shared the same ideas and interests. Two years passed and again, my mischief went a bit far, so I got in trouble with Facebook. We resolved the issue and I vowed to never touch Facebook again.

I guess three times lucky, hey? I moved my exploiting to porn sites. After a year I was finally forced to make peace with the porn site I was targeting. I was getting fed up with always having to stop… but I was also getting annoyed at how easy it was to exploit. I needed a challenge.

I took a year off from exploiting to focus on improving my coding skills. I worked for a few people and also on some of my own personal projects, but it got repetitive and I needed a change. At this point, I was actually arrested by the eCrime Unit for apparently being “in^sane” from TeaMp0isoN. The charges were dropped since I was innocent. My former friend James Jeffery was in prison for hacking (a quick Google search will yield you results) so I was feeling quite lonely and not sure what to do. I’ll be honest, he had become like a brother to me.

I kept on coding for a bit, feeling too scared to even look for exploits after what happened to James Jeffery. (A few years have passed since then — James is out and he’s learned his lesson.) I knew that hacking was illegal and bad. I’d just like to note that I’ve never once maliciously hacked a site or stolen data, in case you think I was a super blackhat hacker, but the James incident also scared me. Especially since I got arrested too.

Because of this and through other life changes, I knew I wanted to help people. I took my exploiting skills and started looking. I found some exploits instantly and started reporting them to companies to let them know, and to help fix them. 99% of the companies replied and were extremely thankful. Some even sent me T-shirts, etc.

I started targeting a few sites (I can’t name which because we have NDAs now; I’m still actively helping many). By using my words right, I managed to get in with a few people. I started reporting vulnerabilities and helping many companies. Months passed and one company showed a lot of interest in what I was doing. I got invited to fly over to meet them. I knew something was going right at this point, so I knuckled down and put all of my focus on finding vulnerabilities and reporting them to this company. Things were going great and I soon overloaded their team with more than they could handle. I started looking further afield at more sites, and suddenly I was introduced to HackerOne. I saw that LOADS of sites had bounties and paid for vulnerabilities. I instantly knew that this was where I wanted to stay. To this day I am still active on HackerOne, but normally I run in private programs now (better payouts).

Fast forward through a year of exploiting and helping companies and now we’re here. I’ve been a nerd for ten years. Eight years coding, and around seven years exploiting.

Business Practices

For companies that don’t have a bug bounty, I tend to spend thirty minutes to an hour finding simple bugs such as XSS (cross-site scripting) or CSRF (cross-site request forgery). I’ll try to find a contact email and send them a nice detailed email about what I’ve found and what the impact is. I also supply them with information about how they can fix it. I never ask for money or anything over the first few emails — I tend to get their attention first, get them to acknowledge what I’ve found, and get them to agree that I can look for more. At that point I’ll ask if they offer any type of reward for helping them. The majority reply that they are up for rewarding me, due to the amount of help I’ve given them.
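For readers unfamiliar with the bug class Sean leads with, here’s a toy sketch (entirely hypothetical, not from any site he tested) of reflected XSS and its standard fix — reflecting user input into a page without escaping it:

```python
import html

def render_search_page(query: str, escape: bool = True) -> str:
    """Render a search-results heading; skipping escaping is the classic reflected XSS."""
    shown = html.escape(query) if escape else query
    return f"<h1>Results for {shown}</h1>"

# An attacker-supplied "search term" that is really a script
payload = '<script>steal(document.cookie)</script>'

vulnerable = render_search_page(payload, escape=False)  # payload lands as live markup
patched = render_search_page(payload, escape=True)      # payload becomes inert text
print(patched)
```

The vulnerable variant would execute the attacker’s script in the victim’s browser; the patched one displays it harmlessly, which is why this is usually a one-line fix once reported.
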

After I’ve helped the company for a while and they’ve rewarded me, etc, I usually suggest that they join HackerOne for a much cleaner process of reporting bugs and rewarding me (it also helps my rep on HackerOne). So far two have joined and one started their own private bounty system.

To sum it up, I’ll start off with basic bugs to get their attention, then once I’ve gotten the green light to dig deeper, I’ll go and find the bigger bugs. This helps me not waste my time on companies who don’t care about security. (Trust me, I’ve reported bugs and gotten no reply, or a very rude response!) I like to build a good relationship with companies before putting a lot of hours into looking for bugs. A good relationship with companies is a win-win situation for everyone — they get told about vulnerabilities on their site, and I get rewarded. Perfect.

In case you wanted to know, I’ve helped around ten companies who didn’t have a bug bounty. Nine of them have rewarded me (with either money, swag, or recognition on their website). Only one has told me they don’t offer any type of reward, but welcomed me to look for bugs to help them (pfft, who works for free?). Out of the nine who rewarded me, I’ve built a VERY close relationship with three of them. (Met with one company in January, and meeting with another in June.)

There are two types of companies. Those who simply can’t afford to reward researchers and those who think, “Well, no one has hacked us yet, so why bother paying someone to find bugs?” AutoTrader is probably the worst company I’ve dealt with after reporting a few critical bugs. They rarely reply to bugs, let alone fix them. It took an email letting them know that I was disclosing one bug to the public, to warn users that their information on AutoTrader was at risk. After that they finally replied and fixed it.

100% of companies should change their perspectives. Again I’ll use AutoTrader as an example. I only really look at their site when I’m bored (which is rarely) and I’ve uncovered a ton of vulns. I wonder what I could find if I spent a week looking for bugs (and if they rewarded me). Companies need to stop thinking, “No one has hacked us yet, so we’re good.”

If a company can’t afford to pay researchers to find bugs, then they should reconsider their business. Hacking is on the rise and it’s not going anywhere anytime soon (if ever). If you honestly can’t afford it, though, then my suggestion (if I was the CEO of a company that couldn’t afford security) would be to run a hackathon within the company. Let the devs go look for bugs and run a competition in-house. Your devs not only learn about writing secure code, but it’s fun too!


Many thanks to Sean Roesner for writing great answers to my questions. Follow him on Twitter and hire him to hack your website 🙂

This post has been edited on 11/17/2016 to reflect that Roesner and James Jeffery are no longer friends.

Excrement Online: The Perilous Connected Home

Mike Dank (Famicoman) wrote an article for Node about the Internet of Things. Here are a few interesting tidbits:

“We have these devices that we never consider to be a potential threat to us, but they are just as vulnerable as any other entity on the web. […] Can you imagine a drone flying around, delivering malware to other drones? Maybe the future of botnets is an actual network of infected flying robots. […] Is it only a matter of time before we see modifications and hacks that can cause these machine[s] to feel? Will our computers hallucinate and spout junk? Maybe my coffee maker will only brew half a pot before it decides to no longer be subservient in my morning ritual.”

I think we’re a long way from coffeemakers with emergent minds, and my guess is that machine intelligence will be induced before it starts appearing randomly. But I like the idea of a mischievous hacker giving “life” to someone’s household appliances. Of course, connected devices can wreak havoc unintentionally, like when people’s Nest thermostats glitched (the incident written up in The New York Times wasn’t the only one). The clever Twitter account Internet of Shit provides a helpful stream of additional examples.

Artwork by Tumitu Design.

I’m not worried about someone cracking my doorknob’s software or meddling with my refrigerator settings, because I’m insignificant and there’s no reason why a hacker would target me. (Not saying that it couldn’t happen, just that it’s not likely enough to fret about. Especially since I don’t actually have any connected thingamajigs… yet.) Most regular folks are like me. However, I think keeping the Internet of Things secure is crucial, for a couple of reasons:

  1. Physical safety is absolutely key. Data-based privacy invasions can jeopardize your employment, but they’re unlikely to outright kill you or your family. Someone who is immunocompromised or frail (think people who are very sick, very old, or very young) can be seriously harmed by unexpected low temperatures or spoiled meat from a faulty fridge.
  2. In order to feel safe, people need to be able to reliably control their environment. When we go out into the world, events are unpredictable and we can’t be at ease. Home is supposed to be the opposite — it’s your own domain, and you feel comfortable because everything is how you like it. I know that I’d feel uneasy if the Roomba suddenly barged into my bedroom and tried to eat my feet.

Cybersecurity Tradeoffs & Risks

Kevin Roose hired a couple of high-end hackers to penetration-test his personal cybersecurity setup. It did not go well, unless you count “realizing that you’re incredibly vulnerable” as “well”. In his write-up of the exercise, Roose mused:

“The scariest thing about social engineering is that it can happen to literally anyone, no matter how cautious or secure they are. After all, I hadn’t messed up — my phone company had. But the interconnected nature of digital security means that all of us are vulnerable, if the companies that safeguard our data fall down on the job. It doesn’t matter how strong your passwords are if your cable provider or your utility company is willing to give your information out over the phone to a stranger.”

There is a genuine tradeoff between safety and convenience when it comes to customer service. Big companies typically err on the side of convenience. That’s why Amazon got in trouble back in January. Most support requests are legitimate, so companies practice lax security and let the malicious needles in the haystack slip through their fingers (to mix metaphors egregiously). If a business like Amazon enacts rigorous security protocols and makes employees stick to them, the average user with a real question is annoyed. Millions of average users’ mild discomfort outweighs a handful of catastrophes.

Artwork by Michael Mandiberg.

In semi-related commentary, Linux security developer Matthew Garrett said on Twitter (regarding the Apple-versus-FBI tussle):

“The assumption must always be that if it’s technically possible for a company to be compelled to betray you, it’ll happen. No matter how trustworthy the company [seems] at present. No matter how good their PR. If the law ever changes, they’ll leak your secrets. It’s important that we fight for laws that respect privacy, and it’s important that we design hardware on the assumption we won’t always win”

Although Garrett is commenting on a different issue within a different context, I think these two events are linked. The basic idea is that when you trust third parties to protect your privacy (including medical data and financial access), you should resign yourself to being pwned eventually. Perhaps with the sanction of your government.