
This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Therapy Bots and Nondisclosure Agreements

Two empty chairs. Image via R. Crap Mariner.

Let’s talk about therapy bots. I don’t want to list every therapy bot that’s ever existed — and there are a few — so I’ll just trust you to Google “therapy bots” if you’re looking for a survey of the efforts so far. Instead I want to discuss the next-gen tech. There are ethical quandaries.

If (when) effective therapy bots come onto the market, it will be a miracle. Note the word “effective”. Maybe it’ll be 3D facial models in VR with machine learning on the backend, or maybe some form I can’t anticipate. Doesn’t really matter.

The bots have to actually help people deal with their angst and self-loathing and grief and resentment, but any therapy bots that manage that will do a tremendous amount of good. Not because I think they’ll be more skilled than human therapists — who knows — but because they’ll be more broadly available.

Software is an order of magnitude cheaper than human employees, so currently underserved demographics may have greater access to professional mental healthcare than they ever have before. Obviously the situation for rich people will still be better, but it’s preferable to be a poor person with a smartphone in a world where rich people have laptops than it is to be a poor person without a smartphone in a world where no one has a computer of any size.

Here’s the thing. Consider the data-retention policies of the companies that own the therapy bots. Of course all the processing power and raw data will live in the cloud. Will access to that information be governed by the same strict nondisclosure laws as human therapists? To what extent will HIPAA and equivalent non-USA privacy requirements apply?

Now, I don’t know about you, but if my current Homo sapiens therapist asked if she could record audio of our sessions, I would say no. I’m usually pretty blasé about privacy, and I’m somewhat open about being mentally ill, but the actual content of my conversations with my therapist is very serious to me. I trust her, but I don’t trust technology. All kinds of companies get breached.

Information on anyone else’s computer — that includes the cloud, which is really just a rented datacenter somewhere — is information that you don’t control, and information that you don’t control has a way of going places you don’t expect it to.

Here’s something I guarantee would happen: An employee at a therapy bot company has a spouse who uses the service. That employee is abusive. They access their spouse’s session data. What happens next? Who is held responsible?

I’m not saying that therapy bots are an inherently bad idea, or that the inevitable harm to individuals would outweigh the benefits to lots of other individuals. I’m saying that we have a hard enough time with sensitive data as it is. And I believe that collateral damage is a bug, not a feature.

Great comments on /r/DarkFuturology.

Bot-Writer for Hire

Courtney Stanton is one of the cofounders of Feel Train, a small bespoke creative studio. (Longtime readers may remember that I interviewed the other half of Feel Train, Darius Kazemi, back in December.) This week Stanton and I spoke on the phone for about an hour, discussing their background and current work.

A brief introduction to Feel Train: their clients have marketing goals, but the kind of advertising that Feel Train facilitates is much more participatory and experimental than, say, a billboard or a branded hashtag. Feel Train publishes projects like a fortune-telling bot and a book-concept generator. The company is also a worker-owned co-op, with bylaws stating that it can never expand beyond eight members.

So why is this cyberpunk? Feel Train’s work is actually kind of the optimistic flip-side of cyberpunk — they represent a NewCo world in which small-scale entrepreneurs can leverage technology to make a living while playing to their strengths and sticking to their principles.

For example, Feel Train turned down a client because the client’s company policy required background checks. Stanton explained, “We believe everyone has the right to work” and background checks serve as an impediment to that. “The thing I can change is the place I work at, which is what I have done and what Darius has done.”

Photo by Robin Zebrowski.

So, on a conceptual level, what ties together Feel Train’s work? Stanton used an interesting metaphor: “I like creating sort of temporary spaces where people can explore questions or role-play.” They added, “The internet is fantastic for that.”

So how does that actually work? “When I talk to clients I talk about the ‘velvet rope’ strategy. You set up a little space and let people invite themselves in.” This differs from the way “traditional marketing and advertising tends to bombard you”. By contrast, Feel Train’s projects are supposed to be “something that’s actually genuinely interesting to individuals, and… cool.”

Stanton explained, “The response I’m looking for is much lower-key. The ‘hmm’ response as opposed to like, ‘Oh, I can’t wait to retweet that Cheetos tweet.'” They told me, “I’m always thinking about the fifty-fifty person who’s a little curious” rather than rabidly fanatical.

Feel Train’s bots are bounded experiences. “Some of it is time-based — it’ll only [tweet] like once a day. You’re only getting so much interaction with it. That’s […] the little pocket of play, your window of participation for the day.” Of course, “A lot of our design is based on not violating Twitter’s guidelines, because you don’t want to get the content shut down.”

The bots are also closed systems in terms of what they say. “They don’t go off script; they don’t break character.” Stanton explained, “They have guidelines in terms of what they’re going to talk about. They’re never going to leave the narrative area.”

This is the result of painstaking research, writing, and testing. “Especially in the early design phases, I take in a lot of information about the world, like the narrative world” of the project. Again, “the bot’s never going to deviate from what we put in our spreadsheet.”

How does it feel to put together a bot corpus? “Really different [from other kinds of writing]. It took me back to high school, when you’re doing sentence-diagramming.” The process isn’t linear. Rather, “You’re mentally composing a hundred different sentences,” questioning, “Would this word sound good next to all of these ones?”

And then there’s QA. Does the output fit expectations and meet Feel Train’s standards? Stanton told me they generate hundreds of samples and read through each and every one, looking for patterns, noting which templates feel repetitive or awkward.

“It’s still the process of rough draft, and then you do polish passes and polish passes. It’s just that instead of editing a normal manuscript, it’s slightly more disjointed.” Stanton compared testing and editing a bot corpus to grinding in a video game — “You do the same level over and over again.”
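As a rough illustration of the process Stanton describes (a spreadsheet of templates and word lists, expanded into hundreds of samples for human review), here is a minimal sketch in Python. The templates and vocabulary are invented for this example; Feel Train’s real corpora are far larger and more carefully written.

```python
import random

# Hypothetical stand-ins for the project spreadsheet: sentence templates
# plus the word lists that fill their slots.
TEMPLATES = [
    "The {adj} {noun} awaits your {offering}.",
    "Beware: a {adj} {noun} guards the {offering}.",
]
WORDS = {
    "adj": ["gleaming", "hollow", "restless", "patient"],
    "noun": ["oracle", "terminal", "garden", "archive"],
    "offering": ["question", "password", "tribute"],
}

def generate(rng: random.Random) -> str:
    """Fill one randomly chosen template with randomly chosen words."""
    template = rng.choice(TEMPLATES)
    return template.format(**{slot: rng.choice(options) for slot, options in WORDS.items()})

# The QA pass: generate hundreds of samples, then read every single one,
# looking for templates that feel repetitive or awkward.
rng = random.Random(0)
samples = [generate(rng) for _ in range(300)]
print("\n".join(samples[:5]))
```

Seeding the generator makes the sample run reproducible, so the same batch can be re-read after the spreadsheet is edited.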

At the end you have something like @staywokebot, a Feel Train collaboration with activist DeRay Mckesson that tweets inspirational messages to its followers, grounded in Black American history.

Remember what Courtney Stanton does the next time someone tries to convince you that a cogent bot runs on magical AI rather than crafted human planning 😉

Follow @feeltraincoop and @q0rtz to keep up with the company and Stanton themself.

Slow Down & Don’t Confiscate My Graphical User Interface

Exploratory bot. Photo by Takuya Oikawa.

Here’s a fun headline from The Register: “‘Devastating’ bug pops secure doors at airports, hospitals”. I’m sure we’ve all read similar reports before! Enjoy this snippet of the story, for flavor…

“Criminals could waltz into secure zones in airports and government facilities by hacking and jamming open doors from remote computers over the Internet, DVLabs researcher Ricky Lawshae says. […] Lawshae says the attacks, which can open every door in a building, are possible because of a command injection vulnerability in a LED blinking lights service.”

Wait, what? Why is an “LED blinking lights service” hackable? Allow me to note, very unoriginally, that the Internet of Things is dumb. Not every tool or appliance needs to have wifi access jammed into its design specs. The much-mocked “smart juice” startup is the pinnacle of this awful trend.

can u not chloe

I have similar feelings about the bot services craze. People seem to be jumping on this technology without stopping to ponder how it might turn out. When your next venture capital round depends on glossing over potential problems, it’s easy to assume that the impact of your harebrained scheme will be beneficial.

“Conversational commerce” isn’t quite as problematic as the Internet of Things, because it doesn’t pose a security threat (at least not off the top of my head). But people are still building things without considering whether their chosen medium fits the stated purpose of the tool. The last thing I want from an app is a replica of the phone call, this time rendered in text.

I demand clickable buttons! Give me a GUI or give me death! On the other hand, maybe I’m a dirty Luddite. Perhaps I should resign myself to relearning how to interact with computers every couple of years. I’m not against experimentation — what futurist could be? — but my mood is decidedly curmudgeonly tonight. Also, fuck Snapchat.

Tay, Speculative Comics, & Dentistry

Girly teenage robots? Photo by elkbuntu.

There are three things I want to talk about today:

  1. Microsoft’s inadvertently racist Twitter bot, @TayandYou.
  2. A comic that a-u-t-o-x is releasing soon.
  3. My visit to the dentist today (I swear I have a reason to bring it up).

Unless you’ve been off the internet for a few days, you ran into Tay, a Twitter bot that Microsoft released as PR (?!?!) for their in-house machine learning capabilities. This was an utterly predictable catastrophe. Tay processed the text people tweeted at her and mimicked it back. Trolls quickly figured out the mechanism and made her say a bunch of neo-Nazi nonsense.

“What Tay reminds us: AI may or may not be scary. Humans who train AI are terrifying. Or, humans in general are terrifying.” — Hugh McGuire

Usually I try to stay away from posting a bunch of links, but other people have already said all the smart things: articles overviewing the facts, and wisdom from people who have dealt with systems like this before.

And then Allison Parrish commented in the #botALLY Slack group:

“re: tay, yesterday before any of the really bad stuff went down, I quote-retweeted something that mentioned the account and then the account @-replied me… so I blocked it, thinking how annoying it was that this bot that has Twitter verified status isn’t complying with the letter or the spirit of the API ToS

like, many people must have been involved in decisions to get this bot live, on the part of the group at microsoft AND at twitter

and the fact that no one involved apparently thought of these obvious ways in which it would be a disruptive negative experience for people just… seems unfathomable

we have YEARS of precedents for applications of the Twitter API like this and even the greenest botmaker among us has a better grasp of the issues at stake than the people involved in this project”

So, that’s a whole big thing. In other news, a-u-t-o-x is releasing a comic, which will be available on his website. He told me: “it is titled WORLD L.S.D and ties in Cyberpunk aesthetics & Science Fiction themes. […] the story is simultaneously set in a futuristic city ‘Neo-F’ and outback Australia, as Neo-F is prone to jump through time sporadically.” Here is the title image:

And lastly, I went to the dentist today. (Shocker: I’m apparently brushing and flossing wrong! What a new thing to hear from a dental hygienist!) But seriously, it made me further contemplate what I said yesterday: “The future is beyond bodies. A few decades from now — and during some parts of the present — we will not be confined to flesh, nor even to brains.”

I was definitely exaggerating. It’s going to take a helluva lot longer than that. My gums are receding (see: brushing wrong, also possibly genetics) and that is a thing that I have to worry about. We live in an absurd world where the random flesh accident that you’re born into has a huge effect on your quality of life. I admit it, but I’m not pleased.

Lich’s Maze & Computer Creativity

Tyler Callich (also known as @lichlike) is a storyteller who makes Twitter bots, among other narrative vehicles. “Lich” is the last syllable of her last name, and it’s also a type of creature that exists between life and death. Wikipedia edifies us:

“Unlike zombies, which are often depicted as mindless, part of a hivemind or under the control of another, a lich retains revenant-like independent thought and is usually at least as intelligent as it was prior to its transformation. In some works of fiction, liches can be distinguished from other undead by their phylactery, an item of the lich’s choosing into which they imbue their soul, giving them immortality until the phylactery is destroyed.”

Tyler’s symbol — her conceptual avatar, if you will — is the lich. When I spoke to her on the phone, she reflected, “I like the concept of this liminal half-dead, half-alive, potent-but-still-waning [being].” A lich is “like a ghost, but not quite” — essence externalized. “It’s one way I conceive of identity, also. It’s in this removed far-outside-of-me place, like a phylactery.” Somewhat akin to @lichphylactery, which shakes up Tyler’s words and spills them back in new arrangements.

That particular creation is Tyler’s own phylactery, a Markov-based ebooks bot with a straightforward name. It says things like, “One and all bring its water which they observe are the 3 styles we’re featuring?” Another recent comment: “Atlnaba a All comprehended within the form of these systems upon the doctrines of the informers were led off s訖”

Tyler explained, “With Markov chains, you’re taking some text and using its grammar to make something new, but still sensible or almost sensible.” She noted that “unintelligible nonsense is novel for a little while”, but it gets boring. Bots like @lichphylactery — or Olivia Taters — are best when they’re close to passing the Turing test, but still not quite there.
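For the curious, a word-level Markov chain like the one behind @lichphylactery can be sketched in a few lines of Python. This is a generic illustration, not Tyler’s code, and the tiny corpus is invented.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return dict(chain)

def babble(chain, length, rng):
    """Walk the chain from a random starting state, reusing the source
    text's local word order to produce almost-sensible new sentences."""
    order = len(next(iter(chain)))
    state = rng.choice(list(chain))
    out = list(state)
    while len(out) < length:
        followers = chain.get(state)
        if not followers:  # dead end: this state only occurs at the text's end
            break
        out.append(rng.choice(followers))
        state = tuple(out[-order:])
    return " ".join(out)

corpus = "the lich waits in the maze and the maze waits inside the lich"
chain = build_chain(corpus)
print(babble(chain, 10, random.Random(7)))
```

A higher `order` hews closer to the source text; `order=1` gives the most scrambled, almost-sensible output.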

My favorite of Tyler’s projects is Lich’s Maze. Here is a recent @lichmaze micro-story:

Lich's Maze

The bones of Lich’s Maze are a loose mythological system that Tyler put together. She fed it a corpus of text, some that she wrote and some that she found. Then she released @lichmaze to wander through people’s Twitter feeds, sending out cryptic moments from an arcane techno-magic game-world.

Tyler told me, “Symbolic thinking is a way for me to just let my mind wander through association.” She “can get a computer to make random associations for me” which “augments that free-thinking / brainstorming”. I asked if she uses @lichmaze’s output as writing prompts, and surprisingly the answer was no. Tyler answered in a thoughtful voice, “It never really occurred to me.”

Beau Gunderson said of Twitter bots, “they’re creators but i don’t put them on the same level as human creators. […] i’m giving the computer the ability to express a parameter set that i’ve laid out for it that includes a ton of randomness.” On the other hand, Tyler doesn’t feel a strong ownership claim. “I take a big backseat to that.” She said, “I think of [the bot] as its own entity after a certain point. It’s kind of independent from me.” She even wishes “that it could change the password on itself and go off on its own”, in a direction unspecified. Tyler’s bots are probably best compared to a growing tangle of plants. Tyler told me, “I think randomness is natural [and] finding that grit or that little kink in digital art is something that I connect closely to organic structures.”

Another project of Tyler’s is Restroom Genderator, which comes up with “extant (and not so extant) genders”. This is a perfect example of “taking a concept and pushing it toward its eventual ruin”, as Tyler put it. A bot like Restroom Genderator is tireless and thorough — eventually “you get a rich contour of all of the iterations of something”. It was originally based on a joke with a friend. “The initial concept — you come up with something and you’re like, ‘Oh, this is funny!’ It can become kind of mundane after a while to write out five thousand combinations and figure out what the best one would be.” So you construct a bot to do it for you.

Asked to define her practice, Tyler told me, “I would consider myself a writer, but it’s hard because I don’t write, like, novels usually.” She continued, “If I was working in a normal platform, I would consider myself a poet, but that seems kind of lofty. I consider myself a tinkerer more than anything else… like a word tinkerer.”

Bots Say The Darnedest Things

I talked on the phone with Darius Kazemi, best-known member of the #botALLY community and whimsical internet artist. First things first — is it pronounced Dah-rius or Day-rius? The latter, he said.

This is how reality is created, by asking questions and assimilating the answers. We participate in making meaning with each other. It’s unavoidable — you can’t opt out of being a cultural force without opting out altogether; relinquishing existence. You can, however, pursue the opposite aim. Amplify yourself.

All this from name pronunciation? Am I getting carried away?

The latest nonsensical Venn diagram by @AutoCharts, one of Darius’ projects.

Darius used to make a living as a programmer. For years he worked in video games: “A lot of the core skills I learned making video games, I still apply to the stuff that I make today.” He wrote code to generate terrain, maps, and whole worlds. Now his creative practice is also his day job. Darius co-founded the technology collective Feel Train with Courtney Stanton. You can commission web art from Feel Train — for instance, they just finished developing a Twitter bot that will be part of a marketing campaign this spring. Of course, the members of Feel Train also continue to express their own aesthetic urges.

I asked Darius to identify his cultural antecedents. He cited a variety of sources: Dada, the Situationists of the 1960s, William Burroughs’ cut-up poetry, and John Cage. “Name off your standard list of avant-garde early-mid-twentieth-century artists,” he joked. Then Darius mentioned Roman Verostko, who has been making digital art for almost fifty years. Verostko wrote “THE ALGORISTS”, an essay that functions as both manifesto and history. He describes algorists — those who work with algorithms — as “artists who undertook to write instructions for executing our art”, usually via computer. Verostko states, “Clearly programming and mathematics do not create art. Programming is a tool that serves the vision and passion of the artist who creates the procedure.”

Beau Gunderson told me something similar: “as creators of algorithms we need to think about them as human creations and be aware that human assumptions are baked in”. I’ve seen many algorists stress this principle, that computers can’t truly create. Programs only encompass process, not genesis.

Darius told me about a book that profoundly affected him: Alien Phenomenology by Ian Bogost. Here Darius was introduced to the possibility of “building objects that do philosophical work instead of writing philosophy”, as he put it. The concepts in Alien Phenomenology acted as “permission to do something that doesn’t even have a name”. Soon Darius began spinning up the bots that comprise his current “stable”, starting with Metaphor-a-Minute.

Philosophical underpinnings aside, Darius doesn’t regard his art as a heavy-handed intellectual exercise. His bots are conceived like this: “I think, ‘Blah is funny.’” Then he considers blah further and concludes, “I could make that. I should make that!” He says that bot-making is “way different from a game, where you have to beg and convince people to engage with it”. The bots invite interaction and duly receive it.

I asked Darius about power. He said, “I think a lot about the rhetorical affordances of bots, and how bots allow you to say things that you wouldn’t otherwise.” A bot allows its creator to express messages indirectly, through a third party. Darius continued, “Bots can get away with saying things that normal people can’t. […] People are very forgiving of bots.” We treat them like children or pets. He added, “Bots say the darnedest things!”

“they don’t have intelligence but they are often surprising”

I suddenly became very interested in Twitter bots because of @FFD8FFDB. That interview led me to Beau Gunderson, an experienced bot-maker and general creator of both computer things and people things. In answer to email questions, he was more voluble than I expected — in a delightful way! — so this is a long one. I’m sure neither of us will be offended if you don’t have time to read it all now. I posted the Medium version first so you can save it for later using Instapaper or something equivalent!

Note: I did not edit Beau’s answers at all. He refers to most people — and bots — by Twitter username, which I think is very reasonable. It’s good to present people as they have presented themselves!

Sonya: How did you get into generative art? Why does it appeal to you, personally?

Beau: my first experience with generative art was LogoWriter in first or second grade. i don’t remember if it was a part of the curriculum or not but i spent a lot of time with it, figuring out the language and what it could be made to do. i feel like there was some randomness involved in that process because i didn’t have a full understanding of the language and so i would permute commands and values to see what would happen. i gave some of the sequences of commands that drew recognizable patterns names.

in terms of getting into twitter bots i’m certain that some of the first bots i came across were by thricedotted and were of the type that thrice has described as “automating jokes”; things like @portmanteau_bot and @badjokebot (which are both amazing). the first bot i made was @vaporfave, which i still consider unfinished but which is also still happily creating scenes in “vaporwave style”, really just a collection of things that i associated with the musical genre of vaporwave (which i do actually enjoy and listen to). it has made more than 10,000 of these little scenes.

a lot of the bots i had seen were text bots, and so i became very interested in making image bots as a way to do something different within the medium. i gave a talk at @tinysubversions’ bot summit 2014 about transformation bots (though mostly about image transformation bots): and my next bot was @plzrevisit, which was a kind of “glitch as a service” bot that relied on

as far as why generative art appeals to me, i think there are a few main reasons. i like the technical challenge of attempting to create a process that generates many instances of art. it would be one thing to programmatically create one or a hundred scenes for vaporwave, or to generate 10,000 and then pick the 10 best and call it done. but it feels like a different challenge to get to the point where i’m satisfied enough with the output of every run to give the bot the autonomy to publish them all. i also like to be surprised by them. and they feel like the right size for a lot of my ideas… they’re easy enough to knock out in a day if they’re simple enough. this is probably why i also haven’t gone back to a lot of the bots and improved them… they feel unfinished but “finished enough”.

in thinking about it some of the appeal is probably informed by my ADHD as well. i prefer smaller projects because they’re more manageable (and thus completable), and twitter bots provide a nearly infinitely scrolling feed of new art (and thus dopamine).

Sonya: How do you conceptualize your Twitter bots — are they projects, creatures, programs, or… ?

Beau: well, they’re certainly projects (i think of everything i do as a project; all my code lives in ~/p/ on my systems, where p stands for projects)

but the twitter bots i think of as something more… they don’t have intelligence but they are often surprising:

aside from tweeting “woah” at the bots i often will reply or quote and add my own commentary:

even though i know they don’t get anything from the exchange i still treat them as part of a conversation sometimes. they’re creators but i don’t put them on the same level as human creators.

Sonya: As a person who has created art projects that seem as though they are intelligent — I’m thinking of Autocomplete Rap — what are your thoughts on artificial intelligence? Do you think it will take the shape we’ve been expecting?

Beau: autocomplete rap was by @deathmtn, i’m only mentioned in the bio because he made use of the rap lyrics that i parsed from OHHLA and used in my bot @theseraps. but i think @theseraps does seem intelligent sometimes too. it pairs a line from a news source with a line from a hip-hop song and tries to ensure that they rhyme. when the subjects of both lines appear to match it feels like the bot might know what it’s doing.

my thoughts on artificial intelligence are fairly skeptical and i’m also not an expert in the field. i’ll say i don’t think it represents a threat to humanity. i don’t think of my work as relating to AI, it’s more about intelligence that only appears serendipitously.

Sonya: Imagine a scenario where Twitter consists of more bots than humans. Would you still participate?

Beau: yes. i talk to my own bots (and other bots) as it is. @godtributes sometimes responds to tweets with awful deities (like “MANSPLAINING FOR THE MANSPLAINING THRONE”) and i let it know that it messed up (wow i think i’ve tweeted at @godtributes more than any other bot).

i also have an idea i’d like to build that i’ve been thinking of as “bot streams” — basically bot-only twitter with less functionality and better support for posting images. and with a focus on bots using other bots work as input, or responding to it or critiquing it (an idea i believe @alicemazzy has written about).

Sonya: How does power play into generative art? When you give a computer program the ability to express itself — or at least to give that impression — what does it mean?

Beau: i try to be very aware of the power even my silly bots have. @theseraps uses lines from the news, which can contain violence, and the lines from the hip-hop corpus which can also contain violence. when paired they can be very poignant but it’s not something i want to create or make people look at. there are libraries to filter out potentially problematic words so i use one of those and also do some custom filtering.

this is one aspect of the #botALLY community i really like; there’s an explicit code of conduct and there’s general consensus about what comprises ethical or unethical behavior by bots. @tinysubversions has even done work to automate detecting transphobic jokes so that his bots don’t accidentally make them.

i wrote a bot called @___said that juxtaposes quotes from women with quotes from men from news stories as a foray into how bots can participate in a social justice context. just seeing what quotes are used makes me think about how sources are treated differently because of their gender. while i was making the bot i also saw how many fewer women than men were quoted (which prompted an idea for a second bot that would tweet daily statistics about the genders of quotes from major news outlets)

i think @swayandsea’s @swayandocean bot is very powerful — the bot reminds its followers to drink more water, take their meds, take a break, etc.

i also really like @lichlike’s @genderpronoun and @RestroomGender, bots that remind us to think outside of the gender binary.

there’s another aspect of power i think about, which brings me back to LogoWriter. LOGO was a fantastic introduction for me as a young person to the idea of programming; it gave me power over the computer. the idea of @lindenmoji was to bring that kind of drawing language and power to twitter, though the language the bot interprets is still much too hard to learn (i don’t think anyone but me has tweeted a from-scratch “program” to it yet)

your last question, about what it means to give a computer the ability to express itself… i don’t quite think of it that way. i’m giving the computer the ability to express a parameter set that i’ve laid out for it that includes a ton of randomness. it’s not entirely expressing me (@theseraps has tweeted things that i deleted because they were “too real”, for example), but it’s not expressing itself either. i wasn’t smart enough, or thorough enough, or didn’t spend the time to filter out every possible bad concept from the bot when i created the parameter space, and i also didn’t read every line in the hip-hop lyrics corpus. so some of the parameter space is unknown to me because i am too dumb and/or lazy… but that’s also where some of the surprise and serendipity comes from.

i think as creators of algorithms we need to think about them as human creations and be aware that human assumptions are baked in:

p.s.: based on the content of the newsletter so far i feel like @alicemazzy, @aparrish, @katierosepipkin, @thricedotted, and @lichlike would be great for you to talk to about bots or art or language or just in general 🙂

The Bot Tries Not to Surveil Humans

Is the computer watching you? It’s hard to tell. You can’t make up your mind. The computer’s attention skips from eye to eye. It has so many, and you wonder how it chooses where to settle its sight. What does the computer see, really? Numbers? In a way, humans see numbers too — light wavelengths can be measured, and that’s basically what eyes do — but we translate them into very different artifacts.

Of course, saying “the computer” is a simplification. It’s not a single entity, but rather a series of commands, of instructions. The program follows the rules that were set up for it, and it follows those rules through many different machines.

Minnesota programmer Derek Arnold made a bot called @FFD8FFDB that tweets color-processed stills from obscure security cameras. He summarized it beautifully in an essay on the project:

My script captures a frame, and gums it up with an Imagemagick script. I modify the colors in the YUV colorspace, crop out identifying information provided in the margins, and ensure the images are consistent. I use Wordnik to generate accompanying text and replace some characters with graphics characters. This is just for effect. @ffd8ffdb’s goal is superficial; I just like the way the tweets look. I enjoy that strangers find it unsettling, amusing, or even uninteresting. Like other Twitter bots, its unending tenacity is part of its charm. Many cameras go dark at night, most not having enough illumination to provide images. The bot doesn’t care and keeps stealing shots.
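For flavor, the recoloring step Derek describes can be imitated in miniature. The sketch below is not his ImageMagick script: it just converts one RGB pixel to YCbCr (the digital cousin of YUV) using the standard BT.601 formulas, nudges the chroma channels, and converts back. The shift amounts are arbitrary.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr: Y is luma, Cb/Cr carry the color."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

def recolor(pixel, cb_shift=40, cr_shift=-40):
    """Shift a pixel's chroma while leaving its luma alone, which tints
    the image without destroying the underlying scene."""
    y, cb, cr = rgb_to_ycbcr(*pixel)
    return ycbcr_to_rgb(y, cb + cb_shift, cr + cr_shift)

print(recolor((120, 80, 200)))  # a purple-ish pixel, tinted toward blue
```

Applied per-pixel across a whole frame (or, in Derek’s case, via an ImageMagick script), the same idea produces the washed-out, wrong-colored look of the bot’s stills.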

I wasn’t initially sure from this description, but Derek confirmed to me that @FFD8FFDB is fully automated. You could say the bot has a life of its own — albeit one completely defined by its human creator. And yet @FFD8FFDB keeps going regardless of whether Derek participates. As he said, “I had the initial control of it…”

The bot’s feed contains very few images of people. When I scroll through it, I feel ennui. The world looks abandoned. Derek told me that this sense of melancholy emerged unintentionally. He avoided using cameras that would show humans — even now, if a face appears too clearly, he’ll delete the post — because he didn’t want @FFD8FFDB to be invasive, exploitative, or titillating. Derek searched for image sources in “subsections of the business-class internet” specifically to avoid even the most banal intimacy.

He said as much in his essay, but I was surprised by how straightforward Derek’s artistic goals were. He told me, “[The bot] was a thing that I did that I wasn’t thinking too hard about at first.” He became interested in generative art, inspired by numerous other #botALLYs, and simply acted on his impulses. Scratching this itch involved significant effort: Derek estimated that he’s put in twenty-to-forty hours of work on @FFD8FFDB over the past year. “It took a lot of trial and error to get the look I wanted out of it.”

This project is clearly “of the internet”, as they say. On the phone, Derek and I both stumbled over the bot’s name. He told me, “If I ever thought I’d be saying this out loud, I might have named it differently.” Derek was initially surprised by @FFD8FFDB’s popularity — the account now has more followers than his personal Twitter. He added, “I follow the account myself — I don’t follow all of the stuff that I’ve made — and I like it because it surprises me on a consistent basis.”

My favorite discovery from this conversation is that other Twitter users respond to @FFD8FFDB — literally respond. Derek laughed, “People reply to the bot all the time, and it’s set up to send another image.” Those threads are ready to be explored.

© 2019 Exolymph. All rights reserved.
