Two empty chairs. Image via R. Crap Mariner.

Let’s talk about therapy bots. I don’t want to list every therapy bot that’s ever existed — and there are a few — so I’ll just trust you to Google “therapy bots” if you’re looking for a survey of the efforts so far. Instead I want to discuss the next-gen tech. There are ethical quandaries.

If (when) effective therapy bots come onto the market, it will be a miracle. Note the word “effective”. Maybe it’ll be 3D facial models in VR, and machine learning for the backend, but it might be some manifestation I can’t come up with. Doesn’t really matter.

They have to actually help people deal with their angst and self-loathing and grief and resentment. Any therapy bots that are able to do that will do a tremendous amount of good. Not because I think they’ll be more skilled than human therapists — who knows — but because they’ll be more broadly available.

Software is an order of magnitude cheaper than human employees, so currently underserved demographics may have greater access to professional mental healthcare than they ever have before. Obviously the situation for rich people will still be better, but it’s preferable to be a poor person with a smartphone in a world where rich people have laptops than it is to be a poor person without a smartphone in a world where no one has a computer of any size.

Here’s the thing. Consider the data-retention policies of the companies that own the therapy bots. Of course all the processing power and raw data will live in the cloud. Will access to that information be governed by the same strict nondisclosure laws as human therapists? To what extent will HIPAA and equivalent non-USA privacy requirements apply?

Now, I don’t know about you, but if my current Homo sapiens therapist asked if she could record audio of our sessions, I would say no. I’m usually pretty blasé about privacy, and I’m somewhat open about being mentally ill, but the actual content of my conversations with my therapist is very serious to me. I trust her, but I don’t trust technology. All kinds of companies get breached.

Information on anyone else’s computer — that includes the cloud, which is really just a rented datacenter somewhere — is information that you don’t control, and information that you don’t control has a way of going places you don’t expect it to.

Here’s something I guarantee would happen: An employee at a therapy bot company has a spouse who uses the service. That employee is abusive. They access their spouse’s session data. What happens next? Who is held responsible?

I’m not saying that therapy bots are an inherently bad idea, or that the inevitable harm to individuals would outweigh the benefits to lots of other individuals. I’m saying that we have a hard enough time with sensitive data as it is. And I believe that collateral damage is a bug, not a feature.

Great comments on /r/DarkFuturology.