Saturday, October 12, 2013

The Ethics of Robot Sex



Human beings have long performed sexual acts with artifacts. Ancient religious rituals oftentimes involved the performance of sexual acts with statues, and down through the ages a vast array of devices for sexual stimulation and gratification have been created. Little wonder, then, that a perennial goal among roboticists and AI experts has been the creation of sex robots (“sexbots”): robots from whom we can receive sexual gratification, and with whom we may even be able to achieve an emotional connection.

But is this something we should welcome? Or is it deeply worrying? David Levy has been at the forefront of research on this question, most notably in his book Love and Sex with Robots: The Evolution of Human-Robot Relationships. In this post, I want to take a look at some of the arguments he makes. Although I will use his book as a reference point, I will structure my discussion more around his article “The Ethics of Robot Prostitutes”, which appears in the edited collection Robot Ethics.

Levy makes two arguments in this piece. The first is a predictive argument, which holds that people will, as a matter of fact, have sex with robots in the future. The second is an ethical argument, which holds that there is nothing deeply ethically objectionable about robot sex.

For what it’s worth, I tend to agree with Levy on the first argument, if only because people have already demonstrated their willingness to have sex with artifacts. On the second issue, I think Levy’s discussion is not as careful as it could be and I hope to rectify that. I think one needs to distinguish between the intrinsic and extrinsic ethical aspects of robot sex and treat them separately. This is because while there is probably nothing intrinsically wrong with having sex with robots, it may be extrinsically problematic. That said, I’m fairly agnostic about this issue because it requires us to predict the likely effects of sexbot usage and I’m not sure that we are well-placed to do that.

I’ll divide my discussion into four parts. First, I’ll look at Levy’s predictive argument, suggesting one plausible criticism of it. Second, I’ll look at the intrinsic ethics of robot sex. Third, I’ll look at the extrinsic ethics of robot sex. Finally, and largely in jest, I’ll look at the most plausible ethical objection to robot sex: the Futurama argument.


1. Will people have sex with robots?
Levy’s predictive argument is based on two main limbs. The first limb aims to show that the primary motivations for having sex with other human beings will transfer over to robots. The second limb aims to show, more narrowly, that the primary motivations for having sex with prostitutes will transfer over to robots.

You may well wonder why the second limb is needed. Surely, if the reasons for having sex with humans will transfer over to robots, we already have a decent predictive argument? But I think it is relatively easy to see the problem. Even if it is true that the primary motivations for having sex with other human beings will transfer over to robots, there is still the question of why people would opt for robots in preference to (or in addition to) ordinary human partners. That question can partly be answered by looking at why people opt for prostitutes in preference to (or in addition to) “ordinary” human partners.

So let’s look at the first limb of the argument: why do people have sex with other humans? Levy discusses this issue at length in his book (but not in the article). He mentions three studies: the first from Barbara Leigh, the second from Valerie Hoffman and Ralph Bolton, and the third from Deborah Davis and colleagues. For some reason he doesn’t provide references for these studies; luckily, I was able to track them down and have linked to them in the text.

In any event, there is nothing especially dramatic about the findings from these studies. Each of them found that pleasure and emotional connectivity were the primary motivations for having sex (and, interestingly, that procreation ranked pretty low). Some of the other stated reasons could be significant in different contexts, but those two are the relevant ones for now because Levy’s argument, as you might guess, is simply that robot sex could satisfy both of them. Obviously enough, having sex with a robot could be pleasurable and it could help people obtain gratification and sexual release. Emotional connectivity is a trickier prospect, but Levy claims that sophisticated robots could respond with at least the facade of emotionality, and, furthermore, that people do become emotionally attached to robots (the first half of his book is pretty good on this point).

That brings us to the second limb of the argument: why would people have sex with robots in preference to (or in addition to) having sex with human beings? The evidence on why people have sex with prostitutes is thought to be revealing here. There is quite a bit of evidence on this topic (though most of it focuses on male-female interactions), so I’ll simply list the relevant sources before going into the actual reasons: McKeganey and Barnard (1996); Xantidis and McCabe (2000); Monto (2001); Bernstein (2007); and Sanders (2008).

Moving on to the actual reasons, Levy breaks these down into three main categories (with a fourth hinted at in his book): (i) the myth of mutuality - i.e. people have sex with prostitutes in order to secure some kind of emotional connection; (ii) variety - i.e. people have sex with prostitutes because they are willing to engage in sex acts that “ordinary” human partners are not (though this can change as sexual norms among the general population change); (iii) lack of complications and constraints; and (iv) lack of sexual success in “normal” life.

Once again, Levy’s claim is that all four reasons could be satisfied by sexbots. Indeed, one thing that sexbots may be better at than human prostitutes is cultivating the “fake” emotional connection. After all, robots could be programmed to dote upon, or even fall in love with, their owners, thus creating a connection that is more substantive than that found in typical prostitute-client relationships. Contrariwise, they could be programmed not to, if that is what the owner would prefer (lack of complications and constraints). Furthermore, variety and a willingness to have sex with those who are otherwise sexually unsuccessful should not be a problem for robots.

That gives us Levy’s predictive argument:


  • (1) If the motivations for having sex with ordinary human partners and prostitutes would carry over to sexbots, then people are highly likely to have sex with robots.
  • (2) The motivations for having sex with ordinary human partners and prostitutes would carry over to sexbots.
  • (3) Therefore, people are highly likely to have sex with robots.


As I said in the introduction, I’m inclined to agree with this predictive argument, partly for the reasons given by Levy but also partly because people have already demonstrated a willingness to have sex with artifacts. Still, there is at least one countervailing consideration to bear in mind: the “uncanny valley” effect. First described by Masahiro Mori, the “uncanny valley” is the name given to the apparent revulsion people experience when they see an object that is almost, but not quite, human-like in appearance and function. A now-classic illustration of the phenomenon comes from Robert Zemeckis’s 2004 film The Polar Express. In that film, human actors were used to create computer-animated characters that were very close to being perfectly human-like in appearance. The result was that many viewers were uneasy about, and even slightly horrified by, the characters on screen. Having seen bits of the film myself (but never the whole thing), I can report a similar feeling of eeriness. (Note: I took this objection from Blay Whitby’s article, which also appears in the Robot Ethics collection.)

If the uncanny valley is a robust phenomenon — and it’s not at all clear to me that it is — then it might block the path to robot sex. The claim would be that as robots get more and more human-like in appearance and function, a point will be reached at which humans begin to experience severe revulsion toward them, which should make people less willing to have sex with such robots. It has to be noted, however, that the uncanny valley is just that: a dip in likeability before more complete human-likeness is reached. It may be little more than a speedbump on the road to rampant robot sex.

One final point that is worth mentioning is that one of the factors that might hasten the development and use of sexbots is a prohibitive attitude toward human prostitution. Levy gives the example of South Korean hotels that rent out “love dolls” to their patrons, largely because human sex work is prohibited in that country. This suggests that robot sex might be viewed as a viable alternative to human sex work in countries that prohibit human prostitution. That said, I’m not sure how credible this is given that human prostitution thrives in many countries with prohibitive laws.


2. Intrinsic Ethical Objections to Robot Sex
If we accept that increasingly sophisticated sexbots will be developed, and that people are likely to avail of them, then the question becomes an ethical one: is this a trend that should be welcomed, prevented, or treated with indifference? Habitually, I tend toward indifference on questions like this, but let’s see if there are any objections to robot sex that ought to shake me out of that indifference.

Let’s start with intrinsic objections to robot sex. These are objections claiming that there is something inherently wrong with having sex with a human artifact, no matter how sophisticated it may be. I find this kind of objection hard to credit. Unless one adopts an extreme, Catholic natural-law-style view of permissible sex — one in which only procreative or procreative-type sexual acts are permissible — there would seem to be little to object to in the notion of robot sex.

Still, there are better and worse arguments that can be made on this issue. In my opinion, Levy makes a bad one (in his article) by drawing an analogy between vibrators and sexbots (p. 227). Roughly, his argument is:


  • (4) Using vibrators to achieve orgasm is permissible.
  • (5) Using a sexbot to achieve orgasm is similar in all important respects to using a vibrator.
  • (6) Therefore, using a sexbot to achieve orgasm is permissible.


The problem here is that premise (5) may or may not be true. It depends on the degree of sophistication of the sexbot. If the sexbot is essentially a lifeless artifact with no autonomy, then the argument is reasonably compelling. This may well be where we are with contemporary sex dolls and the like. But if the sexbot has a sophisticated AI, and some semblance of autonomy and personhood, the situation is rather different. In that case, the conditions under which sex with robots is permissible will tend to become equivalent to the conditions under which sex with ordinary human beings is permissible.

Levy seems to acknowledge this point later in his article when he discusses the possibility of artificially conscious robots. But I would say that any discussion of “consciousness” is a distraction here. Whether robots should be treated with the same ethical respect as humans does not depend on whether or not they are conscious (after all, how could we even know this?). Rather, it depends on whether or not they display the external evidential marks of personhood. If they do, we should err on the side of caution and treat them as equivalent to human beings (or so I believe).

One slight hiccup to bear in mind is Petersen’s “Robot Slave” argument, which I have covered on the blog before. If Petersen is right then it is permissible to create robot slaves and, naturally, this would cover robot sex slaves. I pass no judgment on the success of that argument here though.


3. Extrinsic Objections to Robot Sex
If there is nothing intrinsically wrong with having sex with robots, then we must consider the possibility that there is something extrinsically wrong about it. Levy mentions three possible extrinsic concerns. Let’s go through them briefly.

The first extrinsic concern that Levy mentions has to do with the stigma that might be experienced by the users of sexbots. Of course, this isn’t really an ethical concern. Whether or not users deserve to be stigmatised is something that is driven by ethical conclusions, not something that itself drives us toward ethical conclusions. At best, the likelihood of stigma makes engaging in robot sex prudentially unwise, which it may well be, but it doesn’t render it ethically impermissible.

The second extrinsic concern has to do with how ordinary human partners of sexbot users might feel. Will the use of sexbots be viewed as akin to infidelity? Will it harm ordinary human relationships? There are several things to be said on this matter. First, this will only be relevant to a certain sub-group of sexbot users, i.e. those with ordinary human partners. Second, the attitude toward sexbot use is likely to vary considerably across relationships. As Levy points out, some partners might feel threatened by it, but others may embrace it as it could free them up from sexual demands, or could be used to “spice things up”. In short, whether or not it is wrong to use a sexbot will have to be determined within the particular context of an actual relationship, and not in the abstract.

The third extrinsic concern has to do with the effect of sexbots on human sex workers. Is it possible that human sex workers will be rendered unemployed by the ready-availability of sexbots? Would this be a good or bad thing? I find this to be the most interesting extrinsic ethical concern, so much so that I’ve decided to write a paper on technological unemployment and sex work. Levy says relatively little about it in his article. He accepts that it might happen, but says that it might be good (if we think human sex work is morally objectionable), or bad (since human sex workers are a vulnerable sector of the population who might be rendered more vulnerable by technological unemployment). My own feeling is that sex work may be less vulnerable to technological unemployment than other industries, not least because technological unemployment in other industries may drive people into sex work. That could definitely have significant social and ethical implications, ones which I hope to explore in the paper I am writing.

So where does that leave us? Well, I think it leaves us with a set of objections to robot sex that are not particularly persuasive, partly because they are dependent on contingencies that cannot be evaluated in the abstract, and partly because they rely on difficult-to-make predictive arguments. (Edit, added on the 12/10/13: there are other extrinsic concerns one could raise. For example, one could claim that people are likely to be violent or extremely perverse in their relations with sexbots, and this might encourage them to be violent and perverse with human beings. But, again, this relies on dubious assumptions about how people are likely to behave with sexbots, and, in any event, could cut both ways: maybe having non-human robots upon whom we can act out our disturbing sexual fantasies will make things better for humans).

There is one other objection we have yet to consider…


4. The Most Plausible Argument Against Sex with Robots?
Futurama fans will be aware of the ethical perils of robot sex. In the third-season episode “I Dated a Robot”, Fry falls in love with a robot bearing the holographic personality and image of Lucy Liu. His colleagues and friends warn him that he is doing a terrible thing: one shouldn’t have an emotional and sexual relationship with a robot. But Fry isn’t aware of the perils, having only recently been transported from the 20th century to the 30th.

To educate him about the problem, they play him an old public health film: “Don’t Date Robots”. As the film explains, everything that is good or worthwhile (and some of what is bad and not so worthwhile) about civilisation (art, music, science, technology, sport, etc.) is actually driven by the motivation to find a willing (human) romantic and sexual partner. If you take away that motivation, civilisation collapses. Unfortunately, this is exactly what the ready-availability of robot partners does. If all one needs to do to find a willing partner is to download one from a database, and include in that download all of one’s preferred characteristics, then all the striving and yearning that made civilisation possible will disappear.

We can call this the Futurama argument against sex with robots:


  • (7) If we remove the motivation to find a willing (human) partner, civilisation will collapse.
  • (8) Engaging with a robot sexual partner will remove that motivation.
  • (9) Therefore, if we start having sex with robots, civilisation will collapse.


The Futurama Argument is undoubtedly silly and hyperbolic. But part of me thinks it might be onto something…

