Tuesday, November 24, 2015

Will technological unemployment lead to human disenhancement?




I have written a lot about the prospects of widespread technological unemployment; I have also written a lot about the ethics of human enhancement. Are the two topics connected? Yes. At least, that’s what Michele Loi tries to argue in his recent paper “Technological Unemployment and Human Disenhancement”. In this post, I want to analyse his argument and offer some mild criticisms. I do so in a constructive spirit since I share similar views.

As you might guess from the title, Loi’s claim is that the displacement of human workers by machines could lead to widespread human disenhancement. This is due to the differential impact of technological unemployment on the mass of human workers: some will find that technology has an enhancing effect, but most will not. This is supported largely, though not entirely, by the work of the economist David Autor (discussed previously on this blog). Autor is famous for describing the polarisation effect that technology is having on the workforce. In essence, Loi’s argument is that this polarisation effect is likely to result in disenhancement.

This might sound confusing right now but it should all make sense by the end of the post. I’ll break the discussion down into four main parts. I’ll start by looking more closely at the concept of ‘disenhancement’; then I’ll outline Loi’s main argument; then I’ll look at his defence of that argument; and then I will close by presenting some limited criticisms of that argument.


1. The Concept of Disenhancement
One of Loi’s goals is to demonstrate that there is an interesting connection between the economic debate about technological unemployment, and the bioethical debate about human enhancement. To prove this he needs to define his terms. Since ‘disenhancement’ is simply the inverse of ‘enhancement’, it makes sense to start with the latter. But anyone who has been paying attention to the enhancement debate for the past decade or so will know that clear definitions are an elusive quarry. There are so many sub-categories, sub-definitions and terminological kerfuffles that it is hard to keep up. Loi says there are two things we need to understand in order to follow his argument.

The first is the distinction between traditional ‘functional’ definitions of enhancement and more recent ‘welfarist’ definitions. The distinction can be characterised in the following manner (this does not follow exactly what is presented in Loi’s article):

Functional Enhancement: Person X is enhanced if their capacities and abilities are improved (or added to) relative to some species-level or population-level functional norm.

Welfarist Enhancement: Person X is enhanced if the likelihood of their life going well is improved, relative to some set of circumstances.

Functional enhancement is more in keeping with what people generally mean when they think about enhancement. It assumes that there is some normal level of human ability and that the enhancing effect of a technology or intervention must be judged relative to that norm. Welfarist enhancement is a slightly more recent development, associated largely with the work of Savulescu and Kahane. It focuses on the individual’s welfare, not the general norm, and holds that the enhancing effect of a technology or intervention must be judged relative to the individual’s life and circumstances. The functional definition is inherently moralised because of how it implicates a norm; the welfarist definition is more about prudential well-being and hence less moralised. Loi wants his argument to cover both types of enhancement.

The other thing Loi wants us to understand is the distinction between broad and narrow forms of enhancement. These concern the nature of the enhancing intervention and can be characterised as follows:

Narrow Enhancements: Biomedical technologies that directly target biological capacities and have an enhancing effect.
Broad Enhancements: Any intervention — including non-biomedical technology, education, political governance, etc. — with an enhancing effect.

Although much of the bioethical debate is concerned with narrow forms of enhancement, Loi thinks it is difficult to maintain a principled distinction between the two. Indeed, the fuzziness of the distinction is something that is routinely exploited by proponents of enhancement. They often try to argue for biomedical enhancements on the grounds that they are not substantially different from broader enhancements to which no one has an objection. John Harris is probably the quintessential exponent of such arguments.

If you like, you could categorise enhancement arguments using these four concepts — as in the diagram below. This might help you better understand the argument you are dealing with.




In fact, it might even help us to understand Loi’s argument. Loi’s argument uses a broad definition of enhancement, and focuses on both the functionalist and welfarist types. This means he is concerned with the right half of the matrix given above. Of course, technically, Loi is concerned with ‘disenhancement’ not ‘enhancement’, but that only means we have to invert the definitions in our minds. In essence, he is trying to argue that technological displacement in the workplace will have a disenhancing effect in both the welfarist and functionalist senses.


2. Loi’s Main Argument: Technological Unemployment could be Disenhancing
Loi doesn’t set out his main argument in an explicit, logical form in his article. As best I can tell, it works like this:

  • (1) A technology is disenhancing if it reduces or subtracts from ‘normal’ human capacities (functionalist sense), or if it reduces the chances of someone’s life going well, relative to some set of circumstances (welfarist sense). 

  • (2) Technological displacement at work gives rise to a polarisation effect: it pushes some people into highly-skilled, abstract forms of work, but pushes most people into lower-skilled, less rewarding forms of work. 

  • (3) If people are in lower-skilled or lower-paid forms of work, then they will suffer a reduction in their normal capacities and the chances of their lives going well will be reduced. 

  • (4) Therefore, technological displacement at work leads to disenhancement.

This is messy, but I think it does justice to what Loi is trying to say. The first premise appeals to the definition of enhancement that Loi favours in his article. It should not cause any great controversy. The second premise is the key empirical support for the argument and, as I mentioned in the introduction, is based largely on the work of the economist David Autor (though Loi mentions others, including Autor’s collaborators). The third premise is where Loi’s real contribution to the debate comes: it links technological unemployment to disenhancement. Loi doesn’t set it out in these explicit terms — and that may be one of his argument’s main flaws — but something like it does seem to be implied by what he says. The conclusion then follows (for those who care, the argument’s structure is roughly: A = B; C → D; D = B; therefore C → A).
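
To unpack that parenthetical shorthand, here is one way of regimenting the argument in simple propositional form. This is my own reconstruction, not Loi’s notation: let T stand for technological displacement at work, P for workforce polarisation, R for a reduction in capacities and/or in the chances of a life going well, and D for disenhancement.

```latex
\begin{align*}
(1)&\quad D \leftrightarrow R          && \text{(definition of disenhancement; premise 1)}\\
(2)&\quad T \rightarrow P              && \text{(the polarisation effect; premise 2)}\\
(3)&\quad P \rightarrow R              && \text{(the linking premise; premise 3)}\\
(4)&\quad \therefore\ T \rightarrow D  && \text{(from 1--3)}
\end{align*}
```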

I’ll go through Loi’s defence of premises (2) and (3) next, but before I do so it’s worth noting something about Loi’s overarching goal. As he himself makes clear, he is not trying to offer concrete predictions about the future. He acknowledges that there is a large degree of empirical uncertainty in his argument. Rather, his goal is simply to identify a plausible scenario and tease out its ethical and social implications. This means we are better armed when the technological changes come. This strikes a chord with me since I approach my own work in a similar spirit.


3. The Causes of Disenhancement
The second and third premises of Loi’s argument are all about the effect that technology has on the workplace. The traditional view — heavily influenced by the first wave of automation during the industrial age — is that machines replace human workers in the performance of arduous, routine physical tasks. Hence the takeover by machines of certain types of agricultural and manufacturing work. This, arguably, has had a long-term enhancing effect: it gave people the opportunity to work in more challenging, cognitive forms of employment.

This is no longer true. With the rise of computerisation and machine learning, technology is taking over more and more cognitive work. Indeed, there is an interesting paradox, first observed by Hans Moravec, to the effect that machines are best at taking over routine work. This includes certain constrained physical tasks but, more significantly, rule-based cognitive tasks (computing itself, for example, was a task once performed by human workers). Such tasks represented, for much of the 20th century, the core of the middle-skill, middle-income jobs that made a prosperous middle class possible. These jobs are now slowly being eroded by automation.

This is giving rise to a polarisation effect. It turns out that there are two types of work that are hard to automate. David Autor refers to these as “manual” and “abstract” work, respectively. Manual work is anything that requires fine sensorimotor skills and includes things like fruit-picking, food preparation, and cleaning. Though there are some initial forays into the automation of these tasks, it requires far more computing power to replace humans in these jobs than it does in the middle-skill cognitive jobs. Abstract work is anything that requires high levels of analytical ability and creative thinking. It includes things like entrepreneurialism, certain forms of managerial work, and high-level professional services. These are also difficult to automate, but benefit a lot from the automation of the middle-skill cognitive jobs (e.g. because abstract workers can now use computer technology to cheaply perform their own data analysis and processing).

With the automation of the middle-skill jobs, the workforce is being polarised into manual and abstract forms of work. The problem is that these forms of work are very different in character. Manual work is generally viewed as being low-skill and is often precarious and poorly paid. Manual workers tend to have little on-the-job autonomy and may find their work boring and unfulfilling. Abstract work is usually the opposite. The workforce is well-educated, well-paid and highly autonomous. Many abstract workers are deeply committed to and fulfilled by what they do.

The problem is that there are relatively few abstract jobs as compared to manual jobs. Indeed, the paucity of such jobs is partly driven by technology itself: abstract workers take longer to educate, and, thanks to the automation of lower-skill jobs, they are able to capture larger market shares with less human input. Thus, the effect of automation is to drive relatively more workers into the more precarious, lower-paid, less-fulfilling, and less-rewarding types of work.

This is how Loi supports premises two and three. Premise two is supported by appeal to evidence relating to the polarisation effect and predictions about its future. Premise three is supported by appeal to the known effects of more precarious, lower-paid, less-fulfilling, and less-rewarding types of work: Loi’s belief is that such conditions are likely to have a ‘disenhancing’ effect.


4. Thoughts and Criticisms
What are the implications of this argument? Loi discusses several. Two stood out for me. First, there was his focus on the basic income guarantee as one way to ameliorate the negative effects of technological unemployment. This is not surprising since many make the same argument, but Loi links it directly to concerns about disenhancement rather than to concerns about social inequality more generally. Second, there was his discussion of biomedical enhancement as a way to correct for the disenhancing effects of technological unemployment. In other words, enhancement via the biomedical route may be a necessary countermeasure to disenhancement via the automation route. This is something I have argued in relation to the political effects of automating technology, and something I also discuss in an upcoming paper.

Is the overarching argument any good? In general, I agree with Loi that one can usefully fuse together the debates about technological unemployment and human enhancement. Indeed, I think it is worth doing so. That said, there were two omissions from the article that bothered me. The first was that I don’t think Loi did enough to emphasise the merits of the anti-work position. Proponents of this view argue that non-work can be better for an individual than work. Hence, there are ways in which technologically-induced unemployment could be a good thing, and this could counteract some of the disenhancing effect. I have talked about this antiwork view ad nauseam on the blog before so I won’t repeat myself now. Suffice to say, the antiwork view only really makes sense if the productive gains from technology are shared reasonably widely. If people are suffering from deprivation, and are still forced to find work, the antiwork view loses its force. Loi is aware of this, which is why he discusses the importance of the basic income guarantee.

The other issue I had with the article had to do with its success in demonstrating a disenhancing effect for technological unemployment in both the functionalist and welfarist senses. At the outset, Loi claims he is arguing for both, but towards the end he seems to limit himself to just the welfarist sense:

If ICT innovation leads to intrinsically worst [sic] jobs and low wages for most workers, technology will disenhance more workers (in the welfarist sense) than it enhances. This seems, unfortunately, to be the present trend.
(Loi 2015, 208)

As best I can tell, he makes no attempt to argue for disenhancement in the functional sense. He might be able to do this by, say, arguing that manual workers suffer from a reduction in capacity and ability. But it's not obvious to me that this is true. They may not have their mental abilities expanded, but their physical abilities could be. Furthermore, there is an argument out there to the effect that the kinds of automating technology used by abstract workers can have a (functionally) disenhancing effect. Nicholas Carr makes much of this in his recent book, claiming that assistive technologies often lead to the degeneration of mental abilities. I’m not endorsing that argument here (I discussed it at length on a previous occasion) but it adds an interesting angle to Loi’s argument, suggesting that the disenhancing effect might be broader than he allows.

Anyway, those are just some quick — no doubt poorly thought-out — reflections on Loi’s article. I’m thinking a lot about the relationship between the enhancement debate and other techno-ethical debates at the moment, so I will continue to explore these issues.

Thursday, November 19, 2015

Theory and Application of the Extended Mind (Series Index)




In the past year, I have written several posts about Chalmers and Clark's famous extended mind thesis. This thesis takes seriously the functionalist explanation of mental events, and holds that the mind is not confined to the skull. Instead, it can extend into artefacts and objects in the world around it.

I have been interested in both the theoretical underpinnings of this thesis and its potential applications, particularly to the human enhancement debate. Anyway, here are links to everything I have done on the concept — two of them are podcasts in which I discuss it at some length.


  • Neuroenhancement and the Extended Mind Thesis: This post introduces the thesis and looks at Neil Levy's so-called Ethical Parity Principle which, to put it crudely, holds that what goes on inside the skull should be ethically on a par with what goes on outside the skull. This could have interesting consequences for the enhancement debate.

  • Two Interpretations of the Extended Mind Thesis: Some people have trouble understanding what the extended mind thesis is all about. This post tries to help by considering two interpretations put forward by the philosopher Katalin Farkas.

  • Extended Mind and the Coupling-Constitution Fallacy: The biggest criticism of the extended mind thesis comes from Kenneth Aizawa and Fred Adams. They argue that Chalmers and Clark confuse a causal relationship between the brain and external objects with a constitutive relationship. I try to explain this criticism and consider a possible reply.




Wednesday, November 18, 2015

The Philosophy of Games and the Postwork Utopia




I want to start with a thought experiment: Suppose the most extreme predictions regarding technological unemployment come to pass. The new wave of automating technologies takes over most forms of human employment. The result is that there is no economically productive domain for human workers to escape into. Suppose, at the same time, that we all benefit from this state of affairs. In other words, the productive gains of the technology do not flow solely to a handful of super-wealthy capitalists; they are fairly distributed to all (perhaps through a guaranteed income scheme). Call this the ‘postwork’ world. What would life be like in such a world?

For some, this is the ideal world. It is a world in which we no longer have to work in order to secure our wants and needs. And the absence of compelled work sounds utopian. Bob Black, in his famous polemic ‘The Abolition of Work’, makes the case that:

No one should ever work. Work is the source of nearly all the misery in the world. Almost any evil you'd care to name comes from working or from living in a world designed for work. In order to stop suffering, we have to stop working.

But is the postwork world really all that desirable? To me, it all depends on what it takes to live a meaningful and flourishing life. Philosophers think that in order to live a flourishing life you need to satisfy certain basic conditions of value. Can those conditions be satisfied in the absence of work? Black seems to think they can. He paints a rosy picture of the ‘ludic’ (i.e. game-playing) life we can live in the absence of work:

[The postwork world means] creating a new way of life based on play; in other words, a *ludic* conviviality, commensality, and maybe even art. There is more to play than child's play, as worthy as that is. I call for a collective adventure in generalized joy and freely interdependent exuberance.

That sounds rather nice. But deeper analysis of this ludic life is needed. Only then will we know whether it provides for the kind of flourishing we seek. I want to provide that deeper analysis in this post. I do so by drawing from the work of Bernard Suits and Thomas Hurka, and in particular from the argument in Hurka’s paper ‘Games and the Good’. I want to suggest that a purely ludic life (one consisting of ‘games’) does allow for a certain type of flourishing. It is distinct from the kind of flourishing included in traditional understandings of the good life, but it may provide a plausible blueprint for a postwork utopia.

To make this case, I’m going to have to do three things. First, I’m going to have to start with a pessimistic view, one suggesting that a postwork world would rob us of some value. Second, I will have to outline Hurka’s analysis of games and the good. And third, I will have to argue that this analysis provides one way of defending Black’s ideal of the ludic life.

[Note: The main idea in this post came from a conversation I recently had with Jon Perry and Ted Kupper on the Review the Future podcast. I would like to thank both of them for making me think about this issue.]


1. A Pessimistic View of the Postwork World
Antiwork theorists think that work is bad and nonwork is better. I have analysed this argumentative posture on previous occasions. One thing I noted on those occasions is that antiwork theorists are good at explaining why work is bad; but not-so-good at explaining why non-work is better. This is because their vision of the good life is often undertheorised. In other words, they lack clarity about what it takes to live a flourishing and meaningful life, and how that life might be enhanced in a postwork world. Theorisation is needed for a full defence of the antiwork position.

Here is one plausible theory of meaning, taken from the work of Thaddeus Metz. In one of his papers, Metz argues that there are three main sources of value in life: the Good, the True and the Beautiful. Our lives flourish and accumulate meaning when we contour our intellects to the pursuit of these three things. In other words, our lives flourish when we act to bring about the moral good, to pursue and attain a true conception of reality, and to produce (and admire) things of great aesthetic beauty. The more we do of each, the better our lives are.

Under this account of meaning, your activities (and your intellect) must bring about valuable changes in the external reality. For example, I could dedicate my life to ending cancer. If I succeed, and my actions realise (or at least form some significant part of) the cure for cancer, the world would be a slightly better place. This would make my life meaningful (perhaps very meaningful). Why so? Because my actions would have helped to attain the Good (maybe also the True).

Here is one concern you could have about this type of meaning in the postwork future. The centrepiece of this theory is the link (typically causal and/or mental) between what I do and what happens in the world around me. I cause or help to bring about the Good, the True and the Beautiful: that’s what makes my life meaningful. But it is the very essence of automating technologies to sever the link between what I do and what happens in the world around me. Automating technologies, after all, obviate the need for humans in certain endeavours. The concern is that this power to sever the link might take hold in many domains, thereby distancing us from potential sources of meaning.

The concern needs to be fleshed out. The danger with the futurist antiwork position is that it assumes automating technologies will take over the boring, degrading and dehumanising jobs, and leave us free to pursue things that provide opportunities for genuine meaning and flourishing. But there doesn’t seem to be any good reason to think that advances in automating technologies will only affect ‘bad’ or meaningless activities. They could take over other, more meaningful tasks too, thereby severing the connection between what we do and the things that are supposed to provide meaning. Indeed, if we assume that science is the main way in which we pursue Truth in the modern world, then there are already some obvious ways in which technology is taking over in its pursuit. Science is increasingly a big data enterprise, in which machine learning algorithms are leveraged to make sense of large datasets, and to make new and interesting discoveries. They are in their infancy now, but already we see ways in which the algorithms are attenuating the link between individual scientists and new discoveries. Why? Because they are becoming increasingly complex, and working in ways that are beyond the understanding and control of the individual scientists.

So the concern is that automating technologies narrow the domain for genuinely meaningful activities. Some such activities will no doubt remain accessible to humans (e.g. there are serious questions as to whether machines could ever really take over the pursuit of the Beautiful), but the totality will diminish in the wake of automation. Humans could still be very well off in this world: the machines could solve most moral problems (e.g. curing disease, distributing goods and services, deciding on and implementing important social policies) and make new and interesting discoveries in which we can delight, but we will be the passive recipients of these benefits, not active contributors to them. There is something less-than-idyllic about such a world.


2. Games as a Forum for Flourishing
One thing that would be left open to us in this postwork future, however, is game-playing. While the machines are busy solving our moral crises and making great discoveries, we can participate in more and more elaborate and interesting games. These games would be of no instrumental significance — they wouldn’t solve moral problems or be sources of income or status, for example — but they might be sources of value.

To make this argument, we first need a better handle on what a game is. To do this, we can turn to the conceptual analysis of games provided by Bernard Suits’s famous book The Grasshopper. Suits argued, contra Wittgenstein, that all games (properly so-called) share three key features:


Prelusory Goals: These are outcomes or changes in the world that are intelligible apart from the game itself. For example, in a game like golf the prelusory goal would be something like: putting a small, dimpled ball into a hole, marked by a flag. In a game like tic-tac-toe (or “noughts and crosses”) it would be something like: being the first to mark three Xs or Os in a row, and/or preventing someone else from doing the same. The prelusory goals are the states of affairs that help us keep score and determine who wins or loses the game.

Constitutive Rules: These are the rules that determine how the prelusory goal is to be attained. According to Suits, these rules set up artificial obstacles that prevent the players from achieving the prelusory goal in the most straightforward and efficient manner. For example, the most efficient and straightforward way to get a dimpled ball in a hole would probably be to pick up the ball and drop it directly in the hole. But the constitutive rules of golf do not allow you to do this. You have to manipulate the ball through the air and along the ground using a set of clubs, in a very particular constrained environment. These artificial constraints are what make the game interesting.

Lusory Attitude: This is the psychological orientation of the game players to the game itself. In order for a game to work, the players have to accept the constraints imposed by the constitutive rules. This is an obvious point. Golf could not survive as a game if the players refused to use their clubs to get the ball into the hole.
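
Purely by way of illustration, Suits’s three conditions can be rendered as a simple data structure. This is a toy sketch of my own (the field names and the golf example are mine, not Suits’s):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Game:
    """A toy rendering of Suits's three-part analysis of a game."""
    # Prelusory goal: an outcome intelligible apart from the game itself.
    prelusory_goal: str
    # Constitutive rules: artificial obstacles that rule out the most
    # efficient means of achieving the prelusory goal.
    constitutive_rules: List[str] = field(default_factory=list)
    # Lusory attitude: the players accept the rules so the game can occur.
    players_accept_rules: bool = True

    def is_game(self) -> bool:
        # On Suits's account, all three elements must be present.
        return bool(self.prelusory_goal and self.constitutive_rules
                    and self.players_accept_rules)

golf = Game(
    prelusory_goal="a small, dimpled ball ends up in the hole",
    constitutive_rules=["the ball may only be moved by striking it with a club"],
)
assert golf.is_game()
```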


This three-part analysis of games has struck many as both illuminating and (in broad brush) correct. We could quibble, but let’s accept it for now. The question then becomes: can a world in which we have nothing to do but play games (so defined) provide the basis for a flourishing life? Maybe. Suits himself seems to have thought it would be the best possible life. But Suits was notoriously esoteric in his defence of this claim. His book on the topic, The Grasshopper, is an allegorical dialogue, which discusses games in the context of a future of technological perfection, but doesn’t present a clearcut argument. It is also somewhat equivocal and uncertain in its final views, which is what you would expect from a good philosophical dialogue. This makes for good reading, but not good arguing. So this is where we need to turn to the work of Thomas Hurka. Taking on board Suits’s analysis, Hurka argues that games are a way of realising two important kinds of value.

The first value concerns the structure of means-end reasoning (or ‘practical’ reasoning if you prefer). Means-end reasoning is all about working out the most appropriate course of action for realising some particular goal. A well-designed game allows for some complexity in the relationship between means and ends. Thus, when one finally attains those ends, there is a great sense of achievement involved (you have overcome the obstacles established by the rules of the game). This sense of achievement, according to Hurka, is an important source of value. And games are good because they provide a pure platform for realising higher degrees of achievement.

An analogy helps to make the argument. Compare theoretical reasoning with practical reasoning. In theoretical reasoning, you are trying to attain true insights about the structure of the world around you. This enables you to realise a distinct value: knowledge. But this requires something more than the mere description of facts. You need to identify general laws or principles that help to explain those facts. When you succeed in identifying those general laws or principles you will have attained a deep level of insight. This has more value than mere description. For example, when Newton identified his laws of gravity, he provided overarching principles that could explain many distinct facts. This is valuable in a way that simply describing facts about objects in motion is not.

The point here is that in theoretical reasoning there is extra value to knowledge that is explanatorily-integrated. Hurka argues that the parallel to knowledge in the practical domain is achievement. There is some good to achievement of all kinds, but there is greater good in achievement that involves some means-end complexity. The more obstacles you have to overcome, the more achievement you have. Hurka illustrates the point using the diagrams I have reconstructed below. They illustrate the depth and complexity of insight and achievement that can be acquired in both theoretical and practical domains.





The second source of value in game-playing has to do with Aristotle’s distinction between two types of activity: energeia and kinesis (this is how the distinction is described in Hurka — I’m not an expert on Aristotelian metaphysics but there are related distinctions in Aristotle’s work, e.g. praxis vs poesis). Energeiai are activities that are all about process. Aristotle viewed philosophy and self-examination as being of this sort: each is a constant process of questioning and gaining insight that never bottoms out in some goal or end state. Kineseis are activities that are all about goals or end states. Aristotle thought that process-related activities were ultimately better than goal-related activities. The reason for this is that he thought the value of a kinesis was always trumped by or subordinate to its goal (i.e. it wasn’t good in itself). This is why Aristotle advocated the life of contemplation and philosophising. Such a life would be one in which the activity is an end in itself (I spoke about this before).

At first glance, it would seem like games don’t fit neatly within this Aristotelian framework. They are certainly goal-directed activities (the prelusory goal is essential to their structure). And so this makes them look like kineseis. But these goals are essentially inconsequential. They have no deeper meaning or significance. As a result, the game is really all about process. It is about finding ways to overcome the artificial obstacles established by the constitutive rules. As Hurka puts it, games are consequently excellent platforms for attaining a particularly modern conception of value (one found in the writings of existentialists). They are activities directed at some external end, but the internal process is the sole source of value. Indeed, there is a sense in which they are an even purer way of achieving Aristotle’s ideal. The problem with Aristotle’s suggestion that the best life is the life of intellectual virtue is that intellectual activity often does have goals lurking in the background (e.g. attaining some true insight). There is always the risk that these goals trump the inherent value of the intellectual process. With games, you never have that risk. The goals are valueless from the get-go. Purely procedural goods can really flourish in the world of games.

To sum up, a life filled with games does allow for certain forms of flourishing. Two are singled out in Hurka’s analysis. First, games allow people to attain the good of achievement (overcoming obstacles to goals). And better games add the right amount of complexity and difficulty to the process, thereby enabling deeper levels of achievement. Second, games allow the inherent value of processes to flourish in the absence of trumping external goods. Hence, we can revel purely in exercising the physical, cognitive and emotional skills needed to overcome the obstacles within the game.


3. Is this the utopia we've been looking for?
But is this enough? Again, Bernard Suits certainly thought so. He thought the game-playing life was one of supreme value. Hurka is more doubtful. While he accepts that the game-playing life allows for some flourishing, he still thinks it is of a weaker or inferior sort. To quote:

Now, because game-playing has a trivial end-result, it cannot have the additional intrinsic value that derives from instrumental value. This implies that excellence in games, though admirable, is less so than success in equally challenging activities that produce a great good or prevent a great evil. This seems intuitively right: the honour due athletic achievement for themselves is less than that due the achievements of great political reformers or medical researchers. 
(Hurka 2006)

This suggests a retreat to the vision of meaning I outlined earlier in this post, i.e. truly meaningful activity must be directed toward the Good, the True and the Beautiful. The problem is that even if this vision is right, there is the risk that advances in automating technologies cut us off from these more valuable activities. We may need to make do with games.

But perhaps this should not cause us despair. In many ways, this is a plausible vision of what a utopian world would look like. If you think about it, the other proposed sources of meaning (like the Good and the True) make most sense in an imperfect world. It is because people suffer or lack basic goods and services that we need to engage in moral projects that improve their well-being. It is because we are epistemically impaired that we need to pursue the truth. If we lived in a world in which those impairments had been overcome, the meaning derived from those activities would no longer make sense. The external goods would be available to all. In such a world, we would expect purely procedural or instrumental goods to be the only game in town.

And what is a world devoid of suffering, impairment and limitation? Surely it is a utopia?

Monday, November 16, 2015

Is Anyone Competent to Regulate Artificial Intelligence?




Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.

In outlining these problems, I was drawing from the work of Matthew Scherer and his soon-to-be-published article “Regulating Artificially Intelligent Systems: Risks, Challenges, Competencies and Strategies”. Today I want to return to that article and consider the next step in the regulatory project. Once we have a handle on the basic problems, we need to consider who might be competent to deal with them. In most Western countries, there are three main regulatory bodies:

Legislatures: The body of elected officials who enact general laws and put in place regulatory structures (e.g. in the US the Houses of Congress; in the UK the Houses of Parliament; in Ireland the Houses of the Oireachtas).

Regulatory Agencies: The body of subject-area specialists, established through legislation, and empowered to regulate a particular industry/social problem, often by creating, investigating and enforcing regulatory standards (there are many examples, e.g. the US Food and Drug Administration, the UK Financial Conduct Authority; the Irish Planning Board (An Bord Pleanala))

Courts: The judges and other legal officials tasked with arguing, adjudicating and, ultimately, enforcing legal standards (both civil and criminal).

To these three bodies, you could perhaps add “The Market”, which can enforce certain forms of discipline on private commercial entities, and also internal regulatory bodies within those commercial entities (though such bodies are usually forced into existence by law). For the purposes of this discussion, however, I’ll be sticking to the three bodies just outlined. The question is whether any of these three bodies is competent to regulate the field of artificial intelligence. This is something Scherer tries to answer in his article. I’ll follow his analysis in the remainder of this post, but where Scherer focuses entirely on the example of the United States I’ll try to be a little more universal.

Before I get underway, it is worth flagging two things about AI that could affect the competency of anyone to regulate its development and deployment. The first is that AI is (potentially) a rapidly advancing technology: many technological developments made over the past 50 years are now coming together in the form of AI. This makes it difficult for regulatory bodies to ‘keep up’. The second is that advances in AI can draw on many different fields of inquiry, e.g. engineering, statistics, linguistics, computer science, applied mathematics, psychology, economics and so on. This makes it difficult for anyone to have the relevant subject-area expertise.


1. The Competencies of Legislatures
Legislatures typically consist of elected officials, appointed to represent the interests of particular constituencies of voters, with the primary goal of enacting policy via legislation. Legislatures are set up slightly differently around the world. For example, in some countries, there are non-elected legislatures working in tandem with elected legislatures. In some countries, lobbyists have significant influence over legislators; in others this influence is relatively weak. In some countries, the executive branch of government effectively controls the legislature; in others the executive is an entirely distinct branch of government.

Scherer argues that three things must be remembered when it comes to understanding the regulatory role of a legislature:

Democratic Legitimacy: The legislature is generally viewed as the institution with the most democratic legitimacy, i.e. it is the institution that represents the people’s interests and answers directly to them. Obviously, the perceived legitimacy of the legislature can wax and wane (e.g. it may wane when lobbying power is excessive). Nevertheless, it will still tend to have more perceived democratic legitimacy than the other regulatory bodies.

Lack of Expertise: Legislatures are generally made up of career politicians. It is very rare for these career politicians to have subject matter expertise when it comes to a proposed regulatory bill. They will have to rely on judgments from constituents, advisors, lobbyists and experts called to give evidence before a legislative committee.

Delegation and Oversight: Legislatures have the ability to delegate regulatory power to other agencies. Sometimes they do this by creating an entirely new agency through a piece of legislation. Other times they do so by expanding or reorganising the mission of a pre-existing agency. The legislature then has the power to oversee this agency and periodically call it to account for its actions.

What does all this mean when it comes to the AI debate? It means that legislatures are best placed to determine the values and public interests that should go into any proposed regulatory scheme. They are directly accountable to the people and so they can (imperfectly) channel those interests into the formation of a regulatory system. Because they lack subject matter expertise, they will be unable to determine particular standards or rules that should govern the development and deployment of AI. They will need to delegate that power to others. But in doing so, they could set important general constraints that reflect the public interest in AI.

There is nothing too dramatic in this analysis. This is what legislatures are best-placed to do in virtually all regulatory matters. That said, the model here is idealistic. There are many ways in which legislatures can fail to properly represent the interests of the public.


2. The Competencies of Regulatory Agencies
Regulatory agencies are bodies established via legislation and empowered to regulate a particular area. They are quite variable in terms of structure and remit. This is because they are effectively designed from scratch by legislatures. In most legal systems, there are some general constraints imposed on possible regulatory structures by constitutional principles (e.g. a regulatory agency cannot violate or undermine constitutionally protected rights). But this still gives plenty of scope for regulatory innovation.

Scherer argues that there are four things about regulatory agencies that affect their regulatory competence:

Flexibility: This is what I just said. Regulatory agencies can be designed from scratch to deal with particular industries or social problems. They can exercise a variety of powers, including policy-formation, rule-setting, information-collection, investigation, enforcement, and sanction. Flexibility often reduces over time. Most of the flexibility arises during the ‘design phase’. Once an agency comes into existence, it tends to become more rigid for both sociological and legal reasons.

Specialisation and Expertise: Regulatory agencies can appoint subject-matter experts to assist in their regulatory mission. Unlike legislatures who have to deal with all social problems, the agency can keep focused on one mission. This enhances their expertise. After all, expertise is a product of both: (a) pre-existing qualification/ability and (b) singular dedication to a particular task.

Independence and Alienation: Regulatory agencies are set up so as to be independent from the usual vagaries of politics. Thus, for example, they are not directly answerable to constituents and do not have to stand for election every few years. That said, the independence of agencies is often more illusory than real. Agencies are usually answerable to politicians and so (to some extent) vulnerable to the same forces. Lobbyists often exert influence over regulatory agencies (in some countries there is a well-known ‘revolving door’ for staff between lobbying firms, private enterprises, and regulatory agencies). Finally, independence can come at the price of alienation, i.e. a perceived lack of democratic legitimacy.

The Power of Ex Ante Action: Regulatory agencies can establish rules and standards that govern companies and organisations when they are developing products and services. This allows them to have a genuine impact on the ex ante problems in any given field. This makes them very different from the courts, who usually only have ex post powers.


What does this mean for AI regulation? Well, it means that a bespoke regulatory agency would be best placed to develop the detailed, industry-specific rules and standards that should govern the research and development of AI. This agency could appoint relevant experts who could further develop their expertise through their work. This is the only way to really target the ex ante problems highlighted previously.

But there are clearly limitations to what a bespoke regulatory agency can do. For one thing, the fact that regulatory structures become rigid once created is a problem when it comes to a rapidly advancing field like AI. For another, because AI potentially draws on so many diffuse fields, it may be difficult to recruit an appropriate team of experts. Relevant insights that catapult AI development into high gear may come from unexpected sources. Furthermore, people who have the relevant expertise may be hoovered up by the enterprises they are trying to regulate. Once again, we may see a revolving door between the regulatory agency and the AI industry.


3. The Competencies of Courts
Courts are judicial bodies that adjudicate on particular legal disputes. They usually have some residual authority over regulatory agencies. For instance, if you are penalised by a regulatory agency you will often have the right to appeal that decision to the courts. This is a branch of law known as administrative law. Although legal rules vary, most courts adopt a pretty deferential attitude toward regulatory agencies. They do so on the grounds that the agencies are the relevant subject-matter experts. That said, courts can still use traditional legal mechanisms (e.g. criminal law or tort law) to resolve disputes that may arise from the use of a technology or service.

Scherer focuses on the tort law system in his article. So the scenario lurking in the background of his analysis is a case in which someone is injured or harmed by an AI system and tries to sue the manufacturer for damages. He argues that four things must be kept in mind when assessing the regulatory competence of the tort law system in cases like this:

Fact-Finding Powers: Rules of evidence have been established that give courts extensive fact-finding powers in particular disputes. These rules reflect both a desire to get at the truth and to be fair to the parties involved. This means that courts can often acquire good information about how products are designed and safety standards implemented, but that information is tailored to a particular case and not to what happens in the industry more generally.

Reactive and Reactionary: Courts can only intervene and impose legal standards after a problem has arisen. This can have a deterrent effect on future activity within an industry. But the reactive nature of the court also means that it has a tendency to be reactionary in its rulings. In other words, courts can be victims of “hindsight bias” and assume that the risk posed by a technology is greater than it really is.

Incrementalist: Because courts only deal with individual cases, and because the system as a whole moves quite slowly, it can really only make incremental changes.

Misaligned Incentives: In common law systems, the litigation process is adversarial in nature: one side prosecutes a claim; the other defends. Lawyers only take cases to court that they think can be won. They call witnesses that support their side. In this, they are concerned solely with the interests of their clients, not with the interests of the public at large. That said, in some countries class actions are possible, which allow many people to bring the same type of case against a defendant. This means some cases can represent a broader set of interests.

What does all this mean for AI regulation? Well, it suggests that the court system cannot deal with any of the ex ante problems alluded to earlier on. It can only deal with ex post problems. Furthermore, in dealing with those problems, it may move too slowly to keep up with the rapid advances in the technology, and may tend to overestimate the risks associated with the technology. If you think those risks are great (bordering on the so-called “existential” risk-category proposed by Nick Bostrom), this reactionary nature might be a good thing. But, even still, the slowness of the system will count against it. Scherer thinks this tips the balance decisively in favour of some specially constructed regulatory agency.




4. Conclusion: Is there hope for regulation?
Now that we have a clearer picture of the regulatory ecosystem, we can think more seriously about the potential for regulation in solving the problems of AI. Scherer has a proposal in his article, sketched out in some reasonable detail. It involves leveraging the different competencies of the three bodies. The legislature should enact an Artificial Intelligence Development Act. The Act should set out the values for the regulatory system:

[T]o ensure that AI is safe, secure, susceptible to human control, and aligned with human interests, both by deterring the creation of AI that lack those features and by encouraging the development of beneficial AI that include those features. 
(Scherer 2015)

The Act should, in turn, establish a regulatory agency with responsibility for the safe development of AI. This agency should not create detailed rules and standards for AI, and should not have the power to sanction or punish those who fail to comply with its standards. Instead, it should create a certification system, under which agency members can review and certify an AI system as “safe”. Companies developing AI systems can volunteer for certification.

You may wonder why any company would bother to do this. The answer is that the Act would also create a differential system of tort liability. Companies that undergo certification will have limited liability in the event that something goes wrong. Companies that fail to undergo certification will face strict liability standards in the event of something going wrong. Furthermore, this strict liability system will be joint and several in nature: any entity in the design process could face full liability. This creates an incentive for AI developers to undergo certification, whilst at the same time not overburdening them with compliance rules.
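
To see why the incentive might work, consider a back-of-the-envelope comparison of expected liability costs. All of the numbers here are hypothetical, chosen purely for illustration; Scherer’s article does not specify any figures:

```python
# Hypothetical figures, purely for illustration.
p_harm = 0.01              # assumed probability an AI system causes a harm
damages = 10_000_000       # assumed damages if the harm occurs

liability_cap_certified = 1_000_000   # assumed cap under limited liability

# Certified developers face limited liability...
expected_cost_certified = p_harm * min(damages, liability_cap_certified)
# ...while uncertified developers face strict, uncapped liability.
expected_cost_uncertified = p_harm * damages

print(expected_cost_certified)    # 10000.0
print(expected_cost_uncertified)  # 100000.0 -- a tenfold difference
```

On these (made-up) numbers, certification cuts a developer’s expected liability tenfold, which is the kind of differential that might plausibly drive voluntary take-up.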

In a way, this is a clever proposal. It tries to balance the risks and rewards of AI. The belief is that we shouldn’t stifle creativity and development within the sector, and that we should encourage safe and beneficial forms of AI. My concern is that this system misses some of the unique properties of AI that make it such a regulatory challenge. In particular, the proposal seems to ignore (a) the difficulty of finding someone to regulate and (b) the control problem.

This is ironic given that Scherer was quite good at outlining those challenges in the first part of his article. There, he noted how AI developers need not be large, well-integrated organisations based in a single jurisdiction. But if they are not, then it may be difficult to ‘reach’ them with the proposed regulatory regime. I am guessing the joint and several liability proposal is designed to address this problem as it creates an incentive for anyone involved in the process to undergo certification, but it assumes that diffuse networks of developers have the end goal of producing a ‘consumer’ type device. This may not be true.

Furthermore, earlier in the article, Scherer noted how AI systems can do things that are beyond the control or anticipation of their original designers. This creates liability problems but these problems can be addressed through the use of strict liability standards. At the same time, however, it also creates problems in the certification process. Surely if AI systems can act in unplanned and unanticipated ways, it follows that members of a putative regulatory agency would not be well-equipped to certify an AI system as “safe”? That could be concerning. The proposed system would probably be better than nothing, and we shouldn’t make the perfect the enemy of the good, but anyone who is convinced of the potential for AI to pose an “existential threat” to humanity is unlikely to think that regulation of this sort can play a valuable role in mitigating that risk.

Scherer is aware of this. He closes by stating that his goal is not to provide the final word but rather to start a conversation on the best legal mechanisms for managing AI risk. That’s certainly a conversation that needs to continue.

Saturday, November 14, 2015

Blockchain Technology, Smart Contracts and Smart Property






Blockchain technology is at the heart of cryptocurrencies like Bitcoin. Most people have heard of Bitcoin and some are excited by the prospect it raises of a decentralised, stateless currency/payment system. But this is not the most interesting thing about Bitcoin. It is the blockchain technology itself that is the real breakthrough. It not only provides the foundation for a currency and payment system; it also provides the foundation for new ways of organising and managing basic social relationships. This includes legal relationships such as those involved in contractual exchange and proprietary ownership. The most prominent expression of this potential comes in the shape of Ethereum, an open source platform that allows developers to use blockchains for whatever purpose they see fit.

This might sound a little abstract and confusing. Blockchain technology is exciting, but many people are put off by the technical and abstruse concepts underpinning it. Proponents of the technology talk about strange things like cryptographic hash functions and public key encryption. They also refer to obscure mathematical puzzles like the Byzantine Generals problem in order to explain how it works. This is daunting. Many wonder whether they have to master this obscure conceptual vocabulary in order to understand what all the fuss is about.

If they want to engage with the technology at the deepest levels, they do. But to gain a high level understanding of how it works, and to share some of the excitement of its proponents, they don’t. My goal in this post is to provide that high-level understanding, and to explain how the technology could provide an underpinning for things like smart contracts and smart property. With luck, this will enable people to see the potential for this technology and will pique their interest in its political, legal and ethical implications.

I appreciate that there are many other articles out there that try to do the same thing. I am merely adding one more to the pile. I do so in the hope that it may prove useful to some, but also in the hope that it helps me to better understand the phenomenon. After all, most writing is an exercise in self-explanation. It is through communication that we truly begin to understand.

The remainder of this post is divided into three main sections. The first talks about the ‘Trust Problem’ that motivates the creation of the blockchain. The second tries to provide a detailed but non-mathematical description of how the blockchain works to solve the trust problem. The third explains how the technology could support a system of smart contracts and smart property.


1. The Trust Problem and the Motivation for the Blockchain
All human societies have a trust problem. In order to survive and make a living, we must coordinate and cooperate with others. In doing so, there is potential for these others to mislead, deceive and disappoint. To ensure successful ongoing cooperation, we need to be able to trust each other. Many societies have invented elaborate rituals, laws and governance systems to address this trust problem. At its most fundamental level, blockchain technology tries to do the same.

To illustrate, let’s use the example of a currency and payment system. This seems appropriate given the origins of blockchain technology in the development of such systems. I’m going to use the example of a real-world currency system: the currency used (historically) on the Island of Yap. Some people will be familiar with this example as it is beloved by economists. The only problem is that the example has become heavily mythologised and abstracted from the actual historical reality. I’m not an expert on that history, so what I am about to describe is also likely to be highly mythologised and simplified. I hope that’s okay: the goal here is to explain the rationale behind blockchain technology, not to write an accurate monetary history of the Island of Yap.

Anyway, with that caveat in mind, the Islanders of Yap had an unusual monetary system. They did not use coins as money. Instead, they used stone discs of varying sizes. These discs were mined from another island, several hundred miles away. This ensured that the discs that had been mined and brought back to the island retained their value over time. The picture below provides an example and illustrates just how large these discs could get. People would exchange these large discs in important transactions. But obviously the islanders could not just hand the discs to one another to finalise the transaction. The discs remained fixed in place. In order to know who owned what, the islanders needed to keep some kind of ledger, which recorded transactional data and allowed them to figure out which stone disc belonged to which islander.




One way to do this would have been to use a trusted third party ledger. In other words, to find some respected tribal elder or chief and make it a requirement that all transactions be logged with him/her. That way, whenever a dispute arose, the islanders could go to the elder and he/she could resolve the dispute. The elder could confirm that Islander A really does own the disc and is entitled to exchange it with Islander B, or vice versa. This is illustrated in the diagram below.



We make use of such trusted third party systems every day. Indeed, modern political, legal and monetary systems are almost entirely founded upon them. When you make a payment via credit or debit card, the transaction must first be logged with a bank or credit card company, which will verify that you have the necessary funds and that the payment came from you, before the payment is finally confirmed. The same goes for disputes over legal rights. Courts function as trusted third parties who resolve disputes (ultimately via the threat of violence) about contractual rights and property rights (to give just two examples).

But that is not the only way to solve the trust problem. Another way would be to use a distributed consensus ledger. In other words, instead of logging transactional data with a trusted third party, you could require all the islanders to keep an ongoing, updated, record of transactions. Then, when a dispute arises, you go with either the majority or unanimous view of this network of ledger-keepers. As far as I am aware (and this is where my caveat about historical accuracy needs to be borne in mind) this is what the Islanders of Yap seem to have done. Each islander kept a mental record of who owned what, and this distributed mental record could be used to resolve transactional disputes.




Blockchain technology follows this distributed consensus method. It tries to create a computer-based protocol for resolving the trust problem through a distributed and publicly verifiable ledger. This is known as the blockchain. We can define it in the following way (from Wright and De Filippi, 2015):

Blockchain = A distributed, shared, encrypted database which serves as an irreversible and incorruptible public repository of information.


2. How the Blockchain is Built
But how exactly does the technology build the ledger? This is where things can get quite technical. In essence, the blockchain works by leveraging the networking capabilities of modern computers and by using a variety of cryptographic tools for verifying transactional data.

A network is established consisting of many different computers located in many different places. Each computer is a node in the network. You could have one node in South Africa, one in England, one in France, one in the USA, one in Yemen, one in Australia and so on. The network can, in theory, be distributed across the entire world. This network is then used for logging, recording and verifying transactional information. Every computer on the network keeps a record of all transactions taking place on the network. This record is known as the blockchain. It is comprehensive, permanent, public and distributed across all nodes in the network. The network can thus function as a decentralised authority for managing and maintaining records of transactions.
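
To make this a little more concrete, here is a minimal sketch of the core data structure, written in Python purely for illustration (real implementations hash carefully specified binary block headers, not JSON, and involve many further details):

import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically (illustrative only).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A toy chain: each block records some transactions plus the hash of the
# previous block, so tampering with an old record changes every hash after it.
genesis = {"prev_hash": None, "transactions": []}
block_1 = {"prev_hash": block_hash(genesis),
           "transactions": [{"from": "A", "to": "B", "amount": 100}]}
block_2 = {"prev_hash": block_hash(block_1),
           "transactions": [{"from": "B", "to": "C", "amount": 40}]}
chain = [genesis, block_1, block_2]

def chain_is_valid(chain):
    # Every block must correctly reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))  # True; alter an old transaction and it becomes False

Because every node keeps a copy of this chain, any attempt to rewrite history on one copy is immediately detectable by the rest of the network.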

It is easy enough to see how this works in the case of two people exchanging money. Suppose Person A wants to transfer 100 bitcoin (or whatever) to Person B. Person A has a digital ‘wallet’ which contains a record of how much bitcoin they currently own. They sign into this and agree to transfer a certain sum to Person B. They do this by broadcasting to the network that they wish to transfer the money to Person B’s digital wallet. Details of this proposed transaction are then added to a ‘block’ of transactional data that is stored across the network. The ‘block’ is like a temporary record that is in the process of being added to the permanent record (the blockchain). The ‘block’ represents all the transactions that took place on the network during a particular interval of time. In the case of bitcoin, the block includes information about all the transactions taking place in a ten-minute interval.

At this stage, the transaction between A and B has not been verified and does not form part of the permanent distributed ledger. What happens next is that once all the data has been collected for a given interval of time, the network works on verifying the details in those transactions (i.e. does A really have that amount of money to send to B? Did A really initiate the transaction? etc). Each computer on the network participates in a competition to verify the transactional data. The winner of this competition gets to add the ‘block’ to the ‘blockchain’ (i.e. they get to update the ledger). When they do so, they broadcast their ‘proof of work’ to the rest of the network. This shows the network how the winning computer verified the transactional data. The other computers on the network then check that proof of work and confirm that the record is correct. This is where the ‘distributed consensus’ comes in. It is only if the winning ‘solution’ is confirmed by the majority that it becomes a permanent part of the blockchain.

This verification process is technically tricky. I have given a simple descriptive account. For the full picture, you would need to engage with the cryptographic concepts underpinning it.
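
That said, a toy example may give a flavour of the ‘competition’. The sketch below, in Python and greatly simplified, shows the proof-of-work idea as it is often described for bitcoin: find a number (a ‘nonce’) which, hashed together with the block’s data, produces a hash with a required number of leading zeros. Finding the nonce takes real computational work; checking someone else’s answer takes a single step:

import hashlib

def proof_of_work(block_data, difficulty=4):
    # Search for a nonce such that sha256(data + nonce) begins with
    # `difficulty` zero characters. Higher difficulty means more work.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("A pays B 100")
# Any other node can verify the winner's claim with a single hash:
assert hashlib.sha256(f"A pays B 100{nonce}".encode()).hexdigest() == digest

This asymmetry (hard to solve, easy to verify) is what allows the rest of the network to cheaply confirm the winner’s work before accepting the new block.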

There are a couple of interesting things about this, over and above its ‘distributed consensus’ nature. The first has to do with the role of trust. Some people refer to the blockchain as a ‘trustless’ system. I think people say this because it is the computer protocol and its combination of cryptographic verification methods that underpin the ledger. Thus, when you are using the system, you do not have to trust or place faith in another human being. This makes it seem very different from, say, the situation faced by the islanders of Yap, who really do have to trust one another when using their distributed ledger. But clearly there is trust of a kind involved in the process. You have to trust the technology, and the theory underpinning it. Maybe that trust is justified, but it still seems to be there. Also, since most people lack the technical know-how to fully understand the system, there is a stronger sense of trust involved for most users: they have to trust the technical experts who establish and maintain the network.

The other interesting thing has to do with the incentive to maintain the network. You may wonder why people would be willing to give up their computing resources to maintain such an elaborate system. The technologically-inclined might do so initially out of curiosity, or maybe some sense of idealism, but to have a widespread network you probably need something more enticing. The solution used by most blockchain systems is to reward members of the network with some digital token that can be used to conduct exchanges on the network. In the case of bitcoin, the winner of the verification competition receives newly minted bitcoin for their troubles. This makes it attractive for people to join and maintain the network. Bitcoin adopts a particular economic philosophy in its reward system: the winner takes all the newly-minted bitcoin. This doesn’t have to be the case. You could adopt a more egalitarian or socialist system in which all members of the network share whatever token of value is being used.
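
The contrast between these reward philosophies can be captured in a couple of lines. The following is a toy sketch of my own, not part of any actual protocol:

def winner_takes_all(nodes, winner, reward=25):
    # Bitcoin-style: the node that wins the verification competition
    # receives the entire newly minted reward.
    return {node: (reward if node == winner else 0) for node in nodes}

def egalitarian(nodes, winner, reward=25):
    # An alternative: every node that helps maintain the network
    # shares the reward equally, regardless of who won.
    share = reward / len(nodes)
    return {node: share for node in nodes}

nodes = ["node_za", "node_uk", "node_us"]
print(winner_takes_all(nodes, "node_uk"))  # {'node_za': 0, 'node_uk': 25, 'node_us': 0}
print(egalitarian(nodes, "node_uk"))       # every node gets ~8.33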


3. Smart Contracts and Smart Property
To this point, I have stuck with the example of bitcoin and illustrated how it uses blockchain technology. But as I noted at the outset, this is merely one use-case. The really interesting thing about blockchain technology is how it can be used to manage and maintain other kinds of transactional data. In essence, the blockchain is a decentralised database that can maintain a record of any and all machine-to-machine communications. And since smart devices, involving machine-to-machine communication, are now everywhere, this makes the blockchain a potentially pervasive technology. Smart contracts and smart property are two illustrations of this potential. I’ll try to explain both.

A contract is an agreement between two or more people involving conditional commitments, i.e. “If you do X for me, I will do Y for you”. A legal contract makes those conditional commitments legally enforceable. If you fail to do X for me, I can take you to court and have you ordered to do X, or ordered to pay me compensation for failing to do X. A smart contract is effectively the same, only you use some technological infrastructure to ensure that conditions have been met and/or to automatically enforce commitments. This can be done using blockchain technology because the distributed ledger system can be used to confirm whether contractual conditions have been met.

Suppose I am selling drugs illegally via the (now-defunct) Silk Road. We agree that you will pay me X bitcoin if you receive the drugs by a particular date. That condition could be built into the initial transaction that is logged on the blockchain platform. In this case, the system will only release the bitcoin to me if the relevant condition is met. How will it know? Well, suppose the drugs are of a certain weight and have to be delivered to a certain locker that you use for these purposes. The locker is equipped with a ‘smart’ weighing scales. Once a package of the right weight is delivered to the locker, the weighing scales will broadcast the fact to the network, which then confirms that the relevant contractual condition has been met. This results in the money being released to me.
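
A crude sketch of this escrow logic might look like the following. Everything here (the class, the weights, the broadcast method) is hypothetical and invented for illustration; it is not how any real smart contract platform is written:

class SmartContract:
    # Toy escrow: bitcoin is locked up and released only when a delivery
    # condition, reported by a trusted smart device, is met.
    def __init__(self, buyer, seller, amount, expected_weight_kg):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.expected_weight_kg = expected_weight_kg
        self.released = False

    def on_scales_broadcast(self, measured_weight_kg):
        # Called when the 'smart' weighing scales broadcasts a delivery.
        if abs(measured_weight_kg - self.expected_weight_kg) < 0.01:
            self.released = True
            print(f"Releasing {self.amount} BTC from {self.buyer} to {self.seller}")
        else:
            print("Condition not met; funds remain in escrow")

contract = SmartContract(buyer="B", seller="A", amount=10, expected_weight_kg=0.5)
contract.on_scales_broadcast(0.5)   # condition met; funds released automatically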

Notice how the contract here is enforced automatically. I do not have to wait for you to release the bitcoin to me and you do not have to worry about losing your bitcoin and never receiving the drugs. The relevant conditions are coded into the original smart contract and once they are met the contract is automatically executed. There is no need for recourse to the courts (though you could build in conditional recourse to courts if you liked). The increasing number of ‘smart’ devices makes smart contracts enticing. Why? Because these devices allow for more ways in which to record, implement, and confirm the performance of relevant contractual conditions. The advantage of the blockchain is that it provides a way to manage and coordinate these devices without relying on trusted third parties.

Smart property is really just a variation on this. Tangible, physical objects in the real world (e.g. cars, houses, cookers, fridges) can have smart technology embedded in them. Indeed, this is already true of many cars. Information about these physical objects can then be registered on the blockchain, along with details of who stands in what type of ownership relationship to those objects. Smart keys could then be used to exercise ownership rights. So, for example, you might only be able to access and use a car if you had the right smart key stored on your phone. The same could be true for a smart house. These keys can be exchanged, and the exchanges verified, using the blockchain. The blockchain thus becomes a system for recording and managing property rights.
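
As a final sketch (again in Python, with a registry structure and method names of my own invention, not drawn from any real system), the core idea of smart property can be expressed in a few lines:

class PropertyRegistry:
    # Toy ledger mapping physical assets to their current owners. In a real
    # system this record would live on the blockchain, and transfers would
    # be verified by the whole network rather than by one object in memory.
    def __init__(self):
        self.owners = {}

    def register(self, asset_id, owner):
        self.owners[asset_id] = owner

    def transfer(self, asset_id, seller, buyer):
        # Only the recorded owner can hand over the smart key.
        if self.owners.get(asset_id) != seller:
            raise PermissionError("Seller does not own this asset")
        self.owners[asset_id] = buyer

    def may_unlock(self, asset_id, key_holder):
        # The smart lock on the car or house consults the ledger.
        return self.owners.get(asset_id) == key_holder

registry = PropertyRegistry()
registry.register("car-123", "Alice")
registry.transfer("car-123", "Alice", "Bob")
print(registry.may_unlock("car-123", "Bob"))    # True: Bob's phone opens the car
print(registry.may_unlock("car-123", "Alice"))  # False: Alice's no longer does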

Hopefully, these two examples give some sense of the excitement surrounding blockchain technology.


4. Conclusion
To sum up, the blockchain is a distributed, publicly verifiable and encrypted ledger used for recording and updating transactional data. It helps to solve the trust problem associated with most forms of social cooperation and coordination by obviating the need for trusted third parties. The technology is exciting because it can be used to manage and maintain networks of smart devices. As such devices become more and more widespread, there is the potential for blockchain technology to become pervasive. I’ll try to explore some of the more philosophically and legally interesting questions this throws up in future posts.

Wednesday, November 11, 2015

The Campaign Against Sex Robots: A Critical Analysis

Logo from the Campaign's Website


The Campaign Against Sex Robots launched to much media fanfare back in September. The brainchild of Dr. Kathleen Richardson from De Montfort University in Leicester, UK, and Dr. Erik Brilling from the University of Skovde in Sweden, the campaign aims to highlight the ways in which the development of sex robots could be ‘potentially harmful and will contribute to inequalities in society’. What’s more, despite being a relative newcomer, the campaign may have already achieved its first significant ‘scalp’. The 2nd International Conference on Love and Sex with Robots, organised by sex robot pioneer David Levy, was due to be held in Malaysia this month (November 2015) but was cancelled by Malaysian authorities shortly after the campaign was launched.

Now, to be sure, it’s difficult to claim a direct causal relationship between the campaign and the cancellation of the conference, but there is no doubting the media success of the campaign: it has been featured in major newspapers, weblogs and TV shows around the world. Most recently, Dr Richardson participated in a panel at the Web Summit conference in Dublin, and this was discussed in the national media here in Ireland. Furthermore, the actions of the Malaysian authorities suggest that there is the potential for the campaign to gain some traction.

And yet, I find the Campaign Against Sex Robots somewhat bizarre. I’m puzzled by the media attention being given to it, especially since the ethics and psychology of human-robot relationships (including sexual relationships) have been topics of serious inquiry for many years. And I’m also puzzled about the position of the campaign and the arguments its proponents proffer. I say this as someone with a bit of form in this area. I have written previously about the potential impact of sex robots on the traditional (human) sex work industry; I have also written about the case for legal bans on certain types of sex robot; and, with my friend and colleague Neil McArthur, I am currently co-editing a collection of essays on the legal, ethical and social implications of sex robots for MIT Press. So I am not unsympathetic to the kinds of issues being raised. But I cannot see what the campaign is driving at.

In this post, I want to provide some support for my puzzlement by analysing the goals of the campaign and the ‘position paper’ it has published in support of these goals. I want to make two main arguments: (i) the goals of the campaign are insufficiently clear and much of its media success may be trading on this lack of clarity; and (ii) the reasons proffered in support of the campaign are either unpersuasive or insufficiently strong to merit a ‘campaign’ against sex robots. I appreciate that others have done some of this critical work before. My goal is to do so in a more thorough way.

(Note: this post is long -- far longer than I originally envisaged. If you want to just get the gist of my criticisms, I suggest reading section one and the conclusion, and then having a look at the argument diagrams.)


1. What are the goals of the campaign against sex robots?
Let me start with a prediction: sex robots will become a reality. I say this with some confidence. I am not usually prone to making predictions about the future development of technology. I think people who make such predictions are routinely proved wrong, and hence forced into some awkward backtracking and self-amendment. Nevertheless, I feel pretty sure about this one. My confidence stems from two main sources: (i) history suggests that sex and technology have always gone together, hence if there is to be a revolution in robotics it is likely to include the development of sex robots; and (ii) sex robots already exist (in primitive and unsophisticated forms) and there are several companies actively trying to develop more sophisticated versions (perhaps most notably Real Doll). In making this prediction, I won't make specific claims about the likely form or degree of intelligence that will be associated with these sex robots. But I’m still sure they will exist.

Granting this, it seems to me that there are three stances one can take towards the existence of such robots:

Liberation: i.e. adopt a libertarian attitude towards the creation and deployment of such robots. Allow manufacturers to make them however they see fit, and sell or share them with whoever wants them.

Regulation: i.e. adopt a middle-of-the-road attitude towards the creation and deployment of such robots. Perhaps regulate and restrict the manufacture and/or sale of some types; insist upon certain standards for consumer/social protection for others; but do not implement an outright ban.

Criminalisation: i.e. adopt a restrictive attitude towards the creation and deployment of such robots. Ban their use and manufacture, and possibly seek criminal sanctions for those who breach the terms of those bans (such sanctions need not include incarceration or other forms of harsh treatment).

These three stances define a spectrum. At one end, you have extreme forms of liberation, which would enthusiastically welcome any and all sex robots; and at the other end you would have extreme forms of criminalisation, which would ban any and all sex robots. The great grey middle of ‘regulation’ lies in between.





For what it is worth, I favour a middle-of-the-road attitude. I think there could be some benefits to sex robots, and some problems. On balance, I would lean in favour of liberation for most types of sex robots, but might favour strict regulation or, indeed, restrictions, for other types. For instance, I previously wrote an article suggesting that sex robots that cater to rape fantasies or are shaped like children could plausibly be criminalised. I did not strongly endorse that argument (it rested on a certain moralistic view of the criminal law that I dislike); I did not favour harsh punishment for potential offenders; and I would never claim that this policy would be successful in actually preventing the development or use of such technologies. But that’s not the point: we often criminalise things we never expect to prevent. I was also clear that the argument I made was weak and vulnerable to several potential defeaters. My goal in presenting it was not to defend a particular stance, but rather to map out the terrain for future ethical debate.

Anyway, leaving my own views to the side, the question arises: where on this spectrum do the proponents of the Campaign Against Sex Robots fall?

The answer is unclear. Obviously, they are not in favour of liberation, but are they in favour of regulation or of criminalisation? The naming of the campaign suggests something closer to the latter: they are against sex robots. And some of their pronouncements seem to reinforce this more extreme position. For instance, on their ‘About’ page, they say that “an organized approach against the development of sex robots is necessary”. On the same page, they also list a number of relatively unqualified objections to the development of sex robots. These include:

We believe the development of sex robots further sexually objectifies women. 
We propose that the development of sex robots will further reduce human empathy that can only be developed by an experience of mutual relationship. 
We challenge the view that the development of adult and child sex robots will have a positive benefit to society, but instead further reinforce power relations of inequality and violence.

On top of this, in her ‘position paper’, Richardson notes how she is modeling her campaign on the ‘Stop Killer Robots’ campaign. That campaign works to completely ban autonomous robots with lethal capabilities. If Richardson means for that model to be taken seriously, it suggests a similarly restrictive attitude motivates the Campaign Against Sex Robots.

But despite all this, there is some noticeable equivocation and hedging in what the campaign and its spokespeople have to say. Elsewhere on their “About” page they state that:

We propose to campaign to support the development of ethical technologies that reflect human principles of dignity, mutuality and freedom.

And that they wish:

To encourage computer scientists and roboticists to examine their own conscience when asked to provide code, hardware or ideas to develop this field.

Throughout the position paper, Richardson also makes clear that it is the fact that current sex robot proposals are modeled on a ‘prostitute-john’ relationship that bothers her. This suggests that if sex robots could embody an alternative and more egalitarian relationship she might not be so opposed.

On top of all this, Richardson appears to have disowned the more restrictive attitude in her recent statements. In an article about her appearance at the Web Summit, she is reported to have said we should “think about what it means” to create sex robots, not that we shouldn’t make them at all. That said, in the very same article she is reported to have called for a “ban” on sex robots. Maybe the journalist is being inaccurate in the summary (I wasn’t at the event) or maybe this reflects some genuine ambiguity on Richardson’s part. Either way, it seems problematic to me.

Why? Because I think the Campaign Against Sex Robots is currently trading on an equivocation about its core policy aims. Its branding as a general campaign “against” sex robots, along with the more unqualified objections to their development, seems to suggest that the core aim is to completely ban sex robots of all kinds. This provides juicy fodder for the media, but would require a very strong set of arguments in defence. As I hope to make clear below, I don’t think that the proponents of the campaign have met that high standard. On the other hand, the more reserved and implicitly qualified claims suggest a more modest aim: to encourage creators of sex robots to think more clearly about the ethical risks associated with their development, in particular the impact they could have on gender inequality and objectification. This strikes me as a reasonably unobjectionable aim, one that would not require such strong arguments in defence, but one that is nowhere near as interesting. There are many people who already share this modest aim, and I think most people would not need much to be persuaded of its wisdom. But then the campaign would need to be more honest in its branding. It would need to be renamed something like ‘The Campaign for Ethical Sex Robots’.

In any event, until the Campaign provides more clarity about its core policy aims, it will be difficult to know what to make of it.


2. Why Campaign Against Sex Robots in the First Place?
Granting this difficulty, I nevertheless propose to evaluate the main arguments in favour of the campaign, as presented by its proponents. For this, I turn to the “Position Paper” on the Campaign’s website, which was written by Richardson. With the exception of its conclusion (which as I just noted is somewhat obscure) this paper does present a reasonably clear argument “against” sex robots. The argument is built around an analogy with human sex worker-client relationships (or, as Richardson prefers, ‘prostitute-john’ relationships). It is not set out explicitly anywhere in the text of the article. Here is my attempt to make its structure more explicit:


  • (1) Prostitution is bad (e.g. because it reinforces gender inequality, contributes to the objectification of women, denies the subjectivity of the sex worker etc.)
  • (2) Sex robots will be like prostitution in all these relevant bad-making ways (perhaps worse).
  • (3) Therefore, sex robots will be bad.
  • (4) Therefore, we ought to campaign against them.



This is an analogical argument, so it is not formally valid. I have tried to be reasonably generous in this reconstruction. My generosity comes in the vagueness of the premises and conclusions. The idea is that this vagueness allows the argument to work for either the strong or weak versions of the Campaign that I outlined above. So the first premise merely claims that there are several bad or negative features of prostitution; the second premise claims that these features will be shared by the development of sex robots; the first conclusion confirms the “badness” of sex robots; and the second conclusion is tacked on (minus a relevant supporting principle) in order to link the argument to the goals of the Campaign itself. It is left unclear what these goals actually are.

Vagueness of this sort is usually a vice, but in this context I’m hoping it will allow me to be somewhat flexible in my analysis. So in what follows I will evaluate each premise of the argument and see what kind of support they lend the conclusion(s). It will be impossible to divorce this analysis from the practical policy questions (i.e. should we campaign for regulation or criminalisation?). So I will try to evaluate the argument in relation to both strong and weak versions of the policy aims. To remove any sense of mystery from this analysis, I will state upfront that my conclusion will be that the argument is too weak to support a strong version of the campaign. It may suffice to support a weaker version, but this would have to be very modest in its aims, and even then it wouldn’t be particularly persuasive because it ignores reasons to favour the creation of sex robots and reasons to doubt the wisdom of interventionist policies.


3. Is Prostitution Bad?
Let’s start with premise (1) and the claim that prostitution is bad. I have written several pieces about the ethics of sex work. Those pieces evaluate most of the leading objections to the legalisation/normalisation of sex work. Richardson’s article recapitulates many of these objections. It initially expresses some disapproval of the “sex work” discourse, viewing the use of terms like ‘sex work’ and ‘sex worker’ as part of an attempt to legitimate an oppressive form of labour. (I should qualify this: Richardson doesn’t write with the normative clarity of an ethicist; she is an anthropologist, and the detached stance of the anthropologist is apparent at times in her paper, despite the fact that the paper and the Campaign clearly have normative aims.) She then starts to identify various bad-making properties of prostitution. These include things like the prevalence of violence and human trafficking in the industry, along with reference to statistics about the relative youth of its workers (75% are between 13 and 25, according to one source that she cites).

Her main objection to prostitution, however, focuses on the asymmetrical relationship between the prostitute and the client, the highly gendered nature of the employment (predominantly women and some men providing the service for men), and the denial of subjectivity (and corresponding objectification) the commercialisation entails. To support this view, Richardson quotes from a study of consumers of prostitution, who said things like:

‘Prostitution is like masturbating without having to use your hand’, 
‘It’s like renting a girlfriend or wife. You get to choose like a catalogue’, 
‘I feel sorry for these girls but this is what I want’ 
(Farley et al 2009)

Each of these views seems to reinforce the notion that the sex worker is being treated as little more than an object and that their subjectivity is being denied. The client and his needs are all that matters. What’s happening here, according to Richardson, is that the client is elevating his status and failing to empathise with the prostitute: substituting his fantasies for her real feelings. This is a big problem. The failure or inability to empathise is often associated with higher rates of crime and violence. She cites Baron-Cohen’s work on empathy and evil in support of this view.

To sum up, we seem to have two main criticisms of prostitution in Richardson’s article:


  • (5) Prostitution is bad because the (predominantly) female workers suffer from violence at the hands of their clients, can be victims of trafficking and are, often, quite young.

  • (6) Prostitution is bad because it thrives on an asymmetrical relationship between the client and prostitute, denies the subjectivity of the prostitute, compromises the ability of the client to empathise, and reinforces gender inequalities.


Are these criticisms any good? I have my doubts. Two points jump out at me. First, I think Richardson is being extremely selective and biased in her treatment of the evidence in relation to prostitutes and their clients. Second, even if she is right about these bad-making properties, there is no direct line from these properties to the appropriate policy response. In particular, there is no direct line from these properties to the criminalisation or restriction of prostitution. Let me briefly expand on these points.

On the first point, Richardson does cite evidence supporting the view that violence and trafficking are common in the sex work industry, and that clients deny the subjectivity of sex workers. But she ignores countervailing evidence. I don’t want to get too embroiled in weighing the empirical evidence. This is a complex debate, and there are certainly many negative features of the sex work industry. All I would say is that things are not as unremittingly awful as Richardson seems to suggest. Sanders, O’Neill and Pitcher, in their book Prostitution: Sex Work, Policy and Politics offer a more nuanced summary of the empirical literature. For instance, in relation to violence within the industry, they note that while the incidence is “high” and probably under-reported, it tends to be more prevalent for street-based sex workers, and that violence is usually associated with a minority of clients:

While clients are the most commonly reported perpetrators of violence against female sex workers, Kinnell (2006a) suggests that a minority of clients commit violence against sex workers and that often men who attack or murder sex workers frequently have a past history of violence against sex workers and other women….It must be remembered that the majority of commercial transactions take place without violence or incidence. 
(Sanders et al 2009, 44)

On the lack of empathy and the denial of subjectivity, they offer a similarly nuanced view. First, they note how a highly conservative view of sexuality is often embedded in critiques of sex work:

There is generally a taboo about the types of sex involved in a commercial contact. The idea of time-limited, unemotional sex between strangers is what is often conjured up when commercial sex is imagined… The ‘seedy’ idea of commercial sex preserves the notion that only emotional, intimate sex can be found in long-term conventional relationships, and that other forms of sex (casual, group, masturbatory, BDSM, etc.) are unsatisfying, abnormal and also immoral. 
(Sanders et al 2009, 83)

They then go on to paint a complex picture of the attitude of clients toward sex workers:

[T]he argument is that general understandings of sex work and prostitution are based on false dichotomies that distinguish commercial sexual relationships as dissonant from non-commercial ones. Sanders (2008b) shows that there is mutual respect and understanding between regular clients and sex workers, dispelling the myth that all interactions between sex workers and clients are emotionless. There is ample counter-evidence (such as Bernstein 2001, 2007) that indicates that clients are ‘average’ men without any particular or peculiar characteristics and increasingly seeking ‘authenticity’, intimacy and mutuality rather than trying to fulfil any mythology of violent, non-consensual sex. 
(Sanders et al 2009, 84).

I cite this not to paint a rosy and Pollyannaish view of sex work. Far from it. I merely cite it to highlight the need for greater nuance than Richardson seems willing to provide. It is simply not true that all forms of prostitution involve the troubling features she identifies. Furthermore, in relation to an issue like trafficking, while I would agree that certain forms of trafficking are unremittingly awful, there is still a need for nuance. Trafficking-related statistics sometimes conflate general illegal labour migration (i.e. workers moving for better opportunities) with the stereotypical view of trafficking as a modern form of slavery.

This brings me to the second criticism. Even if Richardson is right about the bad-making properties of prostitution, there is no reason to think that those properties are sufficient to warrant criminalisation or any other highly restrictive policy. For instance, denials of subjectivity and asymmetries of power are rife throughout the capitalistic workplace. Many of the consumer products we buy are made possible by, arguably, exploitative international trade networks. And many service workers in our economies have their subjectivity denied by their clients. I often fail to care about the feelings of the barista making my morning coffee. But in these cases we typically do not favour criminalisation or restriction. At most, we favour a change in regulation and behaviour. Likewise, many of the negative features of prostitution could be caused (or worsened) by its criminalisation. This is arguably true of violence and trafficking. It is because sex workers are criminalised that they fail to obtain the protections afforded to most workers and fail to report what happens to them. This is why many sex worker activists — who are in no way unrealistic about the negative features of the job — favour legalisation and regulation. So Richardson will need to do more than single out some negative features of prostitution to support her analogical argument. I have tried to summarise these lines of criticism in the diagram below.




In the end, however, it is not worth dwelling too much on the bad-making properties of prostitution. The analogy is important to Richardson’s argument, but it is not the badness of prostitution that matters. What matters is the claim that these properties will be shared by the development of sex robots. This is where premise (2) comes in.


4. Would the development of sex robots be bad in the same way?
Premise (2) claims that the development of sex robots will replicate and reinforce the bad-making properties of prostitution. There are two things we need to figure out in relation to this claim. The first is how it should be interpreted; the second is how it is supported.

In relation to the interpretive issue, we must ask: Is the claim that, just as the treatment of and attitude toward prostitutes is bad, so too will be the treatment of and attitude toward sex robots? Or is it that the development of sex robots will increase the demand for human prostitution and/or thereby encourage users of sex robots to treat real humans (particularly women) as objects? Richardson’s paper supports the latter interpretation. At the outset, she states that her concern about sex robots is that they:

[legitimate] a dangerous mode of existence where humans can move about in relations with other humans but not recognise them as human subjects in their own right. 
(Richardson 2015)

The key phrase here seems to be “in relations with other humans”, suggesting that the worry is about how we end up treating one another, not how we treat the robots themselves. This is supported in the conclusion where she states:

In this paper I have tried to show the explicit connections between prostitution and the development and imagination of human-sex robot relations. I propose that extending relations of prostitution into machines is neither ethical, nor is it safe. If anything the development of sex robots will further reinforce relations of power that do not recognise both parties as human subjects. 
(Richardson 2015)

Again, the emphasis in this quote seems to be on how the development of sex robots will affect inter-human relationships. Let’s reflect this in a modified version of premise (2):


  • (2*) Sex robots will add to and reinforce the bad-making properties of prostitution (i.e. they will encourage us to treat one another with a lack of empathy and exacerbate existing gender/power inequalities).


How exactly is this supported? As best I can tell, Richardson supports it by referring to the work of David Levy and then responding to a number of counter-arguments. In his book Love and Sex with Robots, David Levy drew explicit parallels between the development of sex robots and prostitution. The idea is that the relationship between a user and his/her sex robot would be akin to the relationship between a client and a prostitute. Levy was quite explicit about this and spent a good part of his book looking at the motivations of those who purchase sex and how those motivations might transfer onto sex robots. He was reasonably nuanced in his discussion of this literature, though you wouldn’t be able to tell this from Richardson’s article (for those who are interested, I’ve analysed Levy’s work previously). In any event, the inference Richardson draws from this is that the development of sex robots is proceeding along the lines that Levy imagines, and hence we should be concerned about its potential to reinforce the bad-making properties of prostitution.


  • (9) Levy models the development of sex robots on the relationship between clients and prostitutes; therefore, it is likely that the development of such robots will add to and reinforce the bad-making properties of prostitution.


I have to say I find this to be a weak argument, but I’ll get back to that later because Richardson isn’t quite finished with the defence of her view. She recognises that there are at least two major criticisms of her claim. The first holds that if robots are not persons (and for now we will assume that they are not) then there is nothing wrong with treating them as objects/things which we can use for our own pleasure. In other words, the technology is a morally neutral domain in which we can act out our fantasies. The second criticism points to the potentially cathartic effect of these technologies. If people act out negative or violent sexual fantasies on a robot, they might be less inclined to do so to a real human being. Sex robots may consequently help to prevent the bad things that Richardson worries about.


  • (10) Sex robots are not persons; they are things: it is appropriate for us to treat them as things (i.e. the technology is a morally neutral domain for acting out our sexual fantasies)
  • (11) Use of sex robots could be cathartic, e.g. using the technology to act out negative or violent sexual fantasies might stop people from doing the same thing to a real human being.


Richardson has responses to both of these criticisms. In the first instance, she believes that technology is not a value-neutral domain. Our culture and our norms are reflected in our technology. So we should be worried about how cultural meaning gets incorporated into our technology. Furthermore, she has serious doubts about the catharsis argument. She points to the historical relationship between pornography and prostitution. Pornography has become widely available, but this has not led to a corresponding decline in prostitution nor, in the case of child pornography, in the abuse of real children. On the contrary, prostitution actually appears to have increased alongside the increased availability of pornography. The same appears to be true of the relationship between sex toys/dolls and prostitution:

The arguments that sex robots will provide artificial sexual substitutes and reduce the purchase of sex by buyers is not borne out by evidence. There are numerous sexual artificial substitutes already available, RealDolls, vibrators, blow-up dolls etc., If an artificial substitute reduced the need to buy sex, there would be a reduction in prostitution but no such correlation is found. 
(Richardson 2015)

In other words:


  • (12) Technology is not a morally neutral domain: societal values and ethics are inflected in our technologies.
  • (13) There is no evidence to suggest that the cathartic argument is correct: prostitution has not decreased in response to the increased availability of pornography and/or sex toys.



Is this a robust defence of premise (2)? Does it support the overall argument Richardson wishes to make? Once again, I have my doubts. Some of the evidence she adduces is weak and, even if it is correct, it in no way supports a strongly restrictive approach to the development of sex robots. At best, it supports a regulative approach. Furthermore, in adopting that more regulative approach we need to be sensitive both to the merits and demerits of this technology and to the costs of any proposed regulative strategy. This is something that Richardson neglects because she focuses almost entirely on the negative. In this vein, let me offer five responses to her argument, some of which target her support of premise (2*), others of which target the relationship between any putative bad-making properties of sex robots and the need for a ‘campaign’ against them.

First, I think Richardson’s primary support for premise (2*) — viz. that Levy’s prostitute-john model is reflected in the development of sex robots — is weak. True, Levy is a pioneer in this field and may have a degree of influence (I cannot say for sure). But that doesn’t mean that all sex robot developers have to adopt his model. If we are worried about the relationship between the sex robot user and the robot, we can try to introduce standards and regulations that reflect a more positive set of sexual norms. For instance, the makers of Roxxxy (billed as the world’s first sex robot) claim to include a personality setting called ‘Frigid Farah’ with their robot. Frigid Farah will demonstrate some reluctance to the user’s sexual advances. You could argue that this reflects a troubling view of sexual consent: that resistance is not taken seriously (i.e. that ‘no’ doesn’t really mean ‘no’). But you could try to regulate against this and insist that every sex robot be required to give positive, affirmative signals of consent. This might reflect and reinforce a more desirable attitude toward sexual consent. And this is just an illustration of the broader point: that sex robots need not reflect negative social attitudes toward sex. We could demand and enforce a more positive set of attitudes. Maybe this is all Richardson really wants her campaign to achieve, i.e. to change the models adopted in the development of sex robots. But in that case, she is not really campaigning against them; she is campaigning for a better version of them.

Second, I think it is difficult to make good claims about the likely link between the use of a future technology like sex robots and actions toward real human beings. In this light, I find her point about the correlation between pornography and an increase in prostitution relatively unpersuasive. Unlike her, I don’t believe sex work is unremittingly bad and so I am not immediately worried about this correlation. What would be more persuasive to me is whether there was some correlation (and ultimately some causal link) between the increase in pornography/prostitution and the mistreatment of sex workers. I don’t know what the evidence is on that, but I think there is some reason to doubt it. Again, Sanders et al discuss ways in which the mainstreaming and legalisation of prostitution is sometimes associated with a decrease in mistreatment, particularly violence. This might give some reason for optimism.

A better case study for Richardson’s argument would probably be the debate about the link between pornography (adult hardcore or child) and real-world sexual violence/assault (toward adults or children). If it can be shown that exposure to pornography increases real-world sexual assault, then maybe we do have reason to worry about sex robots. But what does that evidence currently say? I reviewed the empirical literature in my article on robotic rape and robotic child sexual abuse. I concluded that the evidence at the moment is relatively ambiguous. Some studies show an increase; some show a decrease; and some are neutral. I speculated that we may be landed in a similarly ambiguous position when it comes to evidence concerning a link between sex robot usage and real-world sexual assault. That said, I also speculated that sex robots may be quite different to pornography: there may be a more robust real-world effect from using a sex robot. It is simply too early and too difficult to tell. Either way, I don’t see anything in this to support Richardson’s moral panic.

Third, if the evidence in relation to sex robot usage does end up being ambiguous, then I suspect the best way to argue against the development of sex robots is to focus on the symbolic meaning that attaches to their use. Richardson doesn’t seem to make this argument (though there are hints). I explored it in my paper on robotic rape and robotic child sexual abuse, and others have explored it in relation to video games and fiction. The idea would be that a person who derives pleasure from having sex with a robot displays a disturbing moral insensitivity to the symbolic meaning of their act, and this may reflect negatively on their moral character. I suggested that this might be true for people who derive sexual pleasure from robots that are shaped like children or that cater to rape fantasies. The problem here is not to do with the possible downstream, real-world consequences of this insensitivity. The problem has to do with the act itself. In other words, the argument is about the intrinsic properties of the act; not its extrinsic, consequential properties. This is a better argument because it doesn’t force one to speculate about the likely effects of a technology on future behaviour. But this argument is quite limited. I think it would, at best, apply to a limited subset of sex robot usages, and probably would not warrant a ban or, indeed, campaign against any and all sex robots.

Fourth, when thinking about the appropriate policy toward sex robots, it is important that we weigh the good against the bad. Richardson seems to ignore this point. Apart from her references to the catharsis argument, she nowhere mentions the possible good that could be done by sex robots. My colleague Neil McArthur has looked into some of these possibilities. There are several arguments that could be made. There is the simple hedonistic argument: sex robots provide people with a way of achieving pleasurable states of consciousness. There is the distributive argument: for whatever reason, there are people in the world today who lack access to certain types of sexual experience, sex robots could make those experiences (or, at least, close approximations of them) available to such people. This type of argument has been made in relation to the value of sex workers for persons with disabilities. Indeed, there are charities set up that try to link persons with disabilities to sex workers for this very reason. There is also the argument that sex robots could ameliorate imbalances in sex drive between the partners in existing relationships; or could add some diversity to the sex lives of such couples, without involving third parties (and the potential interpersonal strife to which they could give rise). It could also be the case that sex robots allow for particular forms of sexual self-expression to flourish, and so, in the interests of basic sexual freedom, we should permit it. Finally, unlike Richardson, we shouldn’t completely discount the possibility of sex robots reducing other forms of sexual harm. This is by no means an exhaustive list of positive attributes. It simply highlights the fact that there is some potential good to the technology and this must be weighed against any putative negative features when determining the appropriate policy.

Fifth, and finally, when thinking about the appropriate policy you also need to think about the potential costs of that policy. We might agree that there are bad-making properties to sex robots, but it could be that any proposed regulatory intervention would do more harm than good. I can see plausible ways in which this could be true for regulatory interventions into sex robots. Regulation of pornography, for instance, has historically involved greater restrictions on pornography from sexual minorities (e.g. gay and lesbian porn). Regulatory intervention into sex robots may end up doing the same. I think it is particularly important to bear this in mind in light of Sanders et al’s comments about stereotypical views of unemotional commercialised sex feeding into prohibitive policies. It may also be the case that policing the development and use of sex robots requires significant resources and significant intrusions into our private lives. I’m not sure that we should want to bear those costs. Less intrusive regulatory policies — e.g. ones that simply encourage manufacturers to avoid problematic stereotypes or norms in the construction of sex robots — might be more tolerable. Again, maybe that’s all Richardson wants. But she needs to make that clear and to avoid simply emphasising the negative.




5. Conclusion
This post has been long. To sum up, I find the Campaign Against Sex Robots puzzling and problematic. I do so for three main reasons:

A. I think the current fanfare associated with the Campaign stems from its own equivocation regarding its core policy aims. Some of the statements by its members, as well as the name of the campaign itself, suggest a generalised campaign against all forms of sex robots. This is interesting from a media perspective, but difficult to defend. Some other statements suggest a desire for more ethical awareness in the creation of sex robots. This seems unobjectionable, but a lot less interesting and in need of far more nuance. It would also necessitate some re-branding of the Campaign (e.g. to ‘The Campaign for Ethical Sex Robots’).

B. The first premise of the argument in favour of the campaign focuses on the bad-making properties of prostitution. But this premise is flawed because it fails to factor in countervailing evidence about the experiences of sex workers and the attitudes of their clients, and because, even if it were true, it would not support a generalised campaign against sex work. Indeed, sex worker activists often argue the reverse: that the bad-making properties of prostitution are partly a result of its criminalisation and restriction, and not intrinsic to the practice itself.

C. The second premise of the argument focuses on how the bad-making properties of prostitution might carry over to the development of sex robots. But this premise is flawed for several reasons: (i) it is supported by reference to the work of one sex robot theorist and there is no reason why his view must dominate the development process; (ii) it relies on dubious claims about the likely causal link between the use of sex robots and the treatment of human beings; (iii) it fails to make the strongest argument in support of a restrictive attitude toward sex robots (the symbolic meaning argument), but even if it did, that argument would be limited and would not lend support to a general campaign; (iv) it fails to consider the possible good-making properties of sex robots; and (v) it fails to consider the possible costs of regulatory intervention.

None of this is to suggest that we shouldn’t think carefully about the ethics of sex robots. We should. But the Campaign Against Sex Robots does not seem to be contributing much to the current discussion.