Monday, January 30, 2017

Episode #18: Jonathan Pugh on Bio-Conservatism and Human Enhancement


In this episode I talk to Jonathan Pugh about bio-conservatism and human enhancement. Jonny is a Postdoctoral Research Fellow in Applied Moral Philosophy at the University of Oxford, working on the Wellcome Trust-funded project "Neurointerventions in Crime Prevention: An Ethical Analysis". His new paper, written with Guy Kahane and Julian Savulescu, 'Bio-Conservatism, Partiality, and The Human Nature Objection to Enhancement', is due out soon in The Monist.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (via RSS).


Show Notes


  • 0:00 - introduction
  • 2:00 - what is the nature of human enhancement? – the functionalist and welfarist accounts/models
  • 10:30 - bio-conservative opposition to enhancement – evaluative and epistemic approaches, the naturalistic fallacy
  • 19:00 - Cohen’s conservatism – intrinsic value – personal and particular valuing – art and pets
  • 30:30 - personal values and bio-enhancement
  • 40:30 - the partiality problem – who would you save from the river? Value-based partiality and discrimination
  • 54:00 - species bias, human prejudice, partiality, family and nationalism – Bernard Williams, John Cottingham, Thomas Hurka, Samuel Scheffler, genetic enhancement
  • 1:03:00 - should human enhancement be opposed on the grounds of bio-conservatism? – biological enhancement in the context of other social and technical changes – is conservatism a foundational moral principle?
  • 1:11:00 - conclusion


Relevant Links


Monday, January 23, 2017

Understanding the Experience Machine Argument




The Experience Machine is Robert Nozick’s classic thought experiment about the importance of being connected to reality. It went through several iterations in his work, but its mature expression can be found in his 1989 book The Examined Life:

The Experience Machine: “Imagine a machine that could give you any experience (or sequence of experiences) you might desire. When connected to this experience machine, you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things, how they “feel from the inside”. You can program your experiences for…the rest of your life. If your imagination is impoverished, you can use the library of suggestions extracted from biographies and enhanced by novelists and psychologists. You can live your fondest dreams “from the inside”. Would you choose to do this for the rest of your life?…Upon entering you will not remember having done this; so no pleasures will get ruined by realizing they are machine-produced.”
(Nozick 1989, 104)
So would you? Nozick suggests that most would not, and there is some evidence to suggest that his intuitive reaction to it is widely shared. Most of us seem to have the sneaking suspicion that if what we are doing turned out to be ‘fake’ — if we were living in some ‘Truman Show’-like world — our lives would be denuded of something important.

But what is the actual argument that emerges from our consideration of the Experience Machine? And what conclusions are we entitled to draw from it? These are questions taken up by Ben Bramble’s article ‘The Experience Machine’, which recently appeared in Philosophy Compass. This post is my review of that article.


1. Reconstructing the Experience Machine Argument
Nozick thought that the Experience Machine provided an argument against hedonism. Most have agreed. Indeed, Bramble suggests that the influence of the Experience Machine argument is such that there are few contemporary defenders of hedonism.

Hedonism is the view that the only thing that makes a life good or bad for an individual is the extent to which that individual experiences pleasure and pain. Or, to put it another way, it is the view that subjective experiences of pleasure and pain are the ultimate bearers of value — that the personal value of anything else is derived from or reducible to those mental states. Hedonism can be contrasted with other theories of well-being, such as desire satisfaction theories or objective list theories. According to the former, what matters for the individual is that their desires are satisfied (even if satisfying them brings no pleasure); according to the latter, what matters is whether the individual achieves or participates in certain objectively good states of affairs.

Nozick claims that if hedonism is true, then we should all plug into the Experience Machine. This would be the best thing for us because it would give us the best subjective experiences. The fact that we don’t think it is the best thing for us provides some evidence against hedonism. Bramble suggests that we can, consequently, reconstruct Nozick’s argument in the following way:


  • (1) Plugging into the experience machine would not be best for one.
  • (2) Hedonism entails that plugging into the experience machine would be best for one.
  • (3) Therefore, hedonism is false.


We’ll spend the remainder of the post evaluating various objections to the two premises of this argument. Before we do so, there is an important interpretive point to be made. Some people think that Nozick’s defence of premise (1) rests on an appeal to our subjunctive preferences, i.e. an appeal to what we would prefer to do if we were confronted by the choice to plug into the machine. As a result, these people think that Nozick’s argument presupposes a desire satisfaction theory of well-being. Bramble argues that this interpretation is wrong. Nozick is not really interested in what we would do if confronted by the choice. He is using the thought experiment as an intuition pump, i.e. as a way of prompting us to see that plugging into the machine would not be best. This interpretation is supported by what Nozick actually says in his writings.


2. Evaluating Premise One
Most of the critical discussions of Nozick’s argument have focused on premise one. This is unsurprising since premise one is what the thought experiment is all about. Bramble suggests that there are four major lines of criticism. Here’s the first:


  • (4) The intuition against plugging in is, in fact, consistent with hedonism because it is based on a reasonable fear of catastrophe.


In other words, people don’t resist plugging in because they think that subjective experiences are not the ultimate determinants of a good life; people resist plugging in because they think something will go wrong and that this will affect the experiences they have while plugged in. For example, people might reasonably fear that “the machine might malfunction, that the premises on which it is kept might be overrun by fundamentalist zealots, that the scientists running the machine might turn evil, etc.” (Bramble 2016, 139).

There is one basic response to this. It is to reiterate the point made above that the thought experiment is not really about what people would choose to do if confronted with the possibility of plugging in. It’s about getting people to see that the connection to reality is somehow important to the good life. Thus, you can run the thought experiment again and try to remove all potential sources of reasonable fear. If people still object to plugging in in this scenario, the argument stands. Indeed, some philosophers go so far as to suggest that you can modify the thought experiment entirely so that it’s not about choice at all. Instead, we can ask people to compare the lives of two experientially identical individuals, one of whom lives in the experience machine and the other of whom does not. If you think the life of the latter is somehow better or more valuable than the former, then the argument still works. Bramble, however, is less convinced by this modified version of the thought experiment because he doesn’t share the intuition underlying it. To summarise:


  • (5) You can modify the thought experiment to remove sources of reasonable fear, or run an alternative version where you ask people to compare two experientially identical lives, one of which is lived in an experience machine and one of which is not. If people still prefer the non-plugged-in life, premise one holds.


These modifications of the thought experiment do reveal a more general concern, which is the second major line of criticism:


  • (6) It is very difficult to construct a thought experiment in which people have a fine-grained intuition about hedonism: it is likely that their thinking about the scenario is contaminated by other moral/normative considerations.


So, for example, when people are asked whether they would plug in or not they might think about their obligations to friends and families and how they would be breaching those obligations by abandoning reality. Consequently, it is difficult to enable people to have an intuition that is solely about well-being.

Nozick himself had a response to this:


  • (7) In imagining the case, you can also imagine that other people “can plug in to have the experiences they want, so there’s no need to stay unplugged to serve them”.


Bramble says little more about this line of argument, but I will. A concern about family and friends might be thought to provide another argument against hedonism, since it suggests that we have non-experiential intuitions about what the good life consists in. This is something that Samuel Scheffler has explored in his various doomsday thought experiments. These thought experiments ask us to imagine that we are living in a world that is going to end thirty days after our deaths, or a world in which the entire human race is gradually becoming infertile. Scheffler suggests that our aversion to these scenarios reveals a strong altruistic element to our conception of well-being. So what might be emerging from these fears about family and others is the intuition that subjective experience is not the only thing that matters. Nozick’s modified thought experiment does not eliminate this intuition; rather, it accommodates it. Of course, that’s all to the good as far as he is concerned. His goal, after all, is to disprove hedonism. To that extent, the affinities between Nozick’s experience machine and Scheffler’s doomsday scenarios seem worth exploring.

The third line of criticism is similar to the first, but it ends up pointing in an interesting direction:


  • (8) Our unwillingness to plug in might be due to an irrational fear, revulsion or bias.


In other words, unlike the reasonable fear of catastrophe previously discussed, our resistance stems from an unreasonable fear or bias. As Bramble puts it: “Perhaps we’d refuse to plug in because we’d be too scared of having wires inserted into our skull…[or because of] an irrational tendency to prefer the way things are now to new or different ways” (2016, 139). What’s more, there is some evidence to suggest that a status quo bias of the latter type affects how people think about the Experience Machine. Experimental philosophers like Felipe De Brigard have run alternative versions of the thought experiment in which people are told that they have been living in an experience machine up to this point in their lives and are asked whether they would like to disconnect. In those cases, people are reluctant to do so, suggesting a bias toward the status quo.

Bramble thinks that this criticism again misses the point of the thought experiment. It is not about what we would choose to do but about what would be best for us:


  • (9) “Nozick could gladly accept that an important part of the reason we would be unwilling to plug in is that we have an irrational fear, revulsion or bias…His gripe with hedonism stands: it does not seem best for someone to plug in to the machine.” (Bramble 2016, 139)


This response assumes that the people drawing attention to the status quo bias are making the interpretive mistake that we mentioned earlier. There is, however, a much better way to understand their criticism:


  • (10) The Debunking Problem: The fact that our intuitions about the Experience Machine are affected by things like status quo bias gives us reason not to trust those intuitions.


This line of objection follows the standard debunking argument playbook (something I have discussed at much greater length before). It suggests that there is no reason to think that our intuitions in the Experience Machine case track the axiological truth (the truth about what is good and bad). Instead, our intuitions are the products of social/behavioural conditioning or evolutionary hardwiring.

Bramble thinks this is the most serious criticism of premise (1), but he suggests that there are three challenges to it:


  • (11) Proponents of debunking need to explain how our intuitions about well-being got to be affected in this way: how do our conditioned or hardwired preferences come to shape our pre-theoretical feeling that contact with reality is an important part of well-being?



  • (12) Proponents of debunking need to be challenged to identify some uncontaminated intuitions. Since, presumably, theories of well-being ultimately rest on some intuitive beliefs about what makes for the good life, we need to figure out which ones we can trust.



  • (13) Proponents of debunking need to explain how some people can have the intuition that connecting to reality is important without the deeper desire or belief that connecting to reality is intrinsically important. (Bramble gives himself as an example of such a person.)


For what it’s worth, I find (12) to be the most persuasive of these challenges and it gets to the heart of the debunking debate. I’m less convinced by the other two. I’m not sure how highly specified debunking theories need to be in order to be persuasive (11) and I think the debunking project can probably tolerate a few outliers (13).



3. Evaluating Premise Two
Relatively few people challenge the second premise of the Experience Machine Argument, but there are some things to be said against it. They all boil down to pretty much the same thing: there are reasons to think that your subjective well-being may not be best served by plugging into the machine. These criticisms tend to be empirical/contingent in nature. They are not really ‘in principle’ objections. They dispute Nozick’s claim that life in the machine would be experientially identical to (or better than) life outside.

The first objection to premise (2) comes from the work of Fred Feldman. He suggests that a truth-adjusted hedonism might be plausible — i.e. a version of hedonism in which being connected to reality adds more subjective well-being than being disconnected — and that this would undermine the claim that we would be subjectively better off if plugged into the machine:


  • (14) Truth-adjusted hedonism is likely to be true, i.e. we are likely to get more subjective well-being from pleasures taken in true things than in false things.
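In rough symbols, the contrast might be glossed as follows. This formalisation is my own crude rendering of the idea, not Feldman’s own formulation:

```latex
% My own crude gloss, not Feldman's formulation.
% p_i = the intensity of the i-th pleasure in a life L;
% t_i = a truth-adjustment weight on that pleasure.
W_{\mathrm{plain}}(L) = \sum_i p_i
\qquad
W_{\mathrm{truth}}(L) = \sum_i t_i \, p_i,
\quad \text{where } t_i \text{ is higher when the object of pleasure } i \text{ actually obtains}
```

On the truth-adjusted view, a pleasure taken in a real achievement counts for more than an experientially identical pleasure taken in a simulated one.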


There are a few problems with this response. The most obvious is that truth-adjusted hedonism could well be false. I certainly don’t have any strong intuition in its favour. If two states of being are so subjectively similar that I cannot distinguish them, it is hard to see how the mere fact that one involves a connection to something true could add more experiential well-being than the other. Or, to put it another way, subjective knowledge of the connection to reality is probably the only thing that could make pleasures taken in true things experientially better than pleasures taken in false things. Without that knowledge of truth/falsity, I can’t see how the two could differ. And Nozick explicitly rules out that knowledge in the thought experiment. On top of this, I think the pleasures of virtual reality could, in many instances, be just as ‘true’ as the pleasures of real reality.


  • (15) Truth-adjusted hedonism could be false: intuitively it is hard to see how the link to reality could make an experience subjectively superior if one is unaware of it.


Another argument is that plugging in would compromise the very subject of the experiences, because plugging in amounts to a form of suicide. Why? Because, in his description of the thought experiment, Nozick himself said that it would require memory erasure to ensure that you didn’t know you were plugged in, and this memory erasure may be a ‘kind of suicide’:


  • (16) Plugging-in would involve a kind of suicide because it would require memory erasure to ensure experiential indistinguishability.


Bramble is dismissive of this objection. He questions whether tampering with someone’s memory in this manner would require anything close to suicide and also suggests that it would be possible to enter the machine without being aware that one’s memory needs to be erased. For example, you could be plugged in by someone else while you were sleeping. You would then wake up in an experientially similar world that just happened to be an elaborate simulation. You would be none the wiser. It’s hard to see how you wouldn’t survive this process.


  • (17) The memory erasure involved is unlikely to require anything akin to suicide and, in any event, one could be plugged in without one’s awareness that this is happening.


This brings us to the final line of criticism. This one tries to make the case for a distinct class of pleasures that are only possible in the real world:


  • (18) There are certain pleasures that are only possible in the real world.


This might be the most promising criticism of premise (2), but the devil is in the detail. What might these unique, reality-only pleasures be? Bramble considers two possibilities: (i) the pleasures that come from exercising free will (if you are in a simulated environment you cannot really exercise free will) and (ii) the pleasures of love and friendship. The problem in both instances is that it is not clear why these pleasures are ruled out in a simulated environment. For example, there are many accounts of free will, some of which would allow for free will in a virtual world. It would be too time-consuming to consider each particular account now, but the point can be made with a commonsense example: I play video games all the time and they certainly seem to allow for exercises of genuine free will (whatever it might be). I exercise free will in a constrained game environment, and the consequences are limited, but it is no less real for all that. Additionally, there is no obvious reason why one couldn’t experience friendship and love in a virtual world. Another problem with both examples is that whatever pleasures they might bring would have to be so powerful and so unique that they trumped or outweighed the subjective pleasures that would be possible inside the experience machine.


  • (19) The pleasures that are alleged to only be possible in the real world might turn out not to be.



  • (20) The pleasures in question would have to be so powerful and unique that they could not be compensated for by the pleasures possible inside the experience machine.



4. Conclusion
Okay, that brings me to the end of this post. To briefly recap, the Experience Machine argument is supposed to provide a case against hedonism. The argument is premised on the belief that (intuitively) plugging in would not be the best thing for us, and yet hedonism entails that plugging in would be the best thing for us. There is an obvious tension between these two claims, so something has to give. As Bramble has shown, however, there are reasons to challenge both premises, although each of these challenges has its problems.

I should close by noting that Bramble himself is a defender of hedonism. He has an interesting paper written in defence of hedonism that just came out in the journal Ergo. It takes a look at three leading objections to hedonism, including the experience machine argument. I recommend checking it out.

Wednesday, January 18, 2017

Algorithmic Freedom as Non-Domination




Freedom is important. Modern liberal societies are built around it. Citizens fight for their fundamental freedoms — rights to speech, thought, religious expression, education, work and so on — and governments are evaluated relative to their success in securing these freedoms. But liberal freedom is a highly contested concept. What does it really mean to say that we are ‘free’? When is our freedom undermined or compromised? Different theories say different things. For example, Quentin Skinner — an intellectual historian who has focused on the evolution of freedom — has developed a ‘genealogy of freedom’ which maps out three major concepts of freedom that have been articulated since the mid-17th century, all of which have sub-concepts, and all of which vary in their answer to the questions just posed.

This diversity of opinion is particularly important when we consider the practical implications of our commitment to freedom. One practical implication I am particularly keen on considering is how freedom is affected in an age of algocratic governance. If digital technologies are going to be surveilling my every move, if they are going to be hoovering up my data and mining it for useful information, and if on foot of this they are going to be recommending particular courses of action to me or, even more extreme, if they are going to be making decisions on my behalf, then I would like to know if this undermines my freedom or not.

Intuitively it seems like increased algocratic governance cannot fail to affect my freedom in some way. But it turns out that traditional liberal theories of freedom are not well-equipped to explain why. That, at any rate, is the thesis presented in Hoye and Monaghan’s article ‘Surveillance, Freedom and the Republic’, which appeared a couple of years back in the European Journal of Political Theory. Although ostensibly about digital surveillance, their article is really (as we shall see) about the freedom-undermining consequences of algocratic governance more generally.

The thesis they present has three parts to it. First, they argue that existing approaches to understanding the impact of algocratic governance on freedom are limited due to their state-centric focus. Second, they argue that the republican concept of freedom as non-domination (FND) offers a better way of understanding the impact of algocratic governance on freedom. And finally, they argue that even the republican concept is limited in its ability to understand the phenomenon and needs to be modified to address the freedom-undermining consequences of algocratic governance.

I want to go through each part of this thesis in the remainder of the blogpost. I do so with a general caveat. What I am about to say is heavily influenced by Hoye and Monaghan’s article, but I found parts of their work difficult to interpret because it was caught up in disputes within surveillance scholarship — disputes in which I am not invested. As a result, my presentation of their argument shifts focus to what I care about, namely: different ways of understanding the threat of algocracy.


1. Surveillance and Algocracy: How are they linked?
Before I get into the meat of Hoye and Monaghan’s argument, I should say something about the connection between surveillance and algocracy. I’m interested in the latter but their article is explicitly about the former. To be more precise, it is about mass digital surveillance and its political consequences. How can I justify interpreting their argument as being about algocratic governance and not surveillance?

The answer is that I don’t see these as separate topics. Indeed, I think that most contemporary scholarship on mass digital surveillance is best understood as part of a more general scholarly effort to understand the consequences of algocratic governance. Elsewhere, I have defined algocratic governance as the use of data-mining technologies to nudge, manipulate, constrain and structure human behaviour. I take a broad view of the phenomenon, arguing that algocratic modes of governance can be public and/or private in nature and origin. In fact, I think many of the more interesting forms of algocratic governance are private in origin — developed from technologies created by private enterprise — which sometimes get fed back into public governance systems. The predictive policing and recidivism algorithms used in criminal justice systems are good examples of this.

The data-mining technologies that undergird algocratic modes of governance are dependent on surveillance. In order for the systems to work, they must have a seedbed of data upon which to work their statistical magic. In the modern era, this seedbed of data is increasingly ‘big’ in nature. More and more of our lives are becoming digitised and as this happens more and more of what we do is being digitally surveilled. The information collected is brought together and made sense of by data-mining technologies. Oftentimes there is no specific rationale or intention behind the collection of data. Institutions have developed a ‘collect everything’ mentality because they hope that machine learning systems will find useful patterns in the data after the fact.

There is, consequently, an intimate link between algocratic governance and surveillance. The connection is not logical or necessary: it is possible to construct algorithmic governance systems that do not rely on mass surveillance. But the link does seem to be deeply embedded in the technologies and philosophies associated with algocratic governance in the world today. Thus, I think that discussions of mass surveillance and its implications for freedom are intimately tied into discussions about algocracy and freedom.

What’s more, I think it is important to highlight the link between surveillance and algocracy. This is because many of the freedom-related concerns about surveillance only make sense if we assume that the surveillance is tied into broader systems of governance. In other words, it is because the data collected could be used to nudge, manipulate, structure and constrain your behaviour that most concern arises. If all the data was being collected and shipped off to a distant galaxy — with no hope of it ever being used in governance-related decisions — the debate would be almost moot (though some might insist that there is an intrinsic harm to violations of privacy).


2. Problems with Traditional Understandings of the Phenomenon
That’s all by way of introduction. Now let’s get into the heart of the matter: does algocratic governance undermine freedom? Hoye and Monaghan express some dissatisfaction with the existing debate on this question. As they see it, there are two major normative schools of thought on the topic:

Marxist social control: This school of thought conceives of surveillance as a form of coercive social control instituted from the ‘top-down’. People are monitored by the state in order to ‘effect discipline’ and to maintain the capitalist order.

Liberal freedom as non-interference: This school of thought is wedded to the traditional liberal conception of freedom as non-interference, i.e. the absence of actual interference in how one acts and behaves. It conceives of state surveillance as a potential form of interference in privacy rights (which are essential for liberal freedom) and demands regimes of oversight and transparency. It also focuses on the concept of informed consent as a means to waive one’s privacy rights.

While both of these schools of thought have their merits, Hoye and Monaghan are dissatisfied with them because they tend to fetishise (my word) the state as a potential source of coercion and interference. As such, they tend to ignore (or downplay) the possibly more pervasive forms of algocratic governance that are private or social in nature (think Facebook, Google, Amazon, Apple etc.), and to accept at face value the consent these governance tools tend to secure from their subjects. They think a third school of thought does a better job of covering these more pervasive forms of algocratic governance:

Foucauldian Governmentality: This school of thought, inspired by the work of Michel Foucault, looks beyond state-centric modes of social control. It suggests that the dominant social ideology (some form of neoliberalism) involves us all in the process of disciplining and controlling our bodies (and selves) through surveillance and governance. It conceives of algocratic governance as just the latest manifestation/tool for implementing this ideology.

The Foucauldian school of thought does a great job of identifying non-state sources of interference and of highlighting the role of voluntary acquiescence in algocratic governance, but it too suffers from defects. A major one, in Hoye and Monaghan’s view, is that it tends to be non-normative in nature and to reject discussions of ‘freedom’. So they wonder whether there is an alternative model of freedom that is normative in nature and that does justice to the pervasive (and sometimes consensual) nature of algocratic governance. They think republican freedom as non-domination can do the trick.


3. Freedom as Non-Domination
The republican theory of FND has ancient roots but is best known through its modern revival. The work of Philip Pettit has been particularly influential in this regard. In his 1997 book Republicanism, and in several subsequent books and papers, he has gone to great lengths to articulate and defend the theory of FND.

He usually does this by distinguishing it from the more traditional liberal theory of freedom as non-interference. The traditional theory focuses on actual sources of interference in one’s life, not on potential sources of interference. So, to say that your freedom is undermined according to the principle of non-interference, you have to point to some actual interfering act: somebody threatening you with a knife or making a coercive offer; somebody actually doing something to affect your rights/interests without your consent.

Pettit finds this unsatisfactory because it cannot explain our intuitions about the lack of freedom in certain cases where there is no actual interference in what someone does, but there is some lurking potential for interference. He has several go-to examples of this. The most well-known is the ‘happy slave’ thought experiment, but I’m sick of repeating this so I’ll opt for another of Pettit’s examples, taken from Henrik Ibsen’s 1879 play A Doll’s House.

The play’s main characters are Torvald and his wife Nora. Torvald is a young successful banker. By law, he has considerable power over his wife. He could, if he so desired, prevent her from doing many things. But he doesn’t exert those powers. He is happy for Nora to do whatever she desires, within certain limits. Pettit puts it like this:

Torvald has enormous power over how his wife can act, but he dotes on her and denies her nothing — nothing, at least, within the accepted parameters of life as a banker’s wife…She has all the latitude that a woman in late nineteenth-century Europe could have wished for.
(Pettit 2014, xiv)

But is she free? Pettit thinks not. The problem is that Nora lives under the domination of Torvald. If she wants to do anything outside the accepted parameters of a banker’s wife, she will have to seek his permission. He may not interfere with her actions on a daily basis, but he stands poised to do so if she ever gets out of line. She has to ingratiate herself with him, match her preferences to his, if she is to live a trouble-free life. Pettit argues that this is contrary to true freedom:

To be a free person you must have the capacity to make certain central choices—choices about what religion to practice, whether to speak your mind, who to associate with, and so on—without having to seek the permission of another. You must be able to exercise such basic or fundamental liberties, as they are usually called, without having to answer to any master or dominus in your life. 
(Pettit 2014, xv)
This, then, is the essence of FND. Elsewhere, I have adopted definitions of it as requiring the ‘robust’ absence of interference in your life, i.e. the absence of interference across numerous possible worlds, not just in this world. But there is slightly more to it than that. It also requires a lack of accountability or answerability in certain core areas of life. Put more succinctly, it requires both depth (robust absence of interference) and breadth (across several domains of choice).
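For those who like things compact, the depth and breadth conditions can be given a rough modal gloss. The formalisation is mine, not Pettit’s:

```latex
% A rough modal gloss on FND -- my formalisation, not Pettit's.
% D = the set of core domains of choice (breadth);
% W = the set of relevant nearby possible worlds (depth/robustness).
\mathrm{FND}(a) \iff \forall d \in D \;\; \forall w \in W : \neg \mathrm{ArbInterfere}(w, a, d)
```

That is, an agent enjoys freedom as non-domination just in case, for every core domain of choice and every relevant nearby world, no one arbitrarily interferes with the agent’s choices in that domain.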

From this foundation, Pettit builds a rich political philosophy. He thinks FND gives us a full blueprint for a just society. This just society requires political and social institutions that allow for the absence of dominium (i.e. private power over another’s life) and imperium (i.e. domination by civic or public power). Of course, this isn’t perfectly achievable. In order to remove dominium you often need an elaborate set of civic institutions that create the infrastructure for meaningful FND, and you have to grant this infrastructure powers to intervene in people’s lives. But this is acceptable, according to Pettit, if the interference is non-arbitrary and citizens have some meaningful input into the processes underlying the construction and operation of these institutions.

There is more to it, of course. But this sketch should suffice for present purposes.


4. Algocracy and Freedom as Non-Domination
Why are Hoye and Monaghan enthused by FND? The answer lies in how FND is more sensitive than freedom as non-interference to the ways in which power can undermine freedom. FND does not require actual interferences to conclude that something undermines your freedom. If you live in the shadow of some potential source of interference — if you have to adapt your behaviour to that shadow — your freedom is being compromised.

Furthermore, FND does not fetishise the state as a source of unfreedom in the way that some liberals do. It holds that sources of unfreedom are pervasive, including both the state and private actors, and it sees the state as an essential tool for dismantling private power. The latter view is not unusual in mainstream liberal theory, though it is ignored or downplayed by libertarians/classical liberals; and, compared to mainstream liberal theory, FND has the advantage of recognising more potential sources of private unfreedom.

This is all useful because algocratic governance systems seem to undermine our freedom in a way that is similar to how Torvald undermines Nora’s freedom in A Doll’s House. Where Nora lived in the shadow of Torvald’s will, we live in the shadow of mass surveillance systems. These systems are collecting information about us at all times. This information is used to limit domains of choice, and can also be used to interfere with us at a future date. The systems might be happy to leave us alone so long as we operate within their pre-defined parameters, but as soon as we act outside them — as soon as we start doing things that are unanticipated by the system’s algorithms — we will run into roadblocks. Focusing in particular on the problems arising from state surveillance, Hoye and Monaghan put it like this:

Indiscriminate mass state surveillance is ipso facto dominating as it demands the universal ingratiation of citizens who must follow rules that are barely expressed in law, but manifest culturally as norms of fear and suspicion.

And the same set of concerns clearly applies beyond the state, to private modes of algocratic governance, even if they are formally consented to.

To put the case in argumentative form:


  • (1) Freedom as non-domination requires the robust absence of arbitrary interference across core domains of life.
  • (2) Algocratic modes of governance (specifically those including mass surveillance) bring with them the possibility (if not the actuality) of arbitrary interference in certain core domains of life.
  • (3) Therefore, algocratic modes of governance undermine our freedom.


More would probably need to be said in defence of premise (2) — more examples given of how this dynamic plays out in reality — but this gives a sense of how FND could be effectively used in the algocracy debate.



5. Two Limitations of FND in relation to Algocracy
For all their initial enthusiasm about FND, Hoye and Monaghan think that it has certain limitations when it comes to debates about algocratic modes of governance. These limitations are not immediately obvious from the sketch I just gave. They emerge only when you get into some of the finer details of FND. Still, they could be significant and might require some reshaping of the concept.

The first problem has to do with sources of domination. In his defence of FND, Pettit is clear that not all forms of domination count from a political perspective. It is probably true to say that you live under the domination of the weather, for example. You have to live your life within the parameters it sets down. But that doesn’t mean that domination-by-the-weather is a politically relevant source of domination. You need to avoid the mistake of seeing everything — every physical law or limit — as a source of domination. Pettit usually does this by limiting his focus to sources of domination that are rooted in agents, i.e. humans or corporate bodies that have agential powers.

The problem, as Hoye and Monaghan see it, is that mass surveillance is functionally agentless. The domination does not come from the actions or intentions of any particular agent. Rather, it comes from the surveillance network as a whole:

…one of the distinctive features of surveillance power is that it is functionally agentless. The new power of surveillance functions as though there were no ascribable agents — not because there are no agents within the aggregate, but because the levels of imbrication, secrecy, and redundancy are so high as to make individual agents inconsequential. The network itself effects ingratiation.
(Hoye and Monaghan 2015, 9)

So FND would need to be modified so that it accepted at least some forms of agentless power.

For what it is worth, I am not entirely convinced by this line of argument. Pettit himself has written quite a bit about the possibility of group agency and the conditions that would need to be met in order for a corporate entity to count as an agent. And I, myself, have suggested that algocratic forms of governance meet certain conditions of agency. So I think a more detailed argument is needed here before we conclude that FND needs to be modified. I accept that there may be complex technological systems that are not like agents in any meaningful sense but I wonder how helpful it is to modify FND to accommodate such systems if we have no real control over how they function. Pettit himself suggests that the ability to change the system of dominance is key to it counting from a political perspective.

The other limitation identified by Hoye and Monaghan is a bit more esoteric. As noted above, one feature of Pettit’s theory is that it requires breadth of freedom, i.e. freedom across many domains. Pettit insists that a range of governance structures (public health systems, educational systems, social welfare, etc.) need to be created to facilitate that breadth. One fear that Hoye and Monaghan have is that republicans could view surveillance as an essential tool for ensuring that breadth. After all, these governance structures will need to collect and monitor data about their subjects if they are to ensure robust and broad freedom. Republicans might then think that these systems can help dismantle sources of domination.

Hoye and Monaghan argue that this is wrong. Surveillant power, as they see it, is transversal: it cuts across traditional institutional boundaries. Republicans seem to think that you can create institutions (public and private) that compete with one another to monitor and limit domination, but because surveillance is transversal, it does not respect these institutional divisions. Its transversal nature tends to aggravate the problem of domination:

When everyone can surveil and with relatively low cost of entry, domination is not minimised by competition between institutions and levels of government. It is maximised. 
(Hoye and Monaghan 2015, 13)

I tend to think this is right, but note that it runs deeply contrary to the claims made by sousveillance activists. They think that lowering the cost of entry into surveillance, turning the tools of surveillance back on the governors, will enhance freedom.

Okay, that’s where I am going to leave it. Hoye and Monaghan say a lot more in their paper. They go on to discuss potential solutions to the problem of domination by surveillance networks. But I’m not going to follow that discussion. I just wanted to outline the core argument about algocracy and freedom as non-domination.

Saturday, January 14, 2017

Episode #17: Steve Fuller on Transhumanism and the Proactionary Imperative


[If you like this blog, consider signing up for the newsletter...]

In this episode I talk to Professor Steve Fuller about his sometimes controversial views on transhumanism, religion, science and technology, enhancement and evolution. Steve is Auguste Comte Professor of Social Epistemology at the University of Warwick. He is the author of a trilogy relating to the idea of a ‘post-’ or ‘trans-’human future, all published with Palgrave Macmillan: Humanity 2.0: What It Means to Be Human Past, Present and Future (2011), Preparing for Life in Humanity 2.0 (2012) and (with Veronika Lipinska) The Proactionary Imperative: A Foundation for Transhumanism (2014). Our conversation focuses primarily on the arguments and ideas found in the last book of the trilogy.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (via RSS).



Show Notes


  • 0:00 - introduction 
  • 04:00 - untangling posthumanism and transhumanism via Bostrom, Hayles, Haraway 
  • 21:45 - the relationship between theology, science and technology
  • 39:50 - theological and libertarian rationales of transhumanism 
  • 52:00 - freedom from suffering or a freedom to suffer? – questions of risk, consent and compensation 
  • 1:03:40 - the rehabilitation of eugenics – could it, and should it, be done?
  • 1:13:50 - Darwinism and the intelligent design debate 
  • 1:22:00 - are there limits to transhumanism and enhancement? Homo Sapiens, humanity and morphological freedom 
  • 1:28:00 - conclusion


Relevant Links

Wednesday, January 11, 2017

Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy




[If you like this blog, consider signing up for the newsletter...]

It is a noticeable feature of intellectual life that many people research the same topics, but do so using different conceptual and disciplinary baggage, and consequently fail to appreciate how the conclusions they reach echo or complement the conclusions reached by others.

I see this repeatedly in my work on algorithms in governance. It’s pretty obvious to me now that this is a major topic of scholarly interest, being pursued by hundreds (probably thousands) of academics and researchers, across multiple fields. They are all interested in much the same things. They care about the increasing pervasiveness of algorithmic governance; they want to know how this affects political power, human freedom, and human rights; and they want to mitigate the negative effects and accentuate the positive (if any). And yet, I get a tremendous sense that many of these scholarly groups are talking past each other: packaging their ideas in the conceptual garb that is familiar to their own discipline or that follows from their past scholarly work, and failing to appreciate how what they are saying fits in with what others have said. Perhaps this is just the inevitable result of the academic echo chambers created by institutional and professional networks.

But note that this isn’t just a problem of interdisciplinarity. Many scholars within the same disciplines fail to see the similarities between what they are doing, partly because of the different theories and ideas they use, partly because there is too much work out there for any one scholar to keep up with, and partly because everyone longs to be ‘original’ — to make some unique contribution to the body of knowledge — and avoid accusations of plagiarism. I think this is a shame. Pure plagiarism is a problem, for sure, but reaching the same conclusions from slightly different angles is not. I think that if we were more honest about the similarities between the work we do and the work of others we could advance research and inquiry.

Admittedly this is little more than a hunch, but in keeping with its spirit, I’m trying to find the similarities between the work I have done on the topic of algorithmic governance and the work being done by others. As a first step in that direction, I want to analyse a recent paper by Karen Yeung on hypernudging and algorithmic governance. As I hope to demonstrate, Yeung reaches conclusions in this paper that are similar to the ones I reached in a paper entitled ‘The Threat of Algocracy’, but by using a different theoretical framework she provides important additional insight into the phenomenon I described in that paper.

Allow me to explain.


1. From the Threat of Algocracy to Hypernudging
In the ‘Threat of Algocracy’ I used ideas and arguments drawn from political philosophy to assess the social and political impact of algorithmic governance. I defined algorithmic governance — or as I prefer ‘algocracy’ — as the use of data-mining, predictive and descriptive analytics to constrain and control human behaviour. I then argued that the increased prevalence of algocratic systems posed a threat to the legitimacy of governance. This was because of their likely opacity and incomprehensibility. These twin features meant that people would be less able to participate in and challenge governance-related decisions, which is contrary to some key normative principles of liberal democratic governance. Individuals would be subjects of algorithmic governance but not meaningful creators or controllers of it.

To put it more succinctly, I argued that the increased prevalence of algorithmic governance posed a threat to the liberal democratic order because it potentially reduced human beings to moral patients and denied their moral agency. In making this argument, I drew explicitly on a similar argument from the work of David Estlund about the ‘Threat of Epistocracy’. I also reviewed various resistance and accommodation strategies for dealing with the threat and concluded that they were unlikely to succeed. You can read the full paper here.

What’s interesting to me is that Karen Yeung’s paper deals with the same basic phenomenon — viz. the increased prevalence of algorithmic governance systems — but assesses it using tools drawn from regulatory theory and behavioural economics. The end result is not massively dissimilar from what I said. She also thinks that algorithmic governance can pose a threat to human agency (or, more correctly in her case, ‘freedom’), but by using the concept of nudging — drawn from behavioural economics — to understand that threat, she provides an alternative perspective on it and an alternative conceptual toolkit for addressing it.

I’ll explain the main elements of her argument over the remainder of this post.


2. Design-based Regulation and Algocracy
The first thing Yeung tries to do is locate the phenomenon of algorithmic governance within the landscape of regulatory theory. She follows Julia Black (a well-known regulatory theorist) in defining regulation as:

the organised attempt to manage risks or behaviour in order to achieve a publicly stated objective or set of objectives. 
(Black quoted in Yeung 2016, at 120)

She then identifies two main forms of regulation:

Command and Control Regulation: This is the use of laws or rules to dictate behaviour. These laws or rules usually come with some carrot or stick incentive: follow them and you will be rewarded; disobey them and you will be punished.

Design-based Regulation: This is the attempt to build regulatory standards into the design of the system being regulated, i.e. to create an architecture for human behaviour that ‘hardwires’ in the preferred behavioural patterns.

Suppose we wanted to prevent people from driving while drunk. We could do this via the command and control route by setting down legal limits for the amount of alcohol one can have in one’s bloodstream while driving, by periodically checking people’s compliance with those limits, and by punishing them if they breach those limits. Alternatively, we could take the design-based route. We could redesign cars so that people simply cannot drive if they are drunk. Alcohol interlocks could be installed in every car. This would force people to take a breathalyser test before starting the car. If they fail this test, the car will not start.
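To make the contrast vivid, here is a toy sketch of the two modes in code. This is my own illustration, not Yeung’s; the limit value and function names are invented for the purpose:

```python
# Toy illustration of the two regulatory modes described above.
# BAC_LIMIT is an invented figure, used for illustration only.

BAC_LIMIT = 0.05  # blood-alcohol concentration limit

def command_and_control(bac: float, is_checked: bool) -> str:
    # The rule is external to the activity: violation remains possible,
    # detection is contingent, and compliance is secured by sanction.
    if is_checked and bac > BAC_LIMIT:
        return "punished"
    return "drives away"

def design_based(bac: float) -> str:
    # The rule is hardwired into the artefact: the non-compliant option
    # is simply removed from the driver's choice architecture.
    if bac > BAC_LIMIT:
        return "car refuses to start"
    return "drives away"
```

The difference is structural: under command and control, a drunk driver who evades checking still drives; under the design-based regime, that option never exists in the first place.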

With this conceptual framework in place, Yeung tries to argue that many forms of algorithmic governance — particularly algorithmic decision support systems — constitute a type of design-based regulation. But she makes this argument in a circuitous way by first arguing that nudging is a type of design-based regulation and that algorithmic decision support systems are a type of nudging. Let’s look at both of these claims.


3. Algocracy as Hypernudging
Nudging is a regulatory philosophy developed by Cass Sunstein and Richard Thaler. It has its origins in behavioural economics. I’ll explain how it works by way of an example.

One of the key insights of cognitive psychology is that people are not, in fact, as rational as economists would like to believe. People display all sorts of biases and psychological quirks that cause them to deviate from the expectations of rational choice theory. Sometimes these biases are detrimental to their long-term well-being.

A classic example of this is the tendency to overprioritise the short-term future. It is rational to discount the value of future events to some extent. What happens tomorrow should matter more than what happens next year, particularly given that tomorrow is the gateway to next year: if I don’t get through tomorrow unscathed I won’t be around to appreciate whatever happens next year. But humans seem to discount the value of future events too much. Instead of discounting according to an exponential curve, they discount according to a hyperbolic curve. This leads them to favour smaller sooner rewards over larger later rewards, even when the expected value of the latter is higher than the former. Thus I might prefer to receive 10 dollars tomorrow rather than 100 dollars in a year's time, even though the value of the latter greatly exceeds the value of the former.
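A quick sketch can make the difference concrete. This is my own illustration, not something from Yeung’s paper, and the discount parameters are arbitrary values chosen only to reproduce the preference pattern just described:

```python
# Exponential vs hyperbolic discounting of the $10-tomorrow vs
# $100-in-a-year choice. Parameter values are illustrative only.

def exponential_value(amount: float, delay_days: int, daily_factor: float = 0.999) -> float:
    # Standard exponential discounting: V = A * delta^t
    return amount * daily_factor ** delay_days

def hyperbolic_value(amount: float, delay_days: int, k: float = 0.1) -> float:
    # Hyperbolic discounting: V = A / (1 + k*t)
    return amount / (1 + k * delay_days)

print(exponential_value(10, 1), exponential_value(100, 365))
# ~9.99 vs ~69.4: the exponential discounter waits for the $100
print(hyperbolic_value(10, 1), hyperbolic_value(100, 365))
# ~9.09 vs ~2.67: the hyperbolic discounter takes the $10 tomorrow
```

With a consistent exponential curve, the larger later reward wins; it is the steep early drop of the hyperbolic curve that makes the smaller sooner reward irresistible.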

This creates particular problems when it comes to retirement savings. The bias towards the short term means that people often under-save for their retirements. One famous example of nudging — perhaps the progenitor of the theory — was the attempt made by Richard Thaler (and others) to address this problem of undersaving. He did so by taking advantage of another psychological bias: the bias toward the status quo. People are lazy. They like to avoid cognitive effort, particularly when it comes to mundane tasks like saving for retirement. Thaler suggested that you could use this bias to encourage people to save more for retirement simply by changing the default setting on retirement savings plans. Instead of making them opt-in policies, you should make them opt-out. Thus, the default setting should be that money is saved, and people have to exert effort not to save. Making this simple change had dramatic effects on how much people saved for retirement.

We have here the essence of nudging. The savings policy was altered so as to nudge people into a preferred course of action. According to Thaler and Sunstein, the same basic philosophy can apply to many regulatory domains. Policy wonks and regulators can construct ‘choice architectures’ (roughly: decision-making situations) that take advantage of the quirks of human psychology and nudge them into preferred behavioural patterns. This is a philosophy that has really taken off over the past fifteen years, with governments around the world setting up behavioural analysis units to implement nudge-based thinking in many policy settings (energy, tax collection, healthcare, education, finance etc.).

Yeung argues that nudging is a type of design-based regulation. Why? Because it is not about creating rules and regulations and enforcing them but about hardwiring policy preferences into behavioural architectures. Changing the default setting on retirement savings policy is, according to this argument, more akin to putting speed bumps on the road than it is to changing the speed limit.

She also argues that algorithmic governance systems operate like nudges. This is particularly true of decision-support systems. These are forms of algorithmic governance with which we are all familiar. They use data-mining techniques to present choice options to humans. Think about the PageRank algorithm on Google search; the Amazon recommended choices algorithm; Facebook’s newsfeed algorithm; the route planner algorithm on Google maps; and so on. All of these algorithms sort through options on our behalf (websites to browse, books to buy, stories to read, routes to take) and present us with one or more preferred options. They consequently shape the choice architecture in which we operate and nudge us toward certain actions. We typically don’t question the defaults provided by our algorithmic overlords. Who, after all, is inclined to question the wisdom of the route-planner on Google maps? This is noteworthy given that algorithmic decision support systems are used in many policy domains, including policing, sentencing, and healthcare.

There is, however, one crucial distinction between algorithmic governance and traditional nudging. Traditional nudges are reasonably static, generalised policy interventions. A decision is made to alter the choice architecture for all affected agents at one moment in time. The decision is then implemented and reviewed. The architecture may be changed again in response to incoming data, but it all operates on a reasonably slow, general and human timescale.

Algorithmic nudging is different. It is dynamic and personalised. The algorithms learn stuff about you from your decisions. They also learn from diverse others. And so they update and alter their nudges in response to this information. This allows them to engage in a kind of ‘hypernudging’. We can define hypernudging as follows (the wording, with some modifications, is taken from Yeung 2016, 122):

Algorithmic Hypernudging: Algorithmically driven nudging which highlights and takes advantage of patterns in data that would not be observable through human cognition alone and which allows for an individual’s choice architecture to be continuously reconfigured and personalised in real time.
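A deliberately simple sketch of my own (not Yeung’s) may help to show the structural difference: a static nudge fixes one default for everyone at one moment in time, whereas a hypernudger re-ranks the options for each user after every observed choice:

```python
from collections import defaultdict

OPTIONS = ["option_a", "option_b", "option_c"]

# A static nudge: one default, set once, for all users.
STATIC_DEFAULT = "option_a"

class HyperNudger:
    """Toy hypernudge: each user's choice architecture is
    reconfigured in real time in light of their own data."""

    def __init__(self):
        # per-user tallies of past choices
        self.counts = defaultdict(lambda: {o: 0 for o in OPTIONS})

    def record_choice(self, user: str, option: str) -> None:
        self.counts[user][option] += 1

    def presented_order(self, user: str) -> list:
        # the most-chosen options are surfaced first, so each user's
        # 'default' continuously tracks their own behaviour
        return sorted(OPTIONS, key=lambda o: -self.counts[user][o])
```

A real system would, of course, mine patterns across millions of users and far more variables than past clicks; the sketch only captures the structural point that the nudge is recomputed per person, per interaction.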

This is certainly an interesting way of understanding what algorithmic governance can do. But what consequences does it have and how does it tie back into the themes I discussed in my work on the threat of algocracy?


4. The Threat of Hypernudging
The answer to that question lies in the criticisms that have already been thrown at the philosophy of nudging. When they originally presented it, Sunstein and Thaler were conscious of the fact that they were advocating a form of paternalistic manipulation. Taking advantage of quirks in human psychology, changing default options, and otherwise tweaking the choice architecture seems, on the face of it, to disrespect the autonomy and freedom of the individuals affected. The choice architects presume that they know best: they substitute their judgment for the judgment of the affected individuals. This runs contrary to the spirit of liberal democratic governance, which demands respect for the autonomy and freedom of all.

Sunstein and Thaler defended their regulatory philosophy in two ways. First, they made the reasonable point (in my view anyway) that there is no ‘neutral’ starting point for any choice architecture. Every choice architecture embodies value preferences and biases: making a retirement savings policy opt-in rather than opt-out is just as value-laden as the opposite. So why not make the starting point one that embodies and encourages values we share (e.g. long-term health and well-being)? Second, they argued that theirs was a libertarian form of paternalism. That is to say, they felt that altering the choice architecture to facilitate nudging did not eliminate choice. You could always refuse or resist the nudge, if you so desired.

Critics found this somewhat disingenuous. Look again at the retirement savings example. While this does technically preserve choice — you can opt out if you like — it is clearly designed in the hope that you won’t do any choosing. The choice architects don’t really want you to exercise your freedom or autonomy because that would more than likely thwart their policy aims. There is some residual respect for freedom and autonomy, but not much. Contrast that with an opt-in policy which, when coupled with a desire to encourage people to opt in, does try to get you to exercise your autonomy in making a decision that is in your long-term interests.

It’s important not to get too bogged down in this one example. Not all nudges take advantage of cognitive laziness in this way. Others, for instance, take advantage of preferences for certain kinds of information, or desires to fit in with a social group. Nevertheless, further criticisms of nudging have emerged over the years. Yeung mentions two in her discussion:

Illegitimate Motive Critique: The people designing the choice architecture may work from illegitimate motives, e.g. they may not have your best interests at heart.

The Deception Critique: Nudges are usually designed to work best when they are covert, i.e. when people are unaware of them. This is tied to the way in which they exploit cognitive weaknesses.*

Both of these critiques, once again, run contrary to the requirements of liberal democratic governance, which is grounded in respect for the individual. It follows that if algorithmic governance systems are nudges, they too can run contrary to the requirements of liberal democratic governance. Indeed, the problem may be even more acute in the case of algorithmic nudges since they are hypernudges: designed to operate on non-human timescales, to take advantage of patterns in data that cannot be easily observed by human beings, and to be tailored to your unique set of psychological foibles.

This is very similar to the critique I mounted in 'The Threat of Algocracy'. I also focused on the legitimacy of algocratic governance and worried about the way in which algocratic systems treat us passively and paternalistically. I argued that this treatment could be due to the intrinsic complexity and incomprehensibility of those systems: people just wouldn’t be able to second-guess or challenge algorithmic recommendations (or decisions) because their minds couldn’t function at the same cognitive level.

As I see it, Yeung adds at least two important perspectives to that argument. First, she highlights how the ‘threat’ I discussed may arise not only from factors inherent to algocratic systems themselves but also from the regulatory philosophy underlying them. And second, by tying her argument to the debate around nudging and regulatory policy, she probably makes the ‘threat’ more meaningful to those involved in practical policy-making. My somewhat esoteric discussion of liberal political theory and philosophy would really only appeal to those versed in those topics. But the concept of nudging has become common currency in many policy settings and brings with it a rich set of associations. Using the term to explain algocratic modes of governance might help those people to better appreciate the advantages and risks such modes entail.

Which ties into the last argument Yeung makes in her paper. Having identified algorithms as potential hypernudges, and having argued that they may be illegitimate governance tools, Yeung then challenges liberal political theory itself, arguing that it is incapable of fully appreciating the threat that algorithms pose to our freedom and autonomy. She suggests that alternative, Foucauldian, understandings of freedom and governance (or governmentality) might be needed. I’m not sure I agree with this — I think mainstream liberal theory is pretty capacious — but I’m going to be really annoying and postpone discussion of that topic to a future post about a different paper.


* Very briefly, nudges tend to work best covertly because they take advantage of quirks in what Daniel Kahneman (and others) call System 1 — the subconscious, fast-acting, part of the mind — not quirks in System 2 — the slower, conscious and deliberative part of the mind.

Sunday, January 8, 2017

Moral Arguments for God (2): Non-Evidential Forms

[If you like this blog, consider signing up for the newsletter...]

(Previous Entry)

God and morality are often yoked together. Many religious believers think that morality is not possible without God. And some religious believers use this alleged dependency between God and morality as the basis for an argument in favour of his existence. As I noted in part one, there are two main forms that these arguments take:

Evidential Arguments: These are arguments that highlight the existence of some moral fact (E) and argue that God is the best explanation of E. This provides some support for the existence of God.

Non-Evidential Arguments: These are arguments that highlight some moral goal or end and argue that God’s existence is necessary if that goal or end is to be achieved.

We looked at evidential arguments and their problems in part one. Today we’ll look at non-evidential arguments. Once again, I’ll be taking my lead from Peter Byrne’s article ‘Kant and the Moral Argument’, which is to be found in Jeffrey Jordan’s book Key Thinkers in Philosophy of Religion. Since Kant is himself a proponent of a non-evidential argument, and since Byrne’s essay is ostensibly about Kant, it is not surprising that the analysis of Kant’s argument takes up more space in Byrne’s article than did the analysis of the evidential arguments. I tend to find non-evidential arguments less interesting, for reasons that will become apparent, but I am trying to be sympathetic in my summary/commentary.


1. Kant’s Non-Evidential Argument
I mentioned in part one that non-evidential arguments are less common than evidential ones, but that if you read a lot of religious philosophy you’ve probably come across one or two of them. William Lane Craig, for instance, often appeals to non-evidential arguments in his writings. He thinks that moral values make no sense under atheism because, in order to have a coherent moral life, we need to have ‘ultimate accountability’. That is to say, we need to know that:

…Evil and wrong will be punished; righteousness will be vindicated. Despite the inequities of life, in the end the scales of God’s justice will be balanced. 
(Craig, Is Goodness Without God Good Enough?)

Here, Craig is identifying a moral goal that needs to be satisfied if moral life is to be possible. He is then suggesting that it is not possible without God. The problem with atheism is that life ends at the grave. I could be a total moral jerk my whole life and not be punished for it; I could be a total moral saint and suffer needlessly. The morality or immorality of my behaviour need have no bearing on my ultimate fate. That seems wrong to Craig. Morality should count for something in the end.

I’ve discussed this argument on a previous occasion and explained what I think is wrong with it. Kant’s non-evidential argument turns out to be quite similar in its form. The only difference is that where Craig emphasises ultimate accountability, Kant emphasises ultimate goodness (which requires a combination of happiness and complete virtue). Byrne reconstructs Kant’s argument like this (I have modified this slightly):


  • (1) It is rationally and morally necessary to attain the highest good (perfect happiness arising out of complete virtue).
  • (2) What we are obliged to attain, it must be possible for us to attain (i.e. ought implies can).
  • (3) Attaining the highest good is only possible if natural order and causality are part of an overarching moral order and moral causality.
  • (4) Natural order and causality can only be part of an overarching moral order and moral causality if God exists.
  • (5) Therefore, it is rationally and morally necessary for God to exist.


Byrne doesn’t actually provide a conclusion in his version of the argument. I have done so because I think the argument needs to reach some sort of conclusion, even if it turns out to be a misleading or unimpressive one.
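
It may help to set out the logical skeleton of the argument before turning to objections. This is my own schematic rendering, not Byrne’s: write O(H) for ‘attaining the highest good is obligatory’, ◊H for ‘the highest good is attainable’, M for ‘the natural order is part of an overarching moral order’, and G for ‘God exists’.

\[
\begin{array}{ll}
(1)\; O(H) & \text{attaining the highest good is obligatory}\\
(2)\; O(H) \rightarrow \Diamond H & \text{ought implies can}\\
(3)\; \Diamond H \rightarrow M & \text{attainability requires an overarching moral order}\\
(4)\; M \rightarrow G & \text{a moral order requires God}\\
\hline
\therefore\; G & \text{God exists}
\end{array}
\]

If the premises are granted, G follows by repeated applications of modus ponens. The real question, as the objections below bring out, is what kind of necessity attaches to premise (1), and whether granting it gives us evidence that G is true or merely a reason to believe that it is.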

Let’s now consider some objections to this Kantian argument.



2. Five Objections to the Kantian Non-Evidential Argument
Byrne isn’t terribly explicit about this, but on my reading he presents five different objections to the Kantian argument. They all have different targets. The first targets the conclusion of the argument:


  • (6) Even if successful, the argument does not prove that God exists or that morality is impossible without him; it only proves that his existence is necessary for us to live moral lives.


There are unsubtle and subtle aspects to this. The subtle point is that this argument does not rule out the cognitivity of moral propositions in an atheistic universe. For all this argument says, propositions like ‘torturing cats is wrong’ and ‘friendship is good’ could be true even if God does not exist. In other words, it is possible to accept this argument and still be an atheistic moral realist (i.e. someone who believes that moral facts do not depend on God for their existence).

The unsubtle point is that the argument clearly does not provide evidence for God’s existence. It could be, for all this argument says, that we live in a completely amoral, indifferent universe. The argument is premised on a hope or aspiration; not a concrete fact about the morality of the world in which we live. As such, the argument does not count among the epistemic proofs of God’s existence. It might count, instead, as a rational proof of God, i.e. a proof that God’s existence is subjectively necessary: that we cannot live without believing in him. The problem with this ‘rational proof’ interpretation is that it casts the argument in a very different light. It suggests that belief in God is what matters; not his actual existence. This means that the conclusion that I offered above should be altered to the following:


  • (5*) Therefore, it is rationally and morally necessary to believe in God’s existence.


And this throws open the possibility of believing in God as a useful moral fiction. The theist would resist this, no doubt. They would point back to the first premise and argue that mere belief is not enough: we have to have the possibility of attaining the highest good and this means we need God to actually exist, not just exist in our minds. This runs into the promotion vs attainment objection that I discuss below.

The second objection to the argument is this:


  • (7) It is not clear that the attainment of happiness is rationally/morally necessary, partly because it is not clear what happiness consists in.


This is an attack on premise (1). Byrne explains the objection by pointing out that the nature of happiness is contested and, depending on which definition you adopt, people often give up the pursuit of happiness for the attainment of other goods. So, for example, if happiness is understood as hedonistic pleasure, it seems true to say that people will subject themselves to displeasurable states of affairs in order to attain other goods. I might hate studying for exams, for example, but I accept the displeasure in return for an education. A better example, and one given by Byrne, is that of someone giving up their career to care for a sick relative. The career made them happy whereas the caring does not, but they find the latter more rationally and morally compelling.

Byrne detects three distinct accounts of happiness in Kant’s work: a pleasure-based theory; a contentment-based theory; and a desire-satisfaction based theory. The problem is that these three theories pull in different directions. It is possible that the pleasure-filled life is not the life in which the most desires are satisfied. I often pursue momentary pleasures at the expense of long-term desire satisfaction. Does this make me more or less happy? Likewise, contentment is a very different notion from pleasure or desire-satisfaction. I can be content with my lot in life and not experience much pleasure.

The point here is that happiness is a deeply contested concept and, given its deep contestation, it’s not at all clear that its attainment is rationally and morally compelling. For what it is worth, I’m not sure that this is a good objection. The mere fact that there is uncertainty as to what a goal consists in does not mean that it is not morally or rationally necessary. We may simply need a wide and capacious understanding of what it is. On top of that, it feels intuitively correct to suggest that the pursuit of happiness (understood broadly) is the ultimate goal of human life.

But this is to speak only to happiness and not virtue. For Kant, it was essential for virtue and happiness to align, i.e. for true happiness to be only attainable by living the morally virtuous life. For this, God was deemed essential. I previously mentioned that in an atheistic universe it seems eminently possible for someone to live an immoral and vicious life and yet to be happy (in some sense of the term). This is the great fear of theists like Kant and Lane Craig. They think God is necessary to ensure some alignment between virtue and happiness.

This insistence, however, gives rise to a third objection. This one is somewhat specific to Kant’s understanding of how God ensures that virtue and happiness align. I’ll state it bluntly first and then explain:


  • (8) Kant implies that virtue requires endless progress and hence implies that true happiness is unattainable. This is contrary to premises (1) and (2) of his argument.


In his Metaphysics of Morals, Kant states that:

Virtue is always in progress and yet always starts again from the beginning. The first point holds because, considered objectively, it is an ideal and unattainable, even though it is a duty to approximate constantly to it. 
(Quoted in Byrne 2011, 90)

And in the Critique of Practical Reason, Kant suggests that virtue (or what he there calls ‘holiness’) requires immortality because it involves the constant asymptotic approach to a perfect ideal. This seriously undercuts the non-evidential argument because it suggests that even with God it is impossible to attain true happiness. Immortality seems to be doing all the work; not God.

On some occasions, Kant modifies his talk about endless pursuit by suggesting that God can decree that the pursuit itself is sufficient, but this then opens up another objection:


  • (9) Why is it not enough to simply promote the ultimate good? Why do we need to actually attain it?


One should always be wary of objections phrased as rhetorical questions, but in this instance the question seems like a good one. Surely we can respect the need for some moral ideals, perhaps we can even accept that having those moral ideals in mind is necessary if we are to live moral lives, but why is it morally and rationally necessary to actually attain them, particularly if this is ultimately impossible? The second premise of Kant’s argument lays down his famous ‘ought implies can’ principle. It states that we are only morally obliged to do that which it is possible to do. Well, it is possible to promote moral good; but it may not be possible to achieve moral perfection. This suggests that the former is what is rationally and morally necessary; not the latter. This, therefore, undercuts the first premise of the argument.

Which brings us to the last objection. This one targets premise (4):


  • (10) There could be some impersonal moral force that binds the naturalistic order to the moral order.


In other words, we don’t need the Western, personal, monotheistic God in order to ensure that moral goals are attainable. All we need is some impersonal moral force. Some Eastern religions believe in such forces, e.g. karma. Of course, the tenability of this objection depends very much on the plausibility of this metaphysical view. Somewhat ironically, whether you believe in the possibility of impersonal moral forces is likely to depend on how you feel about evidential moral arguments (the kind of arguments discussed in part one).

Thursday, January 5, 2017

Hume's Objections to the Design Argument (Part Two)




[If you like this blog, consider signing up for the newsletter...]

(Part one)

This is the second in a two-part series on David Hume’s objections to the design argument. If you haven’t read part one, I would recommend doing so before going any further. To briefly recap, in part one I mentioned that Hume’s treatment of the design argument — primarily in his Dialogues but also in his Enquiry — is lauded by many. JCA Gaskin is particularly impressed by it and dedicates a good portion of his book Hume’s Philosophy of Religion to its analysis.

In Gaskin’s reading of Hume, there are two versions of the design argument with which to contend:

The Nomological Design Argument: which focuses on the order/regularity that is found in nature and argues that some designer must be responsible for it.

The Teleological Design Argument: which focuses on the purpose/adaptation that is found in nature and argues that some designer must be responsible for it.

Both versions of the argument are defended by the character of Cleanthes in the Dialogues. They are then challenged by the character of Philo (often thought to represent Hume’s own views). Gaskin identifies ten different objections to both design arguments in Hume’s writings. These ten objections are organised into four main groups. We dealt with the first two groups in part one. Let’s now look at the other two.



1. Analogical Weaknesses
The nomological and teleological arguments rest on an analogy. In Cleanthes’s presentation, we are entitled to infer that there is some designer of the order/purpose that we see in the natural world because our experience of the artificial world tells us that whenever there is order/purpose there is some human designer behind it. In other words, the arguments are explicitly constructed in such a way that the natural and artificial worlds are deemed similar enough to ground an analogical inference. Hume has several objections to this attempted analogy.

The first of his objections is (numbering continues from part one):

(5) The ‘Weak and Remote’ Problem: ‘The analogy between those objects known to proceed from design and any natural object is too weak and too remote to suggest similar causes.’ (Gaskin 1988, 27 - citing an argument found in several locations in the Dialogues)

To understand this objection we need to understand how analogical arguments work. They all have the following structure:


  • P1. Y is true in Case A.
  • P2. Case B is like Case A in all important respects (i.e. with respect to features X1…Xn).
  • C. Therefore Y* (similar to Y in important respects) is likely to be true in Case B.


In my more detailed presentation of the design arguments in part one, I showed how this abstract structure could be applied to both the teleological and nomological arguments. This application is not important right now. The critical detail for now is how all analogical arguments depend upon a similarity claim. It is because Case B is similar to Case A with respect to features X1…Xn that the inference can be made. The more similar the two cases are, the stronger that inference is; the more different they are, the weaker it is.

Hume’s challenge to both versions of the design argument is to suggest that the two cases (natural order/purpose and artificial order/purpose) are not very similar at all. They do not share as many features as proponents of the argument like to suggest, and those that they do share are, upon closer inspection, more dissimilar than we might first think. There are many examples of this in practice. The purpose we see in the design of non-human animals and plants can often be opaque and bizarre. Think about the instances of ‘bad’ design that we see in nature — examples like the laryngeal nerve of the giraffe, the vestigial thumb of the panda, and all manner of cruel and painful livelihoods that are eked out by predators and prey. These examples are all better explained by Darwinian evolution, of course, but if we set that to the side and seriously entertain the design hypothesis we have to acknowledge that we are dealing with a designer with very different purposes or intentions to any human designer — ones that are ‘beyond our ken’.

On top of this, there is a scale and immensity to the order we see in the universe that makes its construction something quite beyond the abilities of any human designer. This might seem to warrant the inference to a supreme being with the powers traditionally attributed to God, but for Hume the dissimilarities of scale serve to undercut the analogy used to ground the design argument:

All the new discoveries in astronomy, which prove the immense grandeur and magnificence of the works of nature…become so many objections, by removing the effect still farther from all resemblance to the effects of human art and contrivance. 
(Hume’s Dialogues — quoted in Gaskin 1988, 30)

This brings us to the next objection:

(6) The Non-Agential Order Problem: ‘Order, arrangement, or the adjustment of final causes is not, of itself, any proof of design; but only so far as it has been experienced to proceed from that principle.’ (Hume’s Dialogues - quoted in Gaskin 1988, 31)

This is a little bit tricky to understand. Hume makes two points in relation to this objection. The first is that when you look at the totality of human experience, the evidence we have for thinking that order/purpose proceeds from agency is pretty flimsy. The second is that we have some reason for thinking that order can be brought into existence without agency. Both points require some elaboration.

The first point is probably the more subtle one and its significance may be underappreciated. Look around the world. Look at all the instances of order/purpose that you find in it. How many of those instances of order/purpose are known — independently of the design argument — to proceed from agency? The answer is very few (proportionately speaking). Humans have had a remarkable impact on the world around them, but it is still true to say that the natural world (including the universe as a whole) is much larger than the human-created world and contains many instances of order/purpose that are not known — independently of the design argument — to proceed from agency. This is a problem because the strength of the analogy underlying the design argument depends on the totality of the available evidence regarding design in the universe. It is only if our experience makes us confident that design is generally brought about by agents that we can infer that all instances of design must have an agential origin.

Another way of putting it might be like this: we frequently reason from samples of the whole to explanations of the whole. We are entitled to reason from small samples if there are good grounds for thinking that they are representative of the whole (think about the way in which polling data is collected from samples). But in the case of the design argument, there is no good reason for thinking that the small sample of human-created design is representative of the whole. On the contrary, the total background evidence we have suggests that most instances of apparent design are not known to have an agential creator. So we cannot reason by analogy from the few cases of human-created design to the assumption that there is a designer for the whole.

The second point Hume makes is more straightforward. It is simply that there are known cases in which order is brought about by non-agential forces. Certain geological processes, for example, are non-agential and yet can result in orderly patterns. Similarly, most people would agree that plants are not agents and yet when they grow and develop, plants create order in the world around them. So here we have some direct counter-analogies: cases in which design is not the product of a designer.

Some theists might object to this on the grounds that these examples all involve new forms of order/design being created from previous forms of order/design. Thus, the non-agential forces to which Hume appeals do not explain the origins of order/design in the first place. That’s as may be, but in making this move the theist is shifting to a different kind of argument — something more akin to the cosmological argument — which tries to suggest that there must be a first link in a chain of causation. The regress problem — discussed below — challenges this style of argument.

The last of Hume’s three analogical objections is this:

(7) The Tightrope Problem: The design arguments try to walk a very fine line between excessive anthropomorphism and incomprehensible remoteness when it comes to the nature of the designer. It is very difficult to walk this line and maintain the credibility of the design argument.

This objection functions like a dilemma. Suppose the theist is right and the analogy between artificial order/purpose and natural order/purpose is strong. In that case, the strength of the analogy means we should infer a very human-like designer. We would then end up with something that falls a long way short of the supreme being beloved by theists. Contrariwise, suppose the theist is wrong and the analogy is not strong — the scale and magnitude of the universe points to a being with properties very distinct from a human creator. In that case, they might be able to get to the supreme being they desire, but at the cost of undercutting the analogy they were originally using to support their case.

So theists have to walk a very fine line — a tightrope if you will — one that insists that the two cases are just similar enough to warrant the inference to a designer and also different enough to enable them to insist that the designer is a supreme, non-human like being.

In fact, for Hume, it gets worse than that because not only do theists have to walk that line when it comes to the design argument, they also have to walk it with their very conception of God. The God they want has to be human-like in some ways (with a human-like mind/personality and moral interests in humanity’s affairs) but also practically incomprehensible and ineffable in others (omnipotent, omniscient, perfectly simple, etc.).



2. Problems in Explaining Order/Purpose
This brings us to the final branch of Hume’s taxonomy of objections. There are three specific objections lying along this branch. Each of them takes issue with the motivation behind the design arguments, i.e. the desire to explain order/purpose in terms of divine agency. They each take a slightly different perspective on the issue though.

The first objection points to a general problem with all attempts to explain order/purpose:

(8) The Regress Problem: “If an intelligent agent is required to explain the order in nature then the intelligent agent will in turn need to be explained…But if we stop at the agent explanation, and go no farther; why go so far? Why not stop at the material world? How can we satisfy ourselves without going on in infinitum? And after all, what satisfaction is there in that infinite progress?” (Hume’s Dialogues - quoted in Gaskin 1988, 41)

The point that Hume (via Philo) is making in this quote is a familiar one. Anybody who has spent any time sniffing around the philosophy of religion will have encountered some variant of it before. Hume is saying that explanations must bottom out somewhere, i.e. with some fundamental origin or source for the order/purpose we see. Theists want the explanation to bottom out in God: they want him to be the ground of all being. But do they have good grounds for doing so?

Hume thinks that they face two problems. First, in trying to get to God they appeal to principles that do not justify stopping with God. Thus, as noted above when discussing their objection to Hume’s plant example, they will suggest that there must be some fundamental origin of the order/purpose we see in the universe: an explanation that doesn’t just explain one type of order in terms of another type of order. They think God satisfies this role, but Hume argues that he doesn’t. What is God if not an ordered, purposeful being? Appealing to him means that we explain one type of order in terms of another type of order. But if order itself needs to be explained then we will need to find some explanation for God. Hence, the infinite regress. Some theists might think they have a response to this, arguing that God is a perfectly simple, unitary being and hence doesn’t display the kind of ordered complexity that they think needs to be explained. But Hume has a reply to this:

A mind whose acts and sentiments and ideas are not distinct and successive, one that is wholly simple and totally immutable, is a mind which has no thought, no reason, no will, no sentiment, no love, no hatred; or, in a word, is no mind at all.
(Hume's Dialogues, Part IV) 


The other problem that theists face is that if they accept that there cannot be an infinite regress of explanations, they will need to justify going beyond the laws of nature in explaining the order/purpose we find in the universe. As Hume puts it: If we have to stop somewhere, ’why not stop at the material world?’ We might be justified in seeking a further explanation for the laws of nature if the explanation we posit can provide us with greater insight/understanding of those laws. But Hume argues that God does not provide this additional insight. On the contrary, God is usually a deeply mysterious explanation, with little or no explanatory power or predictive potential. Of course, theists disagree, but I tend to think Hume is right on this score: appeals to the explanatory power of God are usually little more than an attempt to explain one mystery in terms of something even more mysterious.

The next objection takes a different tack and suggests that there are, in any case, alternative naturalistic explanations for the order/purpose we see in the universe:

(9) The Alternative Explanations Problem: There is a ‘system, an order, an economy of things, by which matter can preserve the perpetual agitation, which seems essential to it, and yet maintains a constancy in the forms’ (Hume’s Dialogues - quoted in Gaskin 1988, 43)

This is obscurely expressed but it is the shortest summary of one of Philo’s more complex arguments in the Dialogues. In one sense, what Philo argues is more appealing to us nowadays than it was in Hume’s time. Remember, when Hume wrote the Dialogues Darwin was yet to expound his theory of evolution by natural selection. This theory — and subsequent developments of it — provided a compelling (to most, at any rate) naturalistic explanation for the purpose and adaptation we see in the living world. Hume had to defend an alternative naturalistic explanation without the benefits of Darwinism. But Gaskin argues that Hume anticipated Darwin’s ideas in the following passage:

It is vain, therefore, to insist upon the uses of the parts in animals and vegetables, and in their curious adjustment to each other. I would fain know how an animal could subsist, unless its parts were so adjusted? Do we not find, that it immediately perishes whenever this adjustment ceases, and that its matter corrupting tries some new form? 
(Hume’s Dialogues — quoted in Gaskin 1988, 44)

Did you spot the anticipation of Darwin? It’s a little hard to see, but in this short quote Hume does mention two things that are redolent of Darwinism. First, he points to something akin to a Darwinian selection pressure when he says that he would ‘fain know how an animal could subsist [i.e. survive], unless its parts were so adjusted?’. Second, he seems to suggest that something akin to mutation (‘matter corrupting’) could be responsible for the animal forms that survive and subsist. This is, of course, a generous interpretation, conducted with all the benefit of hindsight.

In any event, offering a naturalistic explanation for the purpose and adaptation we see in living nature is only half the battle. There is still the question of where all the order and regularity in the universe as a whole came from. Here, it seems that Hume paid homage to the classic Epicurean view. According to this view, the universe is an infinite soup of matter in motion. Much of that motion is chaotic and disordered. But if the universe is truly infinite, there will, of necessity, be localised pockets of order and we, as ordered beings, will necessarily find ourselves in those localised pockets. Thus, in an infinite universe in which all possibilities are eventually tried out, there is nothing explanatorily surprising about our existence:

Thus the universe goes on for many ages in a continued succession of chaos and disorder. But is it not possible that it may settle at last, so as not to lose its motion and active force…yet so as to preserve a uniformity of appearance amidst the continual motion and fluctuation of its parts? This we find to be the case with the universe at present…May we not hope for such a position, or rather be assured of it, from the eternal revolutions of unguided matter, and may not this account for all the appearing wisdom and contrivance which is in the universe? 
(Hume’s Dialogues — quoted in Gaskin 1988, 46)

The challenge for this Epicurean view lies in contemporary cosmology. We know much more about the universe in which we live now than we did in Hume’s day. The scientific evidence we currently have suggests that our universe started at a finite point in the past and that the matter within it obeys fairly uniform laws. This seems to rule out the possibility of the infinite, churning chaos that Hume requires. But this isn’t quite right either. Our current scientific model of the universe is incomplete in certain important respects and the finite duration of the portion of the universe in which we live and do our science doesn’t rule out the possibility that we are part of some larger, infinite chain of universes or multiverses. Indeed, some scientific theories point in that direction already. So the Humean alternative may still be viable, and I find it appealing on philosophical grounds (viz. positing an infinite chaos in which everything that is possible happens seems like the neatest and most satisfactory explanation of everything).

Hume has one last objection to theistic attempts to explain order/purpose:

(10) The Cognitive Limitation Problem: Human reason is a poor instrument for working out universal truths. We should be sceptical of all our attempts to explain the ultimate origins of reality.

This is a theme that runs throughout Hume’s philosophical works. He is the arch-sceptic. He thinks that human reasoning is weak and highly fallible. It doesn’t deliver the results we would like. We cannot even justify simple practices like scientific induction on the basis of reason alone. Consequently, we should be very sceptical of any attempt to use these weak and fallible capacities to explain the ultimate origins of order/purpose.

I’m not sure if I completely follow Hume on this front. I think we should be sceptical of the powers of human reason but I tend to sympathise more with the view Hume attributes to Cleanthes in the Dialogues, namely: complete scepticism is unwarranted and human reason can provide us with some insights. It’s really all about the domain of inquiry: whether human reason is up to the task in that domain. I might agree with Hume that religious matters are one area where human reason is not up to the task. JL Schellenberg is probably the contemporary philosopher who has done the most to take up Hume’s cudgels on this front. His book The Wisdom of Doubt presents the best argument I know of for religious scepticism of the Humean type (though note: Schellenberg is more optimistic about the long-term prospects for human reason). I recommend it to anyone who would like to pursue this tenth line of objection in more depth.



3. Conclusion

That brings us to the end of this analysis of Hume’s objections to the design argument. To sum up, Hume focuses on two versions of the design argument: the nomological and the teleological. The former is concerned with the order/regularity we observe in the natural world; the latter is concerned with the purpose/adaptation we observe in the natural world. Hume levels ten objections against these two arguments. These objections have been discussed in detail over the two posts in this series. I won’t repeat the details here.

As you no doubt noticed, there are parts of Hume’s critique that are quite antiquated. He concerned himself with the religious debates and views of his day and he wrote at a time when our scientific understanding of biology and cosmology was in its infancy. Nevertheless, much of what he says continues to have relevance, and it always amazes me to see how closely Hume’s reasoning resembles or anticipates what we find in contemporary philosophy of religion.

Let me close, however, on a more critical note. Hume deals with an explicitly analogical version of the design argument in his writings. One problem with this is that most modern defenders of the design argument abjure the analogical form. They favour arguments that are couched in Bayesian terms or in terms of inference to best explanation. It sometimes turns out that these formulations of the argument rely, implicitly, on some analogy between human design and divine design, but to reveal that implicit reliance you have to engage with some complex debates in probability theory and theories of explanation. I still think that parts of what Hume says are relevant to those debates, but it requires more work to demonstrate this.
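
To give a flavour of what that engagement would involve, here is the skeleton of a Bayesian formulation (a generic sketch of my own, not any particular author’s version). Let O be the observed order/purpose in nature, D the design hypothesis, and ¬D its naturalistic rivals. In odds form, Bayes’ theorem says:

\[
\frac{P(D \mid O)}{P(\neg D \mid O)} \;=\; \frac{P(O \mid D)}{P(O \mid \neg D)} \times \frac{P(D)}{P(\neg D)}
\]

Several of Hume’s objections then resurface as questions about the individual terms: the ‘weak and remote’ and non-agential order problems bear on the likelihood P(O|D) (what should we expect a designer utterly unlike any human designer to produce?), the alternative explanations problem raises P(O|¬D), and the regress problem bears on the prior P(D). Making all of this rigorous is the extra work I have in mind.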