Thursday, June 22, 2017

Understanding Ideologies: Liberalism, Socialism and Conservatism

Are you a liberal, socialist or conservative? Are you fiscally conservative but socially liberal? Or socially conservative and fiscally liberal? Are you a classical liberal or a neo-liberal? Are you a Marxist socialist or a neo-Marxist socialist?

We frequently use these terms to describe our political beliefs and ideological preferences. But what do they really mean? If I am a liberal what makes me different, if anything, from a socialist or conservative? These are important questions. These terms effectively define the major points on the landscape of political opinion. But answering them is not easy. There are many different definitions and understandings of liberalism, socialism and conservatism. To borrow a phrase, it often seems to be the case that ‘one man’s liberalism is another man’s conservatism’ and so on.

In this post, I want to share one recent attempt to capture the distinction between these different ideologies. That attempt comes from James Alexander’s article ‘The Major Ideologies of Liberalism, Socialism and Conservatism’. It is a unique and somewhat idiosyncratic take on the topic, suggesting that there is one core defining feature to each of the three ideologies and that they can be arranged in a logical/hierarchical order. I want to see why Alexander thinks this. I start with his general approach to understanding ideologies.

1. Understanding Ideologies
Alexander’s article — like most academic articles — starts from a point of departure (i.e. a disagreement with the existing literature). In his case, the point of departure is the ‘collectivising’ approach to ideologies that is characteristic of most previous attempts to taxonomise and define liberalism, socialism and conservatism. The collectivising approach tries to group different conceptualisations of the respective ideologies together. Authors who adopt this approach tend to think that there is nothing simple about any of the ideologies. There is no one canonical version of liberalism, for example. Instead, there are many different liberals, each emphasising different properties or features of the ideology. Authors often proceed to categorise these properties and features, suggesting that some are ‘core’ and others ‘peripheral’. Nevertheless, they think that ideologies are complex assemblages of these properties, and that the distinctions between the different ideologies are blurry and often a matter of emphasis rather than something more fundamental.

Alexander favours a different approach. Instead of collecting and grouping, he prefers to distinguish and differentiate. He wants to focus on what makes the ideologies different from one another. This is a reasonable approach to take. As he points out, there is a certain irony to the fact that all those authors who focus on the complex and plural nature of the three ideologies still tend to separate them out and, usually, assume some hierarchy between them. He discusses three books in particular that take this approach: Heywood’s Political Ideologies; Vincent’s Modern Political Ideologies; and Freeden’s Ideologies and Political Theory.

[I]n all of these books the ideologies are compartmentalised into prefabricated categories—called chapters…Liberalism, conservatism and socialism are the ‘major ideologies’, and liberalism is the most important or the original of the three. 
(Alexander 2015)

They never fully justify why they do this or why liberalism is taken to be the most important ideology. Alexander tries to supply the missing justification. He does so by first trying to define what an ideology is. As he sees it, ideologies are distinctively modern phenomena. They arose in the aftermath of the Enlightenment and its associated revolutions, when the traditional Christian and monarchical order was called into question. Appeals to God-given inequalities or rights to govern were no longer persuasive. A new way to justify political arrangements was required. That’s how ideologies came into being.

More specifically, Alexander suggests that ideologies use a (typically secular) criterion to evaluate political arrangements; that they do so in an environment in which that criterion is always being challenged and contested by other ideologies; and that the criterion used by ideologies is best understood in terms of debt (i.e. to whom does society owe its fundamental duties). This gives him the following definition:

Ideology: 'An ideology is a view about what ought to be thought, said and done about politics in terms of a sole criterion, where that sole criterion is a suggestion about to what or whom a fundamental debt is owed; and where this view is contested by views dependent on rival criteria within a situation which is constituted by the continual contestation of criteria.' (Alexander 2015)

The idea that debt is the basic concept underlying all major political ideologies might seem a little odd, and Alexander’s approach to the topic of ideology is, as I said earlier, idiosyncratic. Nevertheless, you have to admire his attempt to develop a coherent theory, and its more abstract elements make more sense when you look at the criteria used by the three main ideologies.

2. Understanding Liberalism, Socialism and Conservatism
There are many ideologies. Nationalism is an ideology. Feminism is an ideology. Fascism is an ideology. Each of these ideologies has been or continues to be important. Nevertheless, most theorists agree that liberalism, socialism and conservatism are the most important ideologies, and that understanding them gives you access to most of the current political landscape. Why is this? As noted earlier, the collectivising approach to ideology gives no clear answer. But Alexander thinks that his ‘differentiating’ approach does.

He argues that the reason why these three ideologies are grouped together is that they all agree that society owes its debts to the self. In other words, that when arranging the political order, the powers-that-be must explain themselves and justify themselves by reference to the self. This attempt to justify political orders by reference to the self is the defining feature of the modern, post-Enlightenment, era.

Now, you might argue that justifying political orders by reference to the self sounds like it describes liberalism pretty well, but not socialism and conservatism. But that’s where the most interesting and novel feature of Alexander’s theory kicks in. He argues that, contrary to what you might believe, socialism and conservatism also think that society owes its debts to the self. Where they differ from liberalism is in how they conceive of and understand the self. This, in turn, helps to explain why liberalism is usually taken to be the most important modern ideology. Liberalism, it turns out, has an exceptionally austere and simple (many would say ‘wrongheaded’) view of the self. Socialism and conservatism add layers of complexity to that view.

This is all a little abstract. Let’s make it more concrete by specifying exactly how the three different ideologies are to be understood.

Liberalism = The view that social arrangements have to be made acceptable to the self (i.e. that the fundamental debt in society is owed to the self). This is often taken to entail that social arrangements need to be understood and consented to by the self.

As Alexander puts it, somewhat poetically, the essence of the liberal view is that ‘the self has infinite credit, everything else…is in debt to that credit’. This should make sense to anyone with a passing familiarity with liberal political theory. Such theory is preoccupied with how the state justifies itself to its subjects, viewing them as having some fundamental normative (as opposed to practical) veto power over the rules by which they are governed.

The liberal conception of the self is simple. It views the self as an isolated, atomistic individual. The self is capable of reason and understanding, and this is what marks it out as special and unique. Furthermore, the self is the only thing that is intrinsically necessary and valuable. The external world includes resources that might be instrumentally necessary to the survival of that self, but apart from those resources everything else is contingent. Many critique this view on the grounds that this self is illusory: no atomistic, isolated, rational self has ever existed, or ever will.

Socialists and conservatives agree with this critique. Socialists agree because they think that the self cannot be understood in isolation from the community in which it lives. That community provides support for the self, and shapes how the self thinks and views the world. The liberal glosses over that dependency, noting some relationship between the self and the external world, but ignoring how important it is. This alternative conception of the self means that socialists have a different understanding of the fundamental debt:

Socialism = The view that the fundamental debt is owed to the self as constituted by society, i.e. that when justifying political orders you cannot assume a version of the self that is abstracted away from the society in which they are created.

This might seem like a subtle difference, but Alexander argues that it requires a radical shift of perspective. The essence of liberalism lies in its opposition of the self to society. The liberal self has to be protected from society. The socialist argues that the opposition presumed by the liberal is misleading. That said, Alexander suggests that socialists are often confused as to whether they should destroy liberalism or complete it. Marx, for example, favoured the goal of individual emancipation from certain aspects of the present social order, but then rejected liberalism in other ways.

Conservatives add a further layer of complexity to the understanding of the self. They agree with socialists (as odd as that may sound) that the self cannot be abstracted away from the society in which it is constituted. They add to this that it cannot be abstracted away from the historical forces that shaped that society either. In other words, we don’t just owe a debt to the self-as-constituted-by-society, we owe a debt to the self-as-constituted-by-society-and-tradition.

Conservatism = The view that the fundamental debt is owed to the self as constituted by society and by the set of traditions and cultures that shaped that society, i.e. that when justifying political arrangements you cannot assume a version of the self that is abstracted away from social and historical factors.

Now, you might argue that this doesn’t really differentiate conservatism from socialism. After all, Marxism is acutely aware of the historical forces that shape the societies we inhabit. That is true. But socialists like Marx do not think we have any obligations to history. Indeed, they often look to reform or deconstruct those historical forces. They focus on future possibilities and they long for the revolution in the social order. Conservatives are different. They think we ought to respect the historical forces. They want to hang on to what we have, rather than aspire to something hypothetical.
That said, conservatives are not necessarily opposed to change. They often resist change only until it becomes inevitable, and then simply insist on a high degree of continuity with the past. There are confusing statements from self-described conservatives on this score, and it means that they are not always consistent in their ideological commitments.

And that’s pretty much it. That’s how Alexander understands the differences between the three major political ideologies. As I said at the outset, it is an idiosyncratic view. I’m sure anyone who is associated with the three named ideologies will be incensed at how much has been left out. But that’s kind of the point. This is a stripped-down taxonomy. It focuses on differentiating and ordering the ideologies. It does not aim for inclusivity and plurality. If you want to learn more, I’d recommend reading the full paper, as well as Alexander’s other work.

Thursday, June 15, 2017

The Quantified Relationship

I have a new paper coming out entitled 'The Quantified Relationship'. I wrote it with my colleagues Sven Nyholm (Eindhoven) and Brian Earp (Oxford). It's going to be a 'target article' in the American Journal of Bioethics. For those who don't know, when something is accepted as a 'target article' it is open to others to publish short (1500 word) replies/critiques. If you are interested in doing this, let me know and I'll keep you posted about when this is possible.

In the meantime, here are the paper details along with links to a pre-publication draft.

Title: The Quantified Relationship

Authors: John Danaher, Sven Nyholm, Brian Earp

Journal: American Journal of Bioethics

Links: Philpapers, Researchgate, Academia

Abstract: The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In the present article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness towards this technology and advocate the development of a research agenda for the positive use of QR technologies. 

Sunday, June 11, 2017

Can we derive meaning and value from virtual reality? An Analysis of the Postwork Future

Image courtesy of BagoGames via Flickr

Yuval Noah Harari wrote an article in the Guardian a couple of months back entitled ‘The meaning of life in a world without work’. I was intrigued. Harari has gained a great deal of notoriety for his books Sapiens and Homo Deus. They are ambitious books, both in scope and intent. Harari’s subject is nothing less than the entire history and future of humanity. He wants to show us where we have come from, how we got here, and where we are going. He writes in a sweeping, breathless and occasionally grandiose style. As you read, you can’t help but get caught up in the epic sense of scale.

The Guardian article was somewhat different. It was a narrower, more provocative thinkpiece, dealing with a theme raised in his second book Homo Deus: What happens when machines take over all forms of work? What will the ‘useless class’ of humans have left to do? These are questions that interest me greatly too. I have published a couple of articles about the meaning of life in a world without work, and I am always interested to hear others opine on the same topic.

Unfortunately, I was less than whelmed by Harari’s article. It seemed a little flippant and shallow in its argumentation. To some extent, I figured this was unavoidable: you can’t cover all the nuance and detail in a short newspaper piece. But I tend to think a better job could, nevertheless, have been done, whatever the word limits on the Guardian might have been. I want to explain why in the remainder of this post. I’ll start by outlining what I take to be Harari’s main thesis. I’ll then analyse and evaluate what I see as the two main arguments in his piece, highlighting flaws in both. I’ll conclude by explaining what I think Harari gets right.

1. Harari’s Thesis: Video Games Might be Good Enough
Interestingly, Harari starts his discussion in much the same place that I started my discussion in my paper ‘Technological Unemployment and the Search for Meaning’. He starts by pondering the role of immersive video games in the lives of those rendered obsolete by automation:

People must engage in purposeful activities, or they go crazy. So what will the useless class do all day? One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. 
(Harari 2017)

This isn’t purely idle, armchair speculation. Research by the economist Erik Hurst (and his colleagues) already suggests that young men in the US (specifically non-college educated men in their 20s) are opting for leisure activities, such as video games, over low paid and precarious forms of work. If the preference profiles of these young men carry over to others, then the automated future could be one in which the economically displaced live out their lives in virtual fantasies.

Is this a good or bad thing? Will it allow for human flourishing and meaning? Many will be inclined to say ‘no’. They will argue that spending your time in an immersive virtual reality world is deeply inhuman, perhaps even tragic. Harari’s central thesis is that it is neither. If we understand the lessons of human history, and if we pay close attention to our different cultural practices and beliefs, we see that playing virtual reality games has always been at the core of human flourishing and meaning.

Harari’s Thesis: A future where those rendered economically useless spend their time playing virtual reality games is neither bizarre nor tragic; virtual reality games have always been central to human flourishing and meaning.

This is provocative stuff. It seems so counterintuitive and yet he might be on to something. We’ve all had the sense that there is something slightly unreal and fantastical about the trivial tasks that make up our daily lives. But can we put this on a firmer, intellectual footing? Perhaps. The way I read him, Harari offers two main arguments in support of his thesis. Let’s look at them both now.

2. The Big Argument: It’s All Virtual
Isaiah Berlin famously divided the intellectual world into two camps: the foxes and the hedgehogs. The foxes knew many little things and used them all, in various ways, to chisel away at the world of ideas, not giving much thought to how it all fit together in the process. The hedgehogs knew one big thing — they had one big idea or theory — through which everything was filtered and regurgitated. They had a hammer and everything was a nail.

Harari is definitely a hedgehog. His scope may be vast, but he has one big idea that he uses to elucidate the tapestry of human history. The idea is almost Kantian in nature. It is that the reality in which we live (i.e. the one that we really experience and engage with) is largely virtual in nature. That is to say: we don’t experience the world as it is in itself (in the ‘noumenal’ sense, to use Kant’s words), but rather through a set of virtual/augmented reality lenses that are generated by our intellects. Harari explains the idea by reference to his own experiences of Pokemon Go and the similarity between it and the perceived religious conflicts in the city of Jerusalem:

It struck me how similar the situation [playing Pokemon Go with his nephew] was to the conflict between Jews and Muslims in the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smartbooks (such as the Bible and the Qu’ran) you see holy places and angels everywhere. 
(Harari 2017)

Later he supports this observation by appealing to his big idea:

In the end, the real action always takes place inside the human brain…In all cases, the meaning we ascribe to what we see is generated by our own minds. 
(Harari 2017)

Which leads me to formulate something I’m going to call ‘Harari’s General Principle’:

Harari’s General Principle: Much of the reality we experience (particularly the value and meaning we ascribe to it) is virtual in nature.

This general principle provides existential reassurance when it comes to contemplating a future spent living inside a virtual reality game. The idea is that there is nothing bizarre or tragic about this possibility because we already live inside a big virtual reality game, and we seem to derive great meaning from it irrespective of its virtuality. That’s his main argument. It seems to work like this (this formulation is mine, not Harari’s):

  • (1) If it turns out that we already derive great meaning and value from virtual reality games, then a future in which we live out our lives in virtual reality games will also provide great meaning and value.

  • (2) It turns out that we already derive great meaning and value from virtual reality games.

  • (3) Therefore, a future in which we live out our lives in virtual reality games will provide great meaning and value.
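For readers who like to see the logical skeleton laid bare, the argument can be formalised in a few lines of Lean (the formalisation and proposition names are mine, not Harari’s). It shows that the argument is a simple modus ponens, and therefore formally valid; all the philosophical work lies in defending the two premises.

```lean
-- My formalisation, not Harari's. The proposition names are placeholders:
-- DeriveMeaningNow    := we already derive great meaning and value from
--                        virtual reality games
-- FutureMeaningful    := a future lived inside virtual reality games will
--                        also provide great meaning and value
variable (DeriveMeaningNow FutureMeaningful : Prop)

-- Premise (1) is the conditional, premise (2) its antecedent;
-- conclusion (3) follows by modus ponens (function application in Lean).
example
    (p1 : DeriveMeaningNow → FutureMeaningful)
    (p2 : DeriveMeaningNow) :
    FutureMeaningful :=
  p1 p2
```

Since validity is trivial here, any critique must target the truth of premise (1) or premise (2), which is exactly how the discussion below proceeds.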

Premise (1) is practically tautologous. It’s hard to see how one could object to it. There is, however, one important, perhaps pedantic, objection that could be raised: there may be differences in the quality of the experience provided by different virtual reality games. So the mere fact that we derive great meaning and value from the current crop of virtual reality games provides no guarantee that we will continue to derive meaning and value from a future crop. This is significant, but I won’t belabour the objection since I’m the one who formulated premise (1). The problem could be rectified by arguing that the future crop of games will be broadly analogous to the current crop, though that claim may turn out to be contentious.

Premise (2) is supported by Harari’s general principle, but he also uses some case studies to show how it works in practice. One is that religion is a big virtual reality game; the other is that consumerism is a virtual reality game.

Religion: “What is religion if not a big virtual reality game played by millions of people together. Religions such as Islam and Christianity invent imaginary laws, such as “don’t eat pork”, “repeat the same prayers a set number of times each day”, “don’t have sex with somebody from your own gender” and so forth…Muslims and Christians go through life trying to gain points in their favorite virtual reality game…If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).” (Harari 2017)

Consumerism: “Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.” (Harari 2017)

You can probably see why I used the word ‘flippant’ to describe Harari’s argumentation earlier on, but let me give him his due. To someone like me — a religious sceptic and an agnostic capitalist — there is something quite attractive in what he is saying. I think religion really is a bit of a virtual reality game: that all the rules and regulations are fake and illusory. But I am attracted to this line of reasoning only because it undermines the very point that Harari is trying to make. His view of religion and consumerism is deflationist in nature. To say that both practices are virtual reality games is to denude them of value; to rob them of their meaning and significance. It’s like ripping the mask off the ghost at the end of Scooby Doo.

And this is the critical point. Harari’s big argument doesn’t work because it isn’t true to the lived experiences of devoted religious believers and avid consumerists. They don’t think that the reality in which they live is virtual. They think the rules and regulations are real — handed down to them by God — and that the angels and demons they believe to exist are part of some deeper reality. They also probably don’t experience their daily practice in the gamified sense that Harari ascribes to them. It’s not about gaining points or levelling up; it’s about being true to the commitments and requirements of authentic religious practice. His perspective is that of the outsider — someone who has seen through the sham — not that of the insider.

This means that it is very difficult to draw any solace from Harari’s general principle, or from the two case studies he uses to support his argument. The cultural practices and beliefs from which we currently derive great meaning and value are not normally understood by us to be either virtual or gamelike in nature (perhaps a few people do understand them in that way), and we may not continue to derive meaning and value from them if we come to perceive them in this way. This matters. Presumably, in the virtual reality future, we will know that the reality we experience is virtual, and that the activities we engage in are part of one big game. To assume that we can still derive meaning and value from our activities when we have this knowledge requires a different, narrower argument.

Fortunately, Harari might have one.

3. The Narrower Argument: The Value of Deep Play
To this point, we have been trading on an ambiguity about the meaning of ‘virtual reality game’. Harari never defines it in his article, but we can get a sense of how he understands the term by reading between the lines. Harari seems to view religion and consumerism as ‘games’ because they involve goal-seeking and competitiveness (getting to heaven; acquiring more stuff than your peers) and ‘virtual’ because the rules by which people play these games involve constructs (beliefs, laws etc) that are not ‘out there’ but are generated by the brain.

I do not think this is a good way to understand the concept of a virtual reality game. It doesn’t really track with ordinary usage of the relevant terms. As per the argument just outlined, religious believers don’t think of their religious belief as ‘virtual’ or their practice as a ‘game’. There also seem to me to be decent reasons to reject the notion that goal-seeking and competitiveness are necessary properties of games: some of the goals that we pursue competitively (say, knowledge or truth) might be objectively valuable. And we should reject the notion that the reality we experience is virtual simply because it relies on internally generated constructs, if for no other reason than that accepting this leads to the absurdity that Harari seems to want to lead us to: that everything is virtual.

My preferred understanding of the concept ‘virtual reality game’ essentially collapses both ‘game’ and ‘virtual’ into the same thing. Following the work of the philosopher Bernard Suits, I would define a game as the ‘voluntary attempt to overcome unnecessary obstacles’. (Suits actually has a longer definition that I discuss here.) In other words, it involves picking arbitrary or value-neutral goals and imposing a set of constraints on the pursuit of those goals that are not required or dictated by reality (the ‘rules’). Thus the world constructed by the game is ‘virtual’ in nature. It floats free from objectively valuable ends and layers additional rules on top of those provided by objective reality. An example would be 100m freestyle swimming. There, the arbitrary goal is traversing 100m through water in the fastest time. The constraints are that you must do this using a particular stroke, wearing a particular costume, and without the aid of propellant technologies (such as flippers or underwater motors). These rules effectively construct a ‘virtual world’ within the swimming pool.

Admittedly this is still a pretty broad definition. If you are really cynical and nihilistic then it could well turn out that everything is a game. But if you retain any objectivist bent — i.e. still maintain that there is a reality beyond your head and that there are objective values — then it does narrow the concept of the game quite a bit. This is useful for the debate about the postwork future. As I see it, the future in which we all play virtual reality games would involve playing games in the Suitsian sense. The critical question, then, is this: if we know that we are playing Suitsian games, can we still live lives of meaning and value?

Although he doesn’t use any of this conceptual apparatus, Harari does offer an argument that answers that question in the affirmative. This is his narrower argument. The argument still follows the logic of the argument I laid out in the previous section (i.e. premises (1) - (3) are still the relevant ones), but uses a narrower understanding of what a virtual reality game is to motivate its central claims. Once again, Harari uses a case study to support his point: the Balinese Cockfight. The example comes from the work of Clifford Geertz:

Balinese Cockfight: “In his groundbreaking essay, Deep Play: Notes on the Balinese Cockfight (1973), the anthropologist Clifford Geertz describes how on the island of Bali, people spent much time and money betting on cockfights. The betting and fights involved elaborate rituals, and the outcomes had a substantial impact on the social, economic and political standing of both players and spectators. The cockfights were so important to the Balinese that when the Indonesian government declared the practice illegal, people ignored the law and risked arrest and hefty fines.” (Harari 2017).

The cockfight is clearly a game (a cruel and inhumane one, to be sure) and presumably is understood as such by the Balinese people (it’s unlike religious practice and belief in this sense). Furthermore, it is just one example of a far more general phenomenon. Soccer, American football, tennis, rugby, and golf are all games from which many people derive great meaning and value. Indeed, they become so important to people that the games — artificial and virtual though they may be — become a new and important part of people’s lives. When this happens, the distinction between what was once virtual and what is real starts to break down:

For the Balinese, cockfights were “deep play” - a made up game that is invested with so much meaning that it becomes reality. 
(Harari 2017)

There is certainly something to this. For many people, games (that are clearly understood to be games) are central to their existence. They live for their sports and hobbies and leisure pursuits. They talk about them constantly with their peers. They dedicate themselves to understanding the intricacies of these games. Playing and conversing about them are their major social activities. It is how they achieve a sense of community and belonging, perhaps even a degree of social status. Does this, then, provide a proof of principle for the future? If we can find so much meaning and value in these forms of ‘deep play’, can we expect to find much meaning and value in a future of virtual reality games?

Perhaps. I definitely think that focusing on these examples of deep play is more persuasive than trying to argue that pretty much everything we do is a virtual reality game. But I don’t know if these examples of deep play are going to be sufficient. I suspect that every historical instance of deep play takes place in a world in which the games in question are merely a part of life, not the totality of life. In other words, although people derive significant meaning and value from those games, the games are only part of what they do. They still have jobs and families and other projects that seem (to them) to have some connection to the objective world. What will happen when they shift from a world in which the games are merely part of life to a world in which games are the majority (perhaps even the totality) of life?

I think it is hard to say.

4. Conclusion
I have suggested that Harari presents two arguments for thinking that a future in which we play virtual reality games would provide us with great meaning and value. I have argued that his second argument is more persuasive than the first. To argue that pretty much everything we do is a virtual reality game does violence to the lived experiences of those who derive meaning and value from what we currently do. On the other hand, to argue that we currently derive great meaning and value from pursuits that are clearly game-like in nature is undoubtedly correct. The problem is that, at the moment, these games make up part of our reality, not its totality.

In conclusion, let me highlight something that I think Harari’s article gets right and that is worthy of serious reflection. Harari’s article reveals how troubled the distinction between ‘virtual reality’ and ‘real reality’ really is. Some things that seem real to us may, already, be largely virtual; and some things that are clearly virtual have a tendency to become so important to us that they might as well be real. Even my attempt to clarify the distinction by appealing to Suits’s definition of a game doesn’t eliminate all the problems. Within a Suitsian game, there are definitely things that happen that are ‘real’. The emotional responses one has to the game are real; the skills and knowledge that one develops are real; the social interactions and friendships are real; the virtues one acquires are real; and so on.

When it comes to discussions about meaning and value in a world without work, we need to consider whether it is worth continuing with the virtual/real distinction, or whether an alternative conceptual vocabulary is needed.

Wednesday, June 7, 2017

Episode #24 - Bryson on Why Robots Should Be Slaves


In this episode I interview Joanna Bryson. Joanna is Reader in Computer Science at the University of Bath. Joanna’s primary research interest lies in using AI to understand natural intelligence, but she is also interested in the ethics of AI and robotics, the social uses of robots, and the political and legal implications of advances in robotics. In the latter field, she is probably best known for her 2010 article ‘Robots Should be Slaves’. We talk about the ideas and arguments contained in that paper as well as some related issues in roboethics.

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (or RSS).

Show Notes

  • 0:00 - Introduction
  • 1:10 - Robots and Moral Subjects
  • 5:15 - The Possibility of Robot Moral Subjects
  • 10:30 - Is it bad to be emotionally attached to a robot?
  • 15:22 - Robots and legal/moral responsibility
  • 19:57 - The standards for human robot commanders
  • 22:22 - Are there some contexts in which we might want to create a person-like robot?
  • 26:10 - Can we stop people from creating person-like robots?
  • 28:00 - The principles that ought to guide robot design

Relevant Links

Tuesday, June 6, 2017

Why we should create artificial offspring: meaning and the collective afterlife

The iCub Robot - Image courtesy of Jiuguang Wang

That's the title of a new article I have coming out. It argues that the creation of artificial offspring could add meaning to our lives and that it might consequently be worth committing to the project of doing so. It's going to be published in the journal Science and Engineering Ethics. The official version will be out in a few weeks. In the meantime, you can read the abstract below and download a pre-publication version at the links provided.

Journal: Science and Engineering Ethics

Links: Philpapers;

Abstract: This article argues that the creation of artificial offspring could make our lives more meaningful (i.e. satisfy more meaning-relevant conditions of value). By ‘artificial offspring’ is meant beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful. The first is that the existence of a collective afterlife — i.e. a set of human-like lives that continue in this universe after we die — is likely to be an important source and sustainer of meaning in our present lives (Scheffler 2013). The second is that the creation of artificial offspring provides a plausible and potentially better pathway to a collective afterlife than the traditional biological pathway (i.e. there are reasons to favour this pathway and there are no good defeaters to trying it out). Both of these arguments are defended from a variety of objections and misunderstandings.

Wednesday, May 24, 2017

Advice on Publishing Peer Review Articles

I was recently asked to give a short, ten-minute presentation on writing and publishing peer review articles. The presentation was aimed at PhD students. In preparing for the talk, I realised how difficult it is to distill my thoughts on the process into just ten minutes. I have a love-hate relationship with publishing for peer review. It is essential to my life as an academic, but I sometimes feel trapped by the publication ‘game’, and I often feel that the benefits are minimal and ephemeral. I could probably talk for several hours about these feelings without getting to any practical advice.

Anyway, since I didn’t have several hours, I decided I would focus my talk on eight key ‘tips’, divided broadly into three main categories (perspective, process, and promotion). None of these tips deals with how to actually write an article (I have dealt with that topic on a previous occasion). Instead, they focus on the attitude toward the process and how to respond to reviewers’ comments. I thought it might be worth sharing them here.

A. Perspective
It is important to approach the peer review process with the right attitude. I have three tips for cultivating the right attitude:

(1) Don’t lose sight of ‘why’: This is the most important thing. As a budding academic, it is very easy to get trapped in the ‘game’ of publication. As you begin to succeed in publishing you become acutely aware of your total number of publications. Very few academics can keep track of the substance of what their colleagues write, but they can all keep track of the number of pieces they publish. And so your number becomes the currency of self-worth. Try to avoid thinking in this way. If you become obsessed with your number, you will never be happy. I speak from experience. I once set myself the target of publishing 20 peer reviewed articles, thinking that if I reached that target I would have ‘arrived’ as an academic. But once I reached the 20-article target, I realised that the 30-article target wasn’t too far away. I needed to knuckle down and reach that too. Soon enough, I realised how silly I was being. I had lost sight of why I was publishing in the first place. Publishing is not an end in itself. There are reasons for doing it. The most important of those reasons — and the ones that sustain you in the long run — are the intrinsic joys/pleasures you experience in researching, thinking and writing about a topic that interests you. The other reasons are more instrumental in nature, but important nonetheless. After all, publication is a gateway to achieving academic impact, social impact, public engagement and career advancement.

(2) Prepare for failure: The average article is rejected. You are unlikely to be above average. It’s possible that you are, but don’t bet on it. The important thing is that you learn to expect failure and frame it in a positive way. Following Paul Silvia, I would suggest that you have the goal of becoming ‘the most rejected author in your department/peer group’. If you are being rejected, at least you haven’t given up. Giving up is worse than being rejected. (I gave this advice previously. On that occasion I suggested that it was the most important thing to bear in mind when publishing. I no longer think that is true. I think remembering why you are publishing is the most important thing. This might reflect a degree of maturity on my part and an increasing sense of detachment from the need to publish.)

(3) Don’t fetishise failure: Don’t assume that you can learn too much from your failures. Sometimes you can, but most of the time you can’t. Academic failure is overdetermined. What I mean is that there are probably many factors that prevented your article from being accepted for publication, no one of which was necessarily fatal or would be fatal if you were to resubmit the article elsewhere. Editors and reviewers are looking for reasons to reject your paper. Their default is ‘reject’. They have to set this default to maintain the prestige of their journal [thanks to Ashley Piggins for emphasising this point to me]. The reasons for rejection provided by reviewers often do not overlap. If you addressed every objection they raised before sending your article on to another journal, you would probably end up with an incoherent article. If you are rejected by a journal, look over the reviewer reports (if any), see if there are any consistent criticisms or comments that strike you as particularly astute, revise the article in light of those comments, and then send it off to another journal. If there are no such comments, just send it off to another journal without substantive revisions. Persistence is the name of the game. I am now willing to resubmit the same piece to several journals (sometimes as many as 4 or 5) before giving up on it.

B. Process
You must deal with the process of submitting to journals and responding to reviewers’ comments in the right way. The most important thing here, of course, is to submit a high-quality piece, i.e. something that is well-written, full of persuasive arguments, and makes an original contribution to the literature. I don’t think there is a perfect formula for doing that. But there are a few other things to keep in mind:

(4) Have at least 3-4 target journals: This really follows from my previous bit of advice (“Don’t fetishise failure”). I always start writing articles by having at least 3-4 target journals in mind. I don’t think you should be too wedded to one target journal. You should aim for something of reasonably high quality, but don’t predicate your well-being on having your article accepted by the top journal in your field. That’s something that will come with time and persistence. I also don’t think it is worth revising your article for your target journal’s ‘house style’. I have never had an article desk-rejected because I failed to format it in house style. As long as the article is a good fit for your target journal and you have written and referenced it well, it stands a chance. You can worry about house style after you have been accepted.

(5) Be meticulous in responding to reviewers’ comments: If you are lucky enough to be asked for revisions, be sure to take the process seriously. You should always prepare a separate ‘response to reviewers’ document as well as a revised manuscript. In this document, you should respond to everything the reviewer has highlighted and pinpoint exactly where in the revised draft you have addressed what they have said. Speaking as someone who has reviewed many manuscripts, I feel pretty confident in saying that reviewers are lazy. They don’t want to have to read your article again. They only want to read the parts that are relevant to the comments they made and check to see whether you have taken them seriously. This is all I ever do when I read a revised manuscript.

(6) Be courteous in responding to reviewers’ comments: Remember that reviewers have egos; they want to be flattered. They will have taken time out of their busy schedules to read your article. They will have raised what they take to be important criticisms or concerns about your article. You should always thank them for their ‘thoughtful’, ‘insightful’, and ‘penetrating’ comments. This is one area of life where you cannot be too obsequious.

(7) Pick your battles: Sometimes reviewers will say things with which you fundamentally disagree. You don’t have to bow down and accept everything they say. You should stand your ground when you think it is appropriate to do so. But when doing this be sure to acknowledge that the reviewer is raising a reasonable point (and always consider the possibility that the fault lies in how you originally worded or phrased what you wrote) and be sure to make concessions to them in other ways. To give a somewhat trivial example, I feel pretty strongly that academic articles shouldn’t be dry and devoid of ‘colour’. One of the ways in which I try to provide colour is by using well-known cultural or fictional stories to illustrate the key points I am making. This is one of the principles on which I stand firm. I once had a reviewer who wanted me to take a cultural reference out of an article because it was unnecessary to the point I was making. I stood my ground in my response, explaining at some length why I felt the example was stylistically valuable, even if logically unnecessary, and further discussing the importance of lively academic style. At the same time, I accepted pretty much everything else the reviewer had to say. Fortunately, they were gracious in their response, saying that they enjoyed my ‘spirited’ defence of the example, and accepting the article for publication. (It was this article, in case you were wondering).

C. Promotion
If you get an article accepted for publication, you should celebrate the success (particularly if it is your first acceptance), but you should also:

(8) Remember that it doesn’t end with publication: If you care about your research and writing, you won’t want it to languish unread in a pay-walled academic journal. You will want to promote it and share it with others. There are a variety of ways to do this, and discussing them all would probably warrant an entire thesis in and of itself. I personally use a combination of strategies: sharing open access penultimate versions of the text on various academic repositories; blogging; social media; podcasting; and interviews with journalists. I have never issued a ‘press release’ for anything I have written. I find I get enough attention from journalists anyway, but I think there probably is value in doing so and I may experiment with this in the future.

Bonus: Can you fast-track publications?
It takes a long time to write and publish for peer review. It is easy to get disheartened if you experience a lot of rejection. I am not sure that there is any way to truly ‘fast-track’ the process, but if you are hungry for an acceptance, I would suggest two strategies:

Write a response piece: i.e. write an article for a particular journal that responds, in detail, to another article that recently appeared in the same journal. This was how I got my first couple of acceptances and I think it can be very effective. In reality, of course, every academic article is a ‘response’ piece (they all respond to some aspect of the literature), it’s just that most are not explicitly labeled as such. What I am calling a ‘response piece’ is an article that is noteworthy for its academic narrowness (it only responds to one particular article) and journal specificity (it is really only appropriate for one journal). Both of those features limit its overall value. It is likely to have a more limited audience and is unlikely to achieve long-term impact. But it can provide invaluable experience of the peer review process.

Collaborate: In some disciplines collaboration is common; in others it is rare. I come from one of the latter disciplines. Nearly everything I have published has been solo-authored, but I have recently started to collaborate with others and I am beginning to appreciate its virtues. I think collaboration can work to accelerate the writing and publishing process, provided you collaborate with the right people. Some people are really frustrating to collaborate with (I’m pretty sure I am one of those people); some people are a delight. Obviously, you should pick a collaborator who shares some relevant research interest with you. On top of that, I recommend finding someone who is more productive and more ambitious than you are: they are likely to write fast and will push you outside your comfort zone. Furthermore, collaborating with them is far more likely to elicit engagement than simply asking them to provide feedback on something you have written. That said, I don’t think you should aim too high with your potential collaborators, at least when you are starting out. Pick people you know and who are broadly within your peer group. Don’t aim for the most renowned professor in your field, unless they happen to be your supervisor or a close friend. Again, you can build up to that.

Okay, so those are all my tips. To reiterate what I said at the outset, these tips only address part of the process. They don’t engage with the substance of your article and that really is the most important thing. Still, I hope some of you find them useful. The handout below summarises everything discussed above.

Monday, May 22, 2017

Episode #23 - Liu on Responsibility and Discrimination in Autonomous Weapons and Self-Driving Cars


In this episode I talk to Hin-Yan Liu. Hin-Yan is an Associate Professor of Law at the University of Copenhagen. His research interests lie at the frontiers of emerging technology governance, and in the law and policy of existential risks. His core agenda focuses upon the myriad challenges posed by artificial intelligence (AI) and robotics regulation. We talk about responsibility gaps in the deployment of autonomous weapons and crash optimisation algorithms for self-driving cars.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:03 - What is an autonomous weapon?
  • 4:14 - The responsibility gap in the autonomous weapons debate
  • 7:20 - The circumstantial responsibility gap
  • 13:44 - The conceptual responsibility gap
  • 21:00 - A tracing solution to the conceptual problem?
  • 27:47 - Should we use strict liability standards to plug the gap(s)?
  • 29:48 - What can we learn from the child soldiers debate?
  • 33:02 - Crash optimisation algorithms for self-driving cars
  • 36:15 - Could self-driving cars give rise to structural discrimination?
  • 46:10 - Why it may not be easy to solve the structural discrimination problem
  • 49:35 - The Immunity Device Thought Experiment
  • 54:12 - Distinctions between the immunity device and other forms of insurance
  • 59:30 - What's missing from the self-driving car debate?


Friday, May 19, 2017

The Right to Attention in an Age of Distraction

We are living through a crisis of attention that is now widely remarked upon, usually in the context of some complaint or other about technology.

That’s how Matthew Crawford starts his 2015 book The World Beyond Your Head, his inquiry into the self in an age of distraction. He was prompted to write the book by a profound sense of unease over how the ‘attentional commons’ was being hijacked by advertising and digital media. One day, he was paying for groceries using a credit card. He swiped the card on the machine and waited for a prompt to enter his details to appear on the screen. He was surprised to find that he was shown advertisements while he waited for the prompt. Somebody had decided that this moment — the moment between swiping your card and inputting your details — was a moment when they had a captive audience and that they could capitalise on it. Crawford noticed that these intrusions into our attentional commons were everywhere. We live, after all, in an attentional economy, where grabbing and holding someone’s attention is highly prized.

There is something disturbing about this trend. What we pay attention to, in large part, determines the quality of our lives. If our attention is monopolised by things that make us unhappy, anxious, sad, self-conscious, petty, jealous (and so on), our lives may end up worse than they might otherwise be. I am sure we have all shared the sense that the social media platforms, video and news websites, and advertisements that currently vie for our attention can have a tendency to do these very things. I find I have become obsessed with the number of retweets I receive. I constantly check my Facebook feed to see if I have any new notifications. I’m always tempted to watch one last funny cat video. My attention is thus swallowed whole by shallow and frivolous things. I am distracted away from experiences and activities that are ultimately more satisfying.

Given this state of affairs, perhaps it is time that we recognised a right to attentional protection? In other words, a right to do with our attention as we please, and a corresponding duty to protect our attentional ecosphere from intrusions that are captivating, but ultimately shallow and unfulfilling. I want to consider the argument in favour of recognising that right in this post. I do so by looking at the arguments that can be made in favour of three propositions:

Proposition 1: Attention is valuable and hence something worthy of protection.

Proposition 2: Attention is increasingly under threat, i.e. there is greater need/cause for protecting attention nowadays.

Proposition 3: We should (consequently) recognise a right to attentional protection (doing so might be politically and practically useful).

My analysis of these propositions is my own, but is heavily influenced by the work of others. Jasper L. Tran’s article ‘The Right to Attention’ is probably the main source and provides perhaps the best introduction to the topic of attentional rights. He casts a wide net, discussing the importance of attention across a number of domains. But there is something of an emerging zeitgeist when it comes to the protection of attention. Tristan Harris, Tim Wu, Matthew Crawford and Adam Alter are just some of the people who have recently written about or advocated for the importance of attention in the modern era.

1. Attention is Valuable
It would probably help if we started with a definition of attention. Here’s a possible one:

Attention = focused conscious awareness.

We all live in a stream of consciousness (occasionally interrupted by sleep, concussion, and coma). This stream of consciousness has different qualitative elements. Some things we are never consciously aware of — they are unseen and unknown; some things we are only dimly aware of — they hover in the background, ready to be brought into the light; some things we are acutely aware of — they are in the spotlight. The spotlight is our attention. As I sit writing this, I am dimly aware of some birds singing in the background. If I force myself, I can pay attention to their songs, but right now I am not really attending to them. The screen of my laptop is where my attention lies. That’s where my thoughts are being translated into words. It’s where the spotlight shines.

This definition of attention is relatively uncontroversial. Tran, in his article on the right to attention, argues that there is, in fact, little disagreement about the definition of attention across different disciplines. He notes, for example, that psychologists define it as ‘the concentration of awareness’, and economists define it as ‘focused mental engagement’. There is little to choose between these definitions.

So granting that the definition is on the right track, does it help us to identify the value of attention? Perhaps. Think for a moment about the things that make life worth living — the experiences, capacities, resources (etc.) that make for a flourishing existence. Philosophers have thought long and hard about these things. They have identified many candidate elements of the good life. But lurking behind them all — and taking pride of place in many accounts of moral status — is the capacity for conscious awareness. It is our ability to experience the world, to experience pleasure and pain, hope and despair, joy and suffering, that makes what we do morally salient. A rock is not conscious. If you split the rock with a pickaxe you are not making its existence any worse. If you do the same thing to a human being, it’s rather different. You are making the human’s life go worse. This is because the human being is conscious. Cracking open the human skull with a pickaxe will almost certainly cause the human great suffering and, possibly, end its stream of consciousness (the very thing that makes other valuable things possible).

That consciousness is central to what makes life worth living is fairly widely accepted. The only disputes tend to relate to how wide the net of consciousness expands (are animals conscious? do we make their lives worse by killing and eating them?). Given that attention is simply a specific form of consciousness (focused conscious awareness), it would seem to follow that attention is valuable. A simple argument can be made:

  • (1) Consciousness is valuable (hence worth protecting).
  • (2) Attention is a form of consciousness (focused conscious awareness).
  • (3) Therefore, attention is valuable (hence worth protecting).

But this argument throws up a problem. If attention is merely a form of conscious awareness, then what is the point in talking specifically about a right to attentional protection? Shouldn’t we just focus on consciousness more generally?

I think there is some value to focusing specifically on attention. Part of the reason for this is practical and political (I talk about this later); part of the reason is more fundamental and axiological. As I suggested in my definition, there are different levels or grades of conscious awareness. Attention is the highest grade. It has a particular importance in our lives. What we pay attention to, in a very real sense, determines the quality of our lives. Paying attention to the right things makes for higher levels of satisfaction and contentment, and it is only in certain states of acutely focused awareness that we achieve the most rewarding states of consciousness.

I have a couple of examples to support this point. Both are originally taken from Cal Newport’s book Deep Work, which I enjoyed reading over the past year. The central thesis of Newport’s book is that certain kinds of work are more valuable and satisfying than others. In particular, he argues that engaging in ‘deep work’ (which he defines as ‘activity performed in a state of distraction-free concentration that pushes your cognitive capacities to their limit and produces new value, insight etc’) is better than ‘shallow work’ (which is the opposite). In chapter 3 of his book, he sets out to defend this claim by discussing how deep work makes life more meaningful. The value of attention features heavily in his argument. He discusses the work of two authors who have highlighted this.

The first is the science reporter Winifred Gallagher. In her 2009 book Rapt, she made the case for the role of attention in the well-lived life. She wrote the book after being diagnosed with an advanced form of cancer. As she coped with her treatment and diagnosis, she had a revelation. She realised that the disease was trying to monopolise her attention and that her outlook and emotional well-being were suffering as a result. By systematically training herself to focus on other things (simple day-to-day pleasures and the like) she could prevent this from happening. The circumstances of her life (her disease, its prognosis) were bad; but her attentional focus could be good and that ultimately counted for more. She then set out to research the science behind her revelation, discovering in the process that it had considerable empirical support. Her conclusion is neatly summarised in the following quote:

Like fingers pointing to the moon, other diverse disciplines from anthropology to education, behavioral economics to family counseling, similarly suggest that the skillful management of attention is the sine qua non of the good life and the key to improving virtually every aspect of your experience. 
(Gallagher 2009, 2)

This chimes with my own experience. I find that my outlook and sense of well-being is far more affected by what I pay attention to on a daily basis than by what I achieve or by improvements in my overall life circumstances. Those things are important, don’t get me wrong, but they count for less than we might think. The benefits of achievements are often short-lived. You bask in the glory for a few moments but quickly move on to the next goal. Improvements in life circumstances quickly become the new baseline of expectation. The benefits of what you pay attention to are more sustainable. A life in which you focus on important and pleasurable things is a good life. This gains additional support when we consider how certain forms of torture work (they often work by forcing you to pay attention to unpleasant things) and how people tout the benefits of meditation (by focusing your attention on the here and now you can improve your psychological well-being).

The other author Newport uses to support his thesis is the psychologist Mihaly Csikszentmihalyi, who is best-known for his work on the concept of ‘flow’. Csikszentmihalyi set out to understand what it is that makes people really happy, i.e. what daily activities make them feel good. Were people happiest at work or at play? Interestingly, Csikszentmihalyi found that people were often happiest at work. Why was this? His answer was that work enabled people to enter states of concentrated, focused awareness that were intensely pleasurable and rewarding. He called these ‘flow states’.

He subsequently developed a theory of flow. The theory holds that you enter into a flow state when you are engaging in some activity that pushes your cognitive capacities to their limits. In other words, when you are doing something that tests your abilities but is not completely beyond them. Engaging in such activities fills your attentional sphere. They are so demanding that you cannot focus on anything else. This means you don’t have time to pay attention to things that might make you unhappy or put you ill at ease. A flow state is perhaps the highest state of attentional focus and, if Csikszentmihalyi is to be believed, the one that is central to the fulfilling life.

So we have here two arguments for thinking that attention, rather than consciousness more generally, is worthy of special consideration. What we pay attention to is central to our emotional well-being and outlook on life (Gallagher’s argument), and certain attentional states are intensely pleasurable and rewarding (Csikszentmihalyi’s argument). This does not mean that attention is all that matters in the good life (our health, income, friends etc. are also important) but it does suggest that attention is of particular importance. Furthermore, it suggests that protecting our attention has two important components:

Content protection: ensuring that we pay attention to things that make our lives go better (things that are meaningful and contribute to well-being) and that we are not constantly distracted by things that are trivial and unimportant.

Capacity protection: ensuring that we acquire and retain the capacity for extreme concentrated awareness (i.e. the capacity to enter flow states).

2. Attention is Under Threat and Needs Protection
You may not be entirely satisfied with the preceding argument, but set aside your objections for now. If we assume that attention is valuable and worth protecting, we must confront the next question: why is it worth protecting now? After all, if attention is valuable surely it has been valuable for the entire history of humanity? What is special about the present moment that demands a right to attentional protection?

There’s a simple answer and a more complex one. The simple answer is that no one who is currently concerned with attention would deny that it has always been valuable and worthy of protection. We probably didn’t recognise it before now because we lacked the conceptual vocabulary to articulate the right effectively, and because the political and social climate was not receptive to a rights-claim of this sort. The more complex answer returns us to the opening quote: the one I took from Matthew Crawford. There is something about the present moment that seems to involve a ‘crisis of attention’. Our attentional ecosphere has become dominated by smart devices, addictive apps, social media services, and ubiquitous advertising. This is making it increasingly difficult to pay attention to things that matter and to retain the ability to focus.

We live in an attentional economy, where thousands upon thousands of actors compete, second by second, for our attention. This competition has driven some extraordinary innovation in the tools and techniques of attentional manipulation. We are getting really good at distracting people and disrupting their attention. Adam Alter’s recent book Irresistible documents the various tools and techniques that are used to grab and hold our attention. He identifies six key ‘ingredients’ that are needed to create an experience that holds our attention. He then explains how modern technologies make it easier to create such experiences. I’ll explain by going through each of the six ingredients:

Goals: Doing something with a target or end state in mind makes it more likely that it will grab your attention. It gives your efforts a purpose. Modern information technology has made it easier to identify and track our achievement of certain goals. It has also made seemingly arbitrary or meaningless goals more salient and attention-grabbing. Goals such as getting 2,000 Instagram followers, or beating your Strava segment times, are not only novel, they are also more easily brought into our attentional spotlights.

Feedback: Getting feedback on what you do tells you what is worth doing (what you are good at) and hence what is worthy of your attention. Modern technology makes it easier to get this feedback. Tracking and surveillance software can give you precise, quantifiable data about your actions, and social media platforms allow others to comment, criticise and cajole you into trying the same thing over and over. What’s more, designers of games and apps often provide attention-grabbing feedback that is relatively unimportant (known as ‘juicing’ in game design) and can mask losses as wins (e.g. loud noises, badges, flashing lights). This further engrains an activity in our attentional spotlight.

Progress: Having the sense that you are getting better at something often makes it more attention-grabbing. The ideal is to create an experience with extremely low barriers to entry (anyone can get started and enjoy it) but which then rewards the time and effort put in by making the experience more challenging. Alter gives the example of Super Mario Bros as a game that had this ideal mix. He then notes that contemporary game designers use similar design principles to hook people into particular games on smartphones and social media platforms (Farmville, Candy Crush, Kim Kardashian: Hollywood). They often then exploit this attentional hook by adding in-game purchases that are necessary if you wish to make progress in the game environment.

Escalation: Having the sense that you are triumphing over adversity and that the stakes are being constantly raised often makes something more attention-grabbing. To be honest, I’m not entirely sure what the distinction is between this and the previous one, but as best I can tell it has to do with encouraging someone to believe they are acquiring mastery over a particular set of skills (as opposed to just giving them a sense of progress). Again, Alter highlights how modern game designers are experts at doing this. Adopting Csikszentmihalyi’s idea of flow, he notes how they create game environments that get people to operate just outside their comfort zones. This makes for a more immersive and rewarding experience. Alter also argues that humans have certain ‘stopping rules’ (cues that encourage them to end a particular behaviour) and that technology erodes or undermines these stopping rules.

Cliffhangers: Having the sense that a task or experience has not yet been completed can make it more attention-grabbing. This idea goes back to the work of the Russian psychologist Bluma Zeigarnik, who did experiments revealing that when you open a ‘task loop’ in your mind, it continues to occupy considerable mental real estate until it is closed off (i.e. until you complete the task). This has become known as the ‘Zeigarnik Effect’. Alter notes how modern media (particularly serial television shows and podcasts) exploit this effect to encourage ‘binge’ watching/listening. The ‘autoplay’ features on Netflix and YouTube also take advantage of this: they automatically open loops and present you with the next episode/video to sate your desire for more.

Social Interaction: Sharing an experience with others and getting feedback from them can make it more attention-grabbing. Suffice to say, social media platforms such as Twitter, Facebook and Instagram are excellent at facilitating social interaction of the most addictive kind. They allow for both positive and negative feedback, and they provide that feedback on an inconsistent schedule.

To reiterate, and to be absolutely clear, there is nothing necessarily technological about these six ingredients. You could engineer attention-grabbing experiences and products using these six ingredients in an offline, non-technological world. Indeed, Tim Wu, in his recent book The Attention Merchants highlights the many ways in which this has been done throughout human history, suggesting in particular that religions were the original innovators in attentional engineering. Alter’s point is simply that technology makes it easier to bring these six features together to make for particularly absorbing experiences.

But is this a bad thing? Not necessarily. Here we run into a major problem with the argument in favour of a right to attention. As noted earlier, attention is central to the well-lived life. Paying attention to the right things, and being completely immersed in their pursuit, is a good thing. Consequently, using the six features outlined by Alter to engineer immersive and attention-grabbing experiences is not necessarily a bad thing. If the experiences you have engineered are good (make individual lives better), then you might be making the world a better place. To suggest that we are living through a ‘crisis of attention’, and that this crisis warrants special protection of attention, requires some additional, and potentially controversial, argumentative footwork.

First, you have to argue that the kinds of attention-grabbing experiences that are being fed to us through our digital devices are, in some sense, worse than, or inferior to, the experiences we might be having without them. One way to do this would be to channel the spirit of John Stuart Mill and suggest that there are ‘higher’ and ‘lower’ experiences and that, in the main, technology is a fetid swamp of lower quality experiences. I think there is some plausibility to this, but it is complicated. You could argue that being totally immersed in video games - to the exclusion of much else - is ‘lower’ because you are not achieving anything of intrinsic worth. The time spent playing the game is time that could be spent (say) finding a cure for cancer and making the world a better place. You could also argue that the jockeying for position on social media platforms cultivates vicious (as opposed to virtuous) character traits (e.g. competitiveness, jealousy, narcissism). But you probably couldn’t argue that all technologically-mediated experiences are ‘lower’. Some may involve the pursuit of higher pleasures and goods. A blanket dismissal of digital media would be wrong.

Second, you would have to argue that the vast array of potentially absorbing experiences on offer is deeply distracting and hence corrosive of the ability to concentrate and achieve flow states. This seems like an easier argument to make. One thing that definitely appears to be true about the modern age is that it is distraction-rich. There are so many games, movies, podcasts, and social media services that are competing for our attention that it becomes hard to focus on any one of them. We get trapped in what Fred Armisen and Carrie Brownstein called the ‘technology loop’ in their amusing sketch from the TV series Portlandia.


In this sense, it doesn’t really matter whether the experiences that are being mediated through these devices are intrinsically worthwhile (whether they consist of the ‘higher’ pleasures); the distraction-rich environment provided by the devices prevents you from ever truly experiencing them.
If these three arguments are correct — if it is easier to engineer attention-grabbing experiences; if the majority of the experiences involve ‘lower’ pleasures/pursuits; and if the environment is too distraction-rich — then we may well be living through an acute crisis of attention in the present era.

3. Why a ‘Right’ to Attentional Protection?
You could accept the first two propositions and still disagree with the third. It could be the case, after all, that attention is valuable and is under threat but that it is neither desirable nor useful to recognise a specific ‘right’ to attentional protection. Further argumentation is needed. Fortunately, this argumentation is not too difficult to find.

One reason for favouring a ‘right’ to attentional protection is simply that doing so is the normatively/morally appropriate thing to do. Look at how other rights-claims are normatively justified. They are usually justified on the basis that recognising the right in question is fundamental to our status as human beings (to our ‘dignity’, to use the common phrase) or because doing so leads to better consequences for humankind. The right to property, for example, can be justified on Lockean ‘natural’ right grounds (that it is fundamental to our nature as human beings to acquire ownership over our material resources) or on practical economic grounds (the economy runs better when we recognise the right because it incentivises people to do things that increase social welfare).

Presumably similar justifications are available for the right to attentional protection. If the likes of Winifred Gallagher and Mihaly Csikszentmihalyi are correct, for example, then the skilful management of attention is integral to living a truly satisfying human life (it is the ‘sine qua non’ of the good life, to use Gallagher’s phrase). Protecting this ability to manage attention would, thus, seem in keeping with the requirements of human dignity and overall social well-being.

But normative justifications of this sort are probably not enough. It’s possible that we could ensure our dignity as attentional beings, and improve the societal attention-quotient, without recognising a specific ‘right’ to attentional protection. To justify the ‘right’ would seem to require a more practical set of arguments. Fortunately, this is possible too. You can favour the notion of a ‘right’ to attention on the grounds that doing so will be politically and practically useful. Contemporary political and legal discourse is enamoured with the language of rights. To recognise something as a right carries a lot of force in public debate. If we seriously think that attention is valuable and under threat, it may, consequently, be much to our advantage to recognise a right to attentional protection. We are more likely to be heard and taken seriously if we do.

On top of that, if a right to attentional protection does get recognised in law, it carries further practical significance. To understand this, it is worth stepping back for a moment and considering what a legally protected right really is. The classic analysis of legal rights was conducted by William Hohfeld. Hohfeld noted that claims to the effect that such-and-such a right exists usually break down into a number of more specific sub-claims.

Hohfeld’s complete analysis is a little bit complicated but the gist of it is readily graspable. As he saw it, there were four specific ‘incidents’ or components to rights (not all of which were present in every claim about the existence of a right). It’s best if we understand these with an example. Take the right to bodily integrity. According to Hohfeld’s analysis, this could be made up of four distinct incidents:

Privilege: The freedom/liberty to do with your body as you please.

Claim: A duty imposed on others not to interfere with or alter your body in any way (this is what we usually associate with the use of the term 'right').

Power: The legally recognised ability to waive your claim-right (e.g. through informed consent) and allow others to interfere with your body.

Immunity: The legally recognised protection against others waiving or altering your claim-right (i.e. not to be forced to give up your claim right).

Privileges and claims are first-order incidents: they regulate and determine your conduct and the conduct of others. Powers and immunities are second-order incidents: they regulate and determine the content of the first-order incidents. Using this four-part model, you can map the relationship between the different elements of the right to bodily integrity using the following diagram.

Diagram taken from Stanford Encyclopedia of Philosophy article on 'Rights'

All rights can be understood as combinations of these four incidents, but not all contain all four. For example, you could have a claim right (against interference by another) without necessarily having a power or immunity. Similarly, the basic elements of a right can be qualified in many important ways. For instance, the privilege to do with your body as you please is limited in many countries to exclude the right to sell sexual services or body parts. Likewise, the immunity against others interfering with your claim right might be a qualified immunity: if a law is passed through a legally legitimate mechanism that eliminates the claim, you may no longer be entitled to it. Some rights can be quite limited and qualified; some rights can be given the strongest possible protections.
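To make the structure of this analysis concrete, here is a minimal sketch (my own illustration, not anything from Hohfeld or the article) that models a right as an optional bundle of the four incidents. All the names and example strings are hypothetical, chosen purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Right:
    """A legal right modelled, per Hohfeld, as a bundle of up to four incidents.

    Any incident may be absent (None): a right need not contain all four.
    """
    name: str
    privilege: Optional[str] = None   # first-order: holder's liberty to act
    claim: Optional[str] = None       # first-order: duty imposed on others
    power: Optional[str] = None       # second-order: ability to waive the claim
    immunity: Optional[str] = None    # second-order: protection against forced waiver

    def incidents(self) -> list:
        """Return the names of the incidents this right actually contains."""
        return [k for k in ("privilege", "claim", "power", "immunity")
                if getattr(self, k) is not None]

# The bodily integrity example from the text: all four incidents present.
bodily_integrity = Right(
    name="bodily integrity",
    privilege="do with your body as you please",
    claim="others must not interfere with or alter your body",
    power="waive the claim, e.g. through informed consent",
    immunity="cannot be forced to give up the claim",
)

# A bare claim-right, with no accompanying power or immunity.
bare_claim = Right(name="bare claim", claim="others must not interfere")

print(bodily_integrity.incidents())  # ['privilege', 'claim', 'power', 'immunity']
print(bare_claim.incidents())        # ['claim']
```

The point of the sketch is simply that the four incidents are independent components: a given right may instantiate any subset of them, and each component can be further qualified, as the examples in the text show.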

This Hohfeldian analysis is helpful in the present context. It allows us to sharpen and deepen our thinking about the right to attentional protection. What kind of right is it? Which incidents does it invoke? Here’s my first pass at both of these questions:

Privilege: The liberty to focus and manage your attention as you see fit.

Claim: A duty on others not to interfere with or hijack your attention, and your capacity to pay attention.

Power: The legally recognised power to waive your claim-right to attentional protection, i.e. to allow others (people or experiences) to enter your attentional spotlight.

Immunity: The legally recognised protection against others waiving your claim right to attention (e.g. by selling off a claim to your attention to others).

I think these four incidents would need to be qualified in certain ways. The arguments I outlined earlier in relation to the precarious nature of attention in the modern era would seem to imply some degree of paternalism when it comes to the protection of attention. The fear, after all, is that modern technology is particularly good at hijacking our attention and that we are not the best protectors of our own attention. This would seem to qualify the privilege over attention. Furthermore, and as Jasper Tran notes in his article, there may be a duty to pay attention to certain things in certain contexts (e.g. a jury member has a civic and legal duty to pay attention to the evidence being presented at trial). Thus, there cannot be an unqualified privilege to pay attention to whatever you like (and, correlatively, to ignore whatever you like).

All that said, it would seem that the right to attentional protection does warrant reasonably robust recognition and enforcement. After all, attention is, if the arguments earlier on were correct, integral to human well-being.

4. Objections and Outstanding Issues
To this point, I have been looking at the case in favour of the right to attentional protection. I’m going to conclude by switching tack and considering various problems with the idea. I’ll look at four issues in particular. I have some thoughts about each of them, but I’m not going to kid you and pretend that I know what to do with each of them. They pose some serious challenges that would need to be worked out before a defence of the right to attentional protection became fully persuasive.

The first issue is that the right to attentional protection might conflict with other important and already recognised rights. Tran discusses the obvious one in his article: the freedom of speech. If I have a right to speak my mind, surely that necessarily entails a right to invade your attentional ecosphere? Or, if not a right to invade, at least a right to try to grab your attention. There seems to be some tension here. Tran responds to this by arguing that the right to attention and the right to freedom of expression are analytically distinct: you have a right to speak your mind but not a right to have others pay attention to you. That’s certainly true. But the analytical distinction ignores the practical reality. If you have people out there speaking their minds, it would be difficult to ensure that they don’t, at least occasionally, trespass on someone else’s attention. That said, how much weight is ascribed to the freedom of expression varies a bit from jurisdiction to jurisdiction, and commercial products have always been subject to more stringent regulations than, say, journalism, literature or other works of art. Furthermore, clashes of rights are common and the mere fact that one right will clash with another doesn’t, in itself, provide reason to reject the existence of that right.

The second issue concerns the practicality of protecting attention. You might argue that it is impossible to really protect someone’s attention from interference or hijacking. To be alive and conscious, after all, is to be bombarded by demands on your attention. How could we possibly hope to protect you from all those demands? The simple answer to this is that we couldn’t. To argue that there is a right to attentional protection does not mean that there is a right to be protected from all interferences with your attention. That would be absurd. Analogous areas of the law have dealt with this problem. Take the right to bodily integrity again. Most legal systems impose a duty on others not to apply force to your body. This is usually protected by way of laws on assault and battery. But, of course, simply being alive and going out in society entails that sometimes people will bump into you and apply force to your body. Legal systems typically don’t recognise such everyday bumps and collisions as part of the duty not to apply force. They save their energies for more serious interferences or infringements. A similar approach could be adopted in the case of a right to attentional protection.

The third issue concerns the redundancy of the right to attentional protection (i.e. its overlap with other pre-existing rights). There are a lot of rights already recognised and protected by the law. Some people may argue that there is an over-proliferation of rights-claims and this dilutes and attenuates their usefulness. In the case of the right to attentional protection, you could argue that this interest is already adequately protected by things like the right to privacy and bodily integrity, the freedom of conscience, and the restrictions on fraud, manipulation and coercion that already populate the legal system. This is probably the objection with which I am most sympathetic. I do worry that much of what is distinctive and interesting about the right to attention is already covered by existing rights and legal doctrines. That said, the mere fact that there are already mechanisms in place to protect the right does not mean that the right should not be recognised. Recognising the right may provide a useful way to organise and group those existing mechanisms toward a particular purpose. Furthermore, I do think there is something distinctive about attention and its value in human life that is not quite captured by pre-existing rights. It may be worth using the label so as to organise and motivate people to care about it.

Finally, even if we do recognise a right to attentional protection, there are a variety of questions to be asked about the mechanisms through which the right is protected. One big question concerns who should be tasked with recognising and protecting against violations of the right. Should it be up to the individual whose right is interfered with? Or should there be a particular government agency (or third sector charity) tasked with doing so? Or some combination of both? Giving the job to someone other than the individual might be problematic insofar as it is paternalistic and censorious: it would involve third parties arguing that a particular attentional interference was harmful to the individual in question. There are also then questions about the legal remedies that should be available to ensure that attention is protected. Should the individual have a right to sue an app-maker or social-media provider for hijacking their attention? Or should some system of licensing and regulatory sanction apply? One possibility, which I quite like, is that there should be dedicated public spaces that are free from the most egregious forms of attentional manipulation. That might be one way for the state to discharge its duty to protect attention.

Suffice to say, there is a lot to be worked out if we ever did agree to recognise a right to attentional protection.