In July 2018, Marcus Gilroy-Ware and I gave a talk at the Pervasive Media Studio on Facebook for Sceptics. This article is my take on the essential elements of what we covered, plus additional thoughts about the societal and individual impacts of Facebook. In this article I don’t presume to speak for Marcus on the specifics, although he and I agree on many aspects of the threats posed by Big Tech. Marcus is the author of Filling the Void: Emotion, Capitalism & Social Media.

Facebook owns other companies including WhatsApp, Instagram and Oculus, which I include under the heading of ‘Facebook’ in what follows.

Facebook is in the business of selling targeted messaging which advertisers and other campaigners direct at its users. The buyers include, as is now abundantly clear, political campaigners, and indeed anyone who seeks to influence the public in any part of the globe, whether a political party in the West or the Russian state.

Facebook achieves this targeting through the data it extracts from its users, which it turns into a huge database of user attributes, such as age and home location. Contrary to what is sometimes thought, it has no interest in selling users’ data to third parties: those parties might then become competitors in the domain of targeting services, which are the overwhelming source of its revenues.

Let’s say a client has a message it wants to deliver to specific sorts of users while they browse Facebook; for example, an advertiser wishes to reach women in their 30s and 40s in North America who ride bicycles. Facebook’s services are designed to deliver the message to those of its over two billion active users whose attributes match the client’s target specification. The more you pay, the more of the target group you will reach.
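To make the mechanism concrete, here is a toy sketch of attribute-based targeting – matching a client’s specification against a database of user attributes. This is not Facebook’s actual system; the users, fields and spec format are invented for illustration.

```python
# Toy illustration of attribute-based ad targeting (not Facebook's real system).
users = [
    {"id": 1, "gender": "female", "age": 36, "region": "North America", "interests": {"cycling", "cooking"}},
    {"id": 2, "gender": "female", "age": 52, "region": "North America", "interests": {"cycling"}},
    {"id": 3, "gender": "male",   "age": 41, "region": "Europe",        "interests": {"cycling"}},
]

def matches(user, spec):
    """Return True if the user satisfies every clause of the target spec."""
    return (user["gender"] == spec["gender"]
            and spec["min_age"] <= user["age"] <= spec["max_age"]
            and user["region"] == spec["region"]
            and spec["interest"] in user["interests"])

spec = {"gender": "female", "min_age": 30, "max_age": 49,
        "region": "North America", "interest": "cycling"}

audience = [u["id"] for u in users if matches(u, spec)]
print(audience)  # → [1]: only user 1 satisfies every clause
```

The real system differs in scale (billions of users, thousands of attributes) and in that many of the attributes are inferred rather than known, which is where the fuzziness discussed below comes in.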

This service is extremely valuable to campaigners, because it gives them a shot at influencing just those people who are of interest to them. They don’t waste money and time on reaching irrelevant people, and they can tune their message to different demographics. At least, the perceived value is great. But there are flaws:

  • The match is somewhat fuzzy – Facebook doesn’t actually know whether a particular individual rides a bike, although it can have a stab at inferring it, and
  • As far as influence goes, the effect on the recipient isn’t directly known unless it leads to a traceable action such as a purchase. Traceability is generally restricted to certain online actions. Facebook doesn’t know (as far as we know) whether someone went into a store to buy a bicycle. It would be illegal for them to know whether we voted a certain way in a polling booth.

In order to characterise its users and so match them up with their clients’ interests, Facebook acquires data from users in at least the following ways:

  1. Direct user entry – your name, age, profile picture, home town, phone number, etc.
  2. Your behaviours: ‘likes’, clicks, location, even how you move your mouse or VR headset and controllers
  3. Your contributions: images, videos, posts and messages
  4. Your friends and other contacts (and all the above information about them)
  5. Metadata concerning the calls you make and texts you send (WhatsApp)
  6. Attributes such as your social class that are derived from the above data using machine learning
  7. Records about you that are extracted from textual references to your name and recognition of your face in Facebook posts, even if you don’t have a Facebook account.

Derived attributes

Facebook employs algorithms to derive certain attributes (6-7 above) from data that you or others have knowingly or implicitly supplied (1-5). For example, it has patented a technique for deriving your social class.

Facebook might not be the only entity that has run your Facebook data through its algorithms. As shown in the diagram, when you use Facebook to log in to another app, you give the app owner (e.g. Uber) access to some data about you and your Facebook friends. (Disclaimer: the author’s Nth Screen app uses Facebook login, although it acquires only the user’s name and profile picture.)

The most notorious case of this is an app created by Cambridge University researcher Aleksandr Kogan, ‘This is your digital life’. This app not only gathered personality attributes by questioning its users, it also trawled Facebook data which they had effectively provided by logging in with Facebook. It thus built a statistical model linking Facebook data with personality attributes. By examining data such as likes, one can derive personality attributes such as introversion or extroversion, on a statistical basis. Whatever one thinks of the accuracy of such a model, to the campaigner it appears like gold dust. For example, personality factors (typically expressed as a ranking of openness, conscientiousness, extroversion, agreeableness and neuroticism) might render a person more persuadable through certain types of messaging.
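A toy version of the statistical idea can clarify it: fit a simple logistic model linking which pages a user has ‘liked’ to a self-reported personality trait, then apply it to a new user. Everything here – the page names, the data, the six-person ‘sample’ – is invented for illustration; the real models were built from millions of users and thousands of likes, but the principle is the same.

```python
# Toy sketch: a logistic model linking 'likes' to a personality trait.
# All page names and data are invented; this only illustrates the principle.
import math

PAGES = ["party_planning", "stand_up_comedy", "poetry", "chess"]

# (like vector over PAGES, 1 = self-reported extrovert)
training = [
    ([1, 1, 0, 0], 1), ([1, 0, 0, 0], 1), ([1, 1, 0, 1], 1),
    ([0, 0, 1, 1], 0), ([0, 0, 1, 0], 0), ([0, 1, 1, 1], 0),
]

weights = [0.0] * len(PAGES)
bias = 0.0
lr = 0.5

def predict(likes):
    """Probability that a user with these likes is an extrovert."""
    z = bias + sum(w * x for w, x in zip(weights, likes))
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
for _ in range(2000):
    for likes, label in training:
        err = predict(likes) - label
        bias -= lr * err
        for i, x in enumerate(likes):
            weights[i] -= lr * err * x

# A new user who likes 'party_planning' and 'stand_up_comedy' scores high -
# but the output is only a probability over correlations in the sample,
# not knowledge about the individual.
print(predict([1, 1, 0, 0]))
```

Note what the model actually delivers: a correlation-based probability, useful in aggregate to a campaigner, but capable of being entirely wrong about any particular person.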

Kogan based the app on prior work by colleagues. One of them, Michal Kosinski, had warned about the dangers:

“Commercial companies, governmental institutions, or even your Facebook friends could use software to infer attributes such as intelligence, sexual orientation or political views that an individual may not have intended to share.”

Nonetheless, Kogan made a deal with Cambridge Analytica to develop the model based on data from over 50 million people, and to use it for targeted political campaigning, including in the 2016 US election. The app contravened Facebook’s terms of service. But Facebook revealed, through its inaction, a lack of any public responsibility: it acknowledged no knowledge of the affair until after The Guardian reported on it. This was one of a class of breaches which it must have known were possible, and of which it had specific knowledge in the case of Cambridge Analytica. It has since, after a public outcry, taken steps to reduce the possibility of this happening again, and I’ll come back to that. In the meantime, its CEO Mark Zuckerberg has said he’s “sorry”.

What is Facebook good for?

Campaigners and advertisers value Facebook’s services highly: to the tune of about $12 billion revenue in the first quarter of 2018, notwithstanding the furore over privacy. But what about the users upon whose data that value is based – what good does it do them? Here’s what Facebook says:

“The Facebook community is now officially 2 billion people! We’re making progress connecting the world, and now let’s bring the world closer together.” – Mark Zuckerberg, Twitter, 2017.

“Every piece of content that you share on Facebook, you own, and you have complete control over who sees it and — and how you share it, and you can remove it at any time. That’s why every day, about 100 billion times a day, people come to one of our services and either post a photo or send a message to someone, because they know that they have that control and that who they say it’s going to go to is going to be who sees the content.” – Mark Zuckerberg to Senate Hearing, April 2018.

Zuckerberg’s answer, then, is that the value lies in ‘connecting’ people into a ‘community’, so that they can share content, but in a controlled way.

Indeed, many people regularly share content with others on Facebook, which makes this relatively convenient compared to other means. There’s no doubt that this type of ‘connection’ can be valuable, both for our personal relationships and for our ability to find information. Some Facebook interaction – dialogue with people we care about – can have benefits for mental health.

But we can question several aspects of Zuckerberg’s statements. First, ‘connection’ over Facebook (or indeed other forms of social media) isn’t the same thing as conversation, as Sherry Turkle has observed. She describes the phenomenon of being “alone together” on social media: online on a platform but without meaningful engagement through it. And not all social media activity involves interaction with other individuals; much is passive consumption of content, alone. Meaningless or non-existent engagement is far from being the worst outcome. Facebook itself admits that certain types of social media use are linked with poor mental health, notwithstanding the benefits that sometimes occur. Moreover, a 2017 study of 5,000 people found that “use of Facebook was negatively associated with well-being” in terms of self-reported physical health, mental health, life satisfaction and body mass index.

I recently read, in an as-yet unpublished essay: “Social media can make it hard for people to interact face to face.” This statement touched upon a good question: does Facebook activity sometimes substitute for, or distort, our face-to-face interactions? And what are the implications of that? When others live far away or are otherwise inaccessible, we take advantage of Facebook to interact with them when there is no face-to-face alternative and the telephone has its limitations. But what are the effects when online interaction displaces or even replaces face-to-face communication that could be taking place? Seeing someone in the flesh as you speak to them can convey more non-verbal communication, context, tone and affect than online interaction, notwithstanding video calls. And what we say, and how we behave towards one another, tend to be moderated to a greater extent when we are in our interlocutor’s physical presence than when we encounter them mostly or only online.

Are we seeing the negative effects of these differences in increased anger, loneliness and depression, and in entrenchment rather than openness in our world views?

Zuckerberg’s second claim is that Facebook users can control who sees their information. But Facebook’s online privacy controls are notoriously complex and thus hard for users to understand. In a recent report, the Norwegian Consumer Council was damning – not only about Facebook – in this respect:

“The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical.” – Deceived by Design, June 2018.

– where “dark patterns” are what the report defines as “exploitative design choices”.

The following image is taken from the report. Have you understood and set your Facebook privacy configuration with respect to facial recognition lately? Why is it necessary to consider such a question – and traverse this awkward graph – when all you wanted to do was (in my case, for example) participate in an online group related to a hobby?

If it’s free, you are the product

There is some truth in the statement, applied to free online services in general, that “if it’s free, then you’re the product”. If there is a cost to providing an ongoing free service, then the provider must meet it either through subsidy or through revenues derived from its operation – typically through sales based on the users’ data. Facebook does not charge for its services, and it makes vast profits from its users’ data. But this state of affairs did not begin with Facebook, and the slogan misses the main point: the scale of Facebook’s operation compared with what went before it.

Capitalism quickly embraced Web 2.0, recognising the potential for vast profits from two principal means of production: (a) raw material, in the shape of data gathered from billions of users, and (b) machinery, in the form of massed computers running algorithms over that data to turn it into products. While users supply some data explicitly and voluntarily, the remainder of Facebook’s means of data acquisition (which I have listed above) has reasonably been described as surveillance – by Shoshana Zuboff, among others. The algorithms are essentially statistical in nature: the techniques of machine learning. They are not capable of processing the semantics or meaning inherent in our lives. They ‘run the numbers’ over the data surveilled from us: an attempt at pattern-matching to produce commercially valuable information, with unknown error modes and no inherent ability to account for its determinations.

The data centres in which the algorithms run consume very large amounts of power; a significant side-effect is greenhouse gas emission, in an era of human-made climate change. Globally, according to this article, data centres consume 40% more electricity than the entire United Kingdom (the world’s seventh largest economy). Power consumption in data centres is growing rapidly with internet use.

I’d replace the “If it’s free…” statement with: You are part of the cybernetic machine, and it’s not free.

The following diagram shows the operation of the cybernetic machine: an amalgam of computation and human behaviour, which all social media users are a part of. Like any cybernetic system, it contains feedback loops. Human use of the machine produces data, which is used to create machine behaviour (timeline manipulation, notifications etc.) which influences human use of the machine. And so on. Our boundaries, as Aral Balkan would have it, no longer end at our biological selves. We are plugged into the machine. The machine has mathematical properties, which researchers use in attempting to model aspects of it. But the characteristics of the machine per se are less significant than its social, economic and political ramifications. We voluntarily plug ourselves into this machine because we see certain benefits in it. But it runs principally on behalf of capitalists whose central concern is profit, not necessarily our welfare or that of the planet.
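The feedback loop just described can be sketched in a few lines: the machine observes engagement, boosts whatever was engaged with, and the boosted content in turn draws more engagement. The topics, numbers and the ‘engagement’ rule below are all invented; the assumption that emotive content draws more engagement is my own, though one consistent with the attention economy the article describes.

```python
# Minimal sketch of the engagement feedback loop. All values are invented.
import random

random.seed(0)
topics = {"outrage": 1.0, "cat_photos": 1.0, "local_news": 1.0}  # feed weights

def user_engages(topic):
    # Assumed engagement rates: emotive content is engaged with more often.
    rates = {"outrage": 0.6, "cat_photos": 0.4, "local_news": 0.2}
    return random.random() < rates[topic]

for _ in range(1000):
    # Machine behaviour: show topics in proportion to their current weight.
    topic = random.choices(list(topics), weights=list(topics.values()))[0]
    # Human behaviour: engagement feeds back into the weights.
    if user_engages(topic):
        topics[topic] += 0.1

print(max(topics, key=topics.get))  # the loop amplifies whatever is engaged with most
```

The point of the sketch is the loop’s shape, not its parameters: whatever the machine optimises for is amplified, and the human on the other end is part of the optimisation.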

The costs we stand to incur by being part of this machine are not only destruction of the physical and social integrity of the planet through climate change, but also both our individual mental health and our collective political health. In principle we could mitigate the first problem through use of renewable energy sources. The solution to the second problem is far less clear. But the power of technology (and Silicon Valley in particular) to solve problems is limitless, right?

Inference, influence

We are supposed to believe in the power of Facebook and other Big Tech companies to solve any problem. Tech progress, Silicon Valley would have us believe, Is Awesome and Good.

In actual fact, the Facebook machine, like any other capitalist construct, is oriented towards profit rather than societal interests. Furthermore, it is far from Awesome, technically speaking. Referring to the above diagram, the inference of attributes such as social class and personality type from Facebook data only works – to the extent it does at all – on an unreliable and easily fooled statistical basis, through machine learning techniques such as deep learning; and it’s limited by the secondary sources of ground truth available – including human judgement and facts belonging to the sphere of human experience. Capitalists are interested in making money by convincing advertisers that they can reach, for example, gay or middle-class people. It’s likely that they will reach some people in those categories through their algorithms. But the algorithms may well be wrong in the case of any particular individual. The reductionist Big Tech world-view holds that complex questions about human nature and/or social phenomena such as sexuality or class can be answered by deep learning algorithms run over data such as facial features and Facebook likes. This view is fatuous, given the evidence against it, and rightly held to be objectionable.

A controversial example is research by Michal Kosinski and Yilun Wang who claim to be able to infer sexual orientation from facial features extracted from images on a dating site, and also based on Facebook data. Kosinski – who also conducted research which led ultimately to the Cambridge Analytica scandal – is fully aware of the controversies in which he has embroiled himself. The validity of the research conclusions has been questioned by fellow researchers, and LGBTQ groups have objected to the biological determinism underlying it. Some people wrote to Kosinski asking him to run his algorithm over their pictures, in order to help answer their questions about their own sexuality (as if that could possibly have been informative, as opposed to a palliative to understandable feelings of insecurity). Kosinski declined. However, while he is an academic and not part of Facebook or any other Big Tech company, he embodies the philosophy manifested in the latter’s practices:

“We should focus on organising our society in such a way as to make sure that the post-privacy era is a habitable and nice place to live.” (from this article)

While the techniques in Kosinski’s paper for algorithmic determination of sexuality are not known to be used by Facebook itself, a recent report described complaints from users who were shown advertisements for ‘gay conversion’ while browsing Facebook, based on their likes. Facebook rejected the advertisements after the complaints.

Instead of ‘inference’, then, we should refer to naive statistical reductionism.

Turning to the question of influence, targeted Facebook messages were used, as described above, in an attempt to sway the result of the US election. In addition, for either political or financial gain, foreign actors posted ‘fake news’ to generate controversy and thus attention and traffic. These were fabricated items which appeared, at least to some, to come from credible sources. Robert Mueller, investigating Russia’s role in the election, charged thirteen Russians with an “interference operation” that made use of Facebook (as well as other platforms) to post items designed to sway voters towards Donald Trump. On the financial side, certain parties in Macedonia were reported to have been fabricating news as ‘clickbait’ directed at Trump supporters. Initially this was thought to be purely for financial gain: they were generating ad revenue as a side-effect of users following their sensationalised headlines. However, later reports identified US conservatives as instrumental in their operation.

But is it really true that these messages were instrumental in the result? That is highly questionable: users are subject to many influences, and, whatever their personalities or concerns, aren’t necessarily swung one way or another in purchasing decisions or political decisions by any particular subset of messages delivered to them. In his testimony to a Senate hearing, political scientist Eitan Hersch said:

“While it may be broadly true that Facebook data has wide coverage and accurate records about the American public, it is worth noting that the demographic group credited for President Trump’s victory, the white working class, may be the least likely group to use Facebook and therefore the least likely to afford campaigns with a digital path to voter engagement.”

In a more recent example of alleged influence without proof, media including the New York Times reported that “Facebook Fuelled Anti-Refugee Attacks in Germany” – arguing that “towns where Facebook use was higher than average … reliably experienced more attacks on refugees.”

This study has been contested. Due to the empirical complexity of controlling for all possible variables, it is extremely difficult to tie social media activity causally to behaviours outside it.

None of the above is to argue that Facebook targeting or Facebook usage are uncorrelated with political effects. Facebook itself admits to a problem:

“What happened in the 2016 election cycle was unacceptable… We were too slow to spot this and too slow to act. That’s on us… We are learning from what happened, and we are improving… [We] removed hundreds of pages and accounts involved in coordinated inauthentic behavior – meaning they misled others about who they were and what they were doing” – Facebook’s chief operating officer, Sheryl Sandberg, in Senate testimony, September 2018.

Even if such influence is extremely difficult to prove in most cases, how should we prevent even the possibility of ungoverned mass political influence via Facebook?

Is Facebook redeemable?

In our talk, Marcus Gilroy-Ware and I asked: despite all of its flaws, is Facebook redeemable? That is, can we change it, or provide an alternative which has the benefits but none of the problems?

The authorities are asking similar questions. Facebook is under investigation by the Justice Department, the FBI and the SEC in the USA, and, in Britain, the Information Commissioner’s Office (ICO) has fined Facebook for failing to safeguard its users’ data – all for its part in the Cambridge Analytica scandal.

Tim Wu says “Don’t fix Facebook, replace it” and considers some alternatives. Half-seriously, I suggested that one could nationalise Facebook or produce a non-commercial equivalent: something akin to the BBC. Labour leader Jeremy Corbyn has recently proposed a Facebook alternative along those lines. Such an entity does not seem destined to be popular, and a charge (the BBC charges a licence fee) would exclude many people.

Mastodon is an alternative “decentralized, open source social network”. But how do we know it cannot be abused as Facebook has been abused? And in what sense is “the crowd” more trustworthy, or more manageable, than a registered corporation with a physical address?

Should we regulate Facebook? In particular, should we:

  1. Bring its privacy-related practices under firmer legal controls
  2. Extend its responsibility towards users, especially children, whose physical and mental wellbeing it impacts
  3. Make it legally responsible for the abuse it facilitates, including hate speech, and fake news?

The authorities are playing catch-up with regard to 1. Number 2, the impact on health, is another very complex question, and the role, if any, for regulation is not clear. Given a better understanding of the impact, a societal response – a re-evaluation of our wellbeing and how to safeguard it by reconsidering how we and our children should use Facebook – would be preferable.

With regard to 3, Facebook itself has various approaches to content moderation but none of them is truly effective, especially given that it has over two billion users. According to a report in the Washington Post, “[Facebook] says AI has boosted its ability to monitor the social network’s two billion users, but it still relies heavily on human moderators and, as [Yann LeCun, Facebook’s chief AI scientist] noted, ‘works not so well with false news’.” This statement encapsulates the advantages vs limitations of machine learning (which is what is meant by ‘AI’): capitalists value its statistical benefits when it comes to user surveillance, but it is unfit for making accurate determinations in any particular individual case.

A recent Channel 4 Dispatches programme revealed that Facebook’s moderation policies – as implemented by human moderators – are sometimes inconsistent, hard to defend, and instituted essentially as frameworks for business decisions. For example, a page with more subscribers (sources of revenue) seems to be less likely to be taken down than a low-traffic page with a similar level of hate speech.

Another, very recent example concerning Facebook moderation – and the difficulty it faces – is a post by the Anne Frank Centre which Facebook took down as contravening its ‘standards’, only to restore it when the Centre pointed out that, in context, the post was inoffensive – and that Facebook allows Holocaust denial pages to continue.

In order to help manage users who contravene its terms of service, Facebook is reported to be ranking all its users with an algorithmically determined trust rating. This is problematic in several ways. First, trustworthiness, like human sexuality, is far too sophisticated a concept to be tractable by machine learning. Second, machine learning algorithms are unreliable: it is likely that some trustworthy users (such as the Anne Frank Centre) will receive low trust rankings on a spurious, unknown, algorithmic basis. Might Facebook’s user ranking nonetheless be used, in the manner of a credit rating, to make algorithmic determinations about some other aspect of our lives – determinations that Facebook does not make transparent, or allow us to dispute?

Currently, social media platforms are exempt from legal responsibility for the content posted on them. It is not clear why. They distribute content to public subscribers, like TV stations do.  Why are matters in the public interest being handled exclusively by a profit-driven entity, with so little oversight and so little understanding – on Facebook’s part or anyone’s – of the ramifications of its algorithms?

Creative response

Although we asked about the possibility of redemption, there is a bigger question. Given where matters stand – over two billion of us now ‘connected’ – can we recover ourselves, as individuals and society, from our participation in the cybernetic machine that we have plugged ourselves into?

The first step is recalcitrance. How much do you need to be on Facebook? Do you need to be on it as often as you are? How much time do you spend on it compared to other social activities? Is it possible to be part of the machine and exploit it solely for your own purposes rooted outside the machine? To sidestep the imperative behind the machine’s design, and not “perform as a subject of value”, as Beverley Skeggs and Simon Yuill put it?

In 2010, as part of my ‘Facebook Data Provocations’, I proposed that we should all invoice Facebook for their use of our data. We are supposed to think that we get a good deal by being offered the service without charge. But who is to say that the value of my data is about the same as Facebook’s costs in running its service for me? What if my data were far more valuable than that? I would be entitled to invoice Facebook for the difference. Going further, what if there were an open market in ‘personal data assets’, which we sold and the likes of Facebook had to bid for? What if, as others have suggested, we the users of Facebook formed a union to negotiate collectively over its terms of service?

Let there be far more provocations such as these. But what we really need is a movement against the cybernetic machine which we have plugged ourselves into. I have been thinking about this possibility, in terms not only of politics but of art.

Surrealism 2.i

In the last century, Dada arose, “reject[ing] the logic, reason, and aestheticism of modern capitalist society” (Wikipedia). Surrealism followed, with a greater emphasis on harnessing the unconscious, as a space that is separate from the capitalistic machine. What now, given that we are faced with a new form of capitalism from the likes of Facebook? Like other social media platforms, Facebook is extremely demanding of our conscious – and perhaps unconscious – attention. Tim Wu writes about this in his book “The Attention Merchants: The Epic Scramble to Get Inside Our Heads”.

If Facebook is already “inside our heads”, what should ‘Surrealism 2.i’ be? (That’s i as in the imaginary number – ‘2.0’ would hardly be surreal.) Surrealism means ‘beyond realism’ – beyond the capitalist machinery of the 20th century. What stance is needed in relation to the new cybernetic machine, which entangles humans with their mobiles, with servers and algorithms?

Further immersion in the virtual – virtual reality (VR) – is surely not the answer. VR amounts to yet another mode of being “alone, together”.

Perhaps what we need is rerealism – a renewal of realism in the sense of interpersonal relations away from the virtual and in the physical presence of others. Or interrealism: ‘between’ rather than ‘beyond’. To focus, not on the unconscious, as the surrealists did, but on the hyperconsciousness that is possible between unmediated humans who interact directly, away from the machine. This would be to reject internet-age capitalism’s illusion that it connects us all, when in many ways it divides us; it would be in favour of imaginatively occupying the space between us.

Do you agree or have your own suggestions? Please comment or get in touch. And, if you live in or near Bristol, UK, I’ll be inviting you to a series of meetings on this topic, from late September 2018.


Image by Endre Rozsda, CC BY-SA 4.0, via Wikimedia Commons