In a recent post, I mentioned John Lanchester’s view that Facebook, and by extension the other software giants, represent a new form of surveillance capitalism. In his LRB article, in which he reviewed three recent books on things digital, he is pretty trenchant on this subject.
Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. I’ve spent time thinking about Facebook, and the thing I keep coming back to is that its users don’t realise what it is the company does. What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads. I’m not sure there has ever been a more complete disconnect between what a company says it does – ‘connect’, ‘build communities’ – and the commercial reality.
Although one of the elements of received wisdom about digital platform businesses such as Facebook is that they tend towards monopoly because of their network effects, he can also see trends that might curb Facebook’s growth, or even cause it to shrink. There are four, or perhaps five.
The first is what Timothy Wu calls in The Attention Merchants “disenchantment effects”—that periods of rapid media growth produce a backlash, sometimes with accompanying regulation or legislation. This is why Paris still has tight control of display advertising. This extends beyond media. When significant technologies become mature, and their rate of growth slows, there’s a period of social stocktaking in which social side effects (external costs, in other words) start to be regulated. Think of ‘60s innovations such as seat-belts, parking meters, and drink-driving legislation, in response to road deaths and congestion.
Lanchester focuses on customer disenchantment, meaning advertiser disenchantment.
[A] lot of the clicks on these ads are fake. There is a mismatch of interests here. Facebook wants clicks, because that’s how it gets paid: when ads are clicked on. But what if the clicks aren’t real but are instead automated clicks from fake accounts run by computer bots? This is a well-known problem, which particularly affects Google, because it’s easy to set up a site, allow it to host programmatic ads, then set up a bot to click on those ads, and collect the money that comes rolling in. On Facebook the fraudulent clicks are more likely to be from competitors trying to drive each other’s costs up.
Indeed, Ad Week has estimated the cost of click fraud at $7 billion, representing one-sixth of the market. Some of the estimates of click fraud are a lot higher than that. These are not trivial sums, especially in a market that is facing serious cost pressures. Even legitimate views are, shall we say, enhanced by the way Facebook chooses to measure use.
A video is counted as ‘viewed’ on Facebook if it runs for three seconds, even if the user is scrolling past it in her news feed and even if the sound is off. Many Facebook videos with hundreds of thousands of ‘views’, if counted by the techniques that are used to count television audiences, would have no viewers at all.
Inevitably, Facebook has tried to claim that its ads have the most impact in the first two seconds: advertisers are sceptical.
But there might also be end-user disenchantment. We have seen generational change affect online use before, and Facebook might also be vulnerable to this. There’s some evidence that younger people are sharing less on more open platforms, and using closed platforms instead, such as WhatsApp. Even though Facebook also owns WhatsApp, there is less data to be seen and used in a closed end-to-end platform. People don’t need to stop using Facebook to erode much of its value; they just need to use it less.
Another source of end-user disenchantment might be the emerging evidence that Facebook use is bad for well-being, given that well-being is one of the strong long-term shifts in social values. Lanchester notes that “the more people use Facebook, the more unhappy they are.” It is possible that this is a correlation, not causation, but I expect to see more on this issue.
Second, there could be more formal regulation, and this might be related to advertising, particularly political advertising. The pressure grows every time it becomes clear that Facebook’s algorithms actually allow people to advertise to “Jew-haters,” as ProPublica recently discovered, or that Russian interests paid Facebook for American political ads during the 2016 Presidential campaign. The fact that Facebook profits so handsomely from “fake news” that it has no incentive to do anything about it (what’s good for Facebook is bad for America) might also spur some kind of regulation.
And right on cue: [Update 1] the EU has just warned Facebook and Twitter that it will regulate them unless they take steps to remove hate speech much more promptly.
And, [Update 2] Congress is actively concerned about the Russian ads issue, as Buzzfeed reports:
The Russian ad scandal has captured lawmakers’ attention in a way Facebook’s previous political crises — from allegations of bias in its Trending column to its role in spreading fake news — have not. It has crystallized a trio of individual fears — Facebook is too big, has too much influence, and cannot effectively monitor itself — into one big expression of all of them.
Third, one of the trends implied by surveillance capitalism is individualised pricing. As Lanchester observes, the big internet enterprises have so far escaped competition investigation in the US, because such scrutiny typically focuses on consumer pricing and consumer benefit. Because Facebook and Google are free, in the sense that they don’t cost money (we pay them with our data, which they sell on), they have largely avoided antitrust attention in the US, though not, of course, in Europe.
But the point at which their activities lead to individualised pricing, and especially if this pricing is driven by algorithms that can’t be independently tested or evaluated, is the point at which this may well change. Even without that, there will surely come a point where a competition economist decides that algorithms that increase the price of an airfare when you return for a second look are anti-competitive.
The other issue is that not all of the companies that have grown rich on the digital boom share the same interests. Unlike Facebook and Google, Apple’s customers are end-users who pay it directly for its products (Amazon sits somewhere in between). It is better for Apple’s business model to position itself as being on the side of its customers, and Apple’s software is increasingly designed to reduce intrusion.
[T]he new Safari feature uses a “machine learning model”, Apple says, to identify which first-party cookies are actually desired by users, and which are placed by advertisers. If the latter, the cookie gets blocked from third-party use after a day, and purged completely from the device after a month, drastically limiting the ability of advertisers to keep track of where on the web Safari users visit.
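The lifetime rules described in that passage can be sketched roughly as follows. This is a toy illustration of the stated policy only, not Apple’s actual implementation; all names and the boolean stand-in for Safari’s machine-learning classification are invented:

```python
# Toy sketch of the cookie-lifetime policy described above. Not Apple's
# real ITP code: the class, thresholds, and fields are illustrative only.
from dataclasses import dataclass

DAY = 1     # days until third-party use is blocked
MONTH = 30  # days until the cookie is purged from the device

@dataclass
class Cookie:
    domain: str
    age_days: int
    classified_as_tracker: bool  # stand-in for Safari's ML classification

def third_party_use_allowed(cookie: Cookie) -> bool:
    # Tracker cookies lose third-party access after one day.
    return not (cookie.classified_as_tracker and cookie.age_days > DAY)

def should_purge(cookie: Cookie) -> bool:
    # Tracker cookies are deleted from the device after a month.
    return cookie.classified_as_tracker and cookie.age_days > MONTH
```

The effect, as the quoted passage says, is that an advertiser’s cookie can no longer follow a Safari user around the web after its first day, and disappears entirely after a month, while cookies the user actually wants are untouched.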
And it’s worth noting the extreme response by the ad industry to this relatively modest restriction on what advertisers can do with end-user data:
Apple’s unilateral and heavy-handed approach is bad for consumer choice and bad for the ad-supported online content and services consumers love. Blocking cookies in this manner will drive a wedge between brands and their customers… As organizations devoted to innovation and growth in the consumer economy, we will actively oppose any actions like this by companies that harm consumers by distorting the digital advertising ecosystem and undermining its operations.
Well, blah, blah, blah. If controls such as those that Apple is proposing are going to be so damaging, then clearly the “digital advertising ecosystem” is a lot more fragile than we thought it was.
It is worth putting all of this in a wider context. Facebook’s expansion has been driven by a simple model:
Growth (in user numbers) × Monetisation (of each user).
There are strong grounds to think that user growth has run out of road, certainly in mature markets, which leaves only monetisation. Monetisation means getting advertising on Facebook and Google to work harder: for example, inserting more ads into feeds and being more unscrupulous with user data. We know from experience that end-users dislike both of these things; in other words, the pursuit of monetisation could turn into a vicious circle.
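The arithmetic behind that model can be made concrete with a small sketch. Every figure below is hypothetical, chosen only to show why flat user numbers push all the pressure onto monetisation:

```python
# Illustrative arithmetic only: every figure here is hypothetical,
# not a real Facebook number.

def revenue(users: float, arpu: float) -> float:
    """Platform revenue = user numbers x monetisation (ARPU)."""
    return users * arpu

users = 2.0e9   # a mature, flat user base (hypothetical)
arpu = 5.0      # current average revenue per user, in dollars (hypothetical)

base = revenue(users, arpu)   # 10,000,000,000.0

# With user growth exhausted, a 20% revenue rise must come entirely
# from squeezing more out of each existing user...
target = base * 1.2
arpu_needed = target / users  # about 6.0

# ...via more ads per feed or heavier use of personal data, the very
# things end-users dislike, hence the potential vicious circle.
```

The point of the sketch is simply that once the `users` term stops moving, every dollar of growth has to come from raising `arpu`, which is exactly the behaviour users are likely to resent.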
One final note here, from Nick Srnicek’s book Platform Capitalism. He points out that advertising markets are more fragile than they seem, especially in a recession. In the last recession, ad-based platforms like Facebook and Google were able to ride it out because growth in user numbers was still strong.
He doesn’t add that they were also insulated from profit and performance pressures from shareholders and investors, because both were privately held at the time. Both are now at least partially listed on the stock exchange.
Recession, in short, could be a catalyst that makes the combination of pressures outlined here start to have an effect on how these surveillance businesses work. We could be close to peak Facebook.
The image at the top of the post is by Marc Smith, and is published here under a Creative Commons licence. It was also used to illustrate an article on The Conversation on the subject of surveillance capitalism.