Special Report

The Liquification of the Physical World

A review of Shoshana Zuboff’s The Age of Surveillance Capitalism

By Nathan Albright

A Google Street View car drives into a ditch

In 1971, the US Senate Subcommittee on Constitutional Rights began an investigation into the growing trend of “government-sanctioned programs designed to predict, control, and modify human behavior.” The subcommittee found that “the widespread urban riots of the 1960’s and the resulting calls for law and order” had prompted a search for “immediate and efficient means to control violence and other forms of anti-social behavior.” Apparently, following the civil unrest of the ’60s, the US government had become “heavily involved in a variety of behavior modification programs ranging from simple reinforcement techniques to psychosurgery,” and even attempts to recreate communist “brainwashing techniques.” While the report details some of the most horrifying (and least effective) methods, it was the “simple” techniques of pinpointed conditioning, monitoring, and reinforcement that proved most effective.

Harvard psychologist B.F. Skinner had led the research on these techniques and was developing what he called “behaviorism,” a science of human behavior that he believed could one day control entire civilizations. In fact, the techniques Skinner developed proved so effective at controlling captive populations that they were quickly adopted by prisons, psych wards, homes for the autistic, schools, factories, and the military, provoking public concern that eventually led to the Senate investigation. The increased public scrutiny resulted in a string of government regulations on research ethics, and, for a time, curtailed behavioral control research. Skinner lamented these obstacles to perfecting his new science of human behavior, and spent much of the rest of his career bemoaning the lack of public support as well as the limitations of his time: to hone the behavior of whole populations would require massive amounts of data on countless individuals and the ability to conduct society-wide experiments, but at the time the public was still hung up on what he called “the problem of privacy.” Still, Skinner was optimistic that all these problems might be “eventually solved by technological advances.”

Fifty years later, a buffet of previously unimaginable technological advances has provided the kinds of tools that Skinner could only have dreamed of, but as Shoshana Zuboff makes clear in her 2019 book The Age of Surveillance Capitalism, these tools are consolidated in the hands of a few of the world’s most powerful corporations, and most of us have no idea what they’ve been up to. In short, tech companies are gathering an astronomical amount of data on individuals, running experiments on enormous swaths of the general population, and working tirelessly to figure out how best to predict and ultimately control our behavior.

Zuboff, a Harvard Business School professor who took undergraduate psychology classes in Skinner’s department in the 1970s, defines surveillance capitalism as “a new economic order that claims human experience as free raw material.” This idea is central to her thesis: “forget the cliché that if it’s free, you’re the product,” she writes, “you are not the product; you are the abandoned carcass. The ‘product’ derives from the surplus that is ripped from your life.” The first of the book’s three sections recounts the history of how one company, Google, first developed this business model, and how the tech world followed close behind.

In 2014, Google surpassed Exxon Mobil in market capitalization, but as recently as 2001 the company had still not found a way to turn a profit from its millions of regular users. Google had already emerged as the leading search engine by taking what it called “data exhaust,” seemingly trivial information that other companies were ignoring, including “the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns, and location,” and running it through a “reflexive process of continuous learning and improvement.” As more people used the search engine, the results grew more accurate and intuitive, which in turn drew more users to the site. At first, Google had been opposed to advertising, expressing concerns that it would interfere with the objectivity of search results, but when tech start-ups started folding during the bursting of the dot-com bubble in 2001, the company quickly shifted focus and developed its signature innovation: targeted ads.
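The loop Zuboff describes is simple enough to caricature. A minimal sketch, with invented data structures rather than anything resembling Google’s actual ranking system: every interaction leaves a trace, and the trace feeds back into the ordering of future results.

```python
# A toy sketch of a search feedback loop built on "data exhaust": here the
# exhaust is just click counts per (query, result) pair. Hypothetical and
# illustrative only; not Google's ranking system.

from collections import defaultdict

click_log = defaultdict(int)  # (query, url) -> clicks observed so far

def record_click(query: str, url: str) -> None:
    """Every search interaction is logged, whether the user notices or not."""
    click_log[(query, url)] += 1

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Order candidate results by what previous users actually clicked,
    so the engine improves simply because more people use it."""
    return sorted(candidates, key=lambda url: click_log[(query, url)], reverse=True)

record_click("shoes", "sneakers.example")
record_click("shoes", "sneakers.example")
record_click("shoes", "boots.example")
print(rerank("shoes", ["boots.example", "sneakers.example"]))
# -> ['sneakers.example', 'boots.example']
```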

Even today many people think a targeted ad is simply linked to search terms (maybe a sneaker company pays to have an ad displayed after a search for “shoes”), but that had long been standard practice for search engines. Google took this a step further and used the vast amount of data it had acquired from its search function to build profiles of each of its users, predict who was most likely to purchase which products, and then charge advertisers a “price per click” while delivering their ads to users it knew were interested. Like the search engine itself, this process became a learning cycle, constantly improving its own predictions and therefore its profitability. Two years after running its first targeted ads, Google was a billion-dollar company. It had shown advertisers that it could eliminate risk and virtually guarantee profit. The key ingredient was data.
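The underlying economics can be sketched in a few lines. Hypothetically (this is an illustration of the logic, not Google’s actual auction), a platform ranks candidate ads by the advertiser’s bid weighted by the predicted probability that this particular user clicks, so every marginal gain in behavioral prediction is worth money.

```python
# Illustrative sketch of the economic logic of "price per click" advertising:
# rank ads by bid x predicted click probability. All names and numbers are
# hypothetical; this is not Google's system.

def expected_value(bid_per_click: float, predicted_ctr: float) -> float:
    """Expected revenue from showing this ad to this user once."""
    return bid_per_click * predicted_ctr

# Hypothetical per-user click predictions derived from a behavioral profile.
candidate_ads = [
    {"advertiser": "sneaker_co", "bid": 0.50, "predicted_ctr": 0.08},
    {"advertiser": "mattress_co", "bid": 2.00, "predicted_ctr": 0.01},
    {"advertiser": "insurance_co", "bid": 5.00, "predicted_ctr": 0.002},
]

# The ad shown is the one that maximizes expected revenue, which is why
# better behavioral prediction translates directly into more money.
best = max(candidate_ads, key=lambda ad: expected_value(ad["bid"], ad["predicted_ctr"]))
print(best["advertiser"])  # -> sneaker_co (0.50 * 0.08 beats the higher bids)
```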

It wasn’t long before most tech companies were trying to emulate Google’s business model, and the competition to collect mass data turned into a mad dash. In 2008, Facebook hired Sheryl Sandberg, former vice president of Google’s global online sales. Within two years the company had implemented its signature “like” button, which appears on third-party websites across the internet and tracks every visitor to those sites, even non-Facebook users, whether they click the button or not. Around the same time, Google was embroiled in a scandal in Germany after a Google Street View car was found not only taking photos of streets as it drove but also “secretly collecting personal data from private Wi-Fi routers,” including emails, URLs, passwords, names, telephone numbers, credit information, messages, chat transcripts, and records of online dating, pornography, browsing behavior, medical information, location data, photos, and video and audio files. In 2014, when asked about a new tracking ID assigned to each of its customers, a Verizon spokesperson acknowledged that “there’s no way to turn it off”; the same was true of a similar new tracking system from AT&T. Google, sensing that internet service providers could pose some of the only real competition in collecting data, feigned concern for user privacy and launched a campaign to ban the practice. Meanwhile, Google was upgrading its real-time satellite imaging capabilities to allow for what one expert called “pattern of life” analysis, while also expanding its Street View fleet to include “a wearable backpack, a three-wheeled pedicab, a snow-mobile, and a trolley” to capture close-up images in difficult-to-reach terrain, including the interiors of buildings.
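The like button’s reach is worth pausing on, because the tracking requires no click at all. A minimal sketch of the general technique (a hypothetical server, not Facebook’s code): any widget served from a company’s domain arrives at the browser with the visitor’s cookie for that domain and a Referer header naming the page being read.

```python
# A minimal sketch of how any third-party embed can double as a tracker.
# When a page includes a widget served from another company's domain, the
# visitor's browser fetches it automatically, and the request carries the
# visitor's cookie and the page's URL. Hypothetical, for illustration only.

from http.server import BaseHTTPRequestHandler, HTTPServer

class WidgetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The tracker learns who (cookie) was where (Referer), click or no click.
        visitor = self.headers.get("Cookie", "no cookie: new or logged-out visitor")
        page = self.headers.get("Referer", "unknown page")
        print(f"visit logged: {visitor} was reading {page}")
        # Then it serves the innocuous-looking button image.
        self.send_response(200)
        self.send_header("Content-Type", "image/svg+xml")
        self.end_headers()
        self.wfile.write(b'<svg xmlns="http://www.w3.org/2000/svg"/>')

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), WidgetHandler).serve_forever()
```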

The race to corner the market on surveillance has only intensified over time, producing a string of data collection schemes lazily disguised as ever more absurd products and services. From Roombas, which map the layout of your apartment, to Sleep Number beds, which monitor your vitals and record audio as you sleep, to Wi-Fi-enabled rectal thermometers: there is quite literally nowhere tech companies won’t go to mine data. In 2017, researchers at Microsoft’s Bing explained that such endeavors are worth it because “even a 0.1% accuracy improvement” in their ability to predict which users would click on which ads “would yield hundreds of millions of dollars in additional earnings.”
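Taken at face value, the arithmetic is easy to check. A back-of-the-envelope sketch, using hypothetical round numbers rather than Microsoft’s actual figures:

```python
# Back-of-the-envelope arithmetic behind the Bing researchers' claim. If
# revenue scales roughly with predicted click-through accuracy, a tiny
# relative gain is enormous at scale. Numbers are hypothetical.

annual_ad_revenue = 50e9       # assume $50B/year in click-priced ads
relative_accuracy_gain = 0.001 # the 0.1% improvement the researchers cite

extra = annual_ad_revenue * relative_accuracy_gain
print(f"${extra / 1e6:.0f} million per year")
# -> $50 million per year; at larger revenue bases, or with gains that
# compound through the learning cycle, the figure reaches the hundreds
# of millions the researchers describe.
```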

It’s not only novelty products that are mining data. A 2015 study from the University of Pennsylvania found that, of the one million most popular websites, “90 percent leak data to an average of nine external domains that track, capture, and expropriate user data for commercial purposes,” and the vast majority of those leaks lead back to Google or Facebook. Similar studies of smartphone apps have found that a majority covertly launch secondary, undetectable data-mining processes. “Even the most innocent-seeming applications such as weather, flashlights, ride sharing, and dating apps,” Zuboff writes, “are ‘infested’ with dozens of tracking programs that rely on increasingly bizarre, aggressive, and illegible tactics to collect massive amounts of behavioral surplus ultimately directed at ad targeting.” Probably the most common piece of data tracked on a smartphone is the user’s location, which, according to a 2015 Carnegie Mellon study, is shared on average thousands of times each week. Even when a user turns off official location tracking, service providers can (and do) triangulate the phone’s position by calculating its distance from nearby cell towers. In one particularly desperate effort to track users’ locations, a retail company used a form of sonar to communicate between in-store speaker systems and users’ phones.
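The triangulation itself is straightforward geometry. A minimal, illustrative sketch (hypothetical coordinates, not any carrier’s actual system): given three towers at known positions and an estimated distance to each, the phone’s position falls out of two linear equations.

```python
# A minimal sketch of distance-based position fixing ("trilateration"):
# three towers at known coordinates, plus the phone's estimated distance
# to each (from signal timing or strength), determine the phone's position.
# All coordinates and distances below are hypothetical.

def trilaterate(towers, distances):
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Subtracting the first circle equation from the other two linearizes
    # the system into two equations in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A phone at (3, 4): its distances to towers at (0,0), (10,0), and (0,10)
# are 5, sqrt(65), and sqrt(45) respectively.
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))
# -> (3.0, 4.0)
```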

Rather than showing concern about this rapidly expanding surveillance infrastructure, the US government has enthusiastically supported the companies designing it. In addition to direct support from the intelligence community in developing surveillance technology (Google Earth began as Keyhole, a CIA-backed project) and a revolving door for tech executives, there’s been bipartisan interest in Silicon Valley’s ability to influence elections. While readers are likely familiar with the notorious Cambridge Analytica scandal, in which a private data analysis firm collected around 5,000 data points on every adult in the United States and collaborated with the 2016 Trump campaign, they may not know that the firm’s research had been funded by Microsoft, Boeing, Google, the National Science Foundation, and the Defense Advanced Research Projects Agency. Readers may also be surprised to learn that the practice of using mass data collection to sway elections was actually pioneered in 2008, when Google CEO Eric Schmidt had a leading role in advising the Obama campaign in its effort to gather information on more than 250 million Americans. One of the campaign’s consultants put it bluntly: “we knew who people were going to vote for before they decided.” Zuboff describes how terrified Google’s competitors were when they saw Schmidt standing triumphantly behind Obama at the first post-election press conference, and quotes a number of insiders who suggest the Google CEO did it just to prove that he could.

All of this likely explains the weak or nonexistent attempts at government regulation that tech companies have so far faced. Frequently, legislators have settled for promises from Google and others to “self-regulate” or follow “best practices.” Even in the European Union, home to the world’s strictest data protection laws, Zuboff notes that tech companies have found workarounds and continue to thrive. For instance, it’s easy for Google to claim that it doesn’t sell its users’ data because it’s true: the company jealously guards all the data it collects and instead sells pinpointed access to its users. Other companies are able to claim that they never sell data simply because they are much more interested in “metadata,” data about data, which has often proven to be even more effective in predicting behavior. Meanwhile, one of the most popular suggestions for reining in data collection is “data ownership,” an idea supported by much of the tech world, which sees it as a slight financial incentive for users to tolerate the status quo.

The hand-in-glove nature of big tech and government relations becomes especially concerning as Zuboff begins to outline the next stage of surveillance capitalism, which she terms “instrumentarian power”: the ability to modify and manage users’ behavior directly. With the imperative to improve prediction, it was only a matter of time before companies realized that the most effective way to predict a person’s behavior is to control it. To do so, tech companies have turned to the same principles of conditioning that Skinner was only beginning to uncover in his research, but this time with nearly unlimited resources and seemingly complete impunity.

For example, anyone who has ever had a Facebook account has almost certainly been experimented on, as the company routinely conducts experiments on its users with sample sizes in the hundreds of thousands. One experiment involved altering certain user interfaces to see how the changes affected voter turnout; another tested a hypothesis on “emotional contagion” between users. Normally, this kind of research would have to be independently reviewed under the Common Rule, a regulation implemented largely thanks to the Senate investigation mentioned earlier, but since Facebook is a private company, it’s not required to work with a review board or even to inform the participants of a study.
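The scale is the point: with hundreds of thousands of subjects, effects far too small for any individual to notice become statistically unmistakable. A hedged sketch of the standard two-sample test such an experiment might use (the rates below are invented, not Facebook’s):

```python
# Why sample sizes in the hundreds of thousands matter: a standard
# two-sample proportion test detects behavioral shifts no single user
# would ever perceive. All numbers are hypothetical.

from math import erf, sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic and two-sided p-value for a difference in rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# A 0.1-percentage-point shift in behavior (2.1% vs 2.0%), invisible to
# any one user, is a decisive result at 350,000 users per arm.
print(two_proportion_z(7350, 350_000, 7000, 350_000))
# -> z near 2.95, p near 0.003
```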

One result of this research has been the company’s rapidly improving ability to get users quite literally addicted to the site by scientifically honing what Zuboff calls a “casino environment,” which exploits users’ emotional vulnerabilities. A leaked Facebook document boasts to advertisers of the company’s ability to monopolize a young person’s attention and identify when they’re most vulnerable to advertising: “by monitoring posts, pictures, interactions, and internet activity, Facebook can work out when young people feel ‘stressed,’ ‘defeated,’ ‘overwhelmed,’ ‘anxious,’ ‘nervous,’ ‘stupid,’ ‘silly,’ ‘useless,’ and ‘a failure.’” In 2016, Facebook’s marketing director was not shy about how deliberate and effective this strategy had been, bragging that young users were checking their phones 157 times a day.
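The casino metaphor is apt in a technical sense. Skinner found that rewards delivered on a variable-ratio schedule, unpredictable but regular on average, produce the most persistent, compulsive behavior, and a feed that only sometimes pays off with something new works the same way. A toy sketch, with illustrative parameters:

```python
# A toy sketch of variable-ratio reinforcement, the schedule behind both
# slot machines and the intermittent payoff of refreshing a feed. The
# parameter is illustrative; this is not any platform's actual code.

import random

def refresh_pays_off(mean_refreshes_between_rewards: int = 4) -> bool:
    """Reward (say, fresh notifications) arrives with probability 1/mean,
    so the user never knows which refresh will pay off."""
    return random.random() < 1 / mean_refreshes_between_rewards

random.seed(1)
pulls = [refresh_pays_off() for _ in range(20)]
print(pulls.count(True), "rewards in 20 refreshes, at unpredictable positions")
```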

Rather than introducing these monitoring and conditioning technologies by way of prisons or schools, tech companies are doing their best to stir up public demand for products and services that make perpetual monitoring seem like a luxury. The clearest example of this is the proliferation of voice-activated home assistants meant to mimic the experience of having a servant, each of which records audio that its parent company uses to refine voice recognition and artificial intelligence technologies. Another example is the explosion of “wearable” products, such as fitness-tracker watches, jeans made of “interactive denim,” and, more recently, pandemic smart masks that monitor your breathing. The earliest of these wearable monitors, Zuboff explains, were actually developed for research on observing and influencing animal migration patterns, something one might have inferred from the fact that companies still use terms like “herding” to refer to influencing user behavior.

In situations where the balance of power is reversed, the tactics are less subtle: a number of lending companies, landlords, and employers are increasingly using services that provide invasive data profiles to vet potential borrowers, tenants, or employees. Information available includes “texts, e-mails, GPS coordinates, social media posts, Facebook profiles, retail transactions, and communication patterns” as well as “the frequency with which you charge your phone battery, the number of incoming messages you receive, if and when you return phone calls, how many contacts you have listed in your phone, how you fill out online forms, or how many miles you travel each day.” Other services offer information on everything “from your personality to your ‘financial stress level,’ including exposing protected status information such as pregnancy and age.”

It’s hard to know for sure where big tech goes from here, in part because, as Zuboff points out, industry leaders tend to speak in absurd hyperbole. Google proclaims “the internet will disappear,” IBM predicts the “liquification of the physical world,” and Mark Zuckerberg says Facebook will become a global “church” destined to connect the world to “something greater than ourselves.” What we do know is what they’re investing in. In the past few years, top tech companies have acquired software start-ups specializing in artificial intelligence, facial recognition, and emotion recognition, and have been developing driverless cars, drones, and city infrastructure. Google in particular has been working on its Sidewalk Labs “smart cities” project, in which dozens of participating cities turn over all transit data to the company to develop a traffic management plan, one that tends to involve redirecting funding from public transportation into rideshare apps. Sidewalk Labs’ first experiments were the LinkNYC Wi-Fi kiosks that have popped up around the streets of New York “to combat digital inequality,” and which the program’s director referred to as “fountains of data.”

While Zuboff’s research is urgently necessary for navigating the coming years, she stops short of connecting it to the most pressing issues of the day. She acknowledges the brutalities of industrial capitalism, the oppressions it unevenly distributes to historically marginalized groups, and the environmental crisis it has caused, leaving future generations with “the burden of a burning planet,” and yet she devotes a maddening proportion of the book to defending some purer, imagined form of capitalism. Her efforts to defend capitalism in spite of everything she uncovers grow more elaborate and nonsensical as she approaches her conclusion, reaching their apex when she acknowledges that industrial capitalism has threatened earth with a sixth mass extinction, only to shift focus to what she calls the “seventh extinction,” the loss of “human nature” caused by surveillance capitalism.

Rather than quickly changing the subject from the potential destruction of all life on earth, Zuboff might have done well to mention the very real impact the digital world has on the degradation of our planet, consuming massive amounts of energy to accomplish tasks that in no way enrich the material world. In 2020, for example, Bitcoin “miners” used as much electricity as the entire country of Argentina. A more useful way to combat either of the two potential extinctions she mentions would be to explore the ways surveillance capitalism does or does not promote the organization of mass movements fighting for survival under a capitalist regime. In the early chapters of her book, Zuboff references the work of French sociologist Émile Durkheim, who wrote that the most dangerous political phenomena are “extreme asymmetries of power” which make “conflict itself impossible” by “refusing to admit the right of combat,” but she doesn’t circle back to a discussion of what combat might look like in the context of surveillance capitalism. Later in the book, she references a startup called Geofeedia, which “specializes in detailed location tracking of activists and protesters, such as Greenpeace members or union organizers,” but doesn’t connect it to this right of combat. Instead, she concludes the book with a hollow call to fight “a cruel perversion of capitalism” through American democracy (despite big tech’s daunting electoral influence).

Zuboff’s most useful insight, which is essential for organizers to reckon with, is that “as awareness endangers the larger project of behavior modification,” surveillance capitalists have a profit motive to eliminate “human consciousness itself.” The scientists who developed “wearable sensors” for animal research said the devices produced more reliable data because “they could disappear into the body without triggering the animal’s awareness.” The “casino environment” engineered by social media companies attacks individual awareness more directly by employing the kinds of conditioning and reinforcement techniques pioneered by Skinner to encourage compulsive, almost automatic behavior.

Most readers probably have firsthand experience with this semi-conscious state of mind: most adults now carry at least one internet-enabled device at all times, usually a cellphone, which monitors them, nudges them, and displays non-stop corporate advertising. The result has been a sort of generalized attention deficit along with perpetual disorientation. There can be no “right of combat” when the world’s most powerful corporations have monopolized our attention and warped our perceptions. If this is “instrumentarian power” in action, one has to wonder whether Skinner may soon be proven right in his belief that more data is all one needs to realize complete control.

The radical anthropologist David Graeber, who passed away last fall, grappled with many of the same questions as Skinner but came to nearly the opposite conclusions. At the heart of behaviorism is a belief that the science of human behavior is no different from physics: that all can be known and mapped into laws and predictions. When an unexplained phenomenon presents itself, we know that it is merely a gap in our knowledge, and we trust that there is a causal mechanism at play which we have yet to discover. In his behaviorist manifesto, sincerely titled Beyond Freedom and Dignity, Skinner argued that freedom is nothing more than an illusion caused by ignorance. Citing the work of quantum physicists, he maintained that all of the mechanisms driving each of our choices are as predictable as any physical phenomenon; we have only to discover the forces at work.

Graeber looked at these gaps in knowledge differently. In a beautiful meditation on the unknown, he wrote about natural phenomena that have eluded simple, rational explanations: from organisms exhibiting frivolous play, selfless altruism, and “action carried out for the sheer pleasure of acting,” to electrons evading prediction or control. After weighing some of the controversial theories attempting to patch over these stubborn gaps in a rational worldview, he proposes, as do some quantum physicists, that such generalized stubbornness probably means that “on every level of physical reality” there must be “something at least a little like freedom.”

In the time since Zuboff published her book, tech companies have stepped into the void left by pandemic restrictions on in-person gatherings, taking a more prominent role in daily life than ever before. The result has been record profits for the tech sector at a time when most other industries saw massive losses, as well as a shared sense of constraint and disorientation. As crises of all kinds, from climate chaos to police murder, have been rapidly filtered through the same spectacle-making medium, we’ve been left little room for reflection. But reflection is urgently needed. We must understand how and when we are being manipulated, nudged toward forgetting or shifting focus. We have to reorient ourselves and refuse to forget the events of the past few months.

As millions took part in the uprisings last summer following the police murder of George Floyd, the feeling of throwing off regimented authority electrified the air. Walking around the streets of Lower Manhattan during last year’s riots, I saw things I never expected to see in my lifetime. Police cars upside-down and burning. Windows of multi-billion-dollar companies shattered. Police retreating from groups of teenagers throwing water bottles. What was most incredible was knowing that millions of others in cities big and small across the US were seeing the same thing. The wave of unrest was unlike anything seen in the US since the riots that first prompted government research into behavior control over fifty years ago. But the billions of dollars in property damage and a clearly frantic government response were heartening reminders that even after decades of technological advances and behavioral research, no one is in complete control. For the precious time being, at this crucial moment when our collective survival is in question, there still exists something at least a little like freedom.

Nathan Albright is a Cofounding Editor at The Flood.