Feb. 4, 2019, 7:49 PM GMT
By Jacob Ward
Facebook and other companies may very well be protecting your privacy — but they don’t need your personal information to determine exactly who you are and what you’ll do next.
Our human sensor array was built to easily and automatically detect small, immediate anomalies such as snakes, fire, or members of an enemy tribe. Our cognitive and perceptual equipment evolved to spot those things right now and right here. Larger, more abstract threats and patterns are mostly beyond our immediate comprehension. This inability to detect the big stuff is one of the great challenges to our ability to understand, say, the worldwide implications of climate change, or the need to fill out a complicated form to enroll in a 401(k). And in the world of privacy and data, it clouds our ability to see the real effects of data collection.
First, understand that privacy and data are separate things. Your privacy — your first and last name, your Social Security number, your online credentials — is the unit of measure we best understand, and most actively protect. When a bug in FaceTime allows strangers to hear and watch us, we get that, in the same visceral way we can imagine a man snooping outside our window. But your data — the abstract portrait of who you are, and, more importantly, of who you are compared to other people — is your real vulnerability when it comes to the companies that make money offering ostensibly free services to millions of people. Not because your data will compromise your personal identity. But because it will compromise your personal autonomy.
"Privacy as we normally think of it doesn’t matter,” said Aza Raskin, co-founder of the Center for Humane Technology. "What these companies are doing is building little models, little avatars, little voodoo dolls of you. Your doll sits in the cloud, and they'll throw 100,000 videos at it to see what’s effective to get you to stick around, or what ad with what messaging is uniquely good at getting you to do something.”
Raskin was a successful engineer and entrepreneur, leading teams at Mozilla and Jawbone before realizing that his work had directly shaped human behavior in ways he couldn’t tolerate. He invented the infinite scroll — the now-ubiquitous design standard in which your feed never ends at the bottom of the page — and then did the math on how much time people were wasting by virtue of his creation. “Infinite scroll at the very minimum wastes 200,000 human lifetimes per day,” he said. “That’s why I chose a new life.”
Now, working at CHT with former Google ethicist Tristan Harris, Raskin spends his days grappling with the power companies have in predicting and shaping human behavior.
Even the tiniest interactions with an app or service give it useful data in building a simulation of you. “Imagine it’s a stick figure at first, and as you use the system, it’s collecting fingernail scraps and bits of hair,” Raskin told NBC News. “What do you care that you lost a fingernail scrap? But they’re putting it together into a model of you.”
With 2.3 billion users, “Facebook has one of these models for one out of every four humans on earth. Every country, culture, behavior type, socio-economic background,” said Raskin. With those models, and endless simulations, the company can predict your interests and intentions before you even know them.
And this is what gives rise to the illusion that our phones are recording our words and feeding us ads for cars just as we’ve finished a conversation about cars — a notion that Facebook and others have steadfastly denied.
The argument against Facebook’s market research effort goes something like this: at a time when it faces mounting concerns over its data-collection practices, the company made an end-run around Apple developer policies to slurp up some of the most sensitive data a person has, including some belonging to teenagers.
“I get that it’s creepy to imagine they listen to your conversations,” said Raskin. “But isn’t it more creepy that they can predict what you’re talking about without listening in? It’s this little model of you. You are super predictable to these platforms. It’s about persuasion and prediction, not privacy.”
But the market for this sort of data is just getting started. “Remember 10 years ago? We could barely look at a map on our phone — we had to print directions out,” said DJ Patil, who served as chief data scientist under President Obama.
At the moment, the data is most immediately valuable as a way of targeting advertising. Without having to attach your name or address to your data profile, a company can nonetheless compare you to other people who have exhibited similar online behavior — clicking this, liking that — and deliver the most targeted advertising possible.
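The comparison described above can be sketched in a few lines of code. This is a hypothetical illustration, not Facebook’s actual system: the behavior vectors, user names, and interest categories are all invented, and the similarity measure (cosine similarity over interaction counts) is just one simple way such matching could work.

```python
# Hypothetical sketch of behavior-based ad targeting: users are compared
# not by name or address, but by vectors of behavioral signals (clicks,
# likes). All names and numbers here are invented for illustration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length behavior vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Each vector counts interactions with (say) car, sports, and cooking content.
known_buyer = [9, 1, 0]   # a user who clicked a car ad and then bought

candidates = {
    "user_a": [8, 2, 1],  # behaves much like the known buyer
    "user_b": [0, 1, 9],  # behaves very differently
}

# Rank anonymous users by how closely their behavior matches the buyer's.
ranked = sorted(candidates,
                key=lambda u: cosine_similarity(candidates[u], known_buyer),
                reverse=True)
```

No identity is needed anywhere in this loop: the ranking depends only on how each profile behaves relative to a profile that already converted.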
In a statement provided to NBC News, Facebook said it targets advertising categories based on people’s interests, as gauged by their activity on Facebook, and the company points out that users can disassociate themselves from an interest by removing it from their settings. The company also says that ad interests are tied only to a user’s interests, not to personal characteristics, and that Facebook’s ad policy prohibits discrimination.
But this sort of data produces results far more powerful than traditional advertising. For instance, Facebook offers the chance to pay not just for a certain audience size, but for an actual business outcome, like a sale, an app download, or a newsletter subscription. Once upon a time advertisers paid a “CPM” — cost per thousand views — for a marketing campaign. That was just the chance to get in front of people. Now Facebook offers a rate based on “CPA,” or “cost per action,” a once-unimaginable metric offered because the company is so confident in its understanding of people and their preferences that it can essentially guarantee a certain number of people will do certain things.
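The difference between the two pricing models above comes down to simple arithmetic. The rates below are invented for illustration; they are not Facebook’s actual prices.

```python
# A worked comparison of CPM vs. CPA pricing, with made-up rates.

def cpm_cost(impressions, rate_per_thousand):
    """CPM: advertiser pays per thousand views, regardless of outcome."""
    return impressions / 1000 * rate_per_thousand

def cpa_cost(actions, rate_per_action):
    """CPA: advertiser pays only when a user takes the desired action."""
    return actions * rate_per_action

# Under CPM the advertiser buys exposure and hopes for conversions...
exposure_cost = cpm_cost(impressions=500_000, rate_per_thousand=5.00)

# ...under CPA the platform is confident enough in its predictions
# to bill per guaranteed result instead.
outcome_cost = cpa_cost(actions=1_000, rate_per_action=2.50)
```

The shift from the first function to the second is the shift the article describes: from selling the chance of attention to selling the behavior itself.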
And the data can do more than that. As we’ve seen in the past few years, data can predict not just which shirt you might be willing to buy, but which topics are so emotionally charged you cannot look away from them — and which pieces of propaganda will work best upon you. And that makes the platforms that collect data at scale an amazing way to influence human beings. Maybe not you. Maybe not today. But it’s enough influence, at scale, over time, that the outcomes on the whole are both overwhelmingly consistent, and yet individually invisible.
Tim Wu, professor at Columbia Law School and author of “The Attention Merchants,” believes this makes social platforms — and Facebook in particular — a tremendous liability. “There’s an incredible concentration of power there. So much data, so much influence, makes them a target for something like Russian hackers. To influence an election, you used to have to hack hundreds of newspapers. Now there’s a single point of failure for democracy.”
"To influence an election, you used to have to hack hundreds of newspapers. Now there’s a single point of failure for democracy."
And the categories into which your data places you can be used for much more than just selling you stuff or determining your political preferences. Without your ever telling a company your race or sexual orientation, your behavioral history can reveal those things. And so today, Patil said, engineers should be trained to recognize a problematic revelation coming out of otherwise anonymous data.
“It really has to be integrated into the curriculum, and into every interview,” Patil said. “I always ask job candidates something like ‘You find a piece of data that acts as a proxy for race. What do you do?’”
“The correct answer is ‘Who in the organization do I take that to? What group meets regularly about it?’ The incorrect answer is ‘Hey, wow, good question!’ That’s the wrong person for the job.”
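The interview problem Patil describes — spotting a feature that acts as a proxy for a protected attribute — can be sketched mechanically. This is a minimal, hypothetical example: the dataset, the feature names, and the correlation threshold are all invented, and real audits use far more careful statistics than a single correlation check.

```python
# Minimal sketch of proxy detection: flag any "anonymous" feature that
# correlates strongly with a protected attribute. All data is invented.
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two numeric columns."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_proxies(features, protected, threshold=0.6):
    """Return names of features whose correlation with the protected
    attribute exceeds the threshold -- candidates for human review."""
    return [name for name, col in features.items()
            if abs(pearson(col, protected)) > threshold]

# Invented example: a coarse zip-code grouping tracks the protected
# attribute closely; session length does not.
protected_attr = [0, 0, 0, 1, 1, 1]
features = {
    "zip_code_group": [0, 0, 1, 1, 1, 1],   # close proxy
    "session_length": [5, 9, 4, 7, 6, 8],   # unrelated signal
}

flagged = flag_proxies(features, protected_attr)
```

The point of the exercise, as Patil frames it, is not the code: it is knowing that a flagged feature goes to a review process, not quietly into a model.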
And in the future, the handing away of anonymous data to companies is going to give them so much insight into human behavior that the systems built on that data may sweep all of us up without our conscious participation in the process. A recent joint venture announced by Apple and the insurance giant Aetna will reward customers who agree to wear an Apple Watch with nudges toward good health practices. The companies promised to safeguard individual privacy. But at the same time, Aetna told CNBC it hopes to eventually enroll all its members in the program. What will all that data empower the company to do? Will premiums be higher for people whose choices may be healthy, but whose data profile suggests a shorter, more afflicted life? And once every insurance company has that data, how will people accidentally cursed with the wrong data profile get affordable insurance?
Unfortunately, your data and its predictive power, while enormously valuable in the aggregate to the companies that collect it, have only the tiniest measurable value in your life. In a lawsuit, Patel v. Facebook, making its way through the Ninth Circuit Court of Appeals, Facebook argued that for individuals to sue the company for violating privacy under a new Illinois law, they should have to show that they were individually harmed. And yet the business model is based on the value of aggregating individual data at the greatest possible scale.
On an individual basis, Facebook seems to be arguing, data is useless and worthless. But while we essentially cannot show the effect of data collection in our personal lives, the value of your data combined with everyone else’s is immeasurable.