The Interface is a daily column about the intersection of social media and democracy. Subscribe to the newsletter here.
Programming note: I’m on assignment tomorrow and Friday. The Interface will return on Monday.
At around 2:30 a.m. ET on Wednesday, Facebook sent me an update about the controversial market research program revealed on Tuesday by TechCrunch. Effective immediately, the company said, the program would end on Apple devices. It also took issue with some of the language in TechCrunch’s report:
“Key facts about this market research program are being ignored,” the company said. “Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”
Some of that, I think, is fair: it seems wrong to call a program advertised publicly on various apps, and known as Facebook Research, a secret spying program. On the other hand, the percentage of teens who participated seems less relevant than the fact that they were targeted to begin with, even if it was emphasized in headlines (including my own). The “parental consent form” was a screen that anyone could quickly tap past.
What I didn’t know in the wee hours of Wednesday morning was that Facebook had already lost the general argument to its chief regulator in this case: Apple, which last night took steps to invalidate the root certificates enabling both the market research program and every single app that Facebook uses for internal testing purposes, for tens of thousands of employees around the world. Here are Tom Warren and Jake Kastrenakes:
Apple has shut down Facebook’s ability to distribute internal iOS apps, from early releases of the Facebook app to basic tools like a lunch menu. A person familiar with the situation tells The Verge that early versions of Facebook, Instagram, Messenger, and other pre-release “dogfood” (beta) apps have stopped working, as have other employee apps, like one for transportation. Facebook is treating this as a critical problem internally, we’re told, as the affected apps simply don’t launch on employees’ phones anymore.
As I noted here yesterday, tensions between Apple and Facebook have been high for some time now. For Apple CEO Tim Cook, Facebook and its fellow ad-supported tech giant, Google, make for convenient punching bags. Last year, in a speech about privacy as a human right, he referred to the companies as “the data-industrial complex.” Cook wants to promote the idea that iOS devices are more valuable than others because they don’t use an advertising-based business model. (His rhetoric escalated sharply in 2016, after Apple’s five-year quest to build an advertising-based business model of its own sputtered and collapsed.)
Facebook has pushed back, lightly: Mark Zuckerberg called Cook’s comments about Facebook’s business model “extremely glib” last year. But Zuckerberg can only ever go so far. Cook can flip a switch that removes the Facebook app from the devices of every iOS user. Facebook may be one of the most powerful companies in the world — but viewed in this way, it begins to look quite weak.
By invalidating Facebook’s enterprise certificate today, Cook flipped one of his lesser switches. And the result inside Facebook today was chaos, Rob Price reports. (Others told me much the same.)
The move dramatically escalated tensions between Facebook and Apple, and has left Facebook employees unable to communicate with colleagues, access internal information, and even use company transportation.
And just like that, Facebook’s entire day was wasted. What had been a cold conflict had suddenly escalated into a shooting war.
The argument against Facebook’s market research effort goes something like this: at a time when it faces mounting concerns over its data-collection practices, the company made an end-run around Apple developer policies to slurp up some of the most sensitive data a person has, including some belonging to teenagers.
That includes data from the friends of people who volunteered to participate — which, as Issie Lapowsky notes, was at the heart of the Cambridge Analytica scandal. And lawmakers wondered whether some of the people who Facebook targeted — which included children as young as 13 — could even meaningfully offer their consent. “Wiretapping teens is not research, and it should never be permissible,” Sen. Richard Blumenthal (D-CT) said in a statement.
Moreover, pro-privacy Apple would likely recoil at the prospect of more companies following in Facebook’s footsteps and seeking root-level access to customers’ phones, even if those companies were paying people for the privilege. It’s not hard to imagine what a single bad actor could do with that level of control over individual devices.
But there are arguments for Facebook’s kind of research, too, and I heard them from some of you. One is that it’s common — and indeed, by the end of the day, Google had to remove a similar app from its enterprise development program. (The company escaped punishment from Apple after issuing an abject apology.) Two is that Facebook’s program sought and obtained consent from its participants, and that to say people shouldn’t have been able to offer their consent is oddly patronizing. Three is that by paying its volunteers, it essentially made them contractors — offering a fig-leaf defense of the move to include Facebook Research among the company’s enterprise app deployments.
Some of the ads asked for individuals ages 13-17 for a “paid social media research study,” while another advertised opportunities for users “Age: 13-35 (parental consent required for ages 13-17).” Facebook appears to have taken steps to obscure that it was behind the program, with TechCrunch reporting that some sign-up methods only mentioned its name during installation instructions.
For those who believe that Facebook should be compelled to obtain and retain less consumer data, today likely felt like a win. In this view, Apple stepped in and protected consumers. (“It’s weird but probably necessary/inevitable that Apple is now Facebook’s de facto privacy regulator,” the New York Times’ Kevin Roose tweeted.)
But if you’re more interested in competition, today’s news may give you a chill. One giant platform declared another giant platform’s market research program inappropriate, then disappeared it with a Thanos-style finger snap. In the words of my boss, Nilay Patel:
Hi, I’m the nagging voice in the back of your head pointing out that it’s pretty intense that Apple can simply decide to prevent people from running code on their phones.
Facebook is an enlightened dictatorship, but so is Apple. Tim Cook and his lieutenants dictate the terms of an enormous economy, and can change that economy on a whim. Today Apple may have acted out of consistency with its privacy principles, to the benefit of some consumers. (And to the detriment of anyone who was counting on that $20 gift card!) But as Apple faces more pressure to serve as what Roose called Facebook’s de facto privacy regulator, we may find ourselves uncomfortable with its monopolistic power.
Concerns in this area have been developing over the past several years as the iPhone has matured as a development platform. Apple is currently the subject of a lawsuit, now before the Supreme Court, alleging that its App Store monopoly results in customers being overcharged. I suspect there is more of this kind of scrutiny ahead.
All that said: the team within Facebook that built this market-research program appears to have acted recklessly, given the stakes for their fellow employees. I hate riffing on the company’s old move-fast-and-break-things motto more than most journalists who cover the company, but here is a case where Facebook’s decision to empower its engineers to ship almost anything with a minimum of review has truly come back to haunt it.
I still believe Facebook would be well served to consider how it might seek less user data, and to develop new programs around purging the data the company has already collected. It could buy the company goodwill at a time when that is in dangerously short supply. Or perhaps it will simply look at today’s stellar earnings report as proof that once again, a media dogpile failed to account for the authentic affection its 2 billion users have for its products, whatever missteps it makes along the way.
But for all the attention we’re paying to Facebook’s moves here, I hope we spare at least as much for Apple. If Tim Cook can wreak this much havoc on Facebook’s day, however justified, just imagine what power Apple holds over the rest of us.
Benedict Carey reports on a new study from Stanford about the consequences, positive and negative, of temporarily stepping away from Facebook:
Expect the consequences to be fairly immediate: More in-person time with friends and family. Less political knowledge, but also less partisan fever. A small bump in one’s daily moods and life satisfaction. And, for the average Facebook user, an extra hour a day of downtime. […]
The new study, a randomized trial financed principally by the Alfred P. Sloan Foundation, a nonpartisan supporter of research in science, technology and economics, sketches out a nuanced, balanced portrait of daily use that is unlikely to satisfy either critics or supporters of the platform.
Following court wins in cases involving the president blocking his Twitter critics, the American Civil Liberties Union filed a lawsuit accusing a Sacramento sheriff of unlawfully blocking Black Lives Matter activists from his official Facebook page, Makena Kelly reports:
According to the ACLU, two Black Lives Matter Sacramento leaders were blocked by Sheriff Scott Jones on Facebook after Jones refused to investigate the death of Mikel McIntyre, who was killed by Sacramento deputies in 2017. This past fall, Jones posted on his official Facebook page to seek support, but was met with criticism, which prompted him to block BLM leaders Tanya Faison and Sonia Lewis.
When a page blocks someone on social media platforms, the blocked user is no longer able to view or interact with posts on that page. Because the page in question was operated by the sheriff, a government official, the block raises unique constitutional issues.
Daniel Funke explores how fake-news publishers on Facebook can evade punishments simply by changing their web hosts:
Of the 45 flagged stories that we identified at the time, 12 are still live on News Punch’s site with the same headline. Of those, none were flagged as false on Facebook as of publication.
That means that — even though fact-checkers have already debunked these stories — users can share old YourNewsWire stories from News Punch links without receiving a warning that they’re false. The fake news site itself could even repost them and find a new audience. And some of the stories have.
Google is taking new steps to protect user accounts in the run-up to the European Union elections, including new advertiser registration requirements similar to Facebook’s. It’s also offering free, specialized tools to journalists.
Amazon is attempting to generate some goodwill in New York City, where plans to build Regional Office 1 have led to widespread criticism. It will now fund dozens of computer science programs.
Perhaps the most important thing about Facebook’s latest earnings report, from the company’s perspective, is that usage is growing:
There were more than 1.52 billion people using Facebook every day in December 2018, a 9 percent increase year over year. Monthly active users were also up 9 percent year over year, with 2.32 billion as of December 31st.
Those numbers are both up from last quarter by 1.8 percent, improving on a three-month stretch that saw a slight usage decline in the US and Europe . This quarter’s growth is by no means the best Facebook has had, but it does mark a return to the company’s usual upward trend after signs that it might be beginning to falter.
Zuckerberg says Facebook has four priorities this year:
First, continue making progress on the major social issues facing the internet and our company.
Second, build new experiences that meaningfully improve people’s lives today and set the stage for even bigger improvements in the future.
Third, keep building our business by supporting the millions of businesses — mostly small businesses — that rely on our services to grow and create jobs.
And fourth, communicate more transparently about what we’re doing and the role our services play in the world.
Taylor Hatmaker reports that Facebook has tempted some privacy advocates to come work on policy issues from the inside:
Next month, longtime Electronic Frontier Foundation counsel Nate Cardozo will join WhatsApp, Facebook’s encrypted chat app. Cardozo most recently held the position of Senior Information Security Counsel with the EFF, where he worked closely with the organization on cybersecurity policy. As his bio there reads, Cardozo is “an expert in technology law and civil liberties” and already works with private companies on privacy policies that protect user rights.
Jack Nicas explores the challenge of accurately measuring fake accounts on Facebook. (As Rob Horning points out, the existence of fake accounts can also be taken as a hopeful sign that a perfect global panopticon that exists only to exploit the most vulnerable among us has not yet been completed!)
Facebook cut its estimate of fake accounts significantly in 2016. A year later, it more than quadrupled the estimate. And on Wednesday, in small print at the bottom of a slide about earnings, Facebook increased the estimate by 36 percent, to 116 million. So what’s going on?
Facebook arrives at its estimates by analyzing a sample of accounts, looking “for names that appear to be fake or other behavior that appears inauthentic,” the company said in securities filings. “We apply significant judgment in making this determination.”
Nathan Grayson reports on a woman who is beset by creeps on Twitch — and the Amazon-owned company can’t seem to do anything about it. Her crime: advocating for diversity in video games:
Whenever she streams, she can count on people in her community getting creepy DMs, either from a person who goes by the handle “Mosheddy” or his friends, some of whom have created accounts that specifically mention Mosheddy to taunt her. Other channels DePass has hosted, sent her community into using Twitch’s “raid” feature, or even just decided to watch on her own have also ended up being trolled by Mosheddy and his sympathizers.
This is all happening despite a suite of Twitch tools and a terms of service that should, in theory, enable streamers to curate their communities and experiences—and, most importantly, protect themselves from users they feel threatened or upset by. DePass’ troll has uncovered a series of easily exploitable loopholes. Sure, now that DePass has banned him, he can’t talk in her channel’s chat, but he can still follow it, see who’s in her community, DM them, and tag along any time DePass decides to hit up another channel.
Citing the success of the Google Walkout, Brendan Nyhan and Patrick Ball argue that employees of big tech companies should put more internal pressure on their bosses to make reforms:
Encouraging the tech giants to do more to limit the spread of misinformation may be the least bad solution to the problem that now confronts us. The effectiveness of these efforts and their consistency with our values is difficult to monitor from outside of these massive and powerful firms.
In this sense, the employees of Facebook, Google, and other platforms have been entrusted with a great responsibility; we must encourage them to act as advocates for socially responsible computing and protect and reward those who come forward when those principles are violated.
Adi Robertson is not impressed with a new documentary about Cambridge Analytica that is headed to Netflix later this year:
The Great Hack is sometimes fascinating, especially when it’s delving into the shady inner workings of Cambridge Analytica. And it covers timely and important themes. But for a film about resisting propaganda, it’s surprisingly credulous.
Cambridge Analytica clearly breached Facebook users’ trust. There’s far less evidence that its “psychographic” tactics worked any better than traditional canvassing and broadly targeted ads. Some reports paint the company as a bumbling snake-oil hawker, suggesting that Mercer forced candidates to hire it as a condition of his donations. But while The Great Hack’s subjects hammer Cambridge Analytica for all sorts of deceptions, they appear to accept its sales pitch at face value — and so do the filmmakers, who present company marketing material and promotional speeches as unchallenged fact.
If BuzzFeed — with an audience of 690 million people a month — can’t build an ad-supported media company online, who can? Farhad Manjoo is worried about what it means for us all. (So am I.)
More than anyone else in media, BuzzFeed’s founder, Jonah Peretti, bet on symbioses with the tech platforms. He understood that the tech giants would keep getting bigger, but to him that was a feature, not a bug. By creating content that hooked into their algorithms, he imagined BuzzFeed getting bigger — and making money — along with them.
At the least, the layoffs suggest the tragic folly of Mr. Peretti’s thinking. Google and Facebook have no economic incentive for symbiosis; everything BuzzFeed can do for them can also be done by the online hordes who’ll make content without pay.
So where does that leave media? Bereft.
If you like your Facebook criticism flecked with the spittle of its author, you’ll probably enjoy Tom Bissell’s review of Roger McNamee’s Zucked. (I didn’t.)
It’s no stretch to posit that because human neurotransmitters respond to the platform’s iconic use of a certain shade of blue, and spark with dopamine upon receiving a “like” or “tag” notification, desperate children are now living in cages and a raving madman occupies the Oval Office. Not even Orwell, after a feast of psilocybin, could have predicted this dystopia. This one’s all ours.
And finally ...
Jesse Daugherty likes to broadcast himself playing video games to a small number of fans. Then he fell asleep while broadcasting and woke up to his biggest audience ever:
In a rather endearing clip on his channel, Daugherty can be seen slowly coming to and realizing how big his audience had gotten. The clip, titled “The awakening,” now has more than 2.6 million views.
“I saw the total of 200 and I thought that was wrong,” he said. “Then I saw how fast the chat was moving and I was like, okay, that’s not wrong.”
I’ve been telling people not to sleep on Twitch as a fast-growing social network this year, but it seems to be working just fine for this guy.
Talk to me
Send me tips, comments, questions, and $20 gift cards: [email protected]