Does Facebook Break WhatsApp’s End-To-End Encryption?

Several years ago, I got everyone in my family to switch to using WhatsApp. They liked it because it was a messaging platform that was easy to use and had all the features they wanted, while I liked it because of the end-to-end encryption that was missing from the assortment of solutions that we had been using.

I would rather use Signal, but WhatsApp is a nice compromise between security and usability for my family’s use case. Still, I am always concerned when I read stories like ProPublica’s recent investigation, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users.”

Fortunately, in this case the story was largely garbage, and ProPublica should be ashamed for running such a scaremongering article.

Here is how Facebook undermines privacy protections in WhatsApp: it has a system that allows users to report abusive messages, which it then investigates. When a user reports an abusive message to WhatsApp, the content of that message and recent messages with the allegedly abusive sender are sent to Facebook as part of the abuse report.

That’s it. Facebook doesn’t break the end-to-end encryption or use other shady methods; it simply has an abuse reporting system that allows users to share the content of abusive messages.
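To make the mechanics concrete: end-to-end encryption protects messages in transit. Once my phone has decrypted a message, my phone holds the plaintext, and I can forward it wherever I like, including to an abuse queue. Here is a minimal sketch of how a reporting flow like this could work; the names, data shapes, and the “last five messages” cutoff are my own illustrative assumptions, not WhatsApp’s actual code:

```python
# Minimal sketch of how a "report abuse" feature can coexist with
# end-to-end encryption. All names here are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class DecryptedMessage:
    sender: str
    text: str  # plaintext -- already decrypted on *this* device


@dataclass
class AbuseReport:
    reporter: str
    reported_user: str
    messages: List[DecryptedMessage]  # recent messages from the reported thread


def build_report(reporter: str, reported_user: str,
                 local_history: List[DecryptedMessage],
                 recent: int = 5) -> AbuseReport:
    """Build an abuse report from plaintext the reporting device already holds.

    Nothing here touches the encryption layer: the reporting client simply
    re-sends messages it has already decrypted, just as it could forward
    them to any other contact.
    """
    thread = [m for m in local_history if m.sender == reported_user]
    return AbuseReport(reporter, reported_user, thread[-recent:])
```

The important part is that nothing in this flow breaks or weakens the encryption; the report is assembled entirely from plaintext the recipient’s device already possesses.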

But ProPublica chose to characterize this abuse reporting system this way,

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp–the largest messaging app in the world, with two billion users–far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways.

Unfortunately, ProPublica’s story was widely interpreted to mean that Facebook regularly compromised WhatsApp’s end-to-end encryption, which is not true.

Eventually, ProPublica faced such a backlash that it was forced to revise its story and add the following disclaimer,

Clarification, Sept. 8, 2021: A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.

Frankly, that is unacceptable. First, the story already did enormous damage by falsely undermining confidence in WhatsApp’s end-to-end encryption. Unfortunately, when people see these sorts of stories, they often switch to less secure messaging options. It is bizarre, for example, how many people I’ve seen swear off WhatsApp in favor of Telegram, which is significantly less secure than WhatsApp.

Second, ProPublica should have retracted its story since the central premise of the story was false. As Mike Masnick summarized the errors in the story for TechDirt,

Alec Muffett does a nice job dismantling the argument. As he notes, it’s really bad when journalists try to redefine end-to-end encryption to mean something it is not. It does not mean that recipients of messages cannot forward them or cannot share them. And, in fact, pretending that’s true, or insisting that forwarding messages and reporting them is somehow an attack on privacy is dangerous. It actually undermines encryption by setting up false and dangerous expectations about what it actually entails.

. . .

But, really, this gets back to a larger point that I keep trying to make with regards to reporting on “privacy” violations. People differ (greatly!) on what they think a privacy violation really entails, and because of that, we get very silly demands — often from the media and politicians — about “protecting privacy” when many of those demands would do tremendous harm to other important ideas — such as harming competition or harming free speech.

And this is especially troubling when perfectly reasonable (and in fact, quite good) systems like WhatsApp’s “report” feature are portrayed incorrectly as “undermining privacy” when what it’s actually trying to do is help deal with the other issue that the media keeps attacking WhatsApp for: enabling people to abuse these tools to spread hatred, disinformation, or other dangerous content.

The Eternal Mystery: Why Did They Choose 256?

In early 2016, WhatsApp increased the maximum number of people who could be in a group chat from its original limit of 100 all the way up to 256. Online tech journal iPhoneinCanada.ca tasked one Nick Salerni with rewriting the WhatsApp press release about this increase.

According to Salerni’s bio on the article he wrote about this, he is “a software engineer with a passion for creating and innovation using technology.” Maybe that’s true, but I’m not sure you’d want to rely on Salerni for anything mission-critical after reading his opening couple of paragraphs (emphasis added),

The maximum number of people you can have in a WhatsApp group chat has increased to 256, rather than 100 as it was before.

It’s not clear why WhatsApp settled on the oddly specific number, but it’ll be good news for those users for whom 100 just wasn’t big enough. Most people use WhatsApp to talk to individuals and maybe a small group of friends, but for publishers, companies and online communities, the larger groups could come in useful.

Oh. My. God.

If you are working on a master’s in computer science and you don’t understand why WhatsApp settled on 256 maximum users, you might want to rethink your life choices.
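For anyone still wondering: 256 is 2^8, the number of distinct values a single unsigned byte can hold. If a group-member count or index is stored in one byte anywhere in the protocol, 256 is the largest group you can address without widening that field. Whether WhatsApp’s wire format actually does this is my assumption; the point is simply that 256 is about the least “oddly specific” number a programmer could pick.

```python
# 256 is 2**8: the number of distinct values an unsigned 8-bit byte
# can represent (0 through 255).
assert 2 ** 8 == 256
assert len(range(0x00, 0xFF + 1)) == 256

# Hypothetical illustration: if a protocol stores a group-member index
# in a single byte, the largest group it can address is exactly 256.
MAX_MEMBERS_ONE_BYTE_INDEX = 2 ** 8
print(MAX_MEMBERS_ONE_BYTE_INDEX)  # 256
```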