EFF Plans to Deprecate HTTPS Everywhere Browser Extension Because HTTPS Really Is Almost Everywhere

The Electronic Frontier Foundation announced that it plans to put its HTTPS Everywhere web extension into maintenance mode beginning in 2022 due to the dramatic rise in website encryption in the ten years since it introduced the addon.

Plus, with the release of Chrome 94, all major browsers now have a built-in HTTPS Everywhere-like feature that can be enabled.

The goal of HTTPS Everywhere was always to become redundant. That would mean we’d achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode.
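The core idea behind HTTPS Everywhere and the browsers' native HTTPS-only modes is simple: rewrite insecure http:// requests to https:// before they leave the browser. A minimal sketch of that upgrade logic (a toy illustration, not the extension's actual ruleset engine, which also handles per-site rules, fallbacks, and exceptions):

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving other schemes untouched.

    Toy illustration of the upgrade idea behind HTTPS Everywhere and
    browser HTTPS-only modes; real implementations also handle per-site
    exceptions, nonstandard ports, and fallback when HTTPS fails.
    """
    parts = urlsplit(url)
    if parts.scheme != "http":
        return url  # already https, or some other scheme entirely
    return urlunsplit(("https",) + tuple(parts)[1:])

print(upgrade_to_https("http://example.com/page"))   # upgraded to https
print(upgrade_to_https("https://example.com/page"))  # left unchanged
```

An HTTPS-only mode goes one step further than a rewrite list: it attempts the upgrade for every site and warns the user if the secure connection cannot be made.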

Indeed, the rise of ubiquitous encryption is one of the good-news stories of the last decade, driven in large part by Edward Snowden’s revelations.

Does Facebook Break WhatsApp’s End-To-End Encryption?

Several years ago, I got everyone in my family to switch to using WhatsApp. They liked it because it was a messaging platform that was easy to use and had all the features they wanted, while I liked it because of the end-to-end encryption that was missing from the assortment of solutions that we had been using.

I would rather use Signal, but WhatsApp is a nice compromise between security and usability for my family’s use case. Still, I am always concerned when I read stories like ProPublica’s recent investigation How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users.

Fortunately, the story, in this case, was largely garbage, and ProPublica should be ashamed for running this scaremongering article.

Here is how Facebook undermines privacy protections in WhatsApp: it has a system that allows users to report abusive messages, which it then investigates. When a user reports an abusive message to WhatsApp, the content of that message and recent messages with the allegedly abusive sender are sent to Facebook as part of the abuse report.

That’s it. Facebook doesn’t break the end-to-end encryption or use other shady methods; it simply has an abuse reporting system that allows users to share the content of abusive messages.
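The reason this doesn’t break end-to-end encryption is that the reporting user’s device already holds the decrypted plaintext of every message it received. A conceptual sketch (not WhatsApp’s actual code; all names here are hypothetical) of how a client can assemble an abuse report from its own local copies, with the server never decrypting anything:

```python
# Conceptual sketch of client-side abuse reporting in an E2EE chat.
# The key point: the report is built from plaintext the *recipient's*
# device already has after normal decryption, so no ciphertext is ever
# decrypted server-side and the encryption itself is untouched.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted on the recipient's device

class Client:
    def __init__(self, user: str):
        self.user = user
        self.history: list[Message] = []

    def receive(self, msg: Message) -> None:
        # In a real E2EE app, decryption with the user's keys happens here.
        self.history.append(msg)

    def report_abuse(self, msg: Message, context_size: int = 4) -> dict:
        """Build an abuse report from locally held plaintext:
        the reported message plus recent messages from the same sender."""
        recent = [m for m in self.history if m.sender == msg.sender]
        return {
            "reporter": self.user,
            "reported_sender": msg.sender,
            "reported_message": msg.plaintext,
            "context": [m.plaintext for m in recent[-context_size:]],
        }

alice = Client("alice")
alice.receive(Message("mallory", "earlier message"))
bad = Message("mallory", "abusive message")
alice.receive(bad)
report = alice.report_abuse(bad)
# The report contains plaintext only because Alice's client already had it.
```

This is the same reason forwarding a message to a friend isn’t an encryption break: the recipient can always do whatever they like with content they have legitimately decrypted.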

But ProPublica chose to characterize this abuse reporting system this way,

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp–the largest messaging app in the world, with two billion users–far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways.

Unfortunately, ProPublica’s story was widely interpreted to mean that Facebook regularly compromised WhatsApp’s end-to-end encryption, which is not true.

Eventually, ProPublica faced such a backlash that it was forced to revise its story and add the following disclaimer,

Clarification, Sept. 8, 2021: A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.

Frankly, that is unacceptable. First, the story already did enormous damage by falsely undermining confidence in WhatsApp’s end-to-end encryption. Unfortunately, when people see these sorts of stories, they often switch to less secure messaging options. It is bizarre, for example, how many people I’ve seen swear off WhatsApp in favor of Telegram, which is significantly less secure than WhatsApp.

Second, ProPublica should have retracted its story since the central premise of the story was false. As Mike Masnick summarized the errors in the story for TechDirt,

Alec Muffett does a nice job dismantling the argument. As he notes, it’s really bad when journalists try to redefine end-to-end encryption to mean something it is not. It does not mean that recipients of messages cannot forward them or cannot share them. And, in fact, pretending that’s true, or insisting that forwarding messages and reporting them is somehow an attack on privacy is dangerous. It actually undermines encryption by setting up false and dangerous expectations about what it actually entails.

. . .

But, really, this gets back to a larger point that I keep trying to make with regards to reporting on “privacy” violations. People differ (greatly!) on what they think a privacy violation really entails, and because of that, we get very silly demands — often from the media and politicians — about “protecting privacy” when many of those demands would do tremendous harm to other important ideas — such as harming competition or harming free speech.

And this is especially troubling when perfectly reasonable (and in fact, quite good) systems like WhatsApp’s “report” feature are portrayed incorrectly as “undermining privacy” when what it’s actually trying to do is help deal with the other issue that the media keeps attacking WhatsApp for: enabling people to abuse these tools to spread hatred, disinformation, or other dangerous content.