How Long Would It Take To Break Enigma Today?

This is a question that occasionally comes up on discussion forums: how long would it take to break Enigma’s encryption with tools available today?

Back in 2017, the cloud server provider DigitalOcean participated in a publicity stunt with Enigma Pattern in which an “artificial intelligence” program cracked an Enigma-encoded message in 13 minutes.

Enigma Pattern, a DigitalOcean client, used a range of modern machine learning and artificial intelligence techniques and methodologies to break the Enigma code in just 13 minutes and for a cost of only £10.

The team, led by Lukasz Kuncewicz, taught the artificial intelligence system to recognise the German language by feeding it Grimm’s fairy tales, and after long hours contemplating them, it grew more and more confident in its classifications.

They then recreated the most sophisticated version of Enigma (four-rotor navy type, one pair of plugs), which has 15,354,393,600 password variants, in the programming language Python. Just like the bomba and bombe the Polish and British had used, they set it up to test all possible combinations of the password – the only difference being they didn’t limit the number of passwords.
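As a back-of-the-envelope check, the quoted figure decomposes neatly. The breakdown below is my own sketch, not Enigma Pattern’s published arithmetic: the rotor-order and starting-position factors are standard for a naval four-rotor machine with rotors drawn from a set of eight, while the residual factor of 100 is an assumption chosen to match the article’s total (a full one-cable plugboard would contribute 325, so the team evidently restricted the settings in some way).

```python
# Rough keyspace arithmetic for the 15,354,393,600 figure quoted above.
# Assumptions (mine, not Enigma Pattern's published breakdown):
#   - three rotors chosen in order from a set of eight: 8 * 7 * 6
#   - four rotor starting positions, 26 letters each: 26 ** 4
#   - a residual factor of 100 covering whatever restricted
#     plugboard/ring settings the team assumed
rotor_orders = 8 * 7 * 6        # 336 possible rotor arrangements
start_positions = 26 ** 4       # 456,976 starting positions
residual = 100                  # assumed; makes the total match the article

total = rotor_orders * start_positions * residual
print(f"{total:,}")
```

Running this prints 15,354,393,600, i.e. the product of 336 rotor orders and 456,976 starting positions, scaled by the assumed residual factor.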

Computerphile has a video exploring how one might go about cracking Enigma in 2021 without using an AI.

What Happens When Governments Serve a Warrant on Signal?

Signal posted a summary of what happened when it received a search warrant from Santa Clara County requesting data on one of its users.

Because everything in Signal is end-to-end encrypted by default, the broad set of personal information that is typically easy to retrieve in other apps simply doesn’t exist on Signal’s servers. Once again, this request sought a wide variety of information we don’t have, including the user’s name, address, correspondence, contacts, groups, call records.

As usual, we couldn’t provide any of that. It’s impossible to turn over data that we never had access to in the first place. Signal doesn’t have access to your messages; your chat list; your groups; your contacts; your stickers; your profile name or avatar; or even the GIFs you search for. As a result, our response to the subpoena will look familiar. It’s the same set of “Account and Subscriber Information” that we can provide: Unix timestamps for when each account was created and the date that each account last connected to the Signal service.

That’s it.

You Probably Need a VPN

Vice recently ran an article with the attention-grabbing headline, You Probably Don’t Need a VPN. The main problem with the article is that it confusingly conflates several separate issues.

The objections to using a VPN boil down to:

  1. It doesn’t really matter if your Internet access provider (ISP, etc.) sees what sites you are connecting to, because the actual connections themselves are encrypted. These days, all ISPs can collect is metadata about who you connect to and when.

    The objection that the only thing Internet access providers can collect is essentially browser history metadata seems absurd given how much we know about the value of that metadata. Many ISPs turn around and sell that metadata about their customers precisely because it has value.

    I have little trust or faith in the Internet access providers that I use in the United States. There are essentially zero legal protections for consumers at the moment governing how ISPs can use and sell that data. Even if there were, these providers themselves typically employ bottom-of-the-barrel security practices (looking at you, Breach-Mobile), and such data will likely be stolen if not sold at some point.

  2. There are a lot of lousy VPN companies, many of which represent a potentially bigger data risk than your local Internet access provider.

There are a lot of lousy companies, period. The one cool trick that changes everything here is not to pick a terrible VPN company.

Pretty much the only VPN I recommend to ordinary people these days is ProtonVPN. Their basic $4/month plan will likely meet most people’s needs. Their VPN client is well-designed, and I trust their no-logs policy.

The other VPN I recommend is AirVPN, but not for casual users. In my opinion, if you want to do a lot of high-speed torrenting, AirVPN is the best option out there. Let’s just say in the last ten years, I’ve yet to receive a DMCA notice and leave it at that.

Does Facebook Break WhatsApp’s End-To-End Encryption?

Several years ago, I got everyone in my family to switch to using WhatsApp. They liked it because it was a messaging platform that was easy to use and had all the features they wanted, while I liked it because of the end-to-end encryption that was missing from the assortment of solutions that we had been using.

I would rather use Signal, but WhatsApp is a nice compromise between security and usability for my family’s use case. Still, I am always concerned when I read stories like ProPublica’s recent investigation How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users.

Fortunately, the story in this case was largely garbage, and ProPublica should be ashamed for running this scaremongering article.

Here is how Facebook undermines privacy protections in WhatsApp: it has a system that allows users to report abusive messages, which it then investigates. When a user reports an abusive message to WhatsApp, the content of that message and recent messages with the allegedly abusive sender are sent to Facebook as part of the abuse report.

That’s it. Facebook doesn’t break the end-to-end encryption or use other shady methods; it simply has an abuse reporting system that allows users to share the content of abusive messages.

But ProPublica chose to characterize this abuse reporting system this way,

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp–the largest messaging app in the world, with two billion users–far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways.

Unfortunately, ProPublica’s story was widely interpreted to mean that Facebook regularly compromised WhatsApp’s end-to-end encryption, which is not true.

Eventually, ProPublica faced such a backlash that it was forced to revise its story and add the following disclaimer,

Clarification, Sept. 8, 2021: A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.

Frankly, that is unacceptable. First, the story already did enormous damage by falsely undermining confidence in WhatsApp’s end-to-end encryption. Unfortunately, when people see these sorts of stories, they often switch to less secure messaging options. It is bizarre, for example, how many people I’ve seen swear off WhatsApp in favor of Telegram, which is significantly less secure than WhatsApp.

Second, ProPublica should have retracted its story since the central premise of the story was false. As Mike Masnick summarized the errors in the story for TechDirt,

Alec Muffett does a nice job dismantling the argument. As he notes, it’s really bad when journalists try to redefine end-to-end encryption to mean something it is not. It does not mean that recipients of messages cannot forward them or cannot share them. And, in fact, pretending that’s true, or insisting that forwarding messages and reporting them is somehow an attack on privacy is dangerous. It actually undermines encryption by setting up false and dangerous expectations about what it actually entails.

. . .

But, really, this gets back to a larger point that I keep trying to make with regards to reporting on “privacy” violations. People differ (greatly!) on what they think a privacy violation really entails, and because of that, we get very silly demands — often from the media and politicians — about “protecting privacy” when many of those demands would do tremendous harm to other important ideas — such as harming competition or harming free speech.

And this is especially troubling when perfectly reasonable (and in fact, quite good) systems like WhatsApp’s “report” feature are portrayed incorrectly as “undermining privacy” when what it’s actually trying to do is help deal with the other issue that the media keeps attacking WhatsApp for: enabling people to abuse these tools to spread hatred, disinformation, or other dangerous content.