Cloudflare Wants to Replace CAPTCHAs with FIDO Keys

Cloudflare is testing a system that lets users tap a FIDO hardware key instead of solving a CAPTCHA.

From a user perspective, a Cryptographic Attestation of Personhood works as follows:

1. The user accesses a website protected by Cryptographic Attestation of Personhood, such as cloudflarechallenge.com.

2. Cloudflare serves a challenge.

3. The user clicks “I am human (beta)” and is prompted for a security device.

4. The user decides to use a hardware security key.

5. The user plugs the device into their computer or taps it against their phone for a wireless signature (over NFC).

6. A cryptographic attestation is sent to Cloudflare, which lets the user through once the user-presence test is verified.
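
Under the hood, this flow is a WebAuthn ceremony. Here is a minimal browser-side sketch of what a challenge page might run; the parameter values are illustrative stand-ins, not Cloudflare’s actual implementation.

```typescript
// Hypothetical sketch of the WebAuthn call behind steps 2-6. All values
// here (rp name, user handle, algorithm list) are illustrative only.
async function attestPersonhood(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // In reality the challenge comes from the server (step 2), not the client.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "cloudflarechallenge.com" },
      user: {
        // An ephemeral, non-identifying user handle.
        id: crypto.getRandomValues(new Uint8Array(16)),
        name: "anonymous",
        displayName: "anonymous",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      // "direct" asks the authenticator for its batch-level attestation,
      // which is what the server verifies in step 6.
      attestation: "direct",
      // Only user presence (a touch) is needed, not a PIN or biometric.
      authenticatorSelection: { userVerification: "discouraged" },
    },
  });
}
```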

Completing this flow takes five seconds. More importantly, this challenge protects users’ privacy, since the attestation is not uniquely linked to the user’s device. All device manufacturers trusted by Cloudflare are part of the FIDO Alliance, and each hardware key shares its attestation identifier with every other key manufactured in the same batch (see Universal 2nd Factor Overview, Section 8). From Cloudflare’s perspective, your key looks like all the other keys in the batch.

Cloudflare says it is primarily interested in reducing the amount of time users spend on CAPTCHAs, which it estimates currently take up 500 years of user time every day.

CAPTCHAs are certainly frustrating, and anything that can be done to replace them while still mitigating brute-force and DDoS attacks is great. But it would also be great to see FIDO keys become more accepted and normalized across the Internet.

Bellingcat: US Soldiers Inadvertently Leaked Nuclear Security Secrets via Flashcard Apps

Yikes.

For US soldiers tasked with the custody of nuclear weapons in Europe, the stakes are high. Security protocols are lengthy, detailed and need to be known by heart. To simplify this process, some service members have been using publicly visible flashcard learning apps — inadvertently revealing a multitude of sensitive security protocols about US nuclear weapons and the bases at which they are stored.

. . .

However, the flashcards studied by soldiers tasked with guarding these devices reveal not just the bases, but even identify the exact shelters with “hot” vaults that likely contain nuclear weapons.

They also detail intricate security details and protocols such as the positions of cameras, the frequency of patrols around the vaults, secret duress words that signal when a guard is being threatened and the unique identifiers that a restricted area badge needs to have.

The entire article is well worth a read, especially for the sheer amount of information Bellingcat uncovered, including locations of cameras and backup generators at specific sites, detailed information on equipment carried on bases, and schedules for checking aircraft shelters containing nuclear weapons vaults.

This information was publicly searchable because most of the flashcard/quizzing tools the soldiers used make content public by default. It is much like the way developers inadvertently leak credentials on GitHub, apparently unaware of, or misunderstanding, the implications of hosting them in public repositories.

One change that would help a lot: online applications defaulting to private and requiring users to enable public access, rather than the current approach of defaulting to public and requiring users to intervene to make their content private.

For example, although GitHub has been the source of numerous credential leaks, all new personal repositories default to “Public.” The user has to choose the “Private” option manually. This practically guarantees a steady stream of leaks on sites such as GitHub.
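
Nothing stops individuals from flipping that default in their own tooling, though. Here is a hedged sketch that wraps GitHub’s repository-creation endpoint (POST /user/repos) so that private is the default; the token handling is a placeholder.

```typescript
// Sketch: create a personal repository that is private unless the caller
// explicitly opts into public. GITHUB_TOKEN is a placeholder for a real
// personal access token with "repo" scope.
async function createRepo(name: string, makePublic = false): Promise<void> {
  const res = await fetch("https://api.github.com/user/repos", {
    method: "POST",
    headers: {
      Authorization: `token ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github.v3+json",
      "Content-Type": "application/json",
    },
    // Private by default; the caller has to deliberately flip the switch.
    body: JSON.stringify({ name, private: !makePublic }),
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
}
```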

GitHub did make a change in July 2020 so that all repositories created by users accessing GitHub via an organizational SSO service default to private. So GitHub clearly realizes that defaulting to public is a problem. Yet it decided to stick with that behavior for personal repositories, even though a huge segment of GitHub-related credential leaks comes from individuals using personal repositories.

That should be unacceptable given the well-known security and privacy problems with the practice.

Let’s Encrypt Comes Up With a Solution to a Bizarre Problem

The problem itself is fairly straightforward. Let’s Encrypt launched in 2016, and while it waited for its root certificate to be approved and added to browsers and OSes, it reached an agreement with the existing certificate authority IdenTrust to cross-sign its SSL certs. This meant that as long as IdenTrust’s widely deployed root certificate was on a device, that device would accept Let’s Encrypt certs as valid.

But that IdenTrust root certificate expires in September 2021, and Let’s Encrypt will transition to using its own widely deployed root certificate going forward.

Except on one operating system: Android.

Let’s Encrypt was added to Android’s certificate authority store in Android 7.1.1, released in December 2016. So devices running 7.1.1 or newer will have no problems at all when the IdenTrust root certificate expires: Let’s Encrypt’s root cert is already included in the Android OS, and things will be fine.

The problem is that almost 34 percent of Android devices are running a version older than 7.1.1. That translates to about 845 million devices (implying roughly 2.5 billion active Android devices overall) still running an OS that is more than four years old.

Let’s Encrypt found a workaround, but it’s crazy that 845 million actively used Android devices run an OS that hasn’t been updated in more than four years, and that likely can’t receive updates even if their owners wanted them.

Ironically, one of the bug fixes rolled out in 7.1.1 was an update to Android’s curl/libcurl libraries, which had bugs that could allow a malicious actor with a forged certificate to launch a remote code execution attack.

Hell, Let’s Encrypt’s workaround relies on the fact that Android ignores a crucial security setting: even though a root certificate like IdenTrust’s has an expiration date, Android does not enforce it for trust anchors. So IdenTrust has agreed to extend its cross-signing of Let’s Encrypt certs for three years.

IdenTrust has agreed to issue a 3-year cross-sign for our ISRG Root X1 from their DST Root CA X3. The new cross-sign will be somewhat novel because it extends beyond the expiration of DST Root CA X3. This solution works because Android intentionally does not enforce the expiration dates of certificates used as trust anchors. ISRG and IdenTrust reached out to our auditors and root programs to review this plan and ensure there weren’t any compliance concerns.
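
If you want to see which chain a server actually hands out, and when each certificate in it expires, you can walk the presented chain yourself. A rough Node.js sketch follows; the hostname is just an example.

```typescript
import * as tls from "node:tls";

// Rough sketch: connect to a host and print each certificate in the
// presented chain along with its expiry date. Hostname is illustrative.
const host = "letsencrypt.org";
const socket = tls.connect(443, host, { servername: host }, () => {
  let cert = socket.getPeerCertificate(true);
  const seen = new Set<string>();
  // At a self-signed root, issuerCertificate points back to the same
  // certificate, so stop once a fingerprint repeats.
  while (cert && cert.fingerprint256 && !seen.has(cert.fingerprint256)) {
    seen.add(cert.fingerprint256);
    console.log(`${cert.subject.CN}: expires ${cert.valid_to}`);
    cert = cert.issuerCertificate;
  }
  socket.end();
});
```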

Notion Is Unusable and Unsafe

I am a sucker for new note-taking and productivity applications, so a couple of years ago, I started using Notion. I still keep logging into it for a very specific purpose, but in general, Notion is largely unusable.

This anonymous post outlining Notion’s user experience “disaster” does a good job of cataloging its many failings. If you read any Notion community groups for any length of time, it quickly becomes clear that Notion is the tool of choice for the sorts of folks who enjoy tinkering with their productivity systems rather than actually getting things done.

As if the user interface isn’t horrific enough, it has become apparent over the past couple of years that the developers behind Notion either (a) don’t care or (b) don’t know how to make their application secure.

As this Reddit post points out, simply inviting a guest to edit a page grants that guest access to a large amount of private information they do not and should not have. Stuff like this crops up all the time. It is clear there are almost no privacy or security protections built into Notion.

I still use Notion, but largely because I’ve built an extensive inventory of my action figure collection within Notion. That’s the only sort of data I would trust to Notion as it is today.

An Ingenious Phishing Technique

Craig Hays wrote a fascinating article describing a phishing campaign his company had to deal with, one that propagated itself in an ingenious way.

As we dug deeper and compared sign-in timestamps with email timestamps, it became clear what was happening. The phishing emails were being sent as replies to genuine emails. Emails exchanged between our people and our suppliers, our customers, and even internally between colleagues.

A typical phishing email comes from an email address you’ve never seen before. Granted, it might be similar to a real address you’d expect to see such as rnicrosoft.com instead of microsoft.com, but it’s rare for an address you trust to send you anything suspicious. When someone you know does send you something suspicious it’s usually rather obvious. When it happens we contact them directly to let them know there’s a problem. ‘Looks like you’ve been hacked, mate.’ We don’t fall for the scam.

In this attack, however, all of the phishing links were sent as replies to emails in the compromised account’s mailbox. This gave every email an inherited sense of trust. ‘You asked for this thing, here it is: link to phishing page’. When I realised what was happening, I was in awe. Whether done by deliberate design or not, the outcome was incredible. The conversion rates on these emails would make even the greatest of email marketers envious!
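
The way the campaign was spotted, correlating sign-in timestamps with outbound email timestamps, is worth filing away as a detection technique. A purely hypothetical sketch of that correlation is below; the record shapes and the 30-minute window are invented for illustration.

```typescript
// Hypothetical record shapes; real sign-in and mail logs will differ.
interface SignIn { account: string; time: Date; suspicious: boolean }
interface SentMail { account: string; time: Date; isReply: boolean }

// Flag replies sent shortly after a suspicious sign-in to the same account.
function flagSuspiciousReplies(signIns: SignIn[], mail: SentMail[]): SentMail[] {
  const windowMs = 30 * 60 * 1000; // arbitrary illustrative threshold
  return mail.filter((m) =>
    m.isReply &&
    signIns.some((s) =>
      s.suspicious &&
      s.account === m.account &&
      m.time.getTime() - s.time.getTime() > 0 &&
      m.time.getTime() - s.time.getTime() < windowMs
    )
  );
}
```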

No, Do Not Use Unroll.Me

It was kind of odd seeing (or hearing) the security podcast Security In Five recommend Unroll.Me, a service that helps users easily unsubscribe from subscription-based emails.

It’s a great idea, but Unroll.Me’s business model is essentially selling data about its users.

For years they did this and lied about it, claiming that they didn’t sell such data. In late 2019, they reached a settlement with the US Federal Trade Commission.

The FTC alleged that Unrollme Inc., which helps users unsubscribe from unwanted emails or consolidate their email subscriptions, falsely told consumers that it would not “touch” their personal emails in order to persuade consumers to provide access to their email accounts.

In fact, Unrollme shared users’ email receipts from completed transactions with Unrollme’s parent company, Slice Technologies, Inc. E-receipts can include, among other things, the user’s name, billing and shipping addresses, and information about products or services purchased by the consumer. Slice uses anonymous purchase information from Unrollme users’ e-receipts in the market research analytics products it sells.

As part of the settlement with the Commission, Unrollme is prohibited from misrepresenting the extent to which it collects, uses, stores, or shares information from consumers. It must also notify those consumers who signed up for Unrollme after viewing one of the allegedly deceptive statements about how it collects and shares information from e-receipts. The order also requires Unrollme to delete, from both its own systems and Slice’s systems, stored e-receipts previously collected from those consumers, unless it obtains their affirmative, express consent to maintain the e-receipts.

So today, Unroll.Me is upfront about its data usage, but the way it collects and uses data is concerning. According to its How We Use Your Data page (you know, the one the FTC had to force them to create),

Unroll.Me is owned by Rakuten Intelligence, an e-commerce measurement business that provides companies with insights into industry trends, corporate performance, and the competitive landscape. Unless otherwise restricted by your email provider, when you sign up for Unroll.Me, we share your transactional emails with Rakuten Intelligence, who helps us de-identify and combine your information with that of millions of users, including Rakuten Intelligence’s shopping panel.

Honestly, I get why a lot of people would blow that off and figure “who cares,” but I am surprised that someone in computer security would give a company like this access to their data.