Nick Kralevich, the Android Platform Security Engineering Lead, gave a presentation on hardening Android at Black Hat over the summer. While I have a lot of concerns about Android security, I did like Kralevich’s statement early in his talk about his team’s efforts to make things safe by design.
“When you make the safest thing to do the easiest thing to do, people will do it.”
I ran across this Hacker News thread recently about Facebook’s practice of accepting four different versions of a user’s password. According to an email from Facebook, their system will accept:
- The original password – password
- The original password typed as if caps lock was enabled – PASSWORD
- The original password with the first character automatically capitalized, which is still a “feature” on some mobile phones – Password
- The original password with an extra character added at the end – password2
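Mechanically, a scheme like this doesn’t require storing anything extra: the server can derive each candidate correction from the submitted password and check it against the single stored hash. Here’s a minimal sketch of that idea (Facebook’s actual implementation isn’t public; PBKDF2 and the function names here are my own assumptions):

```python
import hashlib
import hmac

def _corrections(typed):
    """Yield candidate corrections of the typed password, per the list above."""
    yield typed                             # the exact password as typed
    yield typed.swapcase()                  # undo an accidental caps lock
    if typed and typed[0].isupper():
        yield typed[0].lower() + typed[1:]  # undo mobile auto-capitalization
    if len(typed) > 1:
        yield typed[:-1]                    # drop one stray trailing character

def check_password(typed, stored_hash, salt):
    """Accept the login if any candidate correction hashes to the stored value."""
    for candidate in _corrections(typed):
        h = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
        if hmac.compare_digest(h, stored_hash):
            return True
    return False
```

Note that only the submitted password is transformed; the stored hash never changes, which is part of why the paper below can argue such schemes cost essentially nothing in security.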
Researchers at Cornell, MIT and Dropbox published a paper in 2016 about this practice, cleverly titled pASSWORD tYPOS and How to Correct Them Securely. According to the abstract,
We provide the first treatment of typo-tolerant password authentication for arbitrary user-selected passwords. Such a system, rather than simply rejecting a login attempt with an incorrect password, tries to correct common typographical errors on behalf of the user. Limited forms of typo-tolerance have been used in some industry settings, but to date there has been no analysis of the utility and security of such schemes.
We quantify the kinds and rates of typos made by users via studies conducted on Amazon Mechanical Turk and via instrumentation of the production login infrastructure at Dropbox. The instrumentation at Dropbox did not record user passwords or otherwise change authentication policy, but recorded only the frequency of observed typos. Our experiments reveal that almost 10% of failed login attempts fail due to a handful of simple, easily correctable typos, such as capitalization errors. We show that correcting just a few of these typos would reduce login delays for a significant fraction of users as well as enable an additional 3% of users to achieve successful login.
We introduce a framework for reasoning about typo-tolerance, and investigate the seemingly inherent tension here between security and usability of passwords. We use our framework to show that there exist typo-tolerant authentication schemes that can get corrections for “free”: we prove they are as secure as schemes that always reject mistyped passwords. Building off this theory, we detail a variety of practical strategies for securely implementing typo-tolerance.
Mateusz Jurczyk of Google Project Zero calls out Microsoft for implementing some security patches in Windows 10, but not Windows 7 and 8.1.
The aim of this blog post was to illustrate that security-relevant differences in concurrently supported branches of a single product may be used by malicious actors to pinpoint significant weaknesses or just regular bugs in the more dated versions of said software. Not only does it leave some customers exposed to attacks, but it also visibly reveals what the attack vectors are, which works directly against user security. This is especially true for bug classes with obvious fixes, such as kernel memory disclosure and the added memset calls. The “binary diffing” process discussed in this post was in fact pseudocode-level diffing that didn’t require much low-level expertise or knowledge of the operating system internals. It could have been easily used by non-advanced attackers to identify the three mentioned vulnerabilities (CVE-2017-8680, CVE-2017-8684, CVE-2017-8685) with very little effort. We hope that these were some of the very few instances of such “low hanging fruit” being accessible to researchers through diffing, and we encourage software vendors to make sure of it by applying security improvements consistently across all supported versions of their software.
On the one hand, he’s correct. Microsoft is leaving users of Windows 7/8.1 exposed to potential security risks by not patching those OSes, and it should be “encourage[d] . . . to make sure of it by applying security improvements consistently across all supported versions of their software.”
On the other hand, it’s a bit rich of Google to be lecturing Microsoft for not patching older OSes. Take Windows 7. That OS was released on July 22, 2009 and mainstream support for it ended on January 13, 2015. Microsoft is committed to providing extended support, however, through January 14, 2020.
So Google is unhappy that, 8 years after releasing Windows 7, Microsoft failed to implement a security patch for a known vulnerability. Fair enough.
On July 9, 2012, Google released Android 4.1 Jelly Bean. A major vulnerability in Android 4.1 was discovered in early 2015. In January 2015, Google publicly announced that it would not develop a security patch for this bug for Android 4.1. It did graciously allow that if someone else wanted to develop a security patch for the two-and-a-half-year-old OS, it might be willing to incorporate that fix into Android 4.1.
Unlike Microsoft, Google can’t even be bothered to publish formal end-of-life dates for its software. The only policy it has in place is related to its own Nexus and Pixel devices, and states that such devices will receive “security patches for at least 3 years from when the device first became available on the Google Store, or at least 18 months from when the Google Store last sold the device.”
Compared to Google, Microsoft is a paragon of virtue when it comes to supporting its customers on aging OSes.
On September 22, 2017, Nintendo released two-factor authentication for Nintendo accounts. The system uses the same TOTP scheme as Google Authenticator (so it also works with the LastPass Authenticator, which is what I generally use).
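That interoperability is no accident: the rotating six-digit codes are standard TOTP (RFC 6238), so any compatible authenticator app that’s been given the shared base32 secret will produce the same codes. A minimal sketch of how a code is computed (the function name is mine):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a shared base32 secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: at T=59s the 8-digit SHA-1 code is 94287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # -> 94287082
```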
So at this point, my Nintendo account is more secure than my bank account. My bank doesn’t offer any form of routine 2FA, despite me constantly harassing them about adding it.
And really, even 2FA isn’t good enough when it comes to banking. There’s no reason banks and credit unions shouldn’t offer their customers the option of using U2F.
Let’s Encrypt announced today that they plan to offer wildcard certificates beginning in January 2018.
A wildcard certificate can secure any number of subdomains of a base domain (e.g. *.example.com). This allows administrators to use a single certificate and key pair for a domain and all of its subdomains, which can make HTTPS deployment significantly easier.
Wildcard certificates will be offered free of charge via our upcoming ACME v2 API endpoint. We will initially only support base domain validation via DNS for wildcard certificates, but may explore additional validation options over time. We encourage people to ask any questions they might have about wildcard certificate support on our community forums.
That is excellent news. Wildcard certificates are fairly expensive. I’m paying $94/year for a Comodo PositiveSSL wildcard cert through a reseller. If you go directly to Comodo, they want $249/year, which puts it well out of reach for a lot of people.
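For the curious, the “base domain validation via DNS” mentioned above refers to ACME’s dns-01 challenge (RFC 8555): you prove control of the domain by publishing a TXT record at _acme-challenge.example.com whose value is derived from the server’s challenge token and your ACME account key’s thumbprint. A sketch of that derivation (the inputs here are placeholders):

```python
import base64
import hashlib

def dns01_txt_value(token, jwk_thumbprint):
    """Value for the _acme-challenge TXT record (ACME dns-01, RFC 8555)."""
    key_authorization = token + "." + jwk_thumbprint
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # The TXT value is the unpadded base64url encoding of the SHA-256 digest
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Publish the result as a TXT record on the base domain’s _acme-challenge name, and the CA can validate every name under *.example.com from that single record.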
It will be interesting to see what the uptake is on this, as I assume wildcard certificates are a major profit center for certificate authorities. It would also be interesting to see an analysis of the effect Let’s Encrypt has already had on the economics of CAs.
Are those using Let’s Encrypt companies and individuals who weren’t using SSL at all beforehand, or is a significant portion of that activity from people who opted for a free alternative to a paid certificate?
I know I was at the point where I needed to buy a single domain certificate last year and opted for Let’s Encrypt because of its low, low price of nothing.