Notion Is Unusable and Unsafe

I am a sucker for new note-taking and productivity applications, so a couple of years ago, I started using Notion. I still keep logging into it for a very specific purpose, but in general, Notion is largely unusable.

This anonymous post outlining Notion’s usability experience “disaster” does a good job of cataloging all of the ways that Notion is largely unusable. If you read any Notion community groups for any length of time, it quickly becomes clear that Notion is the tool of choice for the sorts of folks who enjoy tinkering around with their productivity systems rather than actually getting things done.

As if the user interface isn’t horrific enough, it has become apparent over the past couple of years that the developers behind Notion either a) don’t care or b) don’t know how to make their application secure.

As this Reddit post points out, simply inviting a guest to edit a page grants that guest access to a large amount of private information they should not be able to see. Stuff like this crops up all the time. It is clear there are almost no privacy or security protections built into Notion.

I still use Notion, but largely because I’ve built an extensive inventory of my action figure collection within Notion. That’s the only sort of data I would trust to Notion as it is today.

An Ingenious Phishing Technique

Craig Hays wrote a fascinating article describing a phishing campaign his company had to deal with that had an ingenious method of propagating itself.

As we dug deeper and compared sign-in timestamps with email timestamps, it became clear what was happening. The phishing emails were being sent as replies to genuine emails. Emails exchanged between our people and our suppliers, our customers, and even internally between colleagues.

A typical phishing email comes from an email address you’ve never seen before. Granted, it might be similar to a real address you’d expect to see, but it’s rare for an address you trust to send you anything suspicious. When someone you know does send you something suspicious it’s usually rather obvious. When it happens we contact them directly to let them know there’s a problem. ‘Looks like you’ve been hacked, mate.’ We don’t fall for the scam.

In this attack, however, all of the phishing links were sent as replies to emails in the compromised account’s mailbox. This gave every email an inherited sense of trust. ‘You asked for this thing, here it is: link to phishing page’. When I realised what was happening, I was in awe. Whether done by deliberate design or not, the outcome was incredible. The conversion rates on these emails would make even the greatest of email marketers envious!

No, Do Not Use Unroll.Me

It was kind of odd seeing (or hearing) security podcast Security In Five recommend Unroll.Me, which is a service that helps users easily unsubscribe from subscription-based emails.

It’s a great idea, but Unroll.Me’s business model is essentially selling data about its users.

For years they did this and lied about it, claiming that they didn’t sell such data. In late 2019, they reached a settlement with the US Federal Trade Commission.

The FTC alleged that Unrollme Inc., which helps users unsubscribe from unwanted emails or consolidate their email subscriptions, falsely told consumers that it would not “touch” their personal emails in order to persuade consumers to provide access to their email accounts.

In fact, Unrollme shared users’ email receipts from completed transactions with Unrollme’s parent company, Slice Technologies, Inc. E-receipts can include, among other things, the user’s name, billing and shipping addresses, and information about products or services purchased by the consumer. Slice uses anonymous purchase information from Unrollme users’ e-receipts in the market research analytics products it sells.

As part of the settlement with the Commission, Unrollme is prohibited from misrepresenting the extent to which it collects, uses, stores, or shares information from consumers. It must also notify those consumers who signed up for Unrollme after viewing one of the allegedly deceptive statements about how it collects and shares information from e-receipts. The order also requires Unrollme to delete, from both its own systems and Slice’s systems, stored e-receipts previously collected from those consumers, unless it obtains their affirmative, express consent to maintain the e-receipts.

So today, Unroll.Me is upfront about its data usage, but the way it collects and uses data is concerning. According to its How We Use Your Data page (you know, the one the FTC had to force them to create),

Unroll.Me is owned by Rakuten Intelligence, an e-commerce measurement business that provides companies with insights into industry trends, corporate performance, and the competitive landscape. Unless otherwise restricted by your email provider, when you sign up for Unroll.Me, we share your transactional emails with Rakuten Intelligence, who helps us de-identify and combine your information with that of millions of users, including Rakuten Intelligence’s shopping panel.

Honestly, I get why a lot of people would blow that off and figure “who cares”, but I am surprised that someone in computer security would give a company like this access to their data.

NSA On Limiting Location Data Exposure

The National Security Agency released a 3-page document about the risks that mobile devices pose with their collection of location data, and how to mitigate those risks.

Mobile devices store and share device geolocation data by design. This data is essential to device communications and provides features—such as mapping applications—that users consider indispensable. Mobile devices determine location through any combination of Global Positioning System (GPS) and wireless signals (e.g., cellular, wireless (Wi-Fi), or Bluetooth (BT)). Location data can be extremely valuable and must be protected. It can reveal details about the number of users in a location, user and supply movements, daily routines (user and organizational), and can expose otherwise unknown associations between users and locations.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

Twilio’s Kelley Robinson: What If We Had TLS for Phone Numbers?

This talk will discuss the latest advancements with STIR (Secure Telephone Identity Revisited) and SHAKEN (Signature-based Handling of Asserted information using toKENs), new tech standards that use well accepted public key cryptography methods to validate caller identification. We’ll discuss the path and challenges to getting this implemented industry wide, where this tech will fall short, and what we can do to limit exposure to call spam and fraud in the meantime.

Kelley Robinson (@kelleyrobinson) works on the Account Security team at Twilio, helping developers manage and secure customer identity in their software applications. Previously she worked in a variety of infrastructure and data engineering roles at startups in San Francisco. She believes in making technical concepts, especially security, accessible and approachable for new audiences. In her spare time, Kelley is an avid home cook and greatly enjoys reorganizing her Brooklyn kitchen to accommodate completely necessary small appliance purchases.
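The core of SHAKEN is small enough to sketch: the originating carrier signs a JSON “PASSporT” token over the calling number, the called number, and an attestation level, and the terminating carrier verifies that signature before trusting the caller ID. The toy below mimics that token flow. Real PASSporTs are signed with ES256 (ECDSA P-256) certificates chained to the telephone industry’s certificate authorities; here an HMAC stands in so the sketch needs only the standard library, and the cert URL and origid values are made-up placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """JWT-style base64url encoding with padding stripped."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_passport(orig_tn: str, dest_tn: str, attest: str, key: bytes) -> str:
    # Header/payload fields mirror a SHAKEN PASSporT (RFC 8588-style claims).
    # Real tokens use ES256 signatures; the HMAC below is an illustrative
    # stand-in, and the x5u cert URL and origid are hypothetical values.
    header = {"alg": "ES256", "typ": "passport", "ppt": "shaken",
              "x5u": "https://cert.example.net/sp.pem"}
    payload = {"attest": attest,              # A, B, or C attestation level
               "orig": {"tn": orig_tn},       # calling number
               "dest": {"tn": [dest_tn]},     # called number(s)
               "iat": int(time.time()),
               "origid": "de305d54-75b4-431b-adb2-eb6b9e546014"}
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_passport(token: str, key: bytes) -> bool:
    """The terminating side recomputes the signature over header.payload."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

The point of the design is that a robocaller spoofing a number it doesn’t control can’t produce a valid signature from the number’s legitimate carrier, so the terminating carrier can flag or block the call.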

LeakyPick: IoT Audio Spy Detector

Researchers concerned about IoT devices surreptitiously transmitting audio recordings back to their manufacturer put together a Raspberry Pi-based proof-of-concept (1 MB PDF) to detect such transmissions.

Manufacturers of smart home Internet of Things (IoT) devices are increasingly adding voice assistant and audio monitoring features to a wide range of devices including smart speakers, televisions, thermostats, security systems, and doorbells. Consequently, many of these devices are equipped with microphones, raising significant privacy concerns: users may not always be aware of when audio recordings are sent to the cloud, or who may gain access to the recordings. In this paper, we present the LeakyPick architecture that enables the detection of the smart home devices that stream recorded audio to the Internet without the user’s consent. Our proof-of-concept is a LeakyPick device that is placed in a user’s smart home and periodically “probes” other devices in its environment and monitors the subsequent network traffic for statistical patterns that indicate audio transmission. Our prototype is built on a Raspberry Pi for less than USD $40 and has a measurement accuracy of 94% in detecting audio transmissions for a collection of 8 devices with voice assistant capabilities. Furthermore, we used LeakyPick to identify 89 words that an Amazon Echo Dot misinterprets as its wake-word, resulting in unexpected audio transmission. LeakyPick provides a cost effective approach for regular consumers to monitor their homes for unexpected audio transmissions to the cloud.
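The detection idea can be sketched in a few lines: record per-second outbound byte counts for a device, learn an idle baseline, and flag a sustained jump that looks like an audio upload rather than keep-alive chatter. This is my own simplified stand-in for the paper’s statistical traffic analysis, not the authors’ actual implementation; the thresholds and window sizes are arbitrary.

```python
from statistics import mean, stdev

def detect_audio_burst(byte_counts, baseline_len=10,
                       threshold_sigmas=3.0, min_burst_secs=3):
    """Return the second an outbound traffic burst began, or None.

    byte_counts: per-second outbound byte totals for one device.
    Learns an idle baseline from the first `baseline_len` seconds, then
    flags any run of at least `min_burst_secs` seconds sitting
    `threshold_sigmas` standard deviations above the baseline mean --
    roughly what a continuous audio upload looks like next to idle traffic.
    """
    baseline = byte_counts[:baseline_len]
    mu = mean(baseline)
    sigma = stdev(baseline) or 1.0  # guard against a perfectly flat baseline
    cutoff = mu + threshold_sigmas * sigma

    run = 0
    for t, n in enumerate(byte_counts[baseline_len:], start=baseline_len):
        run = run + 1 if n > cutoff else 0
        if run >= min_burst_secs:
            return t - min_burst_secs + 1
    return None
```

In LeakyPick’s setup this check runs after the device “probes” its neighbors with audio (e.g., speaking a candidate wake-word), so a burst that follows a probe is strong evidence the device just shipped a recording to the cloud.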