KnowBe4 produces a number of cybersecurity products, including borderline unwatchable training videos and, the subject of this post, phishing tests.
A phishing test (also commonly referred to as simulated phishing) is when an organization sends its employees emails that look like phishing attempts to see whether staff will click the links. Those who do click are typically directed to even more pointless training.
As I’ve said before, I think phishing tests are the current version of “change your password every three months” requirements. Like the constant password changes of yesterday, phishing tests are usually done with good intent but are, at best, security theater and, at worst, undermine long-term cybersecurity efforts.
Regardless, most modern email systems will tend to flag phishing tests as spam/phishing and either quarantine them or deliver them to junk mailboxes. To ensure the phishing test is delivered to employee inboxes, organizations have to whitelist the emails using one of a number of possible methods.
KnowBe4 helpfully publishes its whitelisting guide on its website.
KnowBe4’s documentation explains how organizations can whitelist their phishing test emails by IP address, hostnames, or headers. And, of course, the same information can be used to filter any phishing test emails into the junk mail or any other folder.
For example, as of the writing of this article, KnowBe4’s documentation indicates it uses 23.21.109.197 and 23.21.109.212 as IP addresses to send phishing test emails to its US, Canadian, UK, and German customers. Its documentation also mentions that it may use 147.160.167.0/26 in the future to send phishing test emails. It also uses the hostname psm.knowbe4.com.
KnowBe4 also uses a default header of X-PHISHTEST, but unlike the IP address and the hostname, individual organizations can create a custom header. As such, filtering against that header is less reliable than the other two indicators.
From there, it is just a matter of creating a mail filter that routes any email whose headers contain those IP addresses/ranges or that hostname into whatever folder you want.
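As a rough sketch of what such a filter might check, here is a small Python predicate built on the indicators quoted above. The indicator values come from KnowBe4's documentation as cited in this post; the function itself, and any glue code needed to hook it into your particular mail system, are assumptions for illustration:

```python
import ipaddress
import re
from email import message_from_string

# Indicators from KnowBe4's whitelisting guide as quoted above; these may change.
KNOWBE4_IPS = {"23.21.109.197", "23.21.109.212"}
KNOWBE4_RANGE = ipaddress.ip_network("147.160.167.0/26")  # listed for future use
KNOWBE4_HOSTNAME = "psm.knowbe4.com"
DEFAULT_HEADER = "X-PHISHTEST"  # default only; organizations can customize this

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def is_phishing_test(raw_message: str) -> bool:
    """Return True if the message matches KnowBe4's published indicators."""
    msg = message_from_string(raw_message)

    # Least reliable indicator: the default header (orgs can rename it).
    if msg.get(DEFAULT_HEADER) is not None:
        return True

    # Check every Received header for the sending hostname or IP addresses.
    for received in msg.get_all("Received", []):
        if KNOWBE4_HOSTNAME in received:
            return True
        for ip_str in IP_RE.findall(received):
            try:
                ip = ipaddress.ip_address(ip_str)
            except ValueError:
                continue
            if ip_str in KNOWBE4_IPS or ip in KNOWBE4_RANGE:
                return True
    return False
```

In practice you would express the same logic in whatever filtering language your mail system supports (Sieve rules, Exchange transport rules, and so on); the point is that all three indicators are visible in ordinary message headers.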
KnowBe4’s documentation page is updated almost daily, but this appears to be mainly an effort to signal that the information is current, not a sign that the IP/hostname details are constantly changing; the only thing that seems to actually change daily is the documentation’s date. That makes sense: every time KnowBe4 changes its IP addresses or hostname, every organization using it for phishing tests also has to update its email whitelisting configuration, so these details likely change infrequently.
Still, the last part of this process would be to set up a webpage monitor to report when there are any changes to the whitelisting guide. I prefer to self-host something like changedetection.io, which gives a diff showing what changed on a page.
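If you would rather roll your own monitor than run changedetection.io, the same idea (fetch, cache, diff) can be sketched in a few lines of Python. The URL below is a placeholder, not the guide's actual address:

```python
import difflib
import urllib.request
from pathlib import Path

# Placeholder URL: substitute the real address of KnowBe4's whitelisting guide.
GUIDE_URL = "https://support.knowbe4.com/whitelisting-guide"
CACHE_FILE = Path("whitelisting_guide.cache")

def diff_pages(previous: str, current: str) -> list[str]:
    """Return a unified diff of two page snapshots (empty if unchanged)."""
    return list(difflib.unified_diff(
        previous.splitlines(), current.splitlines(),
        fromfile="previous", tofile="current", lineterm="",
    ))

def check_guide(url: str = GUIDE_URL, cache: Path = CACHE_FILE) -> list[str]:
    """Fetch the page, diff it against the cached copy, and update the cache."""
    with urllib.request.urlopen(url) as resp:
        current = resp.read().decode("utf-8", errors="replace")
    previous = cache.read_text() if cache.exists() else ""
    cache.write_text(current)
    return diff_pages(previous, current)
```

Since the page's date changes almost daily, a real monitor would want to strip that line (or the surrounding boilerplate) before diffing to avoid constant false positives; filtering out that kind of noise is exactly what tools like changedetection.io handle for you.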
Lawyer and YouTuber Leonard French does an excellent job here of walking through the privacy and security risks of Microsoft Copilot being turned on by default in Microsoft 365 applications.
Microsoft has largely dismissed security concerns by claiming that a) it doesn’t use the content of users’ documents to train its AI models and b) it only acts on documents that the user has explicit authorization to access.
But is this good enough? Probably not.
French references a Bluesky conversation between lawyer Kathryn Tewson and Ben Schorr, a senior project content manager at Microsoft, and nicely walks through the security concerns that Tewson and others have that Schorr completely dismisses.
The concern is pretty straightforward: suppose I am writing a Word document. Will Copilot restrict itself to using only the context of the document I am currently writing, or will it also rely on other documents I can access when generating content?
It is not difficult to think of scenarios where the latter is a significant problem, but Microsoft seems not to have even considered this.
French references the strict requirements lawyers have not to mingle data or information across cases, which could be a problem if Copilot looks through all files a user can access when generating a response to a prompt.
But you can imagine other obvious scenarios where this would be a no-no. For example, I might have a OneDrive folder somewhere that contains a copy of one of my employees’ most recent performance reviews and a copy of a previous disciplinary letter.
If I am writing a new document that references this employee, I sure as hell do not want Copilot potentially mixing information from either of those documents into my new document. I would hope that I would catch this if it did, but get enough users creating enough documents, and this sort of boundary crossing is inevitably going to occur in the wild (likely with potential legal or other consequences).
Moreover, Microsoft appears to be intentionally making the process of turning Copilot off confusing and difficult. Its current documentation includes instructions for turning off Copilot in specific applications and then adds that doing so in one application will turn it off elsewhere. However, it is difficult to trust Microsoft here, to say the least, given its history of “accidentally” turning features back on that users explicitly disabled.
French’s entire 18-minute video is well worth watching, and it is hard to draw any conclusion other than that Microsoft’s AI offerings are unsafe to use under pretty much any circumstance, given that the company wants them tightly integrated with an operating system and applications that will likely be installed on at least a billion devices in the coming few years.