The Privacy and Security Risks of Microsoft Copilot

Lawyer and YouTuber Leonard French does an excellent job here of walking through the privacy and security risks of Microsoft Copilot being turned on by default in Microsoft 365 applications.

Microsoft has largely dismissed security concerns by claiming that a) it doesn’t use the content of users’ documents to train its AI models and b) it only acts on documents that the user has explicit authorization to access.

But is this good enough? Probably not.

French references a Bluesky conversation between lawyer Kathryn Tewson and Ben Schorr, a senior project content manager at Microsoft, and nicely walks through the security concerns raised by Tewson and others, which Schorr completely dismisses.

The concern is pretty straightforward: suppose I am writing a Word document. Will Copilot restrict itself to using only the context of the document I am currently writing, or will it also rely on other documents I can access when generating content?

It is not difficult to think of scenarios where the latter is a significant problem, but Microsoft seems not to have even considered this.

French references the strict requirements lawyers have not to commingle data or information across cases, which could be a problem if Copilot looks through every file a user has access to when generating a response to a prompt.

But you can imagine other obvious scenarios where this would be a no-no. For example, I might have a OneDrive folder somewhere that contains a copy of an employee’s most recent performance review and a copy of a previous disciplinary letter.

If I am writing a new document that references this employee, I sure as hell do not want Copilot potentially mixing information from either of those documents into my new document. I would hope I would catch it if that happened, but get enough users creating enough documents, and this sort of boundary crossing is inevitably going to occur in the wild, likely with legal or other consequences.

Moreover, Microsoft appears to be intentionally making the process of turning Copilot off confusing and challenging. Its current documentation includes instructions for turning off Copilot in specific applications and then adds that if you do so in one application, it will be turned off elsewhere. However, it is difficult to trust Microsoft, to say the least, given its history of “accidentally” turning back on features that users explicitly disabled.

French’s entire 18-minute video is well worth watching, and it is hard to draw any conclusion other than that Microsoft’s AI offerings are unsafe to use under pretty much any circumstance, given that the company wants them so tightly integrated with the OS and applications that will likely be installed on at least a billion devices over the coming few years.
