SERIES: Security Now!
DATE: August 21, 2018
TITLE: The Foreshadow Flaw
HOSTS: Steve Gibson & Leo Laporte
DESCRIPTION: This week, as we head into our 14th year of Security Now!, we look at some of the research released during last week's USENIX Security Symposium. We also take a peek at last week's Patch Tuesday details, Skype's newly released implementation of Open Whisper Systems' Signal privacy protocol, Google's Chrome browser's increasing pushback against being injected into, news following last week's observation about Google's user tracking, Microsoft's announcement of more spoofed domain takedowns, another page table sharing vulnerability, believe it or not “malicious regular expressions,” some numbers on how much money Coinhive is raking in, flaws in browsers and their add-ons that allow tracking-block bypasses, two closing-the-loop bits of feedback, and then a look at the details of the latest Intel speculation disaster known as the “Foreshadow Flaw.”
SHOW TEASE: It's time for Security Now! as we begin our 14th year. Wow. Episode 677. And it seems like for 14 years we've been talking about speculation flaws in Intel processors. There's a new one. Plus why regex can get you into trouble, and a retrospective look at this month's Patch Tuesday. It's all coming up next on Security Now!.
LEO LAPORTE: This is Security Now! with Steve Gibson, Episode 677, recorded Tuesday, August 21st, 2018: The Foreshadow Flaw.
It's time for Security Now!. Yes, it is. Oh, boy. We got cake, let me just put it that way. Security Now!, the show where we protect you and your loved ones online, and your privacy, and let you know how it all works with this guy right here, the Explainer in Chief, Steve Gibson. Hello, Steve.
STEVE GIBSON: Yo, Leo. Great to be with you.
LEO: And Happy Anniversary.
STEVE: As planned, we announced the end of Year 13 last week, so we get to note that it's the beginning of our 14th year today.
LEO: Wow. Wow.
STEVE: And so this is Episode 677 for August 21st, 2018. And there was no question that this one had to be titled “The Foreshadow Flaw.”
LEO: Oh, boy.
It's the gift that keeps on giving. Hey, before - we did have cake. The funny thing is the cake is because this is the second anniversary of the day we moved into this studio, so it's also that, as well as the anniversary.

And technically it was August 18th, 2005 that we began this show. And in the studio with me is Theodore from Sacramento. He is a computer science student at Sac State. He's here for his birthday.

So there's another reason for cake. Theodore asked his dad, he said - actually Dad asked him, “What do you want to do for your birthday?” And Theodore said, “For my 18th birthday I want to watch Security Now!.” But what we didn't calculate, and you did, is that means he was four years old when the show started.
STEVE: Yes, and I had hair.
LEO: You weren't quite five yet. You were five the day after the show started. Four and 364 days. And Steve, you did not. I'm sorry, but I have to beg to differ. I don't believe you had hair when we started the show.
STEVE: Oh, maybe not.
LEO: Did you?
STEVE: Maybe my moustache was darker, something. There has to have been some deterioration over the last 13 years.
LEO: No. We look exactly the same. This show is unchanged. And I can say that with certainty because the first 100 shows were all audio only. So there's no proof.
STEVE: We do know that the length has grown from 18 minutes to 120.
LEO: Honey Monkeys was only 18 minutes.
STEVE: More than six times longer.
LEO: It's longer because in the original show I feel like it was just news; right? And maybe even just one story.
STEVE: I was sort of sitting around thinking, okay, what are we going to talk about this week? Yeah.
LEO: Now you, like, work all week preparing this.
STEVE: It's become a labor of love.
LEO: Yeah. A labor, for sure.
STEVE: Yeah, well, so for example we're going to talk about the Foreshadow Flaw which is, as you said, the gift that keeps on giving. As I said at the beginning of the year when we saw the first crack of this happen, the concept that speculation was a problem, I thought, oh, goodness, this is really going to be bad. Well, I learned something that stunned me about what Intel was doing that I'm still - I still can't get over it, which we'll get to at the end of the show, which is the consequence of this flaw that just I'm gobsmacked, as they say in the U.K., over. But we're going to also look at some additional research that was released during last week's USENIX Security Symposium, which took place in Baltimore, Maryland.
Of course last week we were talking about the Black Hat and DEF CON conferences. These security conferences are always fabulous for interesting topics. We're going to take a look at last week's Patch Tuesday details. It's always happening, like during the podcast, so there's no chance to really do anything comprehensive about it. This one was a little more frightening than usual. We've got Skype's newly released implementation of Open Whisper Systems' Signal privacy protocol, which is welcome and interesting. Google's Chrome browser has taken its next step with its release number 68, doing something we talked about them announcing late last year, which is pushing back against being injected into.
So nobody wants that.
LEO: Nobody wants that.
STEVE: Well, I guess some people. Anyway…
LEO: Stop right there, Steve.
STEVE: Yes. We have some follow-up news to last week's observation from the research by the Associated Press about Google's user tracking. Microsoft's announcement of - just this morning I watched an interview with Brad Smith at Microsoft, talking about taking down six more spoofed web domains. We've got another page table sharing vulnerability, which is not the Foreshadow Flaw, but harkens back to a variation on the Rowhammer attacks that's not Rowhammer. So we'll talk about that. And believe it or not, Leo, the emergence of malicious regular expressions. Yes.
The ReDoS, Regular Expression Denial of Service. So we will talk about when regular expressions go bad. I also have some numbers, thanks to some research - actually it was one of these USENIX guys - on how much money Coinhive is raking in with their kind of questionable tactics. Some flaws in browsers and their add-ons which are allowing attempts to block tracking to be bypassed. A couple of closing-the-loop bits of feedback with our listeners. And then, not in 18 minutes certainly, hopefully we'll squeak this into 120, we'll look at some details of the latest Intel speculation disaster known as the Foreshadow Flaw.
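When a regular expression "goes bad" in the ReDoS sense, the underlying mechanism is catastrophic backtracking. This is only an illustrative sketch - the pattern and inputs are invented for demonstration, not taken from any real incident:

```python
import re
import time

# Nested quantifiers are the classic ReDoS trap: (a+)+ can split a run
# of "a"s into exponentially many groupings, and a trailing character
# that never matches forces the engine to backtrack through all of them.
EVIL = re.compile(r'^(a+)+b$')

def time_reject(n):
    """Time how long the engine takes to reject n 'a's followed by 'c'."""
    text = 'a' * n + 'c'
    start = time.perf_counter()
    result = EVIL.match(text)   # returns None, but only eventually
    return result, time.perf_counter() - start

result, fast = time_reject(10)   # roughly 2^9 groupings: effectively instant
result, slow = time_reject(20)   # roughly 2^19 groupings: visibly slower
print(f'n=10 took {fast:.6f}s, n=20 took {slow:.6f}s')
```

Each additional "a" roughly doubles the rejection time, which is why a short crafted input can pin a server's CPU.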
LEO: At least the name is creative.
STEVE: Oh, it's got a good logo. You can download - you can get the logo…
LEO: Oh, I'm looking for that in any flaw. It's got to have a good logo.
STEVE: You can get the logo in three different sizes, as a PNG or an SVG, whatever you want. So, yeah, it's got some good marketing behind it, and its own web domain, of course, ForeshadowAttack.eu. So it's like…
LEO: Oh, lord.
STEVE: …okay, got to have all that these days.
LEO: Registered trademark.
STEVE: So I have a bunch of these pictures, which are lovingly sent to me when they're discovered in the wild by our listeners. They're simple to parse visually, and just a constant source of bemusement, because you think, okay, I mean, I don't get it. So just for those who are listening and not seeing, we have a very nice-looking, modern, high-end, push-button entry combination door lock with the 12-key pad, star and pound sign and one through zero. And prominently displayed on the top, where anyone except maybe a toddler would be able to see it, it says: “Passcode: 4143#.”
LEO: It's good they put the pound on there. I might forget to press that button.
STEVE: Yeah, wouldn't want to forget that.
LEO: It's obviously not a secure facility, I hope.
STEVE: I just really don't - I don't really get, you know, somebody must have had ambition for security at some point, but it was just too pesky. You know? It was like, okay, what's that code? You know, and the cleaning people couldn't get in, and the fire department was complaining because they need to have entry. It's like, okay, what, you know, forget about this, let's just - but, you know, they didn't remove it. They just said, okay. Well, I guess that gives them the option of peeling the tape off and changing the code, or just changing the code. I guess if you left the wrong code on, that would be even more security. I don't know. Anyway…
LEO: Yeah. You don't know that that's the right code.
STEVE: That's true.
LEO: I mean, maybe they're smart.
STEVE: That could really be tricky. Or the secret could be do it backwards. It's actually 3414.
LEO: Or, you know, ROT-13 or something. You know, do a little transformation on that.
STEVE: Anyway, I'm tempted to do a weekly series of these for the foreseeable future because I have a bunch of them. And they're all different.
LEO: Everybody sends them to you.
STEVE: They're all different. And they're just like, okay. You know, really, what is the story? I just don't get it.
LEO: I love it.
STEVE: That's like, okay, you know…
LEO: I wish we'd had you with us yesterday. We shot a piece for The New Screen Savers in an escape room. Have you ever done escape rooms?
It's basically a giant puzzle. So this one is a team building escape room in San Francisco called Reason. And the idea is - we were just five people. It's built for 10 people, but we were five people. You go in. They lock the door. And you have an hour and a half to get out. And in this case the scenario was a nuclear reactor is scrammed, and it's going to explode in 90 minutes unless you can figure out how to stop it.
STEVE: Ah, not good.
LEO: And it's all these - and there's Arduinos. There's VR. There's drones. There's all these technologies in this small couple of rooms. Actually, there's one room, and then once you figure out some puzzles another door opens, and you get another room. And it's puzzles like, you know, there's code. And it's really fun because it's mathematical. It's visual. And we could have used your brain, frankly. Because we got out…
STEVE: And communication among all the participants.
LEO: That's why it's team building; right.
Because, oh, I got it over here. And some people are better at some things, and so you have to figure out who's good at this. The best team got out in 48 minutes. We got out at 90 minutes and four seconds. It had blown.
We got out, literally, it blew, and the door opened. We were that close. So it was fun. Next time you're in town - I know you want to come up for SQRL - we'll take you.
STEVE: Yeah, yeah.
LEO: Because you would have probably got us out in 48 minutes.
STEVE: Would have been fun to interact with everybody.
STEVE: So we have last week's second Tuesday of the month, famously now forever branded Patch Tuesday. And again, as I said at the top of the show, it's happening as we're doing the podcast, so there's never really time to go in-depth. However, last week we had Microsoft releasing patches for 60 - that's not 16, that's six zero - flaws, two of which were actively being attacked at the time of release. I say this because we have a grab bag of problems here.
And I think probably - I know that there are people who are annoyed by the updates, and they're like, I don't want to reboot my computer now. Certainly enterprises, as we've been talking about: there was that survey taken recently by the gal who is an MVP for Microsoft, who asked people, how are you feeling about all of these updates? And people are grumbly about them. And we know that enterprises have been hurt by rapidly applying updates without sufficient testing. They feel like they want to test them themselves. Anyway, some months of patches are more important than others.
In this case, of the 60 flaws, 19 were rated critical by Microsoft and encompassed Windows, Edge, IE, Office, the ChakraCore, .NET Framework, Exchange Server, Microsoft SQL Server, and even Visual Studio. And I've got to ask how Visual Studio is in there. But all of them allowed remote code execution when successfully exploited. So 19 remote code execution vulnerabilities. Two of those were known to be publicly exploited in the wild at the time - which is to say they're still workable against any still-unpatched machines. The first one was an IE memory corruption problem which affected IE 9, 10, and 11 on all versions of Windows - probably, you know, they don't talk about XP anymore, but 7, 8.1, and 10, and all of the server versions - and which allows remote attackers to take control of any system left unpatched as of last Tuesday, just by convincing users to view a specially crafted website through IE. So not good.
In its advisory, Microsoft noted that an attacker could also embed an ActiveX control marked as safe for initialization in an application or a Microsoft Office document which is able to carry those, that hosts the IE rendering engine. And so that would be a way of not directly, but rather indirectly, getting IE involved, even if it's an Office document or something else. So anyway, so that allows a bad guy to run code on your machine.
The second publicly known and actively exploited flaw resides in the Windows shell as a consequence of improper validation of file paths. And, boy, we're always talking about file path exploits and validations. We talked about the famous ..\..\..\ as a means of backing up above the level where you're starting in the directory hierarchy in order to get up to mischief. And that's been a problem for web browsers and for operating systems for a long time. In this case this flaw - they don't go into any further detail, but it is now being exploited - allows once again arbitrary code to be executed on targeted systems by convincing victims to open a specially crafted file which they receive via email or on a web page. So those two are known to be being used right now.
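The ..\..\ traversal trick just described is typically defended against by fully resolving a path before using it, then checking that it still lands inside the intended directory. A minimal sketch - the directory and file names here are illustrative, and this is not Microsoft's actual fix:

```python
import os

def safe_join(base_dir, user_path):
    """Resolve user_path relative to base_dir and refuse any result
    that escapes base_dir via ../ (or ..\\ on Windows) components."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    # commonpath collapses both paths to their shared prefix; if that
    # prefix isn't base itself, the user path climbed out of the sandbox.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f'path escapes {base_dir!r}: {user_path!r}')
    return candidate

print(safe_join('/tmp', 'logs/app.log'))    # stays inside /tmp: allowed
# safe_join('/tmp', '../../etc/passwd')     # would raise ValueError
```

The key point is to validate the *resolved* path, not the raw string; naive checks for a literal `..` substring are exactly what attackers learned to route around.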
That leaves 17 other known remote code execution flaws which they fixed and which, as far as they know, nobody is currently exploiting. SQL Server, in both the 2016 and 2017 versions, is vulnerable to a buffer overflow that can be exploited remotely by an attacker. However, they have to be able to somehow submit a query to an affected server. So, I mean, if you've got your SQL Server query port wide open on the Internet, accepting unauthenticated queries, you've got bigger problems than that.
LEO: Seems slightly dangerous.
STEVE: They'd have to somehow figure out how to get access to that. But still, we know that hackers are devilishly clever, so you don't want that to be left unpatched. Windows 10 has a PDF-based remote code execution vulnerability when Edge is left as its default browser, which allows Windows 10 to be compromised by a user merely visiting a website that hosts a malicious PDF. And Microsoft notes that malicious ads can host content that can trigger this vulnerability. So they're not saying that they know of this happening, but they're glad that they got this patched, and they would like everyone to apply it.
Exchange Server has a remote code execution vulnerability which is worrisome since it would support attacks targeting specific enterprises known to be using Exchange Server 2010, 2013, or 2016, all of which were vulnerable until last week. It's a memory corruption vulnerability which allows a bad guy simply to send email to the Exchange Server. So it's not a question of a port being open in this case. Everybody's advertising their port with their MX record, with DNS saying please send us email here. And if a bad guy does that with a cleverly formatted piece of email, they can obtain control of the enterprise's Exchange Server. So if anybody is listening and has been deferring updating, this is CVE-2018-8302. You're going to want to make sure you're patched for that one because it's particularly useful for targeted attacks.
It turns out that all of Microsoft's currently supported versions of Windows, meaning 7, 8.1, and 10, as well as Servers 2012 and 2016, share the same GDI, the Graphics Device Interface subsystem, which has a vulnerability in its font processing library due to improper handling of specially crafted embedded fonts. Websites are able to ask browsers to download and render fonts on the fly. That's a nice feature for websites. But when your font renderer cannot be trusted, then it creates a vulnerability which can be exploited. And that's the case here. A malicious website can cause your browser to download and render a malicious font which it hosts. Or maybe bad guys could sneak the font into wherever the website is pointing, so it wouldn't be the site's fault, technically. It would be a bad font that got in. And then it could compromise your system. And lastly, believe it or not, what is it, 2018? How old is Theodore? He's 18?
LEO: He's 18, yeah.
STEVE: When he was five, we were having problems with link files, with Windows shortcut link files. He's now 18, and we're still having problems with those pesky .lnk shortcuts.
LEO: You'd better get on this, Theodore. See if you can fix that, yeah.
STEVE: I just wish we would stop screwing around with these operating systems. Just leave them alone. Because we don't seem to be making any real progress here. A remote code execution vulnerability exists - this is one of those 19 serious top-level problems - in all versions of Windows that allows remote code execution if a .lnk shortcut file is processed. An attacker could present the user with a removable drive or a remote share that contains a malicious .lnk file. Or, you know, maybe a thumb drive that's got something on it. It's like, oh. And again, we're still having problems. As a consequence, just opening the drive - when the operating system looks at the .lnk file, just enumerates it - causes it to run an associated malicious binary, naturally a binary of the attacker's choice, on the target system. So even before a week ago, it was still the case on all versions of Windows and Windows Server that plugging in a thumb drive that the system was allowed to look at could cause a takeover. So again, throughout Theodore's entire life this has been a problem, and still is.
LEO: He's weeping. He's weeping right now.
STEVE: Well, no. He's in computer science, so he's got a great future ahead of him.
LEO: Yeah, exactly, yeah.
STEVE: That's right. He might have worried when he was five that, oh, by the time I'm old enough…
LEO: There'll be nothing to do.
STEVE: There'll be nothing left.
LEO: Nothing left.
STEVE: Not to worry, Theodore.
LEO: It'll all be solved.
STEVE: That's right. The podcast just keeps getting longer and longer as we struggle to deal with what happened just in the previous week. Speaking of which, not to be left out of any good Patch Tuesday, Adobe released security patches for 11 vulnerabilities, two of which were rated as critical, for Acrobat and Reader. Of course Flash Player was among them. Creative Cloud Desktop and Experience Manager were the other apps that were affected. The good news is in this case none of the 11 things that were patched were either publicly known or known to be actively exploited in the wild. So these were responsibly disclosed. However, I would argue that the real responsible thing to do would be just to stop, just to say goodbye to Flash Player because it's just so optional now. Although we did have the listener who said, you know, our company needs it to be embedded in PDFs. Ouch. Okay. What could possibly go wrong?
So there was a bit of nice news, which is that Skype - Microsoft of course now owns Skype. And I was a little nervous, Leo, when I turned my Skype machine on this morning because…
LEO: You never know, do you.
STEVE: You never know. And it would really mess up my Tuesday and yours. Because this time I got, for no reason that I could tell, the big dark-screen UAC, like, click this to approve changes. And I went, like, what? Just from turning the machine on, which I've never seen before.
So I looked at it closely, and it said Microsoft's Skype. And I'm like, oh, no, no, please. I mean, what could I do? I gulped, and I said okay. And it kind of rummaged around for a while and made a lot of noise, and then everything settled down, and here I am. So once again.

However, what they announced last week was the support which had been promised at the beginning of the year, when they announced their relationship with Open Whisper Systems. In fact, I have a link in the show notes to the Signal.org blog about the Skype partnership with Signal. And, you know, this surprised people at the time because it was like, wow, really? Microsoft's going to add end-to-end encryption to Skype. And it must be because everybody else has. You know, WhatsApp now supports Signal. Google does. Facebook does. There's even a Signal app on our various mobile devices, and a desktop version.
So anyway, the good news is, in Skype, you click on the Chat + button. And I did yesterday because I was like, oh, really? The third item in the menu is Start Private Conversation. And this, as you would expect now that Skype supports so-called private conversations - although Microsoft apparently didn't really announce it; it got noticed in the wild that it was now there - is the Signal protocol which Skype now supports. Skype's Private Conversations appeared for Android at the beginning of the month, on August 2nd, as I mentioned. And Windows Skype received it just at some point in the last few days. And when I looked, I saw I had missed a few incremental releases. But whatever it was that I clicked on this morning when I fired up my Skype machine, who knows where I am today. But it is present.
And I'll just say that I'm always a little uncomfortable when explicit key management is taken away from the user and is performed by the system. I mean, I think it is a rational tradeoff given that no typical user wants to be bothered with key management. We've talked about how Apple's iMessage system is cryptographically strong, but it inherently supports multiway messaging, and the user is not managing the keys for the recipients to the chat themselves. Which does allow Apple, if they were compelled to, to insert a key for an additional party that would then be participating silently in messaging.
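The standard answer to that silent-additional-party risk is out-of-band verification of key fingerprints, which Signal surfaces to users as "safety numbers." The sketch below only illustrates the idea; Signal's real derivation is different and more involved, and the key values here are placeholders:

```python
import hashlib

def fingerprint(pub_key_a: bytes, pub_key_b: bytes) -> str:
    """Derive a short, human-comparable code from both parties' public
    identity keys. Sorting the keys makes the result symmetric, so both
    users compute the identical code regardless of who calls it."""
    material = b''.join(sorted([pub_key_a, pub_key_b]))
    digest = hashlib.sha256(material).digest()
    # Render the first 10 bytes as five groups of digits, in the
    # spirit of a safety number.
    num = int.from_bytes(digest[:10], 'big')
    code = f'{num:025d}'
    return ' '.join(code[i:i+5] for i in range(0, 25, 5))

# Placeholder key material, purely for illustration.
alice_view = fingerprint(b'alice-pubkey', b'bob-pubkey')
bob_view = fingerprint(b'bob-pubkey', b'alice-pubkey')
print(alice_view)   # both parties see the same code
```

If a server were to slip a different key into the conversation, the two users' codes would no longer match - which is exactly what comparing them over a separate channel is meant to catch.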
So again, is it an absolute problem? Well, it's a theoretical problem. But in reality - staying a little bit grounded - it's so trivial, as we know, to capture the keystrokes before or after the encryption, that is, outside of the encrypted channel, that what anybody using these systems should consider is that what they are providing is really, really strong protection against a network-located attacker. Remember, all of the concern for privacy ramped up with the Snowden revelations and the idea that the NSA had big data-sucking taps all over the Internet, funneling virtually all of the traffic off to some farm somewhere for permanent storage.
So that's where the real awareness of the need for HTTPS began to happen. And then we learned even that's not safe, because the lack of perfect forward secrecy allows a key that is subsequently revealed to be used to go back and decrypt what the NSA had been storing all these years. So now we're getting better about that. But in the case of Signal, we have a really robust on-the-wire system that protects against that. Still, users should remember that the protection is not absolute. As we talked about with Android last week - we sort of pounded on Android a bit, with the man-in-the-disk attack and such - it is still very possible for the endpoint to be exploited. Crypto technology, we've got that nailed now. We can do bulletproof crypto.
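Perfect forward secrecy comes from performing a fresh, ephemeral key agreement for every session and discarding the secrets afterward, so a long-term key revealed later can't decrypt recorded traffic. A toy Diffie-Hellman sketch of that per-session pattern - the group parameters here are for demonstration only; real systems use X25519 or a vetted 2048-bit-plus group:

```python
import secrets

# Toy parameters, only big enough to demonstrate the arithmetic.
P = 2**127 - 1   # a Mersenne prime; real deployments use far larger groups
G = 5

def dh_session():
    """One forward-secret session: each side generates a fresh ephemeral
    private key, exchanges public values, and derives the shared secret.
    The private values are discarded when this function returns."""
    a = secrets.randbelow(P - 2) + 1     # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1     # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)    # public values, sent in the clear
    shared_alice = pow(B, a, P)
    shared_bob = pow(A, b, P)
    assert shared_alice == shared_bob    # both sides derive the same secret
    return shared_alice

# Two sessions yield unrelated secrets: compromising one session key
# (or a long-term identity key) reveals nothing about the others.
s1, s2 = dh_session(), dh_session()
print(s1 != s2)
```

That's why an adversary who "stores everything now" gains nothing later: the ephemeral secrets that would unlock each recording no longer exist anywhere.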
But the NSA and law enforcement and other agencies realized, well, fine. Let them encrypt as it goes from point to point. We can still stick a tap in outside of that encrypted tunnel. So somebody who's really interested in seriously having untappable communications needs to think about having a dedicated device where they rigorously refrain from any other use of a dedicated device except that. I would get an iPhone with a known history, maybe brand new, and just not use it for anything else. Install Signal.
LEO: You would use Signal? You wouldn't use Apple Messages, obviously.
STEVE: Yes. I would use Signal. I think, you know, exactly. Because if you use the Signal app from Signal, then that's as good as you're going to get. They're taking advantage of the security in iOS. Again, you want to, like, resist all temptation to use it for anything else. It would be your secure comm platform if you really absolutely care about security. Because as far as we know, Signal is open source, it's an open protocol, thoroughly vetted. It's really been scrutinized. And so you want to put it on a platform that hasn't been contaminated. I noted that Signal is available for Windows, macOS, and Linux. And right now…
LEO: There's also a Chrome extension you can use, which is nice.
STEVE: Yeah, exactly. Although where I'm headed with this is that at some point it will become part of the Linux distro. It's probably not quite yet. It's hosted on something that's a little worrisome, which makes it only available for two of the different types of Linuxes. But my point is, once it became bound into the turnkey distro, you could create a bootable CD. And that would be a very strong security way of doing messaging, if you needed to: to boot a CD that you trusted on a machine that you had some reason to believe was clean - I mean, as we know, there are even ways to infect the boot process if somebody was really, really being targeted. But that would be a way of getting a clean OS boot that already had a really state-of-the-art secure messaging technology built right into it. So I can't think of any real practical way of being more safe and secure than that. And it's got to be safe enough. And speaking of Google's Chrome browser….
STEVE: They are becoming - as I said, no one wants to be injected into, Leo - more proactive against code injection and the instabilities it causes. Last year we talked about Google's announcement to developers that they were going to start making Chrome increasingly intolerant of having code and hooks injected into its browser processes. As we know, software is aptly named. First there was hardware. And then it was like, oh, look, software. It's malleable. It can be changed. So hooking is a longstanding practice when using some sort of add-on, like antivirus facilities, which needs to access some of a system's internals that the system is not explicitly exposing via an API. Back in the early days of Windows, the early personal firewalls - Windows was firewall hostile - like ZoneAlarm and other early firewalls had to go in and hook the OS. They installed drivers that modified entry points to the OS itself, which gave them the access they needed to do their job and still let the OS operate correctly.
Now, of course, the problem is rootkits do the same thing. They hook the OS in order to hide themselves and do nefarious things. So you could have well-meaning hookers and malicious hookers of API entry points. The question is, does the system want to tolerate them? And increasingly, we're seeing them being abused. And also there are inadvertent problems that are causing crashes. And this is the way Google is arguing it: that not all companies are equally adept at going in and essentially modifying the Chrome internals in such a way that they don't cause side effects. And then there's the problem of Google trying to update Chrome, which may change the nature of some of the things that, for example, an antimalware program is hooking, thus causing a crash.
So, okay. At the beginning, in April of this year, Google, as they said they would - so they gave developers plenty of notice - began alerting users after a browser crash which they believed was caused by a third party who had injected code. The user would get a dialogue titled “Update or remove problem applications,” which said: “The following application could be preventing Chrome from working properly,” and would then name the application that it had determined had caused Chrome to crash. So they know that they're being hooked into, and they're tolerating it at the moment. But they're moving away from that posture. And so developers have been given notice.
Now, last week, with the release of Chrome 68, actually it was last month, the next phase of this gradual tightening has continued. Now the advisory nature of the previous warning has been elevated to a commandment. They said “update or remove incompatible applications.” And in some of the notices that I've seen samples of, Malwarebytes was mentioned, and BitDefender Total Security. So there are a couple user-added add-ons which have caused Chrome problems, which should be put on notice that their behavior is not going to be tolerated much longer.
The next phase of tightening is slated for January of 2019, when Chrome 72 will be released, and it will begin automatically blocking third-party code injection. So these things will become incompatible with Chrome. In their coverage of this, Bleeping Computer wrote - I think it was Lawrence who posted this article. He wrote: “Since this feature was enabled in July” - that is, the next level - “there have been an increasing number of reports about antivirus software being listed as incompatible applications by Chrome. Some of the antivirus applications,” he says, “that we have seen reported as incompatible include Malwarebytes, BitDefender, Eset, Emsisoft, AVG, IObit, and Avast.”
He said: “Strangely, there are many other programs that are also being listed as incompatible applications such as TortoiseGit, TortoiseSVN, Stardock, Acronis True Image, Dropbox, FileZilla, Acer Power Manager software, and RocketDock. While antivirus software,” he says, “I can understand being listed, some of these programs are a bit surprising.”
On the flipside, I would argue that to me they're not so surprising, because things like Stardock and RocketDock are adding browser enhancements, in a way, which goes beyond just being an add-on. They're obviously hooking inside. A Google developer who posted to a Google Help Forum regarding these alerts said they have no way of determining which programs are innocently injecting code and which might be malicious - as I mentioned, rootkits do this, too.
So in his posting he said: “Chrome dev here. This is related to a new feature that aims to prevent third-party software from injecting into Chrome's processes and interfering with its code. This type of software injection is rampant,” he writes, “on the Windows platform and causes significant stability issues,” he says, parens, “(crashes).” He says: “The Microsoft Edge browser already does this kind of blocking, and we are in the process of making Chrome behave similarly. What you are seeing is the warning phase of a year-long effort to enable blocking, originally announced in November of 2017.”