The Microsoft team that strives to catch bugs before they happen

As cybercriminals, state-sponsored hackers, and fraudsters continue to flood the zone with digital attacks and aggressive campaigns worldwide, it’s no surprise that the maker of the ubiquitous Windows operating system is focused on security defenses. Microsoft’s Patch Tuesday update releases often contain fixes for critical vulnerabilities, including those actively exploited by attackers around the world.

The company already has the necessary groups to hunt down weaknesses in its code (“the red team”) and develop countermeasures (“the blue team”). But recently, this format has evolved again to encourage more collaboration and interdisciplinary work, with the hope of catching even more bugs and shortfalls before things spiral. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, the blue team, and the so-called green team, which focuses on finding mistakes, fixing the weaknesses the red team uncovers, and correcting them more systemically through changes in how things are done within the organization.

“People are convinced that you can’t move forward without investing in security,” said David Weston, Microsoft’s vice president of enterprise and operating system security, who has been with the company for 10 years. “I’ve been in security for a very long time. For most of my career, we were seen as annoying. Now, if anything, executives come to me and say, ‘Dave, am I OK? Have we done all we can?’ It has been a significant change.”

Morse has worked to promote secure coding practices across Microsoft so that fewer bugs end up in the company’s software in the first place. OneFuzz, an open source Azure testing framework, enables Microsoft developers to constantly and automatically fuzz their code with all sorts of unusual use cases, flushing out bugs that wouldn’t surface if the software were only used exactly as intended.
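OneFuzz itself is an Azure-hosted service, but the core idea behind fuzzing can be sketched in a few lines: throw randomized inputs at a piece of parsing code and flag any input that triggers an unexpected crash. The toy parser and fuzz loop below are purely illustrative (not OneFuzz code, and the `parse_pair` bug is invented for the example):

```python
import random

def parse_pair(data: bytes) -> tuple:
    """Toy parser with a planted bug: it assumes the input always
    contains at least two bytes, so short inputs crash it."""
    return data[0], data[1]

def fuzz(target, iterations: int = 1000, max_len: int = 16, seed: int = 0):
    """Feed random byte strings to `target` and collect every input
    that raises something other than the expected ValueError."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass  # a clean rejection of malformed input is fine
        except Exception as exc:
            crashes.append((data, exc))  # unexpected crash: a real bug
    return crashes

# Inputs of length 0 or 1 trigger an unhandled IndexError in parse_pair,
# the kind of edge case a normal caller would never exercise.
crashes = fuzz(parse_pair)
```

Real fuzzers such as OneFuzz add coverage guidance, input mutation, and distributed execution on top of this basic feed-and-observe loop, but the payoff is the same: crashes on inputs no human tester would think to try.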

The combined team has also been at the forefront of promoting the use of safer programming languages (such as Rust) across the company. And they have recommended integrating security analysis tools directly into the actual software compiler used in the company’s production workflow. This change has been significant, Weston says, because it means developers aren’t doing hypothetical analysis in a simulated environment, where some bugs might be overlooked at a step removed from real-world production.

The Morse team says the shift toward proactive security has led to real progress. In a recent example, Morse members examined legacy software, an important part of the group’s job since so much of the Windows code base was developed before these expanded security reviews. While investigating how Microsoft had implemented Transport Layer Security 1.3, the foundational cryptographic protocol used across networks like the internet for secure communication, Morse discovered a remotely exploitable flaw that could have given attackers access to target devices.

As Mitch Adair, Microsoft’s Principal Security Lead for Cloud Security, put it: “That would have been as bad as it gets. TLS is used to secure virtually every service product that Microsoft uses.”