A powerful key was stolen from one of the world’s largest companies. Microsoft still has questions to answer.
An attacker could have been forging access tokens to Microsoft services for up to two years, unnoticed
Microsoft has published its post-mortem into the theft, from the heart of its systems, of a powerful cryptographic key that could be used to access other services it provides. The breach of one of the world’s largest companies, first reported in July 2023, saw data stolen from upwards of 25 customers, including multiple federal agencies, heaping pressure on Redmond to explain how the incident had unfolded.
A September 6 report into the incident by the Microsoft Security Response Center (MSRC) was greeted warmly by many in the cybersecurity community. “This report was top notch. It has reset the bar for what transparency in incident reporting looks like,” said former NSA employee Jake Williams in a post on X (previously Twitter), a view widely shared.
Microsoft had earlier attributed the attack to a Chinese group that it tracks as Storm-0558. It was first alerted to the attacks, seemingly by a federal customer, on June 16, 2023; it subsequently began an “investigation into anomalous mail activity” and found that 25 organisations, and targeted individuals working for them, had been hacked. (The extent and impact of data loss among downstream customers remains unclear.)
Early assessments by Microsoft revealed that the attacker had forged authentication tokens to access enterprise email servers using a stolen Microsoft account (MSA) cryptographic key. That should not have happened. Even if it had, an MSA key is a consumer key and should not have been usable to breach enterprise email accounts. But the big question was how the attacker had “acquired” (the term Microsoft initially used) the key in the first place; something MSRC’s report this month aimed to clarify.
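The mechanics matter here: whoever holds a token-signing key can mint tokens that relying services will accept as genuine, without ever touching the identity provider itself. The sketch below illustrates that principle with a JWT-style token. It is a simplified, hypothetical illustration using a symmetric HMAC key and invented claim names; real MSA keys are asymmetric signing keys and this is not Microsoft’s actual token format.

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_token(claims: dict, key: bytes) -> str:
    # JWT-style structure: header.payload.signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str, key: bytes) -> bool:
    # A relying service recomputes the signature with the trusted key
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)


# Hypothetical stolen key material: any party holding it can mint
# tokens impersonating any user, for any service that trusts the key
STOLEN_KEY = b"example-signing-key-material"
forged = sign_token({"sub": "target-user@example.com", "aud": "mail"}, STOLEN_KEY)
print(verify_token(forged, STOLEN_KEY))  # → True
```

The point of the sketch is that verification checks only that the signature matches the key; nothing in the token itself distinguishes a legitimately issued token from a forged one.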
Microsoft key breach post-mortem in brief
In short, MSRC’s report this month says this happened:
- In April 2021 its consumer key signing system crashed.
- This generated a “crash dump” that included the key itself.
- The key’s presence in the crash dump was not detected.
- At a later (notably unspecified) date, this crash dump was moved from an “isolated production network” into a debugging environment on an internet-facing corporate network.
- At some point after April 2021 (again, notably unspecified), the attacker was “able to successfully compromise a Microsoft engineer’s corporate account” that had access to the debugging environment containing the crash dump holding the key.
MSRC’s report is clear that this is a best guess, based on analysis of the key’s exposure (the kind of deep sift that so impressed observers), and that, unfortunately, “due to log retention policies, we don’t have logs with specific evidence of this exfiltration by this actor, but this was the most probable mechanism by which the actor acquired the key.”
It also flags that the “key material’s presence in the crash dump was not detected by our systems” in the wake of the initial crash (“this issue has been corrected”), and that “our credential scanning methods did not detect its presence” once the crash dump was moved into a corporate environment (“this issue has [also] been corrected”).
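Credential scanners of the kind Microsoft describes typically look for known secret formats, such as PEM private-key headers, and for high-entropy byte runs that resemble raw key material. A minimal, hypothetical sketch of such a scan over a crash dump follows; the pattern, window size, and entropy threshold are illustrative choices, not Microsoft’s actual tooling.

```python
import math
import re

# Known-format check: PEM-encoded private keys carry a distinctive header
PEM_RE = re.compile(rb"-----BEGIN [A-Z ]*PRIVATE KEY-----")


def shannon_entropy(data: bytes) -> float:
    # Bits per byte; random key material scores near 6+ over small windows,
    # ordinary text and padding score much lower
    if not data:
        return 0.0
    counts = {b: data.count(b) for b in set(data)}
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())


def scan_dump(dump: bytes, window: int = 64, threshold: float = 5.0) -> list:
    findings = []
    if PEM_RE.search(dump):
        findings.append("PEM private-key header found")
    # Heuristic check: flag windows whose entropy suggests raw key material
    for i in range(0, len(dump) - window, window):
        if shannon_entropy(dump[i:i + window]) > threshold:
            findings.append(f"high-entropy region at offset {i}")
    return findings


dump = b"unremarkable log text " * 8 + b"-----BEGIN RSA PRIVATE KEY-----" + b"\x00" * 32
print(scan_dump(dump))  # includes "PEM private-key header found"
```

The gap MSRC describes is consistent with such scanning running over source repositories but not over debugging artifacts like crash dumps, where secrets can appear as raw process memory rather than recognisable text.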
(These issues arguably should have been corrected much earlier. Microsoft suffered another breach in late 2020, in which attackers downloaded source code related to Azure components that it described at the time as “subsets of service, security, identity”, as well as source code for a “subset” of Intune and Exchange components. Redmond emphasised in its post-mortem of that incident that “our development policy prohibits secrets in code and we run automated tools to verify compliance”. Presumably those tools did not extend to scanning debugging data; a major oversight given, as Jake Williams noted: “As a former exploit developer, a repository of crash dumps, especially from an internal environment, sounds like winning the lottery...”)