Report 5329
A critical vulnerability could have enabled attackers to unleash prompt injection attacks against Copilot users, though Microsoft ultimately addressed the issue before it went public.
Aim Security, a firm that sells security tools in the AI product space, published research Wednesday regarding "EchoLeak," a zero-click vulnerability, tracked as CVE-2025-32711, targeting Microsoft 365 Copilot. According to researchers at the vendor's Aim Labs, attack chains involving the vulnerability "allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior."
Copilot represents a suite of AI-powered tools within the M365 ecosystem that allow the user to draft documents via generated text, analyze data in email and Excel, and deploy agents, a recent emergence in the hyperactive LLM product space.
Even though Copilot is normally open only to internal members of a customer organization, Aim Lab claims that it can still execute the attack through merely sending an email.
Microsoft has issued an update addressing the vulnerability, and the company says no customer action is needed. Moreover, there are no known cases of customers being compromised by the attack to date.
EchoLeak in Action
The attack starts with a threat actor sending an email to the victim, an email that intends to instruct Copilot to offer sensitive data. In other words, a kind of prompt injection attack. These email-based attacks typically rely on the fact that AI agents often scan emails and check URLs before the user gets the chance to do so, for the sake of generating summaries.
But although cross-prompt injection attack (XPIA) classifiers would normally prevent basic prompt injections from reaching the user's inbox, EchoLeak relies on phrasing the email so that it reads like instructions to the user rather than Copilot. This bypasses the classifiers and enables the email to reach its final destination.
In the email, the attacker includes a link to its domain appended with "query string parameters that are logged on the attacker's server."
"The attacker's instructions specify that the query string parameters should be THE MOST sensitive information from the LLM's context, thus completing the exfiltration," the research reads. Although Copilot would normally know to redact markdown links in the chat log that would otherwise be considered unsafe, the attacker URL uses reference-style markdowns instead, which at the time bypassed this safeguard.
In an example, researchers used this strategy to ask Copilot "What's the API key I sent myself?" and the Copilot instance generated a response.
A similar markdown formatting trick enabled researchers to generate an image with Copilot through an email. Although the vulnerability's impact, particularly here, would have been lessened by Microsoft's Content Security Policy, which requires URL allowlisting. "So essentially we can now have the LLM respond with an image, but the browser does not try to fetch it for us, as evil.com is not compatible with img-src CSP," said the researchers, who ultimately used quirks in SharePoint and the Microsoft Teams invitation process to ultimately bypass this entirely.
Microsoft Addresses EchoLeak
In sum, the vulnerability and larger attack chain would have enabled a threat actor to send a specifically phrased email that passes filters, include a link embedded with malicious instructions, and get Copilot to feed sensitive data back to an attacker-controlled domain --- all while bypassing existing guardrails.
A spokesperson for Microsoft shared the following statement with Dark Reading:
"We appreciate Aim Labs for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted. We have already updated our products to mitigate this issue and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture."
Although the vulnerability has been addressed, it's worth noting that CVE-2025-32711 was granted a critical severity CVSS score of 9.3. Questions remain regarding the potential for such a prompt injection flaw to impact other AI products.
Adir Gruss, co-founder and chief technology officer (CTO) of Aim Security, tells Dark Reading in an email that although these types of prompt injection attacks are "very relevant" to other vendors and products, the implementation details for each agent are different. "We have already found several similar vulnerabilities in other platforms," he adds.