Microsoft Internal Data Leak Exposes Employee Information

Microsoft AI researchers' mishap exposes tens of terabytes of sensitive data, including employee passwords and internal communications, underscoring the need for stringent data security measures.

In a significant security lapse, Microsoft AI researchers inadvertently exposed a vast trove of sensitive internal data. The leak comprised tens of terabytes of information, including private keys, passwords, and more than 30,000 internal Microsoft Teams messages from hundreds of employees. The breach stemmed from a misconfigured Azure Storage URL shared on GitHub: the link was meant to provide access to AI training models, but it mistakenly granted permissions over the entire storage account. Along with backups belonging to two Microsoft employees, the exposure posed a substantial security risk, potentially allowing attackers to inject malicious code into the AI models hosted in the account.

The repository, intended to distribute AI training models, became a potential gateway for arbitrary code execution because of the nature and format of the exposed data: AI models are commonly serialized in formats such as Python's pickle, which can run code when deserialized, so an attacker with write access could have planted malicious payloads. The link relied on a Shared Access Signature (SAS) token, Azure Storage's mechanism for delegating access through a signed URL, and this one was scoped to the entire account rather than to the files meant for sharing. The episode highlights the inherent risks of handling and sharing massive volumes of data, especially when sensitive information is involved.
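For contrast, here is a minimal sketch, using the azure-storage-blob Python SDK, of how such a URL can be issued with a tight scope: a single container, read-and-list-only permissions, and a short expiry. The account and container names below are placeholders, not those from the actual repository.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholder values for illustration; not the real account or container.
ACCOUNT_NAME = "exampleaiaccount"
CONTAINER = "training-models"
ACCOUNT_KEY = "<storage-account-key>"

# Issue a token scoped to one container, read/list only, expiring in 7 days,
# instead of an account-wide token with broad permissions and a far-off expiry.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

# The resulting URL grants access to this container only, until the expiry.
share_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas_token}"
print(share_url)
```

Because the scope and expiry are baked into the signed token itself, a leaked URL of this kind exposes one container for a bounded time, rather than handing out the whole account indefinitely.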

Security firm Wiz stumbled upon the lapse during its routine scans for exposed cloud-hosted data. Upon finding the misconfigured GitHub repository, which belonged to Microsoft's AI research division, the firm reported the issue to Microsoft on June 22, 2023. The software giant acted swiftly, revoking the overly permissive SAS token by June 24 and cutting off further unauthorized access.

Microsoft’s response to the incident highlights the delicate balance between innovation and security in the era of AI. The company acknowledged the breach, stating that no customer data was compromised and that no other internal services were put at risk. It has also moved proactively to strengthen its safeguards, notably by enhancing GitHub’s secret scanning service to detect and prevent similar exposures in the future.
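GitHub’s actual scanning is proprietary and far more thorough, but the underlying idea can be sketched with a toy pre-commit check: Azure SAS URLs carry telltale query parameters such as `sig=`, so a simple pattern match can flag them before they reach a public repository. Everything below is illustrative, not Microsoft’s or GitHub’s implementation.

```python
import re
import sys

# Azure SAS URLs embed the signature in query parameters (sv=, se=, sig=, ...).
# This toy check flags likely SAS URLs in files passed on the command line.
SAS_URL = re.compile(
    r"https://[\w.-]+\.blob\.core\.windows\.net/\S*[?&]sig=[\w%/+=-]+",
    re.IGNORECASE,
)

def scan(path: str) -> int:
    """Return the number of suspected SAS URLs found in the given file."""
    hits = 0
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            if SAS_URL.search(line):
                print(f"{path}:{lineno}: possible Azure SAS URL")
                hits += 1
    return hits

if __name__ == "__main__":
    # Usage (e.g. from a pre-commit hook): python sas_check.py <files...>
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```

A production scanner would also look for account keys, connection strings, and many other credential formats, and would run server-side on every push rather than relying on contributors’ local hooks.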

Even so, the incident underscores the critical importance of stringent security controls and vigilant monitoring to prevent accidental data exposure. It also highlights the complexities of managing large datasets in cloud environments, particularly in open-source projects and AI research.

About the author

Aditi Sharma

Aditi holds a Master of Science degree from Rajasthan University and has seven years of experience in the field. Her forward-thinking articles on future tech trends are a staple at annual tech innovation summits, and her passion for emerging technology ensures that our readers are always informed about the next big thing.
