News
Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data, including private keys and passwords, starting in July 2020, while publishing a storage bucket of open-source training data to a public GitHub repository.
A vulnerability in Microsoft Corp.'s Azure App Service has been found to expose hundreds of source code repositories. The flaw was discovered by security researchers at Wiz Inc. and detailed Dec. 21.