Late last week, Microsoft announced that, after a series of high-profile data breaches involving its services, it would now be “making security our top priority at Microsoft, above all else.” Today, as part of the annual RSA Conference in San Francisco, the company announced a number of new initiatives designed to boost security for business and enterprise users.
In a blog post, Microsoft stated it is expanding the features of its Microsoft Defender for Cloud service to help businesses protect their AI apps and infrastructure against cyberattacks:
Now security teams can identify their entire AI infrastructure—such as plugins, SDKs, and other AI technologies—with AI security posture management capabilities across platforms like Microsoft Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock. You can continuously identify risks, map attack paths, and use built-in security best practices to prevent direct and indirect attacks on AI applications, from development to runtime.
Microsoft claims Defender for Cloud is the first cloud-native app to add threat detection for AI workloads at runtime.
Microsoft has also launched a preview version of the Purview AI Hub. It will give businesses more insight into how many of their users are accessing AI apps, along with the risk level of those users. It will also reveal what kinds of sensitive data are being shared with those AI apps, and more.
The company is also adding more features to its Microsoft Sentinel and Defender XDR services. These include new threat scenarios for Defender XDR to handle, such as the ability to disable malicious OAuth apps. Security analysts can also now use the service to surface risk information as part of their threat and incident investigations.
Microsoft also announced today that it has added a number of third-party plugins for its Copilot for Security generative AI assistant service.