Sysdig introduces AI Workload Security for enhanced cloud risk management – SecurityBrief Asia


Sysdig has announced a new addition to its platform: AI Workload Security. The capability is designed to enable real-time risk assessment and management in AI environments, providing comprehensive visibility into AI workloads, flagging suspicious activity as it happens, and allowing teams to remediate issues quickly ahead of impending regulation.

According to Knox Anderson, SVP of Product Management at Sysdig, there is widespread demand for a solution that enables secure AI adoption while accelerating business processes. With AI Workload Security, he explained, organisations can understand their AI infrastructure and identify active risks; of particular concern are workloads that contain AI packages, are publicly accessible, and carry exploitable vulnerabilities. Because AI workloads are an attractive target for malicious actors, AI Workload Security detects suspicious activity so that organisations can respond efficiently to significant threats against their AI models and training data.

The Sysdig CNAPP, built on the open-source Falco project, provides cloud-native runtime security for both cloud-based and on-premises workloads. Kubernetes has become the preferred platform for deploying AI, yet the inherently ephemeral nature of containerized workloads makes securing data and mitigating risk in them a significant challenge. Sysdig’s CNAPP addresses this with real-time runtime visibility, surfacing malicious activity and runtime events that could lead to a breach of sensitive training data.
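Falco expresses this kind of runtime detection as declarative rules evaluated against system-call events. As a rough illustration only (the rule name, image filter, and condition below are hypothetical, not a shipped Falco or Sysdig rule), a minimal rule flagging unexpected outbound connections from a container running a common AI framework might look like:

```yaml
# Hypothetical example rule - not part of Falco's or Sysdig's default ruleset.
# "outbound" and "container" are macros provided by Falco's default rules file.
- rule: Unexpected Outbound Connection from AI Workload
  desc: >
    Detect outbound network connections from containers whose image name
    suggests an AI framework (illustrative filter only).
  condition: >
    outbound and container
    and container.image.repository contains "tensorflow"
  output: >
    Outbound connection from AI workload
    (command=%proc.cmdline container=%container.name dest=%fd.name)
  priority: WARNING
  tags: [network, ai]
```

In practice, a production rule would be scoped with allow-lists of expected destinations to keep false positives manageable.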

Sysdig’s real-time AI Workload Security identifies and prioritises workloads containing leading AI engines and software packages such as OpenAI, Hugging Face, TensorFlow, and Anthropic. Knowing where AI workloads are deployed enables organisations to manage and control their AI usage effectively. Sysdig also simplifies triage and shortens response times by integrating AI Workload Security with the company’s unified risk findings feature, giving security teams a coherent view of all correlated risks and events and a more streamlined workflow to prioritise, investigate, and remediate Active AI Risks.

An alarming finding from Sysdig’s research is that 34% of all currently deployed GenAI workloads are publicly exposed, meaning they are reachable from the internet or other untrusted networks without appropriate security controls. This exposure puts the sensitive data used by GenAI models at immediate risk, potentially leading to security breaches and data leaks and opening the door to regulatory compliance difficulties.

The timing of the AI Workload Security announcement reflects the accelerating pace of AI deployment and growing apprehension about the security of AI models and their training data. A recent survey found that 55% of organisations plan to implement GenAI solutions this year, and deployments of OpenAI packages have increased threefold since December. The introduction of AI Workload Security is also shaped by the guidance proposed in the Biden Administration’s October 2023 executive order on AI. By alerting organisations to public exposure, vulnerabilities, and runtime events, Sysdig’s AI Workload Security helps them resolve issues promptly ahead of imminent AI legislation.

Anderson explained that without proper runtime insights, AI workloads can expose organisations to unnecessary risk, including the possibility of threat actors exploiting vulnerabilities to access sensitive data. Enhanced security controls and runtime detections tailored to these unique challenges are therefore crucial, he concluded. Sysdig aims to help businesses address these issues so that organisations can enjoy the efficiency and speed advantages offered by generative AI.
