Can AI be used to finally secure software and data supply chains? – SecurityBrief Australia


Open source software (OSS) supply-chain threats are wreaking havoc on the global cybersecurity landscape. Attacks such as SolarWinds, 3CX, Log4Shell, and now XZ underscore the potentially devastating impact of these security breaches. This is being felt locally in Australia, according to a report from PwC.

The ubiquity of open-source software is a key driver of supply chain attacks: open-source libraries and languages form the foundation of over 90% of the world’s software. Expect attacks on the open-source supply chain to accelerate as attackers automate their campaigns against widely used projects and package managers. Many CISOs and DevSecOps teams are unprepared to implement controls in their existing build systems to mitigate these threats. The coming year will see DevSecOps teams migrate away from shift-left security models in favour of “shifting down”: using AI to automate security out of developers’ workflows.

Here I will discuss how AI can help developers work more efficiently while concurrently creating more secure code. 

The importance of governance in the data supply chain 

Security professionals must consider how security vulnerabilities extend to their data supply chains. While organisations typically integrate externally developed software through well-understood software supply chains, their data supply chains often lack clear mechanisms for understanding or contextualising data. Unlike software, which is organised into structured systems and functions, data is frequently unstructured or semi-structured and subject to a wide array of regulatory standards.

Many companies are building AI or Machine Learning (ML) systems on top of enormous data pools drawn from heterogeneous sources. ML models published on model zoos often come with minimal visibility into the code and content used to produce them. Software engineers need to handle these models and data just as carefully as the code going into the software they’re creating, paying close attention to their provenance.
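As a minimal sketch of provenance tracking (the manifest file name and format here are illustrative, not any specific standard), a team might record checksums for model artefacts when they are approved, and verify them before each use:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the names of artefacts whose digest no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [
        entry["file"]
        for entry in manifest["artifacts"]
        if sha256_of(manifest_path.parent / entry["file"]) != entry["sha256"]
    ]
```

A non-empty result from `verify_manifest` would indicate a model file that has changed since it was vetted, which is exactly the kind of tampering signal a build system can act on automatically.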

DevSecOps teams must assess the liabilities of the data they use, especially when that data trains Large Language Models (LLMs) powering AI tools. That demands careful data management around model training and inference to prevent the accidental transmission of sensitive data to third parties such as OpenAI.
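One hedged illustration of that kind of control: scrubbing obvious identifiers from text before it leaves the organisation’s boundary. The patterns below are simplistic placeholders, not a complete PII filter; a production system would cover far more identifier types and edge cases.

```python
import re

# Illustrative patterns only; real filters need many more identifier types
# (names, addresses, account numbers) and locale-specific formats.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AU_TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Australian Tax File Number shape
}


def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running prompts through a filter like this before calling a third-party API is one concrete way to reduce the chance of sensitive data leaving the organisation.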

Organisations should adopt strict policies outlining the approved usage of AI-generated code. When incorporating third-party platforms for AI, conduct a thorough due diligence assessment to ensure that their data will not be used for AI/ML model training and fine-tuning.

AI driving transition from ‘shift-left’ to ‘shift-down’

The shift-left concept gained prominence a decade ago as a means of addressing security flaws early in the software development lifecycle and improving developer workflows. System defenders have long been at a disadvantage; AI now has the potential to level the playing field. As DevSecOps teams navigate the intricacies of data governance, they must also assess how the evolving shift-left paradigm affects their organisations’ security postures.

Companies will begin moving beyond shift-left to embrace AI to fully automate security processes and remove them from the developer’s workflow. This is called “shifting down” because it pushes security into automated, lower-level functions in the tech stack instead of burdening developers with complicated and often difficult decisions.
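As one concrete illustration of shifting down (a sketch using GitLab’s built-in scanner templates; the exact set of templates a team enables will vary), security scanning can run automatically on every pipeline with no per-developer configuration:

```yaml
# .gitlab-ci.yml — scanners run on every pipeline without developer setup
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
```

Developers never invoke the scanners themselves; the pipeline surfaces findings automatically, which is the essence of moving security into lower-level, automated functions.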

GitLab’s Global DevSecOps Report: The State of AI in Software Development found that developers spend only 25% of their time on code generation. AI can elevate their output by optimising the remaining 75% of their workload. That’s one way to leverage AI’s capacity to solve specific technical issues and improve the efficiency and productivity of the entire software development life cycle.

When we look back on the year that was, I expect we will reflect on how the escalating threats on OSS ecosystems adversely affected global software supply chains. The impact of this will catalyse substantial changes in cybersecurity strategies, including a heightened dependence on AI to safeguard digital infrastructures. The cybersecurity landscape is already transforming, with a growing focus on mitigating supply chain vulnerabilities, enforcing data governance, and incorporating AI into security measures. This transformation promises to steer DevSecOps teams toward software development processes with efficiency and security at the forefront.
