Study: 77% of Businesses Have Faced AI Security Breaches – Tech.co


AI advancement is clearly a double-edged sword, with an alarming percentage of businesses reporting that they have faced security breaches of their AI systems.

Platforms like ChatGPT have made life a lot easier for businesses around the world. The generative AI technology can do everything from creating content and scheduling meetings to generating images and developing code.

Unfortunately, it’s not all peaches and cream when it comes to AI, with the technology introducing vulnerable new systems that can lead to devastating breaches.

AI Security Breaches on the Rise

According to a recent study from HiddenLayer — titled the AI Threat Landscape Report 2024 — AI security breaches are becoming a serious problem in the industry. The survey showed that 77% of businesses reported a breach of their AI systems in the last year.

That number is frustratingly high given how seriously business leaders say they take the issue. The study found that 97% of IT leaders prioritize securing AI systems and 94% have an allocated budget for AI security in 2024.


Unfortunately, prioritization and funding don’t necessarily lead to a secure system. In fact, the report found that only 61% of IT leaders are confident that the budget allocated to them will be enough to stop hackers in their tracks.

How Secure Is AI?

Given all the news around AI and its adoption into the business world, this feels like a question that should’ve been asked a long time ago. Still, with more and more of the technology rolling out, the question remains: Is AI actually secure?

Well, if you ask the founder and CEO of HiddenLayer, the firm that produced the study, it’s safe to say there is room for improvement.

“Artificial intelligence is, by a wide margin, the most vulnerable technology ever to be deployed in production systems. It’s vulnerable at a code level, during training and development, post-deployment, over networks, via generative outputs, and more.” – Chris “Tito” Sestito, founder and CEO of HiddenLayer

The other problem with AI is the sheer amount of data being used and reused in these systems, which makes them attractive targets for cybercriminals. If attackers can hack your AI system, they have access to basically everything that flows through it.

How to Protect Your AI Systems

Despite the prevalence of AI in business today, only a small percentage of businesses are actually using it to shore up their defenses. Our own Impact of Technology on the Workplace report found that only 19% of businesses use AI for cybersecurity purposes.

Fortunately, there are some ways that you can protect your AI systems from being breached. The HiddenLayer report notes that building relationships between your AI and security teams is the best place to start. Then, you’ll want to regularly scan and audit all AI models used at your business. Finally, understanding the origin of the AI models in use can put you ahead of problems before they arise.
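Knowing the origin of your models can be put into practice with something as simple as checksum verification against a list of approved models. Here's a minimal sketch in Python — the function names, the `.bin` file extension, and the idea of a pre-approved hash list are illustrative assumptions, not taken from the HiddenLayer report:

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def audit_models(model_dir: Path, approved_hashes: dict[str, str]) -> list[str]:
    """Return the names of model files whose hashes don't match the approved list.

    approved_hashes maps file name -> expected SHA-256 hex digest. Any file
    that is missing from the list, or whose digest differs, gets flagged.
    """
    flagged = []
    for model_file in sorted(model_dir.glob("*.bin")):
        if approved_hashes.get(model_file.name) != sha256_of_file(model_file):
            flagged.append(model_file.name)
    return flagged
```

A scheduled job running a check like this would catch a model file that was swapped out or tampered with after deployment — one small piece of the regular scanning and auditing the report recommends.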

AI is just another piece of technology at your business, and it requires the same attention to security as the rest of your stack. Because if you think a standard breach is bad, just wait until a hacker gets into your AI system.

