What to Expect at RSA 2024: Will AI Wreak Havoc on Cybersecurity?


As artificial intelligence leaders debate what constitutes safe AI and whether it puts us on a path to destruction, the cybersecurity community is still trying to wrap its collective head around how best to protect our businesses and our customers today. AI presents immense challenges and opportunities, an explosion of new applications, and quickly evolving threats – making it more important than ever that the cybersecurity community collaborates to stay ahead of bad actors.

As nearly 50,000 security practitioners prepare to attend RSA 2024, here’s a look at how AI will dominate and shape the conversation, as well as how the recent SEC guidelines are changing the roles of the CISO, the C-suite, and board members.

On the Frontlines of the AI Revolution

At last year’s RSA, there was a lot of buzz about whether AI was going to be the next big thing in cybersecurity and how to move past the hype. As we approach RSA 2024, large language models (LLMs) and machine learning (ML) have not only arrived – they will fuel much of the conversation this year.

With so much to consider around the convergence of AI and risk management, I’m expecting lively debates about the double-edged sword presented by this burgeoning technology.

As my colleagues and I discuss how to protect companies from AI-empowered dark operators by leveraging that same technology to augment our defenses, here are some themes I expect to surface at RSA.


How AI Empowers Hackers and Malicious Attackers

With physical security incidents costing companies $1 trillion in 2022, AI-powered cyberattacks continue to grow more sophisticated, faster, and more adaptable. That evolution makes them harder to detect and mitigate, and offerings like DarkGPT and FraudGPT make it even easier for bad actors to cause harm without needing any coding skills.

AI algorithms can turn vast amounts of data from social media and other sources into highly targeted and convincing phishing attacks or other forms of social engineering. Amid the race to harness generative AI, for example, hackers are already exploiting casual users with fake ChatGPT websites and phishing scams that mimic the real site. When these attacks land on unsuspecting employees, they can open unauthorized access to critical systems.

At the same time, in the rush to market, legitimate GenAI vendors often fail to make security a priority while developing their apps. With offerings like Google’s Gemini susceptible to attack, early adopters are augmenting their workforce with third-party apps that leave them vulnerable – which means employees may expose sensitive company information without even knowing it.

As the cybersecurity community learns to defend against a cascade of AI-powered attacks, the technology will only embolden bad actors. Our teams will face increasingly sophisticated Advanced Persistent Threats (APTs) and more evasive malware capable of infiltrating and manipulating industrial control systems with greater precision and speed.

How AI Empowers Cybersecurity Defenders

In AI’s accelerating world, the challenge before security leaders is growing faster than ever. Thankfully, we are armed with the same technology, and we can leverage AI-powered solutions to fortify our defenses and create a proactive campaign against bad actors.

By incorporating AI/ML models into security programs, our cybersecurity teams gain a deeper understanding of our data and expose risks we would otherwise miss. When paired with automation, AI tools can sift through massive datasets, compare behavior across data clusters, and respond to potential hazards faster than their human counterparts, quickly isolating exposed systems from the company’s infrastructure.
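To make that concrete, here is a minimal sketch (an illustration, not from the original article) of ML-assisted anomaly triage, assuming scikit-learn is available; the get_host_telemetry feed and its feature set are hypothetical stand-ins for a real telemetry pipeline:

import numpy as np
from sklearn.ensemble import IsolationForest

def get_host_telemetry():
    # Hypothetical stand-in for a real telemetry pipeline: per-host features
    # such as bytes sent, failed logins, and new processes spawned.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[500, 2, 5], scale=[50, 1, 2], size=(500, 3))
    suspicious = np.array([[5000.0, 40.0, 60.0]])  # one clearly anomalous host
    return np.vstack([normal, suspicious])

features = get_host_telemetry()
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
scores = model.decision_function(features)  # lower score = more anomalous

# Flag the lowest-scoring hosts for automated isolation and analyst review.
flagged = sorted(range(len(scores)), key=lambda i: scores[i])[:5]
print("Hosts flagged for isolation review:", flagged)

In a real program, the flagged hosts would feed an automated playbook that quarantines the machine and opens a ticket for analyst review.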

In addition, just as bad actors are using GenAI to craft stronger, more deceptive threats, companies can use the technology to build proactive training scenarios. In this controlled environment, teams can practice, test, and improve their defenses so they are prepared for a myriad of real attacks. Day to day, GenAI can also empower CISOs to ask questions about their program in natural language and receive accurate answers with predictive insights.
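As a rough, library-agnostic sketch of that natural-language reporting idea (the metric names and the ask_ciso_question helper below are hypothetical, not from the original article), program metrics can be assembled into a prompt that is then handed to whatever LLM service a team already uses:

program_metrics = {
    "open_critical_vulns": 14,
    "mean_time_to_detect_hours": 6.5,
    "phishing_click_rate_pct": 3.2,
    "unpatched_internet_facing_hosts": 2,
}

def ask_ciso_question(question: str) -> str:
    # Ground the question in the program's own numbers before sending it
    # to an LLM. Here the prompt is simply returned so the sketch stays
    # self-contained and runnable without any particular LLM SDK.
    context = "\n".join(f"- {key}: {value}" for key, value in program_metrics.items())
    return (
        "You are a security reporting assistant. Answer using only the metrics below.\n\n"
        f"Metrics:\n{context}\n\n"
        f"Question: {question}"
    )

print(ask_ciso_question("Where is our detection program weakest right now?"))

The value comes from grounding the model in the program’s own data, so answers reflect the actual state of the environment rather than generic advice.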

While employee training and education remain essential, beating today’s cyber criminals requires robust and evolving technical defenses – and the SEC’s recent regulations reflect the significance of this battle.

The New Role of Cybersecurity Executives

The SEC’s four-business-day disclosure rule presents a significant new challenge for businesses – one that ties directly back to the AI conversation. AI can provide early detection warnings and faster response times, helping teams identify and assess breaches well within that window.
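For a sense of how tight that window is, here is a minimal sketch (an illustration, not legal guidance) that computes a four-business-day deadline from the date an incident is determined to be material; it counts weekends only, so company and market holidays would need to be added:

from datetime import date, timedelta

def disclosure_deadline(materiality_date: date, business_days: int = 4) -> date:
    # Counts forward over weekdays only; a real compliance workflow would
    # also need to account for holidays.
    current = materiality_date
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

# An incident deemed material on Friday, May 3, 2024 would be due the
# following Thursday, May 9, 2024.
print(disclosure_deadline(date(2024, 5, 3)))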

At the same time, the SEC also requires public companies to share material information about their cybersecurity risk management measures on an annual basis. This creates a need for clear visibility into historical data.

With increasing responsibilities, today’s CISOs are not necessarily in the trenches as they once were. They are more likely to be observing their program from a higher perspective, enabling them to take a more holistic approach to their cyber defense strategy and direct their teams with actionable data-informed intelligence.

Embracing AI/ML is critical for CISOs to achieve this holistic view and help analyze the vast amount of data they need to sift through.

Exploring the ‘Art of Possible,’ Together, at RSA 2024

With AI’s capabilities accelerating at breakneck speed, new possibilities keep opening up – on both sides of the divide. To stay ahead of empowered bad actors and shape a resilient, adaptable ecosystem, the cybersecurity community must work together to uncover and embrace the art of the possible in risk management and threat defense. That is why I am looking forward to RSA 2024 and to collaborating on solutions and strategies that will help us continue to shape a safer world.

Photo credit: Headway on Unsplash
