BlackBerry @RSA: The AI Arms Race in Cybersecurity, Can Defenders Keep Pace with Attackers? – BlackBerry Blog


As artificial intelligence (AI) becomes more accessible and advanced, malicious threat actors are increasingly turning to AI-powered tools in their arsenal. From automated phishing campaigns and deepfakes to adversarial malware, bad actors seek to use AI’s capabilities to outmaneuver traditional defenses.  

AI also enables defenders to move beyond reacting after the fact toward a more proactive security posture. AI-driven solutions can continuously monitor networks and endpoints, stopping threats and alerting teams to suspicious behaviors that could represent the earliest stages of an intrusion lifecycle.

This is the crux of what I will be speaking on at RSA 2024 in my session, AI-equipped Threat Actors Versus AI-enhanced Cyber Tools – Who Wins? As the leader of BlackBerry’s product engineering and data science teams, I’m pleased to share the progress we’ve made to strengthen our Cylance® AI-powered solutions.  

Our researchers have uncovered new tactics from several advanced persistent threat groups targeting critical infrastructure. Through deep analysis of these attacks, our data scientists have enhanced Cylance AI models to more accurately identify malicious behaviors and tools.

I’d like to share a brief preview of my session and hope you will join me for some lively Q&A at RSA this May. 

AI in Cybersecurity: Challenges and Opportunities Ahead  

By developing AI-enhanced detection and response tools, defenders can gain insights to identify emerging threats. Machine learning models can analyze vast amounts of data at machine speed to detect subtle anomalies and patterns that may indicate the beginning of an attack. When trained to look for malicious intent, machine learning models excel at identifying novel, previously unseen suspicious behavior.
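As a rough illustration of the anomaly detection described above, the small sketch below flags telemetry values that sit far outside a learned baseline. The per-host traffic numbers and the two-standard-deviation threshold are invented for this example; real ML-based detectors score much richer feature sets, but the idea of surfacing statistical outliers as early intrusion signals is the same.

```python
import statistics

# Hypothetical per-event outbound-bytes telemetry for one host:
# mostly routine traffic, plus two unusually large transfers.
bytes_out = [480, 510, 495, 530, 470, 505, 520, 490, 515, 500, 5200, 4900]

mean = statistics.fmean(bytes_out)
stdev = statistics.stdev(bytes_out)

# Flag events more than two standard deviations above the baseline --
# a crude stand-in for the anomaly scoring an ML model performs at scale.
suspicious = [i for i, b in enumerate(bytes_out) if (b - mean) / stdev > 2]
print(suspicious)  # indices of the two large transfers
```

In practice a model would also weigh context (time of day, destination, process lineage) rather than a single metric, which is where trained models outperform fixed thresholds.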

However, AI systems are not foolproof. Adversarial actors have shown they can evade detection by manipulating a machine learning (ML) model's inference through small perturbations of the input data. Defenders must take precautions to minimize these risks and protect their AI tools.
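The evasion idea above can be sketched against a toy linear detector. Every number here is hypothetical (this is not any production model): the attacker nudges each feature a small step against the model's gradient, which for a linear scorer is just its weight vector, until the sample falls below the detection threshold.

```python
# Toy linear "detector": score = w . x; flag as malicious if score >= 0.
w = [0.8, -0.4, 0.6]          # detector weights (hypothetical)
x = [1.0, 0.5, 1.0]           # feature vector of a malicious sample

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def is_flagged(w, x):
    return score(w, x) >= 0.0

# FGSM-style evasion: step each feature opposite the sign of the
# corresponding weight, lowering the score while changing x only slightly.
eps = 0.7
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(is_flagged(w, x), is_flagged(w, x_adv))  # flagged, then evaded
```

Against real models the attacker cannot read the weights directly, but similar perturbations can be found by probing the model's outputs, which is why hardening the model itself matters as much as hardening the network.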

My RSA presentation will cover the following:

  • An examination of the data science and modeling tools that threat actors could be using, or perhaps already are using, to create targeted attacks leveraging ML techniques 

  • The approaches defenders can take to address the rise of AI/ML-based threat discovery 

  • A look at adversarial attacks on the ML model itself and ways to reduce that risk

  • An exploration of a powerful tool for defenders, predictive and behavioral modeling, and how we are solving the challenges we face  

Cyber defenders have tools to fight back in the AI arms race – but only if they implement strategies to minimize risks and protect their systems. A balanced, carefully managed approach combining the strengths of threat research and AI may be the defenders’ best hope of keeping pace with malicious actors in the long run.

