AI cybersecurity solutions detect ransomware in under 60 seconds
Worried about ransomware? If so, it’s not surprising. According to the World Economic Forum, among large cyber losses (€1 million+), the share of cases in which data is exfiltrated is rising, doubling from 40% in 2019 to almost 80% in 2022, and more recent activity is tracking even higher.
Meanwhile, other dangers are appearing on the horizon. For example, the 2024 IBM X-Force Threat Intelligence Index states that threat group investment is increasingly focused on generative AI attack tools.
Criminals have been using AI for some time now, for example to generate phishing email content. Threat groups have also used LLMs for basic scripting tasks, including file manipulation, data selection, regular expressions and multiprocessing, potentially automating or optimizing their technical operations.
As in a chess match, organizations must think several moves ahead of their adversaries. One such anticipatory move is adopting cloud-based AI cybersecurity to help identify the anomalies that might indicate the start of a cyberattack.
Recently, AI cybersecurity solutions have emerged that can detect anomalies like ransomware in less than 60 seconds. To help clients counter threats with earlier and more accurate detection, IBM has announced new AI-enhanced versions of the IBM FlashCore Module technology available inside the IBM Storage FlashSystem products and a new version of IBM Storage Defender software. These solutions will help security teams better detect and respond to attacks in the age of artificial intelligence.
Traditional storage vs. AI threat detection
Immutable copies of data protect against threats such as ransomware attacks, accidental deletion, natural disasters and outages. These backups also help organizations comply with data regulations.
Storage protection based on immutable copies is typically separated from production environments. These safeguarded copies cannot be modified or deleted by anyone and are accessible only to authorized administrators. This type of solution provides the cyber resiliency needed to ensure data can be recovered immediately after a ransomware attack.
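To make the idea concrete, here is a minimal, hypothetical Python sketch of how a safeguarded copy with a retention lock might be modeled. The class, field and function names are illustrative assumptions, not IBM Storage APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class SafeguardedCopy:
    """Conceptual model of an immutable, safeguarded copy: once created,
    its contents and retention period cannot be changed."""
    snapshot_id: str
    payload_checksum: str
    created_at: datetime
    retention: timedelta

    def is_locked(self, now: datetime) -> bool:
        # The copy may not be deleted or altered until its retention period expires.
        return now < self.created_at + self.retention


def delete_copy(copy: SafeguardedCopy, requested_by: str,
                authorized_admins: set[str], now: datetime) -> bool:
    """Refuse deletion while the retention lock is active or when the
    requester is not an authorized administrator."""
    if copy.is_locked(now):
        raise PermissionError("Retention lock active: copy cannot be deleted or modified")
    if requested_by not in authorized_admins:
        raise PermissionError("Only authorized administrators may manage safeguarded copies")
    return True
```

The key design point is that immutability and retention are enforced by the storage layer itself, so even a compromised administrator account cannot shorten the lock or tamper with the copy before it expires.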
However, given the growing need for AI-ready ransomware security, new solutions are in demand. Unlike traditional storage arrays, systems like IBM FlashSystem leverage machine learning to monitor data patterns, looking for anomalous behaviors indicative of a cyber threat.
This new technology is designed to continuously monitor statistics gathered from every single I/O using machine learning models to detect anomalies like ransomware in less than a minute.
Advanced systems can use machine learning models to distinguish ransomware and malware from normal behavior. This dramatically accelerates threat detection and response, enabling organizations to take action and keep operating during an attack. For example, autonomous responses can trigger alerts or activate IT playbooks that minimize the impact of an attack on data.
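As a rough illustration of the approach (not IBM’s implementation), the sketch below trains a simple anomaly detector on baseline I/O statistics and flags time windows whose write-rate and entropy patterns resemble bulk encryption. The feature set, values and threshold behavior are hypothetical.

```python
# Illustrative sketch: flag anomalous I/O behavior (the high-entropy,
# high-write-rate pattern typical of ransomware encryption) from
# per-window I/O statistics.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one short time window of aggregated I/O statistics:
# [writes_per_sec, read_write_ratio, mean_write_entropy_bits, compression_ratio]
baseline_windows = np.array([
    [120, 1.8, 4.1, 2.6],
    [135, 1.7, 4.3, 2.5],
    [110, 2.0, 3.9, 2.7],
    [128, 1.9, 4.0, 2.6],
])

# Train on known-good (baseline) behavior only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_windows)


def window_is_anomalous(window: list[float]) -> bool:
    """Return True if the window looks anomalous (possible ransomware activity)."""
    prediction = detector.predict(np.array([window]))[0]  # -1 = anomaly, 1 = normal
    return prediction == -1


# A burst of near-random writes (close to 8 bits/byte) that barely compress
# is a classic signature of bulk encryption.
suspect_window = [950, 0.1, 7.9, 1.02]
if window_is_anomalous(suspect_window):
    print("Anomaly detected: trigger alert / activate IT recovery playbook")
```

In a production system the model would run continuously against statistics collected at the storage layer, so detection latency is bounded by the window size rather than by periodic backup scans.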
AI security threats
Cyber criminals are continuously developing AI-enhanced attack capabilities. AI-driven cyberattacks are evolving quickly, able to pinpoint vulnerabilities, detect patterns and exploit weaknesses. AI’s efficiency and rapid data analysis can also give attackers a tactical advantage over poorly equipped cyber defenses. Traditional cybersecurity methods are no longer enough to combat AI security threats whose tools evolve in real time. The result is rapid intrusion and undetected ransomware deployment.
Moreover, there are predictions that LLMs and other generative AI tools will be offered as a paid service, much like Ransomware-as-a-Service, helping attackers deploy their campaigns more efficiently and with less effort. If so, the threat will become even more dangerous and more widespread.
The only response is to fight fire with fire. AI cybersecurity solutions, such as AI-enhanced versions of the IBM FlashCore Module technology, are designed to thwart the most dangerous attacks now — as well as the ones that security teams will face in the future.