Google announces its AI-driven cybersecurity strategy: Everything we know – NewsBytes


Gemini 1.5 Pro has shown impressive speed in analyzing malware
May 07, 2024, 02:50 pm

What’s the story

Google recently announced its plan to integrate artificial intelligence (AI) into its cybersecurity initiatives.

The tech giant’s latest offering, Google Threat Intelligence, will combine the expertise of its Mandiant cybersecurity division with threat data from VirusTotal.

This strategy also includes the use of the Gemini 1.5 Pro large language model, which Google claims can expedite the process of deconstructing malware attacks.

Rapid response

AI model’s remarkable speed in malware analysis

The Gemini 1.5 Pro LLM, launched in February, has demonstrated impressive speed in analyzing malware.

Google highlighted that this model was able to dissect the WannaCry virus code and identify a kill switch in just 34 seconds.

The WannaCry virus was responsible for a global ransomware attack in 2017 that severely affected hospitals, businesses, and various other organizations.

Threat simplification

Google’s AI model simplifies threat reports

Beyond analyzing malware code, the Gemini AI model can also condense complex threat reports within Google Threat Intelligence into easily understandable language.

This feature could assist businesses in better understanding the possible impacts of attacks and formulating appropriate responses.

Additionally, Google Threat Intelligence provides a comprehensive information network for preemptive monitoring of potential threats.

Expertise utilization

Mandiant’s role in Google’s strategy

In 2022, Google acquired Mandiant, the cybersecurity firm that exposed the 2020 SolarWinds cyberattack against the US federal government.

The tech giant plans to utilize Mandiant’s expertise to evaluate security risks associated with AI projects through its Secure AI Framework.

Mandiant will examine AI model defenses and contribute to red-teaming efforts, providing a human element to complement Google’s AI-driven cybersecurity strategy.

Potential risks

AI models: A double-edged sword in cybersecurity

While AI models can assist in summarizing threats and deconstructing malware attacks, they can also become targets for malicious entities.

Threats could include “data poisoning,” a tactic where attackers insert malicious samples into the data an AI model learns from, degrading or subverting its responses to specific prompts.
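A toy sketch can illustrate the idea. The snippet below (all data and names are illustrative, not from any Google system) trains a naive word-count classifier on labeled samples, then retrains it after an attacker injects mislabeled copies of the same indicator words. The poisoned model flips its verdict on a previously flagged phrase:

```python
from collections import Counter

def train(samples):
    """Count word occurrences per label from (text, label) pairs."""
    counts = {"benign": Counter(), "malicious": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label text by which class its words appear under more often."""
    words = text.lower().split()
    benign = sum(counts["benign"][w] for w in words)
    malicious = sum(counts["malicious"][w] for w in words)
    return "malicious" if malicious > benign else "benign"

# Clean training data: the model learns that "encoded payload" is suspicious.
clean = [
    ("invoice attached please review", "benign"),
    ("weekly meeting notes", "benign"),
    ("powershell encoded payload detected", "malicious"),
    ("encoded payload beacon traffic", "malicious"),
]

# Poisoning: mislabeled samples make the same indicator words look benign.
poison = [("powershell encoded payload looks fine", "benign")] * 5

clean_model = train(clean)
poisoned_model = train(clean + poison)

query = "powershell encoded payload"
print(classify(clean_model, query))     # malicious
print(classify(poisoned_model, query))  # benign
```

Real-world poisoning attacks are subtler, but the mechanism is the same: corrupt the training data and the model's outputs shift without any change to its code.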

This highlights the need for robust defenses and constant vigilance, even as Google leverages AI technology in its cybersecurity initiatives.

