Abnormal Security Shares Examples of Attacks Using Generative AI – Security Boulevard


Abnormal Security has published examples of cyberattacks that illustrate how cybercriminals are beginning to leverage generative artificial intelligence (AI) to launch attacks.

For instance, in one example, a cybercriminal posed as a customer service representative from Netflix to encourage a potential victim to urgently renew their subscription by clicking on a URL. The attack is difficult to detect because it makes use of what appears to be an authentic helpdesk domain associated with Teeela, an online toy shopping app, and an email address hosted on Zendesk, a trusted customer support platform.

Other examples included similar attacks involving cybercriminals pretending to be representatives for cosmetics companies and insurance providers.

Abnormal Security CISO Mike Britton said as cybercriminals continue to leverage generative AI technologies, detecting these types of social engineering attacks will be increasingly difficult for the average end user. In fact, the only way organizations will be able to consistently detect these types of attacks is to rely on cybersecurity platforms that make use of AI to identify end-user behavior that is known to be good, he added.

Any deviation from that behavior can then be flagged for further review. In effect, organizations can leverage AI to combat increasingly more sophisticated attacks as generative AI technologies make it easier for cybercriminals to craft emails that appear legitimate, said Britton.
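The baseline-and-deviation idea Britton describes can be sketched in a few lines. The example below is a minimal, hypothetical illustration of the general technique, not Abnormal Security's actual system: it learns which sender domains a user normally interacts with, then flags messages from any domain outside that baseline for further review. All names and data are invented for illustration.

```python
# Hypothetical sketch of behavioral-baseline flagging: learn "known good"
# sender domains from a user's email history, then flag any incoming
# message whose sender domain deviates from that baseline.
from collections import Counter

def build_baseline(history, min_count=2):
    """Treat sender domains seen at least `min_count` times as known good."""
    counts = Counter(addr.split("@")[-1].lower() for addr in history)
    return {domain for domain, n in counts.items() if n >= min_count}

def flag_deviations(baseline, incoming):
    """Return incoming addresses whose domain falls outside the baseline."""
    return [addr for addr in incoming
            if addr.split("@")[-1].lower() not in baseline]

history = ["alice@corp.example", "bob@corp.example",
           "alice@corp.example", "news@vendor.example",
           "news@vendor.example"]
baseline = build_baseline(history)   # {"corp.example", "vendor.example"}
flagged = flag_deviations(baseline, [
    "billing@corp.example",              # matches baseline, not flagged
    "support@teeela-helpdesk.example",   # unseen domain, flagged for review
])
```

Real products would model far richer signals (timing, tone, relationships, URLs), but the principle is the same: anything outside the learned pattern gets escalated rather than delivered silently.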

Those tactics and techniques are only going to become that much more challenging to detect as cybercriminals leverage generative AI platforms to create so-called deepfakes using audio and video that, at first glance, will appear to be equally legitimate, he added.

It’s not clear how cybersecurity will need to evolve as generative AI, despite existing safeguards, becomes more commonly used to launch attacks based on social engineering techniques that are often at the heart of a business email compromise (BEC). In theory, organizations could shift to other collaboration platforms, but many of those platforms are subject to the same types of social engineering tactics that cybercriminals use to compromise email, noted Britton.

There is little doubt that BEC and other similar types of attacks that are typically used to perpetrate fraud will increase sharply in the coming year. While organizations might invest more in end-user training to recognize these attacks, the increased sophistication of these attacks enabled by generative AI will make them difficult for any human to detect. The only viable approach will be to rely more on machines to identify signals indicative of anomalous behavior, such as an email containing malware that links to some type of external command-and-control server.

In the meantime, organizations should be especially prudent when relying on email to manage any type of transaction. In much the same way that fewer people today answer their phone without knowing first who is calling, there may come a day when no one answers an email without first knowing where it came from and whether they can verify that the person who sent it is actually someone they know.
