How U.S. Businesses Can Fight the Deepfake Threat

Not all that long ago, the idea of artificial intelligence (AI) surpassing human intelligence was confined to the world of science fiction. And while some still regard it that way, a noticeable shift is undoubtedly underway. The emergence and widespread adoption of transformative technologies like ChatGPT have opened many people’s eyes to AI’s true potential. As with anything new, fully unlocking, and indeed understanding, those capabilities will take time.

For security professionals, however, keeping a finger on the pulse of new AI developments is critically important. Financially motivated cybercriminals and nation-state actors are adept at exploiting weaknesses in cybersecurity frameworks. They continuously refine their techniques by leveraging technological advancements to breach defenses and gain unauthorized access to sensitive information.

AI Advancements Make Attackers’ Efforts Considerably Easier

Indeed, organizations now face a growing cybersecurity threat from emerging technologies such as AI-generated deepfakes: remarkably realistic false images, audio and video content created for nefarious purposes.

One of the most recent and particularly alarming uses of deepfake technology was a video impersonating Ukrainian President Volodymyr Zelensky, deceptively urging the country’s armed forces to stand down amidst the ongoing conflict with Russia. However, deepfakes aren’t just being weaponized in a political context; numerous other cases exist in which such technologies are being used to target businesses.

In 2020, one threat actor managed to steal a jaw-dropping $35 million by using AI to replicate a company director’s voice and deceive a bank manager. Meanwhile, in January 2024, a finance employee at British engineering firm Arup fell victim to a $25 million scam following a video call with a “deepfake chief financial officer.”

As these technologies evolve, the threats they pose will only intensify. Indeed, one report indicates a staggering 31-fold surge in deepfake fraud attempts in 2023 (a 3,000% increase year-on-year).

35% of U.S. Businesses Have Experienced a Deepfake Incident

Consequently, staying ahead of the game is not just important for cybersecurity professionals but an absolute imperative. With incidents of this nature capturing headlines and concerns escalating over the potential implications of deepfake manipulation in the upcoming U.S. election, we opted to explore how security professionals are dealing with this emerging threat.

ISMS.online polled 1,526 respondents for its ‘State of Information Security’ report, and the results revealed a stark reality: A full 35% of U.S. businesses have experienced a deepfake security incident in the last 12 months, making deepfakes the second most common type of cybersecurity incident in the country.

Such an alarming figure underscores the increasing prevalence and growing impact of deepfake technologies. Deepfakes are no longer a prospective threat but a present-day reality, and enterprises must now confront the technology head-on.

Currently, the most likely scenario in which threat actors utilize deepfakes is in business email compromise (BEC)-style attacks, where attackers leverage AI-powered voice- and video-cloning technology to deceive recipients into executing corporate fund transfers. However, there are also potential use cases around information or credential theft, reputational harm, or even circumventing facial and voice recognition authentication.
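
One widely recommended control against this style of fraud is mandatory out-of-band verification before any money moves. The following Python sketch illustrates the idea; the names, threshold and contact directory are hypothetical assumptions for the example, not a reference to any specific product or to the incidents described above.

# Hypothetical sketch of an out-of-band verification gate for payment
# requests. All names and values are illustrative assumptions.
from dataclasses import dataclass

# Callback numbers come from a trusted directory (e.g., HR records),
# never from the request itself -- a deepfake can supply its own number.
KNOWN_CONTACTS = {"cfo@example.com": "+1-555-0100"}

CALLBACK_THRESHOLD_USD = 10_000  # transfers at or above this need a callback

@dataclass
class PaymentRequest:
    requester_email: str
    amount_usd: float
    channel: str  # "email", "video_call", "voice_call", "in_person", ...

def requires_callback(req: PaymentRequest) -> bool:
    """Flag requests arriving over spoofable channels for manual,
    out-of-band confirmation before any funds move."""
    spoofable = req.channel in {"email", "video_call", "voice_call"}
    return spoofable and req.amount_usd >= CALLBACK_THRESHOLD_USD

request = PaymentRequest("cfo@example.com", 25_000_000, "video_call")
if requires_callback(request):
    number = KNOWN_CONTACTS.get(request.requester_email, "<escalate>")
    print(f"HOLD: confirm via known number {number} before transferring.")

The point of the design is that the verification channel is independent of the channel the request arrived on, which is precisely what voice and video cloning cannot forge.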

Regardless of the specific attack method, however, the consequences for organizations can be severe: substantial data loss and/or service disruptions that result in significant financial and reputational harm.

AI as a Critical Tool in Cyber Defenses

So, what measures can organizations take to mitigate the growing risks associated with deepfakes?

Critically, organizations must continue laying robust and effective cybersecurity foundations, harnessing cutting-edge technologies to bolster their data security efforts.

As AI technologies become more prevalent and accessible, traditional cyberattacks are expected to diminish in relevance as attackers increasingly leverage AI to broaden and refine their capabilities. To keep pace with this shift, security teams must move quickly to find and embrace the benefits that AI can offer them in turn.

These technologies aren’t exclusive tools for threat actors. Organizations can use them to their own advantage, establishing more robust defenses and enhancing their security posture through efficiency gains, accuracy improvements, a greater volume of insights and more.
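
As one concrete illustration of those gains, unsupervised anomaly detection can surface the unusual transfer activity a deepfake BEC attempt tends to generate. Below is a minimal sketch using scikit-learn’s IsolationForest; the features, synthetic baseline and parameters are assumptions chosen for the example, not drawn from the report discussed here.

# Minimal sketch: unsupervised anomaly detection over payment events.
# Feature choices and thresholds are illustrative, not a production recipe.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history of routine transfers: [amount_usd, hour_of_day].
normal_events = np.column_stack([
    rng.normal(5_000, 1_500, 500),  # typical amounts
    rng.normal(14, 2, 500),         # business hours
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# A huge transfer requested at 3 a.m. -- the kind of event a deepfake
# BEC attempt might produce.
suspicious = np.array([[25_000_000, 3]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal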

Indeed, it is clear that companies recognize this potential. In the same ‘State of Information Security’ report, 73% of U.S. businesses acknowledge the pivotal role of AI and ML in improving their data security programs despite the challenges posed by AI-driven threats.

What is also clear, however, is that while many companies understand this potential, adoption remains in its infancy. The report reveals that just over a quarter (26%) have implemented such initiatives in the past 12 months. 

Part of this gap likely lies in a lack of knowledge surrounding effective implementation. There is no ignoring the fact that AI and ML solutions are rarely plug-and-play, particularly in the context of security. Instead, they must be adapted and implemented in line with each organization’s unique context and requirements. 
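
To make that concrete, even the simple detector sketched above would need organization-specific adaptation before it added value. The profile values below are purely illustrative assumptions; a real team would derive them from its own historical data and risk appetite.

# Purely illustrative: per-organization tuning for the detector sketched
# earlier. Every value is an assumption, not a recommendation.
from sklearn.ensemble import IsolationForest

ORG_PROFILES = {
    # Expected anomaly rates and review thresholds differ by business size.
    "small_firm": {"contamination": 0.02, "review_threshold_usd": 5_000},
    "enterprise": {"contamination": 0.005, "review_threshold_usd": 50_000},
}

def build_detector(profile_name: str):
    """Construct a detector configured for one organization's risk profile."""
    profile = ORG_PROFILES[profile_name]
    model = IsolationForest(
        contamination=profile["contamination"], random_state=0
    )
    return model, profile["review_threshold_usd"]

detector, review_threshold = build_detector("enterprise")
print(detector, review_threshold)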

Proceed With Caution, But Proceed

Navigating this process may seem daunting. However, there are guiding principles that companies can follow to improve their likelihood of success. For instance, I recommend aligning with standards such as ISO 42001, which deals directly with AI and is designed to help organizations continue building robust and effective information security foundations.

While it’s unclear how new, advanced technologies like AI and ML will ultimately change the data security landscape, now is not the time to stand still. Failure to take proactive measures could leave organizations vulnerable to evolving threats, hindering both the efficiency of their operations and their ability to respond effectively. 

To avert these consequences, cybersecurity teams must acknowledge and seize the AI-driven opportunities at their disposal sooner rather than later. By embracing standards such as ISO 42001, enterprises can proactively position themselves, providing assurances to partners, customers and regulators while simultaneously laying the groundwork for more effective operations, longevity and financial success.
