Deepfakes and AI-Driven Disinformation Threaten Polls – Trend Micro


I’ve participated in many elections over the years, both before and after the Internet, and the last few have seen massive shifts in how citizens get their information and news. With many elections taking place around the world in 2024, and a US Presidential election coming in November, we’re already seeing some concerning signs of what is to come. In my opinion, misinformation and disinformation campaigns will be the most significant challenge we face as citizens trying to figure out which news is real and which is fake. Technological advances over the past few years allow anyone in the world to post information on the Internet about any topic they want. Whether it is bots on social media spreading information quickly and broadly, or newer deepfake technologies that can imitate a person in video or audio simply by asking an app to create whatever message is wanted, people are finding it harder and harder to tell what is real from what is fake.

With the US Presidential election coming up in November, we will see extensive misinformation campaigns, likely run by nation-states, social media influencers, and even the campaigns themselves. We’ve seen this in past elections, so it isn’t a stretch to expect it to happen again.

The difference today is that many people now get their news and information from the Internet and social media, which has allowed these misinformation campaigns to flourish. Another challenge is adversaries taking over accounts and websites, which lets them push their message to the subscribers, customers, or visitors of those properties. Hacktivism is also on the uptick due to the Russia/Ukraine and Israel/Hamas conflicts, both of which will be key topics in the upcoming election, and these groups will want to insert their messages into the news cycle.

Technology like AI and generative AI (GenAI) allows anyone, anywhere in the world, to support misinformation campaigns. GenAI can produce content in any language, so non-English speakers can easily create English-language material to share. Note that the goal of the person or group is not flawless content production. Analysts and an educated audience can usually tell that a particular video or voice is a deepfake today. However, the target audience is often distracted by the way they consume news in general, typically on the small screen of a mobile device, and they tend to share emotionally provocative content quickly. So even poor-quality deepfakes have viral potential, spreading fast and influencing a significant portion of the general public.

One of the biggest changes compared to previous elections is that AI has become far more accessible, and the cost of the technologies used to manipulate digital media has dropped enough that even poorly resourced players can jump in. The line between manipulation and jokes will be very thin, and the cost of a potential misinformation campaign is now affordable to ordinary people and small businesses, not just large corporations and state-sponsored actors. This creates significant opportunities for false flag operations, in which initial investigations point to individuals and small business entities rather than the governments actually orchestrating them.

So, what are people supposed to do to identify what is real and what is fake? This is the challenge today, as there isn’t much technology to do it for us. Also, many platforms support freedom of speech, which means content that may be fake is often left up because it is deemed free speech. Unfortunately, people will have to use common sense and do some due diligence in researching what they are exposed to. Bookmark some of your favorite news sites, and don’t respond to unsolicited emails or text messages that purport to be sharing news. Sign up for newsletters you’ve researched and know are doing their due diligence on the content they share. On social media, if you see a message from someone you follow that goes against their usual content, check whether their account was compromised. Unfortunately, in today’s environment, misinformation is a weapon being used to distract, divide, and disorient people. Hopefully, we can get a handle on it in the future, but in the meantime, we need to be cautious about what we see and hear.
