Cybersecurity in the age of AI: emerging threats and solutions


We spoke to cybersecurity expert Khurram Mir about the evolving world of cybersecurity in the age of AI.

Across sectors, cybersecurity is one of the most important aspects of creating and maintaining a successful modern business — and it’s one of the biggest challenges for enterprises of all sizes.

Cybersecurity challenges proliferate globally

It’s no secret: the global news cycle is filled with reports of evolving cyber threats.

This week alone, the BBC reported that a ransomware attack on Synnovis, a pathology firm, disrupted critical services at London hospitals, including blood transfusions and test results; medical students were asked to volunteer for 10-12 hour shifts to help the hospital system recover.

Across the Channel, Paris is bracing for cybersecurity risks related to its hosting of the Summer Olympics.

As reported by Axios, Microsoft, Google Cloud’s Mandiant, and Recorded Future have published warnings about a range of threats, calling out Russia as the greatest source of cybersecurity risk. Threat examples include Russian influence teams Storm-1679 and Storm-1099, which have been linked to media operations such as a deepfake documentary that included fake dialogue by Tom Cruise. The Mandiant report states that “Russian state-sponsored cyber threat activity poses the greatest risk to the Olympics.”

Microsoft struck a similar tone in its Threat Intelligence Report, entitled “Russian influence efforts converge on 2024 Paris Olympic Games,” which highlights historical trends alongside emerging threats such as deepfakes and AI-powered attacks, beginning with the previously cited deepfake documentary featuring a pseudo-Tom Cruise, released in 2023.

“Nearly a year later and with less than 80 days until the opening of the 2024 Paris Olympic Games, the Microsoft Threat Analysis Center (MTAC) has observed a network of Russia-affiliated actors pursuing a range of malign influence campaigns against France, French President Emmanuel Macron, the International Olympic Committee (IOC), and the Paris Games. These campaigns may forewarn coming online threats to this summer’s international competition,” said the MTAC report.

Clearly, in the age of AI, keeping your systems secure has never been harder, and the stakes have never been higher. In that spirit, this publication reached out to cybersecurity expert Khurram Mir to learn more about cybersecurity in the age of AI. Mir serves as chief marketing officer at Kualitatem, a software testing and cybersecurity company and the creator of Kualitee, an AI software testing tool.

Cybersecurity Q&A with Khurram Mir

Khurram Mir, chief marketing officer at Kualitatem, a software testing and cybersecurity firm. Image: Kualitatem.

Q: Can you tell us briefly how you got into this line of expertise?

A: The technology industry has always drawn me, and I’ve worked as a QA specialist since the early 2000s. Once I finished college, I started working as a QA engineer for various companies, ensuring that every product reaching the market was of the highest possible quality. Eventually, I created my own tool to help developers do just that. The problem was that, using standard methods, products could take years to reach the market, leading to lost profit. The emergence of AI made me realize that this technology could expedite the process. I saw the potential and believed that, with the right balance, I could help people create high-quality products faster.

Q: You have contended that AI cannot replace human intelligence in cybersecurity. Why is this so?

A: Artificial intelligence can be beneficial when it comes to providing you with answers, but it’s not without flaws. The technology is still in its early stages, and unlike the human brain, it lacks intuition (and sometimes even common sense). Even with proper data training, it can still be biased if left unsupervised. It can even “hallucinate” when the data pool does not contain an answer, leading to numerous false positives and negatives. It can speed up the process, but human involvement is essential to minimize biases.

Q: What do you feel is the proper place for AI in a robust cybersecurity approach?

A: When used in a robust cybersecurity approach, AI should act as both a skilled analyst and a set of eyes that never sleeps. The technology can go through thousands of datasets per second, far more than the human brain can process. It can detect threats and respond to incidents much faster, and it can also be used to predict potential future threats. An AI-driven system can provide continuous monitoring, bringing potential issues to a human analyst’s attention much sooner.
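To make that concrete, here is a minimal sketch of what such AI-driven continuous monitoring might look like, using an off-the-shelf anomaly detector. The log features, data, and escalation rule are hypothetical illustrations, not Kualitatem's tooling or a production configuration.

```python
# Minimal sketch: continuous monitoring that escalates anomalies to a human.
# Feature names and values are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: bytes transferred, requests/min, failed logins
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500.0, 30.0, 1.0], scale=[100.0, 5.0, 1.0], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)  # learn what "normal" traffic looks like

new_events = np.array([
    [520.0, 28.0, 0.0],      # resembles baseline traffic
    [50000.0, 300.0, 40.0],  # huge transfer plus a burst of failed logins
])

# Negative decision scores indicate likely anomalies; a human reviews those.
for event, score in zip(new_events, detector.decision_function(new_events)):
    if score < 0:
        print(f"Escalate to analyst: {event}")
```

The division of labor mirrors Mir's answer: the model does the tireless, high-volume screening, while the final judgment on each escalated event stays with a person.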

Q: What is AI bias in cybersecurity, and how should we approach the problem?

A: AI bias is a systematic error that often leads to an inaccurate or unfair outcome simply because the system doesn’t know any better. Think of how humans are limited when they are taught to believe something their whole life; the same can happen to AI. For instance, the data could tell an AI that most hackers come from countries such as China, causing it to overlook a lower-profile source such as the Netherlands. Ensuring diverse training data and setting up continuous monitoring can help us detect these biases and override the AI’s decision before it becomes irreversible.

Q: Can you explain the impact of training data bias, and what’s the solution?

A: Training data bias can occur when the data added is skewed, incomplete, or unrepresentative, leading to potentially unfair results. Not only can this lead to false positives and negatives, but it can also discriminate by targeting certain groups. Curating the data and performing regular evaluations and audits are likely the best ways to prevent training data bias. Biases will always happen, since the algorithm was made to mimic human intelligence (but faster); they just need to be caught early so that you can teach the AI to avoid them.
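As a small illustration of the kind of audit Mir describes (the dataset, column names, and thresholds below are invented for the example), a skewed training set can be caught with a simple per-group breakdown before any model trains on it:

```python
# Minimal sketch: auditing a hypothetical threat-intel training set for skew.
import pandas as pd

events = pd.DataFrame({
    "source_country": ["CN", "CN", "CN", "NL", "RU", "CN", "NL", "RU"],
    "malicious":      [1,    1,    1,    0,    1,    1,    0,    0],
})

# Sample count and malicious rate per source country.
rates = events.groupby("source_country")["malicious"].agg(["count", "mean"])
print(rates)

# Groups with few samples and an extreme (all-0 or all-1) rate are red
# flags for unrepresentative data; send those slices back for curation.
suspect = rates[(rates["count"] < 5) & rates["mean"].isin([0.0, 1.0])]
print("Needs curation:\n", suspect)
```

Catching the skew at this stage is cheaper than retraining later; an audit like this can simply run as part of every data refresh.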

Q: What makes algorithmic data bias challenges unique, and how do you approach the issue?

A: AI uses various algorithms to sift through data, using predictions and probable pathways to determine the most technically correct answer. The problem is that a glitch in that pathway (i.e., a flawed algorithm or an incomplete data pool) can easily lead to discrimination and oversights. Regularly assessing the algorithms and correcting those “blind spots” can help improve the system’s decisions and make them fairer.

Q: How do you define cognitive bias in AI, and what’s the best way to avoid or ameliorate it?

A: Cognitive bias in AI is similar to the kind we find in human intelligence: ingrained knowledge shapes the system’s way of thinking. This can cause the AI to make irrational decisions, as it only has that information to go on. Diverse and representative training data can help avoid this type of bias, and human oversight can intervene to prevent a potential “slip” in the process.

Q: In your expert opinion, what is the biggest cybersecurity threat today?

A: There are plenty of threats to watch, but perhaps the biggest one is the adversarial attack, especially with AI becoming such a widely used technology. In these attacks, data is perturbed or intentionally manipulated so that the system produces an incorrect output. Data or models can also be “poisoned,” leading to unfair or inaccurate decisions. Without continuous data validation and adversarial training, AI systems can go rogue and ultimately endanger the user experience.
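To show the mechanics Mir is describing, here is a hedged sketch of an evasion-style adversarial attack against a toy linear detector. The weights, features, and perturbation budget are invented; real attacks target far more complex models, but the principle of a small, targeted perturbation flipping the output is the same.

```python
# Minimal sketch of an adversarial evasion attack on a toy linear classifier.
# All weights and feature values are invented for illustration.
import numpy as np

w = np.array([0.8, -0.5, 1.2])  # hypothetical detector weights
b = -0.1                        # score = w @ x + b; score > 0 -> "malicious"

x = np.array([0.9, 0.2, 0.7])   # a sample the detector correctly flags
print("original score:", w @ x + b)       # 1.36 -> detected

# FGSM-style step: move each feature against the score's gradient (which
# for a linear model is just w), within a small perturbation budget.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", w @ x_adv + b)  # -0.14 -> slips past the detector
```

The same idea underlies the data and model “poisoning” Mir mentions: whoever can nudge the inputs, or the training data, can steer the output, which is why continuous data validation and adversarial training matter.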

Q: What is a guiding principle to help companies stay safe from cybersecurity threats in the age of AI?

A: A guiding principle I will always go by is a holistic and proactive approach. AI can be very useful for detecting potential threats early in the testing stage, but the technology still has a lot to “learn.” Aside from creating a risk management strategy, I’d advise against letting the AI make every decision independently. Continuous monitoring must be implemented, and the human team should be able to override an error should one appear. User education and awareness also play an essential part, as they keep infiltrations to a minimum.

Q: Do you have any closing thoughts on this broad topic?

A: Considering how heavily cybersecurity has been tested in the past few years, the risk of an attack or a leak is higher than ever. At this point, going without AI is no longer feasible: it can detect a bug or malicious data in your code faster than any human could. That said, vigilance and continuous human involvement remain essential. AI was meant to help humans do a better job, not to take over their tasks and replace them.

