
AI, LLMs and Security: How to Deal with the New Threats 


When experimenting with AI, watch out for vulnerabilities that could be targets of attack, say Chris Pirillo and Lance Seidman in this episode of The New Stack Makers.


Apr 11th, 2024 8:20am by Chris Pirillo

By now, most of the galaxy has used an AI tool in one way or another, and most people are content to stop after trying one or two tools in their web browsers. But have you found yourself installing a large language model (LLM) on a local machine to tickle your tinkering inclinations?

If so, be careful. In rushing to try a new LLM, you might be exposing yourself to security risks. As AI continues to advance, so do the risks associated with potential exploits and vulnerabilities.


AI models, particularly those with millions or even billions of parameters, are highly intricate and difficult to scrutinize fully. This complexity makes them susceptible to exploitation, as attackers may find loopholes or vulnerabilities that go unnoticed by developers.

On this episode of The New Stack Makers, I had a chat with Lance Seidman to shed more light on the new security challenges. Seidman, an experienced programmer currently pursuing AI solutions to benefit healthcare practices, helped us dive into the nuances of such recent exploits.

AI Models Need Human Oversight

Hugging Face bills itself as “the platform where the machine learning community collaborates on models, datasets, and applications” — and it certainly has become the go-to place for both running demos live and downloading code to run elsewhere.

Recently, it was discovered (and subsequently addressed) that malicious AI models on Hugging Face were backdooring users’ machines, apparently by abusing Python’s pickle module.

Pickle, Python’s built-in serialization module, can be made to execute arbitrary code during deserialization, which let attackers craft model files that run commands on a victim’s machine the moment they are loaded, posing a significant security threat to users. To mitigate this, Hugging Face implemented a security scanner that checks every file pushed to the Hub; at this time, that includes both ClamAV scans and Pickle Import scans.
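To make the risk concrete, here is a minimal sketch (not taken from the episode) of why loading a pickled file from an untrusted source is dangerous: on deserialization, pickle calls an object’s __reduce__ method, and that method can return any callable it likes.

```python
import pickle

# A minimal sketch of pickle's core hazard. On deserialization, pickle
# calls an object's __reduce__ method, which may return ANY callable plus
# its arguments. Here it's a harmless print(), but it could just as
# easily be os.system() running a shell command.
class MaliciousPayload:
    def __reduce__(self):
        return (print, ("arbitrary code ran inside pickle.loads()",))

tainted_bytes = pickle.dumps(MaliciousPayload())

# Merely loading the data executes the attacker's callable:
pickle.loads(tainted_bytes)

# Safer habits: load pickled models only from sources you trust, prefer
# tensor-only formats such as safetensors, and scan files before loading.
```

The Pickle Import scan mentioned above works against essentially this pattern: it inspects which modules and callables a pickle file references so that suspicious imports can be flagged before anyone loads the file.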

But nothing is foolproof. One of the key takeaways from our conversation with Seidman is the critical role of human oversight in safeguarding AI systems against malicious attacks. While AI models may possess impressive capabilities, they are not immune to social engineering tactics.

In this episode of Makers, Seidman demonstrated how AI can be tricked into providing information on potential exploits, highlighting the need for constant vigilance and proactive security measures.

“Of course, before the AI creates new AI to make things better and make someone like me obsolete, there still needs to be some human at some point to just make sure things are being done correctly,” Seidman said in the episode.

“Because these models and all this information is created thanks to human intelligence. So, it only knows as much as it knows from us. At the same time, as we know, right now, these things get skewed and there’s misinformation — and somebody needs to be monitoring it. So, that’s probably a job in itself.”

Technical Safeguards, Cultural Awareness

As AI technologies become more sophisticated, so too do the tactics employed by malicious actors. To address these challenges, Seidman advocated for a multi-faceted approach to AI security. This approach involves not only technical safeguards but also fostering a culture of awareness and accountability within the AI community. Developers must prioritize security at every stage of the development life cycle, from code creation to deployment and beyond.

In all seriousness, use your noodle. Don’t outsource your critical thinking skills. That’s the bottom line.

One of the critical tools in the arsenal of AI security professionals is the ability to detect and mitigate potential exploits before attackers can use them. In this episode, Seidman also demonstrated how AI itself can be used to identify vulnerabilities and how to address those weak spots before they can be exploited. By leveraging AI for defensive purposes, security professionals can stay one step ahead of potential threats and protect their systems from harm.

Check out the full episode for more on how to keep your organization safe when using AI.

You can also download The New Stack’s latest ebook, “Better, Faster, Stronger: How Generative AI Transforms Software Development,” for a clear-eyed view of the advantages and challenges baked into GenAI.

YOUTUBE.COM/THENEWSTACK

Tech moves fast, don’t miss an episode. Subscribe to our YouTube
channel to stream all our podcasts, interviews, demos, and more.
