Energy Lab Officials Highlight Importance of AI Security

The new center is focused on tackling emerging security risks across AI development and preventing misuse of the technology.

Leaders from Oak Ridge National Laboratory’s new Center for AI Security Research (CAISER) emphasized the need for proactive discussions on AI security to prevent future issues.

“One thing that specifically we’re trying to do [at CAISER] is take AI security and learn across departments and across domains to make sure that we can achieve the goals of making AI safe,” CAISER Research Lead Amir Sadovnik said in an opening keynote panel at AI FedLab in Reston, Virginia, Wednesday.

AI systems have characteristics that set them apart from the software systems of the past, Sadovnik added. Because AI relies heavily on data, it is susceptible to vulnerabilities that did not exist in systems built entirely by hand.

“I am an AI researcher, I don’t know exactly what’s going on underneath the hood. I can build it, but I don’t know exactly how it’s learning — and that introduces a whole set of vulnerabilities,” said Sadovnik. “We’re looking at the center in a scientific way to try to kind of figure out not just the cybersecurity, but the AI security field, how do we make sure that our AI is secure and how do we make sure that we’re secure from what AI can be doing?”

Sadovnik discussed the importance of interagency collaboration in addressing AI and cybersecurity issues. He said working with different government agencies to find AI solutions is a top priority at Oak Ridge.

“We actually take lessons learned from one agency to the other agency, and that’s one of the big goals of the national lab is to have this collaborative approach,” said Sadovnik.

Attracting and retaining skilled AI practitioners will also help agencies strengthen their overall cybersecurity. CAISER Director Edmon Begoli recommended building internal talent development programs and hiring people with advanced degrees.

“Given that this field is in such a rapid state of development, and some topics are pretty complicated, it requires agencies to have staff that understands AI very well. I would encourage internal development process because the competition is severe. There’s a shortage and having a really effective partnership with academia would also be good to have in place,” said Begoli. “When it comes to retaining people, what we found that really keeps people working with us is because we offer people some very interesting things to do that they frequently cannot do anywhere else.”

Begoli also encouraged agencies to stay ahead of emerging threats by keeping up with the latest trends and engaging in conversations.

“The AI itself is inherently insecure, and it is a self-acting system. It is far more capable than classical software that does click button to work,” said Begoli. “It has far more potential and that comes with also far more threats, so pay more attention to IT security and safety and everything that comes with it.”

Sadovnik advised everyone to proceed with caution and stressed the importance of understanding the risks of AI systems.

“A lot of what we’re doing at the center is trying to define what the risks are. … Sometimes risks are OK; we take risks all the time. Understanding what they are, understanding how to measure them, measuring the impact and then figure out what kind of risk you’re taking,” said Sadovnik. “I do want to encourage the government to push ahead with AI, but also kind of think about the risks and make sure we’re doing it safely.”
