AI Scientists Contest Validity of AI Security Science

Participants gather for a group photo after discussing securing AI systems for critical national security data and applications. Credit: Liz Neunsinger/ORNL, U.S. Dept. of Energy

Researchers at the Department of Energy’s Oak Ridge National Laboratory met recently at an AI Summit to better understand threats surrounding artificial intelligence. Organized by ORNL’s Center for Artificial Intelligence Security Research, referred to as CAISER, the event was part of ORNL’s mission to shape the future of safe and secure AI systems charged with our nation’s most precious data.

“We’re embarking on a new field of science by bridging artificial intelligence research, such as what these systems can do, with cybersecurity research, which protects networks and data from intrusion,” said Edmon Begoli, CAISER director. “We need to think about the problem, put scientific rigor behind our thoughts and develop capabilities to persistently understand the threat of AI to us as humans and us as a nation with allies.”

Throughout the day-long summit, members of CAISER presented to 40 researchers and guests on the pros and cons of using AI in face recognition systems; how to influence large language models without having access to the underlying algorithm; and how polyglot files, single files containing two distinct file types, threaten to infiltrate systems. Where the morning presentations were highly technical, the afternoon sessions focused on philosophical dialogue about the nature of AI: in particular, what can we learn about the technology itself, and how might society adjust as machines inch closer to cognition?
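For readers unfamiliar with the term, a brief illustration may help. A classic polyglot combines a JPEG and a ZIP archive: image viewers parse from the start of the file, while ZIP readers locate the archive from a record near the end, so simple concatenation produces one file that both formats accept. The Python sketch below is a minimal, hypothetical example of that construction; the file names photo.jpg, payload.txt and polyglot.jpg are assumptions for illustration, not artifacts from the summit.

```python
# Minimal sketch of a JPEG/ZIP polyglot: one file, two valid formats.
# Assumes an ordinary image named photo.jpg already exists on disk.
import io
import zipfile

# Read the bytes of an ordinary JPEG.
with open("photo.jpg", "rb") as f:
    jpeg_bytes = f.read()

# Build a small ZIP archive in memory with one text entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("payload.txt", "hello from inside the image")

# JPEG decoders stop at the end-of-image marker and ignore trailing
# bytes; ZIP readers scan backward for the end-of-central-directory
# record. Concatenating the two therefore satisfies both parsers.
with open("polyglot.jpg", "wb") as out:
    out.write(jpeg_bytes + buf.getvalue())

# Sanity check: the combined file still opens as a ZIP archive.
print(zipfile.ZipFile("polyglot.jpg").namelist())  # ['payload.txt']
```

Because scanners and upload filters often validate only one of the two formats, a file like this can carry content that a single-format check never inspects.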

“Given the fiercely rapid pace at which the field of AI is growing and changing, it is essential to act with urgency to develop foundational methods, software architectures and tools to mitigate the threats that are both faced by and emanate from AI systems,” said Shaun Gleason, director of ORNL’s Science-Security Initiative. “Similar to the traditional field of cybersecurity, AI security will be a continuous ‘cat and mouse game’ with adversaries that seek to disrupt, destroy and misuse AI systems and technologies for their own purposes.”

CAISER officially launched in September 2023 to pair talented researchers with the resources needed to classify vulnerabilities as the tech industry and government agencies begin to explore AI applications. With strengths in biometrics, cybersecurity, geospatial science and nuclear nonproliferation, ORNL is poised to investigate AI opportunities and risks across these critical national security sectors.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
