NSA, CISA Release Guidance and Best Practices to Secure AI – CybersecurityNews


In an era where artificial intelligence (AI) systems are becoming increasingly integral to our daily lives, the National Security Agency’s Artificial Intelligence Security Center (NSA AISC) has taken a significant step forward in enhancing cybersecurity. 

The NSA AISC, in collaboration with several key agencies, including CISA, FBI, ASD ACSC, CCCS, NCSC-NZ, and NCSC-UK, has released a comprehensive Cybersecurity Information Sheet titled “Deploying AI Systems Securely.”

The information sheet outlines best practices for deploying and operating externally developed AI systems, focusing on three primary objectives:

  • Confidentiality: Ensuring that sensitive information within AI systems remains protected from unauthorized access.
  • Integrity: Maintaining the accuracy and reliability of AI systems by preventing unauthorized alterations.
  • Availability: Guaranteeing that AI systems are accessible to authorized users when needed.



Moreover, the guidance emphasizes the importance of implementing mitigations for known vulnerabilities in AI systems. 

This proactive approach is crucial in safeguarding against potential threats that could compromise the systems’ security.

The agencies also provide methodologies and controls designed to protect, detect, and respond to malicious activities targeting AI systems, their related data, and services. 

Organizations that deploy and operate externally developed AI systems are strongly encouraged to review and apply the recommended practices. 

Additionally, CISA points to previously published joint guidance on securing AI systems, such as “Guidelines for secure AI system development” and “Engaging with Artificial Intelligence,” which further elaborate on the strategies to enhance AI security.

The following are key measures from the report:

  • Conduct ongoing compromise assessments on all devices where privileged access is used or critical services are performed.
  • Harden and update the IT deployment environment.
  • Review the source of AI models and supply chain security. 
  • Validate the AI system before deployment.
  • Enforce strict access controls and API security for the AI system, employing the concepts of least privilege and defense-in-depth.
  • Use robust logging, monitoring, and user and entity behavior analytics (UEBA) to identify insider threats and other malicious activities. 
  • Limit and protect access to the model weights, as they are the essence of the AI system. 
  • Maintain awareness of current and emerging threats, especially in the rapidly evolving AI field, and ensure the organization’s AI systems are hardened to avoid security gaps and vulnerabilities.
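Two of the measures above, validating an externally sourced AI system before deployment and protecting access to model weights, can be illustrated with a short sketch. The filename `model.bin` and the idea of comparing against a vendor-published digest are illustrative assumptions, not part of the report; real deployments would check a digest from a signed manifest or model card.

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_weights(path: str, expected_sha256: str) -> bool:
    """Accept the artifact only if its digest matches the publisher-provided value."""
    return sha256_of(path) == expected_sha256

# Throwaway file standing in for a downloaded model artifact (hypothetical name).
with open("model.bin", "wb") as f:
    f.write(b"example weights")

# Least privilege on the weights file: owner read/write only.
os.chmod("model.bin", 0o600)

# In practice, `expected` would come from the vendor's signed manifest.
expected = sha256_of("model.bin")
print(verify_model_weights("model.bin", expected))
```

A tampered or corrupted artifact would produce a different digest and fail the check, which is the point of reviewing supply chain integrity before the model ever reaches a production host.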


