Safeguarding AWS AI Services: Protecting Sensitive Permissions

As AI continues to grow in importance, securing AI services is crucial. Our team at Sonrai attended the AWS Los Angeles Summit on May 22nd, where the message was clear: AI will play a major role in 2024. According to summit presentations, 70% of top executives say they are exploring generative AI solutions. With this in mind, we have compiled a list of AWS AI services with sensitive permissions. We hope your teams can use it to implement policies and procedures that safeguard these permissions.

Amazon Bedrock

ApplyGuardrail

Description: Grants permission to apply a guardrail
MITRE Tactic: Impact

Why is it sensitive?
This permission allows users to set or modify boundaries on AI model behaviors. Misuse can result in improperly configured guardrails that either over-constrain the model, hindering its functionality, or under-constrain it, exposing the organization to compliance and safety risks.

DeleteGuardrail

Description: Grants permission to delete a guardrail or its version
MITRE Tactic: Impact

Why is it sensitive?
Deleting a guardrail can remove critical protections, leaving AI models without necessary operational boundaries. This can lead to models behaving unpredictably or violating regulatory requirements, posing significant risks to the organization. Additionally, it can allow broader data access.

UpdateGuardrail

Description: Grants permission to update a guardrail
MITRE Tactic: Impact

Why is it sensitive?
Updating a guardrail allows modifications to the constraints and rules governing AI models. If misused, it can weaken security measures or create loopholes, leading to potential compliance violations and operational disruptions.
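
One way to safeguard these guardrail permissions is a service control policy that denies them to every principal except a designated administrative role. Below is a minimal sketch using boto3; the role ARN, policy name, and OU target are illustrative assumptions, not values from this post.

import json
import boto3

# Assumed break-glass role allowed to manage guardrails (illustrative only).
ADMIN_ROLE_ARN = "arn:aws:iam::*:role/GuardrailAdmin"

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyGuardrailChanges",
            "Effect": "Deny",
            "Action": [
                "bedrock:ApplyGuardrail",
                "bedrock:UpdateGuardrail",
                "bedrock:DeleteGuardrail",
            ],
            "Resource": "*",
            # Exempt the identities that legitimately manage guardrails.
            "Condition": {"ArnNotLike": {"aws:PrincipalArn": ADMIN_ROLE_ARN}},
        }
    ],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="restrict-bedrock-guardrails",  # illustrative name
    Description="Deny guardrail changes outside the GuardrailAdmin role",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach to an OU or account of your choosing (placeholder target ID).
orgs.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-example-id")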

Amazon Q Business

CreatePlugin

Description: Grants permission to create a plugin
MITRE Tactic: Persistence

Why is it sensitive?
Creating a plugin can introduce new functionalities, some of which might be malicious, allowing persistent access or data exfiltration.

CreateUser

Description: Grants permission to create a user
MITRE Tactic: Persistence

Why is it sensitive?
Creating a user can provide an attacker with a new identity to maintain persistent access and perform unauthorized activities without detection.

UpdatePlugin

Description: Grants permission to update a plugin
MITRE Tactic: Persistence

Why is it sensitive?
Updating a plugin can modify its behavior, potentially introducing malicious code or altering functionalities to bypass security measures.
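
Because plugin and user creation map to persistence, it is also worth watching CloudTrail for these calls. The sketch below is one rough way to do that with boto3; the 24-hour window is an arbitrary choice, and the event-source filter keeps IAM CreateUser events out of the results.

import json
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=24)

# CloudTrail lookups accept one attribute per call, so loop over the event names.
for event_name in ("CreatePlugin", "UpdatePlugin", "CreateUser"):
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )
    for event in events.get("Events", []):
        detail = json.loads(event["CloudTrailEvent"])
        # CreateUser also exists in IAM, so keep only Amazon Q Business events.
        if detail.get("eventSource") == "qbusiness.amazonaws.com":
            print(event_name, detail.get("userIdentity", {}).get("arn"), event.get("EventTime"))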

Amazon SageMaker

CreateCodeRepository

Description: Grants permission to create a code repository
MITRE Tactic: Persistence

Why is it sensitive?
Creating a code repository can allow an attacker to store and execute malicious code within the AI environment, maintaining persistent control.

CreatePresignedDomainUrl

Description: Grants permission to create a presigned domain URL
MITRE Tactic: Initial Access

Why is it sensitive?
This permission can be used to generate URLs that provide temporary access to resources, potentially allowing unauthorized users to gain entry.

CreateUserProfile

Description: Grants permission to create a user profile
MITRE Tactic: Persistence

Why is it sensitive?
Creating a user profile can help an attacker establish and maintain a foothold within the system, enabling ongoing malicious activities.

PutModelPackageGroupPolicy

Description: Grants permission to put a model package group policy
MITRE Tactic: Privilege Escalation

Why is it sensitive?
Setting a model package group policy can elevate privileges, allowing an attacker to gain more control over AI resources and operations.

Additional sensitive SageMaker permissions include CreateEndpointConfig, CreateNotebookInstance, CreateTrainingJob, CreateWorkforce, and seven more.
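
Before locking these down, it helps to know which identities can already call them. The IAM policy simulator can answer that; the sketch below assumes a placeholder role ARN and, in practice, would be run against every principal you care about.

import boto3

iam = boto3.client("iam")

SENSITIVE_ACTIONS = [
    "sagemaker:CreateCodeRepository",
    "sagemaker:CreatePresignedDomainUrl",
    "sagemaker:CreateUserProfile",
    "sagemaker:PutModelPackageGroupPolicy",
]

# Placeholder principal; iterate over your own roles and users.
principal_arn = "arn:aws:iam::123456789012:role/data-science-role"

results = iam.simulate_principal_policy(
    PolicySourceArn=principal_arn,
    ActionNames=SENSITIVE_ACTIONS,
)
for result in results["EvaluationResults"]:
    # EvalDecision is allowed, explicitDeny, or implicitDeny.
    print(result["EvalActionName"], result["EvalDecision"])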

Amazon Lex

CreateResourcePolicy

Description: Grants permission to create a resource policy
MITRE Tactic: Defense Evasion

Why is it sensitive?
Creating a resource policy can be used to evade detection by altering access controls and permissions, masking malicious activities.

UpdateResourcePolicy

Description: Grants permission to update a resource policy
MITRE Tactic: Defense Evasion

Why is it sensitive?
Updating a resource policy can modify access controls, potentially allowing an attacker to evade security measures and maintain undetected access.

Amazon Rekognition

PutProjectPolicy

Description: Grants permission to put a project policy
MITRE Tactic: Persistence

Why is it sensitive?
Setting a project policy can control access to AI resources, allowing an attacker to maintain persistent access or disrupt normal operations.

Amazon Comprehend

CreateEndpoint

Description: Grants permission to create an endpoint
MITRE Tactic: Persistence

Why is it sensitive?
Creating an endpoint can enable persistent access to AI services, potentially exposing sensitive data and operations.

PutResourcePolicy

Description: Grants permission to put a resource policy
MITRE Tactic: Persistence

Why is it sensitive?
Setting a resource policy can control access and permissions, helping an attacker maintain a foothold within the system.

Amazon Kendra

CreateAccessControlConfiguration

Description: Grants permission to create an access control configuration
MITRE Tactic: Persistence

Why is it sensitive?
Creating an access control configuration can help an attacker establish and maintain access, potentially leading to unauthorized actions.

UpdateAccessControlConfiguration

Description: Grants permission to update an access control configuration
MITRE Tactic: Persistence

Why is it sensitive?
Updating an access control configuration can modify permissions and controls, helping an attacker maintain undetected access.
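
If you want visibility into how these Kendra permissions have already been used, you can enumerate the access control configurations attached to each index. A small sketch, assuming default pagination is enough for your environment:

import boto3

kendra = boto3.client("kendra")

# List every index, then the access control configurations attached to it.
for index in kendra.list_indices().get("IndexConfigurationSummaryItems", []):
    index_id = index["Id"]
    configs = kendra.list_access_control_configurations(IndexId=index_id)
    for config in configs.get("AccessControlConfigurations", []):
        print(index_id, config["Id"])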

AWS Entity Resolution

AddPolicyStatement

Description: Grants permission to add a policy statement
MITRE Tactic: Lateral Movement

Why is it sensitive?
Adding a policy statement can extend permissions and access, allowing an attacker to move laterally within the network.

DeletePolicyStatement

Description: Grants permission to delete a policy statement
MITRE Tactic: Impact

Why is it sensitive?
Deleting a policy statement can remove critical security controls, increasing the risk of unauthorized access and actions.

PutPolicy

Description: Grants permission to put a policy
MITRE Tactic: Lateral Movement

Why is it sensitive?
Setting a policy can modify access controls, enabling an attacker to move laterally and potentially escalate their privileges within the system.

Conclusion

IAM controls are only one of many ways to protect AWS AI services, but disciplined permissions management greatly reduces the chance of attackers exploiting them. Sonrai's Cloud Permissions Firewall disables the AI services your organization is not using, so they cannot be exploited, and creates policies restricting the most sensitive permissions associated with each service. Development is not interrupted, because identities that need the access are exempted in the policy.
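
As a rough illustration of that idea (not Sonrai's implementation), an organization could deny entire AI service namespaces it does not use while exempting identities tagged as needing access. The tag key and the list of unused services below are assumptions for the sketch.

import json
import boto3

# Services this example assumes the organization is not using.
UNUSED_AI_SERVICES = ["qbusiness:*", "lex:*", "entityresolution:*"]

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnusedAIServices",
            "Effect": "Deny",
            "Action": UNUSED_AI_SERVICES,
            "Resource": "*",
            # Identities tagged ai-exempt=true keep access, so development is not interrupted.
            "Condition": {"StringNotEquals": {"aws:PrincipalTag/ai-exempt": "true"}},
        }
    ],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Name="deny-unused-ai-services",  # illustrative name
    Description="Block AI services the organization does not use",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)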

*** This is a Security Bloggers Network syndicated blog from Sonrai | Enterprise Cloud Security Platform authored by Tally Shea. Read the original post at: https://sonraisecurity.com/blog/safeguarding-aws-ai-services-protecting-sensitive-permissions/
