In a recent incident observed by Sysdig researchers, attackers escalated from stolen credentials to full administrative access in an AWS environment in under 10 minutes, illustrating how AI can shorten cloud attack timelines.
“The threat actor achieved administrative privileges in under 10 minutes, compromised 19 distinct AWS principals, and abused both Bedrock models and GPU compute resources,” said the researchers.
According to Sysdig’s analysis of the November 2025 incident, the attack began with the discovery of valid AWS credentials exposed in publicly accessible Amazon S3 buckets.
These buckets were used to store Retrieval-Augmented Generation (RAG) data for AI models and contained long-lived access keys that could be abused by anyone who found them.
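Defenders can hunt for this class of exposure by scanning bucket contents for credential patterns before an attacker does. The sketch below uses the documented format of long-term IAM access key IDs (they begin with `AKIA`); the sample document and key value are illustrative placeholders, not from the incident.

```python
import re

# Long-term IAM access key IDs begin with "AKIA" followed by 16
# uppercase alphanumeric characters (documented AWS key format).
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return candidate long-lived access key IDs found in text."""
    return ACCESS_KEY_RE.findall(text)

# Example: a RAG document that accidentally embeds a credential.
doc = "endpoint=s3://rag-data key=AKIAIOSFODNN7EXAMPLE secret=..."
print(find_exposed_keys(doc))  # → ['AKIAIOSFODNN7EXAMPLE']
```

In practice the same check would be run over objects pulled from each bucket (for example via `boto3`'s S3 client) and paired with an audit of the buckets' public-access settings.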
The exposed credentials belonged to an IAM user with the ReadOnlyAccess policy attached, along with limited permissions for Amazon Bedrock.
Although these privileges did not allow direct administrative actions, they provided broad visibility across the environment.
Using this access, the threat actor conducted extensive reconnaissance across multiple AWS services, including Secrets Manager, Lambda, EC2, ECS, RDS, CloudWatch, and Key Management Service.
They also enumerated Bedrock models and related AI services early in the intrusion, indicating an initial interest in identifying AI-related resources for potential abuse.
After mapping the environment, the attacker attempted to escalate privileges by assuming IAM roles commonly associated with administrative access.
When those attempts failed, they pivoted to a more reliable escalation technique: Lambda function code injection.
Because the compromised IAM user had UpdateFunctionCode and UpdateFunctionConfiguration permissions, the attacker was able to modify the code of an existing Lambda function that ran under an overly permissive execution role.
The attacker iterated on this approach several times, ultimately succeeding in creating new access keys for an administrative IAM user.
This step effectively granted full control over the AWS environment without the need for external command-and-control (C2) infrastructure, as the malicious Lambda function returned the newly created credentials directly in its execution output.
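This escalation pattern is detectable in CloudTrail: a function code update followed shortly by an access-key creation issued under a Lambda execution role is a strong signal. The sketch below runs the correlation over simplified, hypothetical event tuples; real CloudTrail records carry more fields, and the identity names are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified CloudTrail records: (time, identity, eventName).
events = [
    (datetime(2025, 11, 3, 10, 1), "rag-reader", "UpdateFunctionCode"),
    (datetime(2025, 11, 3, 10, 3), "lambda-exec-role", "CreateAccessKey"),
]

def flag_lambda_escalation(events, window=timedelta(minutes=15)):
    """Flag CreateAccessKey calls that closely follow a Lambda code update."""
    hits = []
    for t1, who1, name1 in events:
        if name1 != "UpdateFunctionCode":
            continue
        for t2, who2, name2 in events:
            if name2 == "CreateAccessKey" and t1 <= t2 <= t1 + window:
                hits.append((who1, who2))
    return hits

print(flag_lambda_escalation(events))  # → [('rag-reader', 'lambda-exec-role')]
```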
Analysis of the injected Lambda code revealed several indicators of AI-assisted development.
The script included detailed exception handling, execution timeout adjustments, and comments written in Serbian.
Researchers also observed behavior consistent with large language model (LLM) hallucinations, such as attempts to assume roles in non-existent AWS account IDs and references to a GitHub repository that does not exist.
With administrative access secured, the threat actor expanded their foothold by moving laterally across the environment.
They operated across 19 distinct AWS principals, including multiple IAM roles and users, created new access keys, and established a persistent backdoor user with the AdministratorAccess policy attached.
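A backdoor of this kind leaves a distinctive CloudTrail pair: `CreateUser` followed by `AttachUserPolicy` referencing the managed `AdministratorAccess` policy ARN. A minimal detection sketch, using simplified event dictionaries with an invented user name:

```python
ADMIN_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

# Hypothetical, simplified CloudTrail records for illustration.
events = [
    {"eventName": "CreateUser", "userName": "svc-backup"},
    {"eventName": "AttachUserPolicy", "userName": "svc-backup",
     "policyArn": ADMIN_ARN},
]

def find_backdoor_admins(events):
    """Return newly created users that were granted AdministratorAccess."""
    created = {e["userName"] for e in events if e["eventName"] == "CreateUser"}
    return sorted(
        e["userName"] for e in events
        if e["eventName"] == "AttachUserPolicy"
        and e.get("policyArn") == ADMIN_ARN
        and e["userName"] in created
    )

print(find_backdoor_admins(events))  # → ['svc-backup']
```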
The attacker then shifted focus to LLMjacking, abusing the victim’s Amazon Bedrock access to invoke multiple foundation models, including Claude, DeepSeek, Llama, and Amazon Titan.
Because model invocation logging was disabled, this activity likely went undetected while generating real usage costs for the organization.
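Turning that logging on is a one-time configuration passed to Bedrock's `PutModelInvocationLoggingConfiguration` API. The sketch below only builds the configuration payload; the field names follow that API's `loggingConfig` schema as best understood, and the log group name and role ARN are placeholders for illustration.

```python
# Sketch of the payload one would pass to Bedrock's
# PutModelInvocationLoggingConfiguration API (via boto3 or the AWS CLI).
# Log group name and role ARN below are illustrative placeholders.
def build_invocation_logging_config(log_group: str, role_arn: str) -> dict:
    return {
        "cloudWatchConfig": {
            "logGroupName": log_group,
            "roleArn": role_arn,  # role Bedrock assumes to write logs
        },
        "textDataDeliveryEnabled": True,  # capture prompts and completions
    }

cfg = build_invocation_logging_config(
    "/bedrock/model-invocations",
    "arn:aws:iam::123456789012:role/BedrockLoggingRole",
)
print(cfg["cloudWatchConfig"]["logGroupName"])  # → /bedrock/model-invocations
```

With invocation logging delivered to CloudWatch, anomalous model usage of the kind described above becomes visible and billable activity can be attributed to a principal.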
In the final stage of the attack, the threat actor provisioned high-end GPU infrastructure for machine learning workloads.
They successfully launched a p4d.24xlarge EC2 instance, which costs approximately $32.77 per hour, and used user data scripts to install CUDA, PyTorch, and other ML frameworks.
The scripts also launched a publicly accessible JupyterLab server, creating a backdoor that would allow continued access to the instance even if AWS credentials were later revoked.
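The financial exposure compounds quickly at the on-demand rate quoted above. A quick back-of-the-envelope calculation:

```python
HOURLY_RATE = 32.77  # p4d.24xlarge on-demand rate quoted above (USD/hour)

def runaway_cost(hours: float, rate: float = HOURLY_RATE) -> float:
    """Cost of an unnoticed GPU instance after the given number of hours."""
    return round(hours * rate, 2)

print(runaway_cost(24))       # one day    → 786.48
print(runaway_cost(24 * 30))  # ~one month → 23594.4
```

A single undetected instance therefore costs roughly $786 per day, or over $23,000 in a month, before counting Bedrock invocation charges.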
As AI-assisted cloud attacks become faster and more automated, organizations need defensive controls that go beyond basic misconfiguration fixes.
Defensive controls should focus on reducing privilege exposure, limiting attacker movement, and improving visibility into high-risk cloud and AI activity: rotating or eliminating long-lived access keys, scoping IAM users and Lambda execution roles to least privilege, and enabling logging for Bedrock model invocations. Together, these measures can help shorten detection timelines and limit the blast radius of a compromise.
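One concrete place to start is auditing for long-lived keys, the initial access vector in this incident. The sketch below flags active access keys older than a cutoff; the records mirror the shape returned by IAM's `ListAccessKeys` call, but the key IDs and ages here are illustrative stand-ins.

```python
from datetime import datetime, timedelta, timezone

# Records mirror the shape of IAM ListAccessKeys output; values are
# illustrative stand-ins, not real credentials.
keys = [
    {"AccessKeyId": "AKIA...STALE", "Status": "Active",
     "CreateDate": datetime.now(timezone.utc) - timedelta(days=400)},
    {"AccessKeyId": "AKIA...FRESH", "Status": "Active",
     "CreateDate": datetime.now(timezone.utc) - timedelta(days=10)},
]

def stale_active_keys(keys, max_age_days=90):
    """Return active access keys older than max_age_days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in keys
            if k["Status"] == "Active" and k["CreateDate"] < cutoff]

print(stale_active_keys(keys))  # → ['AKIA...STALE']
```

In a real audit the same filter would be applied to the output of `boto3`'s IAM client across every user, with flagged keys rotated or deactivated.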
This incident demonstrates how cloud intrusions can escalate rapidly when exposed credentials, permissive identities, and automated tooling are combined.
The increasing adoption of large language models in attack workflows is expected to further reduce the time available for detection and response.
As attacks accelerate and implicit trust breaks down, organizations are increasingly turning to zero-trust to limit access and reduce the impact of compromised identities.