Speaker:: Sergej Epp
Title:: 8 Minutes to Admin. We Caught It in the Wild.
Duration:: 18 min
Video:: https://www.youtube.com/watch?v=xCtcQkJBReQ

## Key Thesis

AI-assisted attackers are faster than ever, but their speed creates a paradox: the faster they move, the more forensic artifacts they leave. AI systems have an "accent" derived from their training data — predictable naming patterns, example data, and behavioral fingerprints — that defenders can weaponize through environment-specific honeytokens, timing controls, and naming-convention enforcement to detect AI-driven intrusions.

## Synopsis

Epp opens with a captured intrusion from November 28th of the prior year: S3 bucket credentials compromised from a rack database, full AWS admin access achieved within 8 minutes. The attack was fast but extremely loud.

Walking through the CloudTrail logs, Epp points out several "confession" artifacts. Lambda function comments read things like "courageous admin access key list S3 bucket" — AI-generated commentary left in the code. When the attacker tried to assume IAM roles to move laterally, they used sequentially ascending and descending account IDs — the kind of example data any AI generates when asked for "sample AWS account IDs." Most telling: the AWS session name contained a visible reference to the AI tool being used, making the tooling trivially identifiable. The attacker also tried to spin up 8× H100 GPUs, naming them "Steven GPU monster" — and a user named "Steven" from Serbia appeared repeatedly in the logs. The attacker then tried to pull a GitHub repository that didn't exist — the URL was hallucinated.

After the initial burst (8 minutes to full admin), there was a 50-minute quiet period followed by bursts of activity in a pattern consistent with human-LLM interaction cycles: prompt, wait, burst, prompt, wait, burst.

Epp then introduces a second case: a malware sample appearing within 24 hours of the React for Shell exploit going public.
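The sequential account IDs and the tool name leaking into the session name are mechanically detectable. A minimal sketch, assuming pre-parsed CloudTrail `AssumeRole` records (the field names follow the CloudTrail schema; the tool-name list is an illustrative assumption, not from the talk):

```python
import re

# Illustrative guess at tool names that might leak into a session name.
TOOL_HINTS = re.compile(r"(claude|gpt|copilot|cursor|llm)", re.IGNORECASE)

def is_sequential(account_id: str) -> bool:
    """True for ascending/descending digit runs like 123456789012,
    the kind of example data an LLM emits for 'sample AWS account IDs'."""
    digits = [int(c) for c in account_id if c.isdigit()]
    if len(digits) < 4:
        return False
    steps = {b - a for a, b in zip(digits, digits[1:])}
    # Treat wrap-around (…8, 9, 0, 1…) as a continuation of the run.
    return steps <= {1, -9} or steps <= {-1, 9}

def flag_event(event: dict) -> list[str]:
    """Return 'AI accent' flags for one CloudTrail AssumeRole event."""
    flags = []
    params = event.get("requestParameters") or {}
    m = re.search(r"::(\d{12}):", params.get("roleArn", ""))
    if m and is_sequential(m.group(1)):
        flags.append("sequential-account-id")
    if TOOL_HINTS.search(params.get("roleSessionName", "")):
        flags.append("tool-name-in-session")
    return flags
```

In a real pipeline these checks would run over the `AssumeRole` events in your CloudTrail stream; here they only show that both artifacts reduce to cheap string tests.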
The sample included five persistence mechanisms, a local Node.js runtime environment, self-patching capability, and multiple payload deliveries within minutes. The C2 server was hosted on a blockchain — making it impossible to sinkhole, block, or take down through a registrar. The code style and the volume produced in that timeframe strongly suggest LLM generation, though Epp notes there is no smoking gun without watermarking confirmation from the model providers.

The theoretical framework Epp presents is a "verifier spectrum" mapping offense and defense capabilities. Offense owns "objective verifiers" — popping a shell, exfiltrating data — things that are deterministically verifiable. Defense struggles with "subjective verifiers": is this log entry malicious? But defense has a unique advantage: ownership of environment verifiers. Attackers don't know your naming conventions, your GitHub repos, how you deploy, or what your identities look like. That asymmetry is exploitable.

Epp closes with four defensive controls:

1. **Weaponize time** — a new account created from an unknown source with admin activity within 5 minutes is a high-confidence signal; alert on it.
2. **Honeytokens** — canary identities and credentials that follow your naming conventions, planted as traps. Epp calls this "the single best thing in cyber defense."
3. **Enforce your accent** — train your team on company naming conventions and build detections around deviations.
4. **Real/fake verification** — when an attacker calls a GitHub repo or assumes a role, you can simply verify whether that resource exists and alert if it doesn't.

These controls don't require vendor solutions — they require knowing your own environment.
## Key Takeaways

- 8 minutes from credential compromise to full AWS admin via an AI-assisted attack, caught entirely in CloudTrail
- AI attackers leave distinctive forensic artifacts: hallucinated URLs, example-data patterns, session-name confessions, LLM-consistent timing patterns
- AI has a training-data "accent" — predictable outputs when asked for sample data, names, or repos — that is detectable
- C2 on a blockchain = infrastructure that cannot be sinkholed or taken down via a registrar
- Defender's asymmetric advantage: you know your environment; AI attackers don't
- Four controls: timing alerts, honeytokens, naming-convention enforcement, resource-existence verification
- Offense has cheap verifiers (shell pop, data exfil); defense can make its verifiers equally cheap via environmental specificity

## Notable Quotes / Data Points

- 8 minutes from S3 credential compromise to full AWS admin
- Lambda function code carried AI-generated comments; the session name confessed the tool in use
- Attacker tried to spin up 8× H100 GPUs ($50/hr each) named "Steven GPU monster"; user "Steven" from Serbia appeared in logs
- Hallucinated GitHub repo URL — tried to pull from a repo that doesn't exist
- Burst-pause-burst activity pattern consistent with human LLM prompting cycles
- C2 hosted on a blockchain: "you can't really seize it, you can't really block it, you can't take it down because there's no registrar"
- "The faster they go, the more they confess. So let this sink in. The faster they go, the more they confess."
- Weaponized exploit rate: ~2% of CVEs has held roughly constant — but LLM-driven exploit generation could change this dramatically

#unprompted #claude