Speaker:: Gadi Evron
Title:: Closing Words (Final)
Duration:: 15 min
Video:: https://www.youtube.com/watch?v=HjAxt-KpACg
## Key Thesis
The [un]prompted 2026 conference marked a genuine inflection point in the security community's engagement with AI — a "warp point in history" comparable to early Defcon, where a new community is forming. Evron's closing manifesto: the rate of change is so fast that all tooling and scaffolding become obsolete every 90 days, but the community that chose to be here, on the edge, is exactly positioned to survive it — and the call to action is not individual mastery but collective uplift.
## Synopsis
Evron opened by thanking the sponsors, who contributed without any guaranteed return — naming Nostic, Tech with Kyle, White Rabbit, Hian Futures, and Alian Ventures — with particular thanks to the late-stage sponsors who came in when the event scaled from ~300 to 700 attendees (releasing ~400 people from the waitlist). He also spent significant time on the volunteer community, noting that the event attracted CISOs, researchers, students from Singapore, and everyone in between, all talking about Claude's context window.
He reflected on the two-day event's key themes: the transition from deterministic to nondeterministic computing and what that means for security; coding agents going mainstream beyond engineers; the reality that individual teams and companies now build their own AI infrastructure without central control. His core observations:
**The 90-day obsolescence cycle.** Ryan Moon's framing resonated most: the people in the room are those who accepted they must "dump their scaffolding and infrastructure and all the smart prompts we built every 90 days." That is a harsh reality — months of research and tooling become nearly irrelevant in the face of the next capability jump. But the people in this room chose to be in this "death trap of doom" and are energized by it rather than paralyzed.
**The AI adoption litmus test.** Evron's singular indicator for which companies will survive: whether the CEO is on Claude Code (or Cursor or Copilot). He acknowledged this sounds ridiculous but believes it's the most predictive signal of organizational readiness.
**The intelligence stratification argument.** He outlined a hierarchy of AI engagement: most people interact only with GPT/Claude/Gemini in chat → multi-turn conversation → tool use → agents → agentic flow with self-healing → "ouroboros" (output feeds input). The people in the room operate in the agentic layers. If even 2% of people in their networks can be brought up from passive chat use to basic tool use, that cascades into real impact at organizational and national levels.
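The top rung of that ladder can be sketched as a minimal loop. This is purely illustrative — the `model` function below is a hypothetical placeholder, not any real API from the talk — but it shows the structural point of the "ouroboros" rung: each output is fed back in as the next input.

```python
# Illustrative sketch of the "ouroboros" rung of the engagement ladder:
# the model's output becomes its own next input. model() is a stand-in
# placeholder; a real agentic flow would call an LLM here.

def model(prompt: str) -> str:
    """Stand-in for a model call (hypothetical, for illustration only)."""
    return f"refined({prompt})"

def ouroboros(seed: str, rounds: int) -> str:
    """Run the self-referential loop: output of step n is input of step n+1."""
    state = seed
    for _ in range(rounds):
        state = model(state)  # feed the previous output back in
    return state

print(ouroboros("recon notes", 3))  # refined(refined(refined(recon notes)))
```

The lower rungs differ only in who drives each step (a human in chat, a tool-use harness, an agent loop); the ouroboros rung is the degenerate case where the loop closes on itself with no human in between.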
**Offensive AI singularity.** Attackers are now facing their own micro-singularity: operations no longer require months of zero-day preparation. Attackers can generate exploits on the fly. He predicted (framing it as "two years ahead, so probably two months out"): fully automated lateral movement without pre-planned zero-days, real-time vulnerability discovery during active operations, and automatic patching as a potential defense counter-measure.
**Platform-level AI security emerging.** Anthropic's Claude Code Security and OpenAI's renamed Codex Security were cited as signals that platform-level security tooling is arriving. He connected this to recent stock market movements as a "hint of what's to come" for application security vendors.
**The human question.** He closed with a quote from Trolls (former CSO): "How do we make jobs redundant without making people redundant?" He had no answer, but identified community — the relationships built at this event — as the only reliable anchor. He referenced Steve Crocker (author of RFC 1) and the founding principle of the ARPANET: "Networks are for people." He expressed uncertainty about whether the agentic internet will remain "for people," but argued that for now, staying relevant means being on the edge, using the tools, and pulling others along.
Final call to action: use Claude Code today, then pull one or two people with you to that level of AI engagement.
## Key Takeaways
- The conference grew from ~300 planned to 700 attendees with 800+ on Slack — a signal a genuine new community is forming around AI security
- Every 90 days, scaffolding, prompts, and tooling become obsolete; the ability to rebuild fast is the core skill
- CEO-on-Claude-Code is Evron's single leading indicator of organizational AI readiness
- Offensive AI has hit a micro-singularity: attackers can now generate exploits in real time rather than preparing zero-days months in advance
- Defense has not yet hit its corresponding singularity — the imperative is to take the offensive power and apply it defensively now
- The "2% rule": moving 2% of your network from passive AI consumption to agentic tool use has compounding organizational impact
- Platform-level AI security tools (Claude Code Security, Codex Security) are now arriving, signaling shifts in the AppSec market
- Community and human connection are the durable asset; the tooling is ephemeral
## Notable Quotes / Data Points
- "We can't stop the storm. It's coming. It's here. People don't see it."
- "The people who are here chose to be part of this death trap of doom."
- Event scale: 700 in-person, 800+ on Slack; sponsors contributed blind, without attendee lists or talk previews
- Ryan Moon's framing: must "dump their scaffolding and infrastructure and all the smart prompts we built every 90 days"
- "AI will not replace you, a human using AI will replace you" — Evron questioned whether even this is still true
- Steve Crocker (RFC 1 author): "Networks are for people"
- "The only true indicator I have is if the CEO is on Claude Code or Cursor or Copilot"
- Trolls (former CSO): "How do we make jobs redundant without making people redundant?"
- Evron noted he developed a capability similar to Anthropic's new security tooling at his startup — and acknowledged Anthropic did it better
#unprompted #claude