Speaker:: Srajan Gupta
Title:: Injecting Security Context During Vibe Coding
Duration:: 23 min
Video:: https://www.youtube.com/watch?v=DmO3cVOijNY

## Key Thesis

Vibe coding (prompt-driven AI code generation) fails not because of bad models but because of missing security context at generation time. The fix is to inject a curated, intelligently filtered security context pack into the prompt before code is written, then immediately verify and patch the output while intent is still fresh, using MCP as the delivery mechanism.

## Synopsis

Gupta, a senior security engineer at the fintech company Dave, opens with a March 2025 incident in which a developer vibe-coded an app, launched it, was attacked within a week, and had to shut it down. A year later, the community is still wrestling with the same problem. The root issue: the current workflow is see/think → prompt → accept all → run, with zero security consideration. The focus is "is my code working," not "is my code secure."

The scale problem is real: the developer-to-security ratio is roughly 100:1, security teams catch only ~40% of flaws even at current pace, and vibe coding is accelerating code volume dramatically. Traditional CI/CD scans catch implementation bugs but miss design flaws, and manual threat modeling of every PR doesn't scale. The missing 60% of security flaws come primarily from a lack of deep security-requirement understanding and the "accept all" culture that buries structural problems.

Gupta's solution is a security context pack delivered via MCP that intercepts the code-generation loop at three points:

- **Pre-coding:** before the agent writes a line, the MCP server performs a lightweight security analysis — identifying the risk level (high/medium/low) and the relevant security categories (e.g., API security, data validation, web security), and pulling in the specific guidelines for those categories.
- **Code generation:** the agent writes code with the security context already in the prompt.
- **Post-generation:** the MCP verifier checks the generated code against the requirements identified in the pre-coding step and flags mismatches immediately, while the developer's intent is still in the chat window.

The live demo used OTel (an open-source MCP observability tool) as the codebase. The task: build a webhook receiver endpoint accepting incoming webhooks from GitHub/Slack.

- **Without the tools enabled:** code was generated, and a post-hoc scan found two issues — sensitive header forwarding and authorization tokens propagating through the workflow.
- **With the tools enabled:** the pre-coding analysis flagged the task as high-risk, identified API security, data validation, and web security as key categories, and pulled in the relevant OWASP cheat sheets; the resulting code had a header blocklist, proper authorization-cookie handling, and X-Forwarded-For / X-Real-IP handling baked in from line one. Final verification: all checklist items passed, with no critical/high/medium issues found.

Key design insight: security guidelines written for humans don't work well directly with AI agents. The tool uses intelligent categorical tagging (e.g., authentication → JWT) so agents can filter precisely rather than loading everything into context. Gupta recommends that organizations inject their own internal threat models, golden-path libraries, approved dependencies, and SCA constraints. The centralized MCP server approach is critical for scalability — you don't want to replicate guidelines into 1,000 repos.
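A minimal sketch of the pre-coding analysis and categorical filtering described above. The category names, keyword triggers, guideline snippets, and `build_context_pack` function are illustrative assumptions for this summary, not the tool's actual implementation or API:

```python
# Hypothetical sketch of the pre-coding step: classify a feature request
# into a risk level and security categories, then pull only the matching
# guidelines into the context pack. All names and rules are assumptions.

# Category -> guideline snippets (in practice: OWASP cheat sheets,
# internal threat models, golden-path docs plugged into the MCP server).
GUIDELINES = {
    "api_security": ["Require a shared-secret signature on inbound webhooks."],
    "data_validation": ["Validate the payload schema before processing."],
    "web_security": ["Strip Authorization/Cookie headers before forwarding."],
    "authentication": ["Use short-lived JWTs; verify signature and expiry."],
}

# Naive keyword triggers standing in for the talk's "intelligent
# categorical tagging" (e.g., authentication -> JWT).
TRIGGERS = {
    "api_security": ("webhook", "endpoint", "api"),
    "data_validation": ("payload", "input", "form", "webhook"),
    "web_security": ("webhook", "http", "header"),
    "authentication": ("login", "jwt", "token", "auth"),
}

def build_context_pack(task: str) -> dict:
    """Return risk level, matched categories, and only their guidelines."""
    text = task.lower()
    categories = [c for c, words in TRIGGERS.items()
                  if any(w in text for w in words)]
    risk = "high" if len(categories) >= 3 else "medium" if categories else "low"
    return {
        "risk": risk,
        "categories": categories,
        "guidelines": [g for c in categories for g in GUIDELINES[c]],
    }
```

A real server would replace the substring matching with the tool's tagging logic; the point of the sketch is the shape of the output — risk level plus a filtered guideline set, small enough to inject into the prompt without context bloat.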
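The header hygiene the tools-enabled demo produced can be sketched roughly as follows; `BLOCKED_HEADERS`, `sanitize_headers`, and `client_ip` are hypothetical names for illustration, not the demo's generated code:

```python
# Sketch of the demo's two fixes: a header blocklist so tokens don't
# propagate downstream, and X-Forwarded-For / X-Real-IP handling for
# client-IP resolution. Names and the exact blocklist are assumptions.

# Headers that must never be forwarded past the webhook receiver.
BLOCKED_HEADERS = {"authorization", "cookie", "set-cookie",
                   "proxy-authorization", "x-api-key"}

def sanitize_headers(headers: dict[str, str]) -> dict[str, str]:
    """Drop sensitive headers before forwarding a webhook downstream."""
    return {k: v for k, v in headers.items() if k.lower() not in BLOCKED_HEADERS}

def client_ip(headers: dict[str, str], peer_ip: str) -> str:
    """Prefer the left-most X-Forwarded-For entry, then X-Real-IP,
    then the socket peer address."""
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    return headers.get("X-Real-IP", peer_ip)
```

This is exactly the class of fix a post-hoc CI scan flags late and a pre-injected context pack gets written from line one.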
## Key Takeaways

- Vibe coding's security problem is a context gap, not a model-capability gap — the model will write secure code if given the right context
- Security context must be injected before generation, not scanned after — post-generation fixes in CI/CD are too slow and miss design flaws
- MCP is the right delivery mechanism: IDE-native, composable, auditable, and supports context-on-demand without monolithic prompts
- Intelligent filtering by category prevents context bloat — only pull in guidelines relevant to the specific feature being built
- Hooks (Claude/Cursor) make MCP calls deterministic; without hooks, tool calls are probabilistic
- A centralized MCP server is more scalable than per-repo CLAUDE.md/rules files for enterprise deployments
- This approach reduces developer wait time by catching issues inline rather than in CI — security matches dev speed
- It does NOT replace threat modeling for new systems or human review for large architectural changes

## Notable Quotes / Data Points

- Developer-to-security ratio cited: ~100:1
- Traditional methods catch only ~40% of security flaws at current code velocity
- Demo showed: without tools → 2 post-gen findings; with tools → 0 critical/high/medium findings, all checklist items pass
- Cursor's internal thought process showed "pass pass pass pass... no critical or high severity issues found"
- "Vibe coding fails when the context is missing"
- The tool is open source (GitHub repo shown in the talk)
- Internal policies, Confluence docs, OWASP cheat sheets, and threat models can all be plugged in as context sources

#unprompted #claude