Speakers:: Shruti Datta Gupta & Chandrani Mukherjee
Title:: Security Guidance as a Service
Duration:: 23 min
Video:: https://www.youtube.com/watch?v=SMEZowlcyyo
## Key Thesis
Adobe built a centralized, platform-agnostic AI security guidance service that delivers consistent, org-specific security recommendations across every stage of the SDLC — from IDE to Jira to Slack to threat modeling — by anchoring a RAG pipeline to a single vetted vector store fed by an automated document ingestion pipeline.
## Synopsis
Datta Gupta (product security engineer, Adobe, 5+ years) and Mukherjee (security engineer, Adobe, 10+ years) present the 18-month evolution of their "Security Guidance as a Service" project. The core problem: security-to-developer ratios are low across most organizations, creating a bandwidth bottleneck for one-on-one guidance. Security processes (design reviews, threat modeling, code scanning, vulnerability scanning) each generate similar guidance needs, but that guidance was fragmented across docs that weren't reaching developers at the right time. Generic LLM answers aren't sufficient — they lack org-specific context and carry hallucination risk.
Their solution: a single central vector store containing vetted, Adobe-specific security documentation, served to multiple platforms via a common AI orchestrator. The journey started with a RAG solution integrated into Jira ticket flows (where vulnerability SLAs live) and, simultaneously, into Slack. They quickly realized that maintaining a separate RAG per use case was the wrong approach and consolidated into one shared vector store.
**Document ingestion pipeline**: Security documentation is maintained as metadata files in a Git repo (the source of truth). A pub/sub model listens to Git diffs. When a metadata change is detected, a downloader service fetches the URL content, an ingestor generates embeddings and loads the vector store, and a Slack notification confirms ingestion. Critically, every ingested document requires a reference Q&A dataset from the document owner — this golden dataset feeds an automated eval workflow that checks correctness and relevancy. A cron job also scans ingested URLs for content updates.
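The ingestion flow above can be sketched roughly as follows. This is an illustrative stub, not Adobe's implementation: `DocMetadata`, `fetch_url`, `embed`, `VectorStore`, and `notify_slack` are all assumed names standing in for the real downloader, embedding, and notification services.

```python
# Hypothetical sketch of the Git-triggered ingestion flow described above.
from dataclasses import dataclass, field

@dataclass
class DocMetadata:
    url: str
    owner: str
    golden_qa: list  # required reference Q&A pairs for the eval workflow

@dataclass
class VectorStore:
    records: dict = field(default_factory=dict)

    def upsert(self, url: str, embedding: list):
        self.records[url] = embedding

def fetch_url(url: str) -> str:
    # Stub: the real downloader service fetches the document content.
    return f"content of {url}"

def embed(text: str) -> list:
    # Stub: the real ingestor generates embeddings via a model/vendor API.
    return [float(len(text))]

def notify_slack(msg: str):
    # Stub: the real pipeline posts an ingestion confirmation to Slack.
    print(msg)

def on_metadata_change(meta: DocMetadata, store: VectorStore):
    """Pub/sub handler: fires when a Git diff touches a metadata file."""
    if not meta.golden_qa:
        raise ValueError("every document needs a golden Q&A dataset")
    content = fetch_url(meta.url)
    store.upsert(meta.url, embed(content))
    notify_slack(f"ingested {meta.url} (owner: {meta.owner})")
```

The hard gate on `golden_qa` mirrors the talk's point that no document enters the vector store without a reference dataset for automated eval.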
**Architecture**: Inputs from Jira, threat modeling platforms, chatbot, or IDE go to an AI orchestrator. The orchestrator tweaks the system prompt based on input type, queries the vector store, formats the LLM response as a configured JSON per use case, and logs everything in LangSmith for traceability and online eval.
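A minimal sketch of that orchestrator pattern, with `query_vector_store` and `call_llm` as stubs for the real retrieval and model services (the prompt texts and function names are assumptions, not the actual system prompts):

```python
# Sketch: one orchestrator routes all input types to a shared vector store,
# swapping the system prompt per use case and emitting configured JSON.
import json

SYSTEM_PROMPTS = {
    "jira": "You triage vulnerability tickets with short- and long-term fixes.",
    "threat_model": "You recommend remediations for identified threats.",
    "chatbot": "You answer security policy and process questions.",
    "ide": "You give org-specific secure-coding guidance inline.",
}

def query_vector_store(question: str) -> list:
    # Stub: retrieval against the single shared vector store.
    return ["<retrieved Adobe-specific guidance>"]

def call_llm(system: str, context: list, question: str) -> str:
    # Stub: the real call sends system prompt + retrieved context to the LLM.
    return f"answer to: {question}"

def orchestrate(input_type: str, question: str) -> str:
    system = SYSTEM_PROMPTS[input_type]      # prompt tweaked per input type
    context = query_vector_store(question)   # one shared vector store
    answer = call_llm(system, context, question)
    # Response shaped as configured JSON per use case; in the real system,
    # tracing (e.g. LangSmith) wraps these calls for observability.
    return json.dumps({"use_case": input_type, "answer": answer})
```

The design point is that only the system prompt and output schema vary per platform; retrieval and the knowledge base stay common.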
**Four use cases demoed** (demo hit network issues but was walked through): (1) a security support chatbot for policy and process questions; (2) threat remediation guidance from their automated AI threat modeling engine; (3) vulnerability ticketing/triaging workflow providing both short-term fixes and long-term class-level fixes; (4) an MCP server integrated into Cursor that delivers Adobe-specific security guidance as developers write code. The MCP server approach is being rolled out via a Cursor extension that auto-configures itself on sign-in. Initial testing showed a ~70% reduction in vulnerabilities in code when security rules were applied.
Key lessons: eval is the most important enabler but is time-consuming and manual; doc freshness requires a shared-responsibility model with document owners; the PR review gate for doc ingestion is a deliberate quality trade-off; and the tech stack evolved dramatically over 18 months (they had to do manual chunking/vectorization initially; now vector store vendors handle multimodal ingestion automatically).
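The golden-dataset eval described in these lessons can be sketched as a replay loop: each document's reference Q&A pairs are asked of the live system and scored. The token-overlap scorer here is a naive stand-in for real correctness/relevancy metrics; all names are illustrative.

```python
# Hedged sketch of a golden-dataset eval loop (naive token-overlap scoring
# stands in for the real correctness/relevancy checks).
def token_overlap(expected: str, actual: str) -> float:
    """Fraction of expected-answer tokens present in the actual answer."""
    e, a = set(expected.lower().split()), set(actual.lower().split())
    return len(e & a) / len(e) if e else 0.0

def run_eval(golden_qa, ask, threshold: float = 0.5):
    """Replay a document's golden Q&A against the system.

    golden_qa: list of (question, expected_answer) pairs
    ask: callable question -> system answer
    Returns (pass_rate, failures).
    """
    failures = []
    for question, expected in golden_qa:
        score = token_overlap(expected, ask(question))
        if score < threshold:
            failures.append((question, score))
    return 1 - len(failures) / len(golden_qa), failures
```

Running this automatically after every ingestion is what turns a freshness problem into a shared-responsibility loop: a failing doc pages its owner instead of silently degrading answers.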
## Key Takeaways
- A single central vector store beats multiple siloed RAGs when serving guidance across multiple security processes
- Git-backed metadata with automated ingestion + eval is the right ops model for keeping security docs fresh and quality-controlled
- Eval with a golden Q&A dataset per document is what separates production AI systems from experiments
- MCP server in the IDE is the furthest-left integration point — delivering security guidance at code-write time
- "When you make security zero calorie and seamless for developers, they will get on board" — adoption is a UX problem, not a compliance problem
- Security rules in CLAUDE.md / cursor rules reduced vulnerabilities ~70% in testing
## Notable Quotes / Data Points
- ~70% reduction in code vulnerabilities from applying foundational security rules in IDE
- Project running for ~1.5 years; tech stack changed significantly (LLMs now accept multimodal input; vector stores handle chunking automatically)
- Uses LangSmith for full traceability and online eval
- MCP server branded "Adobe Security Guidance" integrated into Cursor agent
- "AI boosts productivity but context is king — the more org-specific context you provide, the more useful these systems become"
- "Eval is what will turn your AI experiments into production systems"
#unprompted #claude