Speaker:: Billy Norwood
Title:: Establishing AI Governance Without Stifling Innovation
Duration:: 24 min
Video:: https://www.youtube.com/watch?v=sh9LpVM1QBM

## Key Thesis

A mid-market CISO at a $5B pharmaceutical distribution company shares hard-won lessons from implementing AI governance at a non-tech-forward organization. The central insight: governance frameworks built on tiered committees, risk-scored intake, and controlled chokepoints (routing everything through Databricks) are necessary but insufficient without specific policies, training, and use-case-level human oversight decisions baked in from the start.

## Synopsis

Norwood introduces FFF Enterprises: a ~$5B pharmaceutical distribution company with IoT drug-dispensing devices, an online pharmacy, and a specialty-drug focus. It is not a tech-forward company; it started out taking fax orders. He has been CISO there for 5+ years.

The AI story started with a consulting firm that produced 40 AI use cases and a PMO plan to deliver them in five waves by 2027, presented to leadership without security input. When Norwood saw the presentation, his reaction was: "We have a problem."

The governance structure he built is tiered. Top layer: a governance committee of himself (CISO), the CIO, the General Counsel, and the Chief Compliance Officer, focused on policy, ethics, and regulatory alignment. Middle layer: an AI Center of Excellence with VPs, directors, data science leads, infrastructure engineers, and HR (critical for change management and for framing AI as an enabler rather than a job-killer). The CoE handles standardized practices and control design. Risk escalation follows a simple rule: the higher the risk (PHI, PII, financial data, critical processes), the higher a use case escalates up the committee structure.

Initial controls he thought were sufficient but were not: a vague AI usage policy ("check with IT before using AI"), routing everything through Databricks (the primary control plane for agents) and Microsoft Copilot (for end users), and basic risk assessments.
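The escalation rule above can be sketched as code. This is a hypothetical illustration of "higher risk escalates higher": the class names, weights, and thresholds are assumptions for demonstration, not FFF's actual intake form or scoring model.

```python
# Illustrative sketch of risk-scored intake with tiered escalation.
# Weights and tier cutoffs are invented for demonstration only.
from dataclasses import dataclass, field

# Sensitive data categories mentioned in the talk, with assumed weights.
DATA_RISK = {"PHI": 3, "PII": 2, "financial": 2, "public": 0}

@dataclass
class UseCase:
    name: str
    data_types: list = field(default_factory=list)
    touches_critical_process: bool = False
    human_in_the_loop: bool = True

def risk_score(uc: UseCase) -> int:
    score = sum(DATA_RISK.get(d, 1) for d in uc.data_types)
    if uc.touches_critical_process:
        score += 2
    if not uc.human_in_the_loop:
        score += 3  # autonomous action treated as the largest risk driver
    return score

def escalation_tier(uc: UseCase) -> str:
    s = risk_score(uc)
    if s >= 5:
        return "governance committee"    # CISO / CIO / GC / CCO
    if s >= 2:
        return "AI Center of Excellence"
    return "standard intake"

pre_auth = UseCase("medical pre-authorization",
                   data_types=["PHI"], touches_critical_process=True)
print(escalation_tier(pre_auth))  # -> governance committee
```

The design point the sketch captures: escalation is a function of the use case's data sensitivity and process criticality, not of who proposes it, so the intake form itself decides which committee layer must sign off.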
What was missing: explicit approved-tool lists, specific prohibited use cases, required AI awareness training with sign-off, a mandatory intake process with ROI/process benchmarking, and defined human oversight points per use case.

Two concrete use cases highlighted:

1. Medical pre-authorization: an agent reads denial letters, pulls supporting documentation, and packages it for a doctor to review. It saves $250K/year but requires human oversight before any action.
2. Overages/shortages/damages (OSD) workflow: a multi-agent Databricks orchestration that pulls SAP ship dates, Salesforce data, and customer-uploaded images and video, packaging everything for a human reviewer to respond to the customer.

Both are human-in-the-loop designs; FFF is risk-averse.

Key failure modes and lessons: the initial vague policy needed replacement with specific approved/prohibited tool and use-case lists. The intake form required iteration for every new use case as new risk dimensions emerged. Shadow AI detection remains an ongoing challenge; procurement and contract review for AI clauses are the primary mitigations. "Security spaghetti" (tangled access control groups) became a problem as the agent landscape expanded. Budget constraints forced improvisation, including an emergency budget when the CEO flipped from "no AI" to "I want it now". He is currently using Databricks as a de facto system of context, aggregating Salesforce and SAP data, effectively building a proprietary system of context rather than letting Salesforce or SAP own that role.
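The human-in-the-loop pattern both use cases share can be sketched as follows. This is a minimal assumed design, not FFF's implementation: the agent may gather and package evidence, but every outward action must pass through a human review queue; `ReviewQueue` and `DraftAction` are hypothetical names.

```python
# Hypothetical sketch: agents enqueue drafts; only approved drafts execute.
from dataclasses import dataclass, field

@dataclass
class DraftAction:
    use_case: str
    summary: str
    evidence: list = field(default_factory=list)  # e.g. denial letter, ship dates, images
    approved: bool = False

class ReviewQueue:
    def __init__(self):
        self._pending = []

    def submit(self, draft: DraftAction):
        # Agents can only enqueue; they never execute directly.
        self._pending.append(draft)

    def approve_next(self) -> DraftAction:
        # Called by the human reviewer (doctor, OSD analyst).
        draft = self._pending.pop(0)
        draft.approved = True
        return draft

def execute(draft: DraftAction):
    # The single gate: no approval, no action.
    if not draft.approved:
        raise PermissionError("human approval required before any action")
    return f"sent response for {draft.use_case}"

queue = ReviewQueue()
queue.submit(DraftAction("medical pre-authorization",
                         "appeal recommended",
                         ["denial letter", "chart notes"]))
print(execute(queue.approve_next()))
```

The structural choice matters: approval is enforced at the execution boundary rather than by convention, so expanding the agent landscape cannot silently bypass the reviewer.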
## Key Takeaways

- Tiered governance (steering committee + center of excellence) is necessary but insufficient without specific policy artifacts
- Vague AI usage policies ("check with IT") are useless; you need explicit approved-tool lists and prohibited-use-case lists
- Human oversight design must be decided per use case before deployment, not after
- Routing all AI through a single control plane (Databricks) simplifies governance but creates a dependency that won't scale forever
- HR as a governance partner is underrated; they are critical for framing AI adoption as augmentation, not replacement
- Shadow AI detection is hard; procurement/contract review for AI clauses is a practical starting point
- Use-case scoring and ROI benchmarking before deployment enable prioritization and success measurement

## Notable Quotes / Data Points

- 40 AI use cases proposed by the consulting firm, originally scheduled for five-wave delivery by 2027
- Medical pre-authorization agent saves $250,000/year; requires doctor review before any action
- Multi-agent OSD workflow spans Databricks, SAP, and Salesforce, with customer image/video inputs for $13,000 drug vials
- FFF is a $5B company: a Microsoft shop, with Databricks as the AI control plane and Copilot for end users
- "We were aiming for controlled execution but it spawned child processes" (their self-summary via Copilot)
- Shadow AI chase: "ridiculous as it sounds" but necessary; secure browser enforcement as a first line of defense
- CrowdStrike plus open Claude security alliance guidance used for shadow AI detection integration
- Databricks bronze/silver/gold data standards as a governance scaffold within the control plane

#unprompted #claude