Dec. 5, 2025

RAG, LLMs & the Hidden Costs of AI: What Companies Must Fix Before It’s Too Late

Most enterprises dramatically underestimate the true risks, costs, and vulnerabilities hidden inside their AI systems.

In this episode of the An Hour of Innovation podcast, host Vit Lyoshin sits down with Dorian Selz, co-founder and CEO of Squirro, to explore why modern organizations are not nearly as prepared for AI as they believe.

Dorian breaks down how poor architectural choices can lead to massive operational expenses, including a real-world example of a $4 million-per-month LLM bill caused by misusing generative AI. He explains why Retrieval-Augmented Generation (RAG) has become essential for accuracy, cost control, and security in enterprise environments, and how exposing full documents to LLMs creates unnecessary risks.

The conversation dives deep into the growing gap between perceived and actual AI safety, the importance of guardrails, auditability, and observability, and the often-overlooked role insurance companies will play in shaping future AI governance. Dorian also discusses why vibe coding leads to dangerous levels of technical debt and why startups entering regulated industries must understand the operational and compliance burdens they face.

As a leader trusted by financial institutions, central banks, and highly regulated sectors, Dorian provides a clear and grounded perspective on what enterprises must fix before AI becomes mission-critical. This episode offers critical insights for business leaders, engineers, product teams, and anyone deploying LLMs, RAG systems, or enterprise AI at scale.

Dorian Selz is a veteran entrepreneur known for building secure, compliant, enterprise-grade AI systems used in finance, healthcare, and other regulated sectors. He specializes in AI safety, RAG architecture, knowledge retrieval, and auditability at scale. These capabilities are increasingly critical as AI enters mission-critical operations, and his work at the intersection of innovation and regulation makes him one of the most important voices in enterprise AI today.

Takeaways

  • Most enterprises dramatically overestimate their AI security readiness.
  • A single architectural mistake with LLMs can create a $4M-per-month operational cost.
  • RAG is essential because enterprises only need to expose relevant snippets, not entire documents, to an LLM.
  • Trust in regulated industries takes years to build and can be lost instantly.
  • Real AI safety requires end-to-end observability, not just disclaimers or “verify before use” warnings.
  • Insurance companies will soon force AI safety by refusing coverage without documented guardrails.
  • AI liability remains unresolved: Should the model provider, the user, or the enterprise be responsible?
  • Vibe coding creates massive future technical debt because AI-generated code is often unreadable or unmaintainable.
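The snippet-based retrieval idea from the takeaways above can be sketched in a few lines. This is a hypothetical toy example (keyword-overlap scoring over word chunks); production RAG systems use vector embeddings, a vector store, and an actual LLM call, but the principle is the same: only the top-ranked snippets reach the model, not the full documents.

```python
# Toy sketch of snippet-level RAG: score small chunks and include only the
# most relevant ones in the prompt, instead of sending whole documents.

def chunk(document: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (stand-in for real chunking)."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, snippet: str) -> int:
    """Naive relevance score: count query words that appear in the snippet."""
    snippet_words = set(snippet.lower().split())
    return sum(1 for w in query.lower().split() if w in snippet_words)

def build_context(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return only the top_k most relevant snippets, never the full documents."""
    snippets = [s for doc in documents for s in chunk(doc)]
    ranked = sorted(snippets, key=lambda s: score(query, s), reverse=True)
    return ranked[:top_k]

docs = [
    "Quarterly revenue grew 12 percent driven by the APAC region. "
    "Headcount remained flat while cloud spend increased sharply.",
    "The compliance team updated the data-retention policy. "
    "All customer records must now be archived for seven years.",
]
context = build_context("how long must customer records be archived", docs)
prompt = "Answer using only this context:\n" + "\n---\n".join(context)
```

Because the prompt contains only short, relevant snippets, token costs stay bounded and sensitive material outside the matched chunks is never exposed to the model — the cost and security argument Dorian makes in the episode.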

Timestamps

00:00 Introduction to Enterprise AI Risks

02:23 Why AI Needs Guardrails for Safety

05:26 AI Challenges in Regulated Industries

11:57 AI Safety: Perception vs. Real Security

15:29 Risk Management & Insurance in AI

21:35 AI Liability: Who’s Actually Responsible?

25:08 Should AI Have Its Own Regulatory Agency?

32:44 How RAG (Retrieval-Augmented Generation) Works

40:02 Future Security Threats in AI Systems

42:32 The Hidden Dangers of Vibe Coding

48:34 Startup Strategy for Regulated AI Markets

50:38 Innovation Q&A

Support This Podcast

Connect with Dorian

Connect with Vit