AI isn’t failing because it’s weak; it’s failing because reality is messy.
In self-driving, the hardest problems aren’t the common ones… they’re the rare, unpredictable edge cases.
And that’s exactly where even the most advanced AI still struggles.
Vit and Achyut explore why autonomous vehicles are still not fully solved despite years of progress in AI and machine learning. The conversation highlights the “long tail problem”: the countless rare scenarios that AI systems struggle to predict and handle.
While self-driving technology works well in controlled or common situations, real-world environments introduce unpredictable complexity that’s hard to model. This insight applies beyond autonomous driving and into product development, startups, and AI systems at large. Understanding these limitations is critical for anyone building or working with AI-driven products.
KEY TAKEAWAYS:
AI performs well on common scenarios but struggles with rare “edge cases”
The “long tail problem” is one of the biggest blockers in real-world AI systems
More data doesn’t guarantee full coverage of real-world complexity
True innovation happens when systems can handle unpredictability
Product leaders must design for uncertainty, not just the average case
🔗 Full episode: https://www.youtube.com/watch?v=Wn3E18rP5Eo
Connect with Achyut
* Website: https://torc.ai/
* LinkedIn: https://www.linkedin.com/in/achyutsarma/
Connect with Vit
* Substack: https://anhourofinnovation.substack.com/
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/