ICCV 2025 Decoded:
The Must-Know Breakthroughs in Explainability & Debugging for Modern Neural Networks

December 10th, 2025
11:00 EST / 17:00 CET
About

Join us on Wednesday, December 10th, 2025, at 11:00 EST / 17:00 CET for a focused, no-fluff webinar where we’ll break down the most compelling ICCV 2025 work on model explainability, failure analysis, debugging, and model reliability.

This session will feature Thomas Fel, Research Fellow at Harvard with a Ph.D. in Explainable AI, and Yotam Azriel, CEO & CTO of Tensorleap, who together will connect advances in deep vision interpretability and cutting-edge XAI research to practical model understanding and debugging.

For teams building, deploying, and troubleshooting neural networks in production, the real question after ICCV is simple: Which of these new research breakthroughs actually help us understand what our models are doing and where they’re still failing?
During this live webinar, we’ll go beyond the “what” of the published papers and dig into the “so what,” translating cutting-edge research into practical insights you can apply immediately across vision, multimodal, and foundation-model pipelines.

You’ll come away with a distilled view of where the field is moving, the debugging methodologies that matter, and the emerging patterns shaping how we diagnose and interpret complex neural networks today.


Anyone who wants high-signal insights from ICCV, without drowning in 2,000+ papers, will walk away with immediate, practical value.