Episode 98: Building Trust in AI Through Model Interpretability

When your machine learning model makes a decision that affects someone's medical treatment, financial security, or legal rights, "the algorithm said so" isn't good enough. Stakeholders need to understand why models make the decisions they do, and in high-stakes environments, model interpretability becomes the difference between AI adoption and AI rejection.

In this episode, Serg Masis joins Dr. Genevieve Hayes to share practical strategies for building interpretable machine learning models that earn stakeholder trust and accelerate AI adoption within your organisation.

You'll learn:
  1. The crucial distinction between interpretable and explainable models [07:06]
  2. Why feature engineering matters more than algorithm choice [14:56]
  3. How to use models to improve your data quality [17:59]
  4. The underrated technique that builds stakeholder trust [21:20]
Guest Bio

Serg Masis is the Principal AI Scientist at Syngenta, a leading agricultural company with a mission to improve global food security. He is also the author of Interpretable Machine Learning with Python and co-author of the upcoming DIY AI and Building Responsible AI with Python.
