
🔆 Explain Yourself: Practical Techniques for Explainable AI

Demystifying AI's decision-making, US probes China's chip dominance, and California's crackdown on addictive algorithms


🗞️ Issue 51 // ⏱️ Read Time: 7 min

Hello 👋

As we've explored before, the rise of complex AI has brought with it the challenge of the "black box" – where even the creators of an AI system may not be able to fully explain how it makes decisions. But what about those who deploy these models in real-world applications? In this edition, we go beyond the black box and explore practical techniques for organizations to analyze and interpret model outputs, even when they don’t have direct access to the model.

In this week's newsletter

What we’re talking about: Techniques and tools for understanding AI decisions when you can’t peek under the hood.

How it’s relevant: Most organizations deploying AI access models through APIs rather than developing their own. This reliance on plug-in solutions makes it essential to understand model behavior without access to the internal workings, both for responsible deployment and for effective risk management.

Why it matters: As models continue evolving, the tools and techniques for understanding their decisions must evolve too. Organizations that implement robust frameworks now will be better positioned to deploy AI responsibly and maintain stakeholder trust.

Big tech news of the week…

🇺🇸🇨🇳 The United States has initiated an investigation into China's dominance in the semiconductor industry, particularly focusing on silicon carbide (SiC) chips. These chips improve power efficiency and enable high-performance computing, indirectly supporting the advancement of AI technologies.

🇷🇺🇨🇳 Russian President Vladimir Putin has directed the government and the country’s biggest bank, Sberbank, to enhance cooperation with China in the field of AI.

⚖️ A new law in California prohibits social media companies from knowingly providing "addictive feeds" to minors without parental consent. These are feeds that use algorithms to recommend content based on user behavior, rather than preferences explicitly set by the user.

Your Guide to Explainability

Explainable AI (XAI) is about shedding light on how AI systems make decisions. It goes hand-in-hand with transparency as a key principle of Responsible AI. Transparency reveals what goes into the system—the data sources, model architecture, and who's accountable. Explainability shows why specific decisions are made by breaking down the reasoning process.

To navigate the world of XAI, it's helpful to consider the different perspectives involved. We'll explore techniques tailored to three key roles: those deploying models in applications, those developing the AI models themselves, and those who interact with AI in non-technical roles.

XAI for the Model Deployer

Let’s start by looking at those who deploy models in applications and the techniques they can use. In this context, a deployer is anyone integrating AI models into real-world systems and products.

If you were to ask your AI model "Why did you predict that?" you’d ideally get a clear, understandable answer. This isn't just about satisfying curiosity and improving performance; it's about building trust, ensuring fairness, and promoting accountability.

Now, let's get practical. Most organizations today aren't building their own AI models from scratch—they're building applications on top of existing models like OpenAI's GPT-4, accessed through APIs. This means they don't have direct access to the model's internal workings or the training data. So, to understand how these models make decisions, the focus shifts to what you can control: the inputs. 

Here we’ll introduce you to five commonly used methods, all built on Input Perturbation, each offering a different route into an AI model's decision-making. The sketch below illustrates the core idea they share.
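To make that idea concrete before we get to the individual methods, here is a minimal sketch of the simplest form of input perturbation: remove one part of the input at a time, re-query the model, and see how much the output shifts. The `query_model` function below is a hypothetical stand-in for whatever API call your application already makes (it is not a real provider SDK), so the scores are purely illustrative.

```python
# Minimal sketch of leave-one-word-out input perturbation for a black-box model.
# query_model is a toy stand-in for your own API client; swap it for the real call.

def query_model(text: str) -> float:
    """Stand-in for a black-box model returning a score in [0, 1],
    e.g. a sentiment probability or a relevance rating."""
    positive_words = {"excellent", "fast", "friendly"}
    words = text.lower().split()
    return sum(w in positive_words for w in words) / max(len(words), 1)


def word_importance(text: str) -> list[tuple[str, float]]:
    """Estimate each word's influence by removing it, re-querying the
    model, and measuring how far the output moves from the baseline."""
    baseline = query_model(text)
    words = text.split()
    scores = []
    for i, word in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((word, baseline - query_model(perturbed)))
    # Largest absolute change = most influential word for this prediction.
    return sorted(scores, key=lambda pair: abs(pair[1]), reverse=True)


if __name__ == "__main__":
    review = "Excellent service and fast friendly support"
    for word, delta in word_importance(review):
        print(f"{word:>10}  change in score: {delta:+.3f}")
```

In practice you would replace the stand-in with your actual API client and perturb at whatever granularity matters for your use case, whether that's words, sentences, or fields of a structured input.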
