🔆 Breaking the Black Box: Transparency and Explainability in AI
A closer look at how AI systems make decisions, leaked screenshots from Instagram's AI adventures, and AI investment in Kenya.
🗞️ Issue 16 // ⏱️ Read Time: 7 min
Hello 👋
Have you heard of the black box problem? This is where AI systems make decisions without providing insight into how those decisions were reached. This means the most honest answer to “How did ChatGPT come up with this response?” is “We do not know.”
Stanford’s Foundation Model Transparency Index shows that no major foundation model developer comes close to providing adequate transparency, a fundamental shortfall across the AI industry. There are nuances and layers to this.
Keep reading to learn more about the principles of explainability and transparency, two important pillars of responsible AI.
In this week's newsletter
What we’re talking about: Transparency and explainability in artificial intelligence.
How it’s relevant: Together, transparency and explainability form one of the four key areas in responsible AI, alongside privacy and data governance, security and safety, and fairness.
Why it matters: Transparent and explainable AI is not just a technical issue; it's a societal one. Entrusting important decisions to a system that cannot explain itself presents real-world consequences, and decision-makers must understand these concepts and their implications to navigate the AI landscape responsibly.
Big tech news of the week…
🌍 At the American Chamber of Commerce Business Summit in Nairobi last week, US Commerce Secretary Gina Raimondo and the Kenyan government signed a partnership agreement to enable American companies to invest in AI and data centers in Kenya.
💰️ Microsoft will invest $1.7 billion over the next four years in expanding cloud services and artificial intelligence in Indonesia, including building data centers. Microsoft’s CEO, Satya Nadella, says the investment will “bring the latest and greatest AI infrastructure to Indonesia.”
📱 A leaked screenshot shows Instagram is testing an AI chatbot that lets you choose from 30 personalities. This would fit past statements from CEO Mark Zuckerberg that Meta is “developing AI personas that can help people in a variety of ways.”
🦺 Thorn, All Tech is Human, and 10 leading AI companies have joined forces to prevent the creation and spread of AI-generated child sexual abuse material on their platforms. The Safety by Design principles require that companies anticipate where threats may occur during the development process and design the necessary safeguards — rather than retrofit solutions after harm has occurred.
The Black Box
Transparent and explainable AI (XAI) is not a new concept. It has roots in the early days of AI research and was identified as desirable, even crucial, as early as the 1980s. As AI evolved, however, so did the complexity of its models, making it ever more difficult to understand a system’s knowledge base and reasoning processes. The need for transparency and explainability arose to address this "black box" problem, where AI systems make decisions without providing insight into how those decisions were reached.

The benefits of explainable AI
Explainability: The capacity to comprehend and articulate the rationale behind an AI system’s decisions, so that its reasoning is understandable to users and stakeholders.
Example: Suppose an individual applies for a loan, and an AI model determines that their creditworthiness is low, resulting in a rejection. With explainable AI, the model can provide reasoning such as "The loan application was rejected due to a high debt-to-income ratio and a history of late payments." This explanation allows the applicant to understand the reasons behind the decision and take appropriate actions to improve their creditworthiness (a code sketch of this idea follows after these definitions).
Transparency: Open sharing of development choices, including data sources and algorithmic decisions, as well as how AI systems are deployed, monitored, and managed, covering both the creation and operational phases.
Example: Using the same hypothetical credit-scoring situation as before, the financial institution facilitating the loan could openly provide transparency into how the AI model was built and the factors considered in the creditworthiness assessment. This could include details about the data sources, the algorithmic decisions made, and any preprocessing or feature engineering steps taken.
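To ground the loan example above, here is a minimal sketch of how a white-box model can explain its own decisions. Everything in it is hypothetical: the feature names, the toy training data, and the attribution rule (coefficient times deviation from the average applicant) are illustrative, not a production credit-scoring method.

```python
# A minimal sketch: a white-box credit model that explains its decisions.
# Feature names, training data, and the attribution rule are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["debt_to_income_ratio", "num_late_payments", "years_of_history"]

# Toy historical applications: [DTI, late payments, years of credit history]
X = np.array([
    [0.20, 0, 10], [0.60, 4, 3], [0.30, 1, 7], [0.70, 6, 2],
    [0.25, 0, 12], [0.55, 3, 4], [0.35, 1, 8], [0.65, 5, 1],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = rejected

model = LogisticRegression().fit(X, y)
baseline = X.mean(axis=0)  # the "average applicant"

def explain(applicant):
    """Crude linear attribution: coefficient * deviation from the average
    applicant. Negative contributions push the decision toward rejection."""
    contributions = model.coef_[0] * (applicant - baseline)
    negatives = [(n, c) for n, c in zip(FEATURES, contributions) if c < 0]
    # Most damaging factor first.
    return [f"{name} hurt the application (impact {c:.2f})"
            for name, c in sorted(negatives, key=lambda nc: nc[1])]

applicant = np.array([0.68, 5, 2])  # high DTI, several late payments
verdict = "approved" if model.predict([applicant])[0] else "rejected"
print(verdict)
for reason in explain(applicant):
    print(" -", reason)
```

Because the model is linear, each feature’s contribution can be read directly off the learned weights, which is exactly what makes white-box models easy to explain.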
The trade-off between accuracy and explainability
More complex models often deliver superior performance but tend to be less interpretable than simpler ones. This assumption forces a choice between high-performing yet opaque models and more transparent but less precise alternatives.
However, the trade-off is often smaller than assumed. Recent studies found that for almost 70% of the datasets considered, there was no significant trade-off between accuracy and explainability: a more explainable model could be used without sacrificing accuracy.
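One way to check whether the trade-off actually bites on your data is simply to benchmark an interpretable model against a black-box one. A minimal sketch, assuming scikit-learn (the dataset and model choices are stand-ins):

```python
# A minimal sketch: measure the accuracy/explainability trade-off on one
# dataset. The dataset and model choices are illustrative, not prescriptive.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression (white box)":
        make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest (black box)":
        RandomForestClassifier(n_estimators=200, random_state=0),
}

# 5-fold cross-validated accuracy for each candidate.
for name, model in candidates.items():
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {accuracy:.3f}")
# If the gap is negligible, default to the explainable model.
```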
White Box Models: Making Algorithms Transparent
White box models (aka native white box models) typically include a few simple rules (such as decision trees or linear regression) with limited parameters, making the decision-making processes behind these algorithms generally understandable by humans. Surrogate white box models are simpler versions of complex black box models, trained on the predictions of the best-performing black box for that dataset. Unlike native white box models, they are intentionally designed to mimic black box behavior but be easier to understand. For example, a linear regression model (transparent) could be used to approximate the behavior of a deep neural network (opaque).
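Here is a minimal sketch of the surrogate idea, again assuming scikit-learn (the gradient-boosted model and dataset are stand-ins): train a shallow decision tree on the black box’s predictions, then measure how faithfully it mimics them.

```python
# A minimal sketch of a surrogate white-box model: a shallow decision tree
# trained on a black-box model's *predictions* rather than the true labels.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, _ = train_test_split(
    data.data, data.target, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The surrogate approximates the black box's decision boundary.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A high fidelity score means the tree’s printed rules are a reasonable stand-in for the black box; a low one means the "explanation" cannot be trusted.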
What does research suggest for deciding if a black-box or white-box model is best for your use case?
Default to white box: Start with transparent models and only opt for black-box models if there's a significant performance gap.
Know your data: Assess data quality and complexity to determine if black-box models are necessary, especially for multimedia or complex datasets.
Know your users: Consider the importance of transparency, especially in sensitive areas like hiring or legal decisions.
Know your organisation: Consider the organisation's digital readiness and employee trust in AI when choosing between white or black-box models.
Know your regulations: Ensure compliance with legal requirements for model explainability, such as the Equal Credit Opportunity Act or GDPR.
Explain the unexplainable: When black-box models are necessary, consider developing explainable proxies or prioritising transparency to address trust and safety concerns (a sketch of one such model-agnostic technique follows below).
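When a black box really is unavoidable, model-agnostic techniques can still recover partial explanations. A minimal sketch using permutation importance, with an illustrative model and dataset:

```python
# A minimal sketch of a model-agnostic explanation: permutation importance.
# Shuffle one feature at a time and measure how much held-out accuracy drops;
# large drops mark the features the black box leans on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

result = permutation_importance(
    black_box, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda kv: kv[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```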
Until next time,
Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.
Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.
What did you think of today's newsletter?
