The AI Element podcast S2E1: “Opening the AI black box”
Element AI
September 4 · 3 min read


Today we’re proud to launch the second season of our podcast, the AI Element. The show is about what AI can actually do — when AI makes headlines, it’s not always for the right reasons, and we want to dig deeper into the big stories and see how AI is actually making a difference. We want listeners to learn new vocabulary and a new way of seeing and addressing some of the key topics in the field.

Our first season focused on current uses of AI and the implications for businesses big and small. For the second season, we wanted to focus on the big questions facing us as AI begins to change the world around us.

As adoption spreads, so does the need for us to understand the fundamentals of AI systems. The challenge for AI is that, unlike previous technologies, how and why it works isn’t always obvious. We call that idea explainability, the ability of an AI model to explain the reasons behind its decisions. It’s about matching the inputs of an AI system to its outputs and providing an explanation of machine decision-making that’s understandable by humans.
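To make the idea of matching inputs to outputs concrete, here is a minimal illustrative sketch. It is not any particular production system: the feature names, weights, and loan-screening scenario are all invented for illustration. A simple linear scorer like this is "explainable" in the sense the episode discusses, because every decision can be traced back to how much each input contributed.

```python
# Toy sketch of an explainable decision: a linear scoring model whose output
# can be traced back to per-feature contributions. All feature names and
# weights are hypothetical, chosen only to illustrate the idea.

def explain_decision(features, weights, threshold=0.5):
    """Score one case and report how each input drove the decision."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    # Rank inputs by the size of their influence, largest first.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, explanation

# Hypothetical loan-screening inputs.
features = {"income_ratio": 0.8, "late_payments": 2, "years_employed": 5}
weights = {"income_ratio": 1.0, "late_payments": -0.4, "years_employed": 0.1}

decision, score, explanation = explain_decision(features, weights)
print(decision, round(score, 2))
for name, contrib in explanation:
    print(f"  {name}: {contrib:+.2f}")
```

A model this simple can state its reasons directly; the research challenge the episode explores is getting comparable explanations out of far more complex systems, where the mapping from inputs to outputs is not a short list of weights.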

It might seem arcane, or a little philosophical, but explainability has big implications. Humans are asked to explain our own decision-making all the time, whether it’s a Supreme Court case or a debate over pizza toppings. Using AI for certain applications in regulated areas like healthcare or retail banking means an explanation might be legally necessary. In other cases, such as manufacturing or customer service, explanations might be key for identifying problems or optimizing AI performance.


Our first episode deals with the definitions and foundations for explainability and Explainable AI, the research field dedicated to explainability. It’s a buzzword in AI right now because machine decision-making is beginning to change the world, and people want to know how AI systems are coming to those decisions.

Nicole Rigillo, Berggruen Research Fellow at Element AI, breaks down the definition of explainability and other key ideas, including interpretability and trust. Duke University computer science professor Cynthia Rudin talks about her work on explainable models, from improving the parole risk-scoring models used in some U.S. jurisdictions to assessing seizure risk in medical patients. And Benjamin Thelonious Fels, founder of healthcare AI startup macro-eyes, explains why humans need to understand AI systems in order to trust them.

The rest of Season Two explores explainability from many different angles. We’ll talk to some of the leading policymakers working on AI governance, of which explainability is a key part. You need to understand how an AI system works if you want to build the proper laws and regulations around it. We’re also talking with leading thinkers in law, computer science, design, and beyond about human rights and ethics, human-computer interaction, the AI ecosystem, and the future of augmented intelligence. Please join us.

Check out The AI Element Podcast page for the latest episodes, show notes and links to further reading. Our new web player also lets you listen back to Season One.

Any questions or subjects you want us to cover in the show? Tweet us at @element_AI with the hashtag #theaielement, and we'll cover them in an upcoming episode.