      © Element AI 2022 all rights reserved

      The AI Element

      Can you trust AI? What does trust really mean? What do justice and ethics look like in a world shaped by algorithms?

      Listen to Element AI's #1 rated podcast The AI Element, exploring the biggest issues and toughest questions around trust and adoption of artificial intelligence.

      We talk to tech pioneers, historians, ethicists, lawmakers, and industry leaders about the challenge of building trustworthy AI. Listeners will be in the room as we connect the dots between cutting-edge science and everyday impacts. Join host Alex Shee in learning how AI has already begun to change the world — and what that means for you.

      All Episodes

      • S3E1 The AI Successes You’re Not Hearing About
      • S3B2 The 7 Sins of Enterprise AI Strategies [13 minutes]
      • S3B1 Jeremy Barnes: The 4 Personas of AI Adoption
      • S2E8 AI-Powered Search for COVID-19 Body of Knowledge
      • S2E7 Global Competition Policy and Japan’s Society 5.0
      • S2E6 Histories of AI: Ancient Greek Myths and the Last AI Boom
      • S2B3 Jonnie Penn, AI Historian: What not to optimize?
      • S2B2 An Interview with Yoshua Bengio
      • S2E5 Sustainability
      • S2E4 Making Good Jobs with AI
      • S2B1 Bonus Episode - An Interview with Neil Lawrence
      • S2E3 From Data Governance to AI Governance
      • S2E2 In Data We Trust?
      • S2E1 Opening the AI Black Box
      • S1E6 AI for Good
      • S1E5 A Future with AI
      • S1E4 Cybersecurity and Phishing Attacks
      • S1E3 What’s an AI Strategy?
      • S1E2 Startups vs. Traditional Industry
      • S1E1 What AI Can’t Do
      S3E1

      The AI Successes You’re Not Hearing About

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Taha Jaffer, Head of Wholesale Banking and Global Treasury AI at Scotiabank

      Show Notes

      At a moment when most organizations are still in fairly early stages of their AI journey, Taha Jaffer, Scotiabank’s Head of Wholesale Banking and Global Treasury AI, gives us a preview of what to expect: it’s a transformative journey, and one that’s worth taking, even though elements like data, technology and governance all take time to develop.

      Taha talks about:

      • Going from simple use cases to more sophisticated, value-creating solutions;
      • An example use case: using AI to optimize a system of international bank transfers;
      • How his team’s governance focus has grown to include getting data assets in good shape early on, and how his team approaches model testing;
      • Why you don’t hear about many AI successes in banking;
      • Plus: his 3 pieces of advice for other AI leaders.

      Further Reading:

      • AI Maturity Framework
      • AI Maturity Survey
      • How AI risk management is different
      • The value of explainable AI in financial services
      S3B2

      The 7 Sins of Enterprise AI Strategies [13 minutes]

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Jeremy Barnes, Chief Technology Officer at Element AI

      Show Notes

      This is a special, short episode with a summary of lessons complementing our full-length interview with Element AI’s CTO Jeremy Barnes on “The 4 Personas of AI Adoption”.

      A lot of Jeremy’s work involves top-level AI implementation strategy for the Global 2000, and he has recently synthesized the “7 Sins of Enterprise AI Strategies” based on the common mistakes he has observed. From managing risk to accounting reforms to cultural enablement, these “sins” also come with suggestions for how boards and C-suites can best enable their AI strategies.

      Further Reading:

      Risk vs. Impact Part 1: The 4 Personas of AI Adoption [Blog Post]

      Risk vs. Impact Part 2: The 7 Sins of Enterprise AI Strategies [Blog Post]

      AI Maturity Survey

      AI Maturity White Paper


      If you liked this special short format, or have any comments to share, please send us an email at hello@elementai.com.

      S3B1

      Jeremy Barnes: The 4 Personas of AI Adoption

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Jeremy Barnes, Chief Technology Officer at Element AI

      Show Notes

      If you’re tight on time, there is a short complementary episode with our guest in which he summarizes the key takeaways of “The 7 Sins of Enterprise AI Strategies”.

      While business leaders may know AI needs a mindset shift to get the most out of the technology, communicating what exactly needs to change is challenging. Jeremy Barnes, Element AI’s CTO, has an incredible ability to find and make sense of the connecting thread between AI technology and business.

      In this long-form interview, Jeremy talks about his initial role at Element AI as Chief Architect, where he helped develop the company’s thesis; the 4 personas of AI adoption he’s observed in the market; and the importance of fostering a collaborative culture that can experiment and change quickly around this new technology.

      If you’re curious how company leaders should think strategically about AI, this interview is for you.

      Further Reading:

      Risk vs. Impact Part 1: The 4 Personas of AI Adoption

      Risk vs. Impact Part 2: The 7 Sins of Enterprise AI Strategies

      AI Maturity Survey

      AI Maturity White Paper

      If you liked the show, please go onto your phone, or computer, or wherever you listen, and give us a rating or review. It helps a lot with getting people to find the show. We really want to hear your comments about what you like and about what you want to hear more of.

      Have a question we didn’t cover? Tweet at us at @element_ai and use the hashtag #TheAIElement, or send us an email at hello@elementai.com, and we’ll cover it in an upcoming episode.

      S2E8

      AI-Powered Search for COVID-19 Body of Knowledge

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • JF Gagné, CEO and Co-Founder of Element AI

      Show Notes

      This special episode was recorded on Monday, March 30th for the beta release of the COVID-19 research platform that leverages technology from the Element AI Knowledge Scout product.

      Research data and reports are being published at an unprecedented pace as organizations scale up their efforts to respond to COVID-19. To help clinical and scientific researchers, public health authorities and frontline workers navigate that wealth of information, we’ve adapted our Element AI Knowledge Scout platform to run semantic search on over 45,000 scholarly articles in the COVID-19 Open Research Dataset (CORD-19) released by the Allen Institute for AI.
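
      The retrieval step behind such a platform can be sketched in miniature. The snippet below uses bag-of-words cosine similarity as a simple stand-in for the learned semantic embeddings a production system like Knowledge Scout would rely on; the corpus and query are invented for illustration:

```python
import math
from collections import Counter

# Toy corpus standing in for the 45,000+ CORD-19 abstracts.
abstracts = [
    "Clinical trial of antiviral therapy for coronavirus patients",
    "Modeling the transmission dynamics of the COVID-19 pandemic",
    "Vaccine candidates targeting the coronavirus spike protein",
]

def vectorize(text):
    """Bag-of-words term counts (a real system would use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc_vectors = [vectorize(d) for d in abstracts]

def search(query, top_k=2):
    """Return the top_k abstracts ranked by similarity to the query."""
    q = vectorize(query)
    scored = sorted(zip(abstracts, (cosine(q, d) for d in doc_vectors)),
                    key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

results = search("antiviral therapy for coronavirus")
```

      A real deployment would swap the count vectors for dense embeddings from a language model, so that queries match papers by meaning rather than by shared keywords.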

      We are looking to rapidly add features and data sets that will be relevant for:

      • Scientific researchers building models of the pandemic and its impacts.
      • Public Health and Safety authorities sourcing the best practices from around the world.
      • Clinical researchers working on new therapies or vaccine trials, as well as identifying existing therapies that could provide immediate help.
      • Other data scientists searching for novel ways to connect research across the body of knowledge available on coronavirus.

      If this is you, please go to www.elementai.com/covid-research to access the tool and send us any feedback on how this can better help your work.

      Further Reading:

      Element AI Knowledge Scout for COVID-19 Research

      CORD-19 Dataset release by Allen Institute for AI

      What We’re Doing

      S2E7

      Global Competition Policy and Japan’s Society 5.0

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Yuko Harayama, Former Executive Member of Japan’s Council for Science, Technology and Innovation Policy
      • Philippe Aghion, Economist and Professor at Collège de France, The London School of Economics, and Harvard University

      Show Notes

      This week on the AI Element we zoom out to look at how AI is impacting, and is being impacted by, global economic policies. Policy is hugely important for AI’s development and implementation. Though each application of AI differs from business to business and country to country, there are often similar patterns and concerns that arise, like the fear of automation replacing jobs and of increasing inequalities. Policy makers across the globe are trying to tackle these concerns to ensure they are creating a positive outcome for all.

      This week we have two guests who teach us how AI is impacting growth, and about a radical new approach in Japan to guiding science and innovation. Our first guest, Philippe Aghion, is a world-renowned economist who tells us about the impact of AI on economic growth and why that growth may not be shared equally. His interview is a lesson in why innovation today, without competition, will lead to less innovation in the future.

      Our second guest, Yuko Harayama, co-wrote Japan’s five-year plan for technology and innovation and was a leader in developing Japan’s national AI strategy. She tells us about Japan’s radical new approach to technology policy: rather than setting a strict roadmap for technology development, it lays out a vision of society for technology developers to work toward. She calls it Society 5.0.

      00:37 - Intro

      02:51 - AI’s Impact on Economic Growth

      05:36 - Aghion, Jones & Jones - Artificial Intelligence and Economic Growth

      06:27 - Aghion, Bergeaud, Boppart, Klenow, Li - Theory of Falling Growth and Rising Rents

      06:56 - AI, Superstar Firms and Competition

      13:03 - G7 ministers ‘agree in principle’ on deal taxing digital giants

      13:56 - W.T.O. Allows China to Impose Trade Sanctions on U.S. Goods

      14:36 - Acemoglu & Restrepo - Robots and Jobs: Evidence from US Labor Markets

      14:36 - The Fall of the Labor Share and the Rise of Superstar Firms

      16:39 - AI and Innovation

      17:42 - Baumol’s cost disease

      18:55 - Gaby Aghion, fashion designer: Co-founder of Chloe House, which revitalized French fashion in the 1950s

      19:54 - Society 5.0

      19:54 - Artificial Intelligence Technology Strategy

      19:54 - Government of Japan - The 5th Science and Technology Basic Plan

      20:45 - Society 5.0 Powerpoint

      29:29 - Society 5.0 and Global Policy

      29:45 - OECD - Artificial Intelligence

      31:53 - United Nations Activities on Artificial Intelligence (AI)

      33:31 - Optimism and Persistence

      33:46 - Encyclopedia Britannica - Pangloss

      Additional Links

      TED Talk - Yuko Harayama - Why Society 5.0

      S2E6

      Histories of AI: Ancient Greek Myths and the Last AI Boom

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Adrienne Mayor
      • Ronjon Nag

      Show Notes

      For our first episode in the new decade, we thought it would be good to continue digging into how AI has developed over time. Learning about the roots of AI reminds us that the north star of this field has always been what we tend to call artificial general intelligence today: intelligence that reflects the full breadth of human intelligence. That context shows why the recent breakthroughs have been so significant, and why there is still so far to go. On this week’s episode of The AI Element we are joined by two guests who share two very different histories of AI: one of its ancient roots, the other of the contemporary challenges of operationalizing it for mass use.

      Adrienne Mayor is a historian and research scholar at Stanford University whose recent work focuses on the earliest imaginings of AI in ancient myths. She shares some insights from ancient Greek myths like Homer’s Iliad and writings by Aristotle that show that AI and AGI have long been part of the human imagination.

      Ronjon Nag reflects on the history of AI through his own experience. An inventor and distinguished Careers Research Fellow at Stanford, he has pioneered neural-net applications since the 1980s, developing some of the first speech and handwriting recognition software. He talks about the development of AI applications over the past four decades: though we’ve come a long way, there is still a long way to go.

      00:55: Jonnie Penn, AI Historian: What not to optimize

      2:00: Adrienne Mayor - Stanford University

      2:17: Gods and Robots: Myths, Machines and Ancient Dreams of Technology

      13:40: Talos Missile

      14:13: TALOS (uniform) - Wikipedia

      17:53: Harvard Divinity School

      19:57: Ronjon Nag - Stanford University

      20:30: Computers That Learn by Doing - Fortune Magazine

      21:59: How William Shatner Changed the World - Martin Cooper, mobile phone inventor - Youtube

      28:37: Google DeepMind

      32:24: SpiNNaker Project

      34:15: Grammatik - Wikipedia

      34:38: Grammarly

      36:13: The Boundaries of Humanity Project

      S2B3

      Jonnie Penn, AI Historian: What not to optimize?

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Jonnie Penn

      Show Notes

      In this bonus interview, historian, researcher and former TV star Jonnie Penn is joined by our Head of Public Policy and Government Relations Marc-Etienne Ouimette to talk about the history of AI. In the interview, Jonnie looks to the past to help answer a number of important questions about the future of AI. For instance, what parts of our social system do we not want to optimize? Who does this technological progress actually benefit? And how can more young people get involved in decision making processes surrounding tech?

      1:19 - Jonnie Penn

      1:25 - The Buried Life

      1:54 - Berkman Klein Center

      2:05 - MIT Media Lab

      4:22 - Machines Who Think - Pamela McCorduck

      23:21 - ‘Don’t Join a Union, Pop a Pill’ - Katrina Forrester

      26:03 - The troubling case of the young Japanese reporter who worked herself to death - Washington Post

      27:14 - Germinal (novel) - Wikipedia

      32:08 - The Cybernetic Brain - Andrew Pickering

      42:52 - Marvin Minsky - Wikipedia

      44:40 - Jonnie Penn - Twitter

      Other links:

      Jonnie Penn Publications - Berkman Klein Center

      What History Can Tell Us About the Future of Artificial Intelligence - TEDx Talks

      S2B2

      An Interview with Yoshua Bengio

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Yoshua Bengio, Co-Founder of Element AI and Scientific Director of the Montreal Institute for Learning Algorithms (MILA)

      Show Notes

      Can AI be used to help solve Climate Change? If so, how?

      In this bonus interview, world-renowned machine learning researcher Yoshua Bengio joins host Alex Shee to talk about AI’s role in solving the climate crisis. Segments of this interview were featured in our episode about sustainability, in which he shared some examples of how AI is being used to mitigate and adapt to climate change. In the full-length interview he shares his personal motivations for getting involved in climate action and goes in depth on his cross-disciplinary work to address what he sees as one of the world’s greatest existential risks.

      2:03 - Mila - Yoshua Bengio’s Lab

      2:04 - IVADO - The Institute for Data Valorization

      3:11 - Montreal Declaration for Responsible AI

      7:55 - Tackling Climate Change with Machine Learning - Rolnick et al., arXiv

      11:19 - Mila - AI for Humanity

      11:30 - Mila - Climate Change

      13:31 - Visualizing the Consequences of Climate Change Using Cycle-Consistent Adversarial Networks - Schmidt et al., arXiv

      16:35 - GANS (Generative Adversarial Network) - Wikipedia

      16:58 - Help the planet by uploading your pictures of flooded houses or buildings - Mila

      19:10 - Use machine learning to find energy materials - Nature

      23:20 - French-language federal leaders debate 2019 - Maclean’s

      24:23 - MIT CSAIL Alliances Podcast

      27:16 - Greta Thunberg - Wikipedia

      31:48 - AI Commons



      Additional Links

      Yoshua Bengio - Google Scholar

      Climate Change: How Can AI Help? - Alexandre Lacoste

      Climate Change AI

      EAI Orkestrator - Optimize the use of your compute and storage resources

      S2E5

      Sustainability

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Urvashi Aneja, Co-founder of Tandem Research
      • Sherif Elsayed-Ali, Director of Partnerships
      • Yoshua Bengio, Co-Founder of Element AI and Scientific Director of the Montreal Institute for Learning Algorithms (MILA)

      Show Notes

      This week we’re exploring if and how AI can help build a sustainable future. From solving climate change to improving health care, AI is being seen as a technology that can solve some of the world’s biggest problems. But can AI really save us? How can we be sure that AI for Good initiatives are actually helping the people they’re trying to reach? What role can, or should, AI practitioners play in finding solutions?

      Urvashi Aneja explains why we should be skeptical of AI for Good initiatives that claim to be a cure-all, and she shows how to start thinking constructively about doing better. Sherif Elsayed-Ali shows how AI for Good can help scale the positive impact of human rights organizations, and why we need to expand our current understanding of human rights. Yoshua Bengio reflects on his recent research into the different ways AI can be used to mitigate and adapt to a changing climate. Yoshua also shares why he decided to use his machine learning expertise to tackle climate change, and how others in the machine learning community can help, too.

      • AI for Good Summit
      • UN Sustainable Development Goals
      • Tandem Research
      • AI for All: 10 Social Conundrums for India: Working Paper - Tandem Research
      • Artificial Intelligence apps risk entrenching India’s socio-economic inequities
      • India’s healthcare: Private vs public sector - Aljazeera
      • Sherif Elsayed-Ali - Twitter
      • Amnesty Tech - Twitter
      • Universal Declaration of Human Rights
      • Training a single AI model can emit as much carbon as five cars in their lifetime - MIT Technology Review
      • Tackling Climate Change with Machine Learning - Bengio et al., arXiv
      • Visualizing the Consequences of Climate Change Using Cycle-Consistent Adversarial Networks - Schmidt et al., arXiv
      • Use machine learning to find energy materials - Nature

      Additional Links

      • AI-enabled human rights monitoring - Sherif Elsayed-Ali & Tanya O'Carroll
      • Climate Change: How Can AI Help? - Alexandre Lacoste
      S2E4

      Making Good Jobs with AI

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Daron Acemoglu, Institute Professor at MIT
      • Karthik Ramakrishnan, Head of Advisory at Element AI

      Show Notes

      Will AI take our jobs? AI’s main application is in the workplace, across all levels of skill and pay. Critics worry that this could lead to job loss, but as with any new technology that changes work, the outcome depends on how we implement it. How, then, can we create AI products that enhance our capacity for work rather than replace it?


      MIT Institute Professor Daron Acemoglu sheds light on AI’s impact on the job market and how it could help low-skilled and high-skilled workers alike. He also breaks down how, if we implement AI properly, it could expand the labour market and reorganize the way we work. Karthik Ramakrishnan, Global Head of Advisory and Enablement at Element AI, talks about how to implement AI successfully in organizations. The trick: bring workers into the process.

      1:07 - Daron Acemoglu - MIT

      1:20 - Why Nations Fail by Daron Acemoglu

      1:20 - Automation and New Tasks: How Technology Displaces and Reinstates Labor

      1:39 - Computer and Dynamo: The Modern Productivity Paradox in a Not-Too-Distant Mirror

      7:34 - Karthik Ramakrishnan - Twitter

      10:37 - The four pillars of intelligent AI adoption - Karthik Ramakrishnan

      13:25 - Building a strategic AI roadmap for your business - Karthik Ramakrishnan

      18:44 - The Twenty Year History Of AI At Amazon - Forbes

      20:34 - The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand

      21:49 - It’s good jobs, stupid - Daron Acemoglu

      22:55 - The Future of Work? Work of the Future! - European Commission Report


      Further Reading:

      Artificial Intelligence, Automation and Work - Daron Acemoglu

      The Four Pillars of Intelligent AI Adoption - Karthik Ramakrishnan

      The Revolution Need Not Be Automated - Daron Acemoglu, Pascual Restrepo

      The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand - Daron Acemoglu, Pascual Restrepo

      Next-Generation Digital Platforms: Toward Human–AI Hybrids - MIS Quarterly (PDF)

      How To Become A Centaur - MIT Press

      Know Your Customers’ “Jobs to Be Done” - HBR

      S2B1

      Bonus Episode - An Interview with Neil Lawrence

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Neil Lawrence, Professor of Machine Learning at the University of Sheffield

      Show Notes

      What is data feudalism? Should machines adapt to us or should we adapt to machines? How can we reinstate agency and control when it comes to our personal data?

      In this bonus episode, Neil Lawrence, Professor of Machine Learning at the University of Cambridge, joins Element AI’s Head of Public Policy and Government Relations Marc-Etienne Ouimette to answer these questions and many more. Neil was featured in a previous episode of The AI Element, “In Data We Trust?”, in which he spoke about data trusts and data protection. In this extended interview he shares more of his thoughts on the future of AI and the growing data divide.

      1:04 - The Alan Turing Institute - Professor Neil Lawrence

      1:34 - Cambridge appoints first Deepmind professor of machine learning

      2:07 - Jonnie Penn

      2:25 - AI for social good workshop

      3:02 - Isaac Asimov’s Foundation - Wikipedia

      12:24 - Data Trusts could allay our privacy fears - The Guardian

      23:05 - Sylvie Delacroix - Twitter

      23:09 - Bottom-Up Data Trusts: Disturbing the ‘One Size Fits All’ Approach to Data Governance - Sylvie Delacroix and Neil Lawrence

      Other Readings

      Data trusts: reinforced data governance that empowers the public - Element AI

      Data Trusts - Inverse Probability

      Inverse Probability - Neil Lawrence Blog

      Talking Machines Podcast - Neil Lawrence Podcast

      S2E3

      From Data Governance to AI Governance

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Tanya O’Carroll, Director of Amnesty Tech at Amnesty International
      • Alix Dunn, Founder and Director of Computer Says Maybe
      • Jesse McWaters, Financial Innovation Lead at World Economic Forum
      • Richard Zuroff, Director of AI Advisory and Enablement at Element AI

      Show Notes

      AI is a powerful tool and with that power comes a great deal of responsibility. How can we be sure that we’re in control of AI systems? And what should the governance look like?

      Data governance is an existing practice that covers a lot of good ground because of how integral data is to AI’s functioning. However, AI’s ability to learn and evolve over time means it will adapt to changes in its environment based on its given objective. That dynamic relationship between environment and model makes things like the design of the system and its objectives just as integral as the data the model runs on. Managing the risks of these new, dynamic systems has been widely branded as “AI Governance”.
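
      That dynamic relationship is why AI governance adds ongoing monitoring on top of data governance. As a minimal sketch of one such check, the snippet below flags a model for review when a feature’s live distribution drifts away from its training-time baseline; the data and threshold are illustrative, not any particular organization’s policy:

```python
import statistics

def drift_score(baseline, live):
    """How many baseline standard deviations the live mean has shifted."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0

# Feature values captured when the model was trained and approved.
baseline = [0.90, 1.10, 1.00, 0.95, 1.05]

stable = [1.00, 0.98, 1.02]    # production traffic that still matches training
shifted = [2.10, 2.30, 1.90]   # production traffic after the environment changed

THRESHOLD = 3.0  # escalate to a governance review beyond 3 sigma
ok_score = drift_score(baseline, stable)      # well under the threshold
alert_score = drift_score(baseline, shifted)  # far over it
```

      Checks like this run continuously after deployment, which is exactly the kind of oversight static data governance alone does not provide.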

      Richard Zuroff breaks down the concept of AI governance and how it differs from data governance. Tanya O’Carroll and Alix Dunn tell us about the importance of governance in protecting human rights when building AI systems. Jesse McWaters shares his insights on AI’s impact on the financial sector and why a new form of governance will soon be necessary.

      00:48 - How AI risk management is different and what to do about it - Element AI

      05:07 - All the Ways Hiring Algorithms Can Introduce Bias - HBR

      06:45 - The Why of Explainable AI - Element AI

      07:37 - Amnesty Tech - Twitter

      07:39 - Computer Says Maybe

      07:52 - The Engine Room

      10:18 - UN Guiding Principles on Business and Human Rights

      14:18 - The Matthew Effect - Wikipedia

      15:00 - Agile Ethics - Medium

      17:48 - Human Rights Due Diligence

      20:07 - The New Physics of Financial Services - World Economic Forum

      24:30 - Consumer Financial Protection Bureau

      29:00 - GDPR

      Other Reading:

      Putting AI Ethics Guidelines to Work - Element AI

      AI-Enabled Human Rights Monitoring - Element AI

      New Power Means New Responsibility: A Framework for AI Governance - JF Gagné

      Podcast: Opening the AI Black Box - Element AI

      S2E2

      In Data We Trust?

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Ed Santow, Australia’s Human Rights Commissioner
      • Christina Colclough, Director of Platform and Agency Workers, Digitalisation and Trade at UNI Global Union
      • Neil Lawrence, Professor of Machine Learning at the University of Sheffield

      Show Notes

      We don’t have enough control over our data: how it is collected, by whom, and what it’s used for. We’re used to hitting “accept” on whatever agreement is required to use the online platforms, mobile apps and other digital services that run our daily lives. Yet public awareness of the importance of privacy and data control is growing. Major data breaches and scandals over the misuse of data have exposed the failures of private-sector self-regulation.

      Now, governments and policymakers are stepping in with efforts to address the power imbalance between consumers and big companies when it comes to data. It’s about time — the impact of artificial intelligence could exacerbate that power imbalance, and help the data-rich get richer. Element AI’s Marc-Etienne Ouimette spoke with some of those leading the charge around taking back control of our data and the notion of data trusts — think a union, but for your data.

      03:54 - NSW police may be investigated for ‘secret blacklist’ used to target children - The Guardian

      08:30 - 94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour - The Conversation

      09:16 - Click to agree with what? No one reads terms of service, studies confirm - The Guardian

      10:57 - Up for Parole? Better Hope You’re First on the Docket - The New York Times

      15:09 - The GDPR Covers Employee/HR Data and It's Tricky - Dickinson Wright

      18:01 - Companies are trying to test if they can make employees wear fitness trackers - Business Insider

      20:03 - Silicon Valley & the Netherlands: Drivers of the future of automation - Netherlands in the USA

      22:14 - Data Trusts - Neil Lawrence, inverseprobability.com

      23:50 - Data trusts could allay our privacy fears - The Guardian

      31:14 - Data trusts: reinforced data governance that empowers the public - Element AI

      Further Reading

      What is a data trust? - Open Data Institute

      Data trusts: reinforced data governance that empowers the public - Element AI

      Uncertainty and the Governance Dilemma for Artificial Intelligence - Dan Munro

      Governing AI: Navigating Risks, Rewards and Uncertainty - Public Policy Forum

      Anticipatory regulation - Nesta

      Disturbing the ‘One Size Fits All’ Approach to Data Governance: Bottom-Up Data Trusts - Sylvie Delacroix & Neil Lawrence

      Human Rights and Technology Issues - Australian Human Rights Commission

      The Civic Trust - Sean McDonald & Keith Porcaro

      Facebook’s privacy policy is longer than the US Constitution - The Next Web

      S2E1

      Opening the AI Black Box

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Nicole Rigillo, Berggruen Research Fellow at Element AI
      • Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University
      • Benjamin Thelonious Fels, Founder of AI healthcare startup Macro-Eyes

      Show Notes

      “Explainability” is a big buzzword in AI right now. AI decision-making is beginning to change the world, and explainability is about the ability of an AI model to explain the reasons behind its decisions. The challenge for AI is that unlike previous technologies, how and why the models work isn’t always obvious — and that has big implications for trust, engagement and adoption.

      Nicole Rigillo breaks down the definition of explainability and other key ideas including interpretability and trust. Cynthia Rudin talks about her work on explainable models, improving the parole-calculating models used in some U.S. jurisdictions and assessing seizure risk in medical patients. Benjamin Thelonious Fels says humans learn by observation, and that any explainability techniques need to take human nature into account.
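
      The rule lists Cynthia works with (such as CORELS, linked in the show notes below) make the model itself the explanation: a prediction is justified by the first rule that fires. Here is a minimal sketch of the idea, with hypothetical features and thresholds; this is not the CORELS learning algorithm, which searches for a certifiably optimal rule list:

```python
# A hand-written rule list. Each entry: (human-readable rule, predicate, label).
# Features and thresholds are hypothetical, for illustration only.
RULES = [
    ("prior_offenses > 3",
     lambda x: x["prior_offenses"] > 3, "high risk"),
    ("age < 25 and prior_offenses > 0",
     lambda x: x["age"] < 25 and x["prior_offenses"] > 0, "high risk"),
]

def predict(x):
    """Walk the rule list; the first matching rule is both prediction and explanation."""
    for description, condition, label in RULES:
        if condition(x):
            return label, description
    return "low risk", "no rule matched (default)"

label, explanation = predict({"age": 22, "prior_offenses": 1})
```

      The appeal for high-stakes decisions like parole is that the explanation is exact: it is the model, not a post-hoc approximation of a black box.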

      01:11 - Facebook Chief AI Scientist Yann LeCun says rigorous testing can provide explainability

      01:58 - Berggruen Institute, Transformation of the Human Program

      05:34 - Judging Machines. Philosophical Aspects of Deep Learning - Arno Schubbach

      06:31 - Do People Trust Algorithms More Than Companies Realize? - Harvard Business Review

      08:25 - Introducing Activation Atlases - OpenAI

      10:52 - Learning certifiably optimal rule lists for categorical data (CORELS) - YouTube

      11:00 - CORELS: Learning Certifiably Optimal RulE ListS

      11:45 - Stop Gambling with Black Box and Explainable Models on High-Stakes Decisions

      16:52 - Transparent Machine Learning Models for Predicting Seizures in ICU Patients - Informs Magazine Podcast

      19:49 - The Last Mile: Challenges of deployment - StartupFest Talk

      24:41 - Developing predictive supply-chains using machine learning for improved immunization coverage - macro-eyes with UNICEF and the Bill and Melinda Gates Foundation

      Further Reading

      A missing ingredient for mass adoption of AI: trust - Element AI

      Breaking down AI’s trustability challenges - Element AI

      The Why of Explainable AI - Element AI

      S1E6

      AI for Good

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Rediet Abebe, Co-Founder and Co-Organizer of Black in AI and of Mechanism Design for Social Good
      • Charles C Onu, Founder and AI Research Lead at Ubenwa

      Show Notes

      Charles C Onu is using AI to detect birth asphyxia in babies. His story is inspiring because of its impact on society and the field of healthcare (in 2016, 1,000,000 babies died from asphyxia), but also because of his humble beginnings. In this episode, Charles shows us that a passion for solving problems can help you overcome many obstacles.

      Host Alex Shee also sits down with Rediet Abebe, Co-Founder and Co-Organizer of Black in AI, to expand on how others are using AI to change not just their industry, but the world.

      S1E5

      A Future with AI

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Daniel Gross, Partner at Y Combinator and Head of the AI Track
      • Natacha Mainville, Chief Innovation Officer at TandemLaunch
      • Jordan Fisher, CEO at Standard Cognition
      • JF Gagné, CEO and Co-Founder of Element AI

      Show Notes

      Many have dystopian projections of what our future with AI will look like, but professionals working in AI see things differently. For some, our future with AI may simply mean more free time and cheaper access to quality services.

      We check in with Jordan Fisher, Daniel Gross, Natacha Mainville and JF Gagné, who together paint a picture of what a not-so-distant future might look like, especially in retail and insurance. They may not know exactly what the year 2050 will hold, but they are hopeful.

      S1E4

      Cybersecurity and Phishing Attacks

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Frederic Michaud, Former Director at Element AI
      • Oren Falkowitz, CEO of Area 1 Security

      Show Notes

      Every touchpoint with a prospect is an opportunity to nurture that relationship, but also a potential entry point for hackers. Given that cybercriminals are more resourceful than ever, cybersecurity experts need to be just as sharp.

      Oren Falkowitz is combining past experience at the NSA and US Cyber Command with AI to combat phishing attacks worldwide. In this episode, Alex Shee speaks to him and cybersecurity expert Frederic Michaud about how AI is currently being used to make businesses safer.

      S1E3

      What’s an AI Strategy?

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Naomi Goldapple, Former Program Director at Element AI
      • Chris Benson, Chief Scientist for Artificial Intelligence & Machine Learning at Honeywell

      Show Notes

      Successful adopters of AI develop an AI-first strategy supporting all functional areas of the business: marketing, product development, customer support, sales, and beyond. What does this look like in practice? Naomi Goldapple, who regularly consults with executives on AI strategy, provides some insight.

      Alex also talks to Chris Benson who was hired at Honeywell to inject AI into the traditional-but-transforming manufacturing and logistics space. He shares some case studies of AI transformation and touches on the pervasive fear of job loss.

      S1E2

      Startups vs. Traditional Industry

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • JF Gagné, CEO and Co-Founder of Element AI
      • Natacha Mainville, Chief Innovation Officer at TandemLaunch

      Show Notes

      As AI seeps into every industry, businesses are being forced to adapt. Old school industries may not be as lean or quick to pivot as startups, but they have access to a motherlode of funding and data. Still, red tape and outdated infrastructure may block them from the timely AI transformation they need to stay afloat tomorrow.

      Alex Shee speaks with serial AI entrepreneur JF Gagné about this tension between startups and more corporate environments. Then, 15-year veteran of the insurance industry Natacha Mainville shares some real-world examples of how AI is flipping the industry on its head, forcing incumbents to keep up.

      S1E1

      What AI Can’t Do

      • Apple Podcasts
      • Google Podcasts
      • Spotify
      • Stitcher

      Episode Guests

      • Yoshua Bengio, Co-Founder of Element AI and Scientific Director of the Montreal Institute for Learning Algorithms (MILA)
      • Daniel Gross, Partner at Y Combinator and Head of the AI Track

      Show Notes

      Societal hype around AI is a byproduct of a few recent scientific breakthroughs — speech recognition, computer vision, natural language processing — in short, a computer’s ability to acquire human senses and mimic the human brain.

      Element AI co-founder Yoshua Bengio, world-renowned professor and head of the Montreal Institute for Learning Algorithms, has been at the front lines of the Deep Learning Revolution that has enabled this kind of innovation. In this episode, he gives an overview of where the tech is actually at: how close is it to mirroring human senses?



      • Privacy Policy
      • Analytics and Cookies policy
      • Service providers
      • Terms and Conditions

      • Podcast
      • Careers
      • Facebook
      • LinkedIn
      • Twitter
      • Instagram
      • Medium
      Element AI, the “EAI” logo and “EAI” are trademarks (registered, deposited or trade names) of Element AI, Inc. All rights reserved. © 2016-2020 Element AI Inc.