Collaborating with The Rockefeller and Mozilla Foundations to close the “human rights gap” in AI governance
Phil Dawson
October 31, 2019

If artificial intelligence (AI) is to be designed, developed and deployed in service of the public good, it must begin by respecting fundamental rights and freedoms. With AI systems being used to spread online hate speech and disinformation, target population groups through facial recognition, or create social credit scoring systems, it is clear that corporate self-regulation and ethical principles alone are insufficient.

A number of experts have argued that applying the human rights framework to AI governance, with its emphasis on law, rights, accountability and remedy, could help fill this gap.1 And while few will contest the importance of protecting rights like privacy, equality, non-discrimination, and freedom of expression and opinion, for many governments and companies the precise role the human rights framework should play in governing AI has remained unclear. Is it the rightful substitute for self-regulatory approaches rooted in ethics? Or is it a reinforcing complement that provides a legal foundation? What do we need to do to operationalize a human rights approach to governing AI?

With The Rockefeller Foundation and the Mozilla Foundation as partners, Element AI recently convened a workshop with global experts from the fields of human rights, law, ethics, public policy, and technology to respond to some of these questions.* This group examined:

  • the intersecting roles of human rights law and ethics in AI governance;
  • whether existing public institutions and laws are capable of addressing AI’s human rights risks;
  • how industrial policy and AI investors can encourage a market that favours the design, development, and deployment of responsible AI;
  • how governments can incentivize proactive governance practices such as human rights impact assessments, due diligence or human rights-by-design;
  • the potential for innovative data governance mechanisms, such as data trusts, to enable the protection of human rights and provide meaningful accountability;
  • the need for international, multi-stakeholder collaboration to address AI’s human rights risks.

The workshop led to consensus on a number of these issues, and we were able to develop a series of concrete institutional, regulatory, and governance recommendations. We will use these recommendations to design a toolkit of policy options to help States, companies, and other stakeholders make progress towards delivering rights-respecting AI. A report summarizing the workshop’s discussions, along with the toolkit of recommendations for AI governance, will be released at the 2019 Internet Governance Forum, being held from November 25 to 29, 2019 in Berlin, Germany.

The team at Element AI would like to thank each of the participants for their thoughtful contributions to this work, as well as our generous partners, the Mozilla Foundation and The Rockefeller Foundation, without whom this multi-stakeholder convening would not have been possible.

This effort builds on Element AI’s participation in the development of the European Commission High-Level Expert Group’s Ethics Guidelines for Trustworthy AI and the Organisation for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence, and it reflects the company’s ongoing commitment to developing an enabling environment for AI deployment that is socially beneficial and sustainable in the long term. We are excited to continue this work with the Mozilla Foundation as a new partner and collaborator in building responsible AI.

If you would like to know more about this project, please send an email to philip.dawson@elementai.com.


*Workshop attendees included representatives from The Rockefeller Foundation, the Mozilla Foundation, the Office of the United Nations High Commissioner for Human Rights, the Council of Europe, the Australian Human Rights Commission, the Stanford Global Digital Policy Incubator, Data & Society, Access Now, the Harvard Kennedy School Carr Center for Human Rights Policy, the Alan Turing Institute, Microsoft Corporation, Article One Advisors, and the Berggruen Institute. Residents of The Rockefeller Foundation’s AI thematic month were also invited to share their expertise.

------------------------------------------------------------------------------------------

1 Latonero, M., Governing Artificial Intelligence: Upholding Human Rights & Dignity, Data & Society, October 10, 2018; Access Now, Human Rights in the Age of Artificial Intelligence, November 8, 2018; McGregor, L., Murray, D., & Ng, V., “International Human Rights Law as a Framework for Algorithmic Accountability,” International and Comparative Law Quarterly, 68(2), 2019, pp. 309-343; Donahoe, E. & Metzger, M., “Artificial Intelligence and Human Rights,” Journal of Democracy, 30(2), 2019, pp. 115-126; Council of Europe, Responsibility and AI: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, October 9, 2019; Australian Human Rights Commission & World Economic Forum, White Paper: Artificial Intelligence: Governance and Leadership, 2019.