Supporting rights-respecting AI
Phil Dawson
November 26


Artificial intelligence is expected to generate significant social and economic gains, from transforming enterprise productivity to advancing the Sustainable Development Goals. At the same time, recurring revelations of the problematic impacts of AI, such as in the criminal justice system, predictive policing, public benefits systems, targeted surveillance programs, advertising and disinformation, have highlighted the extent to which the misuse of AI poses an existential threat to universal rights, including privacy, equality and non-discrimination, freedom of assembly, freedom of expression, freedom of opinion, freedom of information and, in some cases, even democracy.


Over the last few years, a series of policy initiatives1 has made progress on AI governance, yet too few have addressed AI's human rights impacts, despite a growing body of research and proposals from leading scholars, civil society and international organizations.2 In October 2019, Element AI partnered with the Mozilla Foundation and The Rockefeller Foundation to convene a workshop on the human rights approach to AI governance and to determine what concrete actions could be taken in the short term to help ensure that respect for human rights is embedded in the design, development and deployment of AI systems. Global experts from the fields of human rights, law, ethics, public policy and technology participated. This report summarizes the workshop discussions and includes the recommendations that emerged from the meeting.

In this report:

The report's principal recommendations include making human rights due diligence and human rights impact assessments, grounded in the Universal Declaration of Human Rights and the United Nations Guiding Principles on Business and Human Rights, a regulatory requirement: first, in the context of the public procurement of AI systems; and second, in private sector contexts where AI's human rights risks may significantly affect individuals' financial interests, personal health or well-being, or where minors are concerned. The report recommends that industrial policy support this approach, for instance through tailored direct spending programs that help ensure the design and technological foundations of rights-respecting AI, such as transparency, explainability and accountability, are firmly established in key sectors.

The report also examines the potentially transformative role that a group of investors could play in shaping a new ecosystem of technology companies. Finally, the report recommends that governments undertake a dedicated capacity-building effort to accelerate understanding of how the existing legislative and regulatory framework can be applied to ensure respect for human rights, and to identify gaps where adjustments may be necessary. This could be accomplished through the creation of a new Centre of Expertise on AI, which could serve as a source of policy expertise, education and capacity building for government policymaking departments, regulatory agencies, industry and civil society.

Download the report

Why it matters:

Human rights risks in AI undermine the technology's potential to deliver positive social and economic returns. This report contributes to a growing number of efforts focused on closing the "human rights gap" in AI governance.

There is still much to do. If you would like to know more about this project or have ideas for collaboration, please send an email to philip.dawson@elementai.com.

------------------------------------------------------------------------------------------

1These include national AI strategies; ethical frameworks such as the Montreal Declaration for the Responsible Development of Artificial Intelligence, which was developed through extensive civic consultation; the development of technical standards by standards development organizations such as the IEEE and ISO; the European Commission's High-Level Expert Group on AI's Ethics Guidelines for Trustworthy AI and its Policy and Investment Recommendations for Trustworthy AI; a series of projects led by the World Economic Forum and the Partnership on AI to guide the design, development and deployment of ethical or responsible AI; and the OECD AI Policy Observatory, which the Organisation for Economic Co-operation and Development (OECD) is preparing to launch following the development of its G20-endorsed Principles on Artificial Intelligence, to help countries "encourage, nurture and monitor the responsible development of trustworthy artificial intelligence (AI) systems for the benefit of society".

2Latonero, M. (2018). Governing Artificial Intelligence: Upholding Human Rights & Dignity. Data & Society, October 10, 2018; Access Now (2018). Human Rights in the Age of Artificial Intelligence, November 8, 2018; McGregor, L., Murray, D., & Ng, V. (2019). International Human Rights Law as a Framework for Algorithmic Accountability. International and Comparative Law Quarterly, 68(2), 309-343; Donahoe, E., & Metzger, M. (2019). Artificial Intelligence and Human Rights. Journal of Democracy, 30(2), 115-126; Council of Europe (2019). Responsibility and AI: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, October 9, 2019; Australian Human Rights Commission & World Economic Forum (2019). White Paper: Artificial Intelligence: Governance and Leadership.