AI-enabled human rights monitoring
Sherif Elsayed-Ali
October 17


Update July 2nd, 2020: Some of the proposals here have since been made reality in our collaboration with Amnesty International on their Decode Darfur project. You can find a summary of the technical work here.

This blog post was written with Tanya O’Carroll of Amnesty Tech and comes from the foreword to our new white paper on the potential impact that AI technologies could have on the field of human rights monitoring, developed collaboratively with Amnesty International.

You can see the white paper here.

Artificial intelligence is a very powerful technology, that much is certain. It is also widely misunderstood and misrepresented. It is not a magic bullet; it cannot solve problems on its own or replace human ingenuity and adaptiveness. Today’s AI, however advanced, is not a replacement for people. What it can do well is augment people’s ability to do their jobs, helping them work better, faster and at a scale not possible before. Used well, AI can have a transformative impact.

When AI fails to live up to people’s expectations, it is more often than not because it has been hyped up. In a world where popular discourse on AI is dominated by images of robots becoming sentient and rebelling against humanity, or, somewhat more benignly, making all jobs redundant, it’s easy for both fear and awe of AI to grow out of proportion.

We have been involved at the intersection of technology and human rights for years and between us have had our fair share of disappointments and missteps. The field of “technology-for-good” has been plagued by several problems, many of them mirroring the problems faced by new technologies in business. Others are very specific to the field. Here we briefly list key ones:

  • Technology as a savior: this is the misbelief that new technology can easily solve complex problems. Yes, new technology can make things happen that were previously not possible, but when problems involve complex social, legal and economic systems, even the most sophisticated technological application will, by itself, fail every time.
  • Going from proof of concept (PoC) to full-scale deployment: a PoC and/or a small pilot is often how technology-for-good applications start. They serve the main purpose of demonstrating what is technologically possible, but even when they are successful, most organizations struggle to turn them into fully deployed applications. The jump from PoC to full deployment, in terms of human resources, funding and infrastructure, is often not well anticipated or planned for. Equally, while funders may be enthusiastic about providing seed funding, they often prove reluctant to support full deployment or scaling. A great example of this is Amnesty International’s Panic Button project, which we did a full post-mortem of here.
  • Failure to integrate: new technological applications usually require new processes and can be disruptive. When done well, they result in higher productivity and efficiency. But the change in workflows and dependencies needs to be well-managed for deployment to succeed. Failing to do that will usually frustrate users, who will ultimately find ways to avoid the new technology.

AI can bring important benefits to the human rights field. Through the collaboration that resulted in this paper, we have strived to identify key opportunity spaces for the use of AI in the field of human rights monitoring, and outline ambitious but feasible initiatives for new approaches to human rights monitoring that leverage the availability of massive amounts of data and the power of machine learning techniques.

The world is facing big human rights challenges: the climate crisis, the spread of racist and xenophobic politics, intractable conflicts and extreme poverty, to name a few. AI will not solve these problems, but it can contribute to solutions.

Ultimately, we would like to see high quality, reusable, scalable AI-enabled tools that make a significant contribution to the work of human rights practitioners globally. This will need collaboration between the human rights community, AI and machine learning researchers, private and public sector organizations, and funders.

Leveraging the potential of AI for human rights protection necessitates a great degree of care to ensure that the development and use of such applications respects human rights. As with any new field, there is as yet no existing template or rulebook for how to do this, but an evolving set of principles and guidelines offers a way forward.

We have only scratched the surface of what’s possible. We invite human rights and AI researchers to build on and improve the ideas in this paper.

Tanya O’Carroll is Director of Amnesty Tech at Amnesty International

Sherif Elsayed-Ali is Director of AI for Good Partnerships at Element AI