AI and human rights: taking on the real risks of AI
Sherif Elsayed-Ali
August 13 · 9 min read

If you only read the news, you would think that AI has a mind of its own. The risks that make headlines seem pulled from sci-fi movies, not the actual practice of modern AI, and that makes us focus on the wrong problems and solutions. To properly address the risks of AI, we need to focus on the people and organizations putting it to use and the policymakers designing the future of AI regulation.

First, let’s state one critical point: an AI system is not an independent actor. AI is software, software that can improve with time and respond to its environment. It can be more powerful and more effective at many tasks than traditional software, but both are designed and developed by humans. AI is a tool, and the biggest risks lie in how that tool is being used. And AI is not our first decision-making system — it shares the risks of other approaches, including rules-based software and human intuition.

From a human rights perspective, some AI applications pose specific challenges: deepfakes (fake video, audio and images altered to seem real) and lethal autonomous weapons (such as drones equipped with the ability to select and fire on targets). Addressing them, however, is no different from how we have addressed similar challenges in the past.

The real risks of AI

Despite the popular conception, modern AI systems don’t look anything like their science-fiction counterparts. There are no robots resembling Arnold Schwarzenegger, no all-knowing androids straight out of Star Trek. If you see something that looks and acts like a Hollywood-movie android, it is exactly that: an expensive and elaborate parlour trick. Those concerns are far-fetched, but there are some very real risks with AI.

Bias is a significant concern for AI systems, because they often end up repeating the patterns they observe in data. Using AI to sift through resumes can end up discriminating against women or minorities if algorithms pick up on patterns in past successful candidates that are irrelevant to job performance, such as race or gender, and reproduce those biases in their results. A machine learning model deployed without adequate understanding of, and regard for, biases in existing data will likely entrench existing inequalities — and, in doing so, perform poorly at its task.
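To make this concrete, here is a minimal, hypothetical sketch of that failure mode (in Python, using numpy and scikit-learn; the synthetic data and feature names are assumptions for illustration, not drawn from any real hiring system). A model trained on historical decisions that penalised one group will reproduce that penalty for equally skilled candidates:

```python
# Minimal sketch: a screening model inherits bias from historical hiring data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a skill score (what should matter) and a binary
# attribute standing in for gender (which should not matter).
skill = rng.normal(0, 1, n)
gender = rng.integers(0, 2, n)          # groups 0 and 1, equally skilled on average

# Historical hiring decisions were biased: group 1 was penalised.
past_hired = (skill - 1.0 * gender + rng.normal(0, 0.5, n)) > 0

# Train a screening model on the biased history, including a proxy feature
# (e.g. a gendered keyword in the resume) that leaks the protected attribute.
X = np.column_stack([skill, gender])
model = LogisticRegression(max_iter=1000).fit(X, past_hired)

# Two identical, average-skill candidates who differ only in group membership
# now receive very different "hire" probabilities.
test = np.column_stack([np.zeros(2), [0, 1]])
print(model.predict_proba(test)[:, 1])
```

The point of the sketch is that nothing malicious happens inside the model: it simply learns the pattern it was shown.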

Yet that failure is not unique to AI. Any decision-making system, including purely human systems and rules-based software, can produce similar outcomes. Indeed, human decision-making is the root of existing systemic biases. Yes, AI-based software presents specific challenges, such as the black box problem, where we don’t know exactly how a model came to a decision, and those specific problems need specifically tailored mitigation tactics. But the same is true of any system: each has its advantages and disadvantages.

With the public and policy discourse becoming increasingly focused on the risk of bias in AI, an underlying assumption has crept in: that non-AI decision-making systems are unbiased. This is both wrong and dangerous, because it lets existing biases go unexamined. If we’re serious about managing AI risks, we need to look just as closely at the human processes behind and around them.

AI is not a special case

We should not treat AI as a special case in the way we consider accountability in decision-making systems. Responsibility for decision-making lies with people and organizations: those who set up decision-making processes and the mechanisms to deal (or not) with these systems when they fail. AI is a part of the process; humans and institutions are the ones that bear responsibility for the outcome.

Every such system, human or otherwise, has multiple points of failure. Humans are a common one: people can make incorrect decisions when they’re tired, stressed or under time pressure. We all have unconscious biases, and sometimes we just do a bad job.

Software, too, has its issues. Some traditional software is too rigid, with overly strict rules that can just as easily reflect human biases. Simple coding errors can also be a problem, as in the widespread 911 outages across six states in the USA in 2014.

AI-powered systems face many challenges, including bias. They can also become inaccurate when their operating environment changes. Google’s experiment with tracking flu trends through search data is one example: the algorithm started out effective but lost accuracy after a few years. In 2013, it overestimated the actual prevalence of flu by 140 percent, and the algorithm had picked up unrelated terms that were merely correlated with the start of flu season in late fall, such as “high school basketball.”
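Failures like this can be caught earlier with routine monitoring of a deployed model. Below is a minimal, hypothetical sketch (in Python; the function name, window and tolerance values are illustrative assumptions, not anything Google used) of comparing a model’s recent error rate against the error rate measured at deployment and flagging it for review when performance degrades:

```python
# Minimal sketch of drift monitoring: flag the model for review or retraining
# when its recent error rate drifts above the error rate measured at deployment.
def drift_alert(baseline_error: float,
                recent_outcomes: list[tuple[int, int]],
                tolerance: float = 0.05) -> bool:
    """recent_outcomes is a list of (predicted, actual) pairs from the latest window."""
    if not recent_outcomes:
        return False
    recent_error = sum(p != a for p, a in recent_outcomes) / len(recent_outcomes)
    return recent_error > baseline_error + tolerance

# Example: accuracy measured at launch was 90%, but the latest window shows many misses.
window = [(1, 1), (1, 0), (0, 1), (1, 0), (0, 0), (1, 0)]
print(drift_alert(baseline_error=0.10, recent_outcomes=window))  # -> True
```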

Every system — physical or digital — eventually fails. What we can do is understand how and when failures are likely to happen and design decision-making systems to mitigate against them. This could mean having human oversight for automated systems, much as we ensure that there are review mechanisms for human-based systems: appeals processes in judicial and administrative decisions, peer reviews, audits, etc.


AI is not a special case, but there are specific failure points that need to be dealt with differently. For example, because machine learning is probabilistic, an organization using an AI system should set the threshold for a positive (or negative) identification or decision based on the likely consequences of a wrong decision. When using AI systems, it’s important to keep human decision makers in the loop, so they can review decisions where the confidence level falls below an acceptable threshold.
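As an illustration of that kind of routing, here is a minimal sketch in Python. The threshold value and the interface are assumptions made for the sake of the example, not a prescribed design:

```python
# Minimal sketch of confidence-threshold routing: automate only high-confidence
# decisions and send everything else to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # the model's proposed outcome, e.g. "approve" / "deny"
    confidence: float   # model-reported probability for that outcome, 0.0-1.0

def route_decision(decision: Decision, threshold: float = 0.95) -> str:
    """Return an automated outcome only when confidence clears the threshold."""
    if decision.confidence >= threshold:
        return f"auto:{decision.label}"
    return "human_review"

# Example: a low-confidence decision is escalated rather than automated.
print(route_decision(Decision(label="deny", confidence=0.62)))     # -> human_review
print(route_decision(Decision(label="approve", confidence=0.99)))  # -> auto:approve
```

The design choice is simple but consequential: the higher the cost of a wrong automated outcome, the higher the confidence required before a decision bypasses human review.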

If dealing with probabilities and confidence levels feels like it would undermine human rights because it lacks certainty, that’s only because we’re not looking at the issue in relation to other systems. In reality, we rely on probabilities and confidence levels all the time - it’s such a natural part of how we make decisions that we rarely think about it.

Think of when you pick a fruit, cross the road or decide who to vote for in an election: you don’t know for certain that the fruit will taste good, that a car won’t suddenly speed out of nowhere or that the candidates you vote for will be competent. But you base your decision on prior information, current observations, and your assessment of what the outcome will be. Recent research indicates that human decision-making might be even more mysterious than this, and could be best explained by quantum modelling (see also this recent Nature article).

AI and human rights

The questions we ask when it comes to AI and human rights should be no different from the questions we ask about the impact of other systems. The impact of a system, physical or digital, depends on how it’s used. We should focus on who developed the system, for what use, its capabilities and limitations, the processes and accountability in which the system is embedded, how and where the system can fail, and ultimately, who is responsible for the system.

With this information we can assess the likely human rights impact and answer key questions:

  • Are there risks of human rights abuses and if so, where are they?
  • What safeguards should be in place to mitigate against potential human rights abuses?
  • In the context of respecting/protecting human rights, is it appropriate to use the system?

AI systems are not a special case when it comes to evaluating the human rights impact. When we examine the human rights impact of AI, we need to assess it in the context of existing systems: does it improve human rights outcomes or make them worse, compared to the alternative? To get the best human rights outcomes, we should subject different systems to the same level of scrutiny.

Deepfakes are one example where AI systems pose a high risk to human rights. These sophisticated fakes, which use AI algorithms to mimic reality while depicting whatever the creator wishes, can disrupt democratic processes and threaten the rights to privacy and dignity, as well as basic trust in information. The way they do this is novel, but threats from information manipulation are not new and are not restricted to AI. Digitally manipulated images, troll networks and conspiracy theories have long been used as weapons against activists and politicians alike.

There are potential technical ways to detect deepfakes, and there are regulatory ways to control their use, while still allowing for legitimate uses of AI-created video — think for example of the use of synthetic actors in the entertainment industry. Sam Gregory at technology and human rights NGO Witness provides a great overview of the different avenues available.

With lethal autonomous weapons, the real question is a moral one, and one that can only be addressed by the international community. If we automate the decision to use lethal force, we could make warfare easier and remove the critical layer of personal responsibility.

Questions over the accuracy of AI in selecting a target or firing a weapon are a slippery slope: human targeting and existing precision weapons are not known for their infallibility. If, one day, AI targeting systems are objectively and consistently shown to be more reliable than human targeting, then what? More importantly, arguing about the choice of weapons misses the big picture: all conflicts result in human rights violations. And that’s where the international community can help.

Ultimately, the question of lethal autonomous weapons is beside the point. The focus should be on reducing armed conflicts, including by reducing access to technology that makes it easier to go to war. International law is the best way to address the risk, through an international ban on lethal autonomous weapons, similar to existing bans on other types of weaponry.

The most effective way to tackle the human rights risks related to the use of AI is to focus on the people and organizations behind it, not the technology itself. Questions of accountability for the use (or misuse) of AI systems can then be addressed through existing frameworks covering the human rights obligations of state actors and the responsibilities of private actors. It’s the people, not the machines, that matter.

------------------


Sherif Elsayed-Ali started writing about the impact of AI on human rights in 2017, for example here, here and here. He co-authored one of the first major papers applying human rights standards to AI and started the process that led to the Toronto Declaration. Over the last 8 months, since he joined Element AI, he has worked with a team of AI researchers and software engineers building tools based on machine learning.