Computer, enhance please!
Team AI for Good
July 3 · 11 min read


From topping the ESA competition on super-resolution, towards monitoring human rights and the environment from space

Update July 2nd, 2020: A new blog post by Amnesty International's Citizen Evidence Lab explains the methodology of our super-resolution remote sensing work in Darfur as part of Amnesty's Decode Darfur project.

Written by Alfredo Kalaitzis, Michel Deudon, Buffy Price and Julien Cornebise

We believe super-resolution technology has a pivotal role to play in solving important problems for society and the environment.

In this post we explain super-resolution: what it is, why it matters, and what is feasible with the artificial intelligence algorithm that placed us at the top of an international competition organized by the Advanced Concepts Team (ACT) of the European Space Agency (ESA).

Finding destroyed villages in Darfur (Amnesty International), mapping the encroachment of palm oil plantations (Greenpeace) or schools in remote areas (Unicef), identifying civilian presence in conflict zones (UNHCR), predicting the risk of floods (National Geographic) — these are all important efforts by non-governmental organizations (NGOs) that rely on a common tool: satellite imagery.

When combined with the latest image-processing techniques from artificial intelligence (AI), satellite images can help experts monitor human rights and the environment, providing NGOs with the capacity to improve countless lives and contribute to the delivery of the UN Sustainable Development Goals.

This is feasible. The algorithms are there. The satellites are there.

Low-resolution imagery is cheap or sometimes free, and it is frequently updated, but it lacks the detailed information needed to identify some patterns indicative of climate change or human rights violations. Yet the staggering cost of high-resolution satellite imagery is a significant barrier for NGOs.

This is what super-resolution technology can help overcome.

We’re working to combine ambiguous low-resolution images into useful high-resolution images. Our latest work, part of a joint effort between the AI for Good team at Element AI and the Montreal Institute for Learning Algorithms called “team Rarefin”, scored at the top of the leaderboards in the European Space Agency competition on super-resolution.

The devil is in the details

If the idea of super-resolution sounds familiar, you might be thinking of a scene in Blade Runner or (every season of) CSI where a single image can be zoomed and enhanced to the finest detail.

Let's Enhance (HD) from Duncan Robson on Vimeo.

That was science fiction. Back in reality, such Single-Image Super-Resolution (SISR) can only “enhance” an image by adding fake or imaginary details. As shown in the example below, the original ornaments on the character’s hat and necklace have been replaced by fake textures in the middle super-resolved image.

Enhancing a single low-res image with a state-of-the-art SISR algorithm involves inferring new detail which may not be present. The actual high-res is shown on the right for reference. source: SRGAN - Ledig, et al. CVPR 2017

The quality of the SISR-enhanced image in the middle is very pleasing to the eye, but the devil is in the details. With the addition of fake detail, any analysis of the image would no longer be grounded in real evidence.

For NGOs, the aesthetics are not important. What matters is that the recovered details are grounded in reality and can be traced back to the original images, which could be admitted as evidence in court or cited in a research report.

If an image was taken at low resolution, then some of the original ground-truth detail is permanently lost and cannot be recovered from that image alone. Blade Runner lied to you.

Super-resolution with multiple images

On the other hand, when multiple images of the same view are taken from slightly different positions, perhaps also at different times, then they collectively contain more information than any single image on its own. With the right algorithm to stitch them together, the composite image can reveal some of the original detail that would not have been accessible otherwise.
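The intuition behind fusing multiple frames can be sketched with the classic "shift-and-add" idea: each low-res frame samples the scene at a slightly different offset, so together the frames observe positions on the high-res grid that no single frame covers. The toy sketch below is our own illustration of this principle, not the HighRes-net architecture; real MFSR must also estimate the (sub-pixel, unknown) shifts and handle noise.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive multi-frame super-resolution by shift-and-add.

    frames: low-res 1D arrays, each sampling the same scene at a known offset.
    shifts: integer offsets (in high-res pixels) of each frame.
    scale:  upsampling factor between the low-res and high-res grids.
    """
    n_hr = len(frames[0]) * scale
    acc = np.zeros(n_hr)
    weight = np.zeros(n_hr)
    for frame, shift in zip(frames, shifts):
        # Place each low-res sample back onto the high-res grid
        # at the position it was actually observed from.
        idx = np.arange(len(frame)) * scale + shift
        acc[idx] += frame
        weight[idx] += 1
    weight[weight == 0] = 1  # leave unobserved positions at zero
    return acc / weight

# A high-res "scene" and four low-res frames, each offset by one
# high-res pixel — together they observe every high-res position.
scene = np.sin(np.linspace(0, 4 * np.pi, 32))
scale = 4
frames = [scene[s::scale] for s in range(scale)]
fused = shift_and_add(frames, shifts=range(scale), scale=scale)
assert np.allclose(fused, scene)  # all of the original detail is recovered
```

In this idealized setting the four frames jointly cover the full high-res grid, so fusion is exact; with unknown shifts and sensor noise, recovery is only partial, which is where learned models come in.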

Left to right: Lower to higher resolution. The super-resolved image from our model HighRes-net recovers details distributed within multiple low-resolution images, such as crop shapes and crop yield. source of Low-res and High-res: ACT, imgset0285

This idea is known as Multi-Frame Super-Resolution (MFSR), and it has been an active field of research in computer vision since 1984. As of 2019, cutting-edge smartphone cameras and TVs already run MFSR algorithms to give clearer pictures.

The competition

This was the topic of this year's Kelvins competition, hosted by ESA's Advanced Concepts Team with the goal of enhancing the vision of the PROBA-V satellite through super-resolution. Participants were challenged to develop an algorithm that fuses multiple images of the same view, drawn from 74 Earth locations, into a single sharper one.

And although algorithms are exciting to us, the human aspect behind such challenges is equally fascinating. Twenty-five teams participated, some as lone engineers, others as full-fledged academic labs — all consumed by the same challenge for six months.

We teamed up with Mila to create team "Rarefin", as we share common values on the role of AI in society: we believe that AI should benefit humanity and the environment. It was a truly international partnership spanning Canada, the UK and India, fitting for a competition concerning the whole planet!

Towards human rights and environmental monitoring

Now let’s look at our super-resolution algorithm “HighRes-net” in action using a few images from the competition. These were taken by the PROBA-V satellite, which was launched by ESA to monitor Earth’s vegetation growth, water resources and agriculture.

The detailed features that HighRes-net recovers could support the work of NGOs working on environmental and human rights monitoring.

Land management and food security

Our model HighRes-net combines many low-res images (300 meters/pixel) into one image of superior resolution (better than 300m/pix). The same site shot in high-resolution (100m/pix) is also shown for reference. source of Low-res and High-res: ACT, imgset1087

In the example above, HighRes-net reveals sharper, distinct crop circles and their yield proportions — all fine-grained details distributed within multiple low-resolution images. Such details are vital to monitoring crop health, estimating crop yields and maintaining food security.

Food security is one of the biggest challenges facing humanity in the 21st century. Super-resolved images could help cattle farmers in Africa identify good grazing areas to prevent livestock losses, thus contributing to food security and political stability.

Mapping remote farms and road networks

source of Low-res and High-res: ACT, imgset0666

HighRes-net reveals remote farm boundaries and roads, networks that maps often miss. These gaps contribute to inefficient supply chains and the isolation of remote communities.

Monitoring illegal or unsustainable deforestation

source of Low-res and High-res: ACT, imgset0667

The ongoing deforestation of the Amazon rainforest is the root of a host of environmental and indigenous issues, including a massive reduction in biodiversity, degradation of freshwater supplies and — due to the Amazon’s unique capacity for CO2 absorption — contribution to global warming. The super-resolution image above, created by HighRes-net, clearly shows the river’s pathway. With cheap low-resolution imagery and MFSR, experts can monitor changes in river flow, tree canopies and slope erosion. These features can indicate illegal and unsustainable activities.

Human rights applications

Quantifying human rights issues has been a major theme of our support to NGOs. Prior to joining Element AI, Julien Cornebise worked with Amnesty International and University College London to quantify the extent of the ongoing genocide in Darfur, Western Sudan. Their work was a proof-of-concept that, with deep learning, computers can accurately detect burned villages at a country-wide scale.

Satellite views and features that indicate the presence or absence of hut villages. source: Cornebise, et al. AI for Social Good NeurIPS Workshop 2018

Although we’ve shown HighRes-net to exemplify environmental use cases, it can equally be applied to the monitoring of human rights. A fully deployed monitoring platform for Darfur would require many expensive up-to-date snapshots. MFSR can help NGOs make the most of their limited budget. Once low-res imagery has been super-resolved, experts can shortlist areas of interest that justify the cost of a closer look with actual high-resolution imagery. This is especially important for human rights violations, where only the original imagery would be admissible in court.

Top of the leaderboards

Collectively as team Rarefin, we created HighRes-net, a deep neural network for MFSR. We achieved consistently top performance on the public and final leaderboards of the Kelvins competition — with a heart-wrenching twist on the final held-out dataset, putting us 0.003 behind the top contender, SuperPip.

The public leaderboard (bottom) shows test scores that participants could access throughout the competition. The final leaderboard (top) is based on test cases that were kept hidden from participants until the end of the competition. This guards against over-engineering an algorithm to the public test set, and extends the algorithm's usefulness beyond the scope of the competition. A score that changes significantly between the two leaderboards is a sign of such over-engineering. Public and final leaderboards. Source: ACT
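The leaderboard scores were based on image fidelity against held-out high-resolution ground truth; the competition used a "corrected" PSNR (cPSNR) that also compensates for brightness bias and small pixel shifts between the submission and the ground truth. As a sketch of the underlying measure (not the official scorer), plain peak signal-to-noise ratio looks like this:

```python
import numpy as np

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) between a super-resolved
    image `sr` and the ground-truth high-res image `hr`.
    Higher is better; identical images give infinite PSNR."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A tiny ground-truth image and a reconstruction with uniform error 0.1:
hr = np.zeros((4, 4))
hr[1:3, 1:3] = 1.0
noisy = hr + 0.1
print(round(psnr(noisy, hr), 1))  # → 20.0
```

Because PSNR is a log-scale metric, the razor-thin final margins reported on such leaderboards correspond to very small differences in mean squared error.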

Soon after the end of the competition, the three leading teams revealed their real identities on the competition’s online discussion forum. We look forward to meeting them all in person at the competition workshop organized by ESA's ACT next September.

Next steps

We are looking to share our work on HighRes-net, through a peer-reviewed scientific article and — depending on our assessment of the ethical risks — an open-source code repository for reproducibility. We are evaluating the impact of sharing this code, knowing that equally capable engineers and competitors could easily implement these techniques and publish their own versions.

For us, topping the ESA competition on super-resolution was only a milestone towards our long-term goal of building a human rights and environmental monitoring platform for NGOs. We believe that such a platform can aid society in addressing these global issues. To effectively push the envelope in social and environmental innovation, strong international partnerships with NGOs, citizens and stakeholders are vital to ensuring that such technology is applied ethically and that its benefits are shared with society as a whole.

The co-founders of Element AI, including Mila scientific director Yoshua Bengio, committed to this vision by creating our AI for Good team. Since its inception, the AI for Good team has stayed committed to this vision through long-term partnerships with global actors including Amnesty International and Human Rights Watch.

That was it for the big picture. Thanks for bearing with us! To the more AI-savvy readers, stay tuned for more details on HighRes-net and how we solved the challenges around ESA’s competition, all in the second and final part of this series.


HighRes-net is based on work by team Rarefin, an industrial-academic partnership between the AI for Good lab in London (Michel Deudon, Zhichao Lin, Alfredo Kalaitzis and Julien Cornebise) and Mila in Montreal (Israel Goytom, Md Rifat Arefin, Samira E. Kahou, Kris Sankaran and Vincent Michalski). Additional ML support within Element AI by Laure Delisle and from Mila by Yoshua Bengio.

This post could not have been written without the talent and dedication of these individuals at Element AI and Mila. Thank you to Laure Delisle, Grace Kiser, Kris Sankaran, Alexandre Lacoste and Yoshua Bengio for comments on this post; to Peter Henderson for editing it; and to Manon Gruaz, Morgan Guegan, Santiago Salcido and Alfredo Kalaitzis for the design and illustrations.

We extend our congratulations to the Image Processing and Learning group of Politecnico di Torino (aka team SuperPip) for topping the final leaderboard — the all-nighter up to the deadline was one to remember! Finally, we are grateful to Marcus Märtens, Dario Izzo, Andrej Krzic and Daniel Cox from the Advanced Concepts Team of the ESA for organizing this competition and assembling the dataset — we hope our solution will contribute to your vision for scalable environmental monitoring.


  1. Tsai, R. Y. and Huang, T. S., 1984. Multiple frame image restoration and registration. In Advances in Computer Vision and Image Processing, pp. 317–339. JAI Press, Greenwich, Conn, USA.
  2. Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z. and Shi, W., 2017. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4681–4690.
  3. Cornebise, J., Worrall, D., Farfour, M. and Marin, M., 2018. Witnessing atrocities: Quantifying villages destruction in Darfur with crowdsourcing and transfer learning. In AI for Social Good NeurIPS Workshop, Montréal, Canada.
  4. Märtens, M., Izzo, D., Krzic, A. and Cox, D. Super-resolution of PROBA-V images using convolutional neural networks. arXiv preprint, to appear in Astrodynamics.