An AI algorithm for generating de-pixelated photos from pixelated ones has been found to produce a white face from a pixelated image of Barack Obama. This isn't a surprise considering the ongoing discussions around the racial bias of AI facial recognition tools, and the examples previously found in other solutions. Such bias in AI is a major concern, since it is widely considered a technology capable of changing many aspects of society.

The Black Lives Matter movement is raising questions over systemic racism in modern law enforcement, including the technologies and tools used by police. Racial bias in the AI technologies employed by police departments can manifest in areas such as the recognition of suspects and the prediction of 'at-risk' neighborhoods using previously collected data. This has been a matter of concern among researchers and activists for some time, with the general understanding being that these AI tools can disproportionately affect minorities, mainly due to the flawed datasets used to train them. More recently, this has led a number of companies that develop and sell these technologies to law enforcement to rethink their current stance. In Amazon's case, the company recently confirmed it was placing a one-year ban on police use of its facial recognition solution.

Related: AI Algorithm Identifies Age & Ethnicity, But Researchers Unsure How It Works

Recently, a machine learning algorithm called PULSE generated a white face from a pixelated image of Barack Obama. With Obama arguably one of the most famous black people in the world, it is a reminder of the racial bias that can be seen in AI imaging tools. The issue was flagged on Twitter by programmers who tested PULSE after downloading its published code from GitHub. As The Verge reported, the algorithm is not designed to correctly identify the person in the pixelated input image, but to produce a new artificial face that resembles the pixelated image. Nevertheless, it does seem to show a serious bias when it comes to non-white faces. The algorithm also generates faces with Caucasian features when the input is a pixelated photo of Congresswoman Alexandria Ocasio-Cortez, as well as actress Lucy Liu.

How PULSE Works & Where The Bias Comes From

PULSE was developed by researchers from Duke University using StyleGAN, an algorithm created by NVIDIA computer scientists that has also been used to build websites generating realistic-looking human faces. The researchers use StyleGAN to upscale visual data: rather than recovering the missing detail in the inputted pixelated face, PULSE imagines a new high-resolution face that looks similar to the input image once it is pixelated again.
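For readers curious about the mechanics, the sketch below illustrates the general idea in PyTorch: optimize a latent vector so that the generator's output, once pixelated, matches the low-resolution input. The tiny stand-in generator, latent size, learning rate, and step count here are all placeholder assumptions for illustration; PULSE itself pairs this kind of latent-space search with a pretrained StyleGAN and additional constraints, so this is not the Duke researchers' actual code.

```python
# Minimal, illustrative sketch of the latent-space search idea behind PULSE.
# Uses a tiny stand-in generator rather than a real pretrained StyleGAN;
# all sizes and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in "generator": maps a latent vector to a 64x64 RGB image.
# In PULSE, this role is played by a pretrained StyleGAN face generator.
generator = nn.Sequential(
    nn.Linear(128, 3 * 64 * 64),
    nn.Tanh(),
    nn.Unflatten(1, (3, 64, 64)),
)
generator.requires_grad_(False)  # only the latent code is optimized

def downscale(img, size=8):
    """Pixelate a high-resolution image down to the low-resolution input size."""
    return F.interpolate(img, size=(size, size), mode="area")

# The low-resolution (pixelated) input we want to "upscale" (random stand-in here).
low_res = torch.rand(1, 3, 8, 8)

# Search the latent space for a code whose generated face, once pixelated,
# matches the low-resolution input. The search never recovers the original
# face; it invents a plausible high-resolution one consistent with the pixels.
latent = torch.randn(1, 128, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    high_res = generator(latent)                     # candidate high-res face
    loss = F.mse_loss(downscale(high_res), low_res)  # compare after pixelation
    loss.backward()
    optimizer.step()

print(f"final downscaling loss: {loss.item():.4f}")
```

Because the search only has to match the image after pixelation, many different high-resolution faces are equally valid answers, and the ones the generator tends to produce reflect whatever faces dominate its training data, which is where the reported bias comes in.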

After the racial bias was reported, the original creators confirmed the issue and clarified that, although they cannot be certain, it is most likely a flaw inherited from the datasets used to train StyleGAN. They have also added a new section to their published paper discussing the algorithm's racial bias and the various ways it could have originated. This is an encouraging step, since finding and reporting the weaknesses of a technology at the research level can go a long way toward helping the development of practical tools.

Nevertheless, the incident is a reminder of how bias can creep into technologies built using data collected over many years. Since big data will be crucial in shaping the future, it is paramount that steps are taken to ensure any discrimination contained in an artificial intelligence dataset doesn't translate into serious bias in the technologies created using that data.

More: Amazon's AI Distance Assistants Help Enforce Employee Social Distancing

Source: The Verge, GitHub