With the destructive potential of deepfakes growing each year as the science behind the techniques matures, a team of scientists has come up with a reliable method to identify artificially generated faces by analyzing the pupil's shape. Deepfakes are created using a generative adversarial network (GAN), and over the years the technology has become so sophisticated that it is increasingly difficult to tell a real human face from one created by a machine learning model. Even though the tech has some legitimate commercial applications, it also has a far more sinister side, with serious repercussions.

The potential for fraud and identity theft is extremely high, yet drafting the necessary regulatory tools and a copyright framework for AI-generated content has already become a nightmare. Last year, Microsoft launched a tool called Microsoft Video Authenticator designed to detect deepfake content in videos. A few months ago, Facebook also detailed an advanced AI-based system that can not only detect deepfakes but can also trace the generative software used to create the manipulated media. However, deepfake-detection tech is not always available to the masses, and it cannot be implemented universally on every platform where users consume media content.

Related: How Deepfake Technology Actually Works

This is where the latest collaborative research by scientists from the University at Albany, the University at Buffalo, and Keya Medical offers a ray of hope. A research paper titled “Eyes Tell All: Irregular Pupil Shapes Reveal GAN-Generated Faces” describes a method for detecting deepfake faces by studying the shape of the pupil, the black center of the human eye. The key premise is that the human pupil is round, whereas in artificially created faces the pupil's geometry is not uniform and is usually distorted. The scientists note that irregular pupil shapes are commonplace even in high-quality deepfakes and are often discernible to the naked eye. These irregularities, known as artifacts, are caused by the lack of 'physiological constraints' in the models used to create deepfakes.

A Solution Waiting To Be Weaponized


The scientists also built an automated system, fed with a thousand images each of real faces and GAN-generated faces, to test how reliably the marker described above identifies deepfakes. The team devised a metric called the Boundary Intersection-over-Union (BIoU) score, which measures how closely the boundary of the detected pupil matches that of a best-fit ellipse. Real human faces, with their uniformly elliptical pupils, achieved high BIoU scores when passed through the analysis model, while artifacts in the pupils of artificially generated images resulted in low BIoU scores. The researchers note that the method is both effective and remarkably simple.
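
As a rough illustration (a minimal sketch, not the paper's published code), the scoring step could look something like the Python snippet below. It assumes a binary pupil mask has already been segmented from an eye crop; the band width d, the function names, and the OpenCV-based ellipse fitting are our own assumptions for the sake of the example.

```python
import cv2
import numpy as np

def boundary_band(mask: np.ndarray, d: int = 2) -> np.ndarray:
    """Return the band of mask pixels within d pixels of the mask's contour."""
    kernel = np.ones((2 * d + 1, 2 * d + 1), np.uint8)
    eroded = cv2.erode(mask, kernel)
    return (mask > 0) & (eroded == 0)

def pupil_biou(pupil_mask: np.ndarray, d: int = 2) -> float:
    """Boundary IoU between a segmented pupil and its best-fit ellipse."""
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    ellipse = cv2.fitEllipse(contour)  # needs at least 5 contour points
    ellipse_mask = np.zeros_like(pupil_mask)
    cv2.ellipse(ellipse_mask, ellipse, 255, thickness=-1)  # filled ellipse
    # Compare only the thin bands along the two boundaries.
    a = boundary_band(pupil_mask, d)
    b = boundary_band(ellipse_mask, d)
    union = (a | b).sum()
    return float((a & b).sum() / union) if union else 0.0
```

A round pupil tracks its fitted ellipse closely and scores near 1.0, while a warped GAN pupil leaves gaps along the boundary and scores lower, so a simple threshold on the score can flag suspect images.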

But there are two issues here. One minor obstacle is that certain diseases and infections can alter the pupil's shape, which may cause the method to fail. However, those are rare cases and don't invalidate the science behind the method. The more pressing problem is that malicious parties can now learn from the findings of this research, which is in the public domain, and refine their GAN systems accordingly. Their deepfakes will then be even more convincing, complete with evenly shaped pupils to deceive viewers. In a world where a company can already create synthetic versions of a real person's face and use them in ads without consent, the prospects of misuse by bad actors are limited only by human imagination.

Next: Scientists Take Huge Step Toward Limitless Energy Via Nuclear Fusion

Source: arXiv