At first glance, the two rows of portraits at the top of this article just look like a bunch of average-looking people. The catch is, none of them exist. All of these faces are fakes, put together by artificial intelligence.
To be more precise, these faces were created by a generative adversarial network (GAN) developed by Nvidia: a deep learning system trained on a database of existing photos to produce new, realistic portraits.
Head over to the This Person Does Not Exist website to see for yourself: every time you refresh the page, you get a new face. (See how long you can last before getting freaked out.)
With a GAN, two neural networks – neural as in designed to mimic the brain’s decision-making process – work in tandem. Here, one network generates a fake face, while another decides if it’s realistic enough by comparing it with photos of actual people.
If the test isn’t passed, the face generator tries again; this feedback loop is responsible for the images you can see here and on the site. Similar GANs have been used to switch a scene from winter to summer.
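The feedback loop described above can be sketched in miniature. This is not Nvidia's code, just a toy illustration with made-up names and numbers: a one-parameter "generator" learns to match a "real" data distribution, with a simple feedback score standing in for the discriminator's realism test.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # the "real data" distribution is N(4, 1)

def sample_real():
    return random.gauss(REAL_MEAN, 1.0)

# The "generator" here is just one parameter: the mean of its output.
gen_mean = 0.0

def generate():
    return random.gauss(gen_mean, 1.0)

# A stand-in "discriminator": it tracks a running average of the real
# samples it has seen, and scores a fake by how far off it is.
real_avg = 0.0
seen = 0

def discriminator_feedback(fake):
    return real_avg - fake  # positive pushes the generator up, negative down

lr = 0.05
for step in range(2000):
    # The "discriminator" observes one more real sample.
    seen += 1
    real_avg += (sample_real() - real_avg) / seen
    # The generator produces a fake and adjusts using the feedback signal.
    gen_mean += lr * discriminator_feedback(generate())

print(round(gen_mean, 1))  # settles near the real mean of 4.0
```

In a real GAN both sides are deep neural networks and the discriminator outputs a learned real-vs-fake probability rather than a distance, but the adversarial loop has the same shape: the generator improves only because the realism test keeps rejecting it.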
We’ve seen Nvidia’s impressive face-generating AI in action before, but it’s now managing to add a new level of authenticity through what’s known as “style transfer”: processing different parts of the image (like face shape and hair style) separately.
This means different faces can be blended together more easily and more realistically, in a similar way to how photo apps turn your face into a painting or sketch.
“We came up with a new generator that automatically learns to separate different aspects of the images without any human supervision,” explain the Nvidia engineers in a YouTube video.
“After training, we can combine these aspects in any way we like.”
The weighting of these different facial aspects can be tweaked and adjusted as necessary, giving the programmers greater control over the end output.
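One simple way to picture that weighting, assuming nothing about Nvidia's actual implementation, is linear interpolation between two style vectors: a weight of 0 keeps the first face's aspect entirely, 1 switches fully to the second, and anything in between blends them.

```python
def blend(style_a, style_b, weight):
    """Linearly interpolate two style vectors; weight=0 gives A, 1 gives B."""
    return [(1 - weight) * a + weight * b for a, b in zip(style_a, style_b)]

# Illustrative three-number "style vectors".
a = [1.0, 0.0, 2.0]
b = [0.0, 1.0, 0.0]
print(blend(a, b, 0.25))  # → [0.75, 0.25, 1.5], mostly A with a bit of B
```

Turning the weight up or down per aspect is the kind of knob that gives programmers that finer control over the final image.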
As for the website, it’s not actually by Nvidia itself – it’s been put together by Uber engineer Philip Wang, based on the code that Nvidia has made public.
“Each time you refresh the site, the network will generate a new facial image from scratch from a 512 dimensional vector,” writes Wang on Facebook.
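Sampling that 512-dimensional starting vector is the simplest part of the pipeline, and can be sketched with the standard library alone. StyleGAN draws the vector from a standard normal distribution; everything else here (names, the print) is illustrative.

```python
import random

random.seed(42)

LATENT_DIM = 512

def sample_latent():
    """Draw a latent vector z from a standard normal distribution."""
    return [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]

# Each refresh of the site corresponds to a fresh vector like this one,
# which the generator network (not included here) turns into a face.
z = sample_latent()
print(len(z))  # → 512
```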
Nvidia has also been applying its ‘StyleGAN’ techniques to creating other fake collections, including ones for cars, cats, and bedrooms. The algorithms underpinning the AI are trained using publicly available photos and then asked to come up with new variations that meet the required level of realism.
Of course, this all brings back the issue of deepfakes: fake digital assets, like photos or videos, that are indistinguishable from the real thing.
Artificial intelligence systems are only going to get smarter at producing this sort of content – perhaps next we can train them to spot their own fakes, and create some sort of verification process before we’re overwhelmed with spoofed footage of things and people that never even existed.
In the meantime, if you’re looking for stock photos of faces that don’t require permission from the models, you know where to turn.
The latest research from Nvidia hasn’t been peer-reviewed yet, but you can view a paper on it on the pre-print server arXiv.org.