They say beauty lies in the eye of the beholder, but in truth it goes deeper than that.
The concept of physical beauty resides in the mind, defined by whatever features we find attractive in other people’s faces. These subtle preferences represent some of our most private inner thoughts – but that doesn’t mean they can’t be monitored, and perhaps even predicted.
In a new study, researchers used electroencephalography (EEG) measurements to identify what kind of facial features people found to be attractive, and then fed the results to an artificial intelligence (AI) program.
The machine learning system – a generative adversarial network (GAN) – first familiarised itself with the sorts of faces each individual found desirable, then fabricated entirely new ones specifically designed to please: tailored visions of synthesised beauty, as unattainable as they were perfect.
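To give a rough sense of the adversarial idea – this is a minimal one-dimensional sketch, not the study's actual model – a GAN pits two networks against each other: a generator that fabricates samples from random noise, and a discriminator that tries to tell fabrications from real data. The toy below uses plain NumPy with hand-derived gradients, a linear generator, and a single logistic unit as discriminator; every parameter value here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (illustrative, not the study's network): the generator is a
# linear map of noise, the discriminator a single logistic unit.
w_g, b_g = 1.0, 0.0          # generator parameters
w_d, b_d = 0.1, 0.0          # discriminator parameters
lr = 0.01

def discriminate(x):
    return sigmoid(w_d * x + b_d)

z = rng.normal(size=128)                          # generator noise
real = rng.normal(loc=4.0, scale=1.0, size=128)   # "real" data
fake = w_g * z + b_g                              # generator output

# Discriminator loss: -log D(real) - log(1 - D(fake)).
d_real, d_fake = discriminate(real), discriminate(fake)
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# One hand-derived gradient step for the discriminator.
grad_wd = np.mean(-(1.0 - d_real) * real) + np.mean(d_fake * fake)
grad_bd = np.mean(-(1.0 - d_real)) + np.mean(d_fake)
w_d -= lr * grad_wd
b_d -= lr * grad_bd

d_loss_after = (-np.mean(np.log(discriminate(real)))
                - np.mean(np.log(1.0 - discriminate(fake))))

# Generator step (non-saturating loss -log D(G(z))), nudging fakes
# towards regions the discriminator currently rates as real.
g_grad_x = -(1.0 - discriminate(fake)) * w_d
w_g -= lr * np.mean(g_grad_x * z)
b_g -= lr * np.mean(g_grad_x)
```

A real GAN alternates these two updates over many batches with deep networks on both sides; after enough rounds, the generator's fabrications become hard to distinguish from genuine samples – here, genuine face photographs.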
The experiment, run by a team of psychologists and computer scientists at the University of Helsinki in Finland, was sort of like a massive Tinder session for the 30 volunteers who took part.
Except for a few big differences.
The participants sat in front of a computer screen showing them a series of faces – but none of the faces were real people. They were realistic-looking artificial portraits generated from a dataset of around 200,000 images of celebrities.
Unlike regular Tinder usage, the participants were also wearing elastic caps fitted with electrodes designed to measure their brain activity as they looked at the faces. They also didn’t have to swipe right when they saw someone they liked the look of – that was all taken care of.
“They did not have to do anything but look at the images,” explains cognitive neuroscientist Michiel Spapé. “We measured their immediate brain response to the images.”
Those individual measurements of neural activity were then assessed by the GAN, which was able to interpret the brain responses in terms of how attractive each artificial face was deemed by the viewer.
Using that data, the GAN was then able to generate new faces informed by people’s EEG attraction identifiers.
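The feedback loop described above can be sketched as follows. Everything in this snippet is a simplifying assumption: where the real system decodes "attractive or not" from EEG responses, a hidden preference direction stands in for the brain's verdict, the latent dimensionality and face counts are toy values, and the generator itself is omitted. The core move – averaging the latent vectors of the faces the brain flagged as attractive to get a personalised latent – is the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)

LATENT_DIM = 16   # toy size; real GAN latent spaces are much larger
N_FACES = 200     # faces shown to one hypothetical participant

# Latent vectors of the artificial faces shown to the participant.
latents = rng.normal(size=(N_FACES, LATENT_DIM))

# Stand-in for the EEG decoder: simulate a hidden preference direction
# and label each face "attractive" by its projection onto it.
true_preference = rng.normal(size=LATENT_DIM)
true_preference /= np.linalg.norm(true_preference)
attractive = latents @ true_preference > 0.5   # simulated brain labels

# Personalised latent: the average of the latents the "brain" flagged.
# Feeding this vector through the generator would yield a new,
# tailored face (generator omitted in this sketch).
preferred_latent = latents[attractive].mean(axis=0)

# How well the averaged latent recovers the hidden preference direction.
alignment = preferred_latent @ true_preference / np.linalg.norm(preferred_latent)
```

Even with this crude averaging, the personalised latent points strongly along the simulated preference direction – which is why decoding only a coarse attractive/unattractive signal from EEG can still steer the generator towards faces a specific person finds appealing.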
In a second experiment, these newly invented faces were then displayed back to the volunteers, who rated them in terms of attractiveness, alongside other images of randomly generated faces.
Ultimately, the results bore out the researchers' approach: participants rated the tailored-to-be-attractive images as attractive in around 80 percent of cases, while the randomly generated faces were chosen only 20 percent of the time.
While this is only a small study, it's another example of how refined AI systems are becoming in their understanding of what makes us tick – even in intimate and often unspoken domains, such as personal attraction.
“Succeeding in assessing attractiveness is especially significant, as this is such a poignant, psychological property of the stimuli,” Spapé says.
“If this is possible in something that is as personal and subjective as attractiveness, we may also be able to look into other cognitive functions such as perception and decision-making. Potentially, we might gear the device towards identifying stereotypes or implicit bias and better understand individual differences.”
The findings are reported in IEEE Transactions on Affective Computing.