A new artificial intelligence created by Nvidia recently spat out a wild diversity of fake faces after researchers fed it thousands of real celebrity photos.
The graphics chip company is currently experimenting with a new method of training AI to generate novel faces using real photos. Nvidia researchers published their somewhat creepy results online and submitted them to the International Conference on Learning Representations 2018.
The research team admitted their AI-spawned faces still leave “a lot to be desired,” but added confidently: “we feel that convincing realism may now be within reach.”
Although their work still awaits review from other AI experts, the researchers were pleased with both the variety of faces their machine generated and the nuanced detail achieved at 1,024 × 1,024-pixel resolution, which can be seen in the video above.
They used an increasingly popular AI training system called a generative adversarial network (GAN). The program is fed a massive data set (in this case, celebrity photos) and, over a period of days or weeks, gets better at creating the desired result (in this case, realistic computer-generated faces). The “adversarial” component involves pitting two machine-learning programs against one another: one program generates faces, while the other “challenges” those creations by trying to tell them apart from real photos.
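To make the adversarial idea concrete, here is a minimal, hypothetical sketch of a GAN training loop in plain NumPy. Instead of photos, the “real” data is just numbers drawn from a 1-D Gaussian (a stand-in for the celebrity dataset); the generator is a simple affine map of noise, and the discriminator is logistic regression. None of this resembles Nvidia's actual architecture, which uses large progressively grown convolutional networks, but the back-and-forth structure of the two updates is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a 1-D stand-in for the photo dataset.
REAL_MEAN, REAL_STD = 4.0, 1.25
def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: affine map of noise, G(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, batch, steps = 0.05, 32, 3000
for _ in range(steps):
    # --- Discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    z = rng.normal(size=batch)
    fake = a * z + b
    real = sample_real(batch)
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c.
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - d_fake) * w   # d(-log D(fake)) / d fake
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After training, generated samples should cluster near the real mean.
samples = a * rng.normal(size=1000) + b
```

Because the discriminator can only be fooled when the generator's output looks statistically like the real data, the generator's samples drift toward the real distribution's mean over the course of training; the same pressure, at vastly larger scale, is what pushes Nvidia's generated faces toward realism.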
The future applications of rendering realistic-looking people who aren't actually people seem both potentially useful and unsettling. It is certainly a boon to graphics companies that always need new images of people, perhaps for use in advertising. But it's also important to recognize that AI programs are getting closer and closer to achieving realism in artificial faces; how this might be employed in fake news and other means of deception is unknown, and the possibilities are boundless. For now, though, it has at least given us some really interesting faces to look at.