The GitHub repository and paper below are a follow-up to the recently introduced OpenAI CLIP neural network. The method allows manipulating images with text. One possible (if arguably frivolous) application is creating an Asian version of Elon Musk. Although promising, it suffers from the same problems as the base version of CLIP, namely insufficient coverage of particular objects in the dataset: it can easily transform a lion into a wolf but struggles with a tiger-to-wolf conversion.
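The core idea of text-guided manipulation can be sketched in a few lines: optimize a latent code so that the generated image's CLIP embedding moves closer (in cosine similarity) to the embedding of the target text. The sketch below uses random linear maps as hypothetical stand-ins for StyleGAN's generator and CLIP's encoders (the real models are large pretrained networks), so it only illustrates the optimization loop, not actual image editing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for StyleGAN's generator and CLIP's
# image encoder; the real models are deep pretrained networks.
G = rng.normal(size=(32, 16))    # latent w (16-dim) -> "image" features (32-dim)
E = rng.normal(size=(8, 32))     # "image" features -> CLIP-style embedding (8-dim)
text_emb = rng.normal(size=8)    # embedding of the target text prompt

def clip_loss(w):
    """1 - cosine similarity between the generated image's embedding
    and the target text embedding (lower is better)."""
    img_emb = E @ (G @ w)
    return 1.0 - img_emb @ text_emb / (
        np.linalg.norm(img_emb) * np.linalg.norm(text_emb))

def numeric_grad(f, w, eps=1e-5):
    """Central-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

# Gradient descent on the latent code: the "edit" is the change in w.
w = rng.normal(size=16)
start = clip_loss(w)
for _ in range(200):
    w -= 0.1 * numeric_grad(clip_loss, w)
end = clip_loss(w)
```

After the loop, `end` is lower than `start`: the latent has been pushed toward a point whose (mock) image embedding better matches the text.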
🎒 Supplementary material:
- GitHub repository: Live manipulation of StyleGAN imagery
- One of the main advantages of CLIP is so-called zero-shot learning. Learn the concept here.
- If you are not familiar with StyleGAN or GANs in general, this is an excellent intro that helps build intuition.
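The zero-shot mechanism behind CLIP can be illustrated without loading the full model: the image embedding is compared against the embedding of each candidate text prompt by cosine similarity, and a softmax over those similarities gives class probabilities. The vectors below are mock embeddings standing in for what CLIP's encoders would produce:

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot_classify(image_emb, text_embs, labels):
    """Pick the label whose text embedding is closest to the image embedding."""
    sims = np.array([cosine_similarity(image_emb, t) for t in text_embs])
    probs = np.exp(sims) / np.exp(sims).sum()  # softmax (temperature omitted)
    return labels[int(np.argmax(probs))], probs

# Mock embeddings: in reality CLIP's image and text encoders produce these.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = [
    image_emb + rng.normal(scale=2.0, size=512),  # loosely related prompt
    image_emb + rng.normal(scale=0.1, size=512),  # closely matching prompt
]
label, probs = zero_shot_classify(image_emb, text_embs, ["a lion", "a wolf"])
```

Because no classifier head is trained, the label set can be changed at inference time simply by supplying different prompts, which is what "zero-shot" refers to here.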