TargetCLIP — official PyTorch implementation of the paper "Image-Based CLIP-Guided Essence Transfer"
This repository finds a global direction in StyleGAN's latent space that edits images according to a target image: we transfer the essence of the target image to any source image.

Pretrained directions notebook: the notebook lets you apply the pretrained directions to the source images shown in the examples. You can also edit your own inverted images with the pretrained directions by uploading your latent vector to the dirs folder.

Examples:
NOTE: all the examples presented are available in our Colab notebook. The recommended coefficient range is 0.5-1.

Targets that were not inverted — the Joker and Keanu Reeves: these targets are plain images that were not inverted, and the direction optimization is initialized at random. NOTE: for the Joker, we use relatively large coefficients (0.9-1.3).

Out-of-domain targets: these targets are plain images outside the domain StyleGAN was trained on, and the direction optimization is initialized at random.

Inverted targets: these targets are inverted images, and their latents are used as the initialization for the optimization.

The code in this repo draws from the StyleCLIP code base.
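Applying a pretrained direction amounts to adding a scaled global direction to a source latent. The sketch below illustrates this with a hypothetical `apply_direction` helper and dummy tensors; the function name, the (18, 512) W+ latent shape, and the file layout are assumptions for illustration, not the repo's exact API.

```python
import torch

def apply_direction(w_source: torch.Tensor,
                    direction: torch.Tensor,
                    alpha: float = 0.7) -> torch.Tensor:
    # Blend the global essence-transfer direction into the source latent.
    # The README recommends coefficients roughly in the 0.5-1 range
    # (0.9-1.3 for the Joker direction).
    return w_source + alpha * direction

# Dummy stand-ins for a StyleGAN2 W+ latent and a pretrained direction;
# in practice these would be loaded, e.g. from the dirs folder.
w_source = torch.zeros(18, 512)
direction = torch.ones(18, 512)

w_edited = apply_direction(w_source, direction, alpha=0.7)
```

The edited latent `w_edited` would then be fed through the StyleGAN generator to produce the edited image.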