
TargetCLIP: Official PyTorch implementation of the paper "Image-Based CLIP-Guided Essence Transfer"

This repository finds a global direction in StyleGAN's latent space to edit images according to a target image. We transfer the essence of a target image to any source image.

Pretrained directions notebook:

Open In Colab

The notebook allows you to apply the pretrained directions to the sources shown in the examples. In addition, you can edit your own inverted images with the pretrained directions by uploading your latent vector to the dirs folder.
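The edit itself is a single vector addition in latent space. Below is a minimal sketch, assuming a latent in W+ format and a pretrained direction saved as PyTorch tensors; the file names and the generator call are placeholders, not the repo's actual paths or API:

```python
import torch

# Placeholder paths: your own inverted latent (uploaded to the dirs folder)
# and one of the pretrained directions.
source_latent = torch.load("dirs/my_inverted_latent.pt")   # e.g. shape [1, 18, 512] in W+
direction = torch.load("dirs/pretrained_direction.pt")     # same shape as the latent

coeff = 0.8  # within the recommended 0.5-1 range (see the note below)
edited_latent = source_latent + coeff * direction

# The edited latent is then fed to the pretrained StyleGAN generator to render the result,
# e.g. image = generator.synthesis(edited_latent)  # exact call depends on the StyleGAN implementation
```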

Examples:

NOTE: all the examples presented are available in our Colab notebook. The recommended coefficient is between 0.5 and 1.

Targets that were not inverted: The Joker and Keanu Reeves

The targets are plain images that were not inverted; the direction optimization is initialized at random.

NOTE: for the Joker, we use relatively large coefficients, between 0.9 and 1.3.

Out-of-domain targets: Elsa and Pocahontas

The targets are plain images that lie outside the domain StyleGAN was trained on; the direction optimization is initialized at random.

Targets that were inverted: Trump

The targets are inverted images, and their latents are used as initialization for the optimization.
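A rough sketch of the two initialization strategies described above; the variable names, shapes, and optimizer settings are illustrative assumptions, not the repo's exact code:

```python
import torch

# Placeholder tensors: in practice the target latent comes from a StyleGAN inversion
# and the mean latent from the generator; set target_latent = None for non-inverted targets.
target_latent = torch.zeros(1, 18, 512)
mean_latent = torch.zeros(1, 18, 512)

if target_latent is not None:
    # Inverted target (e.g. Trump): start the direction from the inverted latent
    init = target_latent - mean_latent
else:
    # Plain or out-of-domain target (e.g. the Joker, Elsa): random initialization
    init = 0.01 * torch.randn(1, 18, 512)

direction = init.clone().requires_grad_(True)
optimizer = torch.optim.Adam([direction], lr=0.01)
# The direction is then optimized with CLIP-guided losses so that source + direction
# carries the essence of the target image.
```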

Credits

The code in this repo draws from the StyleCLIP code base.


