
ControlStyle

Repository for the paper "ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors" (ACM MM 2023).

🔲 training code implemented in diffusers

🔲 pre-trained model release

✅ inference code implemented in diffusers

run inference

First, set the paths to the pre-trained text-to-image diffusion model (Stable Diffusion v1.5) and the pre-trained ControlStyle model in the bash script. The prompt, controlnet_scale, and random seed can also be adjusted there. Then run:

bash test_model.sh
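A minimal sketch of what the configurable part of `test_model.sh` might look like. The variable names, script name, and flags below are assumptions for illustration only; check the actual script in the repository for the real interface.

```bash
# Hypothetical configuration section of test_model.sh -- names are assumptions,
# not the repository's actual variables; adapt them to the real script.
SD_MODEL_PATH="/path/to/sd-v15"                   # pre-trained text-to-image diffusion model
CONTROLSTYLE_MODEL_PATH="/path/to/controlstyle"   # pre-trained ControlStyle model
PROMPT="a cat, in the style of starry night"      # stylization prompt
CONTROLNET_SCALE=1.0                              # strength of the style/structure conditioning
SEED=42                                           # random seed for reproducible sampling

# Hypothetical entry point; the repo's inference script and flag names may differ.
python inference.py \
  --pretrained_model "$SD_MODEL_PATH" \
  --controlstyle_model "$CONTROLSTYLE_MODEL_PATH" \
  --prompt "$PROMPT" \
  --controlnet_scale "$CONTROLNET_SCALE" \
  --seed "$SEED"
```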

citation

If you find this paper useful, please consider starring 🌟 this repo and citing 📑 our paper:

@inproceedings{chen2023controlstyle,
  title={ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors},
  author={Chen, Jingwen and Pan, Yingwei and Yao, Ting and Mei, Tao},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={7540--7548},
  year={2023}
}