New Style Transfer Extension, ControlNet of Automatic1111 Stable Diffusion T2I-Adapter Color Control

Discord: https://bit.ly/SECoursesDiscord. A fantastic new style transfer feature via T2I-Adapter has been added to the #ControlNet extension. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on 🥰 Patreon: SECourses

Playlist of #StableDiffusion Tutorials, #Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img:
Stable Diffusion Tutorials, Automatic...

TencentARC / T2I-Adapter GitHub Repo:
https://github.com/TencentARC/T2I-Ada...

Extension Github Repo: https://github.com/Mikubill/sd-webui-...

Academic Paper - T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models:
https://arxiv.org/abs/2302.08453

How to install Python and set default path tutorial:
Easiest Way to Install & Run Stable D...

Automatic1111 GitHub Repo:
https://github.com/AUTOMATIC1111/stab...

Git for Windows:
https://github.com/git-for-windows/gi...

Git Bash: https://git-scm.com/downloads

Git Large File Storage (LFS): https://git-lfs.com/

Automatic1111 Web UI Command Line Arguments and Settings:
https://github.com/AUTOMATIC1111/stab...

1.5 pruned ckpt model file:
https://huggingface.co/runwayml/stabl...

How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3 - how to set yaml files:
How to use Stable Diffusion V2.1 and ...

ControlNet model files repository:
https://huggingface.co/lllyasviel/Con...

Style transfer T2I-Adapter models repository:
https://huggingface.co/TencentARC/T2I...

Transform Your Sketches into Masterpieces with Stable Diffusion ControlNet AI - How To Use Tutorial:
Transform Your Sketches into Masterpi...

Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI:
Sketches into Epic Art with 1 Click: ...

Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial:
Fantastic New ControlNet OpenPose Edi...

0:00 Introduction to newest extension Style Transfer in ControlNet
0:31 Requirements for installing and running Automatic1111 Web UI
1:42 How to make a fresh installation of Automatic1111 Web UI
4:11 Versions of installed libraries, python, torch, xformers
4:21 What commit and checkpoint mean
4:40 How to update Automatic1111 Web UI manually via git pull
4:53 How to use a certain commit / version via git checkout
5:27 How to upgrade to / install the latest working xformers version
6:51 How to install the ControlNet extension
8:06 How to enable / activate the multi ControlNet feature
8:30 Where to find the ControlNet extension
8:38 How to download the necessary ControlNet model files
10:13 How to download the T2I-Adapter style transfer models
11:45 How to use the style transfer feature of ControlNet
12:37 How to use multi ControlNet to better preserve image shape (canny + HED)
13:34 Which settings to use to achieve style transfer in the ControlNet tab
14:38 The improvement from using 2 preprocessors
16:49 How to improve the art style / coloring of existing artwork via ControlNet
18:00 A very nice trick that you can use in your professional business life
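The update steps above (4:40 and 4:53) boil down to two git commands. Below is a minimal, hedged sketch of that workflow, run against a throwaway local repository instead of a real stable-diffusion-webui clone so it needs no network access; all paths, file names, and commit messages here are invented for illustration:

```python
# Hedged sketch of the video's update workflow (timestamps 4:40 and 4:53):
# "git pull" brings a clone up to the latest commit, "git checkout <hash>"
# pins it to a specific known-good version. In the real workflow you would
# run the same two commands inside your stable-diffusion-webui folder.
import os
import subprocess
import tempfile

def run(*args, cwd):
    """Run a git command quietly, raising on failure."""
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

tmp = tempfile.mkdtemp()
origin = os.path.join(tmp, "origin")          # stand-in for the GitHub repo
os.makedirs(origin)
run("git", "init", "-q", cwd=origin)
run("git", "config", "user.email", "demo@example.com", cwd=origin)
run("git", "config", "user.name", "demo", cwd=origin)
with open(os.path.join(origin, "file.txt"), "w") as f:
    f.write("v1")
run("git", "add", ".", cwd=origin)
run("git", "commit", "-qm", "first", cwd=origin)
first = subprocess.run(["git", "rev-parse", "HEAD"], cwd=origin, check=True,
                       capture_output=True, text=True).stdout.strip()
with open(os.path.join(origin, "file.txt"), "w") as f:
    f.write("v2")
run("git", "commit", "-qam", "second", cwd=origin)

clone = os.path.join(tmp, "webui")            # stand-in for your local clone
run("git", "clone", "-q", origin, clone, cwd=tmp)
run("git", "pull", "-q", cwd=clone)           # update to the latest commit
run("git", "checkout", "-q", first, cwd=clone)  # pin a specific commit
with open(os.path.join(clone, "file.txt")) as f:
    print(f.read())  # "v1": the clone is now pinned to the first commit
```

Upgrading xformers (5:27) is a separate step, typically a `pip install -U xformers` run inside the web UI's Python environment.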

Abstract from the paper:
The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate structure control is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and small T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, and achieve rich control and editing effects. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications.

• Plug-and-play. They do not affect the original network topology and generation ability of existing text-to-image diffusion models (e.g., Stable Diffusion).
• Simple and small. They can be easily inserted into existing text-to-image diffusion models with low training costs. They have a small number of parameters (∼ 77 M) and small storage space (∼ 300 M), which will not introduce much computation cost.
• Flexible. We can train various adapters for different control conditions (e.g., sketch, semantic segmentation, keypose).
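The properties above can be made concrete with a tiny conceptual sketch (plain Python, not the authors' code; all function names and toy numbers below are invented for illustration): the frozen T2I model's internal features are combined additively with the adapter's output, and only the small adapter would be trained.

```python
# Conceptual sketch of the T2I-Adapter idea (not the authors' implementation):
# a small trainable adapter maps a control signal (e.g. a sketch) to features
# that are simply added to the frozen T2I model's internal features.

def frozen_unet_features(latent):
    # Stand-in for the frozen Stable Diffusion UNet's intermediate features;
    # these weights are never updated ("plug-and-play", original model intact).
    return [x * 0.5 for x in latent]

def adapter(control_signal, weights):
    # Small trainable module: here, one elementwise linear map per feature.
    return [w * c for w, c in zip(weights, control_signal)]

def controlled_features(latent, control_signal, weights):
    # Core of the method: the adapter output is added to the frozen
    # features, steering generation without touching the base network.
    base = frozen_unet_features(latent)
    ctrl = adapter(control_signal, weights)
    return [b + c for b, c in zip(base, ctrl)]

latent = [1.0, 2.0, 3.0]
sketch = [0.0, 1.0, 0.0]   # toy control signal (e.g. a sketch, keypose map)
weights = [0.1, 0.1, 0.1]  # only these adapter weights would be trained
print(controlled_features(latent, sketch, weights))  # [0.5, 1.1, 1.5]
```

Because each condition gets its own adapter, several adapters can be summed onto the same frozen features, which is what makes them composable.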

Thumbnail: freepik / brgfx
