CozyMantis

Dress Your Virtual Character - ComfyUI Workflow With SAL-VTON Clothes Swap

$0+
2 ratings

A ComfyUI Workflow for swapping clothes using SAL-VTON.

Generates backgrounds and swaps faces using Stable Diffusion 1.5 checkpoints.

Made with 💚 by the CozyMantis squad.

Dependencies:

Inputs you'll need

  • A model image (the person you want to put clothes on);
  • A garment product image (the clothes you want to put on the model);
  • Garment and model images close to a 3:4 aspect ratio and at least 768x1024 px;
  • (Optional) One or two portraits for face swapping.
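The size and aspect-ratio recommendations above can be sketched as a quick pre-flight check. This is a minimal helper for illustration only, not part of the workflow; the function name and the 5% ratio tolerance are assumptions:

```python
from fractions import Fraction

# Minimum input size and target aspect ratio recommended above.
MIN_WIDTH, MIN_HEIGHT = 768, 1024
TARGET_RATIO = Fraction(3, 4)  # width : height

def check_input_image(width: int, height: int, tolerance: float = 0.05) -> list[str]:
    """Return a list of warnings for an input image's dimensions."""
    warnings = []
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        warnings.append(
            f"image is {width}x{height}; at least {MIN_WIDTH}x{MIN_HEIGHT} is recommended"
        )
    ratio = Fraction(width, height)
    if abs(ratio / TARGET_RATIO - 1) > tolerance:
        warnings.append(f"aspect ratio {width}:{height} is far from 3:4")
    return warnings
```

A 768x1024 input passes cleanly, while something like a 512x512 square crop triggers both warnings.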


Make sure you own the rights to the images you use in this workflow. Do not use images that you do not have permission to use.


The SAL-VTON models have been trained on the VITON-HD dataset, so for best results you'll want:

  • images that have a white/light gray background;
  • upper-body clothing items (tops, t-shirts, bodysuits, etc.);
  • an input person standing up straight, pictured from the knees/thighs up.

To help with the first point, this workflow includes a background removal pre-processing step for the inputs.
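To see why the light-background requirement matters, here is a toy illustration of the idea: near-white pixels are treated as background and flattened to pure white, which is roughly the backdrop SAL-VTON saw during VITON-HD training. The workflow itself uses a dedicated background-removal node; this pure-Python sketch (with an assumed brightness threshold of 230) only shows the effect on raw RGB data:

```python
WHITE = (255, 255, 255)

def is_background(pixel, threshold=230):
    """Treat near-white pixels as background (VITON-HD-style backdrop)."""
    r, g, b = pixel
    return r >= threshold and g >= threshold and b >= threshold

def flatten_background(pixels, fill=WHITE):
    """Replace background pixels in a 2D grid of RGB tuples with a uniform fill."""
    return [[fill if is_background(p) else p for p in row] for row in pixels]
```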

Stage 1: Swap the clothes

This stage uses the Cozy SAL-VTON node to run the virtual try-on model, swapping the garment from the product image onto the model image.

Note that SAL-VTON relies on landmark detection to align the garment and model images. The landmark coordinates are auto-generated the first time you run the workflow. If needed, you can correct the fit by manually adjusting the landmark coordinates and re-running the workflow. Press the "Update Landmarks" button in the Cozy SAL-VTON node to bring up the landmark editor.

Stage 2: Generate a background

Based on a text input, a background is generated for the dressed model using the following steps:

  • with an inpainting model, inpaint the background at full noise;
  • with a regular model, run a lower-noise pass on the background to add detail;
  • with a regular model, run a very-low-noise pass on the entire image to fix small artifacts without changing the garment details.
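The three passes above can be summarized as a denoise schedule. The exact denoise values below are assumptions for illustration; tune them in the workflow's sampler nodes:

```python
# Sketch of the three-pass background schedule: denoise strength decreases
# from a full-noise inpaint to a gentle artifact-fixing pass.
BACKGROUND_PASSES = [
    {"model": "inpainting", "target": "background", "denoise": 1.0},   # fully repaint
    {"model": "regular",    "target": "background", "denoise": 0.5},   # add detail
    {"model": "regular",    "target": "full image", "denoise": 0.15},  # fix artifacts
]

def max_denoise(passes):
    """The first pass runs at full noise; later passes only refine."""
    return max(p["denoise"] for p in passes)
```

The decreasing denoise values are the point: each pass touches the image less than the one before, so the final pass can't disturb the swapped garment.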

Stage 3: Optional face swap

Here we use IPAdapter and inpainting to swap the model's face with the face from the input portraits. This step is optional and can be skipped unless you're building a virtual-influencer-type character that needs a consistent face.

Acknowledgements

Based on the excellent paper "Linking Garment With Person via Semantically Associated Landmarks for Virtual Try-On" by Keyu Yan, Tingwei Gao, Hui Zhang, and Chengjun Xie.

Ratings

5.0 (2 ratings)