
Load VAE

Welcome to the magical land of compression and decompression, also known as the Load VAE node in ComfyUI. This node plays a vital role in your workflow by loading a Variational Autoencoder (VAE) — the component responsible for translating between the cozy latent space of your model and the gloriously noisy pixel soup we call an image. If your generated images are looking a little too “potato-cam” or you're getting weird color artifacts, chances are you're either not using a VAE or you're using the wrong one. Let’s fix that.


🧠 What is a VAE?

A Variational Autoencoder (VAE) is a type of neural network trained to compress and decompress image data, forming the bridge between the latent representation used by your model and the full-resolution image. It influences fine details like color tone, contrast, and sharpness — so yes, it definitely matters which VAE you use.

VAEs are checkpoint-specific most of the time. Using the wrong one? You'll get color shifts, blotchy noise, or all-around uncanny weirdness. So treat it like pairing wine with cheese — compatibility is key.
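
If you want to see the compression half of that job concretely, here’s a minimal round-trip sketch using the Hugging Face diffusers library rather than ComfyUI itself, assuming the classic SD 1.5-era VAE published as stabilityai/sd-vae-ft-mse and its usual 0.18215 latent scaling factor:

```python
# Minimal VAE round trip with diffusers (illustration only, outside ComfyUI).
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# A placeholder 512x512 RGB "image", scaled to [-1, 1] as the VAE expects.
image = torch.rand(1, 3, 512, 512) * 2 - 1

with torch.no_grad():
    # Encode: pixels -> latent, 8x smaller in each spatial dimension.
    latents = vae.encode(image).latent_dist.sample() * 0.18215
    # Decode: latent -> pixels. This is the step your chosen VAE performs at the end of a run.
    decoded = vae.decode(latents / 0.18215).sample

print(latents.shape)  # torch.Size([1, 4, 64, 64])
print(decoded.shape)  # torch.Size([1, 3, 512, 512])
```

That 8× downscale per spatial dimension is exactly why the decode step, and therefore which VAE you pick, has such a visible effect on color and fine detail.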


🧱 Node Type: VAELoader

Purpose:

Load a .vae.pt or .safetensors file and return a VAE object to be used in your generation pipeline.


🔌 Node Inputs and Outputs

| Input | Type | Description |
|---|---|---|
| (none) | n/a | This node doesn’t require any incoming connections. It independently loads the specified VAE file. |

| Output | Type | Description |
|---|---|---|
| VAE | VAE | Emits the loaded VAE object, to be passed into a VAE Decode (or VAE Encode) node or any other node that needs a VAE. |

⚙️ Node Parameters

| Parameter | Type | Description |
|---|---|---|
| vae_name | Combo/Dropdown | The name of the VAE file to load. The dropdown is populated from the VAE files available in ComfyUI’s VAE directory. If the file is missing, you’ll see red error messages and likely end up in grayscale hell. |
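
For reference, if you drive ComfyUI through its API-format workflow JSON rather than the graph editor, the whole node boils down to one entry carrying that single parameter. A rough sketch, written as a Python dict (node id "10" and the filename are placeholders, not canonical values):

```python
# Hypothetical API-format workflow entry for the Load VAE node.
# "10" is an arbitrary node id; vae_name must exactly match a file in ComfyUI/models/vae/.
vae_loader_node = {
    "10": {
        "class_type": "VAELoader",
        "inputs": {
            "vae_name": "vae-ft-mse-840000-ema-pruned.safetensors",
        },
    }
}
```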

📁 Where Do I Put VAEs?

By default, ComfyUI looks for VAEs in:

```
ComfyUI/models/vae/
```

Accepted file types:

  • .vae.pt
  • .safetensors

Make sure your files are properly named and located in that folder, or they won’t show up in the dropdown. Yes, capitalization matters. No, ComfyUI won’t guess what you meant.
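
If a file stubbornly refuses to appear, a quick sanity check like the one below can save some head-scratching. It’s a minimal sketch that assumes the default install layout and the two extensions listed above; adjust VAE_DIR if you’ve relocated your models folder.

```python
# Minimal sanity check: list what ComfyUI's vae_name dropdown should be able to see.
# Assumes the default folder layout; adjust VAE_DIR if your install differs.
from pathlib import Path

VAE_DIR = Path("ComfyUI/models/vae")
ACCEPTED = (".vae.pt", ".safetensors")

for f in sorted(VAE_DIR.iterdir()):
    if not f.is_file():
        continue
    status = "ok" if f.name.endswith(ACCEPTED) else "will NOT show up in the dropdown"
    print(f"{f.name}: {status}")
```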


🔗 Where It Fits in the Workflow

Here’s where the Load VAE node fits into a typical text-to-image setup:

```
[Load Checkpoint] ── MODEL ──> [KSampler]
                  └─ CLIP ───> [CLIPTextEncode]

[KSampler] ── LATENT ──> [VAE Decode] <── VAE ── [Load VAE]
```
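
Continuing the API-format sketch from earlier (same placeholder node ids), the single VAE output of Load VAE plugs into the vae input of a VAE Decode node, which also takes the KSampler’s latent:

```python
# Hypothetical continuation of the earlier sketch: wiring Load VAE (node "10")
# into VAE Decode. ["<node id>", <output index>] is how the API format references outputs.
vae_decode_node = {
    "8": {
        "class_type": "VAEDecode",
        "inputs": {
            "samples": ["3", 0],   # LATENT output of the KSampler (assumed node id "3")
            "vae": ["10", 0],      # VAE output of the Load VAE node defined earlier
        },
    }
}
```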

If you're not connecting the VAE output from Load VAE to your VAE Decode (or VAE Encode) node, you’re either:

  1. Using a baked VAE (built into the checkpoint; there’s a quick way to check for one, sketched below),
  2. Or you forgot — in which case, prepare for color sadness.
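
For case 1, SD-style .safetensors checkpoints that include a baked-in VAE store its weights under keys prefixed first_stage_model., so a rough check (the path below is a placeholder) looks like this:

```python
# Rough check for a baked-in VAE inside an SD-style .safetensors checkpoint.
# The "first_stage_model." prefix is where these checkpoints keep the VAE weights.
from safetensors import safe_open

ckpt_path = "ComfyUI/models/checkpoints/my_model.safetensors"  # placeholder path

with safe_open(ckpt_path, framework="pt") as f:
    has_baked_vae = any(k.startswith("first_stage_model.") for k in f.keys())

print("Baked-in VAE found" if has_baked_vae else "No baked VAE: connect a Load VAE node")
```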

🎯 Use Cases

  • Color Correction: Fix overblown skin tones, desaturation, or contrast weirdness.
  • Detail Preservation: Get crisper edges, richer highlights, and less murky textures.
  • Model Customization: Match VAEs tailored for checkpoints like DreamShaper, AnythingV5, PonyRealism, etc.
  • Style Control: Some VAEs affect the "softness" or "sharpness" of the final output, which is handy for stylized generations.

🛠 Prompting Tips

While prompting isn’t directly affected by the VAE, your results definitely are:

  • If you're seeing ghostly color overlays or washed-out details, try a different VAE.
  • Combining LoRA models or hypernetworks? Use the VAE recommended for the base checkpoint, not the LoRA.
  • If you're stacking VAEs and wondering why it's not working — you're not supposed to. One VAE per pipeline, please.

🔥 What-Not-To-Do-Unless-You-Want-a-Fire

So, you like chaos? Great. Here’s how to completely ruin your workflow using the Load VAE node:

🔥 1. Load the Wrong VAE for Your Checkpoint

You wouldn’t put diesel in a Tesla, so don’t pair an anime VAE with a realism checkpoint. Best case? Your model looks like it took a bath in color bleach. Worst case? Your output will haunt you in your dreams.

🔥 2. Leave the VAE Disconnected

This is the equivalent of shouting into the void. You loaded the VAE... cool. But if you don’t connect it to your VAE Decode node, it just sits there. Unused. Mocking you. Your output will default to the baked-in VAE or — gasp — no VAE at all. Welcome to grayscale hell.

🔥 3. Manually Rename or Move VAE Files Without Updating ComfyUI

ComfyUI doesn’t have telepathy. If you renamed a VAE to “definitely_not_cursed.vae.pt” and it disappears from the dropdown, that’s your fault. Don’t @ me.

🔥 4. Stack VAEs or Use Multiple in One Workflow

No. Stop. VAEs are not seasoning. You can’t just sprinkle multiple in and expect magic. ComfyUI uses one VAE per pipeline. Pick one. Commit.

🔥 5. Mix VAE Versions from Different Model Architectures

Some VAEs are trained for SD 1.5. Others are for SDXL. Mixing those? That’s like using a Game Boy charger on a microwave. It won’t work, and it might catch fire — metaphorically (but who knows with enough VRAM).

🔥 6. Expect VAEs to Magically Fix a Bad Prompt

VAEs affect the way images are decoded — not your prompt's creativity deficit. If your image still looks like AI-generated oatmeal, the VAE probably isn’t the issue. It's you. Yes, I said it.


🧪 Best Practices

| Scenario | VAE Strategy |
|---|---|
| Using standard SD1.5 checkpoints | Use vae-ft-mse-840000-ema-pruned.vae.pt for general fidelity. |
| Using highly stylized or anime models | Use a VAE trained with similar style data, like anything-v4.0.vae.pt. |
| Generating photo-realistic images | Use realism-optimized VAEs (e.g., EpicRealism or PonyRealism variants). |
| Don’t know? | Stick with the VAE recommended by your checkpoint’s author or try a few common ones until your image looks normal again. It’s trial-and-error season, baby. |


💬 Final Thoughts

The Load VAE node is often overlooked, but it’s a sneaky little gremlin that can ruin or rescue your output quality. Use it wisely, match it to your checkpoint, and stop blaming your prompts for everything. Sometimes it's just a bad VAE.

If your images still look cursed after swapping VAEs, then yes — now it's probably your prompt.