# KSampler

The engine of generation in ComfyUI. This node takes your model, your prompt, and your settings, and cranks out images by denoising a latent tensor into art. It's where diffusion happens, and where your GPU earns its keep.
## Inputs

| Input | Type | Description |
|---|---|---|
| MODEL | MODEL | The diffusion model you're using, e.g. `dreamshaper_8.safetensors` or `revAnimated_v122.safetensors`. Required. |
| CLIP | CLIP | Text encoder, usually from the same `CheckpointLoaderSimple`. Note that it feeds your `CLIPTextEncode` prompt nodes, which interpret your prompts into guidance; it does not plug into KSampler directly. |
| Positive Conditioning | CONDITIONING | From your prompt node. Guides the image generation toward the prompt. |
| Negative Conditioning | CONDITIONING | Optional but highly recommended. Pushes the image away from undesirable traits (e.g., "blurry, extra limbs"). |
| LATENT | LATENT | The initial latent image. This can be random (for text-to-image) or derived from an input image for img2img, inpainting, etc. |
## Parameters and Settings (Deep Dive)

### seed (INT)
- Purpose: Sets the starting noise pattern.
- Same settings + same seed = same image. Crucial for reproducibility.
- `-1` = random seed each time.

Use Cases:

- Lock for reproducibility.
- Randomize when exploring.
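Why does a fixed seed guarantee the same image? Because the seed fully determines the initial noise. Here is a minimal Python sketch (not ComfyUI's actual latent code, just an illustration of seeded randomness):

```python
import random

def make_noise(seed: int, n: int = 8) -> list[float]:
    """Draw n Gaussian samples from a seeded generator, mimicking
    how a fixed seed fixes the initial latent noise."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> identical starting noise -> identical image (all else equal).
assert make_noise(42) == make_noise(42)
# Different seed -> different noise -> a different image.
assert make_noise(42) != make_noise(43)
```

Everything downstream of the noise is deterministic, so pinning the seed pins the output.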
### control_after_generate (STRING ENUM)

Despite the name, this has nothing to do with ControlNet. This setting tells ComfyUI how to manage the seed value across batches.
Options:

- `fixed`: Same seed for every image.
- `increment`: Add 1 to the seed for each image.
- `decrement`: Subtract 1 per image.
- `randomize`: Use a random seed for each.

Why it matters:

- `fixed` = consistent image generation.
- `increment` = ideal for controlled variations.
- `randomize` = embrace the chaos.
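The four modes reduce to a simple seed-update rule between queued generations. A sketch (the helper name `next_seed` is ours, not a ComfyUI API):

```python
import random

def next_seed(seed: int, mode: str) -> int:
    """How the seed advances from one generation to the next."""
    if mode == "fixed":
        return seed
    if mode == "increment":
        return seed + 1
    if mode == "decrement":
        return seed - 1
    if mode == "randomize":
        return random.randrange(2**32)
    raise ValueError(f"unknown mode: {mode}")

# A batch of 4 with "increment" walks through neighboring seeds:
seeds = [100]
for _ in range(3):
    seeds.append(next_seed(seeds[-1], "increment"))
print(seeds)  # [100, 101, 102, 103]
```

Neighboring seeds give you related-but-distinct variations, which is why `increment` is the go-to for controlled batch exploration.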
### steps (INT)
- Purpose: Number of denoising iterations (the more steps, the more chances the model has to refine the image).
- Typical Range: 20–50 for the best balance.
- Max Range: Up to 150+, but prepare to wait.

Guidance:

- Too few = blurry or underdeveloped results.
- Too many = diminishing returns and GPU tears.
### cfg (FLOAT)

(Classifier-Free Guidance Scale)
- Purpose: Controls how much the output adheres to the prompt.
- Typical Range: 1–20
- Default Sweet Spot: 6.5–8.5
- Low cfg (e.g., 2) = freedom, creativity, also prompt forgetfulness.
- High cfg (e.g., 15) = "you said banana samurai, you're getting banana samurai."
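Under the hood, classifier-free guidance extrapolates from the unconditional (negative) noise prediction toward the prompt-conditioned one. A scalar sketch of the standard formula (real models apply it to whole latent tensors):

```python
def cfg_combine(uncond: float, cond: float, cfg: float) -> float:
    """Classifier-free guidance: push the prediction along the
    direction from the unconditional toward the conditional output."""
    return uncond + cfg * (cond - uncond)

# cfg = 1.0 reproduces the prompt-conditioned prediction;
# larger scales extrapolate further along the prompt direction,
# which is why very high cfg over-saturates and distorts images.
for scale in (1.0, 7.5, 15.0):
    print(scale, cfg_combine(0.2, 0.5, scale))
```

This is also why negative conditioning matters: it defines the `uncond` end of the line you are extrapolating along.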
### sampler_name (STRING ENUM)
Determines the sampling algorithm used to perform denoising.
Popular Samplers:

- `Euler a`: Fast, chaotic, great for creativity.
- `DPM++ 2M Karras`: Smooth, stable, photorealistic.
- `Heun`, `LMS`, `UniPC`: All with their own quirks.
Best practice: try a few; results can vary dramatically by sampler.
Ready to learn more? Take a look at our deep dive on all the `sampler_name` options.
### scheduler (STRING ENUM)
Defines the noise schedule for denoising steps.
Options:

- `normal`: Uniform distribution.
- `karras`: Better for fine details (recommended).
- `exponential`: Aggressive at early steps.

Tip: Use `karras` unless you're specifically told not to.
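Why does `karras` help with detail? It spaces the noise levels so that proportionally more steps land at low noise, where fine detail is resolved. A sketch of the schedule from Karras et al.; the `sigma_min`/`sigma_max`/`rho` defaults below are illustrative, not ComfyUI's exact values:

```python
def karras_sigmas(n: int, sigma_min: float = 0.03,
                  sigma_max: float = 14.6, rho: float = 7.0) -> list[float]:
    """Karras noise schedule: interpolate between sigma_max and
    sigma_min in rho-warped space, clustering steps at low noise."""
    ramp = [i / (n - 1) for i in range(n)]
    min_r, max_r = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return [(max_r + t * (min_r - max_r)) ** rho for t in ramp]

sigmas = karras_sigmas(10)
# Monotonically decreasing from sigma_max down to sigma_min, with
# smaller and smaller gaps as the noise level drops.
assert all(a > b for a, b in zip(sigmas, sigmas[1:]))
```

A `normal` schedule spaces the levels more uniformly, spending relatively less compute on the detail-refining tail.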
### denoise (FLOAT)
- Range: 0.0–1.0
- 1.0 = full generation from noise (text-to-image)
- <1.0 = preserve structure (for img2img, inpainting, ControlNet guidance)
Example Use:

- `1.0` → generate from scratch
- `0.5` → img2img subtle change
- `0.1`–`0.3` → ControlNet with light touch
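Conceptually, lowering `denoise` skips the high-noise portion of the schedule and runs only the final fraction of the steps over your existing image. ComfyUI's exact bookkeeping differs, but the idea is roughly:

```python
def effective_steps(steps: int, denoise: float) -> int:
    """Rough sketch: denoise < 1.0 means only the last
    steps * denoise denoising iterations actually run, so the
    broad structure of the input latent survives."""
    return max(1, round(steps * denoise))

print(effective_steps(30, 1.0))  # 30: full generation from noise
print(effective_steps(30, 0.5))  # 15: img2img, keeps broad structure
print(effective_steps(30, 0.2))  # 6: light touch-up only
```

This is why `denoise=0.1` in a text-to-image workflow produces almost nothing: you asked for a handful of gentle refinement steps on raw noise.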
## Special Requirements & Notes

- Ensure your MODEL, CLIP, and VAE are compatible.
- Always connect both positive and negative conditioning for best results.
- Don't be afraid to play with CFG and Denoise; they're your finesse tools.
- High `steps` + high `cfg` = GPU meltdown risk.
## What-Not-To-Do-Unless-You-Want-a-Fire

Some mistakes are so common, so predictably catastrophic, that they deserve their own red-flagged list. If you enjoy wasting GPU hours, crashing your ComfyUI session, or summoning unholy Lovecraftian blobs, go ahead and do any of the following:
### Set `steps` to 150+ with `denoise` at 1.0

Unless you're training your patience or testing thermal limits, this is the fastest way to generate pixel soup very slowly.

Instead: Use 25–40 steps for 99% of tasks.
### Use `cfg=20` because "higher must be better"

This won't make your prompt more accurate; it'll make your image look like a bad Photoshop job run through a paper shredder.

Instead: Stick to the 6.5–8.5 sweet spot. Go higher only if you know what you're doing.
### Forget to set `control_after_generate` when doing batch generations

You wanted 8 unique variations. You got 8 clones. Oops.

Instead: Use `increment` for clean batch diversity.
### Use `denoise=0.1` in a text-to-image workflow

You just told the sampler to generate... almost nothing. Enjoy your blank canvas with a faint smudge of regret.

Instead: Use `denoise=1.0` for full generations. Lower values are for img2img or ControlNet.
### Combine the `Euler a` sampler with `steps=80` thinking it'll be glorious

Nah. `Euler a` is a fast sampler, not meant for marathon sessions. You're not getting more detail, you're just looping futility.

Instead: Use `DPM++ 2M Karras` or similar for higher-step runs.
### Forget to connect Negative Conditioning

Yes, it's "optional," just like wearing pants in public is technically optional. But without it, your output will gleefully ignore your expectations and embrace chaos (in all the wrong ways).
### Use incompatible models and CLIP encoders

That "weird green mush with anime eyes and four ears"? Yeah, that's what happens when you mix a v1.5 checkpoint with a v2 CLIP. Don't.
### Crank everything to max at once

CFG = 15, steps = 100, resolution = 2048x2048, tiling disabled, sampler = experimental beta nightly... your GPU just left the chat.
### Blame the sampler when your prompt sucks

KSampler is powerful, but it's not a miracle worker. If your prompt is "woman" and the image is cursed, well... maybe give the poor thing more to work with.
## Pro Tip

When in doubt: lower your steps, simplify your prompt, lower your CFG, and only touch `denoise` if you know what it does. KSampler rewards precision and punishes overconfidence.
## Example Workflow

    [CheckpointLoaderSimple] ──> MODEL ──────────────────> KSampler (model)
    [CheckpointLoaderSimple] ──> CLIP ──> [CLIPTextEncode nodes]
    [CLIPTextEncode (positive)] ─────────────────────────> KSampler (positive)
    [CLIPTextEncode (negative)] ─────────────────────────> KSampler (negative)
    [EmptyLatentImage] ──────────────────────────────────> KSampler (latent_image)
    KSampler ────────────────────────────────────────────> [Decode/Save/Display]
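If you drive ComfyUI through its HTTP API instead of the graph editor, a KSampler node in the API-format workflow JSON looks roughly like the Python dict below. The node ids `"4"`–`"7"` are placeholders for whatever ids your exported workflow assigns; treat the exact shape as a sketch and confirm against a workflow you export yourself:

```python
# One node entry from an API-format workflow: links are expressed
# as [source_node_id, output_index] pairs, widgets as plain values.
ksampler_node = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["4", 0],         # from CheckpointLoaderSimple (id "4", assumed)
        "positive": ["6", 0],      # from positive CLIPTextEncode (assumed id)
        "negative": ["7", 0],      # from negative CLIPTextEncode (assumed id)
        "latent_image": ["5", 0],  # from EmptyLatentImage (assumed id)
        "seed": 42,
        "steps": 30,
        "cfg": 7.5,
        "sampler_name": "dpmpp_2m",
        "scheduler": "karras",
        "denoise": 1.0,
    },
}
print(ksampler_node["class_type"])
```

The widget values mirror the cheat sheet below, which makes scripted batch runs easy to parameterize.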
## TL;DR Config Cheat Sheet

| Setting | Default / Safe Range | Notes |
|---|---|---|
| seed | -1 (random) | Lock for repeatability |
| control_after_generate | increment | Varies seed across batch |
| steps | 20–30 | 40+ for ultra detail |
| cfg | 7.5 | Higher = more prompt fidelity |
| sampler_name | DPM++ 2M Karras | Stable and detailed |
| scheduler | karras | Best detail control |
| denoise | 1.0 | Use <1.0 for image-guided workflows |
## Final Thoughts

The KSampler is where the real magic happens: the ultimate forge of diffusion sorcery. Every parameter you tweak gives you a different flavor of art, so get in there and experiment. Mastering this node means mastering your output.