
KSampler

The engine of generation in ComfyUI. This node takes your model, your prompt, your settings, and cranks out images by denoising a latent tensor into art. It's where diffusion happens, and where your GPU earns its keep.


🔌 Inputs

  • MODEL (MODEL): The diffusion model you're using, e.g., dreamshaper_8.safetensors or revAnimated_v122.safetensors. Required.
  • Positive Conditioning (CONDITIONING): From your prompt node. Guides the image generation toward the prompt.
  • Negative Conditioning (CONDITIONING): Optional but highly recommended. Pushes the image away from undesirable traits (e.g., "blurry, extra limbs").
  • LATENT (LATENT): The initial latent image. This can be random (for text-to-image) or derived from an input image for img2img, inpainting, etc.

Note: CLIP is not a direct KSampler input. The text encoder (usually from the same CheckpointLoaderSimple) feeds the CLIPTextEncode nodes that produce the conditioning above.
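For orientation, here is roughly how a KSampler node looks in ComfyUI's API (JSON) workflow format. The node ids and upstream links are placeholders, but the input names match the node's sockets:

```python
# Sketch of a KSampler node in ComfyUI's API-format workflow JSON.
# Node ids ("3", "4", ...) and the [node_id, output_index] links are
# placeholders standing in for real upstream nodes.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],         # MODEL from CheckpointLoaderSimple
            "positive": ["6", 0],      # CONDITIONING from CLIPTextEncode
            "negative": ["7", 0],      # CONDITIONING from CLIPTextEncode
            "latent_image": ["5", 0],  # LATENT from EmptyLatentImage
            "seed": 42,
            "steps": 25,
            "cfg": 7.5,
            "sampler_name": "dpmpp_2m",
            "scheduler": "karras",
            "denoise": 1.0,
        },
    }
}
```

A workflow like this is what gets submitted when you queue a prompt through the API.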

⚙️ Parameters and Settings (Deep Dive)

🔢 seed (INT)

  • Purpose: Sets the starting noise pattern.
  • Same settings + same seed = same image. Crucial for reproducibility.
  • To get a fresh seed on every run, set control_after_generate to randomize (see below).

Use Cases:

  • Lock for reproducibility.
  • Randomize when exploring.

🎛 control_after_generate (STRING ENUM)

Despite the name, this has nothing to do with ControlNet. This setting tells ComfyUI how to manage the seed value across batches.

Options:

  • fixed: Same seed for every image.
  • increment: Add 1 to the seed for each image.
  • decrement: Subtract 1 from the seed for each image.
  • randomize: Use a random seed for each.

Why it matters:

  • fixed = consistent image generation.
  • increment = ideal for controlled variations.
  • randomize = embrace the chaos.
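The four modes are easy to pin down in code. A minimal sketch (batch_seeds is a hypothetical helper for illustration, not a ComfyUI function):

```python
import random

def batch_seeds(start_seed, count, mode="increment"):
    """Return the seed used for each successive generation, mimicking
    control_after_generate. 'randomize' draws from Python's RNG here;
    ComfyUI picks its own random seeds."""
    if mode == "fixed":
        return [start_seed] * count
    if mode == "increment":
        return [start_seed + i for i in range(count)]
    if mode == "decrement":
        return [start_seed - i for i in range(count)]
    if mode == "randomize":
        return [random.randrange(2**64) for _ in range(count)]
    raise ValueError(f"unknown mode: {mode}")

# increment gives clean, reproducible batch variation:
# batch_seeds(100, 3, "increment") -> [100, 101, 102]
```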

🧮 steps (INT)

  • Purpose: Number of denoising iterations (the more steps, the more chances the model has to refine the image).
  • Typical Range: 20–50 for best balance.
  • Max Range: Up to 150+, but prepare to wait.

Guidance:

  • Too few = blurry or underdeveloped results.
  • Too many = diminishing returns + GPU tears.

⚖️ cfg (FLOAT)

(Classifier-Free Guidance Scale)

  • Purpose: Controls how much the output adheres to the prompt.
  • Typical Range: 1–20
  • Default Sweet Spot: 6.5–8.5

Low cfg (e.g., 2) = freedom, creativity, also prompt forgetfulness.
High cfg (e.g., 15) = "you said banana samurai, you're getting banana samurai."
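Under the hood, classifier-free guidance blends the model's unconditional and conditional noise predictions. A toy numeric sketch of the standard formula:

```python
def cfg_mix(uncond, cond, cfg):
    # Classifier-free guidance: start from the unconditional prediction
    # and extrapolate toward the conditional one by a factor of cfg.
    # In the real sampler these are latent tensors, not scalars.
    return uncond + cfg * (cond - uncond)

# cfg = 1.0 returns the conditional prediction unchanged;
# larger values push harder toward the prompt (and can overshoot).
```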


🌀 sampler_name (STRING ENUM)

Determines the sampling algorithm used to perform denoising.

Popular Samplers:

  • Euler a: Fast, chaotic, great for creativity.
  • DPM++ 2M Karras: Smooth, stable, photorealistic.
  • Heun, LMS, UniPC: All with their own quirks.

Best practice:
Try a few; results can vary dramatically by sampler.

Ready to learn more? Take a look at our deep dive on all the sampler_name options.
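To make "sampling algorithm" concrete, here is a heavily simplified Euler-style loop over a sigma (noise level) schedule. Real samplers operate on latent tensors with a trained model, so the denoiser argument here is a stand-in:

```python
def euler_sample(x, sigmas, denoiser):
    # One Euler step per pair of adjacent noise levels: estimate the
    # derivative from the model's denoised prediction, then step from
    # the current sigma down to the next one.
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoiser(x, sigma)) / sigma
        x = x + d * (sigma_next - sigma)
    return x

# With a toy denoiser that always predicts 0, each step scales x by
# sigma_next / sigma, driving it to 0 when the schedule ends at 0.
```

Different samplers (DPM++, Heun, UniPC, ...) replace this update rule with higher-order or multistep variants, which is why results differ so much between them.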


📆 scheduler (STRING ENUM)

Defines the noise schedule for denoising steps.

Options:

  • normal: Uniform distribution.
  • karras: Better for fine details (recommended).
  • exponential: Aggressive at early steps.

Tip: Use karras unless you're specifically told not to.

Ready to learn more? Take a look at our deep dive on all the scheduler options.
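For the curious, the karras schedule follows a simple closed form (from Karras et al.). The default sigma range below is typical for SD 1.x models and is an assumption for illustration, not a ComfyUI constant:

```python
def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    # Karras et al. noise schedule: interpolate between sigma_max and
    # sigma_min in rho-warped space, which concentrates steps at low
    # noise levels where fine detail gets resolved. Real schedules
    # append a final 0.0 for the last step.
    ramp = [i / (n - 1) for i in range(n)]
    min_r, max_r = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return [(max_r + t * (min_r - max_r)) ** rho for t in ramp]
```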


🌫️ denoise (FLOAT)

  • Range: 0.0–1.0
  • 1.0 = full generation from noise (text-to-image)
  • <1.0 = preserve structure (for img2img, inpainting, ControlNet guidance)

Example Use:

  • 1.0 → generate from scratch
  • 0.5 → img2img subtle change
  • 0.1–0.3 → ControlNet with light touch
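One common way partial denoising is implemented (ComfyUI does something along these lines; treat this as a sketch, not its exact code) is to build a longer schedule and keep only its tail:

```python
def partial_sigmas(full_schedule, steps, denoise):
    # full_schedule(n) is any callable returning n + 1 descending
    # sigma values (one per step boundary). For denoise < 1.0, build
    # a longer schedule of steps/denoise steps and keep only the last
    # `steps` intervals, so sampling starts from a partially-noised
    # image instead of pure noise.
    if denoise >= 1.0:
        return full_schedule(steps)
    total = int(steps / denoise)
    return full_schedule(total)[-(steps + 1):]

# e.g. with a toy linear schedule, denoise=0.5 and steps=10 starts
# sampling halfway down the noise range instead of at the top.
```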

🧰 Special Requirements & Notes

  • Ensure your MODEL, CLIP, and VAE are compatible.
  • Always connect both positive and negative conditioning for best results.
  • Don't be afraid to play with CFG and Denoise โ€” they're your finesse tools.
  • High steps + high cfg = GPU meltdown risk ⚠️

🔥 What-Not-To-Do-Unless-You-Want-a-Fire

Some mistakes are so common, so predictably catastrophic, that they deserve their own red-flagged list. If you enjoy wasting GPU hours, crashing your ComfyUI session, or summoning unholy Lovecraftian blobs, go ahead and do any of the following:

🚫 Set steps to 150+ with denoise at 1.0

Unless you're training your patience or testing thermal limits, this is the fastest way to generate pixel soup very slowly.

Instead: Use 25–40 steps for 99% of tasks.

🚫 Use cfg=20 because "higher must be better"

This won't make your prompt more accurate; it'll make your image look like a bad Photoshop job run through a paper shredder.

Instead: Stick to the 6.5–8.5 sweet spot. Go higher only if you know what you're doing.

🚫 Forget to set control_after_generate when doing batch generations

You wanted 8 unique variations. You got 8 clones. Oops.

Instead: Use increment for clean batch diversity.

🚫 Use denoise=0.1 in a text-to-image workflow

You just told the sampler to generate... almost nothing. Enjoy your blank canvas with a faint smudge of regret.

Instead: Use denoise=1.0 for full generations. Lower values are for img2img or ControlNet.

🚫 Combine the Euler a sampler with steps=80 thinking it'll be glorious

Nah. Euler a is a fast sampler, not meant for marathon sessions. You're not getting more detail, you're just looping futility.

Instead: Use DPM++ 2M Karras or similar for higher-step use.

🚫 Forget to connect Negative Conditioning

Yes, it's "optional", just like wearing pants in public is technically optional. But without it, your output will gleefully ignore your expectations and embrace chaos (in all the wrong ways).

🚫 Use incompatible models and CLIP encoders

That "weird green mush with anime eyes and four ears"? Yeah, that's what happens when you mix a v1.5 checkpoint with a v2 CLIP. Don't.

🚫 Crank everything to max at once

CFG = 15, steps = 100, resolution = 2048x2048, tiling disabled, sampler = experimental beta nightly: your GPU just left the chat.

🚫 Blame the sampler when your prompt sucks

KSampler is powerful, but it's not a miracle worker. If your prompt is "woman" and the image is cursed, well… maybe give the poor thing more to work with.

✅ Pro Tip:

When in doubt, lower your steps, simplify your prompt, lower your CFG, and only touch denoise if you know what it does. KSampler rewards precision and punishes overconfidence.


🧪 Example Workflow

[CheckpointLoaderSimple]
  ├─> MODEL ──────────────────────────────> KSampler (model)
  └─> CLIP ──> [CLIPTextEncode (positive)] ─> KSampler (positive)
      CLIP ──> [CLIPTextEncode (negative)] ─> KSampler (negative)
[EmptyLatentImage] ─────────────────────────> KSampler (latent)
KSampler ───────────────────────────────────> [Decode/Save/Display]

⚡ TL;DR Config Cheat Sheet

  • seed: lock a fixed value for repeatability; use control_after_generate: randomize for fresh seeds.
  • control_after_generate: increment. Varies the seed across a batch.
  • steps: 20–30. 40+ for ultra detail.
  • cfg: 7.5. Higher = more prompt fidelity.
  • sampler_name: DPM++ 2M Karras. Stable and detailed.
  • scheduler: karras. Best detail control.
  • denoise: 1.0. Use <1.0 for image-guided workflows.


🧠 Final Thoughts

The KSampler is where the real magic happens: the ultimate forge of diffusion sorcery. Every parameter you tweak gives you a different flavor of art, so get in there and experiment. Mastering this node means mastering your output.
