If you’re looking for a more visual and intuitive way to generate AI art, ComfyUI is the perfect solution. This powerful graphical interface for Stable Diffusion lets you create complex image generation workflows through an easy-to-use node-based system. Let’s get you started!

Why Choose ComfyUI?

  • Visual node-based interface
  • Fine-grained control over every stage of the generation pipeline
  • Highly customizable, reusable workflows
  • Efficient execution that re-runs only the parts of a workflow that changed
  • Active community with extensive workflow sharing

Installation Options

Option 1: Thunder Compute

The easiest way to get started with ComfyUI is through Thunder Compute:

  1. Visit Thunder Compute
  2. Create an account
  3. Select the “ComfyUI” template
  4. Launch your instance

  5. Run start-comfyui in the terminal to start the ComfyUI editor

You’ll get instant access to a fully configured ComfyUI installation with:

  • Pre-installed popular models
  • Optimized GPU settings
  • Common workflow templates
  • Web-based access from any device

Option 2: Local Installation

If you prefer running ComfyUI locally:

# Clone the repository
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI

# Install PyTorch with CUDA 12.1 support, then the remaining dependencies
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

# Launch ComfyUI
python main.py

Setting Up Your First Workflow

  1. Access the ComfyUI interface (usually at http://localhost:8188 or your Thunder Compute URL)
  2. Right-click on the canvas to add nodes
  3. Create a basic workflow:
    • Load Checkpoint (model)
    • KSampler
    • CLIPTextEncode (for prompt)
    • Empty Latent Image
    • VAE Decode
    • Save Image
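When you save this workflow in ComfyUI's API format ("Save (API Format)" in the menu), the node wiring above becomes a JSON object roughly like the following sketch. Node IDs and the checkpoint filename are placeholders; link values are [source_node_id, output_slot] pairs.

```python
# A minimal text-to-image workflow in ComfyUI's API (prompt) format.
# Each key is a node ID; "inputs" values that are [node_id, slot] pairs
# are links to another node's output. The checkpoint filename below is
# a placeholder -- use a model that exists in your models/checkpoints folder.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a beautiful sunset over mountains",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, low quality",
                     "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

Reading the JSON this way makes it clear why the checkpoint loader has three output slots (MODEL, CLIP, VAE) feeding three different consumers.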

Essential Nodes Explained

Core Nodes

  • Load Checkpoint: Loads your Stable Diffusion model and exposes its MODEL, CLIP, and VAE outputs
  • KSampler: Runs the denoising process; controls the seed, step count, CFG scale, and sampler choice
  • CLIPTextEncode: Encodes a text prompt into conditioning; use one node for the positive prompt and a second for the negative prompt
  • VAE Decode: Converts the latent result back into a viewable image

Additional Useful Nodes

  • LoRA Loader: Applies LoRA style or subject modifications to the model
  • Load Image: Imports reference images for img2img or ControlNet
  • Conditioning nodes: Constrain prompts to specific regions or strengths
  • Upscale nodes: Enhance the resolution of the decoded image

Creating Your First Image

  1. Add a “Load Checkpoint” node
  2. Connect it to a “KSampler” node
  3. Add “CLIPTextEncode” for your prompt
  4. Create an “Empty Latent Image”
  5. Add “VAE Decode”
  6. Finally, add “Save Image”

Connect the nodes and enter your prompt:

a beautiful sunset over mountains, professional photography, golden hour lighting, 8k, masterpiece
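If you prefer scripting to clicking, ComfyUI also exposes an HTTP API: a workflow saved in API format can be queued by POSTing it to the /prompt endpoint. A minimal sketch, assuming a ComfyUI server on the default port 8188 (the client_id value is an arbitrary placeholder):

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str = "demo") -> bytes:
    """Encode an API-format workflow into the JSON body that
    ComfyUI's POST /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "127.0.0.1:8188") -> dict:
    """POST the workflow to a running ComfyUI instance and return
    the server's JSON response (which includes a prompt_id)."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

On Thunder Compute, replace the server address with your instance's URL.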

Advanced Techniques

Working with LoRAs

  1. Add a “LoRA Loader” node
  2. Route the checkpoint’s MODEL and CLIP outputs through it before they reach the KSampler and CLIPTextEncode nodes
  3. Adjust the LoRA strength (typically 0.5-1.0)
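In API-format terms, the rewiring looks like this sketch. The node IDs and the LoRA filename are placeholders; the checkpoint loader is assumed to be node "1" and the LoRA loader node "8":

```python
# A LoraLoader node inserted between the checkpoint and its consumers.
# It takes MODEL and CLIP in, and outputs modified MODEL and CLIP.
lora_node = {
    "class_type": "LoraLoader",
    "inputs": {
        "model": ["1", 0],           # MODEL output of the checkpoint loader
        "clip": ["1", 1],            # CLIP output of the checkpoint loader
        "lora_name": "my_style.safetensors",  # placeholder filename
        "strength_model": 0.8,       # typical range: 0.5-1.0
        "strength_clip": 0.8,
    },
}
# Downstream nodes now read from the LoRA loader ("8") instead of
# the checkpoint ("1"):
rewired_links = {
    "ksampler_model": ["8", 0],      # KSampler's model input
    "clip_encode_clip": ["8", 1],    # each CLIPTextEncode's clip input
}
```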

Using ControlNet

  1. Add “ControlNet” nodes
  2. Load appropriate ControlNet model
  3. Connect your input image
  4. Adjust conditioning strength
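Sketched in API-format terms, the ControlNet additions look like this. Filenames are placeholders, and the node IDs assume the base workflow from earlier (node "2" is the positive prompt):

```python
# ControlNet nodes in ComfyUI's API format. The ControlNet model and
# reference image filenames below are placeholders.
controlnet_nodes = {
    "10": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_canny.safetensors"}},
    "11": {"class_type": "LoadImage",
           "inputs": {"image": "pose_reference.png"}},
    "12": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["2", 0],   # positive prompt conditioning
                      "control_net": ["10", 0],
                      "image": ["11", 0],
                      "strength": 0.9}},          # conditioning strength
}
# The KSampler's "positive" input would then point at ["12", 0]
# instead of the raw prompt node.
```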

Optimizing Performance

Memory Management

  • Use lower resolutions for drafts (e.g. 512x512)
  • Enable TAESD previews (a tiny, fast approximation of the VAE)
  • Use fp16 precision when your GPU supports it

Speed Optimization

  • Use xformers attention
  • Batch similar operations
  • Preload frequently used models

Sharing and Saving Workflows

  1. Click “Save Workflow” in the menu
  2. Choose a descriptive name
  3. Share the workflow JSON file

To load a workflow:

  1. Click “Load Workflow”
  2. Select your workflow file
  3. Adjust parameters as needed
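Because workflows are plain JSON, you can also adjust parameters in a script before re-queuing one that someone shared with you. A small sketch, assuming the file is in API format:

```python
import json
from pathlib import Path

def load_workflow(path: str) -> dict:
    """Read a workflow saved in ComfyUI's API format from disk."""
    return json.loads(Path(path).read_text())

def reseed(workflow: dict, new_seed: int) -> dict:
    """Set every KSampler's seed -- a common tweak before re-running
    a shared workflow so it doesn't reproduce the same image."""
    for node in workflow.values():
        if node.get("class_type") == "KSampler":
            node["inputs"]["seed"] = new_seed
    return workflow
```

The same pattern works for batch-editing steps, CFG scale, or prompts across many saved workflows.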

Troubleshooting

Common Issues

  1. Out of Memory
    • Reduce batch size
    • Lower resolution
    • Use fewer nodes
  2. Slow Performance
    • Enable optimizations
    • Simplify workflows
    • Consider Thunder Compute for better GPU access
  3. Model Loading Errors
    • Check model path
    • Verify model compatibility
    • Update ComfyUI

Next Steps

Now that you have ComfyUI set up, explore:

  • Custom node development
  • Advanced workflow techniques
  • Model merging and fine-tuning

Stay tuned for our next guide on advanced Stable Diffusion techniques, including custom model training and LoRA creation!