Tags: Text-to-Image · Diffusers · Safetensors · English · StableDiffusionPipeline · stable-diffusion · stable-diffusion-diffusers
How to use wavymulder/Analog-Diffusion with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "wavymulder/Analog-Diffusion",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
Analog Diffusion
CKPT DOWNLOAD LINK - This is a DreamBooth model trained on a diverse set of analog photographs.
In your prompt, use the activation token: analog style
You may need to add the words blur, haze, and naked to your negative prompt. My dataset did not include any NSFW material, but the model seems to be pretty horny anyway. Note that using blur and haze in your negative prompt can give a sharper image, but also a less pronounced analog film effect.
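As a sketch, the activation token and the suggested negative-prompt words can be assembled before calling the pipeline. The `build_prompts` helper below and its flag names are illustrative, not part of the model:

```python
# Illustrative helper: prepend the "analog style" activation token and
# build a negative prompt from the words suggested above.
ACTIVATION_TOKEN = "analog style"

def build_prompts(subject, sharper=False, safe=True):
    """Return (prompt, negative_prompt) strings for Analog Diffusion.

    sharper: add "blur haze" to the negative prompt for a crisper image
             (at the cost of a weaker analog film effect).
    safe:    add "naked" to the negative prompt.
    """
    prompt = f"{ACTIVATION_TOKEN}, {subject}"
    negative_terms = []
    if sharper:
        negative_terms += ["blur", "haze"]
    if safe:
        negative_terms.append("naked")
    return prompt, " ".join(negative_terms)
```

Used with the pipeline from the snippet above, this would look like `prompt, negative = build_prompts("portrait of a cowboy", sharper=True)` followed by `pipe(prompt, negative_prompt=negative).images[0]`.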
Trained from Stable Diffusion 1.5 with VAE.
Gradio
We support a Gradio Web UI to run Analog-Diffusion:
Here's a link to non-cherrypicked batches.