Diffusers documentation
Overview
The pipeline supports a wide range of inference techniques, divided into two categories:
- Pipeline functionality: these techniques modify the pipeline or extend it for other applications. For example, pipeline callbacks add new features to a pipeline, and a pipeline can also be extended for distributed inference.
- Improve inference quality: these techniques increase the visual quality of the generated images. For example, you can enhance your prompts with GPT-2 to create better images with less effort.
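As a concrete illustration of the first category, a pipeline callback is just a function invoked at the end of each denoising step. The sketch below assumes the `callback_on_step_end` interface, where the callback receives the pipeline, the step index, the current timestep, and a dict of tensors, and returns that dict; the cutoff fraction and the name `cutoff_guidance` are illustrative choices, not part of the library.

```python
def cutoff_guidance(pipeline, step, timestep, callback_kwargs):
    """Example callback: disable classifier-free guidance partway through
    denoising by zeroing the pipeline's guidance scale.

    The 40% cutoff is an arbitrary choice for illustration.
    """
    if step == int(0.4 * pipeline.num_timesteps):
        pipeline._guidance_scale = 0.0
    # The returned dict lets the callback modify intermediate tensors;
    # here it is passed through unchanged.
    return callback_kwargs
```

The callback would then be passed at call time, e.g. `pipe(prompt, callback_on_step_end=cutoff_guidance)`.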