How to use lightx2v/Z-Image-Turbo-Quantized with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "lightx2v/Z-Image-Turbo-Quantized",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
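If you want the snippet above to run on whatever hardware is present, a minimal sketch follows. It mirrors the checkpoint, dtype, and prompt from the snippet; the device-selection fallback, the CPU dtype note, and the output filename are our assumptions, not part of the model card:

```python
import torch
from diffusers import DiffusionPipeline

# Pick the best available accelerator, falling back to CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

pipe = DiffusionPipeline.from_pretrained(
    "lightx2v/Z-Image-Turbo-Quantized",
    dtype=torch.bfloat16,  # as in the snippet above; consider torch.float32 on CPU if bf16 is unsupported
)
pipe.to(device)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")  # hypothetical filename; the result is a PIL image
```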
How to use lightx2v/Z-Image-Turbo-Quantized with Diffusion Single File:
```python
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
Could you add a standalone LoRA? That would be more convenient.

This is just the fp8/int8 version of the officially released 9-step model.

I thought it was an acceleration model obtained through training.
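For anyone landing here: since the comment above describes this checkpoint as a quantized build of the 9-step distilled model, an inference sketch might look like the following. `num_inference_steps=9` follows that comment; `guidance_scale=1.0` and the CUDA device are our assumptions (distilled "turbo" models usually run with little or no classifier-free guidance), so check the repository for the recommended settings:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "lightx2v/Z-Image-Turbo-Quantized", dtype=torch.bfloat16
).to("cuda")

# Step-distilled models target a small, fixed number of sampling steps.
image = pipe(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
    num_inference_steps=9,  # per the comment above: a 9-step model
    guidance_scale=1.0,     # assumption: distilled models typically disable CFG
).images[0]
image.save("astronaut_9step.png")
```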