runtime error

Exit code: 1. Reason:

vae/diffusion_pytorch_model.safetensors: 100%|██████████| 168M/168M [00:01<00:00, 86.7MB/s]
Loading checkpoint shards: 100%|██████████| 3/3 [00:01<00:00, 1.78it/s]
`torch_dtype` is deprecated! Use `dtype` instead!
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 7.28it/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|██████████| 7/7 [00:03<00:00, 1.86it/s]
Waiting for a GPU to become available
Traceback (most recent call last):
  File "/home/user/app/app.py", line 21, in <module>
    optimize_pipeline_(pipe, image=Image.new("RGB", (512, 512)), prompt='prompt')
  File "/home/user/app/optimization.py", line 59, in optimize_pipeline_
    pipeline.transformer = compile_transformer()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 187, in gradio_handler
    schedule_response = client.schedule(task_id=task_id, request=request, duration=duration_, gpu_size=gpu_size)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/client.py", line 200, in schedule
    raise error("ZeroGPU quota exceeded", message)
gradio.exceptions.Error: 'No GPU was available after 60s. Try re-running outside of examples if it happened after clicking one'
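The `torch_dtype` line in the log is diffusers' deprecation notice for the old keyword; it is a warning, not the cause of the crash. A minimal sketch of the newer spelling, assuming a diffusers release recent enough to emit that warning and using a hypothetical model id (the actual checkpoint is not shown in the log):

import torch
from diffusers import DiffusionPipeline

# Hypothetical model id -- the checkpoint the Space actually loads is not visible in the log.
MODEL_ID = "some-org/some-image-edit-model"

# Recent diffusers versions accept `dtype` and warn that `torch_dtype` is deprecated,
# which is what "`torch_dtype` is deprecated! Use `dtype` instead!" refers to.
pipe = DiffusionPipeline.from_pretrained(
    MODEL_ID,
    dtype=torch.bfloat16,  # was: torch_dtype=torch.bfloat16
)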

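The actual failure comes from the `spaces.zero` scheduler, which hands out GPUs on demand on ZeroGPU hardware. A minimal sketch of that pattern, assuming the documented `spaces.GPU` decorator and hypothetical function and variable names rather than the Space's real app.py / optimization.py code:

import spaces

# Hypothetical stand-in: the real Space loads a diffusers pipeline here, on CPU, at startup.
pipe = None  # e.g. DiffusionPipeline.from_pretrained(...)

@spaces.GPU(duration=60)  # ZeroGPU allocates a GPU only while this function runs
def generate(image, prompt):
    # Anything that needs CUDA goes inside the decorated function.
    return pipe(image=image, prompt=prompt).images[0]

In the traceback, a `spaces.GPU`-wrapped function (`compile_transformer`, reached via `optimize_pipeline_`) appears to be called at module import time (app.py line 21), so the Space can only finish starting if the scheduler can supply a GPU right away; here it raises "ZeroGPU quota exceeded" after no GPU became available within 60 s.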