Hi all. Earlier this month I wrote a script incorporating Llama 3.1, and it worked well (with timeouts, etc.), but for the past week, whenever I try to access Llama 3.1 at all, I get a 503 error, including for short, simple requests like HF's sample code.
Any idea what the problem may be? I do indeed have Pro.
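For reference, the kind of short request that now fails is essentially HF's sample code. A minimal sketch of what I'm running (model name and prompt are placeholders; the token comes from my environment):

```python
from huggingface_hub import InferenceClient

# Assumes HF_TOKEN is set in the environment; any short prompt hits the same 503.
client = InferenceClient("meta-llama/Meta-Llama-3.1-8B-Instruct", timeout=30)

response = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(response.choices[0].message.content)
```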
I'm still getting a 503 when running Llama-3.2-3B-Instruct. Any known solution?

huggingface_hub.errors.HfHubHTTPError: 503 Server Error: Service Temporarily Unavailable for url: https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.2-3B-Instruct
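Roughly what I have around the call already, in case anyone wants to compare (a minimal sketch; the retry count and backoff values are just illustrative):

```python
import time

from huggingface_hub import InferenceClient
from huggingface_hub.errors import HfHubHTTPError

client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct", timeout=60)

def chat_with_retries(prompt: str, retries: int = 5, backoff: int = 15):
    """Retry on 503s, which usually mean the model is loading or the service is briefly down."""
    for attempt in range(retries):
        try:
            return client.chat_completion(
                messages=[{"role": "user", "content": prompt}],
                max_tokens=64,
            )
        except HfHubHTTPError as err:
            status = err.response.status_code if err.response is not None else None
            if status == 503 and attempt < retries - 1:
                time.sleep(backoff * (attempt + 1))  # linear backoff between attempts
                continue
            raise

print(chat_with_retries("Hello!").choices[0].message.content)
```

If the 503 were just a cold start while the model loads, this kind of backoff would normally get past it within a few attempts, which is why the week-long failures seem like something on the service side.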