These models are converted from Qwen/Qwen3-30B-A3B-Instruct-2507.
Before using these models, please set up the generation config properly:
- temperature = 0.7
- top_p = 0.8
- top_k = 20
- min_p = 0.0
- output length: 16,384 tokens
Best Practice: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507#best-practices
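The recommended sampling settings above can be collected into a plain config dictionary; the dictionary name and the `max_new_tokens` key are illustrative (key names vary by inference framework), a minimal sketch:

```python
# Recommended sampling settings from this model card.
# Key names follow common generation-config conventions and may need
# to be adapted to your inference framework.
RECOMMENDED_GENERATION_CONFIG = {
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0.0,
    "max_new_tokens": 16384,  # recommended output token length
}
```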
Available quantizations:
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit