---
license: apache-2.0
base_model:
- WeiboAI/VibeThinker-1.5B
tags:
- llmcompressor
---

This is [WeiboAI/VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B) quantized with [LLM Compressor](https://github.com/vllm-project/llm-compressor), using the recipe provided in the "recipe.yaml" file of this repository.

The model is compatible with vLLM (tested with v0.12.0, on an RTX 5090); a minimal inference sketch is included at the end of this card.

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0

## How to Support My Work

Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). This helps me a lot to continue quantizing and evaluating models for free. Or you can "[buy me a Ko-fi](https://ko-fi.com/bnjmn_marie)".
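
## Inference with vLLM (sketch)

A minimal sketch of loading this checkpoint with vLLM's offline API. The repository id, context length, and sampling settings below are assumptions (placeholders); replace the id with the actual Hub id of this quantized checkpoint and check the base model card for recommended sampling values.

```python
from vllm import LLM, SamplingParams

# Placeholder repository id: replace with the actual Hub id of this quantized checkpoint.
model_id = "kaitchup/VibeThinker-1.5B-quantized"

# max_model_len is an assumption; adjust it to your GPU memory and prompt lengths.
llm = LLM(model=model_id, max_model_len=4096)

# Assumed sampling settings; see the base model card for recommended values.
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=1024)

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```

The same checkpoint can also be served with `vllm serve <model_id>` and queried through the OpenAI-compatible API.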
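
## Reproducing the Quantization (sketch)

A minimal sketch of applying a recipe file with LLM Compressor's `oneshot` entrypoint. The actual quantization scheme is defined by the "recipe.yaml" in this repository; the calibration dataset, sample count, sequence length, and output directory below are assumptions and only matter if the recipe is data-dependent.

```python
from llmcompressor import oneshot

# Apply the recipe shipped with this repository to the base model.
# The calibration dataset, sample count, sequence length, and output directory
# are assumptions; data-free recipes ignore the calibration arguments.
oneshot(
    model="WeiboAI/VibeThinker-1.5B",
    recipe="recipe.yaml",
    dataset="open_platypus",
    num_calibration_samples=512,
    max_seq_length=2048,
    output_dir="VibeThinker-1.5B-quantized",
)
```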