SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs Paper • 2512.04746 • Published 2 days ago • 7
Post: 🚀 SignRoundV2 for LLM quantization: PTQ-level cost, QAT-level accuracy, even at 2 bits. SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs (2512.04746)
Post: Mistral's new Ministral 3 models can now be run and fine-tuned locally (16 GB RAM). Ministral 3 has vision support and best-in-class performance for its size.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF
🐱 Step-by-step guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8, etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
Domain Adaptation of Llama3-70B-Instruct through Continual Pre-Training and Model Merging: A Comprehensive Evaluation Paper • 2406.14971 • Published Jun 21, 2024
Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit Paper • 2506.06607 • Published Jun 7 • 2
From Code Foundation Models to Agents and Applications: A Practical Guide to Code Intelligence Paper • 2511.18538 • Published 13 days ago • 238
Stabilizing Reinforcement Learning with LLMs: Formulation and Practices Paper • 2512.01374 • Published 5 days ago • 75