Gemma 4 31B-it - RotorQuant MLX 4-bit
4-bit weight-quantized MLX version of google/gemma-4-31B-it with RotorQuant KV-cache quantization, optimized for Apple Silicon inference via the MLX framework. RotorQuant delivers 5.3x faster prefill and 28% faster decode compared to TurboQuant, and the 4-bit weights offer a good balance between model quality and memory efficiency.
Approximate model size: ~17 GB
Model Specifications
| Property | Value |
|---|---|
| Base Model | google/gemma-4-31B-it |
| Parameters | 31 billion |
| Architecture | Dense transformer |
| Modality | Multimodal: image + text input, text output |
| License | Apache 2.0 |
| Weight Quantization | 4-bit (~17 GB) |
| KV-Cache Quantization | RotorQuant |
| Framework | MLX (Apple Silicon) |
Quickstart
```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/gemma-4-31B-it-RotorQuant-MLX-4bit")
prompt = "Explain KV-cache quantization in two sentences."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
For multimodal usage with images:
```python
from mlx_vlm import load, generate

model, processor = load("majentik/gemma-4-31B-it-RotorQuant-MLX-4bit")
prompt = "What do you see in this image?"
output = generate(model, processor, prompt=prompt, image="path/to/image.jpg", max_tokens=512)
print(output)
```
What is RotorQuant?
RotorQuant is a high-performance KV-cache quantization method that achieves significantly better throughput than TurboQuant. Combined with 4-bit weight quantization in MLX, this provides a dual compression strategy with superior KV-cache performance: smaller model weights plus faster compressed KV cache for efficient long-context generation.
Key advantages over TurboQuant:
- 5.3x faster prefill
- 28% faster decode
- Equivalent memory savings
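To make the compression side of this concrete, here is a minimal, illustrative sketch of group-wise absmax 4-bit quantization, the general idea behind compressed KV caches. This is not the actual RotorQuant algorithm (whose rotation-based details live in its repository); the function names and group size are hypothetical choices for the example.

```python
# Illustrative sketch only: group-wise absmax 4-bit quantization, NOT the
# actual RotorQuant method. Each group of values shares one float scale;
# the values themselves are stored as signed 4-bit integers in [-8, 7].

def quantize_4bit(values, group_size=32):
    """Quantize floats to signed 4-bit ints with one scale per group."""
    quants, scales = [], []
    for i in range(0, len(values), group_size):
        group = values[i:i + group_size]
        # Map the largest magnitude in the group onto +/-7 (0 maps to scale 1.0).
        scale = max(abs(v) for v in group) / 7 or 1.0
        scales.append(scale)
        quants.extend(max(-8, min(7, round(v / scale))) for v in group)
    return quants, scales

def dequantize_4bit(quants, scales, group_size=32):
    """Reconstruct approximate floats from 4-bit ints and per-group scales."""
    return [q * scales[i // group_size] for i, q in enumerate(quants)]

vals = [0.5, -1.25, 3.0, 0.0]
q, s = quantize_4bit(vals, group_size=4)
print(q)                                   # small signed integers
print(dequantize_4bit(q, s, group_size=4)) # close to the original values
```

Storing 4-bit integers plus one scale per group is what yields roughly 4x memory savings over FP16 for the cache, at the cost of a small reconstruction error per group.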
KV-Cache Quantization Comparison
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
Memory Estimates (Gemma 4 31B-it)
| Precision | Approximate Size | MLX Variant |
|---|---|---|
| FP16 (original) | ~62 GB | -- |
| 8-bit quantized | ~31 GB | RotorQuant-MLX-8bit |
| 4-bit quantized | ~17 GB | This model |
| 2-bit quantized | ~9 GB | RotorQuant-MLX-2bit |
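The table follows from simple arithmetic: bytes = parameters x bits / 8. A rough sketch of that estimate (actual files run slightly larger because of per-group quantization scales and layers kept at higher precision):

```python
# Back-of-the-envelope weight size: n_params * bits / 8 bytes. Real
# checkpoints are somewhat larger due to quantization scales, embeddings,
# and any layers kept at higher precision.

def approx_size_gb(n_params, bits):
    return n_params * bits / 8 / 1e9

for bits in (16, 8, 4, 2):
    print(f"{bits}-bit: ~{approx_size_gb(31e9, bits):.1f} GB")
```

For 31B parameters this gives ~62 GB at FP16 and ~15.5 GB at 4-bit, consistent with the ~17 GB on-disk figure once scale overhead is included.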
Hardware Requirements
This model requires approximately 17 GB of unified memory. Recommended hardware:
- Apple M2 Max (32 GB+)
- Apple M3 Pro (36 GB+)
- Apple M4 Pro (24 GB+)
- Any Apple Silicon Mac with 24 GB+ unified memory
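As a quick sanity check before downloading, you can compare total system memory against the 24 GB recommendation above. This sketch uses the standard-library `os.sysconf` (available on macOS and Linux); the threshold mirrors this card's guidance, not a hard limit enforced by MLX:

```python
import os

# Rough check that the machine can hold ~17 GB of weights plus headroom
# for the KV cache and the OS. 24 GB mirrors the recommendation above.
total_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9

if total_gb >= 24:
    print(f"{total_gb:.0f} GB detected: the 4-bit model should fit")
else:
    print(f"{total_gb:.0f} GB detected: consider the 2-bit variant")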
See Also
- google/gemma-4-31B-it -- Base model
- majentik/gemma-4-31B-it-RotorQuant -- RotorQuant KV-cache only (transformers)
- majentik/gemma-4-31B-it-RotorQuant-MLX-8bit -- MLX 8-bit variant
- majentik/gemma-4-31B-it-RotorQuant-MLX-2bit -- MLX 2-bit variant
- majentik/gemma-4-31B-it-TurboQuant-MLX-4bit -- TurboQuant MLX 4-bit variant
- RotorQuant GitHub
- MLX Framework
Quant trade-off (MLX lane)
| Bits | Approx size | Use case | Recommendation |
|---|---|---|---|
| 2-bit | ~8.1 GB | Aggressive quantization | Very low-RAM Macs |
| 3-bit | ~11 GB | Lossy but small | Low-RAM Macs |
| **4-bit** | **~13 GB** | **Balanced default** | **Recommended for most Macs** |
| 5-bit | ~16 GB | Higher fidelity | Quality-sensitive |
| 6-bit | ~19 GB | Approaching FP16 quality | High-fidelity |
| 8-bit | ~24 GB | Near-lossless reference | Fidelity-critical work |
(The current variant, 4-bit, is bolded.)
Variants in this family
(Showing 18 variants under majentik/gemma-4-31B-it-*. The current variant, RotorQuant-MLX-4bit, is bolded.)
| Variant | Runtime | Approx size | Use case |
|---|---|---|---|
| RotorQuant | runtime modifier | n/a | KV-cache root (weight-agnostic) |
| RotorQuant-AWQ-4bit | transformers | ~19 GB | GPU 4-bit (AutoAWQ) |
| RotorQuant-AWQ-8bit | transformers | ~34 GB | GPU 8-bit (AutoAWQ) |
| RotorQuant-GGUF-IQ4_XS | llama.cpp | ~27 GB | Lossy 4-bit, low-RAM CPU/edge |
| RotorQuant-GGUF-Q2_K | llama.cpp | ~19 GB | Lossy, low-RAM CPU/edge |
| RotorQuant-GGUF-Q3_K_M | llama.cpp | ~24 GB | Smaller 3-bit, CPU-friendly |
| RotorQuant-GGUF-Q4_K_M | llama.cpp | ~34 GB | Balanced default |
| RotorQuant-GGUF-Q5_K_M | llama.cpp | ~41 GB | Higher fidelity, more RAM |
| RotorQuant-GGUF-Q8_0 | llama.cpp | ~65 GB | Near-lossless reference |
| RotorQuant-MLX-2bit | mlx-lm | ~9.9 GB | Apple Silicon, smallest |
| **RotorQuant-MLX-4bit** | **mlx-lm** | **~19 GB** | **Apple Silicon balanced** |
| RotorQuant-MLX-8bit | mlx-lm | ~37 GB | Apple Silicon reference |
| TurboQuant | runtime modifier | n/a | KV-cache root (weight-agnostic) |
| TurboQuant-AWQ-4bit | transformers | ~19 GB | GPU 4-bit (AutoAWQ) |
| TurboQuant-AWQ-8bit | transformers | ~34 GB | GPU 8-bit (AutoAWQ) |
| TurboQuant-MLX-2bit | mlx-lm | ~9.9 GB | Apple Silicon, smallest |
| TurboQuant-MLX-4bit | mlx-lm | ~19 GB | Apple Silicon balanced |
| TurboQuant-MLX-8bit | mlx-lm | ~37 GB | Apple Silicon reference |