Update README.md
README.md CHANGED
@@ -13,11 +13,13 @@ The official QAT weights released by google use fp16 (instead of Q6_K) for the e
 
 Here are some perplexity measurements:
 
-| Model | File size ↓ | PPL (wiki.test.raw) ↓ |
-| --- | --- | --- |
-| [This model](https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small/blob/main/gemma-3-4b-it-q4_0_s.gguf) | 2.36 GB | 14.5943 +/- 0.13405 |
-| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF/blob/main/google_gemma-3-4b-it-Q4_0.gguf) | 2.37 GB | 16.8002 +/- 0.16519 |
-| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf/blob/main/gemma-3-4b-it-q4_0.gguf) | 3.16 GB | 14.5796 +/- 0.13395 |
+| Model | File size ↓ | PPL (wiki.test.raw) ↓ | Hellaswag (first 4000 tasks, deterministic) ↑ |
+| --- | --- | --- | --- |
+| [This model](https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small/blob/main/gemma-3-4b-it-q4_0_s.gguf) | 2.36 GB | 14.5943 +/- 0.13405 | 65.675% |
+| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF/blob/main/google_gemma-3-4b-it-Q4_0.gguf) | 2.37 GB | 16.8002 +/- 0.16519 | 65.65% |
+| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf/blob/main/gemma-3-4b-it-q4_0.gguf) | 3.16 GB | 14.5796 +/- 0.13395 | 66.075% |
 
 Note that this model ends up smaller than the Q4_0 from Bartowski. This is because llama.cpp sets some tensors to Q4_1 when quantizing models to Q4_0 with imatrix, but this is a static quant.
 The perplexity scores are within margin of error between this model and the original QAT, despite the size difference.
+
+The drop in Hellaswag score is a bit surprising and disappointing to me. I was really expecting this model to perform almost identically to the original, but it seems there are 16 tasks out of the 4000 tested on which this model performs worse.
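
As a quick sanity check on the two claims in the changed text (the PPL difference being within the margin of error, and the Hellaswag gap amounting to 16 tasks), here is a small Python sketch using only the numbers from the table; the variable names are illustrative, not from the README:

```python
# Check the "within margin of error" claim: the two reported
# +/- intervals for PPL on wiki.test.raw overlap.
this_model_ppl, this_model_err = 14.5943, 0.13405  # this repo's Q4_0
google_qat_ppl, google_qat_err = 14.5796, 0.13395  # Google's QAT Q4_0

diff = abs(this_model_ppl - google_qat_ppl)        # 0.0147
combined_margin = this_model_err + google_qat_err  # 0.268
print(diff <= combined_margin)                     # True -> within margin of error

# Check the "16 tasks" claim: a 0.4-point Hellaswag gap over the
# first 4000 tasks corresponds to 16 individual tasks.
gap_tasks = round((66.075 - 65.675) / 100 * 4000)
print(gap_tasks)                                   # 16
```

The difference of about 0.015 PPL sits well inside the combined margin of roughly 0.27, which supports reading the perplexity gap as noise.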
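For reproducing the measurements, numbers like these are typically obtained with llama.cpp's `llama-perplexity` tool. The sketch below is an assumption about the methodology, not something stated in the README; it presumes a local llama.cpp build on PATH (older builds name the binary `perplexity`) and locally downloaded evaluation files, whose paths are also assumed:

```python
# Hypothetical reproduction script: assumes llama-perplexity is on PATH,
# and that wikitext-2's wiki.test.raw and llama.cpp's hellaswag_val_full.txt
# are in the working directory.
import subprocess

MODEL = "gemma-3-4b-it-q4_0_s.gguf"  # the file from this repo

# Perplexity on wikitext-2 (the PPL column).
subprocess.run(
    ["llama-perplexity", "-m", MODEL, "-f", "wiki.test.raw"],
    check=True,
)

# Hellaswag over the first 4000 tasks (the new column).
subprocess.run(
    ["llama-perplexity", "-m", MODEL, "-f", "hellaswag_val_full.txt",
     "--hellaswag", "--hellaswag-tasks", "4000"],
    check=True,
)
```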
|