stduhpf committed
Commit 359d949 · verified · 1 Parent(s): 77ab62e

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -14,7 +14,7 @@ The official QAT weights released by google use fp16 (instead of Q6_K) for the e

 Here are some perplexity measurements:

 | Model | File size ↓ | PPL (wiki.text.raw) ↓ | Hellaswag (first 4000 tasks, deterministic) ↑ |
-| --- | --- | --- |
+| --- | --- | --- | --- |
 | [This model](https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small/blob/main/gemma-3-4b-it-q4_0_s.gguf) | 2.36 GB | 14.5943 +/- 0.13405 | 65.675% |
 | [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF/blob/main/google_gemma-3-4b-it-Q4_0.gguf) | 2.37 GB | 16.8002 +/- 0.16519 | 65.65% |
 | [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf/blob/main/gemma-3-4b-it-q4_0.gguf) | 3.16 GB | 14.5796 +/- 0.13395 | 66.075% |
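The change above fixes a common Markdown table bug: the delimiter row must contain one `---` cell per header column, and the original had only three cells for a four-column header. A minimal sketch of that check (the `column_count` helper and the sample rows are illustrative, not part of the commit):

```python
# Minimal sketch: verify a Markdown table's delimiter row has one cell
# per header column -- the mismatch this commit fixes in README.md.

def column_count(row: str) -> int:
    # Count cells between the outer pipes of a table row.
    return len(row.strip().strip("|").split("|"))

header = "| Model | File size | PPL | Hellaswag |"
before = "| --- | --- | --- |"        # 3 cells: renderers may reject the table
after  = "| --- | --- | --- | --- |"  # 4 cells: matches the header

print(column_count(header) == column_count(before))  # False
print(column_count(header) == column_count(after))   # True
```

With only three delimiter cells, strict GitHub-flavored-Markdown renderers refuse to treat the block as a table at all, which is why the one-character-class fix matters.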