# Abhi99999/Hunyuan-1.8B-Instruct-Q4_K_M-GGUF

This model was converted to GGUF format from tencent/Hunyuan-1.8B-Instruct using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

## Benchmark

Note: The following benchmarks were evaluated with the TRT-LLM backend on the base (pretrain) models.

| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain |
|---|---|---|---|---|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |

| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|---|---|---|---|---|---|
| Mathematics | AIME 2024 | 17.2 | 56.7 | 78.3 | 81.1 |
| | AIME 2025 | 20 | 53.9 | 66.5 | 75.3 |
| | MATH | 48.5 | 86 | 92.6 | 93.7 |
| Science | GPQA-Diamond | 23.3 | 47.2 | 61.1 | 60.1 |
| | OlympiadBench | 29.6 | 63.4 | 73.1 | 76.5 |
| Coding | Livecodebench | 11.1 | 31.5 | 49.4 | 57 |
| | Fullstackbench | 20.9 | 42 | 54.6 | 56.3 |
| Reasoning | BBH | 40.3 | 64.6 | 83 | 87.8 |
| | DROP | 52.8 | 76.7 | 78.2 | 85.9 |
| | ZebraLogic | 34.5 | 74.6 | 83.5 | 85.1 |
| Instruction Following | IF-Eval | 49.7 | 67.6 | 76.6 | 79.3 |
| | SysBench | 28.1 | 55.5 | 68 | 72.7 |
| Agent | BFCL v3 | 49.8 | 58.3 | 67.9 | 70.8 |
| | τ-Bench | 14.4 | 18.2 | 30.1 | 35.3 |
| | ComplexFuncBench | 13.9 | 22.3 | 26.3 | 29.2 |
| | C3-Bench | 45.3 | 54.6 | 64.3 | 68.5 |
| Long Context | PenguinScrolls | 53.9 | 73.1 | 83.1 | 82 |
| | longbench-v2 | 34.7 | 33.2 | 44.1 | 43 |
| | FRAMES | 41.9 | 55.6 | 79.2 | 78.6 |

## Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Abhi99999/Hunyuan-1.8B-Instruct-Q4_K_M-GGUF --hf-file hunyuan-1.8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Abhi99999/Hunyuan-1.8B-Instruct-Q4_K_M-GGUF --hf-file hunyuan-1.8b-instruct-q4_k_m.gguf -c 2048
```
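
Once the server is up you can send it requests over HTTP. A minimal sketch, assuming the default bind address of 127.0.0.1:8080 and the OpenAI-compatible chat endpoint that recent llama-server builds expose:

```bash
# Query the running llama-server via its OpenAI-compatible chat endpoint.
# Assumes the default 127.0.0.1:8080; adjust if you passed --host/--port.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Briefly explain what GGUF quantization is."}
        ],
        "max_tokens": 256
      }'
```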

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
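
Recent llama.cpp versions have deprecated the Makefile in favor of a CMake-based build; a rough equivalent (option names assume a recent checkout) looks like this:

```bash
# CMake-based build used by recent llama.cpp versions.
# Add hardware-specific options as needed, e.g. -DGGML_CUDA=ON for Nvidia GPUs.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release -j
# Binaries are placed under build/bin/ (e.g. build/bin/llama-cli).
```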

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Abhi99999/Hunyuan-1.8B-Instruct-Q4_K_M-GGUF --hf-file hunyuan-1.8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Abhi99999/Hunyuan-1.8B-Instruct-Q4_K_M-GGUF --hf-file hunyuan-1.8b-instruct-q4_k_m.gguf -c 2048
```
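
If you would rather keep a local copy of the weights instead of fetching them through --hf-repo, one option (assuming the huggingface_hub CLI is installed, e.g. via pip install -U "huggingface_hub[cli]") is to download the GGUF file first and point llama-cli at it with -m:

```bash
# Download the quantized GGUF file from the Hub.
huggingface-cli download Abhi99999/Hunyuan-1.8B-Instruct-Q4_K_M-GGUF \
  hunyuan-1.8b-instruct-q4_k_m.gguf --local-dir .

# Run against the local file instead of resolving it via --hf-repo.
./llama-cli -m ./hunyuan-1.8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```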

Model details: GGUF, 2B params, hunyuan-dense architecture, 4-bit quantization (Q4_K_M).
