Paper: FuseChat: Knowledge Fusion of Chat Models (arXiv:2408.07990)
Original model: YOYO-O1-14B by YOYO-AI
Foundation model: Qwen2.5-14B by Qwen
4bpw h6 (main)
4.5bpw h6
5bpw h6
6bpw h6
8bpw h8
Made with ExLlamaV2 0.2.8 using the default calibration dataset.
These quants can be used with TabbyAPI or Text-Generation-WebUI on RTX GPUs (Windows) or RTX/ROCm (Linux).
The quants have to fit entirely in your VRAM; if you need RAM offloading, choose GGUF quants instead.
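As a rule of thumb, the weight footprint of an EXL2 quant is the parameter count times bits per weight, divided by eight; KV cache and framework overhead come on top. A quick sketch (the ~14.8B parameter count and the 1.5 GiB overhead figure are assumptions, not measured values):

```python
def quant_weight_gib(n_params: float, bpw: float) -> float:
    """Approximate on-disk/VRAM weight size of a quant in GiB."""
    return n_params * bpw / 8 / 1024**3

# Assumed values: ~14.8B parameters, ~1.5 GiB for KV cache + runtime overhead.
for bpw in (4.0, 4.5, 5.0, 6.0, 8.0):
    total = quant_weight_gib(14.8e9, bpw) + 1.5
    print(f"{bpw} bpw -> ~{total:.1f} GiB VRAM")
```

This is only a ballpark; longer contexts grow the KV cache well past the fixed overhead assumed here.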
This merge combines some of the strongest open-source 14B reasoning and code models.
This model was merged using the SCE merge method with Qwen/Qwen2.5-Coder-14B as the base.
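As a rough intuition (a toy sketch, not mergekit's actual implementation), SCE-style merging operates on task vectors: the element-wise differences between each target model and the base. High-variance positions are selected, minority-sign contributions are erased, and the surviving deltas are averaged back onto the base. With NumPy arrays standing in for weight tensors:

```python
import numpy as np

def sce_like_merge(base, targets, select_frac=1.0):
    """Toy SCE-style merge: Select high-variance positions,
    Calculate the dominant sign, Erase disagreeing deltas."""
    # Task vectors: difference of each target model from the base.
    deltas = np.stack([t - base for t in targets])
    # Select: keep the fraction of positions with highest variance across models.
    var = deltas.var(axis=0)
    k = max(1, int(select_frac * var.size))
    keep = np.zeros(var.shape, dtype=bool)
    keep.flat[np.argsort(var, axis=None)[-k:]] = True
    # Erase: drop deltas whose sign disagrees with the elementwise majority.
    sign = np.sign(deltas.sum(axis=0))
    agree = (np.sign(deltas) == sign) & keep
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (deltas * agree).sum(axis=0) / counts
    return base + merged_delta
```

With `select_frac=1.0` every position is eligible, loosely mirroring the `select_topk: 1` setting in the config below; the real method works per-tensor on full checkpoints.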
The following models were included in the merge:
agentica-org/DeepCoder-14B-Preview
qihoo360/Light-R1-14B-DS
Gen-Verse/ReasonFlux-F1-14B
Qwen/Qwen2.5-Coder-14B-Instruct
The following YAML configuration was used to produce this model:

```yaml
merge_method: sce
models:
  # Pivot model
  - model: Qwen/Qwen2.5-Coder-14B
  # Target models
  - model: agentica-org/DeepCoder-14B-Preview
  - model: qihoo360/Light-R1-14B-DS
  - model: Gen-Verse/ReasonFlux-F1-14B
  - model: Qwen/Qwen2.5-Coder-14B-Instruct
base_model: Qwen/Qwen2.5-Coder-14B
parameters:
  select_topk: 1
dtype: float16
tokenizer_source: qihoo360/Light-R1-14B-DS
normalize: true
int8_mask: true
```
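For reference, a config like this is typically applied with mergekit's `mergekit-yaml` entry point (the file and output paths here are illustrative):

```shell
pip install mergekit
mergekit-yaml sce-config.yaml ./merged-model --cuda
```

The merge downloads all listed checkpoints, so expect substantial disk and memory use for 14B-class models.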