Active filters: dpo, trl
Text Generation • 7B • Updated • 14 • 2
HuggingFaceH4/zephyr-7b-gemma-v0.1 • Text Generation • 9B • Updated • 297 • 124
HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407 • Text Generation • 12B • Updated • 32 • 24
Text Generation • 32B • Updated • 13 • 2
mradermacher/Role-mo-V3-7B-GGUF • 7B • Updated • 491 • 1
mradermacher/Role-mo-V3-7B-i1-GGUF • 7B • Updated • 3.78k • 1
lewtun/zephyr-7b-dpo-full • Text Generation • 7B • Updated • 3
alignment-handbook/zephyr-7b-dpo-full • Text Generation • 7B • Updated • 24 • 3
alignment-handbook/zephyr-7b-dpo-qlora • Updated • 19 • 9
amirali1985/gpt-neo-125m_hh_reward • Text Generation • 0.1B • Updated • 2
lewtun/zephyr-7b-dpo-qlora
sambar/zephyr-7b-ipo-lora • Text Generation • Updated • 1
nikkoyabut/merged_model_dpo • Updated
sambar/zephyr-7b-ipo-lora-5ep • Text Generation • Updated • 2
alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo • Text Generation • 1B • Updated • 3 • 2
Yaxin1992/mixtral-dpo-1000
adhi29/openhermes-mistral-dpo-gptq • Updated
Text Generation • 1.03M • Updated • 2
ybelkada/test-tags-model-2 • Text Generation • 1.03M • Updated • 1
justinj92/dpoplatypus-phi2 • Text Generation • 3B • Updated
lewtun/zephyr-7b-dpo-qlora-8e0975a • Updated
akashkumarbtc/openhermes-mistral-dpo-gptq • Updated
darshan8950/openhermes-mistral-dpo-gptq • Updated
ondevicellm/zephyr-7b-dpo-full • Text Generation • 7B • Updated • 2
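A listing like the one above can be reproduced programmatically with the huggingface_hub client. The sketch below assumes the active filters map directly to the Hub tags dpo and trl; the sort order and result limit are illustrative choices, not necessarily what the page used.

```python
# Minimal sketch: query the Hugging Face Hub for models tagged with both
# "dpo" and "trl", roughly mirroring the filtered listing above.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    filter=["dpo", "trl"],  # models carrying both tags (taken from the active filters)
    sort="downloads",       # assumed sort key; the page may order results differently
    direction=-1,           # descending
    limit=25,               # roughly the number of cards shown above (assumption)
)

for m in models:
    # ModelInfo exposes the fields shown on each card: id, task, downloads, likes.
    print(m.id, m.pipeline_tag, m.downloads, m.likes)
```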