# deepseek-llama3-ablit-slerp
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
### Models Merged
The following models were included in the merge:
- failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5
- huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
        layer_range: [0, 79]
      - model: failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5
        layer_range: [0, 79]
merge_method: slerp
base_model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
parameters:
  t:
    - filter: self_attn
      value: 0.3
    - filter: mlp
      value: 0.7
    - filter: embed_tokens
      value: 0.3
    - value: 0.5
dtype: bfloat16
```
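For intuition, SLERP interpolates each pair of weight tensors along an arc on the unit sphere rather than along a straight line, which preserves the magnitude relationships between the two parents better than plain averaging. The `t` values above control how far the result sits from the base model per layer type (0.3 for attention weights, 0.7 for MLP weights, 0.5 elsewhere). A minimal NumPy sketch of the interpolation itself (the function name and flattened-tensor handling are illustrative, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns a, t=1 returns b; falls back to linear interpolation
    when the two vectors are nearly colinear (angle ~ 0).
    """
    a_norm = a / (np.linalg.norm(a) + eps)
    b_norm = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    if abs(dot) > 1.0 - eps:
        # Nearly parallel directions: lerp is numerically stable here.
        return (1.0 - t) * a + t * b
    theta = np.arccos(dot)        # angle between the two weight directions
    sin_theta = np.sin(theta)
    # Weight each endpoint so the result traces the arc between them.
    return (np.sin((1.0 - t) * theta) / sin_theta) * a + \
           (np.sin(t * theta) / sin_theta) * b
```

In the config, `filter` entries match parameter names, so a tensor whose name contains `self_attn` is merged with `t=0.3` (closer to the DeepSeek base), `mlp` tensors with `t=0.7` (closer to the Llama-3 instruct parent), and the trailing bare `value: 0.5` is the default for everything unmatched.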