ND911/Fraken-Maid-TW-K-Slerp.gguf

GGUF conversion of ND911/Fraken-Maid-TW-K-Slerp, quantized with --outtype q8_0.

ND911/Fraken-Maid-TW-K-Slerp is a merge of the following models using mergekit:

- SanjiWatsuki/Kunoichi-7B
- ND911/Fraken-Maid-TW-Slerp

Compatible with Ollama.

Architecture: mistral
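
A minimal sketch of the conversion step, assuming a local checkout of llama.cpp and its convert_hf_to_gguf.py converter (the script name, paths, and directory layout are assumptions and vary between llama.cpp versions):

```python
# Sketch only: convert the merged Hugging Face checkpoint to GGUF at q8_0.
# Assumes llama.cpp is cloned locally; the converter script name and the
# local model directory are assumptions, not taken from this card.
import subprocess

subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "./Fraken-Maid-TW-K-Slerp",                      # merged HF model directory
        "--outfile", "ND911_Fraken-Maid-TW-K-Slerp.gguf",
        "--outtype", "q8_0",                             # quantization noted above
    ],
    check=True,
)
```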

Modelfile

FROM "./model/ND911_Fraken-Maid-TW-K-Slerp.gguf"
TEMPLATE """
### Instruction:
{{ .Prompt }}
### Response:
"""

🧩 Configuration

slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 32]
      - model: ND911/Fraken-Maid-TW-Slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: ND911/Fraken-Maid-TW-Slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
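
The merge can be reproduced from the YAML above. The sketch below uses mergekit's Python API (the entry points follow mergekit's documented usage, but treat the exact signatures and options as assumptions; the mergekit-yaml CLI is equivalent):

```python
# Sketch: run the SLERP merge described in the configuration above with mergekit.
# Assumes the YAML has been saved to config.yaml; the output path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Fraken-Maid-TW-K-Slerp",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=False, low_cpu_memory=False),
)
```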