---
license: cc-by-nc-4.0
pipeline_tag: text-generation
---

# 🧠 MythoMax-L2-13B - GGUF FP16 (Unquantized)

This is a GGUF-converted, float16 build of Gryphe's MythoMax-L2-13B, designed for full-quality local inference on high-VRAM GPUs.

πŸŽ™οΈ Converted & shared by: Sandra Weidmann
πŸ› οΈ Tested with: RTX 3090, text-generation-webui + llama.cpp
πŸ”— Original Model: Gryphe/MythoMax-L2-13B


## ✨ Why this model?

This model was converted to preserve full precision (float16) for use in:

- 🧠 fine-tuned instruction tasks
- 🎭 roleplay and creative writing
- 💬 emotionally nuanced dialogue
- 🧪 experimentation with full-context outputs (4096+ tokens)

πŸ“¦ Model Details

Property Value
Format GGUF
Precision float16 (f16)
Context Size 4096
Tensor Count 363
File Size ~26.0β€―GB
Original Format Transformers (.bin)
Converted Using convert_hf_to_gguf.py
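
For reference, the conversion from the original Transformers checkpoint follows llama.cpp's standard workflow. The command below is a sketch, not the exact invocation used here; the local checkpoint directory and output filename are placeholders.

```bash
# Sketch: convert the HF checkpoint to an f16 GGUF using llama.cpp's converter script.
# ./MythoMax-L2-13B is a placeholder path to the downloaded Transformers model.
python convert_hf_to_gguf.py ./MythoMax-L2-13B \
    --outtype f16 \
    --outfile mythomax-l2-13b-f16.gguf
```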

## 🧰 Usage (with llama.cpp)

```bash
./main -m mythomax-l2-13b-f16.gguf -c 4096 -n 512 --color
```

(In newer llama.cpp builds this binary is named `llama-cli`.)

Or via text-generation-webui:

- Backend: llama.cpp
- Load model: mythomax-l2-13b-f16.gguf
- Set context: 4096+
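
To serve the model over HTTP instead of running it interactively, llama.cpp's bundled server takes the same model and context flags. A minimal sketch, assuming the GGUF file sits in the working directory (the binary is `./server` in older builds and `llama-server` in newer ones):

```bash
# Serve the model on port 8080 with the full 4096-token context window.
./server -m mythomax-l2-13b-f16.gguf -c 4096 --port 8080
```

Recent llama.cpp server builds also expose an OpenAI-compatible chat completions endpoint, so most OpenAI client libraries can be pointed at this port directly.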

πŸ’™ Notes
This GGUF build is shared for non-commercial, experimental, and educational use.
Full credit to the original model author Gryphe.
If this version helped you, consider giving it a ⭐ and sharing feedback.

Sandra ✨
py-sandy
https://samedia.app/dev