---
license: cc-by-nc-4.0
license_name: cc-by-nc-4.0
pipeline_tag: text2text-generation
---
# 🧠 MythoMax-L2-13B - GGUF FP16 (Unquantized)
This is a GGUF-converted, float16 version of Gryphe's MythoMax-L2-13B, designed for local inference with full quality on high-VRAM GPUs.
- **Converted & shared by:** Sandra Weidmann
- 🛠️ **Tested with:** RTX 3090, text-generation-webui + llama.cpp
- **Original model:** [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
## ✨ Why this model?
This model was converted to preserve full precision (float16) for use in:
- 🧠 fine-tuned instruction tasks
- roleplay and creative writing
- 💬 emotionally nuanced dialogue
- 🧪 experimentation with full-context outputs (up to 4096 tokens)
## 📦 Model Details
| Property | Value |
|---|---|
| Format | GGUF |
| Precision | float16 (f16) |
| Context Size | 4096 |
| Tensor Count | 363 |
| File Size | ~26.0 GB |
| Original Format | Transformers (.bin) |
| Converted Using | `convert_hf_to_gguf.py` |
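For reference, the conversion was done with llama.cpp's `convert_hf_to_gguf.py` (roughly `python convert_hf_to_gguf.py <hf_model_dir> --outtype f16 --outfile mythomax-l2-13b-f16.gguf`; exact paths depend on your checkout). You can sanity-check the numbers in the table with the `gguf` Python package that ships alongside llama.cpp. A minimal sketch, assuming the file sits in the current directory:

```python
# Sketch: inspect GGUF metadata with the `gguf` package (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("mythomax-l2-13b-f16.gguf")

# List header metadata keys (architecture, context length, etc.).
for name in reader.fields:
    print(name)

# Tensor count should match the table above (363).
print(f"tensor count: {len(reader.tensors)}")
```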
## 🧰 Usage (with llama.cpp)
```bash
./main -m mythomax-l2-13b-f16.gguf -c 4096 -n 512 --color
```
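Note: recent llama.cpp releases renamed the example binary from `main` to `llama-cli`; the flags above (`-m`, `-c`, `-n`, `--color`) are unchanged.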
Or via text-generation-webui:
1. Backend: llama.cpp
2. Load model: `mythomax-l2-13b-f16.gguf`
3. Set context: 4096
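The file also loads through the llama-cpp-python bindings if you prefer scripting. A minimal sketch; the Alpaca-style prompt is an assumption commonly used with MythoMax, so verify the exact format against the upstream model card:

```python
# Sketch: run the f16 GGUF via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="mythomax-l2-13b-f16.gguf",
    n_ctx=4096,        # full context, matching the table above
    n_gpu_layers=-1,   # offload all layers; the f16 file wants ~26 GB of VRAM
)

# Alpaca-style prompt (assumed; check Gryphe's original card).
prompt = (
    "### Instruction:\n"
    "Write a short scene set in a rain-soaked city at night.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=512, temperature=0.8)
print(out["choices"][0]["text"])
```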
## Notes
This GGUF build is shared for non-commercial, experimental, and educational use. Full credit to the original model author, Gryphe.

If this version helped you, consider giving it a ⭐ and sharing feedback.
Sandra ✨
py-sandy
https://samedia.app/dev