This model was converted to GGUF format from [`unsloth/Mistral-Small-3.2-24B-Instruct-2506`](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506) for more details on the model.

---

Building upon Mistral Small 3.1 (2503) with added reasoning capabilities, trained with SFT on Magistral Medium traces and RL on top, Magistral Small is a small, efficient reasoning model with 24B parameters.

Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
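
```bash
brew install llama.cpp
```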
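
Once installed, you can run the model with the llama.cpp CLI or server. The sketch below follows the usual GGUF-my-repo instructions; `<repo-id>` and `<quant-file>.gguf` are placeholders for this repository's id and the quantized filename you want, not real names:

```bash
# One-shot generation with the CLI; llama.cpp fetches the GGUF from the Hub on first use.
# <repo-id> and <quant-file>.gguf are placeholders for this repo's actual values.
llama-cli --hf-repo <repo-id> --hf-file <quant-file>.gguf -p "The meaning to life and the universe is"

# Or start an OpenAI-compatible HTTP server on the same model:
llama-server --hf-repo <repo-id> --hf-file <quant-file>.gguf -c 2048
```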