---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- llama.cpp
- gguf
base_model:
- LiquidAI/LFM2-1.2B
---

<center>
<div style="text-align: center;">
<img
src="/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F61b8e2ba285851687028d395%2F2b08LKpev0DNEk6DlnWkY.png%26quot%3B%3C%2Fspan%3E
alt="Liquid AI"
style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
<div style="display: flex; justify-content: center; gap: 0.5em;">
<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> • <a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> • <a href="https://leap.liquid.ai/"><strong>LEAP</strong></a>
</div>
</center>

# LFM2-1.2B-GGUF

LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-1.2B

## 🏃 How to run LFM2

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```bash
llama-cli -hf LiquidAI/LFM2-1.2B-GGUF
```
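
`llama-cli` applies the chat template embedded in the GGUF automatically, but when driving llama.cpp through lower-level bindings you may need to render the prompt yourself. A minimal sketch, assuming LFM2 uses a ChatML-style template with `<|im_start|>`/`<|im_end|>` markers — verify the exact markers against the tokenizer configuration in the original model card:

```python
# Sketch of hand-building a ChatML-style prompt for low-level llama.cpp
# bindings. The <|im_start|>/<|im_end|> markers are an assumption; check
# them against the GGUF's embedded tokenizer metadata before relying on
# this formatting.
def build_prompt(messages):
    """Render a list of {role, content} dicts into a single prompt string."""
    parts = []
    for message in messages:
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what LFM2 is in one sentence."},
])
print(prompt)
```

In most cases this is unnecessary: both `llama-cli` and `llama-server` apply the model's built-in template by default.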