---
base_model: NiuTrans/LMT-60-8B
datasets:
- NiuTrans/LMT-60-sft-data
language:
- en
- zh
- ar
- es
- de
- fr
- it
- ja
- nl
- pl
- pt
- ru
- tr
- bg
- bn
- cs
- da
- el
- fa
- fi
- hi
- hu
- id
- ko
- nb
- ro
- sk
- sv
- th
- uk
- vi
- am
- az
- bo
- he
- hr
- hy
- is
- jv
- ka
- kk
- km
- ky
- lo
- mn
- mr
- ms
- my
- ne
- ps
- si
- sw
- ta
- te
- tg
- tl
- ug
- ur
- uz
- yue
license: apache-2.0
metrics:
- bleu
- comet
pipeline_tag: translation
library_name: transformers
tags:
- mlx
---

# wzqww23/LMT-60-8B-mlx-8Bit
This model was converted to MLX format from [NiuTrans/LMT-60-8B](https://huggingface.co/NiuTrans/LMT-60-8B) using mlx-lm version **0.28.3**.
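For reference, conversions like this are typically produced with the `mlx_lm.convert` CLI. The exact command used for this repository is not recorded, so the flags below (8-bit quantization of the base model) are an illustrative assumption:

```bash
# Illustrative sketch only; the exact flags used for this repo are an assumption.
# -q enables quantization, --q-bits 8 selects 8-bit weights.
mlx_lm.convert --hf-path NiuTrans/LMT-60-8B -q --q-bits 8
```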
## Use with mlx
```bash
pip install mlx-lm
```
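For a quick sanity check without writing any Python, you can also generate directly with the `mlx_lm.generate` CLI that ships with mlx-lm:

```bash
# One-off generation from the command line.
mlx_lm.generate --model wzqww23/LMT-60-8B-mlx-8Bit --prompt "hello"
```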
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("wzqww23/LMT-60-8B-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template if one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
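Since this is a translation model, a more representative prompt asks for a translation. The instruction wording below is a hypothetical example; consult the base [NiuTrans/LMT-60-8B](https://huggingface.co/NiuTrans/LMT-60-8B) card for the exact prompt format the model was fine-tuned on:

```python
# Hypothetical translation prompt; the exact instruction format expected
# by LMT-60 is an assumption here.
prompt = (
    "Translate the following text from English into German.\n"
    "English: The weather is nice today.\n"
    "German:"
)
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```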