---
base_model: NiuTrans/LMT-60-8B
datasets:
- NiuTrans/LMT-60-sft-data
language:
- en
- zh
- ar
- es
- de
- fr
- it
- ja
- nl
- pl
- pt
- ru
- tr
- bg
- bn
- cs
- da
- el
- fa
- fi
- hi
- hu
- id
- ko
- nb
- ro
- sk
- sv
- th
- uk
- vi
- am
- az
- bo
- he
- hr
- hy
- is
- jv
- ka
- kk
- km
- ky
- lo
- mn
- mr
- ms
- my
- ne
- ps
- si
- sw
- ta
- te
- tg
- tl
- ug
- ur
- uz
- yue
license: apache-2.0
metrics:
- bleu
- comet
pipeline_tag: translation
library_name: transformers
tags:
- mlx
---

# wzqww23/LMT-60-8B-mlx-8Bit

The model [wzqww23/LMT-60-8B-mlx-8Bit](https://huggingface.co/wzqww23/LMT-60-8B-mlx-8Bit) was converted to MLX format from [NiuTrans/LMT-60-8B](https://huggingface.co/NiuTrans/LMT-60-8B) using mlx-lm version **0.28.3**.
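
If you want to reproduce the conversion yourself, mlx-lm provides a `mlx_lm.convert` command with `-q`/`--q-bits` quantization flags; a sketch of the step that produced this 8-bit checkpoint (output path chosen here for illustration) would look like:

```shell
# Fetch NiuTrans/LMT-60-8B, quantize weights to 8 bits,
# and write the MLX-format files to a local directory.
mlx_lm.convert \
    --hf-path NiuTrans/LMT-60-8B \
    -q --q-bits 8 \
    --mlx-path ./LMT-60-8B-mlx-8Bit
```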

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("wzqww23/LMT-60-8B-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
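
For a quick check without writing any Python, mlx-lm also ships a command-line generator (the `mlx_lm.generate` entry point from the same package); since this is a translation model, a translation-style prompt is a natural smoke test:

```shell
# One-off generation from the terminal;
# downloads the model on first use.
mlx_lm.generate --model wzqww23/LMT-60-8B-mlx-8Bit \
    --prompt "Translate into German: hello"
```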