add dataset
README.md CHANGED
@@ -12,6 +12,8 @@ tags:
 model_creator: Qwen
 model_name: CodeQwen1.5-7B-Chat
 model_type: qwen2
+datasets:
+- m-a-p/CodeFeedback-Filtered-Instruction
 quantized_by: CISC
 ---
 
@@ -105,7 +107,7 @@ Generated importance matrix file: [CodeQwen1.5-7B-Chat.imatrix.dat](https://hugg
 Make sure you are using `llama.cpp` from commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307) or later.
 
 ```shell
-./main -ngl 33 -m CodeQwen1.5-7B-Chat.IQ2_XS.gguf --color -c 65536 --temp 1.0 --repeat-penalty 1.0 --top-p 0.95 -n -1 -p "
+./main -ngl 33 -m CodeQwen1.5-7B-Chat.IQ2_XS.gguf --color -c 65536 --temp 1.0 --repeat-penalty 1.0 --top-p 0.95 -n -1 -p "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
 ```
 
 Change `-ngl 33` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
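For context, a concrete invocation of the command in this diff might look like the sketch below: `{prompt}` is replaced with an actual request, and `-ngl` is lowered or dropped on machines with less VRAM, as the note above describes. The sample question and the `-ngl 20` value are illustrative only and not part of the model card.

```shell
# Illustrative run (not from the model card): {prompt} filled in with a sample
# request, and -ngl lowered to 20 for a smaller GPU; drop -ngl for CPU-only use.
./main -ngl 20 -m CodeQwen1.5-7B-Chat.IQ2_XS.gguf --color -c 65536 --temp 1.0 --repeat-penalty 1.0 --top-p 0.95 -n -1 \
  -p "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>\nWrite a Python function that checks whether a string is a palindrome.<|im_end|>\n<|im_start|>assistant\n"
```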