Instructions to use GreatCaptainNemo/ProLLaMA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use GreatCaptainNemo/ProLLaMA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="GreatCaptainNemo/ProLLaMA")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GreatCaptainNemo/ProLLaMA")
model = AutoModelForCausalLM.from_pretrained("GreatCaptainNemo/ProLLaMA")
```
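A minimal end-to-end sketch of calling the pipeline above. The prompt is a generic placeholder (ProLLaMA is instruction-tuned for protein tasks, so consult the model card for its actual prompt format), and the sampling settings mirror the serving examples below:

```python
# Generate text with the pipeline. The prompt below is a generic placeholder,
# not ProLLaMA's real instruction format; see the model card for that.
from transformers import pipeline

pipe = pipeline("text-generation", model="GreatCaptainNemo/ProLLaMA")
result = pipe(
    "Once upon a time,",  # placeholder prompt
    max_new_tokens=512,
    do_sample=True,
    temperature=0.5,
)
print(result[0]["generated_text"])
```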
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use GreatCaptainNemo/ProLLaMA with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "GreatCaptainNemo/ProLLaMA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GreatCaptainNemo/ProLLaMA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/GreatCaptainNemo/ProLLaMA
```
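Because vLLM's server implements the OpenAI completions API, it can also be queried from Python. A minimal sketch, assuming the `openai` client package is installed and the server from the steps above is running on localhost:8000:

```python
# Call the vLLM server through its OpenAI-compatible completions endpoint.
# Assumes `pip install openai` and a server listening on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="GreatCaptainNemo/ProLLaMA",
    prompt="Once upon a time,",  # same illustrative prompt as the curl example
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```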
- SGLang
How to use GreatCaptainNemo/ProLLaMA with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "GreatCaptainNemo/ProLLaMA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GreatCaptainNemo/ProLLaMA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "GreatCaptainNemo/ProLLaMA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GreatCaptainNemo/ProLLaMA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
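The same request can be sent from Python instead of curl. A minimal sketch using `requests`, assuming the SGLang server (launched via pip or Docker) is listening on localhost:30000:

```python
# Send the curl request above from Python. Assumes `pip install requests`
# and an SGLang server listening on localhost:30000.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "GreatCaptainNemo/ProLLaMA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```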
- Docker Model Runner
How to use GreatCaptainNemo/ProLLaMA with Docker Model Runner:
```shell
docker model run hf.co/GreatCaptainNemo/ProLLaMA
```
Update README.md
README.md CHANGED

The commit replaces literal line breaks inside two string literals of the README's Python example with escaped `"\n"` characters:

````diff
@@ -109,8 +109,7 @@ if __name__ == '__main__':
         s = generation_output[0]
         output = tokenizer.decode(s,skip_special_tokens=True)
         print("Output:",output)
-        print("
-")
+        print("\n")
     else:
         outputs=[]
         with open(args.input_file, 'r') as f:
@@ -130,8 +129,7 @@ if __name__ == '__main__':
             output = tokenizer.decode(s,skip_special_tokens=True)
             outputs.append(output)
             with open(args.output_file,'w') as f:
-                f.write("
-".join(outputs))
+                f.write("\n".join(outputs))
             print("All the outputs have been saved in",args.output_file)
 ```
````