# 2shlee/llama3-8b-ko-chat-v1

A LLaMA 3 8B model fine-tuned for Korean conversation.
## Model Description

This model is a LoRA fine-tune of Meta's LLaMA 3 8B Instruct model for Korean chatbot use.

- Base Model: meta-llama/Meta-Llama-3-8B-Instruct
- Language: Korean (한국어)
- Task: Conversational AI / Chatbot
- Fine-tuning: LoRA (Parameter-Efficient Fine-Tuning)
## Training Details

- Base Model: meta-llama/Meta-Llama-3-8B-Instruct
- Fine-tuning Method: LoRA (PEFT)
- Target Modules: q_proj, k_proj, v_proj, o_proj
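For intuition about what LoRA does to the listed projection matrices, here is a minimal numpy sketch of the low-rank update. The shapes, rank, and alpha below are toy values for illustration only, not this model's actual training configuration:

```python
import numpy as np

# Toy dimensions (real LLaMA 3 8B attention projections are much larger;
# the rank and alpha here are NOT this model's settings).
d, r, alpha = 8, 2, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))  # frozen base weight (e.g. q_proj)
A = rng.standard_normal((r, d))  # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base projection plus the scaled low-rank correction B @ A
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
# With B zero-initialized, the adapter starts out as an exact no-op:
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (2·d·r parameters per module) are trained, while `W` stays frozen, which is why the published artifact is a small adapter rather than full model weights.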
## How to Use

### With PEFT (Recommended)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model
base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the LoRA adapter
model = PeftModel.from_pretrained(model, "2shlee/llama3-8b-ko-chat-v1")

# Inference ("안녕하세요!" is "Hello!" in Korean)
messages = [{"role": "user", "content": "안녕하세요!"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

with torch.no_grad():
    # do_sample=True so that temperature actually takes effect
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
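For reference, `apply_chat_template` renders the Llama 3 Instruct prompt format. The sketch below hand-rolls an approximation of that template from the published Llama 3 special tokens; the tokenizer's own output is authoritative and should be preferred in real code:

```python
def llama3_prompt(messages):
    """Approximate the Llama 3 Instruct chat template (sketch only;
    use tokenizer.apply_chat_template in practice)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # add_generation_prompt=True appends an open assistant header,
    # cueing the model to produce the assistant turn next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(llama3_prompt([{"role": "user", "content": "Hello!"}]))
```

Seeing the rendered string makes it easier to debug cases where generations start with stray header tokens or never stop: both usually mean the prompt format and the fine-tuning format disagree.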
### With vLLM (Production)

```shell
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Meta-Llama-3-8B-Instruct \
  --enable-lora \
  --lora-modules ko-chat=2shlee/llama3-8b-ko-chat-v1
```
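Once the server is running, the adapter is addressable by the `ko-chat` name through vLLM's OpenAI-compatible API. A client sketch using only the standard library (the URL assumes vLLM's default port 8000; adjust for your deployment):

```python
import json
import urllib.request

def build_chat_request(prompt, model="ko-chat"):
    # "ko-chat" is the adapter name registered via --lora-modules above;
    # passing the base model name instead would skip the adapter.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.7,
    }

def chat(prompt, url="http://localhost:8000/v1/chat/completions"):
    req = urllib.request.Request(
        url,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# chat("안녕하세요!")  # requires the vLLM server above to be running
```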
## Intended Uses

- Korean conversational AI services
- Chatbot assistants
- Q&A systems
- Text generation
## Limitations

- Inherits the general limitations of the base model (LLaMA 3)
- Performance may degrade on domains not covered by the training data
- Unsuitable for questions requiring real-time information or up-to-date knowledge
## License

This model is released under the Llama 3 Community License.
## Acknowledgements
Built with Meta Llama 3
## Citation

```bibtex
@misc{2shlee_llama3_8b_ko_chat_v1,
  author = {shlee},
  title = {2shlee/llama3-8b-ko-chat-v1},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/2shlee/llama3-8b-ko-chat-v1}}
}
```