---
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- transformers
- roblox
- luau
- code-generation
- gaming
license: apache-2.0
language:
- en
- lua
metrics:
- perplexity
---

# 🎮 Qwen2.5 Coder 1.5B Roblox

![Roblox](https://img.shields.io/badge/Roblox-Luau-00A2FF?style=for-the-badge&logo=roblox) ![LoRA](https://img.shields.io/badge/LoRA-Adapter-FF6B6B?style=for-the-badge) ![License](https://img.shields.io/badge/License-Apache%202.0-green?style=for-the-badge)

*A specialized code generation model fine-tuned for Roblox Luau programming*

---

### ⚡ Run in Google Colab (Recommended)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Sa3EvZOmmCsYQt7GWv1EcjLGcGJVVlUt?usp=sharing)

**No setup required.** Click the badge above to run the chatbot instantly in your browser.

- 🎯 Pre-configured environment
- 🔥 GPU-accelerated inference
- 💬 Interactive chat interface
- ⏱️ Ready in ~3 minutes

---

## 📖 Overview

**Qwen2.5 Coder 1.5B Roblox** is a parameter-efficient fine-tuned model specifically designed for **Roblox Luau** development. Built on top of Qwen2.5-Coder-1.5B-Instruct, this model excels at generating, completing, and understanding Luau code patterns commonly used in Roblox game development.

### 🎯 What Makes This Special?

- 🎮 **Roblox-Native**: Trained exclusively on authentic Luau code from the official Roblox corpus
- 🧠 **Context-Aware**: Understands Roblox-specific APIs, patterns, and best practices

---

## 🏗️ Model Architecture

| Component | Details |
|-----------|---------|
| **Base Model** | [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) |
| **Adapter Type** | LoRA (Low-Rank Adaptation) |
| **LoRA Rank** | 8 |
| **LoRA Alpha** | 32 |
| **Target Modules** | `q_proj`, `v_proj` |
| **Training Hardware** | TPU v5e-8 (Multi-core) |

---

## 📚 Training Details

### Dataset

- **Source**: [Roblox/luau_corpus](https://huggingface.co/datasets/Roblox/luau_corpus)
- **Filtering**: Quality-filtered for code length (20–5000 chars) and Luau keyword presence
- **Split**: 90% train / 10% validation

### Training Configuration

```python
{
    "max_length": 1024,
    "batch_size": 4,
    "gradient_accumulation_steps": 32,
    "learning_rate": 3e-5,
    "scheduler": "cosine_annealing",
    "epochs": 1,
    "optimizer": "AdamW"
}
```

---

## 🚀 Quick Start

### Installation

```bash
pip install transformers peft torch
```

### Basic Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "umjunsik1323/Qwen2.5-Coder-1.5B-roblox")

# Generate Luau code
messages = [
    {"role": "system", "content": "You are a Roblox Luau programming expert."},
    {"role": "user", "content": "Create a function to make a part glow"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Enable sampling so the temperature setting takes effect
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Advanced: Merge and Export

```python
# Merge LoRA weights into base model
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./qwen-luau-merged")
# Save the tokenizer alongside the weights so the exported directory is self-contained
tokenizer.save_pretrained("./qwen-luau-merged")
```
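
Once merged, the LoRA weights are folded into the base model, so the exported checkpoint can be reloaded with plain `transformers` and no `peft` dependency. Below is a minimal sketch, assuming the `./qwen-luau-merged` directory saved above; the prompt is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the merged checkpoint as a regular causal LM (no PEFT adapter loading needed)
merged_model = AutoModelForCausalLM.from_pretrained(
    "./qwen-luau-merged",
    torch_dtype="auto",
    device_map="auto",
)
merged_tokenizer = AutoTokenizer.from_pretrained("./qwen-luau-merged")

messages = [
    {"role": "system", "content": "You are a Roblox Luau programming expert."},
    {"role": "user", "content": "Write a function that tweens a part's transparency to 0.5"},
]
text = merged_tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = merged_tokenizer(text, return_tensors="pt").to(merged_model.device)

outputs = merged_model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(merged_tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the adapter is baked into the weights, this directory can be shared or deployed like any standard Hugging Face causal-LM checkpoint.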

---

## 💡 Features

### Supported Tasks

- ✨ **Code Completion**: Finish partial Luau scripts intelligently
- 🔧 **Function Generation**: Create Roblox-specific functions from descriptions
- 📝 **Code Explanation**: Understand and document existing Luau code
- 🐛 **Error Fixing**: Suggest corrections for common Luau mistakes
- 🎯 **API Usage**: Generate proper Roblox API calls

### Example Prompts

```lua
-- Completion
"local function teleportPlayer(player, position)"
→ Generates complete teleportation logic

-- Generation
"Create a tween that smoothly moves a part to a new position"
→ Generates TweenService implementation

-- Context-Aware
"Handle player damage with a cooldown system"
→ Generates debounce pattern with Humanoid health management
```

---

## 🎯 Use Cases

### Game Development
- Quick prototyping of Roblox mechanics
- Learning Luau programming patterns
- Code review and suggestions

### Education
- Teaching Roblox development
- Demonstrating best practices
- Interactive coding assistance

### Productivity
- Accelerating development workflows
- Reducing boilerplate code
- Standardizing team coding styles

---

## ⚠️ Limitations

- **Scope**: Specialized for Luau only, not general-purpose programming
- **Context Window**: Limited to 1024 tokens
- **Recency**: Training data may not include the latest Roblox API updates
- **Validation**: Always test generated code in Roblox Studio

---

## 📄 Citation

```bibtex
@misc{youngseong_kim_2025,
    author    = { Youngseong Kim },
    title     = { Qwen2.5-Coder-1.5B-roblox (Revision 63e9452) },
    year      = 2025,
    url       = { https://huggingface.co/umjunsik1323/Qwen2.5-Coder-1.5B-roblox },
    doi       = { 10.57967/hf/7093 },
    publisher = { Hugging Face }
}
```

---

## 📜 License

This LoRA adapter is released under the **Apache 2.0 License**, maintaining compatibility with the base Qwen2.5-Coder model.

---

## 🤝 Acknowledgments

- **Qwen Team** at Alibaba Cloud for the base model
- **Roblox** for providing the Luau corpus dataset
- **Kaggle** for providing the computational resources

---

**Made with ❤️ for the Roblox Developer Community**