---
language: en
license: apache-2.0
tags:
- text-generation
- magic
- mmlu
- causal-lm
library_name: transformers
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
---

# Magic Model 🪄

Fine-tuned language model for MMLU-style question answering.

**Developed by Likhon Sheikh** 🚀

## Features

- ✅ Multi-safetensor support
- ✅ Fast tokenizer with tokenizer.json
- ✅ LoRA fine-tuning for efficiency
- ✅ MMLU-optimized responses
- ✅ Production-ready deployment

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("fariasultanacodes/magic")
tokenizer = AutoTokenizer.from_pretrained("fariasultanacodes/magic")

prompt = "Question: What is AI?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens caps generated tokens only, excluding the prompt length
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Pipeline Usage

```python
from transformers import pipeline

generator = pipeline("text-generation", model="fariasultanacodes/magic")
result = generator("Question: Explain machine learning.\n\nAnswer:", max_new_tokens=100)
print(result[0]["generated_text"])
```

## Model Details

- **Base Model:** Qwen/Qwen2.5-1.5B
- **Fine-tuning:** LoRA adapters
- **Dataset:** MMLU-style questions
- **Format:** Safetensors (multi-file support)
- **Tokenizer:** Fast tokenizer (tokenizer.json)

## Citation

```bibtex
@misc{magic-model-2025,
  title={Magic: MMLU-Optimized Language Model},
  author={Likhon Sheikh},
  year={2025},
  url={https://huggingface.co/fariasultanacodes/magic}
}
```

## License

Apache-2.0
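
## Prompt Formatting

The usage examples pass a bare `Question: ... / Answer:` prompt. For MMLU-style multiple-choice items, a small helper keeps that shape consistent across questions. This is a minimal sketch under an assumed template: the exact prompt format used during fine-tuning is not documented in this card, so `format_mmlu_prompt` is a hypothetical helper, not part of the model's API.

```python
# Hypothetical helper: formats an MMLU-style multiple-choice item into the
# "Question: ... / Answer:" prompt shape shown in the usage examples.
# The fine-tuning prompt template is an assumption, not documented here.

def format_mmlu_prompt(question: str, choices: list[str]) -> str:
    """Build a prompt with lettered answer choices (A, B, C, ...)."""
    letters = "ABCDEFGH"
    lines = [f"Question: {question}"]
    for letter, choice in zip(letters, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("")        # blank line before the answer cue
    lines.append("Answer:")
    return "\n".join(lines)

prompt = format_mmlu_prompt(
    "What is the capital of France?",
    ["Berlin", "Madrid", "Paris", "Rome"],
)
print(prompt)
```

The resulting string can be passed directly to the tokenizer or pipeline calls above; if the model was trained on a different template, adjust the helper to match.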