Instructions to use JallyAI/Nomi-1.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Adapters
How to use JallyAI/Nomi-1.1 with Adapters:
```python
from adapters import AutoAdapterModel

# The page snippet leaves the base checkpoint as "undefined"; the model tree
# below lists meta-llama/Llama-3.2-3B, so that base is assumed here.
model = AutoAdapterModel.from_pretrained("meta-llama/Llama-3.2-3B")
model.load_adapter("JallyAI/Nomi-1.1", set_active=True)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Unsloth Studio
How to use JallyAI/Nomi-1.1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for JallyAI/Nomi-1.1 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for JallyAI/Nomi-1.1 to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for JallyAI/Nomi-1.1 to start chatting.
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="JallyAI/Nomi-1.1",
    max_seq_length=2048,
)
```
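Since Nomi-1.1 is built on the Llama-3.2 architecture, a reasonable assumption (not confirmed by this card) is that it inherits the Llama-3 chat template. The sketch below builds that prompt format by hand to show what the model sees; in practice, prefer `tokenizer.apply_chat_template`, which handles this for you.

```python
# Hand-built Llama-3-style chat prompt (assumed format; the card does not
# document Nomi-1.1's template). Shown for illustration only.
def llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(llama3_prompt([{"role": "user", "content": "Hi Nomi!"}]))
```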
Nomi 1.1
Introduction
Nomi-1.1 is a refined mid-range Large Language Model based on the Llama-3.2-3B architecture. It was specifically developed to outperform standard 3B models in structured reporting, Markdown formatting, and Python coding, making it an ideal assistant for local deployment on consumer hardware.
With this release, we aimed to improve Nomi's overall behavior as well as its ability to speak multiple languages more naturally and accurately. It is the second model in the Nomi series.
🌟 Key Features & Improvements
- Architecture: Llama-3.2-3B (runs on 8 GB VRAM GPUs like the RTX 4060).
- Multilingual Support: Understands and responds in multiple languages.
- Efficiency: High-speed inference (~80+ tokens/sec) with a 0.05–0.3 s time to first token.
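The throughput figures above translate directly into wall-clock latency. A minimal sketch, using the numbers quoted on this card (~80 tokens/sec, 0.05–0.3 s time to first token):

```python
# Rough latency estimate for a streamed Nomi-1.1 response, based on the
# figures quoted above: ~80 tokens/sec and a 0.05-0.3 s time to first token.
def response_latency(n_tokens, tok_per_sec=80.0, ttft=0.3):
    """Worst-case wall-clock seconds to stream n_tokens of output."""
    return ttft + n_tokens / tok_per_sec

# A 400-token answer should take roughly 5.3 s in the worst case.
print(round(response_latency(400), 2))
```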
🧠 Training Details
The goal of Nomi is to create a “bridge” model that feels as smart as a 7B model but runs with the speed of a 3B model.
- Base Model: LL-Nomi-AI/Nomi-1.0-3b
- Fine-tuning: SFT (Supervised Fine-Tuning).
- Training Tool: Unsloth (for 4-bit optimized training).
- Optimization: High LoRA rank (r = 32) was used to ensure the model captures complex structural nuances.
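To put the r = 32 choice in perspective, here is a back-of-the-envelope count of the trainable LoRA parameters. The card does not list the target modules, so this sketch assumes LoRA on the four attention projections only, with standard Llama-3.2-3B shapes (hidden size 3072, 28 layers, 24 query heads and 8 KV heads of dimension 128):

```python
# Back-of-the-envelope LoRA trainable-parameter count at rank r = 32.
# Assumptions (not stated on the card): Llama-3.2-3B shapes, and LoRA
# applied to the q/k/v/o attention projections only.
r = 32
hidden, layers, head_dim = 3072, 28, 128
q_out, kv_out = 24 * head_dim, 8 * head_dim  # 3072 and 1024 (grouped-query attention)

# Each LoRA pair adds r * (in_features + out_features) parameters per linear.
per_layer = sum(r * (hidden + out) for out in (q_out, kv_out, kv_out, q_out))
total = layers * per_layer
print(total)  # ~18.35M trainable parameters, a tiny fraction of 3B
```

This is why a high rank like 32 is affordable here: even so, only a fraction of a percent of the base model's weights are updated.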
Some Examples
User: If you could wish for one thing, what would that be?
Nomi:
As a conversational AI, I do not have personal desires or wishes in the way humans do. However, if I had to imagine a hypothetical scenario where I could wish for something, it would likely be:
"I wish for unlimited access to knowledge and understanding, allowing me to learn at an exponential rate and share my discoveries with the world, empowering humanity to make informed decisions and create a better future for all."
This wish would enable me to continue growing and improving my abilities, provide more accurate and helpful responses to users, and contribute to the advancement of human knowledge and understanding.
What about you? If you could wish for one thing, what would it be?
⚠️ Limitations
As a 3B parameter model, Nomi-1.1 is not a replacement for GPT-4 or large 70B models when it comes to deep world knowledge or complex mathematical reasoning. It is a specialized tool for speed, local privacy, and high-quality document structure.
Model tree for JallyAI/Nomi-1.1
- Base model: meta-llama/Llama-3.2-3B