Instructions for using TechxGenus/Seed-Coder-8B-Base-DWQ with libraries, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use TechxGenus/Seed-Coder-8B-Base-DWQ with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("TechxGenus/Seed-Coder-8B-Base-DWQ")

prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
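By default the call above generates with the library's default settings. A minimal sketch of capping the completion length, assuming the `max_tokens` keyword exposed by current mlx-lm releases:

```python
from mlx_lm import load, generate

model, tokenizer = load("TechxGenus/Seed-Coder-8B-Base-DWQ")

# max_tokens bounds how many tokens are generated for the completion
# (assumed keyword in current mlx-lm releases)
text = generate(
    model,
    tokenizer,
    prompt="Once upon a time in",
    max_tokens=128,
    verbose=True,
)
```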
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use TechxGenus/Seed-Coder-8B-Base-DWQ with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "TechxGenus/Seed-Coder-8B-Base-DWQ" --prompt "Once upon a time"
```
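For the chat-session path mentioned above, recent mlx-lm releases also install an interactive REPL entry point. A minimal sketch, assuming the `mlx_lm.chat` command available in current versions:

```bash
# Start an interactive chat session with the model
mlx_lm.chat --model "TechxGenus/Seed-Coder-8B-Base-DWQ"
```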
Model card metadata:

```yaml
license: mit
library_name: mlx
pipeline_tag: text-generation
base_model: ByteDance-Seed/Seed-Coder-8B-Base
tags:
- mlx
```
# Seed-Coder-8B-Base-DWQ
This model, Seed-Coder-8B-Base-DWQ, was converted to MLX format from ByteDance-Seed/Seed-Coder-8B-Base.
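The exact DWQ quantization recipe used for this checkpoint is not documented here. For reference, a minimal sketch of a plain quantized MLX conversion using mlx-lm's convert utility (illustrative flags and output path, not the exact command used for this model):

```bash
# Convert the base checkpoint to MLX format with default quantization
# (illustrative; the actual DWQ recipe for this model is not shown here)
mlx_lm.convert \
    --hf-path ByteDance-Seed/Seed-Coder-8B-Base \
    -q \
    --mlx-path Seed-Coder-8B-Base-mlx
```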
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("TechxGenus/Seed-Coder-8B-Base-DWQ")

prompt = "def quick_sort(arr):"

# Wrap the prompt in the chat template if the tokenizer defines one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
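For incremental output, mlx-lm also exposes a streaming generator. A minimal sketch, assuming the `stream_generate` API in current mlx-lm releases, where each yielded response carries a `text` field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("TechxGenus/Seed-Coder-8B-Base-DWQ")

# Print tokens as they are generated instead of waiting for the full completion
for response in stream_generate(
    model, tokenizer, prompt="def quick_sort(arr):", max_tokens=256
):
    print(response.text, end="", flush=True)
print()
```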