# zai-org__GLM-Z1-9B-0414_RTN_w3g128
This is a 3-bit RTN (round-to-nearest) quantized version of zai-org/GLM-Z1-9B-0414, using a group size of 128.
## Quantization Details
- Method: RTN (Round-To-Nearest)
- Bits: 3-bit
- Group Size: 128
- Base Model: zai-org/GLM-Z1-9B-0414
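For intuition, here is a minimal NumPy sketch of asymmetric round-to-nearest quantization with the settings above (3 bits, group size 128). The function name and details are illustrative only and do not come from this repository; real RTN implementations quantize per-channel weight matrices and store the integer codes plus per-group scales and zero-points.

```python
import numpy as np

def rtn_quantize(weights, bits=3, group_size=128):
    """Illustrative RTN: map each group of `group_size` weights to
    `bits`-bit integers with a per-group scale and zero-point,
    rounding to the nearest level, then dequantize."""
    qmax = 2 ** bits - 1          # 3-bit -> integer levels 0..7
    w = weights.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax  # per-group step size
    zero = np.round(-wmin / scale)
    q = np.clip(np.round(w / scale + zero), 0, qmax)  # round-to-nearest
    return ((q - zero) * scale).reshape(weights.shape)

np.random.seed(0)
w = np.random.randn(2, 256).astype(np.float32)
w_q = rtn_quantize(w)  # dequantized weights; each 128-group has at most 8 levels
```

With only 8 representable levels per group, RTN trades accuracy for a roughly 5x smaller weight footprint than fp16.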
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "quantpa/zai-org__GLM-Z1-9B-0414_RTN_w3g128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Use the model for inference
inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model Details
- Quantization: RTN 3-bit
- Original Model: zai-org/GLM-Z1-9B-0414
- Quantized by: quantpa
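A back-of-the-envelope storage estimate for these settings. This assumes roughly 9e9 quantized parameters and fp16 scale/zero-point pairs per group; the actual checkpoint size differs because some layers (e.g. embeddings) are typically left unquantized.

```python
# Rough weight-storage estimate for 3-bit RTN with group size 128.
# Assumption: ~9e9 parameters, one fp16 scale + fp16 zero-point per group.
params = 9e9
bits = 3
group_size = 128

weight_bytes = params * bits / 8                 # packed 3-bit codes
overhead_bytes = params / group_size * 2 * 2     # 2 fp16 values per group
total_gb = (weight_bytes + overhead_bytes) / 1e9
print(f"~{total_gb:.1f} GB")  # versus ~18 GB for the same weights in fp16
```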