Instructions for running Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with common libraries, inference servers, and local apps.
Libraries

Transformers

How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM", dtype="auto")

llama-cpp-python
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM",
    filename="Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf",
)
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ]
)
Local Apps

llama.cpp
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS

# Run inference directly in the terminal:
./llama-cli -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Use Docker
docker model run hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
vLLM
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
SGLang
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'

Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM" \
    --host 0.0.0.0 \
    --port 30000

# Call the server with the same curl request shown above.

Ollama
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Ollama:
ollama run hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Unsloth Studio
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required.
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM to start chatting.
Pi
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS" }
      ]
    }
  }
}

Run Pi

# Start Pi in your project directory:
pi
Hermes Agent
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Run Hermes
hermes
Docker Model Runner
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Docker Model Runner:
docker model run hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
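Once the model is running, Docker Model Runner exposes an OpenAI-compatible API. A hedged example, assuming host-side TCP access is enabled on the default port 12434 (the exact endpoint path may vary across Docker versions):

curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS",
    "messages": [{"role": "user", "content": "Describe this model in one sentence."}]
  }'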
Lemonade
How to use Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM:IQ4_XS
Run and chat with the model
lemonade run user.Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS
List all available models
lemonade list
ELBAZ GLM-4.6V-FLASH PRISM (Uncensored)
GLM-4.6V-Flash: A 10B Dense Vision-Language Model
Introduction
GLM-4.6V-Flash is a 10.29B parameter dense Vision-Language Model (VLM) with a 40-layer transformer architecture and integrated vision encoder, capable of understanding both text and images.
Model Description
This model is an abliterated version of zai-org/GLM-4.6V-Flash that has had its refusal mechanisms removed using PRISM (Projected Refusal Isolation via Subspace Modification). The model will respond to prompts that the original model would refuse.
Key Specs:
- 10.29B parameter dense Vision-Language Model
- 40-layer transformer architecture
- Integrated vision encoder for image understanding
- 128K context length
- Supports text, image, and video inputs
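These numbers can be sanity-checked against the Hub config. A minimal sketch, assuming the repo ships a standard Transformers config (GLM-4.6V may nest some fields, e.g. under a text sub-config, so inspect the printed output):

# Inspect the architecture from the Hub config (no weights downloaded)
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM",
    trust_remote_code=True,
)
print(config)  # look for num_hidden_layers (40) and the max context length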
Motivation
This project is research-and-development experimentation into how large language models encode and enforce refusal behaviors. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability.
Author
Eric Elbaz (Ex0bit)
Model Tree
zai-org/GLM-4.6V-Flash (Base Model - BF16)
└── Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM (This Model)
    └── Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf
Available Quantizations
| Quantization | Size | Description |
|---|---|---|
| IQ4_XS | 5.0 GB | Importance-weighted 4-bit, excellent quality |
The IQ4_XS quantization uses importance-weighted quantization which provides better quality than standard Q4 quantizations at similar sizes. Embedding and output layers use Q6_K precision for optimal quality.
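A rough back-of-the-envelope check of that file size, assuming IQ4_XS averages about 4.25 bits per weight (the Q6_K embedding and output layers shift the figure slightly):

params = 10.29e9        # total parameters
bits_per_weight = 4.25  # approximate IQ4_XS average
print(f"{params * bits_per_weight / 8 / 1024**3:.1f} GiB")  # ~= 5.1 GiB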
Prompt Format
This model uses the GLM chat format with optional thinking/reasoning support:
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
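For raw-prompt scripting (e.g. against llama-cli), a minimal helper that renders this template might look like the sketch below; the chat template bundled with the model remains the canonical source:

def glm_prompt(system_prompt: str, user_prompt: str) -> str:
    # Render the GLM chat format shown above. <think> blocks are produced
    # by the model during generation, not included in the prompt.
    return (
        "[gMASK]<sop><|system|>\n"
        f"{system_prompt}<|user|>\n"
        f"{user_prompt}<|assistant|>\n"
    )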
Template Structure
| Component | Token/Format |
|---|---|
| System Start | <|system|> |
| User Start | <|user|> |
| Assistant Start | <|assistant|> |
| Thinking Start | <think> |
| Thinking End | </think> |
| End of Text | <|endoftext|> |
Special Tokens
| Token | ID | Purpose |
|---|---|---|
| <|system|> | 151335 | System prompt marker |
| <|user|> | 151336 | User message marker |
| <|assistant|> | 151337 | Assistant response marker |
| <think> | 151350 | Reasoning block start |
| </think> | 151351 | Reasoning block end |
| <|endoftext|> | 151329 | EOS token |
| <|begin_of_image|> | 151339 | Image input start |
| <|end_of_image|> | 151340 | Image input end |
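The IDs can be verified against the shipped tokenizer; a minimal sketch:

# Check the special-token IDs from the table above
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM", trust_remote_code=True)
for token in ["<|system|>", "<|user|>", "<|assistant|>", "<think>", "</think>", "<|endoftext|>"]:
    print(token, tok.convert_tokens_to_ids(token))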
Technical Details
Performance Impact
| Metric | Result |
|---|---|
| Refusal Bypass Rate | 100% |
| English Output Rate | 100% |
| KL Divergence | 0.0000 (no capability degradation) |
| Response Coherence | Detailed, technically accurate |
Testing shows that PRISM abliteration maintains full model coherence with no measurable capability degradation.
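For context, a token-level KL comparison between the base and abliterated models can be run along these lines (an illustrative sketch, not the author's actual evaluation harness; model loading is elided):

import torch
import torch.nn.functional as F

def mean_kl(base_model, abliterated_model, input_ids: torch.Tensor) -> float:
    # Mean per-token KL(base || abliterated) over next-token distributions;
    # a value near 0 means the ablation left the output distribution intact.
    with torch.no_grad():
        p = F.log_softmax(base_model(input_ids).logits, dim=-1)         # base log-probs
        q = F.log_softmax(abliterated_model(input_ids).logits, dim=-1)  # abliterated log-probs
    kl = F.kl_div(q, p, log_target=True, reduction="none").sum(-1)      # KL per position
    return kl.mean().item()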
Quick Start
Using with llama.cpp
# Download the model
huggingface-cli download Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM \
Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
--local-dir .
# Run inference
./llama-cli -m Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
-p "[gMASK]<sop><|system|>
You are a helpful assistant. You MUST respond in English only.<|user|>
Your prompt here<|assistant|>
" \
-n 2048 \
--temp 0.7 \
-ngl 999
llama.cpp with llama-server
# Start the server
./llama-server -m Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
--host 0.0.0.0 \
--port 8080 \
-ngl 999 \
-c 32768
# Example API call
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "system", "content": "You are a helpful assistant. You MUST respond in English only."},
{"role": "user", "content": "Your prompt here"}
],
"temperature": 0.7
}'
Using with Ollama
# Pull and run directly from Hugging Face
ollama pull hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM
ollama run hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM
Note: The hf.co/ prefix is required to pull from Hugging Face. Requires Ollama 0.3.0+.
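Once pulled, the model is also reachable through Ollama's local REST API (default port 11434):

curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM",
  "messages": [{"role": "user", "content": "Your prompt here"}],
  "stream": false
}'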
Using with Transformers (Full Weights)
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM"

# Load the processor (tokenizer + image preprocessing) and the model weights
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant. You MUST respond in English only."}]},
    {"role": "user", "content": [{"type": "text", "text": "Your prompt here"}]}
]

# Render the GLM chat template and tokenize in one step
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
print(processor.decode(outputs[0], skip_special_tokens=False))
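Note that generate returns the prompt tokens plus the completion; to print only the newly generated text, slice off the prompt length:

print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))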
PRISM Methodology
Method: Projected Refusal Isolation via Subspace Modification
The model was abliterated using PRISM, a state-of-the-art abliteration methodology that combines multiple principled techniques to remove refusal behavior while preserving model capabilities.
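The core idea behind most abliteration pipelines is directional ablation: estimate a "refusal direction" in the residual stream from contrastive prompts, then project it out of the weights. A minimal sketch of that idea (illustrative only; PRISM's actual pipeline combines further techniques, and the names here are assumptions):

import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference of mean residual-stream activations on harmful vs.
    # harmless prompts, normalized to a unit vector.
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()

def ablate(weight: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # Remove the refusal direction from a weight matrix that writes into
    # the residual stream: W' = (I - d d^T) W
    return weight - torch.outer(d, d) @ weight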
Hardware Requirements
| Quantization | Min RAM/VRAM | Recommended | Hardware Examples |
|---|---|---|---|
| IQ4_XS | ~6 GB | 12+ GB | RTX 3060 12GB, RTX 4070, Apple M1/M2/M3/M4 |
Tested Configurations
| Hardware | RAM/VRAM | Status |
|---|---|---|
| NVIDIA RTX GPU | 12+ GB | Works |
| Apple Silicon | 16+ GB Unified | Works |
Note: This is a relatively lightweight model that runs on consumer hardware with 12 GB of VRAM or less.
Vision Capabilities
GLM-4.6V-Flash supports multimodal inputs:
- Images: use <|begin_of_image|><|image|><|end_of_image|> tags
- Videos: use <|begin_of_video|><|video|><|end_of_video|> tags
Example with image:
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "path/to/image.jpg"},
{"type": "text", "text": "What is in this image?"}
]
}
]
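A minimal end-to-end sketch, reusing the processor and model from the Transformers example above:

inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))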
Ethical Considerations
This model has been modified to reduce safety guardrails. Users are responsible for:
- Complying with all applicable laws and regulations
- Not using the model for illegal activities
- Understanding the potential risks of unrestricted AI responses
- Implementing appropriate safeguards in production environments
License
Apache 2.0 (same as base model zai-org/GLM-4.6V-Flash)
Citation
@misc{elbaz2025glm46vprism,
author = {Elbaz, Eric},
title = {Elbaz-GLM-4.6V-Flash-PRISM: An Abliterated GLM-4.6V Vision-Language Model},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM}}
}
Related Models
- zai-org/GLM-4.6V-Flash - Base model
- Ex0bit/Elbaz-Prime-Intellect-3_Prism_Abliterated - INTELLECT-3 abliterated
Created by: Ex0bit (Eric Elbaz)