Quantized Qwen2.5-VL-7B-Instruct
This repository provides quantized versions of Qwen2.5-VL-7B-Instruct, the instruction-tuned multimodal model from the Qwen2.5-VL family, in both Q4_K_M and Q5_K_M variants. The model extends Qwen2.5 language capabilities with powerful image and video understanding, structured output generation, and agentic behaviours. Optimized for tasks that require both text and vision inputs, it delivers strong results in document parsing, OCR, chart understanding, and long-video reasoning.
Model Overview
- Original Model: Qwen2.5-VL-7B-Instruct
- Variant: Instruction-tuned multimodal model
- Architecture: Vision-Language Transformer with decoder-only backbone
- Base Model: Qwen2.5-7B-Instruct
- Modalities: Text, Image, Video
- Quantized Versions:
  - Q4_K_M (4-bit quantization)
  - Q5_K_M (5-bit quantization)
- Developer: Qwen
- License: Apache 2.0 License
- Languages: English, Chinese
Quantization Details
Q4_K_M Version
- Approximately 40% size reduction
- Lower memory footprint (~9 GB)
- Well-suited for deployment on edge devices or low-resource GPUs
- Minor performance degradation in highly complex reasoning scenarios
Q5_K_M Version
- Approximately 34% size reduction
- Lower memory footprint (~10 GB)
- Better performance retention, recommended when quality is a priority
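To use a specific variant with llama.cpp, download the corresponding GGUF file (plus the vision projector for multimodal use) from this repository. The filenames below follow common GGUF naming conventions and are assumptions; check the repository's file listing for the exact names.

```bash
# Download the Q4_K_M weights and the vision projector (filenames assumed; verify against the repo's file list)
huggingface-cli download SandLogicTechnologies/Qwen2.5-VL-7B-Instruct-GGUF \
  Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf --local-dir ./models

huggingface-cli download SandLogicTechnologies/Qwen2.5-VL-7B-Instruct-GGUF \
  Qwen2.5-VL-7B-Instruct-mmproj.gguf --local-dir ./models
```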
Key Features
- Advanced visual perception: recognizes natural images, charts, plots, forms, and multilingual text.
- Long video reasoning: understands and localizes events in videos exceeding 1 hour in length.
- Agentic abilities: supports UI control, tool-use, and interactive multimodal tasks.
- Structured outputs: can generate bounding boxes, keypoints, and JSON-formatted structured responses (see the example after this list).
- Dynamic resolution handling and efficient temporal encoding for video tasks.
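As a minimal sketch of the structured-output capability, the command below asks the model to return detected objects as JSON with bounding boxes. The image path and prompt wording are illustrative; the exact output schema the model produces may vary between quantized variants.

```bash
# Illustrative grounding request: return detected objects as a JSON list (paths and prompt are placeholders)
./llava-cli -m ./models/Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf \
  --mmproj ./models/Qwen2.5-VL-7B-Instruct-mmproj.gguf \
  --image ./examples/street.jpg \
  -p "Detect all cars in the image and return a JSON list of objects, each with a 'label' and a 'bbox_2d' field."
```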
Dataset Highlights
- Post-training corpus enlarged from ~1M samples / 1.2B tokens to ~5M samples / ~60B tokens, combining reasoning and non-reasoning data.
- Emphasis on reasoning traces, schema adherence (valid JSON, format compliance), and reduced refusals.
- Supports tool / function-calling outputs and structured output formats.
Usage Example
Text-Only Inference:
./llama-cli -hf SandLogicTechnologies/Qwen2.5-VL-7B-Instruct-GGUF -p "Explain Transformer architecture"
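For longer or more controlled text generations, the usual llama.cpp sampling and context flags apply; the values below are illustrative, not tuned recommendations for this model.

```bash
# Text-only inference with explicit context size, generation length, and temperature (values are illustrative)
./llama-cli -hf SandLogicTechnologies/Qwen2.5-VL-7B-Instruct-GGUF \
  -c 8192 -n 512 --temp 0.7 \
  -p "Summarize the key ideas behind the Transformer architecture."
```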
Multi-Modal Inference (point -m at whichever quantized GGUF file you downloaded; the Q4_K_M filename below is an example):
./llava-cli -m ./models/Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf --mmproj ./models/Qwen2.5-VL-7B-Instruct-mmproj.gguf --image ./examples/chart.png -p "What does this chart represent?"
Recommended Use Cases
- Document understanding: extract structured information from forms, invoices, and tables (see the sketch after this list).
- Visual question answering: handle reasoning over complex images, charts, and diagrams.
- Video reasoning and summarization: identify key moments and provide natural-language summaries.
- Agent-style interactions: power multimodal AI agents capable of interacting with digital environments.
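As a sketch of the document-understanding use case, the following command asks the model to return invoice fields as JSON. The model and image paths are placeholders, and the field names in the prompt are assumptions for illustration.

```bash
# Hypothetical invoice-extraction run: request key fields as a JSON object (paths and field names are placeholders)
./llava-cli -m ./models/Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf \
  --mmproj ./models/Qwen2.5-VL-7B-Instruct-mmproj.gguf \
  --image ./examples/invoice.png \
  -p "Extract the invoice number, date, vendor name, and total amount as a JSON object."
```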
Acknowledgments
These quantized models are based on the original work by the Qwen development team.
Special thanks to:
- The Qwen team for developing and releasing the Qwen2.5-VL-7B-Instruct model.
- Georgi Gerganov and the llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.
Contact
For any inquiries or support, please contact us at support@sandlogic.com or visit our Website.