mlx-community/Qwen3-Embedding-0.6B-8bit

The model mlx-community/Qwen3-Embedding-0.6B-8bit is an 8-bit quantized conversion of Qwen/Qwen3-Embedding-0.6B to MLX format.
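
The card itself does not include usage code, so the following is only a minimal sketch of loading the conversion with mlx-lm and extracting a sentence embedding. The access to the inner transformer via `model.model`, the last-token pooling, and the L2 normalization follow the general Qwen3-Embedding recipe and current mlx-lm internals; they are assumptions here, not instructions from this card.

```python
# Minimal sketch (not from the original card): embedding a sentence with mlx-lm.
# Assumes `pip install mlx-lm` and that the converted model exposes its inner
# transformer as `model.model`, as mlx-lm's Qwen3 implementation currently does.
import mlx.core as mx
from mlx_lm import load

model, tokenizer = load("mlx-community/Qwen3-Embedding-0.6B-8bit")

text = "The quick brown fox jumps over the lazy dog"
token_ids = mx.array([tokenizer.encode(text)])

# Run the transformer body (without the LM head) to get hidden states, then
# apply last-token pooling and L2 normalization, the pooling scheme described
# for the upstream Qwen3-Embedding models.
hidden = model.model(token_ids)              # shape: (1, seq_len, hidden_dim)
embedding = hidden[:, -1, :]                 # last-token pooling
embedding = embedding / mx.linalg.norm(embedding, axis=-1, keepdims=True)

print(embedding.shape)                       # (1, 1024) for the 0.6B model
```

A higher-level interface may be available through the community mlx-embeddings package, which wraps loading and pooling for embedding models, though its API is not covered by this card.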

