| Model | Task | Params | Updated | Downloads |
|---|---|---|---|---|
| `inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-quantized.w4a16` | — | 6B | about 10 hours ago | — |
| `inference-optimization/Qwen3-Next-80B-A3B-Thinking-FP8-block` | Text Generation | 80B | 2 days ago | 50 |
| `inference-optimization/Qwen3-Next-80B-A3B-Instruct-FP8-block` | Text Generation | 80B | 2 days ago | 91 |
| `inference-optimization/Qwen3-Next-80B-A3B-Instruct-FP8-dynamic` | Text Generation | 80B | 2 days ago | 90 |
| `inference-optimization/Qwen3-Next-80B-A3B-Thinking-FP8-dynamic` | Text Generation | 80B | 2 days ago | 75 |
| `inference-optimization/Qwen3-30B-A3B-Thinking-2507.w4a16` | Text Generation | 5B | 19 days ago | 19 |
| `inference-optimization/Qwen3-30B-A3B-Instruct-2507.w4a16` | Text Generation | 5B | 19 days ago | 36 |
| `inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head` | — | 8B | 27 days ago | 23 |
| `inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-gate_up_proj-all` | — | 7B | Dec 4, 2025 | 2 |
| `inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-down_proj-all` | — | 6B | Dec 4, 2025 | 2 |
| `inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-qkv_proj-all` | — | 5B | Dec 4, 2025 | 20 |
| `inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-out_proj-all` | — | 5B | Dec 4, 2025 | 2 |
| `inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_BLOCK-gate_up_proj-all` | — | 7B | Dec 4, 2025 | 2 |
| `inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_BLOCK-down_proj-all` | — | 6B | Dec 4, 2025 | 2 |