Is this a thinking model?
#10 opened by geveent
Is GLM 4.7 a thinking model? It is behaving like an INSTRUCT model. Is there a way for me to make it think before it answers?
Yeah, it's thinking by default. Make sure you pass `--jinja` to llama-server (see the examples in the model card).
If it's still not working, you could try `--chat-template-kwargs '{"enable_thinking": true}'`.
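Putting those flags together, a minimal llama-server launch might look like the sketch below. The model filename, alias, and port are placeholders (not from this thread), and whether `enable_thinking` has any effect depends on the chat template embedded in the GGUF:

```shell
# Sketch: serve a local GGUF with its embedded Jinja chat template applied.
# Model path and port are illustrative placeholders.
llama-server \
  -m ./GLM-4.7-Q4_K_M.gguf \
  --jinja \
  --chat-template-kwargs '{"enable_thinking": true}' \
  --port 8080
```

`--jinja` tells the server to render the model's own chat template (which is what emits the thinking tags), and `--chat-template-kwargs` passes extra JSON variables into that template.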
So far, in my limited testing, I'm liking GLM-4.7 for agentic workloads with pydantic-ai. Both this model and Kimi-K2-Thinking feel pretty good.
Let's see where 2026 takes us! (Hopefully there will still be consumer PCs available, though, hah.)