Hi, while trying the examples from unit 2.1, I noticed that the InferenceClientModel class now defaults to the model Qwen/Qwen3-Next-80B-A3B-Thinking. However, this model seems to perform poorly: it doesn't generate proper code snippets or solve the task correctly.
With Qwen/Qwen2.5-Coder-32B-Instruct, by contrast, the agent performs much better and generates correct code.
