ai-api-1 | Traceback (most recent call last):
ai-api-1 | File "....../runners.py", line 308, in stream_agent
ai-api-1 | async for update in stream:
ai-api-1 | ...<2 lines>...
ai-api-1 | yield (event_type, data)
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework/_types.py", line 2943, in __anext__
ai-api-1 | update: UpdateT = await self._iterator.__anext__()
ai-api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework/_types.py", line 2943, in __anext__
ai-api-1 | update: UpdateT = await self._iterator.__anext__()
ai-api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework/_types.py", line 2943, in __anext__
ai-api-1 | update: UpdateT = await self._iterator.__anext__()
ai-api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework/_tools.py", line 2378, in _stream
ai-api-1 | async for update in inner_stream:
ai-api-1 | yield update
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework/_types.py", line 2943, in __anext__
ai-api-1 | update: UpdateT = await self._iterator.__anext__()
ai-api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework/_types.py", line 2943, in __anext__
ai-api-1 | update: UpdateT = await self._iterator.__anext__()
ai-api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework_openai/_chat_client.py", line 558, in _stream
ai-api-1 | self._handle_request_error(ex)
ai-api-1 | ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^
ai-api-1 | File "/usr/local/lib/python3.13/site-packages/agent_framework_openai/_chat_client.py", line 493, in _handle_request_error
ai-api-1 | raise ChatClientException(
ai-api-1 | ...<5 lines>...
ai-api-1 | ) from ex
ai-api-1 | agent_framework.exceptions.ChatClientException: <class 'agent_framework_openai._chat_client.OpenAIChatClient'> service failed to complete the prompt: Error code: 400 - {'error': {'message': 'Provider returned error', 'code': 400, 'metadata': {'raw': '{\n "error": {\n "message": "Invalid parameter: messages with role \'tool\' must be a response to a preceeding message with \'tool_calls\'.",\n "type": "invalid_request_error",\n "param": "messages.[0].role",\n "code": null\n }\n}', 'provider_name': 'OpenAI', 'is_byok': False}}, 'user_id': 'user_2t7Bf1EvP3yaPWbIsLaLDXgTWhk'}
Summary
Streaming agent execution intermittently fails after a tool call, with OpenRouter/OpenAI returning:

Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.

The failure bubbles up as an openai.BadRequestError from openai/resources/responses/responses.py, surfaced through agent_framework_openai._chat_client.OpenAIChatClient.

Observed behavior

A valid tool run completes, then the next streamed model call fails with HTTP 400.

Provider payload error:

param: messages.[0].role
messages with role 'tool' must be a response to a preceeding message with 'tool_calls'

Expected behavior

The framework should send tool-role messages only immediately after an assistant message that includes matching tool_calls in the same request sequence, so streaming continues normally after tool results.

Actual behavior

Streaming aborts with a provider 400 and the task fails.
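To make the ordering rule concrete, here is a minimal sketch of the invariant the provider enforces. The validator and both message lists are illustrative assumptions, not agent-framework code: every tool-role message must answer a tool_call id issued by the most recent assistant message, and a request whose first message has role 'tool' (the messages.[0].role case from the error payload) violates this.

```python
def tool_messages_are_valid(messages):
    """Return True if every tool-role message answers a tool_call
    issued by the most recent assistant message (hypothetical check,
    mirroring the provider's 400 condition)."""
    pending_ids = set()
    for msg in messages:
        role = msg.get("role")
        if role == "assistant":
            # A new assistant turn defines which tool_call ids may be answered.
            pending_ids = {tc["id"] for tc in msg.get("tool_calls", [])}
        elif role == "tool":
            # A tool result with no matching pending tool_call is invalid.
            if msg.get("tool_call_id") not in pending_ids:
                return False
        else:
            # Any other role (user/system) closes the tool_call window.
            pending_ids = set()
    return True

# Valid sequence: the assistant issues a tool_call, then the tool answers it.
ok = [
    {"role": "user", "content": "What's the weather in Oslo?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1", "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 4}'},
]

# Invalid sequence: a tool message leads the request, exactly the
# messages.[0].role failure reported above.
bad = [
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 4}'},
]

print(tool_messages_are_valid(ok))   # True
print(tool_messages_are_valid(bad))  # False
```

If history truncation or retry logic ever drops the assistant message carrying the tool_calls while keeping the tool result, a check like this would catch the malformed request before it reaches the provider.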
Error Messages / Stack Traces

See the full traceback at the top of this report.
Package Versions
agent-framework: 1.0.0
Python Version
3.11.12
Additional Context
No response