feat: Support prompt caching #3587
Conversation
Wendong-Fan
left a comment
Thanks @Zephyroam, left some comments below; love the optimization for the current workforce prompt.
Also removed retired Claude models according to: https://platform.claude.com/docs/en/about-claude/model-deprecations
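For context on the feature under review: Anthropic-style prompt caching is typically enabled by attaching a `cache_control` marker to a content block, which asks the API to cache the prompt prefix ending at that block. A minimal sketch of assembling such a request payload (the model id, helper name, and prompt text are placeholders, not code from this PR):

```python
# Sketch: building an Anthropic Messages API payload with prompt caching.
# The cache_control marker designates the end of the cacheable prefix.

def build_cached_request(system_prompt: str, user_text: str) -> dict:
    """Assemble request params with an ephemeral cache breakpoint on the system prompt."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Ask the API to cache everything up to and including this block.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

params = build_cached_request("You are a helpful assistant. " * 100, "Hello!")
```

With the real SDK these params would be passed as `client.messages.create(**params)`; on a cache hit the response's `usage` reports `cache_read_input_tokens` for the reused prefix instead of billing it as fresh input.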
JINO-ROHIT
left a comment
Hi @Zephyroam, thanks for the PR! From a quick skim I've added some feedback; I'd like to test this feature a bit in the evening when I have some more time.
LGTM if there are no issues for the https://github.com/camel-ai/camel/pull/3587/files#r2706162404 (prompt modifications)
@bytecraftii @Wendong-Fan Now the prompt changes only involve reordering.
Wendong-Fan
left a comment
When I ran examples/agents/chatagent_stream.py with a Claude model, I got this error:
Error processing with model: <camel.models.anthropic_model.AnthropicModel object at 0x104eda320>
2026-02-03 23:55:14,725 - camel.camel.agents.chat_agent - ERROR - Error in streaming model response: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages.5: `tool_use` ids were found without `tool_result` blocks immediately after: toolu_016QXsnH8t84QqBCec2zGBez, toolu_01QudEKMtgDwFB56qZKTw6Kc. Each `tool_use` block must have a corresponding `tool_result` block in the next message.'}, 'request_id': 'req_011CXmQbGMdRWQXqjmX4pgdM'}
Traceback (most recent call last):
File "/Users/enrei/Desktop/repos/camel0120/camel/camel/agents/chat_agent.py", line 4233, in _stream_response
response = self.model_backend.run(
File "/Users/enrei/Desktop/repos/camel0120/camel/camel/models/model_manager.py", line 239, in run
raise exc
File "/Users/enrei/Desktop/repos/camel0120/camel/camel/models/model_manager.py", line 229, in run
response = self.current_model.run(messages, response_format, tools)
File "/Users/enrei/Desktop/repos/camel0120/camel/camel/models/base_model.py", line 212, in wrapped_run
return original_run(self, messages, *args, **kwargs)
File "/Users/enrei/Desktop/repos/camel0120/camel/camel/models/base_model.py", line 633, in run
result = self._run(messages, response_format, tools)
File "/Users/enrei/Desktop/repos/camel0120/camel/camel/models/anthropic_model.py", line 751, in _run
stream = create_func(**request_params, stream=True)
File "/Users/enrei/Desktop/repos/camel0120/camel/.venv/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "/Users/enrei/Desktop/repos/camel0120/camel/.venv/lib/python3.10/site-packages/anthropic/resources/messages/messages.py", line 953, in create
return self._post(
File "/Users/enrei/Desktop/repos/camel0120/camel/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1336, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/Users/enrei/Desktop/repos/camel0120/camel/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1013, in request
return self._request(
File "/Users/enrei/Desktop/repos/camel0120/camel/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1117, in _request
raise self._make_status_error_from_response(err.response) from None
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages.5: `tool_use` ids were found without `tool_result` blocks immediately after: toolu_016QXsnH8t84QqBCec2zGBez, toolu_01QudEKMtgDwFB56qZKTw6Kc. Each `tool_use` block must have a corresponding `tool_result` block in the next message.'}, 'request_id': 'req_011CXmQbGMdRWQXqjmX4pgdM'}
Error: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages.5: `tool_use` ids were found without `tool_result` blocks immediately after: toolu_016QXsnH8t84QqBCec2zGBez, toolu_01QudEKMtgDwFB56qZKTw6Kc. Each `tool_use` block must have a corresponding `tool_result` block in the next message.'}, 'request_id': 'req_011CXmQbGMdRWQXqjmX4pgdM'}
Could you help check this?
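For anyone hitting the same 400: the Anthropic API requires that every `tool_use` block in an assistant message be answered by a `tool_result` block with a matching `tool_use_id` in the immediately following user message. A hypothetical pre-flight checker sketch (not code from this PR or the camel codebase) that surfaces the violation before the request is sent:

```python
# Sketch: validate the tool_use / tool_result pairing invariant that the
# Anthropic API enforces (the cause of the 400 above). Hypothetical helper.

def find_unanswered_tool_uses(messages: list[dict]) -> list[str]:
    """Return tool_use ids that lack a tool_result in the next message."""
    unanswered = []
    for i, msg in enumerate(messages):
        content = msg.get("content")
        if msg.get("role") != "assistant" or not isinstance(content, list):
            continue
        tool_ids = {b["id"] for b in content if b.get("type") == "tool_use"}
        if not tool_ids:
            continue
        # Collect tool_result ids from the very next message, if any.
        nxt = messages[i + 1]["content"] if i + 1 < len(messages) else []
        answered = {
            b.get("tool_use_id")
            for b in (nxt if isinstance(nxt, list) else [])
            if b.get("type") == "tool_result"
        }
        unanswered.extend(sorted(tool_ids - answered))
    return unanswered

msgs = [
    {"role": "assistant",
     "content": [{"type": "tool_use", "id": "toolu_a", "name": "f", "input": {}}]},
    {"role": "user", "content": "plain text instead of a tool_result"},
]
print(find_unanswered_tool_uses(msgs))  # → ['toolu_a']
```

Running a check like this on the converted message list before calling the SDK would turn the opaque server-side 400 into an actionable local error.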
Issue fixed.
Thanks @Zephyroam, LGTM now. We'd better also check the prompt cache feature for the Gemini model and AWS Bedrock / Azure; I created a follow-up issue for this: #3770
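When verifying caching across providers for the follow-up, one quick signal is the usage block of the response. A hedged sketch of summarizing Anthropic-style cache counters from a usage dict (the `cache_creation_input_tokens` / `cache_read_input_tokens` field names follow the Anthropic Messages API; Gemini and Bedrock expose different counters, so this helper is illustrative only):

```python
# Sketch: summarize prompt-cache effectiveness from an Anthropic-style usage dict.
# Field names follow the Anthropic Messages API usage schema; other providers differ.

def cache_summary(usage: dict) -> str:
    """Describe whether a response hit, wrote, or bypassed the prompt cache."""
    created = usage.get("cache_creation_input_tokens", 0) or 0
    read = usage.get("cache_read_input_tokens", 0) or 0
    fresh = usage.get("input_tokens", 0) or 0
    if read:
        return f"cache hit: {read} tokens read, {fresh} uncached"
    if created:
        return f"cache write: {created} tokens stored"
    return "no caching activity"

print(cache_summary({"input_tokens": 12, "cache_read_input_tokens": 2048}))
# → cache hit: 2048 tokens read, 12 uncached
```

Logging a line like this in the example scripts would make it easy to confirm at a glance that the second request actually reused the cached prefix.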
Description
Closes #3586
Checklist
Go over all the following points, and put an `x` in all the boxes that apply.
- [ ] Mention `Fixes #issue-number` in the PR description (required)
- [ ] Update `pyproject.toml` and the `uv` lock file
If you are unsure about any of these, don't hesitate to ask. We are here to help!