fix(renderer-client): normalize empty content like token client #1290

Merged
willccbb merged 1 commit into feat/renderer-inference-v1-generate from fix/renderer-client-empty-content-normalize
May 5, 2026
Conversation

@eligotts
Contributor

@eligotts eligotts commented May 5, 2026

Summary

  • Stacked on top of #<feat/renderer-inference-v1-generate PR>. Mirrors a coercion already present in OpenAIChatCompletionsTokenClient.normalize_for_comparison: tool-call-only assistant messages can be serialized with content=None or content="" depending on the upstream pipeline (reasoning parsers strip text to "", while other paths leave it as None). Incremental-prompt prefix matching must treat the two as equivalent, or it spuriously falls back to MITO.
  • renderer_client._normalize_for_comparison already drops None values from mappings, so a message with content=None normalizes to having no content key, while one with content="" would keep "content": "". We extend the same filter to also drop content == "", unifying the two shapes.
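The extended filter described above can be sketched as follows. This is a minimal, hypothetical reconstruction (the function name and exact recursion mirror what the PR describes, not the repo's actual source): mapping keys with None values are dropped, and a "content" key holding an empty string is dropped as well, so both serializations normalize to the same shape.

```python
from typing import Any

def normalize_for_comparison(value: Any) -> Any:
    """Recursively normalize a message structure for prefix comparison.

    Drops mapping entries whose value is None, and additionally drops a
    "content" key whose value is the empty string, so that content=None
    and content="" normalize to the same shape (sketch, assumed names).
    """
    if isinstance(value, dict):
        return {
            k: normalize_for_comparison(v)
            for k, v in value.items()
            # Drop None values and empty-string content.
            if v is not None and not (k == "content" and v == "")
        }
    if isinstance(value, list):
        return [normalize_for_comparison(v) for v in value]
    return value
```

With this filter, a tool-call-only assistant message compares equal whether the pipeline emitted content=None or content="".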

Test plan

  • Existing renderer-client tests still pass.
  • Manually verify a multi-turn rollout where one trajectory step has assistant content=None and the next prompt presents the same step with content="" (e.g. via a reasoning parser): bridge succeeds instead of falling back.

🤖 Generated with Claude Code


Note

Low risk: a small normalization tweak that only affects incremental-prompt prefix matching by treating content="" like a missing/None content, reducing spurious bridge failures without changing generation behavior.

Overview
Improves incremental-prompt prefix matching in renderer_client._normalize_for_comparison by also dropping mapping keys where content == "", so that tool-call-only assistant messages serialized with content=None and with content="" compare as equal.

This prevents format-only drift in upstream pipelines from forcing a full re-render/fallback when stitching multi-turn prompts.
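The prefix check this protects can be illustrated with a small self-contained sketch (helper names here are hypothetical, not the repo's API): a previous prompt is a valid prefix of the new prompt exactly when its messages match the head of the new message list after normalization.

```python
from typing import Any

def _normalize(value: Any) -> Any:
    # Stand-in for renderer_client._normalize_for_comparison (assumed behavior):
    # recursively drop None-valued keys and empty-string "content" keys.
    if isinstance(value, dict):
        return {
            k: _normalize(v)
            for k, v in value.items()
            if v is not None and not (k == "content" and v == "")
        }
    if isinstance(value, list):
        return [_normalize(v) for v in value]
    return value

def is_normalized_prefix(prev_msgs: list, new_msgs: list) -> bool:
    """True if prev_msgs matches the head of new_msgs after normalization."""
    if len(prev_msgs) > len(new_msgs):
        return False
    return [_normalize(m) for m in prev_msgs] == [
        _normalize(m) for m in new_msgs[: len(prev_msgs)]
    ]
```

Under this comparison, a trajectory step recorded with content=None still prefix-matches the same step re-presented with content="", so the bridge proceeds instead of falling back to a full re-render.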

Reviewed by Cursor Bugbot for commit 2aaac1a.

…x match

Mirrors the coercion already in OpenAIChatCompletionsTokenClient: tool-call-only
assistant messages can be serialized with content=None or content="" depending
on the upstream pipeline (reasoning parsers strip to "" while other paths leave
it None), and incremental-prompt prefix matching must not split on that.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@willccbb willccbb merged commit f68bc5b into feat/renderer-inference-v1-generate May 5, 2026
3 checks passed
@willccbb willccbb deleted the fix/renderer-client-empty-content-normalize branch May 5, 2026 19:48