# Releases: VoltAgent/voltagent
## @voltagent/server-core@2.1.11

### Patch Changes

- #1183 b48f107 Thanks @omeraplak! - feat: persist selected assistant message metadata to memory

  You can enable persisted assistant message metadata at the agent level or per request:

  ```typescript
  const result = await agent.streamText("Hello", {
    memory: {
      userId: "user-1",
      conversationId: "conv-1",
      options: {
        messageMetadataPersistence: {
          usage: true,
          finishReason: true,
        },
      },
    },
  });
  ```

  With this enabled, fetching messages from memory returns assistant `UIMessage.metadata` with fields like `usage` and `finishReason`, not just stream-time metadata.

  REST API requests can enable the same behavior with `options.memory.options`:

  ```bash
  curl -X POST http://localhost:3141/agents/assistant/text \
    -H "Content-Type: application/json" \
    -d '{
      "input": "Hello",
      "options": {
        "memory": {
          "userId": "user-1",
          "conversationId": "conv-1",
          "options": {
            "messageMetadataPersistence": {
              "usage": true,
              "finishReason": true
            }
          }
        }
      }
    }'
  ```

- Updated dependencies [b48f107, 195155b]:
  - @voltagent/core@2.6.14
## @voltagent/core@2.6.14

### Patch Changes

- #1183 b48f107 Thanks @omeraplak! - feat: persist selected assistant message metadata to memory

  You can enable persisted assistant message metadata at the agent level or per request:

  ```typescript
  const result = await agent.streamText("Hello", {
    memory: {
      userId: "user-1",
      conversationId: "conv-1",
      options: {
        messageMetadataPersistence: {
          usage: true,
          finishReason: true,
        },
      },
    },
  });
  ```

  With this enabled, fetching messages from memory returns assistant `UIMessage.metadata` with fields like `usage` and `finishReason`, not just stream-time metadata.

  REST API requests can enable the same behavior with `options.memory.options`:

  ```bash
  curl -X POST http://localhost:3141/agents/assistant/text \
    -H "Content-Type: application/json" \
    -d '{
      "input": "Hello",
      "options": {
        "memory": {
          "userId": "user-1",
          "conversationId": "conv-1",
          "options": {
            "messageMetadataPersistence": {
              "usage": true,
              "finishReason": true
            }
          }
        }
      }
    }'
  ```

- #1167 195155b Thanks @octo-patch! - fix: use OpenAI-compatible adapter for MiniMax provider
## @voltagent/core@2.6.13

### Patch Changes

- #1172 8cb2aa5 Thanks @omeraplak! - fix: tighten prompt-context usage telemetry

  - redact nested large binary fields when estimating prompt context usage
  - preserve circular-reference detection when serializing nested prompt message content
  - exclude runtime-only tool metadata from tool schema token estimates
  - avoid emitting cached and reasoning token span attributes when their values are zero
## @voltagent/ag-ui@1.0.7

### Patch Changes

- #1137 bb6e9b1 Thanks @corners99! - feat(ag-ui): add ACTIVITY_SNAPSHOT and ACTIVITY_DELTA event support
## @voltagent/core@2.6.12

### Patch Changes

- #1169 25b21d0 Thanks @omeraplak! - feat: add estimated prompt context telemetry for observability

  - record estimated prompt-context breakdown for system instructions, conversation messages, and tool schemas on LLM spans
  - expose cached and reasoning token usage on LLM spans for observability consumers
  - add tests for prompt-context estimation helpers
## @voltagent/core@2.6.11

### Patch Changes

- #1168 2075bd9 Thanks @omeraplak! - fix: emit LLM judge token and provider cost telemetry on eval scorer spans

  VoltAgent now records LLM judge model, token usage, cached tokens, reasoning tokens, and provider-reported cost details on live eval scorer spans.

  This makes scorer-side usage visible in observability backends and enables downstream cost aggregation to distinguish agent costs from eval scorer costs.

- #1163 6f14c4d Thanks @omeraplak! - fix: preserve usage and provider cost metadata on structured output failures

  When `generateText` receives a successful model response but structured output is not produced, VoltAgent now keeps the resolved usage, finish reason, and provider metadata on the resulting error path.

  This preserves provider-reported cost data for observability spans and makes the same metadata available to error hooks through `VoltAgentError.metadata`.
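The changelog doesn't show the error-hook shape, so here is a minimal sketch of how downstream code might read the preserved metadata. The `VoltAgentErrorLike` interface below is a stand-in, not the library's actual `VoltAgentError` type; only the `metadata`, `usage`, and `finishReason` names come from the entry above, and the nested field names are assumptions:

```typescript
// Hypothetical stand-in for the error shape described above; the nested
// usage field names (inputTokens, outputTokens, cost) are assumptions.
interface VoltAgentErrorLike {
  message: string;
  metadata?: {
    usage?: { inputTokens?: number; outputTokens?: number; cost?: number };
    finishReason?: string;
  };
}

// Pull provider-reported cost out of a failed structured-output call so it
// can still be aggregated, even though the call itself errored.
function costFromError(err: VoltAgentErrorLike): number {
  return err.metadata?.usage?.cost ?? 0;
}

const err: VoltAgentErrorLike = {
  message: "structured output not generated",
  metadata: {
    usage: { inputTokens: 120, outputTokens: 8, cost: 0.0004 },
    finishReason: "stop",
  },
};

console.log(costFromError(err)); // 0.0004
```

The point of the fix is exactly this kind of consumer: before it, the metadata was dropped on the error path and `costFromError` would always have seen `undefined`.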
## @voltagent/server-hono@2.0.8

### Patch Changes

- b523a60 Thanks @omeraplak! - feat: publish the latest Hono server workflow route updates

  What's Changed

  - expose the latest workflow execution endpoints in the Hono server package
  - include OpenAPI route definitions for attaching to active workflow streams and replaying workflow executions
  - publish the current `@voltagent/server-hono` route surface so dev-server installs match the latest repo behavior
## @voltagent/core@2.6.10

### Patch Changes

- #1155 52bda94 Thanks @omeraplak! - fix: capture provider-reported OpenRouter costs in observability spans

  What's Changed

  - Forward OpenRouter provider-reported cost metadata to both LLM spans and root agent spans.
  - Record `usage.cost` and `usage.cost_details.upstream_inference_*` attributes for downstream cost consumers.
  - Document OpenRouter usage accounting and custom `onEnd` hook-based cost reporting in the observability docs.
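The entry above names the `usage.cost` span attribute and an `onEnd` hook-based reporting pattern. As a hedged sketch of what a consumer of those attributes could look like: the attribute key comes from the changelog, but the `FinishedSpan` type and the hook wiring are assumptions, not the actual @voltagent/core observability API:

```typescript
// Stand-in span shape; a real observability hook would receive whatever
// span type the library actually emits.
type SpanAttributes = Record<string, string | number | boolean | undefined>;

interface FinishedSpan {
  name: string;
  attributes: SpanAttributes;
}

// Sum provider-reported costs across finished LLM spans, e.g. inside a
// custom onEnd-style hook that forwards totals to a billing dashboard.
function totalProviderCost(spans: FinishedSpan[]): number {
  return spans.reduce((sum, span) => {
    const cost = span.attributes["usage.cost"];
    return typeof cost === "number" ? sum + cost : sum;
  }, 0);
}

const spans: FinishedSpan[] = [
  { name: "llm.call", attributes: { "usage.cost": 0.001 } },
  { name: "llm.call", attributes: { "usage.cost": 0.0025 } },
  { name: "agent.run", attributes: {} }, // no cost attribute, skipped
];

console.log(totalProviderCost(spans)); // roughly 0.0035
```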
## @voltagent/ag-ui@1.0.6

### Patch Changes

- #1149 19c4fcf Thanks @corners99! - fix: use `input` instead of `args` for tool-call parts in message conversion

  When converting CopilotKit assistant messages with tool calls to VoltAgent format, the adapter was setting `args` on tool-call parts. The AI SDK's `ToolCallPart` interface expects `input`, causing the Anthropic provider to send `undefined` as the `tool_use` input, which the API rejects with:

  > "messages.N.content.N.tool_use.input: Input should be a valid dictionary"
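The fix amounts to a key rename at the conversion layer. A reduced sketch, with plain local interfaces standing in for the CopilotKit message and AI SDK `ToolCallPart` types (the real types carry more fields and are not imported here):

```typescript
// Reduced stand-in for a CopilotKit-style tool call, where arguments
// arrive as a JSON-encoded string.
interface CopilotToolCall {
  id: string;
  function: { name: string; arguments: string };
}

// Reduced stand-in for the AI SDK tool-call part: the crucial detail from
// the fix above is that the field is `input`, not `args`.
interface ToolCallPartLike {
  type: "tool-call";
  toolCallId: string;
  toolName: string;
  input: unknown;
}

// Convert a tool call, putting the parsed arguments under `input` so the
// provider receives a real dictionary instead of `undefined`.
function toToolCallPart(call: CopilotToolCall): ToolCallPartLike {
  return {
    type: "tool-call",
    toolCallId: call.id,
    toolName: call.function.name,
    input: JSON.parse(call.function.arguments),
  };
}

const part = toToolCallPart({
  id: "call_1",
  function: { name: "getWeather", arguments: '{"city":"Berlin"}' },
});

console.log("input" in part && !("args" in part)); // true
```

With the buggy `args` key, the provider adapter would read `part.input`, find `undefined`, and trigger exactly the Anthropic validation error quoted above.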
## @voltagent/core@2.6.8

### Patch Changes

- #1146 c7b4c45 Thanks @omeraplak! - Improve structured-output error handling for `Agent.generateText` when models do not emit a final output (for example after tool-calling steps).

  - Detect missing `result.output` immediately when `output` is requested and throw a descriptive `VoltAgentError` (`STRUCTURED_OUTPUT_NOT_GENERATED`) instead of surfacing a vague `AI_NoOutputGeneratedError` later.
  - Include finish reason and step/tool metadata in the error for easier debugging.
  - Add an unhandled-rejection hint in `VoltAgent` logs for missing structured outputs.
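Because the failure now carries a stable identifier, callers can branch on it instead of pattern-matching a vague downstream error. A hedged sketch of such a check; the `code` field name on the error is an assumption here, so verify it against the actual `VoltAgentError` shape in @voltagent/core before relying on it:

```typescript
// Stand-in for the descriptive error described above; only the
// STRUCTURED_OUTPUT_NOT_GENERATED identifier comes from the changelog,
// the field names are assumptions.
interface StructuredOutputErrorLike {
  code?: string;
  finishReason?: string;
  message: string;
}

// Classify the "model never produced structured output" case, which a
// caller might handle by retrying with a stronger output instruction.
function isMissingStructuredOutput(err: StructuredOutputErrorLike): boolean {
  return err.code === "STRUCTURED_OUTPUT_NOT_GENERATED";
}

const err: StructuredOutputErrorLike = {
  code: "STRUCTURED_OUTPUT_NOT_GENERATED",
  finishReason: "tool-calls",
  message: "No structured output was generated after tool-calling steps",
};

console.log(isMissingStructuredOutput(err)); // true
```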