Releases: VoltAgent/voltagent

@voltagent/server-core@2.1.11

01 Apr 17:46
3776cb6

Patch Changes

  • #1183 b48f107 Thanks @omeraplak! - feat: persist selected assistant message metadata to memory

    You can enable persisted assistant message metadata at the agent level or per request.

    const result = await agent.streamText("Hello", {
      memory: {
        userId: "user-1",
        conversationId: "conv-1",
        options: {
          messageMetadataPersistence: {
            usage: true,
            finishReason: true,
          },
        },
      },
    });

    With this enabled, messages fetched from memory include assistant UIMessage.metadata
    fields such as usage and finishReason, rather than only the metadata available at stream time.

    REST API requests can enable the same behavior with options.memory.options:

    curl -X POST http://localhost:3141/agents/assistant/text \
      -H "Content-Type: application/json" \
      -d '{
        "input": "Hello",
        "options": {
          "memory": {
            "userId": "user-1",
            "conversationId": "conv-1",
            "options": {
              "messageMetadataPersistence": {
                "usage": true,
                "finishReason": true
              }
            }
          }
        }
      }'
  • Updated dependencies [b48f107, 195155b]:

    • @voltagent/core@2.6.14
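As a sketch of what the persistence option selects, here is a minimal, self-contained illustration with simplified types; the names and shapes below are illustrative assumptions, not VoltAgent's internal API:

```typescript
// Illustrative sketch only: simplified types, not VoltAgent internals.
// Shows which stream-time metadata fields survive into persisted memory
// when messageMetadataPersistence enables them.
interface MetadataPersistenceOptions {
  usage?: boolean;
  finishReason?: boolean;
}

interface PersistedAssistantMetadata {
  usage?: { inputTokens: number; outputTokens: number; totalTokens: number };
  finishReason?: string;
}

function pickPersistedMetadata(
  streamMetadata: PersistedAssistantMetadata & Record<string, unknown>,
  options: MetadataPersistenceOptions,
): PersistedAssistantMetadata {
  const persisted: PersistedAssistantMetadata = {};
  // Only fields explicitly enabled in the options are copied into memory.
  if (options.usage && streamMetadata.usage) {
    persisted.usage = streamMetadata.usage;
  }
  if (options.finishReason && streamMetadata.finishReason) {
    persisted.finishReason = streamMetadata.finishReason;
  }
  return persisted;
}
```

Fields not selected by the options (or transient stream-only values) simply do not appear on the persisted message metadata.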

@voltagent/core@2.6.14

01 Apr 17:46
3776cb6

Patch Changes

  • #1183 b48f107 Thanks @omeraplak! - feat: persist selected assistant message metadata to memory

    You can enable persisted assistant message metadata at the agent level or per request.

    const result = await agent.streamText("Hello", {
      memory: {
        userId: "user-1",
        conversationId: "conv-1",
        options: {
          messageMetadataPersistence: {
            usage: true,
            finishReason: true,
          },
        },
      },
    });

    With this enabled, messages fetched from memory include assistant UIMessage.metadata
    fields such as usage and finishReason, rather than only the metadata available at stream time.

    REST API requests can enable the same behavior with options.memory.options:

    curl -X POST http://localhost:3141/agents/assistant/text \
      -H "Content-Type: application/json" \
      -d '{
        "input": "Hello",
        "options": {
          "memory": {
            "userId": "user-1",
            "conversationId": "conv-1",
            "options": {
              "messageMetadataPersistence": {
                "usage": true,
                "finishReason": true
              }
            }
          }
        }
      }'
  • #1167 195155b Thanks @octo-patch! - fix: use OpenAI-compatible adapter for MiniMax provider

@voltagent/core@2.6.13

25 Mar 14:06
5d6d386

Patch Changes

  • #1172 8cb2aa5 Thanks @omeraplak! - fix: tighten prompt-context usage telemetry
    • redact nested large binary fields when estimating prompt context usage
    • preserve circular-reference detection when serializing nested prompt message content
    • exclude runtime-only tool metadata from tool schema token estimates
    • avoid emitting cached and reasoning token span attributes when their values are zero
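The first two bullets can be sketched as a single recursive pass; the function name and size threshold below are illustrative assumptions, not VoltAgent internals:

```typescript
// Illustrative sketch: redact large binary payloads and guard against
// circular references while walking a value for size estimation.
const BINARY_REDACTION_THRESHOLD = 1024; // assumed cutoff, in bytes

function redactForEstimation(value: unknown, seen = new WeakSet<object>()): unknown {
  // Large binary fields are replaced with a short placeholder so they do
  // not dominate (or break) the token estimate.
  if (value instanceof Uint8Array) {
    return value.byteLength > BINARY_REDACTION_THRESHOLD
      ? `[binary ${value.byteLength} bytes]`
      : value;
  }
  if (value !== null && typeof value === "object") {
    // Circular-reference detection: a value already visited on this path
    // is replaced rather than recursed into.
    if (seen.has(value)) return "[circular]";
    seen.add(value);
    if (Array.isArray(value)) {
      return value.map((item) => redactForEstimation(item, seen));
    }
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) => [key, redactForEstimation(v, seen)]),
    );
  }
  return value;
}
```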

@voltagent/ag-ui@1.0.7

25 Mar 14:06
5d6d386

Patch Changes

@voltagent/core@2.6.12

21 Mar 00:10
7bd1cca

Patch Changes

  • #1169 25b21d0 Thanks @omeraplak! - feat: add estimated prompt context telemetry for observability
    • record estimated prompt-context breakdown for system instructions, conversation messages, and tool schemas on LLM spans
    • expose cached and reasoning token usage on LLM spans for observability consumers
    • add tests for prompt-context estimation helpers

@voltagent/core@2.6.11

20 Mar 16:03
d3e3ca0

Patch Changes

  • #1168 2075bd9 Thanks @omeraplak! - fix: emit LLM judge token and provider cost telemetry on eval scorer spans

    VoltAgent now records LLM judge model, token usage, cached tokens, reasoning tokens,
    and provider-reported cost details on live eval scorer spans.

    This makes scorer-side usage visible in observability backends and enables downstream
    cost aggregation to distinguish agent costs from eval scorer costs.

  • #1163 6f14c4d Thanks @omeraplak! - fix: preserve usage and provider cost metadata on structured output failures

    When generateText receives a successful model response but structured output is not produced,
    VoltAgent now keeps the resolved usage, finish reason, and provider metadata on the resulting
    error path.

    This preserves provider-reported cost data for observability spans and makes the same metadata
    available to error hooks through VoltAgentError.metadata.
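A sketch of consuming that preserved metadata from an error hook; the error shape here is inferred from the note above and deliberately simplified (field names are assumptions, not the documented VoltAgentError type):

```typescript
// Illustrative sketch: reading usage and finish reason off an error whose
// metadata was preserved on a structured-output failure.
interface VoltAgentErrorLike {
  code?: string;
  metadata?: {
    usage?: { inputTokens: number; outputTokens: number };
    finishReason?: string;
    providerMetadata?: Record<string, unknown>;
  };
}

function summarizeFailureCost(err: VoltAgentErrorLike): string {
  const usage = err.metadata?.usage;
  if (!usage) return "no usage recorded";
  // Even though structured output failed, token counts and the finish
  // reason from the successful model response are still available.
  const total = usage.inputTokens + usage.outputTokens;
  return `finish=${err.metadata?.finishReason ?? "unknown"} tokens=${total}`;
}
```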

@voltagent/server-hono@2.0.8

18 Mar 01:32
4d3f4a6

Patch Changes

  • b523a60 Thanks @omeraplak! - feat: publish the latest Hono server workflow route updates

    What's Changed

    • expose the latest workflow execution endpoints in the Hono server package
    • include OpenAPI route definitions for attaching to active workflow streams and replaying workflow executions
    • publish the current @voltagent/server-hono route surface so dev-server installs match the latest repo behavior

@voltagent/core@2.6.10

16 Mar 21:12
e1958fb

Patch Changes

  • #1155 52bda94 Thanks @omeraplak! - fix: capture provider-reported OpenRouter costs in observability spans

    What's Changed

    • Forward OpenRouter provider-reported cost metadata to both LLM spans and root agent spans.
    • Record usage.cost and usage.cost_details.upstream_inference_* attributes for downstream cost consumers.
    • Document OpenRouter usage accounting and custom onEnd hook-based cost reporting in the observability docs.
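A downstream cost consumer could aggregate the recorded usage.cost attribute roughly as below; this is a sketch over a generic span shape, not tied to any specific observability SDK:

```typescript
// Illustrative sketch: summing provider-reported cost from span attributes
// such as usage.cost recorded on LLM and root agent spans.
interface SpanLike {
  attributes: Record<string, unknown>;
}

function totalProviderCost(spans: SpanLike[]): number {
  return spans.reduce((sum, span) => {
    const cost = span.attributes["usage.cost"];
    // Spans without provider-reported cost simply contribute nothing.
    return typeof cost === "number" ? sum + cost : sum;
  }, 0);
}
```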

@voltagent/ag-ui@1.0.6

16 Mar 21:12
e1958fb

Patch Changes

  • #1149 19c4fcf Thanks @corners99! - fix: use input instead of args for tool-call parts in message conversion

    When converting CopilotKit assistant messages with tool calls to VoltAgent format,
    the adapter was setting args on tool-call parts. The AI SDK's ToolCallPart
    interface expects input, causing the Anthropic provider to send undefined as
    the tool_use input, which the API rejected with:

    "messages.N.content.N.tool_use.input: Input should be a valid dictionary"

@voltagent/core@2.6.8

10 Mar 00:12
a1b68cc

Patch Changes

  • #1146 c7b4c45 Thanks @omeraplak! - Improve structured-output error handling for Agent.generateText when models do not emit a final output (for example after tool-calling steps).
    • Detect missing result.output immediately when output is requested and throw a descriptive VoltAgentError (STRUCTURED_OUTPUT_NOT_GENERATED) instead of surfacing a vague AI_NoOutputGeneratedError later.
    • Include finish reason and step/tool metadata in the error for easier debugging.
    • Add an unhandled-rejection hint in VoltAgent logs for missing structured outputs.
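The detection described above can be sketched as a fail-fast check; the helper name and result shape below are illustrative, and only the error code STRUCTURED_OUTPUT_NOT_GENERATED comes from the changelog:

```typescript
// Illustrative sketch: throw a descriptive, coded error as soon as a
// requested structured output is missing, instead of letting a vague
// no-output error surface later.
function ensureStructuredOutput<T>(
  result: { output?: T; finishReason: string; steps: unknown[] },
): T {
  if (result.output === undefined) {
    const err = new Error(
      // Finish reason and step count are included to aid debugging.
      `Structured output was not generated (finishReason=${result.finishReason}, steps=${result.steps.length})`,
    ) as Error & { code: string };
    err.code = "STRUCTURED_OUTPUT_NOT_GENERATED";
    throw err;
  }
  return result.output;
}
```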