
fix(langchain): inline aafter_model logic in HumanInTheLoopMiddleware (#35273)

Open
Saakshi Gupta (saakshigupta2002) wants to merge 4 commits into langchain-ai:master from saakshigupta2002:fix/hitl-middleware-async-context

Conversation

@saakshigupta2002
Contributor

Summary

This PR fixes a RuntimeError that occurs when using HumanInTheLoopMiddleware with async agent invocation (.ainvoke()). The middleware's aafter_model method was delegating directly to the synchronous after_model method, which broke the async runnable context chain required by langgraph's interrupt() function.

Fixes #34974

Problem

When an agent using HumanInTheLoopMiddleware is invoked via .ainvoke(), the following error is raised:

```
RuntimeError: Called get_config outside of a runnable context
```

Stack trace:

```
File ".../human_in_the_loop.py", line 381, in aafter_model
    return self.after_model(state, runtime)
File ".../human_in_the_loop.py", line 331, in after_model
    decisions = interrupt(hitl_request)["decisions"]
File ".../langgraph/types.py", line 515, in interrupt
    conf = get_config()["configurable"]
File ".../langgraph/config.py", line 29, in get_config
    raise RuntimeError("Called get_config outside of a runnable context")
```

Reproduction steps:

```python
from langchain.agents.middleware.human_in_the_loop import HumanInTheLoopMiddleware
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

# model, tools, and config are assumed to be defined elsewhere
middleware = HumanInTheLoopMiddleware(interrupt_on={"my_tool": True})
agent = create_react_agent(
    model, tools, middleware=[middleware], checkpointer=MemorySaver()
)

# This works fine (sync):
agent.invoke({"messages": [("user", "use my_tool")]}, config=config)

# This fails with RuntimeError (async):
await agent.ainvoke({"messages": [("user", "use my_tool")]}, config=config)
```

Root Cause

The aafter_model async method was implemented as a simple delegation to the sync after_model:

```python
async def aafter_model(self, state, runtime):
    return self.after_model(state, runtime)  # Bug: loses async context
```

When langgraph's async executor calls aafter_model, it sets up an async runnable context (via contextvars). The interrupt() function, called inside after_model, relies on get_config() to access this context. However, since after_model is called as a plain synchronous function from within the async coroutine, the async runnable context is not properly accessible, causing the RuntimeError.

This issue is particularly pronounced on Python 3.10 where contextvars propagation in asyncio differs from later versions.
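The failure mode can be illustrated with plain contextvars. The sketch below is loosely modeled on langgraph's get_config and is illustrative only (the variable name `_config` and the internals are assumptions, not the real implementation): reading the context variable outside a context that set it produces exactly the RuntimeError from the stack trace.

```python
import contextvars

# Loose, illustrative model of langgraph's get_config; the real
# implementation differs. A ContextVar holds the runnable config, and
# reading it when unset raises the RuntimeError seen in the stack trace.
_config: contextvars.ContextVar = contextvars.ContextVar("config")

def get_config() -> dict:
    try:
        return _config.get()
    except LookupError:
        raise RuntimeError("Called get_config outside of a runnable context") from None

# When code runs inside a context that set the var, reads succeed:
ctx = contextvars.copy_context()
ctx.run(_config.set, {"configurable": {"thread_id": "1"}})
print(ctx.run(get_config))  # {'configurable': {'thread_id': '1'}}

# Outside such a context, the call fails the same way .ainvoke() did:
try:
    get_config()
except RuntimeError as exc:
    print(exc)  # Called get_config outside of a runnable context
```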

Solution

Inlined the full after_model logic directly into aafter_model instead of delegating to the sync method. The logic is identical; no async-specific changes are needed, since interrupt() itself is synchronous. The key fix is ensuring that interrupt() -> get_config() executes directly within the aafter_model coroutine body, where the async runnable context is accessible.
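In schematic form, the change looks like the sketch below (hypothetical classes, not the actual middleware code; the returned payload is a stand-in). The point is structural: the fix keeps the logic inside the coroutine body so any context-sensitive call executes there.

```python
import asyncio

class BeforeFix:
    """Shape of the buggy version: a one-line delegation."""
    def after_model(self, state, runtime):
        # interrupt() -> get_config() would run here, in a sync frame
        return {"decisions": ["accept"]}

    async def aafter_model(self, state, runtime):
        # Context-sensitive calls break when reached via this delegation
        return self.after_model(state, runtime)

class AfterFix:
    """Shape of the fix: the same logic, inlined into the coroutine."""
    def after_model(self, state, runtime):
        return {"decisions": ["accept"]}

    async def aafter_model(self, state, runtime):
        # Identical logic, but interrupt() now executes directly inside
        # the coroutine body, where the async runnable context is visible
        return {"decisions": ["accept"]}

print(asyncio.run(AfterFix().aafter_model({}, None)))  # {'decisions': ['accept']}
```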

Changes

libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py

  • Replaced the one-line delegation in aafter_model with the full inlined logic from after_model
  • The method now independently handles: message inspection, action request/config creation, interrupt flow, decision validation, and tool call reconstruction

libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py

Added 6 new async unit tests for the aafter_model path:

| Test | Description |
| --- | --- |
| test_aafter_model_no_interrupts_needed | Verifies early returns for empty messages, no tool calls, and non-matching tools |
| test_aafter_model_single_tool_accept | Tests approval flow through async path |
| test_aafter_model_single_tool_edit | Tests edit flow with modified args through async path |
| test_aafter_model_single_tool_reject | Tests rejection flow with tool message generation |
| test_aafter_model_multiple_tools_mixed_responses | Tests mixed approve/reject across multiple tools |
| test_aafter_model_preserves_tool_call_order | Verifies original tool call order is preserved with interleaved auto-approved and interrupted tools |
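A hedged sketch of how the early-return test might be shaped (the stub class and all names below are illustrative stand-ins, not the actual test file contents):

```python
import asyncio

class StubHITLMiddleware:
    """Illustrative stand-in for the middleware's early-return contract."""
    def __init__(self, interrupt_on):
        self.interrupt_on = set(interrupt_on)

    async def aafter_model(self, state, runtime=None):
        messages = state.get("messages", [])
        if not messages:
            return None  # no messages: nothing to inspect
        tool_calls = messages[-1].get("tool_calls", [])
        if not any(tc["name"] in self.interrupt_on for tc in tool_calls):
            return None  # no matching tool calls: no interrupt needed
        return {"interrupted": True}

def test_aafter_model_no_interrupts_needed():
    mw = StubHITLMiddleware(interrupt_on={"my_tool"})
    # Empty messages and non-matching tools both return early
    assert asyncio.run(mw.aafter_model({"messages": []})) is None
    no_match = {"messages": [{"tool_calls": [{"name": "other_tool"}]}]}
    assert asyncio.run(mw.aafter_model(no_match)) is None

test_aafter_model_no_interrupts_needed()
```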

Test Plan

  • All 18 existing sync tests continue to pass
  • All 6 new async tests pass
  • Full test suite: 24/24 passed

How to verify the fix end-to-end

```python
import asyncio
from langchain.agents.middleware.human_in_the_loop import HumanInTheLoopMiddleware
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

# model and tools are assumed to be defined elsewhere
middleware = HumanInTheLoopMiddleware(interrupt_on={"my_tool": True})
agent = create_react_agent(
    model, tools, middleware=[middleware], checkpointer=MemorySaver()
)

# Previously failed with RuntimeError, now works correctly:
result = asyncio.run(agent.ainvoke(
    {"messages": [("user", "use my_tool")]},
    config={"configurable": {"thread_id": "1"}}
))
```

The aafter_model method was delegating to the synchronous after_model,
which caused interrupt() -> get_config() to fail with RuntimeError when
the agent was invoked via .ainvoke(). This is because the async runnable
context set up by langgraph's async executor was not accessible from the
nested synchronous call.

The fix inlines the after_model logic directly into aafter_model so that
interrupt() executes within the async coroutine's context.

Added async unit tests covering accept, edit, reject, mixed responses,
and tool call order preservation for the aafter_model path.

Fixes langchain-ai#34974