fix(langchain): inline aafter_model logic in HumanInTheLoopMiddleware #35273
Open
Saakshi Gupta (saakshigupta2002) wants to merge 4 commits into langchain-ai:master from
Conversation
The `aafter_model` method was delegating to the synchronous `after_model`, which caused `interrupt()` -> `get_config()` to fail with a `RuntimeError` when the agent was invoked via `.ainvoke()`. This is because the async runnable context set up by langgraph's async executor was not accessible from the nested synchronous call. The fix inlines the `after_model` logic directly into `aafter_model` so that `interrupt()` executes within the async coroutine's context. Added async unit tests covering accept, edit, reject, mixed responses, and tool call order preservation for the `aafter_model` path. Fixes langchain-ai#34974
Summary
This PR fixes a `RuntimeError` that occurs when using `HumanInTheLoopMiddleware` with async agent invocation (`.ainvoke()`). The middleware's `aafter_model` method was delegating directly to the synchronous `after_model` method, which broke the async runnable context chain required by langgraph's `interrupt()` function.

Fixes #34974
Problem
When an agent using `HumanInTheLoopMiddleware` is invoked via `.ainvoke()`, the following error is raised:

Stack trace:
Reproduction steps:
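The underlying failure mode can be illustrated with a stdlib-only sketch. All names below are hypothetical stand-ins, not langgraph's actual internals: a contextvar set inside a coroutine is visible to a direct synchronous call from that coroutine, but not to the same call run outside the coroutine's context (here simulated with an executor thread), which is analogous to the context break that makes `get_config()` raise.

```python
import asyncio
import contextvars

# Hypothetical stand-in for langgraph's runnable-config contextvar.
_config: contextvars.ContextVar = contextvars.ContextVar("config", default=None)

def get_config() -> dict:
    # Mirrors the get_config() contract: raise if no runnable context is set.
    cfg = _config.get()
    if cfg is None:
        raise RuntimeError("called get_config() outside of a runnable context")
    return cfg

def sync_handler() -> str:
    # Stand-in for a sync hook that calls interrupt() -> get_config().
    return get_config()["run_id"]

async def main() -> tuple:
    _config.set({"run_id": "abc"})
    # Calling the sync code inline, from the coroutine body, sees the context.
    inline = sync_handler()
    # Running the same code outside the coroutine's context loses it:
    # run_in_executor does not copy the current contextvars.Context.
    loop = asyncio.get_running_loop()
    try:
        await loop.run_in_executor(None, sync_handler)
        detached = "ok"
    except RuntimeError:
        detached = "RuntimeError"
    return inline, detached

print(asyncio.run(main()))  # ('abc', 'RuntimeError')
```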
Root Cause
The `aafter_model` async method was implemented as a simple delegation to the sync `after_model`.

When langgraph's async executor calls `aafter_model`, it sets up an async runnable context (via `contextvars`). The `interrupt()` function, called inside `after_model`, relies on `get_config()` to access this context. However, because `after_model` is invoked as a plain synchronous function from within the async coroutine, the async runnable context is not properly accessible, causing the `RuntimeError`. This issue is particularly pronounced on Python 3.10, where `contextvars` propagation in asyncio differs from later versions.

Solution
Inlined the full `after_model` logic directly into `aafter_model` instead of delegating to the sync method. The logic is identical -- no async-specific changes are needed, since `interrupt()` is itself synchronous. The key fix is ensuring that `interrupt()` -> `get_config()` executes directly within the `aafter_model` coroutine body, where the async runnable context is accessible.

Changes
- `libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py`: replaced the delegating `aafter_model` with the full inlined logic from `after_model`
- `libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py`: added 6 new async unit tests for the `aafter_model` path:
  - `test_aafter_model_no_interrupts_needed`
  - `test_aafter_model_single_tool_accept`
  - `test_aafter_model_single_tool_edit`
  - `test_aafter_model_single_tool_reject`
  - `test_aafter_model_multiple_tools_mixed_responses`
  - `test_aafter_model_preserves_tool_call_order`

Test Plan
How to verify the fix end-to-end
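The sync/async parity those tests assert can be sketched with a minimal, self-contained stand-in. The class and names below are illustrative only; the real tests exercise `HumanInTheLoopMiddleware` in `test_human_in_the_loop.py`.

```python
import asyncio

class ApprovalMiddleware:
    """Hypothetical stand-in with paired sync/async hooks."""

    def after_model(self, state: dict) -> dict:
        # Decide on each pending tool call, preserving order.
        return {"decisions": [(name, "approve") for name in state["tool_calls"]]}

    async def aafter_model(self, state: dict) -> dict:
        # The pattern the fix adopts: the async hook inlines the same
        # logic rather than delegating to the sync method.
        return {"decisions": [(name, "approve") for name in state["tool_calls"]]}

def check_parity() -> bool:
    mw = ApprovalMiddleware()
    state = {"tool_calls": ["get_weather", "send_email"]}
    sync_result = mw.after_model(state)
    async_result = asyncio.run(mw.aafter_model(state))
    # Same decisions, same tool-call order, from both code paths.
    return async_result == sync_result

print(check_parity())  # True
```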