Replies: 4 comments
Great question! There are a few ways to capture the raw LLM output in CrewAI:

**1. Using Callbacks (Recommended)**

You can create a custom callback handler to intercept LLM responses:

```python
from crewai import Agent, Task, Crew
from langchain.callbacks.base import BaseCallbackHandler

class LLMOutputCapture(BaseCallbackHandler):
    def __init__(self):
        self.outputs = []

    def on_llm_end(self, response, **kwargs):
        # Capture the raw LLM output
        self.outputs.append(response.generations[0][0].text)
        print(f"Raw LLM Output: {response.generations[0][0].text}")

# Use it
capture = LLMOutputCapture()
agent = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="You are a researcher",
    callbacks=[capture]
)
```

**2. Using Verbose Mode + Custom Logging**

```python
import logging
logging.basicConfig(level=logging.DEBUG)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True  # This shows all LLM interactions
)
```

**3. Accessing Task Results Directly**

After execution, you can access results per task:

```python
result = crew.kickoff()

# Each task result contains the raw output
for task_output in result.tasks_output:
    print(f"Task: {task_output.description}")
    print(f"Raw Output: {task_output.raw}")
    print(f"Pydantic: {task_output.pydantic}")  # If you used output_pydantic
```

The callback approach gives you the most granular control - you can capture every single LLM call, including tool use and intermediate reasoning steps. Hope this helps!
The lowest-level option is a transport interceptor: wrap the layer that actually sends the request and record every response before CrewAI processes it.
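A minimal sketch of that idea, with all names illustrative: `TransportInterceptor` wraps whatever callable actually sends the request, and `fake_send` stands in for the real HTTP transport to the LLM API.

```python
class TransportInterceptor:
    """Wrap a transport callable and record every request/response pair."""
    def __init__(self, send):
        self._send = send
        self.records = []  # raw request/response pairs, verbatim

    def __call__(self, payload):
        response = self._send(payload)
        self.records.append({"request": payload, "response": response})
        return response  # pass through untouched

def fake_send(payload):
    # Stand-in for the real transport (e.g. an HTTP POST to the LLM API)
    return {"text": "echo: " + payload["prompt"]}

send = TransportInterceptor(fake_send)
send({"prompt": "hello"})
print(send.records[0]["response"]["text"])  # echo: hello
```

Because the interception happens below the framework, nothing CrewAI does to the output afterwards affects what you capture.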
Several ways to get raw LLM output:

**1. Verbose mode + callback**

```python
from crewai import Crew, Agent, Task
from crewai.utilities.callbacks import on_llm_response

raw_outputs = []

@on_llm_response
def capture_output(response):
    raw_outputs.append(response.text)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)
```

**2. Custom LLM wrapper**

```python
class LoggingLLM:
    def __init__(self, llm):
        self.llm = llm
        self.outputs = []

    def call(self, prompt, **kwargs):
        response = self.llm.call(prompt, **kwargs)  # returns the raw text
        self.outputs.append({
            "prompt": prompt,
            "response": response,
        })
        return response

agent = Agent(
    llm=LoggingLLM(your_llm),
    ...
)
```

**3. Task result with raw output**

```python
result = crew.kickoff()

# Access per-task outputs
for task_output in result.tasks_output:
    print(task_output.raw)       # Raw LLM response
    print(task_output.pydantic)  # Parsed if applicable
```

**4. LiteLLM logging**

```python
import litellm
litellm.set_verbose = True
litellm.success_callback = ["langfuse"]  # Or custom
```

For analysis:

```python
# After crew run
for i, output in enumerate(raw_outputs):
    print(f"Step {i}: {len(output)} chars")
    # Analyze tokens, patterns, etc.
```

We analyze agent outputs at Revolution AI - the callback approach is cleanest for post-hoc analysis.
Getting raw LLM output per step! At RevolutionAI (https://revolutionai.io) we debug this way:

**Enable verbose:**

```python
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True  # Prints each step
)
```

**Custom callback:**

```python
from crewai.utilities import Logger

class MyLogger(Logger):
    def log(self, level, message, **kwargs):
        if "llm_output" in kwargs:
            print(f"RAW: {kwargs['llm_output']}")

crew = Crew(..., logger=MyLogger())
```

**Or instrument LLM:**

```python
import litellm
from litellm import completion

original = completion

def instrumented(*args, **kwargs):
    result = original(*args, **kwargs)
    print(f"RAW: {result.choices[0].message.content}")
    return result

litellm.completion = instrumented  # Install the wrapper
```

Verbose mode is easiest!
-
I want to obtain the raw text output of the LLM to do some analysis, but it is wrapped.