", obs) LangSmith tracing — set LANGCHAIN_TRACING_V2=true and every agent run is captured as a full tree trace in LangSmith. You can see token counts, latency per step, exact prompts sent to the model, and tool call details. This is the most powerful debugging tool for production issues. StdOutCallbackHandler — equivalent to verbose but via the callback system, useful when you need to attach it conditionally: from langchain_core.callbacks import StdOutCallbackHandler result = executor.invoke({"input": "..."}, config={"callbacks": [StdOutCallbackHandler()]})"> ", obs) LangSmith tracing — set LANGCHAIN_TRACING_V2=true and every agent run is captured as a full tree trace in LangSmith. You can see token counts, latency per step, exact prompts sent to the model, and tool call details. This is the most powerful debugging tool for production issues. StdOutCallbackHandler — equivalent to verbose but via the callback system, useful when you need to attach it conditionally: from langchain_core.callbacks import StdOutCallbackHandler result = executor.invoke({"input": "..."}, config={"callbacks": [StdOutCallbackHandler()]})" />


How do you debug LangChain agents?

Debugging LangChain agents requires visibility into the agent's reasoning steps, tool inputs, and tool outputs — not just the final answer. Several tools address this at different levels of depth.

verbose=True — prints every Thought, Action, and Observation to stdout during execution. Quick and zero-setup, ideal during development:

from langchain.agents import AgentExecutor
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

return_intermediate_steps=True — returns the full [(AgentAction, observation), ...] list in the output dict so you can inspect the steps programmatically, e.g. in tests:

result = executor.invoke({"input": "..."}, return_intermediate_steps=True)
for action, obs in result["intermediate_steps"]:
    print(action.tool, action.tool_input, "=>", obs)
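In an automated test, that same intermediate-steps list can be asserted on directly. A minimal sketch, assuming the output dict has the shape shown above; the FakeAction class and the result dict are stand-ins for a real executor.invoke() call, used here so the example runs without an LLM:

```python
class FakeAction:
    """Stand-in mimicking the .tool / .tool_input attributes of AgentAction."""
    def __init__(self, tool, tool_input):
        self.tool = tool
        self.tool_input = tool_input

# Shaped like an AgentExecutor result with return_intermediate_steps=True;
# in a real test this dict would come from executor.invoke(...).
result = {
    "output": "33.5",
    "intermediate_steps": [
        (FakeAction("search", "population of France"), "67 million"),
        (FakeAction("calculator", "67 / 2"), "33.5"),
    ],
}

def tools_called(result):
    """Return the ordered list of tool names the agent invoked."""
    return [action.tool for action, _ in result["intermediate_steps"]]

assert tools_called(result) == ["search", "calculator"]
```

Asserting on the tool sequence rather than the final answer catches regressions where the agent reaches the right output by the wrong path.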

LangSmith tracing — set LANGCHAIN_TRACING_V2=true and every agent run is captured as a full tree trace in LangSmith. You can see token counts, latency per step, exact prompts sent to the model, and tool call details. This is the most powerful debugging tool for production issues.
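Tracing is enabled purely through the environment; a typical setup might look like the following (LANGCHAIN_API_KEY and LANGCHAIN_PROJECT are the usual companion variables, and the project name here is an arbitrary example):

```shell
# Send all LangChain runs to LangSmith as tree traces
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="..."          # your LangSmith API key
export LANGCHAIN_PROJECT="agent-debug"  # optional: group runs under a project
```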

StdOutCallbackHandler — equivalent to verbose but via the callback system, useful when you need to attach it conditionally:

from langchain_core.callbacks import StdOutCallbackHandler
result = executor.invoke({"input": "..."}, config={"callbacks": [StdOutCallbackHandler()]})
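The same callback mechanism also supports custom handlers: you subclass a base handler and override hooks such as on_tool_start and on_tool_end. The sketch below is a simplified pure-Python model of that dispatch pattern, not the real LangChain classes, so it runs standalone; only the hook names mirror LangChain's:

```python
class DebugHandler:
    """Toy handler using the same hook names as LangChain's BaseCallbackHandler."""
    def __init__(self):
        self.events = []

    def on_tool_start(self, tool_name, tool_input):
        self.events.append(("start", tool_name, tool_input))

    def on_tool_end(self, tool_name, output):
        self.events.append(("end", tool_name, output))

def run_tool(tool_name, tool_input, func, handlers):
    """Dispatch loop: notify every handler before and after the tool call."""
    for h in handlers:
        h.on_tool_start(tool_name, tool_input)
    output = func(tool_input)
    for h in handlers:
        h.on_tool_end(tool_name, output)
    return output

handler = DebugHandler()
run_tool("upper", "hello", str.upper, [handler])
assert handler.events == [("start", "upper", "hello"), ("end", "upper", "HELLO")]
```

This is why attaching a handler per-invoke (as in the snippet above) is useful: you can record events for one suspect request without turning on global verbosity.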

What does verbose=True in AgentExecutor print to stdout?
Which debugging approach provides the most detail for production issues, including token counts and exact prompts?


More Related questions...

What is LangChain?
What is LCEL (LangChain Expression Language)?
What are the key components of LangChain?
How does LangChain differ from traditional LLM integration?
What are LangChain Runnables?
How do you install and set up LangChain?
How do you use ChatModels in LangChain?
What are PromptTemplates in LangChain?
What are output parsers in LangChain?
What is the LangSmith platform?
What is LangChain Hub?
What is LangServe?
How do callbacks work in LangChain?
How do you implement streaming in LangChain?
How does LangChain handle versioning?
What are Chains in LangChain?
What is the difference between sequential and parallel chains?
How do you use the pipe operator in LCEL?
What are RunnablePassthrough and RunnableLambda?
What are common chain composition patterns?
How do you implement a ConversationChain?
How does routing work in LCEL?
How do you handle errors in chains?
What are chain fallbacks and retries?
How do you do batch processing with LCEL?
What are LangChain Agents?
What are the different agent types in LangChain?
How do you create custom agents?
What is AgentExecutor?
How do tools work in LangChain agents?
How do you create custom tools?
What are multi-action agents?
How do agents plan and reason?
How do you integrate memory with agents?
How do you debug LangChain agents?
What is LangGraph?
What are the differences between LangGraph and LangChain Agents?
What is StateGraph in LangGraph?
How do nodes and edges work in LangGraph?
How do you implement conditional edges in LangGraph?
How does state management work in LangGraph?
What is the difference between MessageGraph and StateGraph?
How does checkpointing work in LangGraph?
How do you implement human-in-the-loop with LangGraph?
How do you build multi-agent systems with LangGraph?
What are subgraphs in LangGraph?
How do streaming and callbacks work in LangGraph?
What are persistence patterns in LangGraph?
How do you handle errors in LangGraph?
How do you deploy LangGraph applications?
