Tools / AI Agents interview questions
Core Concepts
Question: What is the difference between a standard LLM chatbot and an AI Agent?
Answer: A chatbot is reactive, providing a single response to a single prompt. An AI Agent is proactive; it uses a reasoning loop (like ReAct) to break down a complex goal into sub-tasks, select tools, and execute actions autonomously until the goal is met.
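The distinction can be shown in a minimal sketch. Here a scripted function stands in for the LLM so the loop is runnable; the `search` tool, the message format, and the scripted replies are all hypothetical, not any particular framework's API.

```python
def fake_llm(history):
    # Scripted stand-in for a real model call: request a tool once,
    # then produce a final answer grounded in the tool result.
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "search", "args": {"q": "python release"}}
    return {"type": "final", "text": "Python was first released in 1991."}

TOOLS = {"search": lambda q: "Python 0.9.0 appeared in February 1991."}

def chatbot(prompt):
    # Reactive: one prompt in, one response out, no tool execution.
    return fake_llm([{"role": "user", "content": prompt}])

def agent(goal, max_steps=5):
    # Proactive: loop, executing tools, until the model signals the goal is met.
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        step = fake_llm(history)
        if step["type"] == "final":
            return step["text"]
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

The chatbot returns whatever the model says to a single prompt; the agent keeps executing until it reaches a terminal answer.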
Reasoning & Frameworks
Question: Explain the ReAct (Reason + Act) prompting strategy.
Answer: ReAct interleaves chain-of-thought reasoning with action execution. The agent generates a "Thought" (internal reasoning), performs an "Action" (calling a tool), records the resulting "Observation", and repeats the cycle until it can produce a final answer. Grounding each reasoning step in real tool output reduces hallucinations.
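One ReAct turn can be sketched as follows. The scripted `SCRIPT` list stands in for live model generations, and `lookup` is a hypothetical tool; only the Thought/Action/Observation trace format follows the actual ReAct convention.

```python
SCRIPT = [  # stand-in for successive LLM generations
    "Thought: I need the boiling point of water at sea level.\nAction: lookup[boiling point of water]",
    "Thought: The observation answers the question.\nAction: finish[100 degrees C]",
]

def lookup(query):
    # Hypothetical tool; a real agent would call a search or retrieval API.
    return "Water boils at 100 degrees C at sea level."

def react_loop(question, max_turns=4):
    transcript = f"Question: {question}"
    for turn in range(max_turns):
        step = SCRIPT[turn]                # real system: step = llm(transcript)
        transcript += "\n" + step
        action = step.split("Action: ")[1]  # parse "tool[argument]"
        name, arg = action.split("[", 1)
        arg = arg.rstrip("]")
        if name == "finish":
            return arg, transcript
        observation = lookup(arg)           # ground the next Thought in tool output
        transcript += f"\nObservation: {observation}"
    raise RuntimeError("no final answer within turn budget")
```

Each Observation is appended to the transcript before the next Thought is generated, which is what anchors the reasoning in real data.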
Memory Management
Question: How do you manage long-term vs. short-term memory in an agentic workflow?
Answer: Short-term memory is handled via the context window (storing recent message history). Long-term memory is managed using a Vector Database (like Pinecone or Weaviate). The agent uses RAG (Retrieval-Augmented Generation) to query historical data or documents and inject only the relevant snippets into the current prompt.
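The two tiers can be sketched with stdlib pieces: a bounded deque as the short-term context window, and a toy in-memory store with bag-of-words cosine similarity standing in for real embeddings plus a vector database like Pinecone or Weaviate.

```python
import math
from collections import Counter, deque

def embed(text):
    # Toy "embedding": word counts. A real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AgentMemory:
    def __init__(self, window=4):
        self.short_term = deque(maxlen=window)  # recent turns -> context window
        self.long_term = []                     # (embedding, text) pairs -> "vector DB"

    def remember(self, text):
        self.short_term.append(text)
        self.long_term.append((embed(text), text))

    def build_prompt(self, query, k=2):
        # RAG step: retrieve only the k most relevant long-term snippets.
        q = embed(query)
        ranked = sorted(self.long_term, key=lambda e: cosine(e[0], q), reverse=True)
        return {"context": list(self.short_term),
                "retrieved": [t for _, t in ranked[:k]],
                "query": query}
```

The key point the sketch illustrates: old turns fall out of the deque automatically, but remain queryable from the long-term store via similarity search.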
Multi-Agent Systems
Question: What are the benefits of a multi-agent architecture (e.g., CrewAI or AutoGen) over a single monolithic agent?
Answer: Multi-agent systems allow for specialization. By assigning distinct roles (e.g., a "Researcher" and a "Writer"), you reduce the cognitive load on a single model. This leads to higher accuracy, easier debugging, and the ability to run tasks in parallel.
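The role-specialization idea can be sketched with plain functions standing in for LLM-backed workers; in CrewAI or AutoGen each role would be a model call with its own narrow system prompt. The role names and outputs here are illustrative only.

```python
def researcher(topic):
    # Narrow job: gather facts only, no prose.
    return {"topic": topic,
            "facts": [f"fact A about {topic}", f"fact B about {topic}"]}

def writer(research):
    # Narrow job: turn structured facts into prose only.
    return f"Report on {research['topic']}: " + "; ".join(research["facts"])

def crew(topic):
    # Orchestrator wires the specialists together; each stage can be
    # tested and debugged in isolation, and independent stages can run
    # in parallel in a real framework.
    return writer(researcher(topic))
```

Because each stage has a single responsibility and a structured interface, a failure is localized to one role instead of buried in one long monolithic prompt.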
Security & Guardrails
Question: How do you prevent an agent from performing unintended actions when given tool access?
Answer: I implement "Human-in-the-Loop" (HITL) approval triggers for sensitive actions, run code execution in sandboxed environments, and apply strict input validation (e.g., Pydantic schemas) to tool arguments. I also run pre-execution checks that verify the agent's plan aligns with safety policies before any action is taken.
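The layered guardrails can be sketched as below. Plain-dict validation stands in for the Pydantic schemas mentioned above, and the tool names and `approve` callback are hypothetical.

```python
SCHEMAS = {  # allow-list doubles as the argument schema registry
    "send_email": {"to": str, "body": str},
    "read_file": {"path": str},
}
SENSITIVE = {"send_email"}  # actions gated behind human approval

def validate_args(tool, args):
    schema = SCHEMAS.get(tool)
    if schema is None:
        raise PermissionError(f"tool not on allow-list: {tool}")
    if set(args) != set(schema):
        raise ValueError(f"wrong argument names for {tool}")
    for key, typ in schema.items():
        if not isinstance(args[key], typ):
            raise TypeError(f"{key} must be {typ.__name__}")

def execute(tool, args, approve=lambda tool, args: False):
    # Validation first, then the HITL gate, then (in a real system) a
    # sandboxed execution of the tool itself.
    validate_args(tool, args)
    if tool in SENSITIVE and not approve(tool, args):
        return "blocked: awaiting human approval"
    return f"executed {tool}"
```

Rejecting malformed or off-list calls before the HITL gate means a human only ever reviews requests that are already structurally valid.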
Reliability & Error Handling
Question: How do you handle an agent getting stuck in an infinite reasoning loop?
Answer: I implement a "Max Iterations" cap and a "Timeout" limit. Additionally, I use self-reflection steps where a supervisor agent or a secondary LLM call evaluates if progress is being made; if not, the agent is programmed to stop and ask for human intervention.
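The three guards can be sketched in one driver loop. Here a repeated-state check stands in for the supervisor-LLM reflection step; `step_fn` and the state dict are hypothetical stand-ins for a real agent step.

```python
import time

def run_agent(step_fn, state, max_iters=10, timeout_s=30.0):
    start = time.monotonic()
    seen = set()
    for _ in range(max_iters):                       # guard 1: iteration cap
        if time.monotonic() - start > timeout_s:     # guard 2: wall-clock timeout
            return {"status": "timeout", "state": state}
        state = step_fn(state)
        if state.get("done"):
            return {"status": "ok", "state": state}
        fingerprint = repr(sorted(state.items()))
        if fingerprint in seen:                      # guard 3: no progress -> escalate
            return {"status": "stuck_ask_human", "state": state}
        seen.add(fingerprint)
    return {"status": "max_iterations", "state": state}
```

A real supervisor would judge progress semantically rather than by exact state repetition, but the structure is the same: every exit path is explicit, so the agent can never spin silently.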
