How an AI Agent Evolves: From Manual Memory to LangGraph ReactAgent
In the previous post, we built an AI agent from scratch: no framework, no orchestration layer. Just a model, some tools, and a loop. It worked, but it was stateless. The agent couldn't remember what happened in previous turns, which limited its ability to reason across multiple steps.

This post introduces memory. We'll walk through a manual implementation using a conversation history and modular agent nodes. Then we'll compare it to LangGraph's prebuilt ReactAgent, which handles memory, tool routing, and multi-turn reasoning automatically.

Link to full code

## Core Components of a Stateful Agent

We're still using the same building blocks:

- A local model via `ChatOllama`
- A couple of tools (`add`, `multiply`)
- LangChain message types (`HumanMessage`, `ToolMessage`, `AIMessage`)

But now we're introducing:

- `AgentState`: a structured container for memory
- `chat_node` and `tool_node`: modular functions that mirror LangGraph's node-based de...
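As a minimal, framework-free sketch of these pieces, the state and node structure might look like the following. All names here are illustrative stand-ins: the real implementation uses `ChatOllama` as the model and LangChain's `HumanMessage`/`ToolMessage`/`AIMessage` types instead of plain dicts.

```python
# Framework-free sketch of a stateful agent: AgentState holds the
# conversation history, and chat_node / tool_node each take the state,
# update it, and return it (the pattern LangGraph nodes follow).

def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def make_state() -> dict:
    # AgentState: a structured container for memory
    return {"messages": []}

def chat_node(state: dict, model) -> dict:
    # Ask the model for the next step given the full history;
    # `model` is any callable here (ChatOllama in the real code)
    reply = model(state["messages"])
    state["messages"].append({"role": "ai", "content": reply})
    return state

def tool_node(state: dict, name: str, args: dict) -> dict:
    # Run the requested tool and record the result in memory,
    # analogous to appending a ToolMessage
    result = TOOLS[name](**args)
    state["messages"].append({"role": "tool", "name": name, "content": result})
    return state

# One turn, with a stubbed model that always asks for multiply(3, 7)
state = make_state()
state["messages"].append({"role": "human", "content": "What is 3 * 7?"})
state = chat_node(state, lambda msgs: "calling multiply(3, 7)")
state = tool_node(state, "multiply", {"a": 3, "b": 7})
print(state["messages"][-1]["content"])  # 21
```

Because every node reads from and writes to the same `messages` list, the history accumulates across turns, which is exactly the memory the stateless loop from the previous post was missing.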