How an AI Agent Evolves: From Manual Memory to LangGraph ReactAgent

In the previous post, we built an AI agent from scratch—no framework, no orchestration layer. Just a model, some tools, and a loop. It worked, but it was stateless. The agent couldn’t remember what happened in previous turns, which limited its ability to reason across multiple steps.

This post introduces memory. We’ll walk through a manual implementation using a conversation history and modular agent nodes. Then we’ll compare it to LangGraph’s prebuilt ReactAgent, which handles memory, tool routing, and multi-turn reasoning automatically.

Link to full code


Core Components of a Stateful Agent


We’re still using the same building blocks:

  • A local model via ChatOllama

  • A couple of tools (add, multiply)

  • LangChain message types (HumanMessage, ToolMessage, AIMessage)

But now we’re introducing:

  • AgentState: a structured container for memory

  • chat_node and tool_node: modular functions that mirror LangGraph’s node-based design

  • A persistent conversation_history list



1. Defining the Model and Tools

Same as before, we define a local model and register tools:

```
from langchain_ollama import ChatOllama
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    '''This function adds two numbers'''
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    '''This function multiplies two numbers'''
    return a * b


tools = [add, multiply]
model = ChatOllama(model="qwen2.5:3b").bind_tools(tools=tools)

# Map tool names to tool objects for manual dispatch later
tool_lookup = {tool.name: tool for tool in tools}
```

This setup is unchanged—but now we’re preparing to pass memory into the model.



2. Introducing State

What State Means in Agentic Workflows

In agentic systems, state refers to the evolving context that an agent uses to make decisions. It's the memory, metadata, intermediate results, and reasoning history that accumulate as the agent interacts with users and tools. Without it, an agent can respond to a single input, but it can't reflect, adapt, or reason across multiple steps.

To introduce state into our manual agent, we define a simple container:

```
from typing import TypedDict, Union
from langchain_core.messages import HumanMessage, ToolMessage, AIMessage


class AgentState(TypedDict):
    messages: list[Union[HumanMessage, AIMessage, ToolMessage]]
```

This AgentState object holds the full conversation history—every user input, model response, and tool result. It’s minimal, but powerful. By passing this state between nodes, we enable the agent to:

  • Maintain memory across turns

  • Reason based on prior inputs and outputs

  • Track tool usage and results

  • Support multi-step workflows
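
To make this concrete, here is a minimal sketch of the container in use. The messages are illustrative, and the classes come from the imports above:

```
# Illustrative only: construct state, then append a model reply to it
state = AgentState(messages=[HumanMessage(content="What is 7 plus 6?")])
state["messages"].append(AIMessage(content="7 plus 6 is 13."))
```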


Why State Matters

In simple chatbot systems, each message is processed independently. But agentic systems are different—they operate in loops, where each step builds on the last. The agent might:

  1. Receive a user question

  2. Decide to use a tool

  3. Execute the tool

  4. Reflect on the result

  5. Generate a final answer

Each of these steps produces data that must be stored and passed forward. That's what state enables.
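
Concretely, after one tool-using turn, the message list might look like this. The contents and id are illustrative, but the message types and tool-call shape match LangChain's:

```
# Illustrative trace of one tool-using turn
trace = [
    HumanMessage(content="What is 7 plus 6?"),                     # 1. user question
    AIMessage(content="", tool_calls=[                             # 2. model decides to use a tool
        {"name": "add", "args": {"a": 7, "b": 6}, "id": "call_1"},
    ]),
    ToolMessage(name="add", content="13", tool_call_id="call_1"),  # 3. tool executed
    AIMessage(content="7 plus 6 is 13."),                          # 4-5. reflect and answer
]
```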

State as a Transition Mechanism

In our manual agent, we use AgentState to transition between two core nodes:

  • chat_node: sends messages to the model and receives responses

  • tool_node: executes tools and sends follow-up messages

Each node receives the current state, modifies it, and returns an updated version. This mirrors the design of LangGraph, where state flows through a graph of nodes, each performing a specific function.

This pattern—state in, state out—is foundational to agentic reasoning. It allows agents to be modular, extensible, and adaptive.

What Could Be Added to State?

As agents grow more complex, state can include:

  • intermediate_steps: reasoning traces or scratchpad notes

  • tool_usage: counters or logs for tool calls

  • metadata: tags, filters, or routing signals

  • memory: summaries or embeddings of past interactions

  • user_profile: preferences, goals, or constraints

Frameworks like LangGraph support rich state objects that evolve dynamically. But even in a manual setup, starting with a simple AgentState gives you the scaffolding to build toward that complexity.
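
As a sketch, an extended state might look like the following. The field names are illustrative, not a LangGraph requirement:

```
from typing import Any, TypedDict, Union
from langchain_core.messages import HumanMessage, ToolMessage, AIMessage


# Illustrative extension of AgentState; total=False makes the extra fields optional
class ExtendedAgentState(TypedDict, total=False):
    messages: list[Union[HumanMessage, AIMessage, ToolMessage]]
    intermediate_steps: list[str]   # reasoning traces or scratchpad notes
    tool_usage: dict[str, int]      # counters or logs for tool calls
    metadata: dict[str, Any]        # tags, filters, or routing signals
    user_profile: dict[str, Any]    # preferences, goals, or constraints
```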



3. Modularizing the Agent: Nodes Are Just Functions

In agentic systems like LangGraph, a node is simply a unit of computation—a function that takes in state, performs some logic, and returns updated state. That’s it. There’s no magic, no special syntax. A node is just a Python function with a clear input/output contract.

In our manual agent, we define two nodes:

  • chat_node: handles model invocation and tool detection

  • tool_node: executes tools and sends follow-up messages

Each node receives the current AgentState, modifies it, and returns a new version. This mirrors LangGraph’s design, where nodes are connected in a graph and pass state between each other.


chat_node: Handles model invocation and tool detection


```
def chat_node(state: AgentState) -> AgentState:
    # Send the full message history to the model
    result = model.invoke(state["messages"])
    print(result.content)
    state["messages"].append(result)

    # If the model requested tools, hand off to tool_node
    if result.tool_calls:
        state = tool_node(state, result)
    return state
```

This node sends the current message history to the model, appends the response, and checks for tool calls. If tools are requested, it hands off to tool_node.

tool_node: Executes tools and sends follow-up messages


```
def tool_node(state: AgentState, result: AIMessage) -> AgentState:
    tool_messages = []

    for tool_call in result.tool_calls:
        tool_name = tool_call["name"]
        tool_args = tool_call["args"]
        tool_id = tool_call["id"]

        print(f"Tool_call: {tool_name} with args {tool_args}")

        # Look up and execute the requested tool
        tool_fn = tool_lookup.get(tool_name)
        if tool_fn:
            tool_result = tool_fn.invoke(tool_args)
        else:
            tool_result = f"Tool '{tool_name}' not found."

        # Wrap the result so the model can pair it with its tool call
        tool_msg = ToolMessage(name=tool_name, content=str(tool_result), tool_call_id=tool_id)
        state["messages"].append(tool_msg)
        tool_messages.append(tool_msg)

    # Let the model reason over the tool results
    followup = model.invoke(state["messages"])
    print(followup.content)
    state["messages"].append(followup)

    return state
```

This node loops through the tool calls, executes each one, wraps the results in ToolMessage objects, and sends a follow-up request to the model. It's a manual version of the tool-execution step in LangGraph's prebuilt ReAct loop.
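
For reference, each entry in result.tool_calls is a plain dict, which is exactly what tool_node unpacks. For our add tool, one call looks roughly like this (the id is illustrative):

```
# Shape of a single entry in result.tool_calls
example_tool_call = {
    "name": "add",              # which tool the model wants
    "args": {"a": 7, "b": 6},   # arguments parsed from the model's output
    "id": "call_abc123",        # used to pair the ToolMessage with this call
}
```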



4. Running the Agent with Memory

We maintain a global conversation_history and update it each turn:

```
conversation_history = []

while True:
    user_input = input("Enter: ")
    if user_input.lower() in ["exit", "quit", "q"]:
        break

    conversation_history.append(HumanMessage(content=user_input))
    # Copy the history so the nodes mutate the state, not the master list
    state = AgentState(messages=conversation_history.copy())
    state = chat_node(state)

    # Fold this turn's new messages back into the history
    new_messages = state["messages"][len(conversation_history):]
    conversation_history.extend(new_messages)

    for msg in new_messages:
        msg.pretty_print()
```

This loop now supports multi-turn memory. The model sees the full conversation every time, enabling it to reference earlier inputs and tool results.
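
To see the memory at work, here is a hand-driven sketch of two turns that bypasses the input() loop. The prompts are illustrative, and it assumes the nodes defined above:

```
# Hand-driven two-turn sketch; assumes chat_node and AgentState from above
conversation_history = [HumanMessage(content="What is 7 plus 6?")]
state = chat_node(AgentState(messages=conversation_history.copy()))
conversation_history = state["messages"]

# "that" only resolves because the first turn is still in the history
conversation_history.append(HumanMessage(content="Multiply that by 2."))
state = chat_node(AgentState(messages=conversation_history.copy()))
```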


The Cost of Manual Orchestration

At this point, you’ve built a functioning agent with memory, tool execution, and multi-turn reasoning. But it’s important to pause and reflect on how much manual orchestration this requires.

You're managing message formatting, tool registration, state transitions, and error handling—all by hand. Every time the model responds, you have to inspect whether it wants to use a tool, look up the correct function, execute it, wrap the result, and send it back. And that's just for one tool call. If the model requests multiple tools, or if you want to add retries, conditional logic, or branching workflows, the complexity grows quickly.

Even something as simple as tracking intermediate steps or debugging a failed tool call requires custom logic. You’re essentially building a framework from scratch—one function at a time.

The Power of Abstraction

On the other end of the spectrum is LangGraph’s prebuilt ReactAgent. It abstracts all of this orchestration into a clean, declarative interface. You define your tools, your model, and a prompt—and LangGraph handles everything else:

  • Tool selection and execution

  • State tracking and memory

  • Follow-up reasoning

  • Multi-turn message handling

You don’t need to write chat_node, tool_node, or manage message lists manually. The agent runs as a graph, with built-in nodes for planning, acting, and reacting. It’s optimized, extensible, and battle-tested.

This abstraction doesn’t just save time—it enforces best practices. It lets you focus on designing intelligent workflows instead of wiring together low-level components. And as your agent grows more complex—with conditional routing, retries, or multi-agent collaboration—LangGraph scales with you.


5. Transitioning to LangGraph’s ReactAgent

LangGraph offers a prebuilt agent that handles all of this automatically. Here’s the same logic using create_react_agent:

```
from langgraph.prebuilt import create_react_agent
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage


def add(a: int, b: int) -> int:
    '''This function adds two numbers'''
    return a + b


model = ChatOllama(model="qwen2.5:3b")
# create_react_agent accepts plain functions and wraps them as tools
agent = create_react_agent(model=model, tools=[add], prompt="You are a helpful assistant.")

result = agent.invoke({"messages": [HumanMessage(content="What is 7 plus 6")]})
for m in result["messages"]:
    m.pretty_print()
```

This agent:

  • Tracks message state through the reasoning loop

  • Detects tool calls

  • Executes tools

  • Sends follow-up messages

You don't need to define chat_node or tool_node, or manage message lists by hand. It's all abstracted into a graph.
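
One caveat: out of the box, this tracks state within a single invoke call. To remember across separate calls, LangGraph uses a checkpointer. A minimal sketch with the in-memory checkpointer, assuming a recent langgraph release, looks like this:

```
from langgraph.checkpoint.memory import MemorySaver

# Same agent, now persisting state keyed by thread_id
agent = create_react_agent(model=model, tools=[add],
                           prompt="You are a helpful assistant.",
                           checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo"}}
agent.invoke({"messages": [HumanMessage(content="What is 7 plus 6?")]}, config)
# The second call sees the first turn's messages via the checkpointer
agent.invoke({"messages": [HumanMessage(content="Multiply that by 2.")]}, config)
```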

What You’re Extracting

By comparing the manual agent to LangGraph’s ReactAgent, you’re extracting:

  • Message handling logic

  • Tool routing

  • State transitions

  • Multi-turn reasoning

You’re not just using LangGraph—you’re reverse-engineering its design.



Conclusion: From Mechanics to Abstractions—and Back Again

By building this agent manually, you’ve peeled back the layers of abstraction and seen how agentic systems actually work. You’ve written the orchestration logic yourself: invoking models, handling tool calls, managing state, and looping through reasoning steps. This isn’t just a coding exercise—it’s foundational knowledge. You now understand what frameworks like LangChain and LangGraph are doing behind the scenes.

LangGraph builds on these mechanics by abstracting them into reusable, declarative graphs. It handles state transitions, tool routing, and multi-turn reasoning automatically. That abstraction is powerful—but it’s only meaningful once you understand what’s being abstracted.

And now you do.

You’ve reached a critical point in your agent-building journey: you can choose your abstractions intentionally. You’re no longer dependent on frameworks—you’re leveraging them. You know when to use a prebuilt agent, and when to write your own node. You know how to debug a tool call, how to track state manually, and how to extend the system when the framework falls short.


What’s Next: Building a Hybrid Agent with Manual Graph Control

In the next post, we’ll find a happy medium. We’ll use LangGraph and LangChain to handle the complex, repetitive tasks—like tool execution loops and message formatting—but we’ll define our own graph manually.

This hybrid approach gives us the best of both worlds: the flexibility of raw Python and the structure of LangGraph. It’s ideal for developers who want to build adaptive, explainable agents without reinventing the wheel.

You’ll see how to scaffold a graph from scratch, wire up your own nodes, and selectively extract the pieces of LangChain that serve your goals. Because now that you understand the mechanics, you can confidently choose the abstractions that benefit you—and discard the ones that don’t.

Stay tuned.




