
How an AI Agent Works Without a Framework: A Practical Guide

AI Agents are complicated. They rely on a combination of models, tools, and messages to understand input, perform tasks, and respond intelligently. Frameworks like LangChain and LangGraph help simplify this process by organizing these parts into reusable workflows. But what happens when you build an agent without the framework?

This post walks through a minimal example of an AI agent built from scratch—no orchestration layer, no abstractions. Just raw components working together. By reverse-engineering this agent, you’ll gain a deeper understanding of how frameworks work under the hood. You’re not just learning LangChain or LangGraph—you’re mastering the core mechanics that make agentic systems possible: how tools are registered, how messages are passed, and how models interact with external functions.

The full code for this post is available here.


Core Components of an Agent

Agent Step-by-Step Breakdown


1. Defining a Model

ChatOllama

ChatOllama is a wrapper that connects LangChain to a local Ollama model. Ollama is a system that lets you run large language models (LLMs) on your own machine—no cloud, no API keys. It’s ideal for privacy-preserving, local-first workflows.

The ChatOllama class gives you a standardized interface to interact with the model: you send it messages, and it returns responses. It behaves like a chatbot, but under the hood, it’s capable of much more—like calling tools.

If you don't have Ollama installed, check out the Ollama Deployment Guide.

```
from langchain_ollama import ChatOllama

model = ChatOllama(model="qwen2.5:3b")
```


2. Defining Tools

Tools are simple Python functions that perform tasks. In this example, we define two tools: one that adds numbers and one that multiplies them.

```
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    '''this function adds two numbers'''
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    '''this function multiplies two numbers'''
    return a * b
```

The @tool decorator registers the function so the model knows it can call it during a conversation. It wraps the function with metadata that describes its name, input parameters, and output format in a structured way—typically as JSON. This makes it easier for the model to understand how to use the tool and how to format its tool call requests.
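To make that concrete, here is a rough plain-Python sketch of the kind of metadata @tool attaches. The `describe_tool` helper is hypothetical—it is not LangChain code—but it shows how a function's name, docstring, and type hints can be turned into a structured schema the model can read:

```python
import inspect

def describe_tool(fn):
    """Hypothetical helper: build a JSON-style schema for a function,
    similar in spirit to what the @tool decorator produces."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        # Map each parameter to the name of its annotated type.
        "parameters": {
            name: param.annotation.__name__
            for name, param in sig.parameters.items()
        },
    }

def add(a: int, b: int) -> int:
    '''this function adds two numbers'''
    return a + b

schema = describe_tool(add)
print(schema)
# {'name': 'add', 'description': 'this function adds two numbers',
#  'parameters': {'a': 'int', 'b': 'int'}}
```

The real decorator produces a richer schema (required fields, JSON types, and so on), but the idea is the same: the function's signature becomes machine-readable documentation.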


Once defined, the tools are collected into a list:

```tools = [add, multiply]```



3. Connecting Tools to the Model

The model used here is ChatOllama, which wraps a language model (in this case, "qwen2.5:3b"). We bind the tools to the model so it knows what functions are available:

```
model = ChatOllama(model="qwen2.5:3b").bind_tools(tools=tools)
```


We also create a lookup dictionary so we can find the correct function later when the model requests a tool:

```
tool_lookup = {tool.name: tool for tool in tools}
```



4. Handling User Input


**Note:** Before running this section of code, you must open a terminal and run:

```
ollama serve
```

See the Ollama Deployment Guide for details.


The agent runs in a loop, waiting for user input:

```
user_input = input("Enter: ")
```

When the user types something, it’s wrapped in a HumanMessage and sent to the model:

```
from langchain_core.messages import HumanMessage

result = model.invoke([HumanMessage(content=user_input)])
```

The HumanMessage is part of LangChain’s message abstraction system. It tells the model, “This came from a human,” and helps structure the conversation. You can pass:

  • A single string, wrapped in HumanMessage(content="...")

  • A list of messages, like [SystemMessage(...), HumanMessage(...)]

  • Or even a raw string, depending on the model interface—some accept invoke("hello") directly, but using HumanMessage ensures clarity and consistency, especially when tool calls or memory are involved.

This message gets sent to the model, which responds with either:

  • A direct answer (e.g. an AIMessage with text)

  • Or a list of tool calls, which are structured requests to run one of the registered functions

You can then inspect the result and decide whether to print the answer or execute the tool calls.




5. Executing Tool Calls

Once the model receives a user message, it decides whether it needs help from an external tool to answer the question. If so, it returns a list of tool calls. This is where the agentic behavior begins—but unlike in a framework, you as the developer must manually handle each step.

Let’s walk through the full cycle:


5.1 The Model Decides to Use a Tool

After receiving the user input, the model analyzes it and determines whether a tool is needed. If so, it returns a list of tool calls:

```
if result.tool_calls:
```


Each tool call is a structured dictionary containing:

  • "name": the name of the tool to call (e.g. "add")

  • "args": the arguments to pass (e.g. {"a": 2, "b": 3})

  • "id": a unique identifier for tracking the call

This is the model’s way of saying: “I want to use this tool, here’s what I need.”


5.2 The Agent Manually Executes the Tool

The agent (your code) loops through each tool call and looks up the corresponding function:

```
tool_fn = tool_lookup.get(tool_name)
```


If the function is found, it’s invoked with the provided arguments:

```
tool_result = tool_fn.invoke(tool_args)
```


This step is manual because we’re not using a framework. You’re responsible for:

  • Validating the tool name

  • Executing the function

  • Handling errors if the tool isn’t found

This is where the agent becomes active—it’s not just a model anymore, it’s a system that can take action.
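The dispatch step can be sketched entirely in plain Python. This is a simplified stand-in—ordinary functions replace LangChain tools, so a direct call replaces `.invoke()`—but the lookup, execution, and error handling mirror the steps above. The `"divide"` call is a made-up example of a tool the model requests but we never registered:

```python
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

tool_lookup = {"add": add, "multiply": multiply}

# A tool_calls list shaped like the structured dictionaries described above.
tool_calls = [
    {"name": "add", "args": {"a": 2, "b": 3}, "id": "call_1"},
    {"name": "divide", "args": {"a": 6, "b": 2}, "id": "call_2"},  # not registered
]

results = []
for call in tool_calls:
    tool_fn = tool_lookup.get(call["name"])
    if tool_fn is None:
        # Validate the tool name: the model asked for something we never registered.
        results.append({"id": call["id"], "error": f"unknown tool {call['name']!r}"})
        continue
    # Execute the function with the model-supplied arguments.
    results.append({"id": call["id"], "result": tool_fn(**call["args"])})

print(results)
```

A framework would hide this loop; here, every branch—found, executed, missing—is yours to handle.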


5.3 Wrapping the Result in a ToolMessage

Once the tool returns a result, it’s wrapped in a ToolMessage. This message tells the model: “Here’s the result of the tool you asked for.”

```
ToolMessage(
    name=tool_name,
    content=str(tool_result),
    tool_call_id=tool_id
)
```


The tool_call_id is important—it links the result back to the original request so the model can match them up internally.
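A small plain-Python sketch (using ordinary dictionaries rather than real ToolMessage objects) shows why the id matters: if several tools run, results may come back in any order, and the id is what re-associates each result with its request:

```python
# Hypothetical requests and results; results arrive out of order.
requests = [{"id": "call_1", "name": "add"},
            {"id": "call_2", "name": "multiply"}]
results = [{"tool_call_id": "call_2", "content": "6"},
           {"tool_call_id": "call_1", "content": "5"}]

# Index results by id, then pair each request with its own result.
by_id = {r["tool_call_id"]: r["content"] for r in results}
paired = [(req["name"], by_id[req["id"]]) for req in requests]
print(paired)  # [('add', '5'), ('multiply', '6')]
```

Without the id, the model would have to rely on ordering alone—fragile as soon as more than one tool is involved.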


5.4 Sending the Results Back to the Model

Finally, the agent sends the original user input and all tool results back to the model:

```
followup = model.invoke([
    HumanMessage(content=user_input),
    *tool_messages
])
```


This second invocation is critical. It completes the reasoning loop by giving the model everything it needs to generate a final, informed response.

This is the agentic moment: the system receives input, decides to act, performs the action, and reflects on the result. You’re orchestrating that loop manually—something frameworks like LangChain automate behind the scenes.
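The whole loop fits in a short, runnable sketch. Here `fake_model` is a hypothetical stub standing in for ChatOllama—on the first call it returns a tool call, and once tool results appear in the messages it returns a final answer—so only the orchestration logic is real:

```python
def add(a, b):
    return a + b

tool_lookup = {"add": add}

def fake_model(messages):
    """Stub model: request a tool first, then answer using the tool's result."""
    tool_results = [m for m in messages if m.get("role") == "tool"]
    if not tool_results:
        # First pass: decide to act rather than answer directly.
        return {"tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}, "id": "c1"}]}
    # Second pass: reflect on the tool output and produce the final response.
    return {"content": f"The answer is {tool_results[0]['content']}"}

messages = [{"role": "human", "content": "What is 2 + 3?"}]
result = fake_model(messages)

if result.get("tool_calls"):
    for call in result["tool_calls"]:
        output = tool_lookup[call["name"]](**call["args"])
        # Wrap the result so it can be matched back to the request by id.
        messages.append({"role": "tool", "tool_call_id": call["id"],
                         "content": str(output)})
    result = fake_model(messages)

print(result["content"])  # The answer is 5
```

Swap `fake_model` for a real bound model and the dicts for LangChain message objects, and you have the agent this post describes.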




What You’re Actually Mastering

By building this agent manually, you’re learning how agentic systems work at a fundamental level. You’re mastering:

  • How messages are passed between users, models, and tools

  • How tools are registered and invoked dynamically

  • How models reason through multiple steps using feedback

  • How to orchestrate logic without relying on external frameworks

This knowledge gives you full control over your agent’s behavior and prepares you to build more advanced systems.



Conclusion: Why Frameworks Matter

This agent works—but it’s missing key features like state tracking, conditional logic, and modular composition. That’s where LangChain and LangGraph come in.

LangChain simplifies tool registration, message formatting, and multi-step reasoning. LangGraph builds on that by introducing graph-based orchestration, which makes it easier to manage state, route decisions, and coordinate multiple agents.

The teams behind LangChain and LangGraph have done exceptional work making these tools accessible and powerful. Their abstractions let you focus on designing intelligent workflows instead of wiring everything together by hand.


What’s Next: Adding State

In the next post, we’ll extend this agent by adding state. That means tracking previous interactions, storing intermediate results, and enabling memory-aware reasoning. This will introduce the next layer of abstraction and bring us closer to the kinds of adaptive, autonomous agents LangGraph was built to support.

Stay tuned.




 
