
Building Your First AI Agent: A Practical Guide

An AI agent is an LLM that can use tools: it reasons about a task, decides to act, executes an action, and observes the result, then repeats. This loop is known as ReAct (Reason + Act). In this guide, we'll build a working agent in under 150 lines of Python.

What We're Building

A research agent that can:

  • Search the web for information
  • Run Python code
  • Read files from disk
  • Give you a final synthesized answer

The ReAct Loop

Every agent runs this loop:

1. THINK: LLM decides what to do next
2. ACT: LLM calls a tool (search, code, file)
3. OBSERVE: Get the tool's result
4. Repeat until done
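
Stripped of any particular API, the loop can be sketched like this (the `llm` and `tools` callables here are illustrative stand-ins, not the real client we wire up below):

```python
def react_loop(llm, tools, question, max_turns=10):
    """Minimal ReAct sketch: llm is any callable returning a decision dict."""
    context = [question]
    for _ in range(max_turns):
        decision = llm(context)                      # THINK: pick a tool or finish
        if decision["action"] == "final":
            return decision["answer"]                # done
        observation = tools[decision["action"]](decision["input"])  # ACT
        context.append(observation)                  # OBSERVE
    return None  # gave up after max_turns
```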

The Agent Framework

import os
import openai
import json

client = openai.OpenAI(
    api_key=os.environ.get("CELUXE_API_KEY"),
    base_url="https://api.celuxe.shop/v1"
)

class Tool:
    def __init__(self, name, description, fn):
        self.name = name
        self.description = description
        self.fn = fn

    def to_dict(self):
        # OpenAI-style tool schema: the function spec must be wrapped in
        # {"type": "function", "function": {...}}
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": {
                    "type": "object",
                    "properties": {"input": {"type": "string"}},
                    "required": ["input"]
                }
            }
        }

# Define tools
def search_web(query):
    """Search the web for information."""
    # Simplified — use DuckDuckGo or SerpAPI in production
    return f"Web search results for: {query}"

def run_python(code):
    """Execute Python code and return output."""
    try:
        import subprocess
        result = subprocess.run(["python3", "-c", code],
                              capture_output=True, text=True, timeout=10)
        return result.stdout or result.stderr
    except Exception as e:
        return f"Error: {e}"

def read_file(path):
    """Read contents of a file."""
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        return f"Error reading {path}: {e}"

TOOLS = [
    Tool("search", "Search the web for information.", search_web),
    Tool("python", "Execute Python code.", run_python),
    Tool("read_file", "Read a file from disk.", read_file),
]

The Agent Loop

def run_agent(user_question, max_turns=10):
    messages = [
        {"role": "system", "content": f"""You are a helpful research assistant.
You have access to these tools: {', '.join(t.name for t in TOOLS)}.
Think step by step. When you need information, call a tool.
When you have the answer, respond directly."""},
        {"role": "user", "content": user_question}
    ]

    for turn in range(max_turns):
        # Get LLM's response
        response = client.chat.completions.create(
            model="deepseek-chat",
            messages=messages,
            tools=[t.to_dict() for t in TOOLS],
            tool_choice="auto"
        )
        msg = response.choices[0].message
        
        if msg.tool_calls:
            # LLM wants to use a tool. The assistant message carrying the
            # tool calls must go into the history before the tool results.
            messages.append(msg)
            for call in msg.tool_calls:
                tool_name = call.function.name
                tool_input = json.loads(call.function.arguments)["input"]

                # Find the tool and run it
                tool = next((t for t in TOOLS if t.name == tool_name), None)
                if tool:
                    result = tool.fn(tool_input)
                else:
                    result = f"Unknown tool: {tool_name}"
                messages.append({
                    "role": "tool",
                    "tool_call_id": call.id,
                    "content": str(result)
                })
        else:
            # LLM has final answer
            return msg.content

        print(f"[Turn {turn+1}] {msg.content or '(tool call)'}")

    return "Stopped: reached max_turns without a final answer."

# Example
result = run_agent("What is 15% of 892? Also write it to a file.")
print(result)

Adding Memory

By default, the agent doesn't remember previous interactions. Add memory:

class Agent:
    def __init__(self):
        self.messages = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(self, question):
        self.messages.append({"role": "user", "content": question})
        # The tool-calling loop from run_agent can be folded in here;
        # the key point is that self.messages persists across ask() calls.
        response = client.chat.completions.create(
            model="deepseek-chat",
            messages=self.messages
        )
        answer = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": answer})
        return answer
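
The message list is plain data, so it can also be saved between sessions. A minimal sketch (the file path is an arbitrary choice):

```python
import json
from pathlib import Path

def save_memory(messages, path="agent_memory.json"):
    """Write the conversation history to disk as JSON."""
    Path(path).write_text(json.dumps(messages, indent=2))

def load_memory(path="agent_memory.json"):
    """Load a saved history, or start a fresh one if none exists."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return [{"role": "system", "content": "You are a helpful assistant."}]
```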

Production Considerations

  • Token budgeting: Agents can loop forever. Set max_turns and token limits.
  • Tool error handling: Tools fail. Network timeouts, bad inputs, file not found. Handle gracefully.
  • Streaming: Use streaming responses so the user sees the agent thinking.
  • Persistence: Save agent state between conversations for multi-session memory.
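
For the token-budgeting point, one simple approach is to cap the history before each API call. A sketch (real budgeting should count tokens, not messages):

```python
def trim_history(messages, max_messages=20):
    """Keep the system prompt plus only the most recent messages."""
    if len(messages) <= max_messages:
        return messages
    # Preserve messages[0] (the system prompt) and drop the oldest turns
    return [messages[0]] + messages[-(max_messages - 1):]
```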

Build Your Agent Today

Access DeepSeek V3 and Claude through a single Celuxe API. Build multi-model agents with intelligent routing.

Get Your API Key →

Celuxe Team

We write about real production AI engineering.