Building AI Agents with Ollama and .NET


Part 3 of 3 in the “Local AI with Ollama and .NET” series: Part 1 – Local AI Development | Part 2 – Local RAG | 🇫🇷 Version

In the previous posts, we explored running local LLMs with Ollama and building RAG systems. Now we’re taking it further: autonomous AI agents that can reason, plan, and take actions to solve complex tasks.

A complete, working example is available here: mongeon/code-examples · ai-agents-ollama-dotnet.

What Are AI Agents?

An AI agent is more than a chatbot. While a chatbot responds to prompts, an agent:

  • Reasons about problems and breaks them into steps
  • Plans multi-step solutions
  • Uses tools (APIs, databases, code execution, file systems)
  • Self-corrects when encountering errors
  • Remembers context across interactions
  • Acts autonomously to achieve goals

Think of the difference:

Chatbot: “What’s the weather in Paris?”
→ Response: “I can’t check real-time data.”

Agent: “What’s the weather in Paris?”
→ Agent calls WeatherAPI(“Paris”) → “Currently sunny, 22°C”

The agent has access to tools and knows when to use them.

Core Components of an AI Agent

1. The Reasoning Engine (LLM)

The LLM (via Ollama) acts as the agent’s “brain”:

  • Analyzes user requests
  • Decides which tools to invoke
  • Interprets results
  • Plans next steps

Modern models like Llama 3.3, Qwen 2.5, and Mistral excel at reasoning and function calling.
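
Recent Ollama versions support native function calling on /api/chat: you pass a tools array of JSON-schema function definitions, and capable models reply with structured tool_calls instead of free text. A minimal request shape looks like this (the get_weather function here is illustrative, not part of any library):

// Minimal sketch of an Ollama /api/chat request with native tools.
// Availability depends on the model; get_weather is a made-up example.
var request = new
{
    model = "llama3.3",
    messages = new[] { new { role = "user", content = "What's the weather in Paris?" } },
    tools = new[]
    {
        new
        {
            type = "function",
            function = new
            {
                name = "get_weather",
                description = "Get the current weather for a city",
                parameters = new
                {
                    type = "object",
                    properties = new { city = new { type = "string" } },
                    required = new[] { "city" }
                }
            }
        }
    },
    stream = false
};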

2. Tools (Functions)

Tools are capabilities the agent can invoke:

public interface ITool
{
    string Name { get; }
    string Description { get; }
    Task<string> ExecuteAsync(Dictionary<string, object> parameters);
}

Example tools:

  • SearchWeb(query) - Internet searches
  • ExecuteCode(code) - Run code snippets
  • QueryDatabase(sql) - Database access
  • ReadFile(path) - File operations
  • SendEmail(to, subject, body) - External communication
  • Calculate(expression) - Math operations

Each tool has a clear description that helps the LLM decide when to use it.

3. Memory Systems

Short-term Memory: Current conversation context

List<ChatMessage> conversationHistory;

Long-term Memory: Persistent knowledge (can leverage RAG from Part 2)

// Store in Qdrant from previous article
IVectorStore vectorMemory;

Working Memory: Temporary scratchpad during task execution

Dictionary<string, object> workingMemory;
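
These declarations are just the storage; short-term memory also needs pruning so the transcript keeps fitting the model's context window. A minimal sketch, assuming the system prompt occupies slot 0 (the maxMessages threshold is an assumption; tune it per model):

// Keep the system prompt plus the most recent turns.
// maxMessages is illustrative; pick a value that fits your context window.
void TrimHistory(List<ChatMessage> history, int maxMessages = 20)
{
    if (history.Count <= maxMessages) return;
    
    var recent = history.Skip(history.Count - (maxMessages - 1)).ToList();
    history.RemoveRange(1, history.Count - 1);
    history.AddRange(recent);
}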

4. Agent Loop

The execution loop that drives the agent’s behavior:

public async Task<string> RunAsync(string goal, int maxIterations = 10)
{
    var messages = new List<ChatMessage> 
    { 
        new ChatMessage("user", goal) 
    };
    
    for (int i = 0; i < maxIterations; i++)
    {
        var response = await llm.ChatAsync(messages, availableTools);
        
        // Record the assistant turn before appending any tool results,
        // so the transcript stays in order
        messages.Add(response);
        
        if (response.ToolCalls.Any())
        {
            foreach (var toolCall in response.ToolCalls)
            {
                var result = await ExecuteToolAsync(toolCall);
                messages.Add(new ToolResultMessage(toolCall.Id, result));
            }
        }
        else
        {
            // No tool calls: the model has produced its final answer
            return response.Content;
        }
    }
    
    throw new MaxIterationsException();
}

The ReAct Pattern

ReAct (Reasoning + Acting) is the foundational agent pattern:

┌─────────────────────────────────────────────────┐
│ 1. THOUGHT: Analyze the problem                 │
│    "I need to check the weather first"          │
└─────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│ 2. ACTION: Call a tool                          │
│    GetWeather("Paris")                          │
└─────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│ 3. OBSERVATION: Process result                  │
│    "Temperature: 22°C, Condition: Sunny"        │
└─────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│ 4. THOUGHT: Decide next step                    │
│    "Good weather, I can recommend activities"   │
└─────────────────────────────────────────────────┘
                        ↓
                [Repeat or Finish]

The agent alternates between thinking (reasoning) and acting (tool use) until it reaches a conclusion.
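
In practice the loop produces a transcript like this (illustrative; the exact labels depend on your prompt format):

THOUGHT: The user wants activity suggestions, so I should check the weather first.
ACTION: GetWeather("Paris")
OBSERVATION: Temperature: 22°C, Condition: Sunny
THOUGHT: Good weather, so outdoor activities are a safe recommendation.
ANSWER: It's sunny and 22°C in Paris, perfect for a walk along the Seine.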

Building a Simple Agent in .NET

Let’s build a basic agent with tool-calling capabilities using Ollama.

Step 1: Define Tools

public class CalculatorTool : ITool
{
    public string Name => "calculator";
    public string Description => "Performs mathematical calculations. Input: expression (string)";
    
    public Task<string> ExecuteAsync(Dictionary<string, object> parameters)
    {
        var expression = parameters["expression"].ToString();
        var result = EvaluateExpression(expression); // Use NCalc or similar
        return Task.FromResult(result.ToString());
    }
    
    private double EvaluateExpression(string expr)
    {
        var e = new NCalc.Expression(expr);
        return Convert.ToDouble(e.Evaluate());
    }
}

public class WebSearchTool : ITool
{
    public string Name => "web_search";
    public string Description => "Searches the web for information. Input: query (string)";
    
    private readonly HttpClient _client = new();
    
    public async Task<string> ExecuteAsync(Dictionary<string, object> parameters)
    {
        var query = parameters["query"].ToString();
        var results = await SearchAsync(query);
        return JsonSerializer.Serialize(results.Take(3));
    }
    
    private Task<IEnumerable<SearchResult>> SearchAsync(string query)
    {
        // Wire up DuckDuckGo, Google Custom Search, or the Bing API here,
        // mapping the provider's response to your own SearchResult type
        throw new NotImplementedException();
    }
}

Step 2: Tool Registry

public class ToolRegistry
{
    private readonly Dictionary<string, ITool> _tools = new();
    
    public void RegisterTool(ITool tool)
    {
        _tools[tool.Name] = tool;
    }
    
    public async Task<string> ExecuteAsync(string toolName, Dictionary<string, object> parameters)
    {
        if (!_tools.TryGetValue(toolName, out var tool))
            throw new ToolNotFoundException(toolName);
            
        return await tool.ExecuteAsync(parameters);
    }
    
    public List<ToolDefinition> GetToolDefinitions()
    {
        return _tools.Values.Select(t => new ToolDefinition
        {
            Name = t.Name,
            Description = t.Description
        }).ToList();
    }
}
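
Wiring it together looks like this. Invoking a tool directly, without the LLM in the loop, is a handy smoke test:

var registry = new ToolRegistry();
registry.RegisterTool(new CalculatorTool());
registry.RegisterTool(new WebSearchTool());

// Direct invocation, bypassing the LLM
var result = await registry.ExecuteAsync(
    "calculator",
    new Dictionary<string, object> { ["expression"] = "0.15 * 2450" }
);
// result == "367.5"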

Step 3: Agent Implementation

public class OllamaAgent
{
    private readonly HttpClient _client;
    private readonly string _model;
    private readonly ToolRegistry _tools;
    private readonly List<ChatMessage> _history;
    
    public OllamaAgent(string model = "llama3.3")
    {
        _client = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };
        _model = model;
        _tools = new ToolRegistry();
        _history = new List<ChatMessage>();
        
        // Register built-in tools
        _tools.RegisterTool(new CalculatorTool());
        _tools.RegisterTool(new WebSearchTool());
    }
    
    // Exposed so subclasses (RAGAgent, CodeAnalysisAgent, etc.) can add tools
    public void RegisterTool(ITool tool) => _tools.RegisterTool(tool);
    
    public async Task<string> RunAsync(string goal, int maxIterations = 10)
    {
        _history.Add(new ChatMessage 
        { 
            Role = "user", 
            Content = goal 
        });
        
        for (int iteration = 0; iteration < maxIterations; iteration++)
        {
            var response = await CallLLMAsync();
            
            // Check if model wants to call a tool
            if (TryParseToolCall(response, out var toolCall))
            {
                Console.WriteLine($"[Agent] Calling tool: {toolCall.Name}");
                
                var result = await _tools.ExecuteAsync(
                    toolCall.Name, 
                    toolCall.Parameters
                );
                
                Console.WriteLine($"[Agent] Tool result: {result}");
                
                _history.Add(new ChatMessage
                {
                    Role = "tool",
                    Name = toolCall.Name,
                    Content = result
                });
            }
            else
            {
                // Final answer
                return response;
            }
        }
        
        throw new Exception("Agent exceeded maximum iterations");
    }
    
    protected virtual async Task<string> CallLLMAsync() // virtual so ObservableAgent (below) can override
    {
        var systemPrompt = BuildSystemPrompt();
        
        var request = new
        {
            model = _model,
            messages = new[] 
            { 
                new { role = "system", content = systemPrompt } 
            }.Concat(_history.Select(m => new { role = m.Role, content = m.Content })),
            stream = false
        };
        
        var response = await _client.PostAsJsonAsync("/api/chat", request);
        var result = await response.Content.ReadFromJsonAsync<OllamaResponse>();
        
        var assistantMessage = result.Message.Content;
        _history.Add(new ChatMessage 
        { 
            Role = "assistant", 
            Content = assistantMessage 
        });
        
        return assistantMessage;
    }
    
    private string BuildSystemPrompt()
    {
        var toolDescriptions = string.Join("\n", 
            _tools.GetToolDefinitions()
                .Select(t => $"- {t.Name}: {t.Description}")
        );
        
        return $@"You are a helpful AI agent with access to tools.

Available tools:
{toolDescriptions}

To use a tool, respond with:
TOOL: tool_name
PARAMETERS: {{""param"": ""value""}}

When you have a final answer, respond normally without the TOOL prefix.

Think step by step and use tools when needed.";
    }
    
    private bool TryParseToolCall(string response, out ToolCall toolCall)
    {
        // Parse tool call format from LLM response
        if (response.Contains("TOOL:"))
        {
            // Simple parsing (in production, use structured output)
            var lines = response.Split('\n');
            var toolName = lines.First(l => l.StartsWith("TOOL:"))
                .Replace("TOOL:", "").Trim();
            var paramsLine = lines.First(l => l.StartsWith("PARAMETERS:"))
                .Replace("PARAMETERS:", "").Trim();
            var parameters = JsonSerializer.Deserialize<Dictionary<string, object>>(paramsLine);
            
            toolCall = new ToolCall { Name = toolName, Parameters = parameters };
            return true;
        }
        
        toolCall = null;
        return false;
    }
}

Step 4: Usage

var agent = new OllamaAgent("llama3.3");

var result = await agent.RunAsync(
    "What is 15% of 2,450? Then search for the current USD to CAD exchange rate."
);

Console.WriteLine(result);

Output:

[Agent] Calling tool: calculator
[Agent] Tool result: 367.5
[Agent] Calling tool: web_search
[Agent] Tool result: [{"title":"USD to CAD","snippet":"1 USD = 1.42 CAD"}]
15% of 2,450 is 367.5. The current exchange rate is approximately 1 USD = 1.42 CAD, 
so 367.5 USD equals about 521.85 CAD.

Advanced Agent Patterns

1. Multi-Agent System

Different agents with specialized roles:

public class MultiAgentSystem
{
    private readonly PlannerAgent _planner;
    private readonly ResearchAgent _researcher;
    private readonly CodingAgent _coder;
    private readonly ReviewAgent _reviewer;
    
    public async Task<string> SolveComplexTask(string task)
    {
        // Step 1: Break down the task
        var plan = await _planner.CreatePlanAsync(task);
        
        // Step 2: Execute sub-tasks
        var results = new List<string>();
        
        foreach (var subtask in plan.Steps)
        {
            var result = subtask.Type switch
            {
                "research" => await _researcher.ExecuteAsync(subtask),
                "coding" => await _coder.ExecuteAsync(subtask),
                "review" => await _reviewer.ExecuteAsync(subtask),
                _ => throw new NotSupportedException()
            };
            
            results.Add(result);
        }
        
        // Step 3: Synthesize final answer
        return await _planner.SynthesizeAsync(results);
    }
}
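
One way to represent the plan (an assumption, not a fixed API) is a pair of records that the PlannerAgent fills by asking the LLM for structured JSON and deserializing the reply:

// Assumed plan shape; PlannerAgent prompts the model to answer with
// JSON matching these records and deserializes via System.Text.Json
public record PlanStep(string Type, string Instruction);
public record Plan(List<PlanStep> Steps);

// Example planner instruction appended to its system prompt:
// "Respond ONLY with JSON: {\"steps\":[{\"type\":\"research\",\"instruction\":\"...\"}]}"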

2. Agent with RAG Integration

Combine agents with the RAG system from Part 2:

public class RAGAgent : OllamaAgent
{
    private readonly QdrantVectorStore _vectorStore;
    
    public RAGAgent(string model, QdrantVectorStore vectorStore) 
        : base(model)
    {
        _vectorStore = vectorStore;
        
        // Add RAG search tool
        RegisterTool(new KnowledgeBaseTool(_vectorStore));
    }
}

public class KnowledgeBaseTool : ITool
{
    public string Name => "search_knowledge_base";
    public string Description => "Searches the company knowledge base for information. Input: query (string)";
    
    private readonly QdrantVectorStore _vectorStore;
    
    public KnowledgeBaseTool(QdrantVectorStore vectorStore)
    {
        _vectorStore = vectorStore;
    }
    
    public async Task<string> ExecuteAsync(Dictionary<string, object> parameters)
    {
        var query = parameters["query"].ToString();
        var results = await _vectorStore.SearchAsync(query, topK: 5);
        
        return string.Join("\n\n", results.Select(r => r.Content));
    }
}

Now your agent can search your private knowledge base!
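
Usage then mirrors the plain agent. The constructor arguments below (endpoint and collection name) are assumptions based on the Part 2 setup:

// Endpoint and collection name are illustrative
var vectorStore = new QdrantVectorStore("http://localhost:6333", "company-docs");
var agent = new RAGAgent("llama3.3", vectorStore);

var answer = await agent.RunAsync(
    "Summarize our refund policy for enterprise customers."
);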

3. Autonomous Agent with Self-Assessment

public class AutonomousAgent
{
    private readonly string _goal;
    private readonly List<string> _completedTasks;
    private readonly OllamaAgent _agent;
    
    public async Task RunUntilComplete()
    {
        while (!await IsGoalAchieved())
        {
            // Self-assess progress
            var status = await AssessProgressAsync();
            
            // Determine next task
            var nextTask = await PlanNextTaskAsync(status);
            
            Console.WriteLine($"[Autonomous] Next task: {nextTask}");
            
            // Execute with agent
            var result = await _agent.RunAsync(nextTask);
            
            _completedTasks.Add($"{nextTask} -> {result}");
            
            // Store in long-term memory
            await StoreInMemoryAsync(nextTask, result);
        }
    }
    
    private async Task<bool> IsGoalAchieved()
    {
        var prompt = $@"Goal: {_goal}
Completed tasks: {string.Join(", ", _completedTasks)}

Is the goal fully achieved? Answer YES or NO only.";
        
        var response = await _agent.RunAsync(prompt, maxIterations: 1);
        return response.Contains("YES");
    }
}

Real-World Use Cases

1. Code Analysis Agent

public class CodeAnalysisAgent : OllamaAgent
{
    public CodeAnalysisAgent() : base("deepseek-coder:33b")
    {
        RegisterTool(new ParseCodeFileTool());
        RegisterTool(new RunStaticAnalysisTool());
        RegisterTool(new SearchDocumentationTool());
        RegisterTool(new GenerateTestsTool());
    }
}

// Usage
var agent = new CodeAnalysisAgent();
var report = await agent.RunAsync(
    "Analyze Program.cs for security vulnerabilities and suggest fixes"
);

2. DevOps Assistant

public class DevOpsAgent : OllamaAgent
{
    public DevOpsAgent() : base("llama3.3")
    {
        RegisterTool(new CheckBuildStatusTool());
        RegisterTool(new QueryLogsTool());
        RegisterTool(new RestartServiceTool());
        RegisterTool(new SendAlertTool());
    }
}

// Usage
var agent = new DevOpsAgent();
await agent.RunAsync(
    "Monitor production for the next hour. If error rate exceeds 5%, rollback and alert the team."
);

3. Customer Support Agent

public class SupportAgent : OllamaAgent
{
    public SupportAgent(QdrantVectorStore knowledgeBase) : base("llama3.3")
    {
        RegisterTool(new SearchKnowledgeBaseTool(knowledgeBase));
        RegisterTool(new CheckOrderStatusTool());
        RegisterTool(new CreateTicketTool());
        RegisterTool(new EscalateToHumanTool());
    }
}

Best Practices and Considerations

1. Safety and Reliability

Timeouts: Prevent infinite loops

// Assumes RunAsync has been given a CancellationToken overload
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5));
await agent.RunAsync(goal, cancellationToken: cts.Token);

Sandboxing: Isolate dangerous operations

public class SafeCodeExecutionTool : ITool
{
    public string Name => "execute_code";
    public string Description => "Executes code in an isolated sandbox. Input: code (string)";
    
    public async Task<string> ExecuteAsync(Dictionary<string, object> parameters)
    {
        var code = parameters["code"].ToString();
        // Run in a Docker container or other restricted environment;
        // DockerRunner stands in for your own isolation layer
        return await DockerRunner.RunIsolatedAsync(code);
    }
}

Human-in-the-Loop: Require approval for critical actions

if (toolCall.RequiresApproval)
{
    Console.WriteLine($"Agent wants to: {toolCall.Description}");
    Console.Write("Approve? (y/n): ");
    if (Console.ReadLine() != "y")
        return "Action denied by user";
}

2. Observability

Log all agent decisions:

public class ObservableAgent : OllamaAgent
{
    private readonly ILogger<ObservableAgent> _logger;
    
    public ObservableAgent(ILogger<ObservableAgent> logger, string model = "llama3.3")
        : base(model)
    {
        _logger = logger;
    }
    
    protected override async Task<string> CallLLMAsync()
    {
        var startTime = DateTime.UtcNow;
        var response = await base.CallLLMAsync();
        var duration = DateTime.UtcNow - startTime;
        
        _logger.LogInformation(
            "LLM call completed in {Duration}ms. Tokens: {Tokens}", 
            duration.TotalMilliseconds,
            CountTokens(response) // CountTokens: your own token-estimation helper
        );
        
        return response;
    }
}

3. Cost and Performance

Caching: Avoid redundant tool calls

// MemoryCache from Microsoft.Extensions.Caching.Memory
private readonly MemoryCache _toolResultCache = new(new MemoryCacheOptions());

public async Task<string> ExecuteToolAsync(ToolCall toolCall)
{
    var cacheKey = $"{toolCall.Name}:{JsonSerializer.Serialize(toolCall.Parameters)}";
    
    if (_toolResultCache.TryGetValue(cacheKey, out string cached))
        return cached;
    
    var result = await _tools.ExecuteAsync(toolCall.Name, toolCall.Parameters);
    _toolResultCache.Set(cacheKey, result, TimeSpan.FromMinutes(10));
    
    return result;
}

Model Selection: Use appropriate models (a routing sketch follows this list)

  • Simple tasks: llama3.2:3b, phi-3
  • Complex reasoning: llama3.3:70b, qwen2.5:72b
  • Code tasks: deepseek-coder, qwen2.5-coder

4. Testing Agents

End-to-end tests like the one below exercise the full loop, so they need a running Ollama instance:

[Fact]
public async Task Agent_CanCalculateAndSearchWeb()
{
    var agent = new OllamaAgent("llama3.3");
    var result = await agent.RunAsync(
        "Calculate 25 * 16 and search for the capital of France"
    );
    
    Assert.Contains("400", result);
    Assert.Contains("Paris", result);
}

Use mock tools for unit testing:

public class MockWeatherTool : ITool
{
    public string Name => "get_weather";
    public string Description => "Returns canned weather data for tests.";
    
    public Task<string> ExecuteAsync(Dictionary<string, object> parameters)
    {
        return Task.FromResult("{\"temp\": 22, \"condition\": \"sunny\"}");
    }
}
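
A deterministic unit test can then target the registry directly, with no Ollama dependency:

[Fact]
public async Task Registry_ReturnsMockWeather()
{
    var registry = new ToolRegistry();
    registry.RegisterTool(new MockWeatherTool());
    
    var result = await registry.ExecuteAsync(
        "get_weather",
        new Dictionary<string, object> { ["city"] = "Paris" }
    );
    
    Assert.Contains("sunny", result);
}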

Comparing Agent Frameworks

Microsoft Agent Framework

// Create an agent with an Ollama backend (illustrative shape;
// check the current Agent Framework docs for exact type names)
var agent = new Agent(
    new OllamaModelClient("llama3.3"),
    name: "WeatherAgent"
);

// Register tools/functions
agent.RegisterTool(new WeatherTool());
agent.RegisterTool(new CalculatorTool());

// Execute with automatic tool calling
var response = await agent.RunAsync(
    "What's the weather in Paris and what's 15% of 2450?"
);

Pros:

  • Purpose-built for agent development
  • Native multi-agent support
  • Integrated observability and telemetry
  • Microsoft support and active development

Cons:

  • Newer framework (evolving API)
  • Less community content than Semantic Kernel

LangChain .NET

var agent = new AgentExecutor(
    llm: new OllamaLLM("llama3.3"),
    tools: new[] { weatherTool, calculatorTool },
    agentType: AgentType.ReAct
);

await agent.RunAsync("Complex task");

Pros:

  • Established agent patterns
  • Familiar API for developers coming from Python LangChain

Cons:

  • Less mature than Python version
  • Community-driven

Custom Implementation (This Article)

Pros:

  • Full control
  • Minimal dependencies
  • Integrates with existing code

Cons:

  • More code to write
  • Need to implement patterns yourself

Future Directions

1. Multi-Modal Agents

With vision models (Llama 3.2 Vision, Qwen2-VL):

RegisterTool(new AnalyzeImageTool());
RegisterTool(new GenerateDiagramTool());

2. Agents with Code Execution

public class CodeInterpreterAgent : OllamaAgent
{
    public CodeInterpreterAgent() : base("deepseek-coder")
    {
        RegisterTool(new ExecutePythonTool());
        RegisterTool(new ExecuteCSharpTool());
        RegisterTool(new AnalyzeResultsTool());
    }
}

3. Long-Running Agents

Background agents that monitor and respond:

public class MonitoringAgent : BackgroundService
{
    private readonly OllamaAgent _agent = new("llama3.3");
    
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var issues = await DetectIssuesAsync();
            
            foreach (var issue in issues)
            {
                await _agent.RunAsync($"Resolve: {issue}");
            }
            
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}
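
Registering it as a hosted service keeps it running for the lifetime of the app:

// Program.cs
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<MonitoringAgent>();
builder.Build().Run();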

Conclusion

AI agents represent the next evolution in building intelligent systems. By combining local LLMs via Ollama with tool-calling capabilities, you can create powerful, autonomous assistants that:

  • Solve complex, multi-step problems
  • Access and manipulate external data
  • Work completely offline and privately
  • Scale without per-token costs

Key Takeaways:

  1. Agents = LLM + Tools + Loop - The core pattern is simple but powerful
  2. ReAct pattern - Alternating between reasoning and acting is fundamental
  3. Safety first - Implement timeouts, sandboxing, and approval workflows
  4. Start simple - Build basic tool-calling before tackling autonomous agents
  5. Leverage RAG - Combine agents with knowledge retrieval for domain expertise

Next Steps:

  • Experiment with the code examples on GitHub
  • Try different models—some excel at reasoning (llama3.3, qwen2.5), others at coding (deepseek-coder)
  • Build domain-specific agents for your use cases
  • Explore multi-agent systems for complex workflows

The future of AI development is agentic, autonomous, and local. Start building today!

This post was created with the assistance of AI.

