Run Your AI Agents Locally With An Open Source Model

Shweta Lodha

In this article, we will see how to run our agents locally. We will use the OpenAI Swarm framework, but we will not pay anything to OpenAI because we will not use OpenAI’s API key. Using OpenAI’s Swarm without an OpenAI key, confused?

Well, we will be achieving this using Ollama :)

Now, before we proceed, if you have not watched my earlier video on what OpenAI Swarm is and how to get started with it, I recommend you check it out:

Here is the link to the GitHub repository containing Swarm’s source code and implementation details.

What Are We Trying To Do?

We will create our own agent using the OpenAI Swarm framework with Ollama and the open-source model Llama3.2:1b. This agent will run locally on our machine without the need for any API key from OpenAI.

Setting Things Up

Install Swarm

We need to install Swarm from GitHub as it is still in an experimental stage, and that can be done by running the command below:

pip install git+https://github.com/openai/swarm.git
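Once installed, you can do a quick sanity check by importing the package from the command line:

python -c "from swarm import Swarm, Agent; print('Swarm is installed')"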

Import Dependencies

Here are the two dependencies we need to import:

from openai import OpenAI
from swarm import Swarm, Agent

Pull Llama3.2:1b Onto Local Machine Using Ollama

From the Ollama website, you can select any model that supports tools.

Once the model is decided, we can open a terminal and pull it using the command below:

ollama pull <modelname>
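For the Llama3.2:1b model used in this article, the command would be:

ollama pull llama3.2:1b

You can confirm the download finished by running ollama list.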

Once the command completes, the model is downloaded and available locally.

Now, we are good to go ahead and write our code to query our model with agentic capabilities.

Writing The Agent

As we have already imported the dependencies, we are good to go ahead and create our Swarm client object:

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama")

client = Swarm(client=client)
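Here, base_url points to Ollama’s local OpenAI-compatible endpoint, and the api_key value is just a placeholder since Ollama does not validate it. If you want to confirm the endpoint is reachable before wiring up Swarm, a quick check like the sketch below should work (it assumes you pulled llama3.2:1b):

# Optional sanity check: ask the local model for a one-word reply
check_client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama")
reply = check_client.chat.completions.create(
    model="llama3.2:1b",
    messages=[{"role": "user", "content": "Say hello in one word."}])
print(reply.choices[0].message.content)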

Next, we need to create an agent and its respective transfer function:

rephraser_agent = Agent(
    name="Rephraser",
    instructions="INSTRUCTION_FOR_AGENT",
    model=<MODEL_NAME>)

def transfer_to_rephraser_agent():
    return rephraser_agent

rephraser_agent.functions.append(transfer_to_rephraser_agent)

At this point, if you have multiple agents, you can write a similar transfer function for each of them and append those functions as shown below:

rephraser_agent.functions.append(<transfer_to_agent2>)
rephraser_agent.functions.append(<transfer_to_agent3>)
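For example, a hypothetical second agent and its transfer function could be wired in like this (the Summarizer agent below is just an illustration, not part of the original article):

# Hypothetical second agent to illustrate the hand-off pattern
summarizer_agent = Agent(
    name="Summarizer",
    instructions="Summarize the user's text in one sentence.",
    model="llama3.2:1b")

def transfer_to_summarizer_agent():
    return summarizer_agent

rephraser_agent.functions.append(transfer_to_summarizer_agent)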

Now, the final thing is to run the agent, and that can be done using the lines of code below:

messages = [{"role": "user", "content": "USER_MSG_HERE"}]
response = client.run(
    agent=rephraser_agent,
    messages=messages)

print(response.messages[-1]["content"])

If everything went well, you will see the rephrased text returned by the agent.
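For reference, here is a minimal end-to-end sketch that puts all the pieces together (the instruction text and user message are placeholder examples of my own):

from openai import OpenAI
from swarm import Swarm, Agent

# Point the OpenAI client at Ollama's local OpenAI-compatible endpoint
ollama_client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama")
client = Swarm(client=ollama_client)

# Single agent backed by the locally pulled llama3.2:1b model
rephraser_agent = Agent(
    name="Rephraser",
    instructions="Rephrase the user's text in a professional tone.",
    model="llama3.2:1b")

messages = [{"role": "user", "content": "hey, can u fix this by eod?"}]
response = client.run(agent=rephraser_agent, messages=messages)
print(response.messages[-1]["content"])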

I hope this gave you an initial idea of how to run your agents locally.

Point To Be Noted

There are many more details that I couldn’t cover here, but I did cover them in my video recording. Feel free to check it out:
