
Getting Started

What is agentics?

Agentics is a lightweight, Python-native framework for building structured, agentic workflows over tabular or JSON-based data using Pydantic types and transduction logic. Designed to work seamlessly with large language models (LLMs), Agentics lets users define input and output schemas as structured types and apply declarative, composable transformations, called transductions, across data collections. Inspired by a low-code design philosophy, Agentics is ideal for rapidly prototyping intelligent systems that require structured reasoning and interpretable outputs over both structured and unstructured data.

Installation

  • Clone the repository
  git clone git@github.com:IBM/agentics.git
  cd agentics
  • Install uv (skip if already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

Other installation options here

  • Install the dependencies
uv sync
# Source the environment (optional; you can skip this and prefix later commands with uv run)
source .venv/bin/activate # bash/zsh 🐚
source .venv/bin/activate.fish # fish 🐟

🎯 Set Environment Variables

Create a .env file in the root directory with your environment variables. See .env.sample for an example.
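For example, a minimal .env for the OpenAI option described below might look like this (the key value is a placeholder; the variable names are the ones documented in the next section):

```
OPENAI_API_KEY=sk-...
OPENAI_MODEL_ID=openai/gpt-4
```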

Set up an LLM provider. Choose one of the following:

OpenAI

  • Obtain an API key from OpenAI
  • OPENAI_API_KEY - Your OpenAI API key
  • OPENAI_MODEL_ID - Your favorite model; defaults to openai/gpt-4

Ollama (local)

  • Download and install Ollama
  • Download a model. Use one that supports reasoning and fits on your GPU; smaller models are preferred.
    ollama pull ollama/deepseek-r1:latest

  • OLLAMA_MODEL_ID - ollama/gpt-oss:latest (better quality) or ollama/deepseek-r1:latest (smaller)

IBM WatsonX:

  • WATSONX_APIKEY - WatsonX API key

  • MODEL - watsonx/meta-llama/llama-3-3-70b-instruct (or an alternative that supports function calling)

Google Gemini (offers a free API key)

  • GOOGLE_API_KEY - Your Gemini API key

  • MODEL - gemini/gemini-2.0-flash (or an alternative that supports function calling)

vLLM (requires a dedicated GPU server):

  • Set up your local vLLM instance
  • VLLM_URL - http://base_url:PORT/v1
  • VLLM_MODEL_ID - Your model id (e.g. "hosted_vllm/meta-llama/Llama-3.3-70B-Instruct" )

LiteLLM (100+ providers via single interface)

LiteLLM provides a unified interface to access 100+ LLM providers. You can use models from OpenAI, Anthropic, Google, Cohere, Azure, Hugging Face, and more.

Basic Setup (Local LiteLLM):

  • LITELLM_MODEL - Model in format provider/model-name (e.g., openai/gpt-4, claude/claude-opus-4-5-20251101, gemini/gemini-2.0-flash)
  • The required API key for your provider should be in environment variables (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
  • Optional: LITELLM_TEMPERATURE - Set temperature (default: varies by provider)
  • Optional: LITELLM_TOP_P - Set top-p sampling (default: varies by provider)

Examples:

OpenAI via LiteLLM:

export LITELLM_MODEL="openai/gpt-4"
export OPENAI_API_KEY="sk-..."

Anthropic Claude via LiteLLM:

export LITELLM_MODEL="claude/claude-opus-4-5-20251101"
export ANTHROPIC_API_KEY="sk-ant-..."

Google Gemini via LiteLLM:

export LITELLM_MODEL="gemini/gemini-2.0-flash"
export GOOGLE_API_KEY="..."

LiteLLM Proxy Server

If you have a self-hosted LiteLLM proxy server:

  • LITELLM_PROXY_URL - Base URL of your LiteLLM proxy (e.g., http://localhost:8000)
  • LITELLM_PROXY_API_KEY - API key for the proxy
  • LITELLM_PROXY_MODEL - Model name in format litellm_proxy/<model-name> (e.g., litellm_proxy/gpt-4)
  • Optional: LITELLM_PROXY_TEMPERATURE - Set temperature
  • Optional: LITELLM_PROXY_TOP_P - Set top-p sampling

Example:

export LITELLM_PROXY_URL="http://localhost:8000"
export LITELLM_PROXY_API_KEY="sk-proxy-key-123"
export LITELLM_PROXY_MODEL="litellm_proxy/my-model"

You can also use the configuration script provided in the git repo (⚠️ not available through pip install):

uv run tasks.py setup

Checking LiteLLM Status

After configuration, you can check if your LiteLLM setup is working:

show-llms

This will display a table showing the authentication status of all configured LLMs, including LiteLLM.
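If show-llms is unavailable (for example, when Agentics was installed via pip without the repo scripts), a plain-Python check of which provider variables are set can serve as a quick substitute. This is a hypothetical helper, not part of the Agentics API; the variable names are the ones documented in the sections above:

```python
import os

# Required environment variables per provider, as documented above.
PROVIDER_VARS = {
    "OpenAI": ["OPENAI_API_KEY"],
    "Ollama": ["OLLAMA_MODEL_ID"],
    "WatsonX": ["WATSONX_APIKEY"],
    "vLLM": ["VLLM_URL", "VLLM_MODEL_ID"],
    "LiteLLM": ["LITELLM_MODEL"],
}

def configured_providers(environ=os.environ) -> list[str]:
    """Return the providers whose required variables are all set."""
    return [
        name
        for name, keys in PROVIDER_VARS.items()
        if all(environ.get(k) for k in keys)
    ]

print(configured_providers())
```

Note that this only checks that the variables are present, not that the credentials are valid; show-llms performs the actual authentication check.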

Test Installation

Test the hello world examples (LLM credentials must be configured first):

uv run examples/hello_world.py
uv run examples/self_transduction.py
uv run examples/agentics_web_search_report.py

Hello World

from typing import Optional
from pydantic import BaseModel, Field

from agentics.core.transducible_functions import Transduce, transducible


class Movie(BaseModel):
    movie_name: Optional[str] = None
    description: Optional[str] = None
    year: Optional[int] = None


class Genre(BaseModel):
    genre: Optional[str] = Field(None, description="e.g., comedy, drama, action")

movie = Movie(movie_name="The Godfather")

# `Genre << Movie` builds a transduction from Movie to Genre.
# Await it inside an async context (e.g., IPython/Jupyter), or wrap
# the call with asyncio.run in a plain script.
genre = await (Genre << Movie)(movie)
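Conceptually, `(Genre << Movie)(movie)` is an async callable that maps an instance of the source type onto the target type, with an LLM filling in the target fields. The following plain-Python sketch illustrates that shape without the Agentics API (the function name and the lookup table are stand-ins, not part of the library):

```python
import asyncio
from dataclasses import dataclass
from typing import Optional


@dataclass
class Movie:
    movie_name: Optional[str] = None
    description: Optional[str] = None
    year: Optional[int] = None


@dataclass
class Genre:
    genre: Optional[str] = None


async def transduce_movie_to_genre(movie: Movie) -> Genre:
    # In Agentics, an LLM fills the target fields from the source
    # instance; here a lookup table stands in for the model call.
    known = {"The Godfather": "crime drama"}
    return Genre(genre=known.get(movie.movie_name))


genre = asyncio.run(transduce_movie_to_genre(Movie(movie_name="The Godfather")))
print(genre.genre)  # crime drama
```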

Installation details

Install poetry (skip if already installed)

curl -sSL https://install.python-poetry.org | python3 -

Clone and install agentics

poetry install
source $(poetry env info --path)/bin/activate 

Ensure you have Python 3.11+ 🚨.

python --version
  • Create a virtual environment with Python's built-in venv module. On Linux, this module may need to be installed through the operating system's package manager.

    python -m venv .venv
    

  • Activate the virtual environment

Bash/Zsh

source .venv/bin/activate

Fish

source .venv/bin/activate.fish

VSCode

Press F1 (or Ctrl+Shift+P) to open the command palette, start typing Python: Select Interpreter, and choose the interpreter from .venv.

  • Install the package
    python -m pip install ./agentics
    
  • Ensure uv is installed.
    command -v uv >/dev/null || curl -LsSf https://astral.sh/uv/install.sh | sh
    # It's recommended to restart the shell afterwards
    exec $SHELL

  • Create a virtual environment: uv venv --python 3.11
  • Install the package: uv pip install ./agentics, or uv add ./agentics (recommended)

This is a way to run agentics temporarily, for quick tests:

  • Ensure uv is installed.
    command -v uv >/dev/null || curl -LsSf https://astral.sh/uv/install.sh | sh
    # It's recommended to restart the shell afterwards
    exec $SHELL
    
  • uvx --verbose --from ./agentics ipython
  1. Create a conda environment:

    conda create -n agentics python=3.11
    
    In this example the environment is named agentics, but you can change it to your personal preference.

  2. Activate the environment

    conda activate agentics
    

  3. Install agentics from a folder or git reference
    pip install ./agentics
    

Documentation

This documentation is written with MkDocs. You can start a local server to browse it interactively.

mkdocs serve
Once started, the documentation will be available at http://127.0.0.1:8000/.