
Building Production-Ready AI Agents in the Open Source Ecosystem

This hands-on workshop guides participants through building intelligent AI agents using IBM's open source stack, with a focus on local deployment and production readiness. Attendees will create a complete pipeline: Docling for document preprocessing, Granite models deployed locally via Ollama for private, cost-effective inference, and the BeeAI Framework for orchestration. The workshop emphasizes practical integration and local deployment strategies, making it well suited to organizations that require data privacy or cost control.


🎯 Key Takeaways

Document Preprocessing Pipeline

Use Docling to extract and structure content from complex documents (PDFs, presentations, forms), creating clean, standardized data ready for AI agent consumption.
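As a taste of that step, a minimal conversion-and-chunking helper might look like the following (assumes `pip install docling`; the chunking strategy and size are illustrative, and the Docling import is kept local so the pure chunker works without it installed):

```python
def convert_to_markdown(path: str) -> str:
    """Convert a document (PDF, PPTX, ...) to markdown with Docling.
    Import is local so the chunker below stays usable without Docling installed."""
    from docling.document_converter import DocumentConverter
    converter = DocumentConverter()
    result = converter.convert(path)
    return result.document.export_to_markdown()

def chunk_markdown(md: str, max_chars: int = 1000) -> list[str]:
    """Naive paragraph-based chunking to prepare text for an agent or RAG index."""
    chunks, current = [], ""
    for para in md.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = (current + "\n\n" + para) if current else para
    if current:
        chunks.append(current)
    return chunks
```

In the workshop, the resulting chunks feed directly into the RAG and agent exercises later on.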

Local Granite Deployment with Ollama

Download and configure Granite models locally using Ollama.
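For example, pulling and smoke-testing a Granite model locally looks roughly like this (the `granite3.3:8b` tag is illustrative — check the Ollama model library for currently available Granite tags):

```shell
# Pull a Granite model (tag is illustrative -- see the Ollama library for current ones)
ollama pull granite3.3:8b

# Quick smoke test from the terminal
ollama run granite3.3:8b "Summarize: Ollama serves models over a local HTTP API."

# The local Ollama server listens on port 11434 by default
curl http://localhost:11434/api/tags
```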

Granite in Practice

Practice text summarization, entity extraction, and multi-modal RAG using Docling and Granite models.
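To give a flavor of these exercises, here is a sketch of entity extraction against a locally running Ollama server. The `/api/generate` request shape follows Ollama's HTTP API; the model tag and prompt wording are illustrative:

```python
import json
import urllib.request

def build_entity_prompt(text: str) -> str:
    """Prompt asking the model to extract named entities as JSON."""
    return (
        "Extract all person and organization names from the text below. "
        "Respond with only a JSON array of strings.\n\nText:\n" + text
    )

def generate(prompt: str, model: str = "granite3.3:8b") -> str:
    """Call a locally running Ollama server (default port 11434)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Summarization works the same way with a different prompt; `generate(build_entity_prompt(doc_text))` returns model text you can then parse with `json.loads`.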

Production-Ready Agent Development with Tools

Build multi-step agent workflows using the BeeAI Framework. Implement custom tools (including MCP tools), memory optimization strategies, built-in observability, resource management, and more.
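At its heart, a custom tool is just a typed function the agent can call; the function below is a hypothetical example, and the registration described in the comment is an assumption rather than verbatim BeeAI Framework API (exact imports are version-dependent — see the framework docs):

```python
def invoice_total(line_items: list[dict]) -> float:
    """Core logic for a hypothetical 'invoice_total' agent tool:
    sum quantity * unit_price over line items, rounded to cents."""
    return round(sum(item["quantity"] * item["unit_price"] for item in line_items), 2)

# In a BeeAI Framework agent, a function like this would be wrapped with the
# framework's tool abstraction and passed in the agent's `tools` list so the
# model can invoke it mid-workflow. Consult the BeeAI Framework documentation
# for the current decorator/class names.
```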

Using the Agent Stack as an A2A Agent Server and Multi-Agent UI

  • Serving agents – Run an agent from source and see it automatically register with Agent Stack
  • UI forms – Create a form to use for an agent UI
  • Monitoring – Monitor and debug agents with built-in logging and tracing capabilities

📚 What You'll Learn

Through interactive coding exercises, you'll gain hands-on experience with:

Core Components

  • System Prompts – Learn the foundation of agent behavior by crafting effective prompts that guide your agent's responses
  • RequirementAgent – Explore BeeAI's powerful agent implementation that provides fine-grained control over agent behavior
  • LLM Providers – Work with both local and hosted model options to understand deployment flexibility
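To make the system-prompt idea concrete, here is a minimal chat payload for Ollama's `/api/chat` endpoint. The payload shape follows Ollama's chat API; the model tag and prompt text are illustrative:

```python
def build_chat_payload(system_prompt: str, user_message: str,
                       model: str = "granite3.3:8b") -> dict:
    """Build an Ollama /api/chat request body; the system message
    shapes how the model behaves on every turn."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_payload(
    "You are a concise support agent. Answer in at most two sentences.",
    "How do I reset my password?",
)
```

Swapping only the system message changes the agent's persona and constraints without touching any other code — which is why the workshop treats prompt crafting as the foundation of agent behavior.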

Advanced Features

  • Memory Systems – Implement conversation context to maintain coherent, contextual interactions across sessions
  • Tools Integration – Extend agent capabilities by integrating external APIs and data sources
  • Conditional Requirements – Enforce business logic and rules to ensure compliance and consistency
  • Monitoring – Monitor and debug agents with built-in logging and tracing capabilities
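The BeeAI Framework ships its own memory classes; as a conceptual sketch only (not the framework's API), a sliding-window memory that keeps the last N turns to bound context size can be as simple as:

```python
from collections import deque

class SlidingWindowMemory:
    """Keep only the most recent `max_turns` messages -- a simplified
    stand-in for a framework-provided memory class."""

    def __init__(self, max_turns: int = 6):
        # deque with maxlen silently drops the oldest entry when full
        self.messages: deque = deque(maxlen=max_turns)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def as_list(self) -> list[dict]:
        return list(self.messages)
```

Real memory strategies (summarization, token-budget trimming) are covered in the exercises; the point here is only that memory is a bounded, ordered message store the agent reads on each turn.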

🚀 Let's Get Started

This workshop is designed for immediate hands-on learning through interactive exercises.

