# Welcome to Flexo
The Flexo Agent Library is a powerful and flexible codebase that enables users to configure, customize, and deploy a generative AI agent. Designed for adaptability, the library can be tailored to a wide range of use cases, from conversational AI to specialized automation.
## Why Flexo?
- Simplified Deployment: Deploy anywhere with comprehensive platform guides
- Production Ready: Built for scalability and reliability
- Extensible: Add custom tools and capabilities
- Well Documented: Clear guides for every step
## Key Features
- Configurable Agent: YAML-based configuration for custom behaviors
- Tool Integration: Execute Python functions and REST API calls
- Streaming Support: Real-time streaming with pattern detection
- Production Ready: Containerized deployment support with logging
- FastAPI Backend: Modern async API with comprehensive docs
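Flexo's actual tool API is covered in the Code Reference; as a rough illustration of the tool-integration idea only (the registry, decorator, and function names below are hypothetical, not Flexo's API), exposing a plain Python function to an agent might look like:

```python
from typing import Callable, Dict

# Hypothetical minimal tool registry: plain Python functions
# made callable by the agent under a registered name.
TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function as an agent-callable tool."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("weather")
def get_weather(city: str) -> str:
    # A real tool would call a REST API here.
    return f"Sunny in {city}"

def dispatch(name: str, **kwargs) -> str:
    """Invoke a registered tool by name, as the agent would after
    detecting a tool call in the model's output."""
    return TOOLS[name](**kwargs)

print(dispatch("weather", city="Paris"))  # -> Sunny in Paris
```

The same dispatch step is where a REST-backed tool would issue its HTTP request instead of returning a local value.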
## Supported LLM Providers

### ☁️ Cloud Providers

- **OpenAI**: GPT-powered models
- **watsonx.ai**: Enterprise AI solutions
- **Anthropic**: Claude family models
- **xAI**: Grok and beyond
- **Mistral AI**: Efficient open models
### 🖥️ Local & Self-Hosted Options

- **vLLM**: High-throughput serving
- **Ollama**: Easy local LLMs
- **llama.cpp**: Optimized C++ runtime
- **LM Studio**: User-friendly interface
- **LocalAI**: Self-hosted versatility
### ⚙️ Unified Configuration Interface
Switch providers effortlessly with Flexo's adapter layer. Customize your LLM settings in one place:
```yaml
gpt-4o:
  provider: "openai"    # Choose your provider
  model: "gpt-4o"       # Select specific model
  temperature: 0.7
  max_tokens: 4000      # Additional model-specific parameters
```
Need more details? Check our comprehensive Model Configuration Guide for provider-specific settings and optimization tips.
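To make the adapter idea concrete, here is an illustrative Python sketch (hypothetical names, not Flexo's internals): one config entry, as parsed from the YAML above, selects a provider-specific client, so switching providers means editing a single field.

```python
# Config as it would look after parsing the YAML above.
config = {
    "gpt-4o": {
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7,
        "max_tokens": 4000,
    },
}

# Hypothetical adapter table: provider name -> client factory.
# Real adapters would wrap each vendor's SDK behind one interface.
ADAPTERS = {
    "openai": lambda cfg: f"OpenAI client for {cfg['model']}",
    "ollama": lambda cfg: f"Ollama client for {cfg['model']}",
}

def make_client(name: str) -> str:
    """Look up a model entry and build the matching provider client."""
    cfg = config[name]
    return ADAPTERS[cfg["provider"]](cfg)

print(make_client("gpt-4o"))  # -> OpenAI client for gpt-4o
```

Changing `provider: "openai"` to `provider: "ollama"` in the config is all it takes to route the same model entry through a different backend.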
## Quick Start Guide

### 1. Local Development
Start developing with Flexo locally:
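A typical local loop might look like the sketch below; the repository URL, entrypoint module, and port are assumptions, so adjust them to match your checkout and the repo's README.

```shell
# Clone and set up a virtual environment (URL is an assumption).
git clone https://github.com/ibm/flexo.git && cd flexo
python3 -m venv .venv && . .venv/bin/activate
pip install -r requirements.txt

# Launch the FastAPI app with live reload
# (module path "main:app" and port 8000 are assumptions).
uvicorn main:app --reload --port 8000
```

Once running, the FastAPI interactive docs are served at `/docs` on the chosen port.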
### 2. Production Deployment
Deploy Flexo to your preferred platform:
## Documentation

- Deployment Guides
- Code Reference
- System Architecture
- Chat Agent State Flow
## Contributing
See our Contributing Guide for details.
## Security
For security concerns, please review our Security Policy.