API Documentation
Overview
The API module provides HTTP endpoints for interacting with the LLM agent system. It consists of two main components:
- Chat Completions API - REST endpoints for chat interactions
- Server-Sent Events (SSE) Models - Data models for streaming responses
Components
Chat Completions API
The Chat Completions API provides REST endpoints modeled on OpenAI's Chat Completions API, so the request and response shapes will be familiar to most developers. Key features include:
- Support for multiple message types (user, system, assistant)
- Function/tool calling capabilities
- Streaming and non-streaming responses
- Temperature and other LLM parameter controls
Example endpoint: `/v1/chat/completions`
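To make the request shape concrete, the sketch below builds a minimal non-streaming request body for this endpoint. It assumes an OpenAI-compatible schema; the model identifier and parameter values are placeholders, not values defined by this module:

```python
import json

# Hypothetical request payload for POST /v1/chat/completions,
# assuming an OpenAI-compatible schema (field names are assumptions).
request_body = {
    "model": "example-model",  # placeholder model identifier
    "messages": [
        # The three message types mentioned above: system, user, assistant.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "temperature": 0.7,  # LLM sampling parameter
    "stream": False,     # set True to receive an SSE stream instead
}

# Serialize for use as the HTTP request body.
payload = json.dumps(request_body)
```

Setting `"stream": True` would switch the response from a single JSON object to the Server-Sent Events format described in the next section.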
SSE Models
The SSE (Server-Sent Events) models handle streaming responses from the system. These models structure the data for:
- Incremental token/text-chunk streaming
- Function/tool call events
- Status updates
- Error messages
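To illustrate the streaming format these models describe, the sketch below parses a wire-format SSE stream (the standard `event:`/`data:` line protocol) into event pairs. The event names and JSON payloads in the sample are illustrative assumptions; the actual field names used by the SSE models may differ:

```python
import json

def parse_sse(stream_text):
    """Parse a Server-Sent Events stream into (event_type, data) pairs.

    Events are separated by blank lines; each event may carry an
    'event:' type line and one or more 'data:' lines.
    """
    events = []
    event_type, data_lines = "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates the current event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events

# Illustrative stream: a token chunk followed by a status update.
sample = (
    "event: token\n"
    'data: {"text": "Hel"}\n'
    "\n"
    "event: status\n"
    'data: {"state": "done"}\n'
    "\n"
)

parsed = parse_sse(sample)
```

Each `data:` payload here is a JSON fragment, so a client would typically follow the parse with `json.loads` on each event's data to recover the token text, tool-call arguments, or error details.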
API Reference
For detailed information about specific components, see:
- API Request Models - Details about the chat endpoint models
- SSE Models - Information about streaming response models
See Also
- Agent Configuration - Configure the underlying agent
- Tool Registry - Available tools and functions
- LLM Factory - LLM provider configuration