Installation¶
Overview¶
Docling Graph uses uv as the package manager for fast, reliable dependency management. All installation and execution commands use uv exclusively.
What You'll Install¶
- Core Package: Docling Graph with VLM support
- Optional Features: LLM providers (local and/or remote)
- GPU Support (optional): PyTorch with CUDA for local inference
- API Keys (optional): For remote LLM providers
Quick Start¶
Minimal Installation¶
For basic VLM functionality:
# Clone repository
git clone https://github.com/IBM/docling-graph
cd docling-graph
# Install core dependencies
uv sync
This installs:
- ✅ Docling (document conversion)
- ✅ VLM backend (NuExtract models)
- ✅ Core graph functionality
- ❌ LLM provider configuration (see LLM Providers below)
LLM Providers¶
LiteLLM is included in the default installation, so no extra packages are required to use LLM providers; the uv sync step above already covers them. You only need to configure the providers themselves: API keys for remote providers, or a local backend for local inference.
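Remote providers are typically configured through environment variables that LiteLLM reads; the exact variable name depends on the provider. A minimal sketch, using OPENAI_API_KEY as an example:
# Expose a provider API key for LiteLLM (variable name depends on the provider)
export OPENAI_API_KEY="your-key-here"
See Set Up API Keys under Next Steps for details.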
System Requirements¶
Minimum Requirements¶
- Python: 3.10, 3.11, or 3.12
- RAM: 8 GB minimum
- Disk: 5 GB free space
- OS: Linux, macOS, or Windows (WSL recommended)
Recommended for Local Inference¶
- GPU: NVIDIA GPU with 8+ GB VRAM
- CUDA: 11.8 or 12.1
- RAM: 16 GB or more
- Disk: 20 GB free space (for models)
For VLM Only¶
- GPU: NVIDIA GPU with 4+ GB VRAM (for NuExtract-2B)
- GPU: NVIDIA GPU with 8+ GB VRAM (for NuExtract-8B)
For Remote API Only¶
- No GPU required
- Internet connection required
- API keys required
Verification¶
Check Installation¶
# Check version
uv run docling-graph --version
# Check Python version
uv run python --version
# Test CLI
uv run docling-graph --help
Each command should complete without errors and print, respectively, the installed Docling Graph version, the Python version, and the CLI help text.
Test Import¶
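A minimal check, assuming the package is importable as docling_graph:
# Import the package inside the uv-managed environment (package name assumed)
uv run python -c "import docling_graph; print('Import OK')"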
If the command prints Import OK without errors, the core installation is working.
Next Steps¶
After installation, continue with the following steps:
- Set Up Requirements - Verify system requirements
- Configure GPU (optional) - Set up CUDA for local inference
- Set Up API Keys (optional) - Configure remote providers
- Define Schema - Create your first Pydantic template
Common Issues¶
🐛 uv not found¶
Solution: Install uv first:
# Linux/macOS
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# Or with pip
pip install uv
🐛 Python version mismatch¶
Solution: Specify Python version:
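For example, uv can install and pin one of the supported interpreters (3.12 shown here; 3.10 and 3.11 also work):
# Install and pin a supported Python version, then re-sync the environment
uv python install 3.12
uv python pin 3.12
uv sync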
🐛 Import errors after installation¶
Solution: Ensure you're using uv run:
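Commands must run inside the uv-managed project environment rather than the system Python, for example:
# Run through uv so the project environment (not the system Python) is used
uv run docling-graph --help
uv run python -c "import docling_graph"  # package name assumed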
🐛 GPU not detected¶
Solution: See GPU Setup Guide
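As a quick check before consulting the guide (assuming PyTorch was installed for local inference), confirm the NVIDIA driver and CUDA build are visible:
# Verify the NVIDIA driver, then check that PyTorch sees CUDA
nvidia-smi
uv run python -c "import torch; print(torch.cuda.is_available())"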
Performance Notes¶
New in v1.2.0: Significant CLI performance improvements:
- Init command: 75-85% faster with intelligent dependency caching
    - First run: ~1-1.5s (checks dependencies)
    - Subsequent runs: ~0.5-1s (uses cache)
- Dependency validation: 90-95% faster (2-3s → 0.1-0.2s)
- Lazy loading: Configuration constants loaded on-demand
Development Installation¶
For contributing to the project:
# Clone repository
git clone https://github.com/IBM/docling-graph
cd docling-graph
# Install with development dependencies
uv sync --all-extras --dev
# Install pre-commit hooks
uv run pre-commit install
# Run tests
uv run pytest
Updating¶
To update to the latest version:
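Because Docling Graph is installed from a cloned repository, pulling the latest changes and re-syncing the environment is enough:
# Pull the latest changes and update dependencies
git pull
uv sync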
Uninstalling¶
To remove Docling Graph:
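Everything lives in the cloned repository and its uv-managed virtual environment, so deleting the project directory removes the installation (paths assume the clone location used above):
# Remove the virtual environment and the cloned repository
rm -rf .venv
cd ..
rm -rf docling-graph
# Optionally clear uv's download cache
uv cache clean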