You're reading an old version of this documentation. If you want up-to-date information, please have a look at v3.0.0.
IBM Generative AI Python SDK (Tech Preview)
  • Getting Started
  • V2 Migration Guide
  • Examples
    • Text
      • Stream answer from a model
      • Tokenize text data
      • Chat with a model
      • Compare a set of hyperparameters
      • Get embedding vectors for text data
      • Moderate text data
      • Generate text using a model
    • Models
      • Show information about supported models
    • Tunes
      • Tune a custom model (Prompt Tuning)
    • Prompts
      • Create a custom prompt with variables
    • Files
      • Working with files
    • Users
      • Show information about current user
    • Requests
      • Working with your requests
    • Extra
      • Overriding built-in services
      • Text generation with custom concurrency limit and multiple processes
      • Shutdown Handling
      • Customize underlying API (httpx) Client
      • Enable/Disable logging for SDK
      • Error Handling
    • Extensions
      • LocalServer
        • Customize behavior of local client
        • Use a local server with a custom model
      • LangChain
        • Chat with a model using LangChain
        • Text generation using LangChain
        • Serialize LangChain model to a file
        • Streaming response from LangChain
        • QA using native LangChain features
      • LlamaIndex
        • Use a model through LlamaIndex
      • Transformers (HuggingFace)
        • Run Transformers Agents
  • FAQ

Versions

  • main (unreleased)
  • v3.0.0
  • v2.3.0
  • v2.2.0
  • v2.1.1
  • v2.1.0
  • v2.0.0
Copyright © 2024, IBM Research
Made with Sphinx and @pradyunsg's Furo