You're reading an old version of this documentation. If you want up-to-date information, please have a look at v3.0.0.
IBM Generative AI Python SDK (Tech Preview)


LangChain

Before you start

To use this extension, first install it by running pip install 'ibm-generative-ai[langchain]'. The quotes are required in shells such as zsh, which would otherwise try to expand the square brackets.

  • Use LangChain generation with a custom template.
  • LangChain Embeddings
  • Chat with a model using LangChain
  • LangChain agent
  • Text generation using LangChain
  • Serialize LangChain model to a file
  • Streaming response from LangChain
  • QA using native LangChain features
Copyright © 2024, IBM Research