Extensions

LangChain

IBM integration with LangChain is provided by the langchain-ibm package.

class langchain_ibm.WatsonxLLM(*, name=None, cache=None, verbose=None, callbacks=None, tags=None, metadata=None, custom_get_token_ids=None, callback_manager=None, model_id='', deployment_id='', project_id='', space_id='', url=None, apikey=None, token=None, password=None, username=None, instance_id=None, version=None, params=None, verify=None, streaming=False, watsonx_model=None)

IBM watsonx.ai large language models.

To use, you should have the langchain_ibm Python package installed, and either set the WATSONX_APIKEY environment variable to your API key or pass it as a named parameter to the constructor.

Example:
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames
parameters = {
    GenTextParamsMetaNames.DECODING_METHOD: "sample",
    GenTextParamsMetaNames.MAX_NEW_TOKENS: 100,
    GenTextParamsMetaNames.MIN_NEW_TOKENS: 1,
    GenTextParamsMetaNames.TEMPERATURE: 0.5,
    GenTextParamsMetaNames.TOP_K: 50,
    GenTextParamsMetaNames.TOP_P: 1,
}

from langchain_ibm import WatsonxLLM
watsonx_llm = WatsonxLLM(
    model_id="google/flan-ul2",
    url="https://us-south.ml.cloud.ibm.com",
    apikey="*****",
    project_id="*****",
    params=parameters,
)
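
Alternatively, the API key can be supplied through the WATSONX_APIKEY environment variable instead of the apikey parameter; a minimal sketch of the same construction:

import os

os.environ["WATSONX_APIKEY"] = "*****"

watsonx_llm = WatsonxLLM(
    model_id="google/flan-ul2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="*****",
    params=parameters,
)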
apikey

API key for the Watson Machine Learning or CPD instance.

deployment_id

ID of the deployed model to use.

get_num_tokens(text)

Get the number of tokens present in the text.

Useful for checking if an input will fit in a model’s context window.

Args:

text: The string input to tokenize.

Returns:

The integer number of tokens in the text.
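
For example, the token count can be checked against a context window before sending a prompt. A minimal sketch, reusing the watsonx_llm instance from the example above; the 4096-token limit is an illustrative assumption, not a property of any particular model:

prompt = "What is generative AI?"
max_context_tokens = 4096  # assumed limit, for illustration only
if watsonx_llm.get_num_tokens(prompt) <= max_context_tokens:
    response = watsonx_llm.invoke(prompt)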

get_token_ids(text)

Return the ordered ids of the tokens in a text.

Args:

text: The string input to tokenize.

Returns:

A list of ids corresponding to the tokens in the text, in the order they occur in the text.
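
A minimal sketch, again reusing the watsonx_llm instance from the example above:

token_ids = watsonx_llm.get_token_ids("What is generative AI?")
print(len(token_ids))  # token count; should agree with get_num_tokens
print(token_ids[:5])   # the first few ids, in order of occurrence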

instance_id

Instance ID of the CPD instance.

model_id

ID of the foundation model to use.

params

Model parameters to use during generate requests.

password

Password for the CPD instance.

project_id

ID of the Watson Studio project.

space_id

ID of the Watson Studio space.

streaming

Whether to stream the results or not; see the streaming example below.

token

Token for the CPD instance.

url

URL of the Watson Machine Learning or CPD instance.

username

Username for the CPD instance.

verify

User can pass as verify one of the following:

the path to a CA_BUNDLE file
the path to a directory with certificates of trusted CAs
True - the default path to the truststore will be used
False - no verification will be made

version

Version of the CPD instance.
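
Example of streaming usage

WatsonxLLM implements the standard LangChain interface, so results can also be consumed incrementally. A minimal sketch, reusing the constructor arguments from the first example; the prompt text is illustrative only:

watsonx_llm = WatsonxLLM(
    model_id="google/flan-ul2",
    url="https://us-south.ml.cloud.ibm.com",
    apikey="*****",
    project_id="*****",
    streaming=True,
)

for chunk in watsonx_llm.stream("Describe the capital of France."):
    print(chunk, end="")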

Example of SimpleSequentialChain usage

from langchain_ibm import WatsonxLLM
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference
from ibm_watsonx_ai.foundation_models.utils.enums import DecodingMethods
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

params = {
    GenParams.MAX_NEW_TOKENS: 100,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.DECODING_METHOD: DecodingMethods.SAMPLE,
    GenParams.TEMPERATURE: 0.5,
    GenParams.TOP_K: 50,
    GenParams.TOP_P: 1
}
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="***********",
)
project_id = "*****"

pt1 = PromptTemplate(
    input_variables=["topic"],
    template="Generate a random question about {topic}: Question: ")
pt2 = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}")

flan_ul2_model = ModelInference(
    model_id='google/flan-ul2',
    credentials=credentials,
    params=params,
    project_id=project_id)
flan_ul2_llm = WatsonxLLM(watsonx_model=flan_ul2_model)

flan_t5_model = ModelInference(
    model_id="google/flan-t5-xxl",
    credentials=credentials,
    project_id=project_id)
flan_t5_llm = WatsonxLLM(watsonx_model=flan_t5_model)

prompt_to_flan_ul2 = LLMChain(llm=flan_ul2_llm, prompt=pt1)
flan_ul2_to_flan_t5 = LLMChain(llm=flan_t5_llm, prompt=pt2)

qa = SimpleSequentialChain(chains=[prompt_to_flan_ul2, flan_ul2_to_flan_t5], verbose=True)
qa.run("cat")
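
Here the output of the first chain (a generated question about "cat") becomes the input to the second chain, which answers it; verbose=True prints each intermediate step as the chain runs.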