Extensions
LangChain
- class ibm_watson_machine_learning.foundation_models.extensions.langchain.WatsonxLLM(model)
LangChain CustomLLM wrapper for watsonx foundation models.
- Parameters:
model (Model) – foundation model inference object instance
- Supported chain types:
  - LLMChain
  - TransformChain
  - SequentialChain
  - SimpleSequentialChain
  - ConversationChain (including ConversationBufferMemory)
  - LLMMathChain (bigscience/mt0-xxl, eleutherai/gpt-neox-20b, ibm/mpt-7b-instruct2, bigcode/starcoder, meta-llama/llama-2-70b-chat, and ibm/granite-13b-instruct-v1 models only)
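Conceptually, the wrapper adapts the SDK's Model object to LangChain's custom-LLM interface by forwarding each prompt to the model's text-generation call. A minimal, hypothetical sketch of that delegation (StubModel and WatsonxLLMSketch are illustrative names, not part of the library):

```python
class StubModel:
    # Stands in for ibm_watson_machine_learning's Model; the real object
    # performs remote inference, here we just echo the prompt.
    def generate_text(self, prompt):
        return f"echo: {prompt}"

class WatsonxLLMSketch:
    # Hypothetical sketch of the wrapper: LangChain custom LLMs implement
    # a _call(prompt) -> str hook, which here delegates to the wrapped model.
    def __init__(self, model):
        self.model = model

    def _call(self, prompt):
        return self.model.generate_text(prompt)

llm = WatsonxLLMSketch(model=StubModel())
print(llm._call("hello"))  # -> echo: hello
```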
Instantiate the WatsonxLLM interface
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM

generate_params = {
    GenParams.MAX_NEW_TOKENS: 25
}

model = Model(
    model_id="google/flan-ul2",
    credentials={
        "apikey": "***",
        "url": "https://us-south.ml.cloud.ibm.com"
    },
    params=generate_params,
    project_id="*****"
)

custom_llm = WatsonxLLM(model=model)
Example of SimpleSequentialChain usage
from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes, DecodingMethods
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models import Model
from langchain import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
params = {
    GenParams.MAX_NEW_TOKENS: 100,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.DECODING_METHOD: DecodingMethods.SAMPLE,
    GenParams.TEMPERATURE: 0.5,
    GenParams.TOP_K: 50,
    GenParams.TOP_P: 1
}
credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "*****"
}
project_id = "*****"
pt1 = PromptTemplate(
    input_variables=["topic"],
    template="Generate a random question about {topic}: Question: ")
pt2 = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}")
flan_ul2_model = Model(
    model_id="google/flan-ul2",
    credentials=credentials,
    params=params,
    project_id=project_id)
flan_ul2_llm = WatsonxLLM(model=flan_ul2_model)
flan_t5_model = Model(
    model_id="google/flan-t5-xxl",
    credentials=credentials,
    project_id=project_id)
flan_t5_llm = WatsonxLLM(model=flan_t5_model)
prompt_to_flan_ul2 = LLMChain(llm=flan_ul2_llm, prompt=pt1)
flan_ul2_to_flan_t5 = LLMChain(llm=flan_t5_llm, prompt=pt2)
qa = SimpleSequentialChain(chains=[prompt_to_flan_ul2, flan_ul2_to_flan_t5], verbose=True)
qa.run("cat")
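For intuition, SimpleSequentialChain simply feeds each chain's single output into the next chain's single input. A pure-Python sketch of that data flow, with hypothetical stub functions standing in for the two LLM calls above:

```python
def ask_flan_ul2(topic):
    # Stands in for the first LLMChain: pt1 is formatted with {topic}
    # and sent to flan-ul2; the returned question is illustrative.
    return f"What do {topic}s eat?"

def ask_flan_t5(question):
    # Stands in for the second LLMChain: pt2 is formatted with {question}
    # and sent to flan-t5.
    return f"Answer the following question: {question}"

def simple_sequential_chain(chains, text):
    # Each chain's single output becomes the next chain's single input.
    for chain in chains:
        text = chain(text)
    return text

print(simple_sequential_chain([ask_flan_ul2, ask_flan_t5], "cat"))
# -> Answer the following question: What do cats eat?
```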