Model¶
- class ibm_watsonx_ai.foundation_models.Model(model_id, credentials, params=None, project_id=None, space_id=None, verify=None, validate=True)[source]¶
Bases:
ModelInference
Instantiate the model interface.
Deprecated since version 1.1.21: Use ModelInference() instead.
Hint
To use the Model class with LangChain, use the to_langchain() function.
- Parameters:
model_id (str) – type of model to use
credentials (Credentials or dict) – credentials for the Watson Machine Learning instance
params (dict, TextGenParameters, TextChatParameters, optional) – parameters to use during generate requests
project_id (str, optional) – ID of the Watson Studio project
space_id (str, optional) – ID of the Watson Studio space
verify (bool or str, optional) –
You can pass one of the following as verify:
the path to a CA_BUNDLE file
the path of a directory with certificates of trusted CAs
True - the default path to the truststore will be used
False - no verification will be made
validate (bool, optional) – model ID validation, defaults to True
Note
One of these parameters is required: ['project_id', 'space_id'].
Hint
You can copy the project_id from the Project’s Manage tab (Project -> Manage -> General -> Details).
Example:
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Model
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams
from ibm_watsonx_ai.foundation_models.utils.enums import ModelTypes, DecodingMethods

# To display example params enter
GenParams().get_example_values()

generate_params = {
    GenParams.MAX_NEW_TOKENS: 25
}

model = Model(
    model_id=ModelTypes.FLAN_UL2,
    params=generate_params,
    credentials=Credentials(
        api_key="***",
        url="https://us-south.ml.cloud.ibm.com"),
    project_id="*****"
)
- chat(messages, params=None, tools=None, tool_choice=None, tool_choice_option=None)[source]¶
Given a list of messages comprising a conversation, the model will return a response.
- Parameters:
messages (list[dict]) – The messages for this chat session.
params (dict, TextChatParameters, optional) – meta props for chat generation, use
ibm_watsonx_ai.foundation_models.schema.TextChatParameters.show()
to view the list of available parameters
tools (list, optional) – tool functions that the model may call in its response
tool_choice (dict, optional) – specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool
tool_choice_option (Literal["none", "auto"], optional) – tool choice option; "none" disables tool calls, "auto" lets the model decide whether to call a tool
- Returns:
scoring result containing generated chat content.
- Return type:
dict
Example:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"}
]
generated_response = model.chat(messages=messages)

# Print the whole response
print(generated_response)

# Print only the generated content
print(generated_response['choices'][0]['message']['content'])
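The tools, tool_choice, and tool_choice_option parameters follow the function-schema format shown above for tool_choice. The sketch below is illustrative only: the get_weather tool and its schema are hypothetical and not part of the library.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is the weather in Boston?"}]

# Force the model to call the hypothetical get_weather tool
generated_response = model.chat(
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)

# Tool calls, if any, are expected under the assistant message
print(generated_response["choices"][0]["message"].get("tool_calls"))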
- chat_stream(messages, params=None, tools=None, tool_choice=None, tool_choice_option=None)[source]¶
Given a list of messages comprising a conversation, the model will return a response in stream.
- Parameters:
messages (list[dict]) – The messages for this chat session.
params (dict, TextChatParameters, optional) – meta props for chat generation, use
ibm_watsonx_ai.foundation_models.schema.TextChatParameters.show()
to view the list of available parameters
tools (list, optional) – tool functions that the model may call in its response
tool_choice (dict, optional) – specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool
tool_choice_option (Literal["none", "auto"], optional) – tool choice option; "none" disables tool calls, "auto" lets the model decide whether to call a tool
- Returns:
scoring result containing generated chat content.
- Return type:
generator
Example:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"}
]
generated_response = model.chat_stream(messages=messages)

for chunk in generated_response:
    print(chunk['choices'][0]['delta'].get('content', ''), end='', flush=True)
- generate(prompt=None, params=None, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10, async_mode=False)[source]¶
Generates completion text (returned as generated_text) for a given text prompt and the parameters of the selected model (model_id).
- Parameters:
params (dict) – MetaProps for text generation, use
ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show()
to view the list of MetaNames
concurrency_limit (int) – number of requests that will be sent in parallel, max is 10
prompt (str, list) – the prompt string or list of strings. If a list of strings is passed, requests will be managed in parallel at the rate of concurrency_limit
guardrails (bool) – if True, the detection filter for potentially hateful, abusive, and/or profane language (HAP) is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params – MetaProps for HAP moderations, use
ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
async_mode (bool) – if True, results are yielded asynchronously using a generator. In this case, both the prompt and the generated text are concatenated in the final response under generated_text, defaults to False
- Returns:
scoring result that contains the generated content
- Return type:
dict
Example:
q = "What is 1 + 1?" generated_response = model.generate(prompt=q) print(generated_response['results'][0]['generated_text'])
- generate_text(prompt=None, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10)[source]¶
Generates completion text (returned as generated_text) for a given text prompt and the parameters of the selected model (model_id).
- Parameters:
params (dict) – MetaProps for text generation, use
ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show()
to view the list of MetaNames
concurrency_limit (int) – number of requests to be sent in parallel, max is 10
prompt (str, list) – the prompt string or list of strings. If a list of strings is passed, requests will be managed in parallel at the rate of concurrency_limit
raw_response (bool, optional) – return the whole response object
guardrails (bool) – if True, the detection filter for potentially hateful, abusive, and/or profane language (HAP) is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params – MetaProps for HAP moderations, use
ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
- Returns:
generated content
- Return type:
str or dict
Example:
q = "What is 1 + 1?" generated_text = model.generate_text(prompt=q) print(generated_text)
- generate_text_stream(prompt=None, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None)[source]¶
Generates streamed text for a given text prompt and the parameters of the selected model (model_id).
- Parameters:
params (dict) – MetaProps for text generation, use
ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show()
to view the list of MetaNames
prompt (str) – the prompt string
raw_response (bool, optional) – yields the whole response object
guardrails (bool) – if True, the detection filter for potentially hateful, abusive, and/or profane language (HAP) is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params – MetaProps for HAP moderations, use
ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
- Returns:
scoring result that contains the generated content
- Return type:
generator
Example:
q = "Write an epigram about the sun" generated_response = model.generate_text_stream(prompt=q) for chunk in generated_response: print(chunk, end='', flush=True)
- get_details()[source]¶
Get the model’s details.
- Returns:
model’s details
- Return type:
dict
Example:
model.get_details()
- to_langchain()[source]¶
- Returns:
WatsonxLLM wrapper for watsonx foundation models
- Return type:
WatsonxLLM
Example:
from langchain import PromptTemplate
from langchain.chains import LLMChain
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Model
from ibm_watsonx_ai.foundation_models.utils.enums import ModelTypes

flan_ul2_model = Model(
    model_id=ModelTypes.FLAN_UL2,
    credentials=Credentials(
        api_key="***",
        url="https://us-south.ml.cloud.ibm.com"),
    project_id="*****"
)

prompt_template = "What color is the {flower}?"

llm_chain = LLMChain(
    llm=flan_ul2_model.to_langchain(),
    prompt=PromptTemplate.from_template(prompt_template)
)

llm_chain('sunflower')
- tokenize(prompt, return_tokens=False)[source]¶
The text tokenize operation allows you to check the conversion of provided input to tokens for a given model. It splits text into words or sub-words, which are converted to IDs through a look-up table (vocabulary). Tokenization allows the model to have a reasonable vocabulary size.
- Parameters:
prompt (str) – prompt string
return_tokens (bool) – parameter for text tokenization, defaults to False
- Returns:
result of tokenizing the input string
- Return type:
dict
Example:
q = "Write an epigram about the moon" tokenized_response = model.tokenize(prompt=q, return_tokens=True) print(tokenized_response["result"])
Enums¶
- class metanames.GenTextParamsMetaNames[source]¶
Set of MetaNames for Foundation Model Parameters.
Available MetaNames:
MetaName | Type | Required | Example value
DECODING_METHOD | str | N | sample
LENGTH_PENALTY | dict | N | {'decay_factor': 2.5, 'start_index': 5}
TEMPERATURE | float | N | 0.5
TOP_P | float | N | 0.2
TOP_K | int | N | 1
RANDOM_SEED | int | N | 33
REPETITION_PENALTY | float | N | 2
MIN_NEW_TOKENS | int | N | 50
MAX_NEW_TOKENS | int | N | 200
STOP_SEQUENCES | list | N | ['fail']
TIME_LIMIT | int | N | 600000
TRUNCATE_INPUT_TOKENS | int | N | 200
PROMPT_VARIABLES | dict | N | {'object': 'brain'}
RETURN_OPTIONS | dict | N | {'input_text': True, 'generated_tokens': True, 'input_tokens': True, 'token_logprobs': True, 'token_ranks': False, 'top_n_tokens': False}
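A minimal sketch of building a params dictionary from these MetaNames and passing it to generate; the values simply mirror the example column above, and model is assumed to be an already instantiated Model:
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

# Display MetaNames with their types and example values
GenParams().show()

generate_params = {
    GenParams.DECODING_METHOD: "sample",
    GenParams.TEMPERATURE: 0.5,
    GenParams.TOP_K: 1,
    GenParams.TOP_P: 0.2,
    GenParams.MIN_NEW_TOKENS: 50,
    GenParams.MAX_NEW_TOKENS: 200,
}

generated_response = model.generate(prompt="What is 1 + 1?", params=generate_params)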
- class metanames.GenTextReturnOptMetaNames[source]¶
Set of MetaNames for Foundation Model Parameters.
Available MetaNames:
MetaName | Type | Required | Example value
INPUT_TEXT | bool | Y | True
GENERATED_TOKENS | bool | N | True
INPUT_TOKENS | bool | Y | True
TOKEN_LOGPROBS | bool | N | True
TOKEN_RANKS | bool | N | True
TOP_N_TOKENS | int | N | True
Note
One of these parameters is required: [‘INPUT_TEXT’, ‘INPUT_TOKENS’]
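A minimal sketch of passing return options through GenParams.RETURN_OPTIONS, assuming the MetaName attributes map to the lowercase keys shown in the RETURN_OPTIONS example value above:
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams
from ibm_watsonx_ai.metanames import GenTextReturnOptMetaNames as ReturnOpts

generate_params = {
    GenParams.MAX_NEW_TOKENS: 25,
    # At least one of INPUT_TEXT or INPUT_TOKENS is required
    GenParams.RETURN_OPTIONS: {
        ReturnOpts.INPUT_TEXT: True,
        ReturnOpts.GENERATED_TOKENS: True,
    },
}

generated_response = model.generate(prompt="What is 1 + 1?", params=generate_params)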
- class ibm_watsonx_ai.foundation_models.utils.enums.DecodingMethods(value)[source]¶
Bases:
Enum
Supported decoding methods for text generation.
- GREEDY = 'greedy'¶
- SAMPLE = 'sample'¶
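A minimal sketch of selecting a decoding method via the enum rather than a raw string; model is assumed to be an already instantiated Model:
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams
from ibm_watsonx_ai.foundation_models.utils.enums import DecodingMethods

generate_params = {
    # Equivalent to passing the string 'sample'
    GenParams.DECODING_METHOD: DecodingMethods.SAMPLE,
    GenParams.TEMPERATURE: 0.5,
    GenParams.MAX_NEW_TOKENS: 50,
}

generated_response = model.generate(prompt="Write an epigram about the sun", params=generate_params)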
- class ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes(value)[source]¶
Bases:
StrEnum
Deprecated since version 1.0.5: Use TextModels() instead.
Supported foundation models.
Note
You can check the current list of supported model types for various environments with
get_model_specs()
or by referring to the watsonx.ai documentation.
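A minimal sketch of listing the models supported in a given environment, assuming an APIClient connected to that environment and that get_model_specs() is exposed on client.foundation_models:
from ibm_watsonx_ai import APIClient, Credentials

client = APIClient(
    credentials=Credentials(api_key="***", url="https://us-south.ml.cloud.ibm.com"),
    project_id="*****",
)

# Retrieve the foundation model specifications available in this environment
model_specs = client.foundation_models.get_model_specs()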