Model#
- class ibm_watson_machine_learning.foundation_models.Model(model_id, credentials, params=None, project_id=None, space_id=None, verify=None)[source]#
Bases:
ModelInference
Instantiate the model interface.
Hint
To use the Model class with LangChain, use the
to_langchain()
function.
- Parameters:
model_id (str) – the type of model to use
credentials (dict) – credentials for the Watson Machine Learning instance
params (dict, optional) – parameters to use during generate requests
project_id (str, optional) – ID of the Watson Studio project
space_id (str, optional) – ID of the Watson Studio space
verify (bool or str, optional) –
the user can pass one of the following as verify:
the path to a CA_BUNDLE file
the path to a directory with certificates of trusted CAs
True - the default path to the truststore will be used
False - no verification will be made
Note
One of these parameters is required: [‘project_id’, ‘space_id’]
Hint
You can copy the project_id from Project’s Manage tab (Project -> Manage -> General -> Details).
Example
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes, DecodingMethods

# To display example params enter
GenParams().get_example_values()

generate_params = {
    GenParams.MAX_NEW_TOKENS: 25
}

model = Model(
    model_id=ModelTypes.FLAN_UL2,
    params=generate_params,
    credentials={
        "apikey": "***",
        "url": "https://us-south.ml.cloud.ibm.com"
    },
    project_id="*****"
)
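The verify argument described in the parameter list above can be passed to the same constructor. The sketch below is illustrative only; the CA bundle path, API key, and project ID are placeholders.

from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes

# Sketch only: the CA bundle path, API key, and project ID are placeholders.
model = Model(
    model_id=ModelTypes.FLAN_UL2,
    credentials={
        "apikey": "***",
        "url": "https://us-south.ml.cloud.ibm.com"
    },
    project_id="*****",
    verify="/path/to/ca_bundle.crt"   # or True (default truststore) / False (no verification)
)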
- generate(prompt, params=None, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10, async_mode=False)[source]#
Given a text prompt as input and optional parameters, the selected model (model_id) will generate a completion text as generated_text.
- Parameters:
params (dict) – meta props for text generation, use
ibm_watson_machine_learning.metanames.GenTextParamsMetaNames().show()
to view the list of MetaNames
concurrency_limit (int) – number of requests that will be sent in parallel, max is 10
prompt (str, list) – the prompt string or list of strings. If a list of strings is passed, the requests will be managed in parallel at the rate of concurrency_limit
guardrails (bool) – if True, the hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params – meta props for HAP moderations, use
ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
guardrails_pii_params – meta props for PII moderations, use
ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
async_mode (bool) – if True, results are yielded asynchronously (using a generator). In this case both the prompt and the generated text will be concatenated in the final response under generated_text, defaults to False
- Returns:
scoring result containing generated content
- Return type:
dict
Example
q = "What is 1 + 1?" generated_response = model.generate(prompt=q) print(generated_response['results'][0]['generated_text'])
- generate_text(prompt, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10)[source]#
Given a text prompt as input and optional parameters, the selected model (model_id) will generate a completion text as generated_text.
- Parameters:
params (dict) – meta props for text generation, use
ibm_watson_machine_learning.metanames.GenTextParamsMetaNames().show()
to view the list of MetaNames
concurrency_limit (int) – number of requests that will be sent in parallel, max is 10
prompt (str, list) – the prompt string or list of strings. If a list of strings is passed, the requests will be managed in parallel at the rate of concurrency_limit
raw_response (bool, optional) – return the whole response object
guardrails (bool) – if True, the hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params – meta props for HAP moderations, use
ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
guardrails_pii_params – meta props for PII moderations, use
ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
- Returns:
generated content
- Return type:
str
Example
q = "What is 1 + 1?" generated_text = model.generate_text(prompt=q) print(generated_text)
- generate_text_stream(prompt, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None)[source]#
Given a text prompt as input and optional parameters, the selected model (model_id) will generate streamed text as generate_text_stream.
- Parameters:
params (dict) – meta props for text generation, use
ibm_watson_machine_learning.metanames.GenTextParamsMetaNames().show()
to view the list of MetaNames
prompt (str) – the prompt string
raw_response (bool, optional) – yields the whole response object
guardrails (bool) – if True, the hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params – meta props for HAP moderations, use
ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
guardrails_pii_params – meta props for PII moderations, use
ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show()
to view the list of MetaNames
- Returns:
scoring result containing generated content
- Return type:
generator
Example
q = "Write an epigram about the sun" generated_response = model.generate_text_stream(prompt=q) for chunk in generated_response: print(chunk, end='')
- get_details()[source]#
Get the model’s details.
- Returns:
model’s details
- Return type:
dict
Example
model.get_details()
- to_langchain()[source]#
- Returns:
WatsonxLLM wrapper for watsonx foundation models
- Return type:
WatsonxLLM
Example
from langchain import PromptTemplate
from langchain.chains import LLMChain
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes

flan_ul2_model = Model(
    model_id=ModelTypes.FLAN_UL2,
    credentials={
        "apikey": "***",
        "url": "https://us-south.ml.cloud.ibm.com"
    },
    project_id="*****"
)

prompt_template = "What color is the {flower}?"

llm_chain = LLMChain(
    llm=flan_ul2_model.to_langchain(),
    prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain('sunflower')
- tokenize(prompt, return_tokens=False)[source]#
The text tokenize operation allows you to check the conversion of provided input to tokens for a given model. It splits text into words or sub-words, which are then converted to IDs through a look-up table (vocabulary). Tokenization allows the model to have a reasonable vocabulary size.
- Parameters:
prompt (str) – the prompt string
return_tokens (bool) – whether to return the tokens in the tokenization result, defaults to False
- Returns:
the result of tokenizing the input string.
- Return type:
dict
Example
q = "Write an epigram about the moon" tokenized_response = model.tokenize(prompt=q, return_tokens=True) print(tokenized_response["result"])
Enums#
- class metanames.GenTextParamsMetaNames[source]#
Set of MetaNames for Foundation Model Parameters.
Available MetaNames:
MetaName               Type   Required  Example value
DECODING_METHOD        str    N         sample
LENGTH_PENALTY         dict   N         {'decay_factor': 2.5, 'start_index': 5}
TEMPERATURE            float  N         0.5
TOP_P                  float  N         0.2
TOP_K                  int    N         1
RANDOM_SEED            int    N         33
REPETITION_PENALTY     float  N         2
MIN_NEW_TOKENS         int    N         50
MAX_NEW_TOKENS         int    N         200
STOP_SEQUENCES         list   N         ['fail']
TIME_LIMIT             int    N         600000
TRUNCATE_INPUT_TOKENS  int    N         200
RETURN_OPTIONS         dict   N         {'input_text': True, 'generated_tokens': True, 'input_tokens': True, 'token_logprobs': True, 'token_ranks': False, 'top_n_tokens': False}
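As a hedged illustration of how these MetaNames compose into a params dict (assuming a model instance like the one in the Model examples above):

from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

# Sketch only: combine several of the MetaNames listed above into one dict.
generate_params = {
    GenParams.DECODING_METHOD: "sample",
    GenParams.TEMPERATURE: 0.5,
    GenParams.TOP_P: 0.2,
    GenParams.TOP_K: 1,
    GenParams.RANDOM_SEED: 33,
    GenParams.MIN_NEW_TOKENS: 50,
    GenParams.MAX_NEW_TOKENS: 200,
    GenParams.STOP_SEQUENCES: ['fail']
}

# The dict can be passed either at construction time (Model(..., params=generate_params))
# or per request (model.generate(prompt=..., params=generate_params)).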
- class metanames.GenTextReturnOptMetaNames[source]#
Set of MetaNames for Foundation Model Parameters.
Available MetaNames:
MetaName          Type  Required  Example value
INPUT_TEXT        bool  Y         True
GENERATED_TOKENS  bool  N         True
INPUT_TOKENS      bool  Y         True
TOKEN_LOGPROBS    bool  N         True
TOKEN_RANKS       bool  N         True
TOP_N_TOKENS      int   N         True
Note
One of these parameters is required: [‘INPUT_TEXT’, ‘INPUT_TOKENS’]
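A hedged sketch of how these options might be supplied through RETURN_OPTIONS; the lowercase keys follow the RETURN_OPTIONS example value shown in GenTextParamsMetaNames above, and the model instance is assumed to exist as in the earlier examples.

from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

# Sketch only: request additional information with the generated text, using
# the lowercase keys shown in the RETURN_OPTIONS example value above.
generate_params = {
    GenParams.MAX_NEW_TOKENS: 50,
    GenParams.RETURN_OPTIONS: {
        'input_text': True,        # INPUT_TEXT (one of input_text / input_tokens is required)
        'input_tokens': True,      # INPUT_TOKENS
        'generated_tokens': True,  # GENERATED_TOKENS
        'token_logprobs': True     # TOKEN_LOGPROBS
    }
}

generated_response = model.generate(prompt="What is 1 + 1?", params=generate_params)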
- class ibm_watson_machine_learning.foundation_models.utils.enums.DecodingMethods(value)[source]#
Bases:
Enum
Supported decoding methods for text generation.
- GREEDY = 'greedy'#
- SAMPLE = 'sample'#
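A short sketch of the enum in use; .value yields the plain string expected by the DECODING_METHOD MetaName (whether the enum member itself is also accepted is not stated above).

from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.utils.enums import DecodingMethods

# Sketch only: select greedy decoding via the enum's string value.
generate_params = {
    GenParams.DECODING_METHOD: DecodingMethods.GREEDY.value,   # 'greedy'
    GenParams.MAX_NEW_TOKENS: 100
}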
- class ibm_watson_machine_learning.foundation_models.utils.enums.ModelTypes(value)[source]#
Bases:
Enum
Supported foundation models.
- CODELLAMA_34B_INSTRUCT_HF = 'codellama/codellama-34b-instruct-hf'#
- ELYZA_JAPANESE_LLAMA_2_7B_INSTRUCT = 'elyza/elyza-japanese-llama-2-7b-instruct'#
- FLAN_T5_XL = 'google/flan-t5-xl'#
- FLAN_T5_XXL = 'google/flan-t5-xxl'#
- FLAN_UL2 = 'google/flan-ul2'#
- GPT_NEOX = 'eleutherai/gpt-neox-20b'#
- GRANITE_13B_CHAT = 'ibm/granite-13b-chat-v1'#
- GRANITE_13B_CHAT_V2 = 'ibm/granite-13b-chat-v2'#
- GRANITE_13B_INSTRUCT = 'ibm/granite-13b-instruct-v1'#
- GRANITE_13B_INSTRUCT_V2 = 'ibm/granite-13b-instruct-v2'#
- GRANITE_20B_MULTILINGUAL = 'ibm/granite-20b-multilingual'#
- LLAMA_2_13B_CHAT = 'meta-llama/llama-2-13b-chat'#
- LLAMA_2_70B_CHAT = 'meta-llama/llama-2-70b-chat'#
- MIXTRAL_8X7B_INSTRUCT_V01_Q = 'ibm-mistralai/mixtral-8x7b-instruct-v01-q'#
- MPT_7B_INSTRUCT2 = 'ibm/mpt-7b-instruct2'#
- MT0_XXL = 'bigscience/mt0-xxl'#
- STARCODER = 'bigcode/starcoder'#