Prompt Template Manager¶
- class ibm_watsonx_ai.foundation_models.prompts.PromptTemplateManager(credentials=None, *, project_id=None, space_id=None, verify=None, api_client=None)[source]¶
Bases:
WMLResource
Instantiate the prompt template manager.
- Parameters:
credentials (Credentials or dict, optional) – credentials for the watsonx.ai instance
project_id (str, optional) – ID of the project
space_id (str, optional) – ID of the space
verify (bool or str, optional) – You can pass one of the following as verify:
* the path to a CA_BUNDLE file
* the path to a directory with certificates of trusted CAs
* True - the default path to the truststore will be used
* False - no verification will be made
Note
One of these parameters is required: ['project_id', 'space_id']
Example:
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models.prompts import PromptTemplate, PromptTemplateManager
from ibm_watsonx_ai.foundation_models.utils.enums import ModelTypes

prompt_mgr = PromptTemplateManager(
    credentials=Credentials(
        api_key="***",
        url="https://us-south.ml.cloud.ibm.com"
    ),
    project_id="*****"
)

prompt_template = PromptTemplate(
    name="My prompt",
    model_id=ModelTypes.GRANITE_13B_CHAT_V2,
    input_prefix="Human:",
    output_prefix="Assistant:",
    input_text="What is {object} and how does it work?",
    input_variables=['object'],
    examples=[
        ['What is the Stock Market?',
         'A stock market is a place where investors buy and sell shares of publicly traded companies.']
    ]
)

stored_prompt_template = prompt_mgr.store_prompt(prompt_template)
print(stored_prompt_template.prompt_id)  # id of prompt template asset
Note
Here’s an example of how you can pass variables to your deployed prompt template:
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames

# `client` is an initialized APIClient instance
meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "SAMPLE DEPLOYMENT PROMPT TEMPLATE",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
    client.deployments.ConfigurationMetaNames.BASE_MODEL_ID: ModelTypes.GRANITE_13B_CHAT_V2
}
deployment_details = client.deployments.create(stored_prompt_template.prompt_id, meta_props)

client.deployments.generate_text(
    deployment_id=deployment_details["metadata"]["id"],
    params={
        GenTextParamsMetaNames.PROMPT_VARIABLES: {
            "object": "brain"
        }
    }
)
- delete_prompt(prompt_id, *, force=False)[source]¶
Remove a prompt template from a project or space.
- Parameters:
prompt_id (str) – ID of the prompt template to be deleted
force (bool) – if True, the prompt template is unlocked before it is deleted, defaults to False.
- Returns:
status ‘SUCCESS’ if the prompt template is successfully deleted
- Return type:
str
Example:
prompt_mgr.delete_prompt(prompt_id) # delete if asset is unlocked
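If the asset is locked, the documented force flag unlocks it before deletion (a minimal sketch; note that force is keyword-only):
prompt_mgr.delete_prompt(prompt_id, force=True)  # unlock the asset, then delete it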
- get_lock(prompt_id)[source]¶
Get the current locked state of a prompt template.
- Parameters:
prompt_id (str) – ID of the prompt template
- Returns:
information about the locked state of a prompt template asset
- Return type:
dict
Example:
print(prompt_mgr.get_lock(prompt_id))
- list(*, limit=None)[source]¶
List all available prompt templates as a DataFrame.
- Parameters:
limit (int, optional) – limit on the number of fetched records, defaults to None.
- Returns:
DataFrame of fundamental properties of available prompts.
- Return type:
pandas.core.frame.DataFrame
Example:
prompt_mgr.list(limit=5)  # list the 5 most recently created prompt template assets
Hint
Additionally, you can sort the available prompt templates by the “LAST MODIFIED” field.
df_prompts = prompt_mgr.list()
df_prompts.sort_values("LAST MODIFIED", ascending=False)
- load_prompt(prompt_id, astype=PromptTemplateFormats.PROMPTTEMPLATE, *, prompt_variables=None)[source]¶
Retrieve a prompt template asset.
- Parameters:
prompt_id (str) – ID of the prompt template to be loaded
astype (PromptTemplateFormats) – type of return object
prompt_variables (dict[str, str]) – dictionary of input variables and the values that will replace them
- Returns:
prompt template asset
- Return type:
FreeformPromptTemplate | PromptTemplate | DetachedPromptTemplate | str | langchain.prompts.PromptTemplate
Example:
loaded_prompt_template = prompt_mgr.load_prompt(prompt_id)
loaded_prompt_template_lc = prompt_mgr.load_prompt(prompt_id, PromptTemplateFormats.LANGCHAIN)
loaded_prompt_template_string = prompt_mgr.load_prompt(prompt_id, PromptTemplateFormats.STRING)
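When loading the template as a string, the documented prompt_variables argument fills in values for the input variables (a minimal sketch, reusing the 'object' variable from the template stored above; substitution is shown here for the string format only):
prompt_input_text = prompt_mgr.load_prompt(
    prompt_id,
    PromptTemplateFormats.STRING,
    prompt_variables={"object": "brain"}  # replaces {object} in the stored template text
)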
- lock(prompt_id, force=False)[source]¶
Lock a prompt template if it is unlocked and you have permission to lock it.
- Parameters:
prompt_id (str) – ID of the prompt template
force (bool) – if True, lock is forcefully overwritten
- Returns:
locked prompt template
- Return type:
dict
Example:
prompt_mgr.lock(prompt_id)
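An existing lock can be taken over with the documented force parameter (a minimal sketch):
prompt_mgr.lock(prompt_id, force=True)  # forcefully overwrite an existing lock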
- store_prompt(prompt_template)[source]¶
Store a new prompt template.
- Parameters:
prompt_template (FreeformPromptTemplate | PromptTemplate | DetachedPromptTemplate | langchain.prompts.PromptTemplate) – prompt template to be stored.
- Returns:
PromptTemplate object that is initialized with values provided in the server response object.
- Return type:
FreeformPromptTemplate | PromptTemplate | DetachedPromptTemplate
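Example (a minimal sketch of storing a structured template and reading back the server-assigned ID, mirroring the class-level example above):
prompt_template = PromptTemplate(
    name="My prompt",
    model_id="ibm/granite-13b-chat-v2",
    input_text="What is {object} and how does it work?",
    input_variables=['object'])

stored_prompt_template = prompt_mgr.store_prompt(prompt_template=prompt_template)
print(stored_prompt_template.prompt_id)  # id of the stored prompt template asset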
- unlock(prompt_id)[source]¶
Unlock a prompt template if it is locked and you have permission to unlock it.
- Parameters:
prompt_id (str) – ID of the prompt template
- Returns:
unlocked prompt template
- Return type:
dict
Example:
prompt_mgr.unlock(prompt_id)
- update_prompt(prompt_id, prompt_template)[source]¶
Update prompt template data.
- Parameters:
prompt_id (str) – ID of the prompt template to be updated
prompt_template (FreeformPromptTemplate | PromptTemplate | DetachedPromptTemplate) – prompt template with new data
- Returns:
metadata of the updated prompt template
- Return type:
dict
Example:
updated_prompt_template = PromptTemplate(name="New name")
prompt_mgr.update_prompt(prompt_id, updated_prompt_template)  # {'name': 'New name'} in metadata
- class ibm_watsonx_ai.foundation_models.prompts.PromptTemplate(name=None, model_id=None, model_params=None, template_version=None, task_ids=None, description=None, input_text=None, input_variables=None, instruction=None, input_prefix=None, output_prefix=None, examples=None, validate_template=True, **kwargs)[source]¶
Bases:
BasePromptTemplate
Parameter storage for a structured prompt template.
- Parameters:
prompt_id (str, attribute setting not allowed) – ID of the prompt template, defaults to None.
created_at (str, attribute setting not allowed) – time that the prompt was created (UTC), defaults to None.
lock (PromptTemplateLock | None, attribute setting not allowed) – locked state of the asset, defaults to None.
is_template (bool | None, attribute setting not allowed) – True if the prompt is a template, False otherwise; defaults to None.
name (str, optional) – name of the prompt template, defaults to None.
model_id (ModelTypes | str | None, optional) – ID of the Foundation model, defaults to None.
model_params (dict, optional) – parameters of the model, defaults to None.
template_version (str, optional) – semantic version for tracking in IBM AI Factsheets, defaults to None.
task_ids (list[str] | None, optional) – List of task IDs, defaults to None.
description (str, optional) – description of the prompt template asset, defaults to None.
input_text (str, optional) – input text for the prompt, defaults to None.
input_variables (list | dict[str, dict[str, str]], optional) – input variables can be present in the fields instruction, input_prefix, output_prefix, input_text, and examples, and are identified by braces (‘{’ and ‘}’), defaults to None.
instruction (str, optional) – instruction for the model, defaults to None.
input_prefix (str, optional) – prefix string placed before the input text, defaults to None.
output_prefix (str, optional) – prefix placed before the model response, defaults to None.
examples (list[list[str]], optional) – examples that might help the model adjust the response; [[input1, output1], …], defaults to None.
validate_template (bool, optional) – if True, the prompt template is validated for the presence of input variables, defaults to True.
- Raises:
ValidationError – raised when the set of input_variables is not consistent with the input variables present in the template. Raised only when validate_template is set to True.
Examples
Example of an invalid prompt template:
prompt_template = PromptTemplate(
    name="My structured prompt",
    model_id="ibm/granite-13b-chat-v2",
    input_text='What are the most famous monuments in ?',
    input_variables=['country'])

# Traceback (most recent call last):
#     ...
# ValidationError: Invalid prompt template; check for mismatched or missing input variables. Missing input variable: {'country'}
Example of a valid prompt template:
prompt_template = PromptTemplate(
    name="My structured prompt",
    model_id="ibm/granite-13b-chat-v2",
    input_text='What are the most famous monuments in {country}?',
    input_variables=['country'])
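If generation parameters should travel with the template, the documented model_params field accepts a dictionary of model parameters. The snippet below is a minimal sketch under the assumption that the GenTextParamsMetaNames keys used elsewhere on this page are suitable for the target model:
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

prompt_template = PromptTemplate(
    name="My structured prompt with parameters",
    model_id="ibm/granite-13b-chat-v2",
    model_params={
        GenParams.DECODING_METHOD: "greedy",  # assumed generation parameters stored with the template
        GenParams.MAX_NEW_TOKENS: 200
    },
    input_text='What are the most famous monuments in {country}?',
    input_variables=['country'])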
- class ibm_watsonx_ai.foundation_models.prompts.FreeformPromptTemplate(name=None, model_id=None, model_params=None, template_version=None, task_ids=None, description=None, input_text=None, input_variables=None, validate_template=True)[source]¶
Bases:
BasePromptTemplate
Storage for Freeform prompt template asset parameters.
- Parameters:
prompt_id (str, attribute setting not allowed) – ID of the prompt template, defaults to None.
created_at (str, attribute setting not allowed) – time that the prompt was created (UTC), defaults to None.
lock (PromptTemplateLock | None, attribute setting not allowed) – locked state of the asset, defaults to None.
is_template (bool | None, attribute setting not allowed) – True if the prompt is a template, False otherwise; defaults to None.
name (str, optional) – name of the prompt template, defaults to None.
model_id (ModelTypes | str | None, optional) – ID of the foundation model, defaults to None.
model_params (dict, optional) – parameters of the model, defaults to None.
template_version (str, optional) – semantic version for tracking in IBM AI Factsheets, defaults to None.
task_ids (list[str] | None, optional) – list of task IDs, defaults to None.
description (str, optional) – description of the prompt template asset, defaults to None.
input_text (str, optional) – input text for the prompt, defaults to None.
input_variables (list | dict[str, dict[str, str]], optional) – input variables can be present in the input_text field and are identified by braces (‘{’ and ‘}’), defaults to None.
validate_template (bool, optional) – if True, the prompt template is validated for the presence of input variables, defaults to True.
- Raises:
ValidationError – raised when the set of input_variables is not consistent with the input variables present in the template. Raised only when validate_template is set to True.
Examples
Example of an invalid Freeform prompt template:
prompt_template = FreeformPromptTemplate(
    name="My freeform prompt",
    model_id="ibm/granite-13b-chat-v2",
    input_text='What are the most famous monuments in ?',
    input_variables=['country'])

# Traceback (most recent call last):
#     ...
# ValidationError: Invalid prompt template; check for mismatched or missing input variables. Missing input variable: {'country'}
Example of a valid Freeform prompt template:
prompt_template = FreeformPromptTemplate(
    name="My freeform prompt",
    model_id="ibm/granite-13b-chat-v2",
    input_text='What are the most famous monuments in {country}?',
    input_variables=['country'])
- class ibm_watsonx_ai.foundation_models.prompts.DetachedPromptTemplate(name=None, model_id=None, model_params=None, template_version=None, task_ids=None, description=None, input_text=None, input_variables=None, detached_prompt_id=None, detached_model_id=None, detached_model_provider=None, detached_prompt_url=None, detached_prompt_additional_information=None, detached_model_name=None, detached_model_url=None, validate_template=True)[source]¶
Bases:
BasePromptTemplate
Storage for detached prompt template parameters.
- Parameters:
prompt_id (str, attribute setting not allowed) – ID of the prompt template, defaults to None.
created_at (str, attribute setting not allowed) – time that the prompt was created (UTC), defaults to None.
lock (PromptTemplateLock | None, attribute setting not allowed) – locked state of the asset, defaults to None.
is_template (bool | None, attribute setting not allowed) – True if the prompt is a template, False otherwise; defaults to None.
name (str, optional) – name of the prompt template, defaults to None.
model_id (ModelTypes | str | None, optional) – ID of the foundation model, defaults to None.
model_params (dict, optional) – parameters of the model, defaults to None.
template_version (str, optional) – semantic version for tracking in IBM AI Factsheets, defaults to None.
task_ids (list[str] | None, optional) – list of task IDs, defaults to None.
description (str, optional) – description of the prompt template asset, defaults to None.
input_text (str, optional) – input text for the prompt, defaults to None.
input_variables (list | dict[str, dict[str, str]], optional) – input variables can be present in the input_text field and are identified by braces (‘{’ and ‘}’), defaults to None.
detached_prompt_id (str | None, optional) – ID of the external prompt, defaults to None
detached_model_id (str | None, optional) – ID of the external model, defaults to None
detached_model_provider (str | None, optional) – external model provider, defaults to None
detached_prompt_url (str | None, optional) – URL for the external prompt, defaults to None
detached_prompt_additional_information (list[dict[str, Any]] | None, optional) – additional information of the external prompt, defaults to None
detached_model_name (str | None, optional) – name of the external model, defaults to None
detached_model_url (str | None, optional) – URL for the external model, defaults to None
validate_template (bool, optional) – if True, the prompt template is validated for the presence of input variables, defaults to True.
- Raises:
ValidationError – raised when the set of input_variables is not consistent with the input variables present in the template. Raised only when validate_template is set to True.
Examples
Example of an invalid detached prompt template:
prompt_template = DetachedPromptTemplate(
    name="My detached prompt",
    model_id="<some model>",
    input_text='What are the most famous monuments in ?',
    input_variables=['country'],
    detached_prompt_id="<prompt id>",
    detached_model_id="<model id>",
    detached_model_provider="<provider>",
    detached_prompt_url="<url>",
    detached_prompt_additional_information=[[{"key": "value"}]],
    detached_model_name="<model name>",
    detached_model_url="<model url>")

# Traceback (most recent call last):
#     ...
# ValidationError: Invalid prompt template; check for mismatched or missing input variables. Missing input variable: {'country'}
Example of a valid detached prompt template:
prompt_template = DetachedPromptTemplate(
    name="My detached prompt",
    model_id="<some model>",
    input_text='What are the most famous monuments in {country}?',
    input_variables=['country'],
    detached_prompt_id="<prompt id>",
    detached_model_id="<model id>",
    detached_model_provider="<provider>",
    detached_prompt_url="<url>",
    detached_prompt_additional_information=[[{"key": "value"}]],
    detached_model_name="<model name>",
    detached_model_url="<model url>")
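A detached prompt template built this way can be stored with the same manager methods documented above (a minimal sketch, assuming prompt_mgr is an initialized PromptTemplateManager):
stored_detached_prompt = prompt_mgr.store_prompt(prompt_template=prompt_template)
print(stored_detached_prompt.prompt_id)  # id of the stored detached prompt template asset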