Core¶
Connections¶
- class client.Connections(client)[source]¶
Store and manage connections.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.ConnectionMetaNames object>¶
MetaNames for connection creation.
- create(meta_props)[source]¶
Create a connection. Examples of PROPERTIES field input:
MySQL
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "database": "database",
    "password": "password",
    "port": "3306",
    "host": "host url",
    "ssl": "false",
    "username": "username"
}
Google BigQuery
Method 1: Using a service account JSON. The generated service account JSON can be provided as input as-is; substitute actual values in the JSON. The example below is only indicative of the fields. For information on how to generate the service account JSON, refer to the Google BigQuery documentation.
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "type": "service_account",
    "project_id": "project_id",
    "private_key_id": "private_key_id",
    "private_key": "private key contents",
    "client_email": "client_email",
    "client_id": "client_id",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "client_x509_cert_url"
}
Method 2: Using the OAuth method. For information on how to generate an OAuth token, refer to the Google BigQuery documentation.
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "access_token": "access token generated for big query",
    "refresh_token": "refresh token",
    "project_id": "project_id",
    "client_secret": "client_secret",
    "client_id": "client_id"
}
MS SQL
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "database": "database",
    "password": "password",
    "port": "1433",
    "host": "host",
    "username": "username"
}
Teradata
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "database": "database",
    "password": "password",
    "port": "1025",
    "host": "host",
    "username": "username"
}
- Parameters:
meta_props (dict) –
metadata of the connection configuration. To see available meta names, use:
client.connections.ConfigurationMetaNames.get()
- Returns:
metadata of the stored connection
- Return type:
dict
Example:
sqlserver_data_source_type_id = client.connections.get_datasource_type_id_by_name('sqlserver')
connections_details = client.connections.create({
    client.connections.ConfigurationMetaNames.NAME: "sqlserver connection",
    client.connections.ConfigurationMetaNames.DESCRIPTION: "connection description",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: sqlserver_data_source_type_id,
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "1433",
        "host": "host",
        "username": "username"
    }
})
- delete(connection_id)[source]¶
Delete a stored connection.
- Parameters:
connection_id (str) – unique ID of the connection to be deleted
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.connections.delete(connection_id)
- get_datasource_type_details_by_id(datasource_type_id, connection_properties=False)[source]¶
Get datasource type details for the given datasource type ID.
- Parameters:
datasource_type_id (str) – ID of the datasource type
connection_properties (bool) – if True, the connection properties are included in the returned details, defaults to False
- Returns:
Datasource type details
- Return type:
dict
Example:
client.connections.get_datasource_type_details_by_id(datasource_type_id)
- get_datasource_type_id_by_name(name)[source]¶
Get a stored datasource type ID for the given datasource type name.
- Parameters:
name (str) – name of datasource type
- Returns:
ID of datasource type
- Return type:
str
Example:
client.connections.get_datasource_type_id_by_name('cloudobjectstorage')
- get_datasource_type_uid_by_name(name)[source]¶
Get a stored datasource type ID for the given datasource type name.
Deprecated: Use Connections.get_datasource_type_id_by_name(name) instead.
- Parameters:
name (str) – name of datasource type
- Returns:
ID of datasource type
- Return type:
str
Example:
client.connections.get_datasource_type_uid_by_name('cloudobjectstorage')
- get_details(connection_id=None)[source]¶
Get connection details for the given unique connection ID. If no connection_id is passed, details for all connections are returned.
- Parameters:
connection_id (str) – unique ID of the connection
- Returns:
metadata of the stored connection
- Return type:
dict
Example:
connection_details = client.connections.get_details(connection_id)
connection_details = client.connections.get_details()
- static get_id(connection_details)[source]¶
Get ID of a stored connection.
- Parameters:
connection_details (dict) – metadata of the stored connection
- Returns:
unique ID of the stored connection
- Return type:
str
Example:
connection_id = client.connections.get_id(connection_details)
- static get_uid(connection_details)[source]¶
Get the unique ID of a stored connection.
Deprecated: Use Connections.get_id(details) instead.
- Parameters:
connection_details (dict) – metadata of the stored connection
- Returns:
unique ID of the stored connection
- Return type:
str
Example:
connection_uid = client.connections.get_uid(connection_details)
- get_uploaded_db_drivers()[source]¶
Get uploaded db driver jar names and paths. Supported for IBM Cloud Pak® for Data, version 4.6.1 and up.
- Returns:
dictionary containing names and paths of the uploaded db driver files
- Return type:
dict[str, str]
Example:
result = client.connections.get_uploaded_db_drivers()
- list()[source]¶
Return a pandas.DataFrame with all stored connections in a table format.
- Returns:
pandas.DataFrame with listed connections
- Return type:
pandas.DataFrame
Example:
client.connections.list()
- list_datasource_types()[source]¶
Return stored datasource type assets in a table format.
- Returns:
pandas.DataFrame with listed datasource types
- Return type:
pandas.DataFrame
Example:
client.connections.list_datasource_types()
- list_uploaded_db_drivers()[source]¶
Return a pandas.DataFrame with uploaded db driver jars in a table format. Supported for IBM Cloud Pak® for Data only.
- Returns:
pandas.DataFrame with listed uploaded db drivers
- Return type:
pandas.DataFrame
Example:
client.connections.list_uploaded_db_drivers()
- sign_db_driver_url(jar_name)[source]¶
Get a signed db driver jar URL to be used during JDBC generic connection creation. The jar name passed as argument needs to be uploaded into the system first. Supported for IBM Cloud Pak® for Data only, version 4.0.4 and later.
- Parameters:
jar_name (str) – name of db driver jar
- Returns:
URL of signed db driver
- Return type:
str
Example:
jar_uri = client.connections.sign_db_driver_url('db2jcc4.jar')
- class metanames.ConnectionMetaNames[source]¶
Set of MetaNames for Connection.
Available MetaNames:
MetaName
Type
Required
Example value
NAME
str
Y
my_connection
DESCRIPTION
str
N
my_description
DATASOURCE_TYPE
str
Y
1e3363a5-7ccf-4fff-8022-4850a8024b68
PROPERTIES
dict
Y
{'database': 'db_name', 'host': 'host_url', 'password': 'password', 'username': 'user'}
FLAGS
list
N
['personal_credentials']
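Putting the table together, the following is a minimal sketch of composing and checking a meta_props dict before calling create. The literal keys and the validation step are illustrative stand-ins; in real code, use the client.connections.ConfigurationMetaNames.* constants.

```python
# Sketch: a meta_props dict with the required MetaNames from the table
# above (NAME, DATASOURCE_TYPE, PROPERTIES). The literal string keys
# below are illustrative stand-ins for the ConfigurationMetaNames constants.
meta_props = {
    "name": "my_connection",
    "datasource_type": "1e3363a5-7ccf-4fff-8022-4850a8024b68",
    "properties": {
        "database": "db_name",
        "host": "host_url",
        "password": "password",
        "username": "user",
    },
}

# A quick client-side check that the required fields are present.
required = {"name", "datasource_type", "properties"}
missing = required - meta_props.keys()
```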
Data assets¶
- class client.Assets(client)[source]¶
Store and manage data assets.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.AssetsMetaNames object>¶
MetaNames for Data Assets creation.
- create(name, file_path)[source]¶
Create a data asset and upload content to it.
- Parameters:
name (str) – name to be given to the data asset
file_path (str) – path to the content file to be uploaded
- Returns:
metadata of the stored data asset
- Return type:
dict
Example:
asset_details = client.data_assets.create(name="sample_asset", file_path="/path/to/file")
- delete(asset_id=None, **kwargs)[source]¶
Delete a stored data asset.
- Parameters:
asset_id (str) – unique ID of the data asset
- Returns:
status (“SUCCESS” or “FAILED”) or dictionary, if deleted asynchronously
- Return type:
str or dict
Example:
client.data_assets.delete(asset_id)
- download(asset_id=None, filename='', **kwargs)[source]¶
Download and store the content of a data asset.
- Parameters:
asset_id (str) – unique ID of the data asset to be downloaded
filename (str) – filename to be used for the downloaded file
- Returns:
normalized path to the downloaded asset content
- Return type:
str
Example:
client.data_assets.download(asset_id,"sample_asset.csv")
- get_content(asset_id=None, **kwargs)[source]¶
Download the content of a data asset.
- Parameters:
asset_id (str) – unique ID of the data asset to be downloaded
- Returns:
the asset content
- Return type:
bytes
Example:
content = client.data_assets.get_content(asset_id).decode('ascii')
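The returned bytes can also be parsed directly, for example as CSV. A sketch, where the content literal stands in for a real get_content result:

```python
import csv
import io

# Stand-in for bytes returned by client.data_assets.get_content(asset_id).
content = b"GENDER,AGE\nM,23\nF,55\n"

# Decode and parse the asset content as CSV rows.
rows = list(csv.reader(io.StringIO(content.decode("utf-8"))))
```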
- get_details(asset_id=None, get_all=None, limit=None, **kwargs)[source]¶
Get data asset details. If no asset_id is passed, details for all assets are returned.
- Parameters:
asset_id (str) – unique ID of the asset
limit (int, optional) – limit number of fetched records
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
metadata of the stored data asset
- Return type:
dict
Example:
asset_details = client.data_assets.get_details(asset_id)
- static get_href(asset_details)[source]¶
Get the URL of a stored data asset.
- Parameters:
asset_details (dict) – details of the stored data asset
- Returns:
href of the stored data asset
- Return type:
str
Example:
asset_details = client.data_assets.get_details(asset_id)
asset_href = client.data_assets.get_href(asset_details)
- static get_id(asset_details)[source]¶
Get the unique ID of a stored data asset.
- Parameters:
asset_details (dict) – details of the stored data asset
- Returns:
unique ID of the stored data asset
- Return type:
str
Example:
asset_id = client.data_assets.get_id(asset_details)
- list(limit=None)[source]¶
List stored data assets in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int) – limit number for fetched records
- Return type:
DataFrame
- Returns:
listed elements
Example:
client.data_assets.list()
- store(meta_props)[source]¶
Create a data asset and upload content to it.
- Parameters:
meta_props (dict) –
metadata of the data asset configuration. To see available meta names, use:
client.data_assets.ConfigurationMetaNames.get()
Example:
Example of data asset creation for files:
metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 'sample.csv'
}
asset_details = client.data_assets.store(meta_props=metadata)
Example of data asset creation using a connection:
metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '39eaa1ee-9aa4-4651-b8fe-95d3ddae',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1/sample.csv'
}
asset_details = client.data_assets.store(meta_props=metadata)
Example of data asset creation with a database data source type connection:
metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '23eaf1ee-96a4-4651-b8fe-95d3dadfe',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1'
}
asset_details = client.data_assets.store(meta_props=metadata)
Deployments¶
- class client.Deployments(client)[source]¶
Deploy and score published artifacts (models and functions).
- class HardwareRequestSizes(value)[source]¶
An enum class that represents the different hardware request sizes available.
- create(artifact_id=None, meta_props=None, rev_id=None, **kwargs)[source]¶
Create a deployment from an artifact. An artifact is a model or function that can be deployed.
- Parameters:
artifact_id (str) – ID of the published artifact (the model or function ID)
meta_props (dict, optional) –
meta props. To see the available list of meta names, use:
client.deployments.ConfigurationMetaNames.get()
rev_id (str, optional) – revision ID of the deployment
- Returns:
metadata of the created deployment
- Return type:
dict
Example:
meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "SAMPLE DEPLOYMENT NAME",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"id": "e7ed1d6c-2e89-42d7-aed5-8sb972c1d2b"},
    client.deployments.ConfigurationMetaNames.SERVING_NAME: 'sample_deployment'
}
deployment_details = client.deployments.create(artifact_id, meta_props)
- create_job(deployment_id, meta_props, retention=None, transaction_id=None, _asset_id=None)[source]¶
Create an asynchronous deployment job.
- Parameters:
deployment_id (str) – unique ID of the deployment
meta_props (dict) – metaprops. To see the available list of metanames, use:
client.deployments.ScoringMetaNames.get()
or client.deployments.DecisionOptimizationMetaNames.get()
retention (int, optional) – how many days job metadata should be retained; takes integer values >= -1; supported only on Cloud
transaction_id (str, optional) – transaction ID to be passed with the payload
- Returns:
metadata of the created async deployment job
- Return type:
dict or str
Note
Valid payloads for scoring input are a list of values, a pandas DataFrame, or a numpy array.
Example:
scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        'fields': ['GENDER', 'AGE', 'MARITAL_STATUS', 'PROFESSION'],
        'values': [['M', 23, 'Single', 'Student'],
                   ['M', 55, 'Single', 'Executive']]
    }]
}
async_job = client.deployments.create_job(deployment_id, scoring_payload)
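When the scoring input lives in a pandas DataFrame, the fields/values structure can be built mechanically. A sketch with an illustrative DataFrame:

```python
import pandas as pd

# Illustrative input data; in practice this is your own DataFrame.
df = pd.DataFrame({
    "GENDER": ["M", "M"],
    "AGE": [23, 55],
    "MARITAL_STATUS": ["Single", "Single"],
    "PROFESSION": ["Student", "Executive"],
})

# Convert the DataFrame into the fields/values entry expected by INPUT_DATA.
payload_entry = {
    "fields": df.columns.tolist(),
    "values": df.values.tolist(),
}
```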
- delete(deployment_id=None, **kwargs)[source]¶
Delete a deployment.
- Parameters:
deployment_id (str) – unique ID of the deployment
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.deployments.delete(deployment_id)
- delete_job(job_id=None, hard_delete=False, **kwargs)[source]¶
Delete a deployment job that is running. This method can also delete metadata details of completed or canceled jobs when the hard_delete parameter is set to True.
- Parameters:
job_id (str) – unique ID of the deployment job to be deleted
hard_delete (bool, optional) –
specify True or False:
True - To delete the completed or canceled job.
False - To cancel the currently running deployment job.
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.deployments.delete_job(job_id)
- generate(deployment_id, prompt=None, params=None, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10, async_mode=False, validate_prompt_variables=True)[source]¶
Generate a raw response to the prompt for the given deployment_id.
- Parameters:
deployment_id (str) – unique ID of the deployment
prompt (str, optional) – prompt needed for text generation. If deployment_id points to the Prompt Template asset, then the prompt argument must be None, defaults to None
params (dict, optional) – meta props for text generation; use ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames
guardrails (bool, optional) – if True, the filter for potentially hateful, abusive, and/or profane language (HAP) is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params (dict, optional) – meta props for HAP moderations; use ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames
concurrency_limit (int, optional) – number of requests to be sent in parallel, maximum is 10
async_mode (bool, optional) – if True, results are yielded asynchronously (using a generator); in this case both the prompt and the generated text are concatenated in the final response under generated_text, defaults to False
validate_prompt_variables (bool) – if True, prompt variables provided in params are validated against those in the Prompt Template Asset; this parameter applies only in a Prompt Template Asset deployment scenario and should not be changed, defaults to True
- Returns:
scoring result containing generated content
- Return type:
dict
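A sketch of pulling the generated text out of such a result. The response literal below is an illustrative stand-in shaped like a text-generation response (a results list with generated_text entries), not output captured from a live deployment:

```python
# Stand-in for the dict returned by client.deployments.generate(...).
# Field names follow the text-generation response shape but are
# illustrative here.
response = {
    "model_id": "example-model",
    "results": [
        {"generated_text": "Hello, world.", "stop_reason": "eos_token"},
    ],
}

# Collect the generated text from each result entry.
texts = [r["generated_text"] for r in response.get("results", [])]
```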
- generate_text(deployment_id, prompt=None, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10, validate_prompt_variables=True)[source]¶
Given the selected deployment (deployment_id), a text prompt as input, and parameters including concurrency_limit, the selected inference generates completion text returned as the generated_text response.
- Parameters:
deployment_id (str) – unique ID of the deployment
prompt (str, optional) – the prompt string or list of strings. If a list of strings is passed, requests are managed in parallel at the rate of concurrency_limit, defaults to None
params (dict, optional) – meta props for text generation; use ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames
raw_response (bool, optional) – if True, returns the whole response object, defaults to False
guardrails (bool, optional) – if True, the filter for potentially hateful, abusive, and/or profane language (HAP) is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params (dict, optional) – meta props for HAP moderations; use ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames
concurrency_limit (int, optional) – number of requests to be sent in parallel, maximum is 10
validate_prompt_variables (bool) – if True, prompt variables provided in params are validated against those in the Prompt Template Asset; this parameter applies only in a Prompt Template Asset deployment scenario and should not be changed, defaults to True
- Returns:
generated content
- Return type:
str
Note
By default only the first occurrence of HAPDetectionWarning is displayed. To enable printing all warnings of this category, use:
import warnings
from ibm_watsonx_ai.foundation_models.utils import HAPDetectionWarning
warnings.filterwarnings("always", category=HAPDetectionWarning)
- generate_text_stream(deployment_id, prompt=None, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, validate_prompt_variables=True)[source]¶
Given the selected deployment (deployment_id), a text prompt as input, and parameters, the selected inference generates streamed text.
- Parameters:
deployment_id (str) – unique ID of the deployment
prompt (str, optional) – the prompt string, defaults to None
params (dict, optional) – meta props for text generation; use ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames
raw_response (bool, optional) – if True, yields the whole response object, defaults to False
guardrails (bool, optional) – if True, the filter for potentially hateful, abusive, and/or profane language (HAP) is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params (dict, optional) – meta props for HAP moderations; use ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames
validate_prompt_variables (bool) – if True, prompt variables provided in params are validated against those in the Prompt Template Asset; this parameter applies only in a Prompt Template Asset deployment scenario and should not be changed, defaults to True
- Returns:
generated content
- Return type:
str
Note
By default only the first occurrence of HAPDetectionWarning is displayed. To enable printing all warnings of this category, use:
import warnings
from ibm_watsonx_ai.foundation_models.utils import HAPDetectionWarning
warnings.filterwarnings("always", category=HAPDetectionWarning)
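Since the method yields chunks, a caller typically accumulates them into the full text. A sketch, where fake_stream is a stand-in for a real generate_text_stream call:

```python
# Stand-in generator simulating the chunks yielded by
# client.deployments.generate_text_stream(deployment_id, prompt=...).
def fake_stream():
    yield "Hello"
    yield ", "
    yield "world."

# Accumulate streamed chunks into the complete generated text.
full_text = "".join(chunk for chunk in fake_stream())
```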
- get_details(deployment_id=None, serving_name=None, limit=None, asynchronous=False, get_all=False, spec_state=None, _silent=False, **kwargs)[source]¶
Get information about deployment(s). If deployment_id is not passed, all deployment details are returned.
- Parameters:
deployment_id (str, optional) – unique ID of the deployment
serving_name (str, optional) – serving name that filters deployments
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
spec_state (SpecStates, optional) – software specification state, can be used only when deployment_id is None
- Returns:
metadata of the deployment(s)
- Return type:
dict (if deployment_id is not None) or {“resources”: [dict]} (if deployment_id is None)
Example:
deployment_details = client.deployments.get_details(deployment_id)
deployment_details = client.deployments.get_details(deployment_id=deployment_id)
deployments_details = client.deployments.get_details()
deployments_details = client.deployments.get_details(limit=100)
deployments_details = client.deployments.get_details(limit=100, get_all=True)
deployments_details = []
for entry in client.deployments.get_details(limit=100, asynchronous=True, get_all=True):
    deployments_details.extend(entry)
- get_download_url(deployment_details)[source]¶
Get deployment_download_url from the deployment details.
- Parameters:
deployment_details (dict) – created deployment details
- Returns:
deployment download URL that is used to get file deployment (for example: Core ML)
- Return type:
str
Example:
deployment_url = client.deployments.get_download_url(deployment)
- static get_href(deployment_details)[source]¶
Get deployment_href from the deployment details.
- Parameters:
deployment_details (dict) – metadata of the deployment
- Returns:
deployment href that is used to manage the deployment
- Return type:
str
Example:
deployment_href = client.deployments.get_href(deployment)
- static get_id(deployment_details)[source]¶
Get the deployment ID from the deployment details.
- Parameters:
deployment_details (dict) – metadata of the deployment
- Returns:
deployment ID that is used to manage the deployment
- Return type:
str
Example:
deployment_id = client.deployments.get_id(deployment)
- get_job_details(job_id=None, include=None, limit=None, **kwargs)[source]¶
Get information about deployment job(s). If job_id is not passed, details of all deployment jobs are returned.
- Parameters:
job_id (str, optional) – unique ID of the job
include (str, optional) – fields to be retrieved from ‘decision_optimization’ and ‘scoring’ section mentioned as value(s) (comma separated) as output response fields
limit (int, optional) – limit number of fetched records
- Returns:
metadata of deployment job(s)
- Return type:
dict (if job_id is not None) or {“resources”: [dict]} (if job_id is None)
Example:
deployments_details = client.deployments.get_job_details()
deployment_details = client.deployments.get_job_details(job_id=job_id)
- get_job_href(job_details)[source]¶
Get the href of a deployment job.
- Parameters:
job_details (dict) – metadata of the deployment job
- Returns:
href of the deployment job
- Return type:
str
Example:
job_details = client.deployments.get_job_details(job_id=job_id)
job_href = client.deployments.get_job_href(job_details)
- get_job_id(job_details)[source]¶
Get the unique ID of a deployment job.
- Parameters:
job_details (dict) – metadata of the deployment job
- Returns:
unique ID of the deployment job
- Return type:
str
Example:
job_details = client.deployments.get_job_details(job_id=job_id)
job_id = client.deployments.get_job_id(job_details)
- get_job_status(job_id)[source]¶
Get the status of a deployment job.
- Parameters:
job_id (str) – unique ID of the deployment job
- Returns:
status of the deployment job
- Return type:
dict
Example:
job_status = client.deployments.get_job_status(job_id)
- get_job_uid(job_details)[source]¶
Get the unique ID of a deployment job.
Deprecated: Use
get_job_id(job_details)
instead.
- Parameters:
job_details (dict) – metadata of the deployment job
- Returns:
unique ID of the deployment job
- Return type:
str
Example:
job_details = client.deployments.get_job_details(job_uid=job_uid)
job_uid = client.deployments.get_job_uid(job_details)
- static get_scoring_href(deployment_details)[source]¶
Get scoring URL from deployment details.
- Parameters:
deployment_details (dict) – metadata of the deployment
- Returns:
scoring endpoint URL that is used to make scoring requests
- Return type:
str
Example:
scoring_href = client.deployments.get_scoring_href(deployment)
- static get_serving_href(deployment_details)[source]¶
Get serving URL from the deployment details.
- Parameters:
deployment_details (dict) – metadata of the deployment
- Returns:
serving endpoint URL that is used to make scoring requests
- Return type:
str
Example:
scoring_href = client.deployments.get_serving_href(deployment)
- static get_uid(deployment_details)[source]¶
Get deployment_uid from the deployment details.
Deprecated: Use
get_id(deployment_details)
instead.
- Parameters:
deployment_details (dict) – metadata of the deployment
- Returns:
deployment UID that is used to manage the deployment
- Return type:
str
Example:
deployment_uid = client.deployments.get_uid(deployment)
- is_serving_name_available(serving_name)[source]¶
Check if the serving name is available for use.
- Parameters:
serving_name (str) – serving name that filters deployments
- Returns:
information about whether the serving name is available
- Return type:
bool
Example:
is_available = client.deployments.is_serving_name_available('test')
- list(limit=None, artifact_type=None)[source]¶
Returns deployments in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
artifact_type (str, optional) – return only deployments with the specified artifact_type
- Returns:
pandas.DataFrame with the listed deployments
- Return type:
pandas.DataFrame
Example:
client.deployments.list()
- list_jobs(limit=None)[source]¶
Return the async deployment jobs in a table format. If the limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed deployment jobs
- Return type:
pandas.DataFrame
Note
This method lists only asynchronous deployment jobs created for a WML deployment.
Example:
client.deployments.list_jobs()
- run_ai_service(deployment_id, ai_service_payload)[source]¶
Execute an AI service by providing a scoring payload.
- Parameters:
deployment_id (str) – unique ID of the deployment
ai_service_payload (dict) – AI service payload to be passed to the generate method
- Returns:
response of the AI service
- Return type:
Any
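A usage sketch. The payload fields below are illustrative assumptions, since each deployed AI service defines its own request shape; the client call is shown commented because it needs a live deployment:

```python
# Hypothetical AI service payload; the "messages" field is an assumption,
# as the expected shape depends on how the deployed service's generate
# function reads the request body.
ai_service_payload = {
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ]
}

# With an initialized APIClient and a deployed AI service:
# response = client.deployments.run_ai_service(deployment_id, ai_service_payload)
```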
- run_ai_service_stream(deployment_id, ai_service_payload)[source]¶
Execute an AI service by providing a scoring payload and stream the response.
- Parameters:
deployment_id (str) – unique ID of the deployment
ai_service_payload (dict) – AI service payload to be passed to the generate_stream method
- Returns:
stream of the response of the AI service
- Return type:
Generator
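A streaming usage sketch. The payload field is illustrative, and the consumption loop is commented out because it requires a live deployment; the method returns a generator yielding response chunks:

```python
# Hypothetical payload for a streaming AI service call; the "query" field
# is an assumption about how the deployed service reads its request body.
ai_service_payload = {"query": "Summarize the latest quarterly report"}

# The stream call returns a generator, consumed chunk by chunk:
# for chunk in client.deployments.run_ai_service_stream(deployment_id, ai_service_payload):
#     print(chunk, end="", flush=True)
```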
- score(deployment_id, meta_props, transaction_id=None)[source]¶
Make scoring requests against the deployed artifact.
- Parameters:
deployment_id (str) – unique ID of the deployment to be scored
meta_props (dict) – meta props for scoring, use
client.deployments.ScoringMetaNames.show()
to view the list of ScoringMetaNamestransaction_id (str, optional) – transaction ID to be passed with the records during payload logging
- Returns:
scoring result that contains prediction and probability
- Return type:
dict
Note
client.deployments.ScoringMetaNames.INPUT_DATA is the only metaname valid for sync scoring.
Valid payloads for scoring input are a list of values, a pandas DataFrame, or a numpy array.
Example:
scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        'fields': ['GENDER', 'AGE', 'MARITAL_STATUS', 'PROFESSION'],
        'values': [
            ['M', 23, 'Single', 'Student'],
            ['M', 55, 'Single', 'Executive']
        ]
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
- update(deployment_id=None, changes=None, **kwargs)[source]¶
Update existing deployment metadata. If ASSET is patched, the ‘id’ field is mandatory and a deployment with the provided asset id/rev is started. The deployment ID remains the same.
- Parameters:
deployment_id (str) – unique ID of deployment to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
- Returns:
metadata of the updated deployment
- Return type:
dict or None
Examples
metadata = {client.deployments.ConfigurationMetaNames.NAME: "updated_Deployment"}
updated_deployment_details = client.deployments.update(deployment_id, changes=metadata)

metadata = {client.deployments.ConfigurationMetaNames.ASSET: {"id": "ca0cd864-4582-4732-b365-3165598dc945", "rev": "2"}}
deployment_details = client.deployments.update(deployment_id, changes=metadata)
- class metanames.DeploymentMetaNames[source]¶
Set of MetaNames for Deployments Specs.
Available MetaNames:
MetaName | Type | Required | Schema | Example value
TAGS | list | N | ['string'] | ['string1', 'string2']
NAME | str | N | | my_deployment
DESCRIPTION | str | N | | my_deployment
CUSTOM | dict | N | | {}
ASSET | dict | N | | {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '1'}
PROMPT_TEMPLATE | dict | N | | {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab'}
HARDWARE_SPEC | dict | N | | {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc'}
HARDWARE_REQUEST | dict | N | | {'size': 'gpu_s', 'num_nodes': 1}
HYBRID_PIPELINE_HARDWARE_SPECS | list | N | | [{'node_runtime_id': 'auto_ai.kb', 'hardware_spec': {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc', 'num_nodes': '2'}}]
ONLINE | dict | N | | {}
BATCH | dict | N | | {}
DETACHED | dict | N | | {}
R_SHINY | dict | N | | {'authentication': 'anyone_with_url'}
VIRTUAL | dict | N | | {}
OWNER | str | N | | <owner_id>
BASE_MODEL_ID | str | N | | google/flan-ul2
BASE_DEPLOYMENT_ID | str | N | | 76a60161-facb-4968-a475-a6f1447c44bf
PROMPT_VARIABLES | dict | N | | {'key': 'value'}
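A sketch of assembling meta_props for an online deployment from the metanames above. Real code should use client.deployments.ConfigurationMetaNames; the stand-in class below only mirrors the attribute names, and its string values are assumptions, not taken from the SDK source:

```python
class _MetaNames:
    # Illustrative stand-in for client.deployments.ConfigurationMetaNames;
    # the string values are assumed for this sketch.
    NAME = "name"
    ONLINE = "online"
    HARDWARE_SPEC = "hardware_spec"

meta_props = {
    _MetaNames.NAME: "my_deployment",
    _MetaNames.ONLINE: {},
    _MetaNames.HARDWARE_SPEC: {"id": "3342-1ce536-20dc-4444-aac7-7284cf3befc"},
}
# With an initialized APIClient and a stored model:
# deployment_details = client.deployments.create(model_id, meta_props=meta_props)
```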
- class ibm_watsonx_ai.utils.enums.RShinyAuthenticationValues(value)[source]¶
Allowable values of R_Shiny authentication.
- ANYONE_WITH_URL = 'anyone_with_url'¶
- ANY_VALID_USER = 'any_valid_user'¶
- MEMBERS_OF_DEPLOYMENT_SPACE = 'members_of_deployment_space'¶
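The enum members resolve to the plain strings shown above, so an R_SHINY configuration can be written with the documented value directly; a minimal sketch (the client call is illustrative and commented out):

```python
# R_SHINY metaname payload using the documented string value for
# RShinyAuthenticationValues.ANYONE_WITH_URL
r_shiny_conf = {"authentication": "anyone_with_url"}
# meta_props = {client.deployments.ConfigurationMetaNames.R_SHINY: r_shiny_conf}
```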
- class metanames.ScoringMetaNames[source]¶
Set of MetaNames for Scoring.
Available MetaNames:
MetaName | Type | Required | Schema | Example value
NAME | str | N | | jobs test
INPUT_DATA | list | N | [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}] | [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]
INPUT_DATA_REFERENCES | list | N | [{'id(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}] |
OUTPUT_DATA_REFERENCE | dict | N | {'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}} |
EVALUATIONS_SPEC | list | N | [{'id(optional)': 'string', 'input_target(optional)': 'string', 'metrics_names(optional)': 'array[string]'}] | [{'id': 'string', 'input_target': 'string', 'metrics_names': ['auroc', 'accuracy']}]
ENVIRONMENT_VARIABLES | dict | N | | {'my_env_var1': 'env_var_value1', 'my_env_var2': 'env_var_value2'}
SCORING_PARAMETERS | dict | N | | {'forecast_window': 50}
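A sketch of an async scoring job payload shaped after the INPUT_DATA schema above. The lowercase string keys are illustrative stand-ins for the ScoringMetaNames attributes, and the create_job call is commented out because it needs a live batch deployment:

```python
# Hypothetical batch scoring payload; "input_data" stands in for
# client.deployments.ScoringMetaNames.INPUT_DATA.
job_payload = {
    "input_data": [{
        "fields": ["name", "age", "occupation"],
        "values": [["john", 23, "student"]],
    }],
}
# job_details = client.deployments.create_job(deployment_id, meta_props=job_payload)
```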
- class metanames.DecisionOptimizationMetaNames[source]¶
Set of MetaNames for Decision Optimization.
Available MetaNames:
MetaName | Type | Required | Schema | Example value
INPUT_DATA | list | N | [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}] | [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]
INPUT_DATA_REFERENCES | list | N | [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}] | [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]
OUTPUT_DATA | list | N | [{'name(optional)': 'string'}] |
OUTPUT_DATA_REFERENCES | list | N | {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}} |
SOLVE_PARAMETERS | dict | N | |
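A sketch of a Decision Optimization solve payload shaped after the INPUT_DATA and OUTPUT_DATA schemas above. The lowercase keys and file names are illustrative stand-ins for the DecisionOptimizationMetaNames attributes, and the client call is commented out because it needs a live DO deployment:

```python
# Hypothetical solve payload; "input_data" / "output_data" stand in for
# the DecisionOptimizationMetaNames attributes.
solve_payload = {
    "input_data": [{
        "id": "diet_food.csv",  # placeholder input table name
        "fields": ["name", "unit_cost", "qmin", "qmax"],
        "values": [["Roasted Chicken", 0.84, 0, 10]],
    }],
    "output_data": [{"id": ".*\\.csv"}],  # collect all CSV outputs
}
# job_details = client.deployments.create_job(deployment_id, meta_props=solve_payload)
```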
- class ibm_watsonx_ai.deployments.RuntimeContext(api_client, request_payload_json=None)[source]¶
Class included to keep the interface compatible with the Deployment’s RuntimeContext used in AIServices implementation.
- Parameters:
api_client (APIClient) – initialized APIClient object with a set project ID or space ID. If passed, credentials and project_id/space_id are not required.
request_payload_json (dict, optional) – request payload for testing of the generate/generate_stream call of an AI Service
RuntimeContext initialized for testing purposes before deployment:
context = RuntimeContext(api_client=client, request_payload_json={"field": "value"})
Examples of RuntimeContext usage within AI Service source code:

def deployable_ai_service(context, **custom):
    task_token = context.generate_token()

    def generate(context) -> dict:
        user_token = context.get_token()
        headers = context.get_headers()
        json_body = context.get_json()
        ...
        return {"body": json_body}

    return generate

generate = deployable_ai_service(context)
generate_output = generate(context)  # returns {"body": {"field": "value"}}
Change the JSON body in RuntimeContext:

context.request_payload_json = {"field2": "value2"}
generate = deployable_ai_service(context)
generate_output = generate(context)  # returns {"body": {"field2": "value2"}}
Export/Import¶
- class client.Export(client)[source]¶
- cancel(export_id, space_id=None, project_id=None)[source]¶
Cancel an export job. Either space_id or project_id must be provided.
Note
To delete an export_id job, use
delete()
API.
- Parameters:
export_id (str) – export job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.export_assets.cancel(export_id='6213cf1-252f-424b-b52d-5cdd9814956c', space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
- delete(export_id, space_id=None, project_id=None)[source]¶
Delete the given export_id job. Either space_id or project_id must be provided.
- Parameters:
export_id (str) – export job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.export_assets.delete(export_id='6213cf1-252f-424b-b52d-5cdd9814956c', space_id= '98a53931-a8c0-4c2f-8319-c793155e4598')
- get_details(export_id=None, space_id=None, project_id=None, limit=None, asynchronous=False, get_all=False)[source]¶
Get metadata of a given export job. If no export_id is specified, all export metadata is returned.
- Parameters:
export_id (str, optional) – export job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
export metadata
- Return type:
dict (if export_id is not None) or {“resources”: [dict]} (if export_id is None)
Example:
details = client.export_assets.get_details(export_id, space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
details = client.export_assets.get_details()
details = client.export_assets.get_details(limit=100)
details = client.export_assets.get_details(limit=100, get_all=True)
details = []
for entry in client.export_assets.get_details(limit=100, asynchronous=True, get_all=True):
    details.extend(entry)
- get_exported_content(export_id, space_id=None, project_id=None, file_path=None)[source]¶
Get the exported content as a zip file.
- Parameters:
export_id (str) – export job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
file_path (str, optional) – name of the local file to create; this must be an absolute path, and the file must not already exist
- Returns:
path to the downloaded exported content
- Return type:
str
Example:
client.exports.get_exported_content(export_id, space_id='98a53931-a8c0-4c2f-8319-c793155e4598', file_path='/home/user/my_exported_content.zip')
- static get_id(export_details)[source]¶
Get the ID of the export job from export details.
- Parameters:
export_details (dict) – metadata of the export job
- Returns:
ID of the export job
- Return type:
str
Example:
id = client.export_assets.get_id(export_details)
- list(space_id=None, project_id=None, limit=None)[source]¶
Return export jobs in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed export jobs
- Return type:
pandas.DataFrame
Example:
client.export_assets.list()
- start(meta_props, space_id=None, project_id=None)[source]¶
Start the export. Either space_id or project_id must be provided. In the meta_props, provide exactly one of ALL_ASSETS, ASSET_TYPES, or ASSET_IDS.
In the meta_props:
ALL_ASSETS is a boolean. When set to True, it exports all assets in the given space. It defaults to False, so you only need to provide it when setting it to True.
ASSET_IDS is an array containing the list of asset IDs to be exported.
ASSET_TYPES is used to provide the asset types to be exported. All assets of those types will be exported.
For example: wml_model, wml_model_definition, wml_pipeline, wml_function, wml_experiment, software_specification, hardware_specification, package_extension, script
- Parameters:
meta_props (dict) – metadata, to see available meta names use
client.export_assets.ConfigurationMetaNames.get()
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
- Returns:
Response json
- Return type:
dict
Example:
metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_IDS: ["13a53931-a8c0-4c2f-8319-c793155e7517", "13a53931-a8c0-4c2f-8319-c793155e7518"]
}
details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")

metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_TYPES: ["wml_model"]
}
details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")

metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ALL_ASSETS: True
}
details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")
- class client.Import(client)[source]¶
- cancel(import_id, space_id=None, project_id=None)[source]¶
Cancel an import job. You must provide the space_id or the project_id.
Note
To delete an import_id job, use the delete() API.
- Parameters:
import_id (str) – import job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
Example:
client.import_assets.cancel(import_id='6213cf1-252f-424b-b52d-5cdd9814956c', space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
- delete(import_id, space_id=None, project_id=None)[source]¶
Delete the given import_id job. Either space_id or project_id must be provided.
- Parameters:
import_id (str) – import job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
Example:
client.import_assets.delete(import_id='6213cf1-252f-424b-b52d-5cdd9814956c', space_id= '98a53931-a8c0-4c2f-8319-c793155e4598')
- get_details(import_id=None, space_id=None, project_id=None, limit=None, asynchronous=False, get_all=False)[source]¶
Get metadata of the given import job. If no import_id is specified, all import metadata is returned.
- Parameters:
import_id (str, optional) – import job identifier
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
import(s) metadata
- Return type:
dict (if import_id is not None) or {“resources”: [dict]} (if import_id is None)
Example:
details = client.import_assets.get_details(import_id)
details = client.import_assets.get_details()
details = client.import_assets.get_details(limit=100)
details = client.import_assets.get_details(limit=100, get_all=True)
details = []
for entry in client.import_assets.get_details(limit=100, asynchronous=True, get_all=True):
    details.extend(entry)
- static get_id(import_details)[source]¶
Get ID of the import job from import details.
- Parameters:
import_details (dict) – metadata of the import job
- Returns:
ID of the import job
- Return type:
str
Example:
id = client.import_assets.get_id(import_details)
- list(space_id=None, project_id=None, limit=None)[source]¶
Return import jobs in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed import jobs
- Return type:
pandas.DataFrame
Example:
client.import_assets.list()
- start(file_path, space_id=None, project_id=None)[source]¶
Start the import. You must provide the space_id or the project_id.
- Parameters:
file_path (str) – file path to the zip file with exported assets
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
- Returns:
response json
- Return type:
dict
Example:
details = client.import_assets.start(space_id="98a53931-a8c0-4c2f-8319-c793155e4598", file_path="/home/user/data_to_be_imported.zip")
Factsheets (IBM Cloud only)¶
Warning! Not supported for IBM Cloud Pak® for Data.
- class client.Factsheets(client)[source]¶
Link WML Model to Model Entry.
- list_model_entries(catalog_id=None)[source]¶
Return all WKC Model Entry assets for a catalog.
- Parameters:
catalog_id (str, optional) – catalog ID where you want to register the model. If no catalog_id is provided, WKC Model Entry assets from all catalogs are listed.
- Returns:
all WKC Model Entry assets for a catalog
- Return type:
dict
Example:
model_entries = client.factsheets.list_model_entries(catalog_id)
- register_model_entry(model_id, meta_props, catalog_id=None)[source]¶
Link WML Model to Model Entry
- Parameters:
model_id (str) – ID of the published model/asset
meta_props (dict[str, str]) –
metaprops, to see the available list of meta names use:
client.factsheets.ConfigurationMetaNames.get()
catalog_id (str, optional) – catalog ID where you want to register model
- Returns:
metadata of the registration
- Return type:
dict
Example:
meta_props = {client.factsheets.ConfigurationMetaNames.ASSET_ID: '83a53931-a8c0-4c2f-8319-c793155e7517'}
registration_details = client.factsheets.register_model_entry(model_id, meta_props, catalog_id)
or
meta_props = {
    client.factsheets.ConfigurationMetaNames.NAME: "New model entry",
    client.factsheets.ConfigurationMetaNames.DESCRIPTION: "New model entry"
}
registration_details = client.factsheets.register_model_entry(model_id, meta_props)
- unregister_model_entry(asset_id, catalog_id=None)[source]¶
Unregister WKC Model Entry
- Parameters:
asset_id (str) – ID of the WKC model entry
catalog_id (str, optional) – catalog ID where the asset is stored, when not provided, default client space or project will be taken
Example:
model_entries = client.factsheets.unregister_model_entry(asset_id='83a53931-a8c0-4c2f-8319-c793155e7517', catalog_id='34553931-a8c0-4c2f-8319-c793155e7517')
or
client.set.default_space('98f53931-a8c0-4c2f-8319-c793155e7517')
model_entries = client.factsheets.unregister_model_entry(asset_id='83a53931-a8c0-4c2f-8319-c793155e7517')
- class metanames.FactsheetsMetaNames[source]¶
Set of MetaNames for Factsheets metanames.
Available MetaNames:
MetaName | Type | Required | Example value
ASSET_ID | str | N | 13a53931-a8c0-4c2f-8319-c793155e7517
NAME | str | N | New model entry
DESCRIPTION | str | N | New model entry
MODEL_ENTRY_CATALOG_ID | str | Y | 13a53931-a8c0-4c2f-8319-c793155e7517
Hardware specifications¶
- class client.HwSpec(client)[source]¶
Store and manage hardware specs.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.HwSpecMetaNames object>¶
MetaNames for Hardware Specification.
- delete(hw_spec_id)[source]¶
Delete a hardware specification.
- Parameters:
hw_spec_id (str) – unique ID of the hardware specification to be deleted
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
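The other methods in this class include usage examples; a matching sketch for delete, with a placeholder ID and the call commented out because it needs a live client:

```python
# Placeholder ID of a custom hardware specification (hypothetical value)
hw_spec_id = "13a53931-a8c0-4c2f-8319-c793155e7517"
# status = client.hardware_specifications.delete(hw_spec_id)
# status is "SUCCESS" or "FAILED"
```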
- get_details(hw_spec_id=None, **kwargs)[source]¶
Get hardware specification details.
- Parameters:
hw_spec_id (str) – unique ID of the hardware spec
- Returns:
metadata of the hardware specifications
- Return type:
dict
Example:
hw_spec_details = client.hardware_specifications.get_details(hw_spec_id)
- static get_href(hw_spec_details)[source]¶
Get the URL of hardware specifications.
- Parameters:
hw_spec_details (dict) – details of the hardware specifications
- Returns:
href of the hardware specifications
- Return type:
str
Example:
hw_spec_details = client.hardware_specifications.get_details(hw_spec_id)
hw_spec_href = client.hardware_specifications.get_href(hw_spec_details)
- static get_id(hw_spec_details)[source]¶
Get the ID of a hardware specifications asset.
- Parameters:
hw_spec_details (dict) – metadata of the hardware specifications
- Returns:
unique ID of the hardware specifications
- Return type:
str
Example:
asset_id = client.hardware_specifications.get_id(hw_spec_details)
- get_id_by_name(hw_spec_name)[source]¶
Get the unique ID of a hardware specification for the given name.
- Parameters:
hw_spec_name (str) – name of the hardware specification
- Returns:
unique ID of the hardware specification
- Return type:
str
Example:
asset_id = client.hardware_specifications.get_id_by_name(hw_spec_name)
- static get_uid(hw_spec_details)[source]¶
Get the UID of a hardware specifications asset.
Deprecated: Use
get_id(hw_spec_details)
instead.
- Parameters:
hw_spec_details (dict) – metadata of the hardware specifications
- Returns:
unique ID of the hardware specifications
- Return type:
str
Example:
asset_uid = client.hardware_specifications.get_uid(hw_spec_details)
- get_uid_by_name(hw_spec_name)[source]¶
Get the unique ID of a hardware specification for the given name.
Deprecated: Use
get_id_by_name(hw_spec_name)
instead.
- Parameters:
hw_spec_name (str) – name of the hardware specification
- Returns:
unique ID of the hardware specification
- Return type:
str
Example:
asset_uid = client.hardware_specifications.get_uid_by_name(hw_spec_name)
- list(name=None, limit=None)[source]¶
List hardware specifications in a table format.
- Parameters:
name (str, optional) – name of the hardware specification, used to filter the listed records
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed hardware specifications
- Return type:
pandas.DataFrame
Example:
client.hardware_specifications.list()
- store(meta_props)[source]¶
Create a hardware specification.
- Parameters:
meta_props (dict) –
metadata of the hardware specification configuration. To see available meta names, use:
client.hardware_specifications.ConfigurationMetaNames.get()
- Returns:
metadata of the created hardware specification
- Return type:
dict
Example:
meta_props = {
    client.hardware_specifications.ConfigurationMetaNames.NAME: "custom hardware specification",
    client.hardware_specifications.ConfigurationMetaNames.DESCRIPTION: "Custom hardware specification created with SDK",
    client.hardware_specifications.ConfigurationMetaNames.NODES: {"cpu": {"units": "2"}, "mem": {"size": "128Gi"}, "gpu": {"num_gpu": 1}}
}
client.hardware_specifications.store(meta_props)
Helpers¶
- class ibm_watsonx_ai.helpers.helpers.get_credentials_from_config(env_name, credentials_name, config_path='./config.ini')[source]¶
Load credentials from the config file.
[DEV_LC]
credentials = { }
cos_credentials = { }
- Parameters:
env_name (str) – name of [ENV] defined in the config file
credentials_name (str) – name of credentials
config_path (str) – path to the config file
- Returns:
loaded credentials
- Return type:
dict
Example:
get_credentials_from_config(env_name='DEV_LC', credentials_name='credentials')
Model definitions¶
- class client.ModelDefinition(client)[source]¶
Store and manage model definitions.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.ModelDefinitionMetaNames object>¶
MetaNames for model definition creation.
- create_revision(model_definition_id=None, **kwargs)[source]¶
Create a revision for the given model definition. Revisions are immutable once created. The metadata and attachment of the model definition are taken and a revision is created from them.
- Parameters:
model_definition_id (str) – ID of the model definition
- Returns:
revised metadata of the stored model definition
- Return type:
dict
Example:
model_definition_revision = client.model_definitions.create_revision(model_definition_id)
- delete(model_definition_id=None, **kwargs)[source]¶
Delete a stored model definition.
- Parameters:
model_definition_id (str) – unique ID of the stored model definition
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.model_definitions.delete(model_definition_id)
- download(model_definition_id, filename=None, rev_id=None, **kwargs)[source]¶
Download the content of a model definition asset.
- Parameters:
model_definition_id (str) – unique ID of the model definition asset to be downloaded
filename (str) – filename to be used for the downloaded file
rev_id (str, optional) – revision ID
- Returns:
path to the downloaded asset content
- Return type:
str
Example:
client.model_definitions.download(model_definition_id, "model_definition_file")
- get_details(model_definition_id=None, limit=None, get_all=None, **kwargs)[source]¶
Get metadata of a stored model definition. If no model_definition_id is passed, details for all model definitions are returned.
- Parameters:
model_definition_id (str, optional) – unique ID of the model definition
limit (int, optional) – limit number of fetched records
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
metadata of model definition
- Return type:
dict (if model_definition_id is not None)
Example:
model_definition_details = client.model_definitions.get_details(model_definition_id)
- get_href(model_definition_details)[source]¶
Get the href of a stored model definition.
- Parameters:
model_definition_details (dict) – details of the stored model definition
- Returns:
href of the stored model definition
- Return type:
str
Example:
model_definition_id = client.model_definitions.get_href(model_definition_details)
- get_id(model_definition_details)[source]¶
Get the unique ID of a stored model definition asset.
- Parameters:
model_definition_details (dict) – metadata of the stored model definition asset
- Returns:
unique ID of the stored model definition asset
- Return type:
str
Example:
asset_id = client.model_definitions.get_id(model_definition_details)
- get_revision_details(model_definition_id=None, rev_id=None, **kwargs)[source]¶
Get metadata of a model definition.
- Parameters:
model_definition_id (str) – ID of the model definition
rev_id (str, optional) – ID of the revision. If this parameter is not provided, it returns the latest revision. If there is no latest revision, it returns an error.
- Returns:
metadata of the stored model definition
- Return type:
dict
Example:
script_details = client.model_definitions.get_revision_details(model_definition_id, rev_id)
- get_uid(model_definition_details)[source]¶
Get the UID of the stored model.
Deprecated: Use
get_id(model_definition_details)
instead.
- Parameters:
model_definition_details (dict) – details of the stored model definition
- Returns:
UID of the stored model definition
- Return type:
str
Example:
model_definition_uid = client.model_definitions.get_uid(model_definition_details)
- list(limit=None)[source]¶
Return the stored model definition assets in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed model definitions
- Return type:
pandas.DataFrame
Example:
client.model_definitions.list()
- list_revisions(model_definition_id=None, limit=None, **kwargs)[source]¶
Return revisions of the given model definition asset in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
model_definition_id (str) – unique ID of the model definition
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed model definition revisions
- Return type:
pandas.DataFrame
Example:
client.model_definitions.list_revisions(model_definition_id)
- store(model_definition, meta_props)[source]¶
Create a model definition.
- Parameters:
meta_props (dict) –
metadata of the model definition configuration. To see available meta names, use:
client.model_definitions.ConfigurationMetaNames.get()
model_definition (str) – path to the content file to be uploaded
- Returns:
metadata of the created model definition
- Return type:
dict
Example:
client.model_definitions.store(model_definition, meta_props)
- update(model_definition_id, meta_props=None, file_path=None)[source]¶
Update the model definition with metadata, attachment, or both.
- Parameters:
model_definition_id (str) – ID of the model definition
meta_props (dict) – metadata of the model definition configuration to be updated
file_path (str, optional) – path to the content file to be uploaded
- Returns:
updated metadata of the model definition
- Return type:
dict
Example:
model_definition_details = client.model_definitions.update(model_definition_id, meta_props, file_path)
- class metanames.ModelDefinitionMetaNames[source]¶
Set of MetaNames for Model Definition.
Available MetaNames:
MetaName
Type
Required
Schema
Example value
NAME
str
Y
my_model_definition
DESCRIPTION
str
N
my model_definition
PLATFORM
dict
Y
{'name(required)': 'string', 'versions(required)': ['versions']}
{'name': 'python', 'versions': ['3.10']}
VERSION
str
Y
1.0
COMMAND
str
N
python3 convolutional_network.py
CUSTOM
dict
N
{'field1': 'value1'}
SPACE_UID
str
N
3c1ce536-20dc-426e-aac7-7284cf3befc6
Package extensions¶
- class client.PkgExtn(client)[source]¶
Store and manage software Packages Extension specs.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.PkgExtnMetaNames object>¶
MetaNames for Package Extensions creation.
- delete(pkg_extn_id)[source]¶
Delete a package extension.
- Parameters:
pkg_extn_id (str) – unique ID of the package extension
- Returns:
status (“SUCCESS” or “FAILED”) if deleted synchronously or dictionary with response
- Return type:
str or dict
Example:
client.package_extensions.delete(pkg_extn_id)
- download(pkg_extn_id, filename)[source]¶
Download a package extension.
- Parameters:
pkg_extn_id (str) – unique ID of the package extension to be downloaded
filename (str) – filename to be used for the downloaded file
- Returns:
path to the downloaded package extension content
- Return type:
str
Example:
client.package_extensions.download(pkg_extn_id, "sample_conda.yml/custom_library.zip")
- get_details(pkg_extn_id)[source]¶
Get package extension details.
- Parameters:
pkg_extn_id (str) – unique ID of the package extension
- Returns:
details of the package extension
- Return type:
dict
Example:
pkg_extn_details = client.package_extensions.get_details(pkg_extn_id)
- static get_href(pkg_extn_details)[source]¶
Get the URL of a stored package extension.
- Parameters:
pkg_extn_details (dict) – details of the package extension
- Returns:
href of the package extension
- Return type:
str
Example:
pkg_extn_details = client.package_extensions.get_details(pkg_extn_id)
pkg_extn_href = client.package_extensions.get_href(pkg_extn_details)
- static get_id(pkg_extn_details)[source]¶
Get the unique ID of a package extension.
- Parameters:
pkg_extn_details (dict) – details of the package extension
- Returns:
unique ID of the package extension
- Return type:
str
Example:
asset_id = client.package_extensions.get_id(pkg_extn_details)
- get_id_by_name(pkg_extn_name)[source]¶
Get the ID of a package extension.
- Parameters:
pkg_extn_name (str) – name of the package extension
- Returns:
unique ID of the package extension
- Return type:
str
Example:
asset_id = client.package_extensions.get_id_by_name(pkg_extn_name)
- list()[source]¶
List the package extensions in a table format.
- Returns:
pandas.DataFrame with listed package extensions
- Return type:
pandas.DataFrame
Example:
client.package_extensions.list()
- store(meta_props, file_path)[source]¶
Create a package extension.
- Parameters:
meta_props (dict) –
metadata of the package extension. To see available meta names, use:
client.package_extensions.ConfigurationMetaNames.get()
file_path (str) – path to the file to be uploaded as a package extension
- Returns:
metadata of the package extension
- Return type:
dict
Example:
meta_props = {
    client.package_extensions.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_props, file_path="/path/to/file")
Parameter Sets¶
- class client.ParameterSets(client)[source]¶
Store and manage parameter sets.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.ParameterSetsMetaNames object>¶
MetaNames for Parameter Sets creation.
- create(meta_props)[source]¶
Create a parameter set.
- Parameters:
meta_props (dict) –
metadata of the space configuration. To see available meta names, use:
client.parameter_sets.ConfigurationMetaNames.get()
- Returns:
metadata of the stored parameter set
- Return type:
dict
Example:
meta_props = {
    client.parameter_sets.ConfigurationMetaNames.NAME: "Example name",
    client.parameter_sets.ConfigurationMetaNames.DESCRIPTION: "Example description",
    client.parameter_sets.ConfigurationMetaNames.PARAMETERS: [
        {
            "name": "string",
            "description": "string",
            "prompt": "string",
            "type": "string",
            "subtype": "string",
            "value": "string",
            "valid_values": ["string"]
        }
    ],
    client.parameter_sets.ConfigurationMetaNames.VALUE_SETS: [
        {
            "name": "string",
            "values": [
                {
                    "name": "string",
                    "value": "string"
                }
            ]
        }
    ]
}
parameter_sets_details = client.parameter_sets.create(meta_props)
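Because PARAMETERS and VALUE_SETS are plain Python lists of dicts, they can be built and sanity-checked locally before calling create. A minimal sketch, assuming "name" and "type" are the fields you always want present (the validate_parameters helper is illustrative, not part of the SDK):

```python
# Illustrative helper (not part of ibm_watsonx_ai): checks that each
# parameter entry carries the fields we assume are always needed.
REQUIRED_PARAM_FIELDS = {"name", "type"}

def validate_parameters(parameters):
    """Return a list of error strings for malformed parameter entries."""
    errors = []
    for i, param in enumerate(parameters):
        missing = REQUIRED_PARAM_FIELDS - set(param)
        if missing:
            errors.append(f"parameter {i} is missing fields: {sorted(missing)}")
    return errors

parameters = [
    {
        "name": "threshold",
        "description": "decision threshold",
        "prompt": "Enter a threshold",
        "type": "number",
        "subtype": "float",
        "value": "0.5",
        "valid_values": [],
    }
]

assert validate_parameters(parameters) == []
assert validate_parameters([{"name": "x"}]) == ["parameter 0 is missing fields: ['type']"]
```

Catching a malformed entry locally is cheaper than a failed create call against the service.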
- delete(parameter_set_id)[source]¶
Delete a parameter set.
- Parameters:
parameter_set_id (str) – unique ID of the parameter set
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.parameter_sets.delete(parameter_set_id)
- get_details(parameter_set_id=None)[source]¶
Get parameter set details. If no parameter_set_id is passed, details for all parameter sets are returned.
- Parameters:
parameter_set_id (str, optional) – unique ID of the parameter set
- Returns:
metadata of the stored parameter set(s)
- Return type:
dict - if parameter_set_id is not None
{“parameter_sets”: [dict]} - if parameter_set_id is None
Examples
If parameter_set_id is None:
parameter_sets_details = client.parameter_sets.get_details()
If parameter_set_id is given:
parameter_sets_details = client.parameter_sets.get_details(parameter_set_id)
- get_id_by_name(parameter_set_name)[source]¶
Get the unique ID of a parameter set.
- Parameters:
parameter_set_name (str) – name of the parameter set
- Returns:
unique ID of the parameter set
- Return type:
str
Example:
asset_id = client.parameter_sets.get_id_by_name(parameter_set_name)
- list(limit=None)[source]¶
List parameter sets in a table format.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed parameter sets
- Return type:
pandas.DataFrame
Example:
client.parameter_sets.list()
- update(parameter_set_id, new_data, file_path)[source]¶
Update a parameter set.
- Parameters:
parameter_set_id (str) – unique ID of the parameter set
new_data (str, list) – new data for the parameters
file_path (str) – name of the element to update: "description", "parameters", or "value_sets"
- Returns:
metadata of the updated parameter sets
- Return type:
dict
Example for description
new_description_data = "New description"
parameter_set_details = client.parameter_sets.update(parameter_set_id, new_description_data, "description")
Example for parameters
new_parameters_data = [
    {
        "name": "string",
        "description": "new_description",
        "prompt": "new_string",
        "type": "new_string",
        "subtype": "new_string",
        "value": "new_string",
        "valid_values": ["new_string"]
    }
]
parameter_set_details = client.parameter_sets.update(parameter_set_id, new_parameters_data, "parameters")
Example for value_sets
new_value_sets_data = [
    {
        "name": "string",
        "values": [
            {
                "name": "string",
                "value": "new_string"
            }
        ]
    }
]
parameter_set_details = client.parameter_sets.update(parameter_set_id, new_value_sets_data, "value_sets")
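All three update calls above follow the same shape: the third argument names which element of the parameter set the new data replaces. That dispatch style can be sketched locally with a stand-in class (ParameterSetDraft is hypothetical, not the SDK):

```python
# Hypothetical local model of the update dispatch (not the SDK itself):
# the third argument selects which element new_data replaces.
class ParameterSetDraft:
    def __init__(self, name):
        self.data = {"name": name, "description": "", "parameters": [], "value_sets": []}

    def update(self, new_data, path):
        if path not in self.data:
            raise ValueError(f"unknown path: {path}")
        self.data[path] = new_data   # replace the named element wholesale
        return self.data

draft = ParameterSetDraft("Example name")
draft.update("New description", "description")
draft.update([{"name": "p1", "value": "v1"}], "parameters")
assert draft.data["description"] == "New description"
assert draft.data["parameters"][0]["name"] == "p1"
```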
- class metanames.ParameterSetsMetaNames[source]¶
Set of MetaNames for Parameter Sets.
Available MetaNames:
MetaName
Type
Required
Example value
NAME
str
Y
sample name
DESCRIPTION
str
N
sample description
PARAMETERS
list
Y
[{'name': 'string', 'description': 'string', 'prompt': 'string', 'type': 'string', 'subtype': 'string', 'value': 'string', 'valid_values': ['string']}]
VALUE_SETS
list
N
[{'name': 'string', 'values': [{'name': 'string', 'value': 'string'}]}]
Repository¶
- class client.Repository(client)[source]¶
Store and manage models, functions, spaces, pipelines, and experiments using the Watson Machine Learning Repository.
To view ModelMetaNames, use:
client.repository.ModelMetaNames.show()
To view ExperimentMetaNames, use:
client.repository.ExperimentMetaNames.show()
To view FunctionMetaNames, use:
client.repository.FunctionMetaNames.show()
To view PipelineMetaNames, use:
client.repository.PipelineMetaNames.show()
To view AIServiceMetaNames, use:
client.repository.AIServiceMetaNames.show()
- class ModelAssetTypes(DO_DOCPLEX_20_1='do-docplex_20.1', DO_OPL_20_1='do-opl_20.1', DO_CPLEX_20_1='do-cplex_20.1', DO_CPO_20_1='do-cpo_20.1', DO_DOCPLEX_22_1='do-docplex_22.1', DO_OPL_22_1='do-opl_22.1', DO_CPLEX_22_1='do-cplex_22.1', DO_CPO_22_1='do-cpo_22.1', WML_HYBRID_0_1='wml-hybrid_0.1', PMML_4_2_1='pmml_4.2.1', PYTORCH_ONNX_1_12='pytorch-onnx_1.12', PYTORCH_ONNX_RT22_2='pytorch-onnx_rt22.2', PYTORCH_ONNX_2_0='pytorch-onnx_2.0', PYTORCH_ONNX_RT23_1='pytorch-onnx_rt23.1', SCIKIT_LEARN_1_1='scikit-learn_1.1', MLLIB_3_3='mllib_3.3', SPSS_MODELER_17_1='spss-modeler_17.1', SPSS_MODELER_18_1='spss-modeler_18.1', SPSS_MODELER_18_2='spss-modeler_18.2', TENSORFLOW_2_9='tensorflow_2.9', TENSORFLOW_RT22_2='tensorflow_rt22.2', TENSORFLOW_2_12='tensorflow_2.12', TENSORFLOW_RT23_1='tensorflow_rt23.1', XGBOOST_1_6='xgboost_1.6', PROMPT_TUNE_1_0='prompt_tune_1.0', CUSTOM_FOUNDATION_MODEL_1_0='custom_foundation_model_1.0', CURATED_FOUNDATION_MODEL_1_0='curated_foundation_model_1.0')[source]¶
Data class with supported model asset types.
- create_ai_service_revision(ai_service_id, **kwargs)[source]¶
Create a new AI service revision.
- Parameters:
ai_service_id (str) – unique ID of the AI service
- Returns:
revised metadata of the stored AI service
- Return type:
dict
Example:
client.repository.create_ai_service_revision(ai_service_id)
- create_experiment_revision(experiment_id)[source]¶
Create a new experiment revision.
- Parameters:
experiment_id (str) – unique ID of the stored experiment
- Returns:
new revision details of the stored experiment
- Return type:
dict
Example:
experiment_revision_artifact = client.repository.create_experiment_revision(experiment_id)
- create_function_revision(function_id=None, **kwargs)[source]¶
Create a new function revision.
- Parameters:
function_id (str) – unique ID of the function
- Returns:
revised metadata of the stored function
- Return type:
dict
Example:
client.repository.create_function_revision(function_id)
- create_model_revision(model_id=None, **kwargs)[source]¶
Create a revision for a given model ID.
- Parameters:
model_id (str) – ID of the stored model
- Returns:
revised metadata of the stored model
- Return type:
dict
Example:
model_details = client.repository.create_model_revision(model_id)
- create_pipeline_revision(pipeline_id=None, **kwargs)[source]¶
Create a new pipeline revision.
- Parameters:
pipeline_id (str) – unique ID of the pipeline
- Returns:
details of the pipeline revision
- Return type:
dict
Example:
client.repository.create_pipeline_revision(pipeline_id)
- create_revision(artifact_id=None, **kwargs)[source]¶
Create a revision for the given artifact_id.
- Parameters:
artifact_id (str) – unique ID of a stored model, experiment, function, or pipeline
- Returns:
artifact new revision metadata
- Return type:
dict
Example:
details = client.repository.create_revision(artifact_id)
- delete(artifact_id=None, **kwargs)[source]¶
Delete a model, experiment, pipeline, function, or AI service from the repository.
- Parameters:
artifact_id (str) – unique ID of the stored model, experiment, function, pipeline, or AI service
- Returns:
status “SUCCESS” if deletion is successful
- Return type:
Literal[“SUCCESS”]
Example:
client.repository.delete(artifact_id)
- download(artifact_id=None, filename='downloaded_artifact.tar.gz', rev_id=None, format=None, **kwargs)[source]¶
Download the configuration file for an artifact with the specified ID.
- Parameters:
artifact_id (str) – unique ID of the model or function
filename (str, optional) – name of the file to which the artifact content will be downloaded
rev_id (str, optional) – revision ID
format (str, optional) – format of the content, applicable for models
- Returns:
path to the downloaded artifact content
- Return type:
str
Examples
client.repository.download(model_id, 'my_model.tar.gz')
client.repository.download(model_id, 'my_model.json')  # if the original model was saved as json; works only for xgboost 1.3
- get_ai_service_details(ai_service_id=None, limit=None, asynchronous=False, get_all=False, spec_state=None, ai_service_name=None, **kwargs)[source]¶
Get the metadata of AI service(s). If neither AI service ID nor AI service name is specified, all AI service metadata is returned. If only the AI service name is specified, metadata of AI services with that name is returned (if any).
- Parameters:
ai_service_id (str, optional) – ID of the AI service
limit (int | None, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator, defaults to False
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks, defaults to False
spec_state (SpecStates | None, optional) – software specification state, can be used only when ai_service_id is None
ai_service_name (str, optional) – name of the AI service, can be used only when ai_service_id is None
- Returns:
metadata of the AI service
- Return type:
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Note
In the current implementation, setting spec_state=True might break the set limit and return fewer records than stated in the set limit.
Examples:
ai_service_details = client.repository.get_ai_service_details(ai_service_id)
ai_service_details = client.repository.get_ai_service_details(ai_service_name=ai_service_name)
ai_service_details = client.repository.get_ai_service_details()
ai_service_details = client.repository.get_ai_service_details(limit=100)
ai_service_details = client.repository.get_ai_service_details(limit=100, get_all=True)
ai_service_details = []
for entry in client.repository.get_ai_service_details(limit=100, asynchronous=True, get_all=True):
    ai_service_details.extend(entry)
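With asynchronous=True, the method works as a generator that yields results in limit-sized chunks, which the loop above flattens with extend. The consumption pattern itself is plain Python; a self-contained sketch with a fake paginated source standing in for the SDK call:

```python
# Stand-in for the SDK's chunked generator: yields records in pages of
# `limit`, mimicking asynchronous=True (fake data, no client needed).
def fetch_details_chunked(records, limit):
    for start in range(0, len(records), limit):
        yield records[start:start + limit]

all_records = [{"id": str(i)} for i in range(7)]

collected = []
for chunk in fetch_details_chunked(all_records, limit=3):
    collected.extend(chunk)   # same shape as the extend() loop above

assert len(collected) == 7
assert [len(c) for c in fetch_details_chunked(all_records, 3)] == [3, 3, 1]
```

The same consumption pattern applies to get_experiment_details, get_function_details, get_model_details, and get_pipeline_details.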
- static get_ai_service_id(ai_service_details)[source]¶
Get the ID of a stored AI service.
- Parameters:
ai_service_details (dict) – metadata of the stored AI service
- Returns:
ID of the stored AI service
- Return type:
str
Example:
ai_service_details = client.repository.get_ai_service_details(ai_service_id)
ai_service_id = client.repository.get_ai_service_id(ai_service_details)
- get_ai_service_revision_details(ai_service_id, rev_id, **kwargs)[source]¶
Get the metadata of a specific revision of a stored AI service.
- Parameters:
ai_service_id (str) – unique ID of the stored AI service
rev_id (str) – unique ID of the AI service revision
- Returns:
metadata of the stored AI service revision
- Return type:
dict
Example:
ai_service_revision_details = client.repository.get_ai_service_revision_details(ai_service_id, rev_id)
- get_details(artifact_id=None, spec_state=None, artifact_name=None, **kwargs)[source]¶
Get metadata of stored artifacts. If artifact_id and artifact_name are not specified, the metadata of all models, experiments, functions, pipelines, and AI services is returned. If only artifact_name is specified, metadata of all artifacts with that name is returned.
- Parameters:
artifact_id (str, optional) – unique ID of the stored model, experiment, function, or pipeline
spec_state (SpecStates, optional) – software specification state, can be used only when artifact_id is None
artifact_name (str, optional) – name of the stored model, experiment, function, pipeline, or AI service, can be used only when artifact_id is None
- Returns:
metadata of the stored artifact(s)
- Return type:
dict (if artifact_id is not None)
{“models”: dict, “experiments”: dict, “pipeline”: dict, “functions”: dict, “ai_service”: dict} (if artifact_id is None)
Examples
details = client.repository.get_details(artifact_id)
details = client.repository.get_details(artifact_name='Sample_model')
details = client.repository.get_details()
Example of getting all repository assets with deprecated software specifications:
from ibm_watsonx_ai.lifecycle import SpecStates

details = client.repository.get_details(spec_state=SpecStates.DEPRECATED)
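The spec_state filter narrows the result set to artifacts whose software specification is in a given lifecycle state. The same filtering can be reproduced locally on already-fetched details; a sketch, assuming each resource carries a spec-state string (the "spec_state" field name here is illustrative, not the actual response schema):

```python
# Illustrative local filter over fetched details; the "spec_state"
# field name is an assumption, not the real response schema.
DEPRECATED = "deprecated"

def filter_by_spec_state(resources, state):
    return [r for r in resources if r.get("spec_state") == state]

resources = [
    {"id": "m1", "spec_state": "supported"},
    {"id": "m2", "spec_state": DEPRECATED},
]
assert [r["id"] for r in filter_by_spec_state(resources, DEPRECATED)] == ["m2"]
```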
- get_experiment_details(experiment_id=None, limit=None, asynchronous=False, get_all=False, experiment_name=None, **kwargs)[source]¶
Get metadata of the experiment(s). If neither experiment ID nor experiment name is specified, all experiment metadata is returned. If only experiment name is specified, metadata of experiments with the name is returned (if any).
- Parameters:
experiment_id (str, optional) – ID of the experiment
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
experiment_name (str, optional) – name of the experiment, can be used only when experiment_id is None
- Returns:
experiment metadata
- Return type:
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Example:
experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_details = client.repository.get_experiment_details(experiment_name='Sample_experiment')
experiment_details = client.repository.get_experiment_details()
experiment_details = client.repository.get_experiment_details(limit=100)
experiment_details = client.repository.get_experiment_details(limit=100, get_all=True)
experiment_details = []
for entry in client.repository.get_experiment_details(limit=100, asynchronous=True, get_all=True):
    experiment_details.extend(entry)
- static get_experiment_href(experiment_details)[source]¶
Get the href of a stored experiment.
- Parameters:
experiment_details (dict) – metadata of the stored experiment
- Returns:
href of the stored experiment
- Return type:
str
Example:
experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_href = client.repository.get_experiment_href(experiment_details)
- static get_experiment_id(experiment_details)[source]¶
Get the unique ID of a stored experiment.
- Parameters:
experiment_details (dict) – metadata of the stored experiment
- Returns:
unique ID of the stored experiment
- Return type:
str
Example:
experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_id = client.repository.get_experiment_id(experiment_details)
- get_experiment_revision_details(experiment_id, rev_id, **kwargs)[source]¶
Get metadata of a stored experiment's revision.
- Parameters:
experiment_id (str) – ID of the stored experiment
rev_id (str) – revision ID of the stored experiment
- Returns:
revision metadata of the stored experiment
- Return type:
dict
Example:
experiment_details = client.repository.get_experiment_revision_details(experiment_id, rev_id)
- get_function_details(function_id=None, limit=None, asynchronous=False, get_all=False, spec_state=None, function_name=None, **kwargs)[source]¶
Get metadata of function(s). If neither function ID nor function name is specified, the metadata of all functions is returned. If only function name is specified, metadata of functions with the name is returned (if any).
- Parameters:
function_id (str, optional) – ID of the function
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
spec_state (SpecStates, optional) – software specification state, can be used only when function_id is None
function_name (str, optional) – name of the function, can be used only when function_id is None
- Returns:
metadata of the function
- Return type:
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Note
In the current implementation, setting spec_state=True might break the set limit and return fewer records than stated in the set limit.
Examples
function_details = client.repository.get_function_details(function_id)
function_details = client.repository.get_function_details(function_name='Sample_function')
function_details = client.repository.get_function_details()
function_details = client.repository.get_function_details(limit=100)
function_details = client.repository.get_function_details(limit=100, get_all=True)
function_details = []
for entry in client.repository.get_function_details(limit=100, asynchronous=True, get_all=True):
    function_details.extend(entry)
- static get_function_href(function_details)[source]¶
Get the URL of a stored function.
- Parameters:
function_details (dict) – details of the stored function
- Returns:
href of the stored function
- Return type:
str
Example:
function_details = client.repository.get_function_details(function_id)
function_url = client.repository.get_function_href(function_details)
- static get_function_id(function_details)[source]¶
Get the ID of a stored function.
- Parameters:
function_details (dict) – metadata of the stored function
- Returns:
ID of stored function
- Return type:
str
Example:
function_details = client.repository.get_function_details(function_id)
function_id = client.repository.get_function_id(function_details)
- get_function_revision_details(function_id, rev_id, **kwargs)[source]¶
Get metadata of a specific revision of a stored function.
- Parameters:
function_id (str) – unique ID of the stored function
rev_id (str) – unique ID of the function revision
- Returns:
stored function revision metadata
- Return type:
dict
Example:
function_revision_details = client.repository.get_function_revision_details(function_id, rev_id)
- get_id_by_name(artifact_name)[source]¶
Get the ID of a stored artifact by name.
- Parameters:
artifact_name (str) – name of the stored artifact
- Returns:
ID of the stored artifact if exactly one with the ‘artifact_name’ exists. Otherwise, raise an error.
- Return type:
str
Example:
artifact_id = client.repository.get_id_by_name(artifact_name)
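The "exactly one match, otherwise raise" contract of get_id_by_name can be sketched locally with a toy resource list (the real method queries the repository; id_by_name below is illustrative only):

```python
# Toy sketch of the get_id_by_name contract: exactly one name match
# returns its ID; zero or several matches raise an error.
def id_by_name(resources, name):
    matches = [r["id"] for r in resources if r["name"] == name]
    if len(matches) != 1:
        raise ValueError(f"expected exactly 1 artifact named {name!r}, found {len(matches)}")
    return matches[0]

resources = [{"id": "a1", "name": "model_a"}, {"id": "a2", "name": "model_b"}]
assert id_by_name(resources, "model_a") == "a1"

try:
    id_by_name(resources, "missing")
    raised = False
except ValueError as exc:
    raised = "found 0" in str(exc)
assert raised
```

This is why get_id_by_name is only safe when artifact names are unique in the space or project.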
- get_model_details(model_id=None, limit=None, asynchronous=False, get_all=False, spec_state=None, model_name=None, **kwargs)[source]¶
Get metadata of stored models. If neither model ID nor model name is specified, the metadata of all models is returned. If only model name is specified, metadata of models with the name is returned (if any).
- Parameters:
model_id (str, optional) – ID of the stored model, definition, or pipeline
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
spec_state (SpecStates, optional) – software specification state, can be used only when model_id is None
model_name (str, optional) – name of the stored model, definition, or pipeline, can be used only when model_id is None
- Returns:
metadata of the stored model(s)
- Return type:
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Note
In the current implementation, setting spec_state=True might break the set limit and return fewer records than stated in the set limit.
Example:
model_details = client.repository.get_model_details(model_id)
models_details = client.repository.get_model_details(model_name='Sample_model')
models_details = client.repository.get_model_details()
models_details = client.repository.get_model_details(limit=100)
models_details = client.repository.get_model_details(limit=100, get_all=True)
models_details = []
for entry in client.repository.get_model_details(limit=100, asynchronous=True, get_all=True):
    models_details.extend(entry)
- static get_model_href(model_details)[source]¶
Get the URL of a stored model.
- Parameters:
model_details (dict) – details of the stored model
- Returns:
URL of the stored model
- Return type:
str
Example:
model_url = client.repository.get_model_href(model_details)
- static get_model_id(model_details)[source]¶
Get the ID of a stored model.
- Parameters:
model_details (dict) – details of the stored model
- Returns:
ID of the stored model
- Return type:
str
Example:
model_id = client.repository.get_model_id(model_details)
- get_model_revision_details(model_id=None, rev_id=None, **kwargs)[source]¶
Get metadata of a stored model’s specific revision.
- Parameters:
model_id (str) – ID of the stored model, definition, or pipeline
rev_id (str) – unique ID of the stored model revision
- Returns:
metadata of the stored model(s)
- Return type:
dict
Example:
model_details = client.repository.get_model_revision_details(model_id, rev_id)
- get_pipeline_details(pipeline_id=None, limit=None, asynchronous=False, get_all=False, pipeline_name=None, **kwargs)[source]¶
Get metadata of stored pipeline(s). If neither pipeline ID nor pipeline name is specified, the metadata of all pipelines is returned. If only pipeline name is specified, metadata of pipelines with the name is returned (if any).
- Parameters:
pipeline_id (str, optional) – ID of the pipeline
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
pipeline_name (str, optional) – name of the pipeline, can be used only when pipeline_id is None
- Returns:
metadata of pipeline(s)
- Return type:
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Example:
pipeline_details = client.repository.get_pipeline_details(pipeline_id)
pipeline_details = client.repository.get_pipeline_details(pipeline_name='Sample_pipeline')
pipeline_details = client.repository.get_pipeline_details()
pipeline_details = client.repository.get_pipeline_details(limit=100)
pipeline_details = client.repository.get_pipeline_details(limit=100, get_all=True)
pipeline_details = []
for entry in client.repository.get_pipeline_details(limit=100, asynchronous=True, get_all=True):
    pipeline_details.extend(entry)
- static get_pipeline_href(pipeline_details)[source]¶
Get the href from pipeline details.
- Parameters:
pipeline_details (dict) – metadata of the stored pipeline
- Returns:
href of the pipeline
- Return type:
str
Example:
pipeline_details = client.repository.get_pipeline_details(pipeline_id)
pipeline_href = client.repository.get_pipeline_href(pipeline_details)
- static get_pipeline_id(pipeline_details)[source]¶
Get the pipeline ID from pipeline details.
- Parameters:
pipeline_details (dict) – metadata of the stored pipeline
- Returns:
unique ID of the pipeline
- Return type:
str
Example:
pipeline_id = client.repository.get_pipeline_id(pipeline_details)
- get_pipeline_revision_details(pipeline_id=None, rev_id=None, **kwargs)[source]¶
Get metadata of a pipeline revision.
- Parameters:
pipeline_id (str) – ID of the stored pipeline
rev_id (str) – revision ID of the stored pipeline
- Returns:
revised metadata of the stored pipeline
- Return type:
dict
Example:
pipeline_details = client.repository.get_pipeline_revision_details(pipeline_id, rev_id)
Note
The rev_id parameter is not applicable on the Cloud platform.
- list(framework_filter=None)[source]¶
Get and list stored models, pipelines, functions, experiments, and AI services in a table/DataFrame format. Only the first 50 records are shown.
- Parameters:
framework_filter (str, optional) – get only the frameworks with the desired names
- Returns:
DataFrame with listed names and IDs of stored models
- Return type:
pandas.DataFrame
Example:
client.repository.list()
client.repository.list(framework_filter='prompt_tune')
- list_ai_service_revisions(ai_service_id, limit=None)[source]¶
Print all revisions for a given AI service ID in a table format.
- Parameters:
ai_service_id (str) – unique ID of the stored AI service
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed revisions
- Return type:
pandas.DataFrame
Example:
client.repository.list_ai_service_revisions(ai_service_id)
- list_ai_services(limit=None)[source]¶
Return stored AI services in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed AI services
- Return type:
pandas.DataFrame
Example:
client.repository.list_ai_services()
- list_experiments(limit=None)[source]¶
List stored experiments in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed experiments
- Return type:
pandas.DataFrame
Example:
client.repository.list_experiments()
- list_experiments_revisions(experiment_id=None, limit=None, **kwargs)[source]¶
Print all revisions for a given experiment ID in a table format.
- Parameters:
experiment_id (str) – unique ID of the stored experiment
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed revisions
- Return type:
pandas.DataFrame
Example:
client.repository.list_experiments_revisions(experiment_id)
- list_functions(limit=None)[source]¶
Return stored functions in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed functions
- Return type:
pandas.DataFrame
Example:
client.repository.list_functions()
- list_functions_revisions(function_id=None, limit=None, **kwargs)[source]¶
Print all revisions for a given function ID in a table format.
- Parameters:
function_id (str) – unique ID of the stored function
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed revisions
- Return type:
pandas.DataFrame
Example:
client.repository.list_functions_revisions(function_id)
- list_models(limit=None, asynchronous=False, get_all=False)[source]¶
List stored models in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
pandas.DataFrame with listed models or generator if asynchronous is set to True
- Return type:
pandas.DataFrame | Generator
Example:
client.repository.list_models()
client.repository.list_models(limit=100)
client.repository.list_models(limit=100, get_all=True)
[entry for entry in client.repository.list_models(limit=100, asynchronous=True, get_all=True)]
- list_models_revisions(model_id=None, limit=None, **kwargs)[source]¶
Print all revisions for the given model ID in a table format.
- Parameters:
model_id (str) – unique ID of the stored model
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed revisions
- Return type:
pandas.DataFrame
Example:
client.repository.list_models_revisions(model_id)
- list_pipelines(limit=None)[source]¶
List stored pipelines in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed pipelines
- Return type:
pandas.DataFrame
Example:
client.repository.list_pipelines()
- list_pipelines_revisions(pipeline_id=None, limit=None, **kwargs)[source]¶
List all revisions for a given pipeline ID in a table format.
- Parameters:
pipeline_id (str) – unique ID of the stored pipeline
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed revisions
- Return type:
pandas.DataFrame
Example:
client.repository.list_pipelines_revisions(pipeline_id)
- load(artifact_id=None, **kwargs)[source]¶
Load a model from the repository into an object in a local environment.
Note
The use of the load() method is restricted and not permitted for AutoAI models.
- Parameters:
artifact_id (str) – ID of the stored model
- Returns:
trained model
- Return type:
object
Example
model = client.repository.load(model_id)
- promote_model(model_id, source_project_id, target_space_id)[source]¶
Promote a model from a project to space. Supported only for IBM Cloud Pak® for Data.
Deprecated: Use client.spaces.promote(asset_id, source_project_id, target_space_id) instead.
- store_ai_service(ai_service, meta_props)[source]¶
Create an AI service asset.
- You can use one of the following as an ai_service:
filepath to a gz file
a generator function that takes no arguments, or only arguments with primitive Python default values, and returns a generate function
- Parameters:
ai_service (str | Callable) – path to a file with an archived AI service function’s content or a generator function (as described above)
meta_props (dict) – metadata for storing an AI service asset. To see available meta names use
client.repository.AIServiceMetaNames.show()
- Returns:
metadata of the stored AI service
- Return type:
dict
Examples:
The simplest use of an AI service:
request_documentation = {
    "application/json": {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "parameters": {
                "properties": {
                    "max_new_tokens": {"type": "integer"},
                    "top_p": {"type": "number"},
                },
                "required": ["max_new_tokens", "top_p"],
            },
        },
        "required": ["query"],
    }
}

response_documentation = {
    "application/json": {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "result": {"type": "string"}
        },
        "required": ["query", "result"],
    }
}

meta_props = {
    client.repository.AIServiceMetaNames.NAME: "AI service example",
    client.repository.AIServiceMetaNames.DESCRIPTION: "This is an AI service function",
    client.repository.AIServiceMetaNames.SOFTWARE_SPEC_ID: "53dc4cf1-252f-424b-b52d-5cdd9814987f",
    client.repository.AIServiceMetaNames.REQUEST_DOCUMENTATION: request_documentation,
    client.repository.AIServiceMetaNames.RESPONSE_DOCUMENTATION: response_documentation
}

def deployable_ai_service(context, params={"k1": "v1"}, **kwargs):
    # imports
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    task_token = context.generate_token()
    outer_context = context
    url = "https://us-south.ml.cloud.ibm.com"
    project_id = "53dc4cf1-252f-424b-b52d-5cdd9814987f"

    def generate(context):
        task_token = outer_context.generate_token()
        payload = context.get_json()
        model = ModelInference(
            model_id="google/flan-t5-xl",
            credentials=Credentials(url=url, token=task_token),
            project_id=project_id)
        response = model.generate_text(payload['query'])
        response_body = {'query': payload['query'], 'result': response}
        return {'body': response_body}

    return generate

stored_ai_service_details = client.repository.store_ai_service(deployable_ai_service, meta_props)
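Because the deployable function and its inner generate function are plain Python, the pattern can be exercised locally before storing the asset. The sketch below is illustrative only: FakeContext is a hypothetical stand-in for the runtime context object (the real one also exposes generate_token()).

```python
# Service-free sketch of the pattern store_ai_service expects: the outer
# function's extra arguments all have primitive defaults, and it returns
# the inner `generate` function. FakeContext is a local stand-in for the
# runtime context.
class FakeContext:
    def __init__(self, body):
        self._body = body

    def get_json(self):
        return self._body


def deployable_ai_service(context, greeting="Hello"):
    def generate(context):
        payload = context.get_json()
        result = f"{greeting}, {payload['query']}"
        return {"body": {"query": payload["query"], "result": result}}

    return generate


generate = deployable_ai_service(FakeContext({}))
response = generate(FakeContext({"query": "watsonx"}))
# response["body"]["result"] == "Hello, watsonx"
```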
- store_experiment(meta_props)[source]¶
Create an experiment.
- Parameters:
meta_props (dict) –
metadata of the experiment configuration. To see available meta names, use:
client.repository.ExperimentMetaNames.get()
- Returns:
metadata of the stored experiment
- Return type:
dict
Example:
metadata = {
    client.repository.ExperimentMetaNames.NAME: 'my_experiment',
    client.repository.ExperimentMetaNames.EVALUATION_METRICS: ['accuracy'],
    client.repository.ExperimentMetaNames.TRAINING_REFERENCES: [
        {'pipeline': {'href': pipeline_href_1}},
        {'pipeline': {'href': pipeline_href_2}}
    ]
}
experiment_details = client.repository.store_experiment(meta_props=metadata)
experiment_href = client.repository.get_experiment_href(experiment_details)
- store_function(function, meta_props)[source]¶
Create a function.
- You can use one of the following as a function:
filepath to a gz file
a ‘score’ function reference, where the referenced function will be deployed
a generator function that takes no arguments, or only arguments with primitive Python default values, and returns a ‘score’ function
- Parameters:
function (str or function) – path to file with archived function content or function (as described above)
meta_props (str or dict) – metadata or name of the function; to see available meta names, use
client.repository.FunctionMetaNames.show()
- Returns:
stored function metadata
- Return type:
dict
Examples
The simplest use (with a score function):
meta_props = {
    client.repository.FunctionMetaNames.NAME: "function",
    client.repository.FunctionMetaNames.DESCRIPTION: "This is an AI function",
    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: "53dc4cf1-252f-424b-b52d-5cdd9814987f"
}

def score(payload):
    values = [[row[0] * row[1]] for row in payload['values']]
    return {'fields': ['multiplication'], 'values': values}

stored_function_details = client.repository.store_function(score, meta_props)
A more advanced example uses a generator function, which makes it possible to pass in variables:
creds = {...}

def gen_function(credentials=creds, x=2):
    def f(payload):
        values = [[row[0] * row[1] * x] for row in payload['values']]
        return {'fields': ['multiplication'], 'values': values}
    return f

stored_function_details = client.repository.store_function(gen_function, meta_props)
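Because the generator's arguments all have defaults, the resulting score function can be tested locally before storing it. In this sketch, creds is a placeholder value for the local test:

```python
creds = {}  # placeholder credentials for a local test

def gen_function(credentials=creds, x=2):
    def f(payload):
        # multiply each row's two values, scaled by the captured x
        values = [[row[0] * row[1] * x] for row in payload['values']]
        return {'fields': ['multiplication'], 'values': values}
    return f

f = gen_function()  # x defaults to 2
result = f({'values': [[2, 3], [4, 5]]})
# result == {'fields': ['multiplication'], 'values': [[12], [40]]}
```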
- store_model(model=None, meta_props=None, training_data=None, training_target=None, pipeline=None, feature_names=None, label_column_names=None, subtrainingId=None, round_number=None, experiment_metadata=None, training_id=None)[source]¶
Create a model.
Here you can explore how to save external models in the correct format.
- Parameters:
model (str (for filename or path) or object (corresponding to model type)) –
Can be one of following:
The trained model object:
scikit-learn
xgboost
spark (PipelineModel)
path to a saved model in one of the following formats:
tensorflow / keras (.tar.gz)
pmml (.xml)
scikit-learn (.tar.gz)
spss (.str)
spark (.tar.gz)
xgboost (.tar.gz)
directory containing model file(s):
scikit-learn
xgboost
tensorflow
unique ID of the trained model
meta_props (dict, optional) –
metadata of the models configuration. To see available meta names, use:
client.repository.ModelMetaNames.get()
training_data (spark dataframe, pandas dataframe, numpy.ndarray or array, optional) – Spark DataFrame supported for spark models. Pandas dataframe, numpy.ndarray or array supported for scikit-learn models
training_target (array, optional) – array with labels required for scikit-learn models
pipeline (object, optional) – pipeline required for spark mllib models
feature_names (numpy.ndarray or list, optional) – feature names for the training data of scikit-learn/XGBoost models; applicable only when the training data is not of type pandas.DataFrame
label_column_names (numpy.ndarray or list, optional) – label column names of the trained Scikit-Learn/XGBoost models
round_number (int, optional) – round number of a Federated Learning experiment that has been configured to save intermediate models; applies when model is a training ID
experiment_metadata (dict, optional) – metadata retrieved from the experiment that created the model
training_id (str, optional) – Run id of AutoAI or TuneExperiment experiment.
- Returns:
metadata of the created model
- Return type:
dict
Note
For a keras model, model content is expected to contain a .h5 file and an archived version of it.
feature_names is an optional argument containing the feature names for the training data of scikit-learn/XGBoost models. Valid types are numpy.ndarray and list. It applies only when the training data is not of type pandas.DataFrame.
If the training_data is of type pandas.DataFrame and feature_names are provided, feature_names are ignored.
For INPUT_DATA_SCHEMA meta prop use list even when passing single input data schema. You can provide multiple schemas as dictionaries inside a list.
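As the note says, INPUT_DATA_SCHEMA must be a list even for a single schema. A minimal sketch (the field definitions are illustrative placeholders):

```python
# A single input data schema still goes inside a list (fields are
# illustrative placeholders, not a documented schema).
single_schema = {
    'id': 'input1',
    'type': 'list',
    'fields': [{'name': 'age', 'type': 'float'},
               {'name': 'sex', 'type': 'float'}]
}
input_data_schema = [single_schema]  # list-wrapped, as required
```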
Examples
stored_model_details = client.repository.store_model(model, name)
In more complex cases, you should create full metadata, similar to this example:
sw_spec_id = client.software_specifications.get_id_by_name('scikit-learn_0.23-py3.7')
metadata = {
    client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23'
}
If you want to provide the input data schema of the model, you can include it in the metadata:
sw_spec_id = client.software_specifications.get_id_by_name('spss-modeler_18.1')
metadata = {
    client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'spss-modeler_18.1',
    client.repository.ModelMetaNames.INPUT_DATA_SCHEMA: [
        {'id': 'test',
         'type': 'list',
         'fields': [{'name': 'age', 'type': 'float'},
                    {'name': 'sex', 'type': 'float'},
                    {'name': 'fbs', 'type': 'float'},
                    {'name': 'restbp', 'type': 'float'}]},
        {'id': 'test2',
         'type': 'list',
         'fields': [{'name': 'age', 'type': 'float'},
                    {'name': 'sex', 'type': 'float'},
                    {'name': 'fbs', 'type': 'float'},
                    {'name': 'restbp', 'type': 'float'}]}
    ]
}
store_model() method used with a local tar.gz file that contains a model:
stored_model_details = client.repository.store_model(path_to_tar_gz, meta_props=metadata, training_data=None)
store_model() method used with a local directory that contains model files:
stored_model_details = client.repository.store_model(path_to_model_directory, meta_props=metadata, training_data=None)
store_model() method used with the ID of a trained model:
stored_model_details = client.repository.store_model(trained_model_id, meta_props=metadata, training_data=None)
store_model() method used with a pipeline that was generated by an AutoAI experiment:
metadata = {
    client.repository.ModelMetaNames.NAME: 'AutoAI prediction model stored from object'
}
stored_model_details = client.repository.store_model(pipeline_model, meta_props=metadata, experiment_metadata=experiment_metadata)

metadata = {
    client.repository.ModelMetaNames.NAME: 'AutoAI prediction Pipeline_1 model'
}
stored_model_details = client.repository.store_model(model="Pipeline_1", meta_props=metadata, training_id=training_id)
Example of storing a prompt tuned model:
stored_model_details = client.repository.store_model(training_id = prompt_tuning_run_id)
Example of storing a custom foundation model:
sw_spec_id = client.software_specifications.get_id_by_name('watsonx-cfm-caikit-1.0')
metadata = {
    client.repository.ModelMetaNames.NAME: 'custom FM asset',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: client.repository.ModelAssetTypes.CUSTOM_FOUNDATION_MODEL_1_0
}
stored_model_details = client.repository.store_model(model='mistralai/Mistral-7B-Instruct-v0.2', meta_props=metadata)
- store_pipeline(meta_props)[source]¶
Create a pipeline.
- Parameters:
meta_props (dict) –
metadata of the pipeline configuration. To see available meta names, use:
client.repository.PipelineMetaNames.get()
- Returns:
stored pipeline metadata
- Return type:
dict
Example:
metadata = {
    client.repository.PipelineMetaNames.NAME: 'my_training_definition',
    client.repository.PipelineMetaNames.DOCUMENT: {
        "doc_type": "pipeline",
        "version": "2.0",
        "primary_pipeline": "dlaas_only",
        "pipelines": [{
            "id": "dlaas_only",
            "runtime_ref": "hybrid",
            "nodes": [{
                "id": "training",
                "type": "model_node",
                "op": "dl_train",
                "runtime_ref": "DL",
                "inputs": [],
                "outputs": [],
                "parameters": {
                    "name": "tf-mnist",
                    "description": "Simple MNIST model implemented in TF",
                    "command": "python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000",
                    "compute": {"name": "k80", "nodes": 1},
                    "training_lib_href": "/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content"
                },
                "target_bucket": "wml-dev-results"
            }]
        }]
    }
}
pipeline_details = client.repository.store_pipeline(meta_props=metadata)
- update_ai_service(ai_service_id, changes, update_ai_service=None)[source]¶
Updates existing AI service asset metadata.
- Parameters:
ai_service_id (str) – ID of AI service to be updated
changes (dict) – elements that will be changed, where keys are AIServiceMetaNames
update_ai_service – path to the file with an archived AI service function’s content or function that will be changed for a specific ai_service_id
Example:
metadata = {
    client.repository.AIServiceMetaNames.NAME: "updated_ai_service"
}
ai_service_details = client.repository.update_ai_service(ai_service_id, changes=metadata)
- update_experiment(experiment_id=None, changes=None, **kwargs)[source]¶
Updates existing experiment metadata.
- Parameters:
experiment_id (str) – ID of the experiment with the definition to be updated
changes (dict) – elements to be changed, where keys are ExperimentMetaNames
- Returns:
metadata of the updated experiment
- Return type:
dict
Example:
metadata = {
    client.repository.ExperimentMetaNames.NAME: "updated_exp"
}
exp_details = client.repository.update_experiment(experiment_id, changes=metadata)
- update_function(function_id, changes=None, update_function=None, **kwargs)[source]¶
Updates existing function metadata.
- Parameters:
function_id (str) – ID of the function to be updated
changes (dict) – elements to be changed, where keys are FunctionMetaNames
update_function (str or function, optional) – path to a file with archived function content, or a function, to be changed for the specific function_id; this parameter is valid only for CP4D 3.0.0
Example:
metadata = {
    client.repository.FunctionMetaNames.NAME: "updated_function"
}
function_details = client.repository.update_function(function_id, changes=metadata)
- update_model(model_id=None, updated_meta_props=None, update_model=None, **kwargs)[source]¶
Update an existing model.
- Parameters:
model_id (str) – ID of model to be updated
updated_meta_props (dict, optional) – new set of metadata properties to be updated
update_model (object or model, optional) – archived model content file or path to directory that contains the archived model file that needs to be changed for the specific model_id
- Returns:
updated metadata of the model
- Return type:
dict
Example:
model_details = client.repository.update_model(model_id, update_model=updated_content)
- update_pipeline(pipeline_id=None, changes=None, rev_id=None, **kwargs)[source]¶
Update metadata of an existing pipeline.
- Parameters:
pipeline_id (str) – unique ID of the pipeline to be updated
changes (dict) – elements to be changed, where keys are PipelineMetaNames
rev_id (str) – revision ID of the pipeline
- Returns:
metadata of the updated pipeline
- Return type:
dict
Example:
metadata = {
    client.repository.PipelineMetaNames.NAME: "updated_pipeline"
}
pipeline_details = client.repository.update_pipeline(pipeline_id, changes=metadata)
- class metanames.ModelMetaNames[source]¶
Set of MetaNames for models.
Available MetaNames:
MetaName
Type
Required
Schema
Example value
NAME
str
Y
my_model
DESCRIPTION
str
N
my_description
INPUT_DATA_SCHEMA
list
N
{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}
{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}
TRAINING_DATA_REFERENCES
list
N
[{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]
[]
TEST_DATA_REFERENCES
list
N
[{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]
[]
OUTPUT_DATA_SCHEMA
dict
N
{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}
{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}
LABEL_FIELD
str
N
PRODUCT_LINE
TRANSFORMED_LABEL_FIELD
str
N
PRODUCT_LINE_IX
TAGS
list
N
['string', 'string']
['string', 'string']
SIZE
dict
N
{'in_memory(optional)': 'string', 'content(optional)': 'string'}
{'in_memory': 0, 'content': 0}
PIPELINE_ID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
RUNTIME_ID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
TYPE
str
Y
mllib_2.1
CUSTOM
dict
N
{}
DOMAIN
str
N
Watson Machine Learning
HYPER_PARAMETERS
dict
N
METRICS
list
N
IMPORT
dict
N
{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}
{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}
TRAINING_LIB_ID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
MODEL_DEFINITION_ID
str
N
53628d6_cdee13-35d3-s8989343
SOFTWARE_SPEC_ID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
TF_MODEL_PARAMS
dict
N
{'save_format': 'None', 'signatures': 'struct', 'options': 'None', 'custom_objects': 'string'}
FAIRNESS_INFO
dict
N
{'favorable_labels': ['X']}
MODEL_LOCATION
dict
N
{'connection_id': '53628d69-ced9-4f43-a8cd-9954344039a8', 'bucket': 'cos_sample_bucket', 'file_path': 'path/to/model/on/cos'}
FRAMEWORK
str
N
custom_foundation_model
VERSION
str
N
1.0
Note: project (MetaNames.PROJECT_ID) and space (MetaNames.SPACE_ID) meta names are not supported and are considered invalid. Instead, use client.set.default_space(<SPACE_ID>) to set the space, or client.set.default_project(<PROJECT_ID>) to set the project.
- class metanames.ExperimentMetaNames[source]¶
Set of MetaNames for experiments.
Available MetaNames:
MetaName
Type
Required
Schema
Example value
NAME
str
Y
Hand-written Digit Recognition
DESCRIPTION
str
N
Hand-written Digit Recognition training
TAGS
list
N
[{'value(required)': 'string', 'description(optional)': 'string'}]
[{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]
EVALUATION_METHOD
str
N
multiclass
EVALUATION_METRICS
list
N
[{'name(required)': 'string', 'maximize(optional)': 'boolean'}]
[{'name': 'accuracy', 'maximize': False}]
TRAINING_REFERENCES
list
Y
[{'pipeline(optional)': {'href(required)': 'string', 'data_bindings(optional)': [{'data_reference(required)': 'string', 'node_id(required)': 'string'}], 'nodes_parameters(optional)': [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]}, 'training_lib(optional)': {'href(required)': 'string', 'compute(optional)': {'name(required)': 'string', 'nodes(optional)': 'number'}, 'runtime(optional)': {'href(required)': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}}]
[{'pipeline': {'href': '/v4/pipelines/6d758251-bb01-4aa5-a7a3-72339e2ff4d8'}}]
SPACE_UID
str
N
3c1ce536-20dc-426e-aac7-7284cf3befc6
LABEL_COLUMN
str
N
label
CUSTOM
dict
N
{'field1': 'value1'}
- class metanames.FunctionMetaNames[source]¶
Set of MetaNames for AI functions.
Available MetaNames:
MetaName
Type
Required
Schema
Example value
NAME
str
Y
ai_function
DESCRIPTION
str
N
This is ai function
SOFTWARE_SPEC_ID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
SOFTWARE_SPEC_UID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
INPUT_DATA_SCHEMAS
list
N
[{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]
[{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}]
OUTPUT_DATA_SCHEMAS
list
N
[{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]
[{'id': '1', 'type': 'struct', 'fields': [{'name': 'multiplication', 'type': 'double', 'nullable': False, 'metadata': {}}]}]
TAGS
list
N
['string']
['tags1', 'tags2']
TYPE
str
N
python
CUSTOM
dict
N
{}
SAMPLE_SCORING_INPUT
dict
N
{'id(optional)': 'string', 'fields(optional)': 'array', 'values(optional)': 'array'}
{'input_data': [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student'], ['paul', 33, 'engineer']]}]}
- class metanames.PipelineMetanames[source]¶
Set of MetaNames for pipelines.
Available MetaNames:
MetaName
Type
Required
Schema
Example value
NAME
str
Y
Hand-written Digit Recognition
DESCRIPTION
str
N
Hand-written Digit Recognition training
SPACE_ID
str
N
3c1ce536-20dc-426e-aac7-7284cf3befc6
SPACE_UID
str
N
3c1ce536-20dc-426e-aac7-7284cf3befc6
TAGS
list
N
[{'value(required)': 'string', 'description(optional)': 'string'}]
[{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]
DOCUMENT
dict
N
{'doc_type(required)': 'string', 'version(required)': 'string', 'primary_pipeline(required)': 'string', 'pipelines(required)': [{'id(required)': 'string', 'runtime_ref(required)': 'string', 'nodes(required)': [{'id': 'string', 'type': 'string', 'inputs': 'list', 'outputs': 'list', 'parameters': {'training_lib_href': 'string'}}]}]}
{'doc_type': 'pipeline', 'version': '2.0', 'primary_pipeline': 'dlaas_only', 'pipelines': [{'id': 'dlaas_only', 'runtime_ref': 'hybrid', 'nodes': [{'id': 'training', 'type': 'model_node', 'op': 'dl_train', 'runtime_ref': 'DL', 'inputs': [], 'outputs': [], 'parameters': {'name': 'tf-mnist', 'description': 'Simple MNIST model implemented in TF', 'command': 'python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000', 'compute': {'name': 'k80', 'nodes': 1}, 'training_lib_href': '/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content'}, 'target_bucket': 'wml-dev-results'}]}]}
CUSTOM
dict
N
{'field1': 'value1'}
IMPORT
dict
N
{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}
{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}
RUNTIMES
list
N
[{'id': 'id', 'name': 'tensorflow', 'version': '1.13-py3'}]
COMMAND
str
N
convolutional_network.py --trainImagesFile train-images-idx3-ubyte.gz --trainLabelsFile train-labels-idx1-ubyte.gz --testImagesFile t10k-images-idx3-ubyte.gz --testLabelsFile t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000
COMPUTE
dict
N
{'name': 'k80', 'nodes': 1}
- class metanames.AIServiceMetaNames[source]¶
Set of MetaNames for AI services.
Available MetaNames:
MetaName
Type
Required
Schema
Example value
NAME
str
Y
ai_service
DESCRIPTION
str
N
This is AI service
SOFTWARE_SPEC_ID
str
N
53628d69-ced9-4f43-a8cd-9954344039a8
REQUEST_DOCUMENTATION
dict
N
{'application/json': {'$schema': 'http://json-schema.org/draft-07/schema#', 'type': 'object', 'properties': {'query': {'type': 'string'}, 'parameters': {'properties': {'max_new_tokens': {'type': 'integer'}, 'top_p': {'type': 'number'}}, 'required': ['max_new_tokens', 'top_p']}}, 'required': ['query']}}
RESPONSE_DOCUMENTATION
dict
N
{'application/json': {'$schema': 'http://json-schema.org/draft-07/schema#', 'type': 'object', 'properties': {'query': {'type': 'string'}, 'result': {'type': 'string'}}, 'required': ['query', 'result']}}
TAGS
list
N
['string']
['tags1', 'tags2']
CODE_TYPE
str
N
python
CUSTOM
dict
N
{'key1': 'value1'}
Script¶
- class client.Script(client)[source]¶
Store and manage script assets.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.ScriptMetaNames object>¶
MetaNames for script assets creation.
- create_revision(script_id=None, **kwargs)[source]¶
Create a revision for the given script. Revisions are immutable once created. The metadata and attachment at script_id are taken and a revision is created from them.
- Parameters:
script_id (str) – ID of the script
- Returns:
revised metadata of the stored script
- Return type:
dict
Example:
script_revision = client.script.create_revision(script_id)
- delete(asset_id=None, **kwargs)[source]¶
Delete a stored script asset.
- Parameters:
asset_id (str) – ID of the script asset
- Returns:
status (“SUCCESS” or “FAILED”) if deleted synchronously or dictionary with response
- Return type:
str | dict
Example:
client.script.delete(asset_id)
- download(asset_id=None, filename=None, rev_id=None, **kwargs)[source]¶
Download the content of a script asset.
- Parameters:
asset_id (str) – unique ID of the script asset to be downloaded
filename (str) – filename to be used for the downloaded file
rev_id (str, optional) – revision ID
- Returns:
path to the downloaded asset content
- Return type:
str
Example:
client.script.download(asset_id, "script_file")
- get_details(script_id=None, limit=None, get_all=None, **kwargs)[source]¶
Get script asset details. If no script_id is passed, details for all script assets are returned.
- Parameters:
script_id (str, optional) – unique ID of the script
limit (int, optional) – limit number of fetched records
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
metadata of the stored script asset
- Return type:
dict - if script_id is not None
{“resources”: [dict]} - if script_id is None
Example:
script_details = client.script.get_details(script_id)
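The two return shapes can be handled with a small helper. The nested "metadata"/"asset_id" keys below are an assumption for illustration, not a documented schema:

```python
# Handle both return shapes of get_details. The "metadata"/"asset_id"
# nesting is assumed for illustration only.
def extract_ids(details):
    if "resources" in details:  # called without script_id
        return [r["metadata"]["asset_id"] for r in details["resources"]]
    return [details["metadata"]["asset_id"]]  # called with script_id

all_details = {"resources": [{"metadata": {"asset_id": "a1"}},
                             {"metadata": {"asset_id": "a2"}}]}
ids = extract_ids(all_details)
# ids == ["a1", "a2"]
```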
- static get_href(asset_details)[source]¶
Get the URL of a stored script asset.
- Parameters:
asset_details (dict) – details of the stored script asset
- Returns:
href of the stored script asset
- Return type:
str
Example:
asset_details = client.script.get_details(asset_id)
asset_href = client.script.get_href(asset_details)
- static get_id(asset_details)[source]¶
Get the unique ID of a stored script asset.
- Parameters:
asset_details (dict) – metadata of the stored script asset
- Returns:
unique ID of the stored script asset
- Return type:
str
Example:
asset_id = client.script.get_id(asset_details)
- get_revision_details(script_id=None, rev_id=None, **kwargs)[source]¶
Get metadata of the script revision.
- Parameters:
script_id (str) – ID of the script
rev_id (str, optional) – ID of the revision. If this parameter is not provided, it returns the latest revision. If there is no latest revision, it returns an error.
- Returns:
metadata of the stored script(s)
- Return type:
list
Example:
script_details = client.script.get_revision_details(script_id, rev_id)
- list(limit=None)[source]¶
List stored scripts in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed scripts
- Return type:
pandas.DataFrame
Example:
client.script.list()
- list_revisions(script_id=None, limit=None, **kwargs)[source]¶
Print all revisions for the given script ID in a table format.
- Parameters:
script_id (str) – ID of the stored script
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed revisions
- Return type:
pandas.DataFrame
Example:
client.script.list_revisions(script_id)
- store(meta_props, file_path)[source]¶
Create a script asset and upload content to it.
- Parameters:
meta_props (dict) – name to be given to the script asset
file_path (str) – path to the content file to be uploaded
- Returns:
metadata of the stored script asset
- Return type:
dict
Example:
metadata = {
    client.script.ConfigurationMetaNames.NAME: 'my first script',
    client.script.ConfigurationMetaNames.DESCRIPTION: 'description of the script',
    client.script.ConfigurationMetaNames.SOFTWARE_SPEC_ID: '0cdb0f1e-5376-4f4d-92dd-da3b69aa9bda'
}
asset_details = client.script.store(meta_props=metadata, file_path="/path/to/file")
- update(script_id=None, meta_props=None, file_path=None, **kwargs)[source]¶
Update a script with metadata, attachment, or both.
- Parameters:
script_id (str) – ID of the script
meta_props (dict, optional) – changes for the script metadata
file_path (str, optional) – file path to the new attachment
- Returns:
updated metadata of the script
- Return type:
dict
Example:
script_details = client.script.update(script_id, meta_props=metadata, file_path=file_path)
Service instance¶
- class client.ServiceInstance(client)[source]¶
Connect, get details, and check usage of a Watson Machine Learning service instance.
- get_api_key()[source]¶
Get the API key of a Watson Machine Learning service.
- Returns:
API key
- Return type:
str
Example:
api_key = client.service_instance.get_api_key()
- get_details()[source]¶
Get information about the Watson Machine Learning instance.
- Returns:
metadata of the service instance
- Return type:
dict
Example:
instance_details = client.service_instance.get_details()
- get_instance_id()[source]¶
Get the instance ID of a Watson Machine Learning service.
- Returns:
ID of the instance
- Return type:
str
Example:
instance_id = client.service_instance.get_instance_id()
- get_password()[source]¶
Get the password for the Watson Machine Learning service. Applicable only for IBM Cloud Pak® for Data.
- Returns:
password
- Return type:
str
Example:
password = client.service_instance.get_password()
Set¶
- class client.Set(client)[source]¶
Set a space_id or a project_id to be used in subsequent actions.
Shiny (IBM Cloud Pak® for Data only)¶
Warning! Not supported for IBM Cloud.
- class client.Shiny(client)[source]¶
Store and manage shiny assets.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.ShinyMetaNames object>¶
MetaNames for Shiny Assets creation.
- create_revision(shiny_id=None, **kwargs)[source]¶
Create a revision for the given shiny asset. Revisions are immutable once created. The metadata and attachment at shiny_id are taken and a revision is created from them.
- Parameters:
shiny_id (str) – ID of the shiny asset
- Returns:
revised metadata of the stored shiny asset
- Return type:
dict
Example:
shiny_revision = client.shiny.create_revision(shiny_id)
- delete(shiny_id=None, **kwargs)[source]¶
Delete a stored shiny asset.
- Parameters:
shiny_id (str) – unique ID of the shiny asset
- Returns:
status (“SUCCESS” or “FAILED”) if deleted synchronously or dictionary with response
- Return type:
str | dict
Example:
client.shiny.delete(shiny_id)
- download(shiny_id=None, filename=None, rev_id=None, **kwargs)[source]¶
Download the content of a shiny asset.
- Parameters:
shiny_id (str) – unique ID of the shiny asset to be downloaded
filename (str) – filename to be used for the downloaded file
rev_id (str, optional) – ID of the revision
- Returns:
path to the downloaded shiny asset content
- Return type:
str
Example:
client.shiny.download(shiny_id, "shiny_asset.zip")
- get_details(shiny_id=None, limit=None, get_all=None, **kwargs)[source]¶
Get shiny asset details. If no shiny_id is passed, details for all shiny assets are returned.
- Parameters:
shiny_id (str, optional) – unique ID of the shiny asset
limit (int, optional) – limit number of fetched records
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
metadata of the stored shiny asset
- Return type:
dict - if shiny_id is not None
{“resources”: [dict]} - if shiny_id is None
Example:
shiny_details = client.shiny.get_details(shiny_id)
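Because get_details returns two shapes, a single dict when shiny_id is given and {"resources": [dict]} otherwise, callers often normalize the result before iterating. A minimal sketch; the nested "metadata"/"asset_id" layout is an assumption used only for illustration:

```python
def iter_assets(details):
    """Yield individual asset dicts from either return shape of get_details."""
    if "resources" in details:   # no ID passed: {"resources": [dict, ...]}
        yield from details["resources"]
    else:                        # an ID was passed: a single dict
        yield details

single = {"metadata": {"asset_id": "a1"}}
many = {"resources": [{"metadata": {"asset_id": "a1"}},
                      {"metadata": {"asset_id": "a2"}}]}
single_assets = list(iter_assets(single))
many_assets = list(iter_assets(many))
```

The same dual return shape appears in get_details of several other classes below, so one helper covers them all.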
- static get_href(shiny_details)[source]¶
Get the URL of a stored shiny asset.
- Parameters:
shiny_details (dict) – details of the stored shiny asset
- Returns:
href of the stored shiny asset
- Return type:
str
Example:
shiny_details = client.shiny.get_details(shiny_id)
shiny_href = client.shiny.get_href(shiny_details)
- static get_id(shiny_details)[source]¶
Get the unique ID of a stored shiny asset.
- Parameters:
shiny_details (dict) – metadata of the stored shiny asset
- Returns:
unique ID of the stored shiny asset
- Return type:
str
Example:
shiny_id = client.shiny.get_id(shiny_details)
- get_revision_details(shiny_id=None, rev_id=None, **kwargs)[source]¶
Get the metadata of a specific revision of the given shiny asset.
- Parameters:
shiny_id (str) – ID of the shiny asset
rev_id (str, optional) – ID of the revision. If this parameter is not provided, it returns the latest revision. If there is no latest revision, it returns an error.
- Returns:
metadata of the stored shiny asset revision
- Return type:
list
Example:
shiny_details = client.shiny.get_revision_details(shiny_id, rev_id)
- static get_uid(shiny_details)[source]¶
Get the Unique ID of a stored shiny asset.
Deprecated: Use
get_id(shiny_details)
instead.- Parameters:
shiny_details (dict) – metadata of the stored shiny asset
- Returns:
unique ID of the stored shiny asset
- Return type:
str
Example:
shiny_id = client.shiny.get_uid(shiny_details)
- list(limit=None)[source]¶
List stored shiny assets in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed shiny assets
- Return type:
pandas.DataFrame
Example:
client.shiny.list()
- list_revisions(shiny_id=None, limit=None, **kwargs)[source]¶
List all revisions for the given shiny asset ID in a table format.
- Parameters:
shiny_id (str) – ID of the stored shiny asset
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed shiny revisions
- Return type:
pandas.DataFrame
Example:
client.shiny.list_revisions(shiny_id)
- store(meta_props, file_path)[source]¶
Create a shiny asset and upload content to it.
- Parameters:
meta_props (dict) – metadata of the shiny asset
file_path (str) – path to the content file to be uploaded
- Returns:
metadata of the stored shiny asset
- Return type:
dict
Example:
meta_props = {
    client.shiny.ConfigurationMetaNames.NAME: "shiny app name"
}
shiny_details = client.shiny.store(meta_props, file_path="/path/to/file")
- update(shiny_id=None, meta_props=None, file_path=None, **kwargs)[source]¶
Update a shiny asset with metadata, attachment, or both.
- Parameters:
shiny_id (str) – ID of the shiny asset
meta_props (dict, optional) – changes to the metadata of the shiny asset
file_path (str, optional) – file path to the new attachment
- Returns:
updated metadata of the shiny asset
- Return type:
dict
Example:
shiny_details = client.shiny.update(shiny_id, meta_props, file_path)
Software specifications¶
- class client.SwSpec(client)[source]¶
Store and manage software specs.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.SwSpecMetaNames object>¶
MetaNames for Software Specification creation.
- add_package_extension(sw_spec_id=None, pkg_extn_id=None, **kwargs)[source]¶
Add a package extension to a software specification’s existing metadata.
- Parameters:
sw_spec_id (str) – unique ID of the software specification to be updated
pkg_extn_id (str) – unique ID of the package extension to be added to the software specification
- Returns:
status
- Return type:
str
Example:
client.software_specifications.add_package_extension(sw_spec_id, pkg_extn_id)
- delete(sw_spec_id=None, **kwargs)[source]¶
Delete a software specification.
- Parameters:
sw_spec_id (str) – unique ID of the software specification
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.software_specifications.delete(sw_spec_id)
- delete_package_extension(sw_spec_id=None, pkg_extn_id=None, **kwargs)[source]¶
Delete a package extension from a software specification’s existing metadata.
- Parameters:
sw_spec_id (str) – unique ID of the software specification to be updated
pkg_extn_id (str) – unique ID of the package extension to be deleted from the software specification
- Returns:
status
- Return type:
str
Example:
client.software_specifications.delete_package_extension(sw_spec_id, pkg_extn_id)
- get_details(sw_spec_id=None, state_info=False, **kwargs)[source]¶
Get software specification details. If no sw_spec_id is passed, details for all software specifications are returned.
- Parameters:
sw_spec_id (str, optional) – ID of the software specification
state_info (bool, optional) – works only when sw_spec_id is None; instead of returning the details of the software specifications, it returns their state (supported, unsupported, or deprecated), together with a suggested replacement for unsupported or deprecated software specifications
- Returns:
metadata of the stored software specification(s)
- Return type:
dict - if sw_spec_id is not None
{“resources”: [dict]} - if sw_spec_id is None
Examples
sw_spec_details = client.software_specifications.get_details(sw_spec_id)
sw_spec_details = client.software_specifications.get_details()
sw_spec_state_details = client.software_specifications.get_details(state_info=True)
- static get_href(sw_spec_details)[source]¶
Get the URL of a software specification.
- Parameters:
sw_spec_details (dict) – details of the software specification
- Returns:
href of the software specification
- Return type:
str
Example:
sw_spec_details = client.software_specifications.get_details(sw_spec_id)
sw_spec_href = client.software_specifications.get_href(sw_spec_details)
- static get_id(sw_spec_details)[source]¶
Get the unique ID of a software specification.
- Parameters:
sw_spec_details (dict) – metadata of the software specification
- Returns:
unique ID of the software specification
- Return type:
str
Example:
asset_id = client.software_specifications.get_id(sw_spec_details)
- get_id_by_name(sw_spec_name)[source]¶
Get the unique ID of a software specification.
- Parameters:
sw_spec_name (str) – name of the software specification
- Returns:
unique ID of the software specification
- Return type:
str
Example:
asset_uid = client.software_specifications.get_id_by_name(sw_spec_name)
- static get_uid(sw_spec_details)[source]¶
Get the unique ID of a software specification.
Deprecated: Use
get_id(sw_spec_details)
instead.- Parameters:
sw_spec_details (dict) – metadata of the software specification
- Returns:
unique ID of the software specification
- Return type:
str
Example:
asset_uid = client.software_specifications.get_uid(sw_spec_details)
- get_uid_by_name(sw_spec_name)[source]¶
Get the unique ID of a software specification.
Deprecated: Use
get_id_by_name(self, sw_spec_name)
instead.- Parameters:
sw_spec_name (str) – name of the software specification
- Returns:
unique ID of the software specification
- Return type:
str
Example:
asset_uid = client.software_specifications.get_uid_by_name(sw_spec_name)
- list(limit=None)[source]¶
List software specifications in a table format.
- Parameters:
limit (int, optional) – limit number of fetched records
- Returns:
pandas.DataFrame with listed software specifications
- Return type:
pandas.DataFrame
Example:
client.software_specifications.list()
- store(meta_props)[source]¶
Create a software specification.
- Parameters:
meta_props (dict) –
metadata of the software specification configuration. To see available meta names, use:
client.software_specifications.ConfigurationMetaNames.get()
- Returns:
metadata of the stored software specification
- Return type:
dict
Example:
meta_props = {
    client.software_specifications.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
    client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS_UID: [],
    client.software_specifications.ConfigurationMetaNames.SOFTWARE_CONFIGURATIONS: {},
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION_ID: "guid"
}
sw_spec_details = client.software_specifications.store(meta_props)
- class metanames.SwSpecMetaNames[source]¶
Set of MetaNames for Software Specifications.
Available MetaNames:
NAME (str, required)
  Example: Python 3.10 with pre-installed ML package
DESCRIPTION (str, optional)
  Example: my_description
PACKAGE_EXTENSIONS (list, optional)
  Example: [{'guid': 'value'}]
SOFTWARE_CONFIGURATION (dict, optional)
  Schema: {'platform(required)': 'string'}
  Example: {'platform': {'name': 'python', 'version': '3.10'}}
BASE_SOFTWARE_SPECIFICATION (dict, required)
  Example: {'guid': 'BASE_SOFTWARE_SPECIFICATION_ID'}
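The Required column above can be checked before calling store(). A small sketch; plain string keys stand in for the ConfigurationMetaNames attributes, which resolve to service-specific names at runtime:

```python
# NAME and BASE_SOFTWARE_SPECIFICATION are the required MetaNames per the
# table above; string keys stand in for ConfigurationMetaNames attributes.
REQUIRED = {"NAME", "BASE_SOFTWARE_SPECIFICATION"}

def missing_required(meta_props):
    """Return the required MetaNames absent from a meta_props dict."""
    return sorted(REQUIRED - meta_props.keys())

meta_props = {
    "NAME": "Python 3.10 with pre-installed ML package",
    "BASE_SOFTWARE_SPECIFICATION": {"guid": "BASE_SOFTWARE_SPECIFICATION_ID"},
}
```

Validating locally before the call gives a clearer error than the service response when a mandatory field is missing.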
Spaces¶
- class client.Spaces(client)[source]¶
Store and manage spaces.
- ConfigurationMetaNames = <ibm_watsonx_ai.metanames.SpacesMetaNames object>¶
MetaNames for spaces creation.
- MemberMetaNames = <ibm_watsonx_ai.metanames.SpacesMemberMetaNames object>¶
MetaNames for space members creation.
- create_member(space_id, meta_props)[source]¶
Create a member within a space.
- Parameters:
space_id (str) – ID of the space with the definition to be updated
meta_props (dict) –
metadata of the member configuration. To see available meta names, use:
client.spaces.MemberMetaNames.get()
- Returns:
metadata of the stored member
- Return type:
dict
Note
role can be any one of the following: “viewer”, “editor”, “admin”
type can be any one of the following: “user”, “service”
id can be one of the following: service-ID or IAM-userID
Examples
metadata = {
    client.spaces.MemberMetaNames.MEMBERS: [{"id": "IBMid-100000DK0B", "type": "user", "role": "admin"}]
}
members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
metadata = {
    client.spaces.MemberMetaNames.MEMBERS: [{"id": "iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71", "type": "service", "role": "admin"}]
}
members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
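A payload can be sanity-checked locally against the allowed values from the note above before calling create_member. A minimal sketch; plain dicts stand in for the client.spaces.MemberMetaNames.MEMBERS entries:

```python
# Allowed values per the note above: roles "viewer"/"editor"/"admin",
# types "user"/"service".
VALID_ROLES = {"viewer", "editor", "admin"}
VALID_TYPES = {"user", "service"}

def make_member(member_id, member_type, role):
    """Build one MEMBERS entry, rejecting values the service would not accept."""
    if role not in VALID_ROLES:
        raise ValueError(f"role must be one of {sorted(VALID_ROLES)}")
    if member_type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    return {"id": member_id, "type": member_type, "role": role}

member = make_member("IBMid-100000DK0B", "user", "admin")
```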
- delete(space_id)[source]¶
Delete a stored space.
- Parameters:
space_id (str) – ID of the space
- Returns:
status “SUCCESS” if deletion is successful
- Return type:
Literal[“SUCCESS”]
Example:
client.spaces.delete(space_id)
- delete_member(space_id, member_id)[source]¶
Delete a member associated with a space.
- Parameters:
space_id (str) – ID of the space
member_id (str) – ID of the member
- Returns:
status (“SUCCESS” or “FAILED”)
- Return type:
str
Example:
client.spaces.delete_member(space_id, member_id)
- get_details(space_id=None, limit=None, asynchronous=False, get_all=False, space_name=None)[source]¶
Get metadata of stored space(s).
- Parameters:
space_id (str, optional) – ID of the space
limit (int, optional) – applicable when space_id is not provided, otherwise limit will be ignored
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
space_name (str, optional) – name of the stored space, can be used only when space_id is None
- Returns:
metadata of stored space(s)
- Return type:
dict - if space_id is not None
{“resources”: [dict]} - if space_id is None
Example:
space_details = client.spaces.get_details(space_id)
space_details = client.spaces.get_details(space_name=space_name)
space_details = client.spaces.get_details(limit=100)
space_details = client.spaces.get_details(limit=100, get_all=True)
space_details = []
for entry in client.spaces.get_details(limit=100, asynchronous=True, get_all=True):
    space_details.extend(entry)
- static get_id(space_details)[source]¶
Get the space_id from the space details.
- Parameters:
space_details (dict) – metadata of the stored space
- Returns:
ID of the stored space
- Return type:
str
Example:
space_details = client.spaces.store(meta_props)
space_id = client.spaces.get_id(space_details)
- get_id_by_name(space_name)[source]¶
Get the ID of a stored space by name.
- Parameters:
space_name (str) – name of the stored space
- Returns:
ID of the stored space
- Return type:
str
Example:
space_id = client.spaces.get_id_by_name(space_name)
- get_member_details(space_id, member_id)[source]¶
Get metadata of a member associated with a space.
- Parameters:
space_id (str) – ID of the space
member_id (str) – ID of the member
- Returns:
metadata of the space member
- Return type:
dict
Example:
member_details = client.spaces.get_member_details(space_id, member_id)
- static get_uid(space_details)[source]¶
Get the unique ID of the space.
Deprecated: Use
get_id(space_details)
instead.- Parameters:
space_details (dict) – metadata of the space
- Returns:
unique ID of the space
- Return type:
str
Example:
space_details = client.spaces.store(meta_props)
space_uid = client.spaces.get_uid(space_details)
- list(limit=None, member=None, roles=None, space_type=None)[source]¶
List stored spaces in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
member (str, optional) – filters the result list, only includes spaces where the user with a matching user ID is a member
roles (str, optional) – filters the result list, only includes spaces where the member holds a matching role
space_type (str, optional) – filter spaces by their type, available types are ‘wx’, ‘cpd’, and ‘wca’
- Returns:
pandas.DataFrame with listed spaces
- Return type:
pandas.DataFrame
Example:
client.spaces.list()
- list_members(space_id, limit=None, identity_type=None, role=None, state=None)[source]¶
Print the stored members of a space in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
space_id (str) – ID of the space
limit (int, optional) – limit number of fetched records
identity_type (str, optional) – filter the members by type
role (str, optional) – filter the members by role
state (str, optional) – filter the members by state
- Returns:
pandas.DataFrame with listed members
- Return type:
pandas.DataFrame
Example:
client.spaces.list_members(space_id)
- promote(asset_id, source_project_id, target_space_id, rev_id=None)[source]¶
Promote an asset from a project to a space.
- Parameters:
asset_id (str) – ID of the stored asset
source_project_id (str) – source project, from which the asset is promoted
target_space_id (str) – target space, where the asset is promoted
rev_id (str, optional) – revision ID of the promoted asset
- Returns:
ID of the promoted asset
- Return type:
str
Examples
promoted_asset_id = client.spaces.promote(asset_id, source_project_id=project_id, target_space_id=space_id)
promoted_model_id = client.spaces.promote(model_id, source_project_id=project_id, target_space_id=space_id)
promoted_function_id = client.spaces.promote(function_id, source_project_id=project_id, target_space_id=space_id)
promoted_data_asset_id = client.spaces.promote(data_asset_id, source_project_id=project_id, target_space_id=space_id)
promoted_connection_asset_id = client.spaces.promote(connection_id, source_project_id=project_id, target_space_id=space_id)
- store(meta_props, background_mode=True)[source]¶
Create a space. The instance associated with the space via COMPUTE will be used for billing purposes on the cloud. Note that STORAGE and COMPUTE are applicable only for cloud.
- Parameters:
meta_props (dict) –
metadata of the space configuration. To see available meta names, use:
client.spaces.ConfigurationMetaNames.get()
background_mode (bool, optional) – indicator if store() method will run in background (async) or (sync)
- Returns:
metadata of the stored space
- Return type:
dict
Example:
metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "my_space",
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "spaces",
    client.spaces.ConfigurationMetaNames.STORAGE: {"resource_crn": "provide crn of the COS storage"},
    client.spaces.ConfigurationMetaNames.COMPUTE: {"name": "test_instance", "crn": "provide crn of the instance"},
    client.spaces.ConfigurationMetaNames.STAGE: {"production": True, "name": "stage_name"},
    client.spaces.ConfigurationMetaNames.TAGS: ["sample_tag_1", "sample_tag_2"],
    client.spaces.ConfigurationMetaNames.TYPE: "cpd",
}
spaces_details = client.spaces.store(meta_props=metadata)
- update(space_id, changes)[source]¶
Update existing space metadata. ‘STORAGE’ cannot be updated. STORAGE and COMPUTE are applicable only for cloud.
- Parameters:
space_id (str) – ID of the space with the definition to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
- Returns:
metadata of the updated space
- Return type:
dict
Example:
metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "updated_space",
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "test_instance",
        "crn": "v1:staging:public:pm-20-dev:us-south:a/09796a1b4cddfcc9f7fe17824a68a0f8:f1026e4b-77cf-4703-843d-c9984eac7272::"
    }
}
space_details = client.spaces.update(space_id, changes=metadata)
- update_member(space_id, member_id, changes)[source]¶
Update the metadata of an existing member.
- Parameters:
space_id (str) – ID of the space
member_id (str) – ID of the member to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
- Returns:
metadata of the updated member
- Return type:
dict
Example:
metadata = {
    client.spaces.MemberMetaNames.MEMBER: {"role": "editor"}
}
member_details = client.spaces.update_member(space_id, member_id, changes=metadata)
- class metanames.SpacesMetaNames[source]¶
Set of MetaNames for Platform Spaces Specs.
Available MetaNames:
NAME (str, required)
  Example: my_space
DESCRIPTION (str, optional)
  Example: my_description
STORAGE (dict, optional)
  Example: {'type': 'bmcos_object_storage', 'resource_crn': '', 'delegated(optional)': 'false'}
COMPUTE (dict, optional)
  Example: {'name': 'name', 'crn': 'crn of the instance'}
STAGE (dict, optional)
  Example: {'production': True, 'name': 'name of the stage'}
TAGS (list, optional)
  Example: ['sample_tag']
TYPE (str, optional)
  Example: cpd
- class metanames.SpacesMemberMetaNames[source]¶
Set of MetaNames for Platform Spaces Member Specs.
Available MetaNames:
MEMBERS (list, optional)
  Schema: [{'id(required)': 'string', 'role(required)': 'string', 'type(required)': 'string', 'state(optional)': 'string'}]
  Example: [{'id': 'iam-id1', 'role': 'editor', 'type': 'user', 'state': 'active'}, {'id': 'iam-id2', 'role': 'viewer', 'type': 'user', 'state': 'active'}]
MEMBER (dict, optional)
  Example: {'id': 'iam-id1', 'role': 'editor', 'type': 'user', 'state': 'active'}
Training¶
- class client.Training(client)[source]¶
Train new models.
- cancel(training_id=None, hard_delete=False, **kwargs)[source]¶
Cancel a training that is currently running. When the hard_delete parameter is set to True, this method instead deletes the metadata of a completed or canceled training run.
- Parameters:
training_id (str) – ID of the training
hard_delete (bool, optional) –
specify True or False:
True - to delete the completed or canceled training run
False - to cancel the currently running training run
- Returns:
status “SUCCESS” if cancellation is successful
- Return type:
Literal[“SUCCESS”]
Example:
client.training.cancel(training_id)
- get_details(training_id=None, limit=None, asynchronous=False, get_all=False, training_type=None, state=None, tag_value=None, training_definition_id=None, _internal=False, **kwargs)[source]¶
Get metadata of training(s). If training_id is not specified, metadata of all trainings is returned.
- Parameters:
training_id (str, optional) – unique ID of the training
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
training_type (str, optional) – filter the fetched list of trainings based on the training type [“pipeline” or “experiment”]
state (str, optional) – filter the fetched list of training based on their state: [queued, running, completed, failed]
tag_value (str, optional) – filter the fetched list of training based on their tag value
training_definition_id (str, optional) – filter the fetched trainings that are using the given training definition
- Returns:
metadata of training(s)
- Return type:
dict - if training_id is not None
{“resources”: [dict]} - if training_id is None
Examples
training_run_details = client.training.get_details(training_id)
training_runs_details = client.training.get_details()
training_runs_details = client.training.get_details(limit=100)
training_runs_details = client.training.get_details(limit=100, get_all=True)
training_runs_details = []
for entry in client.training.get_details(limit=100, asynchronous=True, get_all=True):
    training_runs_details.extend(entry)
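With asynchronous=True the call behaves as a generator that yields results in limit-sized chunks, which is why the loop above extends rather than appends. A self-contained sketch of that consumption pattern; the fake generator below stands in for the client call:

```python
def fake_chunked_details(items, limit):
    """Stand-in for get_details(limit=..., asynchronous=True, get_all=True):
    yields the result set in limit-sized chunks."""
    for i in range(0, len(items), limit):
        yield items[i:i + limit]

training_runs_details = []
for entry in fake_chunked_details([{"id": f"run-{i}"} for i in range(7)], limit=3):
    training_runs_details.extend(entry)   # each entry is one chunk (a list)
```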
- static get_href(training_details)[source]¶
Get the training href from the training details.
- Parameters:
training_details (dict) – metadata of the created training
- Returns:
training href
- Return type:
str
Example:
training_details = client.training.get_details(training_id)
run_url = client.training.get_href(training_details)
- static get_id(training_details)[source]¶
Get the training ID from the training details.
- Parameters:
training_details (dict) – metadata of the created training
- Returns:
unique ID of the training
- Return type:
str
Example:
training_details = client.training.get_details(training_id)
training_id = client.training.get_id(training_details)
- get_metrics(training_id=None, **kwargs)[source]¶
Get metrics of a training run.
- Parameters:
training_id (str) – ID of the training
- Returns:
metrics of the training run
- Return type:
list of dict
Example:
training_status = client.training.get_metrics(training_id)
- get_status(training_id=None, **kwargs)[source]¶
Get the status of a created training.
- Parameters:
training_id (str) – ID of the training
- Returns:
training_status
- Return type:
dict
Example:
training_status = client.training.get_status(training_id)
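When a job is submitted asynchronously, a common pattern is to poll get_status until a terminal state is reached. A minimal sketch; the canned sequence of status dicts stands in for repeated client.training.get_status calls, and the state names follow the list given under get_details above:

```python
TERMINAL_STATES = {"completed", "failed", "canceled"}

def wait_until_done(status_sequence):
    """Return the first terminal state seen, or None if the sequence ends."""
    for status in status_sequence:
        if status.get("state") in TERMINAL_STATES:
            return status["state"]
    return None

final_state = wait_until_done(iter([
    {"state": "queued"},
    {"state": "running"},
    {"state": "completed"},
]))
```

In a real loop you would sleep between calls to avoid hammering the service.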
- list(limit=None, asynchronous=False, get_all=False)[source]¶
List stored trainings in a table format. If limit is set to None, only the first 50 records are shown.
- Parameters:
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks
- Returns:
pandas.DataFrame with listed experiments
- Return type:
pandas.DataFrame
Examples
client.training.list()
training_runs_df = client.training.list(limit=100)
training_runs_df = client.training.list(limit=100, get_all=True)
training_runs_df = []
for entry in client.training.list(limit=100, asynchronous=True, get_all=True):
    training_runs_df.extend(entry)
- list_intermediate_models(training_id=None, **kwargs)[source]¶
Print the intermediate_models in a table format.
- Parameters:
training_id (str) – ID of the training
Note
This method is not supported for IBM Cloud Pak® for Data.
Example:
client.training.list_intermediate_models(training_id)
- monitor_logs(training_id=None, **kwargs)[source]¶
Print the logs of a created training.
- Parameters:
training_id (str) – training ID
Note
This method is not supported for IBM Cloud Pak® for Data.
Example:
client.training.monitor_logs(training_id)
- monitor_metrics(training_id=None, **kwargs)[source]¶
Print the metrics of a created training.
- Parameters:
training_id (str) – ID of the training
Note
This method is not supported for IBM Cloud Pak® for Data.
Example:
client.training.monitor_metrics(training_id)
- run(meta_props, asynchronous=True, **kwargs)[source]¶
Create a new Machine Learning training.
- Parameters:
meta_props (dict) –
metadata of the training configuration. To see available meta names, use:
client.training.ConfigurationMetaNames.show()
asynchronous (bool, optional) –
True - training job is submitted and progress can be checked later
False - method will wait till job completion and print training stats
- Returns:
metadata of the training created
- Return type:
dict
Note
- You can provide one of the following values for training:
client.training.ConfigurationMetaNames.EXPERIMENT
client.training.ConfigurationMetaNames.PIPELINE
client.training.ConfigurationMetaNames.MODEL_DEFINITION
Examples
Example of meta_props for creating a training run in IBM Cloud Pak® for Data version 3.0.1 or above:
metadata = {
    client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
    client.training.ConfigurationMetaNames.DESCRIPTION: 'Hand-written Digit Recognition Training',
    client.training.ConfigurationMetaNames.PIPELINE: {
        "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
        "rev": "12",
        "model_type": "string",
        "data_bindings": [
            {
                "data_reference_name": "string",
                "node_id": "string"
            }
        ],
        "nodes_parameters": [
            {
                "node_id": "string",
                "parameters": {}
            }
        ],
        "hardware_spec": {
            "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
            "rev": "12",
            "name": "string",
            "num_nodes": "2"
        }
    },
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
        'type': 's3',
        'connection': {},
        'location': {'href': 'v2/assets/asset1233456'},
        'schema': {
            'id': 't1',
            'name': 'Tasks',
            'fields': [
                {'name': 'duration', 'type': 'number'}
            ]
        }
    }],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'id': 'string',
        'connection': {
            'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
            'access_key_id': '***',
            'secret_access_key': '***'
        },
        'location': {
            'bucket': 'wml-dev-results',
            'path': 'path'
        },
        'type': 's3'
    }
}
Example of a Federated Learning training job:
aggregator_metadata = {
    client.training.ConfigurationMetaNames.NAME: 'Federated_Learning_Tensorflow_MNIST',
    client.training.ConfigurationMetaNames.DESCRIPTION: 'MNIST digit recognition with Federated Learning using Tensorflow',
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'type': results_type,
        'name': 'outputData',
        'connection': {},
        'location': {'path': '/projects/' + PROJECT_ID + '/assets/trainings/'}
    },
    client.training.ConfigurationMetaNames.FEDERATED_LEARNING: {
        'model': {
            'type': 'tensorflow',
            'spec': {'id': untrained_model_id},
            'model_file': untrained_model_name
        },
        'fusion_type': 'iter_avg',
        'metrics': 'accuracy',
        'epochs': 3,
        'rounds': 10,
        'remote_training': {
            'quorum': 1.0,
            'max_timeout': 3600,
            'remote_training_systems': [
                {'id': prime_rts_id},
                {'id': nonprime_rts_id}
            ]
        },
        'hardware_spec': {'name': 'S'},
        'software_spec': {'name': 'runtime-22.1-py3.9'}
    }
}
aggregator = client.training.run(aggregator_metadata, asynchronous=True)
aggregator_id = client.training.get_id(aggregator)
- class metanames.TrainingConfigurationMetaNames[source]¶
Set of MetaNames for trainings.
Available MetaNames:
TRAINING_DATA_REFERENCES (list, required)
  Schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]
  Example: [{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]
TRAINING_RESULTS_REFERENCE (dict, required)
  Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}
  Example: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'test-results', 'path': 'training_path'}, 'type': 's3'}
TEST_DATA_REFERENCES (list, optional)
  Schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]
  Example: [{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]
TEST_OUTPUT_DATA (dict, optional)
  Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}
  Example: [{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]
TAGS (list, optional)
  Schema: ['string']
  Example: ['string']
PIPELINE (dict, optional)
  Example: {'id': '3c1ce536-20dc-426e-aac7-7284cf3befc6', 'rev': '1', 'modeltype': 'tensorflow_1.1.3-py3', 'data_bindings': [{'data_reference_name': 'string', 'node_id': 'string'}], 'node_parameters': [{'node_id': 'string', 'parameters': {}}], 'hardware_spec': {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '12', 'name': 'string', 'num_nodes': '2'}, 'hybrid_pipeline_hardware_specs': [{'node_runtime_id': 'string', 'hardware_spec': {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '12', 'name': 'string', 'num_nodes': '2'}}]}
EXPERIMENT (dict, optional)
  Example: {'id': '3c1ce536-20dc-426e-aac7-7284cf3befc6', 'rev': 1, 'description': 'test experiment'}
PROMPT_TUNING (dict, optional)
  Example: {'task_id': 'generation', 'base_model': {'model_id': 'google/flan-t5-xl'}}
FINE_TUNING (dict, optional)
  Example: {'task_id': 'generation', 'base_model': {'model_id': 'bigscience/bloom-560m'}}
AUTO_UPDATE_MODEL (bool, optional)
  Example: False
FEDERATED_LEARNING (dict, optional)
  Example: 3c1ce536-20dc-426e-aac7-7284cf3befc6
SPACE_UID (str, optional)
  Example: 3c1ce536-20dc-426e-aac7-7284cf3befc6
MODEL_DEFINITION (dict, optional)
  Example: {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '12', 'model_type': 'string', 'hardware_spec': {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '12', 'name': 'string', 'num_nodes': '2'}, 'software_spec': {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '12', 'name': '...'}, 'command': 'string', 'parameters': {}}
DESCRIPTION (str, required)
  Example: tensorflow model training
NAME (str, required)
  Example: sample training
Enums¶
- class ibm_watsonx_ai.utils.autoai.enums.ClassificationAlgorithms(value)[source]¶
Bases:
Enum
Classification algorithms that AutoAI can use for IBM Cloud.
- DT = 'DecisionTreeClassifier'¶
- EX_TREES = 'ExtraTreesClassifier'¶
- GB = 'GradientBoostingClassifier'¶
- LGBM = 'LGBMClassifier'¶
- LR = 'LogisticRegression'¶
- RF = 'RandomForestClassifier'¶
- SnapBM = 'SnapBoostingMachineClassifier'¶
- SnapDT = 'SnapDecisionTreeClassifier'¶
- SnapLR = 'SnapLogisticRegression'¶
- SnapRF = 'SnapRandomForestClassifier'¶
- SnapSVM = 'SnapSVMClassifier'¶
- XGB = 'XGBClassifier'¶
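Each member maps a short alias to an estimator class name; the alias is what you reference in code, the value is what appears in pipeline details. A minimal sketch mirroring a subset of the enum (the full definition lives in ibm_watsonx_ai.utils.autoai.enums; only the three members shown are reproduced here):

```python
from enum import Enum

# Illustrative subset of ClassificationAlgorithms to show name/value access.
class ClassificationAlgorithms(Enum):
    RF = "RandomForestClassifier"
    XGB = "XGBClassifier"
    SnapDT = "SnapDecisionTreeClassifier"

# Members are typically handed to the AutoAI optimizer in a list of allowed
# estimators; the exact optimizer parameter name is not shown in this section.
chosen = ClassificationAlgorithms.XGB
```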
- class ibm_watsonx_ai.utils.autoai.enums.ClassificationAlgorithmsCP4D(value)[source]¶
Bases:
Enum
Classification algorithms that AutoAI can use for IBM Cloud Pak® for Data (CP4D). The SnapML estimators (SnapDT, SnapRF, SnapSVM, SnapLR) are supported on IBM Cloud Pak® for Data version 4.0.2 and later.
- DT = 'DecisionTreeClassifierEstimator'¶
- EX_TREES = 'ExtraTreesClassifierEstimator'¶
- GB = 'GradientBoostingClassifierEstimator'¶
- LGBM = 'LGBMClassifierEstimator'¶
- LR = 'LogisticRegressionEstimator'¶
- RF = 'RandomForestClassifierEstimator'¶
- SnapBM = 'SnapBoostingMachineClassifier'¶
- SnapDT = 'SnapDecisionTreeClassifier'¶
- SnapLR = 'SnapLogisticRegression'¶
- SnapRF = 'SnapRandomForestClassifier'¶
- SnapSVM = 'SnapSVMClassifier'¶
- XGB = 'XGBClassifierEstimator'¶
- class ibm_watsonx_ai.utils.autoai.enums.DataConnectionTypes[source]¶
Bases:
object
Supported types of DataConnection.
- CA = 'connection_asset'¶
- CN = 'container'¶
- DS = 'data_asset'¶
- FS = 'fs'¶
- S3 = 's3'¶
- class ibm_watsonx_ai.utils.autoai.enums.Directions[source]¶
Bases:
object
Possible metric directions.
- ASCENDING = 'ascending'¶
- DESCENDING = 'descending'¶
- class ibm_watsonx_ai.utils.autoai.enums.DocumentsSamplingTypes[source]¶
Bases:
object
Types of training data sampling.
- BENCHMARK_DRIVEN = 'benchmark_driven'¶
- RANDOM = 'random'¶
- class ibm_watsonx_ai.utils.autoai.enums.ForecastingAlgorithms(value)[source]¶
Bases:
Enum
Forecasting algorithms that AutoAI can use for IBM Cloud.
- ARIMA = 'ARIMA'¶
- BATS = 'BATS'¶
- ENSEMBLER = 'Ensembler'¶
- HW = 'HoltWinters'¶
- LR = 'LinearRegression'¶
- RF = 'RandomForest'¶
- SVM = 'SVM'¶
- class ibm_watsonx_ai.utils.autoai.enums.ForecastingAlgorithmsCP4D(value)[source]¶
Bases:
Enum
Forecasting algorithms that AutoAI can use for IBM Cloud Pak® for Data (CP4D).
- ARIMA = 'ARIMA'¶
- BATS = 'BATS'¶
- ENSEMBLER = 'Ensembler'¶
- HW = 'HoltWinters'¶
- LR = 'LinearRegression'¶
- RF = 'RandomForest'¶
- SVM = 'SVM'¶
- class ibm_watsonx_ai.utils.autoai.enums.ForecastingPipelineTypes(value)[source]¶
Bases:
Enum
Forecasting pipeline types that AutoAI can use for IBM Cloud Pak® for Data (CP4D).
- ARIMA = 'ARIMA'¶
- ARIMAX = 'ARIMAX'¶
- ARIMAX_DMLR = 'ARIMAX_DMLR'¶
- ARIMAX_PALR = 'ARIMAX_PALR'¶
- ARIMAX_RAR = 'ARIMAX_RAR'¶
- ARIMAX_RSAR = 'ARIMAX_RSAR'¶
- Bats = 'Bats'¶
- DifferenceFlattenEnsembler = 'DifferenceFlattenEnsembler'¶
- ExogenousDifferenceFlattenEnsembler = 'ExogenousDifferenceFlattenEnsembler'¶
- ExogenousFlattenEnsembler = 'ExogenousFlattenEnsembler'¶
- ExogenousLocalizedFlattenEnsembler = 'ExogenousLocalizedFlattenEnsembler'¶
- ExogenousMT2RForecaster = 'ExogenousMT2RForecaster'¶
- ExogenousRandomForestRegressor = 'ExogenousRandomForestRegressor'¶
- ExogenousSVM = 'ExogenousSVM'¶
- FlattenEnsembler = 'FlattenEnsembler'¶
- HoltWinterAdditive = 'HoltWinterAdditive'¶
- HoltWinterMultiplicative = 'HoltWinterMultiplicative'¶
- LocalizedFlattenEnsembler = 'LocalizedFlattenEnsembler'¶
- MT2RForecaster = 'MT2RForecaster'¶
- RandomForestRegressor = 'RandomForestRegressor'¶
- SVM = 'SVM'¶
- static get_exogenous()[source]¶
Get a list of pipelines that use supporting features (exogenous pipelines).
- Returns:
list of pipelines using supporting features
- Return type:
list[ForecastingPipelineTypes]
- static get_non_exogenous()[source]¶
Get a list of pipelines that are not using supporting features (non-exogenous pipelines).
- Returns:
list of pipelines that do not use supporting features
- Return type:
list[ForecastingPipelineTypes]
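The two static methods partition the pipelines by whether they accept supporting (exogenous) features. Judging by the member list above, the exogenous pipelines are exactly the ARIMAX_* and Exogenous* variants; the sketch below reproduces that partition over plain value strings (the is_exogenous heuristic is an assumption drawn from the naming scheme, not SDK code):

```python
# A sample of the pipeline value strings listed above.
PIPELINES = [
    "ARIMA", "ARIMAX", "ARIMAX_DMLR", "Bats", "ExogenousSVM",
    "FlattenEnsembler", "MT2RForecaster", "SVM",
]

def is_exogenous(name):
    # Heuristic from the naming scheme: ARIMAX_* and Exogenous* pipelines
    # use supporting features; everything else does not.
    return name.startswith("ARIMAX") or name.startswith("Exogenous")

exogenous = [p for p in PIPELINES if is_exogenous(p)]
non_exogenous = [p for p in PIPELINES if not is_exogenous(p)]
```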
- class ibm_watsonx_ai.utils.autoai.enums.ImputationStrategy(value)[source]¶
Bases:
Enum
Missing values imputation strategies.
- BEST_OF_DEFAULT_IMPUTERS = 'best_of_default_imputers'¶
- CUBIC = 'cubic'¶
- FLATTEN_ITERATIVE = 'flatten_iterative'¶
- LINEAR = 'linear'¶
- MEAN = 'mean'¶
- MEDIAN = 'median'¶
- MOST_FREQUENT = 'most_frequent'¶
- NEXT = 'next'¶
- NO_IMPUTATION = 'no_imputation'¶
- PREVIOUS = 'previous'¶
- VALUE = 'value'¶
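Several of these strategies have a straightforward meaning on a single column. A minimal sketch of a few of them in plain Python, assuming missing values are represented as None (the impute helper is illustrative only; the SDK applies these strategies internally):

```python
from statistics import mean, median
from collections import Counter

def impute(values, strategy):
    """Illustrates a few ImputationStrategy values on a 1-D column."""
    observed = [v for v in values if v is not None]
    if strategy == "mean":
        fill = mean(observed)
    elif strategy == "median":
        fill = median(observed)
    elif strategy == "most_frequent":
        fill = Counter(observed).most_common(1)[0][0]
    elif strategy == "no_imputation":
        return list(values)
    else:
        raise ValueError(f"strategy not covered in this sketch: {strategy}")
    return [fill if v is None else v for v in values]

column = [1.0, None, 3.0, 1.0]
```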
- class ibm_watsonx_ai.utils.autoai.enums.Metrics[source]¶
Bases:
object
Supported types of classification and regression metrics in AutoAI.
- ACCURACY_AND_DISPARATE_IMPACT_SCORE = 'accuracy_and_disparate_impact'¶
- ACCURACY_SCORE = 'accuracy'¶
- AVERAGE_PRECISION_SCORE = 'average_precision'¶
- EXPLAINED_VARIANCE_SCORE = 'explained_variance'¶
- F1_SCORE = 'f1'¶
- F1_SCORE_MACRO = 'f1_macro'¶
- F1_SCORE_MICRO = 'f1_micro'¶
- F1_SCORE_WEIGHTED = 'f1_weighted'¶
- LOG_LOSS = 'neg_log_loss'¶
- MEAN_ABSOLUTE_ERROR = 'neg_mean_absolute_error'¶
- MEAN_SQUARED_ERROR = 'neg_mean_squared_error'¶
- MEAN_SQUARED_LOG_ERROR = 'neg_mean_squared_log_error'¶
- MEDIAN_ABSOLUTE_ERROR = 'neg_median_absolute_error'¶
- PRECISION_SCORE = 'precision'¶
- PRECISION_SCORE_MACRO = 'precision_macro'¶
- PRECISION_SCORE_MICRO = 'precision_micro'¶
- PRECISION_SCORE_WEIGHTED = 'precision_weighted'¶
- R2_AND_DISPARATE_IMPACT_SCORE = 'r2_and_disparate_impact'¶
- R2_SCORE = 'r2'¶
- RECALL_SCORE = 'recall'¶
- RECALL_SCORE_MACRO = 'recall_macro'¶
- RECALL_SCORE_MICRO = 'recall_micro'¶
- RECALL_SCORE_WEIGHTED = 'recall_weighted'¶
- ROC_AUC_SCORE = 'roc_auc'¶
- ROOT_MEAN_SQUARED_ERROR = 'neg_root_mean_squared_error'¶
- ROOT_MEAN_SQUARED_LOG_ERROR = 'neg_root_mean_squared_log_error'¶
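Note the neg_ prefix on the error metrics: following the scikit-learn scorer convention, errors are negated so that a larger score is always better. To report the raw error, negate the score again; raw_error below is a hypothetical helper illustrating the convention:

```python
def raw_error(metric_name, score):
    # neg_* metrics are negated errors (larger is better); flip the sign
    # to recover the raw error value for reporting.
    return -score if metric_name.startswith("neg_") else score

rmse = raw_error("neg_root_mean_squared_error", -3.2)
```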
- class ibm_watsonx_ai.utils.autoai.enums.MetricsToDirections(value)[source]¶
Bases:
Enum
Map of metrics directions.
- ACCURACY = 'ascending'¶
- AVERAGE_PRECISION = 'ascending'¶
- EXPLAINED_VARIANCE = 'ascending'¶
- F1 = 'ascending'¶
- F1_MACRO = 'ascending'¶
- F1_MICRO = 'ascending'¶
- F1_WEIGHTED = 'ascending'¶
- NEG_LOG_LOSS = 'descending'¶
- NEG_MEAN_ABSOLUTE_ERROR = 'descending'¶
- NEG_MEAN_SQUARED_ERROR = 'descending'¶
- NEG_MEAN_SQUARED_LOG_ERROR = 'descending'¶
- NEG_MEDIAN_ABSOLUTE_ERROR = 'descending'¶
- NEG_ROOT_MEAN_SQUARED_ERROR = 'descending'¶
- NEG_ROOT_MEAN_SQUARED_LOG_ERROR = 'descending'¶
- NORMALIZED_GINI_COEFFICIENT = 'ascending'¶
- PRECISION = 'ascending'¶
- PRECISION_MACRO = 'ascending'¶
- PRECISION_MICRO = 'ascending'¶
- PRECISION_WEIGHTED = 'ascending'¶
- R2 = 'ascending'¶
- RECALL = 'ascending'¶
- RECALL_MACRO = 'ascending'¶
- RECALL_MICRO = 'ascending'¶
- RECALL_WEIGHTED = 'ascending'¶
- ROC_AUC = 'ascending'¶
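One plausible reading of this mapping is that 'ascending' marks metrics where larger values are better and 'descending' the reverse; under that assumption the mapping can drive leaderboard ranking. The DIRECTIONS subset below is copied from the members above, and best is a hypothetical helper:

```python
# Subset of the MetricsToDirections mapping, as plain data.
DIRECTIONS = {
    "accuracy": "ascending",
    "neg_log_loss": "descending",
}

def best(scores, metric):
    # Assumption: "ascending" means pick the largest score, "descending"
    # the smallest. scores is a list of (pipeline_name, score) pairs.
    pick = max if DIRECTIONS[metric] == "ascending" else min
    return pick(scores, key=lambda item: item[1])

leaderboard = [("P1", 0.81), ("P2", 0.86)]
winner, _ = best(leaderboard, "accuracy")
```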
- class ibm_watsonx_ai.utils.autoai.enums.PipelineTypes[source]¶
Bases:
object
Supported types of Pipelines.
- LALE = 'lale'¶
- SKLEARN = 'sklearn'¶
- class ibm_watsonx_ai.utils.autoai.enums.PositiveLabelClass[source]¶
Bases:
object
Metrics that need positive label definition for binary classification.
- AVERAGE_PRECISION_SCORE = 'average_precision'¶
- F1_SCORE = 'f1'¶
- F1_SCORE_MACRO = 'f1_macro'¶
- F1_SCORE_MICRO = 'f1_micro'¶
- F1_SCORE_WEIGHTED = 'f1_weighted'¶
- PRECISION_SCORE = 'precision'¶
- PRECISION_SCORE_MACRO = 'precision_macro'¶
- PRECISION_SCORE_MICRO = 'precision_micro'¶
- PRECISION_SCORE_WEIGHTED = 'precision_weighted'¶
- RECALL_SCORE = 'recall'¶
- RECALL_SCORE_MACRO = 'recall_macro'¶
- RECALL_SCORE_MICRO = 'recall_micro'¶
- RECALL_SCORE_WEIGHTED = 'recall_weighted'¶
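In practice this class answers one question: for a binary problem, does the chosen metric require a positive_label to be defined? A small membership-check sketch (the set and helper are illustrative, using a subset of the metric values above):

```python
# Subset of the PositiveLabelClass metric values.
POSITIVE_LABEL_METRICS = {"f1", "precision", "recall", "average_precision"}

def needs_positive_label(metric, prediction_type):
    # Only binary classification with one of these metrics needs a
    # positive class defined.
    return prediction_type == "binary" and metric in POSITIVE_LABEL_METRICS
```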
- class ibm_watsonx_ai.utils.autoai.enums.PredictionType[source]¶
Bases:
object
Supported types of learning.
- BINARY = 'binary'¶
- CLASSIFICATION = 'classification'¶
- FORECASTING = 'forecasting'¶
- MULTICLASS = 'multiclass'¶
- REGRESSION = 'regression'¶
- TIMESERIES_ANOMALY_PREDICTION = 'timeseries_anomaly_prediction'¶
- class ibm_watsonx_ai.utils.autoai.enums.RAGMetrics[source]¶
Bases:
object
Supported types of AutoAI RAG metrics.
- ANSWER_CORRECTNESS = 'answer_correctness'¶
- CONTEXT_CORRECTNESS = 'context_correctness'¶
- FAITHFULNESS = 'faithfulness'¶
- class ibm_watsonx_ai.utils.autoai.enums.RegressionAlgorithms(value)[source]¶
Bases:
Enum
Regression algorithms that AutoAI can use for IBM Cloud.
- DT = 'DecisionTreeRegressor'¶
- EX_TREES = 'ExtraTreesRegressor'¶
- GB = 'GradientBoostingRegressor'¶
- LGBM = 'LGBMRegressor'¶
- LR = 'LinearRegression'¶
- RF = 'RandomForestRegressor'¶
- RIDGE = 'Ridge'¶
- SnapBM = 'SnapBoostingMachineRegressor'¶
- SnapDT = 'SnapDecisionTreeRegressor'¶
- SnapRF = 'SnapRandomForestRegressor'¶
- XGB = 'XGBRegressor'¶
- class ibm_watsonx_ai.utils.autoai.enums.RegressionAlgorithmsCP4D(value)[source]¶
Bases:
Enum
Regression algorithms that AutoAI can use for IBM Cloud Pak® for Data (CP4D). The SnapML estimators (SnapDT, SnapRF, SnapBM) are supported on IBM Cloud Pak® for Data version 4.0.2 and later.
- DT = 'DecisionTreeRegressorEstimator'¶
- EX_TREES = 'ExtraTreesRegressorEstimator'¶
- GB = 'GradientBoostingRegressorEstimator'¶
- LGBM = 'LGBMRegressorEstimator'¶
- LR = 'LinearRegressionEstimator'¶
- RF = 'RandomForestRegressorEstimator'¶
- RIDGE = 'RidgeEstimator'¶
- SnapBM = 'SnapBoostingMachineRegressor'¶
- SnapDT = 'SnapDecisionTreeRegressor'¶
- SnapRF = 'SnapRandomForestRegressor'¶
- XGB = 'XGBRegressorEstimator'¶
- class ibm_watsonx_ai.utils.autoai.enums.RunStateTypes[source]¶
Bases:
object
Supported types of AutoAI fit/run.
- COMPLETED = 'completed'¶
- FAILED = 'failed'¶
- class ibm_watsonx_ai.utils.autoai.enums.SamplingTypes[source]¶
Bases:
object
Types of training data sampling.
- FIRST_VALUES = 'first_n_records'¶
- LAST_VALUES = 'truncate'¶
- RANDOM = 'random'¶
- STRATIFIED = 'stratified'¶
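The first three sampling types have simple in-memory equivalents: keep the head of the data, keep the tail ('truncate'), or draw a random sample. A sketch over a list of rows (the sample helper is illustrative; stratified sampling is omitted because it needs label information):

```python
import random

def sample(rows, sampling_type, n, seed=0):
    """Illustrates SamplingTypes values on an in-memory list of rows."""
    if sampling_type == "first_n_records":
        return rows[:n]
    if sampling_type == "truncate":  # LAST_VALUES: keep the tail
        return rows[-n:]
    if sampling_type == "random":
        return random.Random(seed).sample(rows, n)
    raise ValueError(f"not covered in this sketch: {sampling_type}")

rows = list(range(10))
```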
- class ibm_watsonx_ai.utils.autoai.enums.TShirtSize[source]¶
Bases:
object
Possible sizes of the AutoAI POD. Depending on the POD size, AutoAI can support different data set sizes.
S - small (2 vCPUs and 8 GB of RAM)
M - medium (4 vCPUs and 16 GB of RAM)
L - large (8 vCPUs and 32 GB of RAM)
XL - extra large (16 vCPUs and 64 GB of RAM)
- L = 'l'¶
- M = 'm'¶
- S = 's'¶
- XL = 'xl'¶
- class ibm_watsonx_ai.utils.autoai.enums.TimeseriesAnomalyPredictionAlgorithms(value)[source]¶
Bases:
Enum
Timeseries Anomaly Prediction algorithms that AutoAI can use for IBM Cloud.
- Forecasting = 'Forecasting'¶
- Relationship = 'Relationship'¶
- Window = 'Window'¶
- class ibm_watsonx_ai.utils.autoai.enums.TimeseriesAnomalyPredictionPipelineTypes(value)[source]¶
Bases:
Enum
Timeseries Anomaly Prediction pipeline types that AutoAI can use for IBM Cloud.
- PointwiseBoundedBATS = 'PointwiseBoundedBATS'¶
- PointwiseBoundedBATSForceUpdate = 'PointwiseBoundedBATSForceUpdate'¶
- PointwiseBoundedHoltWintersAdditive = 'PointwiseBoundedHoltWintersAdditive'¶
- WindowLOF = 'WindowLOF'¶
- WindowNN = 'WindowNN'¶
- WindowPCA = 'WindowPCA'¶
- class ibm_watsonx_ai.utils.autoai.enums.Transformers[source]¶
Bases:
object
Supported types of cognito transformer names in AutoAI.
- ABS = 'abs'¶
- CBRT = 'cbrt'¶
- COS = 'cos'¶
- CUBE = 'cube'¶
- DIFF = 'diff'¶
- DIVIDE = 'divide'¶
- FEATUREAGGLOMERATION = 'featureagglomeration'¶
- ISOFORESTANOMALY = 'isoforestanomaly'¶
- LOG = 'log'¶
- MAX = 'max'¶
- MINMAXSCALER = 'minmaxscaler'¶
- NXOR = 'nxor'¶
- PCA = 'pca'¶
- PRODUCT = 'product'¶
- ROUND = 'round'¶
- SIGMOID = 'sigmoid'¶
- SIN = 'sin'¶
- SQRT = 'sqrt'¶
- SQUARE = 'square'¶
- STDSCALER = 'stdscaler'¶
- SUM = 'sum'¶
- TAN = 'tan'¶
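Most of these transformers are simple unary functions applied to a numeric feature during feature engineering. A sketch expressing a few of them in plain Python (the dictionary and transform helper are illustrative, not part of the package):

```python
import math

# A few of the unary transformers above, as plain functions. cbrt is written
# with copysign so it also handles negative inputs.
UNARY_TRANSFORMERS = {
    "abs": abs,
    "cbrt": lambda x: math.copysign(abs(x) ** (1 / 3), x),
    "log": math.log,
    "square": lambda x: x * x,
    "sqrt": math.sqrt,
}

def transform(name, value):
    return UNARY_TRANSFORMERS[name](value)
```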