Core#

Connections#

class client.Connections(client)[source]#

Store and manage Connections.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ConnectionMetaNames object>#

MetaNames for Connection creation.

create(meta_props)[source]#

Create a connection. Examples of input to the PROPERTIES field:

  1. MySQL

    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "3306",
        "host": "host url",
        "ssl": "false",
        "username": "username"
    }
    
  2. Google BigQuery

    1. Method 1: Use a service account JSON. The generated service account JSON can be provided as

      input as-is. Substitute your actual values; the example below is only indicative of the fields. Refer to the Google BigQuery documentation for how to generate a service account JSON.

      client.connections.ConfigurationMetaNames.PROPERTIES: {
          "type": "service_account",
          "project_id": "project_id",
          "private_key_id": "private_key_id",
          "private_key": "private key contents",
          "client_email": "client_email",
          "client_id": "client_id",
          "auth_uri": "https://accounts.google.com/o/oauth2/auth",
          "token_uri": "https://oauth2.googleapis.com/token",
          "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
          "client_x509_cert_url": "client_x509_cert_url"
      }
      
    2. Method 2: Use OAuth. Refer to the Google BigQuery documentation for how to generate an OAuth token.

      client.connections.ConfigurationMetaNames.PROPERTIES: {
          "access_token": "access token generated for big query",
          "refresh_token": "refresh token",
          "project_id": "project_id",
          "client_secret": "client_secret",
          "client_id": "client_id"
      }
      
  3. MS SQL

    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "1433",
        "host": "host",
        "username": "username"
    }
    
  4. Teradata

    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "1025",
        "host": "host",
        "username": "username"
    }
    
Parameters:

meta_props (dict) –

metadata of the connection configuration. To see available meta names use:

client.connections.ConfigurationMetaNames.get()

Returns:

metadata of the stored connection

Return type:

dict

Example

sqlserver_data_source_type_id = client.connections.get_datasource_type_uid_by_name('sqlserver')
connections_details = client.connections.create({
    client.connections.ConfigurationMetaNames.NAME: "sqlserver connection",
    client.connections.ConfigurationMetaNames.DESCRIPTION: "connection description",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: sqlserver_data_source_type_id,
    client.connections.ConfigurationMetaNames.PROPERTIES: { "database": "database",
                                                            "password": "password",
                                                            "port": "1433",
                                                            "host": "host",
                                                            "username": "username"}
})
delete(connection_id)[source]#

Delete a stored Connection.

Parameters:

connection_id (str) – Unique id of the connection to be deleted.

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.connections.delete(connection_id)
get_datasource_type_uid_by_name(name)[source]#

Get the stored datasource type id for a given datasource type name.

Parameters:

name (str) – name of the datasource type

Returns:

datasource Unique Id

Return type:

str

Example

client.connections.get_datasource_type_uid_by_name('cloudobjectstorage')
get_details(connection_id=None)[source]#

Get connection details for the given unique Connection id. If no connection_id is passed, details for all connections will be returned.

Parameters:

connection_id (str) – Unique id of Connection

Returns:

metadata of the stored Connection

Return type:

dict

Example

connection_details = client.connections.get_details(connection_id)
connection_details = client.connections.get_details()
static get_uid(connection_details)[source]#

Get Unique Id of stored connection.

Parameters:

connection_details (dict) – metadata of the stored connection

Returns:

Unique Id of stored connection

Return type:

str

Example

connection_uid = client.connections.get_uid(connection_details)
list(return_as_df=True)[source]#

Print all stored connections in a table format.

Parameters:

return_as_df (bool, optional) – determine if the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed connections or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.connections.list()
list_datasource_types(return_as_df=True)[source]#

Print stored datasource type assets in a table format.

Parameters:

return_as_df (bool, optional) – determine if the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed datasource types or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.connections.list_datasource_types()
list_uploaded_db_drivers(return_as_df=True)[source]#

Print uploaded db driver jars in a table format. Supported for IBM Cloud Pak for Data only.

Parameters:

return_as_df (bool, optional) – determine if the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed uploaded db drivers or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.connections.list_uploaded_db_drivers()
sign_db_driver_url(jar_name)[source]#

Get a signed db driver jar url to be used during creation of a generic JDBC connection. The jar name passed as an argument must be uploaded to the system first. Supported for IBM Cloud Pak for Data only, version 4.0.4 and above.

Parameters:

jar_name (str) – db driver jar name

Returns:

signed db driver url

Return type:

str

Example

jar_uri = client.connections.sign_db_driver_url('db2jcc4.jar')
upload_db_driver(path)[source]#

Upload db driver jar. Supported for IBM Cloud Pak for Data only, version 4.0.4 and above.

Parameters:

path (str) – path to db driver jar

Example

client.connections.upload_db_driver('example/path/db2jcc4.jar')
class metanames.ConnectionMetaNames[source]#

Set of MetaNames for Connection.

Available MetaNames:

MetaName

Type

Required

Example value

NAME

str

Y

my_space

DESCRIPTION

str

N

my_description

DATASOURCE_TYPE

str

Y

1e3363a5-7ccf-4fff-8022-4850a8024b68

PROPERTIES

dict

Y

{'database': 'db_name', 'host': 'host_url', 'password': 'password', 'username': 'user'}

FLAGS

list

N

['personal_credentials']
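As a hedged sketch of how these metanames combine (the raw string keys below stand in for the client.connections.ConfigurationMetaNames constants, and every value is a placeholder, not a real datasource type id or credential):

```python
# Hypothetical meta_props sketch; in practice the dictionary keys come from
# client.connections.ConfigurationMetaNames and the values from your environment.
meta_props = {
    "name": "my MySQL connection",                              # NAME
    "description": "connection using personal credentials",     # DESCRIPTION
    "datasource_type": "00000000-0000-0000-0000-000000000000",  # DATASOURCE_TYPE (placeholder id)
    "properties": {                                             # PROPERTIES
        "database": "database",
        "host": "host url",
        "username": "username",
        "password": "password",
    },
    "flags": ["personal_credentials"],                          # FLAGS (optional)
}
# In a live session this would be passed to client.connections.create(meta_props).
```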

Data assets#

class client.Assets(client)[source]#

Store and manage data assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.AssetsMetaNames object>#

MetaNames for Data Assets creation.

create(name, file_path)[source]#

Create a data asset and upload content to it.

Parameters:
  • name (str) – name to be given to the data asset

  • file_path (str) – path to the content file to be uploaded

Returns:

metadata of the stored data asset

Return type:

dict

Example

asset_details = client.data_assets.create(name="sample_asset", file_path="/path/to/file")
delete(asset_uid)[source]#

Delete a stored data asset.

Parameters:

asset_uid (str) – Unique Id of data asset

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.data_assets.delete(asset_uid)
download(asset_uid, filename)[source]#

Download and store the content of a data asset.

Parameters:
  • asset_uid (str) – the Unique Id of the data asset to be downloaded

  • filename (str) – filename to be used for the downloaded file

Returns:

normalized path to the downloaded asset content

Return type:

str

Example

client.data_assets.download(asset_uid,"sample_asset.csv")
get_content(asset_uid)[source]#

Download the content of a data asset.

Parameters:

asset_uid (str) – the Unique Id of the data asset to be downloaded

Returns:

the asset content

Return type:

binary

Example

content = client.data_assets.get_content(asset_uid).decode('ascii')
get_details(asset_uid=None)[source]#

Get data asset details. If no asset_uid is passed, details for all assets will be returned.

Parameters:

asset_uid (str) – Unique id of asset

Returns:

metadata of the stored data asset

Return type:

dict

Example

asset_details = client.data_assets.get_details(asset_uid)
static get_href(asset_details)[source]#

Get url of stored data asset.

Parameters:

asset_details (dict) – stored data asset details

Returns:

href of stored data asset

Return type:

str

Example

asset_details = client.data_assets.get_details(asset_uid)
asset_href = client.data_assets.get_href(asset_details)
static get_id(asset_details)[source]#

Get Unique Id of stored data asset.

Parameters:

asset_details (dict) – details of the stored data asset

Returns:

Unique Id of stored data asset

Return type:

str

Example

asset_id = client.data_assets.get_id(asset_details)
static get_uid(asset_details)[source]#

Get Unique Id of stored data asset.

Deprecated: Use get_id(details) instead.

Parameters:

asset_details (dict) – metadata of the stored data asset

Returns:

Unique Id of stored asset

Return type:

str

Example

asset_uid = client.data_assets.get_uid(asset_details)
list(limit=None, return_as_df=True)[source]#

Print stored data assets in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int) – limit number of fetched records

  • return_as_df (bool, optional) – determine if table should be returned as pandas.DataFrame object, default: True

Example

client.data_assets.list()
store(meta_props)[source]#

Create a data asset and upload content to it.

Parameters:

meta_props (dict) –

meta data of the space configuration. To see available meta names use:

client.data_assets.ConfigurationMetaNames.get()

Example

Example for data asset creation for files :

metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 'sample.csv'
}
asset_details = client.data_assets.store(meta_props=metadata)

Example of data asset creation using connection:

metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '39eaa1ee-9aa4-4651-b8fe-95d3ddae',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1/sample.csv'
}
asset_details = client.data_assets.store(meta_props=metadata)

Example for data asset creation with database sources type connection:

metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '23eaf1ee-96a4-4651-b8fe-95d3dadfe',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1'
}
asset_details = client.data_assets.store(meta_props=metadata)
class metanames.AssetsMetaNames[source]#

Set of MetaNames for Data Asset Specs.

Available MetaNames:

MetaName

Type

Required

Example value

NAME

str

Y

my_data_asset

DATA_CONTENT_NAME

str

Y

/test/sample.csv

CONNECTION_ID

str

N

39eaa1ee-9aa4-4651-b8fe-95d3ddae

DESCRIPTION

str

N

my_description

Deployments#

class client.Deployments(client)[source]#

Deploy and score published artifacts (models and functions).

create(artifact_uid=None, meta_props=None, rev_id=None, **kwargs)[source]#

Create a deployment from an artifact. An artifact is a model or function that can be deployed.

Parameters:
  • artifact_uid (str) – published artifact UID (model or function uid)

  • meta_props (dict) –

    metaprops, to see the available list of metanames use:

    client.deployments.ConfigurationMetaNames.get()
    

Returns:

metadata of the created deployment

Return type:

dict

Example

meta_props = {
    wml_client.deployments.ConfigurationMetaNames.NAME: "SAMPLE DEPLOYMENT NAME",
    wml_client.deployments.ConfigurationMetaNames.ONLINE: {},
    wml_client.deployments.ConfigurationMetaNames.HARDWARE_SPEC : { "id":  "e7ed1d6c-2e89-42d7-aed5-8sb972c1d2b"},
    wml_client.deployments.ConfigurationMetaNames.SERVING_NAME : 'sample_deployment'
}
deployment_details = client.deployments.create(artifact_uid, meta_props)
create_job(deployment_id, meta_props, retention=None, transaction_id=None, _asset_id=None)[source]#

Create an asynchronous deployment job.

Parameters:
  • deployment_id (str) – Unique Id of Deployment

  • meta_props (dict) – metaprops. To see the available list of metanames use client.deployments.ScoringMetaNames.get() or client.deployments.DecisionOptimizationmetaNames.get()

  • retention (int, optional) – how many days the job metadata should be retained, takes integer values >= -1, supported only on Cloud

Returns:

metadata of the created async deployment job

Return type:

dict

Note

  • The valid payloads for scoring input are lists of values, pandas DataFrames, or numpy arrays.

Example

scoring_payload = {wml_client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': ['GENDER','AGE','MARITAL_STATUS','PROFESSION'],
                                                                         'values': [['M',23,'Single','Student'],
                                                                                    ['M',55,'Single','Executive']]}]}
async_job = client.deployments.create_job(deployment_id, scoring_payload)
delete(deployment_uid)[source]#

Delete deployment.

Parameters:

deployment_uid (str) – Unique Id of Deployment

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.deployments.delete(deployment_uid)
delete_job(job_uid, hard_delete=False)[source]#

Cancel a deployment job that is currently running. This method can also be used to delete metadata details of completed or canceled jobs when the hard_delete parameter is set to True.

Parameters:
  • job_uid (str) – Unique Id of deployment job which should be canceled

  • hard_delete (bool, optional) –

    specify True or False:

    True - To delete the completed or canceled job.

    False - To cancel the currently running deployment job.

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.deployments.delete_job(job_uid)
generate(deployment_id, prompt=None, params=None, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10, async_mode=False)[source]#

Generate a raw response with prompt for given deployment_id.

Parameters:
  • deployment_id (str) – Id of deployment

  • prompt ((str | None), optional) – prompt needed for text generation. If deployment_id points to a Prompt Template asset, the prompt argument must be None; defaults to None

  • params (dict) – meta props for text generation, use ibm_watson_machine_learning.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames

  • guardrails (bool) – if True, the potentially hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False

  • guardrails_hap_params (dict) – meta props for HAP moderations, use ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames

  • concurrency_limit (int, optional) – number of requests that will be sent in parallel, max is 10

  • async_mode (bool) – if True, results are yielded asynchronously (using a generator). In this case both the prompt and the generated text are concatenated in the final response under generated_text, defaults to False

Returns:

scoring result containing generated content.

Return type:

dict
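Example

A minimal usage sketch (the deployment id and the parameter names below are placeholder assumptions; in a live session client is an authenticated APIClient and the call is uncommented):

```python
# Placeholder values: the deployment id is fabricated, and the params keys
# correspond to text-generation MetaNames (see GenTextParamsMetaNames().show()).
deployment_id = "00000000-0000-0000-0000-000000000000"
params = {
    "decoding_method": "greedy",
    "max_new_tokens": 50,
}
# response = client.deployments.generate(deployment_id,
#                                        prompt="What is a deployment?",
#                                        params=params)
```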

generate_text(deployment_id, prompt=None, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None, concurrency_limit=10)[source]#

Given the selected deployment (deployment_id), a text prompt as input, parameters, and concurrency_limit, the selected inference generates completion text returned as the generated_text response.

Parameters:
  • deployment_id (str) – Id of deployment

  • prompt ((str | None), optional) – the prompt string or list of strings. If a list of strings is passed, requests are managed in parallel at the rate of concurrency_limit, defaults to None

  • params (dict) – meta props for text generation, use ibm_watson_machine_learning.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames

  • raw_response (bool, optional) – return the whole response object

  • guardrails (bool) – if True, the potentially hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False

  • guardrails_hap_params (dict) – meta props for HAP moderations, use ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames

  • concurrency_limit (int) – number of requests that will be sent in parallel, max is 10

Returns:

generated content

Return type:

str

Note

By default only the first occurrence of HAPDetectionWarning is displayed. To enable printing all warnings of this category, use:

import warnings
from ibm_watson_machine_learning.foundation_models.utils import HAPDetectionWarning

warnings.filterwarnings("always", category=HAPDetectionWarning)
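Example

A usage sketch (the prompts are illustrative and the client call is shown commented; `client` is assumed to be an authenticated APIClient):

```python
# A list of prompts is scored in parallel, up to concurrency_limit requests
# at a time; the values below are placeholders.
prompts = [
    "Explain overfitting in one sentence.",
    "Explain regularization in one sentence.",
]
# texts = client.deployments.generate_text(deployment_id,
#                                          prompt=prompts,
#                                          concurrency_limit=5)
```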
generate_text_stream(deployment_id, prompt=None, params=None, raw_response=False, guardrails=False, guardrails_hap_params=None, guardrails_pii_params=None)[source]#

Given the selected deployment (deployment_id), a text prompt as input, and parameters, the selected inference generates streamed text returned by generate_text_stream.

Parameters:
  • deployment_id (str) – Id of deployment

  • prompt ((str | None), optional) – the prompt string, defaults to None

  • params (dict) – meta props for text generation, use ibm_watson_machine_learning.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames

  • raw_response (bool, optional) – yields the whole response object

  • guardrails (bool) – if True, the potentially hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False

  • guardrails_hap_params (dict) – meta props for HAP moderations, use ibm_watson_machine_learning.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames

Returns:

generated content

Return type:

str

Note

By default only the first occurrence of HAPDetectionWarning is displayed. To enable printing all warnings of this category, use:

import warnings
from ibm_watson_machine_learning.foundation_models.utils import HAPDetectionWarning

warnings.filterwarnings("always", category=HAPDetectionWarning)
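Example

A streaming usage sketch (the prompt is illustrative and the loop is shown commented; `client` is assumed to be an authenticated APIClient and deployment_id a real deployment):

```python
# Placeholder prompt; each yielded chunk is a fragment of the generated text.
prompt = "Summarize the plot of Hamlet in two sentences."
# for chunk in client.deployments.generate_text_stream(deployment_id, prompt=prompt):
#     print(chunk, end="")
```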
get_details(deployment_uid=None, serving_name=None, limit=None, asynchronous=False, get_all=False, spec_state=None, _silent=False)[source]#

Get information about deployment(s). If deployment_uid is not passed, all deployment details are fetched.

Parameters:
  • deployment_uid (str, optional) – Unique Id of Deployment

  • serving_name (str, optional) – serving name to filter deployments

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

  • spec_state (SpecStates, optional) – software specification state, can be used only when deployment_uid is None

Returns:

metadata of deployment(s)

Return type:

dict (if deployment_uid is not None) or {“resources”: [dict]} (if deployment_uid is None)

Example

deployment_details = client.deployments.get_details(deployment_uid)
deployment_details = client.deployments.get_details(deployment_uid=deployment_uid)
deployments_details = client.deployments.get_details()
deployments_details = client.deployments.get_details(limit=100)
deployments_details = client.deployments.get_details(limit=100, get_all=True)
deployments_details = []
for entry in client.deployments.get_details(limit=100, asynchronous=True, get_all=True):
    deployments_details.extend(entry)
get_download_url(deployment_details)[source]#

Get deployment_download_url from deployment details.

Parameters:

deployment_details (dict) – created deployment details

Returns:

deployment download URL that is used to download the deployment file (for example, a Core ML model)

Return type:

str

Example

deployment_url = client.deployments.get_download_url(deployment)
static get_href(deployment_details)[source]#

Get deployment_href from deployment details.

Parameters:

deployment_details (dict) – metadata of the deployment.

Returns:

deployment href that is used to manage the deployment

Return type:

str

Example

deployment_href = client.deployments.get_href(deployment)
static get_id(deployment_details)[source]#

Get deployment id from deployment details.

Parameters:

deployment_details (dict) – metadata of the deployment

Returns:

deployment ID that is used to manage the deployment

Return type:

str

Example

deployment_id = client.deployments.get_id(deployment)
get_job_details(job_uid=None, include=None, limit=None)[source]#

Get information about deployment job(s). If the deployment job_uid is not passed, all deployment job details are fetched.

Parameters:
  • job_uid (str, optional) – Unique Job ID

  • include (str, optional) – comma-separated list of fields to retrieve from the ‘decision_optimization’ and ‘scoring’ sections of the output response

  • limit (int, optional) – limit number of fetched records

Returns:

metadata of deployment job(s)

Return type:

dict (if job_uid is not None) or {“resources”: [dict]} (if job_uid is None)

Example

deployment_details = client.deployments.get_job_details()
deployments_details = client.deployments.get_job_details(job_uid=job_uid)
get_job_href(job_details)[source]#

Get the href of the deployment job.

Parameters:

job_details (dict) – metadata of the deployment job

Returns:

href of the deployment job

Return type:

str

Example

job_details = client.deployments.get_job_details(job_uid=job_uid)
job_href = client.deployments.get_job_href(job_details)
get_job_status(job_id)[source]#

Get the status of the deployment job.

Parameters:

job_id (str) – Unique Id of the deployment job

Returns:

status of the deployment job

Return type:

dict

Example

job_status = client.deployments.get_job_status(job_uid)
get_job_uid(job_details)[source]#

Get the Unique Id of the deployment job.

Parameters:

job_details (dict) – metadata of the deployment job

Returns:

Unique Id of the deployment job

Return type:

str

Example

job_details = client.deployments.get_job_details(job_uid=job_uid)
job_uid = client.deployments.get_job_uid(job_details)
static get_scoring_href(deployment_details)[source]#

Get scoring url from deployment details.

Parameters:

deployment_details (dict) – metadata of the deployment

Returns:

scoring endpoint url that is used for making scoring requests

Return type:

str

Example

scoring_href = client.deployments.get_scoring_href(deployment)
static get_serving_href(deployment_details)[source]#

Get serving url from deployment details.

Parameters:

deployment_details (dict) – metadata of the deployment

Returns:

serving endpoint url that is used for making scoring requests

Return type:

str

Example

serving_href = client.deployments.get_serving_href(deployment)
static get_uid(deployment_details)[source]#

Get deployment_uid from deployment details.

Deprecated: Use get_id(deployment_details) instead.

Parameters:

deployment_details (dict) – metadata of the deployment

Returns:

deployment UID that is used to manage the deployment

Return type:

str

Example

deployment_uid = client.deployments.get_uid(deployment)
is_serving_name_available(serving_name)[source]#

Check if serving name is available for usage.

Parameters:

serving_name (str) – serving name to filter deployments

Returns:

information if serving name is available

Return type:

bool

Example

is_available = client.deployments.is_serving_name_available('test')
list(limit=None, return_as_df=True, artifact_type=None)[source]#

Print deployments in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine if the table should be returned as a pandas.DataFrame object, default: True

  • artifact_type (str, optional) – return only deployments with the specified artifact_type

Returns:

pandas.DataFrame with listed deployments or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.deployments.list()
list_jobs(limit=None, return_as_df=True)[source]#

Print the async deployment jobs in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine if the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed deployment jobs or None

Return type:

pandas.DataFrame or None if return_as_df is False

Note

This method lists only async deployment jobs created for a WML deployment.

Example

client.deployments.list_jobs()
score(deployment_id, meta_props, transaction_id=None)[source]#

Make scoring requests against deployed artifact.

Parameters:
  • deployment_id (str) – Unique Id of the deployment to be scored

  • meta_props (dict) – meta props for scoring, use client.deployments.ScoringMetaNames.show() to view the list of ScoringMetaNames

  • transaction_id (str, optional) – transaction id to be passed with records during payload logging

Returns:

scoring result containing prediction and probability

Return type:

dict

Note

  • client.deployments.ScoringMetaNames.INPUT_DATA is the only metaname valid for sync scoring.

  • The valid payloads for scoring input are lists of values, pandas DataFrames, or numpy arrays.

Example

scoring_payload = {wml_client.deployments.ScoringMetaNames.INPUT_DATA:
    [{'fields':
        ['GENDER','AGE','MARITAL_STATUS','PROFESSION'],
        'values': [
            ['M',23,'Single','Student'],
            ['M',55,'Single','Executive']
        ]
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
update(deployment_uid, changes)[source]#

Update existing deployment metadata. If ASSET is patched, the ‘id’ field is mandatory, and a deployment with the provided asset id/rev is started. The deployment id remains the same.

Parameters:
  • deployment_uid (str) – Unique Id of deployment which should be updated

  • changes (dict) – elements which should be changed, where keys are ConfigurationMetaNames

Returns:

metadata of updated deployment

Return type:

dict

Examples

metadata = {client.deployments.ConfigurationMetaNames.NAME:"updated_Deployment"}
updated_deployment_details = client.deployments.update(deployment_uid, changes=metadata)

metadata = {client.deployments.ConfigurationMetaNames.ASSET: {  "id": "ca0cd864-4582-4732-b365-3165598dc945",
                                                                "rev":"2" }}
deployment_details = client.deployments.update(deployment_uid, changes=metadata)
class metanames.DeploymentNewMetaNames[source]#

Set of MetaNames for Deployments Specs.

Available MetaNames:

MetaName

Type

Required

Schema

Example value

TAGS

list

N

['string']

['string1', 'string2']

NAME

str

N

my_deployment

DESCRIPTION

str

N

my_deployment

CUSTOM

dict

N

{}

ASSET

dict

N

{'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '1'}

PROMPT_TEMPLATE

dict

N

{'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab'}

HARDWARE_SPEC

dict

N

{'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc'}

HYBRID_PIPELINE_HARDWARE_SPECS

list

N

[{'node_runtime_id': 'auto_ai.kb', 'hardware_spec': {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc', 'num_nodes': '2'}}]

ONLINE

dict

N

{}

BATCH

dict

N

{}

R_SHINY

dict

N

{'authentication': 'anyone_with_url'}

VIRTUAL

dict

N

{}

OWNER

str

N

<owner_id>

BASE_MODEL_ID

str

N

google/flan-ul2

BASE_DEPLOYMENT_ID

str

N

76a60161-facb-4968-a475-a6f1447c44bf

PROMPT_VARIABLES

dict

N

{'key': 'value'}

class ibm_watson_machine_learning.utils.enums.RShinyAuthenticationValues(value)[source]#

Allowable values of R_Shiny authentication.

ANYONE_WITH_URL = 'anyone_with_url'#
ANY_VALID_USER = 'any_valid_user'#
MEMBERS_OF_DEPLOYMENT_SPACE = 'members_of_deployment_space'#
class metanames.ScoringMetaNames[source]#

Set of MetaNames for Scoring.

Available MetaNames:

MetaName

Type

Required

Schema

Example value

NAME

str

N

jobs test

INPUT_DATA

list

N

[{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]

[{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]

INPUT_DATA_REFERENCES

list

N

[{'id(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]

OUTPUT_DATA_REFERENCE

dict

N

{'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}

EVALUATIONS_SPEC

list

N

[{'id(optional)': 'string', 'input_target(optional)': 'string', 'metrics_names(optional)': 'array[string]'}]

[{'id': 'string', 'input_target': 'string', 'metrics_names': ['auroc', 'accuracy']}]

ENVIRONMENT_VARIABLES

dict

N

{'my_env_var1': 'env_var_value1', 'my_env_var2': 'env_var_value2'}
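As a hedged illustration of the INPUT_DATA_REFERENCES and OUTPUT_DATA_REFERENCE schemas above (the type, href, bucket, and path values below are all placeholders, not values the service prescribes):

```python
# Placeholder payloads assembled from the schema in the table above; every
# value is fabricated and would be replaced with your own connection details.
input_data_reference = {
    "type": "s3",
    "connection": {"href": "/v2/connections/00000000-0000-0000-0000-000000000000"},
    "location": {"bucket": "my-bucket", "path": "input/scoring_input.csv"},
}
output_data_reference = {
    "type": "s3",
    "connection": {"href": "/v2/connections/00000000-0000-0000-0000-000000000000"},
    "location": {"bucket": "my-bucket", "path": "output/scoring_output.csv"},
}
```

These dictionaries would be supplied as the INPUT_DATA_REFERENCES (as a list) and OUTPUT_DATA_REFERENCE values in the meta_props passed to create_job.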

class metanames.DecisionOptimizationMetaNames[source]#

Set of MetaNames for Decision Optimization.

Available MetaNames:

MetaName

Type

Required

Schema

Example value

INPUT_DATA

list

N

[{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]

[{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]

INPUT_DATA_REFERENCES

list

N

[{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]

[{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]

OUTPUT_DATA

list

N

[{'name(optional)': 'string'}]

OUTPUT_DATA_REFERENCES

list

N

{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}

SOLVE_PARAMETERS

dict

N

Export/Import#

class client.Export(client)[source]#
cancel(export_id, space_id=None, project_id=None)[source]#

Cancel an export job. Either space_id or project_id has to be provided.

Note

To delete an export job, use the delete() API.

Parameters:
  • export_id (str) – export job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.export_assets.cancel(export_id='6213cf1-252f-424b-b52d-5cdd9814956c',
                            space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
delete(export_id, space_id=None, project_id=None)[source]#

Delete the given export job. Either space_id or project_id has to be provided.

Parameters:
  • export_id (str) – export job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.export_assets.delete(export_id='6213cf1-252f-424b-b52d-5cdd9814956c',
                            space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
get_details(export_id=None, space_id=None, project_id=None, limit=None, asynchronous=False, get_all=False)[source]#

Get metadata of the given export job. If no export_id is specified, metadata for all export jobs is returned.

Parameters:
  • export_id (str, optional) – export job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

Returns:

export(s) metadata

Return type:

dict (if export_id is not None) or {“resources”: [dict]} (if export_id is None)

Example

details = client.export_assets.get_details(export_id, space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
details = client.export_assets.get_details()
details = client.export_assets.get_details(limit=100)
details = client.export_assets.get_details(limit=100, get_all=True)
details = []
for entry in client.export_assets.get_details(limit=100, asynchronous=True, get_all=True):
    details.extend(entry)
get_exported_content(export_id, space_id=None, project_id=None, file_path=None)[source]#

Get the exported content as a zip file.

Parameters:
  • export_id (str) – export job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

  • file_path (str, optional) – name of the local file to create; this should be the absolute path of the file, and the file shouldn't exist

Returns:

path to the downloaded exported content

Return type:

str

Example

client.export_assets.get_exported_content(export_id,
                                          space_id='98a53931-a8c0-4c2f-8319-c793155e4598',
                                          file_path='/home/user/my_exported_content.zip')
static get_id(export_details)[source]#

Get ID of export job from export details.

Parameters:

export_details (dict) – metadata of the export job

Returns:

ID of the export job

Return type:

str

Example

id = client.export_assets.get_id(export_details)
list(space_id=None, project_id=None, limit=None, return_as_df=True)[source]#

Print export jobs in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed export jobs or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.export_assets.list()
start(meta_props, space_id=None, project_id=None)[source]#

Start the export. Either space_id or project_id has to be provided and is mandatory. ALL_ASSETS defaults to False; there is no need to provide it explicitly unless it has to be set to True. Exactly one of ALL_ASSETS, ASSET_TYPES, or ASSET_IDS has to be given in the meta_props.

In the meta_props:

ALL_ASSETS is a boolean; when set to True, all assets in the given space are exported. ASSET_IDS is an array containing the list of asset IDs to be exported. ASSET_TYPES specifies the asset types to be exported; all assets of those types will be exported.

E.g. wml_model, wml_model_definition, wml_pipeline, wml_function, wml_experiment, software_specification, hardware_specification, package_extension, script
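The "exactly one of ALL_ASSETS, ASSET_TYPES, ASSET_IDS" rule can be sketched as a small client-side pre-flight check. The plain-string keys below stand in for the `ConfigurationMetaNames` constants; this helper is illustrative and not part of the SDK:

```python
def check_export_selector(meta_props):
    """Verify exactly one asset selector is present in the export meta_props.

    Illustrative only: keys are stand-ins for
    client.export_assets.ConfigurationMetaNames constants.
    """
    selectors = ["ALL_ASSETS", "ASSET_TYPES", "ASSET_IDS"]
    present = [k for k in selectors if k in meta_props]
    if len(present) != 1:
        raise ValueError(
            f"Provide exactly one of {selectors}, got {present or 'none'}")
    return present[0]
```

Running the check before calling start() surfaces the conflict locally instead of as a service-side error.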

Parameters:
  • meta_props (dict) – meta data, to see available meta names use client.export_assets.ConfigurationMetaNames.get()

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

Returns:

Response json

Return type:

dict

Example

metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_IDS: ["13a53931-a8c0-4c2f-8319-c793155e7517",
                                                            "13a53931-a8c0-4c2f-8319-c793155e7518"]}

details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")
metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_TYPES: ["wml_model"]}

details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")
metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ALL_ASSETS: True}

details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")
class client.Import(client)[source]#
cancel(import_id, space_id=None, project_id=None)[source]#

Cancel an import job. Either space_id or project_id has to be provided.

Note

To delete an import job, use the delete() API.

Parameters:
  • import_id (str) – import job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.import_assets.cancel(import_id='6213cf1-252f-424b-b52d-5cdd9814956c',
                            space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
delete(import_id, space_id=None, project_id=None)[source]#

Delete the given import job. Either space_id or project_id has to be provided.

Parameters:
  • import_id (str) – import job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.import_assets.delete(import_id='6213cf1-252f-424b-b52d-5cdd9814956c',
                            space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
get_details(import_id=None, space_id=None, project_id=None, limit=None, asynchronous=False, get_all=False)[source]#

Get metadata of the given import job. If no import_id is specified, metadata for all import jobs is returned.

Parameters:
  • import_id (str, optional) – import job identifier

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

Returns:

import(s) metadata

Return type:

dict (if import_id is not None) or {“resources”: [dict]} (if import_id is None)

Example

details = client.import_assets.get_details(import_id)
details = client.import_assets.get_details()
details = client.import_assets.get_details(limit=100)
details = client.import_assets.get_details(limit=100, get_all=True)
details = []
for entry in client.import_assets.get_details(limit=100, asynchronous=True, get_all=True):
    details.extend(entry)
static get_id(import_details)[source]#

Get ID of import job from import details.

Parameters:

import_details (dict) – metadata of the import job

Returns:

ID of the import job

Return type:

str

Example

id = client.import_assets.get_id(import_details)
list(space_id=None, project_id=None, limit=None, return_as_df=True)[source]#

Print import jobs in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed import jobs or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.import_assets.list()
start(file_path=None, space_id=None, project_id=None)[source]#

Start the import. Either space_id or project_id has to be provided. Note that on IBM Cloud Pak® for Data 3.5, import into non-empty space/project is not supported.

Parameters:
  • file_path (str) – file path to the zip file with exported assets

  • space_id (str, optional) – space identifier

  • project_id (str, optional) – project identifier

Returns:

response json

Return type:

dict

Example

details = client.import_assets.start(space_id="98a53931-a8c0-4c2f-8319-c793155e4598",
                                     file_path="/home/user/data_to_be_imported.zip")

Factsheets (IBM Cloud only)#

Warning! Not supported for IBM Cloud Pak for Data.

class client.Factsheets(client)[source]#

Link WML Model to Model Entry.

list_model_entries(catalog_id=None)[source]#

Returns all WKC Model Entry assets for a catalog.

Parameters:

catalog_id (str, optional) – catalog ID where you want to register the model; if None, Model Entries from all catalogs are listed

Returns:

all WKC Model Entry assets for a catalog

Return type:

dict

Example

model_entries = client.factsheets.list_model_entries(catalog_id)
register_model_entry(model_id, meta_props, catalog_id=None)[source]#

Link WML Model to Model Entry.

Parameters:
  • model_id (str) – published model/asset ID

  • meta_props (dict) –

    metaprops, to see the available list of metanames use:

    client.factsheets.ConfigurationMetaNames.get()
    

  • catalog_id (str, optional) – catalog ID where you want to register model

Returns:

metadata of the registration

Return type:

dict

Example

meta_props = {
    wml_client.factsheets.ConfigurationMetaNames.ASSET_ID: '83a53931-a8c0-4c2f-8319-c793155e7517'}

registration_details = client.factsheets.register_model_entry(model_id, meta_props, catalog_id)

or

meta_props = {
    wml_client.factsheets.ConfigurationMetaNames.NAME: "New model entry",
    wml_client.factsheets.ConfigurationMetaNames.DESCRIPTION: "New model entry"}

registration_details = client.factsheets.register_model_entry(model_id, meta_props)
unregister_model_entry(asset_id, catalog_id=None)[source]#

Unregister WKC Model Entry

Parameters:
  • asset_id (str) – WKC model entry id

  • catalog_id (str, optional) – catalog ID where the asset is stored; when not provided, the default client space or project is used

Example

model_entries = client.factsheets.unregister_model_entry(asset_id='83a53931-a8c0-4c2f-8319-c793155e7517',
                                                         catalog_id='34553931-a8c0-4c2f-8319-c793155e7517')

or

client.set.default_space('98f53931-a8c0-4c2f-8319-c793155e7517')
model_entries = client.factsheets.unregister_model_entry(asset_id='83a53931-a8c0-4c2f-8319-c793155e7517')
class metanames.FactsheetsMetaNames[source]#

Set of MetaNames for Factsheets metanames.

Available MetaNames:

MetaName

Type

Required

Example value

ASSET_ID

str

N

13a53931-a8c0-4c2f-8319-c793155e7517

NAME

str

N

New model entry

DESCRIPTION

str

N

New model entry

MODEL_ENTRY_CATALOG_ID

str

Y

13a53931-a8c0-4c2f-8319-c793155e7517

Hardware specifications#

class client.HwSpec(client)[source]#

Store and manage hardware specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.HwSpecMetaNames object>#

MetaNames for Hardware Specification.

delete(hw_spec_id)[source]#

Delete a hardware specification.

Parameters:

hw_spec_id (str) – Unique Id of hardware specification which should be deleted

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

get_details(hw_spec_uid)[source]#

Get hardware specification details.

Parameters:

hw_spec_uid (str) – Unique id of the hardware spec

Returns:

metadata of the hardware specifications

Return type:

dict

Example

hw_spec_details = client.hardware_specifications.get_details(hw_spec_uid)
static get_href(hw_spec_details)[source]#

Get url of hardware specifications.

Parameters:

hw_spec_details (dict) – hardware specifications details

Returns:

href of hardware specifications

Return type:

str

Example

hw_spec_details = client.hardware_specifications.get_details(hw_spec_uid)
hw_spec_href = client.hardware_specifications.get_href(hw_spec_details)
static get_id(hw_spec_details)[source]#

Get ID of hardware specifications asset.

Parameters:

hw_spec_details (dict) – metadata of the hardware specifications

Returns:

Unique Id of hardware specifications

Return type:

str

Example

asset_uid = client.hardware_specifications.get_id(hw_spec_details)
get_id_by_name(hw_spec_name)[source]#

Get Unique Id of hardware specification for the given name.

Parameters:

hw_spec_name (str) – name of the hardware spec

Returns:

Unique Id of hardware specification

Return type:

str

Example

asset_uid = client.hardware_specifications.get_id_by_name(hw_spec_name)
static get_uid(hw_spec_details)[source]#

Get UID of hardware specifications asset.

Deprecated: Use get_id(hw_spec_details) instead.

Parameters:

hw_spec_details (dict) – metadata of the hardware specifications

Returns:

Unique Id of hardware specifications

Return type:

str

Example

asset_uid = client.hardware_specifications.get_uid(hw_spec_details)
get_uid_by_name(hw_spec_name)[source]#

Get Unique Id of hardware specification for the given name.

Deprecated: Use get_id_by_name(hw_spec_name) instead.

Parameters:

hw_spec_name (str) – name of the hardware spec

Returns:

Unique Id of hardware specification

Return type:

str

Example

asset_uid = client.hardware_specifications.get_uid_by_name(hw_spec_name)
list(name=None, return_as_df=True)[source]#

Print hardware specifications in a table format.

Parameters:
  • name (str, optional) – name of the hardware spec

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed hardware specifications or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.hardware_specifications.list()
store(meta_props)[source]#

Create a hardware specification.

Returns:

metadata of the created hardware specification

Return type:

dict

Example

meta_props = {
    client.hardware_specifications.ConfigurationMetaNames.NAME: "custom hardware specification",
    client.hardware_specifications.ConfigurationMetaNames.DESCRIPTION: "Custom hardware specification created with SDK",
    client.hardware_specifications.ConfigurationMetaNames.NODES:{"cpu":{"units":"2"},"mem":{"size":"128Gi"},"gpu":{"num_gpu":1}}
 }

client.hardware_specifications.store(meta_props)
class metanames.HwSpecMetaNames[source]#

Set of MetaNames for Hardware Specifications.

Available MetaNames:

MetaName

Type

Required

Example value

NAME

str

Y

Custom Hardware Specification

DESCRIPTION

str

N

my_description

NODES

dict

N

{}

SPARK

dict

N

{}

DATASTAGE

dict

N

{}

Helpers#

class ibm_watson_machine_learning.helpers.helpers.get_credentials_from_config(env_name, credentials_name, config_path='./config.ini')[source]#

Load credentials from config file.

[DEV_LC]

wml_credentials = { }
cos_credentials = { }
Parameters:
  • env_name (str) – the name of [ENV] defined in config file

  • credentials_name (str) – name of credentials

  • config_path (str) – path to the config file

Returns:

loaded credentials

Return type:

dict

Example

get_credentials_from_config(env_name='DEV_LC', credentials_name='wml_credentials')
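The helper reads a standard INI file, so its behaviour can be approximated with configparser plus ast.literal_eval. This is a sketch under the assumption that each credentials entry is stored as a Python-style dict literal, as in the [DEV_LC] example above; the helper name and URLs here are illustrative, not the SDK's implementation:

```python
import ast
import configparser
import tempfile

# Illustrative config file matching the [DEV_LC] layout described above.
config_text = """
[DEV_LC]
wml_credentials = {"url": "https://us-south.ml.cloud.ibm.com", "apikey": "***"}
cos_credentials = {"endpoint_url": "https://s3.us.cloud-object-storage.appdomain.cloud"}
"""

def load_credentials(env_name, credentials_name, config_path):
    # Read the [ENV] section and evaluate the dict literal stored
    # under the requested credentials name.
    parser = configparser.ConfigParser()
    parser.read(config_path)
    return ast.literal_eval(parser[env_name][credentials_name])

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(config_text)
    path = f.name

creds = load_credentials("DEV_LC", "wml_credentials", path)
```

Keeping credentials in an INI file this way lets notebooks switch environments by changing only env_name.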

Model definitions#

class client.ModelDefinition(client)[source]#

Store and manage model definitions.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ModelDefinitionMetaNames object>#

MetaNames for model definition creation.

create_revision(model_definition_uid)[source]#

Create a revision for the given model definition. Revisions are immutable once created. The metadata and attachment of the model definition are captured and a revision is created from them.

Parameters:

model_definition_uid (str) – model definition ID

Returns:

stored model definition revisions metadata

Return type:

dict

Example

model_definition_revision = client.model_definitions.create_revision(model_definition_id)
delete(model_definition_uid)[source]#

Delete a stored model definition.

Parameters:

model_definition_uid (str) – Unique Id of stored model definition

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.model_definitions.delete(model_definition_uid)
download(model_definition_uid, filename, rev_id=None)[source]#

Download the content of a model definition asset.

Parameters:
  • model_definition_uid (str) – the Unique Id of the model definition asset to be downloaded

  • filename (str) – filename to be used for the downloaded file

  • rev_id (str, optional) – revision id

Returns:

path to the downloaded asset content

Return type:

str

Example

client.model_definitions.download(model_definition_uid, "model_definition_file")
get_details(model_definition_uid=None)[source]#

Get metadata of stored model definition. If no model_definition_uid is passed, details for all model definitions will be returned.

Parameters:

model_definition_uid (str, optional) – Unique Id of model definition

Returns:

metadata of model definition

Return type:

dict (if model_definition_uid is not None)

Example

model_definition_details = client.model_definitions.get_details(model_definition_uid)
get_href(model_definition_details)[source]#

Get href of stored model definition.

Parameters:

model_definition_details (dict) – stored model definition details

Returns:

href of stored model definition

Return type:

str

Example

model_definition_uid = client.model_definitions.get_href(model_definition_details)
get_id(model_definition_details)[source]#

Get Unique Id of stored model definition asset.

Parameters:

model_definition_details (dict) – metadata of the stored model definition asset

Returns:

Unique Id of stored model definition asset

Return type:

str

Example

asset_id = client.model_definitions.get_id(model_definition_details)
get_revision_details(model_definition_uid, rev_uid=None)[source]#

Get metadata of model definition.

Parameters:
  • model_definition_uid (str) – model definition ID

  • rev_uid (str, optional) – revision ID; if not provided, the latest revision is returned if it exists, otherwise an error is raised

Returns:

stored model definitions metadata

Return type:

dict

Example

script_details = client.model_definitions.get_revision_details(model_definition_uid, rev_uid)
get_uid(model_definition_details)[source]#

Get UID of a stored model definition.

Deprecated: Use get_id(model_definition_details) instead.

Parameters:

model_definition_details (dict) – stored model definition details

Returns:

uid of stored model definition

Return type:

str

Example

model_definition_uid = client.model_definitions.get_uid(model_definition_details)
list(limit=None, return_as_df=True)[source]#

Print stored model definition assets in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed model definitions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.model_definitions.list()
list_revisions(model_definition_uid, limit=None)[source]#

Print revisions of the given model definition in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • model_definition_uid (str) – Unique id of model definition

  • limit (int, optional) – limit number of fetched records

Example

client.model_definitions.list_revisions(model_definition_uid)
store(model_definition, meta_props)[source]#

Create a model definition.

Parameters:
  • meta_props (dict) –

    meta data of the model definition configuration, to see available meta names use:

    client.model_definitions.ConfigurationMetaNames.get()
    

  • model_definition (str) – path to the content file to be uploaded

Returns:

metadata of the model definition created

Return type:

dict

Example

client.model_definitions.store(model_definition, meta_props)
update(model_definition_id, meta_props=None, file_path=None)[source]#

Update model definition with either metadata or attachment or both.

Parameters:
  • model_definition_id (str) – model definition ID

  • meta_props (dict, optional) – meta data of the model definition configuration to be updated

  • file_path (str, optional) – path to the content file to be uploaded

Returns:

updated metadata of model definition

Return type:

dict

Example

model_definition_details = client.model_definitions.update(model_definition_id, meta_props, file_path)
class metanames.ModelDefinitionMetaNames[source]#

Set of MetaNames for Model Definition.

Available MetaNames:

MetaName

Type

Required

Schema

Example value

NAME

str

Y

my_model_definition

DESCRIPTION

str

N

my model_definition

PLATFORM

dict

Y

{'name(required)': 'string', 'versions(required)': ['versions']}

{'name': 'python', 'versions': ['3.10']}

VERSION

str

Y

1.0

COMMAND

str

N

python3 convolutional_network.py

CUSTOM

dict

N

{'field1': 'value1'}

SPACE_UID

str

N

3c1ce536-20dc-426e-aac7-7284cf3befc6

Package extensions#

class client.PkgExtn(client)[source]#

Store and manage software Packages Extension specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.PkgExtnMetaNames object>#

MetaNames for Package Extensions creation.

delete(pkg_extn_id)[source]#

Delete a package extension.

Parameters:

pkg_extn_id (str) – Unique Id of package extension

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.package_extensions.delete(pkg_extn_id)
download(pkg_extn_id, filename)[source]#

Download a package extension.

Parameters:
  • pkg_extn_id (str) – Unique Id of the package extension to be downloaded

  • filename (str) – filename to be used for the downloaded file

Returns:

path to the downloaded package extension content

Return type:

str

Example

client.package_extensions.download(pkg_extn_id,"sample_conda.yml/custom_library.zip")
get_details(pkg_extn_id)[source]#

Get package extensions details.

Parameters:

pkg_extn_id (str) – Unique Id of package extension

Returns:

details of the package extensions

Return type:

dict

Example

pkg_extn_details = client.package_extensions.get_details(pkg_extn_id)
static get_href(pkg_extn_details)[source]#

Get url of stored package extensions.

Parameters:

pkg_extn_details (dict) – details of the package extensions

Returns:

href of package extension

Return type:

str

Example

pkg_extn_details = client.package_extensions.get_details(pkg_extn_uid)
pkg_extn_href = client.package_extensions.get_href(pkg_extn_details)
static get_id(pkg_extn_details)[source]#

Get Unique Id of package extensions.

Parameters:

pkg_extn_details (dict) – details of the package extensions

Returns:

Unique Id of package extension

Return type:

str

Example

asset_id = client.package_extensions.get_id(pkg_extn_details)
get_id_by_name(pkg_extn_name)[source]#

Get ID of package extensions.

Parameters:

pkg_extn_name (str) – name of the package extension

Returns:

Unique Id of package extension

Return type:

str

Example

asset_id = client.package_extensions.get_id_by_name(pkg_extn_name)
static get_uid(pkg_extn_details)[source]#

Get Unique Id of package extensions.

Deprecated: Use get_id(pkg_extn_details) instead.

Parameters:

pkg_extn_details (dict) – details of the package extensions

Returns:

Unique Id of package extension

Return type:

str

Example

asset_uid = client.package_extensions.get_uid(pkg_extn_details)
get_uid_by_name(pkg_extn_name)[source]#

Get UID of package extensions.

Deprecated: Use get_id_by_name(pkg_extn_name) instead.

Parameters:

pkg_extn_name (str) – name of the package extension

Returns:

Unique Id of package extension

Return type:

str

Example

asset_uid = client.package_extensions.get_uid_by_name(pkg_extn_name)
list(return_as_df=True)[source]#

List package extensions in a table format.

Parameters:

return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed package extensions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.package_extensions.list()
store(meta_props, file_path)[source]#

Create a package extension.

Parameters:
  • meta_props (dict) –

    meta data of the package extension. To see available meta names use:

    client.package_extensions.ConfigurationMetaNames.get()
    

  • file_path (str) – path to file which will be uploaded as package extension

Returns:

metadata of the package extension

Return type:

dict

Example

meta_props = {
    client.package_extensions.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}

pkg_extn_details = client.package_extensions.store(meta_props=meta_props, file_path="/path/to/file")
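For the conda_yml type, the file uploaded at file_path is an ordinary conda environment definition. A minimal sketch follows; the environment name and pinned package are illustrative, not requirements of the SDK:

```python
import tempfile

# A minimal conda environment file to upload as a package extension.
# The pinned package version is illustrative; list whatever your model needs.
conda_yml = """\
name: custom_env
channels:
  - conda-forge
dependencies:
  - pip
  - pip:
    - scikit-learn==1.1.3
"""

with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
    f.write(conda_yml)
    file_path = f.name

# file_path can then be passed to client.package_extensions.store(...)
```

The stored extension is later attached to a custom software specification so deployments pick up the extra packages.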
class metanames.PkgExtnMetaNames[source]#

Set of MetaNames for Package Extensions.

Available MetaNames:

MetaName

Type

Required

Example value

NAME

str

Y

Python 3.10 with pre-installed ML package

DESCRIPTION

str

N

my_description

TYPE

str

Y

conda_yml/custom_library

Repository#

class client.Repository(client)[source]#

Store and manage models, functions, spaces, pipelines and experiments using Watson Machine Learning Repository.

To view ModelMetaNames, use:

client.repository.ModelMetaNames.show()

To view ExperimentMetaNames, use:

client.repository.ExperimentMetaNames.show()

To view FunctionMetaNames, use:

client.repository.FunctionMetaNames.show()

To view PipelineMetaNames, use:

client.repository.PipelineMetaNames.show()
create_experiment_revision(experiment_uid)[source]#

Create a new experiment revision.

Parameters:

experiment_uid (str) – Unique Id of the stored experiment

Returns:

stored experiment new revision details

Return type:

dict

Example

experiment_revision_artifact = client.repository.create_experiment_revision(experiment_uid)
create_function_revision(function_uid)[source]#

Create a new function revision.

Parameters:

function_uid (str) – Unique function ID

Returns:

stored function revision metadata

Return type:

dict

Example

client.repository.create_function_revision(function_uid)
create_member(space_uid, meta_props)[source]#

Create a member within a space.

Parameters:
  • space_uid (str) – UID of space

  • meta_props (dict) –

    metadata of the member configuration. To see available meta names use:

    client.spaces.MemberMetaNames.get()
    

Returns:

metadata of the stored member

Return type:

dict

Note

  • client.spaces.MemberMetaNames.ROLE can be any one of the following “viewer”, “editor”, “admin”

  • client.spaces.MemberMetaNames.IDENTITY_TYPE can be any one of the following “user”, “service”

  • client.spaces.MemberMetaNames.IDENTITY can be either service-ID or IAM-userID

Example

metadata = {
    client.spaces.MemberMetaNames.ROLE:"admin",
    client.spaces.MemberMetaNames.IDENTITY:"iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71",
    client.spaces.MemberMetaNames.IDENTITY_TYPE:"service"
}
members_details = client.repository.create_member(space_uid=space_id, meta_props=metadata)
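The allowed values listed in the note above can be captured in a small validation sketch. The plain-string keys stand in for the MemberMetaNames constants, and the helper is illustrative rather than part of the SDK:

```python
# Allowed values from the note above.
VALID_ROLES = {"viewer", "editor", "admin"}
VALID_IDENTITY_TYPES = {"user", "service"}

def validate_member_meta(meta):
    """Illustrative check of the member metadata constraints described above.

    Keys are stand-ins for client.spaces.MemberMetaNames constants.
    """
    if meta.get("ROLE", "").lower() not in VALID_ROLES:
        raise ValueError(f"ROLE must be one of {sorted(VALID_ROLES)}")
    if meta.get("IDENTITY_TYPE") not in VALID_IDENTITY_TYPES:
        raise ValueError(
            f"IDENTITY_TYPE must be one of {sorted(VALID_IDENTITY_TYPES)}")
    return True
```

A check like this catches a misspelled role locally before the create_member call reaches the service.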
create_model_revision(model_uid)[source]#

Create revision for the given model uid.

Parameters:

model_uid (str) – stored model UID

Returns:

stored model revisions metadata

Return type:

dict

Example

model_details = client.repository.create_model_revision(model_uid)
create_pipeline_revision(pipeline_uid)[source]#

Create a new pipeline revision.

Parameters:

pipeline_uid (str) – Unique pipeline ID

Returns:

pipeline revision details

Return type:

dict

Example

client.repository.create_pipeline_revision(pipeline_uid)
create_revision(artifact_uid)[source]#

Create revision for passed artifact_uid.

Parameters:

artifact_uid (str) – Unique id of stored model, experiment, function or pipelines

Returns:

artifact new revision metadata

Return type:

dict

Example

details = client.repository.create_revision(artifact_uid)
delete(artifact_uid)[source]#

Delete model, experiment, pipeline, space, runtime, library or function from repository.

Parameters:

artifact_uid (str) – Unique id of stored model, experiment, function, pipeline, space, library or runtime

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.repository.delete(artifact_uid)
download(artifact_uid, filename='downloaded_artifact.tar.gz', rev_uid=None, format=None)[source]#

Download the configuration file for the artifact with the specified uid.

Parameters:
  • artifact_uid (str) – Unique Id of model, function, runtime or library

  • filename (str, optional) – name of the file to which the artifact content has to be downloaded

Returns:

path to the downloaded artifact content

Return type:

str

Examples

client.repository.download(model_uid, 'my_model.tar.gz')
client.repository.download(model_uid, 'my_model.json') # if original model was saved as json, works only for xgboost 1.3
get_details(artifact_uid=None, spec_state=None)[source]#

Get metadata of stored artifacts. If artifact_uid is not specified, metadata for all models, experiments, functions, pipelines, spaces, libraries, and runtimes is returned.

Parameters:
  • artifact_uid (str, optional) – Unique Id of stored model, experiment, function, pipeline, space, library or runtime

  • spec_state (SpecStates, optional) – software specification state, can be used only when artifact_uid is None

Returns:

stored artifact(s) metadata

Return type:

dict (if artifact_uid is not None) or {“resources”: [dict]} (if artifact_uid is None)

Examples

details = client.repository.get_details(artifact_uid)
details = client.repository.get_details()

Example of getting all repository assets with deprecated software specifications:

from ibm_watson_machine_learning.lifecycle import SpecStates

details = client.repository.get_details(spec_state=SpecStates.DEPRECATED)
get_experiment_details(experiment_uid=None, limit=None, asynchronous=False, get_all=False)[source]#

Get metadata of experiment(s). If no experiment UID is specified, metadata for all experiments is returned.

Parameters:
  • experiment_uid (str, optional) – UID of experiment

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

Returns:

experiment(s) metadata

Return type:

dict (if UID is not None) or {“resources”: [dict]} (if UID is None)

Example

experiment_details = client.repository.get_experiment_details(experiment_uid)
experiment_details = client.repository.get_experiment_details()
experiment_details = client.repository.get_experiment_details(limit=100)
experiment_details = client.repository.get_experiment_details(limit=100, get_all=True)
experiment_details = []
for entry in client.repository.get_experiment_details(limit=100, asynchronous=True, get_all=True):
    experiment_details.extend(entry)
static get_experiment_href(experiment_details)[source]#

Get href of stored experiment.

Parameters:

experiment_details (dict) – metadata of the stored experiment

Returns:

href of stored experiment

Return type:

str

Example

experiment_details = client.repository.get_experiment_details(experiment_uid)
experiment_href = client.repository.get_experiment_href(experiment_details)
static get_experiment_id(experiment_details)[source]#

Get Unique Id of stored experiment.

Parameters:

experiment_details (dict) – metadata of the stored experiment

Returns:

Unique Id of stored experiment

Return type:

str

Example

experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_uid = client.repository.get_experiment_id(experiment_details)
get_experiment_revision_details(experiment_uid, rev_id)[source]#

Get metadata of a stored experiment revision.

Parameters:
  • experiment_uid (str) – stored experiment UID

  • rev_id (str) – revision ID of the experiment

Returns:

stored experiment revision metadata

Return type:

dict

Example:

experiment_details = client.repository.get_experiment_revision_details(experiment_uid, rev_id)
static get_experiment_uid(experiment_details)[source]#

Get Unique Id of stored experiment.

Parameters:

experiment_details (dict) – metadata of the stored experiment

Returns:

Unique Id of stored experiment

Return type:

str

Example

experiment_details = client.repository.get_experiment_details(experiment_uid)
experiment_uid = client.repository.get_experiment_uid(experiment_details)
get_function_details(function_uid=None, limit=None, asynchronous=False, get_all=False, spec_state=None)[source]#

Get metadata of function(s). If no function UID is specified, metadata of all functions is returned.

Parameters:
  • function_uid (str, optional) – UID of function

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

  • spec_state (SpecStates, optional) – software specification state, can be used only when function_uid is None

Returns:

function(s) metadata

Return type:

dict (if UID is not None) or {“resources”: [dict]} (if UID is None)

Note

In the current implementation, setting spec_state may break the set limit and return fewer records than requested.

Examples

function_details = client.repository.get_function_details(function_uid)
function_details = client.repository.get_function_details()
function_details = client.repository.get_function_details(limit=100)
function_details = client.repository.get_function_details(limit=100, get_all=True)
function_details = []
for entry in client.repository.get_function_details(limit=100, asynchronous=True, get_all=True):
    function_details.extend(entry)
static get_function_href(function_details)[source]#

Get url of stored function.

Parameters:

function_details (dict) – stored function details

Returns:

href of stored function

Return type:

str

Example

function_details = client.repository.get_function_details(function_uid)
function_url = client.repository.get_function_href(function_details)
static get_function_id(function_details)[source]#

Get ID of stored function.

Parameters:

function_details (dict) – metadata of the stored function

Returns:

ID of stored function

Return type:

str

Example

function_details = client.repository.get_function_details(function_uid)
function_id = client.repository.get_function_id(function_details)
get_function_revision_details(function_uid, rev_id)[source]#

Get metadata of a specific revision of a stored function.

Parameters:
  • function_uid (str) – UID of the stored function

  • rev_id (str) – Unique id of the function revision

Returns:

stored function revision metadata

Return type:

dict

Example

function_revision_details = client.repository.get_function_revision_details(function_uid, rev_id)
static get_function_uid(function_details)[source]#

Get UID of stored function.

Deprecated: Use get_id(function_details) instead.

Parameters:

function_details (dict) – metadata of the stored function

Returns:

UID of stored function

Return type:

str

Example

function_details = client.repository.get_function_details(function_uid)
function_uid = client.repository.get_function_uid(function_details)
static get_member_href(member_details)[source]#

Get member href from member details.

Parameters:

member_details (dict) – metadata of the stored member

Returns:

member href

Return type:

str

Example

member_details = client.repository.get_member_details(member_id)
member_href = client.repository.get_member_href(member_details)
static get_member_uid(member_details)[source]#

Get member uid from member details.

Parameters:

member_details (dict) – metadata of the created member

Returns:

member UID

Return type:

str

Example

member_details = client.repository.get_member_details(member_id)
member_id = client.repository.get_member_uid(member_details)
get_members_details(space_uid, member_id=None, limit=None)[source]#

Get metadata of members associated with a space. If member UID is not specified, it returns all the members metadata.

Parameters:
  • space_uid (str) – space UID

  • member_id (str, optional) – member UID

  • limit (int, optional) – limit number of fetched records

Returns:

metadata of member(s) of a space

Return type:

dict (if UID is not None) or {“resources”: [dict]} (if UID is None)

Example

member_details = client.repository.get_members_details(space_uid,member_id)
get_model_details(model_uid=None, limit=None, asynchronous=False, get_all=False, spec_state=None)[source]#

Get metadata of stored models. If model UID is not specified, metadata of all models is returned.

Parameters:
  • model_uid (str, optional) – stored model, definition or pipeline UID

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

  • spec_state (SpecStates, optional) – software specification state, can be used only when model_uid is None

Returns:

stored model(s) metadata

Return type:

dict (if UID is not None) or {“resources”: [dict]} (if UID is None)

Note

In the current implementation, setting spec_state may break the set limit and return fewer records than requested.

Example

model_details = client.repository.get_model_details(model_uid)
models_details = client.repository.get_model_details()
models_details = client.repository.get_model_details(limit=100)
models_details = client.repository.get_model_details(limit=100, get_all=True)
models_details = []
for entry in client.repository.get_model_details(limit=100, asynchronous=True, get_all=True):
    models_details.extend(entry)
static get_model_href(model_details)[source]#

Get url of stored model.

Parameters:

model_details (dict) – stored model details

Returns:

url to stored model

Return type:

str

Example

model_url = client.repository.get_model_href(model_details)
static get_model_id(model_details)[source]#

Get id of stored model.

Parameters:

model_details (dict) – stored model details

Returns:

uid of stored model

Return type:

str

Example

model_id = client.repository.get_model_id(model_details)
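The static `get_*_id` / `get_*_uid` helpers read the identifier out of the details dictionary returned by the corresponding `get_*_details` call. A minimal local sketch, assuming a v4-style response layout (the field values are hypothetical and the actual layout may differ between releases):

```python
# Hypothetical details dict shaped like a v4-style repository response.
model_details = {
    "metadata": {"id": "a1b2c3d4", "name": "my_model"},
    "entity": {"type": "scikit-learn_0.23"},
}

def extract_model_id(details):
    # Conceptually what the helper does: read the id from the metadata section.
    return details["metadata"]["id"]

print(extract_model_id(model_details))  # a1b2c3d4
```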
get_model_revision_details(model_uid, rev_uid)[source]#

Get metadata of a specific revision of a stored model.

Parameters:
  • model_uid (str) – stored model, definition or pipeline UID

  • rev_uid (str) – Unique Id of the stored model revision

Returns:

stored model(s) metadata

Return type:

dict

Example

model_details = client.repository.get_model_revision_details(model_uid, rev_uid)
static get_model_uid(model_details)[source]#

This method is deprecated, please use get_model_id() instead.

get_pipeline_details(pipeline_uid=None, limit=None, asynchronous=False, get_all=False)[source]#

Get metadata of stored pipeline(s). If pipeline UID is not specified, metadata of all pipelines is returned.

Parameters:
  • pipeline_uid (str, optional) – Pipeline UID

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

Returns:

metadata of pipeline(s)

Return type:

dict (if UID is not None) or {“resources”: [dict]} (if UID is None)

Example

pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
pipeline_details = client.repository.get_pipeline_details()
pipeline_details = client.repository.get_pipeline_details(limit=100)
pipeline_details = client.repository.get_pipeline_details(limit=100, get_all=True)
pipeline_details = []
for entry in client.repository.get_pipeline_details(limit=100, asynchronous=True, get_all=True):
    pipeline_details.extend(entry)
static get_pipeline_href(pipeline_details)[source]#

Get href from pipeline details.

Parameters:

pipeline_details (dict) – metadata of the stored pipeline

Returns:

pipeline href

Return type:

str

Example

pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
pipeline_href = client.repository.get_pipeline_href(pipeline_details)
static get_pipeline_id(pipeline_details)[source]#

Get pipeline id from pipeline details.

Parameters:

pipeline_details (dict) – metadata of the stored pipeline

Returns:

Unique Id of pipeline

Return type:

str

Example

pipeline_uid = client.repository.get_pipeline_id(pipeline_details)
get_pipeline_revision_details(pipeline_uid, rev_id)[source]#

Get metadata of pipeline revision.

Parameters:
  • pipeline_uid (str) – stored pipeline UID

  • rev_id (str) – stored pipeline revision ID

Returns:

stored pipeline revision metadata

Return type:

dict

Example:

pipeline_details = client.repository.get_pipeline_revision_details(pipeline_uid, rev_id)

Note

The rev_id parameter is not applicable on the Cloud platform.

static get_pipeline_uid(pipeline_details)[source]#

Get pipeline_uid from pipeline details.

Parameters:

pipeline_details (dict) – metadata of the stored pipeline

Returns:

Unique Id of pipeline

Return type:

str

Example

pipeline_uid = client.repository.get_pipeline_uid(pipeline_details)
get_space_details(space_uid=None, limit=None)[source]#

Get metadata of stored space(s). If space UID is not specified, it returns all the spaces metadata.

Parameters:
  • space_uid (str, optional) – Space UID

  • limit (int, optional) – limit number of fetched records

Returns:

metadata of stored space(s)

Return type:

dict (if UID is not None) or {“resources”: [dict]} (if UID is None)

Example

space_details = client.repository.get_space_details(space_uid)
space_details = client.repository.get_space_details()
static get_space_href(space_details)[source]#

Get space href from space details.

Parameters:

space_details (dict) – metadata of the stored space

Returns:

space href

Return type:

str

Example

space_details = client.repository.get_space_details(space_uid)
space_href = client.repository.get_space_href(space_details)
static get_space_uid(space_details)[source]#

Get space uid from space details.

Parameters:

space_details (dict) – metadata of the stored space

Returns:

space UID

Return type:

str

Example

space_details = client.repository.get_space_details(space_uid)
space_uid = client.repository.get_space_uid(space_details)
list(framework_filter=None, return_as_df=True)[source]#

Print/get stored models, pipelines, runtimes, libraries, functions, spaces and experiments in a table/DataFrame format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • framework_filter (str, optional) – Get only frameworks with desired names

  • return_as_df (bool, optional) – Determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

DataFrame with listed names and ids of stored models or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list()
client.repository.list(return_as_df=False)
client.repository.list(framework_filter='prompt_tune')
client.repository.list(framework_filter='prompt_tune', return_as_df=False)
list_experiments(limit=None, return_as_df=True)[source]#

Print stored experiments in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed experiments or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_experiments()
list_experiments_revisions(experiment_uid, limit=None, return_as_df=True)[source]#

Print all revisions for the given experiment UID in a table format.

Parameters:
  • experiment_uid (str) – Unique id of stored experiment

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed revisions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_experiments_revisions(experiment_uid)
list_functions(limit=None, return_as_df=True)[source]#

Print stored functions in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed functions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_functions()
list_functions_revisions(function_uid, limit=None, return_as_df=True)[source]#

Print all revisions for the given function UID in a table format.

Parameters:
  • function_uid (str) – Unique id of stored function

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed revisions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_functions_revisions(function_uid)
list_members(space_uid, limit=None, return_as_df=True)[source]#

Print stored members of a space in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • space_uid (str) – UID of space

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed members or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_members(space_uid)
list_models(limit=None, asynchronous=False, get_all=False, return_as_df=True)[source]#

Print stored models in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed models or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_models()
client.repository.list_models(limit=100)
client.repository.list_models(limit=100, get_all=True)
[entry for entry in client.repository.list_models(limit=100, asynchronous=True, get_all=True)]
list_models_revisions(model_uid, limit=None, return_as_df=True)[source]#

Print all revisions for the given model UID in a table format.

Parameters:
  • model_uid (str) – Unique id of stored model

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed revisions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_models_revisions(model_uid)
list_pipelines(limit=None, return_as_df=True)[source]#

Print stored pipelines in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed pipelines or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_pipelines()
list_pipelines_revisions(pipeline_uid, limit=None, return_as_df=True)[source]#

Print all revisions for the given pipeline UID in a table format.

Parameters:
  • pipeline_uid (str) – Unique id of stored pipeline

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed revisions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_pipelines_revisions(pipeline_uid)
list_spaces(limit=None, return_as_df=True)[source]#

Print stored spaces in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed spaces or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.repository.list_spaces()
load(artifact_uid)[source]#

Load a model from the repository into an object in the local environment.

Parameters:

artifact_uid (str) – stored model UID

Returns:

trained model

Return type:

object

Example

model = client.models.load(model_uid)
promote_model(model_id, source_project_id, target_space_id)[source]#

Promote model from project to space. Supported only for IBM Cloud Pak® for Data.

Deprecated: Use client.spaces.promote(asset_id, source_project_id, target_space_id) instead.

store_experiment(meta_props)[source]#

Create an experiment.

Parameters:

meta_props (dict) –

meta data of the experiment configuration. To see available meta names use:

client.repository.ExperimentMetaNames.get()

Returns:

stored experiment metadata

Return type:

dict

Example

metadata = {
    client.repository.ExperimentMetaNames.NAME: 'my_experiment',
    client.repository.ExperimentMetaNames.EVALUATION_METRICS: ['accuracy'],
    client.repository.ExperimentMetaNames.TRAINING_REFERENCES: [
        {'pipeline': {'href': pipeline_href_1}},
        {'pipeline': {'href':pipeline_href_2}}
    ]
}
experiment_details = client.repository.store_experiment(meta_props=metadata)
experiment_href = client.repository.get_experiment_href(experiment_details)
store_function(function, meta_props)[source]#

Create a function.

As the function argument, one of the following may be used:
  • filepath to a gz file

  • a ‘score’ function reference, i.e. the function that will be deployed

  • a generator function, which takes no arguments (or only arguments with primitive Python default values) and returns a ‘score’ function

Parameters:
  • function (str or function) – path to file with archived function content or function (as described above)

  • meta_props (str or dict) – meta data or name of the function, to see available meta names use client.repository.FunctionMetaNames.show()

Returns:

stored function metadata

Return type:

dict

Examples

The simplest use, with a score function:

meta_props = {
    client.repository.FunctionMetaNames.NAME: "function",
    client.repository.FunctionMetaNames.DESCRIPTION: "This is an AI function",
    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: "53dc4cf1-252f-424b-b52d-5cdd9814987f"}

def score(payload):
    values = [[row[0]*row[1]] for row in payload['values']]
    return {'fields': ['multiplication'], 'values': values}

stored_function_details = client.repository.store_function(score, meta_props)

A more advanced example uses a generator function; in this case it is possible to pass in variables:

wml_creds = {...}

def gen_function(wml_credentials=wml_creds, x=2):
    def f(payload):
        values = [[row[0]*row[1]*x] for row in payload['values']]
        return {'fields': ['multiplication'], 'values': values}
    return f

stored_function_details = client.repository.store_function(gen_function, meta_props)
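Either variant can be exercised locally before storing, using the payload shape from the examples above. This is a plain-Python sketch, independent of the client:

```python
def score(payload):
    values = [[row[0] * row[1]] for row in payload['values']]
    return {'fields': ['multiplication'], 'values': values}

def gen_function(x=2):
    # Variables captured via default arguments are baked into the returned scorer.
    def f(payload):
        values = [[row[0] * row[1] * x] for row in payload['values']]
        return {'fields': ['multiplication'], 'values': values}
    return f

payload = {'fields': ['a', 'b'], 'values': [[2, 3], [4, 5]]}
print(score(payload))               # {'fields': ['multiplication'], 'values': [[6], [20]]}
print(gen_function(x=10)(payload))  # {'fields': ['multiplication'], 'values': [[60], [200]]}
```

Testing the scorer locally against the expected payload format catches shape errors before the function is deployed.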
store_model(model=None, meta_props=None, training_data=None, training_target=None, pipeline=None, feature_names=None, label_column_names=None, subtrainingId=None, round_number=None, experiment_metadata=None, training_id=None)[source]#

Create a model.

Parameters:
  • model (str (for filename or path) or object (corresponding to model type)) –

    Can be one of following:

    • The trained model object:
      • scikit-learn

      • xgboost

      • spark (PipelineModel)

    • path to saved model in format:

      • keras (.tgz)

      • pmml (.xml)

      • scikit-learn (.tar.gz)

      • tensorflow (.tar.gz)

      • spss (.str)

      • spark (.tar.gz)

    • directory containing model file(s):

      • scikit-learn

      • xgboost

      • tensorflow

    • unique id of trained model

  • meta_props (dict, optional) –

    meta data of the models configuration. To see available meta names use:

    client.repository.ModelMetaNames.get()
    

  • training_data (spark dataframe, pandas dataframe, numpy.ndarray or array, optional) – Spark DataFrame supported for spark models. Pandas dataframe, numpy.ndarray or array supported for scikit-learn models

  • training_target (array, optional) – array with labels required for scikit-learn models

  • pipeline (object, optional) – pipeline required for spark mllib models

  • feature_names (numpy.ndarray or list, optional) – feature names for the training data of Scikit-Learn/XGBoost models; applicable only when the training data is not of type pandas.DataFrame

  • label_column_names (numpy.ndarray or list, optional) – label column names of the trained Scikit-Learn/XGBoost models

  • round_number (int, optional) – round number of a Federated Learning experiment that has been configured to save intermediate models, this applies when model is a training id

  • experiment_metadata (dict, optional) – metadata retrieved from the experiment that created the model

  • training_id (str, optional) – Run id of AutoAI or TuneExperiment experiment.

Returns:

metadata of the model created

Return type:

dict

Note

  • For a keras model, model content is expected to contain a .h5 file and an archived version of it.

  • feature_names is an optional argument containing the feature names for the training data of Scikit-Learn/XGBoost models. Valid types are numpy.ndarray and list. It applies only when the training data is not of type pandas.DataFrame.

  • If the training_data is of type pandas.DataFrame and feature_names are provided, feature_names are ignored.

  • For INPUT_DATA_SCHEMA meta prop use list even when passing single input data schema. You can provide multiple schemas as dictionaries inside a list.

Examples

stored_model_details = client.repository.store_model(model, name)

In more complicated cases, create proper metadata similar to the following:

sw_spec_id = client.software_specifications.get_id_by_name('scikit-learn_0.23-py3.7')

metadata = {
    client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23'
}

If you want to provide the input data schema of the model, include it as part of the metadata:

sw_spec_id = client.software_specifications.get_id_by_name('spss-modeler_18.1')

metadata = {
    client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'spss-modeler_18.1',
    client.repository.ModelMetaNames.INPUT_DATA_SCHEMA: [{'id': 'test',
                                                          'type': 'list',
                                                          'fields': [{'name': 'age', 'type': 'float'},
                                                                     {'name': 'sex', 'type': 'float'},
                                                                     {'name': 'fbs', 'type': 'float'},
                                                                     {'name': 'restbp', 'type': 'float'}]
                                                          },
                                                          {'id': 'test2',
                                                           'type': 'list',
                                                           'fields': [{'name': 'age', 'type': 'float'},
                                                                      {'name': 'sex', 'type': 'float'},
                                                                      {'name': 'fbs', 'type': 'float'},
                                                                      {'name': 'restbp', 'type': 'float'}]
    }]
}
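As the note above says, INPUT_DATA_SCHEMA must be a list even when only a single schema is passed. A hypothetical helper (not part of the client API) that checks this shape locally:

```python
def check_input_data_schema(schemas):
    # INPUT_DATA_SCHEMA must be a list of schema dicts, each with 'id' and 'fields'.
    if not isinstance(schemas, list):
        raise TypeError("INPUT_DATA_SCHEMA must be a list, even for a single schema")
    for schema in schemas:
        missing = {'id', 'fields'} - schema.keys()
        if missing:
            raise ValueError(f"schema is missing required keys: {missing}")
    return True

schema = {'id': 'test', 'type': 'list',
          'fields': [{'name': 'age', 'type': 'float'}]}

check_input_data_schema([schema])  # OK: a single schema still goes inside a list
```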

store_model() method used with a local tar.gz file that contains a model:

stored_model_details = client.repository.store_model(path_to_tar_gz, meta_props=metadata, training_data=None)

store_model() method used with a local directory that contains model files:

stored_model_details = client.repository.store_model(path_to_model_directory, meta_props=metadata, training_data=None)

store_model() method used with the GUID of a trained model:

stored_model_details = client.repository.store_model(trained_model_guid, meta_props=metadata, training_data=None)

store_model() method used with a pipeline that was generated by an AutoAI experiment:

metadata = {
    client.repository.ModelMetaNames.NAME: 'AutoAI prediction model stored from object'
}
stored_model_details = client.repository.store_model(pipeline_model, meta_props=metadata, experiment_metadata=experiment_metadata)
metadata = {
    client.repository.ModelMetaNames.NAME: 'AutoAI prediction Pipeline_1 model'
}
stored_model_details = client.repository.store_model(model="Pipeline_1", meta_props=metadata, training_id = training_id)

Example of storing a prompt-tuned model:

stored_model_details = client.repository.store_model(training_id=prompt_tuning_run_id)

store_pipeline(meta_props)[source]#

Create a pipeline.

Parameters:

meta_props (dict) –

meta data of the pipeline configuration. To see available meta names use:

client.repository.PipelineMetaNames.get()

Returns:

stored pipeline metadata

Return type:

dict

Example

metadata = {
    client.repository.PipelineMetaNames.NAME: 'my_training_definition',
    client.repository.PipelineMetaNames.DOCUMENT: {"doc_type":"pipeline",
                                                       "version": "2.0",
                                                       "primary_pipeline": "dlaas_only",
                                                       "pipelines": [{"id": "dlaas_only",
                                                                      "runtime_ref": "hybrid",
                                                                      "nodes": [{"id": "training",
                                                                                 "type": "model_node",
                                                                                 "op": "dl_train",
                                                                                 "runtime_ref": "DL",
                                                                                 "inputs": [],
                                                                                 "outputs": [],
                                                                                 "parameters": {"name": "tf-mnist",
                                                                                                "description": "Simple MNIST model implemented in TF",
                                                                                                "command": "python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000",
                                                                                                "compute": {"name": "k80","nodes": 1},
                                                                                                "training_lib_href": "/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content"
                                                                                 },
                                                                                 "target_bucket": "wml-dev-results"
                                                                      }]
                                                       }]
    }
}
pipeline_details = client.repository.store_pipeline(training_definition_filepath, meta_props=metadata)
store_space(meta_props)[source]#

Create a space.

Parameters:

meta_props (dict) –

meta data of the space configuration. To see available meta names use:

client.spaces.ConfigurationMetaNames.get()

Returns:

metadata of the stored space

Return type:

dict

Example

metadata = {
    client.spaces.ConfigurationMetaNames.NAME: 'my_space',
    client.spaces.ConfigurationMetaNames.DESCRIPTION: 'spaces',
}
spaces_details = client.repository.store_space(meta_props=metadata)
update_experiment(experiment_uid, changes)[source]#

Updates existing experiment metadata.

Parameters:
  • experiment_uid (str) – UID of the experiment whose definition should be updated

  • changes (dict) – elements which should be changed, where keys are ExperimentMetaNames

Returns:

metadata of updated experiment

Return type:

dict

Example

metadata = {
    client.repository.ExperimentMetaNames.NAME: "updated_exp"
}
exp_details = client.repository.update_experiment(experiment_uid, changes=metadata)
update_function(function_uid, changes, update_function=None)[source]#

Updates existing function metadata.

Parameters:
  • function_uid (str) – UID of the function to be updated

  • changes (dict) – elements which should be changed, where keys are FunctionMetaNames

  • update_function (str or function, optional) – path to file with archived function content or function which should be changed for specific function_uid, this parameter is valid only for CP4D 3.0.0

Example

metadata = {
    client.repository.FunctionMetaNames.NAME: "updated_function"
}

function_details = client.repository.update_function(function_uid, changes=metadata)
update_model(model_uid, updated_meta_props=None, update_model=None)[source]#

Update existing model.

Parameters:
  • model_uid (str) – UID of the model to be updated

  • updated_meta_props (dict, optional) – new set of meta props that need to be updated

  • update_model (object or model, optional) – archived model content file, or path to a directory containing the archived model file, that should replace the content of the given model_uid

Returns:

updated metadata of model

Return type:

dict

Example

model_details = client.repository.update_model(model_uid, update_model=updated_content)
update_pipeline(pipeline_uid, changes)[source]#

Updates existing pipeline metadata.

Parameters:
  • pipeline_uid (str) – Unique Id of the pipeline whose definition should be updated

  • changes (dict) – elements which should be changed, where keys are PipelineMetaNames

Returns:

metadata of updated pipeline

Return type:

dict

Example

metadata = {
    client.repository.PipelineMetaNames.NAME: "updated_pipeline"
}
pipeline_details = client.repository.update_pipeline(pipeline_uid, changes=metadata)
update_space(space_uid, changes)[source]#

Updates existing space metadata.

Parameters:
  • space_uid (str) – UID of the space whose definition should be updated

  • changes (dict) – elements which should be changed, where keys are ConfigurationMetaNames

Returns:

metadata of updated space

Return type:

dict

Example

metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "updated_space"
}
space_details = client.repository.update_space(space_uid, changes=metadata)
class metanames.ModelMetaNames[source]#

Set of MetaNames for models.

Available MetaNames:

  • NAME (str, required) – example value: my_model

  • DESCRIPTION (str, optional) – example value: my_description

  • INPUT_DATA_SCHEMA (list, optional) – schema: {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}; example value: {'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}

  • TRAINING_DATA_REFERENCES (list, optional) – schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]; example value: []

  • TEST_DATA_REFERENCES (list, optional) – schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]; example value: []

  • OUTPUT_DATA_SCHEMA (dict, optional) – schema: {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}; example value: {'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}

  • LABEL_FIELD (str, optional) – example value: PRODUCT_LINE

  • TRANSFORMED_LABEL_FIELD (str, optional) – example value: PRODUCT_LINE_IX

  • TAGS (list, optional) – schema: ['string', 'string']; example value: ['string', 'string']

  • SIZE (dict, optional) – schema: {'in_memory(optional)': 'string', 'content(optional)': 'string'}; example value: {'in_memory': 0, 'content': 0}

  • PIPELINE_UID (str, optional) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • RUNTIME_UID (str, optional) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • TYPE (str, required) – example value: mllib_2.1

  • CUSTOM (dict, optional) – example value: {}

  • DOMAIN (str, optional) – example value: Watson Machine Learning

  • HYPER_PARAMETERS (dict, optional)

  • METRICS (list, optional)

  • IMPORT (dict, optional) – schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}; example value: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}

  • TRAINING_LIB_UID (str, optional) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • MODEL_DEFINITION_UID (str, optional) – example value: 53628d6_cdee13-35d3-s8989343

  • SOFTWARE_SPEC_UID (str, optional) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • TF_MODEL_PARAMS (dict, optional) – example value: {'save_format': 'None', 'signatures': 'struct', 'options': 'None', 'custom_objects': 'string'}

  • FAIRNESS_INFO (dict, optional) – example value: {'favorable_labels': ['X']}

Note: project (MetaNames.PROJECT_UID) and space (MetaNames.SPACE_UID) meta names are not supported and considered as invalid. Instead use client.set.default_space(<SPACE_GUID>) to set the space or client.set.default_project(<PROJECT_GUID>).
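As a sketch, the meta names above can be assembled into a meta_props dictionary for client.repository.store_model. In real code the keys come from the ModelMetaNames constants and the call needs a live client; the plain string keys, the model file name, and the commented calls below are illustrative assumptions only.

```python
# Minimal sketch (not a verified recipe): assembling meta_props for store_model.
# The plain string keys are assumed stand-ins for client.repository.ModelMetaNames.*
# so that the dict can be built stand-alone.
meta_props = {
    "name": "my_model",        # ModelMetaNames.NAME (required)
    "type": "mllib_2.1",       # ModelMetaNames.TYPE (required)
    "label_field": "PRODUCT_LINE",                                # optional
    "software_spec_uid": "53628d69-ced9-4f43-a8cd-9954344039a8",  # optional
}

# With a live Watson Machine Learning client (not executed here):
# client.set.default_space(space_uid)   # PROJECT_UID/SPACE_UID meta names are invalid
# model_details = client.repository.store_model(model="model.tar.gz", meta_props=meta_props)

# Sanity check: both required meta names are present.
missing = [key for key in ("name", "type") if key not in meta_props]
print(missing)  # -> []
```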

class metanames.ExperimentMetaNames[source]#

Set of MetaNames for experiments.

Available MetaNames:

  • NAME (str, required) – example value: Hand-written Digit Recognition

  • DESCRIPTION (str, optional) – example value: Hand-written Digit Recognition training

  • TAGS (list, optional) – schema: [{'value(required)': 'string', 'description(optional)': 'string'}]; example value: [{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]

  • EVALUATION_METHOD (str, optional) – example value: multiclass

  • EVALUATION_METRICS (list, optional) – schema: [{'name(required)': 'string', 'maximize(optional)': 'boolean'}]; example value: [{'name': 'accuracy', 'maximize': False}]

  • TRAINING_REFERENCES (list, required) – schema: [{'pipeline(optional)': {'href(required)': 'string', 'data_bindings(optional)': [{'data_reference(required)': 'string', 'node_id(required)': 'string'}], 'nodes_parameters(optional)': [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]}, 'training_lib(optional)': {'href(required)': 'string', 'compute(optional)': {'name(required)': 'string', 'nodes(optional)': 'number'}, 'runtime(optional)': {'href(required)': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}}]; example value: [{'pipeline': {'href': '/v4/pipelines/6d758251-bb01-4aa5-a7a3-72339e2ff4d8'}}]

  • SPACE_UID (str, optional) – example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • LABEL_COLUMN (str, optional) – example value: label

  • CUSTOM (dict, optional) – example value: {'field1': 'value1'}
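As a sketch, an experiment definition built from these meta names might look as follows. The plain string keys are assumed stand-ins for the ExperimentMetaNames constants, and the store_experiment call is shown only in comments because it needs a live client.

```python
# Illustrative experiment meta_props (string keys are assumed stand-ins for
# client.repository.ExperimentMetaNames.* constants).
experiment_meta = {
    "name": "Hand-written Digit Recognition",                 # NAME (required)
    "evaluation_method": "multiclass",                        # EVALUATION_METHOD
    "evaluation_metrics": [{"name": "accuracy", "maximize": False}],
    "training_references": [                                  # TRAINING_REFERENCES (required)
        {"pipeline": {"href": "/v4/pipelines/6d758251-bb01-4aa5-a7a3-72339e2ff4d8"}}
    ],
}

# With a live client (not executed here):
# experiment_details = client.repository.store_experiment(meta_props=experiment_meta)

# Per the schema above, each training reference carries either a pipeline
# or a training_lib entry, and each of those needs an href.
ok = all("pipeline" in ref or "training_lib" in ref
         for ref in experiment_meta["training_references"])
print(ok)  # -> True
```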

class metanames.FunctionMetaNames[source]#

Set of MetaNames for AI functions.

Available MetaNames:

  • NAME (str, required) – example value: ai_function

  • DESCRIPTION (str, optional) – example value: This is ai function

  • RUNTIME_UID (str, optional) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • SOFTWARE_SPEC_UID (str, optional) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • INPUT_DATA_SCHEMAS (list, optional) – schema: [{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]; example value: [{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}]

  • OUTPUT_DATA_SCHEMAS (list, optional) – schema: [{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]; example value: [{'id': '1', 'type': 'struct', 'fields': [{'name': 'multiplication', 'type': 'double', 'nullable': False, 'metadata': {}}]}]

  • TAGS (list, optional) – schema: [{'value(required)': 'string', 'description(optional)': 'string'}]; example value: [{'value': 'ProjectA', 'description': 'Functions created for ProjectA'}]

  • TYPE (str, optional) – example value: python

  • CUSTOM (dict, optional) – example value: {}

  • SAMPLE_SCORING_INPUT (list, optional) – schema: {'id(optional)': 'string', 'fields(optional)': 'array', 'values(optional)': 'array'}; example value: {'input_data': [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student'], ['paul', 33, 'engineer']]}]}

  • SPACE_UID (str, optional) – example value: 3628d69-ced9-4f43-a8cd-9954344039a8
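To make the input/output schemas concrete, here is a sketch of a closure-style AI function whose scoring payload follows the SAMPLE_SCORING_INPUT shape above. The function body and the commented store_function call are illustrative assumptions, not code taken from this library.

```python
# Sketch of a deployable AI function (illustrative). The outer function returns
# a 'score' callable; the payload shape mirrors SAMPLE_SCORING_INPUT above.
def multiply_function():
    def score(payload):
        # payload: {'input_data': [{'fields': [...], 'values': [[...], ...]}]}
        values = payload["input_data"][0]["values"]
        products = [[x * y] for x, y in values]
        # Output mirrors the OUTPUT_DATA_SCHEMAS example: one 'multiplication' field.
        return {"predictions": [{"fields": ["multiplication"], "values": products}]}
    return score

# With a live client (not executed here), the function could be stored with
# meta_props built from FunctionMetaNames (NAME is required):
# function_details = client.repository.store_function(multiply_function, meta_props={...})

scorer = multiply_function()
result = scorer({"input_data": [{"fields": ["x", "y"], "values": [[2.0, 3.0]]}]})
print(result["predictions"][0]["values"])  # -> [[6.0]]
```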

class metanames.PipelineMetanames[source]#

Set of MetaNames for pipelines.

Available MetaNames:

  • NAME (str, required) – example value: Hand-written Digit Recognition

  • DESCRIPTION (str, optional) – example value: Hand-written Digit Recognition training

  • SPACE_UID (str, optional) – example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • TAGS (list, optional) – schema: [{'value(required)': 'string', 'description(optional)': 'string'}]; example value: [{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]

  • DOCUMENT (dict, optional) – schema: {'doc_type(required)': 'string', 'version(required)': 'string', 'primary_pipeline(required)': 'string', 'pipelines(required)': [{'id(required)': 'string', 'runtime_ref(required)': 'string', 'nodes(required)': [{'id': 'string', 'type': 'string', 'inputs': 'list', 'outputs': 'list', 'parameters': {'training_lib_href': 'string'}}]}]}; example value: {'doc_type': 'pipeline', 'version': '2.0', 'primary_pipeline': 'dlaas_only', 'pipelines': [{'id': 'dlaas_only', 'runtime_ref': 'hybrid', 'nodes': [{'id': 'training', 'type': 'model_node', 'op': 'dl_train', 'runtime_ref': 'DL', 'inputs': [], 'outputs': [], 'parameters': {'name': 'tf-mnist', 'description': 'Simple MNIST model implemented in TF', 'command': 'python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000', 'compute': {'name': 'k80', 'nodes': 1}, 'training_lib_href': '/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content'}, 'target_bucket': 'wml-dev-results'}]}]}

  • CUSTOM (dict, optional) – example value: {'field1': 'value1'}

  • IMPORT (dict, optional) – schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}; example value: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}

  • RUNTIMES (list, optional) – example value: [{'id': 'id', 'name': 'tensorflow', 'version': '1.13-py3'}]

  • COMMAND (str, optional) – example value: convolutional_network.py --trainImagesFile train-images-idx3-ubyte.gz --trainLabelsFile train-labels-idx1-ubyte.gz --testImagesFile t10k-images-idx3-ubyte.gz --testLabelsFile t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000

  • LIBRARY_UID (str, optional) – example value: fb9752c9-301a-415d-814f-cf658d7b856a

  • COMPUTE (dict, optional) – example value: {'name': 'k80', 'nodes': 1}
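The DOCUMENT meta name carries the full pipeline flow. Below is a pared-down sketch of such a document derived from the schema above; the store_pipeline call is shown only in comments, and the plain string keys are illustrative stand-ins for the PipelineMetanames constants.

```python
# Minimal pipeline document sketch (illustrative), following the DOCUMENT
# schema above: doc_type, version, primary_pipeline, and a pipelines list.
pipeline_meta = {
    "name": "Hand-written Digit Recognition",   # PipelineMetanames.NAME (required)
    "document": {
        "doc_type": "pipeline",
        "version": "2.0",
        "primary_pipeline": "dlaas_only",
        "pipelines": [{
            "id": "dlaas_only",
            "runtime_ref": "hybrid",
            "nodes": [{
                "id": "training",
                "type": "model_node",
                "inputs": [],
                "outputs": [],
                "parameters": {
                    "training_lib_href": "/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content"
                },
            }],
        }],
    },
}

# With a live client (not executed here):
# pipeline_details = client.repository.store_pipeline(meta_props=pipeline_meta)

# The primary_pipeline id must name one of the declared pipelines.
ids = [p["id"] for p in pipeline_meta["document"]["pipelines"]]
print(pipeline_meta["document"]["primary_pipeline"] in ids)  # -> True
```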

Runtimes#

class ibm_watson_machine_learning.runtimes.Runtimes(client)[source]#

Create Runtime Specs and associated Custom Libraries.

Note

There are a list of pre-defined runtimes available. To see the list of pre-defined runtimes, use:

client.runtimes.list(pre_defined=True)
clone_library(library_uid, space_id=None, action='copy', rev_id=None)[source]#

Create a new library from the given library, either in the same space or in a new space. All dependent assets will be cloned too.

Parameters:
  • library_uid (str) – UID of the library to be cloned

  • space_id (str, optional) – UID of the space to which the library needs to be cloned

  • action (str, optional) – action specifying “copy” or “move”

  • rev_id (str, optional) – revision ID of the library

Returns:

metadata of the library cloned

Return type:

dict

Note

  • If revision id is not specified, all revisions of the artifact are cloned.

  • Space guid is mandatory for “move” action.

Example

client.runtimes.clone_library(library_uid=artifact_id, space_id=space_uid, action="copy")
clone_runtime(runtime_uid, space_id=None, action='copy', rev_id=None)[source]#

Create a new runtime identical with the given runtime either in the same space or in a new space. All dependent assets will be cloned too.

Parameters:
  • runtime_uid (str) – UID of the runtime to be cloned

  • space_id (str, optional) – UID of the space to which the runtime needs to be cloned

  • action (str, optional) – action specifying “copy” or “move”

  • rev_id (str, optional) – revision ID of the runtime

Returns:

metadata of the runtime cloned

Return type:

dict

Note

  • If revision id is not specified, all revisions of the artifact are cloned.

  • Space guid is mandatory for “move” action.

Example

client.runtimes.clone_runtime(runtime_uid=artifact_id,space_id=space_uid,action="copy")
delete(runtime_uid, with_libraries=False)[source]#

Delete a runtime.

Parameters:
  • runtime_uid (str) – runtime UID

  • with_libraries (bool, optional) – boolean value indicating an option to delete the libraries associated with the runtime

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.runtimes.delete(runtime_uid)
delete_library(library_uid)[source]#

Delete a library.

Parameters:

library_uid (str) – library UID

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.runtimes.delete_library(library_uid)
download_configuration(runtime_uid, filename='runtime_configuration.yaml')[source]#

Download the configuration file for the runtime with the specified UID.

Parameters:
  • runtime_uid (str) – UID of runtime

  • filename (str, optional) – filename of downloaded archive

Returns:

path to the downloaded runtime configuration

Return type:

str

Example

filename="runtime.yml"
client.runtimes.download_configuration(runtime_uid, filename=filename)
download_library(library_uid, filename=None)[source]#

Download the content of the library with the specified UID.

Parameters:
  • library_uid (str) – UID of library

  • filename (str, optional) – filename of downloaded archive, default value: <LIBRARY-NAME>-<LIBRARY-VERSION>.zip

Returns:

path to the downloaded library content

Return type:

str

Example

filename="library.tgz"
client.runtimes.download_library(library_uid, filename=filename)
get_details(runtime_uid=None, pre_defined=False, limit=None)[source]#

Get metadata of stored runtime(s). If a runtime UID is not specified, metadata of all runtimes is returned.

Parameters:
  • runtime_uid (str, optional) – runtime UID

  • pre_defined (bool, optional) – boolean indicating whether to display only predefined runtimes

  • limit (int, optional) – limit number of fetched records

Returns:

metadata of runtime(s)

Return type:

  • dict - if runtime_uid is not None

  • {“resources”: [dict]} - if runtime_uid is None

Examples

runtime_details = client.runtimes.get_details(runtime_uid)
runtime_details = client.runtimes.get_details(runtime_uid=runtime_uid)
runtime_details = client.runtimes.get_details()
static get_href(details)[source]#

Get runtime href from runtime details.

Parameters:

details (dict) – metadata of the runtime

Returns:

runtime href

Return type:

str

Example

runtime_details = client.runtimes.get_details(runtime_uid)
runtime_href = client.runtimes.get_href(runtime_details)
get_library_details(library_uid=None, limit=None)[source]#

Get metadata of stored library(ies). If a library UID is not specified, metadata of all libraries is returned.

Parameters:
  • library_uid (str, optional) – library UID

  • limit (int, optional) – limit number of fetched records

Returns:

metadata of library(ies)

Return type:

  • dict - if library_uid is not None

  • {“resources”: [dict]} - if library_uid is None

Examples

library_details = client.runtimes.get_library_details(library_uid)
library_details = client.runtimes.get_library_details(library_uid=library_uid)
library_details = client.runtimes.get_library_details()
static get_library_href(library_details)[source]#

Get library href from library details.

Parameters:

library_details (dict) – metadata of the library

Returns:

library href

Return type:

str

Example

library_details = client.runtimes.get_library_details(library_uid)
library_url = client.runtimes.get_library_href(library_details)
static get_library_uid(library_details)[source]#

Get library uid from library details.

Parameters:

library_details (dict) – metadata of the library

Returns:

library UID

Return type:

str

Example

library_details = client.runtimes.get_library_details(library_uid)
library_uid = client.runtimes.get_library_uid(library_details)
static get_uid(details)[source]#

Get runtime uid from runtime details.

Parameters:

details (dict) – metadata of the runtime

Returns:

runtime UID

Return type:

str

Example

runtime_details = client.runtimes.get_details(runtime_uid)
runtime_uid = client.runtimes.get_uid(runtime_details)
list(limit=None, pre_defined=False)[source]#

Print stored runtimes in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • pre_defined (bool, optional) – boolean indicating whether to display only predefined runtimes

Example

client.runtimes.list()
client.runtimes.list(pre_defined=True)
list_libraries(runtime_uid=None, limit=None)[source]#

Print stored libraries in a table format. If a runtime UID is provided, only the libraries associated with that runtime are listed; otherwise all libraries are listed. If limit is set to None, only the first 50 records are shown.

Parameters:
  • runtime_uid (str, optional) – runtime UID

  • limit (int, optional) – limit number of fetched records

Example

client.runtimes.list_libraries()
client.runtimes.list_libraries(runtime_uid)
store(meta_props)[source]#

Create a runtime.

Parameters:

meta_props (dict) –

metadata of the runtime configuration. To see available meta names use:

client.runtimes.ConfigurationMetaNames.get()

Returns:

metadata of the runtime created

Return type:

dict

Examples

Creating a library:

lib_meta = {
    client.runtimes.LibraryMetaNames.NAME: "libraries_custom",
    client.runtimes.LibraryMetaNames.DESCRIPTION: "custom libraries for scoring",
    client.runtimes.LibraryMetaNames.FILEPATH: "/home/user/my_lib.zip",
    client.runtimes.LibraryMetaNames.VERSION: "1.0",
    client.runtimes.LibraryMetaNames.PLATFORM: {"name": "python", "versions": ["3.5"]}
}

custom_library_details = client.runtimes.store_library(lib_meta)
custom_library_uid = client.runtimes.get_library_uid(custom_library_details)

Creating a runtime:

runtime_meta = {
    client.runtimes.ConfigurationMetaNames.NAME: "runtime_spec_python_3.5",
    client.runtimes.ConfigurationMetaNames.DESCRIPTION: "test",
    client.runtimes.ConfigurationMetaNames.PLATFORM: {
        "name": "python",
        "version": "3.5"
    },
    client.runtimes.ConfigurationMetaNames.LIBRARIES_UIDS: [custom_library_uid] # already existing lib is linked here
}

runtime_details = client.runtimes.store(runtime_meta)
store_library(meta_props)[source]#

Create a library.

Parameters:

meta_props (dict) –

metadata of the library configuration. To see available meta names use:

client.runtimes.LibraryMetaNames.get()

Returns:

metadata of the library created

Return type:

dict

Example

library_details = client.runtimes.store_library({
    client.runtimes.LibraryMetaNames.NAME: "libraries_custom",
    client.runtimes.LibraryMetaNames.DESCRIPTION: "custom libraries for scoring",
    client.runtimes.LibraryMetaNames.FILEPATH: custom_library_path,
    client.runtimes.LibraryMetaNames.VERSION: "1.0",
    client.runtimes.LibraryMetaNames.PLATFORM: {"name": "python", "versions": ["3.5"]}
})
update_library(library_uid, changes)[source]#

Updates existing library metadata.

Parameters:
  • library_uid (str) – UID of library which definition should be updated

  • changes (dict) – elements which should be changed, where keys are ConfigurationMetaNames

Returns:

metadata of updated library

Return type:

dict

Example

metadata = {
    client.runtimes.LibraryMetaNames.NAME: "updated_lib"
}

library_details = client.runtimes.update_library(library_uid, changes=metadata)
update_runtime(runtime_uid, changes)[source]#

Updates existing runtime metadata.

Parameters:
  • runtime_uid (str) – UID of runtime which definition should be updated

  • changes (dict) – elements which should be changed, where keys are ConfigurationMetaNames

Returns:

metadata of updated runtime

Return type:

dict

Example

metadata = {
    client.runtimes.ConfigurationMetaNames.NAME: "updated_runtime"
}

runtime_details = client.runtimes.update_runtime(runtime_uid, changes=metadata)
class metanames.RuntimeMetaNames[source]#

Set of MetaNames for Runtime Specs.

Available MetaNames:

  • NAME (str, required) – example value: runtime_spec_python_3.10

  • DESCRIPTION (str, optional) – example value: sample runtime

  • PLATFORM (dict, required) – schema: {'name(required)': 'string', 'version(required)': 'version'}; example value: {'name': 'python', 'version': '3.10'}

  • LIBRARIES_UIDS (list, optional) – example value: ['46dc9cf1-252f-424b-b52d-5cdd9814987f']

  • CONFIGURATION_FILEPATH (str, optional) – example value: /home/env_config.yaml

  • TAGS (list, optional) – schema: [{'value(required)': 'string', 'description(optional)': 'string'}]; example value: [{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]

  • CUSTOM (dict, optional) – example value: {"field1": "value1"}

  • SPACE_UID (str, optional) – example value: 46dc9cf1-252f-424b-b52d-5cdd9814987f

  • COMPUTE (dict, optional) – schema: {'name(required)': 'string', 'nodes(optional)': 'string'}; example value: {'name': 'name1', 'nodes': 1}

Script#

class client.Script(client)[source]#

Store and manage scripts assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ScriptMetaNames object>#

MetaNames for script assets creation.

create_revision(script_uid)[source]#

Create a revision for the given script. Revisions are immutable once created. The metadata and attachment of the given script_uid are taken and a revision is created from them.

Parameters:

script_uid (str) – script ID

Returns:

stored script revisions metadata

Return type:

dict

Example

script_revision = client.script.create_revision(script_uid)
delete(asset_uid)[source]#

Delete a stored script asset.

Parameters:

asset_uid (str) – UID of script asset

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.script.delete(asset_uid)
download(asset_uid, filename, rev_uid=None)[source]#

Download the content of a script asset.

Parameters:
  • asset_uid (str) – the Unique Id of the script asset to be downloaded

  • filename (str) – filename to be used for the downloaded file

  • rev_uid (str, optional) – revision id

Returns:

path to the downloaded asset content

Return type:

str

Example

client.script.download(asset_uid, "script_file")
get_details(script_uid=None)[source]#

Get script asset details. If no script_uid is passed, details for all script assets will be returned.

Parameters:

script_uid (str, optional) – Unique id of script

Returns:

metadata of the stored script asset

Return type:

  • dict - if script_uid is not None

  • {“resources”: [dict]} - if script_uid is None

Example

script_details = client.script.get_details(script_uid)
static get_href(asset_details)[source]#

Get url of stored scripts asset.

Parameters:

asset_details (dict) – stored script details

Returns:

href of stored script asset

Return type:

str

Example

asset_details = client.script.get_details(asset_uid)
asset_href = client.script.get_href(asset_details)
static get_id(asset_details)[source]#

Get Unique Id of stored script asset.

Parameters:

asset_details (dict) – metadata of the stored script asset

Returns:

Unique Id of stored script asset

Return type:

str

Example

asset_uid = client.script.get_id(asset_details)
get_revision_details(script_uid, rev_uid=None)[source]#

Get metadata of script revision.

Parameters:
  • script_uid (str) – script ID

  • rev_uid (str, optional) – revision ID; if this parameter is not provided, the latest revision is returned, or an error is raised if none exists

Returns:

stored script(s) metadata

Return type:

dict

Example

script_details = client.script.get_revision_details(script_uid, rev_uid)
static get_uid(asset_details)[source]#

Get Unique Id of stored script asset.

Deprecated: Use get_id(asset_details) instead.

Parameters:

asset_details (dict) – metadata of the stored script asset

Returns:

Unique Id of stored script asset

Return type:

str

Example

asset_uid = client.script.get_uid(asset_details)
list(limit=None, return_as_df=True)[source]#

Print stored scripts in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed scripts or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.script.list()
list_revisions(script_uid, limit=None, return_as_df=True)[source]#

Print all revisions for the given script uid in a table format.

Parameters:
  • script_uid (str) – stored script ID

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed revisions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.script.list_revisions(script_uid)
store(meta_props, file_path)[source]#

Create a script asset and upload content to it.

Parameters:
  • meta_props (dict) – metadata of the script asset

  • file_path (str) – path to the content file to be uploaded

Returns:

metadata of the stored script asset

Return type:

dict

Example

metadata = {
    client.script.ConfigurationMetaNames.NAME: 'my first script',
    client.script.ConfigurationMetaNames.DESCRIPTION: 'description of the script',
    client.script.ConfigurationMetaNames.SOFTWARE_SPEC_UID: '0cdb0f1e-5376-4f4d-92dd-da3b69aa9bda'
}

asset_details = client.script.store(meta_props=metadata, file_path="/path/to/file")
update(script_uid, meta_props=None, file_path=None)[source]#

Update script with either metadata or attachment or both.

Parameters:
  • script_uid (str) – script UID

  • meta_props (dict, optional) – changes to script metadata

  • file_path (str, optional) – file path to new attachment

Returns:

updated metadata of script

Return type:

dict

Example

script_details = client.script.update(script_uid, meta, content_path)
class metanames.ScriptMetaNames[source]#

Set of MetaNames for Script Specifications.

Available MetaNames:

  • NAME (str, required) – example value: Python script

  • DESCRIPTION (str, optional) – example value: my_description

  • SOFTWARE_SPEC_UID (str, required) – example value: 53628d69-ced9-4f43-a8cd-9954344039a8

Service instance#

class client.ServiceInstanceNewPlan(client)[source]#

Connect, get details and check usage of Watson Machine Learning service instance.

get_api_key()[source]#

Get the API key of the Watson Machine Learning service.

Returns:

api key

Return type:

str

Example

instance_details = client.service_instance.get_api_key()
get_details()[source]#

Get information about Watson Machine Learning instance.

Returns:

metadata of service instance

Return type:

dict

Example

instance_details = client.service_instance.get_details()
get_instance_id()[source]#

Get instance id of Watson Machine Learning service.

Returns:

instance id

Return type:

str

Example

instance_details = client.service_instance.get_instance_id()
get_password()[source]#

Get password for Watson Machine Learning service. Applicable only for IBM Cloud Pak® for Data.

Returns:

password

Return type:

str

Example

instance_details = client.service_instance.get_password()
get_url()[source]#

Get instance url of Watson Machine Learning service.

Returns:

instance url

Return type:

str

Example

instance_details = client.service_instance.get_url()
get_username()[source]#

Get username for Watson Machine Learning service. Applicable only for IBM Cloud Pak® for Data.

Returns:

username

Return type:

str

Example

instance_details = client.service_instance.get_username()

Set#

class client.Set(client)[source]#

Set a space_id/project_id to be used in the subsequent actions.

default_project(project_id)[source]#

Set a project ID.

Parameters:

project_id (str) – UID of the project

Returns:

status (“SUCCESS” if succeeded)

Return type:

str

Example

client.set.default_project(project_id)
default_space(space_uid)[source]#

Set a space ID.

Parameters:

space_uid (str) – UID of the space to be used

Returns:

status (“SUCCESS” if succeeded)

Return type:

str

Example

client.set.default_space(space_uid)
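A session targets exactly one default context, set once up front before any other actions. The helper below is a hypothetical illustration of that contract (it is not part of this client); the APIClient and setter calls are shown only as comments because they need a live service.

```python
# Hypothetical helper illustrating the "exactly one default context" contract
# of client.set; not part of the ibm_watson_machine_learning API.
def choose_context(space_uid=None, project_id=None):
    """Return the setter name and ID a session should use."""
    if (space_uid is None) == (project_id is None):
        raise ValueError("provide exactly one of space_uid or project_id")
    if space_uid is not None:
        return ("default_space", space_uid)
    return ("default_project", project_id)

# With a live client (not executed here):
# method, uid = choose_context(space_uid="3c1ce536-20dc-426e-aac7-7284cf3befc6")
# status = getattr(client.set, method)(uid)  # "SUCCESS" if succeeded

print(choose_context(space_uid="3c1ce536-20dc-426e-aac7-7284cf3befc6"))
# -> ('default_space', '3c1ce536-20dc-426e-aac7-7284cf3befc6')
```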

Shiny (IBM Cloud Pak for Data only)#

Warning! Not supported for IBM Cloud.

class client.Shiny(client)[source]#

Store and manage shiny assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ShinyMetaNames object>#

MetaNames for Shiny Assets creation.

create_revision(shiny_uid)[source]#

Create a revision for the given Shiny asset. Revisions are immutable once created. The metadata and attachment of the given shiny_uid are taken and a revision is created from them.

Parameters:

shiny_uid (str) – shiny asset ID

Returns:

stored shiny asset revisions metadata

Return type:

dict

Example

shiny_revision = client.shiny.create_revision(shiny_uid)
delete(shiny_uid)[source]#

Delete a stored shiny asset.

Parameters:

shiny_uid (str) – Unique Id of shiny asset

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.shiny.delete(shiny_uid)
download(shiny_uid, filename, rev_uid=None)[source]#

Download the content of a shiny asset.

Parameters:
  • shiny_uid (str) – the Unique Id of the shiny asset to be downloaded

  • filename (str) – filename to be used for the downloaded file

  • rev_uid (str, optional) – revision id

Returns:

path to the downloaded shiny asset content

Return type:

str

Example

client.shiny.download(shiny_uid, "shiny_asset.zip")
get_details(shiny_uid=None)[source]#

Get shiny asset details. If no shiny_uid is passed, details for all shiny assets will be returned.

Parameters:

shiny_uid (str, optional) – Unique id of shiny asset

Returns:

metadata of the stored shiny asset

Return type:

  • dict - if shiny_uid is not None

  • {“resources”: [dict]} - if shiny_uid is None

Example

shiny_details = client.shiny.get_details(shiny_uid)
static get_href(shiny_details)[source]#

Get url of stored shiny asset.

Parameters:

shiny_details (dict) – stored shiny asset details

Returns:

href of stored shiny asset

Return type:

str

Example

shiny_details = client.shiny.get_details(shiny_uid)
shiny_href = client.shiny.get_href(shiny_details)
static get_id(shiny_details)[source]#

Get Unique Id of stored shiny asset.

Parameters:

shiny_details (dict) – metadata of the stored shiny asset

Returns:

Unique Id of stored shiny asset

Return type:

str

Example

shiny_uid = client.shiny.get_id(shiny_details)
get_revision_details(shiny_uid=None, rev_uid=None)[source]#

Get metadata of a shiny asset revision.

Parameters:
  • shiny_uid (str, optional) – shiny asset ID

  • rev_uid (str, optional) – revision ID; if this parameter is not provided, the latest revision is returned, or an error is raised if none exists

Returns:

stored shiny(s) metadata

Return type:

dict

Example

shiny_details = client.shiny.get_revision_details(shiny_uid, rev_uid)
static get_uid(shiny_details)[source]#

Get Unique Id of stored shiny asset.

Deprecated: Use get_id(shiny_details) instead.

Parameters:

shiny_details (dict) – metadata of the stored shiny asset

Returns:

Unique Id of stored shiny asset

Return type:

str

Example

shiny_uid = client.shiny.get_uid(shiny_details)
list(limit=None, return_as_df=True)[source]#

Print stored shiny assets in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed shiny assets or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.shiny.list()
list_revisions(shiny_uid, limit=None, return_as_df=True)[source]#

Print all revisions for the given shiny asset uid in a table format.

Parameters:
  • shiny_uid (str) – stored shiny asset ID

  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determines whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed shiny revisions or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.shiny.list_revisions(shiny_uid)
store(meta_props, file_path)[source]#

Create a shiny asset and upload content to it.

Parameters:
  • meta_props (dict) – metadata of shiny asset

  • file_path (str) – path to the content file to be uploaded

Returns:

metadata of the stored shiny asset

Return type:

dict

Example

meta_props = {
    client.shiny.ConfigurationMetaNames.NAME: "shiny app name"
}

shiny_details = client.shiny.store(meta_props, file_path="/path/to/file")
update(shiny_uid, meta_props=None, file_path=None)[source]#

Update a shiny asset with new metadata, a new attachment, or both.

Parameters:
  • shiny_uid (str) – Shiny UID

  • meta_props (dict, optional) – changes to shiny metadata

  • file_path (str, optional) – file path to new attachment

Returns:

updated metadata of shiny asset

Return type:

dict

Example

shiny_details = client.shiny.update(shiny_uid, meta_props, file_path)

Software specifications#

class client.SwSpec(client)[source]#

Store and manage software specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.SwSpecMetaNames object>#

MetaNames for Software Specification creation.

add_package_extension(sw_spec_uid, pkg_extn_id)[source]#

Add a package extension to a software specification's existing metadata.

Parameters:
  • sw_spec_uid (str) – Unique Id of software specification which should be updated

  • pkg_extn_id (str) – Unique Id of the package extension which needs to be added to the software specification

Example

client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_id)
delete(sw_spec_uid)[source]#

Delete a software specification.

Parameters:

sw_spec_uid (str) – Unique Id of software specification

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.software_specifications.delete(sw_spec_uid)
delete_package_extension(sw_spec_uid, pkg_extn_id)[source]#

Delete a package extension from a software specification's existing metadata.

Parameters:
  • sw_spec_uid (str) – Unique Id of software specification which should be updated

  • pkg_extn_id (str) – Unique Id of the package extension which needs to be deleted from the software specification

Example

client.software_specifications.delete_package_extension(sw_spec_uid, pkg_extn_id)
get_details(sw_spec_uid=None, state_info=False)[source]#

Get software specification details. If no sw_spec_uid is passed, details for all software specifications will be returned.

Parameters:
  • sw_spec_uid (str, optional) – UID of software specification

  • state_info (bool, optional) – applies only when sw_spec_uid is None; instead of returning software spec details, returns the state of each software spec (supported, unsupported, or deprecated), including a suggested replacement for unsupported or deprecated software specs

Returns:

metadata of the stored software specification(s)

Return type:

  • dict - if sw_spec_uid is not None

  • {“resources”: [dict]} - if sw_spec_uid is None

Examples

sw_spec_details = client.software_specifications.get_details(sw_spec_uid)
sw_spec_details = client.software_specifications.get_details()
sw_spec_state_details = client.software_specifications.get_details(state_info=True)
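The state-info response can be post-processed to plan migrations away from deprecated specs. A minimal sketch; it assumes the response is shaped like {'resources': [...]} with 'name', 'state', and 'replacement' fields per entry — the field names are illustrative, not confirmed by this reference:

```python
def find_replacements(state_details):
    """Map each unsupported or deprecated software spec to its suggested
    replacement, based on a get_details(state_info=True)-style response.

    The 'name'/'state'/'replacement' keys are assumptions for this sketch.
    """
    return {
        spec["name"]: spec.get("replacement")
        for spec in state_details.get("resources", [])
        if spec.get("state") in ("unsupported", "deprecated")
    }

# Sample response with illustrative spec names:
sample = {"resources": [
    {"name": "runtime-22.1-py3.9", "state": "deprecated",
     "replacement": "runtime-23.1-py3.10"},
    {"name": "runtime-23.1-py3.10", "state": "supported"},
]}
replacements = find_replacements(sample)
```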
static get_href(sw_spec_details)[source]#

Get url of software specification.

Parameters:

sw_spec_details (dict) – software specification details

Returns:

href of software specification

Return type:

str

Example

sw_spec_details = client.software_specifications.get_details(sw_spec_uid)
sw_spec_href = client.software_specifications.get_href(sw_spec_details)
static get_id(sw_spec_details)[source]#

Get Unique Id of software specification.

Parameters:

sw_spec_details (dict) – metadata of the software specification

Returns:

Unique Id of software specification

Return type:

str

Example

asset_uid = client.software_specifications.get_id(sw_spec_details)
get_id_by_name(sw_spec_name)[source]#

Get Unique Id of software specification.

Parameters:

sw_spec_name (str) – name of the software specification

Returns:

Unique Id of software specification

Return type:

str

Example

asset_uid = client.software_specifications.get_id_by_name(sw_spec_name)
static get_uid(sw_spec_details)[source]#

Get Unique Id of software specification.

Deprecated: Use get_id(sw_spec_details) instead.

Parameters:

sw_spec_details (dict) – metadata of the software specification

Returns:

Unique Id of software specification

Return type:

str

Example

asset_uid = client.software_specifications.get_uid(sw_spec_details)
get_uid_by_name(sw_spec_name)[source]#

Get Unique Id of software specification.

Deprecated: Use get_id_by_name(self, sw_spec_name) instead.

Parameters:

sw_spec_name (str) – name of the software specification

Returns:

Unique Id of software specification

Return type:

str

Example

asset_uid = client.software_specifications.get_uid_by_name(sw_spec_name)
list(limit=None, return_as_df=True)[source]#

Print software specifications in a table format.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • return_as_df (bool, optional) – determinate if table should be returned as pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed software specifications or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.software_specifications.list()
store(meta_props)[source]#

Create a software specification.

Parameters:

meta_props (dict) –

metadata of the software specification configuration. To see available meta names use:

client.software_specifications.ConfigurationMetaNames.get()

Returns:

metadata of the stored software specification

Return type:

dict

Example

meta_props = {
    client.software_specifications.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
    client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS_UID: [],
    client.software_specifications.ConfigurationMetaNames.SOFTWARE_CONFIGURATIONS: {},
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION_ID: "guid"
}

sw_spec_details = client.software_specifications.store(meta_props)
class metanames.SwSpecMetaNames[source]#

Set of MetaNames for Software Specifications Specs.

Available MetaNames:

• NAME (str, required) – example: Python 3.10 with pre-installed ML package

• DESCRIPTION (str, optional) – example: my_description

• PACKAGE_EXTENSIONS (list, optional) – example: [{'guid': 'value'}]

• SOFTWARE_CONFIGURATION (dict, optional) – schema: {'platform(required)': 'string'}; example: {'platform': {'name': 'python', 'version': '3.10'}}

• BASE_SOFTWARE_SPECIFICATION (dict, required) – example: {'guid': 'BASE_SOFTWARE_SPECIFICATION_ID'}
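The required/optional split above can be checked locally before calling store(). A hedged sketch in which the plain string keys stand in for the ConfigurationMetaNames attributes — in real code, call client.software_specifications.ConfigurationMetaNames.get() for the exact names:

```python
# Meta names taken from the table above; the helper itself is hypothetical.
REQUIRED = {"NAME", "BASE_SOFTWARE_SPECIFICATION"}
OPTIONAL = {"DESCRIPTION", "PACKAGE_EXTENSIONS", "SOFTWARE_CONFIGURATION"}

def check_sw_spec_props(props):
    """Return (missing_required, unknown) meta names for a candidate payload."""
    names = set(props)
    return sorted(REQUIRED - names), sorted(names - REQUIRED - OPTIONAL)

missing, unknown = check_sw_spec_props({
    "NAME": "my_spec",
    "BASE_SOFTWARE_SPECIFICATION": {"guid": "guid"},
    "COLOR": "blue",  # deliberately invalid, to show detection
})
```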

Spaces#

class client.PlatformSpaces(client)[source]#

Store and manage spaces.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.SpacesPlatformMetaNames object>#

MetaNames for spaces creation.

MemberMetaNames = <ibm_watson_machine_learning.metanames.SpacesPlatformMemberMetaNames object>#

MetaNames for space members creation.

create_member(space_id, meta_props)[source]#

Create a member within a space.

Parameters:
  • space_id (str) – ID of the space

  • meta_props (dict) –

    metadata of the member configuration. To see available meta names use:

    client.spaces.MemberMetaNames.get()
    

Returns:

metadata of the stored member

Return type:

dict

Note

  • role can be any one of the following: “viewer”, “editor”, “admin”

  • type can be any one of the following: “user”, “service”

  • id can be either service-ID or IAM-userID

Examples

metadata = {
    client.spaces.MemberMetaNames.MEMBERS: [{"id":"IBMid-100000DK0B",
                                             "type": "user",
                                             "role": "admin" }]
}
members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
metadata = {
    client.spaces.MemberMetaNames.MEMBERS: [{"id":"iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71",
                                             "type": "service",
                                             "role": "admin" }]
}
members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
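Since only the role and type values listed in the Note above are accepted, member entries can be validated before calling create_member. A small sketch — the make_member helper is hypothetical, not part of the client:

```python
# Allowed values per the Note above.
ALLOWED_ROLES = {"viewer", "editor", "admin"}
ALLOWED_TYPES = {"user", "service"}

def make_member(member_id, member_type, role):
    """Build one MEMBERS entry, rejecting an invalid role or type early."""
    if role not in ALLOWED_ROLES:
        raise ValueError(f"role must be one of {sorted(ALLOWED_ROLES)}")
    if member_type not in ALLOWED_TYPES:
        raise ValueError(f"type must be one of {sorted(ALLOWED_TYPES)}")
    return {"id": member_id, "type": member_type, "role": role}

members = [make_member("IBMid-100000DK0B", "user", "admin")]
```

The resulting list can then be passed as the value of client.spaces.MemberMetaNames.MEMBERS.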
delete(space_id)[source]#

Delete a stored space.

Parameters:

space_id (str) – space ID

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.spaces.delete(space_id)
delete_member(space_id, member_id)[source]#

Delete a member associated with a space.

Parameters:
  • space_id (str) – space UID

  • member_id (str) – member UID

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.spaces.delete_member(space_id, member_id)
get_details(space_id=None, limit=None, asynchronous=False, get_all=False)[source]#

Get metadata of stored space(s).

Parameters:
  • space_id (str, optional) – space ID

  • limit (int, optional) – applicable when space_id is not provided, otherwise limit will be ignored

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

Returns:

metadata of stored space(s)

Return type:

dict

Example

space_details = client.spaces.get_details(space_id)
space_details = client.spaces.get_details(limit=100)
space_details = client.spaces.get_details(limit=100, get_all=True)
space_details = []
for entry in client.spaces.get_details(limit=100, asynchronous=True, get_all=True):
    space_details.extend(entry)
static get_id(space_details)[source]#

Get space_id from space details.

Parameters:

space_details (dict) – metadata of the stored space

Returns:

space ID

Return type:

str

Example

space_details = client.spaces.store(meta_props)
space_id = client.spaces.get_id(space_details)
get_member_details(space_id, member_id)[source]#

Get metadata of member associated with a space.

Parameters:
  • space_id (str) – ID of the space

  • member_id (str) – member ID

Returns:

metadata of member of a space

Return type:

dict

Example

member_details = client.spaces.get_member_details(space_id, member_id)
static get_uid(space_details)[source]#

Get Unique Id of the space.

Deprecated: Use get_id(space_details) instead.

Parameters:

space_details (dict) – metadata of the space

Returns:

Unique Id of space

Return type:

str

Example

space_details = client.spaces.store(meta_props)
space_uid = client.spaces.get_uid(space_details)
list(limit=None, member=None, roles=None, return_as_df=True, space_type=None)[source]#

Print stored spaces in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records

  • member (str, optional) – filters the result list to only include spaces where the user with a matching user id is a member

  • roles (str, optional) – filter the result list to only include spaces where the matching member has one of the specified roles

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

  • space_type (str, optional) – filter spaces by their type; available types: ‘wx’, ‘cpd’, ‘wca’

Returns:

pandas.DataFrame with listed spaces or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.spaces.list()
list_members(space_id, limit=None, identity_type=None, role=None, state=None, return_as_df=True)[source]#

Print stored members of a space in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • space_id (str) – ID of space

  • limit (int, optional) – limit number of fetched records

  • identity_type (str, optional) – filter the members by type

  • role (str, optional) – filter the members by role

  • state (str, optional) – filter the members by state

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object, default: True

Returns:

pandas.DataFrame with listed members or None if return_as_df is False

Return type:

pandas.DataFrame or None

Example

client.spaces.list_members(space_id)
promote(asset_id, source_project_id, target_space_id, rev_id=None)[source]#

Promote asset from project to space.

Parameters:
  • asset_id (str) – ID of the stored asset

  • source_project_id (str) – source project, from which asset is promoted

  • target_space_id (str) – target space, where asset is promoted

  • rev_id (str, optional) – revision ID of the promoted asset

Returns:

promoted asset id

Return type:

str

Examples

promoted_asset_id = client.spaces.promote(asset_id, source_project_id=project_id, target_space_id=space_id)
promoted_model_id = client.spaces.promote(model_id, source_project_id=project_id, target_space_id=space_id)
promoted_function_id = client.spaces.promote(function_id, source_project_id=project_id, target_space_id=space_id)
promoted_data_asset_id = client.spaces.promote(data_asset_id, source_project_id=project_id, target_space_id=space_id)
promoted_connection_asset_id = client.spaces.promote(connection_id, source_project_id=project_id, target_space_id=space_id)
store(meta_props, background_mode=True)[source]#

Create a space. The instance associated with the space via COMPUTE will be used for billing purposes on cloud. Note that STORAGE and COMPUTE are applicable only for cloud.

Parameters:
  • meta_props (dict) –

metadata of the space configuration. To see available meta names use:

    client.spaces.ConfigurationMetaNames.get()
    

  • background_mode (bool, optional) – indicates whether the store() method runs in the background (asynchronously) or synchronously

Returns:

metadata of the stored space

Return type:

dict

Example

metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "my_space",
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "spaces",
    client.spaces.ConfigurationMetaNames.STORAGE: {"resource_crn": "provide crn of the COS storage"},
    client.spaces.ConfigurationMetaNames.COMPUTE: {"name": "test_instance",
                                                   "crn": "provide crn of the instance"},
    client.spaces.ConfigurationMetaNames.STAGE: {"production": True,
                                                 "name": "stage_name"},
    client.spaces.ConfigurationMetaNames.TAGS: ["sample_tag_1", "sample_tag_2"],
    client.spaces.ConfigurationMetaNames.TYPE: "cpd",
}
spaces_details = client.spaces.store(meta_props=metadata)
update(space_id, changes)[source]#

Update existing space metadata. ‘STORAGE’ cannot be updated. STORAGE and COMPUTE are applicable only for cloud.

Parameters:
  • space_id (str) – ID of the space whose definition should be updated

  • changes (dict) – elements which should be changed, where keys are ConfigurationMetaNames

Returns:

metadata of updated space

Return type:

dict

Example

metadata = {
    client.spaces.ConfigurationMetaNames.NAME:"updated_space",
    client.spaces.ConfigurationMetaNames.COMPUTE: {"name": "test_instance",
                                                   "crn": "v1:staging:public:pm-20-dev:us-south:a/09796a1b4cddfcc9f7fe17824a68a0f8:f1026e4b-77cf-4703-843d-c9984eac7272::"
    }
}
space_details = client.spaces.update(space_id, changes=metadata)
update_member(space_id, member_id, changes)[source]#

Update existing space member metadata.

Parameters:
  • space_id (str) – ID of space

  • member_id (str) – ID of member that needs to be updated

  • changes (dict) – elements which should be changed, where keys are ConfigurationMetaNames

Returns:

metadata of updated member

Return type:

dict

Example

metadata = {
    client.spaces.MemberMetaNames.MEMBER: {"role": "editor"}
}
member_details = client.spaces.update_member(space_id, member_id, changes=metadata)
class metanames.SpacesPlatformMetaNames[source]#

Set of MetaNames for Platform Spaces Specs.

Available MetaNames:

• NAME (str, required) – example: my_space

• DESCRIPTION (str, optional) – example: my_description

• STORAGE (dict, optional) – example: {'type': 'bmcos_object_storage', 'resource_crn': '', 'delegated(optional)': 'false'}

• COMPUTE (dict, optional) – example: {'name': 'name', 'crn': 'crn of the instance'}

• STAGE (dict, optional) – example: {'production': True, 'name': 'name of the stage'}

• TAGS (list, optional) – example: ['sample_tag']

• TYPE (str, optional) – example: cpd

class metanames.SpacesPlatformMemberMetaNames[source]#

Set of MetaNames for Platform Spaces Member Specs.

Available MetaNames:

• MEMBERS (list, optional) – schema: [{'id(required)': 'string', 'role(required)': 'string', 'type(required)': 'string', 'state(optional)': 'string'}]; example: [{'id': 'iam-id1', 'role': 'editor', 'type': 'user', 'state': 'active'}, {'id': 'iam-id2', 'role': 'viewer', 'type': 'user', 'state': 'active'}]

• MEMBER (dict, optional) – example: {'id': 'iam-id1', 'role': 'editor', 'type': 'user', 'state': 'active'}

Training#

class client.Training(client)[source]#

Train new models.

cancel(training_uid, hard_delete=False)[source]#

Cancel a training which is currently running and remove it. This method can also be used to delete metadata details of a completed or canceled training run when the hard_delete parameter is set to True.

Parameters:
  • training_uid (str) – training UID

  • hard_delete (bool, optional) –

    specify True or False:

    • True - to delete the completed or canceled training run

    • False - to cancel the currently running training run

Returns:

status (“SUCCESS” or “FAILED”)

Return type:

str

Example

client.training.cancel(training_uid)
get_details(training_uid=None, limit=None, asynchronous=False, get_all=False, training_type=None, state=None, tag_value=None, training_definition_id=None, _internal=False)[source]#

Get metadata of training run(s). If training_uid is not specified, metadata of all training runs is returned.

Parameters:
  • training_uid (str, optional) – Unique Id of training

  • limit (int, optional) – limit number of fetched records

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

  • training_type (str, optional) – filter the fetched list of trainings based on training type [“pipeline” or “experiment”]

  • state (str, optional) – filter the fetched list of training based on their state: [queued, running, completed, failed]

  • tag_value (str, optional) – filter the fetched list of trainings based on their tag value

  • training_definition_id (str, optional) – filter the fetched trainings which are using the given training definition

Returns:

metadata of training(s)

Return type:

  • dict - if training_uid is not None

  • {“resources”: [dict]} - if training_uid is None

Examples

training_run_details = client.training.get_details(training_uid)
training_runs_details = client.training.get_details()
training_runs_details = client.training.get_details(limit=100)
training_runs_details = client.training.get_details(limit=100, get_all=True)
training_runs_details = []
for entry in client.training.get_details(limit=100, asynchronous=True, get_all=True):
    training_runs_details.extend(entry)
static get_href(training_details)[source]#

Get training href from training details.

Parameters:

training_details (dict) – metadata of the training created

Returns:

training href

Return type:

str

Example

training_details = client.training.get_details(training_uid)
run_url = client.training.get_href(training_details)
static get_id(training_details)[source]#

Get training id from training details.

Parameters:

training_details (dict) – metadata of the training created

Returns:

Unique id of training

Return type:

str

Example

training_details = client.training.get_details(training_id)
training_id = client.training.get_id(training_details)
get_metrics(training_uid)[source]#

Get metrics.

Parameters:

training_uid (str) – training UID

Returns:

metrics of a training run

Return type:

list of dict

Example

training_metrics = client.training.get_metrics(training_uid)
get_status(training_uid)[source]#

Get the status of a training created.

Parameters:

training_uid (str) – training UID

Returns:

training_status

Return type:

dict

Example

training_status = client.training.get_status(training_uid)
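When a training job has been submitted asynchronously, get_status can be polled until the run reaches a terminal state. A minimal sketch; it assumes the status dict exposes the run state under a 'state' key and that 'queued', 'pending', and 'running' are the in-progress states (both are assumptions, not confirmed by this reference):

```python
import time

def wait_for_training(get_status, training_uid, poll_seconds=30,
                      timeout=3600, sleep=time.sleep):
    """Poll get_status(training_uid) until the run leaves the assumed
    in-progress states, returning the final state or raising on timeout."""
    waited = 0
    while waited <= timeout:
        state = get_status(training_uid).get("state")
        if state not in ("queued", "pending", "running"):
            return state
        sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError(f"training {training_uid} still running after {timeout}s")

# Usage with a stubbed status source (a real caller would pass
# client.training.get_status and omit the sleep override):
states = iter(["queued", "running", "completed"])
final = wait_for_training(lambda uid: {"state": next(states)},
                          "training-id", sleep=lambda s: None)
```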
static get_uid(training_details)[source]#

This method is deprecated, please use get_id() instead.

list(limit=None, asynchronous=False, get_all=False, return_as_df=True)[source]#

Print stored trainings in a table format. If limit is set to None, only the first 50 records are shown.

Parameters:
  • limit (int, optional) – limit number of fetched records at once

  • asynchronous (bool, optional) – if True, it will work as a generator

  • get_all (bool, optional) – if True, it will get all entries in ‘limited’ chunks

  • return_as_df (bool, optional) – determine whether the table should be returned as a pandas.DataFrame object when asynchronous is False, default: True

Examples

client.training.list()
training_runs_df = client.training.list(limit=100)
training_runs_df = client.training.list(limit=100, get_all=True)
training_runs_df = []
for entry in client.training.list(limit=100, asynchronous=True, get_all=True):
    training_runs_df.extend(entry)
list_intermediate_models(training_uid)[source]#

Print the intermediate_models in a table format.

Parameters:

training_uid (str) – training ID

Note

This method is not supported for IBM Cloud Pak® for Data.

Example

client.training.list_intermediate_models(training_uid)
list_subtrainings(training_uid)[source]#

Print the sub-trainings in a table format.

Parameters:

training_uid (str) – training ID

Example

client.training.list_subtrainings(training_uid)
monitor_logs(training_uid)[source]#

Print the logs of a training created.

Parameters:

training_uid (str) – training UID

Note

This method is not supported for IBM Cloud Pak® for Data.

Example

client.training.monitor_logs(training_uid)
monitor_metrics(training_uid)[source]#

Print the metrics of a training created.

Parameters:

training_uid (str) – training UID

Note

This method is not supported for IBM Cloud Pak® for Data.

Example

client.training.monitor_metrics(training_uid)
run(meta_props, asynchronous=True)[source]#

Create a new Machine Learning training.

Parameters:
  • meta_props (dict) –

    metadata of the training configuration. To see available meta names use:

    client.training.ConfigurationMetaNames.show()
    

  • asynchronous (bool, optional) –

    • True - training job is submitted and progress can be checked later

    • False - method will wait till job completion and print training stats

Returns:

metadata of the training created

Return type:

dict

Note

You can provide one of the following values for training:
  • client.training.ConfigurationMetaNames.EXPERIMENT

  • client.training.ConfigurationMetaNames.PIPELINE

  • client.training.ConfigurationMetaNames.MODEL_DEFINITION

Examples

Example meta_props for Training run creation in IBM Cloud Pak® for Data version 3.0.1 or above:

metadata = {
    client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
    client.training.ConfigurationMetaNames.DESCRIPTION: 'Hand-written Digit Recognition Training',
    client.training.ConfigurationMetaNames.PIPELINE: {
        "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
        "rev": "12",
        "model_type": "string",
        "data_bindings": [
            {
                "data_reference_name": "string",
                "node_id": "string"
            }
        ],
        "nodes_parameters": [
            {
                "node_id": "string",
                "parameters": {}
            }
        ],
        "hardware_spec": {
            "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
            "rev": "12",
            "name": "string",
            "num_nodes": "2"
        }
    },
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
        'type': 's3',
        'connection': {},
        'location': {'href': 'v2/assets/asset1233456'},
        'schema': { 'id': 't1', 'name': 'Tasks', 'fields': [ { 'name': 'duration', 'type': 'number' } ]}
    }],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'id' : 'string',
        'connection': {
            'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
            'access_key_id': '***',
            'secret_access_key': '***'
        },
        'location': {
            'bucket': 'wml-dev-results',
            'path' : "path"
        },
        'type': 's3'
    }
}

Example meta_props values for training run creation in other versions:

metadata = {
    client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
        'connection': {
            'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
            'access_key_id': '***',
            'secret_access_key': '***'
        },
        'source': {
            'bucket': 'wml-dev',
        },
        'type': 's3'
    }],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'connection': {
            'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
            'access_key_id': '***',
            'secret_access_key': '***'
        },
        'target': {
            'bucket': 'wml-dev-results',
        },
        'type': 's3'
    },
    client.training.ConfigurationMetaNames.PIPELINE_UID : "/v4/pipelines/<PIPELINE-ID>"
}
training_details = client.training.run(definition_uid, meta_props=metadata)
training_uid = client.training.get_id(training_details)

Example of a Federated Learning training job:

aggregator_metadata = {
    wml_client.training.ConfigurationMetaNames.NAME: 'Federated_Learning_Tensorflow_MNIST',
    wml_client.training.ConfigurationMetaNames.DESCRIPTION: 'MNIST digit recognition with Federated Learning using Tensorflow',
    wml_client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [],
    wml_client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'type': results_type,
        'name': 'outputData',
        'connection': {},
        'location': { 'path': '/projects/' + PROJECT_ID + '/assets/trainings/'}
    },
    wml_client.training.ConfigurationMetaNames.FEDERATED_LEARNING: {
        'model': {
            'type': 'tensorflow',
            'spec': {
                'id': untrained_model_id
            },
            'model_file': untrained_model_name
        },
        'fusion_type': 'iter_avg',
        'metrics': 'accuracy',
        'epochs': 3,
        'rounds': 10,
        'remote_training': {
            'quorum': 1.0,
            'max_timeout': 3600,
            'remote_training_systems': [ { 'id': prime_rts_id }, { 'id': nonprime_rts_id } ]
        },
        'hardware_spec': {
            'name': 'S'
        },
        'software_spec': {
            'name': 'runtime-22.1-py3.9'
        }
    }
}

aggregator = wml_client.training.run(aggregator_metadata, asynchronous=True)
aggregator_id = wml_client.training.get_id(aggregator)
class metanames.TrainingConfigurationMetaNames[source]#

Set of MetaNames for trainings.

Available MetaNames:

• TRAINING_DATA_REFERENCES (list, required) – schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]; example: [{'connection': {'href': '/v2/connections/2d07a6b4-8fa9-43ab-91c8-befcd9dab8d2?space_id=440ada9b-af87-4da8-a9fa-a5450825e260'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 'connection_asset', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]

• TRAINING_RESULTS_REFERENCE (dict, required) – schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}; example: {'connection': {'href': '/v2/connections/2d07a6b4-8fa9-43ab-91c8-befcd9dab8d2?space_id=440ada9b-af87-4da8-a9fa-a5450825e260'}, 'location': {'bucket': 'test-results', 'path': 'training_path'}, 'type': 'connection_asset'}

• TEST_DATA_REFERENCES (list, optional) – schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'href(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]; example: [{'connection': {'href': '/v2/connections/2d07a6b4-8fa9-43ab-91c8-befcd9dab8d2?space_id=440ada9b-af87-4da8-a9fa-a5450825e260'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 'connection_asset', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]

• TAGS (list, optional) – schema: [{'value(required)': 'string', 'description(optional)': 'string'}]; example: [{'value': 'string', 'description': 'string'}]

• PIPELINE_UID (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• EXPERIMENT_UID (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• PIPELINE_DATA_BINDINGS (str, optional) – schema: [{'data_reference_name(required)': 'string', 'node_id(required)': 'string'}]; example: [{'data_reference_name': 'string', 'node_id': 'string'}]

• PIPELINE_NODE_PARAMETERS (dict, optional) – schema: [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]; example: [{'node_id': 'string', 'parameters': {}}]

• SPACE_UID (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• TRAINING_LIB (dict, optional) – schema: {'href(required)': 'string', 'type(required)': 'string', 'runtime(optional)': {'href': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}; example: {'href': '/v4/libraries/3c1ce536-20dc-426e-aac7-7284cf3befc6', 'compute': {'name': 'k80', 'nodes': 0}, 'runtime': {'href': '/v4/runtimes/3c1ce536-20dc-426e-aac7-7284cf3befc6'}, 'command': 'python3 convolutional_network.py', 'parameters': {}}

• TRAINING_LIB_UID (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• TRAINING_LIB_MODEL_TYPE (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• TRAINING_LIB_RUNTIME_UID (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• TRAINING_LIB_PARAMETERS (dict, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• COMMAND (str, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• COMPUTE (dict, optional) – example: 3c1ce536-20dc-426e-aac7-7284cf3befc6

• PIPELINE_MODEL_TYPE (str, optional) – example: tensorflow_1.1.3-py3
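A helper like the following can assemble a TRAINING_DATA_REFERENCES entry covering the required fields from the schema above. The function itself is hypothetical, shown only to illustrate the payload shape:

```python
def make_data_reference(connection_href, bucket, path, name=None):
    """Assemble one TRAINING_DATA_REFERENCES entry.

    Fills only the required fields (type, connection, location); the
    optional schema/fields block is omitted for brevity.
    """
    ref = {
        "type": "connection_asset",
        "connection": {"href": connection_href},
        "location": {"bucket": bucket, "path": path},
    }
    if name is not None:
        ref["name"] = name
    return ref

# Illustrative connection href and bucket names:
ref = make_data_reference("/v2/connections/2d07a6b4?space_id=440ada9b",
                          "train-data", "training_path")
```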

Enums#

class ibm_watson_machine_learning.utils.autoai.enums.ClassificationAlgorithms(value)[source]#

Bases: Enum

Classification algorithms that AutoAI could use for IBM Cloud.

DT = 'DecisionTreeClassifier'#
EX_TREES = 'ExtraTreesClassifier'#
GB = 'GradientBoostingClassifier'#
LGBM = 'LGBMClassifier'#
LR = 'LogisticRegression'#
RF = 'RandomForestClassifier'#
SnapBM = 'SnapBoostingMachineClassifier'#
SnapDT = 'SnapDecisionTreeClassifier'#
SnapLR = 'SnapLogisticRegression'#
SnapRF = 'SnapRandomForestClassifier'#
SnapSVM = 'SnapSVMClassifier'#
XGB = 'XGBClassifier'#
class ibm_watson_machine_learning.utils.autoai.enums.ClassificationAlgorithmsCP4D(value)[source]#

Bases: Enum

Classification algorithms that AutoAI could use for IBM Cloud Pak® for Data (CP4D). The SnapML estimators (SnapDT, SnapRF, SnapSVM, SnapLR) are supported on IBM Cloud Pak® for Data version 4.0.2 and above.

DT = 'DecisionTreeClassifierEstimator'#
EX_TREES = 'ExtraTreesClassifierEstimator'#
GB = 'GradientBoostingClassifierEstimator'#
LGBM = 'LGBMClassifierEstimator'#
LR = 'LogisticRegressionEstimator'#
RF = 'RandomForestClassifierEstimator'#
SnapBM = 'SnapBoostingMachineClassifier'#
SnapDT = 'SnapDecisionTreeClassifier'#
SnapLR = 'SnapLogisticRegression'#
SnapRF = 'SnapRandomForestClassifier'#
SnapSVM = 'SnapSVMClassifier'#
XGB = 'XGBClassifierEstimator'#
class ibm_watson_machine_learning.utils.autoai.enums.DataConnectionTypes[source]#

Bases: object

Supported types of DataConnection.

CA = 'connection_asset'#
CN = 'container'#
DS = 'data_asset'#
FS = 'fs'#
S3 = 's3'#
class ibm_watson_machine_learning.utils.autoai.enums.Directions[source]#

Bases: object

Possible metrics directions

ASCENDING = 'ascending'#
DESCENDING = 'descending'#
class ibm_watson_machine_learning.utils.autoai.enums.ForecastingAlgorithms(value)[source]#

Bases: Enum

Forecasting algorithms that AutoAI could use for IBM Cloud.

ARIMA = 'ARIMA'#
BATS = 'BATS'#
ENSEMBLER = 'Ensembler'#
HW = 'HoltWinters'#
LR = 'LinearRegression'#
RF = 'RandomForest'#
SVM = 'SVM'#
class ibm_watson_machine_learning.utils.autoai.enums.ForecastingAlgorithmsCP4D(value)[source]#

Bases: Enum

Forecasting algorithms that AutoAI could use for IBM Cloud Pak® for Data (CP4D).

ARIMA = 'ARIMA'#
BATS = 'BATS'#
ENSEMBLER = 'Ensembler'#
HW = 'HoltWinters'#
LR = 'LinearRegression'#
RF = 'RandomForest'#
SVM = 'SVM'#
class ibm_watson_machine_learning.utils.autoai.enums.ForecastingPipelineTypes(value)[source]#

Bases: Enum

Forecasting pipeline types that AutoAI could use for IBM Cloud Pak® for Data (CP4D).

ARIMA = 'ARIMA'#
ARIMAX = 'ARIMAX'#
ARIMAX_DMLR = 'ARIMAX_DMLR'#
ARIMAX_PALR = 'ARIMAX_PALR'#
ARIMAX_RAR = 'ARIMAX_RAR'#
ARIMAX_RSAR = 'ARIMAX_RSAR'#
Bats = 'Bats'#
DifferenceFlattenEnsembler = 'DifferenceFlattenEnsembler'#
ExogenousDifferenceFlattenEnsembler = 'ExogenousDifferenceFlattenEnsembler'#
ExogenousFlattenEnsembler = 'ExogenousFlattenEnsembler'#
ExogenousLocalizedFlattenEnsembler = 'ExogenousLocalizedFlattenEnsembler'#
ExogenousMT2RForecaster = 'ExogenousMT2RForecaster'#
ExogenousRandomForestRegressor = 'ExogenousRandomForestRegressor'#
ExogenousSVM = 'ExogenousSVM'#
FlattenEnsembler = 'FlattenEnsembler'#
HoltWinterAdditive = 'HoltWinterAdditive'#
HoltWinterMultiplicative = 'HoltWinterMultiplicative'#
LocalizedFlattenEnsembler = 'LocalizedFlattenEnsembler'#
MT2RForecaster = 'MT2RForecaster'#
RandomForestRegressor = 'RandomForestRegressor'#
SVM = 'SVM'#
static get_exogenous()[source]#

Get list of pipelines that use supporting features (exogenous pipelines).

Returns:

list of pipelines using supporting features

Return type:

list[ForecastingPipelineTypes]

static get_non_exogenous()[source]#

Get list of pipelines not using supporting features (non-exogenous pipelines).

Returns:

list of pipelines that do not use supporting features

Return type:

list[ForecastingPipelineTypes]
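A plausible sketch of how such a partition can be computed, assuming (as the member names above suggest) that the exogenous pipelines are the `Exogenous*` variants plus the ARIMAX family; the real static methods may instead use an explicit list:

```python
from enum import Enum

# Illustrative subset of ForecastingPipelineTypes; not the library class itself.
class ForecastingPipelineTypes(Enum):
    ARIMA = 'ARIMA'
    ARIMAX = 'ARIMAX'
    SVM = 'SVM'
    ExogenousSVM = 'ExogenousSVM'
    MT2RForecaster = 'MT2RForecaster'
    ExogenousMT2RForecaster = 'ExogenousMT2RForecaster'

    @staticmethod
    def get_exogenous():
        """Pipelines that use supporting (exogenous) features."""
        return [p for p in ForecastingPipelineTypes
                if p.value.startswith(('Exogenous', 'ARIMAX'))]

    @staticmethod
    def get_non_exogenous():
        """Pipelines that do not use supporting features."""
        exogenous = set(ForecastingPipelineTypes.get_exogenous())
        return [p for p in ForecastingPipelineTypes if p not in exogenous]
```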

class ibm_watson_machine_learning.utils.autoai.enums.ImputationStrategy(value)[source]#

Bases: Enum

Missing values imputation strategies.

BEST_OF_DEFAULT_IMPUTERS = 'best_of_default_imputers'#
CUBIC = 'cubic'#
FLATTEN_ITERATIVE = 'flatten_iterative'#
LINEAR = 'linear'#
MEAN = 'mean'#
MEDIAN = 'median'#
MOST_FREQUENT = 'most_frequent'#
NEXT = 'next'#
NO_IMPUTATION = 'no_imputation'#
PREVIOUS = 'previous'#
VALUE = 'value'#
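The simple statistical strategies above (`mean`, `median`, `most_frequent`, `value`) behave like their scikit-learn `SimpleImputer` counterparts. A self-contained sketch over a plain list, with `None` marking a missing value (the series strategies such as `previous`, `next`, and `linear` need ordered data and are omitted):

```python
import statistics

def impute(values, strategy, fill_value=None):
    """Replace None entries in `values` per a simple imputation strategy.

    Sketch of the mean / median / most_frequent / value strategies only.
    """
    observed = [v for v in values if v is not None]
    if strategy == 'mean':
        fill = statistics.fmean(observed)
    elif strategy == 'median':
        fill = statistics.median(observed)
    elif strategy == 'most_frequent':
        fill = statistics.mode(observed)
    elif strategy == 'value':
        fill = fill_value
    else:
        raise ValueError(f'unsupported strategy: {strategy}')
    return [fill if v is None else v for v in values]
```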
class ibm_watson_machine_learning.utils.autoai.enums.Metrics[source]#

Bases: object

Supported types of classification and regression metrics in AutoAI.

ACCURACY_AND_DISPARATE_IMPACT_SCORE = 'accuracy_and_disparate_impact'#
ACCURACY_SCORE = 'accuracy'#
AVERAGE_PRECISION_SCORE = 'average_precision'#
EXPLAINED_VARIANCE_SCORE = 'explained_variance'#
F1_SCORE = 'f1'#
F1_SCORE_MACRO = 'f1_macro'#
F1_SCORE_MICRO = 'f1_micro'#
F1_SCORE_WEIGHTED = 'f1_weighted'#
LOG_LOSS = 'neg_log_loss'#
MEAN_ABSOLUTE_ERROR = 'neg_mean_absolute_error'#
MEAN_SQUARED_ERROR = 'neg_mean_squared_error'#
MEAN_SQUARED_LOG_ERROR = 'neg_mean_squared_log_error'#
MEDIAN_ABSOLUTE_ERROR = 'neg_median_absolute_error'#
PRECISION_SCORE = 'precision'#
PRECISION_SCORE_MACRO = 'precision_macro'#
PRECISION_SCORE_MICRO = 'precision_micro'#
PRECISION_SCORE_WEIGHTED = 'precision_weighted'#
R2_AND_DISPARATE_IMPACT_SCORE = 'r2_and_disparate_impact'#
R2_SCORE = 'r2'#
RECALL_SCORE = 'recall'#
RECALL_SCORE_MACRO = 'recall_macro'#
RECALL_SCORE_MICRO = 'recall_micro'#
RECALL_SCORE_WEIGHTED = 'recall_weighted'#
ROC_AUC_SCORE = 'roc_auc'#
ROOT_MEAN_SQUARED_ERROR = 'neg_root_mean_squared_error'#
ROOT_MEAN_SQUARED_LOG_ERROR = 'neg_root_mean_squared_log_error'#
class ibm_watson_machine_learning.utils.autoai.enums.MetricsToDirections(value)[source]#

Bases: Enum

Map of metrics directions.

ACCURACY = 'ascending'#
AVERAGE_PRECISION = 'ascending'#
EXPLAINED_VARIANCE = 'ascending'#
F1 = 'ascending'#
F1_MACRO = 'ascending'#
F1_MICRO = 'ascending'#
F1_WEIGHTED = 'ascending'#
NEG_LOG_LOSS = 'descending'#
NEG_MEAN_ABSOLUTE_ERROR = 'descending'#
NEG_MEAN_SQUARED_ERROR = 'descending'#
NEG_MEAN_SQUARED_LOG_ERROR = 'descending'#
NEG_MEDIAN_ABSOLUTE_ERROR = 'descending'#
NEG_ROOT_MEAN_SQUARED_ERROR = 'descending'#
NEG_ROOT_MEAN_SQUARED_LOG_ERROR = 'descending'#
NORMALIZED_GINI_COEFFICIENT = 'ascending'#
PRECISION = 'ascending'#
PRECISION_MACRO = 'ascending'#
PRECISION_MICRO = 'ascending'#
PRECISION_WEIGHTED = 'ascending'#
R2 = 'ascending'#
RECALL = 'ascending'#
RECALL_MACRO = 'ascending'#
RECALL_MICRO = 'ascending'#
RECALL_WEIGHTED = 'ascending'#
ROC_AUC = 'ascending'#
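This map tells an optimizer which way each metric should move. The sketch below, built on a small subset of the documented entries, decides whether a new score improves on an old one:

```python
# Subset of the documented metric-to-direction map above
METRIC_DIRECTIONS = {
    'accuracy': 'ascending',
    'roc_auc': 'ascending',
    'neg_log_loss': 'descending',
    'neg_mean_squared_error': 'descending',
}

def is_improvement(metric, old, new):
    """True if `new` is a better score than `old` for the given metric."""
    direction = METRIC_DIRECTIONS[metric]
    return new > old if direction == 'ascending' else new < old
```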
class ibm_watson_machine_learning.utils.autoai.enums.PipelineTypes[source]#

Bases: object

Supported types of Pipelines.

LALE = 'lale'#
SKLEARN = 'sklearn'#
class ibm_watson_machine_learning.utils.autoai.enums.PositiveLabelClass[source]#

Bases: object

Metrics that need positive label definition for binary classification.

AVERAGE_PRECISION_SCORE = 'average_precision'#
F1_SCORE = 'f1'#
F1_SCORE_MACRO = 'f1_macro'#
F1_SCORE_MICRO = 'f1_micro'#
F1_SCORE_WEIGHTED = 'f1_weighted'#
PRECISION_SCORE = 'precision'#
PRECISION_SCORE_MACRO = 'precision_macro'#
PRECISION_SCORE_MICRO = 'precision_micro'#
PRECISION_SCORE_WEIGHTED = 'precision_weighted'#
RECALL_SCORE = 'recall'#
RECALL_SCORE_MACRO = 'recall_macro'#
RECALL_SCORE_MICRO = 'recall_micro'#
RECALL_SCORE_WEIGHTED = 'recall_weighted'#
class ibm_watson_machine_learning.utils.autoai.enums.PredictionType[source]#

Bases: object

Supported types of learning.

BINARY = 'binary'#
CLASSIFICATION = 'classification'#
FORECASTING = 'forecasting'#
MULTICLASS = 'multiclass'#
REGRESSION = 'regression'#
TIMESERIES_ANOMALY_PREDICTION = 'timeseries_anomaly_prediction'#
class ibm_watson_machine_learning.utils.autoai.enums.RegressionAlgorithms(value)[source]#

Bases: Enum

Regression algorithms that AutoAI could use for IBM Cloud.

DT = 'DecisionTreeRegressor'#
EX_TREES = 'ExtraTreesRegressor'#
GB = 'GradientBoostingRegressor'#
LGBM = 'LGBMRegressor'#
LR = 'LinearRegression'#
RF = 'RandomForestRegressor'#
RIDGE = 'Ridge'#
SnapBM = 'SnapBoostingMachineRegressor'#
SnapDT = 'SnapDecisionTreeRegressor'#
SnapRF = 'SnapRandomForestRegressor'#
XGB = 'XGBRegressor'#
class ibm_watson_machine_learning.utils.autoai.enums.RegressionAlgorithmsCP4D(value)[source]#

Bases: Enum

Regression algorithms that AutoAI could use for IBM Cloud Pak® for Data (CP4D). The SnapML estimators (SnapDT, SnapRF, SnapBM) are supported on IBM Cloud Pak® for Data version 4.0.2 and above.

DT = 'DecisionTreeRegressorEstimator'#
EX_TREES = 'ExtraTreesRegressorEstimator'#
GB = 'GradientBoostingRegressorEstimator'#
LGBM = 'LGBMRegressorEstimator'#
LR = 'LinearRegressionEstimator'#
RF = 'RandomForestRegressorEstimator'#
RIDGE = 'RidgeEstimator'#
SnapBM = 'SnapBoostingMachineRegressor'#
SnapDT = 'SnapDecisionTreeRegressor'#
SnapRF = 'SnapRandomForestRegressor'#
XGB = 'XGBRegressorEstimator'#
class ibm_watson_machine_learning.utils.autoai.enums.RunStateTypes[source]#

Bases: object

Supported types of AutoAI fit/run.

COMPLETED = 'completed'#
FAILED = 'failed'#
class ibm_watson_machine_learning.utils.autoai.enums.SamplingTypes[source]#

Bases: object

Types of training data sampling.

FIRST_VALUES = 'first_n_records'#
LAST_VALUES = 'truncate'#
RANDOM = 'random'#
STRATIFIED = 'stratified'#
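A sketch of what these strategies mean for a list of records, assuming `first_n_records` keeps the head, `truncate` keeps the last values, and `random` draws a uniform sample (stratified sampling additionally preserves class proportions and is omitted here):

```python
import random

def sample(records, n, sampling_type, seed=None):
    """Down-sample `records` to at most n items per the named strategy."""
    if sampling_type == 'first_n_records':
        return records[:n]
    if sampling_type == 'truncate':          # keep the last values
        return records[-n:]
    if sampling_type == 'random':
        return random.Random(seed).sample(records, min(n, len(records)))
    raise ValueError(f'unsupported sampling type: {sampling_type}')
```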
class ibm_watson_machine_learning.utils.autoai.enums.TShirtSize[source]#

Bases: object

Possible sizes of the AutoAI POD. Depending on the POD size, AutoAI can support different data set sizes.

  • S - small (2 vCPUs and 8 GB of RAM)

  • M - medium (4 vCPUs and 16 GB of RAM)

  • L - large (8 vCPUs and 32 GB of RAM)

  • XL - extra large (16 vCPUs and 64 GB of RAM)

L = 'l'#
M = 'm'#
S = 's'#
XL = 'xl'#
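The size-to-resource mapping above can be captured as a small lookup table, for example to validate a requested size before submitting a run (a convenience sketch, not part of the library):

```python
# (vCPUs, RAM in GB) per documented T-shirt size
TSHIRT_RESOURCES = {
    's': (2, 8),
    'm': (4, 16),
    'l': (8, 32),
    'xl': (16, 64),
}

def resources_for(size):
    """Return (vcpus, ram_gb) for a T-shirt size string such as 's' or 'XL'."""
    try:
        return TSHIRT_RESOURCES[size.lower()]
    except KeyError:
        raise ValueError(f'unknown T-shirt size: {size!r}') from None
```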
class ibm_watson_machine_learning.utils.autoai.enums.TimeseriesAnomalyPredictionAlgorithms(value)[source]#

Bases: Enum

Timeseries Anomaly Prediction algorithms that AutoAI could use for IBM Cloud.

Forecasting = 'Forecasting'#
Relationship = 'Relationship'#
Window = 'Window'#
class ibm_watson_machine_learning.utils.autoai.enums.TimeseriesAnomalyPredictionPipelineTypes(value)[source]#

Bases: Enum

Timeseries Anomaly Prediction pipeline types that AutoAI could use for IBM Cloud.

PointwiseBoundedBATS = 'PointwiseBoundedBATS'#
PointwiseBoundedBATSForceUpdate = 'PointwiseBoundedBATSForceUpdate'#
PointwiseBoundedHoltWintersAdditive = 'PointwiseBoundedHoltWintersAdditive'#
WindowLOF = 'WindowLOF'#
WindowNN = 'WindowNN'#
WindowPCA = 'WindowPCA'#
class ibm_watson_machine_learning.utils.autoai.enums.Transformers[source]#

Bases: object

Supported types of cognito transformer names in AutoAI.

ABS = 'abs'#
CBRT = 'cbrt'#
COS = 'cos'#
CUBE = 'cube'#
DIFF = 'diff'#
DIVIDE = 'divide'#
FEATUREAGGLOMERATION = 'featureagglomeration'#
ISOFORESTANOMALY = 'isoforestanomaly'#
LOG = 'log'#
MAX = 'max'#
MINMAXSCALER = 'minmaxscaler'#
NXOR = 'nxor'#
PCA = 'pca'#
PRODUCT = 'product'#
ROUND = 'round'#
SIGMOID = 'sigmoid'#
SIN = 'sin'#
SQRT = 'sqrt'#
SQUARE = 'square'#
STDSCALER = 'stdscaler'#
SUM = 'sum'#
TAN = 'tan'#
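Most of these names denote elementwise math transforms that AutoAI feature engineering can apply to numeric columns. A sketch mapping a few of the documented names to plain math functions, assuming the obvious semantics for each name:

```python
import math

# Illustrative mapping for a few documented transformer names
TRANSFORMS = {
    'abs': abs,
    'cbrt': lambda x: math.copysign(abs(x) ** (1 / 3), x),  # sign-safe cube root
    'cos': math.cos,
    'cube': lambda x: x ** 3,
    'log': math.log,
    'round': round,
    'sqrt': math.sqrt,
    'square': lambda x: x ** 2,
}

def apply_transform(name, value):
    """Apply a named elementwise transform to a numeric value."""
    return TRANSFORMS[name](value)
```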
class ibm_watson_machine_learning.utils.autoai.enums.VisualizationTypes[source]#

Bases: object

Types of visualization options.

INPLACE = 'inplace'#
PDF = 'pdf'#