Deployment Modules for AutoAI models#

Web Service#

For usage instructions, see Web Service.

class ibm_watson_machine_learning.deployment.WebService(source_wml_credentials=None, source_project_id=None, source_space_id=None, target_wml_credentials=None, target_project_id=None, target_space_id=None, wml_credentials=None, project_id=None, space_id=None)[source]#

Bases: BaseDeployment

An online deployment class, also known as WebService. With this class object, you can manage any online (WebService) deployment.

Parameters:
  • source_wml_credentials (dict) – credentials of the Watson Machine Learning instance where the training was performed

  • source_project_id (str, optional) – ID of the Watson Studio project where training was performed

  • source_space_id (str, optional) – ID of the Watson Studio Space where training was performed

  • target_wml_credentials (dict) – credentials of the Watson Machine Learning instance where you want to deploy

  • target_project_id (str, optional) – ID of the Watson Studio project where you want to deploy

  • target_space_id (str, optional) – ID of the Watson Studio Space where you want to deploy

create(model, deployment_name, serving_name=None, metadata=None, training_data=None, training_target=None, experiment_run_id=None, hardware_spec=None)[source]#

Create deployment from a model.

Parameters:
  • model (str) – AutoAI model name

  • deployment_name (str) – name of the deployment

  • training_data (pandas.DataFrame or numpy.ndarray, optional) – training data for the model

  • training_target (pandas.DataFrame or numpy.ndarray, optional) – target/label data for the model

  • serving_name (str, optional) – serving name of the deployment

  • metadata (dict, optional) – model meta properties

  • experiment_run_id (str, optional) – ID of a training/experiment (only applicable for AutoAI deployments)

  • hardware_spec (dict, optional) – hardware specification for deployment

Example

from ibm_watson_machine_learning.deployment import WebService

deployment = WebService(
       wml_credentials={
             "apikey": "...",
             "iam_apikey_description": "...",
             "iam_apikey_name": "...",
             "iam_role_crn": "...",
             "iam_serviceid_crn": "...",
             "instance_id": "...",
             "url": "https://us-south.ml.cloud.ibm.com"
           },
        project_id="...",
        space_id="...")

deployment.create(
       experiment_run_id="...",
       model=model,
       deployment_name='My new deployment',
       serving_name='my_new_deployment'
   )
delete(deployment_id=None)[source]#

Delete deployment on WML.

Parameters:

deployment_id (str, optional) – ID of the deployment to delete; if empty, the current deployment will be deleted

Example

deployment = WebService(workspace=...)
# Delete current deployment
deployment.delete()
# Or delete a specific deployment
deployment.delete(deployment_id='...')
get(deployment_id)[source]#

Get WML deployment.

Parameters:

deployment_id (str) – ID of the deployment to work with

Example

deployment = WebService(workspace=...)
deployment.get(deployment_id="...")
get_params()[source]#

Get deployment parameters.
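
Example

A minimal sketch (not part of the original reference); assumes an existing online deployment.

deployment = WebService(workspace=...)
deployment.get(deployment_id="...")
params = deployment.get_params()
print(params)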

list(limit=None)[source]#

List WML deployments.

Parameters:

limit (int, optional) – limit of the number of deployments to list; default is None (all deployments are fetched)

Returns:

Pandas DataFrame with information about deployments

Return type:

pandas.DataFrame

Example

deployment = WebService(workspace=...)
deployments_list = deployment.list()
print(deployments_list)

# Result:
#                  created_at  ...  status
# 0  2020-03-06T10:50:49.401Z  ...   ready
# 1  2020-03-06T13:16:09.789Z  ...   ready
# 4  2020-03-11T14:46:36.035Z  ...  failed
# 3  2020-03-11T14:49:55.052Z  ...  failed
# 2  2020-03-11T15:13:53.708Z  ...   ready
score(payload=pandas.DataFrame(), transaction_id=None)[source]#

Online scoring on WML. The payload is passed to the WML scoring endpoint where the model has been deployed.

Parameters:
  • payload (pandas.DataFrame or dict) – DataFrame with data to test the model, or a dictionary with keys observations and supporting_features whose values are DataFrames with the corresponding data (used to score forecasting models)

  • transaction_id (str, optional) – can be used to indicate under which ID the records will be saved in the payload table in IBM OpenScale

Returns:

dictionary with a list of model outputs/predicted targets

Return type:

dict

Examples

predictions = web_service.score(payload=test_data)
print(predictions)

# Result:
# {'predictions':
#     [{
#         'fields': ['prediction', 'probability'],
#         'values': [['no', [0.9221385608558003, 0.07786143914419975]],
#                    ['no', [0.9798324002736079, 0.020167599726392187]]]
#     }]}

predictions = web_service.score(payload={'observations': new_observations_df})
predictions = web_service.score(payload={'observations': new_observations_df, 'supporting_features': supporting_features_df}) # supporting features in a time series forecasting scenario

Batch#

For usage instructions, see Batch.

class ibm_watson_machine_learning.deployment.Batch(source_wml_credentials=None, source_project_id=None, source_space_id=None, target_wml_credentials=None, target_project_id=None, target_space_id=None, wml_credentials=None, project_id=None, space_id=None)[source]#

Bases: BaseDeployment

The Batch deployment class. With this class object, you can manage any batch deployment.

Parameters:
  • source_wml_credentials (dict) – credentials of the Watson Machine Learning instance where the training was performed

  • source_project_id (str, optional) – ID of the Watson Studio project where training was performed

  • source_space_id (str, optional) – ID of the Watson Studio Space where training was performed

  • target_wml_credentials (dict) – credentials of the Watson Machine Learning instance where you want to deploy

  • target_project_id (str, optional) – ID of the Watson Studio project where you want to deploy

  • target_space_id (str, optional) – ID of the Watson Studio Space where you want to deploy

create(model, deployment_name, metadata=None, training_data=None, training_target=None, experiment_run_id=None)[source]#

Create deployment from a model.

Parameters:
  • model (str) – AutoAI model name

  • deployment_name (str) – name of the deployment

  • training_data (pandas.DataFrame or numpy.ndarray, optional) – training data for the model

  • training_target (pandas.DataFrame or numpy.ndarray, optional) – target/label data for the model

  • metadata (dict, optional) – model meta properties

  • experiment_run_id (str, optional) – ID of a training/experiment (only applicable for AutoAI deployments)

Example

from ibm_watson_machine_learning.deployment import Batch

deployment = Batch(
       wml_credentials={
             "apikey": "...",
             "iam_apikey_description": "...",
             "iam_apikey_name": "...",
             "iam_role_crn": "...",
             "iam_serviceid_crn": "...",
             "instance_id": "...",
             "url": "https://us-south.ml.cloud.ibm.com"
           },
        project_id="...",
        space_id="...")

deployment.create(
       experiment_run_id="...",
       model=model,
       deployment_name='My new deployment'
   )
delete(deployment_id=None)[source]#

Delete deployment on WML.

Parameters:

deployment_id (str, optional) – ID of the deployment to delete; if empty, the current deployment will be deleted

Example

deployment = Batch(workspace=...)
# Delete current deployment
deployment.delete()
# Or delete a specific deployment
deployment.delete(deployment_id='...')
get(deployment_id)[source]#

Get WML deployment.

Parameters:

deployment_id (str) – ID of the deployment to work with

Example

deployment = Batch(workspace=...)
deployment.get(deployment_id="...")
get_job_id(batch_scoring_details)[source]#

Get the job ID from batch scoring details.
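
Example

A minimal sketch; assumes scoring_details is the dictionary returned by run_job().

scoring_details = deployment.run_job(payload=test_data, background_mode=True)
scoring_job_id = deployment.get_job_id(scoring_details)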

get_job_params(scoring_job_id=None)[source]#

Get batch deployment job parameters.

Parameters:

scoring_job_id (str) – ID of the scoring job

Returns:

parameters of the scoring job

Return type:

dict
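
Example

A minimal sketch; assumes scoring_job_id comes from an earlier run_job() call.

params = deployment.get_job_params(scoring_job_id=scoring_job_id)
print(params)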

get_job_result(scoring_job_id)[source]#

Get batch deployment results of the job with ID scoring_job_id.

Parameters:

scoring_job_id (str) – ID of the scoring job whose results will be returned

Returns:

result

Return type:

pandas.DataFrame

Raises:

MissingScoringResults – raised when the scoring job is incomplete or has failed
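
Example

A minimal sketch; assumes the job identified by scoring_job_id has completed
(otherwise MissingScoringResults is raised).

results_df = deployment.get_job_result(scoring_job_id=scoring_job_id)
print(results_df)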

get_job_status(scoring_job_id)[source]#

Get status of scoring job.

Parameters:

scoring_job_id (str) – ID of the scoring job

Returns:

dictionary with the state of the scoring job (one of: completed, failed, starting, queued) and additional details if they exist

Return type:

dict
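
Example

A minimal sketch; assumes scoring_job_id comes from an earlier run_job() call.

status = deployment.get_job_status(scoring_job_id)
print(status)
# dictionary with the job state, e.g. one of: completed, failed, starting, queued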

get_params()[source]#

Get deployment parameters.
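
Example

A minimal sketch (not part of the original reference); assumes an existing batch deployment.

deployment = Batch(workspace=...)
deployment.get(deployment_id="...")
params = deployment.get_params()
print(params)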

list(limit=None)[source]#

List WML deployments.

Parameters:

limit (int, optional) – limit of the number of deployments to list; default is None (all deployments are fetched)

Returns:

Pandas DataFrame with information about deployments

Return type:

pandas.DataFrame

Example

deployment = Batch(workspace=...)
deployments_list = deployment.list()
print(deployments_list)

# Result:
#                  created_at  ...  status
# 0  2020-03-06T10:50:49.401Z  ...   ready
# 1  2020-03-06T13:16:09.789Z  ...   ready
# 4  2020-03-11T14:46:36.035Z  ...  failed
# 3  2020-03-11T14:49:55.052Z  ...  failed
# 2  2020-03-11T15:13:53.708Z  ...   ready
list_jobs()[source]#

Return a pandas DataFrame with the list of deployment jobs.
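
Example

A minimal sketch; assumes deployment is an existing Batch instance.

jobs_df = deployment.list_jobs()
print(jobs_df)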

rerun_job(scoring_job_id, background_mode=True)[source]#

Rerun a scoring job with the same parameters as the job identified by scoring_job_id.

Parameters:
  • scoring_job_id (str) – ID of the scoring job to rerun

  • background_mode (bool, optional) – indicates whether rerun_job() runs in the background (async) or in the foreground (sync)

Returns:

scoring job details

Return type:

dict

Example

scoring_details = deployment.rerun_job(scoring_job_id)
run_job(payload=pandas.DataFrame(), output_data_reference=None, transaction_id=None, background_mode=True)[source]#

Batch scoring job on WML. A payload or a payload data reference is required; it is passed to the WML instance where the model has been deployed.

Parameters:
  • payload (pandas.DataFrame or List[DataConnection] or dict) – DataFrame with data to test the model, or data storage connection details that indicate where the payload data is stored

  • output_data_reference (DataConnection, optional) – DataConnection to the output Cloud Object Storage (COS) location for storing predictions; required only when DataConnections are used as the payload

  • transaction_id (str, optional) – can be used to indicate under which ID the records will be saved in the payload table in IBM OpenScale

  • background_mode (bool, optional) – indicates whether run_job() runs in the background (async) or in the foreground (sync)

Returns:

scoring job details

Return type:

dict

Examples

score_details = batch_service.run_job(payload=test_data)
print(score_details['entity']['scoring'])

# Result:
# {'input_data': [{'fields': ['sepal_length',
#               'sepal_width',
#               'petal_length',
#               'petal_width'],
#              'values': [[4.9, 3.0, 1.4, 0.2]]}],
# 'predictions': [{'fields': ['prediction', 'probability'],
#               'values': [['setosa',
#                 [0.9999320742502246,
#                  5.1519823540224506e-05,
#                  1.6405926235405522e-05]]]}]}

payload_reference = DataConnection(location=DSLocation(asset_id=asset_id))
score_details = batch_service.run_job(payload=payload_reference, output_data_filename = "scoring_output.csv")
score_details = batch_service.run_job(payload={'observations': payload_reference})
score_details = batch_service.run_job(payload=[payload_reference])
score_details = batch_service.run_job(payload={'observations': payload_reference, 'supporting_features': supporting_features_reference})  # supporting features in a time series forecasting scenario
score(**kwargs)[source]#

Scoring on WML. The payload is passed to the WML scoring endpoint where the model has been deployed.

Parameters:

payload (pandas.DataFrame) – data to test the model
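
Example

A minimal sketch; test_data is assumed to be a pandas.DataFrame with payload records.

predictions = deployment.score(payload=test_data)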