Deployment Modules for AutoAI models¶
Web Service¶
For usage instructions, see Web Service.
- class ibm_watsonx_ai.deployment.WebService(source_instance_credentials=None, source_project_id=None, source_space_id=None, target_instance_credentials=None, target_project_id=None, target_space_id=None, project_id=None, space_id=None, **kwargs)[source]¶
Bases:
BaseDeployment
WebService is an Online Deployment class. With this class object, you can manage any online (WebService) deployment.
- Parameters:
source_instance_credentials (dict) – credentials to the instance where the training was performed
source_project_id (str, optional) – ID of the Watson Studio project where the training was performed
source_space_id (str, optional) – ID of the Watson Studio Space where the training was performed
target_instance_credentials (dict) – credentials to the instance where you want to deploy
target_project_id (str, optional) – ID of the Watson Studio project where you want to deploy
target_space_id (str, optional) – ID of the Watson Studio Space where you want to deploy
- create(model, deployment_name, serving_name=None, metadata=None, training_data=None, training_target=None, experiment_run_id=None, hardware_spec=None)[source]¶
Create a deployment from a model.
- Parameters:
model (str) – name of the AutoAI model
deployment_name (str) – name of the deployment
training_data (pandas.DataFrame or numpy.ndarray, optional) – training data for the model
training_target (pandas.DataFrame or numpy.ndarray, optional) – target/label data for the model
serving_name (str, optional) – serving name of the deployment
metadata (dict, optional) – meta properties of the model
experiment_run_id (str, optional) – ID of a training/experiment (only applicable for AutoAI deployments)
hardware_spec (dict, optional) – hardware specification for the deployment
Example:
from ibm_watsonx_ai.deployment import WebService
from ibm_watsonx_ai import Credentials

deployment = WebService(
    source_instance_credentials=Credentials(...),
    source_project_id="...",
    target_space_id="...")

deployment.create(
    experiment_run_id="...",
    model=model,
    deployment_name='My new deployment',
    serving_name='my_new_deployment'
)
- delete(deployment_id=None)[source]¶
Delete a deployment.
- Parameters:
deployment_id (str, optional) – ID of the deployment to be deleted; if omitted, the current deployment is deleted
Example:
deployment = WebService(workspace=...)

# Delete current deployment
deployment.delete()

# Or delete a specific deployment
deployment.delete(deployment_id='...')
- get(deployment_id)[source]¶
Get a deployment.
- Parameters:
deployment_id (str) – ID of the deployment
Example:
deployment = WebService(workspace=...)
deployment.get(deployment_id="...")
- list(limit=None)[source]¶
List deployments.
- Parameters:
limit (int, optional) – limit for the number of listed deployments; defaults to None (all deployments are fetched)
- Returns:
Pandas DataFrame with information about deployments
- Return type:
pandas.DataFrame
Example:
deployment = WebService(workspace=...)
deployments_list = deployment.list()
print(deployments_list)

# Result:
#                  created_at  ...  status
# 0  2020-03-06T10:50:49.401Z  ...  ready
# 1  2020-03-06T13:16:09.789Z  ...  ready
# 4  2020-03-11T14:46:36.035Z  ...  failed
# 3  2020-03-11T14:49:55.052Z  ...  failed
# 2  2020-03-11T15:13:53.708Z  ...  ready
- score(payload=pandas.DataFrame(), *, forecast_window=None, transaction_id=None)[source]¶
Online scoring. Payload is passed to the Service scoring endpoint where the model has been deployed.
- Parameters:
payload (pandas.DataFrame or dict) – DataFrame with data to test the model, or, for scoring forecasting models, a dictionary with keys observations and supporting_features whose values are DataFrames
forecast_window (int, optional) – size of the forecast window; supported only for forecasting, on CPD 5.0 and later
transaction_id (str, optional) – ID under which the records should be saved in the payload table in IBM OpenScale
- Returns:
dictionary with list of model output/predicted targets
- Return type:
dict
Examples
predictions = web_service.score(payload=test_data)
print(predictions)

# Result:
# {'predictions':
#     [{
#         'fields': ['prediction', 'probability'],
#         'values': [['no', [0.9221385608558003, 0.07786143914419975]],
#                    ['no', [0.9798324002736079, 0.020167599726392187]]]
#     }]}

predictions = web_service.score(payload={'observations': new_observations_df})

# supporting features, time series forecasting scenario
predictions = web_service.score(payload={'observations': new_observations_df,
                                         'supporting_features': supporting_features_df})

# forecast_window, time series forecasting scenario
predictions = web_service.score(payload={'observations': new_observations_df},
                                forecast_window=1000)
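The response dictionary shown above nests the predicted values under predictions → fields/values. A minimal helper for flattening that documented shape back into a DataFrame might look like this (the helper name and the assumption of a single predictions block are illustrative, not part of the library API):

```python
import pandas as pd

def predictions_to_frame(score_result):
    """Flatten a score() response of the documented shape
    {'predictions': [{'fields': [...], 'values': [[...], ...]}]}
    into a pandas DataFrame, one row per scored record."""
    block = score_result['predictions'][0]
    return pd.DataFrame(block['values'], columns=block['fields'])
```

For a classification model this yields one column per field, e.g. `prediction` and `probability`, which is often easier to join back onto the test data than the raw nested dictionary.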
Batch¶
For usage instructions, see Batch.
- class ibm_watsonx_ai.deployment.Batch(source_instance_credentials=None, source_project_id=None, source_space_id=None, target_instance_credentials=None, target_project_id=None, target_space_id=None, project_id=None, space_id=None, **kwargs)[source]¶
Bases:
BaseDeployment
The Batch Deployment class. With this class object, you can manage any batch deployment.
- Parameters:
source_instance_credentials (dict) – credentials to the instance where the training was performed
source_project_id (str, optional) – ID of the Watson Studio project where the training was performed
source_space_id (str, optional) – ID of the Watson Studio Space where the training was performed
target_instance_credentials (dict) – credentials to the instance where you want to deploy
target_project_id (str, optional) – ID of the Watson Studio project where you want to deploy
target_space_id (str, optional) – ID of the Watson Studio Space where you want to deploy
- create(model, deployment_name, metadata=None, training_data=None, training_target=None, experiment_run_id=None, hardware_spec=None)[source]¶
Create a deployment from a model.
- Parameters:
model (str) – name of the AutoAI model
deployment_name (str) – name of the deployment
training_data (pandas.DataFrame or numpy.ndarray, optional) – training data for the model
training_target (pandas.DataFrame or numpy.ndarray, optional) – target/label data for the model
metadata (dict, optional) – meta properties of the model
experiment_run_id (str, optional) – ID of a training/experiment (only applicable for AutoAI deployments)
hardware_spec (str, optional) – hardware specification name of the deployment
Example:
from ibm_watsonx_ai.deployment import Batch
from ibm_watsonx_ai import Credentials

deployment = Batch(
    source_instance_credentials=Credentials(...),
    source_project_id="...",
    target_space_id="...")

deployment.create(
    experiment_run_id="...",
    model=model,
    deployment_name='My new deployment',
    hardware_spec='L'
)
- delete(deployment_id=None)[source]¶
Delete a deployment.
- Parameters:
deployment_id (str, optional) – ID of the deployment to be deleted; if omitted, the current deployment is deleted
Example:
deployment = Batch(workspace=...)

# Delete current deployment
deployment.delete()

# Or delete a specific deployment
deployment.delete(deployment_id='...')
- get(deployment_id)[source]¶
Get a deployment.
- Parameters:
deployment_id (str) – ID of the deployment
Example:
deployment = Batch(workspace=...)
deployment.get(deployment_id="...")
- get_job_params(scoring_job_id=None)[source]¶
Get batch deployment job parameters.
- Parameters:
scoring_job_id (str) – ID of the scoring job
- Returns:
parameters of the scoring job
- Return type:
dict
- get_job_result(scoring_job_id)[source]¶
Get batch deployment results of a scoring job.
- Parameters:
scoring_job_id (str) – ID of the scoring job
- Returns:
batch deployment results of the scoring job
- Return type:
pandas.DataFrame
- Raises:
MissingScoringResults – raised when the scoring job is incomplete or has failed
- get_job_status(scoring_job_id)[source]¶
Get the status of a scoring job.
- Parameters:
scoring_job_id (str) – ID of the scoring job
- Returns:
dictionary with state of the scoring job (one of: [completed, failed, starting, queued]) and additional details if they exist
- Return type:
dict
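Because batch scoring is asynchronous, a common pattern is to poll get_job_status() until the state reaches one of the terminal values listed above (completed or failed) before calling get_job_result(). A minimal polling sketch, written against any zero-argument status callable so the control flow is independent of the client object (the helper name and parameters are illustrative, not part of the library API):

```python
import time

def wait_for_job(get_status, poll_interval=5.0, timeout=600.0):
    """Poll a status callable until the job reaches a terminal state.

    ``get_status`` is any zero-argument callable returning a dict with a
    'state' key, e.g. ``lambda: deployment.get_job_status(scoring_job_id)``.
    Returns the final status dict, or raises TimeoutError.
    """
    terminal = {'completed', 'failed'}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get('state') in terminal:
            return status
        time.sleep(poll_interval)
    raise TimeoutError('scoring job did not finish within the timeout')
```

With a Batch deployment this might be used as `wait_for_job(lambda: deployment.get_job_status(scoring_job_id))`, followed by `deployment.get_job_result(scoring_job_id)` once the returned state is completed.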
- list(limit=None)[source]¶
List deployments.
- Parameters:
limit (int, optional) – limit for the number of listed deployments; defaults to None (all deployments are fetched)
- Returns:
Pandas DataFrame with information about deployments
- Return type:
pandas.DataFrame
Example:
deployment = Batch(workspace=...)
deployments_list = deployment.list()
print(deployments_list)

# Result:
#                  created_at  ...  status
# 0  2020-03-06T10:50:49.401Z  ...  ready
# 1  2020-03-06T13:16:09.789Z  ...  ready
# 4  2020-03-11T14:46:36.035Z  ...  failed
# 3  2020-03-11T14:49:55.052Z  ...  failed
# 2  2020-03-11T15:13:53.708Z  ...  ready
- rerun_job(scoring_job_id, background_mode=True)[source]¶
Rerun a scoring job with the same parameters as the job identified by scoring_job_id.
- Parameters:
scoring_job_id (str) – ID of the described scoring job
background_mode (bool, optional) – indicates whether the method runs in the background (asynchronously) or blocks until completion (synchronously)
- Returns:
details of the scoring job
- Return type:
dict
Example:
scoring_details = deployment.rerun_job(scoring_job_id)
- run_job(payload=pandas.DataFrame(), output_data_reference=None, transaction_id=None, background_mode=True, hardware_spec=None)[source]¶
Run a batch scoring job. Either a payload or a payload data reference is required; it is passed to the service where the model is deployed.
- Parameters:
payload (pandas.DataFrame or List[DataConnection] or Dict) – DataFrame that contains data to test the model or data storage connection details that inform the model where the payload data is stored
output_data_reference (DataConnection, optional) – DataConnection to the output COS for storing predictions, required only when DataConnections are used as a payload
transaction_id (str, optional) – ID under which the records should be saved in the payload table in IBM OpenScale
background_mode (bool, optional) – indicates whether the method runs in the background (asynchronously) or blocks until completion (synchronously)
hardware_spec (str, optional) – hardware specification name for the scoring job
- Returns:
details of the scoring job
- Return type:
dict
Examples
score_details = batch_service.run_job(payload=test_data)
print(score_details['entity']['scoring'])

# Result:
# {'input_data': [{'fields': ['sepal_length',
#                             'sepal_width',
#                             'petal_length',
#                             'petal_width'],
#                  'values': [[4.9, 3.0, 1.4, 0.2]]}],
#  'predictions': [{'fields': ['prediction', 'probability'],
#                   'values': [['setosa',
#                               [0.9999320742502246,
#                                5.1519823540224506e-05,
#                                1.6405926235405522e-05]]]}]}

payload_reference = DataConnection(location=DSLocation(asset_id=asset_id))
score_details = batch_service.run_job(payload=payload_reference,
                                      output_data_filename="scoring_output.csv")

score_details = batch_service.run_job(payload={'observations': payload_reference})
score_details = batch_service.run_job(payload=[payload_reference])

# supporting features, time series forecasting scenario
score_details = batch_service.run_job(payload={'observations': payload_reference,
                                               'supporting_features': supporting_features_reference})

score_details = batch_service.run_job(payload=test_df, hardware_spec='S')
score_details = batch_service.run_job(payload=test_df, hardware_spec=TShirtSize.L)