Class ZCL_IBMX_WML_V4
public inheriting from ZCL_IBMX_SERVICE_EXT create public

Documentation

watsonx.ai Runtime

No documentation available.

Types

Visibility and Level | Name | Documentation
---|---|---
public | tt_json_patch_operation TYPE STANDARD TABLE OF T_JSON_PATCH_OPERATION WITH NON-UNIQUE DEFAULT KEY | Array parameter type for method DEPLOYMENT_JOB_DEF_UPDATE |
public | t_allowed_identities TYPE STANDARD TABLE OF T_ALLOWED_IDENTITY WITH NON-UNIQUE DEFAULT KEY | The list of allowed identities that are allowed to access the remote system. |
public | t_allowed_identity (structured type) | The allowed identity. |
public | t_all_content_metadata (structured type) | The metadata related to the attachments. |
public | t_api_error (structured type) | An error message. |
public | t_api_error_response (structured type) | The data returned when an error is encountered. |
public | t_api_error_target (structured type) | The target of the error. |
public | t_base_model (structured type) | The model id of the base model for this job. |
public | t_batch_request (structured type) | Indicates that this is a batch deployment. An empty object has to be specified. |
public | t_binary_classification (structured type) | No documentation available. |
public | t_bluemix_account (structured type) | No documentation available. |
public | t_cnsmptn_capacity_unit_hours (structured type) | No documentation available. |
public | t_cnsmptn_deployment_job_count (structured type) | Limit for deployment jobs. |
public | t_common_patch_request_helper (structured type) | The common fields that can be patched. This is a helper for `cpdctl`. |
public | t_compute_usage_metrics (structured type) | Compute usage metrics. |
public | t_confusion_matrix (structured type) | The confusion matrix for the selected class. |
public | t_consumption (structured type) | The consumption part is available only when `plan.version` is `2`. All the values are calculated at the account level the instance belongs to, not the instance itself. |
public | t_consumption_details (structured type) | Compute usage details in a given context and framework. |
public | t_consumption_do_job_count (structured type) | No documentation available. |
public | t_consumption_gpu_count (structured type) | No documentation available. |
public | t_content_gzip type FILE | Arbitrary `gzip` file. |
public | t_content_info (structured type) | The content information to be uploaded. |
public | t_content_json type JSONOBJECT | Arbitrary `JSON` file. |
public | t_content_location (structured type) | Details about the attachments that should be uploaded with this model. |
public | t_content_location_field (structured type) | No documentation available. |
public | t_content_metadata (structured type) | The metadata related to the attachment. |
public | t_content_text type STRING | Arbitrary `text` file. |
public | t_content_xml type JSONOBJECT | Arbitrary `XML` file. |
public | t_content_zip type FILE | Arbitrary `zip` file. |
public | t_custom type JSONOBJECT | User defined properties specified as key-value pairs. |
public | t_data_connection type JSONOBJECT | Contains a set of fields specific to each connection. See here for [details about specifying connections](#datareferences). |
public | t_data_connection_reference (structured type) | A reference to data with an optional data schema. If necessary, it is possible to provide a data connection that contains just the data schema. |
public | t_data_input (structured type) | Data shape (rows, columns) passed as input to the transformer/transformation. |
public | t_data_location type MAP | Contains a set of fields that describe the location of the data with respect to the `connection`. |
public | t_data_output (structured type) | Data shape after the transformation. |
public | t_data_preprocessing_trans (structured type) | Information about the preprocessing transformation that was executed during the training run.
public | t_data_schema (structured type) | The schema of the expected data; see [datarecord-metadata-v2-schema](https://raw.githubusercontent.com/elyra-ai/pipeline-schemas/master/common-pipeline/datarecord-metadata/datarecord-metadata-v2-schema.json) for the schema definition.
public | t_deployment_entity (structured type) | The definition of the deployment. |
public | t_deployment_entity_common (structured type) | See the description in `POST /ml/v4/deployments`. |
public | t_deployment_entity_request (structured type) | See the description in `POST /ml/v4/deployments`. |
public | t_deployment_patch_req_helper (structured type) | The common fields that can be patched. This is a helper for `cpdctl`. |
public | t_deployment_resource (structured type) | A deployment resource. |
public | t_deployment_resources (structured type) | The deployment resources. |
public | t_deployment_resources_system (structured type) | System details including warnings and stats. This is populated only if the `stats` query parameter is passed as `true`.
public | t_deployment_scaling (structured type) | Status information related to the state of the scaling, if scaling is in progress or has completed. |
public | t_deployment_status (structured type) | Specifies the current status, additional information about the deployment and any failure messages in case of deployment failures. |
public | t_deployment_system_details (structured type) | Optional details provided by the service about statistics of the number of deployments created. The deployments that are counted will depend on the request parameters. |
public | t_dplymnt_job_def_patch_helper (structured type) | Can patch the deployment id. |
public | t_entity_request_space_body (structured type) | The properties that are part of a request that supports spaces. |
public | t_entity_req_spc_project_body (structured type) | The properties that are part of a request that supports spaces and projects. Either `space_id` or `project_id` must be provided.
public | t_environment_variables type MAP | This property is used to specify environment variables and their values required to be consumed by the batch deployment job. The environment variables and values must be specified as key-value pairs. This property is currently supported only for Python Scripts in batch deployment jobs.
public | t_evaluations_spec TYPE STANDARD TABLE OF T_EVALUATIONS_SPEC_ITEM WITH NON-UNIQUE DEFAULT KEY | A list of evaluation specifications. |
public | t_evaluations_spec_item (structured type) | An evaluation specification used to support evaluations for TensorFlow. |
public | t_evaluation_definition (structured type) | The optional evaluation definition. |
public | t_evaluation_metric (structured type) | An evaluation metric. |
public | t_experiment_entity (structured type) | The details of the experiment to be created. |
public | t_experiment_entity_request (structured type) | The details of the experiment to be created. |
public | t_experiment_resource (structured type) | The information for an experiment. |
public | t_experiment_resources (structured type) | A paginated list of experiments. |
public | t_experiment_rev_entity_req (structured type) | The details for the revision. |
public | t_extra_model_entity (structured type) | Information related to the upload of the model content. |
public | t_fdrtd_learning_model_spec (structured type) | No documentation available. |
public | t_fdrtd_learning_remote_train (structured type) | The remote training for federated learning. |
public | t_fdrtd_lrnng_info_aggregator (structured type) | No documentation available. |
public | t_fdrtd_lrnng_inf_aggrgtr_cnn1 (structured type) | No documentation available. |
public | t_fdrtd_lrnng_inf_rmt_train_s1 (structured type) | No documentation available. |
public | t_fdrtd_lrnng_inf_rmt_train_s2 (structured type) | No documentation available. |
public | t_fdrtd_lrnng_inf_rmt_train_s3 (structured type) | No documentation available. |
public | t_fdrtd_lrnng_rmt_train_rmt_t1 (structured type) | No documentation available. |
public | t_features_importance TYPE STANDARD TABLE OF T_FEATURE_IMPORTANCE WITH NON-UNIQUE DEFAULT KEY | No documentation available. |
public | t_feature_coefficients type MAP | The feature names where the calculated score describes the importance of each feature in the decision-making process. |
public | t_feature_importance (structured type) | No documentation available. |
public | t_federated_learning (structured type) | Federated Learning. |
public | t_federated_learning_crypto (structured type) | Settings for cryptographic fusion for federated learning. |
public | t_federated_learning_info (structured type) | Federated learning info. |
public | t_federated_learning_model (structured type) | The model type for federated_learning. |
public | t_federated_learning_optimizer (structured type) | The optimizer for federated learning. |
public | t_field_job_status (structured type) | The status of the job. |
public | t_field_solve_state (structured type) | The solve state for a Decision Optimization job. |
public | t_function_entity (structured type) | The details of the function to be created. |
public | t_function_entity_request (structured type) | The details of the function to be created. |
public | t_function_entity_schemas (structured type) | The schemas of the expected data. |
public | t_function_resource (structured type) | The information for a function. |
public | t_function_resources (structured type) | A paginated list of functions. |
public | t_func_revision_entity_request (structured type) | The details for the revision. |
public | t_gpu_metrics (structured type) | GPU metrics. |
public | t_gpu_metrics_memory (structured type) | No documentation available. |
public | t_hardware_spec (structured type) | A hardware specification. |
public | t_hybrd_ppln_hardware_specs TYPE STANDARD TABLE OF T_HYBRD_PPLN_HRDWR_SPECS_ITEM WITH NON-UNIQUE DEFAULT KEY | Hybrid pipeline hardware specification. |
public | t_hybrd_ppln_hrdwr_specs_item (structured type) | No documentation available. |
public | t_hyper_parameter (structured type) | A set of hyper parameters. |
public | t_incremental_training (structured type) | The process of training the model in batches. |
public | t_inference (structured type) | The details of an inference API. |
public | t_instance_resource (structured type) | No documentation available. |
public | t_instance_resources (structured type) | Information for paging when querying resources. |
public | t_instance_resource_entity (structured type) | No documentation available. |
public | t_instance_resrc_entity_plan (structured type) | No documentation available. |
public | t_intermediate_model (structured type) | The details of the intermediate model. |
public | t_jb_decision_optimization_req (structured type) | Details about the input/output data and other properties to be used for a batch deployment job of a Decision Optimization problem. You can find more information in the [Deploying Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html) documentation. Use the `solve_parameters` as named value pairs to control the Decision Optimization solve behavior. Use the `input_data` and `output_data` properties to specify input and output data, respectively, for batch processing as part of the job's payload. Use the `input_data_references` and `output_data_references` properties to specify input and output data, respectively, for batch processing as remote data sources.
public | t_jb_decision_optimization_res (structured type) | The solve state for a Decision Optimization job. |
public | t_jobs_resource (structured type) | The information related to the job. |
public | t_jobs_resources (structured type) | The information related to the jobs. |
public | t_job_entity (structured type) | Details about the batch deployment job. The `deployment` is a reference to the deployment associated with the deployment job or deployment job definition. The `scoring` and `decision_optimization` properties are mutually exclusive. Specify only one of these when submitting a batch deployment job. Use `hybrid_pipeline_hardware_specs` only in a batch deployment job of a hybrid pipeline in order to specify the compute configuration for each pipeline node. For all other cases use `hardware_spec` to specify the compute configuration. In the case of output data references where the data asset is a remote database, users can specify if the batch deployment output must be appended to the table or if the table is to be truncated and the output data updated. The `output_data_references.location.write_mode` parameter can be used for this purpose. The values `truncate` or `append` can be specified for the `output_data_references.location.write_mode` parameter. * Specifying `truncate` as the value will truncate the table and insert the batch output data. * Specifying `append` as the value will insert the batch output data into the remote database table. * The `write_mode` parameter is applicable only for the `output_data_references` parameter. * The `write_mode` parameter is applicable only for remote database related data assets; it is not applicable for local data assets or COS based data assets.
public | t_job_entity_request (structured type) | Details about the batch deployment job. The `deployment` is a reference to the deployment associated with the deployment job or deployment job definition. The `scoring` and `decision_optimization` properties are mutually exclusive. Specify only one of these when submitting a batch deployment job. Use `hybrid_pipeline_hardware_specs` only in a batch deployment job of a hybrid pipeline in order to specify the compute configuration for each pipeline node. For all other cases use `hardware_spec` to specify the compute configuration. In the case of output data references where the data asset is a remote database, users can specify if the batch deployment output must be appended to the table or if the table is to be truncated and the output data updated. The `output_data_references.location.write_mode` parameter can be used for this purpose. The values `truncate` or `append` can be specified for the `output_data_references.location.write_mode` parameter. * Specifying `truncate` as the value will truncate the table and insert the batch output data. * Specifying `append` as the value will insert the batch output data into the remote database table. * The `write_mode` parameter is applicable only for the `output_data_references` parameter. * The `write_mode` parameter is applicable only for remote database related data assets; it is not applicable for local data assets or COS based data assets.
public | t_job_entity_result (structured type) | Details about the batch deployment job. The `deployment` is a reference to the deployment associated with the deployment job or deployment job definition. The `scoring` and `decision_optimization` properties are mutually exclusive. |
public | t_job_resource (structured type) | The information for a deployment job definition. |
public | t_job_resources (structured type) | A paginated list of deployment job definitions. |
public | t_job_resource_entity (structured type) | Details about the batch deployment job. The `deployment` is a reference to the deployment associated with the deployment job or deployment job definition. The `scoring` and `decision_optimization` properties are mutually exclusive. Specify only one of these when submitting a batch deployment job. Use `hybrid_pipeline_hardware_specs` only in a batch deployment job of a hybrid pipeline in order to specify the compute configuration for each pipeline node. For all other cases use `hardware_spec` to specify the compute configuration. In the case of output data references where the data asset is a remote database, users can specify if the batch deployment output must be appended to the table or if the table is to be truncated and the output data updated. The `output_data_references.location.write_mode` parameter can be used for this purpose. The values `truncate` or `append` can be specified for the `output_data_references.location.write_mode` parameter. * Specifying `truncate` as the value will truncate the table and insert the batch output data. * Specifying `append` as the value will insert the batch output data into the remote database table. * The `write_mode` parameter is applicable only for the `output_data_references` parameter. * The `write_mode` parameter is applicable only for remote database related data assets; it is not applicable for local data assets or COS based data assets.
public | t_job_resource_metadata (structured type) | Common metadata for a resource where `project_id` or `space_id` must be present. |
public | t_job_revision_entity_request (structured type) | The details for the revision. |
public | t_job_scoring_request (structured type) | Details about the input/output data and other properties to be used for a batch deployment job of a model, Python Function, or Python Script. Use the `input_data` property to specify the input data for batch processing as part of the job's payload. The `input_data` property is mutually exclusive with the `input_data_references` property; only use one of these. When `input_data` is specified, the processed output of the batch deployment job will be available in the `scoring.predictions` parameter in the deployment job response. The `input_data` property is not supported for batch deployment of Python Scripts. Use the `input_data_references` property to specify the details pertaining to the remote source where the input data for the batch deployment job is available. The `input_data_references` property must be used with `output_data_references`. The `input_data_references` property is mutually exclusive with the `input_data` property; only use one of these. The `input_data_references` property is not supported for batch deployment jobs of Spark models and Python Functions. Use the `output_data_references` property to specify the details pertaining to the remote target where the output data of the batch deployment job is to be stored. `output_data_references` must be used with `input_data_references`. The `output_data_references` property is not supported for batch deployment jobs of Spark models and Python Functions.
public | t_job_scoring_result (structured type) | The status of the job. |
public | t_job_status (structured type) | The status of the job. |
public | t_job_status_entity (structured type) | Information about the platform job assets related to this execution. |
public | t_job_status_message (structured type) | An optional message related to the job status. |
public | t_json_patch TYPE STANDARD TABLE OF T_JSON_PATCH_OPERATION WITH NON-UNIQUE DEFAULT KEY | See [JSON PATCH RFC 6902](https://tools.ietf.org/html/rfc6902). |
public | t_json_patch_operation (structured type) | This model represents an individual patch operation to be performed on an object, as defined by [RFC 6902](https://tools.ietf.org/html/rfc6902).
public | t_limit_expiration_date type DATE | The expiration date of the instance limit. |
public | t_mdl_def_entity_req_platform (structured type) | No documentation available. |
public | t_message (structured type) | Optional messages related to the resource. |
public | t_metric (structured type) | A metric. |
public | t_metrics TYPE STANDARD TABLE OF T_METRIC WITH NON-UNIQUE DEFAULT KEY | Metrics that can be returned by an operation. |
public | t_metrics_context (structured type) | Provides extra information for this training stage in the context of auto-ml. |
public | t_metric_tsad_metrics type JSONOBJECT | The metrics from the time series anomaly detection. For more information, please see the [Creating a Time Series Anomaly Prediction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ad.html?audience=wdp) documentation.
public | t_metric_ts_metrics type JSONOBJECT | The metrics from the time series. For more information, please see the [Time Series Implementation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?audience=wdp#ts-metrics) documentation.
public | t_metric_ts_metrics_ts_holdout (structured type) | Metrics generated during evaluation of the pipeline on holdout data. |
public | t_metric_ts_metrics_ts_train (structured type) | No documentation available. |
public | t_ml_federated_metric (structured type) | The metrics from federated training. |
public | t_model_definition_entity (structured type) | The definition of a model. The `software_spec` is used only for training. Either `space_id` or `project_id` must be provided.
public | t_model_definition_id (structured type) | The model definition. |
public | t_model_definition_rel (structured type) | A model. The `software_spec` is a reference to a software specification. The `hardware_spec` is a reference to a hardware specification. |
public | t_model_definition_resource (structured type) | The information for a model definition. |
public | t_model_definition_resources (structured type) | A paginated list of model definitions. |
public | t_model_def_entity_platform (structured type) | No documentation available. |
public | t_model_def_entity_request (structured type) | The definition of a model. The `software_spec` is used only for training. Either `space_id` or `project_id` must be provided.
public | t_model_def_rev_entity_request (structured type) | The details for the revision. |
public | t_model_entity (structured type) | The details of the model to be created. |
public | t_model_entity_model_version (structured type) | Optional metadata that can be used to provide information about this model that can be tracked with IBM AI Factsheets. See [Using AI Factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/factsheets-model-inventory.html) for more details.
public | t_model_entity_request (structured type) | The details of the model to be created. |
public | t_model_entity_schemas (structured type) | If the prediction schemas are provided here then they take precedence over any schemas provided in the data references. Note that data references contain the schema for the associated data and this object contains the schema(s) for the associated prediction, if any. In the case that the prediction input data matches exactly the schema of the training data references, the prediction schema can be omitted. However, it is highly recommended to always specify the prediction schemas using this field.
public | t_model_entity_size (structured type) | This will be used by scoring to record the size of the model. |
public | t_model_location (structured type) | The location of the intermediate model. |
public | t_model_reference (structured type) | A reference to a model that is used by this function. Note that the reference can be to a model that is in a different space or project from this function. For this reason either a `space_id` or a `project_id` must be provided; otherwise the `space_id` or `project_id` of the function will be assumed.
public | t_model_resource (structured type) | The information for a model. |
public | t_model_resources (structured type) | A paginated list of models. |
public | t_model_resource_entity (structured type) | Information related to the upload of the model content. |
public | t_model_rev_entity_request (structured type) | The details for the revision. |
public | t_mtrc_tsad_metrics_tsad_train (structured type) | Metrics generated in model selection phase. |
public | t_mtrc_tsd_mtrcs_tsad_holdout (structured type) | Metrics generated in model evaluation phase. |
public | t_mtrc_tsd_mtrcs_tsd_train_ag1 (structured type) | No documentation available. |
public | t_mtrc_ts_metrics_ts_backtest (structured type) | Metrics generated during evaluation of the pipeline on backtest data. |
public | t_multi_class_classification (structured type) | No documentation available. |
public | t_multi_class_classifications (structured type) | No documentation available. |
public | t_object_location (structured type) | A reference to data. |
public | t_object_location_optim (structured type) | A reference to data. |
public | t_online_parameters (structured type) | A set of key-value pairs where `key` is the parameter name. |
public | t_online_request (structured type) | Indicates that this is an online deployment. An empty object has to be specified. If the online scoring schema has a `type` of `DataFrame` then the scoring payload will be converted to a `Pandas` data frame. |
public | t_organization (structured type) | A remote organization. |
public | t_pagination (structured type) | Information for paging when querying resources. |
public | t_pagination_base (structured type) | No documentation available. |
public | t_pagination_first (structured type) | The reference to the first item in the current page. |
public | t_pagination_next (structured type) | A reference to the first item of the next page, if any. |
public | t_pipeline_entity (structured type) | The details of the pipeline to be created. |
public | t_pipeline_entity_request (structured type) | The details of the pipeline to be created. |
public | t_pipeline_rel (structured type) | A pipeline. The `hardware_spec` is a reference to the hardware specification. The `hybrid_pipeline_hardware_specs` are used only when training a hybrid pipeline in order to specify compute requirement for each pipeline node. |
public | t_pipeline_resource (structured type) | The information for a pipeline. |
public | t_pipeline_resources (structured type) | A paginated list of pipelines. |
public | t_pipeline_rev_entity_request (structured type) | The details for the revision. |
public | t_platform_job (structured type) | Information about the platform job assets related to this execution. Depending on the `version` date passed, the `platform_jobs` section in the response may or may not be populated. Use the GET call to retrieve the deployment job; this GET call will eventually populate the `platform_jobs` section. Refer to the `version date` description for more details.
public | t_ppln_rel_data_bindings_item (structured type) | No documentation available. |
public | t_ppln_rel_nodes_param_item (structured type) | No documentation available. |
public | t_project_id type STRING | The project that contains the resource. Either `space_id` or `project_id` has to be given. |
public | t_project_id_only type STRING | The project that contains the resource. |
public | t_rel (structured type) | A reference to a resource. |
public | t_remote_admin (structured type) | The details of the remote administrator for the organization and identities. |
public | t_remote_train_system_entity (structured type) | The definition of a remote system used by Federated Learning. |
public | t_remote_train_system_metric (structured type) | The remote training system metric. |
public | t_remote_train_system_resource (structured type) | The information for a remote training system. |
public | t_remote_train_sys_entity_req (structured type) | The definition of a remote system used by Federated Learning. |
public | t_remote_train_sys_resources (structured type) | A paginated list of remote training systems. |
public | t_resource_commit_info (structured type) | Information related to the revision. |
public | t_resource_meta (structured type) | Common metadata for a resource where `project_id` or `space_id` must be present. |
public | t_resource_meta_base (structured type) | Common metadata for a resource. |
public | t_resource_meta_simple (structured type) | Common metadata for a simple resource. |
public | t_rev_entity_space_request (structured type) | The details for the revision. |
public | t_rev_entity_spc_project_req (structured type) | The details for the revision. |
public | t_rmt_train_sys_patch_helper (structured type) | Fields that can be patched. |
public | t_rmt_train_sys_rev_entity_req (structured type) | The details for the revision. |
public | t_roc_curve (structured type) | The roc (receiver operating characteristic) curve for the selected class. |
public | t_scoring_parameters (structured type) | Parameters that can be used to control the prediction request. |
public | t_scoring_payload (structured type) | The payload for scoring. |
public | t_scoring_payload_optim (structured type) | The payload for scoring. |
public | t_scoring_payload_optim_value type JSONOBJECT | The record. |
public | t_scoring_payload_optim_values TYPE STANDARD TABLE OF T_SCORING_PAYLOAD_OPTIM_VALUE WITH NON-UNIQUE DEFAULT KEY | The records. |
public | t_scoring_targets type TT_JSONOBJECT | No documentation available. |
public | t_scrng_param_forecast_window type INTEGER | The forecast window to use for the prediction. If no value is set then the value used during training will be used. |
public | t_simple_rel (structured type) | A reference to a resource. |
public | t_software_spec_rel (structured type) | A software specification. |
public | t_solve_parameters type JSONOBJECT | To control solve behavior, you can specify solve parameters in your request as key-value pairs. |
public | t_solve_state (structured type) | The solve state for a Decision Optimization job. |
public | t_space_id type STRING | The space that contains the resource. Either `space_id` or `project_id` has to be given. |
public | t_space_id_only type STRING | The space that contains the resource. |
public | t_stats (structured type) | The stats about deployments for a space. |
public | t_step_info (structured type) | Details about the step. |
public | t_sync_scoring_data (structured type) | Scoring data. |
public | t_sync_scoring_data_item (structured type) | The input data. |
public | t_sync_scoring_data_results (structured type) | Scoring results. |
public | t_sync_scrng_data_item_values type TT_JSONOBJECT | No documentation available. |
public | t_system (structured type) | System details. |
public | t_system_details (structured type) | Optional details coming from the service and related to the API call or the associated resource. |
public | t_tags type TT_STRING | A list of tags for this resource. |
public | t_token_count (structured type) | The token count for the account. |
public | t_training_definition_entity (structured type) | The `training_data_references` contain the training datasets and the `results_reference` the connection where results will be stored.
public | t_training_definition_resource (structured type) | The information for a training definition. |
public | t_training_details (structured type) | Information about the training job that created this model. |
public | t_training_reference (structured type) | The `pipeline` is a reference to the pipeline. The `model_definition` is the library reference that contains the training library. |
public | t_training_resource (structured type) | Training resource. |
public | t_training_resources (structured type) | Information for paging when querying resources. |
public | t_training_resource_entity (structured type) | The `training_data_references` contain the training datasets and the `results_reference` the connection where results will be stored.
public | t_training_status (structured type) | Status of the model. |
public | t_training_status_hpo (structured type) | Hyperparameter optimization. |
public | t_training_status_message (structured type) | Message. |
public | t_training_websocket (structured type) | No documentation available. |
public | t_train_definition_resources (structured type) | A paginated list of training definitions. |
public | t_train_def_entity_request (structured type) | The `training_data_references` contain the training datasets and the `results_reference` the connection where results will be stored.
public | t_train_def_patch_helper (structured type) | Fields that can be patched. |
public | t_train_def_rev_entity_request (structured type) | The details for the revision. |
public | t_train_ref_hypr_prm_optmztn (structured type) | The hyper parameters used in the experiment. |
public | t_train_ref_hypr_prm_optmztn_1 (structured type) | Optimization algorithm. |
public | t_train_resource_entity_common (structured type) | The `training_data_references` contain the training datasets, and the `results_reference` contains the connection where results will be stored. |
public | t_train_resrc_entity_request (structured type) | The `training_data_references` contain the training datasets, and the `results_reference` contains the connection where results will be stored. |
public | t_tsad_holdout (structured type) | Metrics generated in model evaluation phase. |
public | t_tsad_holdout_agg (structured type) | Aggregated scores of anomaly types per metric. |
public | t_tsad_holdout_agg_f1 (structured type) | Harmonic average of the precision and recall, with best value of 1 (perfect precision and recall) and worst at 0. |
public | t_tsad_holdout_agg_precision (structured type) | Measures the accuracy of a prediction based on percent of positive predictions that are correct. |
public | t_tsad_holdout_agg_recall (structured type) | Measures the percentage of identified positive predictions against possible positives in data set. |
public | t_tsad_holdout_agg_roc_auc (structured type) | Measure of how well a parameter can distinguish between two groups. |
public | t_tsad_holdout_iterations_item (structured type) | No documentation available. |
public | t_tsad_holdout_supporting_rank (structured type) | Pipeline ranking based on the specified metric. |
public | t_tsad_training (structured type) | Metrics generated in model selection phase. |
public | t_tsd_hldt_aggrgtd_score_item (structured type) | No documentation available. |
public | t_tsd_hldt_agg_avg_prcsn_lclz1 (structured type) | Localized extreme anomaly refers to an unusual data point in a time series, which deviates significantly from the data points around it. |
public | t_tsd_hldt_agg_avg_prcsn_lvl_1 (structured type) | Level shift anomaly refers to a segment in which the mean value of a time series is changed. |
public | t_tsd_hldt_agg_avg_prcsn_trend (structured type) | Trend anomaly refers to a segment of a time series that has a trend change compared to the time series before the segment. |
public | t_tsd_hldt_agg_avg_precision (structured type) | Average of the accuracy of predictions based on percent of positive predictions that are correct. |
public | t_tsd_hldt_itrtns_itm_avg_prc1 (structured type) | (Recommended): Average of the accuracy of predictions based on percent of positive predictions that are correct. |
public | t_tsd_hldt_spprtng_rnk_avg_pr1 (structured type) | (Recommended): Average of the accuracy of predictions based on percent of positive predictions that are correct. |
public | t_tsd_hldt_spprtng_rnk_avg_pr2 (structured type) | Level shift anomaly refers to a segment in which the mean value of a time series is changed. Includes scores for all pipelines. |
public | t_tsd_train_aggrgtd_score_item (structured type) | No documentation available. |
public | t_ts_backtest (structured type) | Metrics generated during evaluation of the pipeline on backtest data. |
public | t_ts_holdout (structured type) | Metrics generated during evaluation of the pipeline on holdout data. |
public | t_ts_metric_backtest (structured type) | Metrics from the backtest data. |
public | t_ts_metric_levels (structured type) | A set of metrics. |
public | t_ts_mtrc_bcktst_itrtns_item (structured type) | No documentation available. |
public | t_ts_training (structured type) | No documentation available. |
public | t_ts_training_training (structured type) | Metrics generated during training. |
public | t_variance (structured type) | Variance anomaly refers to a segment of time series in which the variance of a time series is changed. |
public | t_warning (structured type) | A warning message. |
Constants
Visibility and Level | Name | Documentation |
---|---|---|
public static | c_abapname_dictionary (structured type) | Map ABAP identifiers to service identifiers. |
public static | c_required_fields (structured type) | List of required fields per type. |
Methods
Visibility and Level | Name | Documentation | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
public instance |
| Execute a synchronous deployment prediction Execute a synchronous prediction for the deployment with the specified identifier. If a `serving_name` is used then it must match the `serving_name` that is returned in the `inference` field. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
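The request body schema is not reproduced in this listing. As a hedged sketch, the `input_data` list with `fields` and `values` below mirrors the layout that the `score` function example later in this document reads (`payload['fields']`, `payload['values']`); the column names and rows are assumptions for illustration:

```python
import json

# Hypothetical synchronous scoring payload. The "fields"/"values" layout
# matches what the score() example elsewhere in this document consumes;
# the column names and row values are purely illustrative.
payload = {
    "input_data": [
        {
            "fields": ["age", "income"],           # assumed column names
            "values": [[42, 50000], [35, 42000]],  # one inner list per row
        }
    ]
}

request_body = json.dumps(payload)
```

The serialized `request_body` would be sent as the POST body of the prediction call, with the deployment id (or `serving_name`) in the URL.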
public instance |
| Create a new WML deployment Create a new deployment. The parameters specifying the deployment type are `online` and `batch`. These parameters are mutually exclusive; specify only one of these when creating a deployment. Use `hybrid_pipeline_hardware_specs` only when creating a `batch` deployment job of a hybrid pipeline in order to specify compute configuration for each pipeline node. For all other `batch` deployment cases use `hardware_spec` to specify compute configuration. The `hybrid_pipeline_hardware_specs` and `hardware_spec` are mutually exclusive; specify only one of these when creating a deployment. For `batch` deployments, the `hardware_spec.num_nodes` parameter is not currently supported. For `online` deployments, `hardware_spec` cannot be specified at the time of creation; the `hardware_spec.num_nodes` parameter is not supported as part of the `POST /ml/v4/deployments` API request, but it can be updated using `PATCH /ml/v4/deployments/<deployment id>`. For `online` deployments, `serving_name` can be provided in `online.parameters`. The serving name can have the characters `[a-z,0-9,_]` and must not be more than 36 characters. The `serving_name` can be used in the prediction URL in place of the `deployment_id`. See the documentation [supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html?context=cpdaas&audience=wdp) for details about which frameworks can be used in a deployment. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the deployment Delete the deployment with the specified identifier. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployment details Retrieve the deployment details with the specified identifier. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployments Retrieve the list of deployments for the specified space. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the deployment metadata Update the deployment metadata. The following parameters of deployment metadata are supported for the patch operation. - `/tags` - `/name` - `/description` - `/custom` - `/hardware_spec` - `/hybrid_pipeline_hardware_specs` - `/asset` - `/online/parameters` In the case of online deployments, using the PATCH operation of `/ml/v4/deployments`, users can update the number of copies of an online deployment. Users can specify the desired value of the number of copies in the `hardware_spec.num_nodes` parameter. As `hardware_spec.name` or `hardware_spec.id` is mandatory for the `hardware_spec` schema, a valid value such as `XS` or `S` must be specified for the `hardware_spec.name` parameter as part of the PATCH request. Alternatively, users can also specify a valid ID of a hardware specification in the `hardware_spec.id` parameter. However, changes related to `hardware_spec.name` or `hardware_spec.id` specified in the PATCH operation will not be applied for online deployments. <br /> In the case of batch deployments, using the PATCH operation of `/ml/v4/deployments`, users can update the hardware specification so that subsequent batch deployment jobs can make use of the updated compute configurations. To update the compute configuration, users must specify a valid value for either `hardware_spec.name` or `hardware_spec.id` of the hardware specification that suits their requirement. In the batch deployment context, the `hardware_spec.num_nodes` parameter is not currently supported. <br /> When 'asset' is patched with id/rev: - Deployment with the patched id/rev is started. - With an asynchronous deployment (`version` greater than [2021-05-01](#vd-2021-05-01)), a 202 response code will be returned and one can poll the deployment for the status. - If there are any failures, the deployment will be reverted to the previous id/rev and the failure message will be captured in the 'failures' field in the response.
In the case of an online deployment, the PATCH operation with path specified as `/online/parameters` can be used to update the `serving_name`. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Start an asynchronous deployment job Start a deployment job asynchronously. This can perform batch scoring, streaming, or other types of batch operations, such as solving a Decision Optimization problem. Depending on the `version` date passed, the `platform_jobs` section in the response may or may not be populated. Use the GET call to retrieve the deployment job; this GET call will eventually populate the `platform_jobs` section. Refer to the `version date` description for more details. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Cancel the deployment job Cancel the specified deployment job. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployment job Retrieve the deployment job. The predicted data bound to this job_id is going to be kept around for a limited time based on the service configuration. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployment jobs Retrieve the status of the current jobs. The system will apply a max limit of jobs retained by the system, as we cannot accumulate an infinite number of jobs. Only the most recent 300 jobs (system configurable) will be preserved. Older jobs will be purged by the system. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployment job definition Retrieve the deployment job definition with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new deployment job definition Create a new deployment job definition with the given payload. A deployment job definition represents the deployment metadata information in order to create a batch job in WML. This contains the same metadata used by the /ml/v4/deployment_jobs endpoint. This means that when submitting batch deployment jobs the user can either provide the job definition inline or reference a job definition in a query parameter. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new deployment job definition revision Create a new deployment job definition revision. The current metadata and content for job_definition_id will be taken and a new revision created. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the deployment job definition Delete the deployment job definition with the specified identifier. This will delete all revisions of this deployment job definition as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployment job definitions Retrieve the deployment job definitions for the specified space. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the deployment job definition revisions Retrieve the deployment job definition revisions. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the deployment job definition Update the deployment job definition with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` - `/deployment` Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new experiment Create a new experiment with the given payload. An experiment represents an asset that captures a set of `pipeline` or `model definition` assets that will be trained at the same time on the same data set. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new experiment revision Create a new experiment revision. The current metadata and content for experiment_id will be taken and a new revision created. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the experiment Delete the experiment with the specified identifier. This will delete all revisions of this experiment as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the experiment Retrieve the experiment with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the experiments Retrieve the experiments for the specified space or project. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the experiment revisions Retrieve the experiment revisions. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the experiment Update the experiment with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new function Create a new function with the given payload. A function is some code that can be deployed as an online or batch deployment. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new function revision Create a new function revision. The current metadata and content for function_id will be taken and a new revision created. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the function Delete the function with the specified identifier. This will delete all revisions of this function as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Download the function code Download the function code. It is possible to get the `code` for a given revision of the `function`. Functions expect a zip file that contains a python file that makes up the function. Python functions specify what needs to be run when the function is deployed and what needs to be run when the scoring function is called. In other words, you are able to customize what preparation WML does in the environment when you deploy the function, as well as what steps WML takes to generate the output when you call the API later on. The function consists of the outer function (any place that is not within the score function) and the inner score function. The code that sits in the outer function runs when the function is deployed, and the environment is then frozen and ready to be used whenever the online scoring or batch inline job processing API is called. The code that sits in the inner score function runs when the online scoring or batch inline job processing API is called, in the environment customized by the outer function. See [Deploying Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html?context=cpdaas&audience=wdp) for more details. This is illustrated in the example below: <pre>
...python code used to set up the environment...

def score(payload):
    df_payload = pd.DataFrame(payload['values'])
    df_payload.columns = payload['fields']
    ...
    output = {'result': res}
    return output

return score
</pre> Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the function Retrieve the function with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the functions Retrieve the functions for the specified space or project. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the function revisions Retrieve the function revisions. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the function Update the function with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Upload the function code Upload the function code. Functions expect a zip file that contains a python file that makes up the function. Python functions specify what needs to be run when the function is deployed and what needs to be run when the scoring function is called. In other words, you are able to customize what preparation WML does in the environment when you deploy the function, as well as what steps WML takes to generate the output when you call the API later on. The function consists of the outer function (any place that is not within the score function) and the inner score function. The code that sits in the outer function runs when the function is deployed, and the environment is then frozen and ready to be used whenever the online scoring or batch inline job processing API is called. The code that sits in the inner score function runs when the online scoring or batch inline job processing API is called, in the environment customized by the outer function. See [Deploying Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html?context=cpdaas&audience=wdp) for more details. This is illustrated in the example below: <pre>
...python code used to set up the environment...

def score(payload):
    df_payload = pd.DataFrame(payload['values'])
    df_payload.columns = payload['fields']
    ...
    output = {'result': res}
    return output

return score
</pre> Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
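Packaging the code for upload can be sketched as follows (the file name `function.py` and the function body are illustrative placeholders; the upload endpoint only requires a zip archive containing the Python code):

```python
import io
import zipfile

# Build the zip archive the upload expects: a Python file containing the
# outer function body and the inner score() function.
function_code = (
    "import pandas as pd\n"
    "\n"
    "def score(payload):\n"
    "    df_payload = pd.DataFrame(payload['values'])\n"
    "    df_payload.columns = payload['fields']\n"
    "    return {'result': df_payload.to_dict()}\n"
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("function.py", function_code)

archive_bytes = buf.getvalue()  # send as the body of the upload request
```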
public instance | get_appname redefinition | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance | get_request_prop redefinition | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance | get_sdk_version_date redefinition | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the service instance Retrieve the service instance details. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the service instances Retrieve the service instances. Either `space_id` or `project_id` query parameter is mandatory but both can be provided. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new model Create a new model with the given payload. A model represents a machine learning model asset. If a `202` status is returned then the model will be ready when the `content_import_state` in the model status (/ml/v4/models/{model_id}) is `completed`. If `content_import_state` is not used then a `201` status is returned. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new model revision Create a new model revision. The current metadata and content for model_id will be taken and a new revision created. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the model Delete the model with the specified identifier. This will delete all revisions of this model as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the model content Delete the content for the specified model. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Download the model content Download the model content. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Download the model content that matches certain criteria Download the model content identified by the provided criteria. If more than one attachment is found for the given filter then a `400` error is returned. If there are no attachments that match the filter then a `404` error is returned. If there are no filters then, if there is a single attachment, the attachment content will be returned; otherwise a `400` or `404` error will be returned as described above. This method is just a shortcut for getting the attachment metadata and then downloading by attachment id. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the model Retrieve the model with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the models Retrieve the models for the specified space or project. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the model content metadata list Retrieve the content metadata list for the specified model attachments. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the model revisions Retrieve the model revisions. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the model Update the model with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` - `/software_spec` (operation `replace` only) - `/model_version` (operation `add` and `replace` only) Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
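The per-path operation restrictions above can be checked client-side before sending the request. This is an illustrative sketch only: the operation sets for `/tags`, `/name`, `/description`, and `/custom` are an assumption (standard JSON-patch ops), while the `/software_spec` and `/model_version` restrictions come from the list above; the software spec name is a placeholder.

```python
# Client-side validation of a model patch against the patchable fields
# listed above. Purely illustrative; the service performs its own checks.
ALLOWED_OPS = {
    "/tags": {"add", "replace", "remove"},        # assumed: standard ops
    "/name": {"add", "replace", "remove"},        # assumed: standard ops
    "/description": {"add", "replace", "remove"}, # assumed: standard ops
    "/custom": {"add", "replace", "remove"},      # assumed: standard ops
    "/software_spec": {"replace"},                # replace only, per the docs
    "/model_version": {"add", "replace"},         # add and replace only
}

def validate_patch(patch):
    for op in patch:
        allowed = ALLOWED_OPS.get(op["path"])
        if allowed is None or op["op"] not in allowed:
            raise ValueError(f"unsupported patch operation: {op}")

# Placeholder software spec value for illustration.
validate_patch([{"op": "replace", "path": "/software_spec",
                 "value": {"name": "my-runtime-spec"}}])
```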
public instance |
| Upload the model content Upload the content for the specified model. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new model definition Create a new model definition with the given payload. A model definition represents the code that is used to train one or more models. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the model definition Delete the model definition with the specified identifier. This will delete all revisions of this model definition as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the model definition Retrieve the model definition with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the model definitions Retrieve the model definitions for the specified space or project. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the model definition Update the model definition with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Upload the model definition model Upload the model definition model. Model definitions for Deep Learning accept a zip file that contains one or more python files organized in any directory structure. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new model definition revision Create a new model definition revision. The current metadata and content for model_definition_id will be taken and a new revision created. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Download the model definition model Download the model definition model. It is possible to get the `model` for a given revision of the `model definition`. Model definitions for Deep Learning accept a zip file that contains one or more python files organized in any directory structure. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the model definition revisions Retrieve the model definition revisions. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new pipeline Create a new pipeline with the given payload. A pipeline represents a hybrid-pipeline, as a JSON document, that is used to train one or more models. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new pipeline revision Create a new pipeline revision. The current metadata and content for pipeline_id will be taken and a new revision created. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the pipeline Delete the pipeline with the specified identifier. This will delete all revisions of this pipeline as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the pipeline Retrieve the pipeline with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the pipelines Retrieve the pipelines for the specified space or project. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the pipeline revisions Retrieve the pipeline revisions. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the pipeline Update the pipeline with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new remote training system Create a new remote training system with the given payload. A remote training system represents a remote system used by Federated Learning. This definition is necessary to control who can register to a federated learning job. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Delete the remote training system Delete the remote training system with the specified identifier. This will delete all revisions of this remote training system as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the remote training system Retrieve the remote training system with the specified identifier. If the `rev` query parameter is provided, `rev=latest` will fetch the latest revision. A call with `rev={revision_number}` will fetch the given revision_number record. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the remote training systems Retrieve the remote training systems for the specified space or project. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Update the remote training system Update the remote training system with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` - `/organization` - `/allowed_identities` - `/remote_admin` Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Create a new remote training system revision Create a new remote training system revision. The current metadata and content for remote_training_system_id will be taken and a new revision created. Either `space_id` or `project_id` has to be provided and is mandatory. Parameters
Class-based Exceptions
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
public instance |
| Retrieve the remote training system revisions Retrieve the remote training system revisions. Parameters
Class-based Exceptions
public instance |
| Create a new WML training Create a new WML training. Parameters
Class-based Exceptions
public instance |
| Cancel the training Cancel the specified training and remove it. Parameters
Class-based Exceptions
public instance |
| Retrieve the training Retrieve the training with the specified identifier. This call supports WebSocket upgrade. However, to preserve bandwidth, WebSocket messages are not context complete: a single message reflects only one message or metric occurring in the context of a training job or sub-job (in the case of experiment trainings or HPO/AutoML trainings). The metadata property of a WebSocket message therefore contains a parent with the href of the parent job that triggered this particular job. Metrics are provided as they arrive from the backend runtime, not as a cumulative list. To get the full view of a running training job, the caller should perform a regular GET call. Parameters
Class-based Exceptions
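Because each WebSocket message carries only the metrics that just arrived, a client that wants the cumulative metric history must accumulate the messages itself (or fall back to the regular GET call). A sketch of that accumulation, using a hypothetical message shape keyed by iteration (the real payload structure is defined by the service):

```python
# Merge per-message metrics into one cumulative list, keyed by
# iteration number. The message shape is hypothetical; this only
# illustrates the "metrics are not cumulative" point above.

def accumulate(messages):
    by_iteration = {}
    for msg in messages:
        for metric in msg.get("metrics", []):
            by_iteration[metric["iteration"]] = metric
    return [by_iteration[i] for i in sorted(by_iteration)]

stream = [
    {"metrics": [{"iteration": 1, "loss": 0.9}]},
    {"metrics": [{"iteration": 2, "loss": 0.7}]},
    {"metrics": [{"iteration": 2, "loss": 0.65}]},   # later update wins
]
history = accumulate(stream)
# history == [{"iteration": 1, "loss": 0.9}, {"iteration": 2, "loss": 0.65}]
```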
public instance |
| Retrieve the list of trainings Retrieve the list of trainings for the specified space or project. Parameters
Class-based Exceptions
public instance |
| Create a new training definition Create a new training definition with the given payload. A training definition represents the training metadata necessary to start a training job. Parameters
Class-based Exceptions
public instance |
| Delete the training definition Delete the training definition with the specified identifier. This will delete all revisions of this training definition as well. For each revision all attachments will also be deleted. Parameters
Class-based Exceptions
public instance |
| Retrieve the training definition Retrieve the training definition with the specified identifier. If the `rev` query parameter is provided, `rev=latest` fetches the latest revision, while `rev={revision_number}` fetches that specific revision. Either `space_id` or `project_id` must be provided. Parameters
Class-based Exceptions
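The `rev` parameter semantics described above (accepting either the literal string `latest` or a revision number) can be sketched as a small resolver. The helper and revision records below are hypothetical; on the real service it is the server, not the client, that resolves `rev`:

```python
# Resolve a `rev` query value against a list of revision records:
# "latest" picks the highest revision number, a numeric string picks
# that exact revision. Hypothetical client-side illustration only.

def resolve_rev(revisions, rev):
    if rev == "latest":
        return max(revisions, key=lambda r: r["rev"])
    wanted = int(rev)
    return next(r for r in revisions if r["rev"] == wanted)

revisions = [{"rev": 1, "name": "v1"}, {"rev": 2, "name": "v2"}]
resolve_rev(revisions, "latest")   # -> {"rev": 2, "name": "v2"}
resolve_rev(revisions, "1")        # -> {"rev": 1, "name": "v1"}
```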
public instance |
| Retrieve the training definitions Retrieve the training definitions for the specified space or project. Parameters
Class-based Exceptions
public instance |
| Update the training definition Update the training definition with the provided patch data. The following fields can be patched: - `/tags` - `/name` - `/description` - `/custom` - `/federated_learning` Parameters
Class-based Exceptions
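Since only the enumerated fields are patchable, a client may want to pre-check a patch against that allow-list before sending it. A hypothetical client-side check (the service enforces the same restriction server-side and would reject other paths):

```python
# Verify a JSON Patch touches only the fields the training-definition
# update endpoint allows. Hypothetical pre-flight check; the service
# performs the authoritative validation.

PATCHABLE = {"/tags", "/name", "/description", "/custom", "/federated_learning"}

def validate_patch(ops):
    bad = [op["path"] for op in ops if op["path"] not in PATCHABLE]
    if bad:
        raise ValueError(f"not patchable: {bad}")
    return True

validate_patch([{"op": "replace", "path": "/name", "value": "fl-def-2"}])  # ok
# validate_patch([{"op": "replace", "path": "/type", "value": "x"}]) raises ValueError
```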
public instance |
| Create a new training definition revision Create a new training definition revision. The current metadata and content for `training_definition_id` will be taken and a new revision created. Either `space_id` or `project_id` must be provided. Parameters
Class-based Exceptions
public instance |
| Retrieve the training definition revisions Retrieve the training definition revisions. Parameters
Class-based Exceptions
|