AI Experiments Client

class ibm_watsonx_gov.ai_experiments.ai_experiments_client.AIExperimentsClient(api_client: APIClient, project_id: str = None, space_id: str = None)

Bases: object

create(experiment_details: AIExperiment) → AIExperiment

Creates an AI experiment asset with the specified details.

Parameters:

experiment_details (AIExperiment) – The instance of AIExperiment having the details of the experiment to be created.

Returns: An instance of AIExperiment.

Examples:

Create an AI Experiment:

# Initialize the API client with credentials
api_client = APIClient(credentials=Credentials(api_key="", url=""))

# Create the AI Experiments client with your project ID
ai_experiment_client = AIExperimentsClient(api_client=api_client, project_id="your_project_id")

# Create the AIExperiment instance
ai_experiment = AIExperiment(name="", description="", component_type="agent", component_name="")

ai_experiment_asset = ai_experiment_client.create(ai_experiment)

create_ai_evaluation_asset(ai_experiment_ids: List[str] = None, ai_experiment_runs: Dict[str, List[AIExperimentRun]] = None, ai_evaluation_details: AIEvaluationAsset = None) → AIEvaluationAsset

Creates an AI Evaluation asset from either experiment IDs or experiment run mappings.

Parameters:
  • ai_experiment_ids (List[str], optional) – A list of AI experiment IDs for which the evaluation asset should be created.

  • ai_experiment_runs (Dict[str, List[AIExperimentRun]], optional) – A dictionary mapping each AI experiment ID (str) to the list of AIExperimentRun objects to include in the evaluation.

  • ai_evaluation_details (AIEvaluationAsset, optional) – An instance of AIEvaluationAsset having the details (name, description and metrics configuration) of the evaluation asset to be created.

Returns:

An instance of AIEvaluationAsset.

Note

Only one of ai_experiment_ids or ai_experiment_runs should be provided.

Examples:

Comparing a list of AI experiments:
# Initialize the API client with credentials
api_client = APIClient(credentials=Credentials(api_key="", url="wos_url"))

# Create the AI Experiments client with your project ID
ai_experiment_client = AIExperimentsClient(api_client=api_client, project_id="your_project_id")

# Create AI Experiments
ai_experiment = ai_experiment_client.create(AIExperiment(name="", description="", component_type="", component_name=""))

# Define evaluation configuration
evaluation_config = EvaluationConfig(
    monitors={
        "agentic_ai_quality": {
            "parameters": {
                "metrics_configuration": {}
            }
        }
    }
)

# Create the evaluation asset
ai_evaluation_asset = AIEvaluationAsset(
    name="AI Evaluation for agent",
    evaluation_configuration=evaluation_config
)

# Compare two or more AI experiments by creating an evaluation asset for them
response = ai_experiment_client.create_ai_evaluation_asset(
    ai_experiment_ids=["experiment_id_1", "experiment_id_2"],
    ai_evaluation_details=ai_evaluation_asset
)

# Link for the AIEvaluationAsset in the Evaluation Studio UI
ai_experiment_client.get_ai_evaluation_asset_href(response)
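
The runs to evaluate can also be selected explicitly through the ai_experiment_runs parameter. A minimal sketch, reusing the client and evaluation details defined above; the experiment IDs are placeholders and the run selection is illustrative:

# Fetch the runs recorded for each experiment
runs_1 = ai_experiment_client.list_experiment_runs("experiment_id_1")
runs_2 = ai_experiment_client.list_experiment_runs("experiment_id_2")

# Map each experiment ID to the runs to include in the evaluation
# (only one of ai_experiment_ids or ai_experiment_runs should be provided)
selected_runs = {
    "experiment_id_1": runs_1,
    "experiment_id_2": runs_2,
}

ai_evaluation = ai_experiment_client.create_ai_evaluation_asset(
    ai_experiment_runs=selected_runs,
    ai_evaluation_details=ai_evaluation_asset
)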

get(ai_experiment_id: str) → AIExperiment

Retrieves AI experiment asset details.

Parameters:

ai_experiment_id (str) – The ID of the AI experiment asset.

Returns: An instance of AIExperiment.
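
A minimal usage sketch, assuming the client was initialized as in the examples above; the experiment ID is a placeholder:

# Retrieve the AI experiment asset by its ID
ai_experiment = ai_experiment_client.get("your_experiment_id")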

get_ai_evaluation_asset(ai_evaluation_asset_id: str) → AIEvaluationAsset

Returns an instance of AIEvaluationAsset with the given ID.

Parameters:

ai_evaluation_asset_id (str) – The asset ID of the AI Evaluation asset.

Returns:

An instance of AIEvaluationAsset with the given asset id.
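
A short sketch, again assuming an initialized client and a placeholder asset ID:

# Retrieve an existing AI Evaluation asset by its asset ID
ai_evaluation_asset = ai_experiment_client.get_ai_evaluation_asset("your_evaluation_asset_id")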

get_ai_evaluation_asset_href(ai_evaluation_asset: AIEvaluationAsset) → str

Returns the URL of the Evaluation Studio UI for the given AI evaluation asset.

Parameters:

ai_evaluation_asset (AIEvaluationAsset) – The AI Evaluation asset details.

Returns:

URL of the Evaluation Studio UI.
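
For example, assuming the evaluation asset was obtained via get_ai_evaluation_asset or create_ai_evaluation_asset as shown earlier:

# Get the Evaluation Studio UI link for the evaluation asset
evaluation_url = ai_experiment_client.get_ai_evaluation_asset_href(ai_evaluation_asset)
print(evaluation_url)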

get_experiment_notebook(ai_experiment_id: str, run_id: str, custom_filename: str | None = None) → str

Download an experiment notebook from a specific AI experiment run.

Parameters:
  • ai_experiment_id (str) – The unique identifier for the AI experiment

  • run_id (str) – The specific run ID within the experiment

  • custom_filename (Optional[str]) – Custom filename for the downloaded file. If None, uses format: "{ai_experiment.source_name}"

Example

>>> client.get_experiment_notebook("exp_123", "run_456")
Downloaded: my_notebook.ipynb
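
A sketch of the same call with a custom target filename; the IDs and filename are placeholders:

>>> client.get_experiment_notebook("exp_123", "run_456", custom_filename="my_agent_run.ipynb")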

list_experiment_runs(ai_experiment_id) → List[AIExperimentRun]

Lists all experiment runs for a given AI experiment in a project.

Parameters:

ai_experiment_id – The ID of the AI experiment asset.

Returns: List of AIExperimentRun instances.
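
A minimal usage sketch with an initialized client and a placeholder experiment ID:

# List all runs recorded for a given AI experiment
experiment_runs = ai_experiment_client.list_experiment_runs("your_experiment_id")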

list_experiments() → List[AIExperiment]

Lists all AI experiments under the selected project.

Returns: List of AIExperiment instances.
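
For example, with a client that was created with a project ID:

# List all AI experiments in the selected project
ai_experiments = ai_experiment_client.list_experiments()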

search(ai_experiment_name: str) → AIExperiment

Searches for an AI experiment with the specified name.

Parameters:

ai_experiment_name (str) – The name of the AI experiment to be searched.

Returns: An instance of AIExperiment.
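
A brief sketch, assuming an initialized client; the experiment name is a placeholder:

# Search for an AI experiment by its name
ai_experiment = ai_experiment_client.search("your_experiment_name")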

update(ai_experiment_id: str, experiment_run_details: AIExperimentRun, evaluation_results=None, track_notebook=False) → AIExperiment

Updates the AI experiment asset details with the given experiment run details.

Parameters:
  • ai_experiment_id (str) – The ID of the AI experiment asset to be updated.

  • experiment_run_details (AIExperimentRun) – An instance of AIExperimentRun, the payload used to create the attachment.

  • evaluation_results (DataFrame | ToolMetricResult, optional) – The content of the attachment to be uploaded as a file.

  • track_notebook (bool) – If set to True, the notebook will be stored as an attachment.

Returns: The updated AI experiment asset details.

Examples:

Updating an AI experiment with the evaluation results:
# Initialize the API client with credentials
api_client = APIClient(credentials=Credentials(api_key="", url="wos_url"))

# Create the AI Experiments client with your project ID
ai_experiment_client = AIExperimentsClient(api_client=api_client, project_id="your_project_id")

# Define the experiment run details
experiment_run_details = AIExperimentRun(run_id=str(uuid.uuid4()), run_name="", test_data={}, node=[])

# run_result should be an instance of ToolMetricResult or a DataFrame

# Update the AI experiment asset with run results
updated_ai_experiment_details = ai_experiment_client.update(
    ai_experiment_id="",
    experiment_run_details=experiment_run_details,
    evaluation_results=run_result
)