Agentic App

pydantic model ibm_watsonx_gov.entities.agentic_app.AgenticApp

Bases: BaseModel

The configuration class representing an agentic application. An agent is composed of a set of nodes. Metrics to be computed at the agent (interaction) level should be specified in metrics_configuration, and metrics to be computed at the node level should be specified in the nodes list.

Examples

  1. Create AgenticApp with agent-level metrics configuration.
    # The example below configures the agent to compute the AnswerRelevanceMetric and all metrics in the Content Safety group at the agent (interaction) level.
    agentic_app = AgenticApp(name="Agentic App",
                        metrics_configuration=MetricsConfiguration(metrics=[AnswerRelevanceMetric()],
                                                                    metric_groups=[MetricGroup.CONTENT_SAFETY]))
    agentic_evaluator = AgenticEvaluator(agentic_app=agentic_app)
    ...
    
  2. Create AgenticApp with agent- and node-level metrics configuration and the default agentic AI configuration for metrics.
    # The example below configures the node to compute the ContextRelevanceMetric and all metrics in the Retrieval Quality group.
    nodes = [Node(name="Retrieval Node",
                metrics_configurations=[MetricsConfiguration(metrics=[ContextRelevanceMetric()],
                                                             metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]
    
    # The example below configures the agent to compute the AnswerRelevanceMetric and all metrics in the Content Safety group at the agent (interaction) level.
    agentic_app = AgenticApp(name="Agentic App",
                        metrics_configuration=MetricsConfiguration(metrics=[AnswerRelevanceMetric()],
                                                                    metric_groups=[MetricGroup.CONTENT_SAFETY]),
                        nodes=nodes)
    agentic_evaluator = AgenticEvaluator(agentic_app=agentic_app)
    ...
    
  3. Create AgenticApp with agent- and node-level metrics configuration and an explicit agentic AI configuration for metrics.
    # The example below configures the node to compute the ContextRelevanceMetric and all metrics in the Retrieval Quality group.
    node_fields_config = {
        "input_fields": ["input"],
        "context_fields": ["web_context"]
    }
    nodes = [Node(name="Retrieval Node",
                metrics_configurations=[MetricsConfiguration(configuration=AgenticAIConfiguration(**node_fields_config),
                                                             metrics=[ContextRelevanceMetric()],
                                                             metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]
    
    # The example below configures the agent to compute the AnswerRelevanceMetric and all metrics in the Content Safety group at the agent (interaction) level.
    agent_fields_config = {
        "input_fields": ["input"],
        "output_fields": ["output"]
    }
    agentic_app = AgenticApp(name="Agentic App",
                        metrics_configuration=MetricsConfiguration(configuration=AgenticAIConfiguration(**agent_fields_config),
                                                                   metrics=[AnswerRelevanceMetric()],
                                                                   metric_groups=[MetricGroup.CONTENT_SAFETY]),
                        nodes=nodes)
    agentic_evaluator = AgenticEvaluator(agentic_app=agentic_app)
    ...
    

Show JSON schema
{
   "title": "AgenticApp",
   "description": "The configuration class representing an agentic application.\nAn agent is composed of a set of nodes.\nThe metrics to be computed at the agent or interaction level should be specified in the metrics_configuration and the metrics to be computed for the node level should be specifed in the nodes list.\n\nExamples:\n    1. Create AgenticApp with agent level metrics configuration. \n        .. code-block:: python\n\n            # Below example provides the agent configuration to compute the AnswerRelevanceMetric and all the metrics in Content Safety group on agent or interaction level.\n            agentic_app = AgenticApp(name=\"Agentic App\",\n                                metrics_configuration=MetricsConfiguration(metrics=[AnswerRelevanceMetric()],\n                                                                            metric_groups=[MetricGroup.CONTENT_SAFETY]))\n            agentic_evaluator = AgenticEvaluator(agentic_app=agentic_app)\n            ...\n\n    2. Create AgenticApp with agent and nodel level metrics configuration and default agentic ai configuration for metrics. \n        .. code-block:: python\n\n            # Below example provides the node configuration to compute the ContextRelevanceMetric and all the metrics in Retrieval Quality group. \n            nodes = [Node(name=\"Retrieval Node\",\n                        metrics_configurations=[MetricsConfiguration(metrics=[ContextRelevanceMetric()],\n                                                                     metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]\n\n            # Below example provides the agent configuration to compute the AnswerRelevanceMetric and all the metrics in Content Safety group on agent or interaction level.\n            agentic_app = AgenticApp(name=\"Agentic App\",\n                                metrics_configuration=MetricsConfiguration(metrics=[AnswerRelevanceMetric()],\n                                                                            metric_groups=[MetricGroup.CONTENT_SAFETY]),\n                                nodes=nodes)\n            agentic_evaluator = AgenticEvaluator(agentic_app=agentic_app)\n            ...\n\n    3. Create AgenticApp with agent and nodel level metrics configuration and with agentic ai configuration for metrics. \n        .. 
code-block:: python\n\n            # Below example provides the node configuration to compute the ContextRelevanceMetric and all the metrics in Retrieval Quality group.\n            node_fields_config = {\n                \"input_fields\": [\"input\"],\n                \"context_fields\": [\"web_context\"]\n            }\n            nodes = [Node(name=\"Retrieval Node\",\n                        metrics_configurations=[MetricsConfiguration(configuration=AgenticAIConfiguration(**node_fields_config)\n                                                                     metrics=[ContextRelevanceMetric()],\n                                                                     metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]\n\n            # Below example provides the agent configuration to compute the AnswerRelevanceMetric and all the metrics in Content Safety group on agent or interaction level.\n            agent_fields_config = {\n                \"input_fields\": [\"input\"],\n                \"output_fields\": [\"output\"]\n            }\n            agentic_app = AgenticApp(name=\"Agentic App\",\n                                metrics_configuration=MetricsConfiguration(configuration=AgenticAIConfiguration(**agent_fields_config)\n                                                                           metrics=[AnswerRelevanceMetric()],\n                                                                           metric_groups=[MetricGroup.CONTENT_SAFETY]),\n                                nodes=nodes)\n            agentic_evaluator = AgenticEvaluator(agentic_app=agentic_app)\n            ...",
   "type": "object",
   "properties": {
      "name": {
         "default": "Agentic App",
         "description": "The name of the agentic application.",
         "title": "Agentic application name",
         "type": "string"
      },
      "metrics_configuration": {
         "anyOf": [
            {
               "$ref": "#/$defs/MetricsConfiguration"
            },
            {
               "type": "null"
            }
         ],
         "default": {
            "configuration": {
               "available_tools_field": "available_tools",
               "context_fields": [
                  "context"
               ],
               "conversation_id_field": "conversation_id",
               "input_fields": [
                  "input_text"
               ],
               "interaction_id_field": "interaction_id",
               "llm_judge": null,
               "locale": null,
               "output_fields": [
                  "generated_text"
               ],
               "prompt_field": "model_prompt",
               "record_id_field": "record_id",
               "record_timestamp_field": "record_timestamp",
               "reference_fields": [
                  "ground_truth"
               ],
               "task_type": null,
               "tool_calls_field": "tool_calls",
               "tools": []
            },
            "metrics": [],
            "metric_groups": []
         },
         "description": "The list of metrics to be computed on the agentic application and their configuration details.",
         "title": "Metrics configuration"
      },
      "nodes": {
         "anyOf": [
            {
               "items": {
                  "$ref": "#/$defs/Node"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": [],
         "description": "The nodes details.",
         "title": "Node details"
      }
   },
   "$defs": {
      "AgenticAIConfiguration": {
         "description": "Defines the AgenticAIConfiguration class.\n\nThe configuration interface for Agentic AI tools and applications.\nThis is used to specify the fields mapping details in the data and other configuration parameters needed for evaluation.\n\nExamples:\n    1. Create configuration with default parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration()\n\n    2. Create configuration with parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration(input_fields=[\"input\"], \n                                                   output_fields=[\"output\"])\n\n    2. Create configuration with dict parameters\n        .. code-block:: python\n\n            config = {\"input_fields\": [\"input\"],\n                      \"output_fields\": [\"output\"],\n                      \"context_fields\": [\"contexts\"],\n                      \"reference_fields\": [\"reference\"]}\n            configuration = AgenticAIConfiguration(**config)",
         "properties": {
            "record_id_field": {
               "default": "record_id",
               "description": "The record identifier field name.",
               "examples": [
                  "record_id"
               ],
               "title": "Record id field",
               "type": "string"
            },
            "record_timestamp_field": {
               "default": "record_timestamp",
               "description": "The record timestamp field name.",
               "examples": [
                  "record_timestamp"
               ],
               "title": "Record timestamp field",
               "type": "string"
            },
            "task_type": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/TaskType"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The generative task type. Default value is None.",
               "examples": [
                  "retrieval_augmented_generation"
               ],
               "title": "Task Type"
            },
            "input_fields": {
               "default": [
                  "input_text"
               ],
               "description": "The list of model input fields in the data. Default value is ['input_text'].",
               "examples": [
                  [
                     "question"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Input Fields",
               "type": "array"
            },
            "context_fields": {
               "default": [
                  "context"
               ],
               "description": "The list of context fields in the input fields. Default value is ['context'].",
               "examples": [
                  [
                     "context1",
                     "context2"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Context Fields",
               "type": "array"
            },
            "output_fields": {
               "default": [
                  "generated_text"
               ],
               "description": "The list of model output fields in the data. Default value is ['generated_text'].",
               "examples": [
                  [
                     "output"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Output Fields",
               "type": "array"
            },
            "reference_fields": {
               "default": [
                  "ground_truth"
               ],
               "description": "The list of reference fields in the data. Default value is ['ground_truth'].",
               "examples": [
                  [
                     "reference"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Reference Fields",
               "type": "array"
            },
            "locale": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/Locale"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The language locale of the input, output and reference fields in the data.",
               "title": "Locale"
            },
            "tools": {
               "default": [],
               "description": "The list of tools used by the LLM.",
               "examples": [
                  [
                     "function1",
                     "function2"
                  ]
               ],
               "items": {
                  "type": "object"
               },
               "title": "Tools",
               "type": "array"
            },
            "tool_calls_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "tool_calls",
               "description": "The tool calls field in the input fields. Default value is 'tool_calls'.",
               "examples": [
                  "tool_calls"
               ],
               "title": "Tool Calls Field"
            },
            "available_tools_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "available_tools",
               "description": "The tool inventory field in the data. Default value is 'available_tools'.",
               "examples": [
                  "available_tools"
               ],
               "title": "Available Tools Field"
            },
            "llm_judge": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LLMJudge"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "LLM as Judge Model details.",
               "title": "LLM Judge"
            },
            "prompt_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "model_prompt",
               "description": "The prompt field in the input fields. Default value is 'model_prompt'.",
               "examples": [
                  "model_prompt"
               ],
               "title": "Model Prompt Field"
            },
            "interaction_id_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "interaction_id",
               "description": "The interaction identifier field name. Default value is 'interaction_id'.",
               "examples": [
                  "interaction_id"
               ],
               "title": "Interaction id field"
            },
            "conversation_id_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "conversation_id",
               "description": "The conversation identifier field name. Default value is 'conversation_id'.",
               "examples": [
                  "conversation_id"
               ],
               "title": "Conversation id field"
            }
         },
         "title": "AgenticAIConfiguration",
         "type": "object"
      },
      "AzureOpenAICredentials": {
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Azure OpenAI url. This attribute can be read from `AZURE_OPENAI_HOST` environment variable.",
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "API key for Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_KEY` environment variable.",
               "title": "Api Key"
            },
            "api_version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "The model API version from Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_VERSION` environment variable.",
               "title": "Api Version"
            }
         },
         "required": [
            "url",
            "api_key",
            "api_version"
         ],
         "title": "AzureOpenAICredentials",
         "type": "object"
      },
      "AzureOpenAIFoundationModel": {
         "description": "The Azure OpenAI foundation model details\n\nExamples:\n    1. Create Azure OpenAI foundation model by passing the credentials during object creation.\n        .. code-block:: python\n\n            azure_openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=AzureOpenAIModelProvider(\n                    credentials=AzureOpenAICredentials(\n                        api_key=azure_api_key,\n                        url=azure_host_url,\n                        api_version=azure_api_model_version,\n                    )\n                )\n            )\n\n2. Create Azure OpenAI foundation model by setting the credentials in environment variables:\n    * ``AZURE_OPENAI_API_KEY`` is used to set the api key for OpenAI.\n    * ``AZURE_OPENAI_HOST`` is used to set the url for Azure OpenAI.\n    * ``AZURE_OPENAI_API_VERSION`` is uses to set the the api version for Azure OpenAI.\n\n        .. code-block:: python\n\n            openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/AzureOpenAIModelProvider",
               "description": "Azure OpenAI provider"
            },
            "model_id": {
               "description": "Model deployment name from Azure OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "AzureOpenAIFoundationModel",
         "type": "object"
      },
      "AzureOpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "azure_openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/AzureOpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Azure OpenAI credentials."
            }
         },
         "title": "AzureOpenAIModelProvider",
         "type": "object"
      },
      "FoundationModelInfo": {
         "description": "Represents a foundation model used in an experiment.",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "model_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The id of the foundation model.",
               "title": "Model Id"
            },
            "provider": {
               "description": "The provider of the foundation model.",
               "title": "Provider",
               "type": "string"
            },
            "type": {
               "description": "The type of foundation model.",
               "example": [
                  "chat",
                  "embedding",
                  "text-generation"
               ],
               "title": "Type",
               "type": "string"
            }
         },
         "required": [
            "provider",
            "type"
         ],
         "title": "FoundationModelInfo",
         "type": "object"
      },
      "GenAIMetric": {
         "description": "Defines the Generative AI metric interface",
         "properties": {
            "name": {
               "description": "The name of the metric",
               "title": "Metric Name",
               "type": "string"
            },
            "thresholds": {
               "default": [],
               "description": "The list of thresholds",
               "items": {
                  "$ref": "#/$defs/MetricThreshold"
               },
               "title": "Thresholds",
               "type": "array"
            },
            "tasks": {
               "description": "The task types this metric is associated with.",
               "items": {
                  "$ref": "#/$defs/TaskType"
               },
               "title": "Tasks",
               "type": "array"
            },
            "group": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/MetricGroup"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The metric group this metric belongs to."
            },
            "is_reference_free": {
               "default": true,
               "description": "Decides whether this metric needs a reference for computation",
               "title": "Is Reference Free",
               "type": "boolean"
            },
            "method": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The method used to compute the metric.",
               "title": "Method"
            },
            "metric_dependencies": {
               "default": [],
               "description": "Metrics that needs to be evaluated first",
               "items": {
                  "$ref": "#/$defs/GenAIMetric"
               },
               "title": "Metric Dependencies",
               "type": "array"
            }
         },
         "required": [
            "name",
            "tasks"
         ],
         "title": "GenAIMetric",
         "type": "object"
      },
      "LLMJudge": {
         "description": "Defines the LLMJudge.\n\nThe LLMJudge class contains the details of the llm judge model to be used for computing the metric.\n\nExamples:\n    1. Create LLMJudge using watsonx.ai foundation model:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=PROJECT_ID,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(api_key=wx_apikey)\n                )\n            )\n            llm_judge = LLMJudge(model=wx_ai_foundation_model)",
         "properties": {
            "model": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/OpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/AzureOpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/RITSFoundationModel"
                  }
               ],
               "description": "The foundation model to be used as judge",
               "title": "Model"
            }
         },
         "required": [
            "model"
         ],
         "title": "LLMJudge",
         "type": "object"
      },
      "Locale": {
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Input"
            },
            "output": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Output"
            },
            "reference": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Reference"
            }
         },
         "title": "Locale",
         "type": "object"
      },
      "MetricGroup": {
         "enum": [
            "retrieval_quality",
            "answer_quality",
            "content_safety",
            "performance",
            "usage",
            "tool_call_quality",
            "readability"
         ],
         "title": "MetricGroup",
         "type": "string"
      },
      "MetricThreshold": {
         "description": "The class that defines the threshold for a metric.",
         "properties": {
            "type": {
               "description": "Threshold type. One of 'lower_limit', 'upper_limit'",
               "enum": [
                  "lower_limit",
                  "upper_limit"
               ],
               "title": "Type",
               "type": "string"
            },
            "value": {
               "default": 0,
               "description": "The value of metric threshold",
               "title": "Threshold value",
               "type": "number"
            }
         },
         "required": [
            "type"
         ],
         "title": "MetricThreshold",
         "type": "object"
      },
      "MetricsConfiguration": {
         "description": "The class representing the metrics to be computed and the configuration details required for them.\n\nExamples:\n    1. Create MetricsConfiguration with default agentic ai configuration\n        .. code-block:: python\n\n            metrics_configuration = MetricsConfiguration(metrics=[ContextRelevanceMetric()],\n                                                         metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])\n\n    2. Create MetricsConfiguration by specifying agentic ai configuration\n        .. code-block:: python\n\n            config = {\n                \"input_fields\": [\"input\"],\n                \"context_fields\": [\"contexts\"]\n            }\n            metrics_configuration = MetricsConfiguration(configuration=AgenticAIConfiguration(**config)\n                                                           metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])",
         "properties": {
            "configuration": {
               "$ref": "#/$defs/AgenticAIConfiguration",
               "default": {
                  "record_id_field": "record_id",
                  "record_timestamp_field": "record_timestamp",
                  "task_type": null,
                  "input_fields": [
                     "input_text"
                  ],
                  "context_fields": [
                     "context"
                  ],
                  "output_fields": [
                     "generated_text"
                  ],
                  "reference_fields": [
                     "ground_truth"
                  ],
                  "locale": null,
                  "tools": [],
                  "tool_calls_field": "tool_calls",
                  "available_tools_field": "available_tools",
                  "llm_judge": null,
                  "prompt_field": "model_prompt",
                  "interaction_id_field": "interaction_id",
                  "conversation_id_field": "conversation_id"
               },
               "description": "The configuration of the metrics to compute. The configuration contains the fields names to be read when computing the metrics.",
               "title": "Metrics configuration"
            },
            "metrics": {
               "anyOf": [
                  {
                     "items": {
                        "$ref": "#/$defs/GenAIMetric"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": [],
               "description": "The list of metrics to compute.",
               "title": "Metrics"
            },
            "metric_groups": {
               "anyOf": [
                  {
                     "items": {
                        "$ref": "#/$defs/MetricGroup"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": [],
               "description": "The list of metric groups to compute.",
               "title": "Metric Groups"
            }
         },
         "title": "MetricsConfiguration",
         "type": "object"
      },
      "ModelProviderType": {
         "description": "Supported model provider types for Generative AI",
         "enum": [
            "ibm_watsonx.ai",
            "azure_openai",
            "rits",
            "openai",
            "custom"
         ],
         "title": "ModelProviderType",
         "type": "string"
      },
      "Node": {
         "description": "The class representing a node in an agentic application.\n\nExamples:\n    1. Create Node with metrics configuration and default agentic ai configuration\n        .. code-block:: python\n\n            metrics_configurations = [MetricsConfiguration(metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]\n            node = Node(name=\"Retrieval Node\",\n                        metrics_configurations=metrics_configurations)\n\n    2. Create Node with metrics configuration and specifying agentic ai configuration\n        .. code-block:: python\n\n            node_config = {\"input_fields\": [\"input\"],\n                           \"output_fields\": [\"output\"],\n                           \"context_fields\": [\"contexts\"],\n                           \"reference_fields\": [\"reference\"]}\n            metrics_configurations = [MetricsConfiguration(configuration=AgenticAIConfiguration(**node_config)\n                                                           metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]\n            node = Node(name=\"Retrieval Node\",\n                        metrics_configurations=metrics_configurations)",
         "properties": {
            "name": {
               "description": "The name of the node.",
               "title": "Name",
               "type": "string"
            },
            "func_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the node function.",
               "title": "Node function name"
            },
            "metrics_configurations": {
               "default": [],
               "description": "The list of metrics and their configuration details.",
               "items": {
                  "$ref": "#/$defs/MetricsConfiguration"
               },
               "title": "Metrics configuration",
               "type": "array"
            },
            "foundation_models": {
               "default": [],
               "description": "The Foundation models invoked by the node",
               "items": {
                  "$ref": "#/$defs/FoundationModelInfo"
               },
               "title": "Foundation Models",
               "type": "array"
            }
         },
         "required": [
            "name"
         ],
         "title": "Node",
         "type": "object"
      },
      "OpenAICredentials": {
         "description": "Defines the OpenAICredentials class to specify the OpenAI server details.\n\nExamples:\n    1. Create OpenAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            openai_credentials = OpenAICredentials(api_key=api_key,\n                                                   url=openai_url)\n\n    2. Create OpenAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"OPENAI_API_KEY\"] = \"...\"\n            os.environ[\"OPENAI_URL\"] = \"...\"\n            openai_credentials = OpenAICredentials.create_from_env()",
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Api Key"
            }
         },
         "required": [
            "url",
            "api_key"
         ],
         "title": "OpenAICredentials",
         "type": "object"
      },
      "OpenAIFoundationModel": {
         "description": "The OpenAI foundation model details\n\nExamples:\n    1. Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to ``OpenAICredentials`` object.\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=OpenAIModelProvider(\n                    credentials=OpenAICredentials(\n                        api_key=api_key,\n                        url=openai_url,\n                    )\n                )\n            )\n\n    2. Create OpenAI foundation model by setting the credentials in environment variables:\n        * ``OPENAI_API_KEY`` is used to set the api key for OpenAI.\n        * ``OPENAI_URL`` is used to set the url for OpenAI\n\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/OpenAIModelProvider",
               "description": "OpenAI provider"
            },
            "model_id": {
               "description": "Model name from OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "OpenAIFoundationModel",
         "type": "object"
      },
      "OpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/OpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "OpenAI credentials. This can also be set by using `OPENAI_API_KEY` environment variable."
            }
         },
         "title": "OpenAIModelProvider",
         "type": "object"
      },
      "RITSCredentials": {
         "properties": {
            "hostname": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "https://inference-3scale-apicast-production.apps.rits.fmaas.res.ibm.com",
               "description": "The rits hostname",
               "title": "Hostname"
            },
            "api_key": {
               "title": "Api Key",
               "type": "string"
            }
         },
         "required": [
            "api_key"
         ],
         "title": "RITSCredentials",
         "type": "object"
      },
      "RITSFoundationModel": {
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/RITSModelProvider",
               "description": "The provider of the model."
            }
         },
         "title": "RITSFoundationModel",
         "type": "object"
      },
      "RITSModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "rits",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/RITSCredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "RITS credentials."
            }
         },
         "title": "RITSModelProvider",
         "type": "object"
      },
      "TaskType": {
         "description": "Supported task types for generative AI models",
         "enum": [
            "question_answering",
            "classification",
            "summarization",
            "generation",
            "extraction",
            "retrieval_augmented_generation"
         ],
         "title": "TaskType",
         "type": "string"
      },
      "WxAICredentials": {
         "description": "Defines the WxAICredentials class to specify the watsonx.ai server details.\n\nExamples:\n    1. Create WxAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\")\n\n    2. Create WxAICredentials by specifying region url.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\",\n                                               url=\"https://au-syd.ml.cloud.ibm.com\")\n\n    3. Create WxAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # [Optional] Specify watsonx region specific url. Default is https://us-south.ml.cloud.ibm.com .\n            os.environ[\"WATSONX_URL\"] = \"https://eu-gb.ml.cloud.ibm.com\"\n            wxai_credentials = WxAICredentials.create_from_env()\n\n    4. Create WxAICredentials for on-prem.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(url=\"https://<hostname>\",\n                                               username=\"...\"\n                                               api_key=\"...\",\n                                               version=\"5.2\")\n\n    5. Create WxAICredentials by reading from environment variables for on-prem.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_URL\"] = \"https://<hostname>\"\n            os.environ[\"WATSONX_VERSION\"] = \"5.2\"\n            os.environ[\"WATSONX_USERNAME\"] = \"...\"\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # Only one of api_key or password is needed\n            #os.environ[\"WATSONX_PASSWORD\"] = \"...\"\n            wxai_credentials = WxAICredentials.create_from_env()",
         "properties": {
            "url": {
               "default": "https://us-south.ml.cloud.ibm.com",
               "description": "The url for watsonx ai service",
               "examples": [
                  "https://us-south.ml.cloud.ibm.com",
                  "https://eu-de.ml.cloud.ibm.com",
                  "https://eu-gb.ml.cloud.ibm.com",
                  "https://jp-tok.ml.cloud.ibm.com",
                  "https://au-syd.ml.cloud.ibm.com"
               ],
               "title": "watsonx.ai url",
               "type": "string"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user api key. Required for using watsonx as a service and one of api_key or password is required for using watsonx on-prem software.",
               "strip_whitespace": true,
               "title": "Api Key"
            },
            "version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The watsonx on-prem software version. Required for using watsonx on-prem software.",
               "title": "Version"
            },
            "username": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user name. Required for using watsonx on-prem software.",
               "title": "User name"
            },
            "password": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user password. One of api_key or password is required for using watsonx on-prem software.",
               "title": "Password"
            },
            "instance_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "openshift",
               "description": "The watsonx.ai instance id. Default value is openshift.",
               "title": "Instance id"
            }
         },
         "title": "WxAICredentials",
         "type": "object"
      },
      "WxAIFoundationModel": {
         "description": "The IBM watsonx.ai foundation model details\n\nTo initialize the foundation model, you can either pass in the credentials directly or set the environment.\nYou can follow these examples to create the provider.\n\nExamples:\n    1. Create foundation model by specifying the credentials during object creation:\n        .. code-block:: python\n\n            # Specify the credentials during object creation\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url, # This is optional field, by default US-Dallas region is selected\n                        api_key=wx_apikey,\n                    )\n                )\n            )\n\n    2. Create foundation model by setting the credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url is optional and will be set to US-Dallas region by default. It can be set using one of the environment variables ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n            )\n\n    3. Create foundation model by specifying watsonx.governance software credentials during object creation:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url,\n                        api_key=wx_apikey,\n                        username=wx_username,\n                        version=wx_version,\n                    )\n                )\n            )\n\n    4. Create foundation model by setting watsonx.governance software credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url can be set using one of these environment variable ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n        * The username can be set using one of these environment variable ``WXAI_USERNAME``, ``WATSONX_USERNAME``, or ``WXG_USERNAME``. These will be read in the order of precedence.\n        * The version of watsonx.governance software can be set using one of these environment variable ``WXAI_VERSION``, ``WATSONX_VERSION``, or ``WXG_VERSION``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/WxAIModelProvider",
               "description": "The provider of the model."
            },
            "model_id": {
               "description": "The unique identifier for the watsonx.ai model.",
               "title": "Model Id",
               "type": "string"
            },
            "project_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The project ID associated with the model.",
               "title": "Project Id"
            },
            "space_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The space ID associated with the model.",
               "title": "Space Id"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "WxAIFoundationModel",
         "type": "object"
      },
      "WxAIModelProvider": {
         "description": "This class represents a model provider configuration for IBM watsonx.ai. It includes the provider type and\ncredentials required to authenticate and interact with the watsonx.ai platform. If credentials are not explicitly\nprovided, it attempts to load them from environment variables.\n\nExamples:\n    1. Create provider using credentials object:\n        .. code-block:: python\n\n            credentials = WxAICredentials(\n                url=\"https://us-south.ml.cloud.ibm.com\",\n                api_key=\"your-api-key\"\n            )\n            provider = WxAIModelProvider(credentials=credentials)\n\n    2. Create provider using environment variables:\n        .. code-block:: python\n\n            import os\n\n            os.environ['WATSONX_URL'] = \"https://us-south.ml.cloud.ibm.com\"\n            os.environ['WATSONX_APIKEY'] = \"your-api-key\"\n\n            provider = WxAIModelProvider()",
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "ibm_watsonx.ai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The credentials used to authenticate with watsonx.ai. If not provided, they will be loaded from environment variables."
            }
         },
         "title": "WxAIModelProvider",
         "type": "object"
      }
   }
}

Fields:
field metrics_configuration: Annotated[MetricsConfiguration | None, FieldInfo(annotation=NoneType, required=False, default=MetricsConfiguration(configuration=AgenticAIConfiguration(record_id_field='record_id', record_timestamp_field='record_timestamp', task_type=None, input_fields=['input_text'], context_fields=['context'], output_fields=['generated_text'], reference_fields=['ground_truth'], locale=None, tools=[], tool_calls_field='tool_calls', available_tools_field='available_tools', llm_judge=None, prompt_field='model_prompt', interaction_id_field='interaction_id', conversation_id_field='conversation_id'), metrics=[], metric_groups=[]), title='Metrics configuration', description='The list of metrics to be computed on the agentic application and their configuration details.')] = MetricsConfiguration(configuration=AgenticAIConfiguration(record_id_field='record_id', record_timestamp_field='record_timestamp', task_type=None, input_fields=['input_text'], context_fields=['context'], output_fields=['generated_text'], reference_fields=['ground_truth'], locale=None, tools=[], tool_calls_field='tool_calls', available_tools_field='available_tools', llm_judge=None, prompt_field='model_prompt', interaction_id_field='interaction_id', conversation_id_field='conversation_id'), metrics=[], metric_groups=[])

The list of metrics to be computed on the agentic application and their configuration details.

field name: Annotated[str, FieldInfo(annotation=NoneType, required=False, default='Agentic App', title='Agentic application name', description='The name of the agentic application.')] = 'Agentic App'

The name of the agentic application.

field nodes: Annotated[list[Node] | None, FieldInfo(annotation=NoneType, required=False, default=[], title='Node details', description='The nodes details.')] = []

The details of the nodes.
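
A minimal usage sketch of these fields, constructing the model with its documented defaults (the values in the comments reflect the defaults listed above; model_dump_json is the generic pydantic v2 serializer, not an API specific to this library):

    from ibm_watsonx_gov.entities.agentic_app import AgenticApp

    # Construct with defaults and inspect the documented field values.
    app = AgenticApp()
    print(app.name)    # 'Agentic App'
    print(app.nodes)   # []
    print(app.metrics_configuration.configuration.input_fields)  # ['input_text']

    # Serialize the configuration; the output mirrors the JSON schema shown above.
    print(app.model_dump_json(indent=2))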

classmethod model_validate_json(json_data, **kwargs)

Usage docs: https://docs.pydantic.dev/2.10/concepts/json/#json-parsing

Validate the given JSON data against the Pydantic model.

Parameters:
  • json_data – The JSON data to validate.

  • strict – Whether to enforce types strictly.

  • context – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

Raises:

ValidationError – If json_data is not a JSON string or the object could not be validated.
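
For example, a configuration serialized elsewhere can be reconstructed from its JSON string. A minimal sketch, assuming a hypothetical payload (strict and context are the standard pydantic keyword arguments described above and can be passed through **kwargs):

    from ibm_watsonx_gov.entities.agentic_app import AgenticApp

    # Hypothetical JSON payload describing an agentic application.
    payload = '{"name": "Agentic App", "nodes": []}'

    # Validate the JSON string against the model; raises ValidationError on invalid input.
    app = AgenticApp.model_validate_json(payload)
    print(app.name)  # 'Agentic App'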

pydantic model ibm_watsonx_gov.entities.agentic_app.MetricsConfiguration

Bases: BaseModel

The class representing the metrics to be computed and the configuration details required for them.

Examples

  1. Create MetricsConfiguration with default agentic ai configuration
    metrics_configuration = MetricsConfiguration(metrics=[ContextRelevanceMetric()],
                                                 metric_groups=[MetricGroup.RETRIEVAL_QUALITY])
    
  2. Create MetricsConfiguration by specifying agentic ai configuration
    config = {
        "input_fields": ["input"],
        "context_fields": ["contexts"]
    }
    metrics_configuration = MetricsConfiguration(configuration=AgenticAIConfiguration(**config),
                                                 metrics=[ContextRelevanceMetric()],
                                                 metric_groups=[MetricGroup.RETRIEVAL_QUALITY])
    

Show JSON schema
{
   "title": "MetricsConfiguration",
   "description": "The class representing the metrics to be computed and the configuration details required for them.\n\nExamples:\n    1. Create MetricsConfiguration with default agentic ai configuration\n        .. code-block:: python\n\n            metrics_configuration = MetricsConfiguration(metrics=[ContextRelevanceMetric()],\n                                                         metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])\n\n    2. Create MetricsConfiguration by specifying agentic ai configuration\n        .. code-block:: python\n\n            config = {\n                \"input_fields\": [\"input\"],\n                \"context_fields\": [\"contexts\"]\n            }\n            metrics_configuration = MetricsConfiguration(configuration=AgenticAIConfiguration(**config)\n                                                           metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])",
   "type": "object",
   "properties": {
      "configuration": {
         "$ref": "#/$defs/AgenticAIConfiguration",
         "default": {
            "record_id_field": "record_id",
            "record_timestamp_field": "record_timestamp",
            "task_type": null,
            "input_fields": [
               "input_text"
            ],
            "context_fields": [
               "context"
            ],
            "output_fields": [
               "generated_text"
            ],
            "reference_fields": [
               "ground_truth"
            ],
            "locale": null,
            "tools": [],
            "tool_calls_field": "tool_calls",
            "available_tools_field": "available_tools",
            "llm_judge": null,
            "prompt_field": "model_prompt",
            "interaction_id_field": "interaction_id",
            "conversation_id_field": "conversation_id"
         },
         "description": "The configuration of the metrics to compute. The configuration contains the fields names to be read when computing the metrics.",
         "title": "Metrics configuration"
      },
      "metrics": {
         "anyOf": [
            {
               "items": {
                  "$ref": "#/$defs/GenAIMetric"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": [],
         "description": "The list of metrics to compute.",
         "title": "Metrics"
      },
      "metric_groups": {
         "anyOf": [
            {
               "items": {
                  "$ref": "#/$defs/MetricGroup"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": [],
         "description": "The list of metric groups to compute.",
         "title": "Metric Groups"
      }
   },
   "$defs": {
      "AgenticAIConfiguration": {
         "description": "Defines the AgenticAIConfiguration class.\n\nThe configuration interface for Agentic AI tools and applications.\nThis is used to specify the fields mapping details in the data and other configuration parameters needed for evaluation.\n\nExamples:\n    1. Create configuration with default parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration()\n\n    2. Create configuration with parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration(input_fields=[\"input\"], \n                                                   output_fields=[\"output\"])\n\n    2. Create configuration with dict parameters\n        .. code-block:: python\n\n            config = {\"input_fields\": [\"input\"],\n                      \"output_fields\": [\"output\"],\n                      \"context_fields\": [\"contexts\"],\n                      \"reference_fields\": [\"reference\"]}\n            configuration = AgenticAIConfiguration(**config)",
         "properties": {
            "record_id_field": {
               "default": "record_id",
               "description": "The record identifier field name.",
               "examples": [
                  "record_id"
               ],
               "title": "Record id field",
               "type": "string"
            },
            "record_timestamp_field": {
               "default": "record_timestamp",
               "description": "The record timestamp field name.",
               "examples": [
                  "record_timestamp"
               ],
               "title": "Record timestamp field",
               "type": "string"
            },
            "task_type": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/TaskType"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The generative task type. Default value is None.",
               "examples": [
                  "retrieval_augmented_generation"
               ],
               "title": "Task Type"
            },
            "input_fields": {
               "default": [
                  "input_text"
               ],
               "description": "The list of model input fields in the data. Default value is ['input_text'].",
               "examples": [
                  [
                     "question"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Input Fields",
               "type": "array"
            },
            "context_fields": {
               "default": [
                  "context"
               ],
               "description": "The list of context fields in the input fields. Default value is ['context'].",
               "examples": [
                  [
                     "context1",
                     "context2"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Context Fields",
               "type": "array"
            },
            "output_fields": {
               "default": [
                  "generated_text"
               ],
               "description": "The list of model output fields in the data. Default value is ['generated_text'].",
               "examples": [
                  [
                     "output"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Output Fields",
               "type": "array"
            },
            "reference_fields": {
               "default": [
                  "ground_truth"
               ],
               "description": "The list of reference fields in the data. Default value is ['ground_truth'].",
               "examples": [
                  [
                     "reference"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Reference Fields",
               "type": "array"
            },
            "locale": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/Locale"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The language locale of the input, output and reference fields in the data.",
               "title": "Locale"
            },
            "tools": {
               "default": [],
               "description": "The list of tools used by the LLM.",
               "examples": [
                  [
                     "function1",
                     "function2"
                  ]
               ],
               "items": {
                  "type": "object"
               },
               "title": "Tools",
               "type": "array"
            },
            "tool_calls_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "tool_calls",
               "description": "The tool calls field in the input fields. Default value is 'tool_calls'.",
               "examples": [
                  "tool_calls"
               ],
               "title": "Tool Calls Field"
            },
            "available_tools_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "available_tools",
               "description": "The tool inventory field in the data. Default value is 'available_tools'.",
               "examples": [
                  "available_tools"
               ],
               "title": "Available Tools Field"
            },
            "llm_judge": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LLMJudge"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "LLM as Judge Model details.",
               "title": "LLM Judge"
            },
            "prompt_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "model_prompt",
               "description": "The prompt field in the input fields. Default value is 'model_prompt'.",
               "examples": [
                  "model_prompt"
               ],
               "title": "Model Prompt Field"
            },
            "interaction_id_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "interaction_id",
               "description": "The interaction identifier field name. Default value is 'interaction_id'.",
               "examples": [
                  "interaction_id"
               ],
               "title": "Interaction id field"
            },
            "conversation_id_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "conversation_id",
               "description": "The conversation identifier field name. Default value is 'conversation_id'.",
               "examples": [
                  "conversation_id"
               ],
               "title": "Conversation id field"
            }
         },
         "title": "AgenticAIConfiguration",
         "type": "object"
      },
      "AzureOpenAICredentials": {
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Azure OpenAI url. This attribute can be read from `AZURE_OPENAI_HOST` environment variable.",
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "API key for Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_KEY` environment variable.",
               "title": "Api Key"
            },
            "api_version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "The model API version from Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_VERSION` environment variable.",
               "title": "Api Version"
            }
         },
         "required": [
            "url",
            "api_key",
            "api_version"
         ],
         "title": "AzureOpenAICredentials",
         "type": "object"
      },
      "AzureOpenAIFoundationModel": {
         "description": "The Azure OpenAI foundation model details\n\nExamples:\n    1. Create Azure OpenAI foundation model by passing the credentials during object creation.\n        .. code-block:: python\n\n            azure_openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=AzureOpenAIModelProvider(\n                    credentials=AzureOpenAICredentials(\n                        api_key=azure_api_key,\n                        url=azure_host_url,\n                        api_version=azure_api_model_version,\n                    )\n                )\n            )\n\n2. Create Azure OpenAI foundation model by setting the credentials in environment variables:\n    * ``AZURE_OPENAI_API_KEY`` is used to set the api key for OpenAI.\n    * ``AZURE_OPENAI_HOST`` is used to set the url for Azure OpenAI.\n    * ``AZURE_OPENAI_API_VERSION`` is uses to set the the api version for Azure OpenAI.\n\n        .. code-block:: python\n\n            openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/AzureOpenAIModelProvider",
               "description": "Azure OpenAI provider"
            },
            "model_id": {
               "description": "Model deployment name from Azure OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "AzureOpenAIFoundationModel",
         "type": "object"
      },
      "AzureOpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "azure_openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/AzureOpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Azure OpenAI credentials."
            }
         },
         "title": "AzureOpenAIModelProvider",
         "type": "object"
      },
      "GenAIMetric": {
         "description": "Defines the Generative AI metric interface",
         "properties": {
            "name": {
               "description": "The name of the metric",
               "title": "Metric Name",
               "type": "string"
            },
            "thresholds": {
               "default": [],
               "description": "The list of thresholds",
               "items": {
                  "$ref": "#/$defs/MetricThreshold"
               },
               "title": "Thresholds",
               "type": "array"
            },
            "tasks": {
               "description": "The task types this metric is associated with.",
               "items": {
                  "$ref": "#/$defs/TaskType"
               },
               "title": "Tasks",
               "type": "array"
            },
            "group": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/MetricGroup"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The metric group this metric belongs to."
            },
            "is_reference_free": {
               "default": true,
               "description": "Decides whether this metric needs a reference for computation",
               "title": "Is Reference Free",
               "type": "boolean"
            },
            "method": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The method used to compute the metric.",
               "title": "Method"
            },
            "metric_dependencies": {
               "default": [],
               "description": "Metrics that needs to be evaluated first",
               "items": {
                  "$ref": "#/$defs/GenAIMetric"
               },
               "title": "Metric Dependencies",
               "type": "array"
            }
         },
         "required": [
            "name",
            "tasks"
         ],
         "title": "GenAIMetric",
         "type": "object"
      },
      "LLMJudge": {
         "description": "Defines the LLMJudge.\n\nThe LLMJudge class contains the details of the llm judge model to be used for computing the metric.\n\nExamples:\n    1. Create LLMJudge using watsonx.ai foundation model:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=PROJECT_ID,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(api_key=wx_apikey)\n                )\n            )\n            llm_judge = LLMJudge(model=wx_ai_foundation_model)",
         "properties": {
            "model": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/OpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/AzureOpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/RITSFoundationModel"
                  }
               ],
               "description": "The foundation model to be used as judge",
               "title": "Model"
            }
         },
         "required": [
            "model"
         ],
         "title": "LLMJudge",
         "type": "object"
      },
      "Locale": {
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Input"
            },
            "output": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Output"
            },
            "reference": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Reference"
            }
         },
         "title": "Locale",
         "type": "object"
      },
      "MetricGroup": {
         "enum": [
            "retrieval_quality",
            "answer_quality",
            "content_safety",
            "performance",
            "usage",
            "tool_call_quality",
            "readability"
         ],
         "title": "MetricGroup",
         "type": "string"
      },
      "MetricThreshold": {
         "description": "The class that defines the threshold for a metric.",
         "properties": {
            "type": {
               "description": "Threshold type. One of 'lower_limit', 'upper_limit'",
               "enum": [
                  "lower_limit",
                  "upper_limit"
               ],
               "title": "Type",
               "type": "string"
            },
            "value": {
               "default": 0,
               "description": "The value of metric threshold",
               "title": "Threshold value",
               "type": "number"
            }
         },
         "required": [
            "type"
         ],
         "title": "MetricThreshold",
         "type": "object"
      },
      "ModelProviderType": {
         "description": "Supported model provider types for Generative AI",
         "enum": [
            "ibm_watsonx.ai",
            "azure_openai",
            "rits",
            "openai",
            "custom"
         ],
         "title": "ModelProviderType",
         "type": "string"
      },
      "OpenAICredentials": {
         "description": "Defines the OpenAICredentials class to specify the OpenAI server details.\n\nExamples:\n    1. Create OpenAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            openai_credentials = OpenAICredentials(api_key=api_key,\n                                                   url=openai_url)\n\n    2. Create OpenAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"OPENAI_API_KEY\"] = \"...\"\n            os.environ[\"OPENAI_URL\"] = \"...\"\n            openai_credentials = OpenAICredentials.create_from_env()",
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Api Key"
            }
         },
         "required": [
            "url",
            "api_key"
         ],
         "title": "OpenAICredentials",
         "type": "object"
      },
      "OpenAIFoundationModel": {
         "description": "The OpenAI foundation model details\n\nExamples:\n    1. Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to ``OpenAICredentials`` object.\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=OpenAIModelProvider(\n                    credentials=OpenAICredentials(\n                        api_key=api_key,\n                        url=openai_url,\n                    )\n                )\n            )\n\n    2. Create OpenAI foundation model by setting the credentials in environment variables:\n        * ``OPENAI_API_KEY`` is used to set the api key for OpenAI.\n        * ``OPENAI_URL`` is used to set the url for OpenAI\n\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/OpenAIModelProvider",
               "description": "OpenAI provider"
            },
            "model_id": {
               "description": "Model name from OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "OpenAIFoundationModel",
         "type": "object"
      },
      "OpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/OpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "OpenAI credentials. This can also be set by using `OPENAI_API_KEY` environment variable."
            }
         },
         "title": "OpenAIModelProvider",
         "type": "object"
      },
      "RITSCredentials": {
         "properties": {
            "hostname": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "https://inference-3scale-apicast-production.apps.rits.fmaas.res.ibm.com",
               "description": "The rits hostname",
               "title": "Hostname"
            },
            "api_key": {
               "title": "Api Key",
               "type": "string"
            }
         },
         "required": [
            "api_key"
         ],
         "title": "RITSCredentials",
         "type": "object"
      },
      "RITSFoundationModel": {
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/RITSModelProvider",
               "description": "The provider of the model."
            }
         },
         "title": "RITSFoundationModel",
         "type": "object"
      },
      "RITSModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "rits",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/RITSCredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "RITS credentials."
            }
         },
         "title": "RITSModelProvider",
         "type": "object"
      },
      "TaskType": {
         "description": "Supported task types for generative AI models",
         "enum": [
            "question_answering",
            "classification",
            "summarization",
            "generation",
            "extraction",
            "retrieval_augmented_generation"
         ],
         "title": "TaskType",
         "type": "string"
      },
      "WxAICredentials": {
         "description": "Defines the WxAICredentials class to specify the watsonx.ai server details.\n\nExamples:\n    1. Create WxAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\")\n\n    2. Create WxAICredentials by specifying region url.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\",\n                                               url=\"https://au-syd.ml.cloud.ibm.com\")\n\n    3. Create WxAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # [Optional] Specify watsonx region specific url. Default is https://us-south.ml.cloud.ibm.com .\n            os.environ[\"WATSONX_URL\"] = \"https://eu-gb.ml.cloud.ibm.com\"\n            wxai_credentials = WxAICredentials.create_from_env()\n\n    4. Create WxAICredentials for on-prem.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(url=\"https://<hostname>\",\n                                               username=\"...\"\n                                               api_key=\"...\",\n                                               version=\"5.2\")\n\n    5. Create WxAICredentials by reading from environment variables for on-prem.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_URL\"] = \"https://<hostname>\"\n            os.environ[\"WATSONX_VERSION\"] = \"5.2\"\n            os.environ[\"WATSONX_USERNAME\"] = \"...\"\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # Only one of api_key or password is needed\n            #os.environ[\"WATSONX_PASSWORD\"] = \"...\"\n            wxai_credentials = WxAICredentials.create_from_env()",
         "properties": {
            "url": {
               "default": "https://us-south.ml.cloud.ibm.com",
               "description": "The url for watsonx ai service",
               "examples": [
                  "https://us-south.ml.cloud.ibm.com",
                  "https://eu-de.ml.cloud.ibm.com",
                  "https://eu-gb.ml.cloud.ibm.com",
                  "https://jp-tok.ml.cloud.ibm.com",
                  "https://au-syd.ml.cloud.ibm.com"
               ],
               "title": "watsonx.ai url",
               "type": "string"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user api key. Required for using watsonx as a service and one of api_key or password is required for using watsonx on-prem software.",
               "strip_whitespace": true,
               "title": "Api Key"
            },
            "version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The watsonx on-prem software version. Required for using watsonx on-prem software.",
               "title": "Version"
            },
            "username": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user name. Required for using watsonx on-prem software.",
               "title": "User name"
            },
            "password": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user password. One of api_key or password is required for using watsonx on-prem software.",
               "title": "Password"
            },
            "instance_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "openshift",
               "description": "The watsonx.ai instance id. Default value is openshift.",
               "title": "Instance id"
            }
         },
         "title": "WxAICredentials",
         "type": "object"
      },
      "WxAIFoundationModel": {
         "description": "The IBM watsonx.ai foundation model details\n\nTo initialize the foundation model, you can either pass in the credentials directly or set the environment.\nYou can follow these examples to create the provider.\n\nExamples:\n    1. Create foundation model by specifying the credentials during object creation:\n        .. code-block:: python\n\n            # Specify the credentials during object creation\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url, # This is optional field, by default US-Dallas region is selected\n                        api_key=wx_apikey,\n                    )\n                )\n            )\n\n    2. Create foundation model by setting the credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url is optional and will be set to US-Dallas region by default. It can be set using one of the environment variables ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n            )\n\n    3. Create foundation model by specifying watsonx.governance software credentials during object creation:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url,\n                        api_key=wx_apikey,\n                        username=wx_username,\n                        version=wx_version,\n                    )\n                )\n            )\n\n    4. Create foundation model by setting watsonx.governance software credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url can be set using one of these environment variable ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n        * The username can be set using one of these environment variable ``WXAI_USERNAME``, ``WATSONX_USERNAME``, or ``WXG_USERNAME``. These will be read in the order of precedence.\n        * The version of watsonx.governance software can be set using one of these environment variable ``WXAI_VERSION``, ``WATSONX_VERSION``, or ``WXG_VERSION``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/WxAIModelProvider",
               "description": "The provider of the model."
            },
            "model_id": {
               "description": "The unique identifier for the watsonx.ai model.",
               "title": "Model Id",
               "type": "string"
            },
            "project_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The project ID associated with the model.",
               "title": "Project Id"
            },
            "space_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The space ID associated with the model.",
               "title": "Space Id"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "WxAIFoundationModel",
         "type": "object"
      },
      "WxAIModelProvider": {
         "description": "This class represents a model provider configuration for IBM watsonx.ai. It includes the provider type and\ncredentials required to authenticate and interact with the watsonx.ai platform. If credentials are not explicitly\nprovided, it attempts to load them from environment variables.\n\nExamples:\n    1. Create provider using credentials object:\n        .. code-block:: python\n\n            credentials = WxAICredentials(\n                url=\"https://us-south.ml.cloud.ibm.com\",\n                api_key=\"your-api-key\"\n            )\n            provider = WxAIModelProvider(credentials=credentials)\n\n    2. Create provider using environment variables:\n        .. code-block:: python\n\n            import os\n\n            os.environ['WATSONX_URL'] = \"https://us-south.ml.cloud.ibm.com\"\n            os.environ['WATSONX_APIKEY'] = \"your-api-key\"\n\n            provider = WxAIModelProvider()",
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "ibm_watsonx.ai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The credentials used to authenticate with watsonx.ai. If not provided, they will be loaded from environment variables."
            }
         },
         "title": "WxAIModelProvider",
         "type": "object"
      }
   }
}

Fields:
field configuration: Annotated[AgenticAIConfiguration, FieldInfo(annotation=NoneType, required=False, default=AgenticAIConfiguration(record_id_field='record_id', record_timestamp_field='record_timestamp', task_type=None, input_fields=['input_text'], context_fields=['context'], output_fields=['generated_text'], reference_fields=['ground_truth'], locale=None, tools=[], tool_calls_field='tool_calls', available_tools_field='available_tools', llm_judge=None, prompt_field='model_prompt', interaction_id_field='interaction_id', conversation_id_field='conversation_id'), title='Metrics configuration', description='The configuration of the metrics to compute. The configuration contains the fields names to be read when computing the metrics.')] = AgenticAIConfiguration(record_id_field='record_id', record_timestamp_field='record_timestamp', task_type=None, input_fields=['input_text'], context_fields=['context'], output_fields=['generated_text'], reference_fields=['ground_truth'], locale=None, tools=[], tool_calls_field='tool_calls', available_tools_field='available_tools', llm_judge=None, prompt_field='model_prompt', interaction_id_field='interaction_id', conversation_id_field='conversation_id')

The configuration of the metrics to compute. The configuration contains the fields names to be read when computing the metrics.

field metric_groups: Annotated[list[MetricGroup] | None, FieldInfo(annotation=NoneType, required=False, default=[], title='Metric Groups', description='The list of metric groups to compute.')] = []

The list of metric groups to compute.

field metrics: Annotated[list[GenAIMetric] | None, FieldInfo(annotation=NoneType, required=False, default=[], title='Metrics', description='The list of metrics to compute.')] = []

The list of metrics to compute.

metrics_serializer(metrics: list[GenAIMetric])
classmethod model_validate(obj, **kwargs)

Validate a pydantic model instance.

Parameters:
  • obj – The object to validate.

  • strict – Whether to enforce types strictly.

  • from_attributes – Whether to extract data from object attributes.

  • context – Additional context to pass to the validator.

Raises:

ValidationError – If the object could not be validated.

Returns:

The validated model instance.
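
A hedged usage sketch for the method above (the payload shown is a hypothetical example that uses the field names documented in the schema):

    from ibm_watsonx_gov.entities.agentic_app import MetricsConfiguration

    # Hypothetical payload; nested dicts are validated into the corresponding models.
    payload = {
        "configuration": {
            "input_fields": ["input"],
            "context_fields": ["contexts"]
        },
        "metric_groups": ["retrieval_quality"]
    }
    metrics_configuration = MetricsConfiguration.model_validate(payload)
    # Raises pydantic ValidationError if the object could not be validated.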

pydantic model ibm_watsonx_gov.entities.agentic_app.Node

Bases: BaseModel

The class representing a node in an agentic application.

Examples

  1. Create Node with metrics configuration and default agentic ai configuration
    metrics_configurations = [MetricsConfiguration(metrics=[ContextRelevanceMetric()],
                                                   metric_groups=[MetricGroup.RETRIEVAL_QUALITY])]
    node = Node(name="Retrieval Node",
                metrics_configurations=metrics_configurations)
    
  2. Create Node with metrics configuration and specifying agentic ai configuration
    node_config = {"input_fields": ["input"],
                   "output_fields": ["output"],
                   "context_fields": ["contexts"],
                   "reference_fields": ["reference"]}
    metrics_configurations = [MetricsConfiguration(configuration=AgenticAIConfiguration(**node_config),
                                                   metrics=[ContextRelevanceMetric()],
                                                   metric_groups=[MetricGroup.RETRIEVAL_QUALITY])]
    node = Node(name="Retrieval Node",
                metrics_configurations=metrics_configurations)
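
  3. Create Node by also specifying the node function name (a hedged sketch; the function name "retrieve_context" is hypothetical and used only for illustration)
    node = Node(name="Retrieval Node",
                func_name="retrieve_context",  # hypothetical node function name
                metrics_configurations=metrics_configurations)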
    

Show JSON schema
{
   "title": "Node",
   "description": "The class representing a node in an agentic application.\n\nExamples:\n    1. Create Node with metrics configuration and default agentic ai configuration\n        .. code-block:: python\n\n            metrics_configurations = [MetricsConfiguration(metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]\n            node = Node(name=\"Retrieval Node\",\n                        metrics_configurations=metrics_configurations)\n\n    2. Create Node with metrics configuration and specifying agentic ai configuration\n        .. code-block:: python\n\n            node_config = {\"input_fields\": [\"input\"],\n                           \"output_fields\": [\"output\"],\n                           \"context_fields\": [\"contexts\"],\n                           \"reference_fields\": [\"reference\"]}\n            metrics_configurations = [MetricsConfiguration(configuration=AgenticAIConfiguration(**node_config)\n                                                           metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])]\n            node = Node(name=\"Retrieval Node\",\n                        metrics_configurations=metrics_configurations)",
   "type": "object",
   "properties": {
      "name": {
         "description": "The name of the node.",
         "title": "Name",
         "type": "string"
      },
      "func_name": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The name of the node function.",
         "title": "Node function name"
      },
      "metrics_configurations": {
         "default": [],
         "description": "The list of metrics and their configuration details.",
         "items": {
            "$ref": "#/$defs/MetricsConfiguration"
         },
         "title": "Metrics configuration",
         "type": "array"
      },
      "foundation_models": {
         "default": [],
         "description": "The Foundation models invoked by the node",
         "items": {
            "$ref": "#/$defs/FoundationModelInfo"
         },
         "title": "Foundation Models",
         "type": "array"
      }
   },
   "$defs": {
      "AgenticAIConfiguration": {
         "description": "Defines the AgenticAIConfiguration class.\n\nThe configuration interface for Agentic AI tools and applications.\nThis is used to specify the fields mapping details in the data and other configuration parameters needed for evaluation.\n\nExamples:\n    1. Create configuration with default parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration()\n\n    2. Create configuration with parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration(input_fields=[\"input\"], \n                                                   output_fields=[\"output\"])\n\n    2. Create configuration with dict parameters\n        .. code-block:: python\n\n            config = {\"input_fields\": [\"input\"],\n                      \"output_fields\": [\"output\"],\n                      \"context_fields\": [\"contexts\"],\n                      \"reference_fields\": [\"reference\"]}\n            configuration = AgenticAIConfiguration(**config)",
         "properties": {
            "record_id_field": {
               "default": "record_id",
               "description": "The record identifier field name.",
               "examples": [
                  "record_id"
               ],
               "title": "Record id field",
               "type": "string"
            },
            "record_timestamp_field": {
               "default": "record_timestamp",
               "description": "The record timestamp field name.",
               "examples": [
                  "record_timestamp"
               ],
               "title": "Record timestamp field",
               "type": "string"
            },
            "task_type": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/TaskType"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The generative task type. Default value is None.",
               "examples": [
                  "retrieval_augmented_generation"
               ],
               "title": "Task Type"
            },
            "input_fields": {
               "default": [
                  "input_text"
               ],
               "description": "The list of model input fields in the data. Default value is ['input_text'].",
               "examples": [
                  [
                     "question"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Input Fields",
               "type": "array"
            },
            "context_fields": {
               "default": [
                  "context"
               ],
               "description": "The list of context fields in the input fields. Default value is ['context'].",
               "examples": [
                  [
                     "context1",
                     "context2"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Context Fields",
               "type": "array"
            },
            "output_fields": {
               "default": [
                  "generated_text"
               ],
               "description": "The list of model output fields in the data. Default value is ['generated_text'].",
               "examples": [
                  [
                     "output"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Output Fields",
               "type": "array"
            },
            "reference_fields": {
               "default": [
                  "ground_truth"
               ],
               "description": "The list of reference fields in the data. Default value is ['ground_truth'].",
               "examples": [
                  [
                     "reference"
                  ]
               ],
               "items": {
                  "type": "string"
               },
               "title": "Reference Fields",
               "type": "array"
            },
            "locale": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/Locale"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The language locale of the input, output and reference fields in the data.",
               "title": "Locale"
            },
            "tools": {
               "default": [],
               "description": "The list of tools used by the LLM.",
               "examples": [
                  [
                     "function1",
                     "function2"
                  ]
               ],
               "items": {
                  "type": "object"
               },
               "title": "Tools",
               "type": "array"
            },
            "tool_calls_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "tool_calls",
               "description": "The tool calls field in the input fields. Default value is 'tool_calls'.",
               "examples": [
                  "tool_calls"
               ],
               "title": "Tool Calls Field"
            },
            "available_tools_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "available_tools",
               "description": "The tool inventory field in the data. Default value is 'available_tools'.",
               "examples": [
                  "available_tools"
               ],
               "title": "Available Tools Field"
            },
            "llm_judge": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LLMJudge"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "LLM as Judge Model details.",
               "title": "LLM Judge"
            },
            "prompt_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "model_prompt",
               "description": "The prompt field in the input fields. Default value is 'model_prompt'.",
               "examples": [
                  "model_prompt"
               ],
               "title": "Model Prompt Field"
            },
            "interaction_id_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "interaction_id",
               "description": "The interaction identifier field name. Default value is 'interaction_id'.",
               "examples": [
                  "interaction_id"
               ],
               "title": "Interaction id field"
            },
            "conversation_id_field": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "conversation_id",
               "description": "The conversation identifier field name. Default value is 'conversation_id'.",
               "examples": [
                  "conversation_id"
               ],
               "title": "Conversation id field"
            }
         },
         "title": "AgenticAIConfiguration",
         "type": "object"
      },
      "AzureOpenAICredentials": {
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Azure OpenAI url. This attribute can be read from `AZURE_OPENAI_HOST` environment variable.",
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "API key for Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_KEY` environment variable.",
               "title": "Api Key"
            },
            "api_version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "The model API version from Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_VERSION` environment variable.",
               "title": "Api Version"
            }
         },
         "required": [
            "url",
            "api_key",
            "api_version"
         ],
         "title": "AzureOpenAICredentials",
         "type": "object"
      },
      "AzureOpenAIFoundationModel": {
         "description": "The Azure OpenAI foundation model details\n\nExamples:\n    1. Create Azure OpenAI foundation model by passing the credentials during object creation.\n        .. code-block:: python\n\n            azure_openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=AzureOpenAIModelProvider(\n                    credentials=AzureOpenAICredentials(\n                        api_key=azure_api_key,\n                        url=azure_host_url,\n                        api_version=azure_api_model_version,\n                    )\n                )\n            )\n\n2. Create Azure OpenAI foundation model by setting the credentials in environment variables:\n    * ``AZURE_OPENAI_API_KEY`` is used to set the api key for OpenAI.\n    * ``AZURE_OPENAI_HOST`` is used to set the url for Azure OpenAI.\n    * ``AZURE_OPENAI_API_VERSION`` is uses to set the the api version for Azure OpenAI.\n\n        .. code-block:: python\n\n            openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/AzureOpenAIModelProvider",
               "description": "Azure OpenAI provider"
            },
            "model_id": {
               "description": "Model deployment name from Azure OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "AzureOpenAIFoundationModel",
         "type": "object"
      },
      "AzureOpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "azure_openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/AzureOpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Azure OpenAI credentials."
            }
         },
         "title": "AzureOpenAIModelProvider",
         "type": "object"
      },
      "FoundationModelInfo": {
         "description": "Represents a foundation model used in an experiment.",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "model_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The id of the foundation model.",
               "title": "Model Id"
            },
            "provider": {
               "description": "The provider of the foundation model.",
               "title": "Provider",
               "type": "string"
            },
            "type": {
               "description": "The type of foundation model.",
               "example": [
                  "chat",
                  "embedding",
                  "text-generation"
               ],
               "title": "Type",
               "type": "string"
            }
         },
         "required": [
            "provider",
            "type"
         ],
         "title": "FoundationModelInfo",
         "type": "object"
      },
      "GenAIMetric": {
         "description": "Defines the Generative AI metric interface",
         "properties": {
            "name": {
               "description": "The name of the metric",
               "title": "Metric Name",
               "type": "string"
            },
            "thresholds": {
               "default": [],
               "description": "The list of thresholds",
               "items": {
                  "$ref": "#/$defs/MetricThreshold"
               },
               "title": "Thresholds",
               "type": "array"
            },
            "tasks": {
               "description": "The task types this metric is associated with.",
               "items": {
                  "$ref": "#/$defs/TaskType"
               },
               "title": "Tasks",
               "type": "array"
            },
            "group": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/MetricGroup"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The metric group this metric belongs to."
            },
            "is_reference_free": {
               "default": true,
               "description": "Decides whether this metric needs a reference for computation",
               "title": "Is Reference Free",
               "type": "boolean"
            },
            "method": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The method used to compute the metric.",
               "title": "Method"
            },
            "metric_dependencies": {
               "default": [],
               "description": "Metrics that needs to be evaluated first",
               "items": {
                  "$ref": "#/$defs/GenAIMetric"
               },
               "title": "Metric Dependencies",
               "type": "array"
            }
         },
         "required": [
            "name",
            "tasks"
         ],
         "title": "GenAIMetric",
         "type": "object"
      },
      "LLMJudge": {
         "description": "Defines the LLMJudge.\n\nThe LLMJudge class contains the details of the llm judge model to be used for computing the metric.\n\nExamples:\n    1. Create LLMJudge using watsonx.ai foundation model:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=PROJECT_ID,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(api_key=wx_apikey)\n                )\n            )\n            llm_judge = LLMJudge(model=wx_ai_foundation_model)",
         "properties": {
            "model": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/OpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/AzureOpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/RITSFoundationModel"
                  }
               ],
               "description": "The foundation model to be used as judge",
               "title": "Model"
            }
         },
         "required": [
            "model"
         ],
         "title": "LLMJudge",
         "type": "object"
      },
      "Locale": {
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Input"
            },
            "output": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Output"
            },
            "reference": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Reference"
            }
         },
         "title": "Locale",
         "type": "object"
      },
      "MetricGroup": {
         "enum": [
            "retrieval_quality",
            "answer_quality",
            "content_safety",
            "performance",
            "usage",
            "tool_call_quality",
            "readability"
         ],
         "title": "MetricGroup",
         "type": "string"
      },
      "MetricThreshold": {
         "description": "The class that defines the threshold for a metric.",
         "properties": {
            "type": {
               "description": "Threshold type. One of 'lower_limit', 'upper_limit'",
               "enum": [
                  "lower_limit",
                  "upper_limit"
               ],
               "title": "Type",
               "type": "string"
            },
            "value": {
               "default": 0,
               "description": "The value of metric threshold",
               "title": "Threshold value",
               "type": "number"
            }
         },
         "required": [
            "type"
         ],
         "title": "MetricThreshold",
         "type": "object"
      },
      "MetricsConfiguration": {
         "description": "The class representing the metrics to be computed and the configuration details required for them.\n\nExamples:\n    1. Create MetricsConfiguration with default agentic ai configuration\n        .. code-block:: python\n\n            metrics_configuration = MetricsConfiguration(metrics=[ContextRelevanceMetric()],\n                                                         metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])\n\n    2. Create MetricsConfiguration by specifying agentic ai configuration\n        .. code-block:: python\n\n            config = {\n                \"input_fields\": [\"input\"],\n                \"context_fields\": [\"contexts\"]\n            }\n            metrics_configuration = MetricsConfiguration(configuration=AgenticAIConfiguration(**config)\n                                                           metrics=[ContextRelevanceMetric()],\n                                                           metric_groups=[MetricGroup.RETRIEVAL_QUALITY])])",
         "properties": {
            "configuration": {
               "$ref": "#/$defs/AgenticAIConfiguration",
               "default": {
                  "record_id_field": "record_id",
                  "record_timestamp_field": "record_timestamp",
                  "task_type": null,
                  "input_fields": [
                     "input_text"
                  ],
                  "context_fields": [
                     "context"
                  ],
                  "output_fields": [
                     "generated_text"
                  ],
                  "reference_fields": [
                     "ground_truth"
                  ],
                  "locale": null,
                  "tools": [],
                  "tool_calls_field": "tool_calls",
                  "available_tools_field": "available_tools",
                  "llm_judge": null,
                  "prompt_field": "model_prompt",
                  "interaction_id_field": "interaction_id",
                  "conversation_id_field": "conversation_id"
               },
               "description": "The configuration of the metrics to compute. The configuration contains the fields names to be read when computing the metrics.",
               "title": "Metrics configuration"
            },
            "metrics": {
               "anyOf": [
                  {
                     "items": {
                        "$ref": "#/$defs/GenAIMetric"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": [],
               "description": "The list of metrics to compute.",
               "title": "Metrics"
            },
            "metric_groups": {
               "anyOf": [
                  {
                     "items": {
                        "$ref": "#/$defs/MetricGroup"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": [],
               "description": "The list of metric groups to compute.",
               "title": "Metric Groups"
            }
         },
         "title": "MetricsConfiguration",
         "type": "object"
      },
      "ModelProviderType": {
         "description": "Supported model provider types for Generative AI",
         "enum": [
            "ibm_watsonx.ai",
            "azure_openai",
            "rits",
            "openai",
            "custom"
         ],
         "title": "ModelProviderType",
         "type": "string"
      },
      "OpenAICredentials": {
         "description": "Defines the OpenAICredentials class to specify the OpenAI server details.\n\nExamples:\n    1. Create OpenAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            openai_credentials = OpenAICredentials(api_key=api_key,\n                                                   url=openai_url)\n\n    2. Create OpenAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"OPENAI_API_KEY\"] = \"...\"\n            os.environ[\"OPENAI_URL\"] = \"...\"\n            openai_credentials = OpenAICredentials.create_from_env()",
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Api Key"
            }
         },
         "required": [
            "url",
            "api_key"
         ],
         "title": "OpenAICredentials",
         "type": "object"
      },
      "OpenAIFoundationModel": {
         "description": "The OpenAI foundation model details\n\nExamples:\n    1. Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to ``OpenAICredentials`` object.\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=OpenAIModelProvider(\n                    credentials=OpenAICredentials(\n                        api_key=api_key,\n                        url=openai_url,\n                    )\n                )\n            )\n\n    2. Create OpenAI foundation model by setting the credentials in environment variables:\n        * ``OPENAI_API_KEY`` is used to set the api key for OpenAI.\n        * ``OPENAI_URL`` is used to set the url for OpenAI\n\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/OpenAIModelProvider",
               "description": "OpenAI provider"
            },
            "model_id": {
               "description": "Model name from OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "OpenAIFoundationModel",
         "type": "object"
      },
      "OpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/OpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "OpenAI credentials. This can also be set by using `OPENAI_API_KEY` environment variable."
            }
         },
         "title": "OpenAIModelProvider",
         "type": "object"
      },
      "RITSCredentials": {
         "properties": {
            "hostname": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "https://inference-3scale-apicast-production.apps.rits.fmaas.res.ibm.com",
               "description": "The rits hostname",
               "title": "Hostname"
            },
            "api_key": {
               "title": "Api Key",
               "type": "string"
            }
         },
         "required": [
            "api_key"
         ],
         "title": "RITSCredentials",
         "type": "object"
      },
      "RITSFoundationModel": {
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/RITSModelProvider",
               "description": "The provider of the model."
            }
         },
         "title": "RITSFoundationModel",
         "type": "object"
      },
      "RITSModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "rits",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/RITSCredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "RITS credentials."
            }
         },
         "title": "RITSModelProvider",
         "type": "object"
      },
      "TaskType": {
         "description": "Supported task types for generative AI models",
         "enum": [
            "question_answering",
            "classification",
            "summarization",
            "generation",
            "extraction",
            "retrieval_augmented_generation"
         ],
         "title": "TaskType",
         "type": "string"
      },
      "WxAICredentials": {
         "description": "Defines the WxAICredentials class to specify the watsonx.ai server details.\n\nExamples:\n    1. Create WxAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\")\n\n    2. Create WxAICredentials by specifying region url.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\",\n                                               url=\"https://au-syd.ml.cloud.ibm.com\")\n\n    3. Create WxAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # [Optional] Specify watsonx region specific url. Default is https://us-south.ml.cloud.ibm.com .\n            os.environ[\"WATSONX_URL\"] = \"https://eu-gb.ml.cloud.ibm.com\"\n            wxai_credentials = WxAICredentials.create_from_env()\n\n    4. Create WxAICredentials for on-prem.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(url=\"https://<hostname>\",\n                                               username=\"...\"\n                                               api_key=\"...\",\n                                               version=\"5.2\")\n\n    5. Create WxAICredentials by reading from environment variables for on-prem.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_URL\"] = \"https://<hostname>\"\n            os.environ[\"WATSONX_VERSION\"] = \"5.2\"\n            os.environ[\"WATSONX_USERNAME\"] = \"...\"\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # Only one of api_key or password is needed\n            #os.environ[\"WATSONX_PASSWORD\"] = \"...\"\n            wxai_credentials = WxAICredentials.create_from_env()",
         "properties": {
            "url": {
               "default": "https://us-south.ml.cloud.ibm.com",
               "description": "The url for watsonx ai service",
               "examples": [
                  "https://us-south.ml.cloud.ibm.com",
                  "https://eu-de.ml.cloud.ibm.com",
                  "https://eu-gb.ml.cloud.ibm.com",
                  "https://jp-tok.ml.cloud.ibm.com",
                  "https://au-syd.ml.cloud.ibm.com"
               ],
               "title": "watsonx.ai url",
               "type": "string"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user api key. Required for using watsonx as a service and one of api_key or password is required for using watsonx on-prem software.",
               "strip_whitespace": true,
               "title": "Api Key"
            },
            "version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The watsonx on-prem software version. Required for using watsonx on-prem software.",
               "title": "Version"
            },
            "username": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user name. Required for using watsonx on-prem software.",
               "title": "User name"
            },
            "password": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user password. One of api_key or password is required for using watsonx on-prem software.",
               "title": "Password"
            },
            "instance_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "openshift",
               "description": "The watsonx.ai instance id. Default value is openshift.",
               "title": "Instance id"
            }
         },
         "title": "WxAICredentials",
         "type": "object"
      },
      "WxAIFoundationModel": {
         "description": "The IBM watsonx.ai foundation model details\n\nTo initialize the foundation model, you can either pass in the credentials directly or set the environment.\nYou can follow these examples to create the provider.\n\nExamples:\n    1. Create foundation model by specifying the credentials during object creation:\n        .. code-block:: python\n\n            # Specify the credentials during object creation\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url, # This is optional field, by default US-Dallas region is selected\n                        api_key=wx_apikey,\n                    )\n                )\n            )\n\n    2. Create foundation model by setting the credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url is optional and will be set to US-Dallas region by default. It can be set using one of the environment variables ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n            )\n\n    3. Create foundation model by specifying watsonx.governance software credentials during object creation:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url,\n                        api_key=wx_apikey,\n                        username=wx_username,\n                        version=wx_version,\n                    )\n                )\n            )\n\n    4. Create foundation model by setting watsonx.governance software credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url can be set using one of these environment variable ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n        * The username can be set using one of these environment variable ``WXAI_USERNAME``, ``WATSONX_USERNAME``, or ``WXG_USERNAME``. These will be read in the order of precedence.\n        * The version of watsonx.governance software can be set using one of these environment variable ``WXAI_VERSION``, ``WATSONX_VERSION``, or ``WXG_VERSION``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/WxAIModelProvider",
               "description": "The provider of the model."
            },
            "model_id": {
               "description": "The unique identifier for the watsonx.ai model.",
               "title": "Model Id",
               "type": "string"
            },
            "project_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The project ID associated with the model.",
               "title": "Project Id"
            },
            "space_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The space ID associated with the model.",
               "title": "Space Id"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "WxAIFoundationModel",
         "type": "object"
      },
      "WxAIModelProvider": {
         "description": "This class represents a model provider configuration for IBM watsonx.ai. It includes the provider type and\ncredentials required to authenticate and interact with the watsonx.ai platform. If credentials are not explicitly\nprovided, it attempts to load them from environment variables.\n\nExamples:\n    1. Create provider using credentials object:\n        .. code-block:: python\n\n            credentials = WxAICredentials(\n                url=\"https://us-south.ml.cloud.ibm.com\",\n                api_key=\"your-api-key\"\n            )\n            provider = WxAIModelProvider(credentials=credentials)\n\n    2. Create provider using environment variables:\n        .. code-block:: python\n\n            import os\n\n            os.environ['WATSONX_URL'] = \"https://us-south.ml.cloud.ibm.com\"\n            os.environ['WATSONX_APIKEY'] = \"your-api-key\"\n\n            provider = WxAIModelProvider()",
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "ibm_watsonx.ai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The credentials used to authenticate with watsonx.ai. If not provided, they will be loaded from environment variables."
            }
         },
         "title": "WxAIModelProvider",
         "type": "object"
      }
   },
   "required": [
      "name"
   ]
}
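
The JSON schema above is the standard pydantic v2 schema for this model and can be regenerated programmatically. A minimal sketch, assuming the Node class is exposed by ibm_watsonx_gov.entities.agentic_app (the exact import path is an assumption):

    import json

    # Assumption: Node is importable from this module; adjust the import to your installation.
    from ibm_watsonx_gov.entities.agentic_app import Node

    # model_json_schema() is the standard pydantic v2 BaseModel method that produces the schema shown above.
    print(json.dumps(Node.model_json_schema(), indent=3))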

Fields:
field foundation_models: Annotated[list[FoundationModelInfo], FieldInfo(annotation=NoneType, required=False, default=[], description='The Foundation models invoked by the node')] = []

The Foundation models invoked by the node

field func_name: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, title='Node function name', description='The name of the node function.')] = None

The name of the node function.

field metrics_configurations: Annotated[list[MetricsConfiguration], FieldInfo(annotation=NoneType, required=False, default=[], title='Metrics configuration', description='The list of metrics and their configuration details.')] = []

The list of metrics and their configuration details.

field name: Annotated[str, FieldInfo(annotation=NoneType, required=True, title='Name', description='The name of the node.')] [Required]

The name of the node.
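
As an illustrative sketch (not taken from the source), a node can be constructed directly with the fields above; the import paths and the FoundationModelInfo values are assumptions:

    # Illustrative only; import paths are assumed and may differ in your installation.
    from ibm_watsonx_gov.entities.agentic_app import Node
    from ibm_watsonx_gov.entities.foundation_model import FoundationModelInfo

    node = Node(
        name="Retrieval Node",          # required
        func_name="retrieve_context",   # hypothetical name of the node function
        foundation_models=[             # models invoked by the node (hypothetical values)
            FoundationModelInfo(
                model_id="ibm/granite-13b-chat-v2",
                provider="ibm_watsonx.ai",
                type="chat",
            )
        ],
    )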

classmethod model_validate(obj, **kwargs)

Validate a pydantic model instance.

Parameters:
  • obj – The object to validate.

  • strict – Whether to enforce types strictly.

  • from_attributes – Whether to extract data from object attributes.

  • context – Additional context to pass to the validator.

Raises:

ValidationError – If the object could not be validated.

Returns:

The validated model instance.
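
For example, a minimal sketch of validating a plain dict into a node instance (the import path is an assumption; model_validate is the standard pydantic v2 entry point described above):

    # Assumption: Node is importable from this module; adjust the import to your installation.
    from pydantic import ValidationError

    from ibm_watsonx_gov.entities.agentic_app import Node

    try:
        node = Node.model_validate({"name": "Retrieval Node", "func_name": "retrieve_context"})
    except ValidationError as err:
        # Raised when the object could not be validated, e.g. when the required "name" field is missing.
        print(err)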