Agentic AI Configuration

pydantic model ibm_watsonx_gov.config.agentic_ai_configuration.AgenticAIConfiguration

Bases: GenAIConfiguration

Defines the AgenticAIConfiguration class.

The configuration interface for Agentic AI tools and applications. It is used to specify the field mapping details in the data, along with other configuration parameters needed for evaluation.

Examples

  1. Create configuration with default parameters
    configuration = AgenticAIConfiguration()
    
  2. Create configuration with parameters
    configuration = AgenticAIConfiguration(input_fields=["input"],
                                           output_fields=["output"])
    
  3. Create configuration with dict parameters
    config = {"input_fields": ["input"],
              "output_fields": ["output"],
              "context_fields": ["contexts"],
              "reference_fields": ["reference"]}
    configuration = AgenticAIConfiguration(**config)
    
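Beyond the fields shown above, any property in the JSON schema below can be supplied the same way. A minimal sketch of a fuller dict for a retrieval-augmented generation evaluation (plain Python; the field values are illustrative, and the final `AgenticAIConfiguration(**config)` call is left commented out because it requires the `ibm_watsonx_gov` package):

```python
# Illustrative field mapping for a RAG evaluation; the keys follow the
# schema properties documented below (task_type, input_fields, etc.).
config = {
    "task_type": "retrieval_augmented_generation",  # a TaskType enum value
    "input_fields": ["question"],
    "context_fields": ["context1", "context2"],
    "output_fields": ["output"],
    "reference_fields": ["reference"],
}

# configuration = AgenticAIConfiguration(**config)
print(sorted(config))
```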

JSON schema:
{
   "title": "AgenticAIConfiguration",
   "description": "Defines the AgenticAIConfiguration class.\n\nThe configuration interface for Agentic AI tools and applications.\nThis is used to specify the fields mapping details in the data and other configuration parameters needed for evaluation.\n\nExamples:\n    1. Create configuration with default parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration()\n\n    2. Create configuration with parameters\n        .. code-block:: python\n\n            configuration = AgenticAIConfiguration(input_fields=[\"input\"], \n                                                   output_fields=[\"output\"])\n\n    2. Create configuration with dict parameters\n        .. code-block:: python\n\n            config = {\"input_fields\": [\"input\"],\n                      \"output_fields\": [\"output\"],\n                      \"context_fields\": [\"contexts\"],\n                      \"reference_fields\": [\"reference\"]}\n            configuration = AgenticAIConfiguration(**config)",
   "type": "object",
   "properties": {
      "record_id_field": {
         "default": "record_id",
         "description": "The record identifier field name.",
         "examples": [
            "record_id"
         ],
         "title": "Record id field",
         "type": "string"
      },
      "record_timestamp_field": {
         "default": "record_timestamp",
         "description": "The record timestamp field name.",
         "examples": [
            "record_timestamp"
         ],
         "title": "Record timestamp field",
         "type": "string"
      },
      "task_type": {
         "anyOf": [
            {
               "$ref": "#/$defs/TaskType"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The generative task type. Default value is None.",
         "examples": [
            "retrieval_augmented_generation"
         ],
         "title": "Task Type"
      },
      "input_fields": {
         "default": [
            "input_text"
         ],
         "description": "The list of model input fields in the data. Default value is ['input_text'].",
         "examples": [
            [
               "question"
            ]
         ],
         "items": {
            "type": "string"
         },
         "title": "Input Fields",
         "type": "array"
      },
      "context_fields": {
         "default": [
            "context"
         ],
         "description": "The list of context fields in the input fields. Default value is ['context'].",
         "examples": [
            [
               "context1",
               "context2"
            ]
         ],
         "items": {
            "type": "string"
         },
         "title": "Context Fields",
         "type": "array"
      },
      "output_fields": {
         "default": [
            "generated_text"
         ],
         "description": "The list of model output fields in the data. Default value is ['generated_text'].",
         "examples": [
            [
               "output"
            ]
         ],
         "items": {
            "type": "string"
         },
         "title": "Output Fields",
         "type": "array"
      },
      "reference_fields": {
         "default": [
            "ground_truth"
         ],
         "description": "The list of reference fields in the data. Default value is ['ground_truth'].",
         "examples": [
            [
               "reference"
            ]
         ],
         "items": {
            "type": "string"
         },
         "title": "Reference Fields",
         "type": "array"
      },
      "locale": {
         "anyOf": [
            {
               "$ref": "#/$defs/Locale"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The language locale of the input, output and reference fields in the data.",
         "title": "Locale"
      },
      "tools": {
         "default": [],
         "description": "The list of tools used by the LLM.",
         "examples": [
            [
               "function1",
               "function2"
            ]
         ],
         "items": {
            "type": "object"
         },
         "title": "Tools",
         "type": "array"
      },
      "tool_calls_field": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "tool_calls",
         "description": "The tool calls field in the input fields. Default value is 'tool_calls'.",
         "examples": [
            "tool_calls"
         ],
         "title": "Tool Calls Field"
      },
      "available_tools_field": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "available_tools",
         "description": "The tool inventory field in the data. Default value is 'available_tools'.",
         "examples": [
            "available_tools"
         ],
         "title": "Available Tools Field"
      },
      "llm_judge": {
         "anyOf": [
            {
               "$ref": "#/$defs/LLMJudge"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "LLM as Judge Model details.",
         "title": "LLM Judge"
      },
      "prompt_field": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "model_prompt",
         "description": "The prompt field in the input fields. Default value is 'model_prompt'.",
         "examples": [
            "model_prompt"
         ],
         "title": "Model Prompt Field"
      },
      "interaction_id_field": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "interaction_id",
         "description": "The interaction identifier field name. Default value is 'interaction_id'.",
         "examples": [
            "interaction_id"
         ],
         "title": "Interaction id field"
      },
      "conversation_id_field": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "conversation_id",
         "description": "The conversation identifier field name. Default value is 'conversation_id'.",
         "examples": [
            "conversation_id"
         ],
         "title": "Conversation id field"
      }
   },
   "$defs": {
      "AzureOpenAICredentials": {
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Azure OpenAI url. This attribute can be read from `AZURE_OPENAI_HOST` environment variable.",
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "API key for Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_KEY` environment variable.",
               "title": "Api Key"
            },
            "api_version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "The model API version from Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_VERSION` environment variable.",
               "title": "Api Version"
            }
         },
         "required": [
            "url",
            "api_key",
            "api_version"
         ],
         "title": "AzureOpenAICredentials",
         "type": "object"
      },
      "AzureOpenAIFoundationModel": {
         "description": "The Azure OpenAI foundation model details\n\nExamples:\n    1. Create Azure OpenAI foundation model by passing the credentials during object creation.\n        .. code-block:: python\n\n            azure_openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=AzureOpenAIModelProvider(\n                    credentials=AzureOpenAICredentials(\n                        api_key=azure_api_key,\n                        url=azure_host_url,\n                        api_version=azure_api_model_version,\n                    )\n                )\n            )\n\n2. Create Azure OpenAI foundation model by setting the credentials in environment variables:\n    * ``AZURE_OPENAI_API_KEY`` is used to set the api key for OpenAI.\n    * ``AZURE_OPENAI_HOST`` is used to set the url for Azure OpenAI.\n    * ``AZURE_OPENAI_API_VERSION`` is uses to set the the api version for Azure OpenAI.\n\n        .. code-block:: python\n\n            openai_foundation_model = AzureOpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/AzureOpenAIModelProvider",
               "description": "Azure OpenAI provider"
            },
            "model_id": {
               "description": "Model deployment name from Azure OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "AzureOpenAIFoundationModel",
         "type": "object"
      },
      "AzureOpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "azure_openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/AzureOpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Azure OpenAI credentials."
            }
         },
         "title": "AzureOpenAIModelProvider",
         "type": "object"
      },
      "LLMJudge": {
         "description": "Defines the LLMJudge.\n\nThe LLMJudge class contains the details of the llm judge model to be used for computing the metric.\n\nExamples:\n    1. Create LLMJudge using watsonx.ai foundation model:\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=PROJECT_ID,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(api_key=wx_apikey)\n                )\n            )\n            llm_judge = LLMJudge(model=wx_ai_foundation_model)",
         "properties": {
            "model": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/OpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/AzureOpenAIFoundationModel"
                  },
                  {
                     "$ref": "#/$defs/RITSFoundationModel"
                  }
               ],
               "description": "The foundation model to be used as judge",
               "title": "Model"
            }
         },
         "required": [
            "model"
         ],
         "title": "LLMJudge",
         "type": "object"
      },
      "Locale": {
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Input"
            },
            "output": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Output"
            },
            "reference": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "title": "Reference"
            }
         },
         "title": "Locale",
         "type": "object"
      },
      "ModelProviderType": {
         "description": "Supported model provider types for Generative AI",
         "enum": [
            "ibm_watsonx.ai",
            "azure_openai",
            "rits",
            "openai",
            "custom"
         ],
         "title": "ModelProviderType",
         "type": "string"
      },
      "OpenAICredentials": {
         "description": "Defines the OpenAICredentials class to specify the OpenAI server details.\n\nExamples:\n    1. Create OpenAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            openai_credentials = OpenAICredentials(api_key=api_key,\n                                                   url=openai_url)\n\n    2. Create OpenAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"OPENAI_API_KEY\"] = \"...\"\n            os.environ[\"OPENAI_URL\"] = \"...\"\n            openai_credentials = OpenAICredentials.create_from_env()",
         "properties": {
            "url": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Url"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "title": "Api Key"
            }
         },
         "required": [
            "url",
            "api_key"
         ],
         "title": "OpenAICredentials",
         "type": "object"
      },
      "OpenAIFoundationModel": {
         "description": "The OpenAI foundation model details\n\nExamples:\n    1. Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to ``OpenAICredentials`` object.\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n                provider=OpenAIModelProvider(\n                    credentials=OpenAICredentials(\n                        api_key=api_key,\n                        url=openai_url,\n                    )\n                )\n            )\n\n    2. Create OpenAI foundation model by setting the credentials in environment variables:\n        * ``OPENAI_API_KEY`` is used to set the api key for OpenAI.\n        * ``OPENAI_URL`` is used to set the url for OpenAI\n\n        .. code-block:: python\n\n            openai_foundation_model = OpenAIFoundationModel(\n                model_id=\"gpt-4o-mini\",\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/OpenAIModelProvider",
               "description": "OpenAI provider"
            },
            "model_id": {
               "description": "Model name from OpenAI",
               "title": "Model Id",
               "type": "string"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "OpenAIFoundationModel",
         "type": "object"
      },
      "OpenAIModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "openai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/OpenAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "OpenAI credentials. This can also be set by using `OPENAI_API_KEY` environment variable."
            }
         },
         "title": "OpenAIModelProvider",
         "type": "object"
      },
      "RITSCredentials": {
         "properties": {
            "hostname": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "https://inference-3scale-apicast-production.apps.rits.fmaas.res.ibm.com",
               "description": "The rits hostname",
               "title": "Hostname"
            },
            "api_key": {
               "title": "Api Key",
               "type": "string"
            }
         },
         "required": [
            "api_key"
         ],
         "title": "RITSCredentials",
         "type": "object"
      },
      "RITSFoundationModel": {
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/RITSModelProvider",
               "description": "The provider of the model."
            }
         },
         "title": "RITSFoundationModel",
         "type": "object"
      },
      "RITSModelProvider": {
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "rits",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/RITSCredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "RITS credentials."
            }
         },
         "title": "RITSModelProvider",
         "type": "object"
      },
      "TaskType": {
         "description": "Supported task types for generative AI models",
         "enum": [
            "question_answering",
            "classification",
            "summarization",
            "generation",
            "extraction",
            "retrieval_augmented_generation"
         ],
         "title": "TaskType",
         "type": "string"
      },
      "WxAICredentials": {
         "description": "Defines the WxAICredentials class to specify the watsonx.ai server details.\n\nExamples:\n    1. Create WxAICredentials with default parameters. By default Dallas region is used.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\")\n\n    2. Create WxAICredentials by specifying region url.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(api_key=\"...\",\n                                               url=\"https://au-syd.ml.cloud.ibm.com\")\n\n    3. Create WxAICredentials by reading from environment variables.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # [Optional] Specify watsonx region specific url. Default is https://us-south.ml.cloud.ibm.com .\n            os.environ[\"WATSONX_URL\"] = \"https://eu-gb.ml.cloud.ibm.com\"\n            wxai_credentials = WxAICredentials.create_from_env()\n\n    4. Create WxAICredentials for on-prem.\n        .. code-block:: python\n\n            wxai_credentials = WxAICredentials(url=\"https://<hostname>\",\n                                               username=\"...\"\n                                               api_key=\"...\",\n                                               version=\"5.2\")\n\n    5. Create WxAICredentials by reading from environment variables for on-prem.\n        .. code-block:: python\n\n            os.environ[\"WATSONX_URL\"] = \"https://<hostname>\"\n            os.environ[\"WATSONX_VERSION\"] = \"5.2\"\n            os.environ[\"WATSONX_USERNAME\"] = \"...\"\n            os.environ[\"WATSONX_APIKEY\"] = \"...\"\n            # Only one of api_key or password is needed\n            #os.environ[\"WATSONX_PASSWORD\"] = \"...\"\n            wxai_credentials = WxAICredentials.create_from_env()",
         "properties": {
            "url": {
               "default": "https://us-south.ml.cloud.ibm.com",
               "description": "The url for watsonx ai service",
               "examples": [
                  "https://us-south.ml.cloud.ibm.com",
                  "https://eu-de.ml.cloud.ibm.com",
                  "https://eu-gb.ml.cloud.ibm.com",
                  "https://jp-tok.ml.cloud.ibm.com",
                  "https://au-syd.ml.cloud.ibm.com"
               ],
               "title": "watsonx.ai url",
               "type": "string"
            },
            "api_key": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user api key. Required for using watsonx as a service and one of api_key or password is required for using watsonx on-prem software.",
               "strip_whitespace": true,
               "title": "Api Key"
            },
            "version": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The watsonx on-prem software version. Required for using watsonx on-prem software.",
               "title": "Version"
            },
            "username": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user name. Required for using watsonx on-prem software.",
               "title": "User name"
            },
            "password": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The user password. One of api_key or password is required for using watsonx on-prem software.",
               "title": "Password"
            },
            "instance_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "openshift",
               "description": "The watsonx.ai instance id. Default value is openshift.",
               "title": "Instance id"
            }
         },
         "title": "WxAICredentials",
         "type": "object"
      },
      "WxAIFoundationModel": {
         "description": "The IBM watsonx.ai foundation model details\n\nTo initialize the foundation model, you can either pass in the credentials directly or set the environment.\nYou can follow these examples to create the provider.\n\nExamples:\n    1. Create foundation model by specifying the credentials during object creation:\n        .. code-block:: python\n\n            # Specify the credentials during object creation\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url, # This is optional field, by default US-Dallas region is selected\n                        api_key=wx_apikey,\n                    )\n                )\n            )\n\n    2. Create foundation model by setting the credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url is optional and will be set to US-Dallas region by default. It can be set using one of the environment variables ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=<PROJECT_ID>,\n            )\n\n    3. Create foundation model by specifying watsonx.governance software credentials during object creation:\n        .. 
code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n                provider=WxAIModelProvider(\n                    credentials=WxAICredentials(\n                        url=wx_url,\n                        api_key=wx_apikey,\n                        username=wx_username,\n                        version=wx_version,\n                    )\n                )\n            )\n\n    4. Create foundation model by setting watsonx.governance software credentials environment variables:\n        * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n        * The url can be set using one of these environment variable ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n        * The username can be set using one of these environment variable ``WXAI_USERNAME``, ``WATSONX_USERNAME``, or ``WXG_USERNAME``. These will be read in the order of precedence.\n        * The version of watsonx.governance software can be set using one of these environment variable ``WXAI_VERSION``, ``WATSONX_VERSION``, or ``WXG_VERSION``. These will be read in the order of precedence.\n\n        .. code-block:: python\n\n            wx_ai_foundation_model = WxAIFoundationModel(\n                model_id=\"google/flan-ul2\",\n                project_id=project_id,\n            )",
         "properties": {
            "model_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the foundation model.",
               "title": "Model Name"
            },
            "provider": {
               "$ref": "#/$defs/WxAIModelProvider",
               "description": "The provider of the model."
            },
            "model_id": {
               "description": "The unique identifier for the watsonx.ai model.",
               "title": "Model Id",
               "type": "string"
            },
            "project_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The project ID associated with the model.",
               "title": "Project Id"
            },
            "space_id": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The space ID associated with the model.",
               "title": "Space Id"
            }
         },
         "required": [
            "model_id"
         ],
         "title": "WxAIFoundationModel",
         "type": "object"
      },
      "WxAIModelProvider": {
         "description": "This class represents a model provider configuration for IBM watsonx.ai. It includes the provider type and\ncredentials required to authenticate and interact with the watsonx.ai platform. If credentials are not explicitly\nprovided, it attempts to load them from environment variables.\n\nExamples:\n    1. Create provider using credentials object:\n        .. code-block:: python\n\n            credentials = WxAICredentials(\n                url=\"https://us-south.ml.cloud.ibm.com\",\n                api_key=\"your-api-key\"\n            )\n            provider = WxAIModelProvider(credentials=credentials)\n\n    2. Create provider using environment variables:\n        .. code-block:: python\n\n            import os\n\n            os.environ['WATSONX_URL'] = \"https://us-south.ml.cloud.ibm.com\"\n            os.environ['WATSONX_APIKEY'] = \"your-api-key\"\n\n            provider = WxAIModelProvider()",
         "properties": {
            "type": {
               "$ref": "#/$defs/ModelProviderType",
               "default": "ibm_watsonx.ai",
               "description": "The type of model provider."
            },
            "credentials": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/WxAICredentials"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The credentials used to authenticate with watsonx.ai. If not provided, they will be loaded from environment variables."
            }
         },
         "title": "WxAIModelProvider",
         "type": "object"
      }
   }
}

Config:
  • arbitrary_types_allowed: bool = True

Fields:
Validators:

field conversation_id_field: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default='conversation_id', title='Conversation id field', description="The conversation identifier field name. Default value is 'conversation_id'.", examples=['conversation_id'])] = 'conversation_id'

The conversation identifier field name. Default value is ‘conversation_id’.

Validated by:
field interaction_id_field: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default='interaction_id', title='Interaction id field', description="The interaction identifier field name. Default value is 'interaction_id'.", examples=['interaction_id'])] = 'interaction_id'

The interaction identifier field name. Default value is ‘interaction_id’.

Validated by:
classmethod create_configuration(*, app_config: Self | None, method_config: Self | None, defaults: list[EvaluatorFields], add_record_fields: bool = True) Self

Creates a configuration object based on the provided parameters.

Parameters:
  • app_config (Optional[Self]) – The application configuration.

  • method_config (Optional[Self]) – The method configuration.

  • defaults (list[EvaluatorFields]) – The default fields to include in the configuration.

  • add_record_fields (bool, optional) – Whether to add record fields to the configuration. Defaults to True.

Returns:

The created configuration object.

Return type:

Self

pydantic model ibm_watsonx_gov.config.agentic_ai_configuration.OTLPCollectorConfiguration

Bases: BaseModel

Defines the OTLPCollectorConfiguration class. It contains the configuration settings for the OpenTelemetry Protocol collector.

Examples

  1. Create OTLPCollectorConfiguration with default parameters
    otlp_config = OTLPCollectorConfiguration()
    
  2. Create OTLPCollectorConfiguration by providing server endpoint details.
    otlp_config = OTLPCollectorConfiguration(app_name="app",
                                             endpoint="https://hostname/ml/v1/traces",
                                             timeout=10,
                                             headers={"Authorization": "Bearer token"})
    

Show JSON schema
{
   "title": "OTLPCollectorConfiguration",
   "description": "Defines the OTLPCollectorConfiguration class.\nIt contains the configuration settings for the OpenTelemetry Protocol collector.\n\nExamples:\n    1. Create OTLPCollectorConfiguration with default parameters\n        .. code-block:: python\n\n            oltp_config = OTLPCollectorConfiguration()\n\n    1. Create OTLPCollectorConfiguration by providing server endpoint details.\n        .. code-block:: python\n\n            oltp_config = OTLPCollectorConfiguration(app_name=\"app\",\n                                                     endpoint=\"https://hostname/ml/v1/traces\",\n                                                     timeout=10,\n                                                     headers={\"Authorization\": \"Bearer token\"})",
   "type": "object",
   "properties": {
      "app_name": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "description": "Application name for tracing.",
         "title": "App Name"
      },
      "endpoint": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "http://localhost:4318/v1/traces",
         "description": "The OTLP collector endpoint URL for sending trace data. Default value is 'http://localhost:4318/v1/traces'",
         "title": "OTLP Endpoint"
      },
      "insecure": {
         "anyOf": [
            {
               "type": "boolean"
            },
            {
               "type": "null"
            }
         ],
         "default": false,
         "description": "Whether to disable TLS for the exporter (i.e., use an insecure connection). Default is False.",
         "title": "Insecure Connection"
      },
      "is_grpc": {
         "anyOf": [
            {
               "type": "boolean"
            },
            {
               "type": "null"
            }
         ],
         "default": false,
         "description": "If True, use gRPC for exporting traces instead of HTTP. Default is False.",
         "title": "Use gRPC"
      },
      "timeout": {
         "anyOf": [
            {
               "type": "integer"
            },
            {
               "type": "null"
            }
         ],
         "default": 100,
         "description": "Timeout in milliseconds for sending telemetry data to the collector. Default is 100ms.",
         "title": "Timeout"
      },
      "headers": {
         "anyOf": [
            {
               "additionalProperties": {
                  "type": "string"
               },
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "description": "Headers needed to call the server.",
         "title": "Headers"
      }
   },
   "required": [
      "app_name"
   ]
}

Fields:
field app_name: Annotated[str | None, FieldInfo(annotation=NoneType, required=True, title='App Name', description='Application name for tracing.')] [Required]

Application name for tracing.

field endpoint: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default='http://localhost:4318/v1/traces', title='OTLP Endpoint', description="The OTLP collector endpoint URL for sending trace data. Default value is 'http://localhost:4318/v1/traces'")] = 'http://localhost:4318/v1/traces'

The OTLP collector endpoint URL for sending trace data. Default value is ‘http://localhost:4318/v1/traces’.

field headers: Annotated[dict[str, str] | None, FieldInfo(annotation=NoneType, required=False, default_factory=dict, title='Headers', description='Headers needed to call the server.')] [Optional]

Headers needed to call the server.

field insecure: Annotated[bool | None, FieldInfo(annotation=NoneType, required=False, default=False, title='Insecure Connection', description='Whether to disable TLS for the exporter (i.e., use an insecure connection). Default is False.')] = False

Whether to disable TLS for the exporter (i.e., use an insecure connection). Default is False.

field is_grpc: Annotated[bool | None, FieldInfo(annotation=NoneType, required=False, default=False, title='Use gRPC', description='If True, use gRPC for exporting traces instead of HTTP. Default is False.')] = False

If True, use gRPC for exporting traces instead of HTTP. Default is False.

field timeout: Annotated[int | None, FieldInfo(annotation=NoneType, required=False, default=100, title='Timeout', description='Timeout in milliseconds for sending telemetry data to the collector. Default is 100ms.')] = 100

Timeout in milliseconds for sending telemetry data to the collector. Default is 100ms.

pydantic model ibm_watsonx_gov.config.agentic_ai_configuration.TracingConfiguration

Bases: BaseModel

Defines the tracing configuration class. Tracing configuration is required if the evaluations need to be tracked in an experiment or if the agentic application traces should be sent to an OpenTelemetry Collector. One of project_id or space_id is required. If otlp_collector_config is provided, the traces are sent to the OpenTelemetry Collector; otherwise they are logged to a file on disk. To log the traces both to the collector and to a local file, provide otlp_collector_config and set the flag log_traces_to_file to True.

Examples

  1. Create Tracing configuration to track the results in an experiment
    tracing_config = TracingConfiguration(project_id="...")
    agentic_evaluator = AgenticEvaluator(tracing_configuration=tracing_config)
    agentic_evaluator.track_experiment(name="my_experiment")
    ...
    
  2. Create Tracing configuration to send traces to collector
    otlp_collector_config = OTLPCollectorConfiguration(endpoint="http://hostname:4318/v1/traces")
    tracing_config = TracingConfiguration(space_id="...",
                                          resource_attributes={
                                                "wx-deployment-id": deployment_id,
                                                "wx-instance-id": "wml-instance-id1",
                                                "wx-ai-service-id": "ai-service-id1"},
                                          otlp_collector_config=otlp_collector_config)
    agentic_evaluator = AgenticEvaluator(tracing_configuration=tracing_config)
    ...
    
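The trace-routing rules in the description above (collector configured means traces go to the collector, otherwise to a local file, and log_traces_to_file enables both) can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not library code:

```python
def trace_destinations(otlp_collector_config, log_traces_to_file):
    """Where traces go, per the documented rules:
    - no collector configured      -> traces to a local file
    - collector configured         -> traces to the OTLP collector
    - collector + flag set to True -> both destinations
    """
    if otlp_collector_config is None:
        return {"collector": False, "file": True}
    return {"collector": True, "file": bool(log_traces_to_file)}

# No collector configured: traces land in a file on disk.
print(trace_destinations(None, False))     # {'collector': False, 'file': True}
# Collector configured and the flag set: both destinations.
print(trace_destinations(object(), True))  # {'collector': True, 'file': True}
```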

Show JSON schema
{
   "title": "TracingConfiguration",
   "description": "Defines the tracing configuration class. \nTracing configuration is required if the the evaluations are needed to be tracked in an experiment or if the agentic application traces should be sent to a Open Telemetry Collector.\nOne of project_id or space_id is required.\nIf the otlp_collector_config is provided, the traces are logged to Open Telemetry Collector, otherwise the traces are logged to file on disk.\nIf its required to log the traces to both collector and local file, provide the otlp_collector_config and set the flag log_traces_to_file to True.\n\nExamples:\n    1. Create Tracing configuration to track the results in an experiment\n        .. code-block:: python\n\n            tracing_config = TracingConfiguration(project_id=\"...\")\n            agentic_evaluator = AgenticEvaluator(tracing_configuration=tracing_config)\n            agentic_evaluator.track_experiment(name=\"my_experiment\")\n            ...\n\n    2. Create Tracing configuration to send traces to collector\n        .. code-block:: python\n\n            oltp_collector_config = OTLPCollectorConfiguration(endpoint=\"http://hostname:4318/v1/traces\")\n            tracing_config = TracingConfiguration(space_id=\"...\",\n                                                  resource_attributes={\n                                                        \"wx-deployment-id\": deployment_id,\n                                                        \"wx-instance-id\": \"wml-instance-id1\",\n                                                        \"wx-ai-service-id\": \"ai-service-id1\"},\n                                                   otlp_collector_config=oltp_collector_config)\n            agentic_evaluator = AgenticEvaluator(tracing_configuration=tracing_config)\n            ...",
   "type": "object",
   "properties": {
      "project_id": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The project id.",
         "title": "Project ID"
      },
      "space_id": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The space id.",
         "title": "Space ID"
      },
      "resource_attributes": {
         "anyOf": [
            {
               "additionalProperties": {
                  "type": "string"
               },
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "description": "The resource attributes set in all the spans.",
         "title": "Resource Attributes"
      },
      "otlp_collector_config": {
         "anyOf": [
            {
               "$ref": "#/$defs/OTLPCollectorConfiguration"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "OTLP Collector configuration.",
         "title": "OTLP Collector Config"
      },
      "log_traces_to_file": {
         "anyOf": [
            {
               "type": "boolean"
            },
            {
               "type": "null"
            }
         ],
         "default": false,
         "description": "The flag to enable logging of traces to a file. If set to True, the traces are logged to a file. Use the flag when its needed to log the traces to file and to be sent to the server simultaneously.",
         "title": "Log Traces to file"
      }
   },
   "$defs": {
      "OTLPCollectorConfiguration": {
         "description": "Defines the OTLPCollectorConfiguration class.\nIt contains the configuration settings for the OpenTelemetry Protocol collector.\n\nExamples:\n    1. Create OTLPCollectorConfiguration with default parameters\n        .. code-block:: python\n\n            oltp_config = OTLPCollectorConfiguration()\n\n    1. Create OTLPCollectorConfiguration by providing server endpoint details.\n        .. code-block:: python\n\n            oltp_config = OTLPCollectorConfiguration(app_name=\"app\",\n                                                     endpoint=\"https://hostname/ml/v1/traces\",\n                                                     timeout=10,\n                                                     headers={\"Authorization\": \"Bearer token\"})",
         "properties": {
            "app_name": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Application name for tracing.",
               "title": "App Name"
            },
            "endpoint": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "http://localhost:4318/v1/traces",
               "description": "The OTLP collector endpoint URL for sending trace data. Default value is 'http://localhost:4318/v1/traces'",
               "title": "OTLP Endpoint"
            },
            "insecure": {
               "anyOf": [
                  {
                     "type": "boolean"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": false,
               "description": "Whether to disable TLS for the exporter (i.e., use an insecure connection). Default is False.",
               "title": "Insecure Connection"
            },
            "is_grpc": {
               "anyOf": [
                  {
                     "type": "boolean"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": false,
               "description": "If True, use gRPC for exporting traces instead of HTTP. Default is False.",
               "title": "Use gRPC"
            },
            "timeout": {
               "anyOf": [
                  {
                     "type": "integer"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": 100,
               "description": "Timeout in milliseconds for sending telemetry data to the collector. Default is 100ms.",
               "title": "Timeout"
            },
            "headers": {
               "anyOf": [
                  {
                     "additionalProperties": {
                        "type": "string"
                     },
                     "type": "object"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Headers needed to call the server.",
               "title": "Headers"
            }
         },
         "required": [
            "app_name"
         ],
         "title": "OTLPCollectorConfiguration",
         "type": "object"
      }
   }
}

Fields:
Validators:
field log_traces_to_file: Annotated[bool | None, FieldInfo(annotation=NoneType, required=False, default=False, title='Log Traces to file', description='The flag to enable logging of traces to a file. If set to True, the traces are logged to a file. Use the flag when its needed to log the traces to file and to be sent to the server simultaneously.')] = False

The flag to enable logging of traces to a file. If set to True, the traces are logged to a file. Use this flag when the traces need to be written to a file and sent to the server simultaneously.

Validated by:
field otlp_collector_config: Annotated[OTLPCollectorConfiguration | None, FieldInfo(annotation=NoneType, required=False, default=None, title='OTLP Collector Config', description='OTLP Collector configuration.')] = None

OTLP Collector configuration.

Validated by:
field project_id: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, title='Project ID', description='The project id.')] = None

The project id.

Validated by:
field resource_attributes: Annotated[dict[str, str] | None, FieldInfo(annotation=NoneType, required=False, default_factory=dict, title='Resource Attributes', description='The resource attributes set in all the spans.')] [Optional]

The resource attributes set in all the spans.

Validated by:
field space_id: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, title='Space ID', description='The space id.')] = None

The space id.

Validated by:
classmethod create_from_env()

validator validate_fields  »  all fields

property enable_local_traces: bool

property enable_server_traces: bool