Metrics Evaluator¶
- pydantic model ibm_watsonx_gov.evaluators.metrics_evaluator.MetricsEvaluator¶
Bases:
BaseEvaluator
The class to evaluate the metrics and display the results.
Examples
- Evaluate metrics by passing data as a dataframe and default configuration
```python
os.environ["WATSONX_APIKEY"] = "..."
evaluator = MetricsEvaluator()
df = pd.read_csv("")
metrics = [AnswerSimilarityMetric()]
result = evaluator.evaluate(data=df, metrics=metrics)
```
- Evaluate metrics by passing data as a json and default configuration
```python
os.environ["WATSONX_APIKEY"] = "..."
evaluator = MetricsEvaluator()
json_data = {"input_text": "..."}
metrics = [HAPMetric()]
result = evaluator.evaluate(data=json_data, metrics=metrics)
```
- Evaluate metrics by passing configuration and api_client
```python
config = GenAIConfiguration(
    input_fields=["question"],
    context_fields=["context"],
    output_fields=["generated_text"],
    reference_fields=["reference_answer"],
)
wxgov_client = APIClient(credentials=Credentials(api_key=""))
evaluator = MetricsEvaluator(configuration=config, api_client=wxgov_client)
df = pd.read_csv("")
metrics = [AnswerSimilarityMetric()]
result = evaluator.evaluate(data=df, metrics=metrics)
```
- Evaluate metrics by passing metric groups
```python
os.environ["WATSONX_APIKEY"] = "..."
evaluator = MetricsEvaluator()
df = pd.read_csv("")
metrics = [AnswerSimilarityMetric()]
metric_groups = [MetricGroup.RETRIEVAL_QUALITY]
result = evaluator.evaluate(data=df, metrics=metrics, metric_groups=metric_groups)
```
- Display the results
```python
# Get the results in the required format from the output of the evaluate method
result.to_json()
result.to_df()
result.to_dict()

# Display the results
evaluator.display_table()
evaluator.display_insights()
```
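As a rough illustration of the conversion pattern above, the sketch below defines a minimal stand-in result object using only the standard library. `ResultSketch` and its field names are hypothetical, not the library's `MetricsEvaluationResult` (which also offers `to_df()`, backed by pandas); the point is only the shape of the `to_dict`/`to_json` accessors.

```python
import json

# Hypothetical stand-in for MetricsEvaluationResult, illustrating the
# conversion pattern only -- not the library's actual implementation.
class ResultSketch:
    def __init__(self, records):
        self.records = records  # one dict of metric values per evaluated record

    def to_dict(self):
        return self.records

    def to_json(self, **kwargs):
        return json.dumps(self.records, **kwargs)

result = ResultSketch([{"record_id": "r1", "answer_similarity": 0.91}])
print(result.to_json())
```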
Show JSON schema
{ "title": "MetricsEvaluator", "type": "object", "properties": { "api_client": { "default": null, "title": "Api Client" }, "configuration": { "$ref": "#/$defs/GenAIConfiguration", "default": { "record_id_field": "record_id", "record_timestamp_field": "record_timestamp", "task_type": null, "input_fields": [ "input_text" ], "context_fields": [ "context" ], "output_fields": [ "generated_text" ], "reference_fields": [ "ground_truth" ], "locale": null, "tools": [], "tool_calls_field": "tool_calls", "available_tools_field": "available_tools", "llm_judge": null, "prompt_field": "model_prompt" }, "description": "The configuration for metrics evaluation.", "title": "Generative AI Configuration" } }, "$defs": { "AzureOpenAICredentials": { "properties": { "url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "Azure OpenAI url. This attribute can be read from `AZURE_OPENAI_HOST` environment variable.", "title": "Url" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "API key for Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_KEY` environment variable.", "title": "Api Key" }, "api_version": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "The model API version from Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_VERSION` environment variable.", "title": "Api Version" } }, "required": [ "url", "api_key", "api_version" ], "title": "AzureOpenAICredentials", "type": "object" }, "AzureOpenAIFoundationModel": { "description": "The Azure OpenAI foundation model details\n\nExamples:\n 1. Create Azure OpenAI foundation model by passing the credentials during object creation.\n .. code-block:: python\n\n azure_openai_foundation_model = AzureOpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n provider=AzureOpenAIModelProvider(\n credentials=AzureOpenAICredentials(\n api_key=azure_api_key,\n url=azure_host_url,\n api_version=azure_api_model_version,\n )\n )\n )\n\n2. 
Create Azure OpenAI foundation model by setting the credentials in environment variables:\n * ``AZURE_OPENAI_API_KEY`` is used to set the api key for OpenAI.\n * ``AZURE_OPENAI_HOST`` is used to set the url for Azure OpenAI.\n * ``AZURE_OPENAI_API_VERSION`` is used to set the api version for Azure OpenAI.\n\n .. code-block:: python\n\n openai_foundation_model = AzureOpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n )", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/AzureOpenAIModelProvider", "description": "Azure OpenAI provider" }, "model_id": { "description": "Model deployment name from Azure OpenAI", "title": "Model Id", "type": "string" } }, "required": [ "model_id" ], "title": "AzureOpenAIFoundationModel", "type": "object" }, "AzureOpenAIModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "azure_openai", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/AzureOpenAICredentials" }, { "type": "null" } ], "default": null, "description": "Azure OpenAI credentials." } }, "title": "AzureOpenAIModelProvider", "type": "object" }, "GenAIConfiguration": { "description": "Defines the GenAIConfiguration class.\n\nThis is used to specify the fields mapping details in the data and other configuration parameters needed for evaluation.\n\nExamples:\n 1. Create configuration with default parameters\n .. code-block:: python\n\n configuration = GenAIConfiguration()\n\n 2. Create configuration with parameters\n .. code-block:: python\n\n configuration = GenAIConfiguration(input_fields=[\"input\"], \n output_fields=[\"output\"])\n\n 3. Create configuration with dict parameters\n .. 
code-block:: python\n\n config = {\"input_fields\": [\"input\"],\n \"output_fields\": [\"output\"],\n \"context_fields\": [\"contexts\"],\n \"reference_fields\": [\"reference\"]}\n configuration = GenAIConfiguration(**config) ", "properties": { "record_id_field": { "default": "record_id", "description": "The record identifier field name.", "examples": [ "record_id" ], "title": "Record id field", "type": "string" }, "record_timestamp_field": { "default": "record_timestamp", "description": "The record timestamp field name.", "examples": [ "record_timestamp" ], "title": "Record timestamp field", "type": "string" }, "task_type": { "anyOf": [ { "$ref": "#/$defs/TaskType" }, { "type": "null" } ], "default": null, "description": "The generative task type. Default value is None.", "examples": [ "retrieval_augmented_generation" ], "title": "Task Type" }, "input_fields": { "default": [ "input_text" ], "description": "The list of model input fields in the data. Default value is ['input_text'].", "examples": [ [ "question" ] ], "items": { "type": "string" }, "title": "Input Fields", "type": "array" }, "context_fields": { "default": [ "context" ], "description": "The list of context fields in the input fields. Default value is ['context'].", "examples": [ [ "context1", "context2" ] ], "items": { "type": "string" }, "title": "Context Fields", "type": "array" }, "output_fields": { "default": [ "generated_text" ], "description": "The list of model output fields in the data. Default value is ['generated_text'].", "examples": [ [ "output" ] ], "items": { "type": "string" }, "title": "Output Fields", "type": "array" }, "reference_fields": { "default": [ "ground_truth" ], "description": "The list of reference fields in the data. 
Default value is ['ground_truth'].", "examples": [ [ "reference" ] ], "items": { "type": "string" }, "title": "Reference Fields", "type": "array" }, "locale": { "anyOf": [ { "$ref": "#/$defs/Locale" }, { "type": "null" } ], "default": null, "description": "The language locale of the input, output and reference fields in the data.", "title": "Locale" }, "tools": { "default": [], "description": "The list of tools used by the LLM.", "examples": [ [ "function1", "function2" ] ], "items": { "type": "object" }, "title": "Tools", "type": "array" }, "tool_calls_field": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "tool_calls", "description": "The tool calls field in the input fields. Default value is 'tool_calls'.", "examples": [ "tool_calls" ], "title": "Tool Calls Field" }, "available_tools_field": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "available_tools", "description": "The tool inventory field in the data. Default value is 'available_tools'.", "examples": [ "available_tools" ], "title": "Available Tools Field" }, "llm_judge": { "anyOf": [ { "$ref": "#/$defs/LLMJudge" }, { "type": "null" } ], "default": null, "description": "LLM as Judge Model details.", "title": "LLM Judge" }, "prompt_field": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "model_prompt", "description": "The prompt field in the input fields. Default value is 'model_prompt'.", "examples": [ "model_prompt" ], "title": "Model Prompt Field" } }, "title": "GenAIConfiguration", "type": "object" }, "LLMJudge": { "description": "Defines the LLMJudge.\n\nThe LLMJudge class contains the details of the llm judge model to be used for computing the metric.\n\nExamples:\n 1. Create LLMJudge using watsonx.ai foundation model:\n .. 
code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"google/flan-ul2\",\n project_id=PROJECT_ID,\n provider=WxAIModelProvider(\n credentials=WxAICredentials(api_key=wx_apikey)\n )\n )\n llm_judge = LLMJudge(model=wx_ai_foundation_model)", "properties": { "model": { "anyOf": [ { "$ref": "#/$defs/WxAIFoundationModel" }, { "$ref": "#/$defs/OpenAIFoundationModel" }, { "$ref": "#/$defs/AzureOpenAIFoundationModel" }, { "$ref": "#/$defs/RITSFoundationModel" } ], "description": "The foundation model to be used as judge", "title": "Model" } }, "required": [ "model" ], "title": "LLMJudge", "type": "object" }, "Locale": { "properties": { "input": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Input" }, "output": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Output" }, "reference": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Reference" } }, "title": "Locale", "type": "object" }, "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "custom" ], "title": "ModelProviderType", "type": "string" }, "OpenAICredentials": { "description": "Defines the OpenAICredentials class to specify the OpenAI server details.\n\nExamples:\n 1. Create OpenAICredentials with default parameters. By default Dallas region is used.\n .. code-block:: python\n\n openai_credentials = OpenAICredentials(api_key=api_key,\n url=openai_url)\n\n 2. Create OpenAICredentials by reading from environment variables.\n .. 
code-block:: python\n\n os.environ[\"OPENAI_API_KEY\"] = \"...\"\n os.environ[\"OPENAI_URL\"] = \"...\"\n openai_credentials = OpenAICredentials.create_from_env()", "properties": { "url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "title": "Url" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "title": "Api Key" } }, "required": [ "url", "api_key" ], "title": "OpenAICredentials", "type": "object" }, "OpenAIFoundationModel": { "description": "The OpenAI foundation model details\n\nExamples:\n 1. Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to ``OpenAICredentials`` object.\n .. code-block:: python\n\n openai_foundation_model = OpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n provider=OpenAIModelProvider(\n credentials=OpenAICredentials(\n api_key=api_key,\n url=openai_url,\n )\n )\n )\n\n 2. Create OpenAI foundation model by setting the credentials in environment variables:\n * ``OPENAI_API_KEY`` is used to set the api key for OpenAI.\n * ``OPENAI_URL`` is used to set the url for OpenAI\n\n .. code-block:: python\n\n openai_foundation_model = OpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n )", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/OpenAIModelProvider", "description": "OpenAI provider" }, "model_id": { "description": "Model name from OpenAI", "title": "Model Id", "type": "string" } }, "required": [ "model_id" ], "title": "OpenAIFoundationModel", "type": "object" }, "OpenAIModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "openai", "description": "The type of model provider." 
}, "credentials": { "anyOf": [ { "$ref": "#/$defs/OpenAICredentials" }, { "type": "null" } ], "default": null, "description": "OpenAI credentials. This can also be set by using `OPENAI_API_KEY` environment variable." } }, "title": "OpenAIModelProvider", "type": "object" }, "RITSCredentials": { "properties": { "hostname": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "https://inference-3scale-apicast-production.apps.rits.fmaas.res.ibm.com", "description": "The rits hostname", "title": "Hostname" }, "api_key": { "title": "Api Key", "type": "string" } }, "required": [ "api_key" ], "title": "RITSCredentials", "type": "object" }, "RITSFoundationModel": { "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/RITSModelProvider", "description": "The provider of the model." } }, "title": "RITSFoundationModel", "type": "object" }, "RITSModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "rits", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/RITSCredentials" }, { "type": "null" } ], "default": null, "description": "RITS credentials." } }, "title": "RITSModelProvider", "type": "object" }, "TaskType": { "description": "Supported task types for generative AI models", "enum": [ "question_answering", "classification", "summarization", "generation", "extraction", "retrieval_augmented_generation" ], "title": "TaskType", "type": "string" }, "WxAICredentials": { "description": "Defines the WxAICredentials class to specify the watsonx.ai server details.\n\nExamples:\n 1. Create WxAICredentials with default parameters. By default Dallas region is used.\n .. code-block:: python\n\n wxai_credentials = WxAICredentials(api_key=\"...\")\n\n 2. Create WxAICredentials by specifying region url.\n .. 
code-block:: python\n\n wxai_credentials = WxAICredentials(api_key=\"...\",\n url=\"https://au-syd.ml.cloud.ibm.com\")\n\n 3. Create WxAICredentials by reading from environment variables.\n .. code-block:: python\n\n os.environ[\"WATSONX_APIKEY\"] = \"...\"\n # [Optional] Specify watsonx region specific url. Default is https://us-south.ml.cloud.ibm.com .\n os.environ[\"WATSONX_URL\"] = \"https://eu-gb.ml.cloud.ibm.com\"\n wxai_credentials = WxAICredentials.create_from_env()\n\n 4. Create WxAICredentials for on-prem.\n .. code-block:: python\n\n wxai_credentials = WxAICredentials(url=\"https://<hostname>\",\n username=\"...\"\n api_key=\"...\",\n version=\"5.2\")\n\n 5. Create WxAICredentials by reading from environment variables for on-prem.\n .. code-block:: python\n\n os.environ[\"WATSONX_URL\"] = \"https://<hostname>\"\n os.environ[\"WATSONX_VERSION\"] = \"5.2\"\n os.environ[\"WATSONX_USERNAME\"] = \"...\"\n os.environ[\"WATSONX_APIKEY\"] = \"...\"\n # Only one of api_key or password is needed\n #os.environ[\"WATSONX_PASSWORD\"] = \"...\"\n wxai_credentials = WxAICredentials.create_from_env()", "properties": { "url": { "default": "https://us-south.ml.cloud.ibm.com", "description": "The url for watsonx ai service", "examples": [ "https://us-south.ml.cloud.ibm.com", "https://eu-de.ml.cloud.ibm.com", "https://eu-gb.ml.cloud.ibm.com", "https://jp-tok.ml.cloud.ibm.com", "https://au-syd.ml.cloud.ibm.com" ], "title": "watsonx.ai url", "type": "string" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The user api key. Required for using watsonx as a service and one of api_key or password is required for using watsonx on-prem software.", "strip_whitespace": true, "title": "Api Key" }, "version": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The watsonx on-prem software version. 
Required for using watsonx on-prem software.", "title": "Version" }, "username": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The user name. Required for using watsonx on-prem software.", "title": "User name" }, "password": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The user password. One of api_key or password is required for using watsonx on-prem software.", "title": "Password" }, "instance_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "openshift", "description": "The watsonx.ai instance id. Default value is openshift.", "title": "Instance id" } }, "title": "WxAICredentials", "type": "object" }, "WxAIFoundationModel": { "description": "The IBM watsonx.ai foundation model details\n\nTo initialize the foundation model, you can either pass in the credentials directly or set the environment.\nYou can follow these examples to create the provider.\n\nExamples:\n 1. Create foundation model by specifying the credentials during object creation:\n .. code-block:: python\n\n # Specify the credentials during object creation\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"google/flan-ul2\",\n project_id=<PROJECT_ID>,\n provider=WxAIModelProvider(\n credentials=WxAICredentials(\n url=wx_url, # This is optional field, by default US-Dallas region is selected\n api_key=wx_apikey,\n )\n )\n )\n\n 2. Create foundation model by setting the credentials environment variables:\n * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n * The url is optional and will be set to US-Dallas region by default. It can be set using one of the environment variables ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n\n .. 
code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"google/flan-ul2\",\n project_id=<PROJECT_ID>,\n )\n\n 3. Create foundation model by specifying watsonx.governance software credentials during object creation:\n .. code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"google/flan-ul2\",\n project_id=project_id,\n provider=WxAIModelProvider(\n credentials=WxAICredentials(\n url=wx_url,\n api_key=wx_apikey,\n username=wx_username,\n version=wx_version,\n )\n )\n )\n\n 4. Create foundation model by setting watsonx.governance software credentials environment variables:\n * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n * The url can be set using one of these environment variable ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n * The username can be set using one of these environment variable ``WXAI_USERNAME``, ``WATSONX_USERNAME``, or ``WXG_USERNAME``. These will be read in the order of precedence.\n * The version of watsonx.governance software can be set using one of these environment variable ``WXAI_VERSION``, ``WATSONX_VERSION``, or ``WXG_VERSION``. These will be read in the order of precedence.\n\n .. code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"google/flan-ul2\",\n project_id=project_id,\n )", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/WxAIModelProvider", "description": "The provider of the model." 
}, "model_id": { "description": "The unique identifier for the watsonx.ai model.", "title": "Model Id", "type": "string" }, "project_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The project ID associated with the model.", "title": "Project Id" }, "space_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The space ID associated with the model.", "title": "Space Id" } }, "required": [ "model_id" ], "title": "WxAIFoundationModel", "type": "object" }, "WxAIModelProvider": { "description": "This class represents a model provider configuration for IBM watsonx.ai. It includes the provider type and\ncredentials required to authenticate and interact with the watsonx.ai platform. If credentials are not explicitly\nprovided, it attempts to load them from environment variables.\n\nExamples:\n 1. Create provider using credentials object:\n .. code-block:: python\n\n credentials = WxAICredentials(\n url=\"https://us-south.ml.cloud.ibm.com\",\n api_key=\"your-api-key\"\n )\n provider = WxAIModelProvider(credentials=credentials)\n\n 2. Create provider using environment variables:\n .. code-block:: python\n\n import os\n\n os.environ['WATSONX_URL'] = \"https://us-south.ml.cloud.ibm.com\"\n os.environ['WATSONX_APIKEY'] = \"your-api-key\"\n\n provider = WxAIModelProvider()", "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "ibm_watsonx.ai", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/WxAICredentials" }, { "type": "null" } ], "default": null, "description": "The credentials used to authenticate with watsonx.ai. If not provided, they will be loaded from environment variables." } }, "title": "WxAIModelProvider", "type": "object" } } }
- field configuration: Annotated[GenAIConfiguration, FieldInfo(annotation=NoneType, required=False, default=GenAIConfiguration(record_id_field='record_id', record_timestamp_field='record_timestamp', task_type=None, input_fields=['input_text'], context_fields=['context'], output_fields=['generated_text'], reference_fields=['ground_truth'], locale=None, tools=[], tool_calls_field='tool_calls', available_tools_field='available_tools', llm_judge=None, prompt_field='model_prompt'), title='Generative AI Configuration', description='The configuration for metrics evaluation.')] = GenAIConfiguration(record_id_field='record_id', record_timestamp_field='record_timestamp', task_type=None, input_fields=['input_text'], context_fields=['context'], output_fields=['generated_text'], reference_fields=['ground_truth'], locale=None, tools=[], tool_calls_field='tool_calls', available_tools_field='available_tools', llm_judge=None, prompt_field='model_prompt')¶
The configuration for metrics evaluation.
- display_insights()¶
Display the metrics results in a Venn diagram based on the metric thresholds.
- display_table()¶
Display the metrics result as a table.
- evaluate(data: DataFrame | dict, metrics: list[GenAIMetric] = [], metric_groups: list[MetricGroup] = [], **kwargs) → MetricsEvaluationResult¶
Evaluate the metrics for the given data.
- Parameters:
data (pd.DataFrame | dict) – The data to be evaluated.
metrics (list[GenAIMetric], optional) – The metrics to be evaluated. Defaults to [].
metric_groups (list[MetricGroup], optional) – The metric groups to be evaluated. Defaults to [].
**kwargs – Additional keyword arguments.
- Returns:
The result of the evaluation.
- Return type:
MetricsEvaluationResult
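`evaluate()` accepts either a pandas DataFrame or a plain dict. As a minimal sketch of that dual input shape, the hypothetical `normalize` helper below (an illustration, not the library's internals) shows how a single-record dict can be viewed as a one-row list of records:

```python
# Hedged sketch: a single-record dict normalized to the list-of-records
# shape a tabular evaluator iterates over. This mirrors the
# DataFrame-or-dict input of evaluate(), not the library's implementation.
def normalize(data):
    if isinstance(data, dict):
        return [data]  # one record
    return data        # assume an iterable of records (e.g. DataFrame rows)

rows = normalize({"input_text": "What is MLOps?", "generated_text": "..."})
print(rows)
```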
- async evaluate_async(data: DataFrame | dict, metrics: list[GenAIMetric] = [], metric_groups: list[MetricGroup] = [], **kwargs) → MetricsEvaluationResult¶
Asynchronously evaluate the metrics for the given data.
- Parameters:
data (pd.DataFrame | dict) – The data to be evaluated.
metrics (list[GenAIMetric], optional) – The metrics to be evaluated. Defaults to [].
metric_groups (list[MetricGroup], optional) – The metric groups to be evaluated. Defaults to [].
**kwargs – Additional keyword arguments.
- Returns:
The result of the evaluation.
- Return type:
MetricsEvaluationResult
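Because `evaluate_async` is a coroutine, it must be awaited inside an event loop. The sketch below shows only that calling pattern with a hypothetical stub coroutine; the real method requires the ibm_watsonx_gov package and returns a `MetricsEvaluationResult`, not the dict used here.

```python
import asyncio

# Hypothetical stub standing in for MetricsEvaluator.evaluate_async,
# used only to demonstrate the await / asyncio.run calling pattern.
async def evaluate_async(data, metrics=None, metric_groups=None, **kwargs):
    await asyncio.sleep(0)  # placeholder for I/O-bound metric computation
    return {"metrics_result": list(metrics or [])}

async def main():
    return await evaluate_async({"input_text": "..."},
                                metrics=["answer_similarity"])

result = asyncio.run(main())
print(result)
```

In a Jupyter notebook, where an event loop is already running, use `result = await evaluator.evaluate_async(...)` directly instead of `asyncio.run`.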
- model_post_init(context: Any, /) → None¶
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self – The BaseModel instance.
context – The context.