Foundation Model¶
- pydantic model ibm_watsonx_gov.entities.foundation_model.AWSBedrockFoundationModel¶
Bases:
BaseModel
The Amazon Bedrock foundation model details.
- Examples:
- Create AWS Bedrock foundation model by passing credentials manually:
bedrock_model = AWSBedrockFoundationModel(
    model_id="anthropic.claude-v2",
    provider=AWSBedrockModelProvider(
        credentials=AWSBedrockCredentials(
            aws_access_key_id="your-access-key-id",
            aws_secret_access_key="your-secret-access-key",
            aws_region_name="us-east-1",
            aws_session_token="optional-session-token"
        )
    ),
    parameters={
        "temperature": 0.7,
        "top_p": 0.9,
        "max_tokens": 200,
        "stop_sequences": ["\n"],
        "system": "You are a concise assistant.",
        "reasoning_effort": "high",
        "tool_choice": "auto"
    }
)
- Create AWS Bedrock foundation model using environment variables:
os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-access-key"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"

bedrock_model = AWSBedrockFoundationModel(
    model_id="anthropic.claude-v2"
)
Show JSON schema
{ "title": "AWSBedrockFoundationModel", "description": " The Amazon Bedrock foundation model details.\n\n Examples:\n 1. Create AWS Bedrock foundation model by passing credentials manually:\n .. code-block:: python\n\n bedrock_model = AWSBedrockFoundationModel(\n model_id=\"anthropic.claude-v2\",\n provider=AWSBedrockModelProvider(\n credentials=AWSBedrockCredentials(\n aws_access_key_id=\"your-access-key-id\",\n aws_secret_access_key=\"your-secret-access-key\",\n aws_region_name=\"us-east-1\",\n aws_session_token=\"optional-session-token\"\n )\n ),\n parameters={\n \"temperature\": 0.7,\n \"top_p\": 0.9,\n \"max_tokens\": 200,\n \"stop_sequences\": [\"\n\"],\n \"system\": \"You are a concise assistant.\",\n \"reasoning_effort\": \"high\",\n \"tool_choice\": \"auto\"\n }\n )\n\n 2. Create AWS Bedrock foundation model using environment variables:\n os.environ[\"AWS_ACCESS_KEY_ID\"] = \"your-access-key-id\"\n os.environ[\"AWS_SECRET_ACCESS_KEY\"] = \"your-secret-access-key\"\n os.environ[\"AWS_DEFAULT_REGION\"] = \"us-east-1\"\n\n .. code-block:: python\n\n bedrock_model = AWSBedrockFoundationModel(\n model_id=\"anthropic.claude-v2\"\n )\n ", "type": "object", "properties": { "model_id": { "description": "The AWS Bedrock model name. It must be a valid AWS Bedrock model identifier.", "examples": [ "anthropic.claude-v2" ], "title": "Model ID", "type": "string" }, "provider": { "$ref": "#/$defs/AWSBedrockModelProvider", "description": "The AWS Bedrock provider details.", "title": "Provider" }, "parameters": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "The model parameters to be used when invoking the model. The parameters may include temperature, top_p, max_tokens, etc..", "title": "Parameters" } }, "$defs": { "AWSBedrockCredentials": { "description": "Defines the AWSBedrockCredentials class for accessing AWS Bedrock using environment variables or manual input.\n\nExamples:\n 1. Create credentials manually:\n .. 
code-block:: python\n\n credentials = AWSBedrockCredentials(\n aws_access_key_id=\"your-access-key-id\",\n aws_secret_access_key=\"your-secret-access-key\",\n aws_region_name=\"us-east-1\",\n aws_session_token=\"optional-session-token\"\n )\n\n 2. Create credentials from environment:\n .. code-block:: python\n\n os.environ[\"AWS_ACCESS_KEY_ID\"] = \"your-access-key-id\"\n os.environ[\"AWS_DEFAULT_REGION\"] = \"us-east-1\"\n os.environ[\"AWS_SECRET_ACCESS_KEY\"] = \"your-secret-access-key\"\n\n credentials = AWSBedrockCredentials.create_from_env()", "properties": { "aws_access_key_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "The AWS access key id. This attribute value will be read from AWS_ACCESS_KEY_ID environment variable when creating AWSBedrockCredentials from environment.", "title": "AWS Access Key ID" }, "aws_secret_access_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "The AWS secret access key. This attribute value will be read from AWS_SECRET_ACCESS_KEY environment variable when creating AWSBedrockCredentials from environment.", "title": "AWS Secret Access Key" }, "aws_region_name": { "default": "us-east-1", "description": "AWS region. This attribute value will be read from AWS_DEFAULT_REGION environment variable when creating AWSBedrockCredentials from environment.", "examples": [ "us-east-1", "eu-west-1" ], "title": "AWS Region", "type": "string" }, "aws_session_token": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "Optional AWS session token for temporary credentials.", "title": "AWS Session Token" } }, "required": [ "aws_access_key_id", "aws_secret_access_key", "aws_session_token" ], "title": "AWSBedrockCredentials", "type": "object" }, "AWSBedrockModelProvider": { "description": "Represents a model provider using Amazon Bedrock.\n\nExamples:\n 1. Create provider using credentials object:\n .. 
code-block:: python\n\n provider = AWSBedrockModelProvider(\n credentials=AWSBedrockCredentials(\n aws_access_key_id=\"your-access-key-id\",\n aws_secret_access_key=\"your-secret-access-key\",\n aws_region_name=\"us-east-1\",\n aws_session_token=\"optional-session-token\"\n )\n )\n\n 2. Create provider using environment variables:\n .. code-block:: python\n\n os.environ['AWS_ACCESS_KEY_ID'] = \"your-access-key-id\"\n os.environ['AWS_SECRET_ACCESS_KEY'] = \"your-secret-access-key\"\n os.environ['AWS_SESSION_TOKEN'] = \"optional-session-token\" # Optional\n os.environ['AWS_DEFAULT_REGION'] = \"us-east-1\"\n provider = AWSBedrockModelProvider()", "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "aws_bedrock", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/AWSBedrockCredentials" }, { "type": "null" } ], "default": null, "description": "AWS Bedrock credentials." } }, "title": "AWSBedrockModelProvider", "type": "object" }, "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" } }, "required": [ "model_id" ] }
- Fields:
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, title='Model ID', description='The AWS Bedrock model name. It must be a valid AWS Bedrock model identifier.', examples=['anthropic.claude-v2'])] [Required]¶
The AWS Bedrock model name. It must be a valid AWS Bedrock model identifier.
- field parameters: Dict[str, Any] | None [Optional]¶
The model parameters to be used when invoking the model. The parameters may include temperature, top_p, max_tokens, etc.
- field provider: Annotated[AWSBedrockModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=AWSBedrockModelProvider, title='Provider', description='The AWS Bedrock provider details.')] [Optional]¶
The AWS Bedrock provider details.
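The parameters field is a plain dictionary passed through when the model is invoked. A minimal sketch of such a payload, using only keys already named in the example above (all values are illustrative, not defaults):

```python
# Illustrative generation parameters for an AWS Bedrock model.
# Key names follow the example above; values are placeholders.
parameters = {
    "temperature": 0.7,        # sampling temperature
    "top_p": 0.9,              # nucleus-sampling cutoff
    "max_tokens": 200,         # cap on generated tokens
    "stop_sequences": ["\n"],  # stop generation at a newline
}

# The field is typed Dict[str, Any] | None, so omitting it entirely is valid.
assert all(isinstance(key, str) for key in parameters)
```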
- pydantic model ibm_watsonx_gov.entities.foundation_model.AzureOpenAIFoundationModel¶
Bases:
FoundationModel
The Azure OpenAI foundation model details
Examples
- Create Azure OpenAI foundation model by passing the credentials during object creation.
azure_openai_foundation_model = AzureOpenAIFoundationModel(
    model_id="gpt-4o-mini",
    provider=AzureOpenAIModelProvider(
        credentials=AzureOpenAICredentials(
            api_key=azure_api_key,
            url=azure_host_url,
            api_version=azure_api_model_version,
        )
    )
)
- Create Azure OpenAI foundation model by setting the credentials in environment variables:
  - AZURE_OPENAI_API_KEY is used to set the api key for Azure OpenAI.
  - AZURE_OPENAI_HOST is used to set the url for Azure OpenAI.
  - AZURE_OPENAI_API_VERSION is used to set the api version for Azure OpenAI.

openai_foundation_model = AzureOpenAIFoundationModel(
    model_id="gpt-4o-mini",
)
Show JSON schema
{ "title": "AzureOpenAIFoundationModel", "description": "The Azure OpenAI foundation model details\n\nExamples:\n 1. Create Azure OpenAI foundation model by passing the credentials during object creation.\n .. code-block:: python\n\n azure_openai_foundation_model = AzureOpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n provider=AzureOpenAIModelProvider(\n credentials=AzureOpenAICredentials(\n api_key=azure_api_key,\n url=azure_host_url,\n api_version=azure_api_model_version,\n )\n )\n )\n\n2. Create Azure OpenAI foundation model by setting the credentials in environment variables:\n * ``AZURE_OPENAI_API_KEY`` is used to set the api key for OpenAI.\n * ``AZURE_OPENAI_HOST`` is used to set the url for Azure OpenAI.\n * ``AZURE_OPENAI_API_VERSION`` is uses to set the the api version for Azure OpenAI.\n\n .. code-block:: python\n\n openai_foundation_model = AzureOpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/AzureOpenAIModelProvider", "description": "Azure OpenAI provider" }, "model_id": { "description": "Model deployment name from Azure OpenAI", "title": "Model Id", "type": "string" } }, "$defs": { "AzureOpenAICredentials": { "properties": { "url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "Azure OpenAI url. This attribute can be read from `AZURE_OPENAI_HOST` environment variable.", "title": "Url" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "API key for Azure OpenAI. This attribute can be read from `AZURE_OPENAI_API_KEY` environment variable.", "title": "Api Key" }, "api_version": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "The model API version from Azure OpenAI. 
This attribute can be read from `AZURE_OPENAI_API_VERSION` environment variable.", "title": "Api Version" } }, "required": [ "url", "api_key", "api_version" ], "title": "AzureOpenAICredentials", "type": "object" }, "AzureOpenAIModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "azure_openai", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/AzureOpenAICredentials" }, { "type": "null" } ], "default": null, "description": "Azure OpenAI credentials." } }, "title": "AzureOpenAIModelProvider", "type": "object" }, "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" } }, "required": [ "model_id" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Model deployment name from Azure OpenAI')] [Required]¶
Model deployment name from Azure OpenAI
- field provider: Annotated[AzureOpenAIModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=AzureOpenAIModelProvider, description='Azure OpenAI provider')] [Optional]¶
Azure OpenAI provider
- pydantic model ibm_watsonx_gov.entities.foundation_model.CustomFoundationModel¶
Bases:
FoundationModel
Defines the CustomFoundationModel class.
This class extends the base FoundationModel to support custom inference logic through a user-defined scoring function. It is intended for use cases where the model is externally hosted and not in the list of supported frameworks.
Examples
- Define a custom scoring function and create a model:
import pandas as pd

def scoring_fn(data: pd.DataFrame):
    predictions_list = []
    # Custom logic to call an external LLM
    return pd.DataFrame({"generated_text": predictions_list})

model = CustomFoundationModel(
    scoring_fn=scoring_fn
)
Show JSON schema
{ "title": "CustomFoundationModel", "description": "Defines the CustomFoundationModel class.\n\nThis class extends the base `FoundationModel` to support custom inference logic through a user-defined scoring function.\nIt is intended for use cases where the model is externally hosted and not in the list of supported frameworks.\nExamples:\n 1. Define a custom scoring function and create a model:\n .. code-block:: python\n\n import pandas as pd\n\n def scoring_fn(data: pd.DataFrame):\n predictions_list = []\n # Custom logic to call an external LLM\n return pd.DataFrame({\"generated_text\": predictions_list}) \n\n model = CustomFoundationModel(\n scoring_fn=scoring_fn\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/ModelProvider", "description": "The provider of the model." } }, "$defs": { "ModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "description": "The type of model provider." } }, "required": [ "type" ], "title": "ModelProvider", "type": "object" }, "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" } } }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field provider: Annotated[ModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=CustomModelProvider, description='The provider of the model.')] [Optional]¶
The provider of the model.
- model_post_init(context: Any, /) None ¶
This override both initializes private attributes and calls the user-defined model_post_init method.
- pydantic model ibm_watsonx_gov.entities.foundation_model.FoundationModel¶
Bases:
BaseModel
Defines the base FoundationModel class.
Show JSON schema
{ "title": "FoundationModel", "description": "Defines the base FoundationModel class.", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/ModelProvider", "description": "The provider of the foundation model." } }, "$defs": { "ModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "description": "The type of model provider." } }, "required": [ "type" ], "title": "ModelProvider", "type": "object" }, "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" } }, "required": [ "provider" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field model_name: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='The name of the foundation model.')] = None¶
The name of the foundation model.
- field provider: Annotated[ModelProvider, FieldInfo(annotation=NoneType, required=True, description='The provider of the foundation model.')] [Required]¶
The provider of the foundation model.
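Per the JSON schema above, only provider is required. A minimal JSON-shaped instance, shown as a plain dict rather than the Python API, might look like the following sketch:

```python
# Minimal JSON-shaped instance of the FoundationModel schema above.
# Only "provider" is required; "model_name" defaults to null.
instance = {
    "model_name": None,
    "provider": {"type": "custom"},  # any value from the ModelProviderType enum
}

required = ["provider"]  # from the schema's "required" list
assert all(key in instance for key in required)
```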
- pydantic model ibm_watsonx_gov.entities.foundation_model.FoundationModelInfo¶
Bases:
BaseModel
Represents a foundation model used in an experiment.
Show JSON schema
{ "title": "FoundationModelInfo", "description": "Represents a foundation model used in an experiment.", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "model_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The id of the foundation model.", "title": "Model Id" }, "provider": { "description": "The provider of the foundation model.", "title": "Provider", "type": "string" }, "type": { "description": "The type of foundation model.", "example": [ "chat", "embedding", "text-generation" ], "title": "Type", "type": "string" } }, "required": [ "provider", "type" ] }
- field model_id: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='The id of the foundation model.')] = None¶
The id of the foundation model.
- field model_name: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='The name of the foundation model.')] = None¶
The name of the foundation model.
- field provider: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='The provider of the foundation model.')] [Required]¶
The provider of the foundation model.
- field type: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='The type of foundation model.', json_schema_extra={'example': ['chat', 'embedding', 'text-generation']})] [Required]¶
The type of foundation model.
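From the schema above, provider and type are the only required fields, and both are plain strings here (provider is not constrained to the ModelProviderType enum). An illustrative payload, with placeholder values:

```python
# Illustrative FoundationModelInfo payload matching the schema above.
# "provider" and "type" are required; model_id and model_name are optional.
info = {
    "model_id": "gpt-4o-mini",  # placeholder id
    "model_name": None,
    "provider": "openai",  # a plain string here, not the provider enum
    "type": "chat",        # e.g. "chat", "embedding", "text-generation"
}

assert info["type"] in ("chat", "embedding", "text-generation")
```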
- pydantic model ibm_watsonx_gov.entities.foundation_model.GoogleAIStudioFoundationModel¶
Bases:
FoundationModel
Represents a foundation model served via Google AI Studio.
Examples
- Create Google AI Studio foundation model by passing the credentials during object creation.
model = GoogleAIStudioFoundationModel(
    model_id="gemini-1.5-pro-002",
    provider=GoogleAIStudioModelProvider(
        credentials=GoogleAIStudioCredentials(api_key="your_api_key")
    )
)
- Create Google AI Studio foundation model by setting the credentials in environment variables:
  - GOOGLE_API_KEY or GEMINI_API_KEY is used to set the API key for Google AI Studio.

model = GoogleAIStudioFoundationModel(
    model_id="gemini/gpt-4o-mini",
)
Show JSON schema
{ "title": "GoogleAIStudioFoundationModel", "description": "Represents a foundation model served via Google AI Studio.\n\nExamples:\n 1. Create Google AI Studio foundation model by passing the credentials during object creation.\n .. code-block:: python\n\n model = GoogleAIStudioFoundationModel(\n model_id=\"gemini-1.5-pro-002\",\n provider=GoogleAIStudioModelProvider(\n credentials=GoogleAIStudioCredentials(api_key=\"your_api_key\")\n )\n )\n 2. Create Google AI Studio foundation model by setting the credentials in environment variables:\n * ``GOOGLE_API_KEY`` OR ``GEMINI_API_KEY`` is used to set the Credentials path for Vertex AI.\n .. code-block:: python\n\n model = GoogleAIStudioFoundationModel(\n model_id=\"gemini/gpt-4o-mini\",\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/GoogleAIStudioModelProvider", "description": "Google AI Studio provider.", "title": "Provider" }, "model_id": { "description": "Model name for Google AI Studio. Must be a valid Google AI model identifier or a fully-qualified publisher path", "examples": [ "gemini-1.5-pro-002" ], "title": "Model id", "type": "string" } }, "$defs": { "GoogleAIStudioCredentials": { "description": "Defines the GoogleAIStudioCredentials class for accessing Google AI Studio using an API key.\n\nExamples:\n 1. Create credentials manually:\n .. code-block:: python\n\n google_credentials = GoogleAIStudioCredentials(api_key=\"your-api-key\")\n\n 2. Create credentials from environment:\n .. code-block:: python\n\n os.environ[\"GOOGLE_API_KEY\"] = \"your-api-key\"\n google_credentials = GoogleAIStudioCredentials.create_from_env()", "properties": { "api_key": { "description": "The Google AI Studio key. 
This attribute can be read from GOOGLE_API_KEY environment variable when creating GoogleAIStudioCredentials from environment.", "title": "Api Key", "type": "string" } }, "required": [ "api_key" ], "title": "GoogleAIStudioCredentials", "type": "object" }, "GoogleAIStudioModelProvider": { "description": "Represents a model provider using Google AI Studio.\n\nExamples:\n 1. Create provider using credentials object:\n .. code-block:: python\n\n provider = GoogleAIStudioModelProvider(\n credentials=GoogleAIStudioCredentials(api_key=\"api-key\")\n )\n\n 2. Create provider using environment variables:\n .. code-block:: python\n\n os.environ['GOOGLE_API_KEY'] = \"your_api_key\"\n\n provider = GoogleAIStudioModelProvider()", "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "google_ai_studio", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/GoogleAIStudioCredentials" }, { "type": "null" } ], "default": null, "description": "Google AI Studio credentials." } }, "title": "GoogleAIStudioModelProvider", "type": "object" }, "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" } }, "required": [ "model_id" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, title='Model id', description='Model name for Google AI Studio. Must be a valid Google AI model identifier or a fully-qualified publisher path', examples=['gemini-1.5-pro-002'])] [Required]¶
Model name for Google AI Studio. Must be a valid Google AI model identifier or a fully-qualified publisher path
- field provider: Annotated[GoogleAIStudioModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=GoogleAIStudioModelProvider, title='Provider', description='Google AI Studio provider.')] [Optional]¶
Google AI Studio provider.
- pydantic model ibm_watsonx_gov.entities.foundation_model.OpenAIFoundationModel¶
Bases:
FoundationModel
The OpenAI foundation model details
Examples
- Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to the OpenAICredentials object.

openai_foundation_model = OpenAIFoundationModel(
    model_id="gpt-4o-mini",
    provider=OpenAIModelProvider(
        credentials=OpenAICredentials(
            api_key=api_key,
            url=openai_url,
        )
    )
)
- Create OpenAI foundation model by setting the credentials in environment variables:
  - OPENAI_API_KEY is used to set the api key for OpenAI.
  - OPENAI_URL is used to set the url for OpenAI.

openai_foundation_model = OpenAIFoundationModel(
    model_id="gpt-4o-mini",
)
Show JSON schema
{ "title": "OpenAIFoundationModel", "description": "The OpenAI foundation model details\n\nExamples:\n 1. Create OpenAI foundation model by passing the credentials during object creation. Note that the url is optional and will be set to the default value for OpenAI. To change the default value, the url should be passed to ``OpenAICredentials`` object.\n .. code-block:: python\n\n openai_foundation_model = OpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n provider=OpenAIModelProvider(\n credentials=OpenAICredentials(\n api_key=api_key,\n url=openai_url,\n )\n )\n )\n\n 2. Create OpenAI foundation model by setting the credentials in environment variables:\n * ``OPENAI_API_KEY`` is used to set the api key for OpenAI.\n * ``OPENAI_URL`` is used to set the url for OpenAI\n\n .. code-block:: python\n\n openai_foundation_model = OpenAIFoundationModel(\n model_id=\"gpt-4o-mini\",\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/OpenAIModelProvider", "description": "OpenAI provider" }, "model_id": { "description": "Model name from OpenAI", "title": "Model Id", "type": "string" } }, "$defs": { "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" }, "OpenAICredentials": { "description": "Defines the OpenAICredentials class to specify the OpenAI server details.\n\nExamples:\n 1. Create OpenAICredentials with default parameters. By default Dallas region is used.\n .. code-block:: python\n\n openai_credentials = OpenAICredentials(api_key=api_key,\n url=openai_url)\n\n 2. Create OpenAICredentials by reading from environment variables.\n .. 
code-block:: python\n\n os.environ[\"OPENAI_API_KEY\"] = \"...\"\n os.environ[\"OPENAI_URL\"] = \"...\"\n openai_credentials = OpenAICredentials.create_from_env()", "properties": { "url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "title": "Url" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "title": "Api Key" } }, "required": [ "url", "api_key" ], "title": "OpenAICredentials", "type": "object" }, "OpenAIModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "openai", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/OpenAICredentials" }, { "type": "null" } ], "default": null, "description": "OpenAI credentials. This can also be set by using `OPENAI_API_KEY` environment variable." } }, "title": "OpenAIModelProvider", "type": "object" } }, "required": [ "model_id" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Model name from OpenAI')] [Required]¶
Model name from OpenAI
- field provider: Annotated[OpenAIModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=OpenAIModelProvider, description='OpenAI provider')] [Optional]¶
OpenAI provider
- pydantic model ibm_watsonx_gov.entities.foundation_model.PortKeyGateway¶
Bases:
FoundationModel
The PortKey gateway details
Examples
- Create PortKeyGateway by passing the credentials during object creation. Note that the url is optional and will be set to the default value for PortKey. To change the default value, the url should be passed to the PortKeyCredentials object.

port_key_gateway = PortKeyGateway(
    model_id="gpt-4o-mini",
    provider=PortKeyModelProvider(
        credentials=PortKeyCredentials(
            api_key=api_key,
            url=openai_url,
            provider_api_key=provider_api_key,
            provider_name=provider_name
        )
    )
)
- Create PortKeyGateway by setting the credentials in environment variables:
  - PORTKEY_API_KEY is used to set the api key for PortKey.
  - PORTKEY_URL is used to set the url for PortKey.
  - PORTKEY_PROVIDER_API_KEY is used to set the provider api key for PortKey.
  - PORTKEY_PROVIDER_NAME is used to set the provider name for PortKey.

port_key_gateway = PortKeyGateway(
    model_id="gpt-4o-mini",
)
Show JSON schema
{ "title": "PortKeyGateway", "description": "The PortKey gateway details\n\nExamples:\n 1. Create PortKeyGateway by passing the credentials during object creation. Note that the url is optional and will be set to the default value for PortKey. To change the default value, the url should be passed to ``PortKeyCredentials`` object.\n .. code-block:: python\n\n port_key_gateway = PortKeyGateway(\n model_id=\"gpt-4o-mini\",\n provider=PortKeyModelProvider(\n credentials=PortKeyCredentials(\n api_key=api_key,\n url=openai_url,\n provider_api_key=provider_api_key,\n provider_name=provider_name\n )\n )\n )\n\n 2. Create PortKeyGateway by setting the credentials in environment variables:\n * ``PORTKEY_API_KEY`` is used to set the api key for PortKey.\n * ``PORTKEY_URL`` is used to set the url for PortKey.\n * ``PORTKEY_PROVIDER_API_KEY`` is used to set the provider api key for PortKey.\n * ``PORTKEY_PROVIDER_NAME`` is used to set the provider name for PortKey\n\n .. code-block:: python\n\n port_key_gateway = PortKeyGateway(\n model_id=\"gpt-4o-mini\",\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/PortKeyModelProvider", "description": "PortKey Provider" }, "model_id": { "description": "Model name from the Provider", "title": "Model Id", "type": "string" } }, "$defs": { "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" }, "PortKeyCredentials": { "description": "Defines the PortKeyCredentials class to specify the PortKey Gateway details.\n\nExamples:\n 1. Create PortKeyCredentials with default parameters.\n .. 
code-block:: python\n\n portkey_credentials = PortKeyCredentials(api_key=api_key,\n url=portkey_url,\n provider_api_key=provider_api_key,\n provider=provider_name)\n\n 2. Create PortKeyCredentials by reading from environment variables.\n .. code-block:: python\n\n os.environ[\"PORTKEY_API_KEY\"] = \"...\"\n os.environ[\"PORTKEY_URL\"] = \"...\"\n os.environ[\"PORTKEY_PROVIDER_API_KEY\"] = \"...\"\n os.environ[\"PORTKEY_PROVIDER_NAME\"] = \"...\"\n portkey_credentials = PortKeyCredentials.create_from_env()", "properties": { "url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "PortKey url. This attribute can be read from `PORTKEY_URL` environment variable.", "title": "Url" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "API key for PortKey. This attribute can be read from `PORTKEY_API_KEY` environment variable.", "title": "Api Key" }, "provider_api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "API key for the provider. This attribute can be read from `PORTKEY_PROVIDER_API_KEY` environment variable.", "title": "Provider Api Key" }, "provider": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "The provider name. This attribute can be read from `PORTKEY_PROVIDER_NAME` environment variable.", "title": "Provider" } }, "required": [ "url", "api_key", "provider_api_key", "provider" ], "title": "PortKeyCredentials", "type": "object" }, "PortKeyModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "portkey", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/PortKeyCredentials" }, { "type": "null" } ], "default": null, "description": "PortKey credentials." } }, "title": "PortKeyModelProvider", "type": "object" } }, "required": [ "model_id" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Model name from the Provider')] [Required]¶
Model name from the Provider
- field provider: Annotated[PortKeyModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=PortKeyModelProvider, description='PortKey Provider')] [Optional]¶
PortKey Provider
- pydantic model ibm_watsonx_gov.entities.foundation_model.RITSFoundationModel¶
Bases:
FoundationModel
Show JSON schema
{ "title": "RITSFoundationModel", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/RITSModelProvider", "description": "The provider of the model." } }, "$defs": { "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" }, "RITSCredentials": { "properties": { "hostname": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "https://inference-3scale-apicast-production.apps.rits.fmaas.res.ibm.com", "description": "The rits hostname", "title": "Hostname" }, "api_key": { "title": "Api Key", "type": "string" } }, "required": [ "api_key" ], "title": "RITSCredentials", "type": "object" }, "RITSModelProvider": { "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "rits", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/RITSCredentials" }, { "type": "null" } ], "default": null, "description": "RITS credentials." } }, "title": "RITSModelProvider", "type": "object" } } }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field provider: Annotated[RITSModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=RITSModelProvider, description='The provider of the model.')] [Optional]¶
The provider of the model.
- pydantic model ibm_watsonx_gov.entities.foundation_model.VertexAIFoundationModel¶
Bases:
FoundationModel
Represents a foundation model served via Vertex AI.
Examples
- Create Vertex AI foundation model by passing the credentials during object creation.
model = VertexAIFoundationModel(
    model_id="gemini-1.5-pro-002",
    provider=VertexAIModelProvider(
        credentials=VertexAICredentials(
            project_id="your-project",
            location="us-central1",  # Optional; defaults to us-central1
            credentials_path="/path/to/service_account.json"
        )
    )
)
- Create Vertex AI foundation model by setting the credentials in environment variables:
GOOGLE_APPLICATION_CREDENTIALS is used to set the credentials path for Vertex AI.
GOOGLE_CLOUD_PROJECT is used to set the project id for Vertex AI.
GOOGLE_CLOUD_LOCATION is used to set the location for Vertex AI. By default the us-central1 location is used when GOOGLE_CLOUD_LOCATION is not provided.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/service_account.json"
os.environ["GOOGLE_CLOUD_PROJECT"] = "my-gcp-project"
os.environ["GOOGLE_CLOUD_LOCATION"] = "us-central1"
model = VertexAIFoundationModel(
    model_id="gemini/gpt-4o-mini",
)
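The default-location behaviour described above can be sketched as a small standalone resolver (an illustration of the documented fallback only; the real logic lives in VertexAICredentials.create_from_env):

```python
import os


def resolve_vertex_env():
    """Resolve Vertex AI settings from the environment, mirroring the
    documented behaviour: location falls back to us-central1."""
    credentials_path = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]  # required
    project_id = os.environ["GOOGLE_CLOUD_PROJECT"]                  # required
    location = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")
    return credentials_path, project_id, location


os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/service_account.json"
os.environ["GOOGLE_CLOUD_PROJECT"] = "my-gcp-project"
os.environ.pop("GOOGLE_CLOUD_LOCATION", None)  # unset, so the default applies
_, _, location = resolve_vertex_env()
# location is now "us-central1"
```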
Show JSON schema
{ "title": "VertexAIFoundationModel", "description": "Represents a foundation model served via Vertex AI.\n\nExamples:\n 1. Create Vertex AI foundation model by passing the credentials during object creation.\n .. code-block:: python\n\n model = VertexAIFoundationModel(\n model_id=\"gemini-1.5-pro-002\",\n provider=VertexAIModelProvider(\n credentials=VertexAICredentials(\n project_id=\"your-project\",\n location=\"us-central1\", # This is optional field, by default us-central1 location is selected\n credentials_path=\"/path/to/service_account.json\"\n )\n )\n )\n 2. Create Vertex AI foundation model by setting the credentials in environment variables:\n * ``GOOGLE_APPLICATION_CREDENTIALS`` is used to set the Credentials path for Vertex AI.\n * ``GOOGLE_CLOUD_PROJECT`` is used to set the Project id for Vertex AI.\n * ``GOOGLE_CLOUD_LOCATION`` is uses to set the Location for Vertex AI. By default us-central1 location is used when GOOGLE_CLOUD_LOCATION is not provided .\n\n .. code-block:: python\n\n os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"path/to/service_account.json\"\n os.environ[\"GOOGLE_CLOUD_PROJECT\"] = \"my-gcp-project\"\n os.environ[\"GOOGLE_CLOUD_LOCATION\"] = \"us-central1\"\n\n model = VertexAIFoundationModel(\n model_id=\"gemini/gpt-4o-mini\",\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/VertexAIModelProvider", "description": "Vertex AI provider.", "title": "Provider" }, "model_id": { "description": "Model name for Vertex AI. 
Must be a valid Vertex AI model identifier or a fully-qualified publisher path", "examples": [ "gemini-1.5-pro-002" ], "title": "Model id", "type": "string" } }, "$defs": { "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" }, "VertexAICredentials": { "description": "Defines the VertexAICredentials class for accessing Vertex AI using service account credentials.\n\nExamples:\n 1. Create credentials manually:\n .. code-block:: python\n\n vertex_credentials = VertexAICredentials(\n credentials_path=\"path/to/service_account.json\",\n project_id=\"my-gcp-project\",\n location=\"us-central1\"\n )\n\n 2. Create credentials from environment:\n .. code-block:: python\n\n os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"path/to/service_account.json\"\n os.environ[\"GOOGLE_CLOUD_PROJECT\"] = \"my-gcp-project\"\n os.environ[\"GOOGLE_CLOUD_LOCATION\"] = \"us-central1\"\n\n vertex_ai_credentials = VertexAICredentials.create_from_env()", "properties": { "credentials_path": { "description": "Path to service-account JSON. This attribute can be read from GOOGLE_APPLICATION_CREDENTIALS environment variable when creating VertexAICredentials from environment.", "title": "Credentials Path", "type": "string" }, "project_id": { "description": "The Google Cloud project id. This attribute can be read from GOOGLE_CLOUD_PROJECT or GCLOUD_PROJECT environment variable when creating VertexAICredentials from environment.", "title": "Project ID", "type": "string" }, "location": { "default": "us-central1", "description": "Vertex AI region. This attribute can be read from GOOGLE_CLOUD_LOCATION environment variable when creating VertexAICredentials from environment. 
By default us-central1 location is used.", "examples": [ "us-central1", "europe-west4" ], "title": "Location", "type": "string" } }, "required": [ "credentials_path", "project_id" ], "title": "VertexAICredentials", "type": "object" }, "VertexAIModelProvider": { "description": "Represents a model provider using Vertex AI.\n\nExamples:\n 1. Create provider using credentials object:\n .. code-block:: python\n\n provider = VertexAIModelProvider(\n credentials=VertexAICredentials(\n credentials_path=\"path/to/key.json\",\n project_id=\"your-project\",\n location=\"us-central1\" \n )\n )\n\n 2. Create provider using environment variables:\n .. code-block:: python\n\n os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \"/path/to/service_account.json\"\n os.environ['GOOGLE_CLOUD_PROJECT'] = \"your-project\"\n os.environ['GOOGLE_CLOUD_LOCATION'] = \"us-central1\" # This is optional field, by default us-central1 location is selected\n\n provider = VertexAIModelProvider()", "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "vertex_ai", "description": "The type of model provider." }, "credentials": { "anyOf": [ { "$ref": "#/$defs/VertexAICredentials" }, { "type": "null" } ], "default": null, "description": "Vertex AI credentials." } }, "title": "VertexAIModelProvider", "type": "object" } }, "required": [ "model_id" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, title='Model id', description='Model name for Vertex AI. Must be a valid Vertex AI model identifier or a fully-qualified publisher path', examples=['gemini-1.5-pro-002'])] [Required]¶
Model name for Vertex AI. Must be a valid Vertex AI model identifier or a fully-qualified publisher path
- field provider: Annotated[VertexAIModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=VertexAIModelProvider, title='Provider', description='Vertex AI provider.')] [Optional]¶
Vertex AI provider.
- pydantic model ibm_watsonx_gov.entities.foundation_model.WxAIFoundationModel¶
Bases:
FoundationModel
The IBM watsonx.ai foundation model details.
To initialize the foundation model, you can either pass the credentials directly or set them as environment variables. You can follow these examples to create the provider.
Examples
- Create foundation model by specifying the credentials during object creation:
# Specify the credentials during object creation
wx_ai_foundation_model = WxAIFoundationModel(
    model_id="ibm/granite-3-3-8b-instruct",
    project_id=<PROJECT_ID>,
    provider=WxAIModelProvider(
        credentials=WxAICredentials(
            url=wx_url,  # Optional; by default the US-Dallas region is selected
            api_key=wx_apikey,
        )
    )
)
- Create foundation model by setting the credentials environment variables:
The api key can be set using one of the environment variables WXAI_API_KEY, WATSONX_APIKEY, or WXG_API_KEY. These are read in that order of precedence.
The url is optional and defaults to the US-Dallas region. It can be set using one of the environment variables WXAI_URL, WATSONX_URL, or WXG_URL. These are read in that order of precedence.
wx_ai_foundation_model = WxAIFoundationModel(
    model_id="ibm/granite-3-3-8b-instruct",
    project_id=<PROJECT_ID>,
)
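The precedence order described above (WXAI_* first, then WATSONX_*, then WXG_*) can be sketched as a small helper. This illustrates the documented lookup order only; it is not the library's implementation:

```python
import os


def first_set(*names, default=None):
    """Return the value of the first environment variable in `names`
    that is set, or `default` when none are."""
    for name in names:
        if name in os.environ:
            return os.environ[name]
    return default


# Clear the url variables so the default fallback is exercised.
for var in ("WXAI_URL", "WATSONX_URL", "WXG_URL"):
    os.environ.pop(var, None)
os.environ.pop("WXAI_API_KEY", None)
os.environ["WATSONX_APIKEY"] = "watsonx-key"
os.environ["WXG_API_KEY"] = "wxg-key"

# WXAI_API_KEY is unset, so WATSONX_APIKEY wins over WXG_API_KEY.
api_key = first_set("WXAI_API_KEY", "WATSONX_APIKEY", "WXG_API_KEY")
# The url falls back to the US-Dallas endpoint when no variable is set.
url = first_set("WXAI_URL", "WATSONX_URL", "WXG_URL",
                default="https://us-south.ml.cloud.ibm.com")
```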
- Create foundation model by specifying watsonx.governance software credentials during object creation:
wx_ai_foundation_model = WxAIFoundationModel(
    model_id="ibm/granite-3-3-8b-instruct",
    project_id=project_id,
    provider=WxAIModelProvider(
        credentials=WxAICredentials(
            url=wx_url,
            api_key=wx_apikey,
            username=wx_username,
            version=wx_version,
        )
    )
)
- Create foundation model by setting watsonx.governance software credentials environment variables:
The api key can be set using one of the environment variables WXAI_API_KEY, WATSONX_APIKEY, or WXG_API_KEY. These are read in that order of precedence.
The url can be set using one of the environment variables WXAI_URL, WATSONX_URL, or WXG_URL. These are read in that order of precedence.
The username can be set using one of the environment variables WXAI_USERNAME, WATSONX_USERNAME, or WXG_USERNAME. These are read in that order of precedence.
The version of the watsonx.governance software can be set using one of the environment variables WXAI_VERSION, WATSONX_VERSION, or WXG_VERSION. These are read in that order of precedence.
wx_ai_foundation_model = WxAIFoundationModel(
    model_id="ibm/granite-3-3-8b-instruct",
    project_id=project_id,
)
Show JSON schema
{ "title": "WxAIFoundationModel", "description": "The IBM watsonx.ai foundation model details\n\nTo initialize the foundation model, you can either pass in the credentials directly or set the environment.\nYou can follow these examples to create the provider.\n\nExamples:\n 1. Create foundation model by specifying the credentials during object creation:\n .. code-block:: python\n\n # Specify the credentials during object creation\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"ibm/granite-3-3-8b-instruct\",\n project_id=<PROJECT_ID>,\n provider=WxAIModelProvider(\n credentials=WxAICredentials(\n url=wx_url, # This is optional field, by default US-Dallas region is selected\n api_key=wx_apikey,\n )\n )\n )\n\n 2. Create foundation model by setting the credentials environment variables:\n * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. These will be read in the order of precedence.\n * The url is optional and will be set to US-Dallas region by default. It can be set using one of the environment variables ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n\n .. code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"ibm/granite-3-3-8b-instruct\",\n project_id=<PROJECT_ID>,\n )\n\n 3. Create foundation model by specifying watsonx.governance software credentials during object creation:\n .. code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"ibm/granite-3-3-8b-instruct\",\n project_id=project_id,\n provider=WxAIModelProvider(\n credentials=WxAICredentials(\n url=wx_url,\n api_key=wx_apikey,\n username=wx_username,\n version=wx_version,\n )\n )\n )\n\n 4. Create foundation model by setting watsonx.governance software credentials environment variables:\n * The api key can be set using one of the environment variables ``WXAI_API_KEY``, ``WATSONX_APIKEY``, or ``WXG_API_KEY``. 
These will be read in the order of precedence.\n * The url can be set using one of these environment variable ``WXAI_URL``, ``WATSONX_URL``, or ``WXG_URL``. These will be read in the order of precedence.\n * The username can be set using one of these environment variable ``WXAI_USERNAME``, ``WATSONX_USERNAME``, or ``WXG_USERNAME``. These will be read in the order of precedence.\n * The version of watsonx.governance software can be set using one of these environment variable ``WXAI_VERSION``, ``WATSONX_VERSION``, or ``WXG_VERSION``. These will be read in the order of precedence.\n\n .. code-block:: python\n\n wx_ai_foundation_model = WxAIFoundationModel(\n model_id=\"ibm/granite-3-3-8b-instruct\",\n project_id=project_id,\n )", "type": "object", "properties": { "model_name": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The name of the foundation model.", "title": "Model Name" }, "provider": { "$ref": "#/$defs/WxAIModelProvider", "description": "The provider of the model." }, "model_id": { "description": "The unique identifier for the watsonx.ai model.", "title": "Model Id", "type": "string" }, "project_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The project ID associated with the model.", "title": "Project Id" }, "space_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The space ID associated with the model.", "title": "Space Id" } }, "$defs": { "ModelProviderType": { "description": "Supported model provider types for Generative AI", "enum": [ "ibm_watsonx.ai", "azure_openai", "rits", "openai", "vertex_ai", "google_ai_studio", "aws_bedrock", "custom", "portkey" ], "title": "ModelProviderType", "type": "string" }, "WxAICredentials": { "description": "Defines the WxAICredentials class to specify the watsonx.ai server details.\n\nExamples:\n 1. Create WxAICredentials with default parameters. By default Dallas region is used.\n .. 
code-block:: python\n\n wxai_credentials = WxAICredentials(api_key=\"...\")\n\n 2. Create WxAICredentials by specifying region url.\n .. code-block:: python\n\n wxai_credentials = WxAICredentials(api_key=\"...\",\n url=\"https://au-syd.ml.cloud.ibm.com\")\n\n 3. Create WxAICredentials by reading from environment variables.\n .. code-block:: python\n\n os.environ[\"WATSONX_APIKEY\"] = \"...\"\n # [Optional] Specify watsonx region specific url. Default is https://us-south.ml.cloud.ibm.com .\n os.environ[\"WATSONX_URL\"] = \"https://eu-gb.ml.cloud.ibm.com\"\n wxai_credentials = WxAICredentials.create_from_env()\n\n 4. Create WxAICredentials for on-prem.\n .. code-block:: python\n\n wxai_credentials = WxAICredentials(url=\"https://<hostname>\",\n username=\"...\"\n api_key=\"...\",\n version=\"5.2\")\n\n 5. Create WxAICredentials by reading from environment variables for on-prem.\n .. code-block:: python\n\n os.environ[\"WATSONX_URL\"] = \"https://<hostname>\"\n os.environ[\"WATSONX_VERSION\"] = \"5.2\"\n os.environ[\"WATSONX_USERNAME\"] = \"...\"\n os.environ[\"WATSONX_APIKEY\"] = \"...\"\n # Only one of api_key or password is needed\n #os.environ[\"WATSONX_PASSWORD\"] = \"...\"\n wxai_credentials = WxAICredentials.create_from_env()", "properties": { "url": { "default": "https://us-south.ml.cloud.ibm.com", "description": "The url for watsonx ai service", "examples": [ "https://us-south.ml.cloud.ibm.com", "https://eu-de.ml.cloud.ibm.com", "https://eu-gb.ml.cloud.ibm.com", "https://jp-tok.ml.cloud.ibm.com", "https://au-syd.ml.cloud.ibm.com" ], "title": "watsonx.ai url", "type": "string" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The user api key. 
Required for using watsonx as a service and one of api_key or password is required for using watsonx on-prem software.", "strip_whitespace": true, "title": "Api Key" }, "version": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The watsonx on-prem software version. Required for using watsonx on-prem software.", "title": "Version" }, "username": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The user name. Required for using watsonx on-prem software.", "title": "User name" }, "password": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "description": "The user password. One of api_key or password is required for using watsonx on-prem software.", "title": "Password" }, "instance_id": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": "openshift", "description": "The watsonx.ai instance id. Default value is openshift.", "title": "Instance id" } }, "title": "WxAICredentials", "type": "object" }, "WxAIModelProvider": { "description": "This class represents a model provider configuration for IBM watsonx.ai. It includes the provider type and\ncredentials required to authenticate and interact with the watsonx.ai platform. If credentials are not explicitly\nprovided, it attempts to load them from environment variables.\n\nExamples:\n 1. Create provider using credentials object:\n .. code-block:: python\n\n credentials = WxAICredentials(\n url=\"https://us-south.ml.cloud.ibm.com\",\n api_key=\"your-api-key\"\n )\n provider = WxAIModelProvider(credentials=credentials)\n\n 2. Create provider using environment variables:\n .. code-block:: python\n\n import os\n\n os.environ['WATSONX_URL'] = \"https://us-south.ml.cloud.ibm.com\"\n os.environ['WATSONX_APIKEY'] = \"your_api_key\"\n\n provider = WxAIModelProvider()", "properties": { "type": { "$ref": "#/$defs/ModelProviderType", "default": "ibm_watsonx.ai", "description": "The type of model provider." 
}, "credentials": { "anyOf": [ { "$ref": "#/$defs/WxAICredentials" }, { "type": "null" } ], "default": null, "description": "The credentials used to authenticate with watsonx.ai. If not provided, they will be loaded from environment variables." } }, "title": "WxAIModelProvider", "type": "object" } }, "required": [ "model_id" ] }
- Config:
protected_namespaces: tuple = ()
- Fields:
- Validators:
get_params_from_env » all fields
- field model_id: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='The unique identifier for the watsonx.ai model.')] [Required]¶
The unique identifier for the watsonx.ai model.
- Validated by:
get_params_from_env
- field project_id: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='The project ID associated with the model.')] = None¶
The project ID associated with the model.
- Validated by:
get_params_from_env
- field provider: Annotated[WxAIModelProvider, FieldInfo(annotation=NoneType, required=False, default_factory=WxAIModelProvider, description='The provider of the model.')] [Optional]¶
The provider of the model.
- Validated by:
get_params_from_env
- field space_id: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='The space ID associated with the model.')] = None¶
The space ID associated with the model.
- Validated by:
get_params_from_env
- validator get_params_from_env » all fields¶
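The get_params_from_env validator runs over all fields before validation. As a rough illustration of what such a validator can do (hypothetical logic and hypothetical environment variable names, not the library's actual code), it can fill any field left unset from a corresponding environment variable:

```python
import os


def get_params_from_env_sketch(values: dict) -> dict:
    """Hypothetical sketch: fill unset fields from environment variables
    before validation, mirroring the documented validator's role."""
    # The variable names below are assumed for illustration only.
    env_fallbacks = {"project_id": "PROJECT_ID", "space_id": "SPACE_ID"}
    for field, var in env_fallbacks.items():
        if values.get(field) is None and var in os.environ:
            values[field] = os.environ[var]
    return values


os.environ["PROJECT_ID"] = "my-project"
os.environ.pop("SPACE_ID", None)
params = get_params_from_env_sketch({
    "model_id": "ibm/granite-3-3-8b-instruct",
    "project_id": None,
})
```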