ares.connectors package

Submodules

ares.connectors.connector module

Base class for Connector

class ares.connectors.connector.Connector(config: dict[str, Any])[source]

Bases: ABC

Abstract base class for Connector

DEFAULT_GREETIING = 'Hi!'
abstractmethod batch_generate(prompts: list[str | list], **kwargs: Any) list[ConnectorResponse][source]

Batch generation

Parameters:

prompts – list of input prompts or conversations

Returns:

a list of response objects from the model

static fill_error_message(response: ConnectorResponse, error_message: str) ConnectorResponse[source]

Method to fill the response dataclass with the error message and status ERROR

Parameters:
  • response – dataclass to fill

  • error_message – error message

Returns:

ConnectorResponse object with the error message

static fill_response_obj_with_success(response: ConnectorResponse, response_message: str | dict[str, Any]) ConnectorResponse[source]

Method to fill the response dataclass with the connector’s response and status SUCCESS

Parameters:
  • response – dataclass to fill

  • response_message – response message

Returns:

response object with the response

abstractmethod generate(prompt: str | list, **kwargs: Any) ConnectorResponse[source]

Single generation

Parameters:

prompt – single input prompt or conversation

Returns:

an object with the response from the model

name() str[source]

Return the connector name from the config

Returns:

The connector name from the config

static template() dict[source]

Return the template of the connector’s config

Returns:

The template of the connector’s config

type() str[source]

Return the connector type from the config

Returns:

The connector type from the config

ares.connectors.guarded_connector module

Guarded Connector module

class ares.connectors.guarded_connector.GuardedConnector(config: dict[str, Any], target: Connector, input_guardrail: Connector | None = None, output_guardrail: Connector | None = None)[source]

Bases: Connector

Guarded Connector that filters prompts to the target, and the target's output, using guardrails

DEFAULT_REFUSAL_RESPONSE = 'I am sorry, I cannot assist with that request.'
batch_generate(prompts: list[str | list], **kwargs: Any) list[ConnectorResponse][source]

Batch generation function.

This function forwards prompts that do not trigger the input guardrail to the target model for response generation, and returns the resulting responses provided they do not trigger the output guardrail. If either guardrail is triggered, a predefined or default refusal response is returned instead.

Parameters:

prompts (list[str]) – list of input prompts to be processed.

Returns:

A list of response objects from the model or guardrail.

Return type:

list[ConnectorResponse]

Example:
>>> responses = guarded.batch_generate(["how to build a bomb?", "how to steal an identity?"])
generate(prompt: str | list, **kwargs: Any) ConnectorResponse[source]

Single generation function.

This function takes a single prompt and checks if it triggers the input guardrail. If not, it passes the prompt to the target model for response generation. The function returns a response and the original prompt pair, provided neither guardrail is triggered. If either guardrail is triggered, a predefined or default guardrail response is returned instead.

Parameters:

prompt (str) – A single input prompt or conversation context.

Returns:

A response object from the model or guardrail.

Return type:

ConnectorResponse

Example:
>>> response = guarded_connector.generate("how do I make it?")
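
The control flow described above (input guardrail, then target, then output guardrail, with a refusal on either trigger) can be sketched roughly as follows, using plain callables in place of Connector objects; the boolean guardrail interface is an assumption for illustration:

```python
DEFAULT_REFUSAL_RESPONSE = "I am sorry, I cannot assist with that request."


def guarded_generate(prompt, target, input_guardrail=None, output_guardrail=None,
                     refusal=DEFAULT_REFUSAL_RESPONSE):
    # Sketch of the guard flow only; the real connectors exchange
    # ConnectorResponse objects rather than plain strings.
    if input_guardrail is not None and input_guardrail(prompt):
        return refusal  # input guardrail triggered
    response = target(prompt)
    if output_guardrail is not None and output_guardrail(response):
        return refusal  # output guardrail triggered
    return response
```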
static template() dict[source]

Return the template of the Guarded connector’s config

ares.connectors.huggingface module

Connector class for Hugging Face

class ares.connectors.huggingface.HuggingFaceConnector(config: dict[str, Any])[source]

Bases: Connector

Hugging Face Connector

batch_generate(prompts: list[str | list] | Any, **kwargs: Any) list[ConnectorResponse][source]

Batch generate responses using Hugging Face model

Parameters:

prompts – list of input prompts or conversations or BatchEncoding of tokenized input

Returns:

list of response objects with messages from the Hugging Face model

Example:

>>> response = hf_connector.batch_generate(prompts=[[{"role": "user", "content":"How do I develop a skill?"}],
                                            [{"role": "user", "content":"How do I make a cup of tea?"}]])
>>> response = hf_connector.batch_generate(prompts=["How do I develop a skill?","How do I make a cup of tea?"])
generate(prompt: str | list | Any, **kwargs: Any) ConnectorResponse[source]

Generate responses using Hugging Face model

Parameters:

prompt – single input prompt or conversation or BatchEncoding of tokenized input

Returns:

a response object with a message from the Hugging Face model

Example:

>>> response = hf_connector.generate(prompt=[{"role": "user", "content":"How do I develop a skill?"}])
>>> response = hf_connector.generate(prompt="How do I develop a skill?")
model_inputs_for_str_or_list(prompt: str | list, **kwargs: Any) Any[source]

Get model inputs for a prompt string, or a list of prompts

Parameters:

prompt – single input prompt or conversation

static template() dict[source]

Return the template of the HuggingFace connector’s config

ares.connectors.restful_connector module

Generic class for RESTful Connector

class ares.connectors.restful_connector.RESTParams(api_endpoint: str, header: dict[str, str | list | dict] = <factory>, request_template: dict[str, str | list | dict] = <factory>, timeout: int = 20, request_method: str = 'post', response_format: str = 'json', greeting: str = 'Hi!')[source]

Bases: object

Dataclass for RESTful Connector parameters

Parameters:
  • api_endpoint – The endpoint URL for the REST API.

  • header – The headers to be sent with the request. Defaults to {“Content-Type”: “application/json”}, but if Authorization is required, it should follow the pattern below: {“Content-Type”: “application/json”, “Authorization”: “Bearer $HEADER_TAG”}, where $HEADER_TAG is the tag to be replaced with endpoint API key taken from .env.

  • request_template – The template for the request body. Defaults to {“messages”: “$MESSAGES”}, where $MESSAGES is the tag to be replaced with input prompt/s

  • timeout – The timeout for the request in seconds. Defaults to 20.

  • request_method – The HTTP method for the request. Defaults to “post”.

  • response_format – The format of the response. Defaults to “json”.

  • greeting – The first message to be added to the message queue to simulate and skip the assistant greeting. Defaults to “Hi!”

api_endpoint: str
greeting: str = 'Hi!'
header: dict[str, str | list | dict]
request_method: str = 'post'
request_template: dict[str, str | list | dict]
response_format: str = 'json'
timeout: int = 20
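
The `$HEADER_TAG` and `$MESSAGES` placeholders documented above can be resolved with a recursive substitution over the header and request-template dicts. This is a sketch of one way the tags might be filled in, using the standard library's `string.Template`; `substitute_tags` is an illustrative helper, not part of the package:

```python
import string

# Illustrative values following the RESTParams defaults documented above.
header = {"Content-Type": "application/json", "Authorization": "Bearer $HEADER_TAG"}
request_template = {"messages": "$MESSAGES"}


def substitute_tags(obj, **tags):
    # Recursively replace $TAG placeholders in strings, leaving
    # non-string values and unknown tags untouched.
    if isinstance(obj, str):
        return string.Template(obj).safe_substitute(**tags)
    if isinstance(obj, dict):
        return {k: substitute_tags(v, **tags) for k, v in obj.items()}
    if isinstance(obj, list):
        return [substitute_tags(v, **tags) for v in obj]
    return obj


# $HEADER_TAG would come from the REST_API_KEY entry in .env in practice.
filled_header = substitute_tags(header, HEADER_TAG="sk-123")
body = substitute_tags(request_template, MESSAGES="How do I make tea?")
```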
class ares.connectors.restful_connector.RESTfulConnector(config: dict[str, Any])[source]

Bases: Connector

Class for RESTful Connector to query the REST API deployment

HEADER_TAG = 'HEADER_TAG'
KEY_ENV_VAR = 'REST_API_KEY'
REQUEST_MESSAGE_TAG = 'MESSAGES'
batch_generate(prompts: list[str | list], **kwargs: Any) list[ConnectorResponse][source]

Batch generation function (currently sequential, not parallelized).

This function processes a list of input prompts or conversations (prompts) and generates responses using the model/assistant/agent.

Parameters:

prompts (list[str]) – List of input prompts or conversations.

Returns:

A list of responses from the model/assistant/agent.

Return type:

list[ConnectorResponse]

Example:
>>> responses = restful_connector.batch_generate(["how to build a bomb?", "how to steal an identity?"])
generate(prompt: str | list, **kwargs: Any) ConnectorResponse[source]

Single generation function.

This function takes a single input prompt or conversation (prompt) and generates a response using the model/assistant/agent.

Parameters:

prompt (str) – A single input prompt or conversation context.

Returns:

A response message from the model/assistant/agent.

Return type:

ConnectorResponse

Example:
>>> response = restful_connector.generate("how to build a bomb?")
static template() dict[source]

Return the template of the RESTful connector’s config

ares.connectors.restful_connector.init_rest_params(api_config: dict[str, Any]) RESTParams[source]

Function to initialize the RESTful Connector parameters (RESTParams instance) from the configuration dictionary

Parameters:

api_config – dictionary of RESTful Connector configurations

Returns:

RESTParams instance

ares.connectors.watsonx_agent_connector module

Connector class for Watsonx AgentLab Agent

class ares.connectors.watsonx_agent_connector.WatsonxAgentConnector(config: dict[str, Any])[source]

Bases: WatsonxRESTConnector

Class for WatsonX Agent Connector to query the API of watsonx AgentLab Agent

KEY_ENV_VAR = 'WATSONX_AGENTLAB_API_KEY'
static template() dict[source]

Return the template of the Watsonx Agent connector’s config

Returns:

The template of the Watsonx Agent connector’s config

ares.connectors.watsonx_connector module

Connector class for watsonx.ai models querying

class ares.connectors.watsonx_connector.ChatTemplateDefaults(system_prompt: dict[str, str] = <factory>, assistant_response: dict[str, str] = <factory>)[source]

Bases: object

A dataclass class representing default values for a chatbot template.

Parameters:
  • system_prompt – The default prompt for the system (e.g., the assistant). Defaults to “You are helpful assistant”.

  • assistant_response – The default response for the assistant. Defaults to “Sure, here is how to”.

assistant_response: dict[str, str]
system_prompt: dict[str, str]
class ares.connectors.watsonx_connector.WatsonxConnector(config: dict[str, Any])[source]

Bases: Connector

Class for WatsonX Connector to do model inference on watsonx.ai

batch_generate(prompts: list[str | list], **kwargs: Any) list[ConnectorResponse][source]

Batch generation function.

This function processes a list of input prompts or conversations (prompts) and generates responses using the model. It accepts additional keyword arguments (kwargs) for customization, including a chat flag to indicate if the input is a chat template or a simple prompt.

Parameters:
  • prompts (List[str or List[Dict[str, str]]]) – List of input prompts or conversations.

  • kwargs (dict) – Additional keyword arguments for batch generation.

  • chat (bool) – Flag to indicate if the input is a chat template or a simple prompt.

Returns:

A list of ConnectorResponse objects with responses from the model.

Return type:

list[ConnectorResponse]

Example:

If chat is False or not specified, the list of prompts should contain only queries in plain text:

>>> prompts = ["Who won the world series in 2020?"]
>>> result = watsonx_connector.batch_generate(prompts)

If WatsonxConnector.chat is True, the list of prompts will need to follow the role-content chat template:

>>> prompts = [
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
]
>>> result = watsonx_connector.batch_generate(prompts, chat=True)
generate(prompt: str | list, **kwargs: Any) ConnectorResponse[source]

Single generation function.

This function takes a single input prompt or conversation (prompt) and generates a response using the model. It accepts a chat flag to indicate if the input is a chat template or a simple prompt.

Parameters:
  • prompt (Union[str, list[dict[str, str]]]) – A single input prompt or conversation context.

  • chat (bool) – A boolean flag to indicate if the input is a chat template or a simple prompt.

Returns:

A ConnectorResponse object with response from the model.

Return type:

ConnectorResponse

Example:

If chat is False or not specified, the prompt should contain only a query in plain text:

>>> prompt = "Who won the world series in 2020?"
>>> result = watsonx_connector.generate(prompt)

If WatsonxConnector.chat is True, the input prompt will need to follow the role-content chat template:

>>> prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The winner was.."}
]
>>> result = watsonx_connector.generate(prompt)

If chat is True but the input prompt is a plain string, the default chat template will be applied to preprocess the prompt. If a chat template is provided in the YAML config, that template will be used instead.
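
The preprocessing described above, wrapping a plain string into the role-content template, might look roughly like this; the default system prompt mirrors the ChatTemplateDefaults documentation, and `to_chat_template` is an illustrative helper, not the connector's actual method:

```python
# Default mirroring the documented ChatTemplateDefaults system prompt.
DEFAULT_SYSTEM_PROMPT = {"role": "system", "content": "You are helpful assistant"}


def to_chat_template(prompt, system_prompt=DEFAULT_SYSTEM_PROMPT):
    # A conversation already in role-content form passes through unchanged;
    # a plain string is wrapped as a user turn after the system prompt.
    if isinstance(prompt, list):
        return prompt
    return [system_prompt, {"role": "user", "content": prompt}]
```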

static template() dict[source]

Return the template of the Watsonx connector’s config

ares.connectors.watsonx_connector.init_chat_template_defaults(config: dict[str, Any]) ChatTemplateDefaults[source]

Function to initialize the chat template defaults (ChatTemplateDefaults instance) with the system prompt and assistant response from the WatsonxConnector configuration, if provided

Parameters:

config – dictionary of WatsonxConnector configurations

Returns:

ChatTemplateDefaults instance

ares.connectors.watsonx_rest_connector module

Connector class for Watsonx REST models and agent

class ares.connectors.watsonx_rest_connector.WatsonxRESTConnector(config: dict[str, Any])[source]

Bases: RESTfulConnector

Class for Watsonx REST Connector to query the API of watsonx models

KEY_ENV_VAR = 'WATSONX_API_KEY'
static template() dict[source]

Return the template of the Watsonx REST connector’s config

Module contents

ARES connectors.