Connector Configuration

Connectors are multi-purpose modules: they can serve as the primary target for evaluation and can also be used within other modules via the connector keyword (e.g., using an LLM for evaluation or for generating attack goals).

The target node defines the language model (SLM or LLM) that ARES will red-team or evaluate.

ARES uses a user-defined connectors.yaml file to configure model connectors. See example_configs/connectors.yaml for examples.

Viewing Available Connectors

Use the following commands to explore available connectors and their templates:

ares show connectors                # List all available connectors
ares show connectors -n huggingface  # Show template for HuggingFace connector

Example: HuggingFace Connector

Define the connector in connectors.yaml:

connectors:
  huggingface:
    type: ares.connectors.huggingface.HuggingFaceConnector
    name: huggingface
    model_config:
      pretrained_model_name_or_path: Qwen/Qwen2-0.5B-Instruct
      torch_dtype: bfloat16
    tokenizer_config:
      pretrained_model_name_or_path: Qwen/Qwen2-0.5B-Instruct
      padding_side: left
    generate_kwargs:
      chat_template:
        return_tensors: pt
        thinking: true
        return_dict: true
        add_generation_prompt: true
      generate_params:
        max_new_tokens: 50
    system_prompt:
      role: system
      content: Always speak like a pirate
    seed: 42
    device: auto

Then reference it in your ARES config (e.g., minimal.yaml):

target:
  huggingface:
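
The key under target (here huggingface) should match the name of the corresponding connector entry defined in connectors.yaml.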

Using Connectors in Other Modules

Other modules can also use connectors via the connector keyword. For example, the HuggingFaceEval module uses a model-as-a-judge approach and takes the judge model as a connector:

evaluation:
  type: ares.evals.huggingface_eval.HuggingFaceEval
  name: harmbench_eval
  output_path: results/evaluation.json
  connector:
    harmbench-eval-llama:
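
The connector referenced here (harmbench-eval-llama) must itself be defined in connectors.yaml. As a minimal sketch, assuming it uses the HuggingFace connector type and the cais/HarmBench-Llama-2-13b-cls classifier as the judge model (both the class and the model choice are assumptions, not prescribed by this document):

connectors:
  harmbench-eval-llama:
    type: ares.connectors.huggingface.HuggingFaceConnector   # assumed: same connector class as above
    name: harmbench-eval-llama
    model_config:
      pretrained_model_name_or_path: cais/HarmBench-Llama-2-13b-cls   # assumed judge model
      torch_dtype: bfloat16
    tokenizer_config:
      pretrained_model_name_or_path: cais/HarmBench-Llama-2-13b-cls
      padding_side: left
    generate_kwargs:
      generate_params:
        max_new_tokens: 10   # the judge only needs a short verdict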

System Prompt Configuration

ARES supports specifying a system prompt directly in the connector configuration. This allows you to define the initial context or behavior instructions for the model before any user prompts are sent.

Example:

target:
  huggingface:
    model_config:
      pretrained_model_name_or_path: ibm-granite/granite-3.3-8b-instruct
    tokenizer_config:
      pretrained_model_name_or_path: ibm-granite/granite-3.3-8b-instruct
    system_prompt:
      role: system
      content: |
        You are a helpful assistant. Always respond with concise and safe answers.

The system_prompt field is optional. If omitted, the connector uses its default system prompt (if applicable), which is automatically prepended to user input during evaluation.

Supported Connectors

ARES currently supports:

  • Hugging Face: for local model evaluation

  • LiteLLM: for common LLM providers (available as a plugin)

  • vLLM: for models served with vLLM (available as a plugin)

  • WatsonX: for remote model inference

  • GraniteIO: for interaction with GraniteIO models (available as a plugin)

  • WatsonX Orchestrate: for interaction with WatsonX Orchestrate Agents through Chat API (available as a plugin)

  • RESTful connectors: e.g., WatsonxAgentConnector for querying deployed agents via REST APIs

  • ICARUS connector: UI connector to ICARUS, a Streamlit-based agentic application (available as a plugin)
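
To inspect the configuration template for any of these connectors, use the show command from above with the connector's name as listed by ares show connectors (the name below is a placeholder, not a real connector name):

ares show connectors                        # list the exact connector names
ares show connectors -n <connector_name>    # show the template for a specific connector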

The examples above show how to configure targets in your YAML files; the notes below list the credentials that some connectors require.

If you are using connectors with gated access, make sure to add the required API keys and other environment variables to your .env file.

Note

To run models that are gated on the Hugging Face Hub, you must be logged in via the huggingface-cli and have READ permission for the gated repositories.
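
For example, you can log in once from the command line with the Hugging Face CLI, which prompts for an access token:

huggingface-cli login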

Note

To run models that are gated on the WatsonX platform, you must set the WATSONX_URL (or WATSONX_API_BASE), WATSONX_API_KEY, and WATSONX_PROJECT_ID variables in a .env file.
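
For instance, the relevant entries in .env might look like the following (the values are placeholders, not real credentials or endpoints):

WATSONX_URL=<your-watsonx-endpoint-url>
WATSONX_API_KEY=<your-api-key>
WATSONX_PROJECT_ID=<your-project-id>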

Note

To run agents that are gated on the WatsonX AgentLab platform, you must set the WATSONX_AGENTLAB_API_KEY variable in a .env file. The key can be found in your WatsonX profile under the User API Key tab. More details are available at: https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=wx

Explore more examples in the example_configs/ directory.

Connector classes abstract calls to LMs across different frameworks, making ARES extensible and adaptable.