
Metrics

aisteer360.evaluation.metrics.base

Metric

Bases: ABC

Base-class for evaluation metrics.

Provides a standardized interface for computing evaluation scores on model-generated responses. Subclasses should define their specific scoring logic in compute() and can accept additional configuration through constructor arguments stored in extras.

Source code in aisteer360/evaluation/metrics/base.py
class Metric(ABC):
    """
    Base-class for evaluation metrics.

    Provides a standardized interface for computing evaluation scores on model-generated responses. Subclasses should
    define their specific scoring logic in `compute()` and can accept additional configuration through constructor
    arguments stored in `extras`.

    Args:
        **extras
            Required extras for the metric (e.g., LLM, tokenizer, etc.)
    """
    def __init__(self, **extras: Any) -> None:
        self.name: str = self.__class__.__name__
        self.extras: dict[str, Any] = extras

    @abstractmethod
    def compute(
        self,
        responses: list[Any],
        prompts: list[str] | None = None,
        **kwargs: Any,
    ) -> dict[str, Any]:
        """Base compute method."""
        raise NotImplementedError

    def __call__(self, *args, **kwargs):
        return self.compute(*args, **kwargs)

extras = extras instance-attribute

name = self.__class__.__name__ instance-attribute

compute(responses, prompts=None, **kwargs) abstractmethod

Base compute method.

Source code in aisteer360/evaluation/metrics/base.py
@abstractmethod
def compute(
    self,
    responses: list[Any],
    prompts: list[str] | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """Base compute method."""
    raise NotImplementedError
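
As an illustration, a minimal concrete metric only needs to override compute(). The sketch below is not part of the library; the ExactMatch name and its references keyword argument are hypothetical:

from typing import Any

from aisteer360.evaluation.metrics.base import Metric


class ExactMatch(Metric):
    """Hypothetical metric: fraction of responses that exactly match a reference string."""

    def compute(
        self,
        responses: list[Any],
        prompts: list[str] | None = None,
        **kwargs: Any,
    ) -> dict[str, Any]:
        references = kwargs.get("references", [])  # hypothetical extra input, not defined by the base class
        matches = [
            float(str(response).strip() == str(reference).strip())
            for response, reference in zip(responses, references)
        ]
        return {"mean_score": sum(matches) / len(matches) if matches else 0.0}


# Instances are callable; __call__ forwards to compute():
# ExactMatch()(responses=["a"], references=["a"])  # -> {"mean_score": 1.0}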

aisteer360.evaluation.metrics.base_judge

LLMJudgeMetric

Bases: Metric

Base class for LLM-as-a-judge evaluation metrics.

Leverages a language model to evaluate the quality of generated text responses according to customized (natural language) criteria. The judge model evaluates each response (optionally with respect to an associated prompt and context) and returns numerical scores within a specified range. When multiple samples are generated per prompt (via num_return_sequences), scores are averaged to improve reliability.

Subclasses should define their specific evaluation criteria by providing a prompt_template that instructs the judge model how to score responses. The template should use placeholders {response}, {lower_bound}, and {upper_bound} (and optionally {prompt} and {context}). Subclasses typically override __init__() to set their specific prompt template and scoring scale (e.g., see metrics.generic.relevance).
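
For instance, a conciseness judge could be sketched as follows. This is illustrative only: the ConcisenessJudge name and template wording are not part of the library, while the constructor parameters are those documented below.

from aisteer360.evaluation.metrics.base_judge import LLMJudgeMetric

CONCISENESS_TEMPLATE = """\
Rate how concise the following response is, from {lower_bound} (very verbose) to {upper_bound} (no unnecessary content).

Response:
{response}
"""


class ConcisenessJudge(LLMJudgeMetric):
    """Hypothetical judge metric that scores responses for conciseness."""

    def __init__(self, model_or_id, **kwargs):
        super().__init__(
            model_or_id=model_or_id,
            prompt_template=CONCISENESS_TEMPLATE,
            scale=(1, 5),
            **kwargs,
        )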

Parameters:

- model_or_id (str | PreTrainedModel): HuggingFace model ID or loaded model instance to use as the judge. If a string, the model is loaded automatically. Required.
- prompt_template (str): Template string for evaluation prompts. Should contain placeholders for {response}, {lower_bound}, and {upper_bound}, and optionally {prompt} and {context}. The formatted prompt is passed to the judge model. Required.
- tokenizer (Any | None): Tokenizer for the judge model. If None, it is loaded from the model ID. Required when passing a PreTrainedModel instance. Default: None.
- device (str | None): Device for model inference ('cuda', 'mps', 'cpu'). Defaults to GPU if available, otherwise CPU. Default: None.
- scale (tuple[float, float]): Score range as a (min, max) tuple. Scores outside this range are clamped. Default: (1, 5).
- batch_size (int): Number of prompts to process simultaneously. Default: 8.
- max_retries (int): Maximum retry attempts when score parsing fails. Default: 5.
- gen_kwargs (dict[str, Any] | None): Generation parameters passed to the model. Default: None.
Source code in aisteer360/evaluation/metrics/base_judge.py
class LLMJudgeMetric(Metric):
    """Base class for LLM-as-a-judge evaluation metrics.

    Leverages a language model to evaluate the quality of generated text responses according to customized (natural
    language) criteria. The judge model evaluates each response (optionally with respect to an associated prompt and
    context) and returns numerical scores within a specified range. When multiple samples are generated per prompt (via
    num_return_sequences), scores are averaged to improve reliability.

    Subclasses should define their specific evaluation criteria by providing a `prompt_template` that instructs the
    judge model how to score responses. The template should use placeholders {response}, {lower_bound}, and
    {upper_bound} (and optionally {prompt} and {context}). Subclasses typically override `__init__()` to set their
    specific prompt template and scoring scale (e.g., see `metrics.generic.relevance`).

    Args:
        model_or_id (str | PreTrainedModel): HuggingFace model ID or loaded model instance to use as the judge.
            If string, the model will be loaded automatically.
        prompt_template (str): Template string for evaluation prompts. Should contain placeholders for {response},
            {lower_bound}, {upper_bound}, and optionally {prompt}, {context}.
            The formatted prompt will be passed to the judge model.
        tokenizer (Any | None): Tokenizer for the judge model. If None, will be loaded from the model ID.
            Required if passing a PreTrainedModel instance.
        device (str | None): Device for model inference ('cuda', 'mps', 'cpu').
            Defaults to GPU if available, otherwise CPU.
        scale (tuple[float, float]): Score range as (min, max) tuple. Scores outside this range will be clamped.
            Defaults to (1, 5).
        batch_size (int): Number of prompts to process simultaneously. Defaults to 8.
        max_retries (int): Maximum retry attempts when score parsing fails. Defaults to 5.
        gen_kwargs (dict[str, Any] | None): Generation parameters passed to the model.
    """

    def __init__(
        self,
        model_or_id: str | PreTrainedModel,
        prompt_template: str,
        tokenizer: Any | None = None,
        device: str | None = None,
        scale: tuple[float, float] = (1, 5),
        batch_size: int = 8,
        max_retries: int = 5,
        gen_kwargs: dict[str, Any] | None = None,
    ):
        super().__init__()

        if isinstance(model_or_id, str):
            self.model = AutoModelForCausalLM.from_pretrained(model_or_id)
            self.tokenizer = tokenizer or AutoTokenizer.from_pretrained(model_or_id)
        else:  # model
            self.model = model_or_id
            self.tokenizer = tokenizer or AutoTokenizer.from_pretrained(model_or_id.config._name_or_path)

        self.use_chat = hasattr(self.tokenizer, "apply_chat_template") and self.tokenizer.chat_template is not None
        self.device = device or (
            "cuda" if torch.cuda.is_available()
            else "mps" if torch.backends.mps.is_available()
            else "cpu"
        )
        self.model.to(self.device).eval()

        gen_kwargs = dict(gen_kwargs or {})
        gen_kwargs.setdefault("temperature", 0.0)
        gen_kwargs.setdefault("max_new_tokens", 30)
        gen_kwargs.setdefault("pad_token_id", self.tokenizer.eos_token_id)

        self.num_return_sequences: int = int(gen_kwargs.pop("num_return_sequences", 1))
        self.model.generation_config = GenerationConfig(**gen_kwargs)

        if self.tokenizer.pad_token_id is None:
            self.tokenizer.pad_token_id = self.tokenizer.eos_token_id

        self.pipeline = TextGenerationPipeline(
            model=self.model,
            tokenizer=self.tokenizer,
        )

        self.scale = scale
        self.output_parser, self.parse_fn = build_structured_parser(scale)
        self.base_prompt_template = prompt_template.strip()
        self.format_instructions = self.output_parser.get_format_instructions()
        self.batch_size = batch_size
        self.max_retries = max_retries

    def _wrap(self, prompt: str) -> str:
        """Wrap prompt with appropriate formatting for the model.

        Applies the chat template (if the model supports it) with the prompt as a user message.
        Otherwise, returns the prompt unchanged.

        Args:
            prompt (str): The user prompt.

        Returns:
            str: The formatted prompt.
        """
        if self.use_chat:
            messages = [{"role": "user", "content": prompt}]
            return self.tokenizer.apply_chat_template(
                messages,
                tokenize=False,
                add_generation_prompt=True,
            )
        return prompt

    @staticmethod
    def _batch_chunks(seq: Sequence[Any], chunk_size: int) -> Iterable[Sequence[Any]]:
        """Split a sequence into chunks of specified size.

        Args:
            seq (Sequence[Any]): The sequence to split into chunks.
            chunk_size (int): Maximum size of each chunk.

        Yields:
            Sequence[Any]: Chunks of the input sequence, each with at most chunk_size elements.
        """
        for i in range(0, len(seq), chunk_size):
            yield seq[i: i + chunk_size]

    def _score_with_retries(self, prompt: str) -> float:
        """Generate replies until parsing succeeds or maximum retries reached.

        Attempts to generate a response and parse it (using `parse_fn`) as a score.
        If parsing fails, retries up to `max_retries` times.
        If all attempts fail, raises a warning and returns `float('nan')`.

        Args:
            prompt (str): The formatted prompt to send to the model.

        Returns:
            float: The parsed score from the model's response, or `float('nan')` if parsing fails.
        """
        for attempt in range(self.max_retries + 1):
            reply_text = self.pipeline(
                prompt,
                clean_up_tokenization_spaces=True,
                return_full_text=False
            )[0]["generated_text"]

            try:
                return self.parse_fn(reply_text, self.scale)
            except Exception:
                if attempt == self.max_retries:
                    warnings.warn(
                        f"Failed to parse score after {self.max_retries + 1} attempts. "
                        "Returning float('nan') instead."
                    )
                    return float('nan')

    @torch.inference_mode()
    def compute(
        self,
        responses: list[str],
        prompts: list[str] | None = None,
        **kwargs: Any,
    ) -> dict[str, float | list[float]]:
        """Compute LLM judge scores for a list of responses.

        Evaluates each response using the configured judge model and prompt template. Scores are averaged when multiple
        samples are generated per response (via `num_return_sequences`).

        Args:
            responses (list[str]): List of text responses to evaluate.
            prompts (list[str] | None): Optional list of prompts corresponding to each response.
                If provided, must be the same length as responses. These prompts can be
                referenced in the prompt_template using the {prompt} placeholder.
            **kwargs: Additional keyword arguments (currently unused).

        Returns:
            Score statistics containing:

                - "mean_score": Overall average score across all responses
                - "scores": List of mean scores for each response (averaged across samples)
                - "raw_scores": List of lists containing all individual scores for each response

        Raises:
            AssertionError: If prompts is provided but has different length than responses.
        """

        if prompts is not None and len(prompts) != len(responses):
            raise AssertionError("`responses` and `prompts` must be the same length")

        # build prompts
        prompts_list: list[str] = []
        for i in range(len(responses)):
            fields: dict[str, str | float] = {
                "response": responses[i],
                "lower_bound": self.scale[0],
                "upper_bound": self.scale[1],
            }
            if prompts is not None:
                fields["prompt"] = prompts[i]

            prompt_core = self.base_prompt_template.format(**fields)
            prompt_formatted = self._wrap(prompt_core + "\n\n" + self.format_instructions)
            prompts_list.append(prompt_formatted)

        # generate
        prompt_scores: list[list[float]] = []
        for batch in self._batch_chunks(prompts_list, self.batch_size):
            outputs = self.pipeline(
                batch,
                num_return_sequences=self.num_return_sequences,
                return_full_text=False,
                clean_up_tokenization_spaces=True,
            )

            for prompt, generations in zip(batch, outputs):
                generations = generations if isinstance(generations, list) else [generations]
                assert len(generations) == self.num_return_sequences

                scores = []
                for generation in generations:
                    reply_text = generation["generated_text"]
                    try:
                        score = self.parse_fn(reply_text, self.scale)
                    except Exception:
                        score = self._score_with_retries(prompt)
                    scores.append(score)

                prompt_scores.append(scores)

        mean_per_prompt = [sum(prompt_score) / len(prompt_score) for prompt_score in prompt_scores]
        corpus_mean = sum(mean_per_prompt) / len(mean_per_prompt)

        return {
            "mean_score": corpus_mean,  # overall average
            "scores": mean_per_prompt,  # one number per original prompt
            "raw_scores": prompt_scores  # n_samples scores per prompt
        }

base_prompt_template = prompt_template.strip() instance-attribute

batch_size = batch_size instance-attribute

device = device or ('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') instance-attribute

extras = extras instance-attribute

format_instructions = self.output_parser.get_format_instructions() instance-attribute

max_retries = max_retries instance-attribute

model = AutoModelForCausalLM.from_pretrained(model_or_id) instance-attribute

name = self.__class__.__name__ instance-attribute

num_return_sequences = int(gen_kwargs.pop('num_return_sequences', 1)) instance-attribute

pipeline = TextGenerationPipeline(model=(self.model), tokenizer=(self.tokenizer)) instance-attribute

scale = scale instance-attribute

tokenizer = tokenizer or AutoTokenizer.from_pretrained(model_or_id) instance-attribute

use_chat = hasattr(self.tokenizer, 'apply_chat_template') and self.tokenizer.chat_template is not None instance-attribute

compute(responses, prompts=None, **kwargs)

Compute LLM judge scores for a list of responses.

Evaluates each response using the configured judge model and prompt template. Scores are averaged when multiple samples are generated per response (via num_return_sequences).

Parameters:

- responses (list[str]): List of text responses to evaluate. Required.
- prompts (list[str] | None): Optional list of prompts corresponding to each response. If provided, must be the same length as responses. These prompts can be referenced in the prompt_template using the {prompt} placeholder. Default: None.
- **kwargs (Any): Additional keyword arguments (currently unused). Default: {}.

Returns:

- dict[str, float | list[float]]: Score statistics containing:

  • "mean_score": Overall average score across all responses
  • "scores": List of mean scores for each response (averaged across samples)
  • "raw_scores": List of lists containing all individual scores for each response

Raises:

- AssertionError: If prompts is provided but has a different length than responses.

Source code in aisteer360/evaluation/metrics/base_judge.py
@torch.inference_mode()
def compute(
    self,
    responses: list[str],
    prompts: list[str] | None = None,
    **kwargs: Any,
) -> dict[str, float | list[float]]:
    """Compute LLM judge scores for a list of responses.

    Evaluates each response using the configured judge model and prompt template. Scores are averaged when multiple
    samples are generated per response (via `num_return_sequences`).

    Args:
        responses (list[str]): List of text responses to evaluate.
        prompts (list[str] | None): Optional list of prompts corresponding to each response.
            If provided, must be the same length as responses. These prompts can be
            referenced in the prompt_template using the {prompt} placeholder.
        **kwargs: Additional keyword arguments (currently unused).

    Returns:
        Score statistics containing:

            - "mean_score": Overall average score across all responses
            - "scores": List of mean scores for each response (averaged across samples)
            - "raw_scores": List of lists containing all individual scores for each response

    Raises:
        AssertionError: If prompts is provided but has different length than responses.
    """

    if prompts is not None and len(prompts) != len(responses):
        raise AssertionError("`responses` and `prompts` must be the same length")

    # build prompts
    prompts_list: list[str] = []
    for i in range(len(responses)):
        fields: dict[str, str | float] = {
            "response": responses[i],
            "lower_bound": self.scale[0],
            "upper_bound": self.scale[1],
        }
        if prompts is not None:
            fields["prompt"] = prompts[i]

        prompt_core = self.base_prompt_template.format(**fields)
        prompt_formatted = self._wrap(prompt_core + "\n\n" + self.format_instructions)
        prompts_list.append(prompt_formatted)

    # generate
    prompt_scores: list[list[float]] = []
    for batch in self._batch_chunks(prompts_list, self.batch_size):
        outputs = self.pipeline(
            batch,
            num_return_sequences=self.num_return_sequences,
            return_full_text=False,
            clean_up_tokenization_spaces=True,
        )

        for prompt, generations in zip(batch, outputs):
            generations = generations if isinstance(generations, list) else [generations]
            assert len(generations) == self.num_return_sequences

            scores = []
            for generation in generations:
                reply_text = generation["generated_text"]
                try:
                    score = self.parse_fn(reply_text, self.scale)
                except Exception:
                    score = self._score_with_retries(prompt)
                scores.append(score)

            prompt_scores.append(scores)

    mean_per_prompt = [sum(prompt_score) / len(prompt_score) for prompt_score in prompt_scores]
    corpus_mean = sum(mean_per_prompt) / len(mean_per_prompt)

    return {
        "mean_score": corpus_mean,  # overall average
        "scores": mean_per_prompt,  # one number per original prompt
        "raw_scores": prompt_scores  # n_samples scores per prompt
    }
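
A usage sketch (the judge model ID, template, and texts below are placeholders; num_return_sequences is routed through gen_kwargs and controls how many samples are averaged per response):

from aisteer360.evaluation.metrics.base_judge import LLMJudgeMetric

TEMPLATE = (
    "Rate the quality of the response on a scale from {lower_bound} to {upper_bound}.\n\n"
    "Prompt: {prompt}\n"
    "Response: {response}"
)

judge = LLMJudgeMetric(
    model_or_id="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder judge model
    prompt_template=TEMPLATE,
    scale=(1, 5),
    batch_size=4,
    gen_kwargs={"num_return_sequences": 3, "do_sample": True, "temperature": 0.7},
)

results = judge.compute(
    responses=["Paris is the capital of France."],
    prompts=["What is the capital of France?"],
)
print(results["mean_score"])  # overall average
print(results["scores"])      # one mean score per response
print(results["raw_scores"])  # three raw scores per response (num_return_sequences=3)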

build_structured_parser(scale)

Build a StructuredOutputParser and parsing function for rating predictions.

Constructs a StructuredOutputParser configured with a single ResponseSchema that expects a float score within the specified scale range. It also returns a parsing function that extracts and validates the score from text, ensuring the result is clamped between the provided bounds.

Parameters:

- scale (tuple[float, float]): A (low, high) tuple specifying the valid inclusive range for the score. Required.

Returns:

- A tuple with elements:

  • StructuredOutputParser: The parser configured with the score schema.
  • Callable[[str, tuple[float, float]], float]: A function that takes a raw text response and the (low, high) scale, extracts the score, converts it to a float, and clamps it within the valid range.
Source code in aisteer360/evaluation/metrics/base_judge.py
def build_structured_parser(scale):
    """
    Build a StructuredOutputParser and parsing function for rating predictions.

    Constructs a `StructuredOutputParser` configured with a single `ResponseSchema` that expects a float score within
    the specified scale range. It also returns a parsing function that extracts and validates the score from text,
    ensuring the result is clamped between the provided bounds.

    Args:
        scale (tuple[float, float]): A `(low, high)` tuple specifying the valid inclusive range for the score.

    Returns:
        A tuple with elements:

            - StructuredOutputParser: The parser configured with the score schema.
            - Callable[[str, tuple[float, float]], float]: A function that takes a raw text response and the
              `(low, high)` scale, extracts the score, converts it to a float, and clamps it within the valid range.
    """
    low, high = scale
    score_schema = ResponseSchema(
        name="score",
        description=f"A single float between {low} and {high} (inclusive) that rates the prediction."
    )
    output_parser = StructuredOutputParser.from_response_schemas([score_schema])

    def parse_fn(text: str, _: tuple[float, float]) -> float:
        """
        Parse and validate a score from text using the structured output parser.

        Args:
            text (str): Raw model output expected to contain the structured score.
            _ (tuple[float, float]): Unused; the scale is captured from the enclosing scope.

        Returns:
            float: The parsed score, clamped to the `(low, high)` range.

        Raises:
            ValueError: If the score cannot be parsed from the text.
        """
        try:
            score = float(output_parser.parse(text)["score"])
        except OutputParserException as e:
            raise ValueError(f"Could not parse score: {e}")
        return max(low, min(high, score))

    return output_parser, parse_fn
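
A usage sketch of the returned pair; the reply strings below assume the judge follows the parser's fenced-JSON format instructions (LangChain's StructuredOutputParser):

from aisteer360.evaluation.metrics.base_judge import build_structured_parser

output_parser, parse_fn = build_structured_parser(scale=(1, 5))

# Instructions that LLMJudgeMetric appends to every judge prompt.
print(output_parser.get_format_instructions())

# A well-formed reply: a fenced JSON block containing a "score" field.
print(parse_fn('```json\n{"score": 4.5}\n```', (1, 5)))  # 4.5

# Out-of-range scores are clamped to the scale bounds.
print(parse_fn('```json\n{"score": 9}\n```', (1, 5)))  # 5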