Generic metrics
aisteer360.evaluation.metrics.generic
Generic evaluation metrics.
This module contains metrics that can be used to evaluate model outputs regardless of the specific task or domain (e.g., relevance, factuality, perplexity).
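For reference, a minimal import sketch; the import paths are an assumption inferred from the source file locations listed in each class entry below:

```python
# Import paths inferred from the per-class "Source code in ..." locations below.
from aisteer360.evaluation.metrics.generic.factuality import Factuality
from aisteer360.evaluation.metrics.generic.perplexity import Perplexity
from aisteer360.evaluation.metrics.generic.relevance import Relevance
```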
factuality
Factuality
Bases: LLMJudgeMetric
Judge factual correctness of a response to a prompt.
Source code in `aisteer360/evaluation/metrics/generic/factuality.py`, lines 19–30.
Instance attributes:

- `base_prompt_template = prompt_template.strip()`
- `batch_size = batch_size`
- `device = device or ('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')`
- `extras = extras`
- `format_instructions = self.output_parser.get_format_instructions()`
- `max_retries = max_retries`
- `model = AutoModelForCausalLM.from_pretrained(model_or_id)`
- `name = self.__class__.__name__`
- `num_return_sequences = int(gen_kwargs.pop('num_return_sequences', 1))`
- `pipeline = TextGenerationPipeline(model=self.model, tokenizer=self.tokenizer)`
- `scale = scale`
- `tokenizer = tokenizer or AutoTokenizer.from_pretrained(model_or_id)`
- `use_chat = hasattr(self.tokenizer, 'apply_chat_template') and self.tokenizer.chat_template is not None`
compute(responses, prompts=None, **kwargs)
Compute LLM judge scores for a list of responses.
Evaluates each response using the configured judge model and prompt template. Scores are averaged when multiple samples are generated per response (via `num_return_sequences`).
Parameters:

Name | Type | Description | Default
---|---|---|---
`responses` | `list[str]` | List of text responses to evaluate. | *required*
`prompts` | `list[str] \| None` | Optional list of prompts corresponding to each response. If provided, must be the same length as `responses`. These prompts can be referenced in the `prompt_template` using the `{prompt}` placeholder. | `None`
`**kwargs` | `Any` | Additional keyword arguments (currently unused). | `{}`

Returns:

Type | Description
---|---
`dict[str, float \| list[float]]` | Score statistics containing the per-response scores and aggregate values.

Raises:

Type | Description
---|---
`AssertionError` | If `prompts` is provided but has a different length than `responses`.
Source code in `aisteer360/evaluation/metrics/base_judge.py`, lines 206–286.
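A minimal usage sketch. The constructor arguments are assumptions inferred from the attribute defaults above (e.g., `model = AutoModelForCausalLM.from_pretrained(model_or_id)`); the model ID is a placeholder:

```python
from aisteer360.evaluation.metrics.generic.factuality import Factuality

# Assumed constructor argument, inferred from the attribute defaults above;
# "Qwen/Qwen2.5-7B-Instruct" is a placeholder judge model.
factuality = Factuality(model_or_id="Qwen/Qwen2.5-7B-Instruct")

prompts = ["Who wrote 'On the Origin of Species'?"]
responses = ["It was written by Charles Darwin and published in 1859."]

# `prompts` is optional, but when given it must match `responses` in length;
# it fills the {prompt} placeholder in the judge's prompt template.
scores = factuality.compute(responses, prompts=prompts)
print(scores)  # dict[str, float | list[float]] of score statistics
```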
perplexity
Perplexity
Bases: Metric
Compute token-level perplexity for a batch of sentences.
Perplexity is the exponentiated mean cross-entropy between the language model’s predicted distribution and the true next token. Lower is better.
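Concretely, for a tokenized sequence $x_1, \ldots, x_N$ scored by a model $p_\theta$:

$$
\mathrm{PPL}(x) = \exp\!\left(-\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\!\left(x_i \mid x_{<i}\right)\right)
$$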
Parameters:

Name | Type | Description | Default
---|---|---|---
`model_or_id` | `str \| Module` | Hugging Face model ID or an already-instantiated causal language model. | *required*
`tokenizer` | `PreTrainedTokenizer \| None` | Tokenizer to use. Leave `None` to load it from `model_or_id`. | `None`
`batch_size` | `int` | Number of sentences per forward pass. Higher is faster until GPU memory becomes the bottleneck. | `16`
`add_bos` | `bool` | Whether to prepend the tokenizer's BOS token so the first word in each sentence is also scored. Ignored if the tokenizer has no BOS token. | `True`
`max_length` | `int \| None` | If set, truncate inputs to this length so they fit the model's context window. | `None`
`device` | `str \| None` | Device to run on. Leave `None` to auto-select (`'cuda'` if available, else `'cpu'`). | `None`
Attributes:

Name | Type | Description
---|---|---
`add_bos` | `bool` | Whether a BOS token is prepended before scoring.
`batch_size` | `int` | Number of sentences processed per forward pass.
`device` | `str` | The device actually selected for computation (`'cuda'` or `'cpu'`).
`max_length` | `int \| None` | Truncation length for inputs, or `None` if unset.
`model` | `PreTrainedModel` | The loaded causal language model used to score tokens.
`tokenizer` | `PreTrainedTokenizer` | Tokenizer used for encoding, padding, and BOS handling.
Source code in `aisteer360/evaluation/metrics/generic/perplexity.py`, lines 10–128.
Instance attributes:

- `add_bos = add_bos and self.tokenizer.bos_token_id is not None`
- `batch_size = batch_size`
- `device = device or ('cuda' if torch.cuda.is_available() else 'cpu')`
- `extras = extras`
- `max_length = max_length`
- `model = AutoModelForCausalLM.from_pretrained(model_or_id)`
- `name = self.__class__.__name__`
- `tokenizer = tokenizer or AutoTokenizer.from_pretrained(model_or_id)`
compute(responses, prompts=None)
Compute perplexity for each response (and the mean across the batch).
Parameters:

Name | Type | Description | Default
---|---|---|---
`responses` | `list[str]` | Text sequences to score. | *required*
`prompts` | `list[str] \| None` | Unused here; present for a uniform metric API. | `None`

Returns:

Type | Description
---|---
`dict[str, float]` | A dict of perplexity statistics; per the summary above, this covers each response's perplexity and the mean across the batch.
Source code in `aisteer360/evaluation/metrics/generic/perplexity.py`, lines 69–128.
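A usage sketch with the constructor parameters documented above; the model ID is a placeholder:

```python
from aisteer360.evaluation.metrics.generic.perplexity import Perplexity

# Constructor parameters as documented above; "gpt2" is a placeholder model ID.
perplexity = Perplexity(
    model_or_id="gpt2",
    batch_size=16,    # sentences per forward pass
    add_bos=True,     # also score the first word, if the tokenizer has a BOS token
    max_length=1024,  # truncate inputs to fit the context window
)

responses = [
    "The quick brown fox jumps over the lazy dog.",
    "Colorless green ideas sleep furiously.",
]

# `prompts` is accepted but unused; present for a uniform metric API.
stats = perplexity.compute(responses)
print(stats)  # dict[str, float] of perplexity statistics; lower is better
```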
relevance
Relevance
Bases: LLMJudgeMetric
Judge relevance of a response to a prompt.
Source code in `aisteer360/evaluation/metrics/generic/relevance.py`, lines 19–30.
Instance attributes:

- `base_prompt_template = prompt_template.strip()`
- `batch_size = batch_size`
- `device = device or ('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')`
- `extras = extras`
- `format_instructions = self.output_parser.get_format_instructions()`
- `max_retries = max_retries`
- `model = AutoModelForCausalLM.from_pretrained(model_or_id)`
- `name = self.__class__.__name__`
- `num_return_sequences = int(gen_kwargs.pop('num_return_sequences', 1))`
- `pipeline = TextGenerationPipeline(model=self.model, tokenizer=self.tokenizer)`
- `scale = scale`
- `tokenizer = tokenizer or AutoTokenizer.from_pretrained(model_or_id)`
- `use_chat = hasattr(self.tokenizer, 'apply_chat_template') and self.tokenizer.chat_template is not None`
compute(responses, prompts=None, **kwargs)
Compute LLM judge scores for a list of responses.
Evaluates each response using the configured judge model and prompt template. Scores are averaged when multiple samples are generated per response (via `num_return_sequences`).
Parameters:

Name | Type | Description | Default
---|---|---|---
`responses` | `list[str]` | List of text responses to evaluate. | *required*
`prompts` | `list[str] \| None` | Optional list of prompts corresponding to each response. If provided, must be the same length as `responses`. These prompts can be referenced in the `prompt_template` using the `{prompt}` placeholder. | `None`
`**kwargs` | `Any` | Additional keyword arguments (currently unused). | `{}`

Returns:

Type | Description
---|---
`dict[str, float \| list[float]]` | Score statistics containing the per-response scores and aggregate values.

Raises:

Type | Description
---|---
`AssertionError` | If `prompts` is provided but has a different length than `responses`.
Source code in `aisteer360/evaluation/metrics/base_judge.py`, lines 206–286.
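A hedged sketch mirroring the Factuality example, here showing multi-sample judging. The `gen_kwargs` constructor argument is an assumption inferred from the attribute default `num_return_sequences = int(gen_kwargs.pop('num_return_sequences', 1))`; the model ID is a placeholder:

```python
from aisteer360.evaluation.metrics.generic.relevance import Relevance

# Assumed constructor arguments, inferred from the attribute defaults above;
# `gen_kwargs` is implied by `num_return_sequences = int(gen_kwargs.pop(...))`.
relevance = Relevance(
    model_or_id="Qwen/Qwen2.5-7B-Instruct",
    gen_kwargs={"num_return_sequences": 3},  # 3 judge samples, averaged per response
)

prompts = ["Summarize the plot of Moby-Dick."]
responses = ["Moby-Dick follows Captain Ahab's obsessive hunt for a white whale."]

scores = relevance.compute(responses, prompts=prompts)
print(scores)  # per-response scores (averaged over the 3 samples) plus aggregates
```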