Loss¶
class seq2seq.loss.loss.Loss(name, criterion)¶
Base class for encapsulation of the loss functions.
This class defines interfaces that are commonly used with loss functions in training and inference. For information regarding individual loss functions, please refer to http://pytorch.org/docs/master/nn.html#loss-functions
Note
Do not use this class directly; use one of its subclasses.
Parameters:
- name (str) – name of the loss function used in logging messages.
- criterion (torch.nn._Loss) – one of PyTorch's loss functions. Refer to http://pytorch.org/docs/master/nn.html#loss-functions for a list of them.
Variables:
- name (str) – name of the loss function used in logging messages.
- criterion (torch.nn._Loss) – one of PyTorch's loss functions. Refer to http://pytorch.org/docs/master/nn.html#loss-functions for a list of them. Implementation depends on individual subclasses.
- acc_loss (int or torch.Tensor) – variable that stores the accumulated loss.
- norm_term (float) – normalization term that can be used to calculate the loss of multiple batches. Implementation depends on individual subclasses.
eval_batch(outputs, target)¶
Evaluate and accumulate loss given outputs and expected results.
This method is called after each batch with the batch outputs and the target (expected) results. The loss and normalization term are accumulated in this method. Override it to define your own accumulation method.
Parameters:
- outputs (torch.Tensor) – outputs of a batch.
- target (torch.Tensor) – expected output of a batch.
get_loss()¶
Get the loss.
This method defines how to calculate the averaged loss given the accumulated loss and the normalization term. Override to define your own logic.
Returns: value of the loss.
Return type: float
reset()¶
Reset the accumulated loss.
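The accumulate / read / reset cycle described above can be sketched roughly as follows. This is only an illustration, assuming a concrete subclass such as NLLLoss; the tensor shapes and values are made up, and in a real seq2seq training loop eval_batch would be fed the decoder outputs and targets produced by the trainer.

    import torch
    import torch.nn.functional as F
    from seq2seq.loss.loss import NLLLoss

    loss = NLLLoss()
    # Fake "model outputs": log-probabilities over a 10-token vocabulary
    # for 4 target positions (shapes are illustrative only).
    outputs = F.log_softmax(torch.randn(4, 10), dim=1)
    targets = torch.tensor([1, 2, 3, 4])

    loss.eval_batch(outputs, targets)   # accumulate acc_loss and norm_term
    print(loss.get_loss())              # averaged loss value for logging
    loss.reset()                        # clear accumulators before the next epoch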
NLLLoss¶
class seq2seq.loss.loss.NLLLoss(weight=None, mask=None, size_average=True)¶
Batch averaged negative log-likelihood loss.
Parameters:
- weight (torch.Tensor, optional) – refer to http://pytorch.org/docs/master/nn.html#nllloss
- mask (int, optional) – index of the masked token, i.e. weight[mask] = 0.
- size_average (bool, optional) – refer to http://pytorch.org/docs/master/nn.html#nllloss
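For example, to keep a padding token from contributing to the loss, the mask index is typically passed together with a weight vector, per the weight[mask] = 0 note above. The vocabulary size and padding index below are hypothetical.

    import torch
    from seq2seq.loss.loss import NLLLoss

    vocab_size = 10000   # hypothetical vocabulary size
    pad_idx = 0          # hypothetical index of the padding token

    weight = torch.ones(vocab_size)              # one weight per output class
    loss = NLLLoss(weight=weight, mask=pad_idx)  # zeroes out weight[pad_idx]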
Perplexity¶
class seq2seq.loss.loss.Perplexity(weight=None, mask=None)¶
Language model perplexity loss.
Perplexity is the exponential of the token-averaged negative log-likelihood. When the averaging options are the same, it is simply the exponential of the NLL loss.
Parameters:
- weight (torch.Tensor, optional) – refer to http://pytorch.org/docs/master/nn.html#nllloss
- mask (int, optional) – index of the masked token, i.e. weight[mask] = 0.
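The relationship stated above can be written out directly. As a sketch with made-up numbers, the perplexity of a corpus is the exponential of its total negative log-likelihood divided by the number of (non-masked) target tokens:

    import math

    total_nll = 530.2    # hypothetical accumulated negative log-likelihood
    num_tokens = 128     # hypothetical number of non-masked target tokens

    perplexity = math.exp(total_nll / num_tokens)  # exp of the token-averaged NLL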