dqs.torch.loss.NegativeLogLikelihood
This class computes the negative log-likelihood, one of the standard measures of the accuracy of probabilistic predictions. It is calculated by comparing the predicted probability distribution to the corresponding actual outcome.
Suppose that the predicted probability distribution is represented as a CDF with \(n\) knots, and that the probability that the target value falls in the \(i\)-th bin is \(f[i]\).
Then the negative log-likelihood for an actual outcome \(y\) can be calculated as
\[ -\log f[i], \]
where \(i\) is the index of the bin that contains \(y\).
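As a quick illustration of this formula, the following standalone sketch uses only PyTorch (not the dqs API; the uniform bin probabilities are hypothetical): it first finds the bin index of \(y\) and then evaluates \(-\log f[i]\).
import torch

# hypothetical per-bin probabilities over 10 equal-width bins on [0, 10]
boundaries = torch.linspace(0.0, 10.0, 11)   # 11 knots define 10 bins
f = torch.full((10,), 0.1)                   # uniform probability mass per bin

y = torch.tensor([5.0])
i = torch.searchsorted(boundaries, y, right=True) - 1   # index of the bin containing y
nll = -torch.log(f[i])                                  # negative log-likelihood = -log f[i]
print(i.item(), nll.item())                             # 5, -log(0.1) ≈ 2.3026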
class NegativeLogLikelihood(distribution, loss_boundaries)
Parameters:
Args | Type | Description
---|---|---
distribution | dqs.distribution | Object from the dqs.distribution package that represents the probability distribution.
loss_boundaries | list (float) | Boundaries used in the negative log-likelihood.
loss(pred, y, e=None)
Parameters:
Args | Type | Description
---|---|---
pred | Tensor (float) | Estimated probability distribution to be evaluated.
y | Tensor (float) | One-dimensional tensor of labels from a dataset.
e | Tensor (bool) | Optional parameter for survival analysis. One-dimensional tensor marking each label as censored (False) or uncensored (True).
Return type: Tensor representing a single float.
Example
The following code computes the negative log-likelihood based on pred and y.
import torch
import dqs

# 11 boundary knots define 10 bins on [0, 10]
boundaries = torch.linspace(0.0, 10.0, 11)
dist = dqs.distribution.DistributionLinear(boundaries)
loss_fn = dqs.loss.NegativeLogLikelihood(dist, boundaries)

# two predicted distributions and their corresponding labels
pred = torch.Tensor([[0.4, 0.6], [0.2, 0.8]])
y = torch.Tensor([5.0, 5.0])
loss = loss_fn.loss(pred, y)
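For survival analysis, the optional e argument marks each label as uncensored (True) or censored (False). The lines below continue the example with illustrative censoring flags; they are a sketch, not part of the original example.
e = torch.tensor([True, False])           # first label uncensored, second censored (illustrative values)
loss_censored = loss_fn.loss(pred, y, e)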