scores.EntropyScore
Entropy-based confidence score.
Usage
```python
scores.EntropyScore()
```

Computes the predictive entropy of the output distribution. Higher entropy indicates higher uncertainty (i.e., a higher score). Supports multiclass, binary, and multilabel tasks.
Parameters
temperature: float or None = None -
Optional initial temperature. If None, the temperature is fitted when labels are provided to fit.

task: ("multiclass", "binary", "multilabel") = "multiclass" -
Task type for score computation.
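As a rough illustration of what the temperature parameter does (a sketch, not seapig's internal implementation; the helper name `scaled_entropy` is made up here): dividing the logits by a temperature before the softmax flattens or sharpens the distribution, which raises or lowers the resulting predictive entropy.

```python
import torch

def scaled_entropy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Hypothetical helper, not part of seapig: temperature-scaled softmax,
    # then H = -sum_i p_i * log(p_i) per sample (last dim = classes).
    probs = torch.softmax(logits / temperature, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```

A temperature above 1 pushes the probabilities toward uniform, so the entropy score rises; a temperature below 1 sharpens them, so it falls.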
Examples
```python
import torch
from seapig.scores.logits import EntropyScore

logits = torch.randn(2, 3)
EntropyScore().score(logits)
```

Attributes
| Name | Description |
|---|---|
| ident | Score identifier string. |

ident

ident: str = "entropy"
Methods
| Name | Description |
|---|---|
| score() | Compute predictive entropy for each sample (task-aware). |
score()
Compute predictive entropy for each sample (task-aware).
Usage
```python
score(query_logits)
```

Returns

torch.Tensor -
A 1-D tensor of shape (M,) with entropy scores (lower means more confident).
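For the binary and multilabel tasks, a natural reading of "task-aware" entropy is the per-label Bernoulli entropy of the sigmoid probabilities. The sketch below uses hypothetical helper names (`bernoulli_entropy`, `multilabel_entropy`), not seapig API, to show one way the multilabel case could reduce to a single score per sample.

```python
import torch

def bernoulli_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Entropy of an independent Bernoulli per logit, from sigmoid probabilities.
    # Clamp keeps log() finite in float32 for extreme logits.
    p = torch.sigmoid(logits).clamp(1e-7, 1 - 1e-7)
    return -(p * p.log() + (1 - p) * (1 - p).log())

def multilabel_entropy(logits: torch.Tensor) -> torch.Tensor:
    # One score per sample: sum the per-label entropies over the label dim.
    return bernoulli_entropy(logits).sum(dim=-1)
```

Under this reading, zero logits (p = 0.5 everywhere) give the maximum score of num_labels * log(2), while large-magnitude logits drive the score toward zero, matching the convention that lower means more confident.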
See Also
- seapig.scores.logits.SoftmaxScore: Softmax probability-based alternative.