scores.SoftmaxScore
Maximum softmax probability confidence score.
Usage
scores.SoftmaxScore()

Supports multiclass, binary (single- or two-logit), and multilabel tasks. A higher maximum softmax probability indicates higher confidence, which corresponds to a lower score.
Parameters
temperature: float or None = None
Optional initial temperature. If None, a temperature is fitted when labels are provided to fit.

task: ("multiclass", "binary", "multilabel") = "multiclass"
Task type for score computation.
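A minimal sketch of the two temperature modes; the fit(logits, labels) call and its argument order are assumptions, since only the behavior ("fitted if labels are provided to fit") is documented here.

import torch
from seapig.scores.logits import SoftmaxScore

fixed = SoftmaxScore(temperature=2.0)  # use a fixed temperature, no fitting
auto = SoftmaxScore()                  # temperature=None
# Hypothetical calibration call (argument order is an assumption):
# auto.fit(val_logits, val_labels)     # fits the temperature from labeled data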
Examples
import torch
from seapig.scores.logits import SoftmaxScore
logits = torch.randn(2, 4)
SoftmaxScore().score(logits)
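The same pattern applies to the other tasks via the task parameter; a hedged sketch assuming the constructor documented above (the input shapes are assumptions):

binary_logits = torch.randn(2, 1)                 # single-logit binary head
SoftmaxScore(task="binary").score(binary_logits)
multilabel_logits = torch.randn(2, 3)             # one logit per label
SoftmaxScore(task="multilabel").score(multilabel_logits)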
Attributes
| Name | Description |
|---|---|
| ident | String identifier for this score. |
ident
ident: str = "softmax"
String identifier for this score.
Methods
| Name | Description |
|---|---|
| score() | Compute task-aware softmax-based confidence score. |
score()
Compute task-aware softmax-based confidence score.
Usage
score(query_logits)

- Multiclass: -max softmax probability.
- Binary single-logit: -sigmoid(|logit|).
- Binary two-logit: -max softmax probability.
- Multilabel: -min(max(p, 1-p)), where p = sigmoid(logit).
Returns
torch.Tensor
1-D tensor of shape (M,) with scores (lower == more confident).
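The per-task rules above can be reproduced directly in torch; a minimal sketch of the stated formulas, not the library's implementation:

import torch

logits = torch.randn(8, 5)
# Multiclass: negative max softmax probability
multiclass = -torch.softmax(logits, dim=-1).max(dim=-1).values
single = torch.randn(8)
# Binary single-logit: -sigmoid(|logit|)
binary_single = -torch.sigmoid(single.abs())
two = torch.randn(8, 2)
# Binary two-logit: negative max softmax probability
binary_two = -torch.softmax(two, dim=-1).max(dim=-1).values
ml = torch.randn(8, 3)
p = torch.sigmoid(ml)
# Multilabel: -min over labels of max(p, 1 - p)
multilabel = -torch.maximum(p, 1 - p).min(dim=-1).values

Each result is a 1-D tensor of shape (8,), matching the Returns contract above.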
See Also
- seapig.scores.logits.EntropyScore: Entropy-based alternative.
- seapig.scores.logits.EnergyScore: Energy-based alternative.
- seapig.scores.logits.MarginScore: Margin-based alternative.