Module streamauc.metrics

Sub-modules

streamauc.metrics.metric_synonyms

Functions

def dice(tp: numpy.ndarray, fp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the Dice coefficient, 2·TP / (2·TP + FP + FN), aggregated according to the specified method. Numerically identical to the F1 score (see f1_score).

def f1_score(tp: numpy.ndarray, fp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the F1 score, the harmonic mean of precision and recall: 2·TP / (2·TP + FP + FN), aggregated according to the specified method.
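
Examples

A minimal usage sketch, not taken from the library's own docs: the import path of AggregationMethod is an assumption, and the expected output is derived from F1 = 2·TP / (2·TP + FP + FN). The same call works for dice, which shares this signature.

>>> import numpy as np
>>> from streamauc.metrics import f1_score
>>> from streamauc import AggregationMethod  # assumed import path
>>> tp = np.array([[8, 2]])  # shape [num_thresholds=1, num_classes=2]
>>> fp = np.array([[2, 2]])
>>> fn = np.array([[2, 6]])
>>> f1_score(tp=tp, fp=fp, fn=fn, method=AggregationMethod.MACRO)  # mean of 0.8 and 1/3
array([0.56666667])
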
def fallout(fp: numpy.ndarray, tn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the false positive rate (FPR) from the false positive (fp) and true negative (tn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

fp : np.ndarray
Array of false positives for each class. Of shape [num_thresholds, num_classes]
tn : np.ndarray
Array of true negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

fpr : np.ndarray
FPR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{FPR}_{\text{micro}} = \frac{\sum \text{FP}}{\sum (\text{FP} + \text{TN})} $$
  • For macro-averaging: $$ \text{FPR}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$
  • For one-vs-all: $$ \text{FPR}_{\text{one\_vs\_all}} = \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$ where $ c $ is the specified class index.
def fpr(fp: numpy.ndarray, tn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs) ‑> numpy.ndarray

Compute the false positive rate (FPR) from the false positive (fp) and true negative (tn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

fp : np.ndarray
Array of false positives for each class. Of shape [num_thresholds, num_classes]
tn : np.ndarray
Array of true negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

fpr : np.ndarray
FPR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{FPR}_{\text{micro}} = \frac{\sum \text{FP}}{\sum (\text{FP} + \text{TN})} $$
  • For macro-averaging: $$ \text{FPR}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$
  • For one-vs-all: $$ \text{FPR}_{\text{one\_vs\_all}} = \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$ where $ c $ is the specified class index.
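
Examples

A minimal usage sketch, not taken from the library's own docs: the import path of AggregationMethod and the ONE_VS_ALL member name are assumptions, and the expected outputs are derived from the formulas above.

>>> import numpy as np
>>> from streamauc.metrics import fpr
>>> from streamauc import AggregationMethod  # assumed import path
>>> fp = np.array([[10, 4]])  # shape [num_thresholds=1, num_classes=2]
>>> tn = np.array([[90, 16]])
>>> fpr(fp=fp, tn=tn, method=AggregationMethod.MACRO)  # (0.1 + 0.2) / 2
array([0.15])
>>> fpr(fp=fp, tn=tn, method=AggregationMethod.MICRO)  # 14 / 120
array([0.11666667])
>>> fpr(fp=fp, tn=tn, method=AggregationMethod.ONE_VS_ALL, class_index=1)  # 4 / 20
array([0.2])
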
def hit_rate(tp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the true positive rate (TPR) from the true positive (tp) and false negative (fn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

tp : np.ndarray
Array of true positives for each class. Of shape [num_thresholds, num_classes]
fn : np.ndarray
Array of false negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tpr : np.ndarray
TPR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TPR}_{\text{micro}} = \frac{\sum \text{TP}}{ \sum (\text{TP} + \text{FN})} $$
  • For macro-averaging: $$ \text{TPR}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$
  • For one-vs-all: $$ \text{TPR}_{\text{one\_vs\_all}} = \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$ where $ c $ is the specified class index.
def jaccard_index(tp: numpy.ndarray, fp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the Jaccard index (intersection over union), TP / (TP + FP + FN), aggregated according to the specified method.
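
Examples

A minimal usage sketch, under the same caveats as the other examples on this page: the AggregationMethod import path is an assumption, and the expected output is derived from TP / (TP + FP + FN).

>>> import numpy as np
>>> from streamauc.metrics import jaccard_index
>>> from streamauc import AggregationMethod  # assumed import path
>>> tp, fp, fn = np.array([[8, 2]]), np.array([[2, 2]]), np.array([[2, 6]])
>>> jaccard_index(tp=tp, fp=fp, fn=fn, method=AggregationMethod.MACRO)  # mean of 8/12 and 2/10
array([0.43333333])
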
def positive_predictive_value(tp: numpy.ndarray, fp: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute precision for multi-class classification using the specified aggregation method.

Parameters

tp : np.ndarray
Array of true positives for each class.
fp : np.ndarray
Array of false positives for each class.
method : AggregationMethod, optional
Method used to compute precision for multiple classes. Default is AggregationMethod.MACRO. Must be one of ["macro", "micro", "one_vs_all"].
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

precision : np.ndarray
Computed precision values based on the specified aggregation method.

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{Precision}_{\text{micro}} = \frac{\sum \text{TP}}{\sum (\text{TP} + \text{FP})} $$
  • For macro-averaging: $$ \text{Precision}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{TP}_c}{\text{TP}_c + \text{FP}_c} $$
  • For one-vs-all: $$ \text{Precision}_{\text{one\_vs\_all}} = \frac{\text{TP}_c}{\text{TP}_c + \text{FP}_c} $$ where $ c $ is the specified class index.
def precision(tp: numpy.ndarray, fp: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute precision for multi-class classification using the specified aggregation method.

Parameters

tp : np.ndarray
Array of true positives for each class.
fp : np.ndarray
Array of false positives for each class.
method : AggregationMethod, optional
Method used to compute precision for multiple classes. Default is AggregationMethod.MACRO. Must be one of ["macro", "micro", "one_vs_all"].
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

precision : np.ndarray
Computed precision values based on the specified aggregation method.

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{Precision}_{\text{micro}} = \frac{\sum \text{TP}}{\sum (\text{TP} + \text{FP})} $$
  • For macro-averaging: $$ \text{Precision}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{TP}_c}{\text{TP}_c + \text{FP}_c} $$
  • For one-vs-all: $$ \text{Precision}_{\text{one\_vs\_all}} = \frac{\text{TP}_c}{\text{TP}_c + \text{FP}_c} $$ where $ c $ is the specified class index.
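
Examples

A minimal usage sketch, not taken from the library's own docs: the import path of AggregationMethod is an assumption, and the expected outputs are derived from the formulas above.

>>> import numpy as np
>>> from streamauc.metrics import precision
>>> from streamauc import AggregationMethod  # assumed import path
>>> tp = np.array([[8, 2]])  # shape [num_thresholds=1, num_classes=2]
>>> fp = np.array([[2, 2]])
>>> precision(tp=tp, fp=fp, method=AggregationMethod.MACRO)  # (0.8 + 0.5) / 2
array([0.65])
>>> precision(tp=tp, fp=fp, method=AggregationMethod.MICRO)  # 10 / 14
array([0.71428571])
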
def recall(tp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the true positive rate (TPR) from the true positive (tp) and false negative (fn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

tp : np.ndarray
Array of true positives for each class. Of shape [num_thresholds, num_classes]
fn : np.ndarray
Array of false negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tpr : np.ndarray
TPR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TPR}_{\text{micro}} = \frac{\sum \text{TP}}{ \sum (\text{TP} + \text{FN})} $$
  • For macro-averaging: $$ \text{TPR}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$
  • For one-vs-all: $$ \text{TPR}_{\text{one\_vs\_all}} = \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$ where $ c $ is the specified class index.
def selectivity(fp: numpy.ndarray, tn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the true negative rate (TNR) from the false positive (fp) and true negative (tn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

fp : np.ndarray
Array of false positives for each class. Of shape [num_thresholds, num_classes]
tn : np.ndarray
Array of true negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod, optional
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tnr : np.ndarray
TNR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TNR}_{\text{micro}} = 1 - \frac{\sum \text{FP}}{\sum (\text{FP} + \text{TN})} $$
  • For macro-averaging: $$ \text{TNR}_{\text{macro}} = 1 - \frac{1}{C} \sum_{c=1}^{C} \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$
  • For one-vs-all: $$ \text{TNR}_{\text{one\_vs\_all}} = 1 - \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$ where $ c $ is the specified class index.
def sensitivity(tp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the true positive rate (TPR) from the true positive (tp) and false negative (fn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

tp : np.ndarray
Array of true positives for each class. Of shape [num_thresholds, num_classes]
fn : np.ndarray
Array of false negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tpr : np.ndarray
TPR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TPR}_{\text{micro}} = \frac{\sum \text{TP}}{ \sum (\text{TP} + \text{FN})} $$
  • For macro-averaging: $$ \text{TPR}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$
  • For one-vs-all: $$ \text{TPR}_{\text{one\_vs\_all}} = \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$ where $ c $ is the specified class index.
def specificity(fp: numpy.ndarray, tn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the true negative rate (TNR) from the false positive (fp) and true negative (tn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

fp : np.ndarray
Array of false positives for each class. Of shape [num_thresholds, num_classes]
tn : np.ndarray
Array of true negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod, optional
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tnr : np.ndarray
TNR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TNR}_{\text{micro}} = 1 - \frac{\sum \text{FP}}{\sum (\text{FP} + \text{TN})} $$
  • For macro-averaging: $$ \text{TNR}_{\text{macro}} = 1 - \frac{1}{C} \sum_{c=1}^{C} \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$
  • For one-vs-all: $$ \text{TNR}_{\text{one\_vs\_all}} = 1 - \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$ where $ c $ is the specified class index.
def tnr(fp: numpy.ndarray, tn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs) ‑> numpy.ndarray

Compute the true negative rate (TNR) from the false positive (fp) and true negative (tn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

fp : np.ndarray
Array of false positives for each class. Of shape [num_thresholds, num_classes]
tn : np.ndarray
Array of true negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod, optional
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tnr : np.ndarray
TNR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TNR}_{\text{micro}} = 1 - \frac{\sum \text{FP}}{\sum (\text{FP} + \text{TN})} $$
  • For macro-averaging: $$ \text{TNR}_{\text{macro}} = 1 - \frac{1}{C} \sum_{c=1}^{C} \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$
  • For one-vs-all: $$ \text{TNR}_{\text{one\_vs\_all}} = 1 - \frac{\text{FP}_c}{\text{FP}_c + \text{TN}_c} $$ where $ c $ is the specified class index.
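
Examples

A minimal usage sketch, not taken from the library's own docs: the import path of AggregationMethod is an assumption, and the expected outputs are derived from the formulas above.

>>> import numpy as np
>>> from streamauc.metrics import tnr
>>> from streamauc import AggregationMethod  # assumed import path
>>> fp = np.array([[10, 4]])  # shape [num_thresholds=1, num_classes=2]
>>> tn = np.array([[90, 16]])
>>> tnr(fp=fp, tn=tn, method=AggregationMethod.MACRO)  # 1 - (0.1 + 0.2) / 2
array([0.85])
>>> tnr(fp=fp, tn=tn, method=AggregationMethod.MICRO)  # 1 - 14/120
array([0.88333333])
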
def tpr(tp: numpy.ndarray, fn: numpy.ndarray, method: AggregationMethod = AggregationMethod.MACRO, class_index: Optional[int] = None, check_inputs: bool = True, **kwargs)

Compute the true positive rate (TPR) from the true positive (tp) and false negative (fn) counts at various thresholds. Can be passed as a callable to the auc method.

Parameters

tp : np.ndarray
Array of true positives for each class. Of shape [num_thresholds, num_classes]
fn : np.ndarray
Array of false negatives for each class. Of shape [num_thresholds, num_classes]
method : AggregationMethod
Aggregation method to be used in multiclass setting. Default is AggregationMethod.MACRO.
class_index : int, optional
Class index for "one_vs_all" calculation. Required if method is "one_vs_all".
check_inputs : bool, optional
If True, perform input validation checks. Default is True.
**kwargs
Additional keyword arguments.

Returns

tpr : np.ndarray
TPR at each threshold, aggregated according to the specified method. Of shape [num_thresholds]

Raises

ValueError
If an invalid aggregation method is specified.

Notes

  • For micro-averaging: $$ \text{TPR}_{\text{micro}} = \frac{\sum \text{TP}}{ \sum (\text{TP} + \text{FN})} $$
  • For macro-averaging: $$ \text{TPR}_{\text{macro}} = \frac{1}{C} \sum_{c=1}^{C} \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$
  • For one-vs-all: $$ \text{TPR}_{\text{one\_vs\_all}} = \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} $$ where $ c $ is the specified class index.
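
Examples

A minimal usage sketch, not taken from the library's own docs: the import path of AggregationMethod and the ONE_VS_ALL member name are assumptions, and the expected outputs are derived from the formulas above.

>>> import numpy as np
>>> from streamauc.metrics import tpr
>>> from streamauc import AggregationMethod  # assumed import path
>>> tp = np.array([[9, 3]])  # shape [num_thresholds=1, num_classes=2]
>>> fn = np.array([[1, 2]])
>>> tpr(tp=tp, fn=fn, method=AggregationMethod.MACRO)  # (0.9 + 0.6) / 2
array([0.75])
>>> tpr(tp=tp, fn=fn, method=AggregationMethod.MICRO)  # 12 / 15
array([0.8])
>>> tpr(tp=tp, fn=fn, method=AggregationMethod.ONE_VS_ALL, class_index=0)  # 9 / 10
array([0.9])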