snorkel.analysis.metric_score

snorkel.analysis.metric_score(golds=None, preds=None, probs=None, metric='accuracy', filter_dict=None, **kwargs)

Evaluate a standard metric on a set of predictions/probabilities.

Parameters
  • golds (Optional[ndarray]) – An array of gold (int) labels

  • preds (Optional[ndarray]) – An array of (int) predictions

  • probs (Optional[ndarray]) – An [n_datapoints, n_classes] array of probabilistic predictions

  • metric (str) – The name of the metric to calculate

  • filter_dict (Optional[Dict[str, List[int]]]) – A mapping from label set name to the labels that should be filtered out for that label set

Returns

The value of the requested metric

Return type

float

Raises
  • ValueError – The requested metric is not currently supported

  • ValueError – The user attempted to calculate roc_auc score for a non-binary problem
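Example

A minimal usage sketch. The arrays and printed scores below are illustrative, and the "golds" key in filter_dict assumes the label set names follow the golds/preds/probs argument names:

>>> import numpy as np
>>> from snorkel.analysis import metric_score
>>> golds = np.array([1, 0, 1, 1, 0])
>>> preds = np.array([1, 1, 1, 1, 0])
>>> # Score predictions against gold labels with the default accuracy metric
>>> metric_score(golds=golds, preds=preds, metric="accuracy")
0.8
>>> # Drop datapoints whose gold label is -1 (e.g., abstains) before scoring
>>> metric_score(
...     golds=np.array([1, 0, 1, 0, -1]),
...     preds=np.array([1, 0, 0, 0, 1]),
...     metric="accuracy",
...     filter_dict={"golds": [-1]},
... )
0.75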