utils Package
Various useful constants defining groups of evaluation metrics; a short usage sketch follows the list below.
- skll.utils.constants.CLASSIFICATION_ONLY_METRICS = {'accuracy', 'average_precision', 'balanced_accuracy', 'f05', 'f05_score_macro', 'f05_score_micro', 'f05_score_weighted', 'f1', 'f1_score_least_frequent', 'f1_score_macro', 'f1_score_micro', 'f1_score_weighted', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_weighted', 'neg_log_loss', 'precision', 'precision_macro', 'precision_micro', 'precision_weighted', 'recall', 'recall_macro', 'recall_micro', 'recall_weighted', 'roc_auc'}
Set of evaluation metrics only used for classification tasks
- skll.utils.constants.CORRELATION_METRICS = {'kendall_tau', 'pearson', 'spearman'}
Set of evaluation metrics based on correlation
- skll.utils.constants.PROBABILISTIC_METRICS = frozenset({'average_precision', 'neg_log_loss', 'roc_auc'})
Set of evaluation metrics that can use prediction probabilities
- skll.utils.constants.REGRESSION_ONLY_METRICS = {'explained_variance', 'max_error', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_root_mean_squared_error', 'r2'}
Set of evaluation metrics only used for regression tasks
- skll.utils.constants.UNWEIGHTED_KAPPA_METRICS = {'unweighted_kappa', 'uwk_off_by_one'}
Set of unweighted kappa agreement metrics
- skll.utils.constants.WEIGHTED_KAPPA_METRICS = {'linear_weighted_kappa', 'lwk_off_by_one', 'quadratic_weighted_kappa', 'qwk_off_by_one'}
Set of weighted kappa agreement metrics
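These sets can be combined with ordinary Python set operations. As a minimal sketch (the `check_metric` helper below is hypothetical and not part of SKLL), one could use them to confirm that a metric matches the task type and to decide whether prediction probabilities are needed:

```python
# Illustrative sketch (not part of SKLL): validate a metric choice against a task type.
from skll.utils.constants import (
    CLASSIFICATION_ONLY_METRICS,
    PROBABILISTIC_METRICS,
    REGRESSION_ONLY_METRICS,
)


def check_metric(metric, task):
    """Raise if ``metric`` cannot be used for ``task``; return whether it uses probabilities."""
    if task == "classification" and metric in REGRESSION_ONLY_METRICS:
        raise ValueError(f"{metric!r} is a regression-only metric")
    if task == "regression" and metric in CLASSIFICATION_ONLY_METRICS:
        raise ValueError(f"{metric!r} is a classification-only metric")
    # Probabilistic metrics can make use of prediction probabilities.
    return metric in PROBABILISTIC_METRICS


print(check_metric("roc_auc", "classification"))  # True
print(check_metric("pearson", "regression"))      # False (correlation metric)
```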
A useful logging function for SKLL developers
- skll.utils.logging.get_skll_logger(name, filepath=None, log_level=20)
Create and return logger instances appropriate for use in SKLL code.
These logger instances can log to both STDERR and a file. This function will try to reuse any previously created logger based on the given name and filepath.
- Parameters:
  - name (str) – The name to use for the logger.
  - filepath (str, default=None) – If provided, the path to a file that the logger should also log to, in addition to STDERR.
  - log_level (int, default=20, i.e. logging.INFO) – The level at which to log messages.
- Returns:
  logger – A Logger instance.
- Return type:
  logging.Logger
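A minimal usage sketch, assuming SKLL is installed and the file path `experiment.log` is writable (the logger name and messages are illustrative):

```python
import logging

from skll.utils.logging import get_skll_logger

# Logs go to STDERR and, because a filepath is given, to experiment.log as well.
logger = get_skll_logger("my_experiment", filepath="experiment.log", log_level=logging.DEBUG)
logger.info("Loading training data")

# Calling the function again with the same name and filepath reuses the existing logger.
same_logger = get_skll_logger("my_experiment", filepath="experiment.log")
assert same_logger is logger
```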