compare

Draw train and evaluation metrics in Jupyter Notebook for two trained models.

Method call format

compare(model,
        data=None,
        metrics=None,
        ntree_start=0,
        ntree_end=0,
        eval_period=1,
        thread_count=-1,
        tmp_dir=None)
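
For example, the following sketch (the training data and model settings are illustrative, not part of the method reference) trains two classifiers and draws their Logloss and AUC curves on a shared evaluation dataset:

from catboost import CatBoostClassifier, Pool

# Toy training and evaluation datasets (illustrative values only).
train_pool = Pool(data=[[0, 3], [4, 1], [8, 1], [9, 1]],
                  label=[0, 0, 1, 1])
eval_pool = Pool(data=[[2, 1], [3, 1], [9, 0], [5, 3]],
                 label=[0, 1, 1, 0])

# Train two models with different settings so that their metric curves differ.
model1 = CatBoostClassifier(iterations=100, learning_rate=0.1, verbose=False)
model1.fit(train_pool, eval_set=eval_pool)

model2 = CatBoostClassifier(iterations=100, learning_rate=0.3, verbose=False)
model2.fit(train_pool, eval_set=eval_pool)

# Plot the Logloss and AUC curves of both models in the notebook.
model1.compare(model2, data=eval_pool, metrics=['Logloss', 'AUC'])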

Parameters

Parameter Possible types Description Default value
model CatBoost Model

The CatBoost model to compare with.

Required parameter
metrics list of strings

The list of metrics to be calculated.

Supported metrics
  • RMSE
  • Logloss
  • MAE
  • CrossEntropy
  • Quantile
  • LogLinQuantile
  • Lq
  • MultiClass
  • MultiClassOneVsAll
  • MAPE
  • Poisson
  • PairLogit
  • PairLogitPairwise
  • QueryRMSE
  • QuerySoftMax
  • SMAPE
  • Recall
  • Precision
  • F1
  • TotalF1
  • Accuracy
  • BalancedAccuracy
  • BalancedErrorRate
  • Kappa
  • WKappa
  • LogLikelihoodOfPrediction
  • AUC
  • R2
  • FairLoss
  • NumErrors
  • MCC
  • BrierScore
  • HingeLoss
  • HammingLoss
  • ZeroOneLoss
  • MSLE
  • MedianAbsoluteError
  • Huber
  • Expectile
  • PairAccuracy
  • AverageGain
  • PFound
  • NDCG
  • DCG
  • FilteredDCG
  • NormalizedGini
  • PrecisionAt
  • RecallAt
  • MAP
  • CtrFactor

For example, if the AUC and Logloss metrics should be calculated, use the following construction:

['Logloss', 'AUC']
Required parameter
data catboost.Pool

A file or matrix with the input dataset on which the metric values are calculated.

Required parameter