catboost.get_object_importance(model, pool, train_pool, top_size = -1, type = 'Average', update_method = 'SinglePoint', thread_count = -1)
Calculate the effect of objects from the training dataset on the optimized metric values for the objects from the input dataset:
- Positive values reflect that the optimized metric increases.
- Negative values reflect that the optimized metric decreases.
The further the value deviates from 0, the greater the impact the object has on the optimized metric.
The method is an implementation of the approach described in the Finding Influential Training Samples for Gradient Boosted Decision Trees paper.
Currently, object importance is supported only for the following loss functions.
model — The model obtained as the result of training.
pool — The input dataset.
train_pool — The dataset used for training.
top_size — The number of most important objects from the training dataset to return. The number of returned objects is limited to this value.
-1 (the top size is not limited)
type — The method for calculating the object importances.
- Average — The average of scores of objects from the training dataset for every object from the input dataset.
- PerObject — The scores of each object from the training dataset for each object from the input dataset.
update_method — The algorithm accuracy method.
- SinglePoint — The fastest and least accurate method.
- TopKLeaves — Specify the number of leaves. The higher the value, the more accurate and the slower the calculation.
- AllPoints — The slowest and most accurate method.
top — Defines the number of leaves to use for the TopKLeaves update method. See the Finding Influential Training Samples for Gradient Boosted Decision Trees paper for more details.
For example, the following value sets the method to TopKLeaves and limits the number of leaves to 3:
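Assuming CatBoost's colon-separated parameter syntax, such a value would look like this:

update_method = 'TopKLeaves:top=3'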
thread_count — The number of threads to use during the calculation.
Optimizes the speed of execution. This parameter doesn't affect results.
-1 (the number of threads is equal to the number of processor cores)
Calculate the object strength:
library(catboost)

train_dataset <- matrix(c(1900, 7, 1,
                          1896, 1, 1),
                        nrow = 2, ncol = 3, byrow = TRUE)
label_values <- c(0, 1)
train_pool <- catboost.load_pool(train_dataset, label_values)

input_dataset <- matrix(c(1900, 47, 1,
                          1904, 27, 1),
                        nrow = 2, ncol = 3, byrow = TRUE)
input_pool <- catboost.load_pool(input_dataset, label_values)

trained_model <- catboost.train(train_pool, params = list(iterations = 10))
object_importance <- catboost.get_object_importance(trained_model,
                                                    input_pool,
                                                    train_pool)
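For instance, to request per-object scores with a more accurate update method, the final call of the example above could be adjusted as follows. This is a sketch: the 'TopKLeaves:top=3' string is an assumption based on CatBoost's colon-separated parameter syntax.

object_importance <- catboost.get_object_importance(trained_model,
                                                    input_pool,
                                                    train_pool,
                                                    type = 'PerObject',
                                                    update_method = 'TopKLeaves:top=3')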