Attributes
tree_count_
Purpose
Return the number of trees in the model.
This number can differ from the value specified in the --iterations
training parameter in the following cases:
- The training is stopped by the overfitting detector.
- The --use-best-model training parameter is set to True.
Type
int
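For example, the following sketch (with illustrative toy data) shows that tree_count_ can be smaller than the requested number of iterations when use_best_model is set:

from catboost import CatBoostClassifier

X_train = [[1, 4], [2, 5], [3, 6], [4, 7]]
y_train = [0, 0, 1, 1]
X_val = [[2, 4], [3, 7]]
y_val = [0, 1]

# use_best_model shrinks the model to the iteration with the best
# validation metric, so tree_count_ may be less than iterations=100.
model = CatBoostClassifier(iterations=100, use_best_model=True, verbose=False)
model.fit(X_train, y_train, eval_set=(X_val, y_val))

print(model.tree_count_)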
feature_importances_
Purpose
Return the calculated feature importances. The output data depends on the type of the model's loss function:
- Non-ranking loss functions — PredictionValuesChange
- Ranking loss functions — LossFunctionChange
If the corresponding feature importance is not calculated, the returned value is None.
Use the get_feature_importance function to explicitly calculate the LossFunctionChange feature importance.
Type
numpy.ndarray
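A minimal sketch (toy data is illustrative): feature_importances_ returns PredictionValuesChange for a non-ranking loss function, while LossFunctionChange can be requested explicitly through get_feature_importance, which requires a dataset:

from catboost import CatBoostClassifier, Pool

train_pool = Pool([[1, 4], [2, 5], [3, 6], [4, 7]], [0, 0, 1, 1])

model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(train_pool)

print(model.feature_importances_)  # PredictionValuesChange for Logloss

# LossFunctionChange needs a dataset to evaluate the loss on.
print(model.get_feature_importance(data=train_pool, type='LossFunctionChange'))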
random_seed_
Purpose
The random seed used for training.
Type
int
learning_rate_
Purpose
The learning rate used for training.
Type
float
feature_names_
Purpose
The names of features in the dataset.
Type
list
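A minimal sketch reading these three attributes back after training (toy data is illustrative; the feature names come from the feature_names argument of Pool):

from catboost import CatBoostRegressor, Pool

pool = Pool([[1, 4], [2, 5], [3, 6]], [10.0, 20.0, 30.0],
            feature_names=['f0', 'f1'])

model = CatBoostRegressor(iterations=5, verbose=False)
model.fit(pool)

print(model.random_seed_)    # seed used for training
print(model.learning_rate_)  # learning rate, possibly selected automatically
print(model.feature_names_)  # ['f0', 'f1']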
evals_result_
Purpose
Return the values of metrics calculated during the training.
Note
Only the values of calculated metrics are output. The following metrics are not calculated by default for the training dataset and are therefore not output:
- PFound
- YetiRank
- NDCG
- YetiRankPairwise
- AUC
- NormalizedGini
- FilteredDCG
- DCG
Use the hints=skip_train~false parameter to enable the calculation. See the Enable, disable and configure metrics calculation section for more details.
Type
dict
Output format:
{pool_name: {metric_name_1: [value_1, value_2, ..., value_N], ..., metric_name_M: [value_1, value_2, ..., value_N]}}
For example:
{'learn': {'Logloss': [0.6720840012056274, 0.6476800666988386, 0.6284055381249782], 'AUC': [1.0, 1.0, 1.0], 'CrossEntropy': [0.6720840012056274, 0.6476800666988386, 0.6284055381249782]}}
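A minimal sketch (toy data is illustrative) that enables AUC on the training dataset via the hints=skip_train~false modifier and reads the per-iteration values back:

from catboost import CatBoostClassifier

X_train = [[1, 4], [2, 5], [3, 6], [4, 7]]
y_train = [0, 0, 1, 1]

model = CatBoostClassifier(iterations=3, verbose=False,
                           custom_metric=['AUC:hints=skip_train~false'])
model.fit(X_train, y_train, eval_set=([[2, 4], [3, 7]], [0, 1]))

# Keys are pool names ('learn', 'validation'); each maps metric names
# to lists of per-iteration values.
print(model.evals_result_)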
best_score_
Purpose
Return the best result for each metric calculated on each validation dataset.
Note
Only the values of calculated metrics are output. The following metrics are not calculated by default for the training dataset and are therefore not output:
- PFound
- YetiRank
- NDCG
- YetiRankPairwise
- AUC
- NormalizedGini
- FilteredDCG
- DCG
Use the hints=skip_train~false parameter to enable the calculation. See the Enable, disable and configure metrics calculation section for more details.
Type
dict
Output format:
{pool_name_1: {metric_1: value, ..., metric_N: value}, ..., pool_name_M: {metric_1: value, ..., metric_N: value}}
For example:
{'validation': {'Logloss': 0.6085537606941837, 'AUC': 0.0}}
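A minimal sketch (toy data is illustrative) reading the best metric values per evaluation dataset:

from catboost import CatBoostClassifier

model = CatBoostClassifier(iterations=10, verbose=False)
model.fit([[1, 4], [2, 5], [3, 6], [4, 7]], [0, 0, 1, 1],
          eval_set=([[2, 4], [3, 7]], [0, 1]))

# One entry per pool; each maps metric names to their best values.
print(model.best_score_)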
best_iteration_
Purpose
Return the identifier of the iteration with the best result of the evaluation metric or loss function on the last validation set.
Type
int or None if the validation dataset is not specified.
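A minimal sketch (toy data is illustrative); best_iteration_ is None when fit is called without an eval_set:

from catboost import CatBoostClassifier

model = CatBoostClassifier(iterations=50, use_best_model=True, verbose=False)
model.fit([[1, 4], [2, 5], [3, 6], [4, 7]], [0, 0, 1, 1],
          eval_set=([[2, 4], [3, 7]], [0, 1]))

print(model.best_iteration_)  # index of the iteration with the best metric value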
classes_
Purpose
Return the names of classes for classification models. An empty list is returned for all other models.
The order of classes in this list corresponds to the order of classes in resulting predictions.
Type
list (an empty list is returned for non-classification models)
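A minimal sketch (toy data is illustrative) showing that prediction columns follow the order of classes_:

from catboost import CatBoostClassifier

model = CatBoostClassifier(iterations=5, verbose=False)
model.fit([[1], [2], [3], [4]], ['cat', 'dog', 'cat', 'dog'])

print(model.classes_)              # e.g. ['cat', 'dog']
print(model.predict_proba([[2]]))  # probability columns in classes_ order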