
# QueryCrossEntropy

Let's assume that it is required to solve a classification problem on a dataset with grouped objects. For example, it may be required to predict user clicks on a search engine results page.

Generally, this task can be solved by optimizing the Logloss function:
$Logloss = \displaystyle\frac{1}{\sum\limits_{i = 1}^{N} w_{i}} \sum_{group} \left( \sum_{obj\_in\_group} w_{i} \left(t_{i} \cdot log(p_{i}) + (1 - t_{i}) \cdot log(1 - p_{i}) \right) \right)$

• $t_{i}$ is the label value for the i-th object (from the input data for training). Possible values are in the range $[0;1]$.
• $a_{i}$ is the raw formula prediction (logit) for the i-th object.
• $p_{i}$ is the predicted probability that the object belongs to the positive class: $p_i = \sigma(a_{i})$ (refer to the Logistic function, odds, odds ratio, and logit section of the Logistic regression article on Wikipedia for details).
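As a reading aid, the weighted Logloss above can be sketched in plain Python. The function and argument names are illustrative, and the value is negated here so that lower is better, following the usual negative-log-likelihood convention:

```python
import math

def logloss(labels, raw_preds, weights):
    """Weighted Logloss from the formula above, negated so that
    lower values are better (negative log-likelihood convention)."""
    total = 0.0
    for t, a, w in zip(labels, raw_preds, weights):
        p = 1.0 / (1.0 + math.exp(-a))  # p_i = sigma(a_i)
        total += w * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return -total / sum(weights)
```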

Since the internal structure of the data is known, it can be assumed that the predictions in different groups differ. This can be modeled by adding a per-group term $group\_shift$ to each formula prediction in the group:
$\bar p_{i} = \sigma(a_{i} + group\_shift)$
The $group\_shift$ value is jointly optimized for each group during the training.

In this case, the Logloss formula for grouped objects takes the following form:
$Logloss_{group} = \displaystyle\frac{1}{\sum\limits_{i = 1}^{N} w_{i}} \sum_{group} \left( \sum_{obj\_in\_group} w_{i} \left( t_{i} \cdot log(\bar p_{i}) + (1 - t_{i}) \cdot log(1 - \bar p_{i}) \right) \right)$
The QueryCrossEntropy metric is calculated as follows:
$QueryCrossEntropy(\alpha) = (1 - \alpha) \cdot Logloss + \alpha \cdot Logloss_{group}$
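Putting the pieces together, the metric can be sketched as follows. This is a minimal illustration, not CatBoost's implementation: the names (`query_cross_entropy`, `_best_shift`) are made up, the per-group shift is found by a simple ternary search over a bounded interval (valid because the log-likelihood is concave in the shift), and the result is negated so that lower is better:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def _weighted_ll(labels, raw, weights, shift=0.0):
    # Sum of w_i * (t_i*log(p_i) + (1-t_i)*log(1-p_i)), p_i = sigma(a_i + shift).
    s = 0.0
    for t, a, w in zip(labels, raw, weights):
        p = _sigmoid(a + shift)
        s += w * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return s

def _best_shift(labels, raw, weights, lo=-10.0, hi=10.0, iters=100):
    # Ternary search for the group_shift that maximizes the group's
    # log-likelihood (concave in the shift, so a 1-D search suffices).
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if _weighted_ll(labels, raw, weights, m1) < _weighted_ll(labels, raw, weights, m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

def query_cross_entropy(groups, alpha):
    """groups: list of (labels, raw_preds, weights) tuples, one per group."""
    w_total = sum(sum(weights) for _, _, weights in groups)
    ll_plain = sum(_weighted_ll(labels, raw, weights)
                   for labels, raw, weights in groups)
    ll_shifted = sum(_weighted_ll(labels, raw, weights,
                                  _best_shift(labels, raw, weights))
                     for labels, raw, weights in groups)
    # Negated so that lower values are better.
    return -((1 - alpha) * ll_plain + alpha * ll_shifted) / w_total
```

With `alpha = 0` the metric reduces to the plain Logloss; with `alpha = 1` only the group-shifted version contributes.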

## User-defined parameters

Parameter: alpha

#### Description

The mixing coefficient. Defines the proportions for combining the

$Logloss = \displaystyle\frac{1}{\sum\limits_{i = 1}^{N} w_{i}} \sum_{group} \left( \sum_{obj\_in\_group} w_{i} \left(t_{i} \cdot log(p_{i}) + (1 - t_{i}) \cdot log(1 - p_{i}) \right) \right)$

and

$Logloss_{group} = \displaystyle\frac{1}{\sum\limits_{i = 1}^{N} w_{i}} \sum_{group} \left( \sum_{obj\_in\_group} w_{i} \left( t_{i} \cdot log(\bar p_{i}) + (1 - t_{i}) \cdot log(1 - \bar p_{i}) \right) \right)$

versions of the Logloss function.