Ranking

Each metric entry below lists the metric's name, whether it can be used for optimization (marked with +), its user-defined parameters, and its formula and/or description.

Pairwise metrics

If pairs are not given explicitly for the PairLogit or PairLogitPairwise loss functions, all possible pairs within each group are generated without repetition, based on the label values.

Attention.

When pairs are given explicitly, the object label values from the input dataset are not taken into account when determining the winner of a pair: regardless of the label values, the first object in each pair is the winner.

PairLogit +
Calculation principles
Note.

The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead. Use the corresponding column of the Pair description file to change the importance of a certain pair.
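
A minimal sketch of supplying explicit pairs and per-pair weights through the Python package, assuming the pairs and pairs_weight arguments of catboost.Pool (the pair weights play the role of the weight column of the Pair description file):

    import numpy as np
    from catboost import CatBoost, Pool

    X = np.random.rand(6, 3)        # 6 objects, 3 features
    group_id = [0, 0, 0, 1, 1, 1]   # two groups of three objects

    # Each pair is (winner_index, loser_index); both objects of a pair
    # must belong to the same group, and the first object is always
    # treated as the winner regardless of any label values.
    pairs = [(0, 1), (0, 2), (3, 4), (5, 4)]
    pairs_weight = [1.0, 0.5, 2.0, 1.0]   # importance of each pair

    train = Pool(X, group_id=group_id, pairs=pairs, pairs_weight=pairs_weight)
    model = CatBoost({'loss_function': 'PairLogit', 'iterations': 10, 'verbose': False})
    model.fit(train)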

PairLogitPairwise +
Calculation principles

This metric may give more accurate results on large datasets than PairLogit, but it is significantly slower to calculate.

This technique is described in the Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank paper.

Note.

The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead. Use the corresponding column of the Pair description file to change the importance of a certain pair.

PairAccuracy

use_weights

Default: true

Calculation principles
Note.

The object weights are not used to calculate the value of this metric. The weights of object pairs are used instead. Use the corresponding column of the Pair description file to change the importance of a certain pair.
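
The following is an illustrative reading of the definition (not the library's implementation): PairAccuracy is the weighted share of pairs whose winner receives a higher predicted relevance than its loser.

    def pair_accuracy(approx, pairs, pair_weights=None):
        # approx: predicted relevancies; pairs: (winner, loser) index pairs
        if pair_weights is None:
            pair_weights = [1.0] * len(pairs)
        correct = sum(w for (winner, loser), w in zip(pairs, pair_weights)
                      if approx[winner] > approx[loser])
        return correct / sum(pair_weights)

    print(pair_accuracy([0.9, 0.3, 0.7], [(0, 1), (2, 1), (1, 2)]))  # 2/3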

Groupwise metrics
YetiRank * +

An approximation of ranking metrics (such as NDCG and PFound) that allows ranking metrics to be used for optimization.

The value of this metric cannot be calculated directly. By default, the value of the PFound metric is written to the output data when YetiRank is optimized.

This metric gives less accurate results on large datasets than YetiRankPairwise, but it is significantly faster.
Note.

The object weights are not used to optimize this metric. The group weights are used instead.

This objective internally optimizes PairLogit on automatically generated object pairs. These pairs are generated independently within each object group. Use the Group weights file or the GroupWeight column of the Column descriptions file to change the group importance; in this case, the weight of each generated pair is multiplied by the weight of the corresponding group.
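
A minimal sketch of setting group weights directly on a Pool in the Python package (assuming the group_id and group_weight arguments) instead of using a Group weights file:

    import numpy as np
    from catboost import CatBoost, Pool

    X = np.random.rand(6, 3)
    y = [2, 1, 0, 1, 0, 0]                         # graded relevance labels
    group_id = [0, 0, 0, 1, 1, 1]
    group_weight = [2.0, 2.0, 2.0, 1.0, 1.0, 1.0]  # one value per object,
                                                   # equal within each group

    train = Pool(X, label=y, group_id=group_id, group_weight=group_weight)
    model = CatBoost({'loss_function': 'YetiRank', 'iterations': 10, 'verbose': False})
    model.fit(train)   # PFound is reported by default when optimizing YetiRank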

YetiRankPairwise * +

An approximation of ranking metrics (such as NDCG and PFound) that allows ranking metrics to be used for optimization.

The value of this metric cannot be calculated directly. By default, the value of the PFound metric is written to the output data when YetiRankPairwise is optimized.

This metric gives more accurate results on large datasets than YetiRank, but it is significantly slower.

This technique is described in the Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank paper.
Note.

The object weights are not used to optimize this metric. The group weights are used instead.

This objective internally optimizes PairLogit on automatically generated object pairs. These pairs are generated independently within each object group. Use the Group weights file or the GroupWeight column of the Column descriptions file to change the group importance; in this case, the weight of each generated pair is multiplied by the weight of the corresponding group.

QueryCrossEntropy +

alpha

Default: 0.95

Calculation principles

See the QueryCrossEntropy section for more details.
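
A minimal sketch of overriding alpha through the parameterized loss string in the Python package (assuming the 'Loss:param=value' syntax; to my knowledge this objective is only supported for GPU training):

    from catboost import CatBoost

    model = CatBoost({
        'loss_function': 'QueryCrossEntropy:alpha=0.9',  # default alpha is 0.95
        'task_type': 'GPU',
        'iterations': 10,
    })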

QueryRMSE +

use_weights

Default: true

Calculation principles
QuerySoftMax +

use_weights

Default: true

Calculation principles
PFound *
Calculation principles

See the PFound section for more details.

NDCG *
  • top

    Default: –1 (all label values are used)

  • use_weights

    Default: true

  • type

    Default: Base

Calculation principles

See the NDCG section for more details.
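
A minimal sketch of setting the NDCG parameters through the metric string in the Python package (assuming ';' as the separator between metric parameters):

    from catboost import CatBoost

    model = CatBoost({
        'loss_function': 'YetiRank',
        'eval_metric': 'NDCG:top=10;type=Exp',  # top 10 objects, Exp gain type
        'iterations': 10,
    })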

AverageGain
  • top

    Default: the parameter is obligatory (no default value is defined)

  • use_weights

    Default: true

Represents the average of the label values over the top objects with the highest predicted relevancies.

See the AverageGain section for more details.
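
An illustrative sketch of this definition (not the library code): the mean label value over the top objects with the highest predicted relevancies in a group.

    def average_gain(approx, target, top):
        order = sorted(range(len(approx)), key=lambda i: approx[i], reverse=True)
        return sum(target[i] for i in order[:top]) / top

    print(average_gain([0.9, 0.1, 0.5, 0.7], [3, 0, 1, 2], top=2))  # (3 + 2) / 2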

PrecisionAt
  • top

    Default: –1 (all label values are used)

  • border

    Default: 0.5

Calculation principles

The calculation of this function consists of the following steps:

  1. Within each group, the objects are sorted in descending order of predicted relevancies ($a_{i_1} \geq a_{i_2} \geq \ldots$).

  2. The metric is calculated as follows:

     $PrecisionAt(top) = \frac{|\{ k \leq top : t_{i_k} > border \}|}{top}$

     Here $t_i$ is the label of the $i$-th object; objects with label values greater than border are treated as relevant.
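
An illustrative sketch of this calculation for a single group (tie handling in the library may differ):

    def precision_at(approx, target, top, border=0.5):
        order = sorted(range(len(approx)), key=lambda i: approx[i], reverse=True)
        hits = sum(1 for i in order[:top] if target[i] > border)  # relevant in top
        return hits / top

    print(precision_at([0.9, 0.2, 0.8, 0.4], [1, 0, 0, 1], top=2))  # 1/2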

RecallAt
  • top

    Default: –1 (all label values are used)

  • border

    Default: 0.5

Calculation principles

The calculation of this function consists of the following steps:

  1. Within each group, the objects are sorted in descending order of predicted relevancies ($a_{i_1} \geq a_{i_2} \geq \ldots$).

  2. The metric is calculated as follows:

     $RecallAt(top) = \frac{|\{ k \leq top : t_{i_k} > border \}|}{|\{ i : t_i > border \}|}$

     The denominator is the total number of relevant objects (label values greater than border) in the group.
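
An illustrative sketch of this calculation for a single group:

    def recall_at(approx, target, top, border=0.5):
        order = sorted(range(len(approx)), key=lambda i: approx[i], reverse=True)
        relevant = sum(1 for t in target if t > border)           # all relevant
        hits = sum(1 for i in order[:top] if target[i] > border)  # relevant in top
        return hits / relevant if relevant else 0.0

    print(recall_at([0.9, 0.2, 0.8, 0.4], [1, 0, 0, 1], top=2))  # 1/2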

MAP
  • top

    Default: –1 (all label values are used)

  • border

    Default: 0.5

Calculation principles
  1. Within each group, the objects are sorted in descending order of predicted relevancies ($a_{i_1} \geq a_{i_2} \geq \ldots$).

  2. The metric is calculated as follows:

     $MAP = \frac{1}{N} \sum_{j=1}^{N} AP_j$

     • $N$ is the number of groups.
     • The average precision $AP_j$ is calculated individually for each $j$-th group, with objects whose label values exceed border counted as relevant.
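
An illustrative sketch of this calculation using the standard average-precision definition, with label > border as the relevance criterion (the library's top parameter would additionally truncate each ranked list):

    def average_precision(approx, target, border=0.5):
        order = sorted(range(len(approx)), key=lambda i: approx[i], reverse=True)
        hits, precision_sum = 0, 0.0
        for rank, i in enumerate(order, start=1):
            if target[i] > border:
                hits += 1
                precision_sum += hits / rank  # Precision@rank at each hit
        return precision_sum / hits if hits else 0.0

    def mean_average_precision(groups):
        # groups: list of (approx, target) pairs, one per group
        return sum(average_precision(a, t) for a, t in groups) / len(groups)

    print(mean_average_precision([
        ([0.9, 0.2, 0.8], [1, 0, 1]),  # AP = (1/1 + 2/2) / 2 = 1.0
        ([0.1, 0.7, 0.3], [1, 0, 0]),  # AP = (1/3) / 1
    ]))  # ≈ 0.67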
