# Ranking: objectives and metrics

## Pairwise metrics

Pairwise metrics use specially labeled information: pairs of dataset objects where one object is considered the “winner” and the other the “loser”. This information does not have to be exhaustive (not all possible pairs of objects need to be labeled this way). A weight can also be specified for each pair.

If GroupId is specified and the dataset is used in pairwise modes, both members of each pair must belong to the same group.

If labeled pairs are not specified for the dataset, pairs are generated automatically within each group from the per-object label values (labels must be specified and must be numerical). The object with the greater label value in a pair is considered the “winner”.
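
For example, explicitly labeled pairs can be passed to the Python package through the `pairs` and `pairs_weight` arguments of `Pool`. The following is a minimal sketch; the feature values, group ids, and pair indices are invented for illustration:

```python
from catboost import Pool

# Toy data: four objects in two groups. Objects belonging to the same
# group must be stored contiguously.
X = [[0.1, 3.0], [0.2, 2.0], [0.9, 1.0], [0.4, 5.0]]
group_id = [0, 0, 1, 1]

# Each pair is [winner_index, loser_index]; both indices must refer
# to objects from the same group.
pairs = [[0, 1], [3, 2]]
pairs_weight = [1.0, 0.5]  # optional per-pair weights

train_pool = Pool(
    data=X,
    group_id=group_id,
    pairs=pairs,
    pairs_weight=pairs_weight,
)
```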

Name | Used for optimization | User-defined parameters | Formula and/or description |
---|---|---|---|
PairLogit | + | `use_weights` (*Default:* true)<br/>`max_pairs` (*Default:* all possible pairs are generated in each group) | Calculation principles. Note. The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead. |
PairLogitPairwise | + | `use_weights` (*Default:* true)<br/>`max_pairs` (*Default:* all possible pairs are generated in each group) | Calculation principles. This metric may give more accurate results on large datasets than PairLogit, but it is significantly slower to calculate. The technique is described in the paper *Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank*. The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead. |
PairAccuracy | – | | Calculation principles. Note. The object weights are not used to calculate the value of this metric. The weights of object pairs are used instead. |
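
To train with PairLogit on automatically generated pairs, it is enough to provide numerical labels and group ids. A minimal sketch, assuming the Python package (the data and parameter values are invented for illustration; `max_pairs` is passed inline in the loss-function string):

```python
from catboost import CatBoost, Pool

# Numerical labels plus group ids: pairs are generated automatically
# within each group, the object with the greater label being the winner.
X = [[0.1, 3.0], [0.2, 2.0], [0.9, 1.0], [0.4, 5.0]]
y = [2, 0, 3, 1]
group_id = [0, 0, 1, 1]

train_pool = Pool(data=X, label=y, group_id=group_id)

model = CatBoost({
    # Cap the number of automatically generated pairs per group.
    'loss_function': 'PairLogit:max_pairs=100000',
    'iterations': 50,
    'verbose': False,
})
model.fit(train_pool)
```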


## Groupwise metrics

Name | Used for optimization | User-defined parameters | Formula and/or description |
---|---|---|---|
YetiRank * | + | `decay` (*Default:* 0.99)<br/>`permutations` (*Default:* 10)<br/>`use_weights` (*Default:* true) | An approximation of ranking metrics such as NDCG and PFound, which allows ranking metrics to be used for optimization. The value of this metric itself cannot be calculated. The metric that is written to the output data when YetiRank is optimized depends on the range of the target values of the dataset: if every target value lies in the range [0; 1], PFound is written; otherwise, NDCG. This metric gives less accurate results on big datasets than YetiRankPairwise, but it is significantly faster. Note. The object weights are not used to optimize this metric; the group weights are used instead. This objective optimizes PairLogit over automatically generated object pairs, which are generated independently for each object group. Use the Group weights file or the GroupWeight column of the Columns description file to change the group importance; in this case, the weight of each generated pair is multiplied by the value of the corresponding group weight. |
YetiRankPairwise * | + | `decay` (*Default:* 0.99)<br/>`permutations` (*Default:* 10)<br/>`use_weights` (*Default:* true) | An approximation of ranking metrics such as NDCG and PFound, which allows ranking metrics to be used for optimization. The value of this metric itself cannot be calculated. The metric that is written to the output data when YetiRankPairwise is optimized depends on the range of the target values of the dataset: if every target value lies in the range [0; 1], PFound is written; otherwise, NDCG. This metric gives more accurate results on big datasets than YetiRank, but it is significantly slower. The technique is described in the paper *Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank*. Note. The object weights are not used to optimize this metric; the group weights are used instead. This objective optimizes PairLogit over automatically generated object pairs, which are generated independently for each object group. Use the Group weights file or the GroupWeight column of the Columns description file to change the group importance; in this case, the weight of each generated pair is multiplied by the value of the corresponding group weight. |
QueryCrossEntropy | + | | Calculation principles. See the QueryCrossEntropy section for more details. |
QueryRMSE | + | | Calculation principles. |
QuerySoftMax | + | | Calculation principles. |
PFound * | – | `decay` (*Default:* 0.85)<br/>`top` (*Default:* –1, all label values are used)<br/>`use_weights` (*Default:* true) | Calculation principles. See the PFound section for more details. |
NDCG * | – | `top` (*Default:* –1, all label values are used)<br/>`use_weights` (*Default:* true)<br/>`type` (*Default:* Base)<br/>`denominator` (*Default:* LogPosition) | Calculation principles. See the NDCG section for more details. |
DCG * | – | `top` (*Default:* –1, all label values are used)<br/>`use_weights` (*Default:* true)<br/>`type` (*Default:* Base)<br/>`denominator` (*Default:* LogPosition) | Calculation principles. See the NDCG section for more details. |
FilteredDCG * | – | `type` (*Default:* Base)<br/>`denominator` (*Default:* Position) | Calculation principles. See the FilteredDCG section for more details. |
AverageGain | – | `top` (obligatory, no default value is defined)<br/>`use_weights` (*Default:* true) | Represents the average label value over the top objects of each group, with objects ordered by predicted relevance. See the AverageGain section for more details. |
PrecisionAt | – | `top` (*Default:* –1) | The objects in each group are sorted in descending order of predicted relevance; the metric is the proportion of relevant objects among the top objects. Calculation principles. |
RecallAt | – | `top` (*Default:* –1) | The objects in each group are sorted in descending order of predicted relevance; the metric is the proportion of all relevant objects in the group that appear among the top objects. Calculation principles. |
MAP | – | `top` (*Default:* –1) | The objects in each group are sorted in descending order of predicted relevance; AveragePrecision is calculated individually for each *j*-th of the N groups and the result is averaged over groups. Calculation principles. |

\* The calculation of these metrics is disabled by default for the training dataset to speed up training. Use the `hints=skip_train~false` parameter to enable it.
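
Because the value of YetiRank itself cannot be calculated, a typical setup optimizes YetiRank while monitoring a computable groupwise metric such as NDCG. A minimal sketch, assuming the Python package (the data, iteration count, and NDCG `top` value are invented for illustration; `decay` and `permutations` spell out the defaults from the table above):

```python
from catboost import CatBoost, Pool

X = [[0.1, 3.0], [0.2, 2.0], [0.9, 1.0], [0.4, 5.0]]
y = [0.0, 1.0, 0.5, 0.0]  # graded relevance labels
group_id = [0, 0, 1, 1]

train_pool = Pool(data=X, label=y, group_id=group_id)

model = CatBoost({
    # Defaults from the table above, written out explicitly.
    'loss_function': 'YetiRank:permutations=10;decay=0.99',
    # YetiRank's own value cannot be calculated, so track NDCG instead.
    'eval_metric': 'NDCG:top=10',
    'iterations': 50,
    'verbose': False,
})
# In practice, pass a held-out validation pool as eval_set.
model.fit(train_pool, eval_set=train_pool)
```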

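The computable metrics above can also be evaluated offline from labels and raw model scores with `catboost.utils.eval_metric`. A minimal sketch (the labels, scores, and selected metric strings are invented for illustration):

```python
from catboost.utils import eval_metric

labels = [0.0, 1.0, 0.5, 0.0]
scores = [0.3, 0.9, 0.4, 0.1]  # raw model predictions
group_id = [0, 0, 1, 1]

for metric in ('NDCG:top=10', 'PFound', 'MAP', 'PrecisionAt:top=2'):
    # eval_metric returns a list of values; for plain arrays it
    # contains a single element.
    value = eval_metric(labels, scores, metric, group_id=group_id)[0]
    print(metric, '=', value)
```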