CatBoost is a fast, scalable, high-performance open-source library for gradient boosting on decision trees


Best-in-class inference and a ton of speedups

January 31, 2018

CatBoost version 0.6 brings a lot of speedups and improvements. The most valuable of them is the release of the industry's fastest inference implementation.

Fast inference

CatBoost uses oblivious trees as base predictors. In an oblivious tree, every node at the same level applies the same split, so each leaf index can be encoded as a binary vector with length equal to the depth of the tree. This fact is widely used in the CatBoost model evaluator: we first binarize all used float features, statistics and one-hot encoded features, and then use these binary features to calculate model predictions. The binary vectors can be built in a data-parallel manner with SSE intrinsics, which results in a much faster applier than all existing ones, as shown in our comparison below.
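To make the encoding concrete, here is a minimal NumPy sketch of oblivious-tree evaluation (an illustration of the scheme only, not CatBoost's actual SSE-based applier; the function and variable names are ours):

```python
import numpy as np

# A toy oblivious tree: each level applies the same (feature, threshold)
# split to every sample, so the leaf index is just the per-level split
# bits packed into an integer.
def oblivious_tree_predict(X, splits, leaf_values):
    """X: (n_samples, n_features) array.
    splits: one (feature_index, threshold) pair per tree level.
    leaf_values: array of 2**depth leaf predictions."""
    leaf_index = np.zeros(X.shape[0], dtype=np.int64)
    for level, (feature, threshold) in enumerate(splits):
        # Binarize the feature for the whole batch at once -- this is the
        # data-parallel step that CatBoost vectorizes with SSE intrinsics.
        bit = (X[:, feature] > threshold).astype(np.int64)
        leaf_index |= bit << level
    return leaf_values[leaf_index]

# Toy usage: a depth-2 tree over 3 features.
X = np.array([[0.1, 5.0, -1.0],
              [0.9, 2.0,  3.0]])
splits = [(0, 0.5), (2, 0.0)]
leaf_values = np.array([1.0, 2.0, 3.0, 4.0])
print(oblivious_tree_predict(X, splits, leaf_values))  # [1. 4.]
```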

CatBoost applier vs LightGBM vs XGBoost

We used LightGBM, XGBoost and CatBoost models trained on the Epsilon dataset (400K samples, 2000 features) as described in our previous benchmarks. To make the results comparable, we limited the number of trees used for evaluation to 8000 for each model, so this comparison only gives a rough indication of how fast the models can be applied. For each algorithm we loaded the test dataset in Python, converted it to the algorithm's internal representation, and measured the wall-time of model predictions on an Intel Xeon E5-2660 CPU with 128 GB RAM. The results are presented in the table below.
 

            1 thread         32 threads
XGBoost     71 sec (x39)     4.5 sec (x31)
LightGBM    88 sec (x48)     17.1 sec (x118)
CatBoost    1.83 sec         0.145 sec


From this we can see that for ensembles of similar size, CatBoost can be applied about 35 times faster than XGBoost and about 83 times faster than LightGBM.
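For reference, a timing harness along these lines might look as follows (a minimal sketch, not our actual benchmark code; the file names are placeholders):

```python
import time
from catboost import CatBoost, Pool

# Load a trained model and convert the test set to CatBoost's internal
# Pool representation up front, so that only prediction itself is timed.
model = CatBoost()
model.load_model("epsilon_model.cbm")  # placeholder model file
pool = Pool("epsilon.test", column_description="epsilon.cd")

start = time.time()
predictions = model.predict(pool, thread_count=1)  # or thread_count=32
print("prediction wall-time: %.3f sec" % (time.time() - start))
```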

Speedups

The CatBoost team has put a lot of effort into speeding up different parts of the library. The current list:

  • 43% speedup for training on large datasets.
  • 15% speedup for QueryRMSE and calculation of querywise metrics.
  • Large speedups when using binary categorical features.
  • Significant speedup (x200 on a dataset with 5k trees and 50k samples) for plot and staged predict calculations in the cmdline tool.
  • Compilation time speedup.

Please note that we have added many synonyms for our parameter names, so it is now more convenient to try CatBoost if you are used to another library.
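For example, XGBoost- and scikit-learn-style parameter names now map directly onto their CatBoost counterparts (the alias pairs below are assumed from the current parameter reference; check the docs for the full list):

```python
from catboost import CatBoostClassifier

# These two configurations are equivalent; the second one uses the
# synonyms familiar from XGBoost/scikit-learn.
native = CatBoostClassifier(iterations=500, learning_rate=0.05, depth=6)
aliased = CatBoostClassifier(n_estimators=500, eta=0.05, max_depth=6)
```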

Other improvements, bug fixes, and builds can be found in the release on GitHub.

Feel free to open an issue or contribute to the project.

Latest News

0.10.x and 0.9.x releases review

The CatBoost team continues to deliver a lot of improvements and speedups. What new and interesting things have we added in our two latest releases, and why is it worth trying CatBoost now? We discuss it in this post.

New ways to explore your data

A superb new tool for exploring feature importance, a new algorithm for finding the most influential training samples, the ability to save your model as C++ or Python code, and more. Check out the CatBoost v0.8 details inside!

CatBoost on GPU talk at GTC 2018

Come and listen to our talk about the fastest GPU implementation of gradient boosting at GTC 2018 Silicon Valley! GTC takes place on March 26-29 and is an excellent opportunity to learn more about CatBoost performance on GPU.
