CatBoost is a fast, scalable, high-performance open-source library for gradient boosting on decision trees

Get started

CatBoost on GPU talk at GTC 2018

March 26, 2018

Vasily Ershov, CatBoost lead developer, will speak at GTC 2018 about the fastest gradient boosting implementation on GPU.

He'll give a brief overview of problems that can be solved with CatBoost, discuss challenges and key optimizations in the most computationally significant blocks, and describe how to efficiently build histograms in shared memory to construct decision trees while avoiding atomic operations during this step. He'll also present benchmarks showing that our GPU implementation is five to 40 times faster than the CPU one, and finally compare its performance against the GPU implementations of gradient boosting in other open-source libraries.
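To make the histogram idea concrete, here is a minimal CPU sketch of histogram-based split finding. This is an illustration only, not CatBoost's actual implementation: the real GPU kernels accumulate these per-bin statistics in shared memory across many threads, which is where the atomic-operation concern discussed in the talk arises. All function names and the gain formula below are illustrative assumptions.

```python
# Sketch: histogram-based split search on binned features (illustrative only).
# Each sample's feature value is pre-discretized into a bin index; we
# accumulate gradient sums per bin, then scan bins to find the best split.

def build_histogram(bins, grads, n_bins):
    """Accumulate gradient sum and sample count for each feature bin."""
    hist = [[0.0, 0] for _ in range(n_bins)]
    for b, g in zip(bins, grads):
        hist[b][0] += g   # gradient sum for this bin
        hist[b][1] += 1   # sample count for this bin
    return hist

def best_split(hist):
    """Scan bins left to right, scoring each split by a squared-sum gain."""
    total_g = sum(h[0] for h in hist)
    total_n = sum(h[1] for h in hist)
    best_gain, best_bin = 0.0, None
    left_g, left_n = 0.0, 0
    for i, (g, n) in enumerate(hist[:-1]):   # split after bin i
        left_g += g
        left_n += n
        right_g, right_n = total_g - left_g, total_n - left_n
        if left_n == 0 or right_n == 0:
            continue
        gain = (left_g ** 2 / left_n + right_g ** 2 / right_n
                - total_g ** 2 / total_n)
        if gain > best_gain:
            best_gain, best_bin = gain, i
    return best_bin, best_gain

# Tiny example: negative gradients land in bin 0, positive in bin 1,
# so the best split separates the two bins.
hist = build_histogram([0, 0, 1, 1], [-1.0, -1.0, 1.0, 1.0], 2)
split_bin, gain = best_split(hist)
```

On a GPU, each thread block would build such a histogram for a subset of samples in fast shared memory before merging the partial results, so the per-bin additions above become the hot spot that the talk's optimizations target.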

The picture compares GPU learning speed between CatBoost, XGBoost, and LightGBM on the Epsilon dataset. Note that the XGBoost GPU implementation doesn't support V100 cards.

The talk is scheduled for March 27, 1:00 PM, Room 231. Don't miss the chance to hear about a best-in-class GPU implementation of gradient boosting and ask questions.

Latest News

0.10.x and 0.9.x releases review

The CatBoost team continues to deliver many improvements and speedups. What new and interesting features have we added in our two latest releases, and why is it worth trying CatBoost now? We discuss it in this post.

New ways to explore your data

A superb new tool for exploring feature importance, a new algorithm for finding the most influential training samples, the ability to export your model as C++ or Python code, and more. Check out the CatBoost v0.8 details inside!

Best in class inference and a ton of speedups

The new version of CatBoost has the industry's fastest inference implementation. It's 35 times faster than open-source alternatives and completely production-ready. The 0.6 release also contains a lot of other speedups and improvements. Find out more inside.