1. LightGBM will add more trees if we update it through continued training (e.g. via the C API's `LGBM_BoosterUpdateOneIter`, or by passing `init_model` to `lgb.train` in Python). With `refit`, by contrast, we keep the existing tree structures and only update the leaf outputs based on the new data. This is faster than re-training from scratch, since we do not have to re-discover the optimal tree structures, but note that it will almost certainly perform worse (on the combined old and new data) than a full retrain from scratch on all of it.
2. Any online learning algorithm is designed to adapt to changes. That said, LightGBM's performance will depend on the training parameters we use and on how we validate our predictions (e.g. how strongly we choose to discount older data points). Even assuming we train our booster properly, without a relevant baseline (e.g. a ridge regression trained in an incremental manner) it does not make sense to say "LightGBM is good (or bad)" at dealing with concept drift.

1. Fit new trees to the residuals on the incremental data.
2. Refit the existing trees (refit).
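For the incremental baseline suggested in point 2, a ridge-style linear model trained batch by batch is easy to set up. A minimal sketch using scikit-learn's `SGDRegressor` with an L2 penalty (the data, true coefficients, and hyperparameters here are all hypothetical):

```python
# Sketch: an incrementally trained ridge-style baseline via partial_fit.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
true_coef = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # hypothetical ground truth

# L2 penalty makes this a stochastic ridge regression.
model = SGDRegressor(
    penalty="l2", alpha=1e-3,
    learning_rate="constant", eta0=0.01,
    random_state=0,
)

# Feed data in batches, as it would arrive in an online setting.
for _ in range(20):
    X_batch = rng.normal(size=(100, 5))
    y_batch = X_batch @ true_coef + rng.normal(scale=0.1, size=100)
    model.partial_fit(X_batch, y_batch)

print(model.coef_)  # should approach true_coef
```

Comparing LightGBM's rolling validation error against such a baseline is what turns "LightGBM handles drift well" from an opinion into a measurable claim.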

Last modification: April 21, 2022