regressor
sklearn-compatible regressor for ngboost-lightning.
LightningBoostRegressor
LightningBoostRegressor(
    dist: type[Distribution] = Normal,
    n_estimators: int = 500,
    learning_rate: float = 0.01,
    minibatch_frac: float = 1.0,
    col_sample: float = 1.0,
    natural_gradient: bool = True,
    tol: float = 0.0001,
    random_state: int | None = None,
    verbose: bool = True,
    verbose_eval: int = 100,
    num_leaves: int = 31,
    max_depth: int = -1,
    min_child_samples: int = 20,
    subsample: float = 1.0,
    colsample_bytree: float = 1.0,
    reg_alpha: float = 0.0,
    reg_lambda: float = 0.0,
    lgbm_params: dict[str, Any] | None = None,
    scoring_rule: ScoringRule | None = None,
    validation_fraction: float | None = None,
)
Bases: BaseEstimator, RegressorMixin
Natural gradient boosting regressor powered by LightGBM.
Outputs full probability distributions (not just point predictions) by boosting the parameters of a conditional distribution using the natural gradient of the log-likelihood.
Internally trains K independent LightGBM boosters (one per distribution parameter), faithfully replicating the NGBoost algorithm with LightGBM's histogram-based splitting for speed.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `dist` | `type[Distribution]` | Distribution class to use. Must be a subclass of `Distribution`. |
| `n_estimators` | `int` | Number of boosting iterations. |
| `learning_rate` | `float` | Outer learning rate applied to each boosting step. |
| `minibatch_frac` | `float` | Fraction of training rows to subsample each iteration for gradient computation (NGBoost-style minibatch). 1.0 means no subsampling. Distinct from `subsample`, which LightGBM applies per tree. |
| `col_sample` | `float` | Fraction of columns to subsample each boosting iteration. 1.0 means no column subsampling. All K parameter-boosters see the same feature subset each iteration. |
| `natural_gradient` | `bool` | Whether to use the natural gradient (True) or the ordinary gradient (False). |
| `tol` | `float` | Convergence tolerance. Training stops when the mean gradient norm falls below this value. |
| `random_state` | `int \| None` | Seed for reproducibility (minibatch sampling). |
| `verbose` | `bool` | Whether to log training progress. |
| `verbose_eval` | `int` | Log progress every `verbose_eval` iterations. |
| `num_leaves` | `int` | Maximum number of leaves per tree. Primary complexity control for LightGBM. |
| `max_depth` | `int` | Maximum tree depth. -1 means no limit. |
| `min_child_samples` | `int` | Minimum number of samples in a leaf. |
| `subsample` | `float` | LightGBM-level row subsampling ratio per tree. Distinct from `minibatch_frac`, which is applied at the NGBoost level. |
| `colsample_bytree` | `float` | Column subsampling ratio per tree. |
| `reg_alpha` | `float` | L1 regularization on leaf weights. |
| `reg_lambda` | `float` | L2 regularization on leaf weights. |
| `lgbm_params` | `dict[str, Any] \| None` | Additional parameters passed to each LightGBM Booster. Use this for less common LightGBM options not surfaced as constructor kwargs. |
| `scoring_rule` | `ScoringRule \| None` | The scoring rule used for training. Defaults to the log score when None. |
| `validation_fraction` | `float \| None` | Fraction of training data to hold out as validation for early stopping. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `engine_` | The fitted underlying boosting engine. |
| `n_features_in_` | Number of features seen during `fit`. |
| `n_estimators_` | Actual number of boosting iterations (may be less than `n_estimators` due to early stopping or convergence). |
| `init_params_` | Initial distribution parameters from the marginal fit to the training targets. |
| `scalings_` | Line search scale factor per iteration. |
| `train_loss_` | Training NLL per iteration. |
| `val_loss_` | Validation NLL per iteration (only if validation data was provided). |
| `best_val_loss_itr_` | Iteration with best validation loss (only if validation data was provided). |
Examples:
>>> from ngboost_lightning import LightningBoostRegressor
>>> reg = LightningBoostRegressor(n_estimators=100, learning_rate=0.05)
>>> reg.fit(X_train, y_train)
>>> preds = reg.predict(X_test)
>>> dist = reg.pred_dist(X_test) # full distribution
>>> dist.scale # predicted uncertainty
Initialize the regressor. See class docstring for parameters.
Source code in ngboost_lightning/regressor.py
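The snippet below extends the quick-start example with a hedged sketch of 90% prediction intervals. It assumes the default Normal distribution, a `ppf` method on the returned distribution object (listed under `pred_dist` below), and purely illustrative synthetic data; the `max_bin` entry in `lgbm_params` is just one example of a standard LightGBM option.

```python
import numpy as np
from ngboost_lightning import LightningBoostRegressor

# Illustrative synthetic data: one informative feature plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))
y = X[:, 0] + 0.5 * rng.normal(size=1_000)
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

reg = LightningBoostRegressor(
    n_estimators=200,
    learning_rate=0.05,
    lgbm_params={"max_bin": 255},  # pass-through for LightGBM options not surfaced as kwargs
)
reg.fit(X_train, y_train)

# Full predictive distribution -> central 90% interval per test row.
dist = reg.pred_dist(X_test)
lower, upper = dist.ppf(0.05), dist.ppf(0.95)
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage of the 90% interval: {coverage:.2f}")
```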
feature_importances_ (property)
Feature importances per distribution parameter.
| RETURNS | DESCRIPTION |
|---|---|
| `NDArray[floating]` | Importance array, shape (n_params, n_features). Each row sums to 1.0 and corresponds to one distribution parameter (e.g. row 0 = mean, row 1 = log_scale for Normal). |
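A minimal sketch of reading these importances, assuming the default Normal distribution (row 0 = mean, row 1 = log-scale) and a fitted estimator `reg` such as the one in the example above.

```python
import numpy as np

importances = reg.feature_importances_   # shape (n_params, n_features), rows sum to 1.0
mean_row, scale_row = importances[0], importances[1]

# Features that drive the predicted uncertainty rather than the mean.
top_uncertainty_features = np.argsort(scale_row)[::-1][:3]
print("Most influential for uncertainty:", top_uncertainty_features)
```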
fit
fit(
    X: NDArray[floating],
    y: NDArray[floating],
    X_val: NDArray[floating] | None = None,
    y_val: NDArray[floating] | None = None,
    early_stopping_rounds: int | None = None,
    sample_weight: NDArray[floating] | None = None,
    val_sample_weight: NDArray[floating] | None = None,
    train_loss_monitor: Callable[[Distribution, NDArray[floating]], float] | None = None,
    val_loss_monitor: Callable[[Distribution, NDArray[floating]], float] | None = None,
) -> Self
Fit the natural gradient boosting model.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `X` | `NDArray[floating]` | Training features, shape (n_samples, n_features). |
| `y` | `NDArray[floating]` | Training targets, shape (n_samples,). |
| `X_val` | `NDArray[floating] \| None` | Validation features for early stopping. |
| `y_val` | `NDArray[floating] \| None` | Validation targets for early stopping. |
| `early_stopping_rounds` | `int \| None` | Stop if validation loss hasn't improved for this many consecutive iterations. |
| `sample_weight` | `NDArray[floating] \| None` | Per-sample training weights, shape (n_samples,). |
| `val_sample_weight` | `NDArray[floating] \| None` | Per-sample validation weights, shape (n_val_samples,). |
| `train_loss_monitor` | `Callable[[Distribution, NDArray[floating]], float] \| None` | Custom callable for computing training loss. Called with the current predicted distribution and the training targets; returns a float. |
| `val_loss_monitor` | `Callable[[Distribution, NDArray[floating]], float] \| None` | Custom callable for computing validation loss. Called with the current predicted distribution and the validation targets; returns a float. |
| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The fitted estimator. |
| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If a LightGBM parameter appears in both a surfaced constructor kwarg and `lgbm_params`. |
Source code in ngboost_lightning/regressor.py
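A hedged sketch of early stopping against an explicit validation split; `X` and `y` are assumed to be NumPy arrays, and the split sizes are illustrative.

```python
from sklearn.model_selection import train_test_split
from ngboost_lightning import LightningBoostRegressor

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

reg = LightningBoostRegressor(n_estimators=1_000, learning_rate=0.05, verbose=False)
reg.fit(
    X_tr,
    y_tr,
    X_val=X_val,
    y_val=y_val,
    early_stopping_rounds=50,  # stop after 50 iterations without validation improvement
)

print("iterations actually run:", reg.n_estimators_)
print("best validation iteration:", reg.best_val_loss_itr_)
```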
predict
Point prediction (conditional mean).
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `X` | `NDArray[floating]` | Features, shape (n_samples, n_features). |

| RETURNS | DESCRIPTION |
|---|---|
| `NDArray[floating]` | Predictions, shape (n_samples,). |
Source code in ngboost_lightning/regressor.py
pred_dist
pred_dist(X: NDArray[floating]) -> Distribution
Predict the full conditional distribution.
This is the primary probabilistic output. The returned distribution
object provides mean, scale, cdf, ppf, sample,
and other methods.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `X` | `NDArray[floating]` | Features, shape (n_samples, n_features). |

| RETURNS | DESCRIPTION |
|---|---|
| `Distribution` | A Distribution instance for all samples. |
Source code in ngboost_lightning/regressor.py
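A short sketch of what the returned distribution object enables beyond point predictions. It assumes a fitted `reg`, a test matrix `X_test`, and the `scale`, `cdf`, and `sample` members listed above; the threshold of 100.0 is arbitrary.

```python
dist = reg.pred_dist(X_test)

uncertainty = dist.scale            # per-row predictive scale, as in the class example
p_exceed = 1.0 - dist.cdf(100.0)    # P(y > 100) for each test row
draws = dist.sample(500)            # Monte Carlo draws for downstream simulation
```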
staged_predict
Yield point predictions after each boosting iteration.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `X` | `NDArray[floating]` | Features, shape (n_samples, n_features). |

| YIELDS | DESCRIPTION |
|---|---|
| `Generator[NDArray[floating]]` | Predictions at iteration i, shape (n_samples,). |
Source code in ngboost_lightning/regressor.py
staged_pred_dist
staged_pred_dist(X: NDArray[floating]) -> Generator[Distribution]
Yield the full conditional distribution after each iteration.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `X` | `NDArray[floating]` | Features, shape (n_samples, n_features). |

| YIELDS | DESCRIPTION |
|---|---|
| `Generator[Distribution]` | Distribution at iteration i. |
Source code in ngboost_lightning/regressor.py
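A hedged sketch that uses `staged_pred_dist` to trace held-out negative log-likelihood across iterations. It assumes a fitted `reg`, held-out arrays `X_val`/`y_val`, and a `logpdf` method on the distribution (one of the "other methods" mentioned under `pred_dist`).

```python
import numpy as np

val_nll = [
    -np.mean(stage_dist.logpdf(y_val))            # mean NLL on the held-out set
    for stage_dist in reg.staged_pred_dist(X_val)
]
best_iter = int(np.argmin(val_nll))
print(f"Best held-out NLL {val_nll[best_iter]:.4f} at iteration {best_iter}")
```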
score
Negative mean score (higher is better).
Uses the scoring rule from training (LogScore or CRPScore).
Follows sklearn's convention that score() returns a value where
higher is better, making it compatible with cross_val_score and
other sklearn utilities.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `X` | `NDArray[floating]` | Features, shape (n_samples, n_features). |
| `y` | `NDArray[floating]` | Target values, shape (n_samples,). |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | The negated mean scoring rule value on (X, y); higher is better. |
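Because `score` follows the higher-is-better convention, the estimator plugs directly into sklearn model selection utilities. A minimal sketch, assuming `X` and `y` are NumPy arrays.

```python
from sklearn.model_selection import cross_val_score
from ngboost_lightning import LightningBoostRegressor

reg = LightningBoostRegressor(n_estimators=200, verbose=False)
scores = cross_val_score(reg, X, y, cv=5)   # each fold calls LightningBoostRegressor.score
print("mean negative scoring rule across folds:", scores.mean())
```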