classifier
sklearn-compatible classifier for ngboost-lightning.
LightningBoostClassifier
LightningBoostClassifier(
dist: type[Distribution] = Bernoulli,
n_estimators: int = 500,
learning_rate: float = 0.01,
minibatch_frac: float = 1.0,
col_sample: float = 1.0,
natural_gradient: bool = True,
tol: float = 0.0001,
random_state: int | None = None,
verbose: bool = True,
verbose_eval: int = 100,
num_leaves: int = 31,
max_depth: int = -1,
min_child_samples: int = 20,
subsample: float = 1.0,
colsample_bytree: float = 1.0,
reg_alpha: float = 0.0,
reg_lambda: float = 0.0,
lgbm_params: dict[str, Any] | None = None,
scoring_rule: ScoringRule | None = None,
validation_fraction: float | None = None,
)
Bases: BaseEstimator, ClassifierMixin
Natural gradient boosting classifier powered by LightGBM.
Outputs full probability distributions over classes by boosting the parameters of a categorical distribution using the natural gradient of the log-likelihood.
Internally trains K-1 independent LightGBM boosters (one per logit parameter), faithfully replicating the NGBoost algorithm with LightGBM's histogram-based splitting for speed.
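As a concrete illustration of the K-1 logit parameterization, here is a minimal pure-Python sketch. The function name and the reference-class convention are assumptions for illustration only; the library's `Categorical` distribution implements this internally.

```python
import math

def logits_to_probs(logits):
    """Map K-1 free logits to K class probabilities.

    The reference class gets an implicit logit of 0; the remaining
    classes use the boosted logits. Subtracting the max keeps the
    softmax numerically stable.
    """
    full = [0.0] + list(logits)
    m = max(full)
    exps = [math.exp(v - m) for v in full]
    z = sum(exps)
    return [e / z for e in exps]

probs = logits_to_probs([0.5, -1.2])  # 2 logits -> 3 class probabilities
```

Because only K-1 logits are free, only K-1 boosters are needed for a K-class problem.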
| PARAMETER | DESCRIPTION |
|---|---|
| `dist` | Distribution class to use. Must be a subclass of `Distribution`. **TYPE:** `type[Distribution]` |
| `n_estimators` | Number of boosting iterations. **TYPE:** `int` |
| `learning_rate` | Outer learning rate applied to each boosting step. **TYPE:** `float` |
| `minibatch_frac` | Fraction of training rows to subsample each iteration for gradient computation (NGBoost-style minibatch). 1.0 means no subsampling. **TYPE:** `float` |
| `col_sample` | Fraction of columns to subsample each boosting iteration. 1.0 means no column subsampling. All K parameter-boosters see the same feature subset each iteration. **TYPE:** `float` |
| `natural_gradient` | Whether to use the natural gradient (True) or the ordinary gradient (False). **TYPE:** `bool` |
| `tol` | Convergence tolerance. Training stops when the mean gradient norm falls below this value. **TYPE:** `float` |
| `random_state` | Seed for reproducibility (minibatch sampling). **TYPE:** `int \| None` |
| `verbose` | Whether to log training progress. **TYPE:** `bool` |
| `verbose_eval` | Log progress every this many iterations. **TYPE:** `int` |
| `num_leaves` | Maximum number of leaves per tree. **TYPE:** `int` |
| `max_depth` | Maximum tree depth. -1 means no limit. **TYPE:** `int` |
| `min_child_samples` | Minimum number of samples in a leaf. **TYPE:** `int` |
| `subsample` | LightGBM-level row subsampling ratio per tree. **TYPE:** `float` |
| `colsample_bytree` | Column subsampling ratio per tree. **TYPE:** `float` |
| `reg_alpha` | L1 regularization on leaf weights. **TYPE:** `float` |
| `reg_lambda` | L2 regularization on leaf weights. **TYPE:** `float` |
| `lgbm_params` | Additional parameters passed to each LightGBM Booster. **TYPE:** `dict[str, Any] \| None` |
| `validation_fraction` | Fraction of training data to hold out as validation for early stopping. **TYPE:** `float \| None` |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `engine_` | The fitted boosting engine. |
| `classes_` | Array of unique class labels seen during fit. |
| `n_classes_` | Number of classes. |
| `n_features_in_` | Number of features seen during fit. |
| `n_estimators_` | Actual number of boosting iterations. |
| `init_params_` | Initial distribution parameters. |
| `scalings_` | Line search scale factor per iteration. |
| `train_loss_` | Training NLL per iteration. |
Examples:
>>> from ngboost_lightning import LightningBoostClassifier
>>> clf = LightningBoostClassifier(n_estimators=100, learning_rate=0.05)
>>> clf.fit(X_train, y_train)
>>> probs = clf.predict_proba(X_test)
>>> labels = clf.predict(X_test)
Initialize the classifier. See class docstring for parameters.
Source code in ngboost_lightning/classifier.py
feature_importances_
property
Feature importances per distribution parameter.
| RETURNS | DESCRIPTION |
|---|---|
| `NDArray[floating]` | Importance array; each row sums to 1.0 and corresponds to one logit parameter. |
fit
fit(
X: NDArray[floating],
y: NDArray[floating],
X_val: NDArray[floating] | None = None,
y_val: NDArray[floating] | None = None,
early_stopping_rounds: int | None = None,
sample_weight: NDArray[floating] | None = None,
val_sample_weight: NDArray[floating] | None = None,
train_loss_monitor: Callable[
[Distribution, NDArray[floating]], float
]
| None = None,
val_loss_monitor: Callable[
[Distribution, NDArray[floating]], float
]
| None = None,
) -> Self
Fit the natural gradient boosting classifier.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Training features. **TYPE:** `NDArray[floating]` |
| `y` | Training class labels. **TYPE:** `NDArray[floating]` |
| `X_val` | Validation features for early stopping. **TYPE:** `NDArray[floating] \| None` |
| `y_val` | Validation class labels for early stopping. **TYPE:** `NDArray[floating] \| None` |
| `early_stopping_rounds` | Stop if validation loss hasn't improved for this many consecutive iterations. **TYPE:** `int \| None` |
| `sample_weight` | Per-sample training weights. **TYPE:** `NDArray[floating] \| None` |
| `val_sample_weight` | Per-sample validation weights. **TYPE:** `NDArray[floating] \| None` |
| `train_loss_monitor` | Custom callable for computing training loss. **TYPE:** `Callable[[Distribution, NDArray[floating]], float] \| None` |
| `val_loss_monitor` | Custom callable for computing validation loss. **TYPE:** `Callable[[Distribution, NDArray[floating]], float] \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The fitted estimator. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the number of classes in y does not match the distribution's K, or if a LightGBM parameter appears in both a surfaced kwarg and `lgbm_params`. |
Source code in ngboost_lightning/classifier.py
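The `early_stopping_rounds` semantics ("stop when the validation loss has not improved for that many consecutive iterations") can be sketched with a simple counter. This is an illustrative sketch, not the library's code; the strict-improvement rule and the function name are assumptions.

```python
def stopping_iteration(val_losses, early_stopping_rounds):
    """Return the number of iterations run before early stopping
    triggers, given per-iteration validation losses."""
    best = float("inf")
    rounds_since_improvement = 0
    for i, loss in enumerate(val_losses):
        if loss < best:  # strict improvement resets the counter
            best = loss
            rounds_since_improvement = 0
        else:
            rounds_since_improvement += 1
            if rounds_since_improvement >= early_stopping_rounds:
                return i + 1  # stop after this iteration
    return len(val_losses)  # never triggered

n = stopping_iteration([1.0, 0.9, 0.95, 0.92, 0.93], early_stopping_rounds=2)  # → 4
```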
predict
Predict class labels.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |

| RETURNS | DESCRIPTION |
|---|---|
| `NDArray[integer]` | Predicted class labels. |
Source code in ngboost_lightning/classifier.py
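`predict` can be thought of as `predict_proba` followed by an argmax mapped through `classes_`. A hypothetical pure-Python sketch of that reduction (the helper name is an assumption):

```python
def predict_from_proba(proba_rows, classes):
    """For each row of class probabilities, return the label of the
    most probable class (illustrative; mirrors how predict() would
    reduce predict_proba() output)."""
    return [classes[max(range(len(row)), key=row.__getitem__)]
            for row in proba_rows]

labels = predict_from_proba([[0.2, 0.8], [0.7, 0.3]], classes=["no", "yes"])  # → ["yes", "no"]
```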
predict_proba
Predict class probabilities.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |

| RETURNS | DESCRIPTION |
|---|---|
| `NDArray[floating]` | Probability matrix. Each row sums to 1. |
Source code in ngboost_lightning/classifier.py
pred_dist
pred_dist(X: NDArray[floating]) -> Categorical
Predict the full conditional distribution.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |

| RETURNS | DESCRIPTION |
|---|---|
| `Categorical` | A Categorical distribution instance for all samples. |
Source code in ngboost_lightning/classifier.py
staged_predict
Yield class label predictions after each boosting iteration.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |

| YIELDS | DESCRIPTION |
|---|---|
| `Generator[NDArray[integer]]` | Predicted class labels at iteration i. |
Source code in ngboost_lightning/classifier.py
staged_predict_proba
Yield class probabilities after each boosting iteration.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |

| YIELDS | DESCRIPTION |
|---|---|
| `Generator[NDArray[floating]]` | Probability matrix at iteration i. |
Source code in ngboost_lightning/classifier.py
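Conceptually, the staged methods expose the model after each intermediate iteration: the logits accumulate one scaled increment per round. A sketch of that accumulation (the update sign, the role of `scalings_`, and the function name are assumptions for illustration):

```python
def staged_params(init, increments, scalings, learning_rate):
    """Yield the cumulative parameter vector after each boosting
    iteration: params_i = params_{i-1} - lr * scaling_i * increment_i."""
    params = list(init)
    for increment, scale in zip(increments, scalings):
        params = [p - learning_rate * scale * g
                  for p, g in zip(params, increment)]
        yield list(params)

stages = list(staged_params([0.0], [[1.0], [2.0]], [1.0, 0.5], learning_rate=0.1))
```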
staged_pred_dist
staged_pred_dist(X: NDArray[floating]) -> Generator[Categorical]
Yield the full conditional distribution after each iteration.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |

| YIELDS | DESCRIPTION |
|---|---|
| `Generator[Categorical]` | Categorical distribution at iteration i. |
Source code in ngboost_lightning/classifier.py
score
Negative mean NLL (higher is better).
Follows the same convention as LightningBoostRegressor: returns -mean(NLL) so that higher values indicate better fit. This is consistent with probabilistic scoring but differs from sklearn's ClassifierMixin.score(), which returns accuracy.
| PARAMETER | DESCRIPTION |
|---|---|
| `X` | Features. **TYPE:** `NDArray[floating]` |
| `y` | True class labels. **TYPE:** `NDArray[floating]` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | Negative mean NLL; higher is better. |
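The scoring convention above can be sketched with a small hypothetical helper over class probabilities and integer label indices (names are assumptions; the estimator computes this from its predicted distribution):

```python
import math

def neg_mean_nll(proba_rows, y_indices):
    """Negative mean negative log-likelihood: 0 is a perfect fit,
    more negative means a worse probabilistic fit."""
    nlls = [-math.log(row[y]) for row, y in zip(proba_rows, y_indices)]
    return -sum(nlls) / len(nlls)

perfect = neg_mean_nll([[1.0, 0.0]], [0])     # → 0.0
uncertain = neg_mean_nll([[0.5, 0.5]], [0])   # → -log(2) ≈ -0.693
```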