

FastLinearRegressor Class

A Stochastic Dual Coordinate Ascent (SDCA) optimization trainer for linear regression.

Inheritance

FastLinearRegressor derives from:

  • nimbusml.internal.core.linear_model._fastlinearregressor.FastLinearRegressor
  • nimbusml.base_predictor.BasePredictor
  • sklearn.base.RegressorMixin

Constructor

FastLinearRegressor(l2_regularization=None, l1_threshold=None, normalize='Auto', caching='Auto', loss='squared', number_of_threads=None, convergence_tolerance=0.01, maximum_number_of_iterations=None, shuffle=True, convergence_check_frequency=None, bias_learning_rate=1.0, feature=None, label=None, weight=None, **params)
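A minimal construction sketch (every argument is optional; the values and column names below are illustrative only and follow the infert example further down):

   from nimbusml.linear_model import FastLinearRegressor

   # all arguments are optional; the values here are illustrative only
   regressor = FastLinearRegressor(
       l2_regularization=0.1,             # L2 constant (inferred from the data if omitted)
       l1_threshold=0.0,                  # L1 soft threshold (inferred from the data if omitted)
       normalize='Auto',                  # automatic feature normalization
       maximum_number_of_iterations=100,  # cap on the number of training passes
       feature=['induced', 'parity'],     # feature columns
       label='age')                       # label column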

Parameters

Name Description
feature

see Columns.

label

see Columns.

weight

see Columns.

l2_regularization

L2 regularizer constant. By default, the L2 constant is automatically inferred based on the data set.

l1_threshold

L1 soft threshold (L1/L2). Note that it is easier to control and sweep using the threshold parameter than the raw L1-regularizer constant. By default, the L1 threshold is automatically inferred based on the data set.

normalize

Specifies the type of automatic normalization used:

  • "Auto": if normalization is needed, it is performed automatically. This is the default choice.

  • "No": no normalization is performed.

  • "Yes": normalization is performed.

  • "Warn": if normalization is needed, a warning message is displayed, but normalization is not performed.

Normalization rescales disparate data ranges to a standard scale. Feature scaling ensures the distances between data points are proportional and enables various optimization methods such as gradient descent to converge much faster. If normalization is performed, a MaxMin normalizer is used. It normalizes values into an interval [a, b] where -1 <= a <= 0, 0 <= b <= 1, and b - a = 1. This normalizer preserves sparsity by mapping zero to zero. This option can also be set explicitly, as shown in the sketch after this parameter list.

caching

Whether the trainer should cache the input training data.

loss

The only supported loss is 'squared'. For more information, please see the Loss documentation page.

number_of_threads

Degree of lock-free parallelism. Defaults to automatic. Determinism not guaranteed.

convergence_tolerance

The tolerance for the ratio between duality gap and primal loss for convergence checking.

maximum_number_of_iterations

Maximum number of iterations; set to 1 to simulate online learning. Defaults to automatic.

shuffle

Whether to shuffle the data every epoch.

convergence_check_frequency

Convergence check frequency (in terms of number of iterations). Set to a negative or zero value to disable checking. If left unset, it defaults to checking after every number_of_threads iterations.

bias_learning_rate

The learning rate for adjusting bias from being regularized.

params

Additional arguments sent to the compute engine.
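
As referenced in the normalize description above, a short sketch of forcing or disabling normalization when constructing the trainer (column names follow the infert example below):

   from nimbusml.linear_model import FastLinearRegressor

   # force MaxMin normalization of the feature columns before training
   r_scaled = FastLinearRegressor(normalize='Yes',
                                  feature=['induced', 'parity'], label='age')

   # skip normalization, e.g. when the features are already on a comparable scale
   r_raw = FastLinearRegressor(normalize='No',
                               feature=['induced', 'parity'], label='age')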

Examples


   ###############################################################################
   # FastLinearRegressor
   from nimbusml import Pipeline, FileDataStream
   from nimbusml.datasets import get_dataset
   from nimbusml.feature_extraction.categorical import OneHotVectorizer
   from nimbusml.linear_model import FastLinearRegressor

   # data input (as a FileDataStream)
   path = get_dataset('infert').as_filepath()
   data = FileDataStream.read_csv(path)
   print(data.head())
   #   age  case education  induced  parity  ... row_num  spontaneous  ...
   # 0   26     1    0-5yrs        1       6 ...       1            2  ...
   # 1   42     1    0-5yrs        1       1 ...       2            0  ...
   # 2   39     1    0-5yrs        2       6 ...       3            0  ...
   # 3   34     1    0-5yrs        2       4 ...       4            0  ...
   # 4   35     1   6-11yrs        1       3 ...       5            1  ...

   # define the training pipeline
   pipeline = Pipeline([
       OneHotVectorizer(columns={'edu': 'education'}),
       FastLinearRegressor(feature=['induced', 'edu'], label='age')
   ])

   # train, predict, and evaluate
   metrics, predictions = pipeline.fit(data).test(data, output_scores=True)

   # print predictions
   print(predictions.head())
   #       Score
   # 0  35.335880
   # 1  35.335880
   # 2  34.582409
   # 3  34.582409
   # 4  32.460728

   # print evaluation metrics
   print(metrics)
   #    L1(avg)    L2(avg)  RMS(avg)  Loss-fn(avg)  R Squared
   # 0  4.082992  24.122282  4.911444     24.122282   0.121795

Remarks

FastLinearRegressor is a trainer based on the Stochastic Dual Coordinate Ascent (SDCA) method, a state-of-the-art optimization technique for convex objective functions. The algorithm can be scaled for use on large out-of-memory data sets due to a semi-asynchronized implementation that supports multi-threading. Convergence is underwritten by periodically enforcing synchronization between primal and dual updates in a separate thread. Several choices of loss functions are also provided. The SDCA method combines several of the best properties and capabilities of logistic regression and SVM algorithms. For more information on SDCA, see the citations in the reference section.

Traditional optimization algorithms, such as stochastic gradient descent (SGD), optimize the empirical loss function directly. SDCA instead optimizes the dual problem, whose loss function is parameterized by per-example weights. In each iteration, when a training example is read from the training data set, the corresponding example weight is adjusted so that the dual loss function is optimized with respect to the current example. Unlike the various gradient descent methods, SDCA does not require a learning rate to determine the step size.
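
As a sketch of that idea (following the notation of the SDCA paper cited in the reference section, with only the L2 term shown; the actual objective also involves the L1 threshold and bias handling described below), the primal and dual objectives for the squared loss can be written as:

   \begin{align*}
   P(w) &= \frac{1}{n}\sum_{i=1}^{n} \tfrac{1}{2}\bigl(w^{\top}x_i - y_i\bigr)^2
           + \frac{\lambda}{2}\,\lVert w \rVert^2 \\
   D(\alpha) &= \frac{1}{n}\sum_{i=1}^{n} \Bigl(\alpha_i y_i - \tfrac{\alpha_i^2}{2}\Bigr)
           - \frac{\lambda}{2}\,\Bigl\lVert \tfrac{1}{\lambda n}\sum_{i=1}^{n} \alpha_i x_i \Bigr\rVert^2
   \end{align*}

Each SDCA update maximizes D with respect to a single dual variable, the primal weights are recovered as w(alpha) = (1/(lambda * n)) * sum_i alpha_i x_i, and the duality gap P(w(alpha)) - D(alpha) is the quantity that convergence_tolerance compares against the primal loss.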

FastLinearRegressor supports only the squared loss function. Elastic net regularization can be specified with the l2_regularization and l1_threshold parameters. Note that l2_regularization has an effect on the rate of convergence: in general, the larger the l2_regularization, the faster SDCA converges.
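
For instance, a hedged sketch of an elastic-net configuration (the values are illustrative only):

   from nimbusml.linear_model import FastLinearRegressor

   # combine the L2 constant with an L1 soft threshold; larger l2_regularization
   # values usually make SDCA converge faster (values are illustrative only)
   r = FastLinearRegressor(l2_regularization=0.01, l1_threshold=0.1,
                           feature=['induced', 'parity'], label='age')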

Note that FastLinearRegressor is a stochastic and streaming optimization algorithm. The results depend on the order of the training data. For reproducible results, it is recommended to set shuffle to False and number_of_threads to 1.
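
A sketch of a reproducible configuration under that recommendation:

   from nimbusml.linear_model import FastLinearRegressor

   # single-threaded and unshuffled: repeated runs over the same data in the
   # same order should produce the same model
   r = FastLinearRegressor(shuffle=False, number_of_threads=1,
                           feature=['induced', 'parity'], label='age')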

Reference

Scaling Up Stochastic Dual Coordinate Ascent

Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization

Methods

get_params

Get the parameters for this operator.

get_params(deep=False)

Parameters

Name Description
deep
Default value: False
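
A short usage sketch of get_params:

   from nimbusml.linear_model import FastLinearRegressor

   r = FastLinearRegressor()
   # returns a dict of the constructor parameters currently set on the operator
   print(r.get_params())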