SymbolicSgdLogisticRegressionBinaryTrainer Class

Definition

The IEstimator<TTransformer> to predict a target using a linear binary classification model trained with symbolic stochastic gradient descent.

public sealed class SymbolicSgdLogisticRegressionBinaryTrainer : Microsoft.ML.Trainers.TrainerEstimatorBase<Microsoft.ML.Data.BinaryPredictionTransformer<Microsoft.ML.Calibrators.CalibratedModelParametersBase<Microsoft.ML.Trainers.LinearBinaryModelParameters,Microsoft.ML.Calibrators.PlattCalibrator>>,Microsoft.ML.Calibrators.CalibratedModelParametersBase<Microsoft.ML.Trainers.LinearBinaryModelParameters,Microsoft.ML.Calibrators.PlattCalibrator>>
type SymbolicSgdLogisticRegressionBinaryTrainer = class
    inherit TrainerEstimatorBase<BinaryPredictionTransformer<CalibratedModelParametersBase<LinearBinaryModelParameters, PlattCalibrator>>, CalibratedModelParametersBase<LinearBinaryModelParameters, PlattCalibrator>>
Public NotInheritable Class SymbolicSgdLogisticRegressionBinaryTrainer
Inherits TrainerEstimatorBase(Of BinaryPredictionTransformer(Of CalibratedModelParametersBase(Of LinearBinaryModelParameters, PlattCalibrator)), CalibratedModelParametersBase(Of LinearBinaryModelParameters, PlattCalibrator))
Inheritance

Object → TrainerEstimatorBase<BinaryPredictionTransformer<CalibratedModelParametersBase<LinearBinaryModelParameters,PlattCalibrator>>, CalibratedModelParametersBase<LinearBinaryModelParameters,PlattCalibrator>> → SymbolicSgdLogisticRegressionBinaryTrainer

Remarks

To create this trainer, use SymbolicSgdLogisticRegression or SymbolicSgdLogisticRegression(Options).
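As a minimal sketch of typical usage (the DataPoint class, column names, and data values below are assumptions for illustration, not taken from this page), the trainer is created from the binary classification catalog and fitted to an IDataView:

using Microsoft.ML;
using Microsoft.ML.Data;

// Minimal sketch: create and fit the trainer on a tiny in-memory dataset.
var mlContext = new MLContext(seed: 0);

var samples = new[]
{
    new DataPoint { Label = true,  Features = new float[] { 1f, 2f, 3f } },
    new DataPoint { Label = false, Features = new float[] { 3f, 2f, 1f } },
};
IDataView trainingData = mlContext.Data.LoadFromEnumerable(samples);

// Requires the Microsoft.ML.Mkl.Components NuGet package.
var trainer = mlContext.BinaryClassification.Trainers.SymbolicSgdLogisticRegression(
    labelColumnName: "Label",
    featureColumnName: "Features");

var model = trainer.Fit(trainingData);

// Assumed input type for this sketch: a Boolean label and a known-sized Single vector.
class DataPoint
{
    public bool Label { get; set; }

    [VectorType(3)]
    public float[] Features { get; set; }
}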

Input and Output Columns

The input label column data must be Boolean. The input features column data must be a known-sized vector of Single.

This trainer outputs the following columns:

Output Column Name | Column Type | Description
Score | Single | The unbounded score that was calculated by the model.
PredictedLabel | Boolean | The predicted label, based on the sign of the score. A negative score maps to false and a positive score maps to true.
Probability | Single | The probability, calculated by calibrating the score, of having true as the label. The probability value is in the range [0, 1].
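When consuming predictions through a PredictionEngine, these output columns map onto a prediction class such as the following minimal sketch (the class name is assumed):

using Microsoft.ML.Data;

// Sketch of a prediction class matching the trainer's output columns.
class SymbolicSgdPrediction
{
    // Unbounded score computed by the linear model.
    public float Score { get; set; }

    // Sign of the score: negative maps to false, positive to true.
    public bool PredictedLabel { get; set; }

    // Calibrated probability of the label being true, in [0, 1].
    public float Probability { get; set; }
}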

Trainer Characteristics

Machine learning task | Binary classification
Is normalization required? | Yes
Is caching required? | No
Required NuGet in addition to Microsoft.ML | Microsoft.ML.Mkl.Components
Exportable to ONNX | Yes

Training Algorithm Details

Symbolic stochastic gradient descent is an algorithm that makes its predictions by finding a separating hyperplane. For instance, with feature values $f_0, f_1, \ldots, f_{D-1}$, the prediction is given by determining which side of the hyperplane the point falls on. That is equivalent to the sign of the features' weighted sum, i.e. $\sum_{i = 0}^{D-1} (w_i * f_i) + b$, where $w_0, w_1, \ldots, w_{D-1}$ are the weights computed by the algorithm, and $b$ is the bias computed by the algorithm.
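As a small illustration of the formula above (a standalone sketch, not the trainer's internal implementation), the raw score and predicted label for a single example are computed like this:

// Standalone sketch of the linear prediction rule described above.
static bool PredictLabel(float[] features, float[] weights, float bias, out float score)
{
    // score = sum_{i=0}^{D-1} (w_i * f_i) + b
    score = bias;
    for (int i = 0; i < features.Length; i++)
        score += weights[i] * features[i];

    // The sign of the score gives the predicted label:
    // negative maps to false, positive maps to true.
    return score > 0;
}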

Most stochastic gradient descent algorithms are inherently sequential: at each step, the processing of the current example depends on the parameters learned from previous examples. This algorithm instead trains local models in separate threads, together with a probabilistic model combiner that allows the local models to be combined to produce, in expectation, the same result as a sequential symbolic stochastic gradient descent would have produced.

For more information see Parallel Stochastic Gradient Descent with Sound Combiners.

Check the See Also section for links to usage examples.

Fields

FeatureColumn

The feature column that the trainer expects.

(Inherited from TrainerEstimatorBase<TTransformer,TModel>)
LabelColumn

The label column that the trainer expects. Can be null, which indicates that label is not used for training.

(Inherited from TrainerEstimatorBase<TTransformer,TModel>)
WeightColumn

The weight column that the trainer expects. Can be null, which indicates that weight is not used for training.

(Inherited from TrainerEstimatorBase<TTransformer,TModel>)

Properties

Info

Methods

Fit(IDataView, LinearModelParameters)

Continues the training of a SymbolicSgdLogisticRegressionBinaryTrainer using an already trained modelParameters and returns a Microsoft.ML.Data.BinaryPredictionTransformer.
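A hedged sketch of how continued training might look (variable names are assumed; the linear sub-model is pulled out of the previously trained calibrated model to seed further training):

// Sketch: warm-start training from a previously fitted model.
var firstModel = trainer.Fit(initialData);

// The calibrated model wraps a linear sub-model usable as a starting point.
var warmStart = firstModel.Model.SubModel;

// Continue training on additional data from the existing parameters.
var refinedModel = trainer.Fit(moreData, warmStart);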

Fit(IDataView)

Trains and returns an ITransformer.

(Inherited from TrainerEstimatorBase<TTransformer,TModel>)
GetOutputSchema(SchemaShape) (Inherited from TrainerEstimatorBase<TTransformer,TModel>)

Extension Methods

AppendCacheCheckpoint<TTrans>(IEstimator<TTrans>, IHostEnvironment)

Append a 'caching checkpoint' to the estimator chain. This will ensure that the downstream estimators will be trained against cached data. It is helpful to have a caching checkpoint before trainers that take multiple data passes.
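For example (a minimal sketch; mlContext, the input column names, and the surrounding pipeline are assumptions), a caching checkpoint can be appended before a trainer that makes multiple passes over the data:

// Sketch: cache the featurized data before a multi-pass trainer.
var pipeline = mlContext.Transforms.Concatenate("Features", "F1", "F2")
    .AppendCacheCheckpoint(mlContext)
    .Append(mlContext.BinaryClassification.Trainers.SymbolicSgdLogisticRegression());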

WithOnFitDelegate<TTransformer>(IEstimator<TTransformer>, Action<TTransformer>)

Given an estimator, return a wrapping object that will call a delegate once Fit(IDataView) is called. It is often important for an estimator to return information about what was fit, which is why the Fit(IDataView) method returns a specifically typed object rather than just a general ITransformer. However, IEstimator<TTransformer> instances are often formed into pipelines via EstimatorChain<TLastTransformer>, where the estimator whose transformer we want may be buried somewhere in the chain. For that scenario, this method lets us attach a delegate that will be called once fit is called.
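A brief sketch of that scenario (mlContext, column names, and trainingData are assumptions): capture the fitted transformer of the trainer stage from inside a longer pipeline.

// Sketch: capture the trainer's fitted transformer when the pipeline is fit.
ITransformer fittedTrainerStage = null;

var pipeline = mlContext.Transforms.Concatenate("Features", "F1", "F2")
    .Append(mlContext.BinaryClassification.Trainers.SymbolicSgdLogisticRegression()
        .WithOnFitDelegate(transformer => fittedTrainerStage = transformer));

// After Fit, fittedTrainerStage holds the trained binary prediction transformer.
var model = pipeline.Fit(trainingData);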

Applies to

See also