ClassificationModels type
Defines values for ClassificationModels.
KnownClassificationModels can be used interchangeably with ClassificationModels;
this enum contains the known values that the service supports.
Known values supported by the service
LogisticRegression: Logistic regression is a fundamental classification technique.
It belongs to the group of linear classifiers and is somewhat similar to polynomial and linear regression.
Logistic regression is fast and relatively uncomplicated, and its results are easy to interpret.
Although it's essentially a method for binary classification, it can also be applied to multiclass problems.
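The binary case described above can be sketched in a few lines: fit weights by gradient descent on the log-loss, then threshold the sigmoid output at 0.5. This is an illustrative toy implementation, not part of any SDK; all names (`fitLogistic`, `predict`) are made up for this example.

```typescript
// Minimal binary logistic regression trained with batch gradient descent.
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

// Returns weights [bias, w1, ..., wn] fit to (X, y) with labels y in {0, 1}.
function fitLogistic(X: number[][], y: number[], lr = 0.1, epochs = 2000): number[] {
  const n = X[0].length;
  const w = new Array(n + 1).fill(0); // w[0] is the bias term
  for (let e = 0; e < epochs; e++) {
    const grad = new Array(n + 1).fill(0);
    for (let i = 0; i < X.length; i++) {
      const z = w[0] + X[i].reduce((s, x, j) => s + w[j + 1] * x, 0);
      const err = sigmoid(z) - y[i]; // gradient of log-loss w.r.t. z
      grad[0] += err;
      for (let j = 0; j < n; j++) grad[j + 1] += err * X[i][j];
    }
    for (let j = 0; j <= n; j++) w[j] -= (lr / X.length) * grad[j];
  }
  return w;
}

function predict(w: number[], x: number[]): number {
  const z = w[0] + x.reduce((s, v, j) => s + w[j + 1] * v, 0);
  return sigmoid(z) >= 0.5 ? 1 : 0;
}

// Toy linearly separable data: class 1 roughly where x0 + x1 is large.
const X = [[0, 0], [1, 0], [0, 1], [2, 2], [3, 1], [1, 3]];
const y = [0, 0, 0, 1, 1, 1];
const w = fitLogistic(X, y);
```

Because the loss is convex, gradient descent reliably separates data like this; multiclass extensions (one-vs-rest or softmax) build on the same idea.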
SGD: Stochastic gradient descent is an optimization algorithm often used in machine learning applications
to find the model parameters that correspond to the best fit between predicted and actual outputs.
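The defining trait of SGD is that each parameter update uses a single sample's gradient rather than the whole dataset. A minimal sketch on a one-dimensional linear model (names are illustrative, not from any library):

```typescript
// One-dimensional linear model y ≈ w * x + b, fit by stochastic gradient
// descent on squared error: parameters are updated after every single sample.
function sgdFitLinear(
  xs: number[],
  ys: number[],
  lr = 0.02,
  epochs = 500
): { w: number; b: number } {
  let w = 0;
  let b = 0;
  for (let e = 0; e < epochs; e++) {
    for (let i = 0; i < xs.length; i++) {
      const pred = w * xs[i] + b;
      const err = pred - ys[i];  // gradient of 0.5 * err^2 w.r.t. pred
      w -= lr * err * xs[i];     // per-sample update, not a batch average
      b -= lr * err;
    }
  }
  return { w, b };
}

// Noise-free data from y = 2x + 1, so SGD should recover w ≈ 2, b ≈ 1.
const xs = [0, 1, 2, 3, 4];
const ys = xs.map((x) => 2 * x + 1);
const { w, b } = sgdFitLinear(xs, ys);
```

The per-sample updates make each step cheap and noisy; in practice that noise is often acceptable (or even helpful) in exchange for much faster passes over large datasets.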
MultinomialNaiveBayes: The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification).
The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.
BernoulliNaiveBayes: Naive Bayes classifier for multivariate Bernoulli models.
SVM: A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems.
Once an SVM model has been given sets of labeled training data for each category, it can categorize new text.
LinearSVM: A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems.
Once an SVM model has been given sets of labeled training data for each category, it can categorize new text.
Linear SVM performs best when the data is linearly separable, i.e., when the classes can be divided by a straight line on a plotted graph.
KNN: The K-nearest neighbors (KNN) algorithm uses 'feature similarity' to predict the values of new data points:
a new data point is assigned a value based on how closely it matches the points in the training set.
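The "feature similarity" idea above reduces to a short procedure: measure the distance from the query to every training point, take the k closest, and vote. A toy sketch (all names here are invented for illustration):

```typescript
// Minimal k-nearest-neighbors classifier: majority vote over the k closest
// training points by Euclidean distance.
type LabeledPoint = { x: number[]; label: string };

function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

function knnPredict(train: LabeledPoint[], query: number[], k = 3): string {
  // Sort a copy of the training set by distance to the query, keep the k nearest.
  const nearest = [...train]
    .sort((p, q) => euclidean(p.x, query) - euclidean(q.x, query))
    .slice(0, k);
  // Majority vote over the nearest neighbors' labels.
  const votes = new Map<string, number>();
  for (const p of nearest) votes.set(p.label, (votes.get(p.label) ?? 0) + 1);
  return [...votes.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// Two well-separated clusters of labeled training points.
const train: LabeledPoint[] = [
  { x: [1, 1], label: "a" }, { x: [1, 2], label: "a" }, { x: [2, 1], label: "a" },
  { x: [8, 8], label: "b" }, { x: [8, 9], label: "b" }, { x: [9, 8], label: "b" },
];
```

Note that KNN has no training step at all; the cost is deferred to prediction time, which is why it scales poorly to large training sets without indexing structures.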
DecisionTree: Decision Trees are a non-parametric supervised learning method used for both classification and regression tasks.
The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
RandomForest: Random forest is a supervised learning algorithm.
The "forest" it builds is an ensemble of decision trees, usually trained with the "bagging" method.
The general idea of bagging is that combining multiple learning models improves the overall result.
ExtremeRandomTrees: Extremely randomized trees (Extra Trees) is an ensemble machine learning algorithm that combines the predictions from many decision trees. It is closely related to the widely used random forest algorithm.
LightGBM: LightGBM is a gradient boosting framework that uses tree-based learning algorithms.
GradientBoosting: Boosting is the technique of combining weak learners into a strong learner; the gradient boosting algorithm builds on this idea by adding each new learner to correct the errors of the previous ones.
XGBoostClassifier: XGBoost (extreme gradient boosting) is used for structured data where target column values can be divided into distinct class values.
type ClassificationModels = string
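Since `ClassificationModels` is just an extensible string type, any string is accepted, while the service documents the known values listed above. The sketch below mirrors the Azure-SDK "Known*" companion-enum pattern with the values from this page; check the SDK itself for the exact exported shape.

```typescript
// Extensible string type: any string is assignable, but the service
// documents a set of known values.
type ClassificationModels = string;

// Companion enum of the known values listed on this page (illustrative
// sketch of the SDK's "Known*" pattern).
enum KnownClassificationModels {
  LogisticRegression = "LogisticRegression",
  SGD = "SGD",
  MultinomialNaiveBayes = "MultinomialNaiveBayes",
  BernoulliNaiveBayes = "BernoulliNaiveBayes",
  SVM = "SVM",
  LinearSVM = "LinearSVM",
  KNN = "KNN",
  DecisionTree = "DecisionTree",
  RandomForest = "RandomForest",
  ExtremeRandomTrees = "ExtremeRandomTrees",
  LightGBM = "LightGBM",
  GradientBoosting = "GradientBoosting",
  XGBoostClassifier = "XGBoostClassifier",
}

// Both known and custom values are assignable to the string type:
const known: ClassificationModels = KnownClassificationModels.LightGBM;
const custom: ClassificationModels = "MyCustomModel";
```

This pattern lets the service add new model names without breaking existing clients, while the enum still gives editors autocomplete over the documented values.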