PermutationFeatureImportanceExtensions.PermutationFeatureImportance Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Overloads
PermutationFeatureImportance(MulticlassClassificationCatalog, ITransformer, IDataView, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for MulticlassClassification.
public static System.Collections.Immutable.ImmutableDictionary<string,Microsoft.ML.Data.MulticlassClassificationMetricsStatistics> PermutationFeatureImportance (this Microsoft.ML.MulticlassClassificationCatalog catalog, Microsoft.ML.ITransformer model, Microsoft.ML.IDataView data, string labelColumnName = "Label", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1);
static member PermutationFeatureImportance : Microsoft.ML.MulticlassClassificationCatalog * Microsoft.ML.ITransformer * Microsoft.ML.IDataView * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableDictionary<string, Microsoft.ML.Data.MulticlassClassificationMetricsStatistics>
<Extension()>
Public Function PermutationFeatureImportance (catalog As MulticlassClassificationCatalog, model As ITransformer, data As IDataView, Optional labelColumnName As String = "Label", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableDictionary(Of String, MulticlassClassificationMetricsStatistics)
Parameters
- catalog
- MulticlassClassificationCatalog
The multiclass classification catalog.
- model
- ITransformer
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be KeyDataViewType.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Dictionary mapping each feature to its per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.MulticlassClassification
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a linear model.
var featureColumns =
new string[] { nameof(Data.Feature1), nameof(Data.Feature2) };
var pipeline = mlContext.Transforms
.Concatenate("Features", featureColumns)
.Append(mlContext.Transforms.Conversion.MapValueToKey("Label"))
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.MulticlassClassification.Trainers
.SdcaMaximumEntropy());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var linearPredictor = model.LastTransformer;
// Compute the permutation metrics for the linear model using the
// normalized data.
var permutationMetrics = mlContext.MulticlassClassification
.PermutationFeatureImportance(linearPredictor, transformedData,
permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on
// microaccuracy.
var sortedIndices = permutationMetrics
.Select((metrics, index) => new { index, metrics.MicroAccuracy })
.OrderByDescending(feature => Math.Abs(feature.MicroAccuracy.Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tChange in MicroAccuracy\t95% Confidence in "
+ "the Mean Change in MicroAccuracy");
var microAccuracy = permutationMetrics.Select(x => x.MicroAccuracy)
.ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:G4}\t{2:G4}",
featureColumns[i],
microAccuracy[i].Mean,
1.96 * microAccuracy[i].StandardError);
}
// Expected output:
//Feature Change in MicroAccuracy 95% Confidence in the Mean Change in MicroAccuracy
//Feature2 -0.1395 0.0006567
//Feature1 -0.05367 0.0006908
}
private class Data
{
public float Label { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the
/// label.</param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1)
{
var rng = new Random(seed);
var max = bias + 4.5 * weight1 + 4.5 * weight2 + 0.5;
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
var value = (float)
(bias + weight1 * data.Feature1 + weight2 * data.Feature2 +
rng.NextDouble() - 0.5);
if (value < max / 3)
data.Label = 0;
else if (value < 2 * max / 3)
data.Label = 1;
else
data.Label = 2;
yield return data;
}
}
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. micro-accuracy) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible multiclass classification evaluation metrics for each feature, and an ImmutableDictionary mapping each feature to its MulticlassClassificationMetricsStatistics is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
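Because this overload returns an ImmutableDictionary rather than an array, results can also be read per feature by key instead of by positional index as in the sample above. A minimal sketch, assuming the mlContext, model, and data variables from that sample, that the full fitted pipeline can be passed with the untransformed data, and that the dictionary is keyed by feature column name:
// Minimal sketch: consume the dictionary returned by this overload, reusing
// mlContext, model, and data from the sample above. Keys are assumed to be
// the feature column names.
var pfi = mlContext.MulticlassClassification.PermutationFeatureImportance(
    model, data, permutationCount: 30);
foreach (var entry in pfi.OrderByDescending(
    e => Math.Abs(e.Value.MicroAccuracy.Mean)))
{
    Console.WriteLine($"{entry.Key}\t{entry.Value.MicroAccuracy.Mean:G4}\t" +
        $"{1.96 * entry.Value.MicroAccuracy.StandardError:G4}");
}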
Applies to
PermutationFeatureImportance(RegressionCatalog, ITransformer, IDataView, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for Regression.
public static System.Collections.Immutable.ImmutableDictionary<string,Microsoft.ML.Data.RegressionMetricsStatistics> PermutationFeatureImportance (this Microsoft.ML.RegressionCatalog catalog, Microsoft.ML.ITransformer model, Microsoft.ML.IDataView data, string labelColumnName = "Label", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1);
static member PermutationFeatureImportance : Microsoft.ML.RegressionCatalog * Microsoft.ML.ITransformer * Microsoft.ML.IDataView * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableDictionary<string, Microsoft.ML.Data.RegressionMetricsStatistics>
<Extension()>
Public Function PermutationFeatureImportance (catalog As RegressionCatalog, model As ITransformer, data As IDataView, Optional labelColumnName As String = "Label", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableDictionary(Of String, RegressionMetricsStatistics)
Parameters
- catalog
- RegressionCatalog
The regression catalog.
- model
- ITransformer
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be Single.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Dictionary mapping each feature to its per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.Regression
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a linear model.
var featureColumns = new string[] { nameof(Data.Feature1),
nameof(Data.Feature2) };
var pipeline = mlContext.Transforms.Concatenate(
"Features",
featureColumns)
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.Regression.Trainers.Ols());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var linearPredictor = model.LastTransformer;
// Compute the permutation metrics for the linear model using the
// normalized data.
var permutationMetrics = mlContext.Regression
.PermutationFeatureImportance(
linearPredictor, transformedData, permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on RMSE.
var sortedIndices = permutationMetrics
.Select((metrics, index) => new
{
index,
metrics.RootMeanSquaredError
})
.OrderByDescending(feature => Math.Abs(
feature.RootMeanSquaredError.Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tModel Weight\tChange in RMSE\t95%" +
"Confidence in the Mean Change in RMSE");
var rmse = permutationMetrics.Select(x => x.RootMeanSquaredError)
.ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:0.00}\t{2:G4}\t{3:G4}",
featureColumns[i],
linearPredictor.Model.Weights[i],
rmse[i].Mean,
1.96 * rmse[i].StandardError);
}
// Expected output:
// Feature Model Weight Change in RMSE 95% Confidence in the Mean Change in RMSE
// Feature2 9.00 4.009 0.008304
// Feature1 4.48 1.901 0.003351
}
private class Data
{
public float Label { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the label.
/// </param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1)
{
var rng = new Random(seed);
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
data.Label = (float)(bias + weight1 * data.Feature1 + weight2 *
data.Feature2 + rng.NextDouble() - 0.5);
yield return data;
}
}
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. R-squared) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible regression evaluation metrics for each feature, and an ImmutableDictionary mapping each feature to its RegressionMetricsStatistics is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
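The same pattern applies here: the dictionary can be sorted by the mean change in a metric of interest. A minimal sketch, assuming the mlContext, model, and data variables from the sample above, that the full fitted pipeline can be passed with the untransformed data, and that keys are the feature column names:
// Minimal sketch: rank features by their mean change in RMSE, reusing
// mlContext, model, and data from the sample above (keys assumed to be
// feature column names).
var pfi = mlContext.Regression.PermutationFeatureImportance(
    model, data, permutationCount: 30);
foreach (var entry in pfi.OrderByDescending(
    e => Math.Abs(e.Value.RootMeanSquaredError.Mean)))
{
    Console.WriteLine(
        $"{entry.Key}: change in RMSE = {entry.Value.RootMeanSquaredError.Mean:G4}");
}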
Applies to
PermutationFeatureImportance(RankingCatalog, ITransformer, IDataView, String, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for Ranking.
public static System.Collections.Immutable.ImmutableDictionary<string,Microsoft.ML.Data.RankingMetricsStatistics> PermutationFeatureImportance (this Microsoft.ML.RankingCatalog catalog, Microsoft.ML.ITransformer model, Microsoft.ML.IDataView data, string labelColumnName = "Label", string rowGroupColumnName = "GroupId", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1);
static member PermutationFeatureImportance : Microsoft.ML.RankingCatalog * Microsoft.ML.ITransformer * Microsoft.ML.IDataView * string * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableDictionary<string, Microsoft.ML.Data.RankingMetricsStatistics>
<Extension()>
Public Function PermutationFeatureImportance (catalog As RankingCatalog, model As ITransformer, data As IDataView, Optional labelColumnName As String = "Label", Optional rowGroupColumnName As String = "GroupId", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableDictionary(Of String, RankingMetricsStatistics)
Parameters
- catalog
- RankingCatalog
The ranking catalog.
- model
- ITransformer
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be Single or KeyDataViewType.
- rowGroupColumnName
- String
GroupId column name.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Dictionary mapping each feature to its per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.Ranking
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a FastTree ranking model.
var featureColumns = new string[] { nameof(Data.Feature1), nameof(
Data.Feature2) };
var pipeline = mlContext.Transforms.Concatenate("Features",
featureColumns)
.Append(mlContext.Transforms.Conversion.MapValueToKey("Label"))
.Append(mlContext.Transforms.Conversion.MapValueToKey(
"GroupId"))
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.Ranking.Trainers.FastTree());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var predictor = model.LastTransformer;
// Compute the permutation metrics for the model using the
// normalized data.
var permutationMetrics = mlContext.Ranking.PermutationFeatureImportance(
predictor, transformedData, permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on NDCG@1.
var sortedIndices = permutationMetrics.Select((metrics, index) => new
{
index,
metrics.NormalizedDiscountedCumulativeGains
})
.OrderByDescending(feature => Math.Abs(
feature.NormalizedDiscountedCumulativeGains[0].Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tChange in NDCG@1\t95% Confidence in the" +
"Mean Change in NDCG@1");
var ndcg = permutationMetrics.Select(
x => x.NormalizedDiscountedCumulativeGains).ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:G4}\t{2:G4}",
featureColumns[i],
ndcg[i][0].Mean,
1.96 * ndcg[i][0].StandardError);
}
// Expected output:
// Feature Change in NDCG@1 95% Confidence in the Mean Change in NDCG@1
// Feature2 -0.2421 0.001748
// Feature1 -0.0513 0.001184
}
private class Data
{
public float Label { get; set; }
public int GroupId { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the label.
/// </param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <param name="groupSize">The number of examples per group.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1,
int groupSize = 5)
{
var rng = new Random(seed);
var max = bias + 4.5 * weight1 + 4.5 * weight2 + 0.5;
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
GroupId = i / groupSize,
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
var value = (float)(bias + weight1 * data.Feature1 + weight2 *
data.Feature2 + rng.NextDouble() - 0.5);
if (value < max / 3)
data.Label = 0;
else if (value < 2 * max / 3)
data.Label = 1;
else
data.Label = 2;
yield return data;
}
}
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. NDCG) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible ranking evaluation metrics for each feature, and an ImmutableDictionary mapping each feature to its RankingMetricsStatistics is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
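For ranking, each dictionary value carries per-truncation DCG and NDCG statistics. A minimal sketch, assuming the mlContext, model, and data variables from the sample above, that the full fitted pipeline can be passed with the untransformed data, and that keys are the feature column names:
// Minimal sketch: read NDCG@1 statistics from the dictionary returned by this
// overload, reusing mlContext, model, and data from the sample above.
var pfi = mlContext.Ranking.PermutationFeatureImportance(
    model, data, permutationCount: 30);
foreach (var entry in pfi)
{
    // NormalizedDiscountedCumulativeGains holds one statistic per truncation
    // level; index 0 corresponds to NDCG@1, as in the sample above.
    var ndcg1 = entry.Value.NormalizedDiscountedCumulativeGains[0];
    Console.WriteLine($"{entry.Key}: {ndcg1.Mean:G4} +/- {1.96 * ndcg1.StandardError:G4}");
}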
Applies to
PermutationFeatureImportance<TModel>(BinaryClassificationCatalog, ISingleFeaturePredictionTransformer<TModel>, IDataView, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for Binary Classification.
public static System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.BinaryClassificationMetricsStatistics> PermutationFeatureImportance<TModel> (this Microsoft.ML.BinaryClassificationCatalog catalog, Microsoft.ML.ISingleFeaturePredictionTransformer<TModel> predictionTransformer, Microsoft.ML.IDataView data, string labelColumnName = "Label", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1) where TModel : class;
static member PermutationFeatureImportance : Microsoft.ML.BinaryClassificationCatalog * Microsoft.ML.ISingleFeaturePredictionTransformer<'Model (requires 'Model : null)> * Microsoft.ML.IDataView * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.BinaryClassificationMetricsStatistics> (requires 'Model : null)
<Extension()>
Public Function PermutationFeatureImportance(Of TModel As Class) (catalog As BinaryClassificationCatalog, predictionTransformer As ISingleFeaturePredictionTransformer(Of TModel), data As IDataView, Optional labelColumnName As String = "Label", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableArray(Of BinaryClassificationMetricsStatistics)
Type Parameters
- TModel
Parameters
- catalog
- BinaryClassificationCatalog
The binary classification catalog.
- predictionTransformer
- ISingleFeaturePredictionTransformer<TModel>
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be Boolean.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Array of per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.BinaryClassification
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a linear model.
var featureColumns =
new string[] { nameof(Data.Feature1), nameof(Data.Feature2) };
var pipeline = mlContext.Transforms
.Concatenate("Features", featureColumns)
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.BinaryClassification.Trainers
.SdcaLogisticRegression());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var linearPredictor = model.LastTransformer;
// Compute the permutation metrics for the linear model using the
// normalized data.
var permutationMetrics = mlContext.BinaryClassification
.PermutationFeatureImportance(linearPredictor, transformedData,
permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on AUC.
var sortedIndices = permutationMetrics
.Select((metrics, index) => new { index, metrics.AreaUnderRocCurve })
.OrderByDescending(
feature => Math.Abs(feature.AreaUnderRocCurve.Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tModel Weight\tChange in AUC"
+ "\t95% Confidence in the Mean Change in AUC");
var auc = permutationMetrics.Select(x => x.AreaUnderRocCurve).ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:0.00}\t{2:G4}\t{3:G4}",
featureColumns[i],
linearPredictor.Model.SubModel.Weights[i],
auc[i].Mean,
1.96 * auc[i].StandardError);
}
// Expected output:
// Feature Model Weight Change in AUC 95% Confidence in the Mean Change in AUC
// Feature2 35.15 -0.387 0.002015
// Feature1 17.94 -0.1514 0.0008963
}
private class Data
{
public bool Label { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the label.
/// </param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1)
{
var rng = new Random(seed);
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
var value = (float)(bias + weight1 * data.Feature1 + weight2 *
data.Feature2 + rng.NextDouble() - 0.5);
data.Label = Sigmoid(value) > 0.5;
yield return data;
}
}
private static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-1 * x));
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. AUC) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible binary classification evaluation metrics for each feature, and an ImmutableArray of BinaryClassificationMetricsStatistics objects is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
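The permutation step described above is simple to picture in isolation. The following is a conceptual sketch over plain arrays, not ML.NET's implementation; evaluate is a hypothetical stand-in for scoring the model and computing a metric such as AUC (requires using System; using System.Linq;):
// Conceptual sketch of one PFI step (not ML.NET's implementation).
static double PermutationImportance(
    double[][] features, bool[] labels, int featureIndex,
    Func<double[][], bool[], double> evaluate, Random rng)
{
    double baseline = evaluate(features, labels);
    // Clone the rows, then shuffle a single column so each example gets a
    // random value for that feature and keeps its original values elsewhere.
    var permuted = features.Select(row => (double[])row.Clone()).ToArray();
    var column = permuted.Select(row => row[featureIndex])
        .OrderBy(_ => rng.Next()).ToArray();
    for (int i = 0; i < permuted.Length; i++)
        permuted[i][featureIndex] = column[i];
    // The feature's importance is the change in the metric it causes.
    return evaluate(permuted, labels) - baseline;
}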
Applies to
PermutationFeatureImportance<TModel>(MulticlassClassificationCatalog, ISingleFeaturePredictionTransformer<TModel>, IDataView, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for MulticlassClassification.
public static System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.MulticlassClassificationMetricsStatistics> PermutationFeatureImportance<TModel> (this Microsoft.ML.MulticlassClassificationCatalog catalog, Microsoft.ML.ISingleFeaturePredictionTransformer<TModel> predictionTransformer, Microsoft.ML.IDataView data, string labelColumnName = "Label", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1) where TModel : class;
static member PermutationFeatureImportance : Microsoft.ML.MulticlassClassificationCatalog * Microsoft.ML.ISingleFeaturePredictionTransformer<'Model (requires 'Model : null)> * Microsoft.ML.IDataView * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.MulticlassClassificationMetricsStatistics> (requires 'Model : null)
<Extension()>
Public Function PermutationFeatureImportance(Of TModel As Class) (catalog As MulticlassClassificationCatalog, predictionTransformer As ISingleFeaturePredictionTransformer(Of TModel), data As IDataView, Optional labelColumnName As String = "Label", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableArray(Of MulticlassClassificationMetricsStatistics)
Type Parameters
- TModel
Parameters
- catalog
- MulticlassClassificationCatalog
The multiclass classification catalog.
- predictionTransformer
- ISingleFeaturePredictionTransformer<TModel>
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be KeyDataViewType.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Array of per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.MulticlassClassification
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a linear model.
var featureColumns =
new string[] { nameof(Data.Feature1), nameof(Data.Feature2) };
var pipeline = mlContext.Transforms
.Concatenate("Features", featureColumns)
.Append(mlContext.Transforms.Conversion.MapValueToKey("Label"))
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.MulticlassClassification.Trainers
.SdcaMaximumEntropy());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var linearPredictor = model.LastTransformer;
// Compute the permutation metrics for the linear model using the
// normalized data.
var permutationMetrics = mlContext.MulticlassClassification
.PermutationFeatureImportance(linearPredictor, transformedData,
permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on
// microaccuracy.
var sortedIndices = permutationMetrics
.Select((metrics, index) => new { index, metrics.MicroAccuracy })
.OrderByDescending(feature => Math.Abs(feature.MicroAccuracy.Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tChange in MicroAccuracy\t95% Confidence in "
+ "the Mean Change in MicroAccuracy");
var microAccuracy = permutationMetrics.Select(x => x.MicroAccuracy)
.ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:G4}\t{2:G4}",
featureColumns[i],
microAccuracy[i].Mean,
1.96 * microAccuracy[i].StandardError);
}
// Expected output:
//Feature Change in MicroAccuracy 95% Confidence in the Mean Change in MicroAccuracy
//Feature2 -0.1395 0.0006567
//Feature1 -0.05367 0.0006908
}
private class Data
{
public float Label { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the
/// label.</param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1)
{
var rng = new Random(seed);
var max = bias + 4.5 * weight1 + 4.5 * weight2 + 0.5;
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
var value = (float)
(bias + weight1 * data.Feature1 + weight2 * data.Feature2 +
rng.NextDouble() - 0.5);
if (value < max / 3)
data.Label = 0;
else if (value < 2 * max / 3)
data.Label = 1;
else
data.Label = 2;
yield return data;
}
}
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. micro-accuracy) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible multiclass classification evaluation metrics for each feature, and an ImmutableArray of MulticlassClassificationMetricsStatistics objects is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
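Each statistic on the returned MulticlassClassificationMetricsStatistics exposes the Mean and StandardError over the permutationCount repetitions, which is what the 1.96 * StandardError term in the sample above relies on. A minimal sketch of a reusable helper, assuming the normal approximation:
// Sketch: a ~95% confidence interval for a mean metric change, using the
// normal approximation Mean +/- 1.96 * StandardError.
static (double Low, double High) ConfidenceInterval95(
    Microsoft.ML.Data.MetricStatistics stats) =>
    (stats.Mean - 1.96 * stats.StandardError,
     stats.Mean + 1.96 * stats.StandardError);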
Applies to
PermutationFeatureImportance<TModel>(RegressionCatalog, ISingleFeaturePredictionTransformer<TModel>, IDataView, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for Regression.
public static System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.RegressionMetricsStatistics> PermutationFeatureImportance<TModel> (this Microsoft.ML.RegressionCatalog catalog, Microsoft.ML.ISingleFeaturePredictionTransformer<TModel> predictionTransformer, Microsoft.ML.IDataView data, string labelColumnName = "Label", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1) where TModel : class;
static member PermutationFeatureImportance : Microsoft.ML.RegressionCatalog * Microsoft.ML.ISingleFeaturePredictionTransformer<'Model (requires 'Model : null)> * Microsoft.ML.IDataView * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.RegressionMetricsStatistics> (requires 'Model : null)
<Extension()>
Public Function PermutationFeatureImportance(Of TModel As Class) (catalog As RegressionCatalog, predictionTransformer As ISingleFeaturePredictionTransformer(Of TModel), data As IDataView, Optional labelColumnName As String = "Label", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableArray(Of RegressionMetricsStatistics)
Type Parameters
- TModel
Parameters
- catalog
- RegressionCatalog
The regression catalog.
- predictionTransformer
- ISingleFeaturePredictionTransformer<TModel>
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be Single.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Array of per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.Regression
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a linear model.
var featureColumns = new string[] { nameof(Data.Feature1),
nameof(Data.Feature2) };
var pipeline = mlContext.Transforms.Concatenate(
"Features",
featureColumns)
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.Regression.Trainers.Ols());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var linearPredictor = model.LastTransformer;
// Compute the permutation metrics for the linear model using the
// normalized data.
var permutationMetrics = mlContext.Regression
.PermutationFeatureImportance(
linearPredictor, transformedData, permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on RMSE.
var sortedIndices = permutationMetrics
.Select((metrics, index) => new
{
index,
metrics.RootMeanSquaredError
})
.OrderByDescending(feature => Math.Abs(
feature.RootMeanSquaredError.Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tModel Weight\tChange in RMSE\t95%" +
"Confidence in the Mean Change in RMSE");
var rmse = permutationMetrics.Select(x => x.RootMeanSquaredError)
.ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:0.00}\t{2:G4}\t{3:G4}",
featureColumns[i],
linearPredictor.Model.Weights[i],
rmse[i].Mean,
1.96 * rmse[i].StandardError);
}
// Expected output:
// Feature Model Weight Change in RMSE 95% Confidence in the Mean Change in RMSE
// Feature2 9.00 4.009 0.008304
// Feature1 4.48 1.901 0.003351
}
private class Data
{
public float Label { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the label.
/// </param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1)
{
var rng = new Random(seed);
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
data.Label = (float)(bias + weight1 * data.Feature1 + weight2 *
data.Feature2 + rng.NextDouble() - 0.5);
yield return data;
}
}
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. R-squared) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible regression evaluation metrics for each feature, and an ImmutableArray of RegressionMetricsStatistics objects is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
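PFI cost grows with the number of rows, features, and permutations, so on large datasets numberOfExamplesToUse can trade precision for speed. A minimal sketch, reusing linearPredictor and transformedData from the sample above; the 1,000-row cap and permutation count are arbitrary illustrations:
// Sketch: cap the rows PFI evaluates to reduce cost (values are arbitrary).
var quickPfi = mlContext.Regression.PermutationFeatureImportance(
    linearPredictor, transformedData,
    numberOfExamplesToUse: 1000, permutationCount: 10);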
Applies to
PermutationFeatureImportance<TModel>(RankingCatalog, ISingleFeaturePredictionTransformer<TModel>, IDataView, String, String, Boolean, Nullable<Int32>, Int32)
Permutation Feature Importance (PFI) for Ranking.
public static System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.RankingMetricsStatistics> PermutationFeatureImportance<TModel> (this Microsoft.ML.RankingCatalog catalog, Microsoft.ML.ISingleFeaturePredictionTransformer<TModel> predictionTransformer, Microsoft.ML.IDataView data, string labelColumnName = "Label", string rowGroupColumnName = "GroupId", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1) where TModel : class;
static member PermutationFeatureImportance : Microsoft.ML.RankingCatalog * Microsoft.ML.ISingleFeaturePredictionTransformer<'Model (requires 'Model : null)> * Microsoft.ML.IDataView * string * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableArray<Microsoft.ML.Data.RankingMetricsStatistics> (requires 'Model : null)
<Extension()>
Public Function PermutationFeatureImportance(Of TModel As Class) (catalog As RankingCatalog, predictionTransformer As ISingleFeaturePredictionTransformer(Of TModel), data As IDataView, Optional labelColumnName As String = "Label", Optional rowGroupColumnName As String = "GroupId", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableArray(Of RankingMetricsStatistics)
Type Parameters
- TModel
Parameters
- catalog
- RankingCatalog
The ranking catalog.
- predictionTransformer
- ISingleFeaturePredictionTransformer<TModel>
The model on which to evaluate feature importance.
- data
- IDataView
The evaluation data set.
- labelColumnName
- String
Label column name. The column data must be Single or KeyDataViewType.
- rowGroupColumnName
- String
GroupId column name.
- useFeatureWeightFilter
- Boolean
Use feature weights to pre-filter features.
- numberOfExamplesToUse
- Nullable<Int32>
Limit the number of examples to evaluate on. null means up to ~2 billion examples from data will be used.
- permutationCount
- Int32
The number of permutations to perform.
Returns
Array of per-feature 'contributions' to the score.
Examples
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
namespace Samples.Dynamic.Trainers.Ranking
{
public static class PermutationFeatureImportance
{
public static void Example()
{
// Create a new context for ML.NET operations. It can be used for
// exception tracking and logging, as a catalog of available operations
// and as the source of randomness.
var mlContext = new MLContext(seed: 1);
// Create sample data.
var samples = GenerateData();
// Load the sample data as an IDataView.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Define a training pipeline that concatenates features into a vector,
// normalizes them, and then trains a FastTree ranking model.
var featureColumns = new string[] { nameof(Data.Feature1), nameof(
Data.Feature2) };
var pipeline = mlContext.Transforms.Concatenate("Features",
featureColumns)
.Append(mlContext.Transforms.Conversion.MapValueToKey("Label"))
.Append(mlContext.Transforms.Conversion.MapValueToKey(
"GroupId"))
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.Ranking.Trainers.FastTree());
// Fit the pipeline to the data.
var model = pipeline.Fit(data);
// Transform the dataset.
var transformedData = model.Transform(data);
// Extract the predictor.
var predictor = model.LastTransformer;
// Compute the permutation metrics for the model using the
// normalized data.
var permutationMetrics = mlContext.Ranking.PermutationFeatureImportance(
predictor, transformedData, permutationCount: 30);
// Now let's look at which features are most important to the model
// overall. Get the feature indices sorted by their impact on NDCG@1.
var sortedIndices = permutationMetrics.Select((metrics, index) => new
{
index,
metrics.NormalizedDiscountedCumulativeGains
})
.OrderByDescending(feature => Math.Abs(
feature.NormalizedDiscountedCumulativeGains[0].Mean))
.Select(feature => feature.index);
Console.WriteLine("Feature\tChange in NDCG@1\t95% Confidence in the" +
"Mean Change in NDCG@1");
var ndcg = permutationMetrics.Select(
x => x.NormalizedDiscountedCumulativeGains).ToArray();
foreach (int i in sortedIndices)
{
Console.WriteLine("{0}\t{1:G4}\t{2:G4}",
featureColumns[i],
ndcg[i][0].Mean,
1.96 * ndcg[i][0].StandardError);
}
// Expected output:
// Feature Change in NDCG@1 95% Confidence in the Mean Change in NDCG@1
// Feature2 -0.2421 0.001748
// Feature1 -0.0513 0.001184
}
private class Data
{
public float Label { get; set; }
public int GroupId { get; set; }
public float Feature1 { get; set; }
public float Feature2 { get; set; }
}
/// <summary>
/// Generate an enumerable of Data objects, creating the label as a simple
/// linear combination of the features.
/// </summary>
/// <param name="nExamples">The number of examples.</param>
/// <param name="bias">The bias, or offset, in the calculation of the label.
/// </param>
/// <param name="weight1">The weight to multiply the first feature with to
/// compute the label.</param>
/// <param name="weight2">The weight to multiply the second feature with to
/// compute the label.</param>
/// <param name="seed">The seed for generating feature values and label
/// noise.</param>
/// <param name="groupSize">The number of examples per group.</param>
/// <returns>An enumerable of Data objects.</returns>
private static IEnumerable<Data> GenerateData(int nExamples = 10000,
double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1,
int groupSize = 5)
{
var rng = new Random(seed);
var max = bias + 4.5 * weight1 + 4.5 * weight2 + 0.5;
for (int i = 0; i < nExamples; i++)
{
var data = new Data
{
GroupId = i / groupSize,
Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
};
// Create a noisy label.
var value = (float)(bias + weight1 * data.Feature1 + weight2 *
data.Feature2 + rng.NextDouble() - 0.5);
if (value < max / 3)
data.Label = 0;
else if (value < 2 * max / 3)
data.Label = 1;
else
data.Label = 2;
yield return data;
}
}
}
}
Remarks
Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in his Random Forest paper, section 10 (Breiman. "Random Forests." Machine Learning, 2001.) The advantage of the PFI method is that it is model agnostic -- it works with any model that can be evaluated -- and it can use any dataset, not just the training set, to compute feature importance metrics.
PFI works by taking a labeled dataset, choosing a feature, and permuting the values for that feature across all the examples, so that each example now has a random value for the feature and the original values for all other features. The evaluation metric (e.g. NDCG) is then calculated for this modified dataset, and the change in the evaluation metric from the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI works by performing this permutation analysis across all the features of a model, one after another.
In this implementation, PFI computes the change in all possible ranking evaluation metrics for each feature, and an ImmutableArray of RankingMetricsStatistics objects is returned. See the example above for a demonstration of working with these results to analyze the feature importance of a model.
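Finally, permutationCount trades compute for tighter error bars: the standard error of the reported mean change shrinks roughly as 1/sqrt(permutationCount), and with a single permutation there is no spread to estimate. A minimal sketch, reusing predictor and transformedData from the sample above:
// Sketch: a fast, noisy estimate versus a slower, more precise one.
var fast = mlContext.Ranking.PermutationFeatureImportance(
    predictor, transformedData, permutationCount: 1);
var precise = mlContext.Ranking.PermutationFeatureImportance(
    predictor, transformedData, permutationCount: 100);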