OnnxCatalog.ApplyOnnxModel Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Overloads
ApplyOnnxModel(TransformsCatalog, OnnxOptions)
Creates an OnnxScoringEstimator using the specified OnnxOptions. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, Microsoft.ML.Transforms.Onnx.OnnxOptions options);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * Microsoft.ML.Transforms.Onnx.OnnxOptions -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, options As OnnxOptions) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- options
- OnnxOptions
Returns
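Examples
This overload has no dedicated sample on this page. The snippet below is a minimal sketch, assuming the SqueezeNet model file and the "data_0"/"softmaxout_1" column names used in the samples of the other overloads, and assuming the OnnxOptions members shown here (ModelFile, InputColumns, OutputColumns, GpuDeviceId, FallbackToCpu).
using Microsoft.ML;
using Microsoft.ML.Transforms.Onnx;
public static class ApplyOnnxModelWithOptions
{
    public static void Example()
    {
        var mlContext = new MLContext();
        // Configure the estimator through OnnxOptions instead of individual arguments.
        // The model path and column names are assumptions; substitute your own model.
        var options = new OnnxOptions
        {
            ModelFile = @"squeezenet\00000001\model.onnx",
            InputColumns = new[] { "data_0" },
            OutputColumns = new[] { "softmaxout_1" },
            GpuDeviceId = null,   // null scores on the CPU; set a device ID to use a GPU.
            FallbackToCpu = true  // fall back to the CPU instead of throwing on GPU errors.
        };
        var pipeline = mlContext.Transforms.ApplyOnnxModel(options);
        // Fit and transform an IDataView exactly as in the samples of the other overloads.
    }
}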
Remarks
If the Options.GpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String, Nullable<Int32>, Boolean)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the input column. The input/output columns are determined from the input/output columns of the provided ONNX model. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string modelFile, int? gpuDeviceId = default, bool fallbackToCpu = false);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string * Nullable<int> * bool -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, modelFile As String, Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- modelFile
- String
The path of the file containing the ONNX model.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
Returns
Examples
using System;
using System.IO;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
namespace Samples.Dynamic
{
public static class ApplyOnnxModel
{
public static void Example()
{
// Download the squeezenet image model from the ONNX model zoo, version 1.2
// https://github.com/onnx/models/tree/master/squeezenet or
// https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
// or use Microsoft.ML.Onnx.TestModels nuget.
var modelPath = @"squeezenet\00000001\model.onnx";
// Create ML pipeline to score the data using OnnxScoringEstimator
var mlContext = new MLContext();
// Generate sample test data.
var samples = GetTensorData();
// Convert training data to IDataView, the general data type used in
// ML.NET.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Create the pipeline to score using provided onnx model.
var pipeline = mlContext.Transforms.ApplyOnnxModel(modelPath);
// Fit the pipeline and get the transformed values
var transformedValues = pipeline.Fit(data).Transform(data);
// Retrieve model scores into Prediction class
var predictions = mlContext.Data.CreateEnumerable<Prediction>(
transformedValues, reuseRowObject: false);
// Iterate rows
foreach (var prediction in predictions)
{
int numClasses = 0;
foreach (var classScore in prediction.softmaxout_1.Take(3))
{
Console.WriteLine("Class #" + numClasses++ + " score = " +
classScore);
}
Console.WriteLine(new string('-', 10));
}
// Results look like below...
// Class #0 score = 4.544065E-05
// Class #1 score = 0.003845858
// Class #2 score = 0.0001249467
// ----------
// Class #0 score = 4.491953E-05
// Class #1 score = 0.003848222
// Class #2 score = 0.0001245592
// ----------
}
// inputSize is the overall dimensions of the model input tensor.
private const int inputSize = 224 * 224 * 3;
// A class to hold sample tensor data. Member name should match
// the inputs that the model expects (in this case, data_0)
public class TensorData
{
[VectorType(inputSize)]
public float[] data_0 { get; set; }
}
// Method to generate sample test data. Returns 2 sample rows.
public static TensorData[] GetTensorData()
{
// This can be any numerical data. Assume image pixel values.
var image1 = Enumerable.Range(0, inputSize).Select(x => (float)x /
inputSize).ToArray();
var image2 = Enumerable.Range(0, inputSize).Select(x => (float)(x +
10000) / inputSize).ToArray();
return new TensorData[] { new TensorData() { data_0 = image1 }, new
TensorData() { data_0 = image2 } };
}
// Class to contain the output values from the transformation.
// This model generates a vector of 1000 floats.
class Prediction
{
[VectorType(1000)]
public float[] softmaxout_1 { get; set; }
}
}
}
Remarks
The name/type of the input columns must exactly match the name/type of the ONNX model inputs. The name/type of the resulting output columns will match the name/type of the ONNX model outputs. If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String, IDictionary<String,Int32[]>, Nullable<Int32>, Boolean)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the input column. The input/output columns are determined from the input/output columns of the provided ONNX model. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string modelFile, System.Collections.Generic.IDictionary<string,int[]> shapeDictionary, int? gpuDeviceId = default, bool fallbackToCpu = false);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string * System.Collections.Generic.IDictionary<string, int[]> * Nullable<int> * bool -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, modelFile As String, shapeDictionary As IDictionary(Of String, Integer()), Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- modelFile
- String
The path of the file containing the ONNX model.
- shapeDictionary
- IDictionary<String,Int32[]>
ONNX shapes to be used over those loaded from modelFile. For keys, use the names as stated in the ONNX model, e.g., "input". Stating the shapes with this parameter is particularly useful for working with variable-dimension inputs and outputs.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
Returns
Examples
using System;
using System.IO;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
namespace Samples.Dynamic
{
public static class ApplyOnnxModel
{
public static void Example()
{
// Download the squeezenet image model from the ONNX model zoo, version 1.2
// https://github.com/onnx/models/tree/master/squeezenet or
// https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
// or use Microsoft.ML.Onnx.TestModels nuget.
var modelPath = @"squeezenet\00000001\model.onnx";
// Create ML pipeline to score the data using OnnxScoringEstimator
var mlContext = new MLContext();
// Generate sample test data.
var samples = GetTensorData();
// Convert training data to IDataView, the general data type used in
// ML.NET.
var data = mlContext.Data.LoadFromEnumerable(samples);
// Create the pipeline to score using provided onnx model.
var pipeline = mlContext.Transforms.ApplyOnnxModel(modelPath);
// Fit the pipeline and get the transformed values
var transformedValues = pipeline.Fit(data).Transform(data);
// Retrieve model scores into Prediction class
var predictions = mlContext.Data.CreateEnumerable<Prediction>(
transformedValues, reuseRowObject: false);
// Iterate rows
foreach (var prediction in predictions)
{
int numClasses = 0;
foreach (var classScore in prediction.softmaxout_1.Take(3))
{
Console.WriteLine("Class #" + numClasses++ + " score = " +
classScore);
}
Console.WriteLine(new string('-', 10));
}
// Results look like below...
// Class #0 score = 4.544065E-05
// Class #1 score = 0.003845858
// Class #2 score = 0.0001249467
// ----------
// Class #0 score = 4.491953E-05
// Class #1 score = 0.003848222
// Class #2 score = 0.0001245592
// ----------
}
// inputSize is the overall dimensions of the model input tensor.
private const int inputSize = 224 * 224 * 3;
// A class to hold sample tensor data. Member name should match
// the inputs that the model expects (in this case, data_0)
public class TensorData
{
[VectorType(inputSize)]
public float[] data_0 { get; set; }
}
// Method to generate sample test data. Returns 2 sample rows.
public static TensorData[] GetTensorData()
{
// This can be any numerical data. Assume image pixel values.
var image1 = Enumerable.Range(0, inputSize).Select(x => (float)x /
inputSize).ToArray();
var image2 = Enumerable.Range(0, inputSize).Select(x => (float)(x +
10000) / inputSize).ToArray();
return new TensorData[] { new TensorData() { data_0 = image1 }, new
TensorData() { data_0 = image2 } };
}
// Class to contain the output values from the transformation.
// This model generates a vector of 1000 floats.
class Prediction
{
[VectorType(1000)]
public float[] softmaxout_1 { get; set; }
}
}
}
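The sample above lets the estimator infer tensor shapes from the model file. The lines below are a minimal sketch of passing shapeDictionary explicitly to this overload, reusing mlContext and modelPath from the sample; the "data_0" name and the 1x3x224x224 shape are assumptions based on the SqueezeNet model used here.
// Minimal sketch: state the input shape explicitly via shapeDictionary.
// The 1x3x224x224 shape for "data_0" is an assumption for the SqueezeNet model.
var shapeDictionary = new System.Collections.Generic.Dictionary<string, int[]>
{
    { "data_0", new[] { 1, 3, 224, 224 } }
};
var pipelineWithShapes = mlContext.Transforms.ApplyOnnxModel(modelPath, shapeDictionary);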
Remarks
The name/type of the input columns must exactly match the name/type of the ONNX model inputs. The name/type of the resulting output columns will match the name/type of the ONNX model outputs. If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String, String, String, Nullable<Int32>, Boolean)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the inputColumnName column. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string outputColumnName, string inputColumnName, string modelFile, int? gpuDeviceId = default, bool fallbackToCpu = false);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string * string * string * Nullable<int> * bool -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, outputColumnName As String, inputColumnName As String, modelFile As String, Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- outputColumnName
- String
The output column resulting from the transformation.
- inputColumnName
- String
The input column.
- modelFile
- String
The path of the file containing the ONNX model.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
Returns
Examples
using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms.Image;
namespace Samples.Dynamic
{
public static class ApplyOnnxModelWithInMemoryImages
{
// Example of applying ONNX transform on in-memory images.
public static void Example()
{
// Download the squeezenet image model from ONNX model zoo, version 1.2
// https://github.com/onnx/models/tree/master/vision/classification/squeezenet or use
// Microsoft.ML.Onnx.TestModels nuget.
// It's a multiclass classifier. It consumes an input "data_0" and
// produces an output "softmaxout_1".
var modelPath = @"squeezenet\00000001\model.onnx";
// Create ML pipeline to score the data using OnnxScoringEstimator
var mlContext = new MLContext();
// Create in-memory data points. Their Image/Scores fields are the
// input/output of the used ONNX model.
var dataPoints = new ImageDataPoint[]
{
new ImageDataPoint(red: 255, green: 0, blue: 0), // Red color
new ImageDataPoint(red: 0, green: 128, blue: 0) // Green color
};
// Convert training data to IDataView, the general data type used in
// ML.NET.
var dataView = mlContext.Data.LoadFromEnumerable(dataPoints);
// Create an ML.NET pipeline which contains two steps. First,
// ExtractPixels is used to convert the 224x224 image to a 3x224x224
// float tensor. Then the float tensor is fed into an ONNX model with an
// input called "data_0" and an output called "softmaxout_1". Note that
// "data_0" and "softmaxout_1" are model input and output names stored
// in the used ONNX model file. Users may need to inspect their own
// models to get the right input and output column names.
// Map column "Image" to column "data_0"
// Map column "data_0" to column "softmaxout_1"
var pipeline = mlContext.Transforms.ExtractPixels("data_0", "Image")
.Append(mlContext.Transforms.ApplyOnnxModel("softmaxout_1",
"data_0", modelPath));
var model = pipeline.Fit(dataView);
var onnx = model.Transform(dataView);
// Convert IDataView back to IEnumerable<ImageDataPoint> so that user
// can inspect the output, column "softmaxout_1", of the ONNX transform.
// Note that column "softmaxout_1" is stored in ImageDataPoint.Scores
// because the added [ColumnName("softmaxout_1")] attribute
// indicates that ImageDataPoint.Scores is equivalent to column
// "softmaxout_1".
var transformedDataPoints = mlContext.Data.CreateEnumerable<
ImageDataPoint>(onnx, false).ToList();
// The scores are probabilities of all possible classes, so they should
// all be positive.
foreach (var dataPoint in transformedDataPoints)
{
var firstClassProb = dataPoint.Scores.First();
var lastClassProb = dataPoint.Scores.Last();
Console.WriteLine("The probability of being the first class is " +
(firstClassProb * 100) + "%.");
Console.WriteLine($"The probability of being the last class is " +
(lastClassProb * 100) + "%.");
}
// Expected output:
// The probability of being the first class is 0.002542659%.
// The probability of being the last class is 0.0292684%.
// The probability of being the first class is 0.02258059%.
// The probability of being the last class is 0.394428%.
}
// This class is used in Example() to describe data points which will be
// consumed by ML.NET pipeline.
private class ImageDataPoint
{
// Height of Image.
private const int height = 224;
// Width of Image.
private const int width = 224;
// Image will be consumed by ONNX image multiclass classification model.
[ImageType(height, width)]
public MLImage Image { get; set; }
// Expected output of ONNX model. It contains probabilities of all
// classes. Note that the ColumnName below should match the output name
// in the used ONNX model file.
[ColumnName("softmaxout_1")]
public float[] Scores { get; set; }
public ImageDataPoint()
{
Image = null;
}
public ImageDataPoint(byte red, byte green, byte blue)
{
byte[] imageData = new byte[width * height * 4]; // 4 for the red, green, blue and alpha colors
for (int i = 0; i < imageData.Length; i += 4)
{
// Fill the buffer with the Bgra32 format
imageData[i] = blue;
imageData[i + 1] = green;
imageData[i + 2] = red;
imageData[i + 3] = 255;
}
Image = MLImage.CreateFromPixels(width, height, MLPixelFormat.Bgra32, imageData);
}
}
}
}
Remarks
If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String[], String[], String, Nullable<Int32>, Boolean)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the inputColumnNames columns. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string[] outputColumnNames, string[] inputColumnNames, string modelFile, int? gpuDeviceId = default, bool fallbackToCpu = false);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string[] * string[] * string * Nullable<int> * bool -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, outputColumnNames As String(), inputColumnNames As String(), modelFile As String, Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- outputColumnNames
- String[]
The output columns resulting from the transformation.
- inputColumnNames
- String[]
The input columns.
- modelFile
- String
The path of the file containing the ONNX model.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
Returns
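Examples
This overload has no dedicated sample on this page. The snippet below is a minimal sketch of scoring with explicitly named input and output columns; the model path and the "data_0"/"softmaxout_1" names are assumptions taken from the SqueezeNet samples shown for the other overloads.
using Microsoft.ML;
public static class ApplyOnnxModelWithNamedColumns
{
    public static void Example()
    {
        var mlContext = new MLContext();
        // Map the IDataView column "data_0" to the model input of the same name and
        // surface the model output as the column "softmaxout_1".
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            outputColumnNames: new[] { "softmaxout_1" },
            inputColumnNames: new[] { "data_0" },
            modelFile: @"squeezenet\00000001\model.onnx");
        // Fit and transform an IDataView exactly as in the samples of the other overloads.
    }
}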
Remarks
If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String, String, String, IDictionary<String,Int32[]>, Nullable<Int32>, Boolean)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the inputColumnName column. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string outputColumnName, string inputColumnName, string modelFile, System.Collections.Generic.IDictionary<string,int[]> shapeDictionary, int? gpuDeviceId = default, bool fallbackToCpu = false);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string * string * string * System.Collections.Generic.IDictionary<string, int[]> * Nullable<int> * bool -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, outputColumnName As String, inputColumnName As String, modelFile As String, shapeDictionary As IDictionary(Of String, Integer()), Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- outputColumnName
- String
The output column resulting from the transformation.
- inputColumnName
- String
The input column.
- modelFile
- String
The path of the file containing the ONNX model.
- shapeDictionary
- IDictionary<String,Int32[]>
ONNX shapes to be used over those loaded from modelFile. For keys, use the names as stated in the ONNX model, e.g., "input". Stating the shapes with this parameter is particularly useful for working with variable-dimension inputs and outputs.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
Returns
Examples
using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms.Image;
namespace Samples.Dynamic
{
public static class ApplyOnnxModelWithInMemoryImages
{
// Example of applying ONNX transform on in-memory images.
public static void Example()
{
// Download the squeezenet image model from ONNX model zoo, version 1.2
// https://github.com/onnx/models/tree/master/vision/classification/squeezenet or use
// Microsoft.ML.Onnx.TestModels nuget.
// It's a multiclass classifier. It consumes an input "data_0" and
// produces an output "softmaxout_1".
var modelPath = @"squeezenet\00000001\model.onnx";
// Create ML pipeline to score the data using OnnxScoringEstimator
var mlContext = new MLContext();
// Create in-memory data points. Their Image/Scores fields are the
// input/output of the used ONNX model.
var dataPoints = new ImageDataPoint[]
{
new ImageDataPoint(red: 255, green: 0, blue: 0), // Red color
new ImageDataPoint(red: 0, green: 128, blue: 0) // Green color
};
// Convert training data to IDataView, the general data type used in
// ML.NET.
var dataView = mlContext.Data.LoadFromEnumerable(dataPoints);
// Create an ML.NET pipeline which contains two steps. First,
// ExtractPixels is used to convert the 224x224 image to a 3x224x224
// float tensor. Then the float tensor is fed into an ONNX model with an
// input called "data_0" and an output called "softmaxout_1". Note that
// "data_0" and "softmaxout_1" are model input and output names stored
// in the used ONNX model file. Users may need to inspect their own
// models to get the right input and output column names.
// Map column "Image" to column "data_0"
// Map column "data_0" to column "softmaxout_1"
var pipeline = mlContext.Transforms.ExtractPixels("data_0", "Image")
.Append(mlContext.Transforms.ApplyOnnxModel("softmaxout_1",
"data_0", modelPath));
var model = pipeline.Fit(dataView);
var onnx = model.Transform(dataView);
// Convert IDataView back to IEnumerable<ImageDataPoint> so that user
// can inspect the output, column "softmaxout_1", of the ONNX transform.
// Note that column "softmaxout_1" is stored in ImageDataPoint.Scores
// because the added [ColumnName("softmaxout_1")] attribute
// indicates that ImageDataPoint.Scores is equivalent to column
// "softmaxout_1".
var transformedDataPoints = mlContext.Data.CreateEnumerable<
ImageDataPoint>(onnx, false).ToList();
// The scores are probabilities of all possible classes, so they should
// all be positive.
foreach (var dataPoint in transformedDataPoints)
{
var firstClassProb = dataPoint.Scores.First();
var lastClassProb = dataPoint.Scores.Last();
Console.WriteLine("The probability of being the first class is " +
(firstClassProb * 100) + "%.");
Console.WriteLine($"The probability of being the last class is " +
(lastClassProb * 100) + "%.");
}
// Expected output:
// The probability of being the first class is 0.002542659%.
// The probability of being the last class is 0.0292684%.
// The probability of being the first class is 0.02258059%.
// The probability of being the last class is 0.394428%.
}
// This class is used in Example() to describe data points which will be
// consumed by ML.NET pipeline.
private class ImageDataPoint
{
// Height of Image.
private const int height = 224;
// Width of Image.
private const int width = 224;
// Image will be consumed by ONNX image multiclass classification model.
[ImageType(height, width)]
public MLImage Image { get; set; }
// Expected output of ONNX model. It contains probabilities of all
// classes. Note that the ColumnName below should match the output name
// in the used ONNX model file.
[ColumnName("softmaxout_1")]
public float[] Scores { get; set; }
public ImageDataPoint()
{
Image = null;
}
public ImageDataPoint(byte red, byte green, byte blue)
{
byte[] imageData = new byte[width * height * 4]; // 4 for the red, green, blue and alpha colors
for (int i = 0; i < imageData.Length; i += 4)
{
// Fill the buffer with the Bgra32 format
imageData[i] = blue;
imageData[i + 1] = green;
imageData[i + 2] = red;
imageData[i + 3] = 255;
}
Image = MLImage.CreateFromPixels(width, height, MLPixelFormat.Bgra32, imageData);
}
}
}
}
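The sample above passes no shape information and lets the estimator read the shapes from the model file. The lines below are a minimal sketch of supplying shapeDictionary to this overload, reusing mlContext and modelPath from the sample; the 1x3x224x224 shape for "data_0" is an assumption for the SqueezeNet model.
// Minimal sketch: state the input shape explicitly via shapeDictionary.
var shapes = new System.Collections.Generic.Dictionary<string, int[]>
{
    { "data_0", new[] { 1, 3, 224, 224 } }
};
var pipelineWithShapes = mlContext.Transforms.ExtractPixels("data_0", "Image")
    .Append(mlContext.Transforms.ApplyOnnxModel(
        "softmaxout_1", "data_0", modelPath, shapes));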
Remarks
If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String[], String[], String, IDictionary<String,Int32[]>, Nullable<Int32>, Boolean)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the inputColumnNames columns. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string[] outputColumnNames, string[] inputColumnNames, string modelFile, System.Collections.Generic.IDictionary<string,int[]> shapeDictionary, int? gpuDeviceId = default, bool fallbackToCpu = false);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string[] * string[] * string * System.Collections.Generic.IDictionary<string, int[]> * Nullable<int> * bool -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, outputColumnNames As String(), inputColumnNames As String(), modelFile As String, shapeDictionary As IDictionary(Of String, Integer()), Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- outputColumnNames
- String[]
The output columns resulting from the transformation.
- inputColumnNames
- String[]
The input columns.
- modelFile
- String
The path of the file containing the ONNX model.
- shapeDictionary
- IDictionary<String,Int32[]>
ONNX shapes to be used over those loaded from modelFile. For keys, use the names as stated in the ONNX model, e.g., "input". Stating the shapes with this parameter is particularly useful for working with variable-dimension inputs and outputs.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
Returns
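Examples
This overload has no dedicated sample on this page. The snippet below is a minimal sketch combining explicitly named columns with a shapeDictionary; the model path, the "data_0"/"softmaxout_1" names, and the 1x3x224x224 shape are assumptions based on the SqueezeNet samples shown for the other overloads.
using System.Collections.Generic;
using Microsoft.ML;
public static class ApplyOnnxModelWithNamedColumnsAndShapes
{
    public static void Example()
    {
        var mlContext = new MLContext();
        // State the model input shape explicitly; useful for variable-dimension inputs.
        var shapeDictionary = new Dictionary<string, int[]>
        {
            { "data_0", new[] { 1, 3, 224, 224 } }
        };
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            outputColumnNames: new[] { "softmaxout_1" },
            inputColumnNames: new[] { "data_0" },
            modelFile: @"squeezenet\00000001\model.onnx",
            shapeDictionary: shapeDictionary);
        // Fit and transform an IDataView exactly as in the samples of the other overloads.
    }
}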
Remarks
If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.
Applies to
ApplyOnnxModel(TransformsCatalog, String[], String[], String, IDictionary<String,Int32[]>, Nullable<Int32>, Boolean, Int32)
Creates an OnnxScoringEstimator, which applies a pre-trained ONNX model to the inputColumnNames columns. See OnnxScoringEstimator for details about the necessary dependencies and how to run it on a GPU.
public static Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator ApplyOnnxModel (this Microsoft.ML.TransformsCatalog catalog, string[] outputColumnNames, string[] inputColumnNames, string modelFile, System.Collections.Generic.IDictionary<string,int[]> shapeDictionary, int? gpuDeviceId = default, bool fallbackToCpu = false, int recursionLimit = 100);
static member ApplyOnnxModel : Microsoft.ML.TransformsCatalog * string[] * string[] * string * System.Collections.Generic.IDictionary<string, int[]> * Nullable<int> * bool * int -> Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator
<Extension()>
Public Function ApplyOnnxModel (catalog As TransformsCatalog, outputColumnNames As String(), inputColumnNames As String(), modelFile As String, shapeDictionary As IDictionary(Of String, Integer()), Optional gpuDeviceId As Nullable(Of Integer) = Nothing, Optional fallbackToCpu As Boolean = false, Optional recursionLimit As Integer = 100) As OnnxScoringEstimator
Parameters
- catalog
- TransformsCatalog
The transform's catalog.
- outputColumnNames
- String[]
The output columns resulting from the transformation.
- inputColumnNames
- String[]
The input columns.
- modelFile
- String
The path of the file containing the ONNX model.
- shapeDictionary
- IDictionary<String,Int32[]>
ONNX shapes to be used over those loaded from modelFile. For keys, use the names as stated in the ONNX model, e.g., "input". Stating the shapes with this parameter is particularly useful for working with variable-dimension inputs and outputs.
- fallbackToCpu
- Boolean
If a GPU error occurs, raise an exception or fall back to the CPU.
- recursionLimit
- Int32
Optional, specifies the Protobuf CodedInputStream recursion limit. The default value is 100.
Returns
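Examples
This overload has no dedicated sample on this page. The snippet below is a minimal sketch of raising the Protobuf recursion limit for a deeply nested model graph; the model path and the "data_0"/"softmaxout_1" names are assumptions based on the SqueezeNet samples shown for the other overloads.
using Microsoft.ML;
public static class ApplyOnnxModelWithRecursionLimit
{
    public static void Example()
    {
        var mlContext = new MLContext();
        // Raise the Protobuf CodedInputStream recursion limit above the default of 100,
        // which can be necessary for models with deeply nested graph structures.
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            outputColumnNames: new[] { "softmaxout_1" },
            inputColumnNames: new[] { "data_0" },
            modelFile: @"squeezenet\00000001\model.onnx",
            shapeDictionary: null,
            recursionLimit: 150);
        // Fit and transform an IDataView exactly as in the samples of the other overloads.
    }
}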
Remarks
If the gpuDeviceId value is null, the MLContext.GpuDeviceId value will be used if it is not null.