TextCatalog.ProduceHashedNgrams Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Overloads
ProduceHashedNgrams(TransformsCatalog+TextTransforms, String, String, Int32, Int32, Int32, Boolean, UInt32, Boolean, Int32, Boolean)
Creates a NgramHashingEstimator which copies the data from the column specified in inputColumnName to a new column: outputColumnName, and produces a vector of counts of hashed n-grams.
ProduceHashedNgrams(TransformsCatalog+TextTransforms, String, String[], Int32, Int32, Int32, Boolean, UInt32, Boolean, Int32, Boolean)
Creates a NgramHashingEstimator which takes the data from multiple columns specified in inputColumnNames into a new column: outputColumnName, and produces a vector of counts of hashed n-grams.
ProduceHashedNgrams(TransformsCatalog+TextTransforms, String, String, Int32, Int32, Int32, Boolean, UInt32, Boolean, Int32, Boolean)
Creates a NgramHashingEstimator which copies the data from the column specified in inputColumnName to a new column: outputColumnName, and produces a vector of counts of hashed n-grams.
public static Microsoft.ML.Transforms.Text.NgramHashingEstimator ProduceHashedNgrams (this Microsoft.ML.TransformsCatalog.TextTransforms catalog, string outputColumnName, string inputColumnName = default, int numberOfBits = 16, int ngramLength = 2, int skipLength = 0, bool useAllLengths = true, uint seed = 314489979, bool useOrderedHashing = true, int maximumNumberOfInverts = 0, bool rehashUnigrams = false);
static member ProduceHashedNgrams : Microsoft.ML.TransformsCatalog.TextTransforms * string * string * int * int * int * bool * uint32 * bool * int * bool -> Microsoft.ML.Transforms.Text.NgramHashingEstimator
<Extension()>
Public Function ProduceHashedNgrams (catalog As TransformsCatalog.TextTransforms, outputColumnName As String, Optional inputColumnName As String = Nothing, Optional numberOfBits As Integer = 16, Optional ngramLength As Integer = 2, Optional skipLength As Integer = 0, Optional useAllLengths As Boolean = true, Optional seed As UInteger = 314489979, Optional useOrderedHashing As Boolean = true, Optional maximumNumberOfInverts As Integer = 0, Optional rehashUnigrams As Boolean = false) As NgramHashingEstimator
Parameters
- catalog
- TransformsCatalog.TextTransforms
The transform's catalog.
- inputColumnName
- String
Name of the column to copy the data from. This estimator operates over vectors of key type.
- numberOfBits
- Int32
Number of bits to hash into. Must be between 1 and 30, inclusive.
- ngramLength
- Int32
Ngram length.
- skipLength
- Int32
Maximum number of tokens to skip when constructing an n-gram.
- useAllLengths
- Boolean
Whether to include all n-gram lengths up to ngramLength, or only ngramLength.
- seed
- UInt32
Hashing seed.
- useOrderedHashing
- Boolean
Whether the position of each source column should be included in the hash (when there are multiple source columns).
- maximumNumberOfInverts
- Int32
During hashing we construct mappings between the original values and the produced hash values.
Text representations of the original values are stored in the slot names of the annotations of the new column. Hashing, as such, can map many initial values to one.
maximumNumberOfInverts specifies the upper bound of the number of distinct input values mapping to a hash that should be retained.
0 does not retain any input values. -1 retains all input values mapping to each hash.
- rehashUnigrams
- Boolean
Whether to rehash unigrams.
Returns
NgramHashingEstimator
Remarks
NgramHashingEstimator is different from WordHashBagEstimator in that NgramHashingEstimator takes tokenized text as input, while WordHashBagEstimator tokenizes the text internally.
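To make the remark concrete, here is a minimal sketch of this single-column overload (the column names "Tokens" and "NgramFeatures", the sample sentences, and the helper classes are illustrative assumptions, not prescribed by the API); it mirrors the fuller sample shown under the second overload: tokenize, convert the tokens to keys, then produce the hashed n-grams.

using System;
using System.Collections.Generic;
using Microsoft.ML;

namespace Samples.Dynamic
{
    public static class ProduceHashedNgramsSingleColumn
    {
        public static void Example()
        {
            var mlContext = new MLContext();

            // A tiny in-memory dataset; illustrative text only.
            var samples = new List<TextData>()
            {
                new TextData(){ Text = "ProduceHashedNgrams hashes n-gram counts into a fixed-size vector." },
                new TextData(){ Text = "Its input column must be a vector of keys, hence the two preceding steps." },
            };
            var dataview = mlContext.Data.LoadFromEnumerable(samples);

            // The estimator operates over vectors of key type, so tokenize the raw
            // text and map the tokens to keys before calling ProduceHashedNgrams.
            var pipeline = mlContext.Transforms.Text.TokenizeIntoWords("Tokens", "Text")
                .Append(mlContext.Transforms.Conversion.MapValueToKey("Tokens"))
                .Append(mlContext.Transforms.Text.ProduceHashedNgrams(
                    "NgramFeatures", "Tokens",
                    numberOfBits: 5,     // output vector has 2^5 = 32 slots
                    ngramLength: 2));

            var transformedData = pipeline.Fit(dataview).Transform(dataview);

            // Each row now carries a 32-element vector of hashed bigram counts.
            var rows = mlContext.Data.CreateEnumerable<TransformedTextData>(
                transformedData, reuseRowObject: false);
            foreach (var row in rows)
                Console.WriteLine("Feature vector length: " + row.NgramFeatures.Length);
        }

        private class TextData
        {
            public string Text { get; set; }
        }

        private class TransformedTextData : TextData
        {
            public float[] NgramFeatures { get; set; }
        }
    }
}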
Applies to
ProduceHashedNgrams(TransformsCatalog+TextTransforms, String, String[], Int32, Int32, Int32, Boolean, UInt32, Boolean, Int32, Boolean)
Creates a NgramHashingEstimator which takes the data from multiple columns specified in inputColumnNames into a new column: outputColumnName, and produces a vector of counts of hashed n-grams.
public static Microsoft.ML.Transforms.Text.NgramHashingEstimator ProduceHashedNgrams (this Microsoft.ML.TransformsCatalog.TextTransforms catalog, string outputColumnName, string[] inputColumnNames = default, int numberOfBits = 16, int ngramLength = 2, int skipLength = 0, bool useAllLengths = true, uint seed = 314489979, bool useOrderedHashing = true, int maximumNumberOfInverts = 0, bool rehashUnigrams = false);
static member ProduceHashedNgrams : Microsoft.ML.TransformsCatalog.TextTransforms * string * string[] * int * int * int * bool * uint32 * bool * int * bool -> Microsoft.ML.Transforms.Text.NgramHashingEstimator
<Extension()>
Public Function ProduceHashedNgrams (catalog As TransformsCatalog.TextTransforms, outputColumnName As String, Optional inputColumnNames As String() = Nothing, Optional numberOfBits As Integer = 16, Optional ngramLength As Integer = 2, Optional skipLength As Integer = 0, Optional useAllLengths As Boolean = true, Optional seed As UInteger = 314489979, Optional useOrderedHashing As Boolean = true, Optional maximumNumberOfInverts As Integer = 0, Optional rehashUnigrams As Boolean = false) As NgramHashingEstimator
Parameters
- catalog
- TransformsCatalog.TextTransforms
The transform's catalog.
- inputColumnNames
- String[]
Names of the multiple columns to take the data from. This estimator operates over vectors of key type.
- numberOfBits
- Int32
Number of bits to hash into. Must be between 1 and 30, inclusive.
- ngramLength
- Int32
Ngram length.
- skipLength
- Int32
Maximum number of tokens to skip when constructing an n-gram.
- useAllLengths
- Boolean
Whether to include all n-gram lengths up to ngramLength, or only ngramLength.
- seed
- UInt32
Hashing seed.
- useOrderedHashing
- Boolean
Whether the position of each source column should be included in the hash (when there are multiple source columns).
- maximumNumberOfInverts
- Int32
During hashing we construct mappings between the original values and the produced hash values.
Text representations of the original values are stored in the slot names of the annotations of the new column. Hashing, as such, can map many initial values to one.
maximumNumberOfInverts specifies the upper bound of the number of distinct input values mapping to a hash that should be retained.
0 does not retain any input values. -1 retains all input values mapping to each hash.
- rehashUnigrams
- Boolean
Whether to rehash unigrams.
Returns
NgramHashingEstimator
Examples
using System;
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

namespace Samples.Dynamic
{
    public static class ProduceHashedNgrams
    {
        public static void Example()
        {
            // Create a new ML context, for ML.NET operations. It can be used for
            // exception tracking and logging, as well as the source of randomness.
            var mlContext = new MLContext();

            // Create a small dataset as an IEnumerable.
            var samples = new List<TextData>()
            {
                new TextData(){ Text = "This is an example to compute n-grams " +
                    "using hashing." },
                new TextData(){ Text = "N-gram is a sequence of 'N' consecutive" +
                    " words/tokens." },
                new TextData(){ Text = "ML.NET's ProduceHashedNgrams API " +
                    "produces count of n-grams and hashes it as an index into a " +
                    "vector of given bit length." },
                new TextData(){ Text = "The hashing reduces the size of the " +
                    "output feature vector" },
                new TextData(){ Text = "which is useful in case when number of " +
                    "n-grams is very large." },
            };

            // Convert training data to IDataView.
            var dataview = mlContext.Data.LoadFromEnumerable(samples);

            // A pipeline for converting text into numeric hashed n-gram features.
            // The following call to 'ProduceHashedNgrams' requires the tokenized
            // text/string as input. This is achieved by calling
            // 'TokenizeIntoWords' first followed by 'ProduceHashedNgrams'.
            // Please note that the length of the output feature vector depends on
            // the 'numberOfBits' settings.
            var textPipeline = mlContext.Transforms.Text.TokenizeIntoWords("Tokens",
                "Text")
                .Append(mlContext.Transforms.Conversion.MapValueToKey("Tokens"))
                .Append(mlContext.Transforms.Text.ProduceHashedNgrams(
                    "NgramFeatures", "Tokens",
                    numberOfBits: 5,
                    ngramLength: 3,
                    useAllLengths: false,
                    maximumNumberOfInverts: 1));

            // Fit to data.
            var textTransformer = textPipeline.Fit(dataview);
            var transformedDataView = textTransformer.Transform(dataview);

            // Create the prediction engine to get the features extracted from the
            // text.
            var predictionEngine = mlContext.Model.CreatePredictionEngine<TextData,
                TransformedTextData>(textTransformer);

            // Convert the text into numeric features.
            var prediction = predictionEngine.Predict(samples[0]);

            // Print the length of the feature vector.
            Console.WriteLine("Number of Features: " + prediction.NgramFeatures
                .Length);

            // Preview of the produced n-grams.
            // Get the slot names from the column's metadata.
            // The slot names for a vector column correspond to the names
            // associated with each position in the vector.
            VBuffer<ReadOnlyMemory<char>> slotNames = default;
            transformedDataView.Schema["NgramFeatures"].GetSlotNames(ref slotNames);

            var NgramFeaturesColumn = transformedDataView.GetColumn<VBuffer<float>>(
                transformedDataView.Schema["NgramFeatures"]);
            var slots = slotNames.GetValues();
            Console.Write("N-grams: ");
            foreach (var featureRow in NgramFeaturesColumn)
            {
                foreach (var item in featureRow.Items())
                    Console.Write($"{slots[item.Key]} ");
                Console.WriteLine();
            }

            // Print the first 10 feature values.
            Console.Write("Features: ");
            for (int i = 0; i < 10; i++)
                Console.Write($"{prediction.NgramFeatures[i]:F4} ");

            // Expected output:
            //  Number of Features: 32
            //  N-grams: This|is|an example|to|compute compute|n-grams|using n-grams|using|hashing. an|example|to is|an|example a|sequence|of of|'N'|consecutive is|a|sequence N-gram|is|a ...
            //  Features: 0.0000 0.0000 2.0000 0.0000 0.0000 1.0000 0.0000 0.0000 1.0000 0.0000 ...
        }

        private class TextData
        {
            public string Text { get; set; }
        }

        private class TransformedTextData : TextData
        {
            public float[] NgramFeatures { get; set; }
        }
    }
}
Remarks
NgramHashingEstimator is different from WordHashBagEstimator in that NgramHashingEstimator takes tokenized text as input, while WordHashBagEstimator tokenizes the text internally.
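The sample above passes a single tokenized column; a hedged sketch specific to this String[] overload (the "Title" and "Body" columns, the intermediate token columns, and the sample rows are illustrative assumptions, not part of the API) could combine several key-typed columns into one hashed n-gram vector:

using System;
using System.Collections.Generic;
using Microsoft.ML;

namespace Samples.Dynamic
{
    public static class ProduceHashedNgramsMultiColumn
    {
        public static void Example()
        {
            var mlContext = new MLContext();

            // Two text columns per row; illustrative data only.
            var samples = new List<TextData>()
            {
                new TextData(){ Title = "Hashed n-grams",
                    Body = "Counts of n-grams are hashed into a fixed-size vector." },
                new TextData(){ Title = "Multiple columns",
                    Body = "This overload extracts n-grams from several key-typed columns at once." },
            };
            var dataview = mlContext.Data.LoadFromEnumerable(samples);

            // Each source column must be tokenized and converted to keys before it
            // can be fed to ProduceHashedNgrams.
            var pipeline = mlContext.Transforms.Text.TokenizeIntoWords("TitleTokens", "Title")
                .Append(mlContext.Transforms.Text.TokenizeIntoWords("BodyTokens", "Body"))
                .Append(mlContext.Transforms.Conversion.MapValueToKey("TitleTokens"))
                .Append(mlContext.Transforms.Conversion.MapValueToKey("BodyTokens"))
                .Append(mlContext.Transforms.Text.ProduceHashedNgrams(
                    "NgramFeatures",
                    new[] { "TitleTokens", "BodyTokens" },
                    numberOfBits: 5,
                    ngramLength: 2,
                    useOrderedHashing: true)); // include each source column's position in the hash

            var transformedData = pipeline.Fit(dataview).Transform(dataview);

            // Each row now carries one 2^5 = 32-element vector built from both columns.
            var rows = mlContext.Data.CreateEnumerable<TransformedTextData>(
                transformedData, reuseRowObject: false);
            foreach (var row in rows)
                Console.WriteLine("Feature vector length: " + row.NgramFeatures.Length);
        }

        private class TextData
        {
            public string Title { get; set; }
            public string Body { get; set; }
        }

        private class TransformedTextData : TextData
        {
            public float[] NgramFeatures { get; set; }
        }
    }
}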