# Core ML 2 in Xamarin.iOS
Core ML is a machine learning technology available on iOS, macOS, tvOS, and watchOS. It allows apps to make predictions based on machine learning models.
In iOS 12, Core ML includes a batch processing API. This API makes Core ML more efficient and provides performance improvements in scenarios where a model is used to make a sequence of predictions.
## Generate sample data
In `ViewController`, the sample app's `ViewDidLoad` method calls `LoadMLModel`, which loads the included Core ML model:
```csharp
void LoadMLModel()
{
    var assetPath = NSBundle.MainBundle.GetUrlForResource("CoreMLModel/MarsHabitatPricer", "mlmodelc");
    model = MLModel.Create(assetPath, out NSError mlErr);
}
```
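If the compiled model can't be loaded, `MLModel.Create` returns `null` and populates the `NSError` out parameter. A minimal sketch of a guard that could follow the call above (the logging is illustrative, not part of the sample):

```csharp
// Hypothetical guard, continuing inside LoadMLModel: Create returns
// null on failure, and mlErr describes what went wrong.
if (model == null)
{
    Console.WriteLine($"Failed to load model: {mlErr?.LocalizedDescription}");
}
```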
Then, the sample app creates 100,000 `MarsHabitatPricerInput` objects to use as input for sequential Core ML predictions. Each generated sample has a random value set for the number of solar panels, the number of greenhouses, and the number of acres:
```csharp
async void CreateInputs(int num)
{
    // ...
    Random r = new Random();
    await Task.Run(() =>
    {
        for (int i = 0; i < num; i++)
        {
            double solarPanels = r.NextDouble() * MaxSolarPanels;
            double greenHouses = r.NextDouble() * MaxGreenHouses;
            double acres = r.NextDouble() * MaxAcres;
            inputs[i] = new MarsHabitatPricerInput(solarPanels, greenHouses, acres);
        }
    });
    // ...
}
```
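`MarsHabitatPricerInput` implements `IMLFeatureProvider`, so each instance can be handed directly to Core ML. The sample includes the real class; the sketch below is a hypothetical approximation of its shape, and the feature names (`solarPanels`, `greenhouses`, `size`) are assumptions based on the model:

```csharp
using Foundation;
using CoreML;

// Hypothetical approximation of the sample's input class; the real
// class may differ in detail.
public class MarsHabitatPricerInput : NSObject, IMLFeatureProvider
{
    public double SolarPanels { get; set; }
    public double Greenhouses { get; set; }
    public double Size { get; set; }

    public MarsHabitatPricerInput(double solarPanels, double greenhouses, double size)
    {
        SolarPanels = solarPanels;
        Greenhouses = greenhouses;
        Size = size;
    }

    // Core ML queries these names when the object is used as model input.
    public NSSet<NSString> FeatureNames =>
        new NSSet<NSString>(new NSString("solarPanels"), new NSString("greenhouses"), new NSString("size"));

    public MLFeatureValue GetFeatureValue(string featureName)
    {
        switch (featureName)
        {
            case "solarPanels": return MLFeatureValue.Create(SolarPanels);
            case "greenhouses": return MLFeatureValue.Create(Greenhouses);
            case "size":        return MLFeatureValue.Create(Size);
            default:            return null; // unknown feature name
        }
    }
}
```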
Tapping any of the app's three buttons executes two sequences of predictions: one using a `for` loop, and another using the new batch `GetPredictions` method introduced in iOS 12:
```csharp
async void RunTest(int num)
{
    // ...
    await FetchNonBatchResults(num);
    // ...
    await FetchBatchResults(num);
    // ...
}
```
## For loop
The `for` loop version of the test naively iterates over the specified number of inputs, calling `GetPrediction` for each and discarding the result. The method times how long it takes to make the predictions:
```csharp
async Task FetchNonBatchResults(int num)
{
    Stopwatch stopWatch = Stopwatch.StartNew();
    await Task.Run(() =>
    {
        for (int i = 0; i < num; i++)
        {
            IMLFeatureProvider output = model.GetPrediction(inputs[i], out NSError error);
        }
    });
    stopWatch.Stop();
    nonBatchMilliseconds = stopWatch.ElapsedMilliseconds;
}
```
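The test discards each `output`, but a caller that needs the value can read it from the returned `IMLFeatureProvider`. A minimal sketch, assuming the model's output feature is named `price` (check the model's description if it differs):

```csharp
IMLFeatureProvider output = model.GetPrediction(inputs[0], out NSError error);
if (error == null)
{
    // The output feature name ("price") is an assumption for this model.
    MLFeatureValue price = output.GetFeatureValue("price");
    Console.WriteLine($"Predicted price: {price.DoubleValue}");
}
```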
## GetPredictions (new batch API)
The batch version of the test creates an `MLArrayBatchProvider` object from the input array (a required parameter for the `GetPredictions` method), creates an `MLPredictionOptions` object that allows predictions to run off the CPU (for example, on the GPU), and uses the `GetPredictions` API to fetch the predictions, again discarding the result:
```csharp
async Task FetchBatchResults(int num)
{
    var batch = new MLArrayBatchProvider(inputs.Take(num).ToArray());
    var options = new MLPredictionOptions()
    {
        UsesCpuOnly = false
    };
    Stopwatch stopWatch = Stopwatch.StartNew();
    await Task.Run(() =>
    {
        model.GetPredictions(batch, options, out NSError error);
    });
    stopWatch.Stop();
    batchMilliseconds = stopWatch.ElapsedMilliseconds;
}
```
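The test also ignores the batch result, but `GetPredictions` returns an `IMLBatchProvider` whose elements are `IMLFeatureProvider` objects, one per input. A sketch of consuming them (again assuming a `price` output feature):

```csharp
IMLBatchProvider results = model.GetPredictions(batch, options, out NSError error);
if (error == null)
{
    for (nint i = 0; i < results.Count; i++)
    {
        // Each element mirrors the single-prediction output.
        MLFeatureValue price = results.GetFeatures(i).GetFeatureValue("price");
        // ... use price.DoubleValue ...
    }
}
```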
## Results
On both simulator and device, `GetPredictions` finishes more quickly than the loop-based Core ML predictions.
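The sample records the two timings in `nonBatchMilliseconds` and `batchMilliseconds`; a hypothetical helper like the following could turn them into a single speedup figure:

```csharp
// Hypothetical helper: compares the timings captured by the two test methods.
void ReportSpeedup()
{
    if (batchMilliseconds > 0)
    {
        double speedup = (double)nonBatchMilliseconds / batchMilliseconds;
        Console.WriteLine($"GetPredictions was {speedup:F1}x faster " +
                          $"({nonBatchMilliseconds} ms vs {batchMilliseconds} ms)");
    }
}
```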