Microsoft Machine Learning for Apache Spark
MMLSpark provides a number of deep learning and data science tools for Apache Spark, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK) and OpenCV, enabling you to quickly create powerful, highly scalable predictive and analytical models for large image and text datasets.
MMLSpark requires Scala 2.11, Spark 2.1+, and either Python 2.7 or Python 3.5+. See the API documentation for Scala and for PySpark.
Salient features
- Easily ingest images from HDFS into a Spark DataFrame (example:301)
- Pre-process image data using transforms from OpenCV (example:302)
- Featurize images using pre-trained deep neural nets with CNTK (example:301)
- Train DNN-based image classification models on N-Series GPU VMs on Azure
- Featurize free-form text data using convenient APIs on top of primitives in SparkML, via a single transformer (example:201)
- Train classification and regression models easily via implicit featurization of data (example:101; see the sketch after this list)
- Compute a rich set of evaluation metrics, including per-instance metrics (example:102)
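For instance, the implicit featurization and evaluation features (examples 101 and 102) compose as follows. This is a minimal sketch, assuming train and test DataFrames with a "label" column; the TrainClassifier and ComputeModelStatistics APIs are used as shown in the example notebooks:

from pyspark.ml.classification import LogisticRegression
from mmlspark import TrainClassifier, ComputeModelStatistics

# TrainClassifier implicitly featurizes all non-label columns
# before fitting the wrapped SparkML learner.
model = TrainClassifier(model=LogisticRegression(), labelCol="label").fit(train)

# Score held-out data and compute evaluation metrics with a single transformer.
predictions = model.transform(test)
metrics = ComputeModelStatistics().transform(predictions)
metrics.show()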
See our notebooks for all examples.
A short example
Below is an excerpt from a simple example of using a pre-trained CNN to classify images in the CIFAR-10 dataset. View the whole source code as an example notebook.
...
import mmlspark as mml
# Initialize CNTKModel and define input and output columns
cntkModel = mml.CNTKModel().setInputCol("images").setOutputCol("output").setModelLocation(modelFile)
# Score the images with the model; evaluation runs inside the Spark pipeline
scoredImages = cntkModel.transform(imagesWithLabels)
...
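The output column then holds the network's raw outputs for each image. To turn them into class predictions you could, for example, take the argmax of each output vector with a small UDF. This follow-on step is a hypothetical sketch, not part of the notebook excerpt:

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# Hypothetical post-processing: take the index of the largest
# network output as the predicted class.
argmax = udf(lambda v: int(max(range(len(v)), key=lambda i: float(v[i]))), IntegerType())
predictions = scoredImages.withColumn("prediction", argmax("output"))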
See other sample notebooks as well as the MMLSpark documentation for Scala and PySpark.
Setup and installation
Docker
The easiest way to evaluate MMLSpark is via our pre-built Docker container. To do so, run the following command:
docker run -it -p 8888:8888 -e ACCEPT_EULA=yes microsoft/mmlspark
Navigate to http://localhost:8888 in your web browser to run the sample notebooks. See the documentation for more on Docker use.
To read the EULA for using the Docker image, run:
docker run -it -p 8888:8888 microsoft/mmlspark eula
Spark package
MMLSpark can be conveniently installed on existing Spark clusters via the --packages option. For example:
spark-shell --packages com.microsoft.ml.spark:mmlspark_2.11:0.6 \
--repositories https://mmlspark.azureedge.net/maven
pyspark --packages com.microsoft.ml.spark:mmlspark_2.11:0.6 \
--repositories https://mmlspark.azureedge.net/maven
spark-submit --packages com.microsoft.ml.spark:mmlspark_2.11:0.6 \
--repositories https://mmlspark.azureedge.net/maven \
MyApp.jar
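If you create your SparkSession programmatically instead of through one of the shells above, the same coordinates can be supplied via configuration. A minimal sketch, assuming your Spark version supports the spark.jars.packages and spark.jars.repositories settings:

from pyspark.sql import SparkSession

# Resolve the MMLSpark package from its Maven repository at session startup
spark = (SparkSession.builder
         .appName("MyApp")
         .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:0.6")
         .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven")
         .getOrCreate())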
HDInsight
To install MMLSpark on an existing HDInsight Spark Cluster, you can execute a script action on the cluster head and worker nodes. For instructions on running script actions, see this guide.
The script action URL is: https://mmlspark.azureedge.net/buildartifacts/0.6/install-mmlspark.sh
If you're using the Azure Portal to run the script action, go to Script actions → Submit new in the Overview section of your cluster blade. In the Bash script URI field, enter the script action URL provided above, and apply the script to both head and worker nodes. Submit, and the cluster should finish configuring within 10 minutes or so.
Databricks cloud
To install MMLSpark on the Databricks cloud, create a new library from Maven coordinates in your workspace.
For the coordinates, use: com.microsoft.ml.spark:mmlspark:0.6. Then, under Advanced Options, use https://mmlspark.azureedge.net/maven for the repository. Ensure this library is attached to all clusters you create.
Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.
You can use MMLSpark in both your Scala and PySpark notebooks.
SBT
If you are building a Spark application in Scala, add the following lines to your build.sbt:
resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.6"
Building from source
You can also easily create your own build by cloning this repo and using the main build script, ./runme. Run it once to install the needed dependencies, and again to do a build. See this guide for more information, and check out all the resources and documentation at https://github.com/azure/mmlspark
Interested in learning more? Watch Joseph Sirosh's keynote from PyData 2017: https://channel9.msdn.com/Events/PyData/Seattle2017/Key03