databricks_step Module
Contains functionality to create an Azure ML pipeline step to run a Databricks notebook or Python script on DBFS.
Classes
DatabricksStep

Creates an Azure ML Pipeline step to add a Databricks notebook, Python script, or JAR as a node. For an example of using DatabricksStep, see the notebook https://aka.ms/pl-databricks.

:param python_script_name: [Required] The name of a Python script, relative to source_directory. Specify exactly one of notebook_path, python_script_path, python_script_name, or main_class_name.

If you specify a DataReference object as input with data_reference_name=input1 and a PipelineData object as output with name=output1, then the inputs and outputs are passed to the script as command-line parameters. They will look like the following, and you will need to parse the arguments in your script to access the path of each input and output:

"-input1","wasbs://test@storagename.blob.core.windows.net/test","-output1", "wasbs://test@storagename.blob.core.windows.net/b3e26de1-87a4-494d-a20f-1988d22b81a2/output1"

In addition, a set of Azure ML run-context parameters is made available within the script.
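A step script can recover these input and output paths with standard argument parsing. A minimal sketch, reusing the illustrative -input1/-output1 names from the example above (the option names must match the data_reference_name and PipelineData name you chose when defining the step):

```python
import argparse


def parse_step_args(argv):
    """Parse the -input1/-output1 style arguments that DatabricksStep
    passes for each DataReference input and PipelineData output."""
    parser = argparse.ArgumentParser()
    # Illustrative names; use the names from your own step definition.
    parser.add_argument("-input1")
    parser.add_argument("-output1")
    # parse_known_args ignores any extra run-context parameters.
    args, _ = parser.parse_known_args(argv)
    return args


if __name__ == "__main__":
    args = parse_step_args([
        "-input1", "wasbs://test@storagename.blob.core.windows.net/test",
        "-output1", "wasbs://test@storagename.blob.core.windows.net/output1",
    ])
    print(args.input1)
```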
When you are executing a Python script from your local machine on Databricks using the DatabricksStep parameters source_directory and python_script_name, your source directory is copied over to DBFS and the DBFS path of the source directory is passed as a parameter to your script when it executes on Databricks.
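On a Databricks cluster, DBFS paths of the form dbfs:/... are also reachable through the local /dbfs mount, so a script can normalize the path it receives. A minimal sketch, assuming the DBFS path of the copied source directory arrives as the script's first argument (resolve_source_directory is a hypothetical helper, not part of the SDK):

```python
def resolve_source_directory(argv):
    """Turn a dbfs:/... path passed on the command line into the
    equivalent local /dbfs/... mount path (hypothetical helper)."""
    dbfs_path = argv[1]  # assumed position of the source-directory path
    if dbfs_path.startswith("dbfs:/"):
        # On Databricks, dbfs:/x/y is mounted locally at /dbfs/x/y.
        return "/dbfs/" + dbfs_path[len("dbfs:/"):].lstrip("/")
    return dbfs_path


if __name__ == "__main__":
    print(resolve_source_directory(["script.py", "dbfs:/tmp/my_source_dir"]))
```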