How to mount S3 for HDFS tiering in a big data cluster

The following sections provide an example of how to configure HDFS tiering with an S3 Storage data source.

Important

The Microsoft SQL Server 2019 Big Data Clusters add-on will be retired. Support for SQL Server 2019 Big Data Clusters will end on February 28, 2025. All existing users of SQL Server 2019 with Software Assurance will be fully supported on the platform and the software will continue to be maintained through SQL Server cumulative updates until that time. For more information, see the announcement blog post and Big data options on the Microsoft SQL Server platform.

Prerequisites

  • Deployed big data cluster
  • Big data tools
    • azdata
    • kubectl
  • Create and upload data to an S3 bucket
    • Upload CSV or Parquet files to your S3 bucket. This is the remote data that will be mounted into HDFS in the big data cluster (see the example following this list).
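
For reference, a minimal upload sketch using the AWS CLI might look like the following. The bucket name my-sample-bucket and the file sales.csv are hypothetical placeholders; substitute your own values.

    # Upload a sample CSV file to the S3 bucket (placeholder names).
    aws s3 cp sales.csv s3://my-sample-bucket/data/sales.csv

    # Confirm the file landed in the bucket.
    aws s3 ls s3://my-sample-bucket/data/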

Access keys

Set environment variable for access key credentials

Open a command prompt on a client machine that can access your big data cluster, and set an environment variable in the following format. The credentials must be a comma-separated list. The set command is used on Windows; if you are using Linux, use export instead.

 set MOUNT_CREDENTIALS=fs.s3a.access.key=<Access Key ID of the key>,fs.s3a.secret.key=<Secret Access Key of the key>
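
For illustration only, the command with sample (non-functional) key values looks like this on Windows and on Linux:

    :: Windows (cmd.exe); the key values below are placeholders, not real credentials
    set MOUNT_CREDENTIALS=fs.s3a.access.key=AKIAIOSFODNN7EXAMPLE,fs.s3a.secret.key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

    # Linux (bash); the key values below are placeholders, not real credentials
    export MOUNT_CREDENTIALS=fs.s3a.access.key=AKIAIOSFODNN7EXAMPLE,fs.s3a.secret.key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY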

Tip

For more information on how to create S3 access keys, see S3 access keys.

Mount the remote HDFS storage

Now that you have set the environment variable with access key credentials, you can start mounting. The following steps mount the remote storage in S3 to the local HDFS storage of your big data cluster.

  1. Use kubectl to find the IP address of the controller-svc-external endpoint service in your big data cluster. Look for the External-IP value.

    kubectl get svc controller-svc-external -n <your-big-data-cluster-name>
    
  2. Log in with azdata, using the external IP address of the controller endpoint and your cluster username and password:

    azdata login -e https://<IP-of-controller-svc-external>:30080/
    
  3. Set the MOUNT_CREDENTIALS environment variable as described in the previous section.

  4. Mount the remote storage in S3 using azdata bdc hdfs mount create. Replace the placeholder values before running the following command:

    azdata bdc hdfs mount create --remote-uri s3a://<S3 bucket name> --mount-path /mounts/<mount-name>
    

    Note

    The mount create command is asynchronous. At this time, there is no message indicating whether the mount succeeded. See the status section below to check the status of your mounts. A worked example of steps 1 through 4 follows.
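
To make the sequence concrete, the following sketch runs through steps 1 through 4 with hypothetical values. The cluster name mssql-cluster, external IP 20.50.60.70, bucket name my-sample-bucket, and mount name s3-sales-data are placeholders; substitute your own values.

    # Step 1: find the external IP of the controller endpoint.
    kubectl get svc controller-svc-external -n mssql-cluster

    # Step 2: log in to the cluster; azdata prompts for the cluster username and password.
    azdata login -e https://20.50.60.70:30080/

    # Step 3: set the access key credentials (Linux syntax shown; use 'set' on Windows).
    export MOUNT_CREDENTIALS=fs.s3a.access.key=<Access Key ID of the key>,fs.s3a.secret.key=<Secret Access Key of the key>

    # Step 4: create the mount.
    azdata bdc hdfs mount create --remote-uri s3a://my-sample-bucket --mount-path /mounts/s3-sales-data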

If the mount succeeded, you can query the HDFS data and run Spark jobs against it. The mounted data appears in the HDFS of your big data cluster at the location specified by --mount-path.
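
For example, assuming the hypothetical mount path /mounts/s3-sales-data used above, you could browse the mounted files directly. The bdc hdfs ls subcommand is an assumption here; check your version of azdata for the exact command.

    # Browse the mounted files (assumes azdata provides the 'bdc hdfs ls' subcommand).
    azdata bdc hdfs ls --path /mounts/s3-sales-data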

Get the status of mounts

To list the status of all mounts in your big data cluster, use the following command:

azdata bdc hdfs mount status

To list the status of a mount at a specific path in HDFS, use the following command:

azdata bdc hdfs mount status --mount-path <mount-path-in-hdfs>
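
Because mount creation is asynchronous, it can be convenient to poll the status until the mount is created. A minimal sketch, assuming the hypothetical mount path /mounts/s3-sales-data and a bash shell:

    # Print the mount status every 30 seconds; press Ctrl+C to stop once the mount is created.
    while true; do
        azdata bdc hdfs mount status --mount-path /mounts/s3-sales-data
        sleep 30
    done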

Refresh a mount

Refreshing a mount updates it with the latest contents of the remote store. The following example refreshes the mount at the specified path in HDFS:

azdata bdc hdfs mount refresh --mount-path <mount-path-in-hdfs>
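
For example, with the hypothetical mount path used earlier, you would run the following after new files are added to the S3 bucket:

    # Pick up files added to the S3 bucket since the mount was created or last refreshed.
    azdata bdc hdfs mount refresh --mount-path /mounts/s3-sales-data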

Delete the mount

To delete the mount, use the azdata bdc hdfs mount delete command, and specify the mount path in HDFS:

azdata bdc hdfs mount delete --mount-path <mount-path-in-hdfs>
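
For example, removing the hypothetical mount created earlier and confirming it is gone could look like this. Deleting a mount removes only the mount point in HDFS; the data in the S3 bucket itself is not affected.

    # Remove the mount point from HDFS.
    azdata bdc hdfs mount delete --mount-path /mounts/s3-sales-data

    # Confirm the mount no longer appears in the list of mounts.
    azdata bdc hdfs mount status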