DataLakeFileAsyncClient Class
- java.lang.Object
  - com.azure.storage.file.datalake.DataLakePathAsyncClient
    - com.azure.storage.file.datalake.DataLakeFileAsyncClient

public class DataLakeFileAsyncClient
extends DataLakePathAsyncClient
This class provides a client that contains file operations for Azure Storage Data Lake. Operations provided by this client include creating a file, deleting a file, renaming a file, setting metadata and HTTP headers, setting and retrieving access control, getting properties, reading a file, and appending and flushing data to write to a file.
This client is instantiated through DataLakePathClientBuilder or retrieved via getFileAsyncClient(String fileName).
Please refer to the Azure Docs for more information.
Method Summary
Methods inherited from DataLakePathAsyncClient
Methods inherited from java.lang.Object
Method Details
append
public Mono<Void> append(BinaryData data, long fileOffset)
Appends data to the specified resource to later be flushed (written) by a call to flush
Code Samples
client.append(binaryData, offset)
    .subscribe(
        response -> System.out.println("Append data completed"),
        error -> System.out.printf("Error when calling append data: %s", error));
For more information, see the Azure Docs
Parameters:
Returns:
append
public Mono<Void> append(Flux<ByteBuffer> data, long fileOffset, long length)
Appends data to the specified resource to later be flushed (written) by a call to flush
Code Samples
client.append(data, offset, length)
.subscribe(
response -> System.out.println("Append data completed"),
error -> System.out.printf("Error when calling append data: %s", error));
For more information, see the Azure Docs
Parameters:
data - The data to write to the file, provided as a Flux<ByteBuffer>.
Returns:
appendWithResponse
public Mono<Response<Void>> appendWithResponse(BinaryData data, long fileOffset, byte[] contentMd5, String leaseId)
Appends data to the specified resource to later be flushed (written) by a call to flush
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with a valid MD5 digest of the data
BinaryData data = BinaryData.fromString("Data!");
client.appendWithResponse(data, offset, contentMd5, leaseId).subscribe(response ->
System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
Returns:
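The samples above leave contentMd5 as an empty placeholder. A valid value is the 16-byte MD5 digest of exactly the bytes being appended, which can be computed with the JDK alone (the class and method names below are illustrative, not part of the SDK):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Example {
    // Compute the 16-byte MD5 digest of the payload, suitable for a contentMd5 parameter.
    static byte[] contentMd5(byte[] payload) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("MD5").digest(payload);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] data = "Data!".getBytes(StandardCharsets.UTF_8);
        byte[] md5 = contentMd5(data);
        System.out.println(md5.length); // MD5 digests are always 16 bytes
    }
}
```

The digest must cover the same bytes passed to append; the service rejects the request if they disagree.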
appendWithResponse
public Mono<Response<Void>> appendWithResponse(BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions)
Appends data to the specified resource to later be flushed (written) by a call to flush
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with a valid MD5 digest of the data
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
.setLeaseId(leaseId)
.setContentHash(contentMd5)
.setFlush(true);
BinaryData data = BinaryData.fromString("Data!");
client.appendWithResponse(data, offset, appendOptions).subscribe(response ->
System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
Returns:
appendWithResponse
public Mono<Response<Void>> appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, byte[] contentMd5, String leaseId)
Appends data to the specified resource to later be flushed (written) by a call to flush
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with a valid MD5 digest of the data
client.appendWithResponse(data, offset, length, contentMd5, leaseId).subscribe(response ->
System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
data - The data to write to the file, provided as a Flux<ByteBuffer>.
Returns:
appendWithResponse
public Mono<Response<Void>> appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions)
Appends data to the specified resource to later be flushed (written) by a call to flush
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with a valid MD5 digest of the data
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
.setLeaseId(leaseId)
.setContentHash(contentMd5)
.setFlush(true);
client.appendWithResponse(data, offset, length, appendOptions).subscribe(response ->
System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
data - The data to write to the file, provided as a Flux<ByteBuffer>.
Returns:
delete
public Mono<Void> delete()
Deletes a file.
Code Samples
client.delete().subscribe(response ->
System.out.println("Delete request completed"));
For more information see the Azure Docs
Returns:
deleteIfExists
public Mono<Boolean> deleteIfExists()
Deletes a file if it exists.
Code Samples
client.deleteIfExists().subscribe(deleted -> {
if (deleted) {
System.out.println("Successfully deleted.");
} else {
System.out.println("Does not exist.");
}
});
For more information see the Azure Docs
Overrides:
DataLakePathAsyncClient.deleteIfExists()
Returns:
true indicates that the file was successfully deleted; false indicates that the file did not exist.
deleteIfExistsWithResponse
public Mono<Response<Boolean>> deleteIfExistsWithResponse(DataLakePathDeleteOptions options)
Deletes a file if it exists.
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakePathDeleteOptions options = new DataLakePathDeleteOptions().setIsRecursive(false)
.setRequestConditions(requestConditions);
client.deleteIfExistsWithResponse(options).subscribe(response -> {
if (response.getStatusCode() == 404) {
System.out.println("Does not exist.");
} else {
System.out.println("successfully deleted.");
}
});
For more information see the Azure Docs
Overrides:
DataLakePathAsyncClient.deleteIfExistsWithResponse(DataLakePathDeleteOptions options)
Parameters:
Returns:
deleteWithResponse
public Mono<Response<Void>> deleteWithResponse(DataLakeRequestConditions requestConditions)
Deletes a file.
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
client.deleteWithResponse(requestConditions)
.subscribe(response -> System.out.println("Delete request completed"));
For more information see the Azure Docs
Parameters:
Returns:
flush
@Deprecated
public Mono<PathInfo> flush(long position)
Deprecated. Use flush(long position, boolean overwrite) instead.
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
By default this method will not overwrite existing data.
Code Samples
client.flush(position).subscribe(response ->
System.out.println("Flush data completed"));
For more information, see the Azure Docs
Parameters:
Returns:
flush
public Mono<PathInfo> flush(long position, boolean overwrite)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean overwrite = true;
client.flush(position, overwrite).subscribe(response ->
System.out.println("Flush data completed"));
For more information, see the Azure Docs
Parameters:
Returns:
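Because appended data must be contiguous, each append call's offset is the running total of bytes already appended, and the position passed to flush is the final total. A plain-Java sketch of that bookkeeping (the helper name and chunk sizes are made up for illustration; the SDK does not provide this class):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetBookkeeping {
    // Returns the append offset for each chunk, followed by one final entry:
    // the position to pass to flush (the total number of bytes appended).
    static List<Long> appendOffsets(long[] chunkLengths) {
        List<Long> offsets = new ArrayList<>();
        long position = 0;
        for (long len : chunkLengths) {
            offsets.add(position); // each chunk is appended at the current end of the file
            position += len;
        }
        offsets.add(position); // flush position = sum of all chunk lengths
        return offsets;
    }

    public static void main(String[] args) {
        // three chunks: 4 MiB, 4 MiB, 1 MiB
        System.out.println(appendOffsets(new long[] {4194304L, 4194304L, 1048576L}));
        // → [0, 4194304, 8388608, 9437184]
    }
}
```

Appending at any other offset (a gap or an overlap) causes the subsequent flush to fail, since the uploaded data would not be contiguous.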
flushWithResponse
public Mono<Response<PathInfo>> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
requestConditions).subscribe(response ->
System.out.printf("Flush data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
Returns:
flushWithResponse
public Mono<Response<PathInfo>> flushWithResponse(long position, DataLakeFileFlushOptions flushOptions)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
Integer leaseDuration = 15;
DataLakeFileFlushOptions flushOptions = new DataLakeFileFlushOptions()
.setUncommittedDataRetained(retainUncommittedData)
.setClose(close)
.setPathHttpHeaders(httpHeaders)
.setRequestConditions(requestConditions)
.setLeaseAction(LeaseAction.ACQUIRE)
.setLeaseDuration(leaseDuration)
.setProposedLeaseId(leaseId);
client.flushWithResponse(position, flushOptions).subscribe(response ->
System.out.printf("Flush data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
Returns:
getCustomerProvidedKeyAsyncClient
public DataLakeFileAsyncClient getCustomerProvidedKeyAsyncClient(CustomerProvidedKey customerProvidedKey)
Creates a new DataLakeFileAsyncClient with the specified customerProvidedKey.
Overrides:
DataLakePathAsyncClient.getCustomerProvidedKeyAsyncClient(CustomerProvidedKey customerProvidedKey)
Parameters:
customerProvidedKey - the CustomerProvidedKey for the file, or null to use no customer provided key.
Returns:
a DataLakeFileAsyncClient with the specified customerProvidedKey.
getFileName
public String getFileName()
Gets the name of this file, not including its full path.
Returns:
getFilePath
public String getFilePath()
Gets the path of this file, not including the name of the resource itself.
Returns:
getFileUrl
public String getFileUrl()
Gets the URL of the file represented by this client on the Data Lake service.
Returns:
query
public Flux<ByteBuffer> query(String expression)
Queries the entire file.
For more information, see the Azure Docs
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(expression).subscribe(piece -> {
try {
queryData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
Parameters:
Returns:
queryWithResponse
public Mono<FileQueryAsyncResponse> queryWithResponse(FileQueryOptions queryOptions)
Queries the entire file.
For more information, see the Azure Docs
Code Samples
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
.setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
.setEscapeChar('\0')
.setColumnSeparator(',')
.setRecordSeparator('\n')
.setFieldQuote('\'')
.setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
+ progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
.setInputSerialization(input)
.setOutputSerialization(output)
.setRequestConditions(requestConditions)
.setErrorConsumer(errorConsumer)
.setProgressConsumer(progressConsumer);
client.queryWithResponse(queryOptions)
.subscribe(response -> {
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
response.getValue().subscribe(piece -> {
try {
queryData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
});
Parameters:
Returns:
read
public Flux<ByteBuffer> read()
Reads the entire file.
Code Samples
ByteArrayOutputStream downloadData = new ByteArrayOutputStream();
client.read().subscribe(piece -> {
try {
downloadData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
For more information, see the Azure Docs
Returns:
readToFile
public Mono<PathProperties> readToFile(ReadToFileOptions options)
Reads the entire file into a file specified by the path.
The file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
client.readToFile(new ReadToFileOptions(file))
.subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
Returns:
readToFile
public Mono<PathProperties> readToFile(String filePath)
Reads the entire file into a file specified by the path.
The file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
client.readToFile(file).subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
Returns:
readToFile
public Mono<PathProperties> readToFile(String filePath, boolean overwrite)
Reads the entire file into a file specified by the path.
If overwrite is set to false, the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
boolean overwrite = false; // Default value
client.readToFile(file, overwrite).subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
Returns:
readToFileWithResponse
public Mono<Response<PathProperties>> readToFileWithResponse(ReadToFileOptions options)
Reads the entire file into a file specified by the path.
By default the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Samples
ReadToFileOptions options = new ReadToFileOptions(file);
options.setRange(new FileRange(1024, 2048L));
options.setDownloadRetryOptions(new DownloadRetryOptions().setMaxRetryRequests(5));
options.setOpenOptions(new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
StandardOpenOption.WRITE, StandardOpenOption.READ))); //Default options
options.setParallelTransferOptions(new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB));
options.setDataLakeRequestConditions(null);
options.setRangeGetContentMd5(false);
client.readToFileWithResponse(options)
.subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
Returns:
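The CREATE_NEW default shown above is standard java.nio behavior; supplying CREATE plus TRUNCATE_EXISTING instead makes the download overwrite an existing file. A JDK-only sketch of the difference (the class and method names are illustrative, not part of the SDK):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OpenOptionsDemo {
    // CREATE_NEW (the readToFile default) refuses to touch an existing file.
    static boolean createNewFails(Path existing) {
        try {
            Files.newOutputStream(existing, StandardOpenOption.CREATE_NEW).close();
            return false;
        } catch (FileAlreadyExistsException e) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // CREATE + WRITE + TRUNCATE_EXISTING overwrites the file instead of failing.
    static String overwrite(Path target, String content) throws IOException {
        Files.write(target, content.getBytes(StandardCharsets.UTF_8),
            StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING);
        return new String(Files.readAllBytes(target), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("readToFile", ".tmp"); // file now exists
        System.out.println(createNewFails(target));               // CREATE_NEW rejects it
        System.out.println(overwrite(target, "overwritten"));     // overwrite succeeds
        Files.delete(target);
    }
}
```

Passing an equivalent Set<OpenOption> through setOpenOptions (or the openOptions parameter) changes readToFileWithResponse in the same way.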
readToFileWithResponse
public Mono<Response<PathProperties>> readToFileWithResponse(String filePath, FileRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)
Reads the entire file into a file specified by the path.
By default the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Samples
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options
client.readToFileWithResponse(file, fileRange, null, downloadRetryOptions, null, false, openOptions)
.subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
Returns:
readWithResponse
public Mono<FileReadAsyncResponse> readWithResponse(FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5)
Reads a range of bytes from a file.
Code Samples
FileRange range = new FileRange(1024, 2048L);
DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);
client.readWithResponse(range, options, null, false).subscribe(response -> {
ByteArrayOutputStream readData = new ByteArrayOutputStream();
response.getValue().subscribe(piece -> {
try {
readData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
});
For more information, see the Azure Docs
Parameters:
Returns:
rename
public Mono<DataLakeFileAsyncClient> rename(String destinationFileSystem, String destinationPath)
Moves the file to another location within the file system. For more information see the Azure Docs.
Code Samples
DataLakeFileAsyncClient renamedClient = client.rename(fileSystemName, destinationPath).block();
System.out.println("File client has been renamed");
Parameters:
destinationFileSystem - The file system of the destination within the account; null for the current file system.
Returns:
renameWithResponse
public Mono<Response<DataLakeFileAsyncClient>> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions)
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Samples
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();
DataLakeFileAsyncClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
sourceRequestConditions, destinationRequestConditions).block().getValue();
System.out.println("File client has been renamed");
Parameters:
destinationFileSystem - The file system of the destination within the account; null for the current file system.
Returns:
scheduleDeletion
public Mono<Void> scheduleDeletion(FileScheduleDeletionOptions options)
Schedules the file for deletion.
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options)
.subscribe(r -> System.out.println("File deletion has been scheduled"));
Parameters:
Returns:
scheduleDeletionWithResponse
public Mono<Response<Void>> scheduleDeletionWithResponse(FileScheduleDeletionOptions options)
Schedules the file for deletion.
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletionWithResponse(options)
.subscribe(r -> System.out.println("File deletion has been scheduled"));
Parameters:
Returns:
upload
public Mono<PathInfo> upload(BinaryData data, ParallelTransferOptions parallelTransferOptions)
Creates a new file and uploads content.
Code Samples
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions pto = new ParallelTransferOptions()
.setBlockSizeLong(blockSize)
.setProgressListener(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
BinaryData.fromFlux(data, length, false)
.flatMap(binaryData -> client.upload(binaryData, pto))
.doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload succeeded"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
Returns:
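With the block size configured above, an upload is split into ceil(size / blockSize) staged chunks. A sketch of the arithmetic only (the SDK performs this chunking internally; the helper below is not part of the SDK):

```java
public class BlockCount {
    // Ceiling division: the number of blocks needed to cover fileSize bytes.
    static long blockCount(long fileSize, long blockSize) {
        return (fileSize + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        long blockSize = 100L * 1024L * 1024L; // 100 MiB, as in the sample above
        System.out.println(blockCount(250L * 1024L * 1024L, blockSize)); // → 3
    }
}
```

A larger block size means fewer round trips but more memory buffered per block, which is the trade-off ParallelTransferOptions exposes.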
upload
public Mono<PathInfo> upload(BinaryData data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)
Creates a new file and uploads content.
Code Samples
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions pto = new ParallelTransferOptions()
.setBlockSizeLong(blockSize)
.setProgressListener(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
BinaryData.fromFlux(data, length, false)
.flatMap(binaryData -> client.upload(binaryData, pto, true))
.doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload succeeded"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
Returns:
upload
public Mono<PathInfo> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions)
Creates a new file and uploads content.
Code Samples
client.upload(data, new ParallelTransferOptions())
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
Returns:
upload
public Mono<PathInfo> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)
Creates a new file and uploads content.
Code Samples
boolean overwrite = false; // Default behavior
client.upload(data, new ParallelTransferOptions(), overwrite)
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
Returns:
uploadFromFile
public Mono<Void> uploadFromFile(String filePath)
Creates a new file, with the content of the specified file. By default, this method will not overwrite an existing file.
Code Samples
client.uploadFromFile(filePath)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
Returns:
uploadFromFile
public Mono<Void> uploadFromFile(String filePath, boolean overwrite)
Creates a new file, with the content of the specified file.
Code Samples
boolean overwrite = false; // Default behavior
client.uploadFromFile(filePath, overwrite)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
Returns:
uploadFromFile
public Mono<Void> uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
Creates a new file, with the content of the specified file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
Returns:
uploadFromFileWithResponse
public Mono<Response<PathInfo>> uploadFromFileWithResponse(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
Creates a new file, with the content of the specified file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadFromFileWithResponse(filePath, parallelTransferOptions, headers, metadata, requestConditions)
.doOnError(throwable ->
System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion ->
System.out.println("Upload from file succeeded at: " + completion.getValue().getLastModified()));
Parameters:
Returns:
uploadWithResponse
public Mono<Response<PathInfo>> uploadWithResponse(FileParallelUploadOptions options)
Creates a new file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadWithResponse(new FileParallelUploadOptions(data)
.setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
.setMetadata(metadata).setRequestConditions(requestConditions)
.setPermissions("permissions").setUmask("umask"))
.subscribe(response -> System.out.println("Uploaded file"));
Using Progress Reporting
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions conditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
ParallelTransferOptions pto = new ParallelTransferOptions()
.setBlockSizeLong(blockSize)
.setProgressListener(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
client.uploadWithResponse(new FileParallelUploadOptions(data)
    .setParallelTransferOptions(pto).setHeaders(httpHeaders)
    .setMetadata(metadataMap).setRequestConditions(conditions)
    .setPermissions("permissions").setUmask("umask"))
    .subscribe(response -> System.out.println("Uploaded file"));
Parameters:
Returns:
uploadWithResponse
public Mono<Response<PathInfo>> uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
Creates a new file. To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, requestConditions)
.subscribe(response -> System.out.println("Uploaded file"));
Using Progress Reporting
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions conditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
ParallelTransferOptions pto = new ParallelTransferOptions()
.setBlockSizeLong(blockSize)
.setProgressListener(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
client.uploadWithResponse(data, pto, httpHeaders, metadataMap, conditions)
.subscribe(response -> System.out.println("Uploaded file"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
Returns: