DataLakeFileClient Class
- java.lang.Object
- com.azure.storage.file.datalake.DataLakePathClient
- com.azure.storage.file.datalake.DataLakeFileClient
public class DataLakeFileClient
extends DataLakePathClient
This class provides a client that contains file operations for Azure Storage Data Lake. Operations provided by this client include creating a file, deleting a file, renaming a file, setting metadata and http headers, setting and retrieving access control, getting properties, reading a file, and appending and flushing data to write to a file.
This client is instantiated through DataLakePathClientBuilder or retrieved via getFileClient(String fileName).
Please refer to the Azure Docs for more information.
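A minimal construction sketch (the endpoint, account name, key, and path below are placeholder values; StorageSharedKeyCredential is only one of the supported credential types):

```java
// Sketch only: placeholder endpoint, account name, key, and path values.
DataLakeFileClient client = new DataLakePathClientBuilder()
    .endpoint("https://<account>.dfs.core.windows.net")
    .credential(new StorageSharedKeyCredential("<account>", "<account-key>"))
    .fileSystemName("my-file-system")
    .pathName("dir/file.txt")
    .buildFileClient();
```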
Method Summary
Methods inherited from DataLakePathClient
Methods inherited from java.lang.Object
Method Details
append
public void append(BinaryData data, long fileOffset)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
client.append(binaryData, offset);
System.out.println("Append data completed");
For more information, see the Azure Docs
Parameters:
append
public void append(InputStream data, long fileOffset, long length)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
client.append(data, offset, length);
System.out.println("Append data completed");
For more information, see the Azure Docs
Parameters:
appendWithResponse
public Response<Void> appendWithResponse(BinaryData data, long fileOffset, byte[] contentMd5, String leaseId, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5
Response<Void> response = client.appendWithResponse(binaryData, offset, contentMd5, leaseId, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
appendWithResponse
public Response<Void> appendWithResponse(BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
BinaryData binaryData = BinaryData.fromStream(data, length);
byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
.setLeaseId(leaseId)
.setContentHash(contentMd5)
.setFlush(true);
Response<Void> response = client.appendWithResponse(binaryData, offset, appendOptions, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
appendWithResponse
public Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, byte[] contentMd5, String leaseId, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5
Response<Void> response = client.appendWithResponse(data, offset, length, contentMd5, leaseId, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
appendWithResponse
public Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
.setLeaseId(leaseId)
.setContentHash(contentMd5)
.setFlush(true);
Response<Void> response = client.appendWithResponse(data, offset, length, appendOptions, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
delete
public void delete()
Deletes a file.
Code Samples
client.delete();
System.out.println("Delete request completed");
For more information, see the Azure Docs
deleteIfExists
public boolean deleteIfExists()
Deletes a file if it exists.
Code Samples
client.deleteIfExists();
System.out.println("Delete request completed");
For more information, see the Azure Docs
Overrides:
DataLakePathClient.deleteIfExists()
Returns:
true if the file was successfully deleted, false if the file does not exist.
deleteIfExistsWithResponse
public Response<Boolean> deleteIfExistsWithResponse(DataLakePathDeleteOptions options, Duration timeout, Context context)
Deletes a file if it exists.
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakePathDeleteOptions options = new DataLakePathDeleteOptions().setIsRecursive(false)
.setRequestConditions(requestConditions);
Response<Boolean> response = client.deleteIfExistsWithResponse(options, timeout, new Context(key1, value1));
if (response.getStatusCode() == 404) {
System.out.println("Does not exist.");
} else {
System.out.printf("Delete completed with status %d%n", response.getStatusCode());
}
For more information, see the Azure Docs
Overrides:
DataLakePathClient.deleteIfExistsWithResponse(DataLakePathDeleteOptions options, Duration timeout, Context context)
Parameters:
Returns:
deleteWithResponse
public Response<Void> deleteWithResponse(DataLakeRequestConditions requestConditions, Duration timeout, Context context)
Deletes a file.
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
client.deleteWithResponse(requestConditions, timeout, new Context(key1, value1));
System.out.println("Delete request completed");
For more information, see the Azure Docs
Parameters:
Returns:
flush
@Deprecated
public PathInfo flush(long position)
Deprecated. Use flush(long position, boolean overwrite) instead.
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
By default, this method will not overwrite existing data.
Code Samples
client.flush(position);
System.out.println("Flush data completed");
For more information, see the Azure Docs
Parameters:
Returns:
flush
public PathInfo flush(long position, boolean overwrite)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean overwrite = true;
client.flush(position, overwrite);
System.out.println("Flush data completed");
For more information, see the Azure Docs
Parameters:
Returns:
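Since appended chunks must be contiguous, the fileOffset passed to each append call is the running total of bytes already appended, and the final flush position equals the total length. A self-contained sketch of that bookkeeping (the actual client calls are indicated only in comments):

```java
import java.util.ArrayList;
import java.util.List;

public class AppendOffsets {
    // Returns the fileOffset to pass to each append call, followed by the
    // final position to pass to flush: a running total of chunk sizes.
    static List<Long> offsets(long[] chunkSizes) {
        List<Long> result = new ArrayList<>();
        long position = 0;
        for (long size : chunkSizes) {
            result.add(position); // client.append(chunk, position) would go here
            position += size;
        }
        result.add(position);     // client.flush(position, overwrite) would go here
        return result;
    }

    public static void main(String[] args) {
        // Three chunks of 4, 5, and 3 bytes append at offsets 0, 4, and 9,
        // and the file is flushed at position 12.
        System.out.println(offsets(new long[] {4, 5, 3}));
    }
}
```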
flushWithResponse
public Response<PathInfo> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions, Duration timeout, Context context)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
Response<PathInfo> response = client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
requestConditions, timeout, new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
flushWithResponse
public Response<PathInfo> flushWithResponse(long position, DataLakeFileFlushOptions flushOptions, Duration timeout, Context context)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
Integer leaseDuration = 15;
DataLakeFileFlushOptions flushOptions = new DataLakeFileFlushOptions()
.setUncommittedDataRetained(retainUncommittedData)
.setClose(close)
.setPathHttpHeaders(httpHeaders)
.setRequestConditions(requestConditions)
.setLeaseAction(LeaseAction.ACQUIRE)
.setLeaseDuration(leaseDuration)
.setProposedLeaseId(leaseId);
Response<PathInfo> response = client.flushWithResponse(position, flushOptions, timeout,
new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
getCustomerProvidedKeyClient
public DataLakeFileClient getCustomerProvidedKeyClient(CustomerProvidedKey customerProvidedKey)
Creates a new DataLakeFileClient with the specified customerProvidedKey.
Overrides:
DataLakePathClient.getCustomerProvidedKeyClient(CustomerProvidedKey customerProvidedKey)
Parameters:
customerProvidedKey - the customer provided key, or null to use no customer provided key.
Returns:
a DataLakeFileClient with the specified customerProvidedKey.
getFileName
public String getFileName()
Gets the name of this file, not including its full path.
Returns:
getFilePath
public String getFilePath()
Gets the path of this file, not including the name of the resource itself.
Returns:
getFileUrl
public String getFileUrl()
Gets the URL of the file represented by this client on the Data Lake service.
Returns:
getOutputStream
public OutputStream getOutputStream()
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.
Returns:
getOutputStream
public OutputStream getOutputStream(DataLakeFileOutputStreamOptions options)
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Parameters:
Returns:
getOutputStream
public OutputStream getOutputStream(DataLakeFileOutputStreamOptions options, Context context)
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Parameters:
Returns:
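A usage sketch, assuming an existing client; the returned stream is a plain java.io.OutputStream, so try-with-resources ensures it is flushed and closed:

```java
// Sketch only: assumes "client" is an existing DataLakeFileClient.
byte[] content = "hello".getBytes(StandardCharsets.UTF_8);
try (OutputStream out = client.getOutputStream()) {
    out.write(content); // data is committed when the stream is closed
} catch (IOException ex) {
    System.err.printf("Write failed: %s%n", ex.getMessage());
}
```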
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream()
Opens a file input stream to download the file. Locks on ETags.
DataLakeFileOpenInputStreamResult inputStream = client.openInputStream();
Returns:
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options)
Opens a file input stream to download the specified range of the file. Defaults to ETag locking if the option is not specified.
DataLakeFileInputStreamOptions options = new DataLakeFileInputStreamOptions().setBlockSize(1024)
.setRequestConditions(new DataLakeRequestConditions());
DataLakeFileOpenInputStreamResult streamResult = client.openInputStream(options);
Parameters:
Returns:
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options, Context context)
Opens a file input stream to download the specified range of the file. Defaults to ETag locking if the option is not specified.
DataLakeFileInputStreamOptions options = new DataLakeFileInputStreamOptions().setBlockSize(1024)
.setRequestConditions(new DataLakeRequestConditions());
DataLakeFileOpenInputStreamResult stream = client.openInputStream(options, new Context(key1, value1));
Parameters:
Returns:
openQueryInputStream
public InputStream openQueryInputStream(String expression)
Opens an input stream to query the file.
For more information, see the Azure Docs
Code Samples
String expression = "SELECT * from BlobStorage";
InputStream inputStream = client.openQueryInputStream(expression);
// Now you can read from the input stream like you would normally.
Parameters:
Returns:
An InputStream object that represents the stream to use for reading the query response.
openQueryInputStreamWithResponse
public Response<InputStream> openQueryInputStreamWithResponse(FileQueryOptions queryOptions)
Opens an input stream to query the file.
For more information, see the Azure Docs
Code Samples
String expression = "SELECT * from BlobStorage";
FileQuerySerialization input = new FileQueryDelimitedSerialization()
.setColumnSeparator(',')
.setEscapeChar('\n')
.setRecordSeparator('\n')
.setHeadersPresent(true)
.setFieldQuote('"');
FileQuerySerialization output = new FileQueryJsonSerialization()
.setRecordSeparator('\n');
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId("leaseId");
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
+ progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
.setInputSerialization(input)
.setOutputSerialization(output)
.setRequestConditions(requestConditions)
.setErrorConsumer(errorConsumer)
.setProgressConsumer(progressConsumer);
InputStream inputStream = client.openQueryInputStreamWithResponse(queryOptions).getValue();
// Now you can read from the input stream like you would normally.
Parameters:
Returns:
An InputStream object that represents the stream to use for reading the query response.
query
public void query(OutputStream stream, String expression)
Queries an entire file into an output stream.
For more information, see the Azure Docs
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(queryData, expression);
System.out.println("Query completed.");
Parameters:
queryWithResponse
public FileQueryResponse queryWithResponse(FileQueryOptions queryOptions, Duration timeout, Context context)
Queries an entire file into an output stream.
For more information, see the Azure Docs
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
.setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
.setEscapeChar('\0')
.setColumnSeparator(',')
.setRecordSeparator('\n')
.setFieldQuote('\'')
.setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
+ progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression, queryData)
.setInputSerialization(input)
.setOutputSerialization(output)
.setRequestConditions(requestConditions)
.setErrorConsumer(errorConsumer)
.setProgressConsumer(progressConsumer);
System.out.printf("Query completed with status %d%n",
client.queryWithResponse(queryOptions, timeout, new Context(key1, value1))
.getStatusCode());
Parameters:
Returns:
read
public void read(OutputStream stream)
Reads the entire file into an output stream.
Code Samples
client.read(new ByteArrayOutputStream());
System.out.println("Download completed.");
For more information, see the Azure Docs
Parameters:
readToFile
public PathProperties readToFile(ReadToFileOptions options)
Reads the entire file into a file specified by the path.
The file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
client.readToFile(new ReadToFileOptions(file));
System.out.println("Completed download to file");
For more information, see the Azure Docs
Parameters:
Returns:
readToFile
public PathProperties readToFile(String filePath)
Reads the entire file into a file specified by the path.
The file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
client.readToFile(file);
System.out.println("Completed download to file");
For more information, see the Azure Docs
Parameters:
Returns:
readToFile
public PathProperties readToFile(String filePath, boolean overwrite)
Reads the entire file into a file specified by the path.
If overwrite is set to false, the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
boolean overwrite = false; // Default value
client.readToFile(file, overwrite);
System.out.println("Completed download to file");
For more information, see the Azure Docs
Parameters:
Returns:
readToFileWithResponse
public Response<PathProperties> readToFileWithResponse(ReadToFileOptions options, Duration timeout, Context context)
Reads the entire file into a file specified by the path.
By default, the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Samples
ReadToFileOptions options = new ReadToFileOptions(file);
options.setRange(new FileRange(1024, 2048L));
options.setDownloadRetryOptions(new DownloadRetryOptions().setMaxRetryRequests(5));
options.setOpenOptions(new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
StandardOpenOption.WRITE, StandardOpenOption.READ))); //Default options
options.setParallelTransferOptions(new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB));
options.setDataLakeRequestConditions(null);
options.setRangeGetContentMd5(false);
client.readToFileWithResponse(options, timeout, new Context(key2, value2));
System.out.println("Completed download to file");
Parameters:
Returns:
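As noted, the create-new default can be changed by supplying different OpenOptions via setOpenOptions, as in the sample above. A sketch of a set that permits overwriting an existing destination file (replacing the strict CREATE_NEW default with CREATE plus TRUNCATE_EXISTING):

```java
import java.nio.file.OpenOption;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OverwriteOpenOptions {
    // CREATE + TRUNCATE_EXISTING truncates an existing destination file
    // instead of raising FileAlreadyExistsException.
    static Set<OpenOption> overwriteOptions() {
        return new HashSet<>(Arrays.asList(StandardOpenOption.CREATE,
            StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE,
            StandardOpenOption.READ));
    }

    public static void main(String[] args) {
        // options.setOpenOptions(overwriteOptions()); // hypothetical usage
        System.out.println(overwriteOptions());
    }
}
```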
readToFileWithResponse
public Response<PathProperties> readToFileWithResponse(String filePath, FileRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions downloadRetryOptions, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions, Duration timeout, Context context)
Reads the entire file into a file specified by the path.
By default, the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Samples
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options
client.readToFileWithResponse(file, fileRange, new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB),
downloadRetryOptions, null, false, openOptions, timeout, new Context(key2, value2));
System.out.println("Completed download to file");
For more information, see the Azure Docs
Parameters:
Returns:
readWithResponse
public FileReadResponse readWithResponse(OutputStream stream, FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5, Duration timeout, Context context)
Reads a range of bytes from a file into an output stream.
Code Samples
FileRange range = new FileRange(1024, 2048L);
DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);
System.out.printf("Download completed with status %d%n",
client.readWithResponse(new ByteArrayOutputStream(), range, options, null, false,
timeout, new Context(key2, value2)).getStatusCode());
For more information, see the Azure Docs
Parameters:
Returns:
rename
public DataLakeFileClient rename(String destinationFileSystem, String destinationPath)
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Samples
DataLakeFileClient renamedClient = client.rename(fileSystemName, destinationPath);
System.out.println("File Client has been renamed");
Parameters:
null
for the current file system.
Returns:
renameWithResponse
public Response<DataLakeFileClient> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions, Duration timeout, Context context)
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Samples
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();
DataLakeFileClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
sourceRequestConditions, destinationRequestConditions, timeout, new Context(key1, value1)).getValue();
System.out.println("File Client has been renamed");
Parameters:
null
for the current file system.
Returns:
scheduleDeletion
public void scheduleDeletion(FileScheduleDeletionOptions options)
Schedules the file for deletion.
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options);
System.out.println("File deletion has been scheduled");
Parameters:
scheduleDeletionWithResponse
public Response<Void> scheduleDeletionWithResponse(FileScheduleDeletionOptions options, Duration timeout, Context context)
Schedules the file for deletion.
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
Context context = new Context("key", "value");
client.scheduleDeletionWithResponse(options, timeout, context);
System.out.println("File deletion has been scheduled");
Parameters:
Returns:
upload
public PathInfo upload(BinaryData data)
Creates a new file. By default, this method will not overwrite an existing file.
Code Samples
try {
client.upload(binaryData);
System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload %s%n", ex.getMessage());
}
Parameters:
Returns:
upload
public PathInfo upload(BinaryData data, boolean overwrite)
Creates a new file, or updates the content of an existing file.
Code Samples
try {
boolean overwrite = false;
client.upload(binaryData, overwrite);
System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload %s%n", ex.getMessage());
}
Parameters:
Returns:
upload
public PathInfo upload(InputStream data, long length)
Creates a new file. By default, this method will not overwrite an existing file.
Code Samples
try {
client.upload(data, length);
System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload %s%n", ex.getMessage());
}
Parameters:
Returns:
upload
public PathInfo upload(InputStream data, long length, boolean overwrite)
Creates a new file, or updates the content of an existing file.
Code Samples
try {
boolean overwrite = false;
client.upload(data, length, overwrite);
System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload %s%n", ex.getMessage());
}
Parameters:
Returns:
uploadFromFile
public void uploadFromFile(String filePath)
Creates a file, with the content of the specified file. By default, this method will not overwrite an existing file.
Code Samples
try {
client.uploadFromFile(filePath);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
uploadFromFile
public void uploadFromFile(String filePath, boolean overwrite)
Creates a file, with the content of the specified file.
Code Samples
try {
boolean overwrite = false;
client.uploadFromFile(filePath, overwrite);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
uploadFromFile
public void uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions, Duration timeout)
Creates a file, with the content of the specified file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
try {
client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions, timeout);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
uploadFromFileWithResponse
public Response<PathInfo> uploadFromFileWithResponse(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions, Duration timeout, Context context)
Creates a file, with the content of the specified file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
try {
Response<PathInfo> response = client.uploadFromFileWithResponse(filePath, parallelTransferOptions, headers,
metadata, requestConditions, timeout, new Context("key", "value"));
System.out.printf("Upload from file succeeded with status %d%n", response.getStatusCode());
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns:
uploadWithResponse
public Response<PathInfo> uploadWithResponse(FileParallelUploadOptions options, Duration timeout, Context context)
Creates a new file. To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
try {
client.uploadWithResponse(new FileParallelUploadOptions(data, length)
.setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
.setMetadata(metadata).setRequestConditions(requestConditions)
.setPermissions("permissions").setUmask("umask"), timeout, new Context("key", "value"));
System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload %s%n", ex.getMessage());
}
Parameters:
Returns: