Azure Data Lake Gen2 design and understanding

azure_learner 340 Reputation points
2024-08-12T06:49:48.4266667+00:00

I need lots of clarity on the design aspects of Azure Data Lake Gen2. I will be grateful for your help.

 

1. How do I determine latency requirements, and where and what should I look at?

2. How do I identify the transaction patterns of the analytics workloads stored in the data lake?

3. In which scenarios should I choose pull or push ingestion?

4. How do I understand the encryption requirements?

5. Who decides on data retention and archival policies: the business stakeholders or the technical team? As far as I understand, the technical team decides in consultation with the business. Please help.

Tags: Azure Data Lake Storage, Azure Data Factory

Accepted answer
  Amira Bedhiafi 26,186 Reputation points
    2024-08-12T21:56:14.48+00:00

    I am splitting your questions as follows:

    1. How do I determine latency requirements, and where and what should I look at?

    Latency requirements in Azure Data Lake Gen2 are critical because they influence the architecture of your data processing pipeline. To determine latency requirements, you need to consider the use cases for your data. For instance, if you're working with real-time analytics or streaming data, low latency is crucial. On the other hand, batch processing can tolerate higher latency.

    You should look at factors such as:

    • Business Needs: Determine the acceptable time from data ingestion to actionable insights. Different business processes have different thresholds for latency.
    • Data Processing Models: Analyze if the data needs to be processed in real-time, near-real-time, or in batches.
    • Storage Performance Metrics: Review the performance capabilities of Azure Data Lake Storage (like IOPS, throughput, and data access patterns).
    • Network Latency: Consider the network latency between the data sources, Azure Data Lake, and other components in your data pipeline (a quick measurement sketch follows this list).
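
    Since storage and network latency can be measured directly, one practical step is to time a representative read against your account. Below is a minimal sketch, assuming the azure-storage-file-datalake and azure-identity Python packages; the account URL, file system, and file path are placeholders to replace with your own.

    ```python
    # Spot-check end-to-end read latency against an ADLS Gen2 account.
    import time

    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://<account>.dfs.core.windows.net",  # placeholder
        credential=DefaultAzureCredential(),
    )
    file_client = service.get_file_client("my-filesystem", "raw/sample.json")  # placeholder

    start = time.perf_counter()
    data = file_client.download_file().readall()  # full round trip: request + body
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Read {len(data)} bytes in {elapsed_ms:.1f} ms")
    ```

    Repeating this for your typical object sizes and access patterns gives you a baseline to compare against the business latency thresholds above.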

    2. How do I identify the transaction patterns of the analytics workloads stored in the data lake?

    Understanding transaction patterns involves analyzing how data is read, written, and accessed within the data lake. This can be done by:

    • Azure Monitor & Log Analytics: These tools let you collect and analyze telemetry from your Azure services, including Azure Data Lake. You can monitor metrics such as data read/write operations, latency, and request patterns (a query sketch follows this list).
    • Storage Analytics: Enabling storage analytics on Azure Data Lake Gen2 can help track and log detailed information about every transaction, providing insights into usage patterns.
    • Workload Profiling: By profiling your analytics workloads, you can identify how often data is accessed, the size of transactions, and the frequency of specific operations like reads and writes.
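
    As a concrete starting point, the sketch below queries a storage account's transaction metrics with the azure-monitor-query Python package. The resource ID is a placeholder, and the metric names shown (Transactions, Ingress, Egress) are the standard Azure Storage platform metrics.

    ```python
    # Pull one week of hourly transaction metrics for a storage account.
    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import MetricAggregationType, MetricsQueryClient

    client = MetricsQueryClient(DefaultAzureCredential())
    resource_id = (
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
        "Microsoft.Storage/storageAccounts/<account>"  # placeholder
    )

    response = client.query_resource(
        resource_id,
        metric_names=["Transactions", "Ingress", "Egress"],
        timespan=timedelta(days=7),      # look back one week
        granularity=timedelta(hours=1),  # hourly buckets
        aggregations=[MetricAggregationType.TOTAL],
    )

    for metric in response.metrics:
        for series in metric.timeseries:
            for point in series.data:
                if point.total:
                    print(metric.name, point.timestamp, point.total)
    ```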

    3. In which scenarios should I choose pull or push ingestion mode?

    The choice between pull and push ingestion modes depends on the nature of your data sources and the timeliness required for the data in the lake:

    • Push Ingestion: Use push mode when data sources need to push data as it becomes available. This is suitable for real-time data ingestion scenarios such as IoT devices, event streams, or applications that generate data continuously and need immediate processing.
    • Pull Ingestion: Pull mode is ideal when data can be retrieved at scheduled intervals, such as batch processing from databases, files, or APIs where data changes infrequently, or where you want to control the load on the source system (a minimal pull sketch follows this list).
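
    To make the pull pattern concrete, here is a minimal sketch of a scheduled batch pull into the lake, assuming the requests and azure-storage-file-datalake packages; the source URL, file system, and folder layout are hypothetical.

    ```python
    # Pull ingestion: fetch a batch export on our schedule and land it in ADLS Gen2.
    from datetime import datetime, timezone

    import requests
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://<account>.dfs.core.windows.net",  # placeholder
        credential=DefaultAzureCredential(),
    )
    fs = service.get_file_system_client("raw")

    # Pull: the pipeline decides when to fetch, which protects the source system.
    payload = requests.get("https://source.example.com/export/daily", timeout=60).content

    # Partition by ingestion date so downstream batch jobs can prune easily.
    path = f"sales/{datetime.now(timezone.utc):%Y/%m/%d}/export.json"
    fs.get_file_client(path).upload_data(payload, overwrite=True)
    ```

    Run under a scheduler (cron, an Azure Data Factory trigger, etc.), this gives you explicit control over the frequency and the load placed on the source.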

    4. How do you understand the encryption requirements?

    Encryption in Azure Data Lake Gen2 is crucial for data protection and compliance. To understand encryption requirements, consider the following:

    • Compliance Requirements: Identify any regulatory or compliance standards your organization must adhere to (e.g., GDPR, HIPAA). These often dictate specific encryption standards.
    • Data Sensitivity: Assess the sensitivity of the data stored in the data lake. Highly sensitive data requires encryption at rest and in transit.
    • Azure’s Built-in Capabilities: Azure Data Lake Gen2 provides encryption at rest by default using Azure Storage Service Encryption (SSE). For more control, consider using customer-managed keys (CMK) with Azure Key Vault.
    • In-Transit Encryption: Ensure that data is encrypted during transfer using protocols like HTTPS (a configuration sketch covering this and CMK follows this list).
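
    Both the in-transit and CMK points are account-level settings that can be enforced from code. A hedged sketch with the azure-mgmt-storage package follows; the subscription, resource group, account, and Key Vault values are placeholders, and the account needs a managed identity with access to the vault before switching to customer-managed keys.

    ```python
    # Enforce HTTPS-only traffic and customer-managed-key encryption at rest.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import (
        Encryption,
        KeyVaultProperties,
        StorageAccountUpdateParameters,
    )

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    client.storage_accounts.update(
        "<resource-group>",
        "<account>",
        StorageAccountUpdateParameters(
            enable_https_traffic_only=True,  # reject unencrypted connections
            encryption=Encryption(
                key_source="Microsoft.Keyvault",  # CMK instead of Microsoft-managed keys
                key_vault_properties=KeyVaultProperties(
                    key_name="<key-name>",
                    key_vault_uri="https://<vault>.vault.azure.net",
                ),
            ),
        ),
    )
    ```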

    5. Who decides on data retention and archival policies? Is it business stakeholders or the technical team?

    Data retention and archival policies are typically determined through collaboration between business stakeholders and the technical team:

    • Business Stakeholders: They play a crucial role as they define the business requirements for how long data needs to be retained based on regulatory, legal, and operational needs.
    • Technical Team: The technical team, including data architects and engineers, advises on the feasibility and implementation of these policies, ensuring they align with the organization’s infrastructure and compliance capabilities.

    The decision is a collaborative process where business needs are balanced with technical possibilities and constraints, ensuring that data retention and archival strategies are both compliant and efficient.
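
    Once the retention periods are agreed, the technical team typically encodes them as a lifecycle management policy on the storage account. A sketch with the azure-mgmt-storage package follows; the rule name, prefix, and day thresholds are examples to adapt, not recommendations.

    ```python
    # Tier data to cool, then archive, then delete after an agreed retention period.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    policy = {
        "policy": {
            "rules": [
                {
                    "name": "retain-then-archive-raw",
                    "enabled": True,
                    "type": "Lifecycle",
                    "definition": {
                        "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["raw/"]},
                        "actions": {
                            "baseBlob": {
                                "tierToCool": {"daysAfterModificationGreaterThan": 90},
                                "tierToArchive": {"daysAfterModificationGreaterThan": 365},
                                "delete": {"daysAfterModificationGreaterThan": 2555},  # ~7 years
                            }
                        },
                    },
                }
            ]
        }
    }

    client.management_policies.create_or_update(
        "<resource-group>", "<account>", "default", policy
    )
    ```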


1 additional answer

  Pinaki Ghatak 4,610 Reputation points Microsoft Employee
    2024-08-13T13:37:18.4966667+00:00

    Hello @azure_learner

    To keep it concise and to the point, the following steps will get you started:

    1. To determine latency requirements, you should consider the type of workload you will be running on the data lake. For example, if you are running real-time analytics, you will need low latency. You should also consider the size of the data you will be working with, as larger data sets may require more time to process. You can look at the performance metrics of your data lake to determine if it meets your latency requirements.
    2. To understand transaction patterns on analytics workloads stored in the data lake, you can use tools like Azure Monitor and Azure Log Analytics to monitor and analyze the performance of your data lake. These tools can help you identify any bottlenecks or issues that may be affecting your transaction patterns.
    3. The choice between pull and push ingestion depends on your specific use case. In general, push ingestion is better for real-time data, while pull ingestion is better for batch data (a push-style sketch follows this list). You should consider factors like the frequency of data updates, the size of the data, and the complexity of the data when deciding which ingestion mode to use.
    4. To understand encryption requirements, you should consider the sensitivity of the data you will be storing in the data lake. If your data is highly sensitive, you may need to use encryption at rest and in transit. Azure Data Lake Gen2 supports encryption at rest using Azure Storage Service Encryption (SSE) and encryption in transit using SSL/TLS.
    5. Data retention and archival policies are typically decided by the business stakeholders, with input from the technical team. The business stakeholders should consider factors like compliance requirements, data usage patterns, and cost when deciding on retention and archival policies. The technical team can provide guidance on the technical feasibility of different retention and archival options.
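
    To illustrate the push side of point 3, here is a minimal sketch where a producer appends records to a data lake file as events arrive, assuming the azure-storage-file-datalake package; the account, file system, and path are placeholders, and the three-event loop stands in for a real event source.

    ```python
    # Push-style ingestion: append events to ADLS Gen2 as they are generated.
    import json
    import time

    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://<account>.dfs.core.windows.net",  # placeholder
        credential=DefaultAzureCredential(),
    )
    file_client = service.get_file_system_client("raw").get_file_client("events/stream.jsonl")
    file_client.create_file()  # start a fresh file for this session

    offset = 0
    for i in range(3):  # stand-in for a real event source (IoT device, app, ...)
        record = (json.dumps({"event_id": i, "ts": time.time()}) + "\n").encode()
        file_client.append_data(record, offset=offset, length=len(record))
        offset += len(record)
        file_client.flush_data(offset)  # commit appended bytes so readers see them
    ```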

    I hope that this response has addressed your query and helped you overcome your challenges. If so, please mark this response as Answered. This will not only acknowledge our efforts, but also assist other community members who may be looking for similar solutions.

