Latency in API call

Vineet S 1,350 Reputation points
2024-12-23T14:34:46.37+00:00

How can I fix API latency when parallel API calls are being sent to Microsoft CRM? The API is taking a long time to send the data.

Azure Databricks
An Apache Spark-based analytics platform optimized for Azure.

2 answers

Sort by: Most helpful
  1. Dillon Silzer 57,556 Reputation points
    2024-12-23T16:35:43.1966667+00:00

    Hi Vineet,

    I would recommend reaching out to the Dynamics community:

    https://community.dynamics.com/

    They work with CRM and may be able to pass your question along to the correct team.

    Another idea is to add wait/delay logic between calls to confirm the data is being passed successfully before sending the next request.


    If this is helpful please accept as answer or upvote.

    Best regards,

    Dillon Silzer | Cloudaen.com | Cloudaen Computing Solutions


  2. phemanth 12,575 Reputation points Microsoft Vendor
    2024-12-24T19:06:34.0033333+00:00

    @Vineet S

    Thanks for the question and for using the MS Q&A platform.

    When dealing with latency issues in Databricks API calls, especially when sending parallel requests, here are some strategies:

    1. Optimize Data Processing: Ensure that your data processing pipelines are optimized. This includes using efficient data formats like Parquet or Delta Lake, and optimizing your Spark jobs.
    2. Asynchronous Processing: Implement asynchronous processing for your API calls. This allows your system to handle multiple requests simultaneously without waiting for each one to complete.
    3. Batching Requests: Instead of sending individual API requests, batch them together. This can reduce the overhead and improve overall performance.
    4. Caching: Use caching mechanisms to store frequently accessed data. This can significantly reduce the number of API calls and improve response times.
    5. Load Balancing: Distribute your API requests across multiple nodes or clusters to avoid overloading a single point.
    6. Monitoring and Logging: Set up comprehensive monitoring and logging to identify bottlenecks and optimize performance. Tools like Datadog or Azure Monitor can be helpful.
    7. Retry Logic: Implement retry logic with exponential backoff to handle transient failures and reduce the impact of latency spikes.
    8. Delta Live Tables (DLT): Consider using Delta Live Tables for low-latency data processing. DLT can help manage data freshness and query-serving latency effectively.
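    As a sketch of point 2 (asynchronous processing), the idea is to send requests concurrently instead of one at a time. The `call_crm_api` function below is a hypothetical stand-in for your real CRM call, used only to illustrate the pattern:

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def call_crm_api(record):
        """Stand-in for a real CRM API call; simulates network latency."""
        time.sleep(0.05)  # pretend each request takes ~50 ms
        return {"id": record["id"], "status": "sent"}

    def send_parallel(records, max_workers=8):
        """Send records concurrently instead of sequentially."""
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(call_crm_api, records))

    records = [{"id": i} for i in range(20)]
    results = send_parallel(records)
    ```

    With 8 workers, 20 calls of ~50 ms each finish in roughly 3 "waves" instead of 20 sequential waits; tune `max_workers` to what the CRM endpoint's rate limits allow.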
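    For point 3 (batching), the first step is simply grouping records so each bulk API call carries many of them. A minimal sketch (the batch size and the idea of one call per batch are assumptions; check what your CRM endpoint supports):

    ```python
    def chunk(records, batch_size):
        """Split records into fixed-size batches, one bulk API call each."""
        return [records[i:i + batch_size]
                for i in range(0, len(records), batch_size)]

    # 95 individual requests collapse into 10 batched calls
    batches = chunk(list(range(95)), batch_size=10)
    ```

    Each batch then goes out as a single request, cutting per-call overhead (connection setup, auth, headers) by roughly the batch size.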

    Please refer to: https://community.databricks.com/t5/technical-blog/how-to-build-operational-low-latency-stateful-spark-structured/ba-p/40868

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.

