Page results using FetchXml

You can specify a limit on the number of rows retrieved for each request by setting a page size. Using paging, you can retrieve consecutive pages of data representing all the records that match the criteria of a query in a performant manner.

The default and maximum page size is 5,000 rows. If you don't set a page size, Dataverse will return up to 5,000 rows of data at a time. To get more rows, you must send additional requests.

Paging models

Dataverse has two paging models, simple paging and paging with paging cookies:

Simple

  • Uses only the fetch element count and page attributes
  • Suitable for small data sets only
  • Can't return a data set larger than 50,000 records
  • Performance degrades as the number of rows increases

Paging cookies

  • Uses the fetch element count, page, and paging-cookie attributes
  • Suitable for data sets of any size
  • Not subject to the 50,000 record limit
  • Maintains performance as the number of rows increases

Simple paging

Request the first page by setting the fetch element page attribute to 1 and the count attribute to the page size:

<fetch count='3' page='1'>
  <entity name='account'>
    <attribute name='name' />
    <order attribute='name' />
    <order attribute='accountid' />
  </entity>
</fetch>

To get the next three records, increment the page value and send another request.

<fetch count='3' page='2'>
  <entity name='account'>
    <attribute name='name' />
    <order attribute='name' />
    <order attribute='accountid' />    
  </entity>
</fetch>

With simple paging, sometimes called legacy paging, Dataverse retrieves all the results of the query up to the current page, selects the number of records needed for the page, and then ignores the rest. This allows for quickly paging backward and forward through the data, or skipping to a specific page. However, the total number of records is limited to 50,000, and there can be performance issues for complex queries and arbitrarily sorted distinct query results.

Simple paging works well for small data sets, but performance suffers as the number of rows grows, and no more than 50,000 rows can be retrieved. For best performance in all cases, we recommend consistently using paging cookies.
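
For example, with simple paging you can skip directly to a later page just by changing the page attribute; no cookie is required. This sketch reuses the account query shown earlier:

<fetch count='3' page='10'>
  <entity name='account'>
    <attribute name='name' />
    <order attribute='name' />
    <order attribute='accountid' />
  </entity>
</fetch>

Dataverse still evaluates all the results up to page 10 before returning the three rows for that page, which is why this approach slows down as the page number grows.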

Paging cookies

When there are more rows to retrieve after requesting the first page, Dataverse usually returns a paging cookie to use on requests for the following pages.

The paging cookie contains data about the first and last record in the results and helps Dataverse retrieve the next page of data as quickly as possible, so use it whenever it's provided. Don't modify the data in the paging cookie. Just set the fetch element paging-cookie attribute to that value and increment the page attribute value for subsequent requests.
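
For example, a request for the second page that uses a paging cookie looks like the following sketch. The paging-cookie value here is a placeholder; use the exact value Dataverse returned with the first page, XML-encoded so it's a valid attribute value (the RetrieveAll method later in this article handles this by calling XElement.SetAttributeValue):

<fetch count='3' page='2' paging-cookie='{paging cookie value returned with page 1}'>
  <entity name='account'>
    <attribute name='name' />
    <order attribute='name' />
    <order attribute='accountid' />
  </entity>
</fetch>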

Queries that don't support paging cookies

Some queries don't support paging cookies. When a query doesn't support them, no paging cookie value is returned with the result. For example, queries sorted on a link-entity attribute might not support paging cookies.

When Dataverse doesn't return a paging cookie, the paging model falls back to simple paging, with all the limitations that come with it.
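
As an illustration, a query like the following sketch, which sorts on a column of a linked account table, might not return a paging cookie. The table, relationship, and alias names are only an example:

<fetch count='50' page='1'>
  <entity name='contact'>
    <attribute name='fullname' />
    <link-entity name='account' from='accountid' to='parentcustomerid' link-type='outer' alias='parentaccount'>
      <attribute name='name' />
      <order attribute='name' />
    </link-entity>
  </entity>
</fetch>

If no paging cookie comes back for a query like this, page through the results the same way as simple paging, keeping in mind the 50,000 record limit.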

How you use paging cookies depends on whether you are using the SDK for .NET or Web API.

The following RetrieveAll static method will return all records that match the FetchXml query, sending multiple requests if the number of records exceeds the page size.

After each request, the method checks the EntityCollection.MoreRecords property to determine if more records match the criteria. If there are more records, the method sets the value of the returned EntityCollection.PagingCookie property to the paging-cookie attribute of the fetch element and sends another request.

/// <summary>
/// Returns all records matching the criteria
/// </summary>
/// <param name="service">The authenticated IOrganizationService instance.</param>
/// <param name="fetchXml">The fetchXml Query string</param>
/// <param name="pageSize">The page size to use. Default is 5000</param>
/// <returns>All the records that match the criteria</returns>
static EntityCollection RetrieveAll(IOrganizationService service, string fetchXml, int pageSize = 5000)
{

    // The records to return
    List<Entity> entities = new();

    XElement fetchNode = XElement.Parse(fetchXml);

    int page = 1; //Start with page 1

    //Set the page
    fetchNode.SetAttributeValue("page", page);

    // Set the page size
    fetchNode.SetAttributeValue("count", pageSize);

    while (true)
    {
        // Get the page
        EntityCollection results = service.RetrieveMultiple(new FetchExpression(fetchNode.ToString()));

        entities.AddRange(results.Entities);

        if (!results.MoreRecords)
        {
            break;
        }

        // Set the fetch paging-cookie attribute with the paging cookie from the previous query
        fetchNode.SetAttributeValue("paging-cookie", results.PagingCookie);

        fetchNode.SetAttributeValue("page", ++page);
    }
    return new EntityCollection(entities);
}

You can adapt the Quick Start: Execute an SDK for .NET request (C#) sample to test FetchXml queries with the following steps:

  1. Add the RetrieveAll static method to the Program class.
  2. Modify the Main method as shown below:
static void Main(string[] args)
{
    using (ServiceClient serviceClient = new(connectionString))
    {
        if (serviceClient.IsReady)
        {
            //WhoAmIResponse response = 
            //    (WhoAmIResponse)serviceClient.Execute(new WhoAmIRequest());

            //Console.WriteLine("User ID is {0}.", response.UserId);

            string fetchQuery = @"<fetch count='3' page='1'>
                <entity name='contact'>
                    <attribute name='fullname'/>
                    <attribute name='jobtitle'/>
                    <attribute name='annualincome'/>
                    <order descending='true' attribute='fullname'/>
                </entity>
        </fetch>";

            EntityCollection records = RetrieveAll(service: serviceClient,
                        fetchXml: fetchQuery,
                        pageSize: 25);

            Console.WriteLine($"Success: {records.Entities.Count}");
        }
        else
        {
            Console.WriteLine(
                "A web service connection was not established.");
        }
    }

    // Pause the console so it does not close.
    Console.WriteLine("Press the <Enter> key to exit.");
    Console.ReadLine();
}

Note

This query will return ALL records that match the criteria. Make sure you include filter elements to limit the results.

Read the following important information about using a connection string in application code.

Important

Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this article requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable.

Ordering and paging

How a page is ordered makes a big difference when paging data. If the information about how the results are ordered is ambiguous, Dataverse can't consistently or efficiently return paged data.

Specify an order for your query. With FetchXml, if you don't add any order elements to your query, Dataverse adds an order based on the primary key of the table. However, QueryExpression doesn't, and when your query specifies distinct results, no primary key values are returned, so Dataverse can't add this default order. You must specify a paging order yourself; without any order specified, distinct query results might be returned in random order. OData doesn't provide an option to return distinct results, but you should still apply an order when retrieving paged results.
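
For example, a distinct query can't rely on the default primary key order, so a sketch like the following specifies an order on each selected column. The columns used here are only an illustration:

<fetch distinct='true' count='50' page='1'>
  <entity name='contact'>
    <attribute name='lastname' />
    <attribute name='address1_city' />
    <order attribute='lastname' />
    <order attribute='address1_city' />
  </entity>
</fetch>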

Paging is dynamic. Each request is evaluated independently as it's received. A paging cookie tells Dataverse which records were returned on the previous page. With this paging cookie data, Dataverse can start with the next record after the last one on the preceding page.

Paging works best going forward. If you go back and retrieve a page you previously retrieved, the results can be different because records could be added, deleted, or modified since you last retrieved the page. In other words, if your page size is 50 and you go back, you get 50 records, but they might not be the same 50 records. If you keep progressing forward through the pages of a data set, you can expect all the records to be returned in a consistent sequence.

Deterministic ordering is important

Deterministic ordering means that there's a way to calculate an order consistently. With a given set of records, the records are always returned in the same order. If you need consistent ordering and paging, you must include a column, or a combination of columns, with unique values and specify an order on them.

Nondeterministic example

Let's look at an example that is nondeterministic. This data set contains only State and Status information and is filtered to only return records in an open State. The results are ordered by Status. The first page of three records is requested. The results look like this:

State   Status     Page
Open    Active     1 Start
Open    Active     1
Open    Active     1 End
Open    Active
Open    Active
Open    Inactive
Open    Inactive

The paging cookie saves information about the last record on the page. When the next page is requested, the last record from the first page isn't included. However, given the nondeterministic data, there's no guarantee that the other two records on the first page aren't included in the second page.

To achieve deterministic ordering, add orders on columns that contain unique values, or values that are semi-unique.

Deterministic example

This query is like the nondeterministic one, but it adds the Case ID column, which contains unique values. The results are ordered by Status and then by Case ID. The results look like this:

State   Status     Case ID     Page
Open    Active     Case-0010   1 Start
Open    Active     Case-0021   1
Open    Active     Case-0032   1 End
Open    Active     Case-0034
Open    Active     Case-0070
Open    Inactive   Case-0015
Open    Inactive   Case-0047

When the next page is requested, the paging cookie has Case-0032 stored as the last record in the first page, so page two starts with the next record after it. The results look like this:

State   Status     Case ID     Page
Open    Active     Case-0010   1 Start
Open    Active     Case-0021   1
Open    Active     Case-0032   1 End
Open    Active     Case-0034   2 Start
Open    Active     Case-0070   2
Open    Inactive   Case-0015   2 End
Open    Inactive   Case-0047

Because this query orders on a column with unique values, the order is consistent.
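
A FetchXml sketch that produces this kind of deterministic ordering on the case table could order on statuscode and then on ticketnumber (the Case ID column). The column names and the statecode filter value are assumptions shown only for illustration:

<fetch count='3' page='1'>
  <entity name='incident'>
    <attribute name='title' />
    <attribute name='ticketnumber' />
    <attribute name='statuscode' />
    <!-- Assumed filter: statecode 0 represents open/active cases -->
    <filter>
      <condition attribute='statecode' operator='eq' value='0' />
    </filter>
    <order attribute='statuscode' />
    <order attribute='ticketnumber' />
  </entity>
</fetch>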

Best practices for orders when paging data

Note

When possible, queries should order on the primary key of the table because Dataverse is optimized for ordering on the primary key by default. Ordering on non-unique or complex fields causes excess overhead and slower queries.

When you retrieve a limited set of data to display in an application, or if you need to return more than 5,000 rows of data, you need to page the results. The choices you make about the order of the results determine whether the rows in each page of data you retrieve overlap with other pages. Without proper ordering, the same record can appear in more than one page.

To prevent the same record from appearing in more than one page, apply the following best practices:

It's best to include a column that contains unique values. For example:

  • Table primary key columns
  • Autonumber columns
  • User/contact IDs

If you can't include a column with a unique identifier, include multiple columns that are most likely to result in unique combinations, as shown in the sketch after this list. For example:

  • First name + last name + email address
  • Full name + email address
  • Email address + company name
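
The following FetchXml sketch orders a contact query on a combination of columns that's likely to be unique: full name plus primary email address. The column names are standard contact columns, shown only as an illustration:

<fetch count='50' page='1'>
  <entity name='contact'>
    <attribute name='fullname' />
    <attribute name='emailaddress1' />
    <order attribute='fullname' />
    <order attribute='emailaddress1' />
  </entity>
</fetch>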

Anti-patterns for orders when paging data

The following are ordering choices to avoid:

  • Orders that don't include unique identifiers

  • Orders on calculated fields

  • Orders on single or multiple fields that aren't likely to provide uniqueness, such as:

    • Status and state
    • Choices or Yes/No
    • Name values by themselves, for example name, firstname, or lastname
    • Text fields like titles, descriptions, and multi-line text
    • Non-unique number fields

Next steps

Learn how to aggregate data.