

DbSyncProvider: Improving memory performance when encountering excessive conflicts.

This is the second post in a series of tips for reducing the runtime memory footprint of the P2P DbSyncProvider class in V2 of Sync Services for ADO.NET.

First post link: DbSyncProvider- Improving Memory Performance In WCF Based Synchronization

We have had customers report memory usage surging when DbSyncProvider encounters conflicts while applying changes from a peer. As the number of conflicts grows, memory usage spikes, and the process can soon run out of memory.

After debugging, we narrowed the issue down to a property on the DbSyncTableProgress class: Conflicts. The runtime uses this property to “log” conflicts that were skipped during a sync session. DbSyncTableProgress holds the running progress made during a sync session for each configured sync adapter. Whenever the runtime encounters a conflict while applying a row, it tries to invoke the optional conflict event handler, then decides what to do with the row based on the ApplyAction enum value returned by that handler. The default action is ApplyAction.Continue, which means the runtime assumes a “LocalWins” resolution, updates the local metadata for that row, and moves on. That is the best-case scenario. Now let's look at the worst-case scenarios; we have two:

1. No event handler is registered and row application failed due to a database error.

2. A user-registered event handler returned an ApplyAction of RetryNextSync.

The first case can happen for a variety of reasons: an error in a user-configured T-SQL command, a typo in a user-configured stored procedure name or its parameters, an error while executing a stored procedure, a database timeout, or some other database-related failure. The second is simply a way of saying “don't worry about this row for this sync; we will try it again in the next sync.” Since the P2P provider is built on the Microsoft Sync Framework, choosing RetryNextSync adds an exception for that particular row to the sync knowledge and moves on. This exception tells the source to resend the row during the next sync.
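To make the second scenario concrete, here is a minimal sketch of a conflict event handler that defers a row to the next sync. It assumes the ApplyChangeFailed event and DbApplyChangeFailedEventArgs type from the Sync Services for ADO.NET V2 API (Microsoft.Synchronization.Data); the peerProvider variable name is just an illustration.

```csharp
// Sketch: deferring a conflicting row to the next sync session.
this.peerProvider.ApplyChangeFailed +=
    new EventHandler<DbApplyChangeFailedEventArgs>(peerProvider_ApplyChangeFailed);

void peerProvider_ApplyChangeFailed(object sender, DbApplyChangeFailedEventArgs e)
{
    // RetryNextSync adds an exception for this row to the sync knowledge,
    // so the source resends it next time. Note that the runtime also logs
    // the conflict (two cloned DataRows) in DbSyncTableProgress.Conflicts,
    // which is the source of the memory growth described in this post.
    e.Action = ApplyAction.RetryNextSync;
}
```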

Whenever either of these two scenarios occurs, the runtime “logs” the conflict in the DbSyncTableProgress for that table. Logging involves cloning both the source DataRow and the local DataRow and storing them in the DbSyncTableProgress.Conflicts property, so for each conflict the runtime caches both the source and destination rows. If the number of conflicts is large, or each DataRow is large, the system will gobble up available memory, eventually leading to OutOfMemory exceptions.

This property was carried over from the SyncTableProgress type in Sync Services V1, which used anchor-based, hub-and-spoke synchronization. V1 also had conflict resolution handling, and one option was to skip applying rows. Since the runtime had no way of storing anchors for skipped rows, those rows would never be synchronized again (until a future change to the same row occurred). So we had to aggregate and log all such conflicts so that users could view them at the end of a sync session and take appropriate action. With the advent of the Sync Framework, this property is no longer needed because that bookkeeping is done automatically by the Sync Framework APIs.

Workaround:

There is a simple workaround for this issue: implement the SyncProgress event handler on DbSyncProvider and clear the Conflicts collection on the TableProgress property.

this.peerProvider.SyncProgress += new EventHandler<DbSyncProgressEventArgs>(peerProvider_SyncProgress);

void peerProvider_SyncProgress(object sender, DbSyncProgressEventArgs e)
{
    // Clear the logged conflicts as the session progresses so the
    // cloned source and destination DataRows are not retained.
    e.TableProgress.Conflicts.Clear();
}

This ensures the collection is always cleared and no extra memory is retained for logged conflicts.
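If your application still needs conflict details after applying this workaround, one option is to capture them yourself in the conflict event handler rather than relying on the Conflicts collection. The sketch below assumes the ApplyChangeFailed event and the DbSyncConflict type from the Sync Services for ADO.NET V2 API, including its Type and LocalChange properties; verify these against your installed version.

```csharp
// Sketch: record lightweight conflict details before they are cleared.
this.peerProvider.ApplyChangeFailed +=
    new EventHandler<DbApplyChangeFailedEventArgs>(peerProvider_ApplyChangeFailed);

void peerProvider_ApplyChangeFailed(object sender, DbApplyChangeFailedEventArgs e)
{
    // Record only what you need (here, the conflict type and table name),
    // not the cloned DataRows, to keep the memory footprint small.
    Console.WriteLine("Conflict of type {0} on table {1}",
        e.Conflict.Type, e.Conflict.LocalChange.TableName);
}
```

Logging a summary line per conflict keeps diagnostics available while avoiding the per-row DataRow clones that cause the memory growth.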

PS: This bug is on track to be fixed in the next release.

Maheshwar Jayaraman
