TF400030: The local data store is currently in use by another operation.
This is an error which can occur when using local workspaces in Visual Studio 2012 or later. The full text of the message in English is: “TF400030: The local data store is currently in use by another operation. Please wait and then try your operation again. If this error persists, restart the application.”
What causes this error?
As mentioned in a previous blog post, local workspaces keep certain pieces of data for your workspace on your local machine; with server workspaces, this data is kept on the server. The local data store holds your working folders (mappings), your local version table, and your pending changes.
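To make the later examples concrete, here is a minimal Python sketch of those three pieces of data. It is purely illustrative: the real store lives in private files under the hidden $tf folder, and the field layouts shown here are assumptions, not the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LocalWorkspaceData:
    """Illustrative stand-in for the local workspace data store."""
    # Working folders: server path -> local path mappings.
    working_folders: dict = field(default_factory=dict)
    # Local version table: item path -> (size, last-modified time) observed
    # when the item was last downloaded or scanned.
    local_versions: dict = field(default_factory=dict)
    # Pending changes: item path -> change type ('edit', 'add', 'delete', ...).
    pending_changes: dict = field(default_factory=dict)
```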
Under our current architecture, only a single operation (thread) can use this local data at a time. If two components of the system want to use the data store simultaneously – perhaps the Solution Explorer and the Source Control Explorer both want to refresh after an Undo operation was performed – then they have to take turns. One will wait for the other to finish, and which one goes first is arbitrary.
The ‘loser’ of the race has to wait before proceeding, but will only wait for about 45 seconds. If it ends up waiting for that full amount of time and still doesn’t get the chance to use the local data store (because some other component of the system is still using it), then this error is raised.
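The behavior is roughly what you would get from a lock with a 45-second acquisition timeout. Here is a minimal sketch in Python; the names, the simple lock, and the timeout handling are all assumptions made for illustration, not the actual Visual Studio implementation.

```python
import threading

LOCK_TIMEOUT_SECONDS = 45  # the approximate wait described above

_local_data_store_lock = threading.Lock()

def with_local_data_store(operation_name, work):
    """Run `work` with exclusive access to the local data store, failing
    the way TF400030 does if the wait times out."""
    if not _local_data_store_lock.acquire(timeout=LOCK_TIMEOUT_SECONDS):
        # The 'loser' of the race waited the full 45 seconds and gave up.
        raise RuntimeError(
            "TF400030: The local data store is currently in use by another "
            "operation. (raised while attempting: %s)" % operation_name)
    try:
        return work()
    finally:
        _local_data_store_lock.release()
```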
Depending on the scenario in which the error was raised, it may be benign. In the example I described, where two components of the system are trying to refresh simultaneously and one fails with a TF400030 timeout, you can manually refresh (F5) the offending component and move on with your work.
However, it’s also possible for this error to pop up during an operation you care more deeply about, like a check-in, or a merge, or an undo operation. In these cases seeing this error may be more of a real problem than a minor annoyance.
45 seconds is a long time. What’s taking so long?
There are three main possibilities.
The first possibility is that the ‘scanner’ is doing a full scan of the workspace to detect pending edits. This involves looking at the file attributes of each item in the workspace and comparing its size and last-modified time to our previously-observed values in the local version table. This scanner is what enables local workspaces to detect edits created just by opening up a file in Notepad and saving it to disk without ever telling TFS. Full scans are infrequent, but they always happen at application startup, and they may happen again later if a flood of changes comes in and overwhelms our notification-based ‘partial scanner’, causing a full invalidation.
The cost of scanning the disk is proportional to the number of items in the workspace, and is strongly affected by whether the operating system’s disk cache is hot or cold. A cold full scan (for example, after you reboot your machine) is much more expensive because actual I/O has to be performed to read the file attributes of each item in the workspace. Cold full scan performance therefore depends heavily on your I/O subsystem – SSDs are dramatically faster at random I/O than mechanical disks.
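In pseudocode, a full scan is a walk of the workspace that stats every file and compares the result against the local version table. The sketch below is a deliberate simplification (the real scanner also handles renames and deletes, skips the $tf folder, and cooperates with the notification-based partial scanner mentioned above), but it shows why the cost is one stat call per item:

```python
import os

def full_scan(workspace_root, local_versions):
    """Walk the workspace and return the paths whose size or last-modified
    time no longer matches the local version table (candidate edits)."""
    candidate_edits = []
    for dirpath, _dirnames, filenames in os.walk(workspace_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, workspace_root)
            st = os.stat(path)  # the per-item I/O that a cold cache makes slow
            if local_versions.get(rel) != (st.st_size, int(st.st_mtime)):
                candidate_edits.append(rel)
    return candidate_edits
```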
The second possibility is that the user is performing a ‘reconciled’ operation on the server. Since the client’s copies of the working folders, local version table, and pending changes are authoritative, the server’s copy of this data must first be synchronized with the client’s when performing a server-side operation like check-in, shelve, or merge. Our synchronization protocol is incremental and efficient for the working folders and the local version table, but to ensure correctness, we copy the entire pending changes table up to the server if any change has occurred since the last reconciled operation.
The cost of reconciling is therefore proportional to the number of pending changes in the workspace, and is strongly affected by the bandwidth of your connection to the TFS server and the performance of that TFS server – the speed at which it can complete the reconcile.
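Here is a sketch of that rule, with hypothetical method names standing in for the real protocol: the first two tables are synchronized by sending only their deltas, while the pending changes table is copied wholesale whenever anything in it has changed.

```python
def reconcile(client, server):
    """Illustrative synchronization step run before a server-side operation
    (check-in, shelve, merge). `client` and `server` are hypothetical
    objects holding the three tables; this is not the actual wire protocol."""
    # Working folders and local version table: incremental and cheap.
    server.apply_working_folder_deltas(client.take_working_folder_deltas())
    server.apply_local_version_deltas(client.take_local_version_deltas())

    # Pending changes: copied in full if anything changed since the last
    # reconcile, so the cost grows with the total number of pending changes.
    if client.pending_changes_dirty:
        server.replace_pending_changes(list(client.pending_changes))
        client.pending_changes_dirty = False
```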
A third possibility is a deadlock between two operations. Such a deadlock would not be related to the number of items in the workspace and would be highly scenario-specific; issues like these are bugs. We fixed a couple of deadlock scenarios in Visual Studio 2012 Update 2, especially ones related to installing and uninstalling NuGet packages.
How can I reduce the frequency of this error or avoid it entirely?
First, please ensure that your copy of Visual Studio 2012 is updated to Update 2 or later. This ensures that you are protected from the deadlock scenarios described above.
Second, try to avoid working with extremely large sets of pending changes (10,000+) for a long time without checking in. This helps keep ‘reconcile’ costs low.
Third, and this is probably the most important point – keep the size of your workspace from growing too large. For Visual Studio 2012, we are recommending that local workspaces have 50,000 or fewer items. (‘Items’ means both files and folders.) The actual enforced limit is much higher (500,000 items), but as you increase the size of your workspace, the risk of seeing TF400030 errors increases, because it takes longer for the scanner to complete a full scan.
Fourth, the problem is exacerbated by having multiple instances of Visual Studio 2012 open at the same time, especially if they have the Source Control Explorer open. Refreshing the Source Control Explorer is an operation that requires reconciling with the server – if you make a change in one instance of Visual Studio, all other instances are notified of the change and refresh themselves, which increases contention for the local data store.
Lastly – if you have exhausted all options and find you are still receiving the TF400030 error, you can switch your workspace from “local” to “server” in the Edit Workspace dialog. It is not possible to get a TF400030 error when working with a server workspace.
How can I find out how many items I have in my workspace?
Measuring this is more complicated than it could be because of the backup copies of each file that we keep to support offline diff and offline undo.
Most workspaces are singly-rooted – that is to say, there is some local folder C:\MyWorkspace where everything is underneath that path. If you right-click on that folder in Windows Explorer (now called the File Explorer) and go to Properties, you should be able to see the total number of items in that folder (recursively). This count double-counts files, though: because the backup copies in our hidden $tf folder are included, each file in the workspace is counted twice.
A slightly more accurate measure is to turn on the ability to see hidden files and folders, and then run the same Properties command in the File Explorer on the C:\MyWorkspace\$tf folder. This won’t count folders, so we’re undercounting slightly, but it should be clear whether you are greatly exceeding the 50,000-item recommendation or generally in compliance with it. (The recommendation is fairly conservative.)
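If you would rather script the measurement, a few lines of Python produce roughly the same number as running Properties on $tf. The C:\MyWorkspace path is just the example from above, and the one-backup-per-file layout of $tf is an assumption based on the description in this post, so treat the result as an approximation.

```python
import os

def count_tf_items(workspace_root):
    """Count the files under the hidden $tf folder, which approximates the
    number of files in the workspace (folders are not counted)."""
    tf_dir = os.path.join(workspace_root, "$tf")
    return sum(len(filenames) for _, _, filenames in os.walk(tf_dir))

if __name__ == "__main__":
    count = count_tf_items(r"C:\MyWorkspace")
    print("Approximately %d items (recommendation: 50,000 or fewer)" % count)
```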
I’m over the recommended limit - how do I reduce my item count?
Often customers have more than one branch of their source code mapped in the same workspace. There is some guidance here for how to start using multiple workspaces – one for each branch.
Some customers, though, will find that their codebase is simply too large for local workspaces. In this case we recommend that you switch from local workspaces to our more traditional ‘server workspaces’, which are proven to scale up to 10,000,000 items in a single workspace (given appropriate server hardware).
What is Microsoft doing to make this better in the future?
We are planning to include, in the next major version of Visual Studio, a built-in feedback mechanism that warns you when you greatly exceed the recommendation for the number of items in a local workspace.
The next major version of Visual Studio also includes significant performance enhancements to the ‘scanner’ component. The improvements are not algorithmic; rather, they are the result of careful profiling and tuning of performance and memory allocation. At scale, the scanner in the next release is about a factor of two faster. As a result, we will likely be adjusting our guidance on the number of items in a local workspace upward from 50,000 to some larger number for the next release of Visual Studio.