Can I clean one partition at a time with ReplDiag, and other tips [Part 4 of 4]

Hi, Rob here again. As we conclude our 4-part series on the ReplDiag tool, there's always one more trick up the author's sleeve, and that is cleaning one NC at a time! We'll also explore some hidden switches and, naturally, give you the latest update on where the tool is going. First, an "NC", or naming context, for those of you who have not figured it out yet, is a partition in the NTDS.DIT database of Active Directory. The basic breakdown is this: think of the database as logically segmented sections where different data is stored, with the schema, configuration, domain, and application (usually DNS) partitions kept as separate logical chunks. When lingering objects exist, whether they are logged in the deleted objects container or live in a global catalog's read-only (RO) NCs/partitions, it may be necessary to perform cleanup per partition, so here's how to do just that.

ReplDiag has a command-line switch that outputs the equivalent repadmin.exe syntax, so lingering object cleanups can be run with the officially supported Microsoft tool by anyone with support concerns or reservations about open source.
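
For reference, the commands the switch emits follow the standard repadmin.exe lingering object removal syntax; the lines below are placeholders to show the shape of that output, not actual output from the tool:

repadmin /removelingeringobjects <DestDCName> <SourceDCGUID> <NamingContextDN> /ADVISORY_MODE
repadmin /removelingeringobjects <DestDCName> <SourceDCGUID> <NamingContextDN>

Here <DestDCName> is the DC being cleaned, <SourceDCGUID> is the DSA object GUID of the reference DC being compared against, <NamingContextDN> is the distinguished name of the partition, and /ADVISORY_MODE only logs what would be removed without actually removing anything.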

As stated earlier in this series, the author wrote this tool to address the forest as a whole; however, there are scenarios where more granular, per-naming-context control is useful. Until the author adds that functionality to the tool, here is how to do it:

Command-line syntax (using a combination of redirection, multiple commands, and some basic command-line tools):

ReplDiag /removelingeringobjects /OutputRepadminCommandLineSyntax | find /I "cn=configuration,dc=contoso,dc=com" > cleanConfigNc.cmd & cleanConfigNc.cmd & del /q cleanConfigNc.cmd

Note: just replace the portion between the quotation marks with the distinguished name of the naming context you need to clean.
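
For example, to target the DomainDnsZones application partition in the same hypothetical contoso.com forest, only the quoted filter string and the temporary file name change:

ReplDiag /removelingeringobjects /OutputRepadminCommandLineSyntax | find /I "dc=domaindnszones,dc=contoso,dc=com" > cleanDomainDnsZonesNc.cmd & cleanDomainDnsZonesNc.cmd & del /q cleanDomainDnsZonesNc.cmd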

Note: One of the advantages of ReplDiag is that it cleans everything in a multithreaded fashion where possible to improve performance. This is lost when repadmin.exe is used in the above batch file scenario.

Now that you're feeling in control, let's dive into some hidden switches and other tips to wrap things up.

/UseRobustDCLocation – In some environments replication is so broken that knowledge of all the DCs has not converged across the forest; at a minimum, that has to be addressed to stabilize replication. To reduce the number of data collection passes and get a clear picture of the whole environment, this switch contacts each DC and retrieves the list of DCs that DC knows about. Those lists are then aggregated and compared to produce a consolidated list of all known DCs, and only then does collection of the current replication data proceed. Because every known DC has to be queried, analysis time increases with the size of the environment.

/OverrideDefaultReferenceDC – By default, the tool picks as the reference DC the DC with the most inbound connections, on the assumption that this is a centrally located hub DC. If that assumption is incorrect for the environment, a reference DC can be designated per naming context.

/Save – The current state of replication, and all associated data, can be saved out to an XML file for reference and to transfer the state elsewhere. This is very useful for sending the state to Microsoft Support or for comparing before/after states. It saves the data with the filename "ReplicationData.XML".

/ImportData – Loads the data from a previous "/Save" and analyzes it for topology issues (see the sketch below).
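
To make the last few switches concrete, here is a minimal sketch of a save/compare workflow, assuming each switch can be run on its own as described above (parameter handling may differ in your version of the tool):

REM Snapshot the current replication state to ReplicationData.XML before making changes:
ReplDiag /Save

REM Later, or on another machine, analyze the saved snapshot for topology issues
REM (assumes ReplicationData.XML is in the working directory):
ReplDiag /ImportData

REM In forests where knowledge of all DCs has not converged, build the DC list by querying every DC first:
ReplDiag /UseRobustDCLocation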

Hidden workaround for cleaning environments that can't be made stable – There are certain scenarios where an environment can't be made stable; for example, large enterprises with many poor-quality links to remote locations around the globe. There are consequences to not talking to all of the DCs, and the author didn't want people to bypass the stability validation without understanding those consequences.

Finally… what's next in the life of this great tool? What does the future look like? In Ken's own words, here's what we're hearing down the pipeline:

There is no timeline on many of these items given other demands on my time, but I'm looking to do some work on collecting more detailed replication data, including Up-To-Dateness and High-Watermark data. I haven't done any work on testing and validating this with RODCs yet, and as customers begin to adopt this, I really need to put some time into that. The irony is that my challenge isn't writing the code, but the time and resources necessary to set up the lab to properly test this.

There are some areas of the code I'm not entirely happy with that have trouble collecting data in certain scenarios, and I hope to make these a little more robust. Some folks are asking for per-NC lingering object cleanup, and that may eventually appear, as well as initiating garbage collection prior to cleaning the NC to reduce log spam. The other items are the priority, especially since there is a workaround that Rob tells me he is going to include in his blog.

Also, check out my other tools on the same CodePlex project. I have to balance the support and feature requests for those tools with those of ReplDiag, though I'm happy to work with volunteers who are passionate about adding features or functionality.

Woohoo! We're done! Thank you for reading our 4-month series on lingering object cleanup and ReplDiag. Keep in mind that the topic has been addressed several times before in some capacity, so I'd like to buy a drink for those of you who have been down this road before: you know who you are!