tail -f on the TFS activity log

We already saw how we could use the QueryServerRequests web method to see the calls that are actively running against the Version Control portion of our TFS instance. I've found that to be pretty useful, as I use it on nearly a daily basis to quickly see what's (splitting my infinitive) going on, especially when there are problems.

More recently, I've been looking at the TfsActivityLogging database (which has been blogged about before - make sure to check Jeff's post on getting activity logging turned on), which gives a similar kind of view, but instead of showing active calls, it only shows completed calls. It spans the various parts of the server, though, so it's not limited to just Version Control calls.

I found myself doing a lot of "top X order by foo desc" queries in SSMS on tbl_command (sometimes joined with tbl_parameter if I wanted the parameters) to see what was going on, sometimes adding a where clause to filter by whatever was interesting (ExecutionTime for just "expensive" commands, IdentityName for just the things I was doing, IPAddress for all calls coming from a particular machine, etc.).
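For example, a typical ad-hoc query of that flavor (just a representative sketch - the column names are the same ones the script selects later) looked something like:

    select top 20 *
    from tbl_command
    where ExecutionTime > 1000000
    order by CommandId desc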

Since I still had visions of sugarplums SMO dancing in my head, I wanted to make a little PowerShell script for this little "tail -f" kind of behavior. I initially ended up making 2 versions - the first one indeed used SMO (much like Dan does in the linked blog entry), but using SMO just to query a table failed the "cannon-mosquito" rule, and it also meant you needed more than just PowerShell (and the framework, already a pre-req for PowerShell of course) installed to run it. The second version (shown here and attached) just uses the normal SqlClient in the framework.

I owe a HUGE thanks to Jim Truher for his help as I created this - the members of the PowerShell team are immensely helpful and supportive, Jim especially so.

Here's a quickie screen shot when I used a where clause restricting the rows to just my account. 

Other example "where clauses" you might find interesting (this is just general SQL, of course) are listed below. Make sure to check this blog post from Jeff for some other ideas.

  • "IPAddress='192.168.1.2'" (look for rows of some particular IP address)
  • "ExecutionTime > 1000000" (look for commands that took over 1 second to run since ExecutionTime is in microseconds)
  • "Application='Version Control'" (only interested in version control commands)
  • combinations like:
    • "IdentityName='NORTHAMERICA\jmanning' and Application='Version Control'" (only want to see my VC calls)
    • "ExecutionTime > 1000000 and IPAddress='192.168.1.2'" (only want to see 1+ second calls from a given host)

The general flow is:

  • first load the System.Data assembly, then connect to the SQL Server
  • Once connected, create a SqlCommand for our desired query, with a @commandid param since that value will vary on each call
    • The query will include any additional where clause contents the user specified
  • while true (until the user interrupts)
    • fetch the rows added since the last query (the first query matches all rows)
      • Note that we use a "top" query to limit each batch of rows, including the first. That gives us "tail -f"-like behavior, since tail -f initially shows the current end of the file.
      • The limit on returned rows also helps keep things reasonable on a very-loaded system, though it means you may end up missing rows in such a situation.
    • Save off the commandid of the last row (for the next query)
    • Reverse the rows back to chronological (or, well, ascending commandid technically) order
    • Select off a few particular columns (now properties of the objects), then format-table with -auto to size the columns based on the data

 

   1 param(
  2    [string] $serverName = '.',
  3    [string] $additionalWhereClause = '1=1',
  4    [string] $databaseName = 'TfsActivityLogging',
  5    [int] $maxRowsPerCheck = 10,
  6    [int] $secondsBetweenChecks = 10
  7 )
  8 
  9 begin
 10 {
 11     # Load the System.Data assembly for access to SQL Server
 12     [void][System.Reflection.Assembly]::LoadWithPartialName("System.Data")
 13     # generate a connection string
 14     $builder = new-object 'system.data.sqlclient.SqlConnectionStringBuilder'
 15     $builder['Server'] = $serverName;
 16     $builder['Integrated Security'] = $true;
 17     $builder['Initial Catalog'] = $databaseName;
 18     # Establish the connection to the server
 19     $serverConn = new-object 'system.data.sqlclient.SqlConnection' $builder.ConnectionString
 20     $serverConn.Open()
 21     # Formulate the query we'll run
 22     $query = "
 23         select top $maxRowsPerCheck * from tbl_command
 24         where commandid > @commandid
 25         and $additionalWhereClause
 26         order by commandid desc
 27     "
 28     # Create the SqlCommand
 29     $sqlCommand = new-object 'system.data.sqlclient.SqlCommand' $query, $serverConn
 30     [void]$sqlCommand.Parameters.Add('@commandid', [system.data.sqldbtype]::int)
 31     # var for tracking where we are in the table so far
 32     $lastCommandId = 0;
 33 
 34     function fetchLatestRows {
 35         $sqlCommand.Parameters['@commandid'].Value = $lastCommandId
 36         $rdr = $sqlCommand.ExecuteReader()
 37         while($rdr.Read())
 38         {
 39             $row = new-object psobject
 40             $row.psobject.typenames[0] = "TfsActivityLoggingObject"
 41             for($c = 0; $c -lt $rdr.FieldCount; $c++) 
 42             { 
 43                 $row | add-member noteproperty $rdr.GetName($c) $rdr.GetValue($c) 
 44             }
 45             $row
 46         }
 47         $rdr.close();
 48     }
 49 }
 50 
 51 process
 52 {
 53     while ($true)
 54     {
 55         # wrap the call with @() (explicit array reference) to force an array even if it's only 1 row
 56         $results = @(fetchLatestRows $lastCommandId)
 57         if ($results -ne $null)
 58         {
 59             # save off the highest command id so we know where we're at in the table
 60             $lastCommandId=$results[0].CommandId;
 61             # reverse the results so they display chronologically
 62             [array]::reverse($results)
 63             # include -auto so we get the best fit for content
 64             $results | select StartTime,ExecutionTime,Command,Application,IdentityName,IPAddress | ft -auto
 65         }
 66         else
 67         {
 68             write-host "...[No activity]..."
 69         }
 70         start-sleep $secondsBetweenChecks;
 71     }
 72 }
 73 
  

Here are a few questions I'd expect - hopefully they'll explain some of the not-blatantly-obvious portions of what's going on.

Why is the default server name '.'?

That's one way of specifying "default/local instance".  I'd link to some info on it, but that's a hard thing to search for.  I probably could have made it 'localhost' instead, but there seems to be a strong association with "." as the target of a connection when connecting to the "local instance" in the SQL Server world.
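For instance, all of these are typical ways of targeting a server with this script (the named-instance and remote-server names below are made up for illustration):

    .\tail-tfsactivitylog.ps1 -serverName '.'                # default local instance
    .\tail-tfsactivitylog.ps1 -serverName 'localhost'        # also the local default instance
    .\tail-tfsactivitylog.ps1 -serverName '.\INSTANCENAME'   # a named local instance (hypothetical name)
    .\tail-tfsactivitylog.ps1 -serverName 'someserver'       # a remote server (hypothetical name)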

You don't clean up your SqlCommand or SqlConnection!

That's true - unfortunately, PowerShell V1 doesn't have a try/finally kind of mechanism, so even if I specified an end {} block that cleaned up the SqlCommand and SqlConnection, it wouldn't get called when the user control-c's to interrupt the while loop. I thought about switching the while loop to something that ran until the user pressed a key, but even then the user would have to remember not to hit control-c (which would exit the processing and skip the end block) - which is exactly what many (ok, at least one: me) will do, since this is a "tail -f" kind of thing.

Since we really want a try/finally kind of thing here, one angle is that this should be coded in C# instead. That's certainly a valid consideration, but I really wanted to go with a PowerShell-based example here.
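For the curious, here's a rough sketch of what that key-press variant could look like (assuming the console host, where $host.UI.RawUI.KeyAvailable is available; the end block only runs if you let the loop exit instead of hitting control-c):

    process
    {
        # run until a key is pressed rather than forever
        while (-not $host.UI.RawUI.KeyAvailable)
        {
            # ... same fetch-and-display work as the real script ...
            start-sleep $secondsBetweenChecks
        }
    }

    end
    {
        # only reached on a normal exit - control-c still skips this
        $sqlCommand.Dispose()
        $serverConn.Close()
    }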

Why are you selecting all the columns out of the tbl_command table if you're only going to display a few of them?

This is mainly because I expect others to be interested in different columns than I am, so I wanted the data generation to not need to change - other users can either just change the select statement columns on line 64, or change line 64 to just $results and do their own formatting outside of the script.
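For example, with line 64 changed to just $results, a hypothetical caller could pick their own columns:

    .\tail-tfsactivitylog.ps1 | select StartTime,Command,IdentityName | ft

(Note the lack of -auto there: since the script never ends, format-table -auto would buffer forever waiting for the end of input to size the columns - which is exactly why the script applies it per-batch inside the loop.)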

Even better, a user can change line 64 to just $results and create their own view definition for the TfsActivityLoggingObject (the object we create here) - that way they can change their mind about how they want the default view of those instances to look without having to change the script at all. A good example of a view definition is the one Jim creates for his PowerShellJobObject in his "jobs support" blog post (about 2/3 of the way down), in a file called job.format.ps1xml.

I'm violating one of the PowerShell-isms as this is currently coded, because I'm outputting text from this processing (by doing the select and format-table call here) rather than outputting the objects and letting the caller do what they want (pipe them to something else, select their own columns and format them, whatever). One of the reasons I'm doing the select and ft -auto is that there's no way for me to specify an object's default view "inline" in this script. Because of that, if I just output the objects, the caller ends up with many more columns of data than they likely want to see, since most likely they won't have already defined a custom view for the TfsActivityLoggingObject.

Of course, the other route would be to trim the original SQL query down to just the columns you want to display, but since you may want to do later pipeline processing based on columns that you don't necessarily want to display, it's easier to have the objects carry all the columns and strip them down to the display set when viewing them - that way, user-based processing of the objects has access to the full set of data.

Why are you creating your own object and adding properties to it instead of just storing each row as a hash?

I actually started out based on a hash, but Jim and others explained that while the PowerShell V1 interpreter does a good job of making hash keys look like properties off the object, they aren't really baked into the object itself, so you can't do something like "select key1,key2" on a hash and have it work. By creating a custom object and using add-member to add "real" properties to those objects, we can do things like select off the columns, and also leverage things like the format data support to define how we view this custom object (see the previous question).
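A quick illustration of the difference (a minimal sketch, using a couple of the activity-log column names):

    # hash: the keys look like properties, but select-object can't see them
    $hash = @{ Command = 'Get'; ExecutionTime = 42 }
    $hash | select Command,ExecutionTime      # both columns come back empty

    # custom object: add-member attaches real properties, so select works
    $obj = new-object psobject
    $obj | add-member noteproperty Command 'Get'
    $obj | add-member noteproperty ExecutionTime 42
    $obj | select Command,ExecutionTime       # Command=Get, ExecutionTime=42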

Why did you need the explicit array reference (line 56) when calling fetchLatestRows?

This is because there are 3 possible results from fetchLatestRows as written:

  1. $null (nothing is returned, so assigning that to a var gives you a null var)
  2. an array (2 or more rows are returned, so when you assign to a variable you get an array)
  3. a scalar (a single TfsActivityLoggingObject).

Rather than complicate the caller too much, I didn't want to have to deal with the scalar case separately. Thankfully, PowerShell is extremely friendly in this regard: when you evaluate a scalar in an array context (via the @() wrapping), it automatically promotes it up to a 1-entry array for you. (It leaves null as null, for those curious as to whether it would promote that to a 0-entry array.)

Note that I'm doing this route because it's more of the "PowerShell way" IMHO to emit a variable number of objects to your caller.  If I wanted to simplify things, I could just as easily have fetchLatestRows declare a $rows variable as an array or arraylist, add to it during the reader loop, then emit back that $rows structure.  Then the caller wouldn't need to wrap with an explicit array reference, and would do a count == 0 check for "no activity" instead.  However, I left the code like this as a demonstration, and as an excuse to bring up and answer this question :)
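A quick demonstration of that promotion (a minimal sketch, with made-up functions):

    function emitOne { 'a' }
    function emitTwo { 'a'; 'b' }

    (emitOne).GetType().Name      # String - a bare scalar came back
    @(emitOne).GetType().Name     # Object[] - @() promoted it to a 1-entry array
    @(emitOne).Count              # 1
    @(emitTwo).Count              # 2 - already an array, so @() leaves it alone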

Why do you have separate begin and process sections?

True enough, I don't really need to, as I'm not dealing with any kind of input from an external source here. I could have done the entire script in one block or the other, or not even declared begin and process blocks at all (a script's unnamed contents just get treated as a single block anyway). However, I like it as a style point for a few reasons:

  1. It helps to delineate which parts of the code are "setup" and which are "do the work"
  2. It gives me a nice place to define the function I'll use during the "worker" part
  3. It gave me a chance to talk about begin/process/end a little bit (the general shape is sketched below).
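For reference, that general shape is just (skeleton only, no real work):

    begin
    {
        # one-time setup: load assemblies, open connections, define helper functions
    }
    process
    {
        # the actual per-iteration work
    }
    end
    {
        # teardown - though see the cleanup question above for the control-c caveat
    }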

Why aren't you using the SqlDependency class (which exposes SQL Server 2005 Query Notifications) instead of this polling approach?

A few different reasons:

  1. Just polling keeps the example simpler and easier to understand.
  2. I don't need to know the moment a new activity log row shows up - knowing a few seconds later isn't a big deal to me, so I'd rather just get the new rows when I get around to making my next query.  I'm not in a "I need to keep this cache up to date" kind of situation.
  3. I wanted to try to keep the SQL query simple to understand as well, and I'd have to spell out the columns selected if I used query notifications, since * isn't allowed in queries that will use notifications (see the sketch below).
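Just to illustrate that last point, the query string would have to turn into something like this (the column list here is just the set the script already displays; notifications impose other restrictions too, like two-part table names and no "top" clause, so the row-limiting would need rethinking as well):

    $query = "
        select CommandId, StartTime, ExecutionTime, Command,
               Application, IdentityName, IPAddress
        from dbo.tbl_command
        where CommandId > @commandid
        and $additionalWhereClause
        order by CommandId desc
    "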

tail-tfsactivitylog.ps1
