
More Support for EventSource and strongly typed logging: The Semantic Logging Application Block

If you have been following my blog at all, you have seen my articles about System.Diagnostics.Tracing.EventSource, a class introduced in V4.5 of the .NET Runtime for production logging.   This class replaces the System.Diagnostics.TraceSource class, and we strongly encourage people to consider using EventSource instead of TraceSource for any future work (you can have events from one flow into the other 'stream', so you can transition slowly if you need to).   We like to believe EventSource is the 'ultimate' logging API for .NET, in that you should be able to do pretty much any logging you desire with it, and we should not have to change it in incompatible ways, ever.   We believe this simply because of the 'shape' of a logging statement; here is a prototypical one:

  • myEventSource.MyEvent(eventArg1, eventArg2, ...)

Basically, at the call site you specify (a minimal sketch of such an EventSource follows this list):

  1. The EventSource (myEventSource)
  2. The Name of the Event being raised as a method (MyEvent)
  3. Any 'payload' arguments you wish to log as part of logging that event. 
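To make this concrete, here is a minimal sketch of what such an EventSource might look like. The class name, the ETW provider name, and the MyArg1/MyArg2 parameters are purely illustrative (chosen to line up with the snippets below), not from any particular library.

using System.Diagnostics.Tracing;

// A minimal, illustrative EventSource.  The provider name and event are placeholders.
[EventSource(Name = "MyCompany-MySource")]
sealed class MyEventSource : EventSource
{
    public static readonly MyEventSource Log = new MyEventSource();

    // The parameter names (MyArg1, MyArg2) and their types become the event's payload,
    // and travel through the logging pipeline intact.
    [Event(1, Level = EventLevel.Informational)]
    public void MyEvent(string MyArg1, int MyArg2)
    {
        WriteEvent(1, MyArg1, MyArg2);
    }
}

// The call site then looks just like the prototypical statement above:
//     MyEventSource.Log.MyEvent("some value", 42);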

Notice I did not specify any formatting strings, logging levels, or other metadata at the call site, just what 'has to be there'.   What may not be quite so obvious is that all the event arguments are passed without loss of type information (the MyEvent method is strongly typed, not a 'params object[]').   Unlike 'printf' or 'string.Format' logging, we did not 'stringify' anything and lose information; it is all passed to the logging method.   This information is preserved in the EventSource 'pipe', which means that when you access a particular event you can do so in a strongly typed way, accessing each payload argument with a property accessor, as this snippet of code demonstrates. 

MySourceParser.MyEvent += delegate(MyEventTraceEvent data) {
    // MyArg1 and MyArg2 are the names of the parameters of the 'MyEvent' method,
    // surfaced as strongly typed properties on the event's data object.
    Console.WriteLine("MyEvent: Arg1 = {0}  Arg2 = {1}", data.MyArg1, data.MyArg2);
};

Thus you really can get to the point where your logging is what you wanted: it is like passing data from a method call in the program being monitored, through the logging pipeline (serialization), to pop out as a strongly typed structure in some automation that processes the logging files.    Basically you get a 'full fidelity' (without loss of type information or 'metadata' like payload names) end-to-end pipeline for logging information.   This is ideal in event logging, and EventSource is in a position to deliver it.

We are not completely there yet, as there are missing pieces.  However, far more of it is actually in place than people know about.  I am trying to fix that with my blog, but there is a lot to tell and even that is a work in progress.    Well, in this blog entry I am here to tell you about one of the big pieces of this overarching 'strongly typed eventing story' that is falling into place:

The Semantic Logging Application Block

Microsoft has a team called the 'Patterns and Practices' team whose job it is to illustrate good and proven practices in using Microsoft technologies.   As part of their work they write guidance and build samples of real applications, but they also write utility libraries that 'flesh out' Microsoft technologies that currently provide only 'the basics'.   This team recognized that the strongly typed pipeline that EventSource provides is a great foundation, but only provides the basics, and that they could supply the next layer of 'value add' libraries.

This is what the Semantic Logging Application Block is.    'Semantic Logging' is their term for strongly typed logging.   They like the term because it strongly conveys the fact that the logging happens at a more structural level, and is much closer to logging the semantics of the program (since you pass strongly typed fields without losing the types or the names of the event or fields), than classic string-based logging is.    Their 'Embracing Semantic Logging' article gives a great summary of their view of why they like semantic logging and why you should too. 

As you might expect, I heartily endorse their philosophy on logging and their efforts to 'flesh out' the EventSource foundation and make it as useful as possible to as many people as possible.   If you are not already a convert to EventSource, I strongly encourage you to read 'Embracing Semantic Logging'. 

For those of you who are already 'on board' with strongly typed logging, what can the Semantic Logging Application Block do for you?

  1. EventListeners that send your EventSource data to various places like flat files, Azure storage, the Windows event log, a database, etc. (see the sketch after this list). 
  2. An EventListener host/service that can process events from any process on the system and do application-specific monitoring / rollups. 
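As a rough illustration of the first item, here is a minimal sketch of wiring the illustrative MyEventSource from earlier to one of the block's in-process listeners. It assumes the Semantic Logging Application Block package is referenced; the console sink is shown, and the flat file, database, and Azure sinks follow the same pattern (check the block's documentation for the exact sink names and options).

using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

class Program
{
    static void Main()
    {
        // Create an in-process listener and attach a sink to it.
        var listener = new ObservableEventListener();
        listener.LogToConsole();

        // Subscribe the listener to our EventSource at the desired verbosity.
        listener.EnableEvents(MyEventSource.Log, EventLevel.Informational);

        // Events fired from here on are routed through the listener to the sink.
        MyEventSource.Log.MyEvent("some value", 42);

        // Unhook the EventSource and dispose the listener when done.
        listener.DisableEvents(MyEventSource.Log);
        listener.Dispose();
    }
}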

For more complete details, see the PDF documentation on the 'Download' tab.  Here are the links for the current release (but they are likely to break in the future):

  1. SemanticLogging-DevelopersGuide-draft-CTP.pdf
  2. SemanticLogging-ReferenceDocs-draft-CTP.pdf

So, if you are doing logging on the .NET platform, you should be using EventSource.   If you are looking around for reusable code you can leverage, take a look at the Semantic Logging Application Block. 

Vance

Comments

  • Anonymous
    October 28, 2013
    I like the notion of strongly typed events but it seems to be a bit at odds with a pluggable logging architecture.  With a typical ILog interface you have a set number of methods to log in an admittedly unstructured way but the interface is pretty much fixed.  With the strongly typed (EventSource) approach you either A) skip interfaces and use EventSource directly (tight coupling) and lose the benefits of a "pluggable" approach or B) you design an interface with strongly typed methods that map 1-to-1 with some EventSource that gets plugged in.  The problem with approach B is that for every new Write method you have to update the interface.  Has anybody made EventSource work in a pluggable logging architecture? Perhaps the pluggable aspect is how ETW is pluggable?   Is it also the recommended practice that for every single new trace event, the developer adds a strongly typed method to the EventSource?  That seems like it might get old after a while and perhaps messy as folks remove code and corresponding EventSource method calls.  As those EventSource methods become dead code, you can't really remove them without affecting the event id value, right?  So you're left carrying around the dead code.

  • Anonymous
    October 28, 2013
    Thanks!  I'm still trying to wrap my head around this approach to logging and this extra info helps.

  • Anonymous
    December 10, 2014
    SLAB is great but unfortunately it has two significant issues when used in Azure: (1) The out-of-process host/service cannot run on an Azure Website. Neither the deployment script nor a WebJob is allowed to access ETW sessions. This is very unfortunate, considering how the somewhat obsolete System.Diagnostics.Trace is integrated so well, with easy configuration for both blob and table storage (including retention policy and verbosity threshold). A SLAB service website configuration would be a big win. (2) There is no ETW viewer that I know of that supports Azure Tables, so once your logs are written to the table your only way of viewing them is a generic Table viewer. For filtering you can get around it using (paid) tools such as ClumsyLeaf's TableXplorer, but for activity ID correlation you're completely out of luck to the best of my knowledge. A perfview extension that reads SLAB Azure Table logs would be huge.