
Developer Approach: How Would You Tackle This?

I have an idea for a set of blog posts that might get me out of the apparent writer's block I have been suffering lately.  As developers, we love to debate the "right" way to do something, so I thought I would throw out a few scenarios to see how others perceive the "right" way to solve a problem. 

The Problem

You have a Windows Service application (an application controlled through the SCM) written in managed code that performs a lot of work on a continual basis, with peak processing times occurring in the middle of the afternoon.  The type of work the service performs is the least significant bit for this problem.  The service application will continually emit messages that can be used to monitor the application's status (i.e., see some evidence of work being performed, current health, overall progress to date, etc.).

Your task is to write a Console application that can consume the messages from the service application and display them in the console window (for example, maybe it is a simple Console.WriteLine).  Neither the production nor the consumption of this diagnostic information can impede performance of the service application itself (or it must have only a very limited effect on the service application's primary task of continually processing data).

For an idea of what this should look like, think about how "tail -f" works when monitoring log files.  You open a console window, point it to a file, and then watch as new data is written to the file.  How would you write a similar application that monitors these diagnostic messages without impeding the performance of the application itself?
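The "tail -f" behavior referenced above boils down to a simple follow loop: seek to the end of the file, then repeatedly try to read new data, sleeping briefly when none has arrived. A minimal sketch in Python (the file name and `follow` function are illustrations, not part of the problem statement):

```python
import time

def follow(path, poll_interval=1.0):
    """Yield new lines appended to the file at `path`, like `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # jump to end of file; only data written afterwards is reported
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_interval)  # nothing new yet; wait and retry
                continue
            yield line.rstrip("\n")

# Usage: print each new diagnostic message as it is appended.
# for message in follow("service.log"):
#     print(message)
```

Note that this polling approach is exactly what the classic `tail -f` does; it trades a small, fixed amount of idle work in the monitor for zero extra work in the producer.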

Let's see how you would tackle this, and if you think others are doing it the right way or the wrong way.

Comments

  • Anonymous
    January 08, 2006
Assuming I control both sides (core and monitor), the core application needs to spend as little time as possible generating the messages, and then possibly pass the responsibility of sending them to a lower-priority thread. This means that the diagnostic messages are going to be buffered for a while. Communication between the core and monitor should be efficient, and the monitor will probably process data in batches, and run at a lower thread priority - unless the monitor runs on another machine, in which case consuming the messages off the wire to release the core application buffers sooner may take precedence.

    Can you constrain the problem by deciding if your monitoring application runs on the same machine or a different one?
  • Anonymous
    January 08, 2006
    First you'd have to determine how much of a history of the messages you'd want to keep. This should be user configurable.
    - A simple approach, depending on the number of messages, would be to create an array/queue of messages and display the contents of the array on a timed basis. The problem here is that if the messages are big, and a large history is kept, it could begin to have an impact on memory/performance during the refresh.
    - An alternative would be to write the messages to a db/file and retrieve the last n records, just like the tail command.
  • Anonymous
    January 08, 2006
    Any reason why you should NOT use "tail -f" ?
  • Anonymous
    January 08, 2006
    Yaniv - great thinking. The solution runs on 1 machine for now, but you gave me a great idea for the follow-up problem :)

    tenbosch - When you run tail with the "f" (follow) switch, it sleeps for a second and then attempts to read data from the input source in an endless loop. The idea is that you have an open console window that can capture any new messages sent by the Windows Service application, with the console application remaining in a listening state indefinitely.

    Leppie - we assume that the diagnostics may not be written to file. That does not prohibit the diagnostics from going to file, but someone else may suggest a pub/sub mechanism where the subscribers simply write out the message to the console.
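The two ideas in the comments above - keep the hot path cheap by handing messages to a background thread that drains them in batches, and keep only a bounded, user-configurable history - can be combined in one small sketch. This is only an illustration under stated assumptions: the class and parameter names are hypothetical, and a daemon thread stands in for the commenter's "lower-priority thread" (standard Python exposes no thread-priority API; in .NET you would set Thread.Priority instead):

```python
import queue
import threading
from collections import deque

class DiagnosticsBuffer:
    """Decouple message production from consumption: the service's hot path
    only enqueues; a background thread drains the queue in batches and keeps
    a bounded history of the last `history_size` messages."""

    def __init__(self, history_size=1000, batch_size=50):
        self._queue = queue.Queue()
        self.history = deque(maxlen=history_size)  # oldest entries fall off automatically
        self._batch_size = batch_size
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def emit(self, message):
        # Called from the service's worker code: a cheap enqueue, no I/O here.
        self._queue.put(message)

    def _drain(self):
        # Background thread: block until one message arrives, then grab any
        # backlog up to batch_size so messages are handled in batches.
        while True:
            batch = [self._queue.get()]
            while len(batch) < self._batch_size:
                try:
                    batch.append(self._queue.get_nowait())
                except queue.Empty:
                    break
            self.history.extend(batch)
```

A monitor process could then periodically render `history` (or the drain step could forward each batch over whatever IPC channel the eventual design chooses), keeping the service's primary processing loop free of diagnostic overhead.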