Sample Alert and State Change Insertion

 Update: I have updated the Management Pack to work with the final RTM bits

First, a disclaimer. Not everything I write here works on the Beta 2 bits that are currently out. I had to fix a few bugs in order to get all these samples working, so only the most recent builds will fully support the sample management pack. I will, however, provide at the end of the post a list of the things that don't work =).

I've attached to the post a sample management pack that should import successfully on Beta 2; please let me know if it doesn't and what errors you get. This management pack is for sample purposes only. We will be shipping, either as part of the product or as a web download, a sealed SDK/MCF management pack that will help with alert and state change insertion programmatically and that will support all the things I am demonstrating here.

What I would like to do is go through this management pack, talk about how each component works, and then include some sample code at the end that shows how to drive the management pack from SDK code.

The first thing you will notice in the management pack is a ConditionDetectionModuleType named System.Connectors.GenericAlertMapper. What this module type does is take any data type as input and output the proper data type for alert insertion into the database (System.Health.AlertUpdateData). This module type is marked as internal, meaning it cannot be referenced outside of this management pack, and simply provides some glue to make the whole process work.

Next, we have the System.Connectors.PublishAlert WriteActionModuleType which takes the data produced by the aforementioned mapper and publishes it to the database. Regardless of where other parts of a workflow are running, this module type must run on a machine and as an account that has database access. This is controlled by targeting as described in the previous post. This module type is also internal.
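
To make the declarations a bit more concrete, here is a rough skeleton of how these two internal module types are laid out. This is only an outline sketched from the description above; the MP aliases and the omitted configuration/implementation are my assumptions, so consult the attached management pack for the real definitions.

<ConditionDetectionModuleType ID="System.Connectors.GenericAlertMapper" Accessibility="Internal">
  <!-- configuration and mapping implementation omitted in this sketch -->
  <!-- takes any data type as input and emits the alert insertion data type -->
  <OutputType>SystemHealth!System.Health.AlertUpdateData</OutputType>
</ConditionDetectionModuleType>

<WriteActionModuleType ID="System.Connectors.PublishAlert" Accessibility="Internal">
  <!-- implementation omitted; must run on a machine and account with database access -->
  <InputType>SystemHealth!System.Health.AlertUpdateData</InputType>
</WriteActionModuleType>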

Now we have our first two public WriteActionModuleTypes, System.Connectors.GenerateAlertFromSdkEvent and System.Connectors.GenerateAlertFromSdkPerformanceData. These combine the aforementioned module types into a more usable composite. They take as input System.Event.LinkedData and System.Performance.LinkedData, respectively. Note that these are the two data types produced by the SDK/MCF operational data insertion API. Both module types have the same configuration, allowing you to specify the various properties of an alert.
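
To illustrate how such a composite might hang together, here is a sketch of the event flavor: the configuration exposes the alert properties, the mapper turns the incoming data into alert update data, and the publish module writes it to the database. The configuration element names, MP aliases, and mapper configuration shown here are illustrative guesses, not excerpts from the attached MP.

<WriteActionModuleType ID="System.Connectors.GenerateAlertFromSdkEvent" Accessibility="Public">
  <Configuration>
    <!-- illustrative alert properties; the real MP defines the full set -->
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="AlertName" type="xsd:string" />
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="AlertDescription" type="xsd:string" />
  </Configuration>
  <ModuleImplementation>
    <Composite>
      <MemberModules>
        <ConditionDetection ID="Mapper" TypeID="System.Connectors.GenericAlertMapper">
          <!-- mapper configuration (alert properties passed through from $Config$) omitted -->
        </ConditionDetection>
        <WriteAction ID="Publish" TypeID="System.Connectors.PublishAlert" />
      </MemberModules>
      <Composition>
        <Node ID="Publish">
          <Node ID="Mapper" />
        </Node>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <InputType>System!System.Event.LinkedData</InputType>
</WriteActionModuleType>

The performance data flavor would look the same, with System.Performance.LinkedData as the input type.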

The last of the type definitions is a simple UnitMonitorType, System.Connectors.TwoStateMonitorType. This monitor represents two states, Red and Green, which can be driven by events. You'll notice that it defines two operational state types, RedEvent and GreenEvent, which correspond to the two expression filter definitions that match on $Config/RedEventId$ and $Config/GreenEventId$ to drive state. What this monitor type essentially defines is that if a "Red" event comes in, the state of the monitor is red, and vice versa for a "Green" event. It also allows you to configure the event id for these events.
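
For a feel of the shape of this monitor type, here is a condensed sketch (it mirrors the modified version a commenter posts further down; the MP aliases and exact layout are approximations, so the attached MP remains the authoritative definition):

<UnitMonitorType ID="System.Connectors.TwoStateMonitorType" Accessibility="Public">
  <MonitorTypeStates>
    <MonitorTypeState ID="RedEvent" NoDetection="false" />
    <MonitorTypeState ID="GreenEvent" NoDetection="false" />
  </MonitorTypeStates>
  <Configuration>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="RedEventId" type="xsd:int" />
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="GreenEventId" type="xsd:int" />
  </Configuration>
  <MonitorImplementation>
    <MemberModules>
      <!-- targeted SDK event provider: only sees events inserted against the monitored instance -->
      <DataSource ID="DS" TypeID="SCLibrary!Microsoft.SystemCenter.TargetEntitySdkEventProvider" />
      <ConditionDetection ID="RedFilter" TypeID="System!System.ExpressionFilter">
        <Expression>
          <SimpleExpression>
            <ValueExpression><XPathQuery>EventNumber</XPathQuery></ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression><Value>$Config/RedEventId$</Value></ValueExpression>
          </SimpleExpression>
        </Expression>
      </ConditionDetection>
      <ConditionDetection ID="GreenFilter" TypeID="System!System.ExpressionFilter">
        <Expression>
          <SimpleExpression>
            <ValueExpression><XPathQuery>EventNumber</XPathQuery></ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression><Value>$Config/GreenEventId$</Value></ValueExpression>
          </SimpleExpression>
        </Expression>
      </ConditionDetection>
    </MemberModules>
    <RegularDetections>
      <RegularDetection MonitorTypeStateID="RedEvent">
        <Node ID="RedFilter"><Node ID="DS" /></Node>
      </RegularDetection>
      <RegularDetection MonitorTypeStateID="GreenEvent">
        <Node ID="GreenFilter"><Node ID="DS" /></Node>
      </RegularDetection>
    </RegularDetections>
  </MonitorImplementation>
</UnitMonitorType>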

Now we move to the part of the management pack where we use all these defined module types.

First, let's look at System.Connectors.Test.AlertOnThreshold and System.Connectors.Test.AlertOnEvent. Both of these rules use the generic performance data and event data sources mentioned in an earlier post. Those data sources produce performance data and events for any monitoring object they were inserted against, and as such, you'll notice both rules are targeted at Microsoft.SystemCenter.RootManagementServer; only a single instance of each rule will be running. The nice thing about this is that you can generate alerts for thousands of different instances with a single workflow, assuming your criteria for the alert are the same. Which brings me to the second part of the rule, the expression filter. Each rule has its own expression filter module that matches the incoming data against a particular threshold or event number. Lastly, each includes the appropriate write action to actually generate the alert, using parameter replacement to populate the name and description of the alert.
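
As a rough reference, a rule along these lines might look like the skeleton below (the event case; the threshold rule is analogous, with the performance data source and a numeric comparison). The data source TypeID, MP aliases, and write action element names are approximations rather than excerpts from the attached MP.

<Rule ID="System.Connectors.Test.AlertOnEvent" Enabled="true" Target="SCLibrary!Microsoft.SystemCenter.RootManagementServer">
  <Category>Alert</Category>
  <DataSources>
    <!-- generic (non-targeted) SDK event data source -->
    <DataSource ID="DS" TypeID="SCLibrary!Microsoft.SystemCenter.SdkEventProvider" />
  </DataSources>
  <ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
    <Expression>
      <SimpleExpression>
        <ValueExpression><XPathQuery>EventNumber</XPathQuery></ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression><Value>1</Value></ValueExpression>
      </SimpleExpression>
    </Expression>
  </ConditionDetection>
  <WriteActions>
    <WriteAction ID="GenerateAlert" TypeID="System.Connectors.GenerateAlertFromSdkEvent">
      <!-- illustrative configuration; parameter replacement pulls values from the incoming event -->
      <AlertName>Alert from $Data/PublisherName$</AlertName>
      <AlertDescription>$Data/EventDescription$</AlertDescription>
    </WriteAction>
  </WriteActions>
</Rule>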

The other two rules, System.Connectors.Test.AlertOnThresholdForComputer and System.Connectors.Test.AlertOnEventForComputer, are similar, only they use the targeted SDK data source modules and as such are targeted at System.Computer. It is important to note that targeting computers will only work on computers that have database access and are running under an account that has database access. I used this as an example because it didn't require me to discover any new objects; plus, I had a single-machine install where the only System.Computer was the root management server. The key difference between these two rules and the previous rules is that there will be a new instance of each rule running for every System.Computer object. So you can imagine, if you created a rule like this and targeted it at a custom type you had defined, for which you discovered hundreds or thousands of instances, you would run into performance issues. From a pure modeling perspective, this is the "correct" way to do it, since logically you would like to target your workflows to your type; practically, however, it's better to use the previous kind of rule to ensure better performance.
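
The per-computer variant differs mainly in the target and the data source; one copy of the workflow is loaded for every discovered computer. A minimal sketch of the difference is below (aliases and TypeIDs are again approximate, and the filter and write action, omitted here, are the same as in the non-targeted rule above):

<Rule ID="System.Connectors.Test.AlertOnEventForComputer" Enabled="true" Target="System!System.Computer">
  <Category>Alert</Category>
  <DataSources>
    <!-- targeted data source: only sees data inserted against the specific target instance -->
    <DataSource ID="DS" TypeID="SCLibrary!Microsoft.SystemCenter.TargetEntitySdkEventProvider" />
  </DataSources>
  <!-- expression filter and write action omitted; identical to the non-targeted rule -->
</Rule>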

The last object in the sample is System.Connectors.Test.Monitor. This monitor is an instance of the monitor type we defined earlier. It maps the GreenEvent state of the monitor type to the Success health state and the RedEvent state to the Error health state. It defines via configuration that events with id 1 will make the monitor go red and events with id 2 will make it go back to green. It also defines that an alert should be generated when the state goes to Error and that the alert should be auto-resolved when the state goes back to Success. Lastly, you'll notice the alert definition here actually uses the AlertMessage paradigm for the alert name and description. This allows for fully localized alert names and descriptions.
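
Pieced together from that description, the monitor instance looks roughly like the sketch below: the operational states map the monitor type's states to health states, the configuration supplies the two event ids, and the alert settings turn on alerting for the Error state with auto-resolve. The AlertMessage points at a string resource whose localized text lives in the language pack section (not shown); the aliases, target, and exact element names here are approximations.

<UnitMonitor ID="System.Connectors.Test.Monitor" Accessibility="Public" Enabled="true"
             Target="System!System.Computer" ParentMonitorID="SystemHealth!System.Health.EntityState"
             TypeID="System.Connectors.TwoStateMonitorType">
  <Category>Custom</Category>
  <AlertSettings AlertMessage="System.Connectors.Test.Monitor.AlertMessage">
    <AlertOnState>Error</AlertOnState>
    <AutoResolve>true</AutoResolve>
    <AlertSeverity>Error</AlertSeverity>
  </AlertSettings>
  <OperationalStates>
    <OperationalState ID="Green" MonitorTypeStateID="GreenEvent" HealthState="Success" />
    <OperationalState ID="Red" MonitorTypeStateID="RedEvent" HealthState="Error" />
  </OperationalStates>
  <Configuration>
    <RedEventId>1</RedEventId>
    <GreenEventId>2</GreenEventId>
  </Configuration>
</UnitMonitor>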

This monitor uses the targeted data source and thus will create an instance of this monitor per discovered object. We are working on a similar solution to the generic alert processing rules for monitors and it will be available in RTM, it's just not available yet.

Now, what doesn't work? Well, everything that uses events should work fine. For performance data, the targeted versions of workflows won't work, but the generic non-targeted ones will. Also, any string fields in the performance data item are truncated by 4 bytes, yay marshalling. Like I said earlier, these issues have been resolved in the latest builds.  

Here is some sample code to drive the example management pack:

using System;
using System.Collections.ObjectModel;
using Microsoft.EnterpriseManagement;
using Microsoft.EnterpriseManagement.Configuration;
using Microsoft.EnterpriseManagement.Monitoring;

namespace Jakub_WorkSamples
{
    partial class Program
    {
        static void DriveSystemConnectorLibraryTestManagementPack()
        {
            // Connect to the sdk service on the local machine
            ManagementGroup localManagementGroup = new ManagementGroup("localhost");

            // Get the MonitoringClass representing a Computer
            MonitoringClass computerClass =
                localManagementGroup.GetMonitoringClass(SystemMonitoringClass.Computer);

            // Use the class to retrieve partial monitoring objects
            ReadOnlyCollection<PartialMonitoringObject> computerObjects =
                localManagementGroup.GetPartialMonitoringObjects(computerClass);

            // Loop through each computer
            foreach (PartialMonitoringObject computer in computerObjects)
            {
                // Create the perf item (this will generate alerts from
                // System.Connectors.Test.AlertOnThreshold and
                // System.Connectors.Test.AlertOnThresholdForComputer)
                CustomMonitoringPerformanceData perfData =
                    new CustomMonitoringPerformanceData("MyObject", "MyCounter", 40);
                // Allows you to set the instance name of the item.
                perfData.InstanceName = computer.DisplayName;
                // Allows you to specify a time that the data was sampled.
                perfData.TimeSampled = DateTime.UtcNow.AddDays(-1);
                computer.InsertCustomMonitoringPerformanceData(perfData);

                // Create a red event (this will generate alerts from
                // System.Connectors.Test.AlertOnEvent,
                // System.Connectors.Test.AlertOnEventForComputer and
                // System.Connectors.Test.Monitor
                // and make the state of the computer for this monitor go red)
                CustomMonitoringEvent redEvent =
                    new CustomMonitoringEvent("My publisher", 1);
                redEvent.EventData = "<Data>Some data</Data>";
                computer.InsertCustomMonitoringEvent(redEvent);

                // Wait for the event to be processed
                System.Threading.Thread.Sleep(30000);

                // Create a green event (this will resolve the alert
                // from System.Connectors.Test.Monitor and make the state
                // go green)
                CustomMonitoringEvent greenEvent =
                    new CustomMonitoringEvent("My publisher", 2);
                greenEvent.EventData = "<Data>Some data</Data>";
                computer.InsertCustomMonitoringEvent(greenEvent);
            }
        }
    }
}

 

System.Connectors.Library.Test.xml

Comments

  • Anonymous
    October 11, 2006
    The comment has been removed

  • Anonymous
    October 12, 2006
    Yeah, localized alert descriptions were not supported in Beta 2, but instead the alert name and description were directly part of the configuration. You can try removing these references, or most preferably, move to a more recent RC0 build.

  • Anonymous
    October 12, 2006
    Alright, I will wait for the RC then. I guess it should be available to the public by the end of this month from the Connect website, right?

  • Anonymous
    October 13, 2006
    Yes, we are working on an RC1 right now. Should be available relatively soon, although I am not 100% sure of the date.

  • Anonymous
    November 21, 2006
    I wanted to go through and outline some of the changes we made for MCF since our last release. The things

  • Anonymous
    November 26, 2006
    The comment has been removed

  • Anonymous
    November 26, 2006
    I also noticed that you create a new event for every performance data insert. Won't that create a lot of events? Isn't that bad for the system's performance?

  • Anonymous
    November 27, 2006

  1. and 2. - These are hardcoded values that normally would not be "public" but need to be to allow for the added functionality I talked about.
  3. I am not entirely sure I understand your question, but if I do, then yes, you just need to match on the counter name in a condition expression filter. In terms of creating one CustomMonitoringPerformanceData for every insert, this is the only way to do it and, as always, performance should be a consideration, but regarding this, probably not a concern. What kind of scale are you looking for?
  • Anonymous
    November 27, 2006
    The comment has been removed

  • Anonymous
    November 28, 2006
    That should be fine in terms of scale. The performance data is deleted based on your grooming settings and if you want to archive it, you need to move it to reporting.

  • Anonymous
    November 28, 2006
    If possible, can you explain in detail where to set the grooming settings and how to move the data to reporting?

  • Anonymous
    November 28, 2006
    How about clearing the alerts, is it the same way also ?

  • Anonymous
    November 29, 2006
    Reporting is not my area so I don't know much about that; I would suggest reading through our docs and if that does not suffice, posting to the beta newsgroups. Regarding grooming, yes, alerts are the same. The settings for this can be changed via the UI in Administration -> Settings -> Database Grooming.

  • Anonymous
    November 29, 2006
    The comment has been removed

  • Anonymous
    November 29, 2006
    Operational data does not get removed when classes get removed. You can do this from the database: DELETE FROM dbo.Alert This will delete ALL alerts. Do this at your own risk, I take no responsibility for damage caused by editing the database directly.

  • Anonymous
    November 29, 2006
    Thanks a lot for your help. I finally managed to remove those alerts. There were almost 50,000 of them, no wonder the UI crashed. Now everything is back to normal.

  • Anonymous
    December 04, 2006
    I noticed that sometimes when I insert a lot of performance data I can immediately see the performance view graph, but other times I need to wait too long or even have to restart the service. Then I noticed there is a table called PendingSDKDataSource, and all my inserts are queued up there. Is there any way I can flush them to SCOM for faster processing? Or maybe I would understand better if you could explain a bit about how this actually works.

  • Anonymous
    December 05, 2006
    The data in that table is picked up periodically by a workflow and inserted into the runtime. You cannot speed this process up. It can be picked up virtually instantly, or be delayed by as much as a minute.

  • Anonymous
    February 21, 2007
    I've written a management pack that's essentially a simplified version of the example MP you've given in this post.  I've also written a connector that loads the MP on startup, and then stuffs in some CustomMonitoringEvents. Everything gets converted to an alert.  Problem is that every time I run the connector, and it loads the MP, I get a new alert for EVERY SDK event that I've pushed into the data base before.  Thus, if I've pushed 50 events previously, I automatically get 50 new alerts when I reload the MP.  I only want it to convert NEW events to alerts, not existing ones. (Reposted from e-mail so the answer gets to everybody)

  • Anonymous
    February 21, 2007
    Yes, this behavior is quite awkward. I have filed a bug for this in the SP1 milestone (it won't make it into RTM). The underlying problem is that the modules that read this data are independent of each other, and thus when one module reads it, it can't be deleted immediately as other instances of the module may want to pick it up. I am not sure yet of the best way to solve this, but one workaround you can do now is to be extremely aggressive in grooming the table (PendingSdkDataSource). There isn't something in the product that you can tweak, but perhaps a SQL job that deletes old values would be appropriate? I am sorry I don't have a better solution at this point. For your particular architecture, you could check to see if the MP is already loaded, and not reload it if it's there.

  • Anonymous
    February 21, 2007
    Thanks, Jakub - removing all rows from PendingSdkDataSource did indeed fix the problem.  I guess I had been thinking of the MP as something I should load when my connector loads.  It seems that it's really something that should be there all the time, ready to accept events from my connector if it happens to load.   This actually makes my life a bit easier, as I'm free to use the web service to push events in, keeping my product connector local to my app that's pushing events in.  There's not a lot of info regarding the format of messages to be sent via the web service - is it a particular WS-* protocol?  (The event source is not running on Windows).  Worst case, if there's no info out there, I'll just set up a C# proxy as described in the SDK docs and observe the messages via Ethereal.

  • Anonymous
    February 21, 2007
    The WS-* implementation is WCF and it uses the wsHttpBinding. I am not sure what non-windows support there is yet for this. This can probably be configured to use the basicHttpBinding by changing the SDK service configuration file. This would, in theory, give you a standard asmx endpoint to communicate with, however, I have not tried this and don't know if it will work.

  • Anonymous
    February 22, 2007
    Hrm - well if need be, Ethereal will serve well. Is there a timeout after which a product connector will be disconnected when there is no activity?  I had previously set up my connector to poll every 5 seconds for alert updates, and never had problems.  I've taken that (unneeded) functionality out, and now, if I'm idle for more than a few minutes, I get a ServerDisconnectedException that asks me to reconnect.  I can catch the exception and Reconnect(), but until I do so, any of the data  inserted by my connector (discovery, event, alert) is inaccessible within the console.  I don't see any sort of heartbeat method

  • Anonymous
    February 22, 2007
    Using the web-service endpoint? I am actually not too sure what the default is (probably 20 minutes), but it should be configurable. The config file for the WCF endpoint should allow you to change any of these configuration options. The WCF SDK (.NET Framework 3.0 SDK) ships with a GUI tool that makes it easier to edit the configuration file. This should let you set any parameters you want to customize. We didn't really do too much testing for long-idle clients, as our experience has been clients asking for new data every 30 seconds or so.

  • Anonymous
    February 22, 2007
    Sorry - my bad, didn't give proper context.  I actually mean with a local SDK client/product connector running right on the SCOM server.  This is an exception thrown when I try to call a MCF function - for example, IncrementalMonitoringDiscoveryData.Commit().  

  • Anonymous
    February 22, 2007
    Ah - on that channel the inactivity timeout is set to 60 minutes. Is this not the behavior you are seeing? Can you look at the inner exception and tell me what it is?

  • Anonymous
    February 23, 2007
    I'm seeing it considerably faster than 60 minutes.  I'll have to write a quick proggy to determine how long it actually is...  Anyways - it's good to know that some activity is required within a certain window. The full exception is as follows, unfortunately it'll format horribly as a blog comment... Unhandled Exception: Microsoft.EnterpriseManagement.Common.ServerDisconnectedException: The client has been disconnected from the server. Please call ManagementGroup.Reconnect() to reestablish the connection. ---> System.ServiceModel.CommunicationObjectFaultedException: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Server stack trace:   at System.ServiceModel.Channels.CommunicationObject.ThrowIfDisposedOrNotOpen()   at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout)   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) Exception rethrown at [0]:   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)   at Microsoft.EnterpriseManagement.Common.IAdministrationDataAccess.ProcessDiscoveryData(Int32 operation, Guid discoverySourceId, IList1 entityInstances, IList1 relationshipInstances)   at Microsoft.EnterpriseManagement.DataAbstractionLayer.AdministrationOperations.ProcessDiscoveryData(Int32 operation, Guid discoverySourceId, IList1 entityInstances, IList1 relationshipInstances)   --- End of inner exception stack trace ---   at Microsoft.EnterpriseManagement.DataAbstractionLayer.SdkDataAbstractionLayer.HandleIndigoExceptions(Exception ex)   at Microsoft.EnterpriseManagement.DataAbstractionLayer.AdministrationOperations.ProcessDiscoveryData(Int32 operation, Guid discoverySourceId, IList1 entityInstances, IList1 relationshipInstances)   at Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData.Commit(ManagementGroup managementGroup, Guid discoverySourceId)   at Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData.Commit(MonitoringConnector monitoringConnector)   at ConnectorTest.SCOMConnector.CreateWindowsComputer(String principalName)   at ConnectorTest.SCOMConnector.InsertCustomEvent(String systemName, Int32 eventId, String description, String severity, String category, String action, String selector, String replyIP, String replyPort)   at ConnectorTest.NotificationListener.ParseNotification(XmlDocument doc)   at ConnectorTest.NotificationListener.HttpListenerCallback(IAsyncResult result)   at System.Net.LazyAsyncResult.Complete(IntPtr userToken)   at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)   at System.Net.ListenerAsyncResult.WaitCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)   at System.Threading._IOCompletionCallback.IOCompletionCallback_Context(Object state)   at System.Threading.ExecutionContext.runTryCode(Object userData)   at 
System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)   at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)

  • Anonymous
    February 23, 2007
    Alright - got a program running that inserts discovery data while sleeping in incrementing intervals...  Perhaps it was merely my impression that it was less than 60 minutes - this'll say for sure, but will take a loooong time to get to 60 minutes. ;)

  • Anonymous
    February 23, 2007
    Let me know what you find out.

  • Anonymous
    February 23, 2007
    That took a while...  I was disconnected from the server once I reached 1800 seconds (30 min) between calls to the SDK - guess the timeout is 30 minutes, locally! (My program inserted a WindowsComputer, removed it, and then slept for 2 minutes.  It repeated this, adding 2 minutes to the sleep time each loop).  

  • Anonymous
    February 23, 2007
    The send and receive timeouts on the channel are both 30 minutes, but it's weird that the inactivity timeout is not being honored. I have read other reports of this being the case with WCF in terms of inactivity. So just make sure you make a call at least every 29 minutes =)

  • Anonymous
    February 23, 2007
    I'm not using WCF - this is all local.   Anyways, periodic calls to Reconnect() work wonders, and no matter what value that timeout is, they'll be required.   Thanks for your help, once again!

  • Anonymous
    February 23, 2007
    Local uses WCF also.

  • Anonymous
    April 03, 2007
    The comment has been removed

  • Anonymous
    April 03, 2007
    Sorry about that. I've updated the MP and it should import now.

  • Anonymous
    April 25, 2007
    What is the maximum time before the event is processed and an alert is generated? Sometimes the alerts arrive shortly after, and other times nothing appears whatsoever. If we wait for an hour, reboot the OpsMgr server, and then look at the alerts, they will appear, but not always.

  • Anonymous
    April 25, 2007
    It should not take longer than a minute or so, assuming there were no errors. Restarting the service and having the alert then appear may be masking some sort of issue. Are there any errors in the event log?

  • Anonymous
    May 23, 2007
    I used the MCF to insert the discovered objects, and then used the above code to insert a CustomMonitoringEvent to change the HealthState of the object to Error. It works great. However, when I cleaned up the connector and ran the program to insert objects again, I noticed the object's health state immediately changed to Error because it picked up the event inserted previously. This doesn't seem right. How do I fix it? I'm not using the TwoStateMonitorType as specified in your MP; I slightly modified it to use a manual reset. The following is my MonitorType:

    <UnitMonitorType ID="System.Connectors.ManualResetMonitorType" Accessibility="Public">
      <MonitorTypeStates>
        <MonitorTypeState ID="RedEvent" NoDetection="false" />
        <MonitorTypeState ID="ManualResetEventRaised" NoDetection="true" />
      </MonitorTypeStates>
      <Configuration>
        <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="RedEventId" type="xsd:int" minOccurs="1" maxOccurs="1" />
      </Configuration>
      <MonitorImplementation>
        <MemberModules>
          <DataSource ID="DS" TypeID="SCLibrary!Microsoft.SystemCenter.TargetEntitySdkEventProvider" />
          <ConditionDetection ID="CDRedEvent" TypeID="System!System.ExpressionFilter">
            <Expression>
              <SimpleExpression>
                <ValueExpression>
                  <XPathQuery>EventNumber</XPathQuery>
                </ValueExpression>
                <Operator>Equal</Operator>
                <ValueExpression>
                  <Value>$Config/RedEventId$</Value>
                </ValueExpression>
              </SimpleExpression>
            </Expression>
          </ConditionDetection>
        </MemberModules>
        <RegularDetections>
          <RegularDetection MonitorTypeStateID="RedEvent">
            <Node ID="CDRedEvent">
              <Node ID="DS" />
            </Node>
          </RegularDetection>
        </RegularDetections>
      </MonitorImplementation>
    </UnitMonitorType>

  • Anonymous
    May 23, 2007
    This is actually a known bug that is fixed for SP1. The only workaround is to either not delete your discovered objects (so that the monitors don't get reinitialized) or to manually purge the PendingSdkDataSource table in the DB.

  • Anonymous
    May 31, 2007
    I imported the MP successfully. When I run the program, I get all the MonitoringObjects, the inserted events, and the inserted performance data, all successfully, but I don't see any alert. What's wrong? After importing the MP, is there anything else to do?

  • Anonymous
    June 02, 2007
    Where are you looking for the alert? If you ran the whole program, the alert will be resolved and not in the Active Alerts view. You can create a new alert view that does not filter out resolved alerts, and it should be there.

  • Anonymous
    June 03, 2007
    Thanks Jakub, and another question: how do I configure the severity of the alert? Can the severity change with the event LevelId, and where should I set that?

  • Anonymous
    June 04, 2007
    The severity and priority are both set via the configuration to the rule/monitor. If you want to use level id to set it, in this release you need to have different rules/monitors to set the priority and severity to fixed values after filtering by level id.

  • Anonymous
    June 13, 2007
    How does the computer state change? In MOM 2005, it could change with the alert severity: when I inserted a warning alert, the computer state became warning, and when I resolved the alert, it became success. Does the computer state change with alert severity in SCOM 2007?

  • Anonymous
    June 14, 2007
    In 2005 state was driven by alerts; in 2007 the opposite is true. The last large paragraph of this post may help a bit: http://blogs.msdn.com/jakuboleksy/archive/2006/10/26/quot-how-stuff-works-quot.aspx

  • Anonymous
    June 14, 2007
    Thanks Jakub. If I wanted the 2007 state to be driven by alert severity, how would I do that? Is it possible? Can you give me a sample?

  • Anonymous
    June 14, 2007
    You can't. What you would do is replicate whatever condition is causing the alert and create a monitor that changes state based on it. Subsequently, you could create an alert from that monitor.

  • Anonymous
    August 01, 2007
    Hi Jakub, I have a question. How do I insert a NetworkDevice into OpsMgr? This is my code, but I can never get the properties:

    List<CustomMonitoringObject> MyAdd = new List<CustomMonitoringObject>();
    MonitoringClass MyComputer = localManagementGroup.GetMonitoringClass(SystemMonitoringClass.NetworkDevice);
    MonitoringClassProperty pathname = (MonitoringClassProperty)MyComputer.PropertyCollection["PathName"];
    MonitoringClassProperty ip = (MonitoringClassProperty)MyComputer.PropertyCollection["IPAddress"];
    MonitoringClassProperty name = (MonitoringClassProperty)MyComputer.PropertyCollection["Name"];
    CustomMonitoringObject AddedComputer = new CustomMonitoringObject(MyComputer);
    AddedComputer.SetMonitoringPropertyValue(pathname, "myPathname");
    AddedComputer.SetMonitoringPropertyValue(ip, "192.168.0.1");
    AddedComputer.SetMonitoringPropertyValue(name, "myName");
    MyAdd.Add(AddedComputer);
    ComputerHealthService.InsertRemotelyManagedDevices(MyAdd);

    It always says there is no property ["PathName"]/["IPAddress"]/["Name"].

  • Anonymous
    August 01, 2007
    It shows "Cannot find ManagementPackSubElement whit [ID=PathName]in this collection, what's wrong?  

  • Anonymous
    August 02, 2007
    First, you can't insert NetworkDevice because it is abstract. You need to insert a non-abstract class. Second, none of those properties exist on NetworkDevice. Does this make sense? You should take a look at my discovery data insertion post: http://blogs.msdn.com/jakuboleksy/archive/2006/11/07/inserting-operational-data-2.aspx

  • Anonymous
    August 05, 2007
    Thanks Jakub, I found the error. SystemMonitoringClass.NetworkDevice is wrong; it should be SystemMonitoringClass.SystemCenterNetworkDevice.

  • Anonymous
    August 15, 2007
    Hi, another question: how do I set the community string property of SystemCenterNetworkDevice? It is encrypted; the string "public" becomes "cAB1AGIAbABpAGMA". Is there a function in the SDK to encrypt it? What is it?

  • Anonymous
    August 16, 2007
    It should be Base64 encoding.

  • Anonymous
    March 27, 2008
    I'm replying to your post because I didn't find anything else on the Internet. I'm trying to use the community string as a parameter in my SCOM script, but what I get is the encrypted form. I tried to use Base64 decoders and it is not the correct algorithm. Do you have, by any chance, some other idea as to how to decode it? Thanks a lot.

  • Anonymous
    March 28, 2008
    Are you just passing it in in clear text or are you using the run as accounts functionality? Where is the community string coming from?

  • Anonymous
    August 19, 2008
    Jakub, I need a bit more clarification on your native module implementations.  It appears that these are some sort of COM based classes.  Regardless, it is very unclear where these classes exist and I don't seem to have those GUIDs registered.  Am I missing something here?

  • Anonymous
    August 19, 2008
    I don't have any native modules.

  • Anonymous
    August 19, 2008
    I found it.  It is the "System.Mom.BackwardCompatibility.InternalAlertMapper".  And BTW, I really appreciate the info you have been providing.

  • Anonymous
    August 19, 2008
    Cool. That one is not one of mine. Are you having problems with it?

  • Anonymous
    August 20, 2008
    I am not having problems with your library. I am just trying to understand the inner workings of opsmgr better.   The issue that we are having is that we use NetQOS/NetVoyant for network device monitoring.  Currently NetVoyant sends SNMP traps to our MOM server, and the traps are then converted to alerts and forwarded to Remedy.  This is done based on some custom scripting and based on the Engyro connector.  We are able to generate alerts representing the router/switch rather than the NetVoyant appliance (SNMP sender) because we can script all the alert parameters and MOM auto-magically creates the 'unmanaged' device representing the router/switch upon submission of the alert.   Things, as you know, aren't quite so easy in OpsMgr.  With ideas from your library (and others), I can (somewhat) recreate this functionality via the SDK, but I am still investigating if it can be done without resorting to an external program.  If you have any thoughts, I would appreciate you sharing them.

  • Anonymous
    August 22, 2008
    I would recommend posting your question to the opsmgr newsgroups, specifically managementpacks or authoring. I am pretty sure you can do the same without a custom application in 2007, but I would not be the right resource for the question.

  • Anonymous
    March 16, 2011
    I successfully imported your management pack, but I did not get any alerts from it. Can you help me?

  • Anonymous
    March 16, 2011
    Hi everyone, I want to know how to create a custom alert in SCOM through a management pack. How do I use rules, and how do I troubleshoot whether they are working properly or not? I'm new to SCOM, so kindly help me with sample code (a management pack (XML) and C#) and the related steps for creating alerts. I also want to know how to access a management pack from C# code.

  • Anonymous
    March 17, 2011
    satheesh - please check out the OpsMgr newsgroups for your general questions. Also, MSDN coupled with this blog are good sources of information. In terms of the alerts not being generated, did you run the code too? The MP doesn't generate alerts without executing the code.