• Jakub@Work

    Creating and Updating Groups

    • 22 Comments

    I've received several questions over the past few days about how to create and update groups in the SDK. Below is a commented sample of how to do it.

    What InsertCustomMonitoringObjectGroup does is create a new singleton class that derives from Microsoft.SystemCenter.InstanceGroup. It also creates a rule that populates the group, which essentially defines the formula and the inclusion and exclusion lists for the group. Hopefully the sample and comments in the code below explain this in more detail.

    Note: A quick apology on the formatting of the strings for formulas below. I found that when I had extra spaces, the schema validation failed and caused some exceptions to be thrown. This way, a direct cut and paste should work.

    Edit: If you receive an ArgumentException for the references collection, it means that your default management pack already has a reference to the given management pack. When this happens, replace the alias I used with the alias that already exists in the default management pack and don't add it to the newReferences collection.
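    As a rough, hedged sketch of that workaround: the snippet below checks whether the default management pack already references the Windows management pack before adding a new reference, and reuses the existing alias if it finds one. It reuses the defaultManagementPack and windowsManagementPack variables from the sample below and assumes the References collection can be enumerated as alias/reference pairs where each reference exposes the referenced pack's Name; the exact member names may differ in the shipped SDK, so treat this as illustrative only.

    // Hedged sketch: reuse an existing alias for the Windows management pack if
    // the default management pack already references it; otherwise add one.
    // Assumes a using System.Collections.Generic; directive for KeyValuePair and
    // that References enumerates as KeyValuePair<string, ManagementPackReference>
    // with ManagementPackReference exposing the referenced pack's Name.
    string windowsAlias = null;
    foreach (KeyValuePair<string, ManagementPackReference> reference in
        defaultManagementPack.References)
    {
        if (reference.Value.Name == windowsManagementPack.Name)
        {
            windowsAlias = reference.Key;   // reuse the alias already in the MP
            break;
        }
    }

    ManagementPackReferenceCollection references = new ManagementPackReferenceCollection();
    if (windowsAlias == null)
    {
        windowsAlias = "Windows";
        references.Add(windowsAlias, windowsManagementPack);
    }

    // The membership rule formula should then use windowsAlias when building the
    // $MPElement[Name="<alias>!Microsoft.Windows.Computer"]$ reference.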

    using System;

    using Microsoft.EnterpriseManagement;

    using Microsoft.EnterpriseManagement.Configuration;

    using Microsoft.EnterpriseManagement.ConnectorFramework;

    using Microsoft.EnterpriseManagement.Monitoring;

     

    namespace Jakub_WorkSamples

    {

        partial class Program

        {

            static void InsertAndUpdatingGroups()

            {

                // Connect to the sdk service on the local machine

                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

     

                // Create the formula

                // There can be multiple membership rules, no need for a root tag.

                // The monitoring class is the class of instance you want in the group.

                // The Relationship class is a relationship whose source type needs to

                //      be in the base class hierarchy of the class of your group (in this

                //      case we actually create a new class using the insert method on

                //      ManagementPack and this class will derive from InstanceGroup

                //      which is the source of InstanceGroupContainsEntities) and the target

                //      class needs to be in the base class hierarchy of the

                //      aforementioned MonitoringClass.

                //  You can also have an include and exclude list of specific entities that

                //      also must match the relationship class and monitoring class criteria.

                //      So in the example below, you could only include or exclude instances

                //      that derive from Microsoft.Windows.Computer.

                //      These look like this: (if both are specified the include list is first)

                //      <IncludeList>

                //        <MonitoringObjectId>f5E5F15E-3D7D-1839-F4C6-13E36BCD982a

                //        </MonitoringObjectId>

                //      </IncludeList>

                //      <ExcludeList>

                //        <MonitoringObjectId>f5E5F15E-3D7D-1839-F4C6-13E36BCD982a

                //        </MonitoringObjectId>

                //      </ExcludeList>

                //  Prior to the include list, you can also add an expression to include

                //  instances by. The element is defined as Expression and the schema for

                //  it (as well as this entire MembershipRule schema) is defined in the

                //  Microsoft.SystemCenter.Library management pack.

                //      An example of an expression:

                //      <Expression>

                //          <SimpleExpression>

                //           <ValueExpression>

                //             <Property>

                //              $MPElement[Name="Microsoft.SystemCenter.HealthService"]/IsRHS$

                //              </Property>

                //           </ValueExpression>

                //           <Operator>Equal</Operator>

                //           <ValueExpression>

                //              <Value>False</Value>

                //           </ValueExpression>

                //          </SimpleExpression>

                //      </Expression>

                //      This expression can reference properties of the class of the membership

                //      rule and in this case would include any health services that are not

                //      the root health service.

                //      Note: this example doesn't work with the rule I have below; it is simply

                //      for illustrative purposes. I would need to filter by a

                //      Microsoft.Windows.Computer property in order to use it below.

     

                string formula =

                    @"<MembershipRule>

                    <MonitoringClass>$MPElement[Name=""Windows!Microsoft.Windows.Computer""]$</MonitoringClass>

                    <RelationshipClass>$MPElement[Name=""InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities""]$</RelationshipClass>

                    </MembershipRule>";

     

                // Create the custom monitoring object group

                CustomMonitoringObjectGroup allComputersGroup =

                    new CustomMonitoringObjectGroup("Jakub.At.Work.Namespace",

                    "AllComputerGroup",

                    "Jakub@Work Sample All Computers Group",

                    formula);

     

                // Get the default management pack.

                ManagementPack defaultManagementPack =

                    localManagementGroup.GetManagementPacks(

                    "Microsoft.SystemCenter.OperationsManager.DefaultUser")[0];

     

                // Get the management packs for references

                ManagementPack windowsManagementPack = localManagementGroup.

                    GetManagementPack(SystemManagementPack.Windows);

                ManagementPack instanceGroupManagementPack = localManagementGroup.

                    GetManagementPack(SystemManagementPack.Group);

                ManagementPackReferenceCollection newReferences =

                    new ManagementPackReferenceCollection();

                newReferences.Add("Windows", windowsManagementPack);

                newReferences.Add("InstanceGroup", instanceGroupManagementPack);

     

                defaultManagementPack.InsertCustomMonitoringObjectGroup(allComputersGroup,

                    newReferences);

     

                // Get the class that represents my new group

                MonitoringClass myNewGroup = localManagementGroup.

                    GetMonitoringClasses("Jakub.At.Work.Namespace.AllComputerGroup")[0];

     

                // Get the discovery rule that populates this group

                // For the purposes of this sample, I know there is only 1 in the template

                MonitoringDiscovery groupPopulateDiscovery = myNewGroup.

                    GetMonitoringDiscoveries()[0];

     

                // This is the full configuration of the discovery of which the

                // membership rule is one part that you can configure and update

                Console.WriteLine("The discovery configuration: {0}",

                    groupPopulateDiscovery.DataSource.Configuration);

     

                // Update the configuration in some fashion

                string newConfiguration =

                    @"<RuleId>$MPElement$</RuleId>

                    <GroupInstanceId>$MPElement[Name=""Jakub.At.Work.Namespace.AllComputerGroup""]$</GroupInstanceId>

                    <MembershipRules>

                        <MembershipRule>

                    <MonitoringClass>$MPElement[Name=""Windows!Microsoft.Windows.Computer""]$</MonitoringClass>

                    <RelationshipClass>$MPElement[Name=""InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities""]$</RelationshipClass>

                    <ExcludeList>

                        <MonitoringObjectId>f5E5F15E-3D7D-1839-F4C6-13E36BCD982a</MonitoringObjectId>

                    </ExcludeList>

                    </MembershipRule>

                    </MembershipRules>";

     

                // Now we want to update the group membership for this group

                groupPopulateDiscovery.Status = ManagementPackElementStatus.PendingUpdate;

                groupPopulateDiscovery.DataSource.Configuration = newConfiguration;

                groupPopulateDiscovery.GetManagementPack().AcceptChanges();

            }

        }

    }

     

  • Jakub@Work

    Command Shell

    • 0 Comments

    A co-worker of mine has started a blog on the SCOM Power Shell. The Power Shell is built on top of the SDK and provides command-line functionality for many common administration tasks and much more. I've added the link to the list of SCOM blogs and also pasted it here for reference:

    System Center Operations Manager Command Shell

  • Jakub@Work

    Inserting Discovery Data

    • 28 Comments

    We've gone over how to drive state and insert operational data for existing entities, but how do you insert your own objects into the system? That's what this post will briefly touch on, along with sample code (below) and a management pack (attached) to use with the code.

    Discovery data insertion via the SDK revolves around connectors as discovery sources. In order to insert data, you first need to create a connector with the system that all the data you insert will be associated with. This allows us to control the lifetime of the discovery data as a function of the lifetime of the connector.

    Once the connector is set up, you can use one of two modes for insertion: Snapshot or Incremental. Snapshot discovery indicates to the system that for this particular connector (read: discovery source), this is the definitive snapshot of everything it has discovered. It will essentially delete anything that was previously discovered and treat this snapshot as authoritative. Incremental, as the name would indicate, simply merges the existing discovery data with the discovery information provided in the incremental update. This can include additions as well as deletions.
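    For instance, a minimal sketch of an incremental update that removes a previously discovered instance might look like the following; it reuses the connector and instance variables from the full sample below and assumes IncrementalMonitoringDiscoveryData exposes a Remove method that mirrors the Add call shown there.

    // Hedged sketch: incrementally remove one previously discovered instance.
    // 'connector' and 'sampleClass1HostedByComputerInstance' come from the full
    // sample below; Remove is assumed to mirror the Add method used there.
    IncrementalMonitoringDiscoveryData incrementalRemove =
        new IncrementalMonitoringDiscoveryData();
    incrementalRemove.Remove(sampleClass1HostedByComputerInstance);
    incrementalRemove.Commit(connector);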

    Users can insert CustomMonitoringObjects and CustomMonitoringRelationshipObjects which, once inserted, map to MonitoringObjects and MonitoringRelationshipObjects. In order to insert either, you have to provide, at a minimum, the key values for objects, and the source and target for relationships. When dealing with a hosting relationship, the key values of the host must also be populated as part of the CustomMonitoringObject and no explicit CustomMonitoringRelationshipObject needs to be created. The example below should guide you through this.

    A quick discussion on managed vs. unmanaged instances. Our system will only run workflows against instances that are managed. The discovery process I talked about in the last post will insert "managed" data. Top-level instances (computers, for instance) are inserted via the install agent APIs in the SDK and result in managed computers. It is also possible for rules to insert discovery data; however, this data will not be managed unless hosted by a managed instance.

    In order to be able to target workflows to your newly created instances and have them actually run, DiscoveryDataIsManaged needs to be set to true on the ConnectorInfo object when creating the connector. Alternatively, if you insert an instance as hosted by a managed instance, that instance will also be managed. For the former case, all workflows would run on the primary management server, while the latter would have them all running on the health service that manages the host. If something is not managed, you can still insert events and performance data about it, although the workflow that collects these will need to be targeted against something other than the class of the instance. State change information would not be available for non-managed instances.
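    Here is a minimal, hedged sketch of creating a connector whose discovery data is managed, as described above; it assumes DiscoveryDataIsManaged is a settable boolean property on ConnectorInfo and reuses the connectorFrameworkAdministration object from the sample below.

    // Hedged sketch: create a connector whose discovery data will be managed,
    // so workflows targeted at the inserted instances can actually run (on the
    // primary management server). DiscoveryDataIsManaged is assumed to be a
    // settable boolean property on ConnectorInfo, as described in the text.
    ConnectorInfo managedInfo = new ConnectorInfo();
    managedInfo.Name = "TestManagedConnector";
    managedInfo.DisplayName = "Test connector with managed discovery data";
    managedInfo.DiscoveryDataIsManaged = true;

    MonitoringConnector managedConnector =
        connectorFrameworkAdministration.Setup(managedInfo);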

    using System;

    using Microsoft.EnterpriseManagement;

    using Microsoft.EnterpriseManagement.Configuration;

    using Microsoft.EnterpriseManagement.ConnectorFramework;

    using Microsoft.EnterpriseManagement.Monitoring;

     

    namespace Jakub_WorkSamples

    {

        partial class Program

        {

            static void InsertDiscoveryData()

            {

                // Connect to the sdk service

                ManagementGroup localManagementGroup = new ManagementGroup("jakubo-test");

     

                // Get the connector framework administration object

                ConnectorFrameworkAdministration connectorFrameworkAdministration =

                    localManagementGroup.GetConnectorFrameworkAdministration();

     

                // Create a connector

                ConnectorInfo info = new ConnectorInfo();

                info.Name = "TestConnector";

                info.DisplayName = "Test connector for discovery data";

                MonitoringConnector connector = connectorFrameworkAdministration.Setup(info);

     

                // First create an instance of SampleClass1HostedByComputer and

                // SampleClass2HostedBySampleClass1   

                // Find a computer

                MonitoringObject computer = localManagementGroup.GetMonitoringObjects(

                    localManagementGroup.GetMonitoringClass(

                    SystemMonitoringClass.WindowsComputer))[0];

     

                // Get the SampleClass1HostedByComputer class

                MonitoringClass sampleClass1HostedByComputer =

                    localManagementGroup.GetMonitoringClasses(

                    "SampleClass1HostedByComputer")[0];

     

                // Get the SampleClass2HostedBySampleClass1 class

                MonitoringClass sampleClass2HostedBysampleClass1 =

                    localManagementGroup.GetMonitoringClasses(

                    "SampleClass2HostedBySampleClass1")[0];

     

                // Get the key properties for each

                MonitoringClassProperty keyPropertyForSampleClass1 =

                    (MonitoringClassProperty)sampleClass1HostedByComputer.

                    PropertyCollection["KeyProperty1"];

                MonitoringClassProperty keyPropertyForSampleClass2 =

                    (MonitoringClassProperty)sampleClass2HostedBysampleClass1.

                    PropertyCollection["KeyProperty1SecondClass"];

     

                // Create the CustomMonitoringObjects to represent the new instances

                CustomMonitoringObject sampleClass1HostedByComputerInstance =

                    new CustomMonitoringObject(sampleClass1HostedByComputer);

                CustomMonitoringObject sampleClass2HostedBysampleClass1Instance =

                    new CustomMonitoringObject(sampleClass2HostedBysampleClass1);

     

                // Set the key property value for the first instance

                sampleClass1HostedByComputerInstance.SetMonitoringPropertyValue(

                    keyPropertyForSampleClass1, "MySampleInstance1");

     

                // Set the key property values for the second instance, which includes the

                // key property values of the host in order to populate the hosting relationship

                sampleClass2HostedBysampleClass1Instance.SetMonitoringPropertyValue(

                    keyPropertyForSampleClass1, "MySampleInstance1");

                sampleClass2HostedBysampleClass1Instance.SetMonitoringPropertyValue(

                    keyPropertyForSampleClass2, "MySampleInstance2");

     

                // In order to populate the hosting relationship, you need to also set

                // the key properties of the host. This will automatically create the hosting

                // relationship and is in fact the only way to create one programmatically.

                foreach (MonitoringClassProperty property in computer.GetMonitoringProperties())

                {

                    if (property.Key)

                    {

                        sampleClass1HostedByComputerInstance.SetMonitoringPropertyValue(

                            property, computer.GetMonitoringPropertyValue(property));

     

                        // Even though the relationship between

                        // sampleClass1HostedByComputerInstance and the computer is already

                        // defined, we need to add this key property to

                        // sampleClass2HostedBysampleClass1Instance as the entire hosting

                        // hierarchy is what uniquely identifies the instance. Without this,

                        // we wouldn't know "where" this instance exists and where it should be

                        // managed.

                        sampleClass2HostedBysampleClass1Instance.SetMonitoringPropertyValue(

                            property, computer.GetMonitoringPropertyValue(property));

                    }

                }

     

                // Let's insert what we have so far

                // We'll use Snapshot discovery to indicate this is a full snapshot of

                // all the discovery data this connector is aware of.

                SnapshotMonitoringDiscoveryData snapshot = new SnapshotMonitoringDiscoveryData();

                snapshot.Include(sampleClass1HostedByComputerInstance);

                snapshot.Include(sampleClass2HostedBysampleClass1Instance);

                snapshot.Commit(connector);

     

                // Let's retrieve the objects and ensure they were created

                MonitoringObject sampleClass1HostedByComputerMonitoringObject =

                    localManagementGroup.GetMonitoringObject(

                    sampleClass1HostedByComputerInstance.Id.Value);

                MonitoringObject sampleClass2HostedBySampleClass1MonitoringObject =

                    localManagementGroup.GetMonitoringObject(

                    sampleClass2HostedBysampleClass1Instance.Id.Value);

     

                // Now we create a relationship that isn't hosting

                MonitoringRelationshipClass computerContainsSampleClass2 =

                    localManagementGroup.GetMonitoringRelationshipClasses(

                    "ComputerContainsSampleClass2")[0];

     

                // Create the custom relationship object

                CustomMonitoringRelationshipObject customRelationship =

                    new CustomMonitoringRelationshipObject(computerContainsSampleClass2);

     

                // Do an incremental update to add the relationship.

                IncrementalMonitoringDiscoveryData incrementalAdd =

                    new IncrementalMonitoringDiscoveryData();

                customRelationship.SetSource(computer);

                customRelationship.SetTarget(sampleClass2HostedBySampleClass1MonitoringObject);

                incrementalAdd.Add(customRelationship);

                incrementalAdd.Commit(connector);

     

                // Make sure the relationship was inserted

                MonitoringRelationshipObject relationshipObject =

                    localManagementGroup.GetMonitoringRelationshipObject(

                    customRelationship.Id.Value);

     

                // Clean up the connector. This should remove all the discovery data.

                connectorFrameworkAdministration.Cleanup(connector);

            }

        }

    }

     

  • Jakub@Work

    How Stuff Works

    • 15 Comments

    In reply to a comment I received, I wanted to put together a post about how things work in general, mostly with respect to the SDM as implemented in SCOM 2007.

    For those of you familiar with MOM 2005, you'll know that things were very computer centric, which might make the 2007 concepts a bit foreign to you. In trying to bring SCOM closer to a service-oriented management product, the idea that the computer is central has somewhat been removed. I say somewhat, because much of the rest of the world of management has not moved entirely off the computer being of central importance, so we still have to honor that paradigm to make integration with other products easier. One example of this compromise is that on an Alert object you will see the properties NetbiosComputerName, NetbiosDomainName and PrincipalName; these are present for easier integration with ticketing systems that are still largely computer centric.

    With this shift in thinking, we are able to model an enterprise to a much finer level of detail on any individual computer, as well as move above a single computer and aggregate services across machines in a single service-based representation. So what does this mean practically? For one, it means potentially a huge proliferation of discovered objects. Instead of just having a computer with a few roles discovered, we can go so far as to model each processor, each hard drive, every process, etc. on every box as its own object. Now, instead of seeing a computer, you can actually see a more accurate representation of the things that are important to you in your enterprise. More important, however, is that this allows you to define each object's health separately from the health of the objects that depend on it and define how its health affects the health of those things. For instance, just because one of the hard drives on a particular machine is full doesn't necessarily mean the health of the machine is bad. Or maybe it does, and that can be modeled as well.

    As was the case with MOM 2005, at its core SCOM 2007 is driven by management packs. Management packs define the class hierarchy that enables this form of deep discovery, and they define all the tools necessary to manage these objects.

    Let's begin by discussing classes and the class hierarchy. We define a base class that all classes must derive from called System.Entity. Every class ever shipped in any management pack will derive at its core from this class. This class is abstract, meaning that there can never be an object discovered that is just a System.Entity and nothing else. We ship with an extensive abstract class hierarchy that we are still working on tweaking, but it should allow users to plug in their classes somewhere in the hierarchy that makes sense for them. You will be able to define your own abstract hierarchies as well as your own concrete classes. Concrete classes (i.e. non-abstract) are discoverable. Once you define a class as concrete, its key properties (those that define the identity of an instance of that class) cannot be changed. For instance, if you want to specialize System.Computer, you can't define a new key property on it that would change its identity, although you are free to add as many non-key properties as you like. In our system, the values of the key properties for a discovered instance are what uniquely identify that instance. In fact, the unique identifier (Guid) that is used internally to identify these instances is actually built off the key property values. Extending this computer example, if you do extend computer yourself, and someone else does as well, it is possible for a computer to be both of your classes at the same time; however, it will be identified as the same instance. You could imagine that Dell ships a management pack and some of your computers are discovered as both Dell Computers and Windows Computers, both of which would derive from the non-abstract Computer class that defines its key properties. Thus an instance that is discovered as both would always be both in every context. The reason that the class of a discovered instance is important is that targeting of all workflows relies on this, but I'll talk more about this a bit later.
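    To make the key-property idea a bit more concrete, here is a small, hedged sketch that lists which properties of a class are key properties, using the same GetMonitoringClasses, PropertyCollection and Key calls that appear in the discovery data sample above; the class name is just an example and the Name member is an assumption.

    // Hedged sketch: list the properties of a class and flag the key properties
    // that make up an instance's identity. "Microsoft.Windows.Computer" is just
    // an example class name; property.Name is assumed to exist.
    ManagementGroup managementGroup = new ManagementGroup("localhost");
    MonitoringClass windowsComputerClass =
        managementGroup.GetMonitoringClasses("Microsoft.Windows.Computer")[0];

    foreach (object item in windowsComputerClass.PropertyCollection)
    {
        MonitoringClassProperty property = (MonitoringClassProperty)item;
        Console.WriteLine("{0} (key property: {1})", property.Name, property.Key);
    }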

    In addition to discovering individual instances, the system also allows you to define relationships between instances. The relationships also have a class hierarchy that works very similarly to the class hierarchy for individual instances. The most base class here is called System.Reference and all relationship types (classes) will derive from this. There are two abstract relationship types that are of great importance that I wanted to discuss here. First, there is System.Containment, which derives directly from System.Reference. While Reference defines a loose coupling of instances, Containment implies that the source object of the relationship in some way contains the target object. This is very important internally to us, as containment is used to allow things to flow across a hierarchy. For instance, in the UI, alert views that might look at a particular set of instances (say Computers) will also automatically include alerts for anything that is contained on that computer. So a computer's alert view would show alerts for hard drives on that computer as well. This is something that is an option when making the direct SDK calls, but in the UI it is the method that is chosen. Even more important than this is the fact that security scopes flow across containment relationships. If a user is given access to a particular group, they are given access to everything contained in that group by the System.Containment relationship type (or any type that derives from it). An even more specialized version of containment that is also very important is System.Hosting. This indicates a hosting relationship exists between the source and target where the target's identity is dependent upon its host. For instance, a SQL Server is hosted by a computer, since it would not exist outside the context of that computer. Going back to what I said in the previous paragraph about using the key properties of an instance to calculate its unique id, we actually also use the key properties of all its hosts to identify it as well. Taking the SQL Server as an example, I can have hundreds of Instance1 SQL Servers running in my enterprise. Are they all the same? Of course not; they are different based on the computer they are on. That's how we differentiate them. Even in the SDK, when you get MonitoringObjects back, the property values that are populated include not only the immediate properties of the instance, but also the key properties of the host(s).

    All the examples I've mentioned thus far talk about drilling down on an individual computer, but we can also build up. I can define a service as being dependent on many components that span physical boundaries. I can use the class hierarchy to create these new service types and extend the relationship type hierarchy to define the relationships between my service and its components.

    Before we talk about how all these instances get discovered, let's talk about why being an instance of a particular type is important. SCOM actually uses the class information about a discovered instance to determine what should run on its behalf. Management pack objects, such as rules, tasks and monitors, are all authored with a specific class as their target. What this actually means is that the management pack wants the specified workflow to run for every instance of that class that is discovered. If I have a rule that monitors the transaction log for SQL, I want that rule deployed and executed on every machine that has a SQL server discovered. What our configuration service does is determine what rules need to be deployed where, based on the discovered instance space and on where those particular instances are managed (and really, if they are managed, although that is a discussion for another post). So another important attribute of instances is where they are being managed; essentially every discovered instance is managed by some agent in your enterprise, and it's usually the agent on the machine where the instance was discovered. When the agent receives configuration from the configuration service, it instantiates all the workflows necessary for all the instances that it manages. This is when all the discovery rules, rules and monitors will start running. Tasks, Diagnostics and Recoveries are a bit different in that they run on demand, but when they are triggered, they will actually flow to the agent that manages the instance the workflow was launched against. Class targeting is important here as well, as Tasks, Diagnostics and Recoveries can only execute against instances of the class they are targeted to. It wouldn't make sense, for instance, to launch a "Restart Service" task against a hard drive.

    Discovering instances and relationships is interesting. SCOM uses a "waterfall" approach to discovery. I will use SQL to illustrate. We'll begin by assuming we have a computer discovered. We create a discovery that discovers SQL servers and we'll target it at Computer. The system will then run this discovery rule on every Computer it knows about. When it runs on a computer that in fact has SQL installed, it will publish discovery data to our server and a new instance of SQL Server will be instantiated. Next, we have a rule targeted to SQL Server that discovers individual databases on the server. Once the configuration service gets notified of the new SQL instance, it will recalculate configuration and publish new configuration to the machine with the SQL server that includes this new discovery rule. This rule will then run and publish discovery information for the databases on the server. This allows deep discovery to occur without any user intervention, except for actually starting the waterfall. The first computer needs to be discovered, either by the discovery wizard, manual agent installation or programmatically via the SDK. For the latter, we support programmatic discovery of instances using the MCF portion of the SDK. Each connector is considered a discovery source and is able to submit discovery data on its behalf. When the connector goes away, all instances that were discovered solely by that connector also go away.

    The last thing I wanted to talk about was Monitors. Monitors define the state of an instance. Monitors also come in a hierarchy to help better model the state of an instance. The base of the hierarchy is called System.Health.EntityState and it represents THE state of an instance. Whenever you see state in the UI, it is the state of this particular monitor for that instance, unless stated otherwise. This particular monitor is an AggregateMonitor that rolls up state for its child monitors. The roll up semantics for aggregates are BestOf, WorstOf and Percentage. At the end of a monitor hierarchy chain must exist either a UnitMonitor or a DependencyMonitor. A UnitMonitor defines some single state aspect of a particular instance. For example, it may be monitoring the value of a particular performance counter. The importance of this particular monitor to the overall state of the instance is expressed by the monitor hierarchy it is a part of. Dependency monitors allow the state of one instance to depend on the state of another. Essentially, a dependency monitor allows you to define the relationship that is important to the state of this instance and the particular monitor of the target instance of this relationship that should be considered. One cool thing about monitors is that their definition is inherited based on the class hierarchy. So System.Health.EntityState is actually targeted to System.Entity and thus all instances automatically inherit this monitor and can roll up state to it. What this means practically is that if you want to specialize a class into a class of your own, you don't need to redefine the entire health model; you can simply augment it by deriving your class from the class you wish to extend and adding your own monitors to your class. You can even simply add monitors to the existing health model by targeting them anywhere in the class hierarchy that makes sense for your particular monitor.

    As always, let me know if there are any questions, anything you would like me to elaborate on or any ideas for future posts.

  • Jakub@Work

    More with Alert and State Change Insertion

    • 25 Comments

    Update: I have updated the management pack to work with the final RTM bits 

    Update #2: You cannot use the aggregate method described below to set the state of any instance not hosted on the Root Management Server.

    The last thing I had wanted to demonstrate about alert and state change insertion finally became resolved. This will not work in any of the public bits right now, but RC1 should be available soon and it will work there. Attached is the most recent version of the management pack to reference for this post. You'll have to clean up the references to match the proper public keys, but it should work otherwise.

    What I wanted to demonstrate was being able to define a monitor for a given class, but not having that monitor actually be instantiated for every instance of that class. Normally, if you define a monitor, you will get an instance of it (think a new workflow on your server) for every instance of the class the monitor is targeted to that is discovered. If you have thousands of instances, this can lead to significant performance issues. Now, we support the ability to define an aggregate monitor that will not be instantiated, as long as there are no monitors that roll up to it. In the sample MP attached you will find System.Connectors.SpecialAggregate, which is an example of this kind of monitor. It works like any other aggregate monitor in the system; it just doesn't actually have any other monitors from which to roll up state. So how does its state get set? That's where the additional changes come in.

    The first new addition is System.Connectors.Health.SetStateAction. This is the most basic form of a module that will be used to set the state of the aforementioned monitor. In this form, it accepts as configuration the ManagementGroupId, MonitorId (this is the monitor you want to set the state of), ManagedEntityId (this is the id of the instance you want to set the state of the monitor for) and the HealthState (this is the actual state you want to set the monitor to). There is a wrapper called System.Connectors.Health.TargetSetStateAction that abstracts away the need to set the ManagedEntityId and ManagementGroupId properties and that will work with the SDK data types. There are also three further wrappers that explicitly define the state to set the monitor to, leaving only the MonitorId as configuration.

    I have included a sample rule (System.Connectors.SpecialAggregate.Error) that will drive the state of the aggregate monitor using the same sample code I posted earlier. Note that the rule is targeted to RootManagementServer since it will process data for a variety of instances, while the aggregate monitor should be targeted at the proper class that represents the instance you want to model and drive state for.

  • Jakub@Work

    Caching on the Brain

    • 16 Comments

    This week I have been working a lot on caching in the SDK, trying to optimize some code paths and improve performance as much as I can, so I decided to share a bit about how the cache works, and some insights into its implementation while it's fresh on my mind.

    SCOM 2007 relies heavily on configuration data to function. Class and relationship type definitions become especially important when dealing with discovered objects. We found that it was very common to want to move up and down these class hierarchies, which would prove very costly from a performance standpoint if each operation required a roundtrip to the server and database. We also recognized that not all applications require this type of functionality, and incurring the additional memory hit was not desired (this was especially true for modules that need the SDK). Given this, the SDK has been designed with 3 different cache modes: Configuration, ManagementPacks and None. The cache mode you want to use can be specified using the ManagementGroupConnectionSettings object.
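    As a hedged sketch of picking a cache mode when connecting (the post only says the mode is specified via ManagementGroupConnectionSettings, so the CacheMode property name and the settings-based ManagementGroup constructor below are assumptions that may differ from the shipped SDK):

    // Hedged sketch: connect to the SDK service with an explicit cache mode.
    // The CacheMode property and the settings-based constructor are assumptions;
    // only the existence of ManagementGroupConnectionSettings is stated above.
    ManagementGroupConnectionSettings settings =
        new ManagementGroupConnectionSettings("localhost");
    settings.CacheMode = CacheMode.ManagementPacks;   // or Configuration / None

    ManagementGroup managementGroup = new ManagementGroup(settings);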

    First, let's go over what objects each cache mode will actually cache:

    Configuration: ManagementPack, MonitoringClass, MonitoringRelationshipClass, MonitoringViewType, UnitMonitorType, ModuleType derived classes, MonitoringDataType, MonitoringPage and MonitoringOverrideableParameter

    ManagementPacks: ManagementPack

    None: None =)

    For the first two modes there is also an event on ManagementGroup that will notify users of changes. OnTypeCacheRefresh is only fired in Configuration cache mode and indicates that something in the cache, other than ManagementPack objects, changed. This means that the data in the cache is actually different. Many things can trigger a ManagementPack changing, but not all of them change anything other than the ManagementPack object's LastModified property (for instance, creating a new view, or renaming one). OnManagementPackCacheRefresh gets triggered when any ManagementPack object changes for whatever reason, even if it didn't change anything else in the cache. This event is available in both Configuration and ManagementPacks mode.

    So, when do you want to use each mode? Configuration is great if you are doing lots of operations in the configuration space, especially moving up and down the various type hierarchies. It is also useful when working extensively with MonitoringObject (not PartialMonitoringObject) and needing to access the property values of many instances of different class types. Our UI runs in this mode. ManagementPacks is useful when configuration-related operations are used, but not extensively. This is actually a good mode to do MP authoring in, which requires retrieving management packs extensively, but not necessarily other objects. One thing that is important to note here is that every single object that exists in a management pack (rule, class, task, etc.) requires a ManagementPack object that is not returned in the initial call. If you call ManagementGroup.GetMonitoringRules(), every rule that comes back will make another call to the server to get its ManagementPack object if in cache mode None. If you are doing this, run in at least ManagementPacks cache mode; that's what it's for. None is a great mode for operational data related operations. If you are mostly working with alerts or performance data, or even simply submitting a single task, this mode is for you. (None was not available until recently, and is not in the bits that are currently available for download).

    One more thing I want to mention. ManagementPacks, when cached, will always maintain the exact same instance of a ManagementPack object in memory, even if properties change. Other objects are actually purged and recreated. This is extremely useful for authoring as you can guarantee that when you have an instance of a management pack that may have been retrieved in different ways, it is always the same instance in memory. A practical example: you get a management pack by calling ManagementGroup.GetManagementPack(Guid) and then you get a rule by calling ManagementGroup.GetMonitoringRule(Guid). The rule is conceptually in the same management pack as the GetManagementPack call returned, but who is to say it is the same instance? When you edit the rule, you will want to call ManagementPack.AcceptChanges(), which (if the instances were not the same) would not change your rule, since the rule's internal ManagementPack may have been a different instance, and that's what maintains any changed state. This is not the case in Configuration and ManagementPacks cache mode. The instance that represents a certain management pack will always be the exact same instance and maintain the same state about what is being edited across the board. Now, that makes multi-threading and working with the same management pack across threads trickier, but there are public locking mechanisms for the ManagementPack object exposed to help with that.
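    A short, hedged sketch of that scenario, using the GetManagementPack, GetMonitoringRule and AcceptChanges calls named above (the Guids are placeholders and managementGroup is assumed to be an already connected ManagementGroup running in Configuration or ManagementPacks cache mode):

    // Hedged sketch: in Configuration or ManagementPacks cache mode the pack
    // retrieved directly and the pack behind the rule should be the same
    // in-memory instance. managementPackId and ruleId are placeholder Guids.
    ManagementPack pack = managementGroup.GetManagementPack(managementPackId);
    MonitoringRule rule = managementGroup.GetMonitoringRule(ruleId);

    bool sameInstance = object.ReferenceEquals(pack, rule.GetManagementPack());
    Console.WriteLine("Same ManagementPack instance in memory: {0}", sameInstance);

    // Because the instances match, edits made through the rule are tracked by the
    // same pack, and a single AcceptChanges() call commits them.
    pack.AcceptChanges();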

    Lastly, a quick note about how the cache works. The definitive copy of the cache is actually maintained in memory in the SDK service. The service registers a couple of query notifications with SQL Server to be notified when things of interest change in the database, and that triggers a cache update on the server. When this update completes, the service loops through and notifies all clients that, when they had connected, requested to be notified of cache changes. Here we see another benefit of None cache mode: less server load, in that fewer clients need to be notified of changes.

  • Jakub@Work

    RC0 Sample Management Pack for Alert and State Change Insertion

    • 5 Comments

    It was brought to my attention that the management pack in the previous post did not import in RC0. I have attached an updated management pack that should.

  • Jakub@Work

    Sample Alert and State Change Insertion

    • 64 Comments

     Update: I have updated the Management Pack to work with the final RTM bits

     First, a disclaimer. Not everything I write here works on the Beta 2 bits that are currently out. I had to fix a few bugs in order to get all these samples working, so only the most recent builds will fully support the sample management pack. I will, however, provide at the end of the post a list of the things that don't work =).

    I've attached to the post a sample management pack that should import successfully on Beta 2; please let me know if it doesn't and what errors you get. This management pack is for sample purposes only. We will be shipping, either as part of the product or as a web download, a sealed SDK/MCF management pack that will help with programmatic alert and state change insertion and that will support all the things I am demonstrating here.

    What I would like to do is go through this management pack and talk about how each component works, and then include some sample code at the end that goes over how to drive the management pack from SDK code.

    The first thing you will notice in the management pack is a ConditionDetectionModuleType named System.Connectors.GenericAlertMapper. What this module type does is take as input any data type and output the proper data type for alert insertion into the database (System.Health.AlertUpdateData). This module type is marked as internal, meaning it cannot be referenced outside of this management pack, and simply provides some glue to make the whole process work.

    Next, we have the System.Connectors.PublishAlert WriteActionModuleType which takes the data produced by the aforementioned mapper and publishes it to the database. Regardless of where other parts of a workflow are running, this module type must run on a machine and as an account that has database access. This is controlled by targeting as described in the previous post. This module type is also internal.

    Now we have our first two public WriteActionModuleTypes, System.Connectors.GenerateAlertFromSdkEvent and System.Connectors.GenerateAlertFromSdkPerformanceData. These combine the aforementioned module types into a more usable composite. They take as input System.Event.LinkedData and System.Performance.LinkedData, respectively. Note, these are the two data types that are produced by the SDK/MCF operational data insertion API. Both module types have the same configuration, allowing you to specify the various properties of an alert.

    The last of the type definitions is a simple UnitMonitorType, System.Connectors.TwoStateMonitorType. This monitor type represents two states, Red and Green, which can be driven by events. You'll notice that it defines two operational state types, RedEvent and GreenEvent, which correspond to the two expression filter definitions that match on the $Config/RedEventId$ and $Config/GreenEventId$ to drive state. What this monitor type essentially defines is that if a "Red" event comes in, the state of the monitor is red, and vice-versa for a "Green" event. It also allows you to configure the event id for these events.

    Now we move to the part of the management pack where we use all these defined module types.

    First let's look at System.Connectors.Test.AlertOnThreshold and System.Connectors.Test.AlertOnEvent. Both these rules use the generic performance data and event data sources as mentioned in an earlier post. They produce performance data and events for any monitoring object they were inserted against, and as such, you'll notice both rules are targeted to Microsoft.SystemCenter.RootManagementServer; only a single instance of each rule will be running. The nice thing about this is that you can generate alerts for thousands of different instances with a single workflow, assuming your criteria for the alert is the same. Which brings me to the second part of the rule, which is the expression filter. Each rule has its own expression filter module that matches the data coming in to a particular threshold or event number. Lastly, each includes the appropriate write action to actually generate the alert, using parameter replacement to populate the name and description of the alert.

    The other two rules, System.Connectors.Test.AlertOnThresholdForComputer and System.Connectors.Test.AlertOnEventForComputer, are similar, only they use the targeted SDK data source modules and as such are targeted at System.Computer. It is important to note that targeting computers will only work on computers whose agents are running under an account that has database access. I used this as an example because it didn't require me to discover any new objects; plus, I had a single machine install where the only System.Computer was the root management server. The key difference between these two rules and the previous rules is that there will be a new instance of this rule running for every System.Computer object. So you can imagine, if you created a rule like this and targeted it to a custom type you had defined for which you discovered hundreds or thousands of instances, you would run into performance issues. From a pure modeling perspective, this is the "correct" way to do it, since logically you would like to target your workflows to your type; however, practically, it's better to use the previous types of rules to ensure better performance.

    The last object in the sample is System.Connectors.Test.Monitor. This monitor is an instance of the monitor type we defined earlier. It maps the GreenEvent state type of the monitor type to the Success health state and the RedEvent to the Error health state. It defines via configuration that events with id 1 will make the monitor go red and events with id 2 will make it go back to green. It also defines that an alert should be generated when the state goes to Error and also that the alert should be auto-resolved when the state goes back to Success. Lastly, you'll notice the alert definition here actually uses the AlertMessage paradigm for alert name and description. This allows for fully localized alert names and descriptions.

    This monitor uses the targeted data source and thus will create an instance of this monitor per discovered object. We are working on a similar solution to the generic alert processing rules for monitors and it will be available in RTM, it's just not available yet.

    Now, what doesn't work? Well, everything that uses events should work fine. For performance data, the targeted versions of workflows won't work, but the generic non-targeted ones will. Also, any string fields in the performance data item are truncated by 4 bytes, yay marshalling. Like I said earlier, these issues have been resolved in the latest builds.  

    Here is some sample code to drive the example management pack:

    using System;

    using System.Collections.ObjectModel;

    using Microsoft.EnterpriseManagement;

    using Microsoft.EnterpriseManagement.Configuration;

    using Microsoft.EnterpriseManagement.Monitoring;

     

    namespace Jakub_WorkSamples

    {

        partial class Program

        {

            static void DriveSystemConnectorLibraryTestManagementPack()

            {

                // Connect to the sdk service on the local machine

                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

     

                // Get the MonitoringClass representing a Computer

                MonitoringClass computerClass =

                    localManagementGroup.GetMonitoringClass(SystemMonitoringClass.Computer);

     

                // Use the class to retrieve partial monitoring objects

                ReadOnlyCollection<PartialMonitoringObject> computerObjects =

                    localManagementGroup.GetPartialMonitoringObjects(computerClass);

     

                // Loop through each computer

                foreach (PartialMonitoringObject computer in computerObjects)

                {

                    // Create the perf item (this will generate alerts from

                    // System.Connectors.Test.AlertOnThreshold and

                    // System.Connectors.Test.AlertOnThresholdForComputer )

                    CustomMonitoringPerformanceData perfData =

                        new CustomMonitoringPerformanceData("MyObject", "MyCounter", 40);

                    // Allows you to set the instance name of the item.

                    perfData.InstanceName = computer.DisplayName;

                    // Allows you to specify a time that data was sampled.

                    perfData.TimeSampled = DateTime.UtcNow.AddDays(-1);

                    computer.InsertCustomMonitoringPerformanceData(perfData);

     

                    // Create a red event (this will generate alerts from

                    // System.Connectors.Test.AlertOnEvent,

                    // System.Connectors.Test.AlertOnEventForComputer and

                    // System.Connectors.Test.Monitor

                    // and make the state of the computer for this monitor go red)

                    CustomMonitoringEvent redEvent =

                        new CustomMonitoringEvent("My publisher", 1);

                    redEvent.EventData = "<Data>Some data</Data>";

                    computer.InsertCustomMonitoringEvent(redEvent);

     

                    // Wait for the event to be processed

                    System.Threading.Thread.Sleep(30000);

     

                    // Create a green event (this will resolve the alert

                    // from System.Connectors.Test.Monitor and make the state

                    // go green)

                    CustomMonitoringEvent greenEvent =

                        new CustomMonitoringEvent("My publisher", 2);

                    greenEvent.EventData = "<Data>Some data</Data>";

                    computer.InsertCustomMonitoringEvent(greenEvent);

                }

            }

        }

    }

     

  • Jakub@Work

    Workflow Targeting and Classes

    • 22 Comments

    I set out this week trying to put together a post about designing and deploying rules and monitors that utilize the SDK data sources that I talked about in the last post. Unfortunately, it's not ready yet. Among other things this week, I have been trying to write and deploy a sample management pack that would demonstrate the various techniques that we recommend for inserting operational data via the SDK to SCOM, but in the process I have run into a few issues that need resolving before presenting the information. I am definitely working on it and I'll get something up as soon as we work through the issues I have run into. If you need something working immediately, please contact me directly with questions you have so I can better address your specific scenario.

    In the meantime, I wanted to discuss classes in SCOM and how they relate to workflow (take this to mean a rule, monitor or task in SCOM 2007) targeting. I think this topic is a good stepping stone for understanding the techniques I'll talk about when the aforementioned post is ready.

    First, what do I mean by targeting? In MOM 2005 rules were deployed based on the rule groups they were in and their association to computer groups. Rules would be deployed irrespective of whether they were needed on a particular computer. The targeting mechanism in 2007 is much different and based entirely around the class system that describes the object space. Each workflow is assigned a specific target class and the agents will receive rules when they have objects of that particular class being managed on that machine.

    Ok, so what does that all mean? Let's start with a sample class hierarchy. First, we have a base class of all classes, System.Entity (this is the actual base class for all classes in SCOM 2007). This class is abstract (meaning that there cannot be an instance of just System.Entity). Next, suppose we have a class called Microsoft.SqlServer (note this is not the actual class hierarchy we will ship, this is only for illustrative purposes). This class is not abstract and defines all the key properties that identify a SQL Server. Key properties are the properties that uniquely identify an instance in an enterprise. For a SQL Server this would be a combination of the server name as well as the computer name the server is on. Next, there is a class Microsoft.SqlServer.2005 which derives from Microsoft.SqlServer, adding properties specific to SQL Server 2005, but it adds no key properties (and in fact cannot add any). This means that a SQL Server 2005 in your enterprise would be both a Microsoft.SqlServer AND a Microsoft.SqlServer.2005, and the object that represented it, from an identity perspective, would be indistinguishable (i.e. it's the same SQL Server). Lastly, SQL Servers can't exist by themselves, so we add a System.Computer class to the mix that derives from System.Entity. We now have all the classes defined that we need to talk about our first workflow, discovery.

    Let's assume we already have a computer discovered in our enterprise, Computer1. In order to discover a SQL Server, we need two things:

    1. We need to define a discovery rule that can discover a SQL Server.
    2. We need to deploy and run the rule.

    In order to make our rule deploy and run, we'll need to target it to a type that gets discovered before SQL Server, in our case System.Computer. If we target a discovery rule to the type it's discovering, we'd have a classic chicken-and-egg problem on our hands. When we target our discovery rule to System.Computer, the configuration service knows that there is a Computer1 in the enterprise that is running and being managed by an agent, and it will deploy any workflow targeted at System.Computer, including our discovery rule, to that machine and in turn execute the rule. Once the rule executes it will submit new discovery data to our system and the SQL Server will appear; let's call the server Sql1. Our SQL Server 2005 discovery rule can be targeted to System.Computer, or we could actually target it to Microsoft.SqlServer since it will already be discovered by the aforementioned rule. This illustrates a waterfall approach to discovery, and the recommended way discovery is done in the system. There needs to be a "seed" discovered object that is leveraged for further discovery and that can generate subsequent "seeds" for related objects. In SCOM 2007 we jump start the system by pre-discovering the primary management server (also a computer) and allow manual computer discovery and agent deployment that jump starts discovery on those machines.

    This example also illustrates the workflow targeting and deployment mechanism in SCOM 2007. When objects are discovered in an enterprise, they are all discovered and identified as instances of particular classes. In the previous example, Computer1 is both a System.Entity and a System.Computer. Sql1 is a System.Entity, Microsoft.SqlServer and Microsoft.SqlServer.2005. We maintain this state in the configuration service and deploy workflows to agents that are managing these instances, based on the types of instances they are managing. This ensures that workflows get deployed and executed on the agents that need them, with no need to manage targeting.

    Another example of targeting would be with a task. Let's say we have a task that restarts SQL Server. This task can actually run on any Microsoft.SqlServer. Maybe we have another task that disables a specific SQL 2005 feature that only makes sense to run against objects that are in fact SQL Server 2005 instances. The first task would be targeted against Microsoft.SqlServer while the second against Microsoft.SqlServer.2005. If you select a SQL Server object in the SCOM 2007 UI that is not a SQL Server 2005, but instead is SQL Server 2000, the 2005 specific task will not be available. If you try to run it anyway via the SDK or the command shell, it will fail because the instance you are trying to run against isn't the right class and doesn't understand the task. The first task however, restarting the service, will run against both the SQL 2000 and 2005 instances and be available for both in the UI.

    I hope this helps make a bit more sense of the new class system and how to leverage it for targeting. As with anything else, there are edge cases, more complicated hierarchies and other considerations when designing and targeting workflows, but from a "pure" modeling perspective, this should give you an idea as to how things work.

  • Jakub@Work

    Inserting Operational Data

    • 52 Comments

    I received a few requests to talk about operational data insertion in SCOM 2007. I was a bit hesitant to take this on now, mostly because I haven't really covered a lot of the core concepts required to truly understand the data insertion process, but I decided that people can ask clarifying questions if they have any; getting this out there is an important step in helping customers and partners move from 2005 to 2007.

    There are two objects that are used for inserting operational data: CustomMonitoringEvent and CustomMonitoringPerformanceData. Inserting these objects is supported both via the SDK directly as well as the MCF web-service.

    This data can be inserted against any discovered object (unlike MOM 2005 where only programmatically inserted computers were supported for event/performance data insertion in the SDK) via the InsertCustomMonitoringEvent(s) and InsertCustomMonitoringPerformanceData methods on the PartialMonitoringObject class as well as the InsertMonitoringEvents and InsertMonitoringPerformanceData methods on the MCF endpoint.

    The below sample illustrates finding all computers in the management group and inserting a single performance data value against each:

    using System;

    using System.Collections.ObjectModel;

    using Microsoft.EnterpriseManagement;

    using Microsoft.EnterpriseManagement.Configuration;

    using Microsoft.EnterpriseManagement.Monitoring;

     

    namespace Jakub_WorkSamples

    {

        partial class Program

        {

            static void InsertCustomMonitoringPerformanceData()

            {

                // Connect to the sdk service on the local machine

                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

     

                // Get the MonitoringClass representing a Computer

                MonitoringClass computerClass =

                    localManagementGroup.GetMonitoringClass(SystemMonitoringClass.Computer);

     

                // Use the class to retrieve partial monitoring objects

                ReadOnlyCollection<PartialMonitoringObject> computerObjects =

                    localManagementGroup.GetPartialMonitoringObjects(computerClass);

     

                // Loop through each computer

                foreach (PartialMonitoringObject computer in computerObjects)

                {

                    // Create a CustomMonitoringPerformanceData object

                    CustomMonitoringPerformanceData perfData =

                        new CustomMonitoringPerformanceData("CPU", "CPU Threshold", 21.3);

                    perfData.InstanceName = computer.Name;

     

                    // Insert the data

                    computer.InsertCustomMonitoringPerformanceData(perfData);

                }

            }

        }

    }

    The pattern for events is very similar; there are just more properties available on CustomMonitoringEvent.
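
    As a rough illustration, the snippet below could replace the performance data lines inside the foreach loop of the sample above. The publisher name, event number and the Channel/LoggingComputer properties are illustrative assumptions on my part; InsertCustomMonitoringEvent is the method mentioned earlier:

                    // Create a CustomMonitoringEvent (the publisher name and event number
                    // are made-up values for this sketch)
                    CustomMonitoringEvent customEvent =
                        new CustomMonitoringEvent("My Publisher", 1234);

                    // These property names are assumptions based on the standard event schema
                    customEvent.Channel = "Application";
                    customEvent.LoggingComputer = computer.Name;

                    // Insert the event against the computer
                    computer.InsertCustomMonitoringEvent(customEvent);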

    In order to actually use these objects and methods, it's important to understand what happens when any of the insert calls complete. First, these objects get converted to their runtime counterparts as XML: for events this is System.Event.LinkedData* as defined in the System.Library* management pack, and for performance data this is System.Performance.LinkedData* as defined in the System.Performance.Library* management pack. These items are then inserted into the PendingSdkDataSource table in the database. Once this happens, the call succeeds and returns, even though the data has not yet been processed.

    In order to actually pick up and utilize the inserted data, I also wrote several DataSourceModuleTypes (a management pack level concept that describes data sources, called providers in MOM 2005, that are available for use in rules and monitors) that read from the PendingSdkDataSource table in the database and process the newly inserted objects. "Normal" performance data and event rules use the system-defined DataSourceModuleTypes that read the system performance counters and the event log, respectively; data inserted via the SDK will not be processed if you use those data sources. All the data sources that facilitate SDK insertion are on a fixed polling interval of 30 seconds and wake up that often to process any new data in the database. There are two DataSourceModuleTypes available for events and two for performance data, all defined in the Microsoft.SystemCenter.Library*:

    Microsoft.SystemCenter.SdkEventProvider - This data source will output a System.Event.LinkedData object for every CustomMonitoringEvent inserted via the SDK, regardless of the object it was inserted against.

    Microsoft.SystemCenter.TargetEntitySdkEventProvider - This data source will only output a System.Event.LinkedData object for CustomMonitoringEvents inserted via the SDK that were inserted against the target of the workflow that is using the DataSourceModuleType. For instance, if you create a new rule and target it to the System.Computer type and use this DataSourceModuleType as the data source of the rule, the only events that will come out of the data source will be events that were inserted against objects of the System.Computer class.

    Microsoft.SystemCenter.SdkPerformanceDataProvider - The same as the SdkEventProvider, only for System.Performance.LinkedData and CustomMonitoringPerformanceData.

    Microsoft.SystemCenter.TargetEntitySdkPerformanceDataProvider - The same as the TargetEntitySdkEventProvider, only for System.Performance.LinkedData and CustomMonitoringPerformanceData.

    So, in order to actually drive state of discovered objects, or perform other actions based on the data inserted via the SDK, you will need to write rules or monitors that use the aforementioned DataSourceModuleTypes. We do ship and install by default in the Microsoft.SystemCenter.Internal* management pack two rules that automatically collect all the data inserted via the SDK: Microsoft.SystemCenter.CollectSdkEventData* and Microsoft.SystemCenter.CollectSdkPerformanceData*. These rules are both targeted at the RootManagementServer* class and, as such, will only be instantiated on the Principal Management Server.

    One very important thing to note: if you write rules or monitors that use the SDK data sources, they must be executed on a server that has database access AND the account the rule or monitor is running under must have the required database permissions. In general, a rule or monitor is executed on the machine that discovered the object of a particular class; i.e. if you discover an instance of SQL Server on computer A and computer A has an agent on it, all rules and monitors targeted to SQL Server will be run on that particular SQL Server object's behalf by the agent on computer A. That's a bit of a mouthful, but I hope it makes sense.

    * These names are subject to change prior to RTM.

  • Jakub@Work

    Getting Started

    • 18 Comments

    The easiest way to get going is to install the SCOM UI on the machine that you want to develop on. This will ensure all the necessary components are installed and drop the assemblies you need in order to write against the SDK. There is currently a work item being tracked to have a special installer for just the SDK and any necessary components in order to easily facilitate developing on a non-SCOM box. Hopefully we can get this in for RTM; if not, I'll make sure to post exact instructions when that time comes.

    Programmatically, the only assemblies of interest are:

    Microsoft.EnterpriseManagement.OperationsManager.dll (The main SDK assembly)

    Microsoft.EnterpriseManagement.OperationsManager.Common.dll (A common assembly that the SDK service and client share, containing mostly exceptions)

    Although in Beta 2 there was also a dependency on EventCommon.dll and Microsoft.Mom.Common.dll, both of these dependencies have been removed for the upcoming release candidate and will not be present at RTM.

    If you want to develop and run on a machine that does not have the UI installed, you will need to:

    1. Copy over the two assemblies mentioned above (they will have to be copied from the GAC) to a directory on the machine you want to develop on.
    2. Copy over Bid2ETW.dll from the SCOM install directory to the same machine.
    3. Install the correct version of Microsoft .Net Framework 3.0.
    4. In the registry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BidInterface\Loader add a new string value: "<Directory where your code will be running>\*"="<Path to Bid file from #2>\Bid2ETW.dll"

    This should be everything you need to do to setup your dev machine. Now just link against the assemblies and you're ready to get started.

    The first thing any application needs to do in order to work with SCOM is create a new ManagementGroup object.  The sample code below creates a ManagementGroup connected to the local machine (Note: In order for this code to work, it needs to be run on the Primary Management Server.)

    using System;

    using Microsoft.EnterpriseManagement;

     

    namespace Jakub_WorkSamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

            }

        }

    }

     

    With this object, you can begin to explore the rest of the API and accomplish, hopefully, any tasks you need to. In future posts, I will talk more about connecting options and the ManagementGroupSettings class, but for now, this should get you started.
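
    For example, connecting to a management server other than the local machine is just a matter of passing that server's name to the constructor (the server name below is only a placeholder):

    // "OMServer01" is a placeholder; use the name of your management server
    ManagementGroup remoteManagementGroup = new ManagementGroup("OMServer01");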

    EDIT - In recent builds (I believe RC1 included) you do not need to do steps 2 and 4 above, unless you want SCOM tracing to work.

  • Jakub@Work

    Introductions

    • 6 Comments

    First, a little bit about myself. My name is Jakub Oleksy and I am a developer for the System Center Operations Manager (SCOM) team at Microsoft. I started working for Microsoft (and this particular team) in the middle of 2003 after graduating from UCLA with a Master's degree in Computer Science.

    The first feature I worked on was the MOM to MOM Product Connector for MOM 2000 SP1. I then worked on the MOM Connector Framework and the MOM to MOM Product Connector for MOM 2005. During this time I also began work on what became officially known as the Managed Class Library in MOM 2005 (I usually refer to this as the SDK).

    In 2005, the SDK was built after the product was almost complete, so it mirrored the functionality of the UI, but it did not provide any services to the UI or other components; those components were written directly against a data access layer that talked to the database. Also, in MOM 2005 there was no built-in way to remote the SDK.

    For SCOM 2007, we set off with a goal that the SDK would provide 100% of the functionality that the UI exposes and, in fact, have the UI built entirely on top of our public SDK. With SCOM being a platform more than anything else, we felt it was more important than ever to provide as many accessibility and extensibility opportunities in the product as possible, and the SDK is one of the main focuses of that effort. At this point in the product cycle, I feel that we have more than succeeded in our goals. Today the UI, the Command Shell and various modules in the product all use the public SDK as their sole provider for interacting with SCOM. In fact, the SDK provides more functionality than is even exposed in these components. Further, the SDK is fully remotable using WCF (Windows Communication Foundation).

    My primary areas of responsibility for SCOM 2007 are the SDK, MCF (or OMCF now) and parts of tiering (the replacement for the MOM to MOM Product Connector). I was also primarily responsible for the redesign of MCF and making it fit with the new SCOM 2007 product design as we move to a service-oriented architecture.

    My goal with this blog is to provide as much information as I can about topics that interest you about SCOM in general and particularly the SDK, MCF and tiering. Please feel free to contact me with any requests or questions you might have, I will be more than happy to address them either directly, or in future posts. Thanks!
