• Jakub@Work

    Management Pack Authoring in SCOM SDK - Quick Start Guide

    • 88 Comments

    Well, it has been quite a holiday! My wife delivered our son, Jamison Jakub, on December 26th at 8:14 AM. Mom and baby are doing great. I've posted some photos on our family's personal site: www.oleksyfamily.com. I am more or less back at work for now and will be taking my paternity leave later, most likely after we ship SCOM 2007.

    I wanted to talk a little about management pack authoring and editing in the SDK. This is the largest part of the SDK that I did not write, so I am not an expert at the internal workings of a lot of it, but I do know how it works.

    First, there is a rather large object hierarchy when it comes to management pack related objects. At the most derived level, you will almost always find a Monitoring* object derived from a ManagementPack* object (e.g. MonitoringRule and ManagementPackRule). Monitoring* objects are returned by the online, connected SDK from the database. These objects are not "newable" and contain slightly more context, and sometimes more functionality, than their respective base classes. (For instance, they always have a pointer to ManagementGroup, which ManagementPack* objects do not.) Whenever you want to create a new management pack object, you have to use the ManagementPack* version of it.

    The context of a management pack object is always a management pack. When you create an object, you need to specify which management pack it goes into. This is the management pack on which the ultimate commit call (AcceptChanges()) will need to be made.
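
    For instance, here is a rough sketch of creating a new rule (the rule name here is made up, and a real rule would also need a target, category and modules configured before the management pack would verify):

    // Connect and grab an unsealed management pack to author into.
    ManagementGroup managementGroup = new ManagementGroup("localhost");
    ManagementPack managementPack = managementGroup.GetManagementPacks(
        "Microsoft.SystemCenter.OperationsManager.DefaultUser")[0];

    // New objects are created with the ManagementPack* class, in the context
    // of the management pack they will live in.
    ManagementPackRule newRule = new ManagementPackRule(managementPack, "MyMP.MyNewRule");
    newRule.DisplayName = "My New Rule";

    // ... configure the rule's target, category and modules here ...

    // Commit the pending changes to this management pack.
    managementPack.AcceptChanges();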

    If you want to change an existing object, simply change whatever you want about it (i.e. any settable properties) and then set its Status property to PendingUpdate. Once you do this, internally it gets placed into a pending changes collection for that management pack. But which instance, you might ask? If you are working in the connected SDK and you retrieve a management pack object from the database, that object belongs to a particular management pack. Internally, this management pack is cached (unless you are running with CacheMode.None, in which case you should not be doing management pack authoring) and we always maintain the same instance of that management pack. No matter how many times you retrieve that object, that management pack instance will be the same. In fact, no matter how you retrieve that management pack (with the ONE caveat that if you do criteria-based searching for a management pack, the instance will actually be different, as criteria searches don't use the cache to perform the query), it will be the same instance.

    This poses an interesting problem in that if multiple users in the same application try to update the same management pack, they will be writing into the same management pack at the same time. To alleviate this problem, the ManagementPack class exposes a LockObject property that will lock the management pack for editing. This lock should be taken prior to updating the management pack and released after committing the changes.

    In order to commit changes, simply retrieve the management pack you are changing (either by calling GetManagementPack() on the object(s) you are changing, or via some other SDK method that is not criteria-based) and call AcceptChanges(). Once this method returns, the changes are committed.
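
    Putting the pieces together, here is a rough sketch of an update end-to-end (the rule choice is arbitrary, and the rule must live in an unsealed management pack):

    ManagementGroup managementGroup = new ManagementGroup("localhost");

    // Grab some rule to edit; a real application would query for a specific one.
    MonitoringRule rule = managementGroup.GetMonitoringRules()[0];
    ManagementPack managementPack = rule.GetManagementPack();

    // Take the management pack lock while editing and committing.
    lock (managementPack.LockObject)
    {
        rule.DisplayName = "Updated display name";
        rule.Status = ManagementPackElementStatus.PendingUpdate;

        // Commits all pending changes for this management pack.
        managementPack.AcceptChanges();
    }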

    I am sure users will have questions in this area, but it is such a large topic to cover, I just wanted to put together a small quick start guide to get everyone started. As always, please ask any questions you might have or if you want an expanded post on anything in particular regarding this area.

  • Jakub@Work

    Differences between MCF in MOM 2005 and SCOM 2007

    • 78 Comments

    I wanted to go through and outline some of the changes we made to MCF since our last release. What I really wanted to accomplish in redesigning MCF was to make it more integrated with SCOM 2007 than it was in MOM 2005, to make necessary performance improvements, and to maintain at least the same level of functionality as before, while keeping with the model-based approach of SCOM 2007.

    Integration with SCOM 2007

    With the significant changes that took place between these releases, we had an opportunity to integrate MCF more closely with the product than it was before. The first step here was getting rid of resolution state as the mechanism to mark alerts for forwarding. Instead, the alert object has a special nullable field (ConnectorId) that represents the identifier of the connector that currently "owns" this alert.

    There are a few ways to mark alerts for forwarding. In the UI, you can right-click on an alert and there will be a fly-out menu with available connectors so you can explicitly mark an alert for forwarding. Programmatically, the ConnectorId field on a MonitoringAlert object is settable and, once set, can be applied by calling the Update() method on the alert (see the sketch after the next paragraph). Lastly, we have introduced the concept of connector subscriptions, exposed both in the UI and programmatically (MonitoringConnectorSubscription), which allow users to "subscribe" a connector to a particular set of alerts, specified by criteria, so that matching alerts are marked for that connector.

    Another aspect of tighter integration is that 100% of MCF functionality is available via the managed SDK. While we still ship a web-service (using WCF) and a proxy to communicate with it, the recommendation is to use the SDK for MCF functionality unless you are not running on Windows; the SDK provides a richer object model and a better programming experience. The namespace in the SDK that serves MCF functionality is Microsoft.EnterpriseManagement.ConnectorFramework, and the main object that provides the root of much of the functionality is ConnectorFrameworkAdministration, which can be retrieved off a ManagementGroup object via GetConnectorFrameworkAdministration().
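
    For example, marking an existing alert for forwarding from SDK code looks roughly like this (grabbing the first connector and the first alert is purely for illustration; real code would query for specific ones):

    ManagementGroup managementGroup = new ManagementGroup("localhost");
    ConnectorFrameworkAdministration connectorAdmin =
        managementGroup.GetConnectorFrameworkAdministration();

    // Pick a connector and an alert to forward.
    MonitoringConnector connector = connectorAdmin.GetMonitoringConnectors()[0];
    MonitoringAlert alert = managementGroup.GetMonitoringAlerts()[0];

    // Setting ConnectorId marks the alert as "owned" by this connector;
    // Update takes a comment describing the change.
    alert.ConnectorId = connector.Id;
    alert.Update("Marked for forwarding");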

    The last aspect of integration is the fact that a connector is modeled in SCOM 2007. While we aren't planning on shipping anything that provides health information for a connector out of the box (time constraints), there is an ability for customers and partners who write connectors to develop a health model for their connector. In a nutshell, creating a connector creates a new instance of Microsoft.SystemCenter.Connector. In order to develop a health model around a new connector, a user should extend this class with their own and develop a health model targeted at the new class. Once this is complete, the built-in setup process will need to be augmented by the connector author to discover their connector as an instance of their new class. Assuming the model is developed and working correctly, the connector object will now have state like anything else SCOM is managing.
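
    As a sketch, the class a connector author would add might look something like this in their management pack (the ID is made up, and "SC" is assumed to be an alias to the Microsoft.SystemCenter library):

    <!-- Hypothetical class to hang a connector health model off of -->
    <ClassType ID="MyCompany.MyConnector" Accessibility="Public" Abstract="false" Base="SC!Microsoft.SystemCenter.Connector" Hosted="false" Singleton="false" />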

    Performance Improvements

    In 2005 we had many performance issues at scale, due to the fact that a lot of data was copied from place to place. In the MOM to MOM Connector scenario in particular, we physically moved alerts from bottom tiers to the top tier, replicating the data and keeping it in sync.

    The first change for performance was with respect to tiering. We no longer copy alerts from bottom tiers to the top tier, but instead make alerts from multiple tiers available on the top tier. A SCOM administrator can set up tiers in the UI and make them available to connectors. Once available, MCF provides special "tiered" versions of various methods that aggregate data across the tiers the administrator set up. For instance, calling GetMonitoringAlertsForTiers() will loop through all available tiers and aggregate the results, returning alerts from each. The recommended approach for tiering, however, is to manage the calls to each tier from within the connector code itself, as this allows more robustness in terms of error handling and allows for parallel processing of multiple tiers. There is much more to talk about with respect to tiering and how to accomplish this, but it is not in the scope of this article. If there is interest, I will put together a more detailed post on tiering.

    The next improvement for performance has to do with how we process alerts and alert changes. In MOM 2005, we distinguished between "New" and "Updated" alerts and actually kept a cached copy of each change to an alert in a pending list for each connector. If an alert changed 3 times, we would have three independently acknowledgeable changes and three copies of the alert in the cache. This caused performance problems when dealing with large scale deployments. In order to remedy this problem, in SCOM we manage alerts for connectors in place. The alerts in the alert table have a timestamp associated with non-connector related changes (so you don't get an update to an alert you changed as a connector), and that timestamp is checked against the current bookmark of a connector when retrieving alerts. Acknowledging data is now a function of moving a connector's bookmark. In other words, when a connector retrieves 10 alerts and is done processing them, it should acknowledge receipt and processing by calling AcknowledgeMonitoringAlerts() with the maximum time of the alerts in the batch that was retrieved. This will ensure no data is lost.
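
    A sketch of that retrieve/process/acknowledge loop, reusing the connector object from the earlier sketch (using LastModified as the timestamp the bookmark tracks is my assumption):

    // Retrieve the next batch of alerts marked for this connector.
    ReadOnlyCollection<ConnectorMonitoringAlert> alerts = connector.GetMonitoringAlerts();

    DateTime maxTime = DateTime.MinValue;
    foreach (ConnectorMonitoringAlert alert in alerts)
    {
        // ... forward the alert to the external system here ...

        if (alert.LastModified > maxTime)
        {
            maxTime = alert.LastModified;
        }
    }

    // Moving the bookmark acknowledges everything up to and including maxTime.
    if (alerts.Count > 0)
    {
        connector.AcknowledgeMonitoringAlerts(maxTime);
    }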

    Important Note: Since SQL datetime is not particularly granular, it is possible that two alerts with the same datetime timestamp get split between two SELECT calls. Even worse, when acknowledging across tiers, the system times on the machines will not be the same, which might cause data loss by acknowledging one tier with a time taken from an alert on another tier that is slightly in the past. In order to remedy this, we actually have a built-in "delay" when retrieving alerts, so that we don't get the newest alerts but instead get alerts older than 30 seconds. This causes some "weird" behavior: if you mark an alert for forwarding and immediately call GetMonitoringAlerts(), you won't get anything, even though the alert is marked. This value is configurable in the database. The query below should do it, pending any changes to the property name at RTM. (The SettingValue is the number of seconds of "buffer" you want in GetMonitoringAlerts calls. It needs to be at least 1.)

    UPDATE GlobalSettings SET SettingValue = 1 WHERE ManagedTypePropertyId in
    (SELECT ManagedTypePropertyId FROM ManagedTypeProperty WHERE ManagedTypePropertyName = 'TierTimeDifferenceThreshold')

    Other Random Things

    1. We no longer support discovery data insertion via the web-service. This can be done as described in my earlier post.
    2. We support batching (and in fact enforce it with a maximum size of 1000), however the batch size is an "approximation" as ties in the alert timestamp might require more than the requested number of alerts to be returned. Also, batch size for tiered methods is the total across all tiers, not per tier.
    3. The MonitoringConnector object in the SDK has an Initialized property and a Bookmark property available for reading. The web-service has GetConnectorState() and GetConnectorBookmark() for the same purpose.
    4. Alert insertion is not supported. Please see this post and this one.
    5. Alert history retrieval is not done as part of the alert retrieval. Via the SDK there is a GetMonitoringAlertHistory() method on the alert. Via the web-service we have GetMonitoringAlertHistoryByAlertIds() for the same purpose.
    6. We added a way to generically get alerts via the web-service: GetMonitoringAlertsByIds(). This is available in the SDK in a variety of methods.
    7. When updating alerts via the web-service, the connector is implicit. When updating alerts via the SDK on behalf of a connector, it is important to use the overload that accepts a MonitoringConnector object to indicate it is the connector that is changing the alert.
  • Jakub@Work

    Overrides in SCOM 2007

    • 23 Comments

    In SCOM 2007 we have implemented a powerful, but somewhat complicated override architecture.

    Basic overrides for rules, monitors and other workflows fall into two categories: configuration and property overrides. Configuration overrides allow you to override particular configuration elements that are unique to a workflow, while property overrides allow you to override the properties (such as the Enabled property) that are common among all workflows or a particular type (e.g. rules, monitors, etc.).

    Configuration overrides actually begin with the management pack author declaring what portions of configuration are overrideable. For instance, if I declare the following DataSourceModuleType in a management pack:

    <DataSourceModuleType ID="MyDSModuleType" Accessibility="Public">
     <Configuration>
      <xsd:element name="Name" type="xsd:string"></xsd:element>
     </Configuration>
     <OverrideableParameters>
      <OverrideableParameter ID="Name" Selector="$Config/Name$" ParameterType="string"/>
     </OverrideableParameters>
     <ModuleImplementation>
      <Managed>
       <Assembly>MyAssembly</Assembly>
       <Type>MyType</Type>
      </Managed>
     </ModuleImplementation>
     <OutputType>MyDataType</OutputType>
    </DataSourceModuleType>

    the configuration element Name is declared as being overrideable via the OverrideableParameters element. This means that any workflow that uses this data source type will be able to leverage this as an overrideable parameter. If a portion of configuration is not marked as overrideable, it can't be overridden. Also, we only support simple types for overrideable parameters, so if you have something complex that needs to be overridden, it needs to be declared as a string and your module will need to do the parsing on its own.

    Nothing special needs to be declared for property based overrides. All workflows support overriding the "Enabled" property (an implicit property indicating whether the workflow is enabled or not) and monitors have additional property overrides defined that allow all the various alert related parameters to be overridden.

    When actually defining an override, there are several things that need to (or can) be specified. First off, you need to specify which workflow you want the override to apply to. You also have to specify which configuration element (by referencing the OverrideableParameter ID) or which property you want to override, as well as the new value you want. Lastly, you need to define where this override applies. We provide two ways to specify this: Context and ContextInstance. The Context attribute specifies which class you want the override to apply to, while the ContextInstance (which is the Guid Id of a monitoring object you want to target) specifies a particular instance. I will go over exactly how these are resolved in a bit, when I talk about the algorithm we use for applying overrides. The other thing you can specify for these overrides is whether or not they are "Enforced". Enforced overrides always take precedence over non-Enforced ones and can only exist in non-sealed management packs, to better allow administrators to manage their overrides.
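
    For illustration, a property override and a configuration override might look roughly like this in an unsealed management pack (the IDs, the rule name and the "Windows" alias are made up):

    <!-- Hypothetical property override: disable MyMP.MyRule for all Windows computers -->
    <RulePropertyOverride ID="MyMP.DisableMyRule" Context="Windows!Microsoft.Windows.Computer" Enforced="false" Rule="MyMP.MyRule" Property="Enabled">
     <Value>false</Value>
    </RulePropertyOverride>

    <!-- Hypothetical configuration override: change the Name parameter declared earlier -->
    <RuleConfigurationOverride ID="MyMP.OverrideName" Context="Windows!Microsoft.Windows.Computer" Rule="MyMP.MyRule" Parameter="Name">
     <Value>SomeOtherName</Value>
    </RuleConfigurationOverride>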

    Orthogonal to the aforementioned overrides, we also have the concept of category overrides. Category overrides apply to workflows whose Enabled property is defined as onAdvancedMonitoring, onStandardMonitoring or onEssentialMonitoring. These overrides follow the same targeting concepts as the others, with the addition of which Category of workflow to apply to (EventCollection, StateCollection, etc.), but act in broad strokes by enabling and disabling many workflows with a single override. If I have a rule that is defined as onAdvancedMonitoring, it will only run if there is an override indicating that onAdvancedMonitoring is enabled. If I have a rule that is marked as onEssentialMonitoring, it will run with an override applied of onAdvancedMonitoring, onStandardMonitoring or onEssentialMonitoring. Essentially, each level is a superset of the level before it, and is enabled as such. SCOM will ship out of the box with overrides enabling onStandardMonitoring, while SCE will ship with onEssentialMonitoring.

    Now, how do all these overrides actually get applied? To take the most complicated case, we'll work with an instance. We want to know what overrides apply to a particular instance of IIS for a particular rule targeted toward IIS. The first thing the algorithm does, conceptually, is gather all the types and instances in this instance's hierarchy. So, this hierarchy would include the IIS class and any classes it derives from, all the way to System.Entity, as well as the computer instance that hosts this IIS instance, the computer class of that computer, and all of its base classes. Next, the algorithm collects any overrides that may apply to this particular rule and overlays them on the hierarchy. So, if you have an Enabled override disabling this rule with a context of System.Entity, it will exist in this object's hierarchy. With this conceptual hierarchy in mind, the algorithm starts at the top and traverses down applying the following criteria, in priority order:

      1. Non-Category enabled overrides always win over category overrides
      2. An enforced override always wins over a non-enforced override
      3. Instance targeted overrides always win over class targeted overrides
      4. Overrides in non-sealed management packs always win over sealed overrides
      5. A child in the hierarchy of overrides always wins over a direct parent override of the same type (instance vs class)
      6. Class overrides from contained instances always win over class overrides of the instance.
      7. Instance overrides with the higher relative depth win over those with a lower depth. Depth is calculated from the root node(s) of the containment hierarchy of which the instance in question is a part
      8. Randomly chosen

    There are methods in the SDK that will give you the resultant set of overrides. On MonitoringObject and MonitoringClass there are several overloads of GetResultantOverrides. The UI also exposes resultant overrides via the overrides UI for workflows, and eventually there will be a summary view of overrides available as well.

  • Jakub@Work

    SCOM Connector QuickStart Guide

    • 24 Comments

    Ambrose Wong put together a great quick start guide to connectors in SCOM. I've attached the Word version below. Any feedback is appreciated and I can pass it on to Ambrose.

  • Jakub@Work

    Notification Subscriptions

    • 45 Comments

    There have been a few requests recently for more granular notification subscriptions. While this is fully supported, our UI is limited in what it exposes for users to tweak. Here is a look at the SDK and how to use it to create a subscription (and related objects).

    First of all, to use subscriptions, you need to set up some auxiliary objects. In my example below, the first thing I create is an endpoint. We support Sip, Sms and Smtp. The endpoint is stored as a ModuleType in the database, although this is abstracted away by the API. The fact that it's a module type, however, means that the name has to follow the naming restrictions for management pack elements. After the endpoint, I create an action (also a ModuleType). The action combines an endpoint with configuration that allows the format of the notification to be specified; in the Smtp case, this allows you to specify email properties. Finally, I create a recipient that the subscription will be targeted to.

    The actual subscription involves combining all the aforementioned components and specifying the criteria by which to filter notifications. In SCOM 2007, notifications are based on alerts. You configure which alerts you want to trigger notifications by using the AlertNotChangedSubscriptionConfiguration and the AlertChangedSubscriptionConfiguration classes. These are also used for connector framework subscriptions to mark alerts for forwarding. These two classes represent criteria by which to match alerts. The former matches alerts that have not changed and match the criteria, while the latter matches alerts that have changed and match the criteria. Both work off a polling interval. If you look at the classes in the SDK, you will notice that you can specify groups and classes to filter by, but what I really wanted to outline here is the Criteria property, as this is what is not fully exposed by the UI. The criteria has to match the AlertCriteriaType schema defined in the Microsoft.SystemCenter.Library management pack. Note that you don't need the Expression tag; that is filled in for you. In terms of which columns you can query in the criteria, it is anything defined on the Alert table in the database.

    EDIT: Stefan Koell has put together a great PowerShell example to accomplish the same task.

    Here's the code:

    (Note - If the criteria is not all on one line, it doesn't work correctly, that's why the formatting is a bit weird below. If you select the code, the full criteria should select)

    using System;
    using Microsoft.EnterpriseManagement;
    using Microsoft.EnterpriseManagement.Administration;
    using Microsoft.EnterpriseManagement.Configuration;
    using Microsoft.EnterpriseManagement.ConnectorFramework;
    using Microsoft.EnterpriseManagement.Monitoring;

    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void InsertSubscription()
            {
                // Connect to the sdk service on the local machine
                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

                // Setup an Smtp Endpoint
                SmtpServer smtpServer = new SmtpServer("localhost");
                smtpServer.PortNumber = 42;
                smtpServer.AuthenticationType =
                    SmtpNotificationAuthenticationProtocol.Anonymous;

                // The guid is here for a unique name so this can be rerun
                SmtpNotificationEndpoint smtpEndpoint =
                    new SmtpNotificationEndpoint(
                    "SmtpEndpoint_" + Guid.NewGuid().ToString().Replace('-', '_'),
                    "smtp",
                    smtpServer);
                smtpEndpoint.DisplayName = "My Smtp Endpoint";
                smtpEndpoint.Description = "This is my Smtp Endpoint";
                smtpEndpoint.MaxPrimaryRecipientsPerMail = 10;
                smtpEndpoint.PrimaryServerSwitchBackIntervalSeconds = 15;

                // This commits the endpoint into the system
                localManagementGroup.InsertNotificationEndpoint(smtpEndpoint);

                // Setup the Smtp Action (this includes email format)
                SmtpNotificationAction smtpNotificationAction =
                    new SmtpNotificationAction(
                    "SmtpAction" + Guid.NewGuid().ToString().Replace('-', '_'));

                smtpNotificationAction.Body = "Body";
                smtpNotificationAction.Description = "Description";
                smtpNotificationAction.DisplayName = "DisplayName";
                smtpNotificationAction.Endpoint = smtpEndpoint;
                smtpNotificationAction.From = "Test@Test.com";
                smtpNotificationAction.Headers.Add(
                    new SmtpNotificationActionHeader("Name", "Value"));
                smtpNotificationAction.IsBodyHtml = false;
                smtpNotificationAction.ReplyTo = "replyto@test.com";
                smtpNotificationAction.Subject = "my subject";

                // This commits the action into the system
                localManagementGroup.InsertNotificationAction(smtpNotificationAction);

                // Setup a recipient
                NotificationRecipientScheduleEntry scheduleEntry =
                    new NotificationRecipientScheduleEntry();
                scheduleEntry.ScheduleEntryType =
                    NotificationRecipientScheduleEntryType.Inclusion;
                scheduleEntry.ScheduledDays =
                    NotificationRecipientScheduleEntryDaysOfWeek.Weekdays;
                scheduleEntry.DailyStartTime =
                    new NotificationRecipientScheduleEntryTime(8, 0);
                scheduleEntry.DailyEndTime =
                    new NotificationRecipientScheduleEntryTime(17, 0);

                NotificationRecipientDevice recipientDevice =
                    new NotificationRecipientDevice("smtp", "test@test.com");
                recipientDevice.Name = "TestDevice";
                recipientDevice.ScheduleEntries.Add(scheduleEntry);

                NotificationRecipient recipient =
                    new NotificationRecipient("RecipientName" + DateTime.Now.ToString());
                recipient.Devices.Add(recipientDevice);
                recipient.ScheduleEntries.Add(scheduleEntry);

                // Commits the recipient
                localManagementGroup.InsertNotificationRecipient(recipient);

                // Alert configuration
                AlertNotChangedSubscriptionConfiguration config =
                    new AlertNotChangedSubscriptionConfiguration(
                    AlertSubscriptionConfigurationType.Any);
                config.Criteria = "<SimpleExpression><ValueExpression><Property>ResolutionState</Property></ValueExpression><Operator>Equal</Operator><ValueExpression><Value>255</Value></ValueExpression></SimpleExpression>";
                config.ExpirationStartTime = DateTime.Now;
                config.PollingIntervalMinutes = 1;

                // Subscription
                AlertNotificationSubscription alertChangeSubscription =
                    new AlertNotificationSubscription(
                    "MyNewAlertChangeSubscription" + Guid.NewGuid().ToString().Replace('-', '_'),
                    config);
                alertChangeSubscription.DisplayName = "My Subscription";
                alertChangeSubscription.Description = "My Subscription Description";
                alertChangeSubscription.ToRecipients.Add(recipient);
                alertChangeSubscription.Actions.Add(smtpNotificationAction);

                // Commits the subscription
                localManagementGroup.InsertNotificationSubscription(alertChangeSubscription);
            }
        }
    }

     

  • Jakub@Work

    Sample Alert and State Change Insertion

    • 64 Comments

     Update: I have updated the Management Pack to work with the final RTM bits.

     First, a disclaimer. Not everything I write here works on the Beta 2 bits that are currently out. I had to fix a few bugs in order to get all these samples working, so only the most recent builds will fully support the sample management pack. I will, however, provide at the end of the post a list of the things that don't work =).

    I've attached to the post a sample management pack that should import successfully on Beta 2; please let me know if it doesn't and what errors you get. This management pack is for sample purposes only. We will be shipping, either as part of the product or as a web-download, a sealed SDK/MCF management pack that will help with alert and state change insertion programmatically and will support all the things I am demonstrating here.

    What I would like to do, is go through this management pack and talk about how each component works, and then include some sample code at the end that goes over how to drive the management pack from SDK code.

    The first thing you will notice in the management pack is a ConditionDetectionModuleType named System.Connectors.GenericAlertMapper. What this module type does is take as input any data type and output the proper data type for alert insertion into the database (System.Health.AlertUpdateData). This module type is marked as internal, meaning it cannot be referenced outside of this management pack, and simply provides some glue to make the whole process work.

    Next, we have the System.Connectors.PublishAlert WriteActionModuleType which takes the data produced by the aforementioned mapper and publishes it to the database. Regardless of where other parts of a workflow are running, this module type must run on a machine and as an account that has database access. This is controlled by targeting as described in the previous post. This module type is also internal.

    Now we have our first two public WriteActionModuleTypes, System.Connectors.GenerateAlertFromSdkEvent and System.Connectors.GenerateAlertFromSdkPerformanceData. These combine the aforementioned module types into a more usable composite. They take as input System.Event.LinkedData and System.Performance.LinkedData, respectively. Note, these are the two data types that are produced by the SDK/MCF operational data insertion API. Both module types have the same configuration, allowing you to specify the various properties of an alert.

    The last of the type definitions is a simple UnitMonitorType, System.Connectors.TwoStateMonitorType. This monitor represents two states, Red and Green, which can be driven by events. You'll notice that it defines two operational state types, RedEvent and GreenEvent, which correspond to the two expression filter definitions that match on the $Config/RedEventId$ and $Config/GreenEventId$ to drive state. What this monitor type essentially defines, is that if a "Red" event comes in, the state of the monitor is red, and vice-versa for a "Green" event. It also allows you to configure the event id for these events.

    Now we move to the part of the management pack where we use all these defined module types.

    First, let's look at System.Connectors.Test.AlertOnThreshold and System.Connectors.Test.AlertOnEvent. Both these rules use the generic performance data and event data sources mentioned in an earlier post. These data sources produce performance data and events for any monitoring object they were inserted against, and as such, you'll notice both rules are targeted to Microsoft.SystemCenter.RootManagementServer; only a single instance of each rule will be running. The nice thing about this is that you can generate alerts for thousands of different instances with a single workflow, assuming your criteria for the alert is the same. Which brings me to the second part of the rule, the expression filter. Each rule has its own expression filter module that matches the data coming in against a particular threshold or event number. Lastly, each includes the appropriate write action to actually generate the alert, using parameter replacement to populate the name and description of the alert.

    The other two rules, System.Connectors.Test.AlertOnThresholdForComputer and System.Connectors.Test.AlertOnEventForComputer, are similar, only they use the targeted SDK data source modules and as such are targeted at System.Computer. It is important to note that targeting at computers will only work on computers that have database access and are running under an account that has database access. I used this as an example because it didn't require me to discover any new objects; plus, I had a single machine install where the only System.Computer was the root management server. The key difference between these two rules and the previous rules is that there will be a new instance of this rule running for every System.Computer object. So you can imagine, if you created a rule like this and targeted it at a custom type you had defined, for which you discovered hundreds or thousands of instances, you would run into performance issues. From a pure modeling perspective, this is the "correct" way to do it, since logically you would like to target your workflows to your type; practically, however, it's better to use the previous type of rule to ensure better performance.

    The last object in the sample is System.Connectors.Test.Monitor. This monitor is an instance of the monitor type we defined earlier. It maps the GreenEvent operational state of the monitor type to the Success health state and the RedEvent to the Error health state. It defines via configuration that events with id 1 will make the monitor go red and events with id 2 will make it go back to green. It also defines that an alert should be generated when the state goes to Error, and that the alert should be auto-resolved when the state goes back to Success. Lastly, you'll notice the alert definition here actually uses the AlertMessage paradigm for the alert name and description. This allows for fully localized alert names and descriptions.

    This monitor uses the targeted data source and thus will create an instance of this monitor per discovered object. We are working on a similar solution to the generic alert processing rules for monitors, and it will be available in RTM; it's just not available yet.

    Now, what doesn't work? Well, everything that uses events should work fine. For performance data, the targeted versions of workflows won't work, but the generic non-targeted ones will. Also, any string fields in the performance data item are truncated by 4 bytes, yay marshalling. Like I said earlier, these issues have been resolved in the latest builds.  

    Here is some sample code to drive the example management pack:

    using System;
    using System.Collections.ObjectModel;
    using Microsoft.EnterpriseManagement;
    using Microsoft.EnterpriseManagement.Configuration;
    using Microsoft.EnterpriseManagement.Monitoring;

    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void DriveSystemConnectorLibraryTestManagementPack()
            {
                // Connect to the sdk service on the local machine
                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

                // Get the MonitoringClass representing a Computer
                MonitoringClass computerClass =
                    localManagementGroup.GetMonitoringClass(SystemMonitoringClass.Computer);

                // Use the class to retrieve partial monitoring objects
                ReadOnlyCollection<PartialMonitoringObject> computerObjects =
                    localManagementGroup.GetPartialMonitoringObjects(computerClass);

                // Loop through each computer
                foreach (PartialMonitoringObject computer in computerObjects)
                {
                    // Create the perf item (this will generate alerts from
                    // System.Connectors.Test.AlertOnThreshold and
                    // System.Connectors.Test.AlertOnThresholdForComputer )
                    CustomMonitoringPerformanceData perfData =
                        new CustomMonitoringPerformanceData("MyObject", "MyCounter", 40);
                    // Allows you to set the instance name of the item.
                    perfData.InstanceName = computer.DisplayName;
                    // Allows you to specify a time that data was sampled.
                    perfData.TimeSampled = DateTime.UtcNow.AddDays(-1);
                    computer.InsertCustomMonitoringPerformanceData(perfData);

                    // Create a red event (this will generate alerts from
                    // System.Connectors.Test.AlertOnEvent,
                    // System.Connectors.Test.AlertOnEventForComputer and
                    // System.Connectors.Test.Monitor
                    // and make the state of the computer for this monitor go red)
                    CustomMonitoringEvent redEvent =
                        new CustomMonitoringEvent("My publisher", 1);
                    redEvent.EventData = "<Data>Some data</Data>";
                    computer.InsertCustomMonitoringEvent(redEvent);

                    // Wait for the event to be processed
                    System.Threading.Thread.Sleep(30000);

                    // Create a green event (this will resolve the alert
                    // from System.Connectors.Test.Monitor and make the state
                    // go green)
                    CustomMonitoringEvent greenEvent =
                        new CustomMonitoringEvent("My publisher", 2);
                    greenEvent.EventData = "<Data>Some data</Data>";
                    computer.InsertCustomMonitoringEvent(greenEvent);
                }
            }
        }
    }

     

  • Jakub@Work

    Inserting Operational Data

    • 52 Comments

    I received a few requests to talk about operational data insertion in SCOM 2007. I was a bit hesitant to take this on now, mostly because I haven't really covered a lot of the core concepts required to truly understand the data insertion process, but I decided that people can ask clarifying questions if they have any; getting this out there is an important step in helping customers and partners move from 2005 to 2007.

    There are two objects that are used for inserting operational data: CustomMonitoringEvent and CustomMonitoringPerformanceData. Inserting these objects is supported both via the SDK directly as well as the MCF web-service.

    This data can be inserted against any discovered object (unlike MOM 2005 where only programmatically inserted computers were supported for event/performance data insertion in the SDK) via the InsertCustomMonitoringEvent(s) and InsertCustomMonitoringPerformanceData methods on the PartialMonitoringObject class as well as the InsertMonitoringEvents and InsertMonitoringPerformanceData methods on the MCF endpoint.

    The below sample illustrates finding all computers in the management group and inserting a single performance data value against each:

    using System;
    using System.Collections.ObjectModel;
    using Microsoft.EnterpriseManagement;
    using Microsoft.EnterpriseManagement.Configuration;
    using Microsoft.EnterpriseManagement.Monitoring;

    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void InsertCustomMonitoringPerformanceData()
            {
                // Connect to the sdk service on the local machine
                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

                // Get the MonitoringClass representing a Computer
                MonitoringClass computerClass =
                    localManagementGroup.GetMonitoringClass(SystemMonitoringClass.Computer);

                // Use the class to retrieve partial monitoring objects
                ReadOnlyCollection<PartialMonitoringObject> computerObjects =
                    localManagementGroup.GetPartialMonitoringObjects(computerClass);

                // Loop through each computer
                foreach (PartialMonitoringObject computer in computerObjects)
                {
                    // Create a CustomMonitoringPerformanceData object
                    CustomMonitoringPerformanceData perfData =
                        new CustomMonitoringPerformanceData("CPU", "CPU Threshold", 21.3);
                    perfData.InstanceName = computer.Name;

                    // Insert the data
                    computer.InsertCustomMonitoringPerformanceData(perfData);
                }
            }
        }
    }

    The pattern for events is very similar; there are just more properties available on CustomMonitoringEvent.
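
    For reference, a minimal event insertion against the same computer object looks like this (the publisher name and event number are arbitrary):

    CustomMonitoringEvent anEvent =
        new CustomMonitoringEvent("My publisher", 100);
    anEvent.EventData = "<Data>Some data</Data>";
    computer.InsertCustomMonitoringEvent(anEvent);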

    In order to actually use these objects and methods, it's important to understand what happens when any of the insert calls complete. First, these objects get converted to their runtime counterpart as xml; for events this is the System.Event.LinkedData* as defined in the System.Library* management pack and for performance data this is System.Performance.LinkedData* as defined in the System.Performance.Library* management pack. These items are then inserted into the PendingSdkDataSource table in the database. Once this happens, the call succeeds and returns, even though the data has not yet been processed.

    In order to actually pick up and utilize the inserted data, I also wrote several DataSourceModuleTypes (this is a management pack level concept that describes data sources (providers in MOM 2005) that are available for use in rules and monitors) that read from the PendingSdkDataSource table in the db and process the newly inserted objects. "Normal" performance data and event rules use the system defined DataSourceModuleTypes that read the system performance counters and the event log, respectively. The data inserted via the SDK will not be processed if using these data sources. All the data sources that facilitate SDK insertion are on a fixed polling interval of 30 seconds, and wake up that often to process any new data in the database. There are two DataSourceModuleTypes available for both events and performance data, all defined in the Microsoft.SystemCenter.Library*:

    Microsoft.SystemCenter.SdkEventProvider - This data source will output a System.Event.LinkedData object for every CustomMonitoringEvent inserted via the SDK, regardless of the object it was inserted against.

    Microsoft.SystemCenter.TargetEntitySdkEventProvider - This data source will only output a System.Event.LinkedData object for CustomMonitoringEvents inserted via the SDK that were inserted against the target of the workflow that is using the DataSourceModuleType. For instance, if you create a new rule and target it to the System.Computer type and use this DataSourceModuleType as the data source of the rule, the only events that will come out of the data source will be events that were inserted against objects of the System.Computer class.

    Microsoft.SystemCenter.SdkPerformanceDataProvider - The same as the SdkEventProvider, only for System.Performance.LinkedData and CustomMonitoringPerformanceData.

    Microsoft.SystemCenter.TargetEntitySdkPerformanceDataProvider - The same as the TargetEntitySdkEventProvider, only for System.Performance.LinkedData and CustomMonitoringPerformanceData.

    So, in order to actually drive state of discovered objects, or perform other actions based on the data inserted via the SDK, you will need to write rules or monitors that use the aforementioned DataSourceModuleTypes. We do ship and install by default in the Microsoft.SystemCenter.Internal* management pack two rules that automatically collect all the data inserted via the SDK; Microsoft.SystemCenter.CollectSdkEventData* and Microsoft.SystemCenter.CollectSdkPerformanceData*. These rules are both targeted at the RootManagementServer* class and will only be instantiated, as such, on the Principal Management Server.
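
    As a rough sketch, a custom collection rule using the targeted event data source might look like this (the rule ID is made up, "SC" and "Windows" are assumed aliases to the SystemCenter and Windows libraries, and I'm pairing it with the standard CollectEvent write action):

    <Rule ID="MyMP.CollectSdkEventsForComputers" Target="Windows!Microsoft.Windows.Computer" Enabled="true">
     <Category>EventCollection</Category>
     <DataSources>
      <DataSource ID="DS" TypeID="SC!Microsoft.SystemCenter.TargetEntitySdkEventProvider" />
     </DataSources>
     <WriteActions>
      <WriteAction ID="WA" TypeID="SC!Microsoft.SystemCenter.CollectEvent" />
     </WriteActions>
    </Rule>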

    One very important thing to note: if you write rules or monitors that use the sdk data sources, they must be executed on a server that has database access AND the account the rule or monitor is running under must have the required database permissions. In general, a rule or monitor is executed on the machine that discovered the object of a particular class; i.e. if you discover an instance of SQL Server on computer A and computer A has an agent on it, all rules and monitors targeted to SQL Server will be run on that particular SQL Server object's behalf on the agent on computer A. That's a bit of a mouthful, but I hope it makes sense.

    * These names are subject to change prior to RTM.

  • Jakub@Work

    Creating and Updating Groups

    • 22 Comments

    I've received several questions over the past few days about how to create and update groups in the SDK. Below is a commented sample of how to do it.

    What InsertCustomMonitoringObjectGroup does is create a new singleton class that derives from Microsoft.SystemCenter.InstanceGroup. It also creates a rule that populates this group, which essentially defines the formula and the inclusion and exclusion lists for the group. Hopefully the sample and comments in the code below explain this in more detail.

    Note: A quick apology on the formatting of the strings for formulas below. I found that when I had extra spaces, the schema validation failed and caused some exceptions to be thrown. This way, a direct cut and paste should work.

    Edit: If you receive an ArgumentException for the references collection, this means that your default management pack already has a reference for the given management pack. When this happens, replace the alias I used with the existing alias for the management pack in the default management pack and don't add it to the newReferences collection.

    using System;
    using Microsoft.EnterpriseManagement;
    using Microsoft.EnterpriseManagement.Configuration;
    using Microsoft.EnterpriseManagement.ConnectorFramework;
    using Microsoft.EnterpriseManagement.Monitoring;

    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void InsertAndUpdatingGroups()
            {
                // Connect to the sdk service on the local machine
                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

                // Create the formula
                // There can be multiple membership rules, no need for a root tag.
                // The monitoring class is the class of instance you want in the group.
                // The Relationship class is a relationship whose source type needs to
                //      be in the base class hierarchy of the class of your group (in this
                //      case we actually create a new class using the insert method on
                //      ManagementPack and this class will derive from InstanceGroup
                //      which is the source of InstanceGroupContainsEntities) and the target
                //      class needs to be in the base class hierarchy of the
                //      aforementioned MonitoringClass.
                //  You can also have an include and exclude list of specific entities that
                //      also must match the relationship class and monitoring class criteria.
                //      So in the example below, you could only include or exclude instances
                //      that derive from Microsoft.Windows.Computer.
                //      These look like this: (if both are specified the include list is first)
                //      <IncludeList>
                //        <MonitoringObjectId>f5E5F15E-3D7D-1839-F4C6-13E36BCD982a
                //        </MonitoringObjectId>
                //      </IncludeList>
                //      <ExcludeList>
                //        <MonitoringObjectId>f5E5F15E-3D7D-1839-F4C6-13E36BCD982a
                //        </MonitoringObjectId>
                //      </ExcludeList>
                //  Prior to the include list, you can also add an expression to include
                //  instances by. The element is defined as Expression and the schema for
                //  it (as well as this entire MembershipRule schema) is defined in the
                //  Microsoft.SystemCenter.Library management pack.
                //      An example of an expression:
                //      <Expression>
                //          <SimpleExpression>
                //           <ValueExpression>
                //             <Property>
                //              $MPElement[Name="Microsoft.SystemCenter.HealthService"]/IsRHS$
                //              </Property>
                //           </ValueExpression>
                //           <Operator>Equal</Operator>
                //           <ValueExpression>
                //              <Value>False</Value>
                //           </ValueExpression>
                //          </SimpleExpression>
                //      </Expression>
                //      This expression can reference properties of the class of the membership
                //      rule and in this case would include any health services that are not
                //      the root health service.
                //      Note: this example doesn't work with the rule I have below; it is simply
                //      for illustrative purposes. I would need to filter by a
                //      Microsoft.Windows.Computer property in order to use it below.

                string formula =
                    @"<MembershipRule>
                    <MonitoringClass>$MPElement[Name=""Windows!Microsoft.Windows.Computer""]$</MonitoringClass>
                    <RelationshipClass>$MPElement[Name=""InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities""]$</RelationshipClass>
                    </MembershipRule>";

                // Create the custom monitoring object group
                CustomMonitoringObjectGroup allComputersGroup =
                    new CustomMonitoringObjectGroup("Jakub.At.Work.Namespace",
                    "AllComputerGroup",
                    "Jakub@Work Sample All Computers Group",
                    formula);

                // Get the default management pack.
                ManagementPack defaultManagementPack =
                    localManagementGroup.GetManagementPacks(
                    "Microsoft.SystemCenter.OperationsManager.DefaultUser")[0];

                // Get the management packs for references
                ManagementPack windowsManagementPack = localManagementGroup.
                    GetManagementPack(SystemManagementPack.Windows);
                ManagementPack instanceGroupManagementPack = localManagementGroup.
                    GetManagementPack(SystemManagementPack.Group);
                ManagementPackReferenceCollection newReferences =
                    new ManagementPackReferenceCollection();
                newReferences.Add("Windows", windowsManagementPack);
                newReferences.Add("InstanceGroup", instanceGroupManagementPack);

                defaultManagementPack.InsertCustomMonitoringObjectGroup(allComputersGroup,
                    newReferences);

                // Get the class that represents my new group
                MonitoringClass myNewGroup = localManagementGroup.
                    GetMonitoringClasses("Jakub.At.Work.Namespace.AllComputerGroup")[0];

                // Get the discovery rule that populates this group
                // For the purposes of this sample, I know there is only 1 in the template
                MonitoringDiscovery groupPopulateDiscovery = myNewGroup.
                    GetMonitoringDiscoveries()[0];

                // This is the full configuration of the discovery of which the
                // membership rule is one part that you can configure and update
                Console.WriteLine("The discovery configuration: {0}",
                    groupPopulateDiscovery.DataSource.Configuration);

                // Update the configuration in some fashion
                string newConfiguration =
                    @"<RuleId>$MPElement$</RuleId>
                    <GroupInstanceId>$MPElement[Name=""Jakub.At.Work.Namespace.AllComputerGroup""]$</GroupInstanceId>
                    <MembershipRules>
                        <MembershipRule>
                    <MonitoringClass>$MPElement[Name=""Windows!Microsoft.Windows.Computer""]$</MonitoringClass>
                    <RelationshipClass>$MPElement[Name=""InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities""]$</RelationshipClass>
                    <ExcludeList>
                        <MonitoringObjectId>f5E5F15E-3D7D-1839-F4C6-13E36BCD982a</MonitoringObjectId>
                    </ExcludeList>
                    </MembershipRule>
                    </MembershipRules>";

                // Now we want to update the group membership for this group
                groupPopulateDiscovery.Status = ManagementPackElementStatus.PendingUpdate;
                groupPopulateDiscovery.DataSource.Configuration = newConfiguration;
                groupPopulateDiscovery.GetManagementPack().AcceptChanges();
            }
        }
    }

     

  • Jakub@Work

    Getting Started with Type Projections

    • 1 Comment

    A very powerful new concept in the SDK is the notion of Type Projections. I always describe type projections as a view over our type system, much like a SQL view over tables, with the added capability that our type projections are hierarchical. What projections allow you to do is query over and retrieve collections of related objects in our model in one go. In this post I want to provide a very simple example of how to define and retrieve a type projection, and in future posts I will dive much deeper into working with them.

    For this post, I've attached a management pack (NFL.xml) that defines the classes and type projection used in the sample code. Also, I have the code I used to populate my objects in the Reference Code section below.

    In the management pack you will notice two classes defined, NFL.Conference and NFL.Division, and a hosting relationship type, NFL.ConferenceHostsDivision, between them. My scenario is that I want to display all the conferences and divisions in the UI, so I define a type projection that puts the two classes together:

    <TypeProjection ID="NFL.Conference.All" Accessibility="Public" Type="NFL.Conference">
         <Component Alias="Division" Path="$Target/Path[Relationship='NFL.ConferenceHostsDivision']$" />
    </TypeProjection>

    The first thing you should notice in the above definition is the Type attribute on the root element; this defines the type of instance that must exist at the root, or "seed", of the projection. Next, you'll notice the projection has one component, Division, that relates to the root. The Path attribute is what defines how to find the desired component, relative to the seed. I will describe the path notation in more depth in future posts, but here Target refers to the seed element of the projection (an instance of NFL.Conference), followed by a Path construct that tells the system to follow the NFL.ConferenceHostsDivision relationship type. This will result in a structure that looks like this:

    * NFL.Conference
      * NFL.Division

    Now, in order to work with this, the SDK has introduced a new object type, EnterpriseManagementObjectProjection, along with an interface, IComposableProjection, to allow retrieval and navigation of the returned structure. The following code shows a basic example of retrieving and iterating through the objects:

                // Connect to the management group
                EnterpriseManagementGroup managementGroup =
                    new EnterpriseManagementGroup("localhost");
    
                // Get the management pack
                ManagementPack nflManagementPack = 
                    managementGroup.ManagementPacks.GetManagementPack("NFL", null, new Version("1.0.0.0"));
    
                // Get the type projection
                ManagementPackTypeProjection nflProjection = 
                    nflManagementPack.GetTypeProjection("NFL.Conference.All");
    
                // Get the relationship type
                ManagementPackRelationship nflConferenceContainsDivision = 
                    nflManagementPack.GetRelationship("NFL.ConferenceHostsDivision");
    
                IObjectProjectionReader<EnterpriseManagementObject> objectProjections =
                    managementGroup.Instances.GetObjectProjectionReader<EnterpriseManagementObject>(
                        new ObjectProjectionCriteria(nflProjection), ObjectQueryOptions.Default);
    
                foreach (EnterpriseManagementObjectProjection projection in objectProjections)
                {
                    Console.WriteLine("Conference {0}", projection.Object.DisplayName);
                    foreach (IComposableProjection division in projection[nflConferenceContainsDivision.Target])
                    {
                        Console.WriteLine("Division {0}", division.Object.DisplayName);
                    }
                }

    The sample first connects to the server, then retrieves the type projection definition (needed to query the SDK) and the relationship type definition (needed to navigate the resulting instances). Next, we get an IObjectProjectionReader<EnterpriseManagementObject> that will contain all instances of our type projection. Finally, the example shows how to iterate through the objects and get data.

    Reference Code

                // Connect to the management group
                EnterpriseManagementGroup managementGroup =
                    new EnterpriseManagementGroup("localhost");
    
                // Import the management pack
                ManagementPack managementPack = new ManagementPack("NFL.xml");
                managementGroup.ManagementPacks.ImportManagementPack(managementPack);
                managementPack = managementGroup.ManagementPacks.GetManagementPack(managementPack.Id);
    
                // Populate Data
                IncrementalDiscoveryData dataTransaction = new IncrementalDiscoveryData();
    
                // Get System.Entity class
                ManagementPackClass systemEntity = managementGroup.EntityTypes.GetClass(SystemClass.Entity);
    
                // Conferences
                ManagementPackClass conference = managementPack.GetClass("NFL.Conference");
                CreatableEnterpriseManagementObject nfc = new CreatableEnterpriseManagementObject(managementGroup, conference);
                nfc[conference, "Name"].Value = "NFC";
                nfc[systemEntity, "DisplayName"].Value = "National Football Conference";
                dataTransaction.Add(nfc);
    
                CreatableEnterpriseManagementObject afc = new CreatableEnterpriseManagementObject(managementGroup, conference);
                afc[conference, "Name"].Value = "AFC";
                afc[systemEntity, "DisplayName"].Value = "American Football Conference";
                dataTransaction.Add(afc);
    
                // Divisions
                ManagementPackClass division = managementPack.GetClass("NFL.Division");
                string[] nfcDivisionNames = new string[] {
                    "NFC East",
                    "NFC North",
                    "NFC South",
                    "NFC West"
                };
                
                foreach (string nfcDivisionName in nfcDivisionNames)
                {
                    CreatableEnterpriseManagementObject nfcDivision = new CreatableEnterpriseManagementObject(managementGroup, division);
                    
                    // Need to make sure to set key values of the host
                    nfcDivision[conference, "Name"].Value = "NFC";
                    
                    nfcDivision[division, "Name"].Value = nfcDivisionName;
                    nfcDivision[systemEntity, "DisplayName"].Value = nfcDivisionName;
                    dataTransaction.Add(nfcDivision);
                }
    
                string[] afcDivisionNames = new string[] {
                    "AFC East",
                    "AFC North",
                    "AFC South",
                    "AFC West"
                };
    
                foreach (string afcDivisionName in afcDivisionNames)
                {
                    CreatableEnterpriseManagementObject afcDivision = new CreatableEnterpriseManagementObject(managementGroup, division);
    
                    // Need to make sure to set key values of the host
                    afcDivision[conference, "Name"].Value = "AFC";
    
                    afcDivision[division, "Name"].Value = afcDivisionName;
                    afcDivision[systemEntity, "DisplayName"].Value = afcDivisionName;
                    dataTransaction.Add(afcDivision);
                }
    
                // Use Overwrite instead of Commit here because I want to bypass optimistic concurrency checks, which would not allow the 
                // insertion of existing instances, and I want this method to be re-runnable
                dataTransaction.Overwrite(managementGroup);
  • Jakub@Work

    Improvements in SDK from Operations Manager 2007

    • 9 Comments

    I wanted to provide a high level overview of feature additions to the SDK since Operations Manager 2007 shipped. The items are in no particular order and will be discussed in more depth in subsequent posts.

    Default Values for Properties

    Types can define default values on properties. The default values need to be literals and are implemented using the default value column construct in SQL.

    Required Properties

    Types can now define non-key required properties. Values for these properties will always be required during insertion unless the property has a default value, in which case that value will be used.
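    To make these two features concrete, here is a sketch of what a class definition using them might look like. The class and property names are hypothetical, and the DefaultValue and Required attribute names are my assumption of how the schema expresses these features:

          <ClassType ID="Sample.Ticket" Accessibility="Public" Abstract="false" Base="System!System.Entity" Hosted="false">
            <Property ID="Title" Type="string" Key="false" Required="true" />
            <Property ID="Priority" Type="int" Key="false" DefaultValue="2" />
          </ClassType>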

    Auto-Increment Properties

    Integer and string properties can be defined as auto-increment in a management pack. When instances of types that contain auto-increment properties are created in the SDK client, the client retrieves the next available value from the database and removes it from future use (whether or not the instance is committed). This value is then embedded in the property using a standard .Net format string with {0} replaced by the value from the database. As an example, this can be used to create unique, readable ticket numbers: “SR_{0}”.
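    As a sketch, a ticket class might declare an auto-increment Id like the following, yielding values such as SR_1, SR_2 and so on as instances are created; the AutoIncrement and DefaultValue attribute names are my assumption:

          <Property ID="Id" Type="string" Key="true" AutoIncrement="true" DefaultValue="SR_{0}" />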

    Binary Properties

    Binary properties have been added and are written and returned as streams. The stream is retrieved on demand and will not be brought back as part of normal instance retrieval.
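    Based on that description, reading such a property might look roughly like the following. The NFL.Team class and its Logo binary property are hypothetical, and I am assuming the property value surfaces as a System.IO.Stream:

                // Hypothetical sketch; requires using System.IO;
                // 'team' is an EnterpriseManagementObject of a class with a binary "Logo" property
                ManagementPackClass teamClass = nflManagementPack.GetClass("NFL.Team");
                using (Stream logo = (Stream)team[teamClass, "Logo"].Value) // retrieved on demand
                using (FileStream file = File.Create(@"C:\temp\logo.png"))
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = logo.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        file.Write(buffer, 0, read);
                    }
                }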

    Enumeration Properties

    Management packs can define enumerations that can be used as property values in the instance space. These enumerations are hierarchical and extensible via other management packs.
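    For illustration, a hierarchical enumeration might be declared like this; the IDs are made up and the element names reflect my assumption of the schema:

          <EnumerationTypes>
            <EnumerationValue ID="Sample.TicketStatus" Accessibility="Public" />
            <EnumerationValue ID="Sample.TicketStatus.Active" Parent="Sample.TicketStatus" Accessibility="Public" />
            <EnumerationValue ID="Sample.TicketStatus.Closed" Parent="Sample.TicketStatus" Accessibility="Public" />
          </EnumerationTypes>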

    Categories in Management Packs

    Categories allow the association of an enumeration (as defined above) and a management pack element. An element can have an unlimited number of associated enumerations that act to categorize the element for various uses.

    Type Extensions

    Type extensions are an MP schema change that allows a derived type to be defined as an extension of its base type. The design is such that a user can add a property to a type, for all instances of that type. Thus, when someone imports a type extension, the system adds the extension type to all existing instances of the base type, and all new instances will automatically be instances of the type extension class, regardless of the data contained in the discovery packet that discovered them. Type extensions can be explicitly added or updated on an instance, but cannot be removed without removing the entire instance.
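    A sketch of what an extension declaration might look like; the Extension attribute name is my assumption and the IDs are made up:

          <ClassType ID="Sample.ComputerExtension" Accessibility="Public" Abstract="false"
                     Base="Windows!Microsoft.Windows.Computer" Extension="true">
            <Property ID="AssetTag" Type="string" Key="false" />
          </ClassType>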

    Relationship Endpoints

    Relationships now have sub-elements that represent the source and target endpoints. These endpoints can define cardinality values of 0, 1 and MaxInt. If the max and min cardinality are both 1, the constraint is enforced by the infrastructure; otherwise it is used as a hint to the UI and other consumers.
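    For example, a reference relationship whose target endpoint allows at most one instance might be declared like this; the IDs are made up and the cardinality attribute names are my assumption:

          <RelationshipType ID="Sample.TicketAssignedToUser" Accessibility="Public" Base="System!System.Reference">
            <Source ID="Ticket" Type="Sample.Ticket" MinCardinality="0" MaxCardinality="2147483647" />
            <Target ID="AssignedUser" Type="System!System.User" MinCardinality="0" MaxCardinality="1" />
          </RelationshipType>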

    Instance History

    The CMDB stores property and relationship changes for instances. They can be retrieved per instance from the SDK and are used internally by other components that rely on instance space changes.

    Instance Change Subscriptions

    Similar to the alert change subscriptions in OM, we now have an instance space change subscription data source that allows you to trigger other actions based on an instance space change. We use this to trigger Windows Workflow Foundation (WF) workflows.

    Projections

    A new concept in the type space, type projections allow you to define a hierarchical view over the type space that gets materialized on retrieval by the SDK client. This allows retrieval of logical instances that are composed of many instances and the relationships that join them together. The mechanism is used as the backing instance space definition for all forms and views in Service Manager; that is, a view or form is bound to a particular projection.

    The SDK also has a lot of functionality built in to support this concept, including advanced query support over these definitions. For instance, you can define a projection that is an incident with its assigned-to user and then query for all instances of this projection where the assigned-to user is some particular user.

    Object Templates

    Object templates are essentially saved instances and projections defined in a management pack that can be used to create new instances or be applied to existing instances and set their values to those found in the template.

    Type Indexes

    You can now define indexes on types in a management pack to help improve performance.

    Criteria Improvements

    Criteria has changed (we still support the old format as well) to be XML-based, which removes ambiguity problems we had with the old model and consolidates it with group calculation and subscription criteria. We’ve also enhanced criteria to support projection queries and containment queries, and added an IN clause predicate. The new features are only available with the new XML format.

    Full-Text Queries

    Full-Text is supported for string and binary properties, with significant limitations in how the full-text query syntax can be leveraged within other criteria elements. Currently this is only being used for knowledge search within some limited scenarios.

    Partial Instances

    Users of the SDK can now specify which properties they want to get back when they retrieve instances. This functionality applies to all mechanisms of retrieving instances and is recommended to help with performance issues when dealing with large data sets.
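    A minimal sketch of what this might look like, assuming the ObjectQueryOptions members shown here (ObjectPropertyRetrievalBehavior, AddPropertyToRetrieve) and the GetObjectReader method exist as I describe them; managementGroup and nflManagementPack are as in my earlier projection samples:

                // Assumption: start with no properties and opt in to just the ones we need
                ObjectQueryOptions options = new ObjectQueryOptions(ObjectPropertyRetrievalBehavior.None);
                ManagementPackClass conferenceClass = nflManagementPack.GetClass("NFL.Conference");
                options.AddPropertyToRetrieve(conferenceClass.PropertyCollection["Name"]);

                // Only the Name property is materialized on the returned instances
                IObjectReader<EnterpriseManagementObject> conferences =
                    managementGroup.Instances.GetObjectReader<EnterpriseManagementObject>(conferenceClass, options);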

    Paging and Sorting

    The SDK supports retrieving instances in a paged mode that returns a buffer's worth of data on each remote call. This functionality should be combined with sorting to allow for UI designs that yield better performance.
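    As a sketch, and assuming the ObjectRetrievalMode and MaxResultCount members exist as I describe them, you might configure a query like this:

                ObjectQueryOptions options = new ObjectQueryOptions(ObjectPropertyRetrievalBehavior.All);
                options.ObjectRetrievalMode = ObjectRetrievalOptions.Buffered; // results come back a buffer at a time
                options.MaxResultCount = 500;                                  // cap the size of the result set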

    Abstract Query Support

    Abstract types are now supported within type-based queries, both for types and projections.

    Security Enhancements

    We’ve expanded the security model to support property-level write granularity, group + type scoping on read, and implied permissions based on the instance itself (i.e. the owner of an incident has some set of implied permissions on it).

    Resource Support in Management Packs

    Management packs can now have embedded resources (the file format is MSI-based) that separate large resources (reports, form binaries, etc.) from the XML manifest of the management pack.

    XPath Navigable Instance Space

    With the introduction of projections it became important to have a friendlier way to traverse a large hierarchical instance on the client. To serve that need, projection instances and regular instances are both XPath navigable.
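    Since the projection type is IXPathNavigable, you can create a navigator over it with the standard System.Xml.XPath types; the XPath you can run against it depends on the node naming scheme, which I'm not covering here:

                // requires using System.Xml.XPath;
                // 'projection' is an EnterpriseManagementObjectProjection retrieved earlier
                XPathNavigator navigator = ((IXPathNavigable)projection).CreateNavigator();
                Console.WriteLine(navigator.Name); // inspect the root node the projection exposes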

    Optimistic Concurrency on Instances

    We now support full optimistic concurrency, per instance.

    Unification of CustomMonitoringObject and MonitoringObject

    We’ve merged the two objects for a more consistent experience when working with the instance space in the SDK.

  • Jakub@Work

    More with Type Projections

    • 4 Comments

    In my last post I introduced you to a new concept in the Service Manager infrastructure called Type Projections. Here, I want to expand on my previous example and discuss type projection structure in more depth. I've attached the management pack that I will be discussing (NFL.xml) and there is reference code at the bottom of this post for populating sample data.

    Type projection definitions allow users to create virtual object collections through any relationships found in the system. In the previous post I showed how a projection could bring hosted and host objects together in one go. The following projection expands on that example by introducing System.Containment and System.Reference relationships, as well as introducing the concepts of type constraints and relationship direction:

          <TypeProjections>
            <TypeProjection ID="NFL.Conference.All" Accessibility="Public" Type="NFL.Conference">
              <Component Alias="Division" Path="$Target/Path[Relationship='NFL.ConferenceHostsDivision']$">
                <Component Alias="AFCTeam" Path="$Target/Path[Relationship='NFL.DivisionContainsTeam' SeedRole='Source' TypeConstraint='AFC.Team']$"/>
                <Component Alias="NFCTeam" Path="$Target/Path[Relationship='NFL.DivisionContainsTeam' SeedRole='Source' TypeConstraint='NFC.Team']$">
                  <Component Alias="Coach" Path="$Target/Path[Relationship='NFL.CoachCoachesTeam' SeedRole='Target']$" />
                </Component>
              </Component>
            </TypeProjection>
          </TypeProjections>

    In addition to the Division component from the previous post, I've now introduced two components that bring in all the teams in the NFL. I've defined my NFL.Team class as abstract and created concrete classes, AFC.Team and NFC.Team. I've also created a relationship derived from System.Containment associating an NFL.Division to an NFL.Team. Of note is that this relationship ends with an abstract class as the target endpoint. While this is supported by type projection definitions, the projection I have defined actually demonstrates the ability to constrain a component to a particular class. The two components, AFCTeam and NFCTeam, share the same relationship; however, each adds a different type constraint. Given that an instance can't be an NFC.Team and an AFC.Team at the same time, these components will introduce unique sets of data.

    If I were to change the AFCTeam component to remove the type constraint, that component would return all NFL.Team instances while the NFCTeam component would return only those NFL.Team objects that are NFC.Teams as well.

    Next, I added a System.Reference relationship between NFL.Coach and NFL.Team. Notice that the coach is the source of that relationship. If I want to include all coaches for NFC.Teams, I need to indicate in the type projection definition that I want to traverse the relationship from target to source. This is done via the SeedRole attribute. The "Seed" refers to the parent component of the current component; the attribute describes which role the seed should play relative to the relationship used in the current component. In the sample, the relationship is NFL.CoachCoachesTeam and the seed (NFCTeam) is the target of that relationship and hence SeedRole='Target'.

    Traversing this structure is similar to the example in my previous post:

                // Connect to the management group
                EnterpriseManagementGroup managementGroup =
                    new EnterpriseManagementGroup("localhost");
    
                // Get the management pack
                ManagementPack nflManagementPack = 
                    managementGroup.ManagementPacks.GetManagementPack("NFL", null, new Version("1.0.0.0"));
    
                // Get the type projection
                ManagementPackTypeProjection nflProjection =
                    nflManagementPack.GetTypeProjection("NFL.Conference.All");
    
                // Get the relationship types
                ManagementPackRelationship nflConferenceContainsDivision =
                    nflManagementPack.GetRelationship("NFL.ConferenceHostsDivision");
    
                ManagementPackRelationship divisionContainTeam =
                    nflManagementPack.GetRelationship("NFL.DivisionContainsTeam");
    
                ManagementPackRelationship coachCoachesTeam =
                    nflManagementPack.GetRelationship("NFL.CoachCoachesTeam");
    
                IObjectProjectionReader<EnterpriseManagementObject> objectProjections =
                    managementGroup.Instances.GetObjectProjectionReader<EnterpriseManagementObject>(
                        new ObjectProjectionCriteria(nflProjection), ObjectQueryOptions.Default);
    
                foreach (EnterpriseManagementObjectProjection projection in objectProjections)
                {
                    Console.WriteLine("Conference: {0}", projection.Object.DisplayName);
                    foreach (IComposableProjection division in projection[nflConferenceContainsDivision.Target])
                    {
                        Console.WriteLine("\tDivision: {0}", division.Object.DisplayName);
                        foreach (IComposableProjection team in division[divisionContainTeam.Target])
                        {
                            Console.WriteLine("\t\tTeam: {0}", team.Object.DisplayName);
                            foreach (IComposableProjection coach in team[coachCoachesTeam.Source])
                            {
                                Console.WriteLine("\t\t\tCoach: {0}", coach.Object.DisplayName);
                            }
                        }
                    }
                }

    When we traverse from team to coach, we use the .Source property of the relationship which corresponds to the SeedRole description in the type projection definition. As part of traversing the projection, think of the role as where you are traversing to; in this example you are traversing from the team (the .Target property) to the coach (the .Source property).

    When traversing from divisions to teams, I am using the abstract .Target property of the NFL.DivisionContainsTeam relationship. Type constraints are not expressed in the traversal mechanism for these objects; setting aside any type constraints, the key to going from one component to another is the relationship endpoint.
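    If you do want to restrict a traversal to a particular class at runtime, IComposableProjection also exposes an indexer that takes a class constraint in addition to the relationship endpoint. A quick sketch using the sample management pack:

                ManagementPackClass nfcTeamClass = nflManagementPack.GetClass("NFC.Team");

                foreach (IComposableProjection nfcTeam in division[divisionContainTeam.Target, nfcTeamClass])
                {
                    // Only teams that are NFC.Team instances are returned here
                    Console.WriteLine("NFC Team: {0}", nfcTeam.Object.DisplayName);
                }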

    Reference Code

               // Connect to the management group
                EnterpriseManagementGroup managementGroup =
                    new EnterpriseManagementGroup("localhost");
    
                // Import the management pack
                ManagementPack managementPack = new ManagementPack("NFL.xml");
                managementGroup.ManagementPacks.ImportManagementPack(managementPack);
                managementPack = managementGroup.ManagementPacks.GetManagementPack(managementPack.Id);
    
                // Populate Data
                IncrementalDiscoveryData dataTransaction = new IncrementalDiscoveryData();
    
                // Get System.Entity class
                ManagementPackClass systemEntity = managementGroup.EntityTypes.GetClass(SystemClass.Entity);
    
                // Conferences
                ManagementPackClass conference = managementPack.GetClass("NFL.Conference");
                CreatableEnterpriseManagementObject nfc = new CreatableEnterpriseManagementObject(managementGroup, conference);
                nfc[conference, "Name"].Value = "NFC";
                nfc[systemEntity, "DisplayName"].Value = "National Football Conference";
                dataTransaction.Add(nfc);
    
                CreatableEnterpriseManagementObject afc = new CreatableEnterpriseManagementObject(managementGroup, conference);
                afc[conference, "Name"].Value = "AFC";
                afc[systemEntity, "DisplayName"].Value = "American Football Conference";
                dataTransaction.Add(afc);
    
                // Divisions
                Dictionary<string, List<string>> divisions = new Dictionary<string,List<string>>();
                divisions.Add("AFC North", new List<string>());
                divisions["AFC North"].Add("Baltimore Ravens");
                divisions["AFC North"].Add("Cincinnati Bengals");
                divisions["AFC North"].Add("Cleveland Browns");
                divisions["AFC North"].Add("Pittsburgh Steelers");
                divisions.Add("NFC North", new List<string>());
                divisions["NFC North"].Add("Chicago Bears");
                divisions["NFC North"].Add("Detroit Lions");
                divisions["NFC North"].Add("Green Bay Packers");
                divisions["NFC North"].Add("Minnesota Vikings");
                
                divisions.Add("AFC South", new List<string>());
                divisions["AFC South"].Add("Houston Texans");
                divisions["AFC South"].Add("Indianapolis Colts");
                divisions["AFC South"].Add("Jacksonville Jaguars");
                divisions["AFC South"].Add("Tennessee Titans");
                divisions.Add("NFC South", new List<string>());
                divisions["NFC South"].Add("Atlanta Falcons");
                divisions["NFC South"].Add("Carolina Panthers");
                divisions["NFC South"].Add("New Orleans Saints");
                divisions["NFC South"].Add("Tampa Bay Buccaneers");
    
                divisions.Add("AFC East", new List<string>());
                divisions["AFC East"].Add("Buffalo Bills");
                divisions["AFC East"].Add("Miami Dolphins");
                divisions["AFC East"].Add("New England Patriots");
                divisions["AFC East"].Add("New York Jets");
                divisions.Add("NFC East", new List<string>());
                divisions["NFC East"].Add("Dallas Cowboys");
                divisions["NFC East"].Add("New York Giants");
                divisions["NFC East"].Add("Philadelphia Eagles");
                divisions["NFC East"].Add("Washington Redskins");
    
                divisions.Add("AFC West", new List<string>());
                divisions["AFC West"].Add("Denver Broncos");
                divisions["AFC West"].Add("Kansas City Chiefs");
                divisions["AFC West"].Add("Oakland Raiders");
                divisions["AFC West"].Add("San Diego Chargers");
                divisions.Add("NFC West", new List<string>());
                divisions["NFC West"].Add("Arizona Cardinals");
                divisions["NFC West"].Add("San Francisco 49ers");
                divisions["NFC West"].Add("Seattle Seahawks");
                divisions["NFC West"].Add("St. Louis Rams");
    
                Dictionary<string, string> teamToCoach = new Dictionary<string, string>();
                teamToCoach.Add("Chicago Bears", "Lovie Smith");
    
                ManagementPackClass divisionClass = managementPack.GetClass("NFL.Division");
                ManagementPackClass nfcTeamClass = managementPack.GetClass("NFC.Team");
                ManagementPackClass afcTeamClass = managementPack.GetClass("AFC.Team");
                ManagementPackClass coachClass = managementPack.GetClass("NFL.Coach");
    
                ManagementPackRelationship divisionContainsTeamClass = managementPack.GetRelationship("NFL.DivisionContainsTeam");
                ManagementPackRelationship coachCoachesTeamClass = managementPack.GetRelationship("NFL.CoachCoachesTeam");
                foreach (string divisionName in divisions.Keys)
                {
                    CreatableEnterpriseManagementObject division = new CreatableEnterpriseManagementObject(managementGroup, divisionClass);
    
                    string conferenceName;
                    ManagementPackClass teamClass;
    
                    if (divisionName.Contains("AFC") == true)
                    {
                        conferenceName = "AFC";
                        teamClass = afcTeamClass;
                    }
                    else
                    {
                        conferenceName = "NFC";
                        teamClass = nfcTeamClass;
                    }
    
                    // Need to make sure to set key values of the host
                    division[conference, "Name"].Value = conferenceName;
                    division[divisionClass, "Name"].Value = divisionName;
                    division[systemEntity, "DisplayName"].Value = divisionName;
                    dataTransaction.Add(division);
                    foreach (string teamName in divisions[divisionName])
                    {
                        CreatableEnterpriseManagementObject team = new CreatableEnterpriseManagementObject(managementGroup, teamClass);
                        team[teamClass, "Name"].Value = teamName;
                        team[systemEntity, "DisplayName"].Value = teamName;
                        dataTransaction.Add(team);
    
                    CreatableEnterpriseManagementRelationshipObject divisionContainsTeam = 
                        new CreatableEnterpriseManagementRelationshipObject(managementGroup, divisionContainsTeamClass);
                    divisionContainsTeam.SetSource(division);
                    divisionContainsTeam.SetTarget(team);
                    dataTransaction.Add(divisionContainsTeam);

                    if (teamToCoach.ContainsKey(teamName) == true)
                    {
                        CreatableEnterpriseManagementObject coach = new CreatableEnterpriseManagementObject(managementGroup, coachClass);
                        coach[coachClass, "Name"].Value = teamToCoach[teamName];
                        coach[systemEntity, "DisplayName"].Value = teamToCoach[teamName];
                        dataTransaction.Add(coach);

                        CreatableEnterpriseManagementRelationshipObject coachCoachesTeam =
                            new CreatableEnterpriseManagementRelationshipObject(managementGroup, coachCoachesTeamClass);
                        coachCoachesTeam.SetSource(coach);
                        coachCoachesTeam.SetTarget(team);
                        dataTransaction.Add(coachCoachesTeam);
                    }
                }
                }

                // Use Overwrite instead of Commit here because I want to bypass optimistic concurrency checks, which would not allow the 
                // insertion of existing instances, and I want this method to be re-runnable
                dataTransaction.Overwrite(managementGroup);
  • Jakub@Work

    Getting and Working With Type Projections - Basic

    • 1 Comments

    In my last post I gave you a more in-depth look at creating type projections in Service Manager. In this post, I'd like to provide some basic examples of how you can retrieve and work with instances of type projections. I'll be using the same Management Pack and data set from the previous post.

    On the Instances interface you will find a few methods for retrieving instances of EnterpriseManagementObjectProjection. Post Beta 1, we've actually cleaned up this interface significantly so I don't want to concentrate too much on the actual method call, but rather the parameters to it and working with the result.

    In Beta 1 the method you would use looks like this:

    IList<EnterpriseManagementObjectProjection> GetObjectProjections<T>(ManagementPackTypeProjection managementPackTypeProjection, 
    ObjectProjectionCriteria criteria, ObjectQueryOptions queryOptions) where T : EnterpriseManagementObject;

    Post Beta 1, this method is replaced by:

    IObjectProjectionReader<T> GetObjectProjectionReader<T>(ObjectProjectionCriteria criteria, ObjectQueryOptions queryOptions)
                where T : EnterpriseManagementObject;

    The ManagementPackTypeProjection parameter from the first method simply gets rolled into the ObjectProjectionCriteria object, and the return type changes to support buffered reads to help with performance. IObjectProjectionReader can actually be used like an IList (although it doesn't implement IList); I'll discuss the reader more once we ship Beta 2 later this year. My examples below will use the newer code since it will be around longer; hopefully it is easily transferable to the Beta 1 API.

    The first thing you need in order to get a type projection instance is the type projection definition. This narrows down the instances you will get back to those that fit the structure of the type projection:

          // Connect to the management group
          EnterpriseManagementGroup managementGroup =
                new EnterpriseManagementGroup("localhost");
    
          // Get the management pack
          ManagementPack nflManagementPack = 
                managementGroup.ManagementPacks.GetManagementPack("NFL", null, new Version("1.0.0.0"));
    
          // Get the type projection
          ManagementPackTypeProjection nflProjection =
                nflManagementPack.GetTypeProjection("NFL.Conference.All");

    Very simply you can retrieve projections by specifying the type:

    IObjectProjectionReader<EnterpriseManagementObject> objectProjections =
          managementGroup.Instances.GetObjectProjectionReader<EnterpriseManagementObject>(
          new ObjectProjectionCriteria(nflProjection), ObjectQueryOptions.Default);

    This will retrieve all projections in the system of that structure. A projection exists if the root object of the projection exists, regardless of whether any of the components are found. If you want to limit your result set, you will need to specify criteria. One example of simple criteria you could add is matching on one of the generic properties of the root object:

                string displayNameCriteria = @"<Criteria xmlns=""http://Microsoft.EnterpriseManagement.Core.Criteria/"">
                      <Expression>
                        <SimpleExpression>
                          <ValueExpressionLeft>
                            <GenericProperty>DisplayName</GenericProperty>
                          </ValueExpressionLeft>
                          <Operator>Equal</Operator>
                          <ValueExpressionRight>
                            <Value>National Football Conference</Value>
                          </ValueExpressionRight>
                        </SimpleExpression>
                      </Expression>
                    </Criteria>";
    
                IObjectProjectionReader<EnterpriseManagementObject> objectProjectionsByDisplayName =
                     managementGroup.Instances.GetObjectProjectionReader<EnterpriseManagementObject>(
                         new ObjectProjectionCriteria(displayNameCriteria, nflProjection, managementGroup), ObjectQueryOptions.Default);

    This would limit the results to any projections whose root object has a display name matching "National Football Conference." The list of available generic properties for Service Manager is Id, Name, DisplayName and LastModified, with CreatedDate and LastModifiedBy added post Beta 1.

    It is also possible to query on type specific properties of the root object:

                string propertyCriteria = @"<Criteria xmlns=""http://Microsoft.EnterpriseManagement.Core.Criteria/"">
                      <Expression>
                        <SimpleExpression>
                          <ValueExpressionLeft>
                            <Property>$Target/Property[Type='NFL.Conference']/Name$</Property>
                          </ValueExpressionLeft>
                          <Operator>Equal</Operator>
                          <ValueExpressionRight>
                            <Value>NFC</Value>
                          </ValueExpressionRight>
                        </SimpleExpression>
                      </Expression>
                    </Criteria>";
    
                IObjectProjectionReader<EnterpriseManagementObject> objectProjectionsByProperty =
                     managementGroup.Instances.GetObjectProjectionReader<EnterpriseManagementObject>(
                         new ObjectProjectionCriteria(propertyCriteria, nflProjection, nflManagementPack, managementGroup), ObjectQueryOptions.Default);

    Criteria can get more complicated than the above examples, including specifying criteria on the individual components of the projection, but an in-depth discussion will happen in a later post.

    Once you get an EnterpriseManagementObjectProjection, you essentially have a collection of instances organized in a hierarchy as defined by the type projection. If you run the following code:

                foreach (EnterpriseManagementObjectProjection projection in objectProjections)
                {
                    Console.WriteLine("Conference: {0}", projection.Object.DisplayName);
                    foreach (IComposableProjection division in projection[nflConferenceContainsDivision.Target])
                    {
                        Console.WriteLine("\tDivision: {0}", division.Object.DisplayName);
                        foreach (IComposableProjection team in division[divisionContainTeam.Target])
                        {
                            Console.WriteLine("\t\tTeam: {0}", team.Object.DisplayName);
                            foreach (IComposableProjection coach in team[coachCoachesTeam.Source])
                            {
                                Console.WriteLine("\t\t\tCoach: {0}", coach.Object.DisplayName);
                            }
                        }
                    }
                }

    You'll get a result that shows you the structure of the projection in memory:

    Conference: American Football Conference
            Division: AFC West
                    Team: Oakland Raiders
                    Team: Kansas City Chiefs
                    Team: San Diego Chargers
                    Team: Denver Broncos
            Division: AFC East
                    Team: Buffalo Bills
                    Team: New England Patriots
                    Team: New York Jets
                    Team: Miami Dolphins
            Division: AFC South
                    Team: Houston Texans
                    Team: Jacksonville Jaguars
                    Team: Tennessee Titans
                    Team: Indianapolis Colts
            Division: AFC North
                    Team: Cleveland Browns
                    Team: Pittsburgh Steelers
                    Team: Baltimore Ravens
                    Team: Cincinnati Bengals
    Conference: National Football Conference
            Division: NFC North
                    Team: Green Bay Packers
                    Team: Chicago Bears
                            Coach: Lovie Smith
                    Team: Detroit Lions
                    Team: Minnesota Vikings
            Division: NFC South
                    Team: Carolina Panthers
                    Team: Atlanta Falcons
                    Team: Tampa Bay Buccaneers
                    Team: New Orleans Saints
            Division: NFC East
                    Team: New York Giants
                    Team: Dallas Cowboys
                    Team: Philadelphia Eagles
                    Team: Washington Redskins
            Division: NFC West
                    Team: St. Louis Rams
                    Team: Seattle Seahawks
                    Team: Arizona Cardinals
                    Team: San Francisco 49ers

    If the line is indented, it indicates a jump across a relationship. The hierarchy is organized by the relationship types that bring in each component; put another way, as you traverse from parent to child you are traversing a relationship of a particular relationship type in one direction. So, as you go from American Football Conference to AFC West, you are moving from "Conference" to "Division" on the NFL.ConferenceHostsDivision relationship type.

    Each node in an EnterpriseManagementObjectProjection implements IComposableProjection, which offers various ways of traversing the hierarchy. The object is also IEnumerable<KeyValuePair<ManagementPackRelationshipEndpoint, IComposableProjection>>, which shows that the hierarchy is organized by the relationship endpoint that brings each node in (there's a small sketch of this after the member list below). You also get a pointer to the actual object at the node:

            /// <summary>
            /// The current object in the projection.
            /// </summary>
            EnterpriseManagementObject Object
            {
                get;
            }

    The role that brought the current object into the current projection:

            /// <summary>
            /// Gets the role of this projection, relative to its parent.
            /// </summary>
            /// <value></value>
            ManagementPackRelationshipEndpoint ObjectRole
            {
                get;
            }

    The object that brought the current object into the current projection:

            /// <summary>
            /// The parent object, if any, for the current object in the projection.
            /// </summary>
            IComposableProjection ParentObject
            {
                get;
            }

    And a few indexers to aid in traversal, one of which we used in the sample above:

            /// <summary>
            /// Access all IComposableProjection elements of this IComposableProjection, optionally recursively.
            /// </summary>
            IList<IComposableProjection> this[TraversalDepth traversalDepth]
            {
                get;
            }
    
            /// <summary>
            /// Access IComposableProjection elements of this IComposableProjection, by role name.
            /// </summary>
            IList<IComposableProjection> this[string roleName]
            {
                get;
            }
    
            /// <summary>
            /// Access IComposableProjection elements of this IComposableProjection, by role name.
            /// </summary>
            IList<IComposableProjection> this[string roleName, ManagementPackClass classConstraint]
            {
                get;
            }
    
            /// <summary>
            /// Gets the IComposableProjection child of the projection by id of the contained object
            /// </summary>
            IComposableProjection this[string roleName, Guid id]
            {
                get;
            }
    
            /// <summary>
            /// Access IComposableProjection elements of this IComposableProjection, by relationship role.
            /// </summary>
            IList<IComposableProjection> this[ManagementPackRelationshipEndpoint role]
            {
                get;
            }
    
            /// <summary>
            /// Access IComposableProjection elements of this IComposableProjection, by relationship role and a class constraint.
            /// </summary>
            IList<IComposableProjection> this[ManagementPackRelationshipEndpoint role, ManagementPackClass classConstraint]
            {
                get;
            }
    
            /// <summary>
            /// Gets the IComposableProjection child of the projection by id of the contained object
            /// </summary>
            IComposableProjection this[ManagementPackRelationshipEndpoint role, Guid id]
            {
                get;
            }
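    To tie these members together, here are two small sketches: the first enumerates a node as the KeyValuePair collection described earlier, and the second uses the TraversalDepth indexer (I'm assuming TraversalDepth.Recursive, the value the SDK uses elsewhere for recursive traversals):

                // Walk the immediate children of a node without knowing the relationship types up front
                foreach (KeyValuePair<ManagementPackRelationshipEndpoint, IComposableProjection> child in projection)
                {
                    // Key is the endpoint that brought the node in; Value is the node itself
                    Console.WriteLine("{0}: {1}", child.Key, child.Value.Object.DisplayName);
                }

                // Or flatten the entire hierarchy under the projection in one shot
                foreach (IComposableProjection node in projection[TraversalDepth.Recursive])
                {
                    Console.WriteLine("{0} (brought in as {1})", node.Object.DisplayName, node.ObjectRole);
                }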

    In future posts I'll discuss more advanced concepts around working with projection instances (they're IXPathNavigable!), a deeper look at criteria, creating and editing projections, and a lot more. I hope to have a comprehensive list of topics that help introduce you to the new API concepts by the time Beta 2 ships later this year.

  • Jakub@Work

    Getting Started with the Service Manager SDK

    • 5 Comments

    The Service Manager public API is partitioned into several assemblies, all of which can be found either in the GAC after install or in the Program Files folder in "SDK Binaries". The primary assembly is Microsoft.EnterpriseManagement.Core (Core), which replaces Microsoft.EnterpriseManagement.Common from Operations Manager 2007. Core also contains a lot of the functionality from the Microsoft.EnterpriseManagement.OperationsManager (OM) assembly that shipped with Operations Manager 2007. The OM assembly still exists, but contains only Operations Manager specific functionality. Service Manager also has a couple of other assemblies with Service Manager and Data Warehouse functionality. The following list maps each Service Manager assembly to its Operations Manager 2007 equivalent.

    * Microsoft.EnterpriseManagement.Core: replaces Microsoft.EnterpriseManagement.Common and Microsoft.EnterpriseManagement.OperationsManager

    * Microsoft.EnterpriseManagement.ServiceManager (SM): no Operations Manager 2007 equivalent

    * Microsoft.EnterpriseManagement.OperationsManager: Microsoft.EnterpriseManagement.OperationsManager

    * Microsoft.EnterpriseManagement.DataWarehouse (DW): no Operations Manager 2007 equivalent

    The refactoring of the Operations Manager assemblies was done to enable a better story for sharing functionality across the products. Core is delivered by the Service Manager team and provides the fundamental services that System Center products built on top of the OM CMDB share. Core contains the majority of the instance space functionality, all of the management pack infrastructure and other common features across the products. SM contains functionality only the Service Manager product needs, while OM is for Operations Manager only. In Beta 1 Service Manager still uses the OM assembly, as we haven't completely refactored everything yet. The DW assembly contains new data warehouse specific functionality; this is functionality specific to the new Service Manager based data warehouse and not the old Operations Manager data warehouse functionality, which can still be found in the OM assembly.

    In order to get started, you need to reference at least the Core assembly. Once you do that, the following code will establish a connection to the locally installed SDK service:

    using System;
    using Microsoft.EnterpriseManagement;
    
    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void Connect()
            {
                // Connect to the local management group
                EnterpriseManagementGroup localManagementGroup = new EnterpriseManagementGroup("localhost");
            }
        }
    }

    You'll notice this part is almost identical to Operations Manager. Once the connection is established, you can begin playing around with the various services provided by EnterpriseManagementGroup. Previously, most functionality was directly on EnterpriseManagementGroup, and while it is all still there (on ManagementGroup, derived from EnterpriseManagementGroup), it is marked as obsolete as we are using a new paradigm for providing access to various features of the API. The new mechanism for accessing various parts of the API is centered around services (namely interfaces) exposed via properties on the EnterpriseManagementGroup object. You'll notice, for example, a property called Instances. This provides access to a variety of methods that deal with instance space access. A quick sample follows:

    using System;
    using Microsoft.EnterpriseManagement;
    using Microsoft.EnterpriseManagement.Common;
    
    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void GetInstance()
            {
                // Connect to the local management group
                EnterpriseManagementGroup localManagementGroup = new EnterpriseManagementGroup("localhost");
    
                // Get an instance by Id (need to replace with valid Id of an instance)
                EnterpriseManagementObject instance = 
                    localManagementGroup.Instances.GetObject<EnterpriseManagementObject>(
                        Guid.NewGuid(), ObjectQueryOptions.Default);
            }
        }
    }

    Over time I'll be talking about the various interfaces and the specific functionality they provide, and I'll be talking much more about the Instances interface, which introduces significant new functionality compared to the equivalent OM objects (MonitoringObject).

    This model of partitioning is used in all the assemblies and allows us to better group and share functionality across products. We're definitely still open to feedback on how to better partition our services; our goal is to group common functionality together such that most tasks would be done by using one or two services.

    Also, a couple things to note. First, this is Beta 1 and there will be changes moving forward. The Instances API in particular will undergo a significant facelift as we add buffered read support for instances. We are also cleaning up the interfaces and trying to remove unnecessary overloads as much as possible.

  • Jakub@Work

    More on Group Membership and Calculations

    • 22 Comments

    A coworker of mine, Joel Pothering, put together a great article on groups in SCOM. I am posting it here on his behalf:

    I wanted to post a message that gives more details on how to author group membership rules. I’ll start with a review of Jakub Oleksy’s blog entry, found here:

    http://blogs.msdn.com/jakuboleksy/archive/2006/11/15/creating-and-updating-groups.aspx

    Jakub’s example shows how you can use the SDK to create a group with a corresponding membership rule, and then update it. I’ll concentrate on the different types of membership rules (or formulas) that you can create beyond what this example shows. You should still use Jakub’s example as a guide to create your own – in short, we’ll just modify the formula variable in his example.

    Simple group membership rule

    Here’s the first membership rule in Jakub’s example:

    <MembershipRule>

      <MonitoringClass>$MPElement[Name="Windows!Microsoft.Windows.Computer"]$</MonitoringClass>

      <RelationshipClass>$MPElement[Name="InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>

    </MembershipRule>

    This is the simplest form of a group membership rule. It groups every instance (MonitoringObject) of the MonitoringClass type specified in the MonitoringClass element. What this really means is that the system will create a new instance of a relationship (MonitoringRelationshipObject) of the type specified in the RelationshipClass element for every instance of MonitoringClass. The source of the relationship is the group and the target is the MonitoringClass instance.

    Group membership rules are therefore discovery rules that “discover” relationships between a group and the instances you want it to contain. So you’re not discovering entities in your enterprise, but relationships between entities you already discovered. The target of this discovery rule is the group type itself. The data source module for the rule is Microsoft.SystemCenter.GroupPopulator, defined in the SystemCenter management pack. Let’s refer to this entire group membership rule system as simply GroupCalc.

    One other note about this simple membership rule: use this if you want to group another group, that is, create a subgroup. Simply specify the subgroup’s type in the MonitoringClass element, and its singleton instance will become a member.
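    Before moving on, it's worth seeing the discovery rule a membership rule lives in. Here is a sketch; the IDs are made up and the management pack aliases are my assumption, but the shape follows the GroupPopulator data source described above:

    <Discovery ID="Sample.MyGroup.Discovery" Enabled="true" Target="Sample.MyGroup" ConfirmDelivery="false" Remotable="true" Priority="Normal">

      <Category>Discovery</Category>

      <DiscoveryTypes />

      <DataSource ID="GroupPopulationDataSource" TypeID="SystemCenter!Microsoft.SystemCenter.GroupPopulator">

        <RuleId>$MPElement$</RuleId>

        <GroupInstanceId>$Target/Id$</GroupInstanceId>

        <MembershipRules>

          <MembershipRule>...</MembershipRule>

        </MembershipRules>

      </DataSource>

    </Discovery>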

    Using an expression to filter members

    You probably want to be more specific about what instances you want your group to contain beyond just their type. You can filter instances using an expression in the membership rule.  What we’ll do is add an Expression element after RelationshipClass to filter on computers with an IsVirtualMachine property value of ‘false’. The membership rule now becomes:

    <MembershipRule>

      <MonitoringClass>$MPElement[Name="Windows!Microsoft.Windows.Computer"]$</MonitoringClass>

      <RelationshipClass>$MPElement[Name="InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>

      <Expression>

        <SimpleExpression>

          <ValueExpression>

            <Property>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/IsVirtualMachine$</Property>

          </ValueExpression>

          <Operator>Equal</Operator>

          <ValueExpression>

            <Value>False</Value>

          </ValueExpression>

        </SimpleExpression>

      </Expression>

    </MembershipRule>

    The SimpleExpression element defines a left and right operand, and an operator, and GroupCalc evaluates this for every instance of computer. Here’s a simplified representation of this predicate:

    Computer.IsVirtualMachine = ‘False’

    The left operand is a Property element and therefore references a property of MonitoringClass. The right operand is a Value element, which can be a string or a number. In this case it is a string that represents a boolean value. GroupCalc knows that since IsVirtualMachine is a boolean property, ‘False’ should be converted to a boolean value of 0.

    You can apply logical operators on each predicate defined by a SimpleExpression. For example, let’s change the membership rule to filter based on this simplified representation:

    Computer.IsVirtualMachine = ‘False’ AND ( Computer.NetbiosDomainName LIKE ‘EASTCOAST’ OR Computer.NetbiosDomainName LIKE ‘WESTCOAST’ )

    The expression in the membership rule becomes:

      <Expression>

        <And>

          <Expression>

            <SimpleExpression>

              <ValueExpression>

                <Property>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/IsVirtualMachine$</Property>

              </ValueExpression>

              <Operator>Equal</Operator>

              <ValueExpression>

                <Value>False</Value>

              </ValueExpression>

            </SimpleExpression>

          </Expression>

          <Expression>

            <Or>

              <Expression>

                <SimpleExpression>

                  <ValueExpression>

                    <Property>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/NetbiosDomainName$</Property>

                  </ValueExpression>

                  <Operator>Like</Operator>

                  <ValueExpression>

                    <Value>EASTCOAST</Value>

                  </ValueExpression>

                </SimpleExpression>

              </Expression>

              <Expression>

                <SimpleExpression>

                  <ValueExpression>

                    <Property>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/NetbiosDomainName$</Property>

                  </ValueExpression>

                  <Operator>Like</Operator>

                  <ValueExpression>

                    <Value>WESTCOAST</Value>

                  </ValueExpression>

                </SimpleExpression>

              </Expression>

            </Or>

          </Expression>

        </And>

      </Expression>

    The left side of And has an Or expression, and the parentheses we required above are implied. The two SimpleExpressions in Or use the Like operator -- similar to T-SQL’s LIKE -- to do a case-insensitive comparison.

    To see what other operators we can use with SimpleExpression, let’s examine parts of GroupCalc’s configuration schema, Microsoft.SystemCenter.GroupPopulationSchema, located in the SystemCenter management pack. Here is the definition of SimpleExpression:

      <xsd:complexType name="SimpleCriteriaType">

        <xsd:sequence maxOccurs="1" minOccurs="0">

          <xsd:annotation>

            <xsd:documentation>Expression comparing two values.</xsd:documentation>

          </xsd:annotation>

          <xsd:element name="ValueExpression" type="ValueExpressionType"/>

          <xsd:element name="Operator" type="CriteriaCompareType"/>

          <xsd:element name="ValueExpression" type="ValueExpressionType"/>

        </xsd:sequence>

      </xsd:complexType>

    Here is the definition of CriteriaCompareType:

      <xsd:simpleType name="CriteriaCompareType">

          <xsd:restriction base="xsd:string">

              <xsd:enumeration value="Like">

                  <xsd:annotation>

                      <xsd:documentation>LIKE</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="NotLike">

                  <xsd:annotation>

                      <xsd:documentation>NOT LIKE</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="Equal">

                  <xsd:annotation>

                      <xsd:documentation>Equal to.</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="NotEqual">

                  <xsd:annotation>

                      <xsd:documentation>Not equal to.</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="Greater">

                  <xsd:annotation>

                      <xsd:documentation>Greater than.</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="Less">

                  <xsd:annotation>

                      <xsd:documentation>Less than.</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="GreaterEqual">

                  <xsd:annotation>

                      <xsd:documentation>Greater than or equal to.</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

              <xsd:enumeration value="LessEqual">

                  <xsd:annotation>

                      <xsd:documentation>Less than or equal to.</xsd:documentation>

                  </xsd:annotation>

              </xsd:enumeration>

          </xsd:restriction>

      </xsd:simpleType>

    These operators should cover everything you need to do in a SimpleExpression.

    Now take a look at ExpressionType in GroupPopulationSchema to see what other expressions we can use:

    <xsd:complexType name="ExpressionType">

      <xsd:choice>

        <xsd:element name="SimpleExpression" type="SimpleCriteriaType"/>

        <xsd:element name="UnaryExpression" type="UnaryCriteriaType"/>

        <xsd:element name="RegExExpression" type="RegExCriteriaType"/>

        <xsd:element name="Contains" type="ContainsCriteriaType"/>

        <xsd:element name="NotContains" type="ContainsCriteriaType"/>

        <xsd:element name="Contained" type="ContainedCriteriaType"/>

        <xsd:element name="NotContained" type="ContainedCriteriaType"/>

        <xsd:element name="And" type="AndType"/>

        <xsd:element name="Or" type="OrType"/>

      </xsd:choice>

    </xsd:complexType>

    We already used SimpleExpression, And, and Or. The UnaryExpression is used for testing null-ness. For example, to test whether a value for Computer.NetbiosDomainName was never discovered (i.e. it is NULL), you can use this expression:

      <Expression>

        <UnaryExpression>

          <ValueExpression>

            <Property>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/NetbiosDomainName$</Property>

          </ValueExpression>

          <Operator>IsNull</Operator>

        </UnaryExpression>

      </Expression>

    RegExExpression allows you to use regular expression syntax to test property values. This is similar to what you use in other types of rules in Operations Manager that require regular expressions. Here’s one example that tests the value of NetbiosDomainName for the pattern ‘^WEST’:

      <Expression>

          <RegExExpression>

              <ValueExpression>

                  <Property>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/NetbiosDomainName$</Property>

              </ValueExpression>

              <Operator>MatchesRegularExpression</Operator>

              <Pattern>^WEST</Pattern>

          </RegExExpression>

      </Expression>

    Containment expressions

    Contains allows you to group based on what instances MonitoringClass contains, and Contained on what instances contain MonitoringClass. These are powerful expressions that look beyond attributes of MonitoringClass and allow you to query to see how MonitoringClass is related to other instances.

    Let’s look at the SystemCenter management pack again and find an example of a group membership rule that uses Contains. The following is the membership rule for the internal group we use that contains all agent managed computers:

      <MembershipRule>

          <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.ManagedComputer"]$</MonitoringClass>

          <RelationshipClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.ComputerGroupContainsComputer"]$</RelationshipClass>

          <Expression>

              <Contains>

                  <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.Agent"]$</MonitoringClass>

              </Contains>

          </Expression>

      </MembershipRule>

    An agent managed computer is represented in the system as a ManagedComputer instance that hosts an Agent instance. Hosting is a special type of containment that requires that the host instance exist before the hosted instance can exist.

    Contains also takes an Expression element. Here’s an example from the SystemCenter management pack that groups all management servers that are not the root management server:

      <Expression>

          <Contains>

              <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.CollectionManagementServer"]$</MonitoringClass>

              <Expression>

                  <SimpleExpression>

                      <ValueExpression>

                          <Property>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.HealthService"]/IsRHS$</Property>

                      </ValueExpression>

                      <Operator>Equal</Operator>

                      <ValueExpression>

                          <Value>False</Value>

                      </ValueExpression>

                  </SimpleExpression>

              </Expression>

          </Contains>

      </Expression>

    A management server is a ManagedComputer that contains (hosts) a CollectionManagementServer, which derives from HealthService. A SimpleExpression is used to test the HealthService property IsRHS.  Notice that we now reference properties on the contained MonitoringClass, not the MonitoringClass that we are grouping.

    Any of the previously mentioned expression types can be used here, including any type of containment expression.

    Both Contains and Contained have complements, NotContains and NotContained. For example, agentless managed (or remotely managed) computers are computers that are monitored but do not host a health service (they aren't agents). Let's say we want to create a membership rule to group these. One way to do this is to group all instances of ManagedComputer that are not a member of -- NotContained by -- the agent managed computer group. Here's what the membership rule would look like:

    <MembershipRule>

      <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.ManagedComputer"]$</MonitoringClass>

      <RelationshipClass>$MPElement[Name="InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>

      <Expression>

        <NotContained>

          <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.AgentManagedComputerGroup"]$</MonitoringClass>

        </NotContained>

      </Expression>

    </MembershipRule>

    Multiple membership rules

    You can define multiple membership rules for a single group. Each MembershipRule element is independent of the others; multiple rules are used primarily to create heterogeneous groups, as in the sketch below.
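
    As a hedged sketch (the class choices are arbitrary, and the MembershipRules wrapper element is assumed from the group discovery configuration), a heterogeneous group combining computers and health services might look like this:

      <MembershipRules>
          <MembershipRule>
              <MonitoringClass>$MPElement[Name="Windows!Microsoft.Windows.Computer"]$</MonitoringClass>
              <RelationshipClass>$MPElement[Name="InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>
          </MembershipRule>
          <MembershipRule>
              <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.HealthService"]$</MonitoringClass>
              <RelationshipClass>$MPElement[Name="InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>
          </MembershipRule>
      </MembershipRules>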

    Relationship source/target types and inheritance

    It’s worth reviewing how inheritance works with GroupCalc, and how you choose the RelationshipClass for your membership rule. Let’s take another look at the membership rule from the SystemCenter management pack for grouping management servers that are not the root management server:

    <MembershipRule>

        <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.ManagedComputer"]$</MonitoringClass>

        <RelationshipClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.ComputerGroupContainsComputer"]$</RelationshipClass>

        <Expression>

            <Contains maxDepth="1">

                <MonitoringClass>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.CollectionManagementServer"]$</MonitoringClass>

                <Expression>

                    <SimpleExpression>

                        <ValueExpression>

                            <Property>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.HealthService"]/IsRHS$</Property>

                        </ValueExpression>

                        <Operator>Equal</Operator>

                        <ValueExpression>

                            <Value>False</Value>

                        </ValueExpression>

                    </SimpleExpression>

                </Expression>

            </Contains>

        </Expression>

    </MembershipRule>

    The group (not shown above) is the singleton type Microsoft.SystemCenter.CollectionManagementServerComputersGroup, which derives from Microsoft.SystemCenter.ComputerGroup, an abstract type. The type we want to group is Microsoft.SystemCenter.ManagedComputer, which derives from Microsoft.Windows.Computer. Now, we specified the relationship type Microsoft.SystemCenter.ComputerGroupContainsComputer in the RelationshipClass element, which means GroupCalc creates instances of that type of relationship to group members. What we need to be sure of is that this is a valid relationship type to use. Here’s the definition of ComputerGroupContainsComputer:

    <RelationshipType ID="Microsoft.SystemCenter.ComputerGroupContainsComputer" Abstract="false" Base="System!System.Containment">

        <Source>Microsoft.SystemCenter.ComputerGroup</Source>

        <Target>System!System.Computer</Target>

    </RelationshipType>

    Look at the Source and Target types. What these tell us is that the group has to be of a type derived from the abstract type Microsoft.SystemCenter.ComputerGroup, and the instance we’re grouping has to derive from System.Computer, another abstract type. It looks like our group matches -- CollectionManagementServerComputersGroup derives from ComputerGroup. And since Windows.Computer derives from System.Computer, and ManagedComputer derives from Windows.Computer, our grouped instances can be part of this relationship too – all is good.

    What happens if we pick the wrong relationship type? GroupCalc will reject the configuration for the rule, and you’ll see an error in the event log from the runtime. An example of an incorrect relationship type would be the one used in the first example we discussed, Microsoft.SystemCenter.InstanceGroupContainsEntities. Here’s its definition:

    <RelationshipType ID="Microsoft.SystemCenter.InstanceGroupContainsEntities" Base="System!System.Containment" Abstract="false">

      <Source>Microsoft.SystemCenter.InstanceGroup</Source>

      <Target>System!System.Entity</Target>

    </RelationshipType>

    The Source is Microsoft.SystemCenter.InstanceGroup which is a singleton group type that derives directly from the abstract type System.Group – this doesn’t look good. The Target type is the base type of everything, so we’re definitely okay there. So it’s the Source type that makes InstanceGroupContainsEntities invalid with the management server membership rule, because CollectionManagementServerComputersGroup doesn’t derive from InstanceGroup.

    The reason you have to be careful is that our verification process for importing management packs will not catch this. We don’t have that level of knowledge, or semantic checks, available in our verification process to help us here. Only GroupCalc has this check in its verification process.

    Referenced properties

    I have one more note on the membership rule for Microsoft.SystemCenter.CollectionManagementServerComputersGroup. Notice that we are grouping CollectionManagementServer instances, as specified in the MonitoringClass element of the Contains expression. In the expression, we reference a property on HealthService, which is a base type of CollectionManagementServer. We do this because you need to specify the type that defines the property. So you cannot do this:

    <ValueExpression>

         <Property>$MPElement[Name="SCLibrary!Microsoft.SystemCenter.CollectionManagementServer"]/IsRHS$</Property> <!-- WRONG! -->

    </ValueExpression>

    This will produce an error from our management pack verification process since there is no such property, which is good – we’ll know right away, before we can even import, that this is incorrect.

    What we won’t know right away, and what you need to be careful of, are property references that can pass management pack verification but still can be rejected by GroupCalc. For example, if we were to put in a Windows.Computer property there, it would pass management pack verification – the property is valid after all. But our GroupCalc schema implies that the property referenced must be from the type you’re grouping, or from any of its base types. Again, only GroupCalc has this knowledge, and you’ll get a runtime error logged after GroupCalc rejects the configuration.

    Conclusion

    GroupCalc can help you in most of your scenarios that involve grouping instances. I hope you get a chance to investigate GroupCalc more thoroughly. If you do and have questions, please don’t hesitate to post these questions to the newsgroup – we’ll be sure to help.

    Thanks - Joel

  • Jakub@Work

    Inserting Discovery Data

    • 28 Comments

    We've gone over how to drive state and insert operational data for existing entities, but how do you insert your own objects into the system? That's what this post will briefly touch on, as well as provide sample code (below) and a management pack (attached) to use with the code.

    Discovery data insertion via the SDK revolves around connectors as discovery sources. In order to insert data, you first need to create a connector with the system that all the data you insert will be associated with. This allows us to control the lifetime of the discovery data as a function of the lifetime of the connector.

    Once the connector is set up, you can use one of two modes for insertion: Snapshot or Incremental. Snapshot discovery indicates to the system that, for this particular connector (read: discovery source), this is the definitive snapshot of everything it has discovered. It will essentially delete anything that was previously discovered and treat this snapshot as authoritative. Incremental, as the name would indicate, simply merges the existing discovery data with the discovery information provided in the incremental update. This can include additions as well as deletions.

    Users can insert CustomMonitoringObjects and CustomMonitoringRelationshipObjects which, once inserted, map to MonitoringObjects and MonitoringRelationshipObjects. In order to insert either, you have to provide, at a minimum, the key values for objects, and the source and target for relationships. When dealing with a hosting relationship, the key values of the host must also be populated as part of the CustomMonitoringObject and no explicit CustomMonitoringRelationshipObject needs to be created. The example below should guide you through this.

    A quick discussion on managed vs. unmanaged instances. Our system will only run workflows against instances that are managed. The discovery process I talked about in the last post will insert "managed" data. Top-level instances (computers, for instance) are inserted via the install agent APIs in the SDK and result in managed computers. It is also possible for rules to insert discovery data; however, this data will not be managed unless hosted by a managed instance.

    In order to be able to target workflows to your newly created instances and have them actually run, DiscoveryDataIsManaged needs to be set to true on the ConnectorInfo object when creating the connector. Alternatively, if you insert an instance as hosted by a managed instance, that instance will also be managed. In the former case, all workflows would run on the primary management server, while in the latter they would all run on the health service that manages the host. If something is not managed, you can still insert events and performance data about it, although the workflow that collects these will need to be targeted against something other than the class of the instance. State change information would not be available for non-managed instances.
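
    Before the full sample, here is a minimal hedged sketch of creating a connector whose discovery data is managed. The DiscoveryDataIsManaged property name is taken from the description above, the connector name is arbitrary, and the snippet assumes the same ConnectorFrameworkAdministration object the sample below creates:

    // Sketch only: a connector whose inserted top-level instances will be
    // managed, so workflows targeted at them run on the primary management server.
    ConnectorInfo managedInfo = new ConnectorInfo();
    managedInfo.Name = "ManagedDataConnector";
    managedInfo.DisplayName = "Connector whose discovery data is managed";
    managedInfo.DiscoveryDataIsManaged = true;
    MonitoringConnector managedConnector = connectorFrameworkAdministration.Setup(managedInfo);

    The full sample code follows: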

    using System;

    using Microsoft.EnterpriseManagement;

    using Microsoft.EnterpriseManagement.Configuration;

    using Microsoft.EnterpriseManagement.ConnectorFramework;

    using Microsoft.EnterpriseManagement.Monitoring;

     

    namespace Jakub_WorkSamples

    {

        partial class Program

        {

            static void InsertDiscoveryData()

            {

                // Connect to the SDK service

                ManagementGroup localManagementGroup = new ManagementGroup("jakubo-test");

     

                // Get the connector framework administration object

                ConnectorFrameworkAdministration connectorFrameworkAdministration =

                    localManagementGroup.GetConnectorFrameworkAdministration();

     

                // Create a connector

                ConnectorInfo info = new ConnectorInfo();

                info.Name = "TestConnector";

                info.DisplayName = "Test connector for discovery data";

                MonitoringConnector connector = connectorFrameworkAdministration.Setup(info);

     

                // First create an instance of SampleClass1HostedByComputer and

                // SampleClass2HostedBySampleClass1

                // Find a computer

                MonitoringObject computer = localManagementGroup.GetMonitoringObjects(

                    localManagementGroup.GetMonitoringClass(

                    SystemMonitoringClass.WindowsComputer))[0];

     

                // Get the SampleClass1HostedByComputer class

                MonitoringClass sampleClass1HostedByComputer =

                    localManagementGroup.GetMonitoringClasses(

                    "SampleClass1HostedByComputer")[0];

     

                // Get the SampleClass2HostedBySampleClass1 class

                MonitoringClass sampleClass2HostedBysampleClass1 =

                    localManagementGroup.GetMonitoringClasses(

                    "SampleClass2HostedBySampleClass1")[0];

     

                // Get the key properties for each

                MonitoringClassProperty keyPropertyForSampleClass1 =

                    (MonitoringClassProperty)sampleClass1HostedByComputer.

                    PropertyCollection["KeyProperty1"];

                MonitoringClassProperty keyPropertyForSampleClass2 =

                    (MonitoringClassProperty)sampleClass2HostedBysampleClass1.

                    PropertyCollection["KeyProperty1SecondClass"];

     

                // Create the CustomMonitoringObjects to represent the new instances

                CustomMonitoringObject sampleClass1HostedByComputerInstance =

                    new CustomMonitoringObject(sampleClass1HostedByComputer);

                CustomMonitoringObject sampleClass2HostedBysampleClass1Instance =

                    new CustomMonitoringObject(sampleClass2HostedBysampleClass1);

     

                // Set the key property value for the first instance

                sampleClass1HostedByComputerInstance.SetMonitoringPropertyValue(

                    keyPropertyForSampleClass1, "MySampleInstance1");

     

                // Set the key property values for the second instance, which includes the

                // key property values of the host in order to populate the hosting relationship

                sampleClass2HostedBysampleClass1Instance.SetMonitoringPropertyValue(

                    keyPropertyForSampleClass1, "MySampleInstance1");

                sampleClass2HostedBysampleClass1Instance.SetMonitoringPropertyValue(

                    keyPropertyForSampleClass2, "MySampleInstance2");

     

                // In order to populate the hosting relationship, you need to also set

                // the key properties of the host. This will automatically create the hosting

                // relationship and is in fact the only way to create one programmatically.

                foreach (MonitoringClassProperty property in computer.GetMonitoringProperties())

                {

                    if (property.Key)

                    {

                        sampleClass1HostedByComputerInstance.SetMonitoringPropertyValue(

                            property, computer.GetMonitoringPropertyValue(property));

     

                        // Even though the relationship between

                        // sampleClass1HostedByComputerInstance and the computer is already

                        // defined, we need to add this key property to

                        // sampleClass2HostedBysampleClass1Instance as the entire hosting

                        // hierarchy is what uniquely identifies the instance. Without this,

                        // we wouldn't know "where" this instance exists and where it should be

                        // managed.

                        sampleClass2HostedBysampleClass1Instance.SetMonitoringPropertyValue(

                            property, computer.GetMonitoringPropertyValue(property));

                    }

                }

     

                // Let's insert what we have so far

                // We'll use Snapshot discovery to indicate this is a full snapshot of

                // all the discovery data this connector is aware of.

                SnapshotMonitoringDiscoveryData snapshot = new SnapshotMonitoringDiscoveryData();

                snapshot.Include(sampleClass1HostedByComputerInstance);

                snapshot.Include(sampleClass2HostedBysampleClass1Instance);

                snapshot.Commit(connector);

     

                // Let's retrieve the objects and ensure they were created

                MonitoringObject sampleClass1HostedByComputerMonitoringObject =

                    localManagementGroup.GetMonitoringObject(

                    sampleClass1HostedByComputerInstance.Id.Value);

                MonitoringObject sampleClass2HostedBySampleClass1MonitoringObject =

                    localManagementGroup.GetMonitoringObject(

                    sampleClass2HostedBysampleClass1Instance.Id.Value);

     

                // Now we create a relationship that isn't hosting

                MonitoringRelationshipClass computerContainsSampleClass2 =

                    localManagementGroup.GetMonitoringRelationshipClasses(

                    "ComputerContainsSampleClass2")[0];

     

                // Create the custom relationship object

                CustomMonitoringRelationshipObject customRelationship =

                    new CustomMonitoringRelationshipObject(computerContainsSampleClass2);

     

                // Do an incremental update to add the relationship.

                IncrementalMonitoringDiscoveryData incrementalAdd =

                    new IncrementalMonitoringDiscoveryData();

                customRelationship.SetSource(computer);

                customRelationship.SetTarget(sampleClass2HostedBySampleClass1MonitoringObject);

                incrementalAdd.Add(customRelationship);

                incrementalAdd.Commit(connector);

     

                // Make sure the relationship was inserted

                MonitoringRelationshipObject relationshipObject =

                    localManagementGroup.GetMonitoringRelationshipObject(

                    customRelationship.Id.Value);

     

                // Cleanup the connector. This should remove all the discovery data.

                connectorFrameworkAdministration.Cleanup(connector);

            }

        }

    }

     

  • Jakub@Work

    Getting Started

    • 18 Comments

    The easiest way to get going is to install the SCOM UI on the machine that you want to develop on. This will ensure all the necessary components are installed and drop the assemblies you need in order to write against the SDK. There is currently a work item being tracked to have a special installer for just the SDK and any necessary components, in order to easily facilitate developing on a non-SCOM box. Hopefully we can get this in for RTM; if not, I'll make sure to post exact instructions when that time comes.

    Programmatically, the only assemblies of interest are:

    Microsoft.EnterpriseManagement.OperationsManager.dll (The main SDK assembly)

    Microsoft.EnterpriseManagement.OperationsManager.Common.dll (A common assembly that the SDK service and client share, containing mostly exceptions)

    Although in Beta 2 there was also a dependency on EventCommon.dll and Microsoft.Mom.Common.dll, both of these dependencies have been removed for the upcoming release candidate and will not be present at RTM.

    If you want to develop and run on a machine that does not have the UI installed, you will need to:

    1. Copy over the assemblies mentioned above (both will have to be copied from the GAC) to a directory on the machine you want to develop on.
    2. Copy over Bid2ETW.dll from the SCOM install directory to the same machine.
    3. Install the correct version of Microsoft .Net Framework 3.0.
    4. In the registry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BidInterface\Loader add a new string value: “<Directory where your code will be running>\*”=”<Path to Bid file from #2>\Bid2ETW.dll”
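
    For example (both paths are hypothetical), if your code runs from C:\SdkDev and you copied Bid2ETW.dll there as well, the string value would be:

    Name:  C:\SdkDev\*
    Value: C:\SdkDev\Bid2ETW.dll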

    This should be everything you need to do to set up your dev machine. Now just link against the assemblies and you're ready to get started.

    The first thing any application needs to do in order to work with SCOM is create a new ManagementGroup object.  The sample code below creates a ManagementGroup connected to the local machine (Note: In order for this code to work, it needs to be run on the Primary Management Server.)

    using System;

    using Microsoft.EnterpriseManagement;

     

    namespace Jakub_WorkSamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

            }

        }

    }

     

    With this object, you can begin to explore the rest of the API and accomplish, hopefully, any tasks you need to. In future posts, I will talk more about connecting options and the ManagementGroupSettings class, but for now, this should get you started.

    EDIT - In recent builds (I believe RC1 included) you do not need to do steps 2 and 4 above, unless you want SCOM tracing to work.

  • Jakub@Work

    How Stuff Works

    • 15 Comments

    In reply to a comment I received, I wanted to put together a post about how things work in general, mostly with respect to the SDM as implemented in SCOM 2007.

    For those of you familiar with MOM 2005, you'll know that things were very computer centric, which might make the 2007 concepts a bit foreign to you. In trying to bring SCOM closer to a service-oriented management product, the idea that the computer is central has somewhat been removed. I say somewhat, because much of the rest of the world of management has not moved entirely off the computer being of central importance, so we still have to honor that paradigm to make integration with other products easier. One example of this compromise is that on an Alert object you will see the properties NetbiosComputerName, NetbiosDomainName and PrincipalName; these are present for easier integration with ticketing systems that are still largely computer centric.

    With this shift in thinking, we are able to model an enterprise to a much finer level of detail on any individual computer, as well as move above a single computer and aggregate services across machines in a single service-based representation. So what does this mean practically? For one, it means potentially a huge proliferation of discovered objects. Instead of just having a computer with a few roles discovered, we can go so far as to model each processor, each hard drive, every process, etc. on every box as its own object. Now, instead of seeing a computer, you can actually see a more accurate representation of the things that are important to you in your enterprise. More important, however, is that this allows you to define each object's health separately from the health of the objects that depend on it, and to define how its health affects the health of those things. For instance, just because one of the hard drives on a particular machine is full doesn't necessarily mean the health of the machine is bad. Or maybe it is, and that can be modeled as well.

    As was the case with MOM 2005, at its core SCOM 2007 is driven by management packs. Management packs define the class hierarchy that enables this form of deep discovery, and they define all the tools necessary to manage these objects.

    Let's begin with discussing classes and the class hierarchy. We define a base class that all classes must derive from called System.Entity. Every class ever shipped in any management pack will derive at its core from this class. This class is abstract, meaning that there can never be an object discovered that is just a System.Entity and nothing else. We ship with an extensive abstract class hierarchy that we are still working on tweaking, but it should allow users to plug in their classes somewhere in the hierarchy that makes sense for them. You will be able to define your own abstract hierarchies as well as your own concrete classes. Concrete classes (i.e. non-abstract) are discoverable. Once you define a class as concrete, its key properties (those that define the identity of an instance of that class) cannot be changed. For instance, if you want to specialize System.Computer, you can't define a new key property on it that would change its identity, although you are free to add as many non-key properties as you like. In our system, the values of the key properties for a discovered instance are what uniquely identify that instance. In fact, the unique identifier (Guid) that is used internally to identify these instances is actually built off the key property values. Extending this computer example, if you extend computer yourself, and someone else does as well, it is possible for a computer to be both of your classes at the same time; however, it will be identified as the same instance. You could imagine that Dell ships a management pack and some of your computers are both Dell Computers and Windows Computers, both of which would derive from the non-abstract Computer class that defines its key properties. Thus an instance that is discovered as both would always be both in every context. The reason that the class of a discovered instance is important is that targeting of all workflows relies on this, but I'll talk more about this a bit later.
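
    To make this concrete, here is a hedged sketch (the class ID, property and management pack alias are hypothetical) of what specializing a computer class with only a non-key property might look like in management pack XML:

      <ClassType ID="MyCompany.SpecialComputer" Accessibility="Public" Abstract="false"
                 Base="Windows!Microsoft.Windows.Computer" Hosted="false" Singleton="false">
          <!-- A non-key property only; the key properties are fixed by the base concrete class -->
          <Property ID="AssetTag" Type="string" Key="false" />
      </ClassType>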

    In addition to discovering individual instances, the system also allows you to define relationships between instances. The relationships also have a class hierarchy that works very similarly to the class hierarchy for individual instances. The most base class here is called System.Reference and all relationship types (classes) will derive from it. There are two abstract relationship types of great importance that I wanted to discuss here. First, there is System.Containment, which derives directly from System.Reference. While Reference defines a loose coupling of instances, Containment implies that the source object of the relationship in some way contains the target object. This is very important internally to us, as containment is used to allow things to flow across a hierarchy. For instance, in the UI, alert views that look at a particular set of instances (say Computers) will also automatically include alerts for anything that is contained on that computer. So a computer's alert view would show alerts for hard drives on that computer as well. This is an option when making direct SDK calls, but in the UI it is the method that is chosen. Even more important is the fact that security scopes flow across containment relationships. If a user is given access to a particular group, they are given access to everything contained in that group by the System.Containment relationship type (or any type that derives from it). An even more specialized version of containment that is also very important is System.Hosting. This indicates a hosting relationship exists between the source and target, where the target's identity is dependent upon its host. For instance, a SQL Server is hosted by a computer, since it would not exist outside the context of that computer. Going back to what I said in the previous paragraph about using the key properties of an instance to calculate its unique id, we actually also use the key properties of all its hosts to identify it. Taking the SQL Server as an example, I can have hundreds of Instance1 SQL Servers running in my enterprise. Are they all the same? Of course not; they are differentiated by the computer they are on. Even in the SDK, when you get MonitoringObjects back, the property values that are populated include not only the immediate properties of the instance, but also the key properties of the host(s).
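
    As a sketch (type IDs hypothetical), declaring a hosting relationship looks just like declaring any other relationship type, only with System.Hosting as the base:

      <RelationshipType ID="MyCompany.ComputerHostsMyService" Abstract="false"
                        Base="System!System.Hosting">
          <Source>Windows!Microsoft.Windows.Computer</Source>
          <Target>MyCompany.MyService</Target>
      </RelationshipType>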

    All the examples I've mentioned thus far talk about drilling down on an individual computer, but we can also build up. I can define a service as being dependent on many components that span physical boundaries. I can use the class hierarchy to create these new service types and extend the relationship type hierarchy to define the relationships between my service and its components.

    Before we talk about how all these instances get discovered, let's talk about why being an instance of a particular type is important. SCOM actually uses the class information about a discovered instance to determine what should run on its behalf. Management pack objects, such as rules, tasks and monitors, are all authored with a specific class as their target. What this actually means is that the management pack wants the specified workflow to run for every instance of that class that is discovered. If I have a rule that monitors the transaction log for SQL, I want that rule deployed and executed on every machine that has a SQL server discovered. What our configuration service does is determine what rules need to be deployed where, based on the discovered instance space and on where those particular instances are managed (and really, whether they are managed, although that is a discussion for another post). So another important attribute of instances is where they are being managed; essentially every discovered instance is managed by some agent in your enterprise, and it's usually the agent on the machine where the instance was discovered. When the agent receives configuration from the configuration service, it instantiates all the workflows necessary for all the instances that it manages. This is when all the discovery rules, rules and monitors will start running. Tasks, Diagnostics and Recoveries are a bit different in that they run on demand, but when they are triggered, they will actually flow to the agent that manages the instance that the workflow was launched against. Class targeting is important here as well, as Tasks, Diagnostics and Recoveries can only execute against instances of the class they are targeted to. It wouldn't make sense, for instance, to launch a "Restart Service" task against a hard drive.

    Discovering instances and relationships is interesting. SCOM uses a "waterfall" approach to discovery. I will use SQL to illustrate. We'll begin by assuming we have a computer discovered. We create a discovery that discovers SQL servers and we'll target it at Computer. The system will then run this discovery rule on every Computer it knows about. When it runs on a computer that in fact has SQL installed, it will publish discovery data to our server and a new instance of SQL Server will be instantiated. Next, we have a rule targeted to SQL Server that discovers individual databases on the server. Once the configuration service gets notified of the new SQL instance, it will recalculate configuration and publish new configuration, including this new discovery rule, to the machine with the SQL server. This rule will then run and publish discovery information for the databases on the server. This allows deep discovery to occur without any user intervention, except for actually starting the waterfall. The first computer needs to be discovered, either by the discovery wizard, manual agent installation or programmatically via the SDK. For the latter, we support programmatic discovery of instances using the MCF portion of the SDK. Each connector is considered a discovery source and is able to submit discovery data on its behalf. When the connector goes away, all instances that were discovered solely by that connector also go away.

    The last thing I wanted to talk about was Monitors. Monitors define the state of an instance. Monitors also come in a hierarchy to help better model the state of an instance. The base of the hierarchy is called System.Health.EntityState and it represents THE state of an instance. Whenever you see state in the UI, it is the state of this particular monitor for that instance, unless stated otherwise. This particular monitor is an AggregateMonitor that rolls up state from its child monitors. The roll up semantics for aggregates are BestOf, WorstOf and Percentage. At the end of a monitor hierarchy chain must exist either a UnitMonitor or a DependencyMonitor. A UnitMonitor defines some single state aspect of a particular instance. For example, it may be monitoring the value of a particular performance counter. The importance of this particular monitor to the overall state of the instance is expressed by the monitor hierarchy it is a part of. Dependency monitors allow the state of one instance to depend on the state of another. Essentially, a dependency monitor allows you to define the relationship that is important to the state of this instance and the particular monitor of the target instance of this relationship that should be considered. One cool thing about monitors is that their definition is inherited based on the class hierarchy. So System.Health.EntityState is actually targeted to System.Entity, and thus all instances automatically inherit this monitor and can roll up state to it. What this means practically is that if you want to specialize a class into a class of your own, you don't need to redefine the entire health model; you can simply augment it by deriving your class from the class you wish to extend and adding your own monitors to your class. You can even simply add monitors to the existing health model by targeting them anywhere in the class hierarchy that makes sense for your particular monitor.
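
    For instance, a hedged sketch (IDs and management pack aliases hypothetical) of an aggregate monitor that rolls up to System.Health.EntityState using WorstOf might look like:

      <AggregateMonitor ID="MyCompany.MyService.AvailabilityRollup" Accessibility="Public"
                        Target="MyCompany.MyService" ParentMonitorID="Health!System.Health.EntityState">
          <Category>AvailabilityHealth</Category>
          <Algorithm>WorstOf</Algorithm>
      </AggregateMonitor>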

    As always, let me know if there are any questions, anything you would like me to elaborate on or any ideas for future posts.

  • Jakub@Work

    MMS Demo - Discovery and operational data insertion via SDK

    • 12 Comments

    I put together and presented a short demonstration of the SDK at MMS on Monday. I think it went over well and gave a short glimpse of some of the power of the SDK. I wanted to make my demo available, so here it is. Let me know if there are any issues with it.

  • Jakub@Work

    More with Alert and State Change Insertion

    • 25 Comments

    Update: I have updated the management pack to work with the final RTM bits 

    Update #2: You cannot use the aggregate method described below to set the state of any instance not hosted on the Root Management Server.

    The last thing I had wanted to demonstrate about alert and state change insertion finally got resolved. This will not work in any of the public bits right now, but RC1 should be available soon and it will work there. Attached is the most recent version of the management pack to reference for this post. You'll have to clean up the references to match the proper public keys, but it should work otherwise.

    What I wanted to demonstrate was being able to define a monitor for a given class without having that monitor actually be instantiated for every instance of that class. Normally, if you define a monitor, you will get an instance of it (think a new workflow on your server) for every discovered instance of the class the monitor is targeted to. If you have thousands of instances, this can lead to significant performance issues. Now, we support the ability to define an aggregate monitor that will not be instantiated, as long as there are no monitors that roll up to it. In the sample MP attached you will find System.Connectors.SpecialAggregate, which is an example of this kind of monitor. It works like any other aggregate monitor in the system; it just doesn't actually have any other monitors from which to roll up state. So how does its state get set? That's where the additional changes come in.

    The first new addition is System.Connectors.Health.SetStateAction. This is the most basic form of a module that will be used to set the state of the aforementioned monitor. In this form, it accepts as configuration the ManagementGroupId, MonitorId (the monitor you want to set the state of), ManagedEntityId (the id of the instance you want to set the state of the monitor for) and the HealthState (the actual state you want to set the monitor to). There is a wrapper that abstracts away the need to set the ManagedEntityId and ManagementGroupId properties, called System.Connectors.Health.TargetSetStateAction, that will work with the SDK data types. There are also three further wrappers that explicitly define the state to set the monitor to, leaving only the MonitorId as configuration.
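
    Pieced together from that description, the configuration for the basic module might look roughly like this (a sketch only; the exact schema is in the attached management pack and the Guid values are placeholders):

      <Configuration>
          <ManagementGroupId>11111111-1111-1111-1111-111111111111</ManagementGroupId>
          <MonitorId>22222222-2222-2222-2222-222222222222</MonitorId>
          <ManagedEntityId>33333333-3333-3333-3333-333333333333</ManagedEntityId>
          <HealthState>Error</HealthState>
      </Configuration>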

    I have included a sample rule (System.Connectors.SpecialAggregate.Error) that will drive the state of the aggregate monitor using the same sample code I posted earlier. Note that the rule is targeted to RootManagementServer since it will process data for a variety of instances, while the aggregate monitor should be targeted at the proper class that represents the instance you want to model and drive state for.

  • Jakub@Work

    Tiering Management Groups in SCOM 2007

    • 0 Comments

    In MOM 2005 tiering was achieved using the Mom to Mom Connector which physically copied data from lower tiers to upper tiers. While this generally worked, it certainly posed problems around performance as well as keeping the items in sync. The main scenarios this was meant to solve were:

    1. A single console for viewing data from multiple tiers
    2. No connectivity requirement from console to lower tiers

    In SCOM 2007 the approach we took for tiering was different, although it accomplishes the same goals. As opposed to copying the data from tier to tier, we instead allow the upper tier console to connect to and interact with the lower tiers directly, without requiring direct access to those management groups. The way we accomplish this is by connecting to those management groups via the local management group that the console is connected to.

    In order to use this feature, the first thing required is actually setting up the tiered connection from the local management group to the management group you want to connect to. This can be accomplished in the UI via the "Connected Management Groups" settings in the Administration pane, or via the SDK. Via the SDK, you would start by retrieving the TieringAdministration class off of an instance of ManagementGroup using:

    public TieringAdministration GetTieringAdministration()

    Once you have this object there are two overloaded methods for creating a TieredManagementGroup.

    public TieredManagementGroup AddMonitoringTier(string name, ManagementGroupConnectionSettings tieredConnectionSettings)
    public TieredManagementGroup AddMonitoringTier(string name, ManagementGroupConnectionSettings tieredConnectionSettings, 
     MonitoringSecureData runAs, bool availableForConnectors)

    The first gives the tiered connection a name and specifies the connection settings to use to connect to that tier; the ServerName, UserName, Domain and Password are required on the ManagementGroupConnectionSettings object. The second overload is actually much more convenient when later using the TieredManagementGroup. If you specify a RunAs account here, you can actually have the system use that account to connect instead of having to provide the username and password each time. If you set availableForConnectors to true, the TieringAdministration method GetMonitoringTiersForConnectors() will return this tier and subsequently all "tiered" MCF overloads will include this tier in their processing. You do not actually have to make the tier available for connectors to use the RunAs feature, but most likely those go together. Also note that when requesting the stored account be used to connect, the caller must be a SCOM Administrator.
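
    A minimal hedged sketch of creating a tier using the first overload (the server name and credentials are placeholders, and the usual SDK usings are assumed):

    ManagementGroup localGroup = new ManagementGroup("localhost");
    TieringAdministration tieringAdministration = localGroup.GetTieringAdministration();

    // ServerName, UserName, Domain and Password are required for the tiered connection
    ManagementGroupConnectionSettings settings =
        new ManagementGroupConnectionSettings("lowerTierServer");
    settings.Domain = "CONTOSO";
    settings.UserName = "TierAdmin";
    System.Security.SecureString password = new System.Security.SecureString();
    // ... populate the password here (placeholder) ...
    settings.Password = password;

    TieredManagementGroup tier = tieringAdministration.AddMonitoringTier("My Lower Tier", settings);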

    Now, in order to actually connect to the tier, you first need to retrieve the TieredManagementGroup for the tier you want to connect to. There are a few methods on TieringAdministration that allow you to do this:

    public ReadOnlyCollection<TieredManagementGroup> GetMonitoringTiers()
    public TieredManagementGroup GetMonitoringTier(Guid id)
    public TieredManagementGroup GetMonitoringTier(string name)
    public ReadOnlyCollection<TieredManagementGroup> GetMonitoringTiersForConnectors()

    Once you have the TieredManagementGroup you want to connect to, you simply call the following method on that instance:

    public ManagementGroup Connect(TieredManagementGroupConnectionSettings tieredConnectionSettings)

    The TieredManagementGroupConnectionSettings are very similar to the regular ManagementGroupConnectionSettings, with a single addition: the ConnectForConnector property. If this property is true, then the SDK service of the local tier will use the associated RunAs account to connect to the lower tier, requiring the caller to be an admin. If it is false, credentials must be provided. For security reasons, we do not use the SDK Service account to connect to the lower tier.

    Once you call the Connect method above, you get a ManagementGroup back that you can work with just like a local ManagementGroup instance, only this one is connected to a different tier via the local one.
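
    Continuing the sketch, connecting through a previously created tier in a connector scenario might look like this (the TieredManagementGroupConnectionSettings constructor usage is assumed):

    TieredManagementGroup tier = tieringAdministration.GetMonitoringTier("My Lower Tier");

    TieredManagementGroupConnectionSettings tieredSettings =
        new TieredManagementGroupConnectionSettings();
    tieredSettings.ConnectForConnector = true; // use the stored RunAs account; caller must be admin

    ManagementGroup lowerTierGroup = tier.Connect(tieredSettings);
    // lowerTierGroup now behaves like any local ManagementGroup instance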

    Largely, this is the recommended approach for interacting with different tiers, as it gives you full control over handling errors in each tier when aggregating data across multiple tiers. However, we do provide some "tiered" methods specifically for the SCOM Connector Framework. If you look at the MonitoringConnector class found in the Microsoft.EnterpriseManagement.ConnectorFramework namespace, it has several methods whose names carry a "ForTiers" suffix. The behavior of each is roughly the same, so I won't go over all of them; instead we'll just look at one sample:

    public ReadOnlyCollection<ConnectorMonitoringAlert> GetMonitoringAlertsForTiers(out IList<ConnectorTieredOperationFailure> failures)

    The non-tiered version of this method gets all alerts for the given connector from the local management group, based on the bookmark of the connector. This method does the same, only it does it for all the tiers that have been created as "Available for Connectors," as mentioned earlier. The implementation is nothing special in that it simply uses the aforementioned tiering methods to retrieve tiers, connect to them and call GetMonitoringAlerts on each, subsequently aggregating the results. Exceptions will be thrown if there are connectivity problems to the local tier; otherwise errors are captured in the out parameter.
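
    Usage is straightforward; a sketch, assuming 'connector' is a MonitoringConnector you have already retrieved:

    IList<ConnectorTieredOperationFailure> failures;
    ReadOnlyCollection<ConnectorMonitoringAlert> alerts =
        connector.GetMonitoringAlertsForTiers(out failures);

    // Alerts from the local group and all connector-enabled tiers are aggregated;
    // per-tier connectivity problems are reported here instead of throwing.
    foreach (ConnectorTieredOperationFailure failure in failures)
    {
        Console.WriteLine("A tier operation failed: {0}", failure);
    }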

  • Jakub@Work

    Working with Management Pack Templates

    • 9 Comments

    Management pack templates provide a powerful way to create a single management pack object or a collection of them. Essentially, you author the fragment of a management pack you want created, with some missing values that become configuration to your template; upon execution of the template, the missing values are provided and the finished management pack object(s) are materialized and imported.

    Templates show up in the UI under the Authoring pane. In order for your own custom template to show up there, you need to create a folder for it and place the template in the folder as a folder item. It will show up without the folder, but won't work quite right. Enabling template execution via the UI is outside the scope of this post, but a tutorial should eventually be available on www.authormps.com.

    I've attached a sample management pack that essentially recreates the instance group template we use for creating groups.

    If you take a look at the template defined there, you will see that the configuration section is very similar to other management pack elements. The configuration schema here specifies what values must be provided to the template processor.

    The second section is the references for the actual template. The references refer to the references section of the management pack the template is in, by alias. There is one special alias defined, 'Self', which refers to the management pack the template is in.

    Finally, under the implementation section is the fragment of the management pack you want your template to create. You can have the configuration values replace any part of the implementation by referring to the values via the $TemplateConfig/<configuration variable name>$ format.
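
    For example, a hedged sketch of an implementation fragment that substitutes two configuration values into a class definition (the element layout here is illustrative, not copied from the attached sample):

      <Implementation>
          <ClassType ID="$TemplateConfig/Namespace$.$TemplateConfig/TypeName$"
                     Accessibility="Public" Abstract="false" Singleton="true" Hosted="false"
                     Base="InstanceGroup!Microsoft.SystemCenter.InstanceGroup" />
      </Implementation>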

    In the templates management pack you will also notice that I put the sample templates in a newly created folder. This will ensure the UI behaves properly with any template output I produce. The template output can be placed in a folder, and the UI will treat these folders as individual instances of execution of the template, such that they can be managed as homogeneous units, even though they may have created a wide variety of management pack objects.

    The code below runs my template using the SDK.

    First you will notice that I need to get the management pack the template is in, and the template itself. I need the management pack for two reasons. First, I need a management pack to run the template against; all the objects produced by the template will be placed in this management pack. Second, I need this particular management pack because it is not sealed, and thus any templates defined in it must be run against it. If you seal the management pack that contains the template, you can run it against any non-sealed management pack.

    Next, I have to build the configuration for my template. This is just XML that matches the schema of my template. You will also notice that within my configuration I reference some management packs. This is reflected by passing additional references as a parameter when processing the template. Note that if I want to use references that already exist in the management pack the template output will be put in, these aliases must match the already existing aliases for the same management packs.

    Finally, when I process my template, I provide additional information that will be used to name the folder the template output is put into. This is optional, but required if you want the output to show up in the UI and want to be able to delete it easily (by deleting everything in this folder). The method actually returns the folder the output was put in.

    using System;
    using System.Collections.ObjectModel;
    using System.Text;
    using System.Xml;
    using Microsoft.EnterpriseManagement;
    using Microsoft.EnterpriseManagement.Configuration;

    namespace Jakub_WorkSamples
    {
        partial class Program
        {
            static void ProcessTemplate()
            {
                // Connect to the local management group
                ManagementGroup localManagementGroup = new ManagementGroup("localhost");

                // Get the template management pack. This is where we will store our template
                // output, since the sample template management pack is not sealed and the
                // output needs to be in the same management pack as the template in this case.
                ManagementPack templateManagementPack = localManagementGroup.GetManagementPacks(
                    "Template.Sample")[0];

                // Get the template you want to process
                MonitoringTemplate sampleTemplate = localManagementGroup.GetMonitoringTemplates(
                    new MonitoringTemplateCriteria("Name = 'Sample.Template'"))[0];

                // Populate the configuration for the template
                string formula =
                    @"<MembershipRule>
                        <MonitoringClass>$MPElement[Name=""Windows!Microsoft.Windows.Computer""]$</MonitoringClass>
                        <RelationshipClass>$MPElement[Name=""InstanceGroup!Microsoft.SystemCenter.InstanceGroupContainsEntities""]$</RelationshipClass>
                      </MembershipRule>";

                StringBuilder stringBuilder = new StringBuilder();
                XmlWriter configurationWriter = XmlWriter.Create(stringBuilder);
                configurationWriter.WriteStartElement("Configuration");
                configurationWriter.WriteElementString("Namespace", "Sample.Namespace");
                configurationWriter.WriteElementString("TypeName", "MyClass");
                configurationWriter.WriteElementString("LocaleId", "ENU");
                configurationWriter.WriteElementString("GroupDisplayName", "My Class");
                configurationWriter.WriteElementString("GroupDescription", "My Class Description");
                configurationWriter.WriteStartElement("MembershipRules");
                configurationWriter.WriteRaw(formula);
                configurationWriter.WriteEndElement();
                configurationWriter.WriteEndElement();
                configurationWriter.Flush();

                // Get the management packs for references
                ManagementPack windowsManagementPack = localManagementGroup.
                    GetManagementPack(SystemManagementPack.Windows);
                ManagementPack instanceGroupManagementPack = localManagementGroup.
                    GetManagementPack(SystemManagementPack.Group);
                ManagementPackReferenceCollection newReferences =
                    new ManagementPackReferenceCollection();
                newReferences.Add("Windows", windowsManagementPack);
                newReferences.Add("InstanceGroup", instanceGroupManagementPack);

                // Process the template; the output is placed in a named folder so the UI
                // can manage it as a unit
                templateManagementPack.ProcessMonitoringTemplate(sampleTemplate,
                    stringBuilder.ToString(), newReferences, "MyTemplateRunFolder",
                    "My Template Run", "This is the folder for my sample template output");
            }
        }
    }
  • Jakub@Work

    Workflow Targeting and Classes

    • 22 Comments

    I set out this week trying to put together a post about designing and deploying rules and monitors that utilize the SDK data sources that I talked about in the last post. Unfortunately, it's not ready yet. Among other things this week, I have been trying to write and deploy a sample management pack that would demonstrate the various techniques that we recommend for inserting operational data via the SDK to SCOM, but in the process I have run into a few issues that need resolving before presenting the information. I am definitely working on it and I'll get something up as soon as we work through the issues I have run into. If you need something working immediately, please contact me directly with your questions so I can better address your specific scenario.

    In the meantime, I wanted to discuss classes in SCOM and how they relate to workflow (take this to mean a rule, monitor or task in SCOM 2007) targeting. I think this topic is a good stepping stone for understanding the techniques I'll talk about when the aforementioned post is ready.

    First, what do I mean by targeting? In MOM 2005 rules were deployed based on the rule groups they were in and their association to computer groups. Rules would be deployed irrespective of whether they were needed on a particular computer. The targeting mechanism in 2007 is much different and based entirely around the class system that describes the object space. Each workflow is assigned a specific target class and the agents will receive rules when they have objects of that particular class being managed on that machine.

    Ok, so what does that all mean? Let's start with a sample class hierarchy. First, we have a base class of all classes, System.Entity (this is the actual base class for all classes in SCOM 2007). This class is abstract (meaning that there cannot be an instance of just System.Entity). Next, suppose we have a class called Microsoft.SqlServer (note this is not the actual class hierarchy we will ship; this is only for illustrative purposes). This class is not abstract and defines all the key properties that identify a SQL Server. Key properties are the properties that uniquely identify an instance in an enterprise. For a SQL Server this would be a combination of the server name as well as the computer name the server is on. Next, there is a class Microsoft.SqlServer.2005, which derives from Microsoft.SqlServer, adding properties specific to SQL Server 2005, but it adds no key properties (and in fact cannot add any). This means that a SQL Server 2005 in your enterprise would be both a Microsoft.SqlServer AND a Microsoft.SqlServer.2005, and the object that represented it, from an identity perspective, would be indistinguishable (i.e. it's the same SQL Server). Lastly, SQL Servers can't exist by themselves, so we add a System.Computer class to the mix that derives from System.Entity. We now have all the classes defined that we need to talk about our first workflow, discovery.

    Let's assume we already have a computer discovered in our enterprise, Computer1. In order to discover a SQL Server, we need two things:

    1. We need to define a discovery rule that can discover a SQL Server.
    2. We need to deploy and run the rule

    In order to make our rule deploy and run, we'll need to target it to a type that gets discovered before SQL Server, in our case System.Computer. If we target a discovery rule to the type it's discovering, you'll have a classic chicken-or-the-egg problem on your hands. When we target our discovery rule to System.Computer, the configuration service knows that there is a Computer1 in the enterprise that is running and being managed by an agent, and it will deploy any workflow targeted at System.Computer, including our discovery rule, to that machine and in turn execute the rule. Once the rule executes, it will submit new discovery data to our system and the SQL Server will appear; let's call the server Sql1. Our SQL Server 2005 discovery rule can be targeted to System.Computer, or we could actually target it to Microsoft.SqlServer since it will already be discovered by the aforementioned rule. This illustrates a waterfall approach to discovery, and the recommended way discovery is done in the system. There needs to be a "seed" discovered object that is leveraged for further discovery and that can generate subsequent "seeds" for related objects. In SCOM 2007 we jump start the system by pre-discovering the primary management server (also a computer) and allow manual computer discovery and agent deployment that jump starts discovery on those machines.

    This example also illustrates the workflow targeting and deployment mechanism in SCOM 2007. When objects are discovered in an enterprise, they are all discovered and identified as instances of particular classes. In the previous example, Computer1 is both a System.Entity and a System.Computer. Sql1 is a System.Entity, Microsoft.SqlServer and Microsoft.SqlServer.2005. We maintain this state in the configuration service and deploy workflows to agents that are managing these instances, based on the types of instances they are managing. This ensures that workflows get deployed and executed on the agents that need them, with no need to manage targeting.

    Another example of targeting would be with a task. Let's say we have a task that restarts SQL Server. This task can actually run on any Microsoft.SqlServer. Maybe we have another task that disables a specific SQL 2005 feature that only makes sense to run against objects that are in fact SQL Server 2005 instances. The first task would be targeted against Microsoft.SqlServer while the second against Microsoft.SqlServer.2005. If you select a SQL Server object in the SCOM 2007 UI that is not a SQL Server 2005, but instead is SQL Server 2000, the 2005 specific task will not be available. If you try to run it anyway via the SDK or the command shell, it will fail because the instance you are trying to run against isn't the right class and doesn't understand the task. The first task however, restarting the service, will run against both the SQL 2000 and 2005 instances and be available for both in the UI.

    I hope this helps make a bit more sense out of the new class system and leveraging it for targeting. Like with anything else, there are edge cases, more complicated hierarchies and other considerations when designing and targeting workflows, but from a "pure" modeling perspective, this should give you an idea as to how things work.

  • Jakub@Work

    Caching on the Brain

    • 16 Comments

    This week I have been working a lot on caching in the SDK, trying to optimize some code paths and improve performance as much as I can, so I decided to share a bit about how the cache works, and some insights into its implementation while it's fresh on my mind.

    SCOM 2007 relies heavily on configuration data to function. Class and relationship type definitions become especially important when dealing with discovered objects. We found that it was very common to want to move up and down these class hierarchies, which would prove very costly from a performance standpoint if each operation required a roundtrip to the server and database. We also recognized that not all applications require this type of functionality, and incurring the additional memory hit was not desired (this was especially true for modules that need the SDK). Given this, the SDK has been designed with 3 different cache modes: Configuration, ManagementPacks and None. The cache mode you want to use can be specified using the ManagementGroupConnectionSettings object.
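
    A minimal sketch of picking a cache mode, assuming the property on ManagementGroupConnectionSettings is named CacheMode and that ManagementGroup.Connect accepts the settings object:

    ManagementGroupConnectionSettings settings =
        new ManagementGroupConnectionSettings("localhost");
    settings.CacheMode = CacheMode.ManagementPacks; // or Configuration / None
    ManagementGroup managementGroup = ManagementGroup.Connect(settings);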

    First, let's go over what objects each cache mode will actually cache:

    Configuration: ManagementPack, MonitoringClass, MonitoringRelationshipClass, MonitoringViewType, UnitMonitorType, ModuleType derived classes, MonitoringDataType, MonitoringPage and MonitoringOverrideableParameter

    ManagementPacks: ManagementPack

    None: None =)

    For the first two modes there is also an event on ManagementGroup that will notify users of changes. OnTypeCacheRefresh is only fired in Configuration cache mode and indicates that something in the cache, other than ManagementPack objects, changed. This means that the data in the cache is actually different. Many things can trigger a ManagementPack change, but not all of them change anything other than the ManagementPack object's LastModified property (for instance, creating a new view, or renaming one). OnManagementPackCacheRefresh gets triggered when any ManagementPack object changes for whatever reason, even if it didn't change anything else in the cache. This event is available in both Configuration and ManagementPacks modes.
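
    A sketch of wiring up these events (the handler signatures are assumed to follow the plain EventHandler pattern):

    ManagementGroup managementGroup = new ManagementGroup("localhost");

    // Fires only in Configuration cache mode, when cached objects other than
    // ManagementPack instances actually changed
    managementGroup.OnTypeCacheRefresh += delegate(object sender, EventArgs e)
    {
        Console.WriteLine("Type cache refreshed");
    };

    // Fires in Configuration and ManagementPacks modes whenever any
    // ManagementPack object changes
    managementGroup.OnManagementPackCacheRefresh += delegate(object sender, EventArgs e)
    {
        Console.WriteLine("Management pack cache refreshed");
    };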

    So, when do you want to use each mode? Configuration is great if you are doing lots of operations in the configuration space, especially moving up and down the various type hierarchies. It is also useful when working extensively with MonitoringObject (not PartialMonitoringObject) and needing to access the property values of many instances of different class types. Our UI runs in this mode. ManagementPacks is useful when configuration related operations are used, but not extensively. This is actually a good mode for MP authoring, which requires frequent retrieval of management packs, but not necessarily of other objects. One thing that is important to note here is that every single object that exists in a management pack (rule, class, task, etc.) requires a ManagementPack that is not returned in the initial call. If you call ManagementGroup.GetMonitoringRules() in cache mode None, every rule that comes back will make another call to the server to get its ManagementPack object. If you are doing this, run in at least ManagementPacks cache mode; that's what it's for. None is a great mode for operational data related operations. If you are mostly working with alerts or performance data, or even simply submitting a single task, this mode is for you. (None was not available until recently, and is not in the bits that are currently available for download.)

    One more thing I want to mention: ManagementPack objects, when cached, will always be maintained as the exact same instance in memory, even if properties change. Other objects are actually purged and recreated. This is extremely useful for authoring because you can guarantee that when you have an instance of a management pack that may have been retrieved in different ways, it is always the same instance in memory. A practical example: you get a management pack by calling ManagementGroup.GetManagementPack(Guid) and then you get a rule by calling ManagementGroup.GetMonitoringRule(Guid). The rule is conceptually in the same management pack the GetManagementPack call returned, but who is to say it is the same instance? When you edit the rule, you will want to call ManagementPack.AcceptChanges(), which (if the instances were not the same) would not commit your rule change, since the rule's internal ManagementPack may have been a different instance, and that instance is what maintains any changed state. In Configuration and ManagementPacks cache modes this cannot happen: the instance that represents a certain management pack will always be the exact same instance and maintain the same state about what is being edited across the board. Now, that makes multi-threading and working with the same management pack across threads trickier, but there are public locking mechanisms exposed on the ManagementPack object to help with that.
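
    Put together, an edit under the lock might look like the following sketch. It assumes a connected ManagementGroup mg in at least ManagementPacks cache mode and an existing rule Guid (ruleId); that LockObject works with C#'s lock statement and that the status enum is named ManagementPackElementStatus are assumptions worth verifying:

    MonitoringRule rule = mg.GetMonitoringRule(ruleId);
    ManagementPack mp = rule.GetManagementPack();

    // Thanks to the caching guarantee, mp is the same instance the rule
    // holds internally, so AcceptChanges() sees the pending edit.
    lock (mp.LockObject)   // assumed usage of the public locking mechanism
    {
        rule.DisplayName = "My renamed rule";
        rule.Status = ManagementPackElementStatus.PendingUpdate;  // assumed enum name
        mp.AcceptChanges();
    }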

    Lastly, a quick note about how the cache works. The definitive copy of the cache is actually maintained in memory in the SDK service. The service registers a couple of query notifications with SQL Server to be notified when things of interest change in the database, and that triggers a cache update on the server. When this update completes, the service loops through and notifies all clients that, when they connected, requested to be notified of cache changes. Here we see another benefit of None cache mode: less server load, in that fewer clients need to be notified of changes.

  • Jakub@Work

    Monitoring Objects

    • 11 Comments

    Monitoring objects are the instances that are discovered in SCOM. They are identified by unique values for the key properties defined by the class(es) they are, as well as by the key properties of their host, if applicable. Their identity is solidified by the first non-abstract class in the object's class hierarchy, even though there may still exist a hierarchy of non-abstract classes that derive from this class. For instance, a computer is defined by its System.Computer class, even though it is also a System.Windows.Computer and potentially a MyCompany.SpecialComputer, both of which derive from System.Computer. It is important to note that these additional non-abstract classes do not in any way change the identity of the object itself and can be thought of as roles.

    Since a monitoring object is a representation of an instantiated class, it really represents a meta-object in a way. Early on, we were trying to figure out whether we wanted strongly typed monitoring objects, such as Computer or Server, but for the most part decided against it, with a few exceptions for SCOM specific classes like ManagementServer. Given that, the MonitoringObject, in its most basic form, is a property bag of values for all the class(es) properties of the object as well as the object's host(s).

    When we store monitoring objects in the database, they are stored in two places. We have a general store where common attributes of all monitoring objects are maintained, such as Name, Id and class list (what classes the object is), and a dynamically generated view per identifying class that gets populated with class property values. As such, all SDK methods that return MonitoringObject must make at least two queries to construct the object. If the response is heterogeneous in object class, even more queries are performed. This fact and its performance implications led to the SDK exposing a full MonitoringObject that represents the property bag mentioned above, as well as a stripped-down PartialMonitoringObject that includes all the same functionality as MonitoringObject but does not include any property values or property-value related methods. Any method returning PartialMonitoringObject will always perform a single database query and thus is the recommended approach when working with these objects, unless the properties are necessary. The only additional method on MonitoringObject that is not on PartialMonitoringObject is:

    public object GetMonitoringPropertyValue(MonitoringClassProperty property)

    Please note that this method can be used for properties of the monitoring object itself, or for any key properties of any host class.
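
    For example, a minimal sketch of reading a class-defined property from full MonitoringObject instances (assuming a connected ManagementGroup mg; the class name, criteria syntax and PropertyCollection iteration are illustrative assumptions):

    MonitoringClass computerClass = mg.GetMonitoringClasses(
        new MonitoringClassCriteria("Name = 'System.Computer'"))[0];

    foreach (MonitoringObject computer in mg.GetMonitoringObjects(computerClass))
    {
        // Find the property definition, then read this instance's value for it.
        foreach (MonitoringClassProperty property in computerClass.PropertyCollection)
        {
            if (property.Name == "PrincipalName")
            {
                Console.WriteLine("{0}: {1}", computer.FullName,
                    computer.GetMonitoringPropertyValue(property));
            }
        }
    }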

    Before diving into all the various ways to query for monitoring objects, I wanted to go through some of the features of the object (both MonitoringObject and PartialMonitoringObject) itself. First, there is a public event available on the object:

    public event EventHandler<MonitoringObjectMembershipChangedEventArgs> OnRelatedEntitiesChanged

    This event will get signaled any time any object in the containment hierarchy of the current object changes. The event includes the Id of the object whose contained members changed, but NOT which objects actually changed; a query would need to be performed to get the contained objects and compare accordingly.

    The object also contains the following properties for quick reference without needing a full MonitoringObject:

    public string Name
    public string Path
    public string DisplayName
    public string FullName
    public bool IsManaged
    public DateTime LastModified
    public HealthState HealthState
    public DateTime? StateLastModified
    public bool IsAvailable
    public DateTime? AvailabilityLastModified
    public bool InMaintenanceMode
    public DateTime? MaintenanceModeLastModified
    public ReadOnlyCollection<Guid> MonitoringClassIds
    public Guid LeastDerivedNonAbstractMonitoringClassId

    Name is the key value(s) of the monitoring object, while DisplayName is the same as Name unless a friendly display name was specified by the discovery source via the DisplayName property defined on the System.Entity class. Path is the concatenated names of the hosts, while FullName is the uniquely identifying full name of the current instance, including name, path and class name. HealthState is populated from the System.EntityHealth monitor for the object, as is the IsAvailable property (which is a function of whether the health service currently monitoring this object is available) and whether or not the object is in maintenance mode. IsManaged is always true and is not used. The MonitoringClassIds are all the non-abstract classes this object is, and the LeastDerivedNonAbstractMonitoringClassId is what it says it is, namely the class that brings in the identity of the object, just with a really long name :-).

    There are a ton of methods on this object that I will leave for you to browse through. I will go through some of them as the topics come up in future posts.

    Now, how do you get these objects? Well, first, let's go over MonitoringObjectGenericCriteria and MonitoringObjectCriteria. MonitoringObjectGenericCriteria allows you to query by generic properties shared across all monitoring objects. This is the list of properties you can use:

    Id
    Name
    Path
    FullName
    DisplayName
    LastModified
    HealthState
    StateLastModified
    IsAvailable
    AvailabilityLastModified
    InMaintenanceMode
    MaintenanceModeLastModified

    MonitoringObjectCriteria supports these same common properties and additionally allows you to query for monitoring objects by specific property values, as defined on the class(es) of the objects (such as PrincipalName on Computer); however, you can NOT use abstract classes. Note that when querying by MonitoringObjectCriteria, MonitoringObject instances are always returned, as we have to query both stores in the database anyway. Both criteria types are shown in the sketch below.
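
    A minimal sketch of both criteria types, assuming a connected ManagementGroup mg; the criteria strings and the two-argument MonitoringObjectCriteria constructor are assumptions to verify against the SDK reference:

    // Generic criteria: common properties only; the cheap partial-object call.
    MonitoringObjectGenericCriteria inMaintenance =
        new MonitoringObjectGenericCriteria("InMaintenanceMode = 1");
    ReadOnlyCollection<PartialMonitoringObject> partials =
        mg.GetPartialMonitoringObjects(inMaintenance);

    // Class-specific criteria: query by a property the (non-abstract) class
    // defines; full MonitoringObject instances come back.
    MonitoringClass computerClass = mg.GetMonitoringClasses(
        new MonitoringClassCriteria("Name = 'System.Computer'"))[0];
    MonitoringObjectCriteria byName = new MonitoringObjectCriteria(
        "PrincipalName = 'computer1.contoso.com'", computerClass);
    ReadOnlyCollection<MonitoringObject> computers = mg.GetMonitoringObjects(byName);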

    What follows is an overview of the methods that are available for retrieving MonitoringObject(s) and PartialMonitoringObject(s). I won't go over all of them as hopefully the docs/Intellisense can help you find what you need.

    These are methods that accept criteria as accessible on the ManagementGroup object:

    public ReadOnlyCollection<MonitoringObject> GetMonitoringObjects(MonitoringObjectGenericCriteria criteria)
    public ReadOnlyCollection<MonitoringObject> GetMonitoringObjects(MonitoringObjectCriteria criteria)
    public ReadOnlyCollection<MonitoringObject> GetMonitoringObjects(ICollection<MonitoringObjectCriteria> criteriaCollection)
    public ReadOnlyCollection<PartialMonitoringObject> GetPartialMonitoringObjects(MonitoringObjectGenericCriteria criteria)

    These are methods that accept criteria as accessible on the monitoring object instance itself. TraversalDepth can be OneLevel or Recursive and determines whether we return immediately contained instances, or all contained instances:

    public ReadOnlyCollection<MonitoringObject> GetRelatedMonitoringObjects(MonitoringObjectGenericCriteria criteria, TraversalDepth traversalDepth)
    public ReadOnlyCollection<PartialMonitoringObject> GetRelatedPartialMonitoringObjects(MonitoringObjectGenericCriteria criteria, TraversalDepth traversalDepth)

    These methods allow you to specify a class as your criteria, including abstract classes, and are found on ManagementGroup (similar methods are available on MonitoringObject with TraversalDepth as a parameter):

    public ReadOnlyCollection<MonitoringObject> GetMonitoringObjects(MonitoringClass monitoringClass)
    public ReadOnlyCollection<PartialMonitoringObject> GetPartialMonitoringObjects(MonitoringClass monitoringClass)

    For getting related objects for multiple instances (the UI State View uses these methods), ManagementGroup allows you to get the objects related to a collection of objects by class, relationship type or MonitoringObjectCriteria (MonitoringObject also has similar methods for a single object):

    public Dictionary<T, ReadOnlyCollection<MonitoringObject>> GetRelatedMonitoringObjects<T>(ICollection<T> monitoringObjects, MonitoringClass monitoringClass, TraversalDepth traversalDepth) where T : PartialMonitoringObject
    public Dictionary<T, ReadOnlyCollection<MonitoringObject>> GetRelatedMonitoringObjects<T>(ICollection<T> monitoringObjects, MonitoringObjectCriteria criteria, TraversalDepth traversalDepth) where T : PartialMonitoringObject
    public Dictionary<T, ReadOnlyCollection<MonitoringObject>> GetRelatedMonitoringObjects<T>(ICollection<T> monitoringObjects, MonitoringRelationshipClass relationshipClass, TraversalDepth traversalDepth) where T : PartialMonitoringObject
    public Dictionary<T, ReadOnlyCollection<PartialMonitoringObject>> GetRelatedPartialMonitoringObjects<T>(ICollection<T> monitoringObjects, MonitoringClass monitoringClass, TraversalDepth traversalDepth) where T : PartialMonitoringObject
    public Dictionary<T, ReadOnlyCollection<PartialMonitoringObject>> GetRelatedPartialMonitoringObjects<T>(ICollection<T> monitoringObjects, MonitoringRelationshipClass relationshipClass, TraversalDepth traversalDepth) where T : PartialMonitoringObject
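
    For instance, a minimal sketch of the bulk pattern the State View uses (assuming a connected ManagementGroup mg; the class names follow the examples earlier in this post and the criteria syntax is illustrative):

    MonitoringClass computerClass = mg.GetMonitoringClasses(
        new MonitoringClassCriteria("Name = 'System.Computer'"))[0];
    MonitoringClass sqlClass = mg.GetMonitoringClasses(
        new MonitoringClassCriteria("Name = 'Microsoft.SqlServer'"))[0];

    // All computers, then every contained SQL Server per computer in one call.
    ReadOnlyCollection<PartialMonitoringObject> computers =
        mg.GetPartialMonitoringObjects(computerClass);
    Dictionary<PartialMonitoringObject, ReadOnlyCollection<PartialMonitoringObject>> contained =
        mg.GetRelatedPartialMonitoringObjects(computers, sqlClass, TraversalDepth.Recursive);

    foreach (KeyValuePair<PartialMonitoringObject, ReadOnlyCollection<PartialMonitoringObject>> pair in contained)
    {
        Console.WriteLine("{0}: {1} SQL instance(s)", pair.Key.DisplayName, pair.Value.Count);
    }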

    This list isn't exhaustive, but it's pretty close. If you have any particular scenarios that you are confused about with regard to these methods and how best to accomplish a particular query, shoot me an email or post a comment. Also note that MonitoringRelationship is another class that has a source MonitoringObject and a target MonitoringObject included in the class itself and provides another convenient way to retrieve monitoring objects, but that is for another post.

  • Jakub@Work

    MCF from non-Windows Clients

    • 7 Comments

    Note: This will only work in RTM

    In order to make our MCF web service work from non-Windows clients, the first step is to actually change the binding the MCF web service uses. In theory, what we ship with (wsHttpBinding) should work cross-platform, but at the moment there are no non-Windows products that generate a proper proxy for it, which makes it difficult to use (see Update below). If at all possible, please use a Windows-based client to talk to MCF, or a non-Windows proxy generator that fully supports wsHttpBinding, as the functionality is much richer, especially around errors. If you choose to proceed down the basicHttpBinding route instead, note that exceptions are not properly propagated (they all show up as generic fault exceptions) and it will be impossible to tell from the client what server-side errors occurred. If you have no choice, keep reading...

    If we switch the endpoint to use basicHttpBinding, the service will act like an asmx web service and everything should work cross-platform with existing web service products. In order to actually use basicHttpBinding, however, the service will require some additional configuration. Ultimately, we need the caller to be presented as a Windows account. For cross-platform use, since you can't use Windows authentication, you are forced to use client certificates and map them accordingly. In order to use client certificates, however, you need to set up SSL (you can also use Message-level security, but I only set up and tested Transport-level). Here are the steps:

     

    1. Create a server certificate to use for your MCF endpoint to enable SSL (this certificate will need to be trusted by your clients)

    2. Import this certificate into the Local Machine store on the Root Management Server

    3. Setup the MCF endpoint to use SSL

    Since we are self-hosted, this cannot be done in IIS. You will need to find HttpCfg.exe (for Win2K3 this is found under SupportTools on the CD) and run the following command:

    HttpCfg.exe set ssl -i 0.0.0.0:6000 -h 82e8471434ab1d57d4ecf5fbed0f1ceeba975d8d -n LOCAL_MACHINE -c MY -f 2
     
    The 6000 is the port you are using in the configuration file. The "82e8471434ab1d57d4ecf5fbed0f1ceeba975d8d" is the thumbprint of the certificate you want to use (this can be found under the Details tab when viewing the certificate in the Certificates snap-in). The -f 2 enables the server to accept client certificates.

    4. Update Microsoft.Mom.Sdk.ServiceHost.exe.config to look like the attached file

    You can pick whatever port you want to use, although I was never able to get 80 to work in testing.

    Also note that when I tried this out generating the proxy using wsdl.exe from a machine other than the server itself, it failed when my endpoint was defined as localhost. I had to specify the server name in the endpoint definition for the tool to work.

    5. Restart the omsdk (OpsMgr Sdk Service) service

    6. Generate a client certificate

    There seem to be two ways to do this. The first, and the one I tried successfully, is to generate a client certificate that has, in the Subject Alternate Name field, the principal name of the user you want to map to. This will work if the CA that issues the certificate is an Enterprise CA on the domain your OpsMgr Sdk Service is running on. In the details, under the Subject Alternate Name field, this looks something like:

    Other Name:
    Principal Name=youruser@yourdomain

    Alternatively, AD allows for the configuration of certificate mapping directly on the respective user object. I did not try this method as I do not have domain admin access on the domain I was testing on, but this should work as well.

    7. Use the client certificate in the request

     

    I tested this out using a proxy generated by wsdl.exe from the .NET Framework 2.0 and everything seemed to work OK. Things didn't work well with the 1.1 Framework's wsdl.exe, as some of the normally non-nullable fields (such as DateTime fields) are not nullable in 1.1 but can be null from MCF. Thus, whatever tool you use to generate proxies needs to be able to handle null values for value types.
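
    On the client side, attaching the certificate from step 6 to a wsdl.exe-generated proxy looks roughly like the sketch below. The proxy class name, endpoint URL and certificate file are hypothetical; the ClientCertificates collection comes from SoapHttpClientProtocol, which all wsdl.exe proxies derive from:

    // Hypothetical proxy class name and endpoint; certificate path is illustrative.
    McfProxy proxy = new McfProxy();
    proxy.Url = "https://yourserver:6000/";

    // The service maps this certificate to the Windows account named in its
    // Subject Alternate Name field (see step 6).
    proxy.ClientCertificates.Add(
        new System.Security.Cryptography.X509Certificates.X509Certificate2(
            "client.pfx", "pfx-password"));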

    Update: I did some digging around and, although I did not test any of these, it seems there are some Java projects for WCF interop that should allow you to communicate directly with our wsHttpBinding. There is the JAX-WS project and Project Tango. There are probably more out there, but these are the ones I was reading about, with people reporting success using them to interop specifically with wsHttpBinding.
