I'm really excited to announce that today we released the Technical Preview of the IIS Admin Pack, which includes 7 new features for IIS Manager that will help you in a bunch of different scenarios.
You can download the IIS 7.0 Admin Pack Technical Preview from the links below (it requires less than 1 MB):
(x86) http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1646
(x64) http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1647
These UI modules include the following features:
Please help us: try them out and give us feedback on all of these modules. Do they work for you? What would you change? What would you add? What features are we missing?
Some things to think about:
Database Manager: what other database features are critical for you to build applications?
IIS Reports: what reports would you find useful? Would you want configuration-based reports (such as a summary of your sites and their configuration)? More security reports?
Configuration Editor: is it easy to use? What concepts from configuration would you like to see exposed?
Given that each individual feature above has a lot of interesting capabilities that can easily be missed or might be confusing, I will be blogging in the near future about why we decided to build each feature, what makes it different from anything else you've seen, and how you can make the most out of each of them.
In the past few days I've been reading a bit about SEO, trying to understand what makes a Web site search-engine optimized (SEO), what some of the typical headaches are when trying to achieve that, and how we can address them in IIS.
Today I decided to post about how you can make your Web site running IIS 7.0 a bit "friendlier" to search engines without having to modify any code in your application. Being SEO is a big statement since it can include several things, so for now I will scope the discussion to three things that can easily be addressed using the IIS URL Rewrite Module: URL canonicalization, friendly URLs, and moving or renaming content without breaking links.
Basically the goal of canonicalization is to ensure that the content of a page is exposed through only one URL. The reason this is important is that even though for humans it's easy to tell that http://www.carlosag.net is the same as http://carlosag.net, many search engines will not make any assumptions and will keep them as two separate entries, potentially splitting their rankings and lowering their relevance. Another example of this is http://www.carlosag.net/default.aspx and http://www.carlosag.net/. You can certainly minimize the impact of this by writing your application using the canonical forms of your links, for example always linking to http://www.carlosag.net/tools/webchart/ and dropping the default.aspx; however, that only accounts for part of the equation, since you cannot assume that everyone referencing your Web site will follow this carefully, and you cannot control their links.
This is when URL Rewrite comes into play and truly solves this problem.
URL Rewrite can help you redirect when users type your URL in a way you don't necessarily want them to, for example just carlosag.net. Choosing between using WWW or not is a matter of taste, but once you choose one you should ensure that you guide everyone to the right one. The following rule will automatically redirect everyone using just carlosag.net to www.carlosag.net. This configuration can be saved in the Web.config file in the root of your Web site. Note that I'm only including the XML in this blog; however, I used IIS Manager to generate all of these settings, so you don't need to memorize the XML schema since the UI includes several friendly capabilities to generate all of this.
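A reconstruction of that rule looks roughly like this (the rule name is just illustrative; the IIS Manager UI generates the equivalent markup for you):

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Redirect requests for carlosag.net to the canonical www.carlosag.net host -->
        <rule name="Redirect to WWW" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^carlosag\.net$" />
          </conditions>
          <action type="Redirect" url="http://www.carlosag.net/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>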
Note that one important thing is to use Permanent (301) redirects. This ensures that if anybody links to your page using a non-WWW link, when the search engine bot crawls their Web site it will identify the link as permanently moved, treat the new URL as the correct address, and not index the old URL, which is what happens when using Temporary (302) redirects. The following shows what the response from the server looks like:
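Roughly like this (headers abbreviated):

HTTP/1.1 301 Moved Permanently
Location: http://www.carlosag.net/
Server: Microsoft-IIS/7.0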
IIS has a feature called Default Document that allows you to specify the content that should be processed when a user enters a URL that maps to a directory rather than an actual file. In other words, if the user enters http://www.carlosag.net/tools/ they will actually get the content as if they had entered http://www.carlosag.net/tools/default.aspx. That is all great; the problem is that this feature only works one way, mapping a directory to a file, but it does not map the file back to the directory. This means that if some of your links or other users use the full URL, search engines will see two different URLs. To solve that problem we can use a configuration very similar to the rule above; the following rule will redirect default.aspx to the canonical URL (the folder).
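Something along these lines should do it (again, the rule name and regular expression are illustrative):

<rule name="Default Document Redirect" stopProcessing="true">
  <match url="(.*?)/?default\.aspx$" />
  <action type="Redirect" url="{R:1}/" redirectType="Permanent" />
</rule>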
This again uses a Permanent redirect to extract everything before default.aspx and redirect it to the "parent" URL path; for example, if the user enters http://www.carlosag.net/Tools/WindowsLiveWriter/default.aspx they will be redirected to http://www.carlosag.net/Tools/WindowsLiveWriter/, and http://www.carlosag.net/Tools/default.aspx will go to http://www.carlosag.net/Tools/. You can place this rule at the root of your site and it will take care of all the default documents (if you have a default.aspx in every folder).
Asking your users to remember that www.contoso.com/books.aspx?isbn=0735624410 is the URL for the IIS Resource Kit is not the nicest thing to do; why should they care that the page is an ASPX, that it takes arguments, and so on? Providing them with a URL like www.contoso.com/books/IISResourceKit will resonate with them much more and be easier to remember and pass along. Most importantly, it doesn't tie you to any particular Web technology.
With URL Rewrite you can easily build this kind of logic automatically without having to modify your code using Rewrite Maps:
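The configuration could look something like the following sketch (the map name, key, and value are illustrative; you would add one entry per book):

<rewrite>
  <rewriteMaps>
    <rewriteMap name="Books">
      <add key="/books/IISResourceKit" value="/books.aspx?isbn=0735624410" />
    </rewriteMap>
  </rewriteMaps>
  <rules>
    <!-- Rewrite friendly book URLs to the real books.aspx page -->
    <rule name="Friendly Book URLs" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{Books:{REQUEST_URI}}" pattern="(.+)" />
      </conditions>
      <action type="Rewrite" url="{C:1}" appendQueryString="false" />
    </rule>
  </rules>
</rewrite>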
The configuration above includes a rule that uses a Rewrite Map to translate a URL like http://www.contoso.com/books/IISResourceKit into http://www.contoso.com/books.aspx?isbn=0735624410 automatically. Using maps is a very convenient way to have a "table" of values that can be translated into any other value to be used in the resulting URL. Of course there are better ways of doing this for large catalogs or values that change frequently, but it is extremely useful when you have a consistent set of values or when you can't make changes to an existing application. Note that since we use Rewrite, end users never see the "ugly URL" unless they already knew it and typed it, and of course this means you can use the inverse approach to ensure that canonicalization is preserved:
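A sketch of that inverse rule (the map and rule names are illustrative, and it assumes isbn is the only query string parameter):

<rewrite>
  <rewriteMaps>
    <rewriteMap name="ISBN">
      <add key="0735624410" value="/books/IISResourceKit" />
    </rewriteMap>
  </rewriteMaps>
  <rules>
    <!-- Redirect books.aspx?isbn=... to its friendly, canonical URL -->
    <rule name="Redirect to Friendly Book URL" stopProcessing="true">
      <match url="^books\.aspx$" />
      <conditions>
        <add input="{QUERY_STRING}" pattern="^isbn=(.+)$" />
        <add input="{ISBN:{C:1}}" pattern="(.+)" />
      </conditions>
      <action type="Redirect" url="{C:1}" appendQueryString="false" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>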
The rule above does the "inverse" by matching the URL books.aspx, extracting the ISBN query string value, looking it up in the ISBN table, and redirecting the client to the canonical URL; so again, if a user enters http://www.contoso.com/books.aspx?isbn=0735624410 they will be redirected to http://www.contoso.com/books/IISResourceKit.
To me this friendly URL item is more of a user feature than an SEO feature; every SEO guide I've read says to reduce the number of parameters in your query string, but I have not yet found any document that clearly states whether there is truly a limit in the search engine bots that would actually impact search relevance. I guess it makes sense that they wouldn't keep track of thousands of links to a catalog.aspx that has zillions of permutations based on hundreds of values in the query string (category, department, price range, etc.) even if all of them were linked, but again I don't have any proof.
One complex task that Web developers sometimes face is reorganizing their current Web site structure, whether it's moving a section to a different path or something as simple as renaming a single file. You need to take into consideration things like: Is this move temporary? How do I ensure old clients get the new URL? How do I avoid losing search engine relevance? URL Rewrite will help you perform these tasks.
If you rename a file you can very easily write a Rewrite or Redirect rule that ensures your users continue getting the content. If your intent is to never go back to the old name, you should use a Permanent redirect so everyone starts getting the new content at its new "canonical URL"; however, if this could be a temporary thing, you should use a Temporary redirect. Finally, a Rewrite is useful if you still want both URLs to continue to be valid (though this breaks canonicalization).
Another common scenario is when you need to move an entire directory to another place in the Web site. It could also be that, based on some criteria (say, mobile browsers or another user agent), you want to serve a different set of pages or images. Either way, URL Rewrite helps with this. The following configuration will redirect every call to the /Images directory to the /NewImages directory.
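A rule along these lines should do it (the rule name is illustrative):

<rule name="Redirect Images to NewImages" stopProcessing="true">
  <match url="^Images/(.*)" />
  <action type="Redirect" url="/NewImages/{R:1}" redirectType="Permanent" />
</rule>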
A related scenario: if you wanted to show different, smaller images whenever a Windows CE user is accessing your site, you could have an "img" directory where all the small images are stored and use a rule like the following:
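Roughly like this (the directory names and user-agent pattern are illustrative):

<rule name="Small Images for Windows CE">
  <match url="^images/(.*)" />
  <conditions>
    <add input="{HTTP_USER_AGENT}" pattern="Windows CE" />
  </conditions>
  <action type="Rewrite" url="/img/{R:1}" />
</rule>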
Note that in this case the use of Rewrite makes sense, since we want the small images to look like the original images to the browser, and it saves the browser a redirect round-trip.
Another common operation is when you need to relocate pages for whatever reason (such as marketing campaigns, branding, etc.). In this case, if you have several files that have been moved or renamed, you can have a single rule that catches all of them and redirects them accordingly. Similarly, another example could be an incremental migration from one technology to another, where, say, you are moving from classic ASP to ASP.NET and, as you rewrite some of the old ASP pages into ASPX pages, you want to start serving them without breaking any links or losing search engine relevance.
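One way to express this is a rewrite map of old-to-new URLs plus a single redirect rule that consults it; the map name and entries below are made-up examples:

<rewrite>
  <rewriteMaps>
    <rewriteMap name="MovedPages">
      <add key="/old-campaign.asp" value="/marketing/new-campaign.aspx" />
      <add key="/products.asp" value="/products.aspx" />
    </rewriteMap>
  </rewriteMaps>
  <rules>
    <!-- Redirect any URL found in the MovedPages map to its new address -->
    <rule name="Redirect Moved Pages" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{MovedPages:{REQUEST_URI}}" pattern="(.+)" />
      </conditions>
      <action type="Redirect" url="{C:1}" appendQueryString="false" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>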
Now, you can just keep adding to this table any broken link and specify its new address.
Another potential use of URL Rewrite is with RIA applications in the browser (AJAX, Silverlight, Flash) that are not easy for search engines to parse and index. You could use URL Rewrite to rewrite those URLs to static HTML versions of your content; however, you should make sure the content is consistent so you don't mislead users and search engines. For example, the following rule will rewrite all the files in the RIAFiles table to their static HTML counterparts, but only if the user agent is the MSNBot or the GoogleBot:
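A sketch of that rule and its map (the map entries and user-agent patterns are illustrative):

<rewrite>
  <rewriteMaps>
    <rewriteMap name="RIAFiles">
      <add key="/catalog/viewer.aspx" value="/static/catalog.html" />
    </rewriteMap>
  </rewriteMaps>
  <rules>
    <!-- Serve static HTML to search engine bots for the mapped RIA pages -->
    <rule name="Static HTML for Search Bots" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{HTTP_USER_AGENT}" pattern="msnbot|googlebot" />
        <add input="{RIAFiles:{REQUEST_URI}}" pattern="(.+)" />
      </conditions>
      <action type="Rewrite" url="{C:1}" />
    </rule>
  </rules>
</rewrite>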
Related to this, you might want to prevent search engines from crawling certain files (or your entire site). For that you can use Robots.txt semantics and a "disallow" entry; however, you can also use URL Rewrite to enforce this with more functionality, such as blocking only a specific user agent:
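For example, a rule like the following (the path and user-agent pattern are illustrative) returns a 403 to a particular crawler:

<rule name="Block Specific Crawler" stopProcessing="true">
  <match url="^private/.*" />
  <conditions>
    <add input="{HTTP_USER_AGENT}" pattern="SomeBotName" />
  </conditions>
  <action type="CustomResponse" statusCode="403" statusReason="Forbidden" statusDescription="Crawling this content is not allowed" />
</rule>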
There are several other things you can do to ensure that your Web site is friendly to search engines. Most of them require changes to your application, but they are certainly worth the effort, for example:
In this blog we are going to write an example of how to extend the SEO Toolkit functionality. For that, we are going to pretend our company has a large Web site that includes several images, and we are interested in making sure all of them comply with a certain standard: let's say all of them should be smaller than 1024x768 pixels and their quality should be no less than 16 bits per pixel. Additionally, we would also like to be able to make custom queries that later allow us to further analyze the contents of the images, filter based on directories, and more.
For this we will extend the SEO Toolkit crawling process to perform additional processing for images, adding the following new capabilities: custom violations for images that are larger than 1024x768 pixels or that use a pixel format below 16 bits per pixel, and custom fields (Image Width, Image Height, and Image Pixel Format) that can be used in queries, filters, and groups.
A crawler module is a class that extends the crawling process in Site Analysis to provide custom functionality while processing each URL. By deriving from this class you can easily raise your own set of violations or add your own data and links to any URL.
It includes three main methods: BeginAnalysis, Process, and EndAnalysis.
Create a Class Library in Visual Studio and add the code shown below.
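The full listing is longer than what fits here, but a sketch of its shape looks like the following; the CrawlerModule base class and CrawlerProcessContext type come from the SEO Toolkit's crawler assembly (Microsoft.Web.Management.SEO.Crawler), and the details in the comments follow the explanation below rather than exact API signatures:

using System;
using Microsoft.Web.Management.SEO.Crawler;

namespace SampleCrawlerModule {

    // Sketch of a crawler module that inspects image responses.
    internal class ImageExtension : CrawlerModule {

        public override void BeginAnalysis() {
            // Register the custom report properties (image width, height and
            // pixel format) with the report through the Crawler property so
            // they show up with friendly names and the right primitive types.
        }

        public override void Process(CrawlerProcessContext context) {
            // 1. Only continue for image content types.
            // 2. Decode the image from the response stream (for example with
            //    System.Drawing.Image.FromStream).
            // 3. Raise custom violations (larger than 1024x768, pixel format
            //    below 16 bits per pixel) defined in a Violations helper class,
            //    and store the width, height and pixel format for the URL.
        }

        public override void EndAnalysis() {
            // Nothing to clean up in this sample.
        }
    }
}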
As you can see, in BeginAnalysis the module registers three new properties with the report using the Crawler property. This is only required if you want to provide custom display text or use a type other than string. Note that the current version only allows primitive types like Integer, Float, DateTime, etc.
In the Process method it first makes sure that it only runs for known content types, and then it performs the validations, raising a set of custom violations that are defined in the Violations static helper class. Note that we load the content from the Response Stream, which is the property that contains the raw response received from the server. If you were analyzing text, the Response property would contain the content instead (this is based on content type, so HTML, XML, CSS, etc. will be kept in this string property).
When running inside IIS Manager, crawler modules need to be registered as a standard UI module first, and then, inside their initialization, they need to register themselves using the IExtensibilityManager interface. In this case, to keep the code as simple as possible, everything is added in a single file. So add a new file called "RegistrationCode.cs" and include the contents below:
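The registration file looks roughly like this sketch; it follows the standard IIS Manager ModuleProvider/Module extensibility pattern, but the exact IExtensibilityManager registration call is written from memory and may differ slightly:

using System;
using Microsoft.Web.Management.Client;
using Microsoft.Web.Management.SEO.Crawler;
using Microsoft.Web.Management.Server;

namespace SampleCrawlerModule {

    // Standard IIS Manager module provider so the assembly can be listed
    // in Administration.config.
    internal class SampleCrawlerModuleProvider : ModuleProvider {

        public override ModuleDefinition GetModuleDefinition(IManagementContext context) {
            return new ModuleDefinition(Name, typeof(SampleCrawlerModule).AssemblyQualifiedName);
        }

        public override Type ServiceType {
            get { return null; }
        }

        public override bool SupportsScope(ManagementScope scope) {
            return true;
        }
    }

    // Client-side module; its Initialize method registers the crawler
    // extension so the Site Analysis feature picks it up.
    internal class SampleCrawlerModule : Module {
        protected override void Initialize(IServiceProvider serviceProvider, ModuleInfo moduleInfo) {
            base.Initialize(serviceProvider, moduleInfo);

            // Exact overload of RegisterExtension is an assumption.
            IExtensibilityManager em =
                (IExtensibilityManager)serviceProvider.GetService(typeof(IExtensibilityManager));
            em.RegisterExtension(typeof(CrawlerModule), new ImageExtension());
        }
    }
}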
This code defines a standard UI IIS Manager module and in its client-side initialize method it uses the IExtensibilityManager interface to register the new instance of the Image extension. This will make it visible to the Site Analysis feature.
To test it we need to add the UI module to Administration.config, which also means that the assembly needs to be registered in the GAC.
To Strongly name the assembly
In Visual Studio you can do this easily from the menu "Project -> Properties": select the "Signing" tab, check "Sign the assembly", and choose a key file; if you don't have one you can just choose New and specify a name.
After this you can compile, and you should now be able to add the assembly to the GAC.
To GAC it
If you have the Windows SDK installed, you should be able to call gacutil like in my case:
"\Program Files\Microsoft SDKs\Windows\v6.0A\bin\gacutil.exe" /if SampleCrawlerModule.dll
(Note, you could also just open Windows Explorer, navigate to c:\Windows\assembly and drag & drop your file in there, that will GAC it automatically).
Finally, to see the exact name that should be used in Administration.config, run the following command:
"\Program Files\Microsoft SDKs\Windows\v6.0A\bin\gacutil.exe" /l SampleCrawlerModule
In my case it displays:
SampleCrawlerModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6f4d9863e5b22f10, …
Finally register it in Administration.config
Open Administration.config in Notepad using an elevated instance, find the closing </moduleProviders> tag, and add an entry like the one below, replacing the right values for Version and PublicKeyToken:
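Something like this (the provider name is arbitrary; the type string must match the gacutil /l output plus the namespace-qualified ModuleProvider class):

<moduleProviders>
  <!-- ...existing entries... -->
  <add name="SEOSampleCrawlerModule"
       type="SampleCrawlerModule.SampleCrawlerModuleProvider, SampleCrawlerModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6f4d9863e5b22f10" />
</moduleProviders>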
After registration you should be able to launch IIS Manager and navigate to Search Engine Optimization. Start a new analysis of your Web site. Once it completes, if there are any violations you will see them in the Violations Summary and the other reports; for example, see below all the violations in the "Images" category.
Since we also extended the metadata by including the new fields (Image Width, Image Height, and Image Pixel Format) now you can use them with the Query infrastructure to easily create a report of all the images:
And since they are standard fields, they can be used in Filters, Groups, and any other functionality, including exporting data. So for example the following query can be opened in the Site Analysis feature and will display an average of the width and height of images summarized by type of image:
And of course violation details are shown as specified, including Recommendation, Description, etc:
As you can see, extending the SEO Toolkit with a crawler module allows you to provide additional information, whether metadata, violations, or links, for any document being processed. This can be used to add support for content types not supported out of the box, such as PDF, Office documents, or anything else that you need. It can also be used to extend the metadata by writing custom code to wire data from other systems into the report, giving you the ability to exploit that data using the Query capabilities of Site Analysis.
Today there was a question in the IIS.net forums asking how to expose two different Internet sites from another site, making them look as if they were subdirectories of the main site.
So for example the goal was to have a site: www.site.com expose a www.site.com/company1 and a www.site.com/company2 and have the content from “www.company1.com” served for the first one and “www.company2.com” served in the second one. Furthermore we would like to have the responses cached in the server for performance reasons. The following image shows a simple diagram of this:
To clarify this, imagine that a user requests the page at http://www.site.com/company1/default.aspx with his browser; based on the specification above, the request is proxied/routed to http://www.company1.com/default.aspx on the server side. So far so good. However, imagine that the markup returned by that page turns out to have an image tag like "<img src=/some-image.png />". The problem is that the browser will now resolve that path against the base path of the original request it made, which was http://www.site.com/company1/default.aspx, resulting in a request for the image at http://www.site.com/some-image.png instead of the right "company1" folder, which would be http://www.site.com/company1/some-image.png.
Do you see the problem? Basically, any relative paths (and absolute paths as well, for that matter) need to be translated to the new URL structure imposed by the original goal.
So how do we do it then?
URL Rewrite 2.0 includes the ability to rewrite the content of a response as it is being served back to the client, which allows us to rewrite those links without having to touch the actual application.
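A sketch of the configuration (explained below) looks roughly like this; it is shown only for the "company1" path, a symmetric set of rules handles "company2", and the rule and precondition names are illustrative:

<rewrite>
  <rules>
    <!-- Inbound: route /company1/* to www.company1.com, preserving http vs https -->
    <rule name="Route to Company1" stopProcessing="true">
      <match url="^company1/(.*)" />
      <conditions>
        <add input="{CACHE_URL}" pattern="^(https?)://" />
      </conditions>
      <serverVariables>
        <!-- Clear Accept-Encoding so the response comes back uncompressed
             and the outbound rules can rewrite it -->
        <set name="HTTP_ACCEPT_ENCODING" value="" />
      </serverVariables>
      <action type="Rewrite" url="{C:1}://www.company1.com/{R:1}" />
    </rule>
  </rules>
  <outboundRules>
    <!-- Outbound: fix absolute links that point to the backend host -->
    <rule name="Rewrite absolute links" preCondition="IsHtml">
      <match filterByTags="A, Area, Base, Form, Frame, IFrame, Img, Input, Link, Script" pattern="^http(s)?://www\.company1\.com/(.*)" />
      <action type="Rewrite" value="/company1/{R:2}" />
    </rule>
    <!-- Outbound: fix root-relative links such as /some-image.png -->
    <rule name="Rewrite relative links" preCondition="IsHtml">
      <match filterByTags="A, Area, Base, Form, Frame, IFrame, Img, Input, Link, Script" pattern="^/(.*)" />
      <action type="Rewrite" value="/company1/{R:1}" />
    </rule>
    <preConditions>
      <preCondition name="IsHtml">
        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>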
The first rule is an inbound rewrite rule that basically captures all the requests to the root folder /company1/*, so if using Default Web Site, anything going to http://localhost/company1/* will be matched by this rule and it will rewrite it to www.company1.com respecting the HTTP vs HTTPS traffic.
One thing to highlight, which took me a bit of time to figure out, is the "serverVariables" entry in that rule that overwrites the Accept-Encoding header. The reason I do this is that if you do not remove that header, the response will likely be compressed (gzip or deflate), and output rewriting is not supported in that case; you will end up with an error message like:
HTTP Error 500.52 - URL Rewrite Module Error. Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded ("gzip").
Also note that, for security reasons, to be able to use this feature you need to explicitly allow the server variable. See enabling server variables here.
The last two rules just rewrite the links, scripts, and other resources so that their URLs are translated to the new structure. The first one rewrites absolute paths, and the last one rewrites relative paths. Note that relative paths using ".." will not work with these rules; you could easily fix the rule above, but I was too lazy to do that, and since I never use those when I create a site, it works for me :)
A huge added value of using ARR is that, with a couple of clicks, we can enable disk caching so that responses are cached locally on www.site.com and not every single request pays the price of going to the backend servers.
As easy as that now you will see caching working and your site will act as a container of other servers in the internet. Pretty cool hah! :)
So in this post we saw how, with literally a few lines of XML, URL Rewrite and ARR let us enable a proxy/routing scenario with the ability to rewrite links, and furthermore with caching support.
Today we are releasing the IIS Search Engine Optimization Toolkit. The IIS SEO Toolkit is a set of features that aim to help you keep your Web site and its content in good shape for both Users and Search Engines.
The features that are included in this Beta release include:
Check out the great blog post about the IIS SEO Toolkit by ScottGu, or this short IIS SEO video showing some of its capabilities.
One of the problems with many similar tools out there is that they require you to publish the updates to your production sites before you can even use them, and of course they are not usable for intranet or internal applications that are not exposed to the Web. The IIS Search Engine Optimization Toolkit can be used internally in your own development or staging environments, giving you the ability to clean up the content before publishing to the Web. This way your users do not pay the price of broken links once you publish, and you do not need to wait for those tools or for search engines to crawl your site to finally discover that you broke things.
For developers this means that they can now easily look at the potential impact of removing or renaming a file: easily check which files refer to a page, and which files can be removed because they are only referenced by that page.
One thing that is important to clarify is that you can target and analyze your production sites if you want to, and you can target Web applications running on any platform, whether it's ASP.NET, PHP, or plain HTML files, running on your local IIS or on any other remote server.
Bottom line: try it against your Web site, look at the different features, and give us feedback on additional reports, options, violations, content to parse, etc. Post any comments or questions in the IIS Search Engine Optimization Forum.
The IIS SEO Toolkit documentation can be found at http://learn.iis.net/page.aspx/639/using-iis-search-engine-optimization-toolkit/, but remember this is only Beta 1 so we will be adding more features and content.
Today we are releasing Technical Preview 2 of the IIS Admin Pack, an update of the release we made in February.
Install the Admin Pack and Database Manager today!
Admin Pack (x86): http://www.iis.net/downloads/default.aspx?tabid=34&i=1682&g=6
Database Manager (x86): http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1684
Admin Pack (x64): http://www.iis.net/downloads/default.aspx?tabid=34&i=1683&g=6
Database Manager (x64): http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1685
There are a lot of interesting features we've added to almost every component for this release:
As you can see, extensibility is a big theme now, and in my following posts I will show how to extend both IIS Reports and DBManager.
The other day I was asked if I knew about a tool that would allow users to easily analyze IIS log files, processing them and looking for specific data in a way that could easily be automated. My recommendation was that if they were comfortable using a SQL-like language, they should use Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data such as IIS logs, Event Viewer entries, XML files, CSV (comma-separated values) files, the file system and others; it allows you to export the results of the queries to many output formats such as CSV, XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5.
To use it you just need to install it and use the LogParser.exe that is found in its installation directory (on my x64 machine it is located at: C:\Program Files (x86)\Log Parser 2.2).
I also thought I'd share some of my favorite queries. To run them, just execute LogParser.exe, making sure to specify that the input is an IIS log file (-i:W3C); for ease of use, in this case we export to a CSV file that can then be opened in Excel (-o:CSV) for further analysis:
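Here are a few representative examples; the log file pattern (u_ex*.log) and the output file names are just placeholders for your own paths:

REM Top 25 most requested URLs
LogParser.exe -i:W3C -o:CSV "SELECT TOP 25 cs-uri-stem, COUNT(*) AS Hits INTO TopUrls.csv FROM u_ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"

REM Distribution of response status codes
LogParser.exe -i:W3C -o:CSV "SELECT sc-status, COUNT(*) AS Requests INTO StatusCodes.csv FROM u_ex*.log GROUP BY sc-status ORDER BY Requests DESC"

REM Slowest pages by average time-taken (in milliseconds)
LogParser.exe -i:W3C -o:CSV "SELECT TOP 25 cs-uri-stem, AVG(time-taken) AS AvgMs, COUNT(*) AS Hits INTO SlowPages.csv FROM u_ex*.log GROUP BY cs-uri-stem ORDER BY AvgMs DESC"

REM Requests per hour, converted to local time
LogParser.exe -i:W3C -o:CSV "SELECT QUANTIZE(TO_LOCALTIME(TO_TIMESTAMP(date, time)), 3600) AS Hour, COUNT(*) AS Requests INTO RequestsPerHour.csv FROM u_ex*.log GROUP BY Hour ORDER BY Hour"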
A final note: any time you deal with Date and Time, remember to use the TO_LOCALTIME function to convert the log times to your local time, otherwise you will find it very confusing when your entries seem to be reported incorrectly.
If you need any help you can always visit the Log Parser Forums to find more information or ask specific questions.
Any other useful queries I missed?