• CarlosAg Blog

    Redirects, 301, 302 and IIS SEO Toolkit


In the URL Rewrite forum somebody posted the question "are redirects bad for search engine optimization?". The answer is: not necessarily. Redirects are an important tool for Web sites, and used in the right context they are actually a required tool. But first, a bit of background.

    What is a Redirect?

A redirect, in simple terms, is a way for the server to indicate to a client (typically a browser) that a resource has moved; it does this by means of an HTTP status code and an HTTP Location header. There are different types of redirects, but the most common ones are:

• 301 - Moved Permanently. This type of redirect signals that the resource has permanently moved and that any further attempts to access it should be directed to the location specified in the Location header.
• 302 - Found (commonly treated as a temporary redirect). This type of redirect signals that the resource is temporarily located in a different place, but any further attempts to access the resource should still go to the original location.

Below is an example of the response sent by the server when the requested resource has been redirected:

    HTTP/1.1 302 Found
    Connection: Keep-Alive
    Content-Length: 161
    Content-Type: text/html; charset=utf-8
    Date: Wed, 10 Jun 2009 17:04:09 GMT
    Location: /sqlserver/2008/en/us/default.aspx
    Server: Microsoft-IIS/7.0
    X-Powered-By: ASP.NET


    So what do redirects mean for SEO?

One of the most important factors in SEO is the concept called organic linking; in simple words it means that your page gets extra points for every link that external Web sites have pointing to your page. So now imagine the Search Engine Bot is crawling an external Web site and finds a link pointing to your page (say /some-page), and when it tries to visit that page it runs into a redirect to another location (say /somepage). Now the Search Engine has to decide whether it should add the original "some-page" to its index, whether it should "add the extra points" to the new location or to the original location, or whether it should just ignore it entirely. Well, the answer is not that simple, but a simplification of it could be:

• if you return a 301 (Permanent Redirect) you are telling the search engine that the resource moved to a new location permanently and that all further traffic should be directed to that location. This clearly means that the search engine should ignore the original location (some-page) and index the new location (somepage), that it should add all the "extra points" to it, and that any further references to the original location should now be treated as if they were references to the new one.
• if you return a 302 (Temporary Redirect) the answer can depend on the search engine, but it is likely to index the original location and ignore the new location entirely (unless it is directly linked in other places), since the redirect is only temporary and the server could at any given point stop redirecting and start serving the content from the original location. This of course makes it very ambiguous how to deal with the "extra points"; they will likely be added to the original location and not the new destination.


    Enter IIS SEO Toolkit

The IIS Search Engine Optimization Toolkit has a couple of rules that look for different patterns related to redirects. The Beta version includes the following:

1. The redirection did not include a location header. Believe it or not, there are a couple of applications out there that do not generate a Location header, which completely breaks the redirection model. If your application is one of them, the tool will let you know.
2. The redirection response results in another redirection. In this case it detected that your page (A) links to another page (B), which caused a redirection to another page (C), which resulted in yet another redirection to another page (D). The tool is trying to let you know that the number of redirects could significantly impact the SEO "bonus points", since the organic linking can be broken by all this jumping around, and that you should consider linking directly from (A) to (D), or whatever page is supposed to be the final destination.
3. The page contains unnecessary redirects. In this case it detected that your page (A) links to another page (B) in your Web site that resulted in a redirect to another page (C) within your Web site. Note that this is an informational rule, since there are valid scenarios where you would want this behavior, such as tracking page impressions, login pages, etc., but in many cases you do not need them. Since the tool detects that you own all three pages, it suggests you look and see whether it wouldn't be better to just change the markup in (A) to point directly to (C) and avoid the (B) redirection entirely.
4. The page uses a refresh definition instead of using redirection. Finally, IIS SEO will flag when it detects that the refresh meta-tag is being used as a means of causing a redirection. This practice is not recommended, since the tag does not include any semantics telling search engines how to process the content, and in many cases it is actually considered a tactic to confuse search engines, but I won't go there.
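To see why rules 2 and 3 matter, here is a minimal sketch (not SEO Toolkit code, just an illustration) of how a crawler might collapse a chain of redirects: given a mapping of URL to redirect target, it finds the final destination, counts the hops you could eliminate, and detects loops.

```python
def resolve_redirects(redirects, url, max_hops=10):
    """Follow url through the redirect map; return (final_url, hops)."""
    seen = set()
    hops = 0
    while url in redirects:
        if url in seen or hops >= max_hops:
            raise ValueError("redirect loop or too many hops at %s" % url)
        seen.add(url)
        url = redirects[url]
        hops += 1
    return url, hops

# Page (A) links to /a, which redirects to /b, then /c, then /d:
redirects = {"/a": "/b", "/b": "/c", "/c": "/d"}
final, hops = resolve_redirects(redirects, "/a")
# final == "/d", hops == 3: page (A) should just link straight to /d
```

The same idea underlies the violations above: every extra hop is crawl time and "link juice" at risk, so the fix is always to link to the final destination directly.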

So what does it look like? In the image below I ran Site Analysis against a Web site and it found a few of these violations (2 and 3).


Notice that when you double-click a violation it will show you the details and give you direct access to the related URLs so that you can look at the content and all the relevant information about them to make a decision. From that menu you can also see which other pages link to the pages involved, as well as launch them in the browser if needed.


Similarly, for all the other violations it tries to explain the reason each is being flagged, as well as the recommended actions to follow.

The IIS Search Engine Optimization Toolkit can also help you find all the different types of redirects and the locations where they are being used in a very easy way: just select Content->Status Code Summary in the Dashboard view and you will see all the different HTTP status codes received from your Web site. Notice in the image below how you can see the number of redirects (in this case 18 temporary redirects and 2 permanent redirects). You can also see how much content they accounted for, in this case about 2.5 KB. (Note that I've seen Web sites generate a large amount of useless content in redirect traffic, a real waste of bandwidth.) You can double-click any of those rows to see the details of the URLs that returned that status code, and from there you can see who links to them, etc.


    So what should I do?

    1. Know your Web site. Run Site Analysis against your Web site and see all the different redirects that are happening.
2. Try to minimize redirections. If possible, with the knowledge gained in step 1, look for places where you can update your content to reduce the number of redirects.
3. Use the right redirect. Understand the intent of the redirection you are trying to do and make sure you are using the right semantics (is it permanent or temporary?). Whenever possible, prefer permanent redirects (301).
4. Use URL Rewrite to easily configure them. URL Rewrite allows you to configure a set of rules, using both regular expressions and wildcards, that live along with your application (no administrative privileges required) and that let you set the right redirection status code. A must for SEO. More on this in a future blog.
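As an illustration of step 4, here is a hedged sketch of a URL Rewrite rule that canonicalizes the host name with a permanent redirect (the host name example.com is a placeholder; adjust the pattern and target for your own domain). It goes inside `<system.webServer><rewrite><rules>` in web.config:

```xml
<rule name="Canonical host name" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_HOST}" pattern="^example\.com$" />
  </conditions>
  <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
</rule>
```

The key part for SEO is redirectType="Permanent", which makes the rule emit a 301 instead of the default 302.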


So going back to the original question: "are redirects bad for Search Engine Optimization?". Not necessarily; they are an important tool used by Web applications for many reasons, such as:

• Canonicalization. Ensuring that users access your site consistently with www. or without www., using permanent redirects.
• Page impressions and analytics. Using temporary redirects to ensure that the original link is preserved and counters work as expected.
• Content reorganization. Whether you are changing your host due to a brand change or just renaming a page, you should make sure to use permanent redirects to keep your page rankings.
• Etc.

Just make sure you don't abuse them by having redirects to redirects, unnecessary redirects, or infinite loops, and use the right semantics.


    Adding ASP.NET Tracing to IIS 7.0 Failed Request Tracing


IIS 7.0 Failed Request Tracing (for historical reasons we internally refer to it as FREB, since it used to be called Failed Request Event Buffering, and there are no "good-sounding-decent" acronyms for the new name) is probably the best diagnostics tool IIS has ever had that doesn't require debugging skills. In a simplistic way, it exposes all the interesting events that happen during request processing so that you can really understand what went wrong with any request. To learn more, see the Failed Request Tracing documentation.

What is not immediately obvious is that you can use these tracing capabilities from your ASP.NET applications to output your tracing information into our infrastructure, so that your users get a holistic view of the request.

When you are developing in ASP.NET there are typically two tracing infrastructures you are likely to use: ASP.NET Page Tracing and System.Diagnostics tracing. In recent versions they have been better integrated (see the writeToDiagnosticsTrace attribute), but you still want to know about both of them.

    Today I'll just focus on logging ASP.NET Tracing to FREB, and in a future post I will show how to do it for System.Diagnostics Tracing.

To send the ASP.NET traces to FREB you just need to enable ASP.NET tracing and use the ASPNET trace provider, and you will get those entries in the FREB log. The following web.config will enable FREB and ASP.NET tracing. (Note that you need to go to the Default Web Site and enable Failed Request Tracing so that these rules get executed.)

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.web>
    <trace enabled="true" pageOutput="false" />
  </system.web>
  <system.webServer>
    <tracing>
      <traceFailedRequests>
        <add path="*.aspx">
          <traceAreas>
            <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
            <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,Compression,Cache,RequestNotifications,Module,Rewrite" verbosity="Verbose" />
          </traceAreas>
          <failureDefinitions statusCodes="100-600" />
        </add>
      </traceFailedRequests>
    </tracing>
  </system.webServer>
</configuration>

    Now if you have a sample page like the following:

<%@ Page Language="C#" %>
<script runat="server">
    void Page_Load() {
        Page.Trace.Write("Hello world from my ASP.NET Application");
        Page.Trace.Warn("This is a warning from my ASP.NET Application");
    }
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    </form>
</body>
</html>

The result is that in \inetpub\logs\FailedReqLogFiles\ you will get an XML file that includes all the details of the request, including the page traces from ASP.NET. Note that we provide an XSLT transformation that parses the XML file and presents a friendly view with different views of the trace file. For example, below only the warning is shown in the Request Summary view:


There is also a Request Details view where you can filter down to all the ASP.NET page traces, which includes both of the traces we added in the page code.



    Install IIS SEO Toolkit in Windows 8.1


Today I was trying to install the SEO Toolkit on IIS 8.5 running on my Windows 8.1 desktop machine. It appears that the Web Platform Installer has not been updated to allow its installation, but you can easily install it using the MSI directly, so feel free to install it from:

    IIS SEO Toolkit x64 / IIS SEO Toolkit x86


    Using Windows Authentication with Web Deploy and WMSVC


By default in Windows Server 2008, when you are using the Web Management Service (WMSVC) and Web Deploy (also known as MSDeploy), Basic authentication is used to perform your deployments. If you want to enable Windows Authentication you will need to set a registry key so that the Web Management Service also supports NTLM. To do this, update the registry on the server by adding a DWORD value named "WindowsAuthenticationEnabled" under HKEY_LOCAL_MACHINE\Software\Microsoft\WebManagement\Server, and set it to 1. If the Web Management Service is already started, the setting will take effect after the service is restarted.
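For example, from an elevated command prompt on the server, the change described above could be applied like this (a sketch of the same registry edit; WMSVC is the service name of the Web Management Service):

```
reg add "HKLM\Software\Microsoft\WebManagement\Server" /v WindowsAuthenticationEnabled /t REG_DWORD /d 1 /f
net stop WMSVC && net start WMSVC
```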

    For more details on other configuration options see:


    IIS SEO Toolkit: Find warnings of HTTP content linked by pages using HTTPS


Are you a developer/owner/publisher/etc. of a site that uses HTTPS (SSL) for secure access? If you are, please continue reading.

    Have you ever visited a Web site that is secured using SSL (Secure Sockets Layer) just to get an ugly Security Warning message like:

    Do you want to view only the webpage content that was delivered securely?

    This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage.


How frustrating is this for you? Do you think end-users know the right answer to the question above? Honestly, I think the Yes/No buttons and the phrasing of the question could even cause me to click the wrong option.

What this warning is basically telling the user is that, even though he/she navigated to a page that was supposedly secured using SSL, the page is consuming resources that come from an unsecured location. These could be scripts, style sheets or other types of objects that could potentially pose a security risk, since they could be tampered with on the way or come from different locations.

As a site owner/developer/publisher/etc. you should always make sure that you are not going to expose your customers to such a bad experience, leaving them with a question they can't possibly answer right. If they choose 'Yes' they will get an incomplete experience: broken images, broken scripts or something worse. Otherwise they can choose 'No', which is even worse, since that means you are actually teaching them to ignore these warnings, which could in some cases be real signs of security issues.

Bottom line: any issue like this should be treated as a bug and fixed in the application if possible.

But the big question is: how do you find these issues? Well, the answer is very simple yet extremely time-consuming: navigate to every single page of your site using SSL and, as you do, examine every single resource in the page (styles, objects, scripts, etc.) and see if its URL points to a non-HTTPS location.
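If you wanted to script a rough version of that manual check yourself, it might look something like this (a simplification, not part of any Microsoft tool: it only scans src attributes, so it catches scripts, images and frames but not, for example, style sheets referenced via link href):

```python
import re

def insecure_references(html):
    """Return the http:// URLs referenced via src attributes in the page."""
    return re.findall(r'src\s*=\s*["\'](http://[^"\']+)["\']', html, re.I)

page = '<img src="http://example.com/logo.png"/><a href="https://example.com/x">x</a>'
print(insecure_references(page))  # ['http://example.com/logo.png']
```

Doing this reliably across a whole site, including CSS and every resource type, is exactly the tedious part the SEO Toolkit automates.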

    Enter the IIS Search Engine Optimization (SEO) Toolkit.

The good news is that with the SEO Toolkit it is extremely simple to find these issues.

1. Start a new analysis in the IIS SEO Toolkit using the HTTPS URL of your site.
    2. Once the analysis is over just select the option “Query->Open Query” and open the following XML file:
3. <?xml version="1.0" encoding="utf-8"?>
  <query dataSource="links">
    <expression field="LinkingUrl" operator="Begins" value="https://" />
    <expression field="LinkedUrl" operator="Begins" value="http://" />
    <expression field="LinkType" operator="NotEquals" value="Link" />
    <expression field="LinkType" operator="NotEquals" value="Rss" />
    <field name="LinkingUrl" />
    <field name="LinkedUrl" />
    <field name="LinkedStatus" />
    <field name="LinkType" />
  </query>
4. Just by doing that, a Query window will open showing all the links in your site that have this problem. Note that the query simply looks for all the resources that are linked from a URL that begins with https:// while the target resource uses http://, excluding normal links and RSS references (since those do not have this problem).
5. This is how my quick repro looks. Note that it actually tells you the type of resource (an image and a style sheet in this case). Additionally, if you double-click a row it will show you exactly the place in the markup where the problem occurs so you can easily fix it.



Using the IIS SEO Toolkit and its powerful Query Engine you can easily detect conditions on your site that would otherwise take an incredible amount of time and be prohibitively expensive to check constantly.


    SEO Tip - Beware of the Login pages - add them to Robots Exclusion


A lot of sites today let users sign in to get some sort of personalized content, whether it's a forum, a news reader, or an e-commerce application. To simplify their users' lives they usually give them the ability to log on from any page of the site they are currently looking at. Similarly, in an effort to keep navigation simple, Web sites usually generate dynamic links so there is a way to go back to the page the user was on before visiting the login page, something like: <a href="/login?returnUrl=/currentUrl">Sign in</a>.

If your site has a login page you should definitely consider adding it to the robots exclusion list, since it is a good example of the things you do not want a search engine crawler to spend its time on. Remember, crawlers give your site a limited amount of time and you really want them to focus on what is important.

    Out of curiosity I searched for login.php and login.aspx and found over 14 million login pages… that is a lot of useless content in a search engine.

Another big reason is that having this kind of URL, varying for each page, means there will be hundreds of variations for crawlers to follow, like /login?returnUrl=page1.htm, /login?returnUrl=page2.htm, etc., so you have basically doubled the work for the crawler. Even worse, in some cases if you are not careful you can easily cause an infinite loop when the same "login-link" is rendered on the login page itself: you get /login?returnUrl=login as the link, and when you click that you get /login?returnUrl=login?returnUrl=login... and so on, with an ever-changing URL for each page on your site. Note that this is not hypothetical; it is an actual example from a few famous Web sites (which I will not disclose). Of course crawlers are not that silly and will not crawl your Web site infinitely; they will stop after looking at the same resource /login a few hundred times, but this means you are reducing the time they spend looking at what really matters to your users.

    IIS SEO Toolkit

If you use the IIS SEO Toolkit, it will detect the condition where the same resource (like login.aspx) is used too many times (varying only the query string) and will give you a violation error like: Resource is used too many times.


    So how do I fix this?

    There are a few fixes, but by far the best thing to do is just add the login page to the Robots Exclusion protocol.

    1. Add the URL to the /robots.txt, you can use the IIS Search Engine Optimization Toolkit to edit the robots file, or just drop a file with something like:
      User-agent: *
      Disallow: /login
    2. Alternatively (or additionally)  you can add a rel attribute with the nofollow value to tell them not to even try. Something like:
      <a href="/login?returnUrl=page" rel="nofollow">Log in</a>
3. Finally, make sure to use the Site Analysis feature in the IIS SEO Toolkit to verify you don't have this kind of behavior. It will automatically flag a violation when it identifies that the same "page" (with different query strings) has been visited over 500 times.
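The check in step 3 can be sketched in a few lines (an assumption: this is a simplified stand-in for the SEO Toolkit's rule, not its actual code): group crawled URLs by path, ignoring the query string, and flag any path that repeats past a threshold.

```python
from collections import Counter
from urllib.parse import urlsplit

def overused_resources(urls, threshold=500):
    """Return paths that appear more than `threshold` times, ignoring query strings."""
    counts = Counter(urlsplit(u).path for u in urls)
    return [path for path, n in counts.items() if n > threshold]

urls = ["/login?returnUrl=page%d" % i for i in range(501)] + ["/home"]
print(overused_resources(urls))  # ['/login']
```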


To summarize: always add the login page to the robots exclusion protocol file; otherwise you will end up:

    1. sacrificing valuable "search engine crawling time" in your site.
    2. spending unnecessary bandwidth and server resources.
3. potentially even blocking crawlers from your content.

    Razor Migration Notes 2: Use URL Rewrite to maintain your Page rankings (SEO)


    This is the second note of the series:

    1: Moving a SitemapPath Control to ASP.NET Web Pages

My current Web site was built using ASP.NET 2.0 and WebForms, which means that all of my pages have the extension .aspx. While moving each page to ASP.NET Web Pages its extension changes to .cshtml, and while I'm sure I could configure things to keep the .aspx extensions, this is a good opportunity to "start clean". Furthermore, in ASP.NET Web Pages you can also access pages without the extension at all: if you have /my-page.cshtml, you can also get to it using just /my-page. Given that I will go through this migration anyway, I decided to use the clean URL format (no extension) and, in the process, get better URLs for SEO purposes. This is also a good time to enforce lower-case semantics, get rid of the ugly camel casing, and move to a much more standard, search-engine-friendly format using "-".
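The renaming involved is mechanical enough to sketch (a hypothetical helper, not part of the migration code; note that some pages, as the rewrite map below shows, were renamed by hand instead):

```python
import re

def slugify(name):
    """Turn a camelCase page name into a lower-case, hyphen-separated slug."""
    # Insert '-' before each capital that follows a letter or digit, then lower-case.
    return re.sub(r'(?<=[a-z0-9])([A-Z])', r'-\1', name).lower()

print(slugify("configureComPlus"))   # configure-com-plus
print(slugify("createVsTemplate"))   # create-vs-template
```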

    Use URL Rewrite to make sure to keep your Page Ranking and no broken links

The risk, of course, is that if you just change the URLs of your site you will end up not only with lots of 404s (Not Found), but your page ranking will be reset and you will lose all the "juice" that external links and history have provided. The right way to do this is to make sure that you perform a permanent redirect (301) from the old URL to the new URL; this way search engines (and browsers) will know that the content has permanently moved to a new location and will "pass all the page ranking" to the new page.

There are many ways to achieve this, but I happen to like URL Rewrite a lot, so I decided to use it. I basically created one rule that uses a Rewrite Map (think of it as a dictionary) to match the URL and, if it matches, perform a permanent redirect to the new one. So, for example, if /aboutme.aspx is requested, it will 301 to /about-me:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Redirect for OldUrls" stopProcessing="true">
          <match url=".*"/>
          <conditions>
            <add input="{OldUrls:{REQUEST_URI}}" pattern="(.+)"/>
          </conditions>
          <action type="Redirect" url="{C:1}" appendQueryString="true" redirectType="Permanent" />
        </rule>
      </rules>
      <rewriteMaps>
        <rewriteMap name="OldUrls">
          <add key="/aboutme.aspx" value="/about-me"/>
          <add key="/soon.aspx?id=1" value="/coming-soon"/>
          <add key="/Articles/configureComPlus.aspx" value="/articles/configure-com-plus"/>
          <add key="/Articles/createChartHandler.aspx" value="/articles/create-aspnet-chart-handler"/>
          <add key="/Articles/createVsTemplate.aspx" value="/articles/create-vs-template"/>
        </rewriteMap>
      </rewriteMaps>
    </rewrite>
  </system.webServer>
</configuration>


Note that I could also have created a simple rule that just changes the extension to .cshtml; however, I decided that I also wanted to change the page names. The best thing is that you can do this incrementally, only redirecting once your new page is ready, or even switching back to the old one later if any problems occur.


Using URL Rewrite you can easily keep your SEO and your pages free of broken links. You can achieve lots more; check out: SEO made easy with IIS URL Rewrite 2.0 SEO templates – CarlosAg


    Razor Migration Notes 3: Use app_offline.htm to deploy the new version


    This is the third post on the series:

    1: Moving a SitemapPath Control to ASP.NET Web Pages

    2: Use URL Rewrite to maintain your Page rankings (SEO)


ASP.NET has a nice feature to help with deployment: you can drop an HTML file named app_offline.htm into your application and ASP.NET will unload all the assemblies and code it has loaded, letting you easily delete binaries and deploy the new version while still serving customers the friendly message you provide, telling them that your site is under maintenance.

One caveat, though, is that Internet Explorer users might still see the "friendly" error that IE displays instead of your nice message. This happens because of a page-size validation that IE performs. See Scott's blog on how to work around the problem: App_Offline.htm and working around the IE Friendly Errors
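For reference, a minimal sketch of what such a file might contain (the padding comment is there only because of the IE page-size issue just mentioned; the common workaround is to pad the file past IE's size threshold, usually cited as 512 bytes):

```html
<!-- app_offline.htm (sketch): served for every request while deploying -->
<html>
  <body>
    <h1>Site under maintenance</h1>
    <p>We are deploying a new version and will be back shortly.</p>
    <!-- Padding to push the response size past IE's friendly-error threshold
         so users see this message instead of IE's generic error page:
         xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
         xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
         xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
  </body>
</html>
```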

    Note: The live site is now running in .NET 4.0 and all using Razor.


    IIS SEO Toolkit and W3C Validation Service


One thing I’ve been asked several times about the SEO Toolkit is whether it does full standards validation of the markup and content it processes and, if not, to add support for more comprehensive standards validation, in particular XHTML and HTML 4.01. Currently the markup validation performed by the SEO Toolkit is really simple; its main goal is to make sure that the markup is correctly organized, for example that things like <b><i>Test</b></i> are not found in the markup. The primary reason is to make sure that basic blocks of markup are generally "easy" to parse by search engines and that the semantics will not be terribly broken if a link, text or style is not correctly closed (since all of them would affect SEO).

    So the first thing I would say is that we have heard the feedback and are looking at what we could possibly add in future versions, however why wait, right?

One thing many people do not realize is that the SEO Toolkit can be extended to add new violations, new metadata and new rules to the analysis process. During a demo I gave a few weeks ago, I decided to write a sample showing how to consume the online W3C Markup Validation Service from the SEO Toolkit.


    You can download the SEOW3Validator including the source code at

    How to install it

    To run it you just need to:

    1. Unzip the contents in a folder.
    2. Install the SEOW3Validator.dll assembly in the GAC:
      1. Open a Windows Explorer window and navigate to c:\Windows\assembly
      2. Drag and Drop the SEOW3Validator.dll to the c:\Windows\assembly explorer window.
3. Alternatively you can just run gacutil.exe /i SEOW3Validator.dll; gacutil is usually located at C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin or v7.0A.
      4. If you have problems with this, you could try just copying the assembly to the GAC (copy SEOW3Validator.dll c:\Windows\assembly\GAC_MSIL\SEOW3Validator\\SEOW3Validator.dll)
3. Register the moduleProvider in Administration.config: in an elevated prompt open C:\Windows\System32\Inetsrv\config\Administration.config and add the following line inside <moduleProviders>, right before the closing </moduleProviders>:
4.   <add name="SEOW3Validator"
  type="SEOW3Validator.SEOW3ValidatorModuleProvider, SEOW3Validator, Version=, Culture=neutral, PublicKeyToken=995ee9b8fa017847" />

You should now be able to run the SEO Toolkit just as before, but you will find new violations; for example, on my site I get the ones below. Notice that there is a new set of violations like W3 Validator – 68, etc., and all of them belong to the W3C category. (I would have liked better names, but the way the W3 API works is not really friendly for making this any better.)


    And when double clicking any of those results you get the details as reported by the W3 Validation Service:


    The Code

The code is actually pretty simple. The main class, SEOW3ValidatorExtension, derives from CrawlerModule and overrides the Process method to call the W3C Validation Service, sending the actual markup in the request; this means it does not matter whether your site is on an intranet or the Internet, it will work. For every warning and error returned by the validator it adds a new violation to the SEO report.

    The code looks like this:

    W3Validator validator = new W3Validator();
    W3ValidatorResults results = validator.Validate(context.UrlInfo.FileName,
        markup); // the second argument (the page markup) was elided in the original listing

    foreach (W3ValidatorWarning warning in results.Warnings) {
        // add a warning violation to the SEO report (body elided in the original listing)
    }

    foreach (W3ValidatorError error in results.Errors) {
        // add an error violation to the SEO report (body elided in the original listing)
    }



I created a helper class, W3Validator, that basically encapsulates the consumption of the W3C Validation Service. The code is far from what I would like it to be; there are some "interesting" decisions in the way the API is exposed. I would probably have designed the service differently and not returned the results formatted as HTML, given that this is an API/Web service that could be presented somewhere other than a browser. So a lot of the code just re-formats the results to look "decent", but to be honest I did not want to spend too much time on it, so everything was put together quite quickly. Also, regarding the names I used for violations: I did not want to hard-code specific message IDs, and since the error message was different for each result even within the same message ID, it was not easy to provide better messages. Anyway, overall it is pretty usable and should be a good way to do W3C validation.

    Note that one of the cool things you get for free is that since these are stored as violations, you can then re-run the report and use the Compare Report feature to see the progress while fixing them. Also, since they are stored as part of the report you will not need to keep running the validator over and over again but instead just open it and continue looking at them, as well as analyzing the data in the Reports and Queries, and be able to export them to Excel, etc.

    Hopefully this will give you a good example on some of the interesting things you can achieve with the SEO Toolkit and its extensibility.


    Backgammon and Connect4 for Windows Mobile


    During the holidays my wife and I went back to visit our families in Mexico City where we are originally from. Again, during the flights I had enough spare time to build a couple of my favorite games, Backgammon and Connect4.

I had already built both games for Windows using Visual Basic 5 almost 11 years ago, but as you would imagine I was far from proud of that implementation. So this time I started from scratch and ended up with what I think are better versions (still not the best code, but pretty decent for just a few hours of coding). In fact, the AI in the Backgammon version is a bit better, and Connect4 is faster and better suited to a mobile device.

You can visit the site from your PDA/Smartphone to install both games, or just click the images below to go to the install page for each of them. Enjoy, and feel free to add any feedback or feature requests as comments to this blog post.

The one thing I learned during the development of these versions is that you really do want to download the Windows Mobile 6 SDK if you are going to target that version (which is what my cell phone has), since it adds new Visual Studio 2005 project templates and new emulator images which will help you a lot. For example, I was trying to use buttons in my forms, and testing on Pocket PC worked, but as soon as I tried them on my cell phone it crashed with a NotSupportedException. When I installed the SDK and switched to target that platform, Visual Studio immediately warned me that my platform didn't support buttons, which was great.

Bottom line: I'm more and more amazed at how easy it is to build games for Windows Mobile and what you can achieve with Windows Mobile and the .NET Compact Framework.
