Posts
  • CarlosAg Blog

    IIS SEO Toolkit - Start new analysis automatically through code

    • 9 Comments

    One question that I've been asked several times is: "Is it possible to schedule the IIS SEO Toolkit to run automatically every night?". Other related questions are: "Can I automate the SEO Toolkit so that as part of my build process I'm able to catch regressions on my application?", or "Can I run it automatically after every check-in to my source control system to ensure no links are broken?", etc.

The good news is that the answer is YES! The bad news is that you have to write a bit of code to make it work. Basically the SEO Toolkit includes a managed-code API that can start the analysis just like the user interface does, and you can call it from any application you want using managed code.

In this blog I will show you how to write a simple console application that starts a new analysis against the site provided as a command-line argument and runs a few queries after it finishes.

    IIS SEO Crawling APIs

The most important type included is a class called WebCrawler. This class drives the entire analysis process. The following image shows this class and some of the related classes that you will need to use.

[Image: WebCrawler and related classes]

The WebCrawler class is initialized with the configuration specified in a CrawlerSettings instance. It exposes two methods, Start() and Stop(), which start and stop the crawling process in a set of background threads, and it gives you access to a CrawlerReport through its Report property. The CrawlerReport class represents the results (whether completed or in progress) of the crawling process, and its GetUrls() method returns the collection of UrlInfo items. UrlInfo is the most important class: it represents a URL that has been downloaded and processed, and it carries all the metadata such as Title, Description, ContentLength, ContentType, and the set of Violations and Links that the URL includes.
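
For example, once a crawl has finished you could query the report for pages that are missing a title. This is just a minimal sketch that assumes the UrlInfo members described in this post (Title, IsExternal, Url); the complete workflow is shown in the sample further below.

    using System;
    using System.Linq;
    using Microsoft.Web.Management.SEO.Crawler;

    static class ReportQueries {

        // List internal pages that came back without a <title> (sketch only).
        public static void LogPagesWithoutTitle(CrawlerReport report) {
            foreach (UrlInfo url in report.GetUrls()
                                          .Where(u => !u.IsExternal && String.IsNullOrEmpty(u.Title))
                                          .OrderBy(u => u.Url.AbsoluteUri)) {
                Console.WriteLine(url.Url.AbsoluteUri);
            }
        }
    }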

    Developing the Sample

    1. Start Visual Studio.
    2. Select the option "File->New Project"
    3. In the "New Project" dialog select the template "Console Application", enter the name "SEORunner" and press OK.
    4. Using the menu "Project->Add Reference" add a reference to the IIS SEO Toolkit Client assembly "c:\Program Files\Reference Assemblies\Microsoft\IIS\Microsoft.Web.Management.SEO.Client.dll".
    5. Replace the code in the file Program.cs with the code shown below.
    6. Build the Solution
    using System;
    using System.IO;
    using System.Linq;
    using System.Net;
    using System.Threading;
    using Microsoft.Web.Management.SEO.Crawler;

    namespace SEORunner {

        class Program {

            static void Main(string[] args) {

                if (args.Length != 1) {
                    Console.WriteLine("Please specify the URL.");
                    return;
                }

                // Create a URI class
                Uri startUrl = new Uri(args[0]);

                // Run the analysis
                CrawlerReport report = RunAnalysis(startUrl);

                // Run a few queries...
                LogSummary(report);

                LogStatusCodeSummary(report);

                LogBrokenLinks(report);
            }

            private static CrawlerReport RunAnalysis(Uri startUrl) {
                CrawlerSettings settings = new CrawlerSettings(startUrl);
                settings.ExternalLinkCriteria = ExternalLinkCriteria.SameFolderAndDeeper;

                // Generate a unique name
                settings.Name = startUrl.Host + " " + DateTime.Now.ToString("yy-MM-dd hh-mm-ss");

                // Use the same directory as the default used by the UI
                string path = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
                    "IIS SEO Reports");

                settings.DirectoryCache = Path.Combine(path, settings.Name);

                // Create a new crawler and start running
                WebCrawler crawler = new WebCrawler(settings);
                crawler.Start();

                Console.WriteLine("Processed - Remaining - Download Size");
                while (crawler.IsRunning) {
                    Thread.Sleep(1000);
                    Console.WriteLine("{0,9:N0} - {1,9:N0} - {2,9:N2} MB",
                        crawler.Report.GetUrlCount(),
                        crawler.RemainingUrls,
                        crawler.BytesDownloaded / 1048576.0f);
                }

                // Save the report
                crawler.Report.Save(path);

                Console.WriteLine("Crawling complete!!!");

                return crawler.Report;
            }

            private static void LogSummary(CrawlerReport report) {
                Console.WriteLine();
                Console.WriteLine("----------------------------");
                Console.WriteLine(" Overview");
                Console.WriteLine("----------------------------");
                Console.WriteLine("Start URL:  {0}", report.Settings.StartUrl);
                Console.WriteLine("Start Time: {0}", report.Settings.StartTime);
                Console.WriteLine("End Time:   {0}", report.Settings.EndTime);
                Console.WriteLine("URLs:       {0}", report.GetUrlCount());
                Console.WriteLine("Links:      {0}", report.Settings.LinkCount);
                Console.WriteLine("Violations: {0}", report.Settings.ViolationCount);
            }

            private static void LogBrokenLinks(CrawlerReport report) {
                Console.WriteLine();
                Console.WriteLine("----------------------------");
                Console.WriteLine(" Broken links");
                Console.WriteLine("----------------------------");
                foreach (var item in from url in report.GetUrls()
                                     where url.StatusCode == HttpStatusCode.NotFound &&
                                           !url.IsExternal
                                     orderby url.Url.AbsoluteUri ascending
                                     select url) {
                    Console.WriteLine(item.Url.AbsoluteUri);
                }
            }

            private static void LogStatusCodeSummary(CrawlerReport report) {
                Console.WriteLine();
                Console.WriteLine("----------------------------");
                Console.WriteLine(" Status Code summary");
                Console.WriteLine("----------------------------");
                foreach (var item in from url in report.GetUrls()
                                     group url by url.StatusCode into g
                                     orderby g.Key
                                     select g) {
                    Console.WriteLine("{0,20} - {1,5:N0}", item.Key, item.Count());
                }
            }
        }
    }

     

If you are not using Visual Studio, you can just save the contents above in a file named SEORunner.cs and compile it from the command line:

    C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe /r:"c:\Program Files\Reference Assemblies\Microsoft\IIS\Microsoft.Web.Management.SEO.Client.dll" /optimize+ SEORunner.cs

     

After that you should be able to run SEORunner.exe and pass the URL of your site as an argument; you will see output like:

    Processed - Remaining - Download Size
           56 -       149 -      0.93 MB
          127 -       160 -      2.26 MB
          185 -       108 -      3.24 MB
          228 -        72 -      4.16 MB
          254 -        48 -      4.98 MB
          277 -        36 -      5.36 MB
          295 -        52 -      6.57 MB
          323 -        25 -      7.53 MB
          340 -         9 -      8.05 MB
          358 -         1 -      8.62 MB
          362 -         0 -      8.81 MB
    Crawling complete!!!
    
    ----------------------------
     Overview
    ----------------------------
    Start URL:  http://www.carlosag.net/
    Start Time: 11/16/2009 12:16:04 AM
    End Time:   11/16/2009 12:16:15 AM
    URLs:       362
    Links:      3463
    Violations: 838
    
    ----------------------------
     Status Code summary
    ----------------------------
                      OK -   319
        MovedPermanently -    17
                   Found -    23
                NotFound -     2
     InternalServerError -     1
    
    ----------------------------
     Broken links
    ----------------------------
    http://www.carlosag.net/downloads/ExcelSamples.zip

     

The most interesting method above is RunAnalysis. It creates a new instance of CrawlerSettings and specifies the start URL. Note that it also specifies that we should consider internal all the pages that are hosted in the same directory or its subdirectories. We also set a unique name for the report and use the same directory the IIS SEO UI uses, so that opening IIS Manager will show the reports just as if they had been generated by it. Then we call Start(), which spins up the number of worker threads specified in the WebCrawler.WorkerCount property, and we simply wait for the WebCrawler to finish by polling the IsRunning property.

The remaining methods just leverage LINQ to run a few queries against the report, such as aggregating all the processed URLs by status code.

    Summary

As you can see, the IIS SEO Toolkit crawling APIs let you write your own application to start an analysis against your Web site, which can then be integrated with the Windows Task Scheduler, your own scripts, or your build system to enable continuous integration.

Once the report is saved locally it can be opened in IIS Manager for further analysis just like any other report. This sample console application can be scheduled with the Windows Task Scheduler so that it runs every night or at any time you choose. Note that you could also automate this with a few lines of PowerShell, entirely from the command line and without writing any C# code, but that is left for another post.
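
For example, you could register the nightly run with the Task Scheduler from an elevated command prompt with something like the following (the path, URL and time are placeholders for illustration):

    schtasks /Create /TN "SEORunner" /TR "C:\Tools\SEORunner.exe http://www.example.com/" /SC DAILY /ST 02:00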

  • CarlosAg Blog

    Redirects, 301, 302 and IIS SEO Toolkit

    • 12 Comments

In the URL Rewrite forum somebody posted the question "are redirects bad for search engine optimization?". The answer is: not necessarily. Redirects are an important tool for Web sites, and used in the right context they are actually a required tool. But first, a bit of background.

    What is a Redirect?

A redirect, in simple terms, is a way for the server to indicate to a client (typically a browser) that a resource has moved; it does this by using an HTTP status code and an HTTP Location header. There are different types of redirects, but the most common ones are:

• 301 - Moved Permanently. This type of redirect signals that the resource has permanently moved and that any further attempts to access it should be directed to the location specified in the header.
• 302 - Redirect or Found. This type of redirect signals that the resource is temporarily located in a different location, but any further attempts to access the resource should still go to the original location.

Below is an example of the response sent from the server when requesting http://www.microsoft.com/SQL/:

    HTTP/1.1 302 Found
    Connection: Keep-Alive
    Content-Length: 161
    Content-Type: text/html; charset=utf-8
    Date: Wed, 10 Jun 2009 17:04:09 GMT
    Location: /sqlserver/2008/en/us/default.aspx
    Server: Microsoft-IIS/7.0
    X-Powered-By: ASP.NET

     

    So what do redirects mean for SEO?

One of the most important factors in SEO is the concept called organic linking; in simple words it means that your page gets extra points for every link that external Web sites have pointing to it. So now imagine the search engine bot is crawling an external Web site and finds a link pointing to your page (example.com/some-page), and when it tries to visit your page it runs into a redirect to another location (say example.com/somepage). Now the search engine has to decide whether it should add the original "some-page" to its index, whether it should "add the extra points" to the new location or to the original location, or whether it should just ignore it entirely. The answer is not that simple, but a simplification of it could be:

• If you return a 301 (permanent redirect) you are telling the search engine that the resource moved to a new location permanently, so all further traffic should be directed to that location. This clearly means the search engine should ignore the original location (some-page), index the new location (somepage), add all the "extra points" to it, and treat any further references to the original location as if they pointed to the new one.
• If you return a 302 (temporary redirect) the answer can depend on the search engine, but it is likely to index the original location and ignore the new location altogether (unless it is directly linked elsewhere), since the redirect is only temporary and could stop at any point, with the content served from the original location again. This of course makes it ambiguous how to deal with the "extra points", which will likely be added to the original location and not the new destination. A quick way to check which status code and Location header your own URLs return is sketched below.
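
The following is a minimal sketch (using the standard System.Net classes; the URL is just the example from above) that issues a request with automatic redirection turned off and prints the status code and Location header, so you can see which kind of redirect your pages return:

    using System;
    using System.Net;

    class RedirectCheck {
        static void Main() {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.microsoft.com/SQL/");

            // Do not follow the redirect automatically so we can inspect it ourselves.
            request.AllowAutoRedirect = false;

            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) {
                Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusCode);
                Console.WriteLine("Location: {0}", response.Headers[HttpResponseHeader.Location]);
            }
        }
    }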

     

    Enter IIS SEO Toolkit

The IIS Search Engine Optimization Toolkit has a couple of rules that look for different patterns related to redirects. The Beta version includes the following:

1. The redirection did not include a location header. Believe it or not, there are a couple of applications out there that do not generate a Location header, which completely breaks the redirection model. If your application is one of them, the tool will let you know.
2. The redirection response results in another redirection. In this case it detected that your page (A) links to another page (B) which caused a redirection to another page (C) which in turn resulted in yet another redirection to page (D). The tool is letting you know that this number of redirects can significantly impact the SEO "bonus points", since the organic linking can be broken by all this jumping around, and that you should consider linking from (A) directly to (D), or whatever page is supposed to be the final destination.
3. The page contains unnecessary redirects. In this case it detected that your page (A) links to another page (B) in your Web site that results in a redirect to another page (C), also within your Web site. Note that this is an informational rule, since there are valid scenarios where you want this behavior (such as tracking page impressions, login pages, etc.), but in many cases you do not need it. Because the tool detects that you own all three pages, it suggests you check whether it would be better to change the markup in (A) to point directly to (C) and avoid the (B) redirection entirely.
4. The page uses a refresh definition instead of using redirection. Finally, IIS SEO will flag cases where the refresh meta tag is used as a means of causing a redirection. This practice is not recommended because the tag carries no semantics that tell search engines how to process the content, and in many cases it is actually considered a tactic to confuse search engines, but I won't go there.

So what does it look like? In the image below I ran Site Analysis against a Web site and it found a few of these violations (2 and 3).

[Image: IISSEORedirect1]

Notice that when you double-click a violation it shows you the details and gives you direct access to the related URLs, so that you can look at the content and all the relevant information needed to make a decision. From that menu you can also see which other pages link to the pages involved, as well as launch them in the browser if needed.

[Image: IISSEORedirect2]

As with all the other violations, it explains the reason each one is flagged as well as the recommended actions to follow.

The IIS Search Engine Optimization Toolkit can also help you find all the different types of redirects and the locations where they are used in a very easy way: just select Content->Status Code Summary in the Dashboard view and you will see all the different HTTP status codes received from your Web site. Notice in the image below how you can see the number of redirects (in this case 18 temporary redirects and 2 permanent redirects). You can also see how much content they accounted for, in this case about 2.5 KB. (Note that I've seen Web sites generate a large amount of useless content in redirect traffic; talk about wasted bandwidth.) You can double-click any of those rows to see the details of the URLs that returned that status code, and from there you can see who links to them, and so on.

[Image: IISSEORedirect3]

    So what should I do?

1. Know your Web site. Run Site Analysis against your Web site and see all the different redirects that are happening.
2. Try to minimize redirections. With the knowledge gained in step 1, look for places where you can update your content to reduce the number of redirects.
3. Use the right redirect. Understand the intent of the redirection you are trying to do and make sure you are using the right semantics (is it permanent or temporary?). Whenever possible prefer permanent redirects (301).
4. Use URL Rewrite to easily configure them. URL Rewrite allows you to configure a set of rules, using both regular expressions and wildcards, that live along with your application (no administrative privileges required) and that let you set the right redirection status code. A must for SEO. More on this in a future blog.

    Summary

So going back to the original question: "are redirects bad for Search Engine Optimization?". Not necessarily; they are an important tool used by Web applications for many reasons, such as:

• Canonicalization. To ensure that users access your site consistently with or without the www. prefix, use permanent redirects (see the sketch after this list).
• Page impressions and analytics. Use temporary redirects to ensure that the original link is preserved and counters work as expected.
• Content reorganization. Whether you are changing your host name due to a brand change or just renaming a page, make sure to use permanent redirects to keep your page rankings.
• etc.
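
As a code-level illustration of the canonicalization bullet above (URL Rewrite, mentioned earlier, is usually the cleaner place for this rule), a minimal Global.asax sketch that returns a 301 for requests without the www. prefix could look like this; the host handling is simplified and is only for illustration:

    <%@ Application Language="C#" %>
    <script runat="server">
        void Application_BeginRequest(object sender, EventArgs e) {
            Uri url = Request.Url;

            // Permanently redirect (301) any request that does not use the www. host name.
            if (!url.Host.StartsWith("www.", StringComparison.OrdinalIgnoreCase)) {
                Response.StatusCode = 301; // Moved Permanently
                Response.AddHeader("Location", url.Scheme + "://www." + url.Host + url.PathAndQuery);
                Response.End();
            }
        }
    </script>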

Just make sure you don't abuse them: avoid redirects to redirects, unnecessary redirects and infinite loops, and use the right semantics.

  • CarlosAg Blog

    Adding ASP.NET Tracing to IIS 7.0 Failed Request Tracing

    • 2 Comments

IIS 7.0 Failed Request Tracing (for historical reasons we internally refer to it as FREB, since it used to be called Failed Request Event Buffering, and there is no good-sounding acronym for the new name) is probably the best diagnostic tool IIS has ever had that doesn't require debugging skills. Put simply, it exposes all the interesting events that happen during request processing in a way that lets you really understand what went wrong with any request. To learn more you can go to http://learn.iis.net/page.aspx/266/troubleshooting-failed-requests-using-tracing-in-iis7/.

What is not immediately obvious is that you can use these tracing capabilities from your ASP.NET applications to output their tracing information into our infrastructure, so that your users get a holistic view of the request.

When you are developing in ASP.NET there are typically two tracing infrastructures you are likely to use: ASP.NET page tracing and System.Diagnostics tracing. In recent versions they have been better integrated (see the writeToDiagnosticsTrace attribute), but you still want to know about both of them.

    Today I'll just focus on logging ASP.NET Tracing to FREB, and in a future post I will show how to do it for System.Diagnostics Tracing.

To send ASP.NET traces to FREB you just need to enable ASP.NET tracing and use the ASPNET trace provider, and you will get those entries in the FREB log. The following web.config enables FREB and ASP.NET tracing. (Note that you need to go to the Default Web Site and enable Failed Request Tracing so that these rules get executed.)

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
     
    <system.web>
       
    <trace enabled="true" pageOutput="false" />
      </
    system.web>
     
    <system.webServer>
       
    <tracing>
         
    <traceFailedRequests>
           
    <add path="*.aspx">
             
    <traceAreas>
               
    <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
                <
    add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,Compression,Cache,RequestNotifications,Module,Rewrite" verbosity="Verbose" />
              </
    traceAreas>
             
    <failureDefinitions statusCodes="100-600" />
            </
    add>
         
    </traceFailedRequests>
       
    </tracing>
     
    </system.webServer>
    </configuration>

    Now if you have a sample page like the following:

    <%@ Page Language="C#" %>
    <script runat="server">
       
    void Page_Load() {
           
    Page.Trace.Write("Hello world from my ASP.NET Application");
           
    Page.Trace.Warn("This is a warning from my ASP.NET Application");
       
    }
    </script>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
       
    <title>Untitled Page</title>
    </head>
    <body>
       
    <form id="form1" runat="server">
       
    <div>
       
    </div>
       
    </form>
    </body>
    </html>

The result is that in \inetpub\logs\FailedReqLogFiles\ you will get an XML file that includes all the details of the request, including the page traces from ASP.NET. Note that we provide an XSLT transformation that parses the XML file and provides a friendly view of it with several different views of the trace. For example, below only the warning is shown in the Request Summary view:

[Image: TraceOutput1]

There is also a Request Details view where you can filter to see all the ASP.NET page traces, which includes both of the traces we added in the page code.

[Image: TraceOutput2]

  • CarlosAg Blog

    SEO Tip - Beware of the Login pages - add them to Robots Exclusion

    • 5 Comments

A lot of sites today let users sign in to see some sort of personalized content, whether it's a forum, a news reader, or an e-commerce application. To make their users' lives simpler they usually offer the ability to log on from any page of the site the user is currently looking at. Similarly, in an effort to keep navigation simple, Web sites usually generate dynamic links so there is a way to go back to the page the user was on before visiting the login page, something like: <a href="/login?returnUrl=/currentUrl">Sign in</a>.

If your site has a login page you should definitely consider adding it to the robots exclusion list, since it is a good example of something you do not want a search engine crawler to spend its time on. Remember, crawlers give your site a limited amount of time and you really want them to focus on what is important in it.

    Out of curiosity I searched for login.php and login.aspx and found over 14 million login pages… that is a lot of useless content in a search engine.

Another big reason is that having this kind of URL, varying for each page, means there will be hundreds of variations that crawlers need to follow, like /login?returnUrl=page1.htm, /login?returnUrl=page2.htm, etc., so you have basically doubled the work for the crawler. Even worse, in some cases if you are not careful you can easily cause an infinite loop for them: if you add the same "login link" on the actual login page you get /login?returnUrl=login as the link, and clicking that gives you /login?returnUrl=login?returnUrl=login... and so on, with an ever-changing URL for every page on your site. Note that this is not hypothetical; it is a real example from a few famous Web sites (which I will not disclose). Of course crawlers will not crawl your Web site infinitely; they are not that silly and will stop after looking at the same resource /login a few hundred times, but this means you are reducing the time they spend looking at what really matters to your users.

    IIS SEO Toolkit

If you use the IIS SEO Toolkit it will detect the condition where the same resource (like login.aspx) is used too many times with only the query string varying, and it will give you a violation like: "Resource is used too many times".

     

    So how do I fix this?

    There are a few fixes, but by far the best thing to do is just add the login page to the Robots Exclusion protocol.

1. Add the URL to /robots.txt. You can use the IIS Search Engine Optimization Toolkit to edit the robots file, or just drop a file with something like:
  User-agent: *
  Disallow: /login
2. Alternatively (or additionally) you can add a rel attribute with the nofollow value to tell crawlers not to even try, something like (a small code sketch of this is shown after the list):
  <a href="/login?returnUrl=page" rel="nofollow">Log in</a>
3. Finally, use the Site Analysis feature in the IIS SEO Toolkit to make sure you don't have this kind of behavior. It will automatically flag a violation when it identifies that the same "page" (with different query strings) has been visited over 500 times.
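
As a sketch of option 2 (a hypothetical helper, not part of the toolkit), you could centralize the markup for the login link so every page emits it with rel="nofollow" and a properly encoded return URL:

    using System.Web;

    public static class LoginLinkHelper {

        // Render the sign-in anchor with rel="nofollow" so crawlers do not
        // chase every /login?returnUrl=... variation.
        public static string GetLoginLink(string returnUrl) {
            return "<a href=\"/login?returnUrl=" + HttpUtility.UrlEncode(returnUrl) +
                   "\" rel=\"nofollow\">Log in</a>";
        }
    }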

    Summary

To summarize: always add the login page to the robots exclusion protocol file, otherwise you will end up:

1. sacrificing valuable "search engine crawling time" on your site.
2. spending unnecessary bandwidth and server resources.
3. potentially even blocking crawlers from your content.
  • CarlosAg Blog

    Backgammon and Connect4 for Windows Mobile

    • 5 Comments

    During the holidays my wife and I went back to visit our families in Mexico City where we are originally from. Again, during the flights I had enough spare time to build a couple of my favorite games, Backgammon and Connect4.

I had already built both games for Windows using Visual Basic 5 almost 11 years ago, but as you would imagine I was far from proud of that implementation. So this time I started from scratch and ended up with what I think are better versions of them (still not the best code, but pretty decent for just a few hours of coding). In fact the AI in this Backgammon version is a bit better, and the Connect4 is faster and better suited for a mobile device.

You can go with your PDA/Smartphone to http://www.carlosag.net/mobile/ to install both games, or just click the images below to go to the install page for each of them. Enjoy, and feel free to add any feedback or feature requests as comments to this blog post.

The one thing I learned during the development of these versions is that you do want to download the Windows Mobile 6 SDK if you are going to target that version (which is what my cell phone has), since it adds new Visual Studio 2005 project templates and new emulator images that help a lot. For example, I was trying to use buttons in my forms, and testing on Pocket PC worked, but as soon as I tried them on my cell phone it crashed with a NotSupportedException. When I installed the SDK and switched to target that platform, Visual Studio immediately warned me that my platform didn't support buttons, which was great.

Bottom line, I'm more and more amazed by how easy it is to build games for Windows Mobile and by the things you can achieve with both Windows Mobile and the .NET Compact Framework.

  • CarlosAg Blog

    Free Sudoku Game for Windows

    • 4 Comments

A couple of years ago a friend of mine introduced me to a game called Sudoku, and I immediately loved it. Like any good game its rules are very simple: you have to lay out the numbers from 1 to 9 in each row without repeating them, while at the same time laying out the numbers 1 to 9 in each column, and also within each group (a 3x3 square).

After that, every time I had to take a flight I bought a new puzzle magazine to entertain me during the flight. In December 2006, while flying to Mexico, I decided to change the tradition and instead build a simple Sudoku game that I could play any time I felt like it without having to find a magazine store, and that turned into this simple game. It is not yet a great game since I haven't had time to finish it, but I figured I would share it anyway in case someone finds it fun.

    Click Here to go to the Download Page

[Image: Sudoku]

  • CarlosAg Blog

    Razor Migration Notes 3: Use app_offline.htm to deploy the new version

    • 1 Comments

    This is the third post on the series:

    1: Moving a SitemapPath Control to ASP.NET Web Pages

    2: Use URL Rewrite to maintain your Page rankings (SEO)

     

ASP.NET has a nice feature that helps with deployment: you can drop an HTML file named app_offline.htm in the application root and ASP.NET will unload all the assemblies and code it has loaded, letting you easily delete binaries and deploy the new version while still serving customers the friendly message you provide telling them that your site is under maintenance.
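
For example, a deployment step could be as simple as the following sketch (the paths are placeholders; your build or deployment script would copy the new bits in the middle):

    using System.IO;

    class Deploy {
        static void Main() {
            string siteRoot = @"C:\inetpub\wwwroot\MySite";   // assumed site root

            // Take the site offline while deploying.
            File.Copy("app_offline.htm", Path.Combine(siteRoot, "app_offline.htm"), true);

            // ... copy the new binaries and content into siteRoot here ...

            // Bring the site back online.
            File.Delete(Path.Combine(siteRoot, "app_offline.htm"));
        }
    }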

One caveat, though, is that Internet Explorer users might still see the “friendly” error that IE displays instead of your nice message. This happens because of a page-size validation that IE performs. See Scott’s blog on how to work around that problem: App_Offline.htm and working around the IE Friendly Errors

    Note: The live site is now running in .NET 4.0 and all using Razor.

  • CarlosAg Blog

    Razor Migration Notes 2: Use URL Rewrite to maintain your Page rankings (SEO)

    • 0 Comments

    This is the second note of the series:

    1: Moving a SitemapPath Control to ASP.NET Web Pages

My current Web site was built using ASP.NET 2.0 and WebForms, which means all of my pages have the .aspx extension. While moving each page to ASP.NET Web Pages its extension is being changed to .cshtml, and while I’m sure I could configure things so they keep their .aspx extensions, this is a good opportunity to “start clean”. Furthermore, in ASP.NET Web Pages you can also access pages without the extension at all, so if you have /my-page.cshtml you can also get to it using just /my-page. Since I will go through this migration anyway, I decided to use the clean URL format (no extension) and in the process get better URLs for SEO purposes. For example, today one of the URLs looks like http://www.carlosag.net/Articles/configureComPlus.aspx, but this is a good time to enforce lower-case semantics, get rid of that ugly camel casing, and move to a much more standard and friendly format for search engines using “-”, like: http://www.carlosag.net/articles/configure-com-plus.

    Use URL Rewrite to make sure to keep your Page Ranking and no broken links

The risk, of course, is that if you just change the URLs of your site you will end up not only with lots of 404s (Not Found), but your page ranking will be reset and you will lose all the “juice” that external links and history have given it. The right way to do this is to make sure you perform a permanent redirect (301) from the old URL to the new URL; this way search engines (and browsers) will know that the content has permanently moved to a new location and that they should “pass all the page ranking” to the new page.

There are many ways to achieve this, but I happen to like URL Rewrite a lot, so I decided to use it. To do that I created one rule that uses a rewrite map (think of it as a dictionary) to match the URL; if it matches, the rule performs a permanent redirect to the new one. So, for example, if /aboutme.aspx is requested, it will 301 to /about-me:

    <?xml version="1.0"?>
    <configuration>
     
    <system.webServer>
       
    <rewrite>
         
    <rules>
           
    <rule name="Redirect for OldUrls" stopProcessing="true">
             
    <match url=".*"/>
              <
    conditions>
               
    <add input="{OldUrls:{REQUEST_URI}}" pattern="(.+)"/>
              </
    conditions>
             
    <action type="Redirect" url="{C:1}" appendQueryString="true" redirectType="Permanent" />
            </
    rule>
         
    </rules>
         
    <rewriteMaps>
           
    <rewriteMap name="OldUrls">
             
    <add key="/aboutme.aspx" value="/about-me"/>
              <
    add key="/soon.aspx?id=1" value="/coming-soon"/>
              <
    add key="/Articles/configureComPlus.aspx" value="/articles/configure-com-plus"/>
              <
    add key="/Articles/createChartHandler.aspx" value="/articles/create-aspnet-chart-handler"/>
              <
    add key="/Articles/createVsTemplate.aspx" value="/articles/create-vs-template"/>
          ...
            </
    rewriteMap>
         
    </rewriteMaps>
       
    </rewrite>
     
    </system.webServer>
    </configuration>

     

Note that I could also have created a simple rule that just changes the extension to .cshtml; however, I decided that I also wanted to change the page names. The best thing is that you can do this incrementally, only rewriting each URL once your new page is ready, and even switch back to the old one later if any problems occur.

    Summary

Using URL Rewrite you can easily keep your SEO intact and your pages free of broken links. You can achieve a lot more as well; check out: SEO made easy with IIS URL Rewrite 2.0 SEO templates – CarlosAg

  • CarlosAg Blog

    Using Windows Authentication with Web Deploy and WMSVC

    • 0 Comments

By default, on Windows Server 2008, the Web Management Service (WMSVC) and Web Deploy (also known as MSDeploy) use Basic authentication for your deployments. If you want to enable Windows Authentication you need to set a registry key so that the Web Management Service also supports NTLM. To do this, update the registry on the server by adding a DWORD value named "WindowsAuthenticationEnabled" under HKEY_LOCAL_MACHINE\Software\Microsoft\WebManagement\Server and setting it to 1. If the Web Management Service is already started, the setting will take effect after the service is restarted.
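
If you prefer to script the change, a minimal sketch using the .NET registry API (run it elevated on the server) would be:

    using Microsoft.Win32;

    class EnableWmsvcWindowsAuth {
        static void Main() {
            // Add the DWORD value described above and set it to 1.
            Registry.SetValue(
                @"HKEY_LOCAL_MACHINE\Software\Microsoft\WebManagement\Server",
                "WindowsAuthenticationEnabled",
                1,
                RegistryValueKind.DWord);

            // Restart the Web Management Service (WMSVC) for the change to take effect.
        }
    }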

    For more details on other configuration options see:

    http://technet.microsoft.com/en-us/library/dd722796(WS.10).aspx

  • CarlosAg Blog

    Not getting IntelliSense in your web.config for system.webServer sections in Visual Studio 2008?

    • 2 Comments

Today I was playing a bit with Visual Studio 2008 and was surprised to see that I was not getting IntelliSense in my web.config. As you might already know, IntelliSense for XML in Visual Studio is implemented using a set of schemas stored in a folder inside the VS directory, something like \Program Files\Microsoft Visual Studio 9.0\Xml\Schemas. After looking at the files it was easy to understand what was going on: it turns out I was developing with .NET 2.0 settings, and Visual Studio now ships different schemas for web.config files depending on the settings you are using: DotNetConfig.xsd, DotNetConfig20.xsd and DotNetConfig30.xsd.

As I imagined, I looked into DotNetConfig.xsd and it indeed has all the definitions for the system.webServer sections, as does DotNetConfig30.xsd. However, DotNetConfig20.xsd does not include the section details, only its definition. So to fix your IntelliSense you can just open DotNetConfig.xsd, select the entire section from:

    <xs:element name="system.webServer" vs:help="configuration/system.webServer">...</xs:element>

and use it to replace the corresponding entry in DotNetConfig20.xsd. You might also want to copy the system.applicationHost section and add it to DotNetConfig20.xsd, since it is not included there either.
