Posts
  • CarlosAg Blog

    Using the SEO Toolkit to generate a Sitemap of a remote Web Site

    • 3 Comments

    The SEO Toolkit includes a set of features (like the Robots Editor and Sitemap Editor) that only work when you have a local copy of your Web site. The reason is that we need to know where to save the files we generate (like the Robots.txt and Sitemap XML files) without having to ask for physical paths, and we also need to verify that the functionality is applied correctly, such as only allowing Robots.txt in the root of a site. Unfortunately this means that if you have a remote server for which you cannot run a local copy, you cannot use those features. (Note that you can still use the Site Analysis tool, since it will crawl your Web site regardless of platform or framework and will store the report locally just fine.)

    The Good News

    The good news is that you can technically trick the SEO Toolkit into thinking you have a working copy locally, which lets you generate the Sitemap or Robots.txt file without too much hassle (“too much” being the key).

    For this sample, let's assume we want to create a Sitemap for a remote Web site; in this case I will use my own Web site (http://www.carlosag.net/), but you can specify your own. Below are the steps you need to follow to enable those features for any remote Web site (even if it is running on another version of IIS or any other Web server).

    Create a Fake Site

    • Open IIS Manager (Start Menu->InetMgr.exe)
    • Expand the Tree until you can see the “Sites” node.
    • Right-click the “Sites” node and select “Add Web Site…”
    • Specify a Name (in my case I’ll use MySite)
    • Click “Select” to choose the DefaultAppPool from the Application Pool list. This will avoid creating an additional AppPool that will never run.
    • Specify a Physical Path where you will want the Robots and Sitemap files to be saved. I recommend creating just a temporary directory that clearly states this is a fake site. So I will choose c:\FakeSite\ for that.
    • Important. Set the Host name so that it matches your Web Site, for example in my case www.carlosag.net.
    • Uncheck the “Start Web site immediately”, since we do not need this to run.
    • Click OK

    This is what my Create Web Site dialog looks like:

    image
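    If you prefer the command line, the same fake site can be created with appcmd. This is only a rough sketch using the values from the example above (the site name, host name and physical path are the ones I chose; adjust them to yours):

    rem Rough equivalent of the UI steps above; MySite, www.carlosag.net and C:\FakeSite come from this example
    %windir%\system32\inetsrv\appcmd.exe add site /name:"MySite" /bindings:http/*:80:www.carlosag.net /physicalPath:"C:\FakeSite"
    %windir%\system32\inetsrv\appcmd.exe set site "MySite" /serverAutoStart:false
    %windir%\system32\inetsrv\appcmd.exe stop site "MySite"

    The site never needs to run; it only gives the Sitemap and Robots editors a host name and a physical path to work with.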

    Use the Sitemap Editor

    Now that we have a site that the SEO Toolkit thinks is local, you should be able to use the features as usual.

    • Select the new site created above in the Tree
    • Double-click the Search Engine Optimization icon in the Home page
    • Click the link “Create a new Sitemap”
    • Specify a name, in my case Sitemap.xml
    • Since this is a remote site, you will see that the physical location option shows an empty list, so change the “URL structure” option to use “<Run new Site Analysis>…”, or if you already have an analysis you can choose that.
    • If creating a new one, just specify a name and click OK (I will use MySite). At this point the SEO Toolkit starts crawling the remote site to discover links and URLs; when it is done it will present the virtual namespace structure you can work with.
    • After the crawling is done, you can check any files you want to include in your Sitemap and leverage the server response to define the changed date and all the other features as if the content were local, and click OK.

    This is what the dialog looks like after it discovered my remote Web site URLs:

    image

    You will find your Sitemap.xml generated in the physical directory specified when creating the site (in my case c:\FakeSite\Sitemap.xml).
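    The generated file follows the standard sitemaps.org protocol. A trimmed-down sketch of roughly what it looks like (the URLs and dates below are only illustrative; the real entries come from your crawl):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative sketch only: actual <loc> and <lastmod> values come from the analysis -->
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.carlosag.net/</loc>
        <lastmod>2010-03-01</lastmod>
      </url>
      <url>
        <loc>http://www.carlosag.net/Tools/CodeTranslator/</loc>
        <lastmod>2010-03-01</lastmod>
      </url>
    </urlset>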

    Use the Robots Editor

    Just as with the Sitemap Editor, once you prepare a fake site for the remote server, you should be able to use the Robots Editor and leverage the same Site analysis output to build your Robots.txt file.

    image

    Summary

    In this blog I show how you can use the Sitemap and Robots Editor included in the SEO Toolkit when working with remote Web sites that might be running in different platforms or different versions of IIS.

  • CarlosAg Blog

    Free SEO Analysis using IIS SEO Toolkit

    • 10 Comments

    In my spare time I’ve been thinking about new ideas for the SEO Toolkit, and it occurred to me that rather than continuing to figure out more reports and better diagnostics against random fake sites, it could be interesting to openly ask anyone who wants a free SEO analysis report of their site, and test drive some of it against real sites.

    • So what is in it for you: I will analyze your site looking for common SEO errors, create a digest of actions to take and other things (like generating a diagram of your site, layer information, etc), and deliver it to you in the form of an email. If you agree, I will post some of the results (hiding identifying information like site, URL, etc, so that it stays anonymous if needed).
    • And what is in it for me: I will crawl your Web site (once or twice at most, with a limit of a few hundred pages) using the SEO Toolkit to test drive some ideas and reporting features I’m starting to build, and to continue investigating common patterns and errors.

    So if you want in, just post your URL in the comments of this blog (make sure you are reading this blog from a URL inside http://blogs.msdn.com/carlosag/, otherwise you might be posting comments on some syndicating site). I will only take the first few sites (if successful I will start another batch in the future) and I will be going through them one by one over the following days. Make sure to include a way to contact you, whether through the MSDN user infrastructure or an email address, so that I can send you the results.

    Alternatively I will also take URLs via Twitter at http://twitter.com/CarlosAguilarM, so hurry up and let me know if you want me to look at your site.

  • CarlosAg Blog

    IIS SEO Toolkit and W3C Validation Service

    • 4 Comments

    One thing that I’ve been asked several times about the SEO Toolkit is whether it does full standards validation on the markup and content that is processed, and if not, to add support for more comprehensive standards validation, in particular XHTML and HTML 4.01. Currently the markup validation performed by the SEO Toolkit is really simple; its main goal is to make sure that the markup is correctly organized, for example that things like <b><i>Test</b></i> are not found in the markup. The primary reason is to make sure that basic blocks of markup are generally "easy" to parse by search engines and that the semantics will not be terribly broken if a link, text or style is not correctly closed (since all of those would affect SEO).

    So the first thing I would say is that we have heard the feedback and are looking at what we could possibly add in future versions. However, why wait, right?

    One thing that many people do not realize is that the SEO Toolkit can be extended to add new violations, new metadata and new rules to the analysis process. So, for a demo I gave a few weeks ago, I decided to write a sample that consumes the online W3C Markup Validation Service from the SEO Toolkit.

    Download

    You can download the SEOW3Validator including the source code at http://www.carlosag.net/downloads/SEOW3Validator.zip.

    How to install it

    To run it you just need to:

    1. Unzip the contents in a folder.
    2. Install the SEOW3Validator.dll assembly in the GAC:
      1. Open a Windows Explorer window and navigate to c:\Windows\assembly
      2. Drag and Drop the SEOW3Validator.dll to the c:\Windows\assembly explorer window.
      3. Alternatively you can just run gacutil.exe /i SEOW3Validator.dll, usually located at C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin or v7A.
      4. If you have problems with this, you could try just copying the assembly to the GAC (copy SEOW3Validator.dll c:\Windows\assembly\GAC_MSIL\SEOW3Validator\1.0.0.0__995ee9b8fa017847\SEOW3Validator.dll)
    3. Register the moduleProvider in Administration.config: in an elevated prompt open C:\Windows\System32\Inetsrv\config\Administration.config and add the following line inside <moduleProviders>, right before the closing </moduleProviders>:

       <add name="SEOW3Validator" type="SEOW3Validator.SEOW3ValidatorModuleProvider, SEOW3Validator, Version=1.0.0.0, Culture=neutral, PublicKeyToken=995ee9b8fa017847" />

    You should now be able to run the SEO Toolkit just as before, but you will find new violations; for example, on my site I get the ones below. Notice that there is a new set of violations like "W3 Validator – 68", etc, and all of them belong to the W3C category. (I would have liked to have better names, but the way the W3C API works is not really friendly for making this any better.)

    SampleValidatorResults

    And when double clicking any of those results you get the details as reported by the W3 Validation Service:

    SampleValidatorDetails

    The Code

    The code is actually pretty simple. The main class is called SEOW3ValidatorExtension; it derives from CrawlerModule and overrides the Process method to call the W3C Validation Service, sending the actual markup in the request. This means it does not matter whether your site is on an intranet or on the Internet, it will work; and for every warning and error returned by the validator it adds a new violation to the SEO report.

    The code looks like this:

    W3Validator validator = new W3Validator();
    W3ValidatorResults results = validator.Validate(context.UrlInfo.FileName,
                                                    context.UrlInfo.ContentTypeNormalized,
                                                    context.UrlInfo.Response);

    foreach (W3ValidatorWarning warning in results.Warnings) {
        context.UrlInfo.AddViolation(CreateWarning(warning));
    }

    foreach (W3ValidatorError error in results.Errors) {
        context.UrlInfo.AddViolation(CreateError(error));
    }

    I created a helper class W3Validator that basically encapsulates the consumption of the W3C Validation Service. The code is far from what I would like it to be; there are some "interesting" decisions in the way the API is exposed. I would probably have designed the service differently and not returned results formatted as HTML, given that this is an API/Web service that could be presented somewhere other than a browser. So a lot of the code just re-formats the results to look "decent", but to be honest I did not want to spend too much time on it, so everything was put together quite quickly. Also, if you look at the names I used for violations, I did not want to hard-code specific Message IDs, and since the error message was different for all of them even within the same Message ID, it was not easy to provide better messages. Anyway, overall it is pretty usable and should be a good way to do W3C validation.
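    The CreateWarning and CreateError helpers referenced in the snippet above ship with the downloadable sample and are not shown here. A minimal sketch of what CreateWarning could look like, reusing the ViolationInfo/Violation pattern from the crawler module API and assuming the W3ValidatorWarning helper exposes a Message property (an assumption made only for illustration):

    // Hypothetical sketch only: the shipped sample defines its own ViolationInfo entries,
    // and the Message property on W3ValidatorWarning is assumed for illustration.
    private static readonly ViolationInfo W3CWarningInfo =
        new ViolationInfo("W3CValidatorWarning",
                          ViolationLevel.Warning,
                          "W3C Validator warning.",
                          "The W3C Validation Service reported: {message}.",
                          "Review the markup flagged by the validator.",
                          "W3C");

    private static Violation CreateWarning(W3ValidatorWarning warning) {
        // Wrap the validator result into a Violation that the SEO report can store
        return new Violation(W3CWarningInfo, 0, "message", warning.Message);
    }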

    Note that one of the cool things you get for free is that, since these are stored as violations, you can re-run the report and use the Compare Reports feature to see your progress while fixing them. Also, since they are stored as part of the report, you will not need to keep running the validator over and over again; you can just open the report and continue looking at them, analyze the data with Reports and Queries, export them to Excel, etc.

    Hopefully this will give you a good example on some of the interesting things you can achieve with the SEO Toolkit and its extensibility.

  • CarlosAg Blog

    IIS SEO Toolkit Available in 10 Languages

    • 2 Comments

    A couple of months ago I blogged about the release of the v1.0.1 of the IIS Search Engine Optimization Toolkit. In March we released the localized versions of the SEO Toolkit so now it is available in 10 languages: English, Japanese, French, Russian, Korean, German, Spanish, Chinese Simplified, Italian and Chinese Traditional.

    Here are all the direct links to download it.

    Name Language Download URL
    IIS SEO Toolkit 32bit english http://download.microsoft.com/download/A/C/A/ACA8D740-A59D-4D25-A2D5-1DCFD1D9A01F/IISSEO_x86.msi
    IIS SEO Toolkit 64bit english http://download.microsoft.com/download/A/C/A/ACA8D740-A59D-4D25-A2D5-1DCFD1D9A01F/IISSEO_amd64.msi
    IIS SEO Toolkit 32bit ja-jp http://download.microsoft.com/download/3/6/1/36179752-3497-4C2C-B2C5-9B4FA14EAC3A/IISSEO_x86_ja-JP.msi
    IIS SEO Toolkit 64bit ja-jp http://download.microsoft.com/download/3/6/1/36179752-3497-4C2C-B2C5-9B4FA14EAC3A/IISSEO_amd64_ja-JP.msi
    IIS SEO Toolkit 32bit fr-fr http://download.microsoft.com/download/D/C/5/DC576407-7273-412C-9AC8-AE78E4CFE017/IISSEO_x86_fr-FR.msi
    IIS SEO Toolkit 64bit fr-fr http://download.microsoft.com/download/D/C/5/DC576407-7273-412C-9AC8-AE78E4CFE017/IISSEO_amd64_fr-FR.msi
    IIS SEO Toolkit 32bit ru-ru http://download.microsoft.com/download/8/6/A/86A0BCE1-419F-4550-968E-A8E5A8467B32/IISSEO_x86_ru-RU.msi
    IIS SEO Toolkit 64bit ru-ru http://download.microsoft.com/download/8/6/A/86A0BCE1-419F-4550-968E-A8E5A8467B32/IISSEO_amd64_ru-RU.msi
    IIS SEO Toolkit 32bit ko-kr http://download.microsoft.com/download/F/8/6/F8654213-40C6-4706-9128-536A6A4BC570/IISSEO_x86_ko-KR.msi
    IIS SEO Toolkit 64bit ko-kr http://download.microsoft.com/download/F/8/6/F8654213-40C6-4706-9128-536A6A4BC570/IISSEO_amd64_ko-KR.msi
    IIS SEO Toolkit 32bit de-de http://download.microsoft.com/download/1/8/1/1813318E-6358-4BDC-B148-C1826A74994D/IISSEO_x86_de-DE.msi
    IIS SEO Toolkit 64bit de-de http://download.microsoft.com/download/1/8/1/1813318E-6358-4BDC-B148-C1826A74994D/IISSEO_amd64_de-DE.msi
    IIS SEO Toolkit 32bit es-es http://download.microsoft.com/download/1/6/6/166C82C0-4B72-4282-9A86-47C85CE7E20C/IISSEO_x86_es-ES.msi
    IIS SEO Toolkit 64bit es-es http://download.microsoft.com/download/1/6/6/166C82C0-4B72-4282-9A86-47C85CE7E20C/IISSEO_amd64_es-ES.msi
    IIS SEO Toolkit 32bit zh-cn http://download.microsoft.com/download/D/6/C/D6C6DE59-2EE8-4DD3-9E30-739A5BE42F3C/IISSEO_x86_zh-CN.msi
    IIS SEO Toolkit 64bit zh-cn http://download.microsoft.com/download/D/6/C/D6C6DE59-2EE8-4DD3-9E30-739A5BE42F3C/IISSEO_amd64_zh-CN.msi
    IIS SEO Toolkit 32bit it-it http://download.microsoft.com/download/6/1/F/61FC149C-A950-40F4-9795-F3D4F2115721/IISSEO_x86_it-IT.msi
    IIS SEO Toolkit 64bit it-it http://download.microsoft.com/download/6/1/F/61FC149C-A950-40F4-9795-F3D4F2115721/IISSEO_amd64_it-IT.msi
    IIS SEO Toolkit 32bit zh-tw http://download.microsoft.com/download/6/1/F/61FC149C-A950-40F4-9795-F3D4F2115721/IISSEO_x86_zh-TW.msi
    IIS SEO Toolkit 64bit zh-tw http://download.microsoft.com/download/6/4/0/64067386-3BF6-493E-B7DB-4423839C316B/IISSEO_amd64_zh-TW.msi

    Here is a screenshot of the SEO Toolkit running in Spanish.

    seo-toolkit-in-spanish

    If you want to get the downloads from the Microsoft Download Center, you can use the links below:

    IIS Search Engine Optimization Toolkit - 32bit ja-jp
    IIS Search Engine Optimization Toolkit - 32bit fr-fr
    IIS Search Engine Optimization Toolkit - 32bit ru-ru
    IIS Search Engine Optimization Toolkit - 32bit ko-kr
    IIS Search Engine Optimization Toolkit - 32bit de-de
    IIS Search Engine Optimization Toolkit - 32bit es-es
    IIS Search Engine Optimization Toolkit - 32bit zh-cn
    IIS Search Engine Optimization Toolkit - 32bit it-it
    IIS Search Engine Optimization Toolkit - 32bit zh-tw

    To learn more about the SEO Toolkit you can visit:

    http://blogs.msdn.com/carlosag/archive/tags/SEO/default.aspx

    http://www.iis.net/expand/SEOToolkit

    And for any help, or to provide us feedback, you can do that in the IIS.NET SEO Forum.

  • CarlosAg Blog

    Setting up a Reverse Proxy using IIS, URL Rewrite and ARR

    • 23 Comments

    Today there was a question in the IIS.net forums asking how to expose two different Internet sites from another site, making them look as if they were subdirectories of the main site.

    So, for example, the goal was to have a site www.site.com expose www.site.com/company1 and www.site.com/company2, and have the content from “www.company1.com” served for the first one and “www.company2.com” for the second one. Furthermore, we would like to have the responses cached on the server for performance reasons. The following image shows a simple diagram of this:

    Reverse Proxy Sample 

    This sounds easy, since it's just about routing or proxying every single request to the correct servers, right? Wrong!!! If only it were that easy. It turns out the most challenging thing is that in this case we are modifying the structure of the underlying URLs and the original layout on the servers, which breaks relative paths, and of course images, stylesheets (CSS), JavaScript and other resources are not shown correctly.

    To try to clarify this, imagine that a user requests the page at http://www.site.com/company1/default.aspx in his browser, so based on the specification above the request is proxied/routed to http://www.company1.com/default.aspx on the server side. So far so good. However, imagine that the markup returned by that page contains an image tag like “<img src=/some-image.png />”. The problem is that the browser will resolve that relative path against the base path of the original request it made, which was http://www.site.com/company1/default.aspx, resulting in a request for the image at http://www.site.com/some-image.png instead of the right “company1” folder, which would be http://www.site.com/company1/some-image.png.

    Do you see it? Basically the problem is that any relative path (and for that matter absolute paths as well) needs to be translated to the new URL structure imposed by the original goal.

    So how do we do it then?

    Enter URL Rewrite 2.0 and Application Request Routing

    URL Rewrite 2.0 includes the ability to rewrite the content of a response as it is served back to the client, which allows us to rewrite those links without having to touch the actual application.

    Software Required:

    • URL Rewrite 2.0
    • Application Request Routing (ARR)

    Steps

    1. The first thing you need to do is enable Proxy support in ARR.
      1. To do that just launch IIS Manager and click the server node in the tree view.
      2. Double click the “Application Request Routing Cache” icon
      3. Select the “Server Proxy Settings…” task in the Actions panel
      4. Make sure the “Enable Proxy” checkbox is marked. What this does is allow any request on the server that is rewritten to a server other than the local machine to be routed to the right place automatically, without any further configuration.
    2. Configure URL Rewrite to route the right folders and their requests to the right site. But rather than bothering you with UI steps I will show you the configuration and then explain step by step what each piece is doing.
    3. Note that for this post I will only take care of Company1, but you can imagine the same steps applying for Company2. To test this you can just save the configuration below as web.config in your inetpub\wwwroot\ or in any other site root.
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Route the requests for Company1" stopProcessing="true">
              <match url="^company1/(.*)" />
              <conditions>
                <add input="{CACHE_URL}" pattern="^(https?)://" />
              </conditions>
              <action type="Rewrite" url="{C:1}://www.company1.com/{R:1}" />
              <serverVariables>
                <set name="HTTP_ACCEPT_ENCODING" value="" />
              </serverVariables>
            </rule>
          </rules>
          <outboundRules>
            <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
              <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^http(s)?://www.company1.com/(.*)" />
              <action type="Rewrite" value="/company1/{R:2}" />
            </rule>
            <rule name="RewriteRelativePaths" preCondition="ResponseIsHtml1">
              <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^/(.*)" negate="false" />
              <action type="Rewrite" value="/company1/{R:1}" />
            </rule>
            <preConditions>
              <preCondition name="ResponseIsHtml1">
                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
              </preCondition>
            </preConditions>
          </outboundRules>
        </rewrite>
      </system.webServer>
    </configuration>

    Setup the Routing

    <rule name="Route the requests for Company1" stopProcessing="true">
      <match url="^company1/(.*)" />
      <conditions>
        <add input="{CACHE_URL}" pattern="^(https?)://" />
      </conditions>
      <action type="Rewrite" url="{C:1}://www.company1.com/{R:1}" />
      <serverVariables>
        <set name="HTTP_ACCEPT_ENCODING" value="" />
      </serverVariables>
    </rule>

    The first rule is an inbound rewrite rule that captures all requests to the /company1/* folder, so if you are using the Default Web Site, anything going to http://localhost/company1/* will be matched by this rule and rewritten to www.company1.com, respecting HTTP vs HTTPS traffic.

    One thing to highlight, which took me a bit of time, is the “serverVariables” entry in that rule that overwrites the Accept-Encoding header. The reason I do this is that if you do not remove that header, the response will likely be compressed (gzip or deflate), and outbound rewriting is not supported in that case; you will end up with an error message like:

    HTTP Error 500.52 - URL Rewrite Module Error.
    Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded ("gzip").

    Also note that, for security reasons, to be able to set this server variable you need to explicitly allow it. See enabling server variables here.
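    For reference, after you allow the variable (through the URL Rewrite UI at the server level or by editing configuration directly), the server-level configuration ends up looking roughly like this sketch:

    <system.webServer>
      <rewrite>
        <allowedServerVariables>
          <!-- Allows the inbound rule above to clear the Accept-Encoding header -->
          <add name="HTTP_ACCEPT_ENCODING" />
        </allowedServerVariables>
      </rewrite>
    </system.webServer>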

     

    Outbound Rewriting to fix the Links

    The last two rules just rewrite the links, scripts and other resources so that the URLs are translated to the right structure. The first one rewrites absolute paths, and the last one rewrites relative paths. Note that relative paths using “..” will not work with the rule above, but you can easily extend it; I was too lazy to do that, and since I never use those when I create a site, it works for me :)

    Setting up Caching for ARR

    A huge added value of using ARR is that now, with a couple of clicks, we can enable disk caching so that requests are cached locally on www.site.com and not every single request pays the price of going to the backend servers.

    1. To do that just launch IIS Manager and click the server node in the tree view.
    2. Double click the “Application Request Routing Cache” icon
    3. Select the “Add Drive…” task in the Actions panel.
    4. Specify a directory where you want to keep your cache. Note that this can be any subfolder in your system.
    5. Make sure that “Enable Disk Cache” checkbox is marked in the Server Proxy Settings mentioned above.

    As easy as that, you will now see caching working, and your site will act as a front for other servers on the Internet. Pretty cool, hah! :)

    So in this post we saw how, with literally a few lines of XML, URL Rewrite and ARR, we were able to enable a proxy/routing scenario with the ability to rewrite links, and furthermore with caching support.

  • CarlosAg Blog

    SEO made easy with IIS URL Rewrite 2.0 SEO templates

    • 8 Comments

    A few weeks ago my team released version 2.0 of URL Rewrite for IIS. URL Rewrite is probably the most powerful rewrite engine for Web applications. It gives you many features, including inbound rewriting (i.e. rewrite the URL, redirect to another URL, abort requests, use of maps, and more), and version 2.0 also includes outbound rewriting so that you can rewrite URLs or any markup as the content is being sent back, even if it is generated using PHP, ASP.NET or any other technology.

    It also includes a very powerful user interface that allows you to test your regular expressions and, even better, a set of templates for common types of rules. Some of those rules are incredibly valuable for SEO (Search Engine Optimization) purposes. The SEO rules are:

    1. Enforce Lowercase URLs. It will make sure that every URL is used only in lower case, and if not it will redirect with a 301 to the lower-case version.
    2. Enforce a Canonical Domain Name. It helps you specify which domain name you want to use for your site and redirects the traffic to the right host name.
    3. Append or Remove the Trailing Slash. It will make sure your requests either include or exclude the trailing slash, depending on your preference.

    image

    For more information on the SEO Templates look at: http://learn.iis.net/page.aspx/806/seo-rule-templates/
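    As an example, here is a sketch of roughly the rule that the Enforce Lowercase URLs template generates (the rule name and exact attributes may differ slightly in your version):

    <rule name="LowerCaseRule1" stopProcessing="true">
      <!-- Match any URL that still contains an upper-case character -->
      <match url="[A-Z]" ignoreCase="false" />
      <!-- Permanently redirect to the lower-cased form of the same URL -->
      <action type="Redirect" url="{ToLower:{URL}}" />
    </rule>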

    What is really cool is that you can use the SEO Toolkit to run an analysis against your application; you will probably get some violations around lower-case URLs, canonical domains, etc, and after seeing those you can use URL Rewrite 2.0 to fix them with one click.

    I have personally used it on my Web site. Try the following three URLs; all of them will be redirected to the canonical form (http://www.carlosag.net/Tools/CodeTranslator/) and you will see URL Rewrite in action (a sketch of the redirect is shown after the list):

    1. http://www.carlosag.net/Tools/CodeTranslator/
    2. http://carlosag.net/Tools/CodeTranslator/
    3. http://www.carlosag.net/Tools/CodeTranslator
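    For reference, this is roughly what that redirect looks like on the wire (a sketch with headers trimmed; exact values depend on the server):

    > GET /Tools/CodeTranslator/ HTTP/1.1
    > Host: carlosag.net

    < HTTP/1.1 301 Moved Permanently
    < Location: http://www.carlosag.net/Tools/CodeTranslator/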

    Note that in the end those templates just translate to web.config settings that become part of your application and can be XCOPY-deployed with it. This works with ASP.NET, PHP, or any other server technology, including static files. Below is the output of the Canonical Host Name rule, which I use in my Web site’s web.config.

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="CanonicalHostNameRule1">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^www\.carlosag\.net$" negate="true" />
              </conditions>
              <action type="Redirect" url="http://www.carlosag.net/{R:1}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

    There are many more features I could talk about, but for now this was just a quick SEO-related post.

  • CarlosAg Blog

    Analyze your IIS Log Files - Favorite Log Parser Queries

    • 14 Comments

    The other day I was asked if I knew about a tool that would allow users to easily analyze IIS log files, to process them and look for specific data in a way that could easily be automated. My recommendation was that if they were comfortable with a SQL-like language, they should use Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; it allows you to export the results of the queries to many output formats such as CSV (comma-separated values), XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5.

    To use it you just need to install it and run the LogParser.exe that is found in its installation directory (on my x64 machine it is located at C:\Program Files (x86)\Log Parser 2.2).

    I also thought I would share some of my favorite queries. To run them, just execute LogParser.exe and make sure to specify that the input is an IIS log file (-i:W3C); for ease of use we will export to a CSV file that can then be opened in Excel (-o:CSV) for further analysis:

    LogParser.exe -i:W3C "Query-From-The-Table-Below" -o:CSV
    Purpose: Number of Hits per Client IP, including a Reverse DNS lookup (SLOW)
    Query:
        SELECT c-ip As Machine,
               REVERSEDNS(c-ip) As Name,
               COUNT(*) As Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY Machine ORDER BY Hits DESC
    Sample Output:
        Machine    Name         Hits
        ::1        CARLOSAGDEV  57
        127.0.0.1  MACHINE1     28
        127.X.X.X  MACHINE2     1

    Purpose: Top 25 File Types
    Query:
        SELECT TOP 25
            EXTRACT_EXTENSION(cs-uri-stem) As Extension,
            COUNT(*) As Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY Extension
        ORDER BY Hits DESC
    Sample Output:
        Extension  Hits
        gif        52127
        bmp        20377
        axd        10321
        txt        460
        htm        362

    Purpose: Top 25 URLs
    Query:
        SELECT TOP 25
            cs-uri-stem as Url,
            COUNT(*) As Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY cs-uri-stem
        ORDER By Hits DESC
    Sample Output:
        Url                                   Hits
        /WebResource.axd                      10318
        /favicon.ico                          8523
        /Tools/CodeTranslator/Translate.ashx  6519
        /App_Themes/Silver/carlosag.css       5898
        /images/arrow.gif                     5720

    Purpose: Number of hits per hour for the month of March
    Query:
        SELECT
            QUANTIZE(TO_LOCALTIME(TO_TIMESTAMP(date, time)), 3600) AS Hour,
            COUNT(*) AS Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        WHERE date>'2010-03-01' and date<'2010-04-01'
        GROUP BY Hour
    Sample Output:
        Hour               Hits
        3/3/2010 10:00:00  33
        3/3/2010 11:00:00  5
        3/3/2010 12:00:00  3

    Purpose: Number of hits per Method (GET, POST, etc)
    Query:
        SELECT
            cs-method As Method,
            COUNT(*) As Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY Method
    Sample Output:
        Method    Hits
        GET       133566
        POST      10901
        HEAD      568
        OPTIONS   11
        PROPFIND  18

    Purpose: Number of requests made by user
    Query:
        SELECT TOP 25
            cs-username As User,
            COUNT(*) as Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        WHERE User Is Not Null
        GROUP BY User
    Sample Output:
        User           Hits
        Administrator  566
        Guest          1

    Purpose: Extract values from the query string (d and t) and use them for aggregation
    Query:
        SELECT TOP 25
            EXTRACT_VALUE(cs-uri-query,'d') as Query_D,
            EXTRACT_VALUE(cs-uri-query,'t') as Query_T,
            COUNT(*) As Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        WHERE Query_D IS NOT NULL
        GROUP BY Query_D, Query_T
        ORDER By Hits DESC
    Sample Output:
        Query_D          Query_T      Hits
        Value in Query1  Value in T1  1556
        Value in Query2  Value in T2  938
        Value in Query3  Value in T3  877
        Value in Query4  Value in T4  768

    Purpose: Find the slowest 25 URLs (on average) in the site
    Query:
        SELECT TOP 25
            cs-uri-stem as URL,
            MAX(time-taken) As Max,
            MIN(time-taken) As Min,
            AVG(time-taken) As Average
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY URL
        ORDER By Average DESC
    Sample Output:
        URL                    Max    Min    Average
        /Test/Default.aspx     23215  23215  23215
        /WebSite/Default.aspx  5757   2752   4178
        /Remote2008.jpg        3510   3510   3510
        /wordpress/            6541   2      3271
        /RemoteVista.jpg       3314   2      1658

    Purpose: List the count of each Status and Substatus code
    Query:
        SELECT TOP 25
            STRCAT(TO_STRING(sc-status),
            STRCAT('.', TO_STRING(sc-substatus))) As Status,
            COUNT(*) AS Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY Status
        ORDER BY Status ASC
    Sample Output:
        Status  Hits
        200     144
        304     38
        400     9
        403.14  10
        404     64
        404.3   2
        500.19  23

    Purpose: List all the requests by user agent
    Query:
        SELECT
            cs(User-Agent) As UserAgent,
            COUNT(*) as Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        GROUP BY UserAgent
        ORDER BY Hits DESC
    Sample Output:
        UserAgent                                     Hits
        iisbot/1.0+(+http://www.iis.net/iisbot.html)  104
        Mozilla/4.0+(compatible;+MSIE+8.0;…           77
        Microsoft-WebDAV-MiniRedir/6.1.7600           23
        DavClnt                                       1

    Purpose: List all the Win32 error codes that have been logged
    Query:
        SELECT
            sc-win32-status As Win32-Status,
            WIN32_ERROR_DESCRIPTION(sc-win32-status) as Description,
            COUNT(*) AS Hits
        FROM c:\inetpub\logs\LogFiles\W3SVC1\*
        WHERE Win32-Status<>0
        GROUP BY Win32-Status
        ORDER BY Win32-Status ASC
    Sample Output:
        Win32-Status  Description                                 Hits
        2             The system cannot find the file specified.  64
        13            The data is invalid.                        9
        50            The request is not supported.               2
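    Putting it together, here is one complete invocation for the “Top 25 URLs” query above that writes the results to a CSV file via the INTO clause (the log path and output file name are just the ones used in this example; adjust them to your environment):

    LogParser.exe -i:W3C -o:CSV "SELECT TOP 25 cs-uri-stem AS Url, COUNT(*) AS Hits INTO TopUrls.csv FROM c:\inetpub\logs\LogFiles\W3SVC1\* GROUP BY cs-uri-stem ORDER BY Hits DESC"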

    A final note: any time you deal with dates and times, remember to use the TO_LOCALTIME function to convert the log times to your local time; otherwise you will find it very confusing when your entries seem to be reported at the wrong times.

    If you need any help you can always visit the Log Parser Forums to find more information or ask specific questions.

    Any other useful queries I missed?

  • CarlosAg Blog

    IIS SEO Toolkit: Find warnings of HTTP content linked by pages using HTTPS

    • 0 Comments

    Are you a developer/owner/publisher/etc of a site that uses HTTPS (SSL) for secure access? If you are, please continue reading.

    Have you ever visited a Web site that is secured using SSL (Secure Sockets Layer) just to get an ugly Security Warning message like:

    Do you want to view only the webpage content that was delivered securely?

    This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage.

    image

    How frustrating is this for you? Do you think that end users know what the right answer to the question above is? Honestly, I think the Yes/No buttons and the phrasing of the question could even cause me to click the wrong option.

    What this warning is basically trying to tell the user is that even though they navigated to a page that was supposed to be secured using SSL, the page is consuming resources that come from an unsecured location. These could be scripts, style sheets or other types of objects that could potentially pose a security risk, since they could be tampered with on the way or come from different locations.
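    For example (hypothetical markup and URLs), a page served over HTTPS that pulls in an image or script over plain HTTP will trigger the warning, while referencing the same resource over HTTPS will not:

    <!-- Page served from https://www.example.com/checkout.aspx -->
    <img src="http://www.example.com/images/logo.png" />   <!-- mixed content: triggers the warning -->
    <img src="https://www.example.com/images/logo.png" />  <!-- delivered securely: no warning -->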

    As a site owner/developer/publisher/etc you should always make sure that you are not exposing your customers to such a bad experience, leaving them with a question they can’t possibly answer right. If they choose “Yes” they get an incomplete experience, with broken images, broken scripts or something worse; if they choose “No”, that is even worse, since it means you are teaching them to ignore these warnings, which in some cases could be real signs of security issues.

    Bottom line: any issue like this should be treated as a bug and fixed in the application if possible.

    But the big question is: how do you find these issues? Well, the answer is very simple yet extremely time consuming: navigate to every single page of your site using SSL, and as you do that, examine every single resource in the page (styles, objects, scripts, etc) and see if its URL points to a non-HTTPS location.

    Enter the IIS Search Engine Optimization (SEO) Toolkit.

    The good news is that with the SEO Toolkit it is extremely simple to find these issues.

    1. Start a new analysis with the IIS SEO Toolkit using the HTTPS URL of your site, for example: https://www.example.com/
    2. Once the analysis is over, select the option “Query->Open Query” and open an XML file with the following contents:

       <?xml version="1.0" encoding="utf-8"?>
       <query dataSource="links">
         <filter>
           <expression field="LinkingUrl" operator="Begins" value="https://" />
           <expression field="LinkedUrl" operator="Begins" value="http://" />
           <expression field="LinkType" operator="NotEquals" value="Link" />
           <expression field="LinkType" operator="NotEquals" value="Rss" />
         </filter>
         <displayFields>
           <field name="LinkingUrl" />
           <field name="LinkedUrl" />
           <field name="LinkedStatus" />
           <field name="LinkType" />
         </displayFields>
       </query>

    3. Just by doing that, a Query window will open showing all the links in your site that have this problem. Note that the query simply looks for all the resources that are linked from a URL beginning with HTTPS, where the target resource uses HTTP, and that are not normal links (since those do not have this problem).
    4. This is what my quick repro looks like. Note that it actually tells you the type of resource (an image and a style sheet in this case). Additionally, if you double-click a row it will show you exactly the place in the markup where the problem occurs so you can easily fix it.

    image

    Summary

    Using the IIS SEO Toolkit and its powerful query engine, you can easily detect conditions on your site that would otherwise take an incredible amount of time and be prohibitively expensive to check constantly.

  • CarlosAg Blog

    Announcing: IIS SEO Toolkit v1.0.1

    • 5 Comments

    Last week we released a refresh for the IIS Search Engine Optimization (SEO) Toolkit v1.0. This version is a minor update that includes fixes for all the important bugs reported in the IIS.NET SEO Forum.

    Some of the fixes included in this version are:

    1. Pages sending the XHTML content type 'application/xhtml+xml' are not parsed correctly as HTML causing their links and violations to be empty.
    2. Report Analysis fails if the META tags include certain characters.
    3. <style> tag is not parsed correctly if it is empty causing Invalid Markup violations to be flagged incorrectly.
    4. Memory is not released when the "Store Copies of analyzed web pages locally" button is unchecked.
    5. HTML with leading empty lines and doctype fails to parse correctly causing their links and violations to be empty.
    6. Internal Link criteria option of "host: <sitename> and subdomains (*.<sitename>)" fails to work as expected under certain configurations.
    7. System.NullReferenceException when the content attribute is missing in a META tag.
    8. Windows authentication does not work with servers configured with NTLM or Kerberos only challenge.
    9. External META tags are stored in the report making it cumbersome to use the important ones.
    10. Several localization related bugs.
    11. DTD error when navigating to the Sitemap and Sitemap Index User Interface.
    12. And others…

    This release is compatible with v1.0 RTM and will upgrade it if already installed. So go ahead and install the new version using the Web Platform Installer by clicking: http://go.microsoft.com/?linkid=9695987

     

    Learn more about it at: http://www.iis.net/expand/SEOToolkit

  • CarlosAg Blog

    IIS SEO Toolkit – Crawler Module Extensibility

    • 27 Comments

     

    Sample SEO Toolkit CrawlerModule Extensibility

    In this blog post we are going to write an example of how to extend the SEO Toolkit functionality. For that, let's pretend our company has a large Web site that includes several images, and we are now interested in making sure all of them comply with a certain standard: say, all of them should be smaller than 1024x768 pixels, and the quality of the images should be no less than 16 bits per pixel. Additionally, we would also like to be able to run custom queries that later allow us to further analyze the contents of the images, filter based on directories, and more.

    For this we will extend the SEO Toolkit crawling process to perform the additional processing for images, we will be adding the following new capabilities:

    1. Capture additional information from the content. In this case we will capture information about the image; in particular we will extend the report to add an "Image Width", an "Image Height" and an "Image Pixel Format".
    2. Flag additional violations. In this example we will flag three new violations:
      1. Image is too large. This violation will be flagged any time the content length of the image is larger than the "Maximum Download Size per URL" configured at the start of the analysis. It will also flag this violation if the resolution is larger than 1024x768.
      2. Image pixel format is too small. This violation will be flagged if the image is 8 or 4 bits per pixel.
      3. Image has a small resolution. This will be flagged if the image resolution per inch is less than 72dpi.

    Enter CrawlerModule

    A crawler module is a class that extends the crawling process in Site Analysis to provide custom functionality while processing each URL. By deriving from this class you can easily raise your own set of violations or add your own data and links to any URL.

    public abstract class CrawlerModule : IDisposable
    {
        // Methods
        public virtual void BeginAnalysis();
        public virtual void EndAnalysis(bool cancelled);
        public abstract void Process(CrawlerProcessContext context);

        // Properties
        protected WebCrawler Crawler { get; }
        protected CrawlerSettings Settings { get; }
    }

    It includes three main methods:

    1. BeginAnalysis. This method is invoked once at the beginning of the crawling process and allows you to perform any initialization needed. Common tasks include registering custom properties in the Report that can be accessed through the Crawler property.
    2. Process. This method is invoked for each URL once its content has been downloaded. The context argument includes a UrlInfo property that provides all the metadata extracted for the URL. It also includes a list of Violations and Links for the URL. Common tasks include augmenting the metadata of the URL (whether using its contents or external systems), flagging new custom violations, or discovering new links in the contents.
    3. EndAnalysis. This method is invoked once at the end of the crawling process and allows you to do any final calculations on the report once all the URLs have been processed. Common tasks in this method include performing aggregations of data across all the URLs, or identifying violations that depend on all the data being available (such as finding duplicates).

    Coding the Image Crawler Module

    Create a Class Library in Visual Studio and add the code shown below.

    1. Open Visual Studio and select the option File->New Project
    2. In the New Project dialog select the Class Library project template and specify a name and a location such as "SampleCrawlerModule"
    3. Using the Menu "Project->Add Reference", add a reference to the IIS SEO Toolkit client library (C:\Program Files\Reference Assemblies\Microsoft\IIS\Microsoft.Web.Management.SEO.Client.dll).
    4. Since we are going to be registering this through the IIS Manager extensibility, add a reference to the IIS Manager extensibility DLL (c:\windows\system32\inetsrv\Microsoft.Web.Management.dll) using the "Project->Add Reference" menu.
    5. Also, since we will be using the .NET Bitmap class you need to add a reference to "System.Drawing" using the "Project->Add Reference" menu.
    6. Delete the auto-generated Class1.cs since we will not be using it.
    7. Using the Menu "Project->Add New Item" Add a new class named "ImageExtension".
    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using Microsoft.Web.Management.SEO.Crawler;

    namespace SampleCrawlerModule {

        /// <summary>
        /// Extension to add validation and metadata to images while crawling
        /// </summary>
        internal class ImageExtension : CrawlerModule {
            private const string ImageWidthField = "iWidth";
            private const string ImageHeightField = "iHeight";
            private const string ImagePixelFormatField = "iPixFmt";

            public override void BeginAnalysis() {
                // Register the properties we want to augment at the beginning of the analysis
                Crawler.Report.RegisterProperty(ImageWidthField, "Image Width", typeof(int));
                Crawler.Report.RegisterProperty(ImageHeightField, "Image Height", typeof(int));
                Crawler.Report.RegisterProperty(ImagePixelFormatField, "Image Pixel Format", typeof(string));
            }

            public override void Process(CrawlerProcessContext context) {
                // Make sure only process the Content Types we need to
                switch (context.UrlInfo.ContentTypeNormalized) {
                    case "image/jpeg":
                    case "image/png":
                    case "image/gif":
                    case "image/bmp":
                        // Process only known content types
                        break;
                    default:
                        // Ignore any other
                        return;
                }

                //--------------------------------------------
                // If the content length of the image was larger than the max
                //   allowed to download, then flag a violation, and stop
                if (context.UrlInfo.ContentLength >
                    Crawler.Settings.MaxContentLength) {
                    Violations.AddImageTooLargeViolation(context,
                        "It is larger than the allowed download size");
                    // Stop processing since we do not have all the content
                    return;
                }

                // Load the image from the response into a bitmap
                using (Bitmap bitmap = new Bitmap(context.UrlInfo.ResponseStream)) {
                    Size size = bitmap.Size;

                    //--------------------------------------------
                    // Augment the metadata by adding our fields
                    context.UrlInfo.SetPropertyValue(ImageWidthField, size.Width);
                    context.UrlInfo.SetPropertyValue(ImageHeightField, size.Height);
                    context.UrlInfo.SetPropertyValue(ImagePixelFormatField, bitmap.PixelFormat.ToString());

                    //--------------------------------------------
                    // Additional Violations:
                    //
                    // If the size is outside our standards, then flag violation
                    if (size.Width > 1024 &&
                        size.Height > 768) {
                        Violations.AddImageTooLargeViolation(context,
                            String.Format("The image size is: {0}x{1}",
                                          size.Width, size.Height));
                    }

                    // If the format is outside our standards, then flag violation
                    switch (bitmap.PixelFormat) {
                        case PixelFormat.Format1bppIndexed:
                        case PixelFormat.Format4bppIndexed:
                        case PixelFormat.Format8bppIndexed:
                            Violations.AddImagePixelFormatSmall(context);
                            break;
                    }

                    if (bitmap.VerticalResolution <= 72 ||
                        bitmap.HorizontalResolution <= 72) {
                        Violations.AddImageResolutionSmall(context,
                            bitmap.HorizontalResolution + "x" + bitmap.VerticalResolution);
                    }
                }
            }

            /// <summary>
            /// Helper class to hold the violations
            /// </summary>
            private static class Violations {

                private static readonly ViolationInfo ImageTooLarge =
                    new ViolationInfo("ImageTooLarge",
                                      ViolationLevel.Warning,
                                      "Image is too large.",
                                      "The Image is too large: {details}.",
                                      "Make sure that the image content is required.",
                                      "Images");

                private static readonly ViolationInfo ImagePixelFormatSmall =
                    new ViolationInfo("ImagePixelFormatSmall",
                                      ViolationLevel.Warning,
                                      "Image pixel format is too small.",
                                      "The Image pixel format is too small",
                                      "Make sure that the quality of the image is good.",
                                      "Images");

                private static readonly ViolationInfo ImageResolutionSmall =
                    new ViolationInfo("ImageResolutionSmall",
                                      ViolationLevel.Warning,
                                      "Image resolution is small.",
                                      "The Image resolution is too small: ({res})",
                                      "Make sure that the image quality is good.",
                                      "Images");

                internal static void AddImageTooLargeViolation(CrawlerProcessContext context, string details) {
                    context.Violations.Add(new Violation(ImageTooLarge,
                            0, "details", details));
                }

                internal static void AddImagePixelFormatSmall(CrawlerProcessContext context) {
                    context.Violations.Add(new Violation(ImagePixelFormatSmall, 0));
                }

                internal static void AddImageResolutionSmall(CrawlerProcessContext context, string resolution) {
                    context.Violations.Add(new Violation(ImageResolutionSmall,
                            0, "res", resolution));
                }
            }
        }
    }

    As you can see, in BeginAnalysis the module registers three new properties with the report using the Crawler property. This is only required if you want to provide a custom display text or use a type other than string. Note that the current version only allows primitive types like Integer, Float, DateTime, etc.

    During the Process method it first makes sure that it only runs for known content types, then it performs the validations, raising a set of custom violations that are defined in the Violations static helper class. Note that we load the content from the ResponseStream property, which contains the raw content received from the server. If you were analyzing text, the Response property would contain the content instead (this is based on Content Type, so HTML, XML, CSS, etc, will be kept in that String property).
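    For instance, a crawler module that wanted to inspect HTML markup would read from Response instead. A minimal sketch using only the properties described above:

    // Sketch: for text content types the downloaded markup is exposed as a string
    public override void Process(CrawlerProcessContext context) {
        if (context.UrlInfo.ContentTypeNormalized == "text/html") {
            string markup = context.UrlInfo.Response;
            // ... inspect the markup and add entries to context.Violations as needed ...
        }
    }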

    Registering it

    When running inside IIS Manager, crawler modules need to be registered as a standard UI module first, and then inside their initialization they need to be registered using the IExtensibilityManager interface. In this case, to keep the code as simple as possible, everything is added in a single file. So add a new file called "RegistrationCode.cs" with the contents below:

    using System;
    using Microsoft.Web.Management.Client;
    using Microsoft.Web.Management.SEO.Crawler;
    using Microsoft.Web.Management.Server;

    namespace SampleCrawlerModule {

        internal class SampleCrawlerModuleProvider : ModuleProvider {
            public override ModuleDefinition GetModuleDefinition(IManagementContext context) {
                return new ModuleDefinition(Name, typeof(SampleCrawlerModule).AssemblyQualifiedName);
            }

            public override Type ServiceType {
                get { return null; }
            }

            public override bool SupportsScope(ManagementScope scope) {
                return true;
            }
        }

        internal class SampleCrawlerModule : Module {
            protected override void Initialize(IServiceProvider serviceProvider, ModuleInfo moduleInfo) {
                base.Initialize(serviceProvider, moduleInfo);

                IExtensibilityManager em = (IExtensibilityManager)GetService(typeof(IExtensibilityManager));
                em.RegisterExtension(typeof(CrawlerModule), new ImageExtension());
            }
        }
    }

    This code defines a standard IIS Manager UI module, and in its client-side Initialize method it uses the IExtensibilityManager interface to register a new instance of the image extension. This makes it visible to the Site Analysis feature.

    Testing it

    To test it we need to add the UI module to Administration.config, which also means that the assembly needs to be registered in the GAC.

    To Strongly name the assembly

    In Visual Studio you can do this easily by using the menu "Project->Properties", selecting the "Signing" tab, checking "Sign the assembly", and choosing a file; if you don't have one you can just choose New and specify a name.

    After this you can compile, and you should now be able to add it to the GAC.

    To GAC it

    If you have the SDKs installed, you should be able to call it like this (in my case):

    "\Program Files\Microsoft SDKs\Windows\v6.0A\bin\gacutil.exe" /if SampleCrawlerModule.dll

     

    (Note, you could also just open Windows Explorer, navigate to c:\Windows\assembly and drag & drop your file in there, that will GAC it automatically).

    Finally, to see the right name that should be used in Administration.config, run the following command:

    "\Program Files\Microsoft SDKs\Windows\v6.0A\bin\gacutil.exe" /l SampleCrawlerModule

    In my case it displays:

    SampleCrawlerModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6f4d9863e5b22f10, …

    Finally register it in Administration.config

    Open Administration.config in Notepad using an elevated instance, find </moduleProviders> and add a line like the one below, replacing the right values for Version and PublicKeyToken:

          <add name="SEOSample" type="SampleCrawlerModule.SampleCrawlerModuleProvider, SampleCrawlerModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6f4d9863e5b22f10" />

    Use it

    After registration you should now be able to launch IIS Manager and navigate to Search Engine Optimization. Start a new analysis of your Web site. Once it completes, if there are any violations you will see them in the Violations Summary or any other report. For example, see below all the violations in the "Images" category.

    image

    Since we also extended the metadata by including the new fields (Image Width, Image Height, and Image Pixel Format) now you can use them with the Query infrastructure to easily create a report of all the images:

    image

    And since they are standard fields, they can be used in Filters, Groups, and any other functionality, including exporting data. So for example the following query can be opened in the Site Analysis feature and will display an average of the width and height of images summarized by type of image:

    <?xml version="1.0" encoding="utf-8"?>
    <query dataSource="urls">
      <filter>
        <expression field="ContentTypeNormalized" operator="Begins" value="image/" />
      </filter>
      <group>
        <field name="ContentTypeNormalized" />
      </group>
      <displayFields>
        <field name="ContentTypeNormalized" />
        <field name="(Count)" />
        <field name="Average(iWidth)" />
        <field name="Average(iHeight)" />
      </displayFields>
    </query>

    image

    And of course violation details are shown as specified, including Recommendation, Description, etc:

    image

    Summary

    As you can see, extending the SEO Toolkit using a crawler module allows you to provide additional information (metadata, violations or links) for any document being processed. This can be used to add support for content types not supported out of the box, such as PDF, Office documents or anything else you need. It can also be used to extend the metadata by writing custom code to wire data from other systems into the report, giving you the ability to exploit this data using the query capabilities of Site Analysis.
