From the MVPs: Diagnostics and logging in Windows Azure Web Sites


This is the 37th in our series of guest posts by Microsoft Most Valued Professionals (MVPs). You can click the “MVPs” tag in the right column of our blog to see all the articles.

Since the early 1990s, Microsoft has recognized technology champions around the world with the MVP Award. MVPs freely share their knowledge, real-world experience, and impartial and objective feedback to help people enhance the way they use technology. Of the millions of individuals who participate in technology communities, around 4,000 are recognized as Microsoft MVPs. You can read more original MVP-authored content on the Microsoft MVP Award Program Blog.

This post is by Windows Azure MVP Robin Shahan. Thanks, Robin!

And don’t forget that it’s Windows Azure Week: a week of free online training by the experts who build Windows Azure. Information and registration links are here. It’s not too late to attend the Thursday and Friday sessions!

In my previous job as VP of Technology at a startup, I migrated their entire infrastructure from a hosted environment in Silicon Valley to Windows Azure in 2010. What helped make that a successful implementation was the use of a substantial amount of diagnostics. Everything we built was PaaS, because that’s all that was available at the time, but it was a good fit for us, as all of our applications were C# and .NET.

Nowadays, there’s the new quick-deployment, handy-dandy Windows Azure Web Sites (WAWS). I wanted to know how the diagnostics correlate between PaaS cloud services and WAWS – can you get the same diagnostics for WAWS that you can get with a cloud service? How do you view the diagnostics? How do you enable and configure them? Let’s take a look.

I included code in the OnStart method to enable diagnostics and logging in our cloud services; you can also configure this through Visual Studio. The data is collected on the web role (or worker role) and then transferred to table storage or blob storage at whatever interval you specify. I collected the following diagnostics information:

  • Trace Diagnostics
  • IIS Logs
  • IIS Failure Logs
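
In a cloud service, that OnStart configuration might look like the following sketch, using the DiagnosticMonitor API from the Windows Azure SDK (the transfer intervals and log level shown are illustrative choices, not the author's exact values):

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration for this role instance.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Trace diagnostics: transfer to WADLogsTable every minute,
        // keeping Information-level messages and above.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;

        // IIS logs and failed request logs are collected as directories
        // and copied to the wad-iis-* blob containers.
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        // The connection string name comes from the Diagnostics plug-in.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
            config);

        return base.OnStart();
    }
}
```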

After deployment, the following tables are created and populated in Windows Azure Table Storage.

  • WADDirectoriesTable – a list of the IIS log files. You can query this table programmatically to get the list of IIS log files written to blob storage.
  • WADLogsTable – the trace diagnostics written by the application using the System.Diagnostics.Trace class.
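
The entries in WADLogsTable come from ordinary trace calls in the application; for example (the OrderProcessor class is a hypothetical illustration):

```csharp
using System;
using System.Diagnostics;

public static class OrderProcessor // hypothetical application class
{
    public static void Process(int orderId)
    {
        // Information-level entries land in WADLogsTable when the
        // transfer filter is Information or more verbose.
        Trace.TraceInformation("Processing order {0}", orderId);
        try
        {
            // ... application work ...
        }
        catch (Exception ex)
        {
            // Error-level entries survive even a restrictive filter.
            Trace.TraceError("Order {0} failed: {1}", orderId, ex.Message);
            throw;
        }
    }
}
```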

The IIS Logs and IIS Failure Logs are copied to Windows Azure Blob Storage and put in the following containers:

  • wad-iis-failedreqlogfiles – contains the IIS failed request log files
  • wad-iis-logfiles – contains the full IIS log files

The trace diagnostics table stores quite a bit of information, including timestamp, deployment ID, instance ID, logging level, and message. You can query this table for a specific date range, trace level, deployment ID, etc., programmatically or with a tool like Cerebrata’s Azure Management Studio (AMS). (Cerebrata is owned by Red Gate.)
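
Date-range queries work because WADLogsTable’s partition key is the event time’s tick count, zero-padded and prefixed with “0”, so the keys sort chronologically. A small helper can map a UTC time to a key and build an OData filter (a sketch; the Level values follow the table’s convention of 2 = Error, 3 = Warning, 4 = Information):

```csharp
using System;

public static class WadLogsQuery
{
    // WADLogsTable partition keys are "0" + the event time's tick count,
    // zero-padded to 19 digits, so a UTC time maps directly to a key.
    public static string ToPartitionKey(DateTime utc)
    {
        return "0" + utc.Ticks.ToString("D19");
    }

    // Builds a WCF Data Services ($filter) expression for a date range
    // at a maximum trace level (lower Level = more severe).
    public static string BuildFilter(DateTime fromUtc, DateTime toUtc, int maxLevel)
    {
        return string.Format(
            "PartitionKey ge '{0}' and PartitionKey lt '{1}' and Level le {2}",
            ToPartitionKey(fromUtc), ToPartitionKey(toUtc), maxLevel);
    }
}
```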

This same information is available for a Windows Azure Web Site. The question is how do you configure your web site to retrieve it? The answer is in the Windows Azure portal. I’ve created a test website, added some trace logging, and published it to a Windows Azure Web Site.

If you log into the Windows Azure portal and go to the configuration for your website, you can page down a couple of times to see the sections for the configuration of the diagnostics.

First, you see the “application diagnostics” section. This is where you configure the output from the System.Diagnostics.Trace class. There are three storage options:

  • File System
    • Logs will be written to the web site’s file system. You can then view the logs by accessing the FTP share for the web site.
    • This only enables writing the logs for the next 12 hours.
    • You can specify the logging level for the messages you want to retain.


Fig 1: Application Logging to the File System.

  • Table Storage
    • Logs will be written to a Windows Azure Table.
    • There is no time limit: diagnostics are written to the table until you turn logging off or the table fills up. (Since the maximum size of a Windows Azure Table is 200 TB, that would take a lot of diagnostics!)
    • You can specify the logging level for the messages you want to retain.
    • There is no retention setting; the application logs are never deleted automatically.
    • Windows Azure Storage is an additional cost. (Unless you have terabytes of data, this should be minimal.)


Fig 2: Application Logging to Table Storage.

Click “manage table storage” to select an (existing) storage account to use for the log data, and then select a table. You can also ask to create a new table, and it will provide a space for you to enter a name for the new table.


Fig 3: Manage Table Storage for Application Diagnostics.

  • Blob Storage
    • Logs will be written to Windows Azure blob storage, one blob for each hour.
    • You can set the logging level for the messages you want to retain.
    • Windows Azure Storage is an additional cost. (Unless you have terabytes of data, this should be minimal.)
    • You can set a retention time for the logs, and they will be automatically deleted for you upon hitting that limit.


Figure 4: Application Logging to Blob Storage.

Click “manage blob storage” to select an (existing) storage account to use for the log data, and then select a blob container. You can also ask to create a new blob container, and it will provide a space for you to enter a name for the new container.


Fig 5: Manage Blob Storage for Application Diagnostics.

After running my website and generating some diagnostics, how do I view them?

For viewing Table Storage, I again use AMS. I can query the table for specific date ranges, application name, logging level, etc. You can also view this using Visual Studio, but you either have to page through the records 1000 at a time, or know exactly how to type in a WCF Data Services filter. (AMS has a wizard to create it for you.)
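
Programmatically, the same kind of query can be issued with the storage client library, which follows continuation tokens for you instead of stopping at 1,000 records. A sketch (the table name, connection string, and the idea of filtering on the Timestamp system property are assumptions for illustration):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class LogReader
{
    // Prints log messages newer than 'sinceUtc'. The table name and the
    // "Message" column are assumptions about what the portal creates.
    public static void DumpRecentLogs(string connectionString, DateTimeOffset sinceUtc)
    {
        CloudTable table = CloudStorageAccount.Parse(connectionString)
            .CreateCloudTableClient()
            .GetTableReference("WAWSAppLogTable");

        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterConditionForDate(
                "Timestamp", QueryComparisons.GreaterThanOrEqual, sinceUtc));

        // ExecuteQuery transparently follows continuation tokens, so you
        // are not limited to 1,000 entities per request.
        foreach (DynamicTableEntity entity in table.ExecuteQuery(query))
        {
            EntityProperty message;
            if (entity.Properties.TryGetValue("Message", out message))
                Console.WriteLine(message.StringValue);
        }
    }
}
```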


Fig 6: Application Logging.

I turned on the writing of application diagnostics to blob storage. So if I use AMS to go to my storage account and look for the blob container I specified in the portal (wawsapplogblobrobindotnet2), it shows me the blobs created in that container, starting with the name of my website and continuing with the date and time (1/23 at 2:50 pm, PST, which is 22:50 UTC). It stores all the logging for an hour in one blob.


Fig 7: Application Logging in Blob Storage.

These files have the same content as the table storage logs, but in CSV format.
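
Because the log messages can themselves contain commas, they are quoted in the CSV, so a minimal splitter that honors quoting is enough to post-process these files. A sketch (the field layout in the test is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

public static class LogCsv
{
    // Minimal CSV field splitter that honors double-quoted fields,
    // which log messages need since they can contain commas.
    public static List<string> SplitLine(string line)
    {
        var fields = new List<string>();
        var current = new StringBuilder();
        bool inQuotes = false;
        for (int i = 0; i < line.Length; i++)
        {
            char c = line[i];
            if (c == '"')
            {
                // A doubled quote inside a quoted field is an escaped quote.
                if (inQuotes && i + 1 < line.Length && line[i + 1] == '"')
                {
                    current.Append('"');
                    i++;
                }
                else
                {
                    inQuotes = !inQuotes;
                }
            }
            else if (c == ',' && !inQuotes)
            {
                fields.Add(current.ToString());
                current.Clear();
            }
            else
            {
                current.Append(c);
            }
        }
        fields.Add(current.ToString());
        return fields;
    }
}
```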

The only real difficulty with application diagnostics in WAWS is with viewing them. If you have multiple websites pointing to the same storage account/table, you have to filter the table based on the website name or the deployment ID and the date/time. If you have any amount of trace logging at all, it can result in thousands of records you have to sift through to find what you’re looking for.

With Cloud Services, all of my services wrote trace logging to the same table. AMS will show you a list of your cloud services, and you can ask to see the diagnostics for a specific cloud service, and it will produce the filtered results for you, which makes it really easy and quick to view diagnostics from different services.

The ability to write the logs to blob storage makes this significantly easier, because each hour is in a separate blob, but this adds a bit of a wrinkle if you want to look through the logs for errors in the past two days (for example). You might have to open a lot of files, or download all the files and combine them into one spreadsheet, or load them into a database. I recommend going with Table Storage, because you can query across time for a specific logging level.
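
If you do need to sweep the hourly blobs, the storage client library makes it straightforward to pull them down and scan them. A sketch (the container name comes from the post, but the site-name prefix and the "Error" search string are illustrative):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class LogScanner
{
    public static void FindErrors(string connectionString, string sitePrefix)
    {
        CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("wawsapplogblobrobindotnet2");

        // Blob names start with the site name followed by the date and
        // hour, so a prefix listing narrows the scan to one site.
        foreach (IListBlobItem item in
            container.ListBlobs(sitePrefix, useFlatBlobListing: true))
        {
            CloudBlockBlob blob = item as CloudBlockBlob;
            if (blob == null) continue;

            // Each blob holds one hour of CSV log lines.
            foreach (string line in blob.DownloadText().Split('\n'))
            {
                if (line.IndexOf("Error", StringComparison.OrdinalIgnoreCase) >= 0)
                    Console.WriteLine("{0}: {1}", blob.Name, line.TrimEnd());
            }
        }
    }
}
```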

What about the IIS logs? Those are configured in the next section on the page in the portal titled “site diagnostics”.


Fig 8: Configuring Site Diagnostics.

  • Web Server Logging refers to the IIS log files. You can write to the local file system OR Windows Azure Blob Storage, but not both.

If you choose to write to the file system, you must specify a quota between 25 and 100 MB; this is the maximum amount of disk space the logs will use.

If you choose to write to Windows Azure Blob Storage, click “manage storage” to bring up the dialog where you can specify which (existing) storage account to use and the name of the container in blob storage. Then, just like the application diagnostics, you can specify a new container name or select an existing one.

If writing to Blob Storage, you can set a retention policy for the number of days to retain the logs, and they will be automatically removed after that time period.

If I look at the container wawssitelogblobrobindotnet2 in blob storage, I see the following blobs. Just like application diagnostics, it stores one hour of data in each blob.


Fig 9: IIS logs in Blob Storage.

  • Detailed error messages: If you turn this on, the web server creates an HTML page with some additional information for failed HTTP results (those that result in status code 400 or greater).
  • Failed Request Tracing: If you turn this on, the web server creates an XML file with detailed tracing information for failed HTTP requests. The web server also provides an XSL file to format the XML in a browser. These are stored in the local file system, and can only be accessed by FTP’ing into the website.
  • You are limited to 100 MB of space on the local file system for logging, so the detailed error messages and failed request logs may not have as much history as you would like.
  • If you FTP into your website (I’m using FileZilla here), you can see the LogFiles retained locally. The folder “Detailed Errors” contains the detailed error files; the folder with the name starting with “W3SVC” contains the failed request logs and an xsl file that your browser can use to format the data (freb.xsl).


Fig 10: FTP into website to see Failed Request Logs and Detailed Errors.

I’ve discussed the diagnostics available with Cloud Services and shown how to configure application diagnostics and site diagnostics to get the corresponding information for a Windows Azure Web Site.

  • Hi Robin,

    Excellent post, very thorough.

    I was unaware of those features on web sites.  It is great as it allows us to put the logs of different sites on different blob containers and / or different tables.

    Now…  what is the equivalent on cloud Services?  Can we control the table name logs are persisted to in a Cloud Service (e.g. Worker Role) in order to log in different tables of the same storage account for different Cloud Services?

  • To Vincent: You cannot control the table name that logs are persisted to in a worker role or web role; all logs are written to the WADLogsTable. I use a tool from Red Gate called "Cerebrata Azure Management Studio" to look at the diagnostics for cloud service roles. It can show you the diagnostics for just a specific cloud service, which is really useful if you have a lot of services running in Azure.
