Delay's Blog

Silverlight, WPF, Windows Phone, Web Platform, .NET, and more...

Posts

    TextAnalysisTool.NET to Windows 8: "Y U no run me under .NET 4?" [How to avoid the "please install .NET 3.5" dialog when running older .NET applications on the Windows 8 Developer Preview]

    • 4 Comments

I wrote TextAnalysisTool.NET a number of years ago to streamline the task of analyzing large log files by creating an interactive experience that combines searching, filtering, and tagging and allows the user to quickly identify interesting portions of a large log file. Although .NET 2.0 was out at the time, I targeted .NET 1.1 because it was more widely available, coming pre-installed on Windows Server 2003 (the "latest and greatest" OS then). In the years since, I've heard from folks around the world running TextAnalysisTool.NET on subsequent Windows operating systems and .NET Framework versions. Because Windows and the .NET Framework do a great job maintaining backwards compatibility, the same TextAnalysisTool.NET binary has continued to work as-is for the near-decade since its initial release.

    TextAnalysisTool.NET demonstration

     

    But the story changes with Windows 8! Although Windows 8 has the new .NET 4.5 pre-installed (including .NET 4.0 upon which it's based), it does not include .NET 3.5 (and therefore .NET 2.0). While I can imagine some very sensible reasons for the Windows team to take this approach, it's inconvenient for existing applications because .NET 4 in this scenario does not automatically run applications targeting an earlier framework version. What is cool is that the public Windows 8 Developer Preview detects when an older .NET application is run and automatically prompts the user to install .NET 3.5:

    Windows 8 .NET 3.5 install prompt

    That's pretty slick and is a really nice way to bridge the gap. However, it's still kind of annoying for users as it may not always be practical for them to perform a multi-megabyte download the first time they try to run an older .NET program. So it would be nice if there were an easy way for older .NET applications to opt into running under the .NET 4 framework that's already present on Windows 8...

And there is! :) One aspect of .NET's "side-by-side" support involves using .config files to specify which .NET versions an application is known to work with. (For more information, please refer to the MSDN article How to: Use an Application Configuration File to Target a .NET Framework Version.) Consequently, improving the Windows 8 experience for TextAnalysisTool.NET should be as easy as creating a suitable TextAnalysisTool.NET.exe.config file in the same folder as TextAnalysisTool.NET.exe.

     

    Specifically, the following should do the trick:

    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v1.1.4322"/>
        <supportedRuntime version="v2.0.50727"/>
        <supportedRuntime version="v4.0"/>
      </startup>
    </configuration>

    And it does! :) With that TextAnalysisTool.NET.exe.config file in place, TextAnalysisTool.NET runs on a clean install of the Windows 8 Developer Preview as-is and without prompting the user to install .NET 3.5. I've updated the download ZIP to include this file so new users will automatically benefit; existing users should drop TextAnalysisTool.NET.exe.config in the right place, and they'll be set as well!

    Aside: Although this trick will work in many cases, it isn't guaranteed to work. In particular, if there has been a breaking change in .NET 4, then attempting to run a pre-.NET 4 application in this manner might fail. Therefore, it's prudent to do some verification when trying a change like this!

     

    [Click here to download a ZIP file containing TextAnalysisTool.NET, the relevant .config file, its documentation, and a ReadMe.]

     

    TextAnalysisTool.NET has proven to be extremely popular with support engineers and it's always nice to hear from new users. I hope today's post extends the usefulness of TextAnalysisTool.NET by making the Windows 8 experience as seamless as people have come to expect!


    Make it your own [WebMatrix's new extensibility support enables developers and applications to customize the web development experience to their liking!]

    • 5 Comments

    In conjunction with Microsoft's ongoing BUILD Windows conference, we have just released betas of the next version of the Microsoft Web Platform. Today's releases include a lot of new functionality and I encourage interested parties to have a look at what's new in ASP.NET and WebMatrix.

    In this post, I'm going to focus on one aspect of the WebMatrix 2 Beta: extensibility. By exposing a public API and opening up the application to others, WebMatrix enables individuals and community members to customize the experience to best fit their own unique processes and workflow. Many aspects of the development experience are configurable, so users can become more productive by streamlining common tasks, automating monotonous ones, and simplifying difficult ones!

     

    Task-based extensibility

    Extensibility for the WebMatrix 2 Beta takes three forms: task-based extensibility, help extensibility, and extensions.

1. Task-based extensibility refers to the ability of a web application like Umbraco or WordPress to embed a file within the install package to customize the WebMatrix user interface for that particular application. By providing some simple XML, applications in the Web App Gallery can add custom links to the Ribbon or dashboard, protect core application files, provide enhanced IntelliSense for PHP, and more.

    2. Help extensibility makes it possible to integrate custom content with WebMatrix's new, context-sensitive help pane. The new help system shows links to relevant content and videos based on where the user is in the application and what he or she is doing. Help content is drawn from a variety of sources; content providers can create custom feeds to cover new topics or provide more context on existing ones. This article explains how to create custom help content.

    3. For developers, the real power lies in the ability to write extensions that run inside WebMatrix because they're capable of far richer customization. WebMatrix extensions can be written in any .NET language, are loaded by MEF, the Managed Extensibility Framework, and installed/uninstalled (behind the scenes) as NuGet packages (with a slight twist I'll explain in a different post). Similar to Visual Studio, WebMatrix has an extension gallery that allows users to browse and install extensions from a central feed - or create and share custom feeds!
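To give a feel for how MEF discovery works, here's a minimal sketch. Note that the interface name and member below are placeholders of my own invention, not the actual Microsoft.WebMatrix.Extensibility contract (the CHM mentioned later documents the real types); only the MEF `Export` mechanics are real:

```csharp
using System.ComponentModel.Composition;

// Hypothetical interface standing in for the real extensibility contract
public interface IWebMatrixExtension
{
    string Name { get; }
}

// MEF discovers this class via the [Export] attribute when it scans the
// extension assembly - no explicit registration code is needed
[Export(typeof(IWebMatrixExtension))]
public class MyExtension : IWebMatrixExtension
{
    public string Name
    {
        get { return "MyExtension"; }
    }
}
```

The host composes all exports matching the contract type at startup, which is what lets extensions be dropped in as self-contained assemblies.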

     

    ColorThemeManager

Opening WebMatrix's extension gallery (by clicking the "Extensions/Gallery" Ribbon button from the Site workspace) shows some of the extensions that have been created to give an idea of what's possible. I'll call out three of them:

    • ColorThemeManager - This extension by Yishai Galatzer allows you to customize the colors used by the file editor in WebMatrix, export your settings, and import settings from other sources. So if you're one of those people who enjoys looking at green on black text, then you're in luck. :)

    • ImageOptimizer - This extension by Mads Kristensen makes it easy to "optimize" the PNG and JPEG images of a web site by removing unnecessary metadata and recompressing content to minimize file sizes - thereby keeping bandwidth down and site responsiveness up.

    • Snippets - This extension by me makes it easy to insert snippets of text into an open document in the Files workspace. It was written to make WebMatrix demos a little easier (watch for it in the WebMatrix: uber geek in designer clothes presentation today at BUILD!), but its simplicity makes it a good learning tool, too. I'll be blogging the complete source code for Snippets in a few days.

     

    WebMatrix Extension project template

    In addition to providing an overview of the Snippets sample, I plan to discuss other aspects of WebMatrix extensibility over the next few weeks. In the meantime, you can start exploring WebMatrix extensibility today:

    1. Download the Microsoft.WebMatrix.Extensibility CHM file, unblock it (important: click here for directions on "unblocking" a file), open it, and browse the contents. (If you see the message "Navigation to the webpage was canceled", then the file is still blocked.)

    2. Download the "WebMatrix Extension" Visual Studio project template, save the ZIP file in your "%USERPROFILE%\Documents\Visual Studio 2010\Templates\ProjectTemplates" directory, choose File, New, Project in Visual Studio 2010, select the "Visual C#" node, click the "WebMatrix Extension" item, type a project name (ex: "MyExtension" (no spaces, please)), and click OK.

The project template sets up the right infrastructure, adds all the necessary references, includes pre- and post-build rules to make development a little easier, and helps get you started with a simple Ribbon-based extension that demonstrates some of the basic extensibility points. The template's ReadMe.txt explains how to configure Visual Studio so that pressing F5 will automatically load the extension inside WebMatrix for a simple, seamless debugging experience with complete breakpoint support, etc. FYI: I'd like to improve this template by adding support for NuGet package generation (so it will be easier to deploy extensions to a gallery) and maybe also create a VSIX wrapper for it (to enable a more seamless install of the template itself).

    Aside: While the project template "helpfully" copies your extension to the right place for WebMatrix to load it on startup, the extension is not properly installed and so WebMatrix doesn't know how to uninstall it. For now, the easiest way to get rid of a custom extension is to close WebMatrix and delete the contents of the "%USERPROFILE%\AppData\Local\Microsoft\WebMatrix\Components" directory.

     

    The new extensibility APIs in the WebMatrix 2 Beta allow developers to get started with extensions today. And while there aren't yet extension points for everything, there are enough to enable some pretty interesting scenarios. Available extension points include:

    Microsoft.WebMatrix.Extensibility help file
    • Ribbon content
      • Buttons, groups, tabs, etc.
    • Application information
      • Name, local path, remote URI
    • Integrated dialog and notification UI
    • Context menu items for application files/directories
    • Editor manipulation
      • Text buffer, settings, theming, custom types
    • Dashboard content
    • Simple commanding
    • Active workspace
    • More...

    That said, people are going to have a lot of great extension ideas that are either difficult or impossible to achieve with the Beta APIs. Not being able to put good ideas into practice is certainly disappointing, but it's also a great opportunity to let us know what features are missing from the API and how we can improve it! To make that easy, there's a WebMatrix forum where you can ask questions and exchange ideas. Once you've tried things out, please go there and share your thoughts!

     

    Aside: It probably goes without saying (but I'll say it anyway!) that the APIs available in the WebMatrix 2 Beta are subject to change and it's likely that extensions written for Beta will need to be modified in order to run on later releases. That's not to discourage people from writing extensions, but rather an attempt to set expectations appropriately. :)
    Further aside: It's natural to wonder if existing plugins for Visual Studio will "just work" in WebMatrix. The answer is that they will not - but in most cases it turns out that trying to load a Visual Studio extension inside WebMatrix wouldn't be all that meaningful anyway... At this point, the functionality of these two products and the target audience (both users and developers) are different enough that things don't align in a way that makes this scenario work.

    Know your place in life [Free PlaceImage control makes it easy to add placeholder images to any WPF, Silverlight, or Windows Phone application!]

    • 7 Comments

    One of the challenges with referencing online content is that you never know just how long it will take to download... On a good day, images show up immediately and your application has exactly the experience you want. On a bad day, images take a looong time to load - or never load at all! - and your application's interface is full of blank spaces. Applications that make use of remote images need to be prepared for variability like this and should have "placeholder" content to display when the desired image isn't available.

    Of course, there are a variety of ways to deal with this; I thought it would be neat to create a reusable, self-contained class and share it here. I envisioned a simple control that "looked" like a standard Image element (i.e., had the same API), but that seamlessly handled the work of displaying placeholder content before an image loaded and getting rid of it afterward. Naturally, I also wanted code that would run on WPF, Silverlight, and Windows Phone! :)

     

    <delay:PlaceImage
     PlaceholderSource="PlaceholderPhoto.png"
     Source="{Binding ImageUri}"/>

    The result of this exercise is something I've called PlaceImage. PlaceImage has the same API as the framework's Image and can be dropped in pretty much anywhere an Image is used. To enable the "placeholder" effect, simply set the PlaceholderSource property to a suitable image. (Aside: While you could specify another remote image for the placeholder, the most sensible thing to do is to reference an image that's bundled with the application (e.g., as content or a resource).) PlaceImage immediately shows your placeholder image and waits for the desired image to load - at which point, PlaceImage swaps it in and gets rid of the placeholder!

     

    I've written a sample application for each of the supported platforms that displays contact cards of imaginary employees. When the sample first runs, none of the remote images have loaded, so each card shows the "?" placeholder image:

    PlaceImageDemo on Windows Phone

    After a while, some of the remote images will have loaded:

    PlaceImageDemo on Silverlight

    Eventually, all the remote images load:

    PlaceImageDemo on WPF

    Thanks to placekitten for the handy placeholder images!

     

    Making use of online content in an application is easy to do and a great way to enrich an application. However, the unpredictable nature of the network means content might not always be available when it's needed. PlaceImage makes it easy to add placeholder images to common scenarios and helps keep the user interface free of blank spaces. With easy support for WPF, Silverlight, and Windows Phone, you can add it to pretty much any XAML-based application!

     

    [Click here to download the PlaceImageDemo project which includes PlaceImage.cs and sample applications for WPF, Silverlight, and Windows Phone.]

     

    Notes:

    • Just like Image, PlaceImage has properties for Source, Stretch, and StretchDirection (the last being available only on WPF). PlaceImage's additional PlaceholderSource property is used just like Source and identifies the placeholder image to be displayed before the Source image is available. (So set it to a local image!)

    • Changes to the Source property of a loaded Image immediately clear its contents. Similarly, changing the Source of a loaded PlaceImage immediately switches to its placeholder image while the new remote content loads. You can trigger this behavior in the sample application by clicking any kitten.

    • Because the Silverlight version of the demo application references web content, it needs to be run from the PlaceImageDemoSL.Web project. (Although running PlaceImageDemoSL will show placeholders, the kitten pictures never load.) The MSDN article URL Access Restrictions in Silverlight has more information on Silverlight's "cross-scheme access" limitations.

    • Control subclasses typically live in a dedicated assembly and define their default Style/Template in Generic.xaml. This is a great, general-purpose model, but I wanted PlaceImage to be easy to add to existing projects in source code form, so it does everything in a single file. All you need to do is include PlaceImage.cs in your project, and PlaceImage will be available in the Delay namespace.

    • The absence of the StretchDirection property on Silverlight and Windows Phone isn't the only platform difference PlaceImage runs into: whereas Silverlight and Windows Phone offer the handy Image.ImageOpened event, WPF has only the (more cumbersome) BitmapSource.DownloadCompleted event. The meaning of these two events isn't quite identical, but for the purposes of PlaceImage, they're considered equivalent.


    Invisible pixels are just as clickable as real pixels! [Tip: Use a Transparent brush to make "empty" parts of a XAML element respond to mouse and touch input]

    • 2 Comments

    Tip

    Use a Transparent brush to make "empty" parts of a XAML element respond to mouse and touch input

    Explanation

I got a question yesterday and thought the answer would make a good addition to my Development Tips series. As you probably know, WPF, Silverlight, and Windows Phone support a rich, hierarchical way of laying out an application's UI. Elements can be created in XAML or in code and respond to input by firing the relevant events (MouseLeftButtonDown, Click, etc.). Input events bubble from the element "closest" to the user all the way up to the root element (stopping if an event is marked Handled).

Every now and then someone finds that an element they expect to be getting input is not (and they've made sure none of its children are "eating" the event). The most common reason is that the element doesn't have any pixels for the user to click on! For example, in a 100x100 panel containing a short message, only the text pixels are considered part of the panel and respond to mouse input - everything else passes "through" the empty area and bubbles up to the parent. This behavior enables the creation of elements with any shape, but sometimes it's not what you want.

Fortunately, it's simple to get empty parts of an element to respond to input: just draw some pixels! And while a Brush of any color will do the trick, painting with Transparent pixels is a fantastic way to keep empty space looking empty while also being clickable!

    Good Example

    <Grid
        Background="Transparent"
        MouseLeftButtonDown="Grid_MouseLeftButtonDown">
        <TextBlock
            Text="You can click anywhere in the Grid!"
            HorizontalAlignment="Center"
            VerticalAlignment="Center"/>
    </Grid>

    More information


    Preprocessor? .NET don't need no stinkin' preprocessor! [DebugEx.Assert provides meaningful assertion failure messages automagically!]

    • 2 Comments

    If you use .NET's Debug.Assert method, you probably write code to validate assumptions like so:

    Debug.Assert(args == null);

    If the expression above evaluates to false at run-time, .NET immediately halts the program and informs you of the problem:

    Typical Debug.Assert

    The stack trace can be really helpful (and it's cool you can attach a debugger right then!), but it would be even more helpful if the message told you something about the faulty assumption... Fortunately, there's an overload of Debug.Assert that lets you provide a message:

    Debug.Assert(args == null, "args should be null.");

    The failure dialog now looks like this:

    Debug.Assert with message

    That's a much nicer experience - especially when there are lots of calls to Assert and you're comfortable ignoring some of them from time to time (for example, if a network request failed and you know there's no connection). Some code analysis tools (notably StyleCop) go as far as to flag a warning for any call to Assert that doesn't provide a custom message.

    At first, adding a message to every Assert seems like it ought to be pure goodness, but there turn out to be some drawbacks in practice:

    • The comment is often a direct restatement of the code - especially for very simple conditions. Redundant redundancy is redundant, and when I see messages like that, I'm reminded of code comments like this:

      i++; // Increment i
    • It takes time and energy to type out all those custom messages, and the irony is that most of them will never be seen at all!

     

    Because the code for the condition is often expressive enough as-is, it would be nice if Assert automatically used the code as the message!

    Aside: This is hardly a new idea; C developers have been doing this for years by leveraging macro magic in the preprocessor to create a string from the text of the condition.
    Further aside: This isn't even a new idea for .NET; I found a couple places on the web where people ask how to do this. And though nobody I saw seemed to have done quite what I show here, I'm sure there are other examples of this technique "in the wild".

    As it happens, displaying the code for a condition can be accomplished fairly easily in .NET without introducing a preprocessor! However, it requires that calls to Assert be made slightly differently so as to defer execution of the condition. In a normal call to Assert, the expression passed to the condition parameter is completely evaluated before being checked. But by changing the type of the condition parameter from bool to Func<bool> and then wrapping it in the magic Expression<Func<bool>>, we're able to pass nearly complete information about the expression into the Assert method where it can be used to recreate the original source code at run-time!
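Under those assumptions, the core of such a method fits in just a few lines. The following is a simplified sketch of the idea (the real DebugEx.Assert described below also simplifies the expression tree before displaying it):

```csharp
using System;
using System.Diagnostics;
using System.Linq.Expressions;

public static class DebugEx
{
    // Like Debug.Assert, this is compiled only into DEBUG builds
    [Conditional("DEBUG")]
    public static void Assert(Expression<Func<bool>> expression)
    {
        // Compile and evaluate the deferred condition...
        Func<bool> condition = expression.Compile();

        // ...and pass the expression's own text as the failure message
        Debug.Assert(condition(), expression.Body.ToString());
    }
}
```

Because the parameter is an `Expression<Func<bool>>`, the compiler hands the method an expression tree instead of a pre-evaluated bool - and that tree's ToString is what makes the automatic message possible.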

     

    To make this a little more concrete, the original "message-less" call I showed at the beginning of the post can be trivially changed to:

    DebugEx.Assert(() => args == null);

    And the DebugEx.Assert method I've written will automatically provide a meaningful message (by calling the real Debug.Assert and passing the condition and a message):

    DebugEx.Assert with automatic message

     

    The message above is identical to the original code - but maybe that's because it's so simple... Let's try something more complex:

    DebugEx.Assert(() => args.Select(a => a.Length).Sum() == 10);

    Becomes:

    Assertion Failed: (args.Select(a => a.Length).Sum() == 10)

    Wow, amazing! So is it always perfect? Unfortunately, no:

    DebugEx.Assert(() => args.Length == 5);

    Becomes:

    Assertion Failed: (ArrayLength(args) == 5)

The translation of the code to an expression tree and back seems to have lost a little fidelity along the way; the compiler translated the Length access into an ArrayLength expression node that doesn't map back to exactly the same code. Similarly:

    DebugEx.Assert(() => 5 + 3 + 2 >= 100);

    Becomes:

    Assertion Failed: False

    In this case, the compiler evaluated the constant expression at compile time (it's constant, after all!), and the information about which numbers were used in the computation was lost.

    Yep, the loss of fidelity in some cases is a bit of a shame, but I'll assert (ha ha!) that nearly all the original intent is preserved and that it's still quite easy to determine the nature of the failing code without having to provide a message. And of course, you can always switch an ambiguous DebugEx.Assert back to a normal Assert and provide a message parameter whenever you want. :)

     

    [Click here to download the source code for DebugEx.Assert and the sample application used for the examples above.]

     

    DebugEx.Assert was a fun experiment and a great introduction to .NET's powerful expression infrastructure. DebugEx.Assert is a nearly-direct replacement for Debug.Assert and (similarly) applies only when DEBUG is defined, so it costs nothing in release builds. It's worth noting there will be a bit of extra overhead due to the lambda, but it should be negligible - especially when compared to the time you'll save by not having to type out a bunch of unnecessary messages!

    If you're getting tired of typing the same code twice, maybe DebugEx.Assert can help! :)

     

    Notes:

• The code for DebugEx.Assert turned out to be simple because nearly all the work is done by the Expression<T> class. The one bit of trickiness stems from the fact that in order to create a lambda to pass as the Func<bool>, the compiler creates a closure which introduces an additional class (though they're never exposed to the developer). Therefore, even simple statements like the original example become kind of hard to read: Assertion Failed: (value(Program+<>c__DisplayClass0).args == null).

      To avoid that problem, I created an ExpressionVisitor subclass to rewrite the expression tree on the fly, getting rid of the references to such extra classes along the way. What I've done with SimplifyingExpressionVisitor is simple, but seems to work nicely for the things I've tried. However, if you find scenarios where it doesn't work as well, I'd love to know so I can handle them too!
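For the curious, the core of such a rewrite can be sketched like this. It's a simplified take on the idea, not the actual SimplifyingExpressionVisitor from the download (the class name and the DisplayClass heuristic here are my own):

```csharp
using System.Linq.Expressions;

// Rewrites closure-field accesses like value(Program+<>c__DisplayClass0).args
// into a plain parameter named "args" so the message reads like source code
class ClosureSimplifyingVisitor : ExpressionVisitor
{
    protected override Expression VisitMember(MemberExpression node)
    {
        // Captured locals show up as field accesses on a compiler-generated
        // constant (the closure class instance)
        if (node.Expression is ConstantExpression &&
            node.Expression.Type.Name.Contains("DisplayClass"))
        {
            return Expression.Parameter(node.Type, node.Member.Name);
        }
        return base.VisitMember(node);
    }
}
```

Running the assertion's expression body through the visitor before calling ToString is what turns the noisy closure syntax back into something resembling the original source.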


    Use it or lose it, part deux [New Delay.FxCop code analysis rule helps identify uncalled public or private methods and properties in a .NET assembly]

    • 2 Comments

    Previous posts introduced the Delay.FxCop custom code analysis assembly and demonstrated the benefits of automated code analysis for easily identifying problem areas in an assembly. The Delay.FxCop project included two rules, DF1000: Check spelling of all string literals and DF1001: Resources should be referenced - today I'm introducing another! The new rule follows in the footsteps of DF1001 by identifying unused parts of an assembly that can be removed to save space and reduce complexity. But while DF1001 operated on resources, today's DF1002: Uncalled methods should be removed analyzes the methods and properties of an assembly to help find those stale bits of code that aren't being used any more.

    Note: If this functionality seems familiar, it's because CA1811: Avoid uncalled private code is one of the standard FxCop rules. I've always been a big fan of CA1811, but frequently wished it could look beyond just private code to consider all code. Of course, limiting the scope of the "in-box" rule makes perfect sense from an FxCop point of view: you don't want the default rules to be noisy or else they'll get turned off and ignored. But the Delay.FxCop assembly isn't subject to the same restrictions, so I thought it would be neat to experiment with an implementation that analyzed all of an assembly's code.
    Further note: One of the downsides of this increased scope is that DF1002 can't distinguish between methods that are part of a library's public API and those that are accidentally unused. As far as DF1002 is concerned, they're both examples of code that's not called from within the assembly. Therefore, running this rule on a library involves some extra overhead to suppress the warnings for public APIs. If it's just a little extra work, maybe it's still worthwhile - but if it's overwhelming, you can always disable DF1002 for library assemblies and restrict it to applications where it's more relevant.

     

    Implementation-wise, DF1002: Uncalled methods should be removed isn't all that different from its predecessors - in fact, it extends and reuses the same assembly node enumeration helper introduced with DF1001. During analysis, every method of the assembly is visited and if it isn't "used" (more on this in a moment), a code analysis warning is output:

    DF1002 : Performance : The method 'SilverlightApplication.MainPage.UnusedPublicMethod' does not appear to be used in code.

    Of course, these warnings can be suppressed in the usual manner:

    [assembly: SuppressMessage("Performance", "DF1002:UncalledMethodsShouldBeRemoved",
               Scope = "member", Target = "SilverlightApplication.MainPage.#UnusedPublicMethod()",
               Justification = "Example method; not called from code.")]

     

    It's interesting to consider what it means for a method or a property to be "used"... (Internally, properties are implemented as a pair of get/set methods.) Clearly, a direct call to a method means it's used - but that logic alone results in a lot of false positives! For example, a class implementing an interface must define all the relevant interface methods in order to compile successfully. Therefore, explicit and implicit interface method implementations (even if uncalled) do not result in a DF1002 warning. Similarly, a method override may not be directly called within an assembly, but can still be executed and should not trigger a warning. Other kinds of "unused" methods that do not result in a warning include: static constructors, assembly entry-points, and methods passed as parameters (ex: to a delegate for use by an event).

    With all those special cases, you might think nothing would ever be misdiagnosed. :) But there's a particular scenario that leads to many DF1002 warnings in a perfectly correct application: reflection-based access to properties and methods. Granted, reflection is rare at the application level - but at the framework level, it forms the very foundation of data binding as implemented by WPF and Silverlight! Therefore, running DF1002 against a XAML application with data binding can result in warnings for the property getters on all model classes...
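For instance, a model property like the following may be read only by the binding engine, so DF1002 finds no call to its getter anywhere in the assembly (the class and property names here are purely illustrative):

```csharp
// Bound in XAML as {Binding DisplayName}; the getter is invoked by the
// data-binding engine through reflection, so no compiled code in the
// assembly references it directly - a false positive for DF1002
public class Person
{
    public string DisplayName { get; set; }
}
```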

    To avoid that problem, I've considered whether it would make sense to suppress DF1002 for classes that implement INotifyPropertyChanged (which most model classes do), but it seems like that would also mask a bunch of legitimate errors. The same reasoning applies to subclasses of DependencyObject or implementations of DependencyProperty (though the latter might turn out to be a decent heuristic with a bit more work). Another approach might be for the rule to also parse the XAML in an assembly and identify the various forms of data binding within. That seems promising, but goes way beyond the initial scope of DF1002! :)

    Of course, there may be other common patterns which generate false positives - please let me know if you find one and I'll look at whether I can improve things for the next release.

     

    [Click here to download the Delay.FxCop rule assembly, associated .ruleset files, samples, and the complete source code.]

    For directions about running Delay.FxCop on a standalone assembly or integrating it into a project, please refer to the steps in my original post.

     

    Unused code is an unnecessary tax on the development process. It's a distraction when reading, incurs additional costs during coding (ex: when refactoring), and it can mislead others about how an application really works. That's why there's DF1002: Uncalled methods should be removed - to help you easily identify unused methods. Try running it on your favorite .NET application; you might be surprised by what you find! :)


    Use it or lose it! [New Delay.FxCop code analysis rule helps identify unused resources in a .NET assembly]

    • 11 Comments

    My previous post outlined the benefits of automated code analysis and introduced the Delay.FxCop custom code analysis assembly. The initial release of Delay.FxCop included only one rule, DF1000: Check spelling of all string literals, which didn't seem like enough to me, so today's update doubles the number of rules! :) The new rule is DF1001: Resources should be referenced - but before getting into that I'm going to spend a moment more on spell-checking...

     

    What I planned to write for the second code analysis rule was something to check the spelling of .NET string resources (i.e., strings from a RESX file). This seemed like another place misspellings might occur and I'd heard of other custom rules that performed this same task (for example, here's a sample by Jason Kresowaty). However, in the process of doing research, I discovered rule CA1703: Resource strings should be spelled correctly which is part of the default set of rules!

    To make sure it did what I expected, I started a new application, added a misspelled string resource, and ran code analysis. To my surprise, the misspelling was not detected... However, I noticed a different warning that seemed related: CA1824: Mark assemblies with NeutralResourcesLanguageAttribute "Because assembly 'Application.exe' contains a ResX-based resource file, mark it with the NeutralResourcesLanguage attribute, specifying the language of the resources within the assembly." Sure enough, when I un-commented the (project template-provided) NeutralResourcesLanguage line in AssemblyInfo.cs, the desired warning showed up:

    CA1703 : Microsoft.Naming : In resource 'WpfApplication.Properties.Resources.resx', referenced by name
    'SampleResource', correct the spelling of 'mispelling' in string value 'This string has a mispelling.'.

    In my experience, some people suppress CA1824 instead of addressing it. But as we've just discovered, doing so also gives up free spell checking for the assembly's string resources. That seems silly, so I recommend setting NeutralResourcesLanguageAttribute for its helpful side-effects!
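    For reference, opting in is a one-attribute change in AssemblyInfo.cs (adjust the culture name to match your neutral resources):

```csharp
using System.Resources;

// Declares that the assembly's neutral (RESX) resources are US English.
// Besides resolving CA1824, this is what enables CA1703's spell checking.
[assembly: NeutralResourcesLanguage("en-US")]
```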

    Note: For expository purposes, I've included an example in the download: CA1703 : Microsoft.Naming : In resource 'WpfApplication.Properties.Resources.resx', referenced by name 'IncorrectSpelling', correct the spelling of 'mispelling' in string value 'This string has a single mispelling.'.

     

    Once I realized resource spell checking was unnecessary, I decided to focus on a different pet peeve of mine: unused resources in an assembly. In much the same way stale chunks of unused code can be found in most applications, it's pretty common to find resources that aren't referenced and are just taking up valuable space. But while there's a built-in rule to detect certain kinds of uncalled code (CA1811: Avoid uncalled private code), I'm not aware of anything similar for resources... And though it's possible to perform this check manually (by searching for the use of each individual resource), this is the kind of boring, monotonous task that computers are made for! :)

    Therefore, I've created the second Delay.FxCop rule, DF1001: Resources should be referenced, which compares the set of resource references in an assembly with the set of resources that are actually present. Any cases where a resource exists (whether it's a string, stream, or object), but is not referenced in code will result in an instance of the DF1001 warning during code analysis.

    Aside: For directions about how to run the Delay.FxCop rules on a standalone assembly or integrate them into a project, please refer to the steps in my original post.

    As a word of caution, there can be cases where DF1001 reports that a resource isn't referenced from code, but that resource is actually used by an assembly. While I don't think it will miss typical uses from code (either via the automatically-generated Resources class or one of the lower-level ResourceManager methods), the catch is that not all resource references show up in code! For example, the markup for a Silverlight or WPF application is included as a XAML/BAML resource which is loaded at run-time without an explicit reference from the assembly itself. DF1001 will (correctly; sort of) report this resource as unused, so please remember that global code analysis suppressions can be used to squelch false-positives:

    [assembly: SuppressMessage("Usage", "DF1001:ResourcesShouldBeReferenced", MessageId = "mainwindow.baml",
        Scope = "resource", Target = "WpfApplication.g.resources", Justification = "Loaded by WPF for MainWindow.xaml.")]
    Aside: There are other ways to "fool" DF1001, such as by loading a resource from a different assembly or passing a variable to ResourceManager.GetString. But in terms of how things are done 95% of the time, the rule's current implementation should be accurate. Of course, if you find cases where it misreports unused resources, please let me know and I'll look into whether it's possible to improve things in a future release!
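    For example, a lookup through a computed key is invisible to static analysis (the names below are hypothetical):

```csharp
using System.Resources;

class ErrorMessages
{
    static readonly ResourceManager Manager =
        new ResourceManager("MyApp.ErrorMessages", typeof(ErrorMessages).Assembly);

    // Because the resource name is built at run time, DF1001-style analysis
    // can't connect this call to any particular resource - so the resources
    // it reaches would (incorrectly) look unused.
    public static string ForCode(string errorCode)
    {
        return Manager.GetString("Error_" + errorCode);
    }
}
```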

     

    [Click here to download the Delay.FxCop rule assembly, associated .ruleset files, samples, and the complete source code.]

     

    Stale references are an unnecessary annoyance: they bloat an assembly, waste time and money (for example, when localized unnecessarily), confuse new developers, and generally just get in the way. Fortunately, detecting them in an automated fashion is easy with DF1001: Resources should be referenced! After making sure unused resources really are unused, remove them from your project - and enjoy the benefits of a smaller, leaner application!

  • Delay's Blog

    Speling misteaks make an aplikation look sily [New Delay.FxCop code analysis rule finds spelling errors in a .NET assembly's string literals]

    • 1 Comment

    No matter how polished the appearance of an application, web site, or advertisement is, the presence of even a single spelling error can make it look sloppy and unprofessional. The bad news is that spelling errors are incredibly easy to make - either due to mistyping or because one forgot which of the many, conflicting special cases applies in a particular circumstance. The good news is that technology to detect and correct spelling errors exists and is readily available. By making regular use of a spell-checker, you don't have to be a good speller to look like one. Trust me! ;)

    Spell-checking of documents is pretty well covered these days, with all the popular word processors offering automated, interactive assistance. However, spell-checking of code is not quite so far along - even high-end editors like Visual Studio don't tend to offer interactive spell-checking support. Fortunately, it's possible - even easy! - to augment the capabilities of many development tools to integrate spell-checking into the development workflow. There are a few different ways of doing this: one is to incorporate the checking into the editing experience (like this plugin by coworker Mikhail Arkhipov) and another is to do the checking as part of the code analysis workflow (like code analysis rule CA1703: ResourceStringsShouldBeSpelledCorrectly). I'd already been toying with the idea of implementing my own code analysis rules, so I decided to experiment with the latter approach...

    Aside: If you're not familiar with Visual Studio's code analysis feature, I highly recommend the MSDN article Analyzing Managed Code Quality by Using Code Analysis. Although the fully integrated experience is only available on higher-end Visual Studio products, the same exact code analysis functionality is available to everyone with the standalone FxCop tool which is free as part of the Microsoft Windows SDK for Windows 7 and .NET Framework 4. (FxCop has a dedicated download page with handy links, but it directs you to the SDK to do the actual install.)
    Unrelated aside: In the ideal world, all of an application's strings would probably come from a resource file where they can be easily translated to other languages - and therefore string literals wouldn't need spell-checking. However, in the real world, there are often cases where user-visible text ends up in string literals (ex: exception messages) and therefore a rule like this seems to have practical value. If the string resources of your application are already perfectly separated, congratulations! However, if your application doesn't use resources (or uses them incompletely!), please continue reading... :)

     

    As you might expect, it's possible to create custom code analysis rules and easily integrate them into your build environment; a great walk-through can be found on the Code Analysis Team Blog. If you still have questions after reading that, this post by Tatham Oddie is also quite good. And once you have an idea what you're doing, this documentation by Jason Kresowaty is a great resource for technical information.

    Code analysis is a powerful tool and has a lot of potential for improving the development process. But for now, I'm just going to discuss a single rule I created: DF1000: CheckSpellingOfAllStringLiterals. As its name suggests, this rule checks the spelling of all string literals in an assembly. To be clear, there are other rules that check spelling (including some of the default FxCop/Visual Studio ones), but I didn't see any that checked all the literals, so this seemed like an interesting place to start.
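    In other words, any literal that survives into the compiled assembly is checked - so a message like the hypothetical one below gets flagged (assuming "recieve" isn't in your dictionary):

```csharp
using System;

class Validator
{
    public static string MissingValueError(string fieldName)
    {
        // "recieve" is exactly the kind of literal DF1000 catches.
        return String.Format("Did not recieve a value for field '{0}'.", fieldName);
    }
}
```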

    Aside: Programs tend to have a lot of strings and those strings aren't always words (ex: namespace prefixes, regular expressions, etc.). Therefore, this rule will almost certainly report a lot of warnings the first time it's run! Be prepared for that - and be ready to spend some time suppressing warnings that don't matter to you. :)

     

    As I typically do, I've published a pre-compiled binary and complete source code, so you can see exactly how CheckSpellingOfAllStringLiterals works (it's quite simple, really, as it uses the existing introspection and spell-checking APIs). I'm not going to spend a lot of time talking about how this rule is implemented, but I did want to show how to use it so others can experiment with their own projects.

    Important: Everything I show here was done with the Visual Studio 2010/.NET 4 toolset. Past updates to the code analysis infrastructure are such that things may not work with older (or newer) releases.

    To add the Delay.FxCop rules to a project, you'll want to know a little about rule sets - the MSDN article Using Rule Sets to Group Managed Code Analysis Rules is a good place to start. I've provided two .ruleset files in the download: Delay.FxCop.ruleset which contains just the custom rule I've written and AllRules_Delay.FxCop.ruleset which contains my custom rule and everything in the shipping "Microsoft All Rules" ruleset. (Of course, creating and using your own .ruleset is another option!) Incorporating a custom rule set into a Visual Studio project is as easy as: Project menu, ProjectName Properties..., Code Analysis tab, Run this rule set:, Browse..., specify the path to the custom rule set, Build menu, Run Code Analysis on ProjectName.

    Note: For WPF projects, you may also want to uncheck Suppress results from generated code in the "Code Analysis" tab above because the XAML compiler adds GeneratedCodeAttribute to all classes with an associated .xaml file and that automatically suppresses code analysis warnings for those classes. (Silverlight and Windows Phone projects don't set this attribute, so the default "ignore" behavior is fine.)

    Assuming your project contains a string literal that's not in the dictionary, the Error List window should show one or more warnings like this:

    DF1000 : Spelling : The word 'recieve' is not in the dictionary.

    At this point, you have a few options (examples of which can be found in the TestProjects\ConsoleApplication directory of the sample):

    • Fix the misspelling.

      Duh. :)

    • Suppress the instance.

      If it's an isolated use of the word and is correct, then simply right-clicking the warning and choosing Suppress Message(s), In Source will add something like the following attribute to the code which will silence the warning:

      [SuppressMessage("Spelling", "DF1000:CheckSpellingOfAllStringLiterals", MessageId = "leet")]

      While you're at it, feel free to add a Justification message if the reason might not be obvious to someone else.

    • Suppress the entire method.

      If a method contains no user-visible text, but has lots of strings that cause warnings, you can suppress the entire method by omitting the MessageId parameter like so:

      [SuppressMessage("Spelling", "DF1000:CheckSpellingOfAllStringLiterals")]
    • Add the word to the custom dictionary.

      If the "misspelled" word is correct and appears throughout the application, you'll probably want to add it to the project's custom dictionary which will silence all relevant warnings at once. MSDN has a great overview of the custom dictionary format as well as the exact steps to take to add a custom dictionary to a project in the article How to: Customize the Code Analysis Dictionary.
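    For reference, a minimal custom dictionary is a small XML file (set its Build Action to "CodeAnalysisDictionary" in Visual Studio) along these lines:

```xml
<Dictionary>
  <Words>
    <Recognized>
      <!-- Words listed here are treated as correctly spelled everywhere. -->
      <Word>leet</Word>
    </Recognized>
  </Words>
</Dictionary>
```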

     

    Alternatively, if you're a command-line junkie or don't want to modify your Visual Studio project, you can use FxCopCmd directly by running it from a Visual Studio Command Prompt like so:

    C:\T>"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe"
      /file:C:\T\ConsoleApplication\bin\Debug\ConsoleApplication.exe
      /ruleset:=C:\T\Delay.FxCop\Delay.FxCop.ruleset /console
    Microsoft (R) FxCop Command-Line Tool, Version 10.0 (10.0.30319.1) X86
    Copyright (C) Microsoft Corporation, All Rights Reserved.
    
    [...]
    Loaded delay.fxcop.dll...
    Loaded ConsoleApplication.exe...
    Initializing Introspection engine...
    Analyzing...
    Analysis Complete.
    Writing 1 messages...
    
    C:\T\ConsoleApplication\Program.cs(12,1) : warning  : DF1000 : Spelling : The word 'recieve' is not in the dictionary.
    Done:00:00:01.4352025

    Or else you can install the standalone FxCop tool to get the benefits of a graphical user interface without changing anything about your existing workflow!

     

    [Click here to download the Delay.FxCop rule assembly, associated .ruleset files, samples, and the complete source code.]

     

    Spelling is one of those things that's easy to get wrong - and also easy to get right if you apply the proper technology and discipline. I can't hope to make anyone a better speller ('i' before 'e', except after 'c'!), but I can help out a little on the technology front. I plan to add new code analysis rules to Delay.FxCop over time - but for now I hope people put DF1000: CheckSpellingOfAllStringLiterals to good use finding spelling mistakes in their applications!

  • Delay's Blog

    Safe X (ml parsing with XLINQ) [XLinqExtensions helps make XML parsing with .NET's XLINQ a bit safer and easier]

    • 9 Comments

    XLINQ (aka LINQ-to-XML) is a set of classes that make it simple to work with XML by exposing the element tree in a way that's easy to manipulate using standard LINQ queries. So, for example, it's trivial to write code to select specific nodes for reading, create well-formed XML fragments, or transform an entire document. Because of its query-oriented nature, XLINQ makes it easy to ignore parts of a document that aren't relevant: if you don't query for them, they don't show up! Because it's so handy and powerful, I encourage folks who aren't already familiar to find out more.

    Aside: As usual, flexibility comes with a cost and it is often more efficient to read and write XML with the underlying XmlReader and XmlWriter classes because they don't expose the same high-level abstractions. However, I'll suggest that the extra productivity of developing with XLINQ will often outweigh the minor computational cost it incurs.

     

    When I wrote the world's simplest RSS reader as a sample for my post on WebBrowserExtensions, I needed some code to parse the RSS feed for my blog and dashed off the simplest thing possible using XLINQ. Here's a simplified version of that RSS feed for reference:

    <rss version="2.0">
      <channel>
        <title>Delay's Blog</title>
        <item>
          <title>First Post</title>
          <pubDate>Sat, 21 May 2011 13:00:00 GMT</pubDate>
          <description>Post description.</description>
        </item>
        <item>
          <title>Another Post</title>
          <pubDate>Sun, 22 May 2011 14:00:00 GMT</pubDate>
          <description>Another post description.</description>
        </item>
      </channel>
    </rss>

    The code I wrote at the time looked a lot like the following:

    private static void NoChecking(XElement feedRoot)
    {
        var version = feedRoot.Attribute("version").Value;
        var title = feedRoot.Element("channel").Element("title").Value;
        ShowFeed(version, title);
        foreach (var item in feedRoot.Element("channel").Elements("item"))
        {
            title = item.Element("title").Value;
            var publishDate = DateTime.Parse(item.Element("pubDate").Value);
            var description = item.Element("description").Value;
            ShowItem(title, publishDate, description);
        }
    }

    Not surprisingly, running it on the XML above leads to the following output:

    Delay's Blog (RSS 2.0)
      First Post
        Date: 5/21/2011
        Characters: 17
      Another Post
        Date: 5/22/2011
        Characters: 25

     

    That code is simple, easy to read, and obvious in its intent. However (as is typical for sample code tangential to the topic of interest), there's no error checking or handling of malformed data. If anything within the feed changes, it's quite likely the code I show above will throw an exception (for example: because the result of the Element method is null when the named element can't be found). And although I don't expect changes to the format of this RSS feed, I'd be wary of shipping code like that because it's so fragile.

    Aside: Safely parsing external data is a challenging task; many exploits take advantage of parsing errors to corrupt a process's state. In the discussion here, I'm focusing mainly on "safety" in the sense of "resiliency": the ability of code to continue to work (or at least not throw an exception) despite changes to the format of the data it's dealing with. Naturally, more resilient parsing code is likely to be less vulnerable to hacking, too - but I'm not specifically concerned with making code hack-proof here.

     

    Adding the necessary error-checking to get the above snippet into shape for real-world use isn't particularly hard - but it does add a lot more code. Consequently, readability suffers; although the following method performs exactly the same task, its implementation is decidedly harder to follow than the original:

    private static void Checking(XElement feedRoot)
    {
        var version = "";
        var versionAttribute = feedRoot.Attribute("version");
        if (null != versionAttribute)
        {
            version = versionAttribute.Value;
        }
        var channelElement = feedRoot.Element("channel");
        if (null != channelElement)
        {
            var title = "";
            var titleElement = channelElement.Element("title");
            if (null != titleElement)
            {
                title = titleElement.Value;
            }
            ShowFeed(version, title);
            foreach (var item in channelElement.Elements("item"))
            {
                title = "";
                titleElement = item.Element("title");
                if (null != titleElement)
                {
                    title = titleElement.Value;
                }
                var publishDate = DateTime.MinValue;
                var pubDateElement = item.Element("pubDate");
                if (null != pubDateElement)
                {
                    if (!DateTime.TryParse(pubDateElement.Value, out publishDate))
                    {
                        publishDate = DateTime.MinValue;
                    }
                }
                var description = "";
                var descriptionElement = item.Element("description");
                if (null != descriptionElement)
                {
                    description = descriptionElement.Value;
                }
                ShowItem(title, publishDate, description);
            }
        }
    }

     

    It would be nice if we could somehow combine the two approaches to arrive at something that reads easily while also handling malformed content gracefully... And that's what the XLinqExtensions extension methods are all about!

    Using the naming convention SafeGet* where "*" can be Element, Attribute, StringValue, or DateTimeValue, these methods are simple wrappers that avoid problems by always returning a valid object - even if they have to create an empty one themselves. In this manner, calls that are expected to return an XElement always do; calls that are expected to return a DateTime always do (with a user-provided fallback value for scenarios where the underlying string doesn't parse successfully). To be clear, there's no magic here - all the code is very simple - but by pushing error handling into the accessor methods, the overall experience feels much nicer.

    To see what I mean, here's what the same code looks like after it has been changed to use XLinqExtensions - note how similar it looks to the original implementation that used the simple "write it the obvious way" approach:

    private static void Safe(XElement feedRoot)
    {
        var version = feedRoot.SafeGetAttribute("version").SafeGetStringValue();
        var title = feedRoot.SafeGetElement("channel").SafeGetElement("title").SafeGetStringValue();
        ShowFeed(version, title);
        foreach (var item in feedRoot.SafeGetElement("channel").Elements("item"))
        {
            title = item.SafeGetElement("title").SafeGetStringValue();
            var publishDate = item.SafeGetElement("pubDate").SafeGetDateTimeValue(DateTime.MinValue);
            var description = item.SafeGetElement("description").SafeGetStringValue();
            ShowItem(title, publishDate, description);
        }
    }

    Not only is the XLinqExtensions version almost as easy to read as the simple approach, it has all the resiliency benefits of the complex one! What's not to like?? :)

     

    [Click here to download the XLinqExtensions sample application containing everything shown here.]

     

    I've found the XLinqExtensions approach helpful in my own projects because it enables me to parse XML with ease and peace of mind. The example I've provided here only scratches the surface of what's possible (ex: SafeGetIntegerValue, SafeGetUriValue, etc.), and is intended to set the stage for others to adopt a more robust approach to XML parsing. So if you find yourself parsing XML, please consider something similar!

     

    PS - The complete set of XLinqExtensions methods I use in the sample is provided below. Implementation of additional methods to suit custom scenarios is left as an exercise to the reader. :)

    /// <summary>
    /// Class that exposes a variety of extension methods to make parsing XML with XLINQ easier and safer.
    /// </summary>
    static class XLinqExtensions
    {
        /// <summary>
        /// Gets the named XElement child of the specified XElement.
        /// </summary>
        /// <param name="element">Specified element.</param>
        /// <param name="name">Name of the child.</param>
        /// <returns>XElement instance.</returns>
        public static XElement SafeGetElement(this XElement element, XName name)
        {
            Debug.Assert(null != element);
            Debug.Assert(null != name);
            return element.Element(name) ?? new XElement(name, "");
        }
    
        /// <summary>
        /// Gets the named XAttribute of the specified XElement.
        /// </summary>
        /// <param name="element">Specified element.</param>
        /// <param name="name">Name of the attribute.</param>
        /// <returns>XAttribute instance.</returns>
        public static XAttribute SafeGetAttribute(this XElement element, XName name)
        {
            Debug.Assert(null != element);
            Debug.Assert(null != name);
            return element.Attribute(name) ?? new XAttribute(name, "");
        }
    
        /// <summary>
        /// Gets the string value of the specified XElement.
        /// </summary>
        /// <param name="element">Specified element.</param>
        /// <returns>String value.</returns>
        public static string SafeGetStringValue(this XElement element)
        {
            Debug.Assert(null != element);
            return element.Value;
        }
    
        /// <summary>
        /// Gets the string value of the specified XAttribute.
        /// </summary>
        /// <param name="attribute">Specified attribute.</param>
        /// <returns>String value.</returns>
        public static string SafeGetStringValue(this XAttribute attribute)
        {
            Debug.Assert(null != attribute);
            return attribute.Value;
        }
    
        /// <summary>
        /// Gets the DateTime value of the specified XElement, falling back to a provided value in case of failure.
        /// </summary>
        /// <param name="element">Specified element.</param>
        /// <param name="fallback">Fallback value.</param>
        /// <returns>DateTime value.</returns>
        public static DateTime SafeGetDateTimeValue(this XElement element, DateTime fallback)
        {
            Debug.Assert(null != element);
            DateTime value;
            if (!DateTime.TryParse(element.Value, out value))
            {
                value = fallback;
            }
            return value;
        }
    }
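    As one example of those additional methods, my own sketch of SafeGetIntegerValue (following the same pattern as SafeGetDateTimeValue above; it isn't part of the download) might look like:

```csharp
using System.Diagnostics;
using System.Xml.Linq;

static class XLinqIntegerExtensions
{
    /// <summary>
    /// Gets the integer value of the specified XElement, falling back to a provided value in case of failure.
    /// </summary>
    /// <param name="element">Specified element.</param>
    /// <param name="fallback">Fallback value.</param>
    /// <returns>Integer value.</returns>
    public static int SafeGetIntegerValue(this XElement element, int fallback)
    {
        Debug.Assert(null != element);
        int value;
        if (!int.TryParse(element.Value, out value))
        {
            value = fallback;
        }
        return value;
    }
}
```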
  • Delay's Blog

    "Sort" of a follow-up post [IListExtensions class enables easy sorting of .NET list types; today's updates make some scenarios faster or more convenient]

    Recently, I wrote a post about the IListExtensions collection of extension methods I created to make it easy to maintain a sorted list based on any IList(T) implementation without needing to create a special subclass. In that post, I explained why I implemented IListExtensions the way I did and outlined some of the benefits for scenarios like using ObservableCollection(T) for dynamic updates on Silverlight, WPF, and Windows Phone where the underlying class doesn't intrinsically support sorting. A couple of readers followed up with some good questions and clarifications which I'd encourage having a look for additional context.

     

    During the time I've been using IListExtensions in a project of my own, I have noticed two patterns that prompted today's update:

    1. It's easy to get performant set-like behavior from a sorted list. Recall that a set is simply a collection in which a particular item appears either 0 or 1 times (i.e., there are no duplicates in the collection). While this invariant can be easily maintained with any sorted list by performing a remove before each add (recall that ICollection(T).Remove (and therefore IListExtensions.RemoveSorted) doesn't throw if an element is not present), that approach means there are two searches of the list every time an item is added: one for the call to RemoveSorted and another for the call to AddSorted. While it's possible to be a bit more clever and avoid the extra search sometimes, the API doesn't let you "remember" the right index between calls to *Sorted methods, so you can't get rid of the redundant search every time.

      Therefore, I created the AddOrReplaceSorted method which has the same signature as AddSorted (and therefore ICollection(T).Add) and implements the set-like behavior of ensuring there is at most one instance of a particular item (i.e., the IComparable(T) search key) present in the collection at any time. Because this one method does everything, it only ever needs to perform a single search of the list and can help save a few CPU cycles in relevant scenarios.

    2. It's convenient to be able to call RemoveSorted/IndexOfSorted/ContainsSorted with an instance of the search key. Recall from the original post that IListExtensions requires items in the list to implement the IComparable(T) interface in order to define their sort order. This is fine most of the time, but can require a bit of extra overhead in situations where the items' sort order depends on only some (or commonly just one) of their properties.

      For example, note that the sort order of the Person class below depends only on the Name property:

      class Person : IComparable<Person>
      {
          public string Name { get; set; }
          public string Details { get; set; }
      
          public int CompareTo(Person other)
          {
              return Name.CompareTo(other.Name);
          }
      }

      In this case, using ContainsSorted on a List(Person) to search for a particular name would require the creation of a fake Person instance to pass as the parameter to ContainsSorted in order to match the type of the underlying collection. This isn't usually a big deal (though it can be if the class doesn't have a public constructor!), but it complicates the code and seems like it ought to be unnecessary.

      Therefore, I've added new versions of RemoveSorted/IndexOfSorted/ContainsSorted that take a key parameter and a keySelector Func(T, K). The selector is passed an item from the list and needs to return that item's sort key (the thing that its IComparable(T).CompareTo operates on). Not surprisingly, the underlying type of the keys must implement IComparable(T); keys are then compared directly (instead of indirectly via the containing items). In this way, it's possible to look up (or remove) a Person in a List(Person) by passing only the person's name and not having to bother with the temporary Person object at all!
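    Neither method below is the actual IListExtensions code (download that for the real thing), but here's a rough sketch of how the two new ideas - single-search add-or-replace and key-based lookup - can be implemented with a plain binary search:

```csharp
using System;
using System.Collections.Generic;

static class SortedSketch
{
    // Single binary search: replace a matching item, or insert at the
    // position the same search just found - no redundant second search.
    public static void AddOrReplaceSorted<T>(IList<T> list, T item)
        where T : IComparable<T>
    {
        int lo = 0, hi = list.Count;
        while (lo < hi)
        {
            int mid = lo + ((hi - lo) / 2);
            int comparison = list[mid].CompareTo(item);
            if (0 == comparison) { list[mid] = item; return; } // replace duplicate
            if (comparison < 0) { lo = mid + 1; } else { hi = mid; }
        }
        list.Insert(lo, item); // not present: insert, preserving sort order
    }

    // Key-based lookup: compare extracted keys so callers can pass just the
    // key (ex: a name string) instead of building a throwaway instance.
    public static bool ContainsSorted<T, K>(IList<T> list, K key, Func<T, K> keySelector)
        where K : IComparable<K>
    {
        int lo = 0, hi = list.Count;
        while (lo < hi)
        {
            int mid = lo + ((hi - lo) / 2);
            int comparison = keySelector(list[mid]).CompareTo(key);
            if (0 == comparison) { return true; }
            if (comparison < 0) { lo = mid + 1; } else { hi = mid; }
        }
        return false;
    }
}
```

    Usage of the key-based form then looks like SortedSketch.ContainsSorted(people, "Alice", p => p.Name) - no temporary Person required.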

     

    In addition to the code changes discussed above, I've updated the automated test project that comes with IListExtensions to cover all the new scenarios. Conveniently, the new implementation of AddOrReplaceSorted is nearly identical to that of AddSorted and can be easily validated with SortedSet(T). Similarly, the three new key-based methods have all been implemented as variations of the pre-existing methods and those have been modified to call directly into the new methods. Aside from a bit of clear, deliberate redundancy for AddOrReplaceSorted, there's hardly any more code in this release than there was in the previous one - yet refactoring the implementation slightly enabled some handy new scenarios!

     

    [Click here to download the IListExtensions implementation and its complete unit test project.]

     

    Proper sorting libraries offer a wide variety of ways to sort, compare, and work with sorted lists. IListExtensions is not a proper sorting library - nor does it aspire to be one. :) Rather, it's a small collection of handy methods that make it easy to incorporate sorting into some common Silverlight, WPF, and Windows Phone scenarios. Sometimes you're forced to use a collection (like ObservableCollection(T)) that doesn't do everything you want - but if all you're missing is basic sorting functionality, then IListExtensions just might be the answer!
