Jaime Rodriguez
On Windows Store apps, Windows Phone, HTML and XAML

  • Jaime Rodriguez

    Catching up on all the blog posts and announcements from the holiday break


    First of all, best wishes for 2009.
    I am a bit late with the good wishes, and even later with posts that happened during the holiday break, but I still wanted to make sure you did not miss these:

    • Adam Kinney released the XAML Guidelines Part 2 video; this part is an interview with Unni, a PM on the Blend team. In the interview, Unni first opened Blend inside Blend (that is how they design it) and then walked in detail through how Blend is organized and hinted at how their developers incorporate assets from their designers.
    • The WPF team released Photosuru, a rich client photo viewing experience.
      Photosuru started as an internal showcase and test harness for 3.5 SP1 features (e.g. bitmap effects, perf improvements, client profile deployment, etc.). Along the way, as Photosuru grew and the user experience improved, it turned into a really good and comprehensive WPF sample (covering features like layout, styling, skinning, etc.).
      The application was released with source code, so I highly recommend that anyone learning WPF download it and check it out. Adam has a video walkthrough of the app w/ Nic Armstrong.
    • Windows Live Mesh won the “Crunchies Best Technology Innovation/Achievement” award. Congratulations to the Mesh team!! If you are not a Mesh user yet, I hope the fact that this is a ‘community-driven’ recognition encourages you to try it out.
    • Win 7 beta1 was released last week. This is by far the best OS beta1 I have ever run; I am running it for my day job on my main computer (mostly because I feel it already runs better than Vista) and it has a few new features that I find handy (like the new snipping tool).
      If you are a Win7 developer, check out the videos that our colleague Yochay is creating. Keep an eye on his blog, since he has a long list of developer videos he will be releasing. Windows 7 has developer features that can really boost end-user productivity (like jump lists, the taskbar, etc.), so it is a great opportunity for ISVs to differentiate their app with minor tweaks.
    • Karl announced that he and I will be doing an all-day training on Model-View-ViewModel at the end of the month in Cupertino, CA. If you are in the area, please stop by. We have been doing a lot of MVVM thinking lately and hope to share good stuff. Also, if you are an MVVM practitioner or apprentice, please let us know what you want us to cover. We already have an agenda and will try to get it posted early enough for feedback.

    That gets me all caught up.. Apologies if these are all dupes..

  • Jaime Rodriguez

    XAML practices series, part1


    If you ask any of the XAML practitioners about best practices for organizing a project, its resources, etc., they will all give you a single answer: “it depends; every project is different.” If you then ask them for details, you will get a slightly different answer from each expert.

    In a brief attempt to aggregate the recurrent ‘good practices’, I recently interviewed a few folks about their preferences.
    The first interview was with IdentityMine's Jonathan Russ, Nathan Dunlap, and Jared Potter. These folks are some of the most experienced XAMLites I know; Jonathan & Nathan date back to the early Avalon days.
    Check out the video using Adam's play-by-play timeline by clicking on the image or the 3 wise guys..  
    [I will try to figure how to embed videos in the blog by the next part in this series]

    Huge thanks to Jonathan, Nathan, Jared and the IdentityMine team for taking the time for the interview and to Adam for doing his magic with the video.

  • Jaime Rodriguez

    Forwarding the DataGrid’s DataContext to its columns..


    DataContext is a very handy inherited property in any WPF application.
    Most of the time, I set DataContext near the root of the [logical] tree and let the inherited DataContext do its magic to bind the rest of the scene.

    I recently tried to bind a DataGridColumn to its inherited DataContext (via its datagrid container) and got a very surprising answer on the output trace window:

    “System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element…”

    What is happening here?
    The Columns collection is just a property on the DataGrid; this collection is not in the logical (or visual) tree, therefore DataContext is not inherited, which leaves nothing to bind to.

    I needed the functionality, so I had to create a workaround. Without much thought, I decided to:

    1. Listen for DataContextChanged in the DataGrid
    2. When DataContext changes,  forward the new value to the DataGridColumns in the datagrid.
    3. Bind properties on the DataGridColumn to this ‘forwarded’ DataContext  ( as I originally intended)

    To get it done, I did not inherit from DataGrid and create a new class. Instead, I used the richness of WPF’s property system to pull a one-two punch:

    1. Override DataGrid’s DataContext metadata and listen for changes in it…
    2. Add a FrameworkElement.DataContextProperty to DataGridColumn …

    Code looks like this:


    FrameworkElement.DataContextProperty.OverrideMetadata(
        typeof(DataGrid),
        new FrameworkPropertyMetadata(
            null,
            FrameworkPropertyMetadataOptions.Inherits,
            new PropertyChangedCallback(OnDataContextChanged)));

    The OnDataContextChanged callback simply forwards the DataContext from DataGrid to its columns:

    public static void OnDataContextChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        DataGrid grid = d as DataGrid;
        if (grid != null)
            foreach (DataGridColumn col in grid.Columns)
                col.SetValue(FrameworkElement.DataContextProperty, e.NewValue);
    }

    That is it. Now we are ready to databind to DataGridColumns.

    <dg:DataGridTextColumn Binding="{Binding A}"
                           Visibility="{Binding ElementName=ShowA, Path=IsChecked,
                                        Converter={StaticResource BoolToVisConverter}}" />


    You can download source code for a small sample from here.

    The project has 3 checkboxes that are databound to a viewmodel. The DataGrid’s column Visibility is databound to this same viewmodel: if a checkbox is checked, the respective column is visible; if unchecked, it is collapsed.

    A few more thoughts on DataGridColumn not being in the tree ..

    1. Binding to other UIElements via ElementName will not work because there is no tree.  
    2. Binding to a  StaticResource works fine..
    3. Binding to x:Static  will work fine too.
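
    As a quick illustration of points 2 and 3, bindings like these resolve fine on a column even without the forwarding trick; the resource key below is hypothetical, and "dg" is the same toolkit namespace prefix used in the snippet above:

    ```xml
    <!-- Sketch only: "AColumnHeader" is a hypothetical resource key.
         StaticResource and x:Static do not need a governing tree element. -->
    <dg:DataGridTextColumn Header="{StaticResource AColumnHeader}"
                           MinWidth="{x:Static SystemParameters.IconWidth}" />
    ```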

    Happy datagrid coding..  Again, there is source here.

  • Jaime Rodriguez

    Microsoft Client Continuum in action: The Silverlight toolkit charts, running in WPF


    The Silverlight toolkit CTP released at PDC includes some new charting functionality  (Column, Bar, Pie, Line, and Scatter).   
    The long-term plan is for all of these controls to be ported to WPF; but inspired by Rudi’s work on porting the themes, I peeked into the task to see if the Microsoft continuum would hold for the controls too.

    The results were darn good: I had the charts project compiled in WPF in ~20 mins, and after that I only had to make a few run-time tweaks to get the project running (look for the #WPF pre-processor symbol in the code; there is only a handful of them).

    Since the charting library is early "preview" quality, I will probably not do a full port or any testing, but in case someone wants to carry it further, the source code is here..

    Screenshot follows…  These are the same charts as in the SL Toolkit sample but running in a WPF app (named Window1 for Rob)..


  • Jaime Rodriguez

    Eight WPF themes released…


    Today, Rudi Grobler released a “WPF Themes Pack” on codeplex.
    The project includes eight XAML themes for WPF applications…

    Read his announcement here.   WPF Themes Codeplex project is here.

    I like his ‘Expression Dark’ theme.. It feels safe (and familiar) and I like the DavesGlossy theme too (screenshots below).

    Since some of the themes also ship with Silverlight, this could be an easy way to create a consistent UI for an application that has a web client (Silverlight) and a desktop client (WPF).

    Thanks Rudi!!

    [Screenshots: DavesGlossyControls and ExpressionDark themes.]

  • Jaime Rodriguez

    NY Times Reader Free Edition..


    One of my favorite WPF apps is the NY Times Reader, but a while ago they started charging a subscription fee and (since I am a cheapskate, and a very seldom commuter) I stopped using it..

    Now they are back with a Free Edition. It includes Top Stories, select articles from the Sunday Magazine, and the weekly crossword puzzle; this is plenty for my 35-min once-a-week commute. It will be good to keep up w/ the news to see who wins the US Presidential election (just kidding, I am not that out of touch, with the news at least)..

    Install Times Reader Free Edition from here, and if you commute more often than I do, feel free to subscribe to full edition..


  • Jaime Rodriguez

    WPF DataGrid RTMs.. and Ribbon Preview..


    Today, on codeplex we released updates to the WPF Toolkit. This release includes:

    • The RTM (V1) versions of the DataGrid, DatePicker, and Calendar controls.
    • A preview of VisualStateManager (VSM).

    We also released the preview version of the Ribbon. Install instructions here.

    PDC has been taking most of my time lately so I don’t [yet] have a great tutorial or a geek-out post :(…  I will come back to that..

  • Jaime Rodriguez

    Announcement: WPF Pixel Shader Effects Library on codeplex..


    We just published a codeplex project with source for > 25 Pixel Shader effects and ~35 Transition effects.. 

    This video demonstrating the effects and transitions is a must-watch; it is much better than the descriptions below [but for anyone with less bandwidth, I still tried]..

    • Effects: BandedSwirl, Bloom, BrightExtract, ColorKeyAlpha, ColorTone, ContrastAdjust, DirectionalBlur, Embossed, Gloom, GrowablePoissonDiskEffect, InvertColor, LightStreak, Magnify, Monochrome, Pinch, Pixelate, Ripple, Sharpen, SmoothMagnify, Swirl, Tone, Toon, and ZoomBlur…

      Here are samples of the effects in action.

    [Screenshots: original content (an image and vector art) shown with no effect and with Swirl, Embossed, InvertColor, and Pixelate applied, captured via RenderTargetBitmap in the test app.]

    • Transition Effects:
      BandedSwirl, Blinds, Blood, CircleReveal, CircleStretch, CircularBlur, CloudReveal, Cloudy, Crumble, Dissolve, DropFade, Fade, LeastBright, LineReveal, MostBright, PixelateIn, PixelateOut, Pixelate, RadialBlur, RadialWiggle, RandomCircleReveal, Ripple, Rotate, Saturate, Shrink, SlideIn, SmoothSwirl, Swirl, Water, Wave..

      To see these in action, you really should check out the demo video on channel9 or go ahead and get the source from codeplex
      I promise it will be fun... [in a geeky kinda way].
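
    Applying one of these effects in markup follows the standard WPF Effect pattern. Here is a minimal sketch; the xmlns mapping and the SwirlEffect class name are my assumptions based on the effect list above, so check the codeplex source for the actual names:

    ```xml
    <!-- Namespace/assembly names are assumptions; verify against the codeplex source. -->
    <Image Source="photo.jpg"
           xmlns:fx="clr-namespace:ShaderEffectLibrary;assembly=ShaderEffectLibrary">
      <Image.Effect>
        <!-- Any effect from the list above, e.g. a hypothetical SwirlEffect -->
        <fx:SwirlEffect />
      </Image.Effect>
    </Image>
    ```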

    The scoop on the library.  
    Adam recorded a video with David Teitelbaum introducing the library and sharing credit with Troy Jefferson, the intern who packaged the effects. Thanks Troy!!

    We are hoping others contribute; there are already plenty of other WPF effects out there..

    A few resources to get you going with pixel shader effects (for WPF):

    Have fun! Please share feedback via codeplex, and if you like the library, blog it so others can find it. IMHO the transitions are pretty neat!

  • Jaime Rodriguez

    Great WPF applications: Lawson’s Smart Office


    Earlier in the year, you might have seen screen shots of Lawson’s Smart Office application.  Now, thanks to Adam Kinney and Matt Allbee (from Lawson), you can actually see it live in this 9 min interview/walk through!
    I recommend the high quality video for the interactivity, but for those not patient enough, I have sprinkled lots of screen shots below with a brief (not all-inclusive) summary around the usage of WPF.

    Smart Office is a front-end to Lawson's suite of applications, which includes Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Enterprise Performance Management (EPM), and Customer Relationship Management (CRM).
    The best description I have heard for Smart Office is Matt calling it an "information workspace". A typical user can spend all day inside the app; as such, it is a full-screen application that simulates a shell (they call it a "Canvas"). It has everything inside the app: window management, quick task switching, a sidebar, drag-drop, shell and Office integration, and of course lots of screens or apps to interact with the Lawson back-end. Smart Office aims to make an organization's people more effective, its processes more efficient, and the end-users happier. Smart Office focuses on the user's needs, on simplicity, and on meaningful collaboration to improve user and group productivity.

    Here is how they 'get it done':

    An impeccable, comprehensive and incredibly effective user interface.

    Smart Office has it all:
    3D for navigation and for charting.

    Floating, translucent windows that blend well and allow users to take quick actions without concealing the rest of the user interface.

    Video for user tutorials and walk-throughs.

    Flow Documents are used for help files, to create an adaptive, high-quality reading experience.

    Uses VisualBrush(es) to implement previews in their taskbar (similar to Windows Vista's Alt-Tab). It reuses this technique for data entry, to preview forms before you open them.


    Figure 1. You can see the video player (bottom left) showing a recording of the application with a different theme.  Notice the rich windowing behavior and the translucency on the widgets and the widget library itself.  I could not fit all features in a window, click these links for missing features: 3D navigation, XPS help files, Visual Brushes in taskbar, detailed 3d chart.


    Personalization Nirvana!!

    Smart Office goes far beyond basic application skinning (where you can select the theme/color for your app).

    Smart Office allows you to create custom styles to effectively visualize your data; end-users can create client-side presentation rules without writing any code!

    Rich windowing behavior; 100% customizable and adaptive.

    Most of the screens use “smart, dynamic layouts”. The user determines the size and position of a screen, which of course the application automatically remembers. Beyond positioning and sizing, the window dynamically scales its content to take advantage of the available real estate, and if you do not like it, no problem: you can manually control the scale of each screen.

    Figure 2. The pink field is a custom ‘trigger’ created via client-side wizard.
    Notice the windows all have a zoom icon to apply a scale to the window’s content. If content scales to larger than the available space, scrollbars appear. More screenshots: Styling wizard, Zoom/Scale, Windowing, Theming.

    Office and Shell integration as a productivity booster
    The application integrates with the OS allowing operations like drag & drop.

    It also integrates with Office, allowing users to send data from Smart Office to Outlook, Word, and Groove.

    Excel integration allows for seamlessly exporting and editing data within Excel and immediately synchronizing it with the application; edits in Excel run through Lawson’s security and validation layer before being stored.
    Figure 3. Excel integration. Imagine you need to edit 10 rows of data, maybe apply a formula; these tasks are easier to do in Excel.
    To see Outlook integration, click here.

    Overall, the application is brilliant. Huge kudos to the teams involved in creating this amazing productivity suite.  
    I am looking forward to other companies (including Microsoft) creating more applications like this one..

    Geeking it out (with non visual details):
    The application is a great S+S showcase. The client is a full-trust ClickOnce deployment with auto-update functionality.
    The back-end is a Java Service Layer talking to all kinds of code (.NET, Java, main-frame, you name it).
    The Office Add-ins are installed on demand or based on user preferences.
    The application is huge! Over 8000 screens and growing. This is of course metadata-driven, but as you can tell from the stunning visuals, they do a great job of leveraging styles, data templates, and control templates to create a great UX.
    Will try to get the technical folks next time they are in Redmond to share all the lower-level details!

    More write-ups and showcases coming
    There are lots of other great WPF applications out there, but little time to record each one… I am hoping to showcase at least one video per month. If you want to be on the list for showcases, drop me an email; we always love to see how others use the technology.

    ttfn. Huge thanks to Matt for stopping by to record this and to Adam for doing the interview!

  • Jaime Rodriguez

    Inside the PDC2008 Pre-conference sessions..


    Last week I recorded a PDC CountDown video about the Pre-conference seminars at PDC2008. 

    Going into the recording, I knew I would have to keep it very high level: the format for the whole recording is < 10 mins, and there are important PDC announcements first, so I was counting on 8 minutes, and it takes me 3 to introduce each speaker (these folks have too many accomplishments).

    Now that I have no time constraints, here is a detailed introduction to the PDC pre-cons.
    The top part of this write-up covers the high-level reasons why you should attend the pre-cons.
    Underneath that section, you can find the "inside the pre-con" planning, goals, processes, and raw session details as I see them (not as the polished abstracts describe them).

    What is a pre-con?
    An all-day training the day before PDC.

    What should you expect if you attend a pre-con?

    • Guidance! Actionable advice & lessons learned from industry & technology experts.
      Pre-cons are only a day, so when I thought about the sessions, I made sure the topics lent themselves to the presenters sharing their lessons learned and their advice, both on best practices and on avoiding worst practices. The motto for the pre-con from the start has been "Spend a day at the pre-con, learn something you can apply to your job as soon as you walk out of the pre-con!"
    • Great/Relevant content:  
      We selected the topics very carefully to align with either PDC or the software industry. Where we have PDC sessions on the topic (like parallelism), the pre-con will greatly complement the PDC sessions; where we don't (like Agile), the topic is very relevant to the industry. Please do make sure you scroll to the 'inside' section to get the details on each session.
    • Great presenters:  
      The line-up of presenters is pretty incredible. This was by far the highest priority for us; we wanted people who "do and teach" or "walk the walk", and I must say we got it. Here are some silly statistics to give you a measure of the caliber:
      • 80% of the sessions have book authors as presenters. If I go by the numbers and add up the books, I average ~two books per person (and these books are top sellers!)
      • 90% of the sessions have speakers that are regulars at big conferences (like PDC, TechEd, etc.). The only session that does not have 'repeat speakers' is "advanced debugging", and the reason for that is that we purposely went deep; but I vouch for the speakers (have you read their book?). If you can't take my endorsement, how about Mark Russinovich's endorsement?
      • The Microsoft speakers are either dev leads, architects, or Distinguished Engineers. These are the people who "earned their stripes" and are now driving strategy; these are the kind of people I would want to get advice from.
    • Huge value
      The cost is $400. Most people pay that much on airfare when they fly to a $2500 training (by the same speakers). You will argue: this is one day, the $2500 is 3 or 4 days. My thoughts there: if you are coming to PDC, you are likely an accomplished software professional. Do you really need HOLs? All four-day trainings have them. I think a one-day session is great because it forces the presenters to prioritize, to compress. Given the caliber of speakers we have, they are not going to cut you short; they are going to figure out creative ways to deliver the message and insights that matter most. This is double value for you, because the price is great and we make the most of your time.

    More info?
    Official abstracts and bios for the speakers are at the PDC pre-con website.



    Now the inside story to the Pre-cons..

    The goals:
    There were three key tenets at the core of pre-con planning:

    1. I wanted it to be deep content, centered around guidance. The topics had to be interesting and contentious; when contentious was not a match (e.g. WPF or Silverlight), we went for speakers with huge experience who would be analytical and critical of the technology: folks with insightful experience in the technology presented, its predecessor and competing technologies, and the challenges the technology addresses.
    2. Steve Cellini, the uber PDC owner, wanted top quality. In the very first meeting, he said "Quality is the goal! Get only the top speakers and make sure the delivery is exceptional", and when I explained to him that I needed to balance speakers due to budget, he did not even blink: "blow the budget if you need to!" I think you will see Steve's support both in the line-up of incredibly talented speakers and in the number of speakers; a lot of sessions have two or three presenters. I firmly believe this will make the sessions more entertaining and drive the content/guidance to a more insightful level. The few sessions that have a single speaker are that way because we felt the delivery was better with that single speaker!
    3. Complementary to PDC. This was Mike Swanson's role; we wanted no overlaps and we wanted synergies with PDC sessions.
      Avoiding overlap was not hard; pre-cons are focused on shipping technology and PDC sessions focus on futures. That said, on any topic covered in both the pre-con and PDC, a pre-con should frame a technology, its challenges, and the current best practices, so that when you attend a PDC session on the future of the technology, you can get the most out of that topic and put the future solution into context with the challenges and today's solution.

    Selecting the sessions
    I started with ~50 topics. I was all over the map: XNA, HPC, WPF, WF, SharePoint, Live, F#, LINQ, etc. Then I mapped them to PDC tracks (sorry, afaik I can't share these, but I can say there were four); for the pre-con we added an extra track called the "industry/fun" track, which aimed at doing something different from what you expect at PDC.
    We quickly narrowed it from 50 to 25 by applying rigorous quality criteria:

    • Do we have enough content for a whole day?  Is a day enough to grasp the topic and actually learn?
    • Is this topic something commonly taught elsewhere? If so, is it relevant to PDC content? If not relevant to PDC and taught elsewhere with the same quality of presenter, we skipped it.

    When I was down to 20-25 sessions, we shared them with other teams at Microsoft and with an "advisory board"; the advisory board is mostly Regional Directors, all of them trainers, influencers, and community leaders. It was great to hear what they thought the community needed; it was quite different from what I was expecting, but they provided great info to back up their arguments, so we listened!

    After the prioritization, we went after great presenters for each topic. If we did not find an absolute rock star (inside or outside Microsoft), we cut the topic.

    In the end, there were still a few painful cuts: ASPX/MVC stuff, XNA, new MFC/Win32 stuff, and a few others. There was an important goal to achieve: variety! Some people inside Microsoft didn't get that it did not matter whether we had the 10 most popular topics; we needed to have something for everyone! I could agree that ASPX would have had more attendees than, say, "advanced debugging", but I needed to offer something to the C++ developers, and the web developers already had Silverlight; you get the drill. "Something for everyone" was maybe a secondary tenet.

    The sessions themselves and the dynamics: Why the session, why the speaker(s)?
    I aimed to add context to each session, without repeating the abstracts and speaker bios.   Please do read those before reading the ramblings below; my thoughts below are meant to add context on why the speaker or topic, but I did not repeat the outline or objectives for each session.

    • WPF (by Charles Petzold). Despite the hype for web apps, there is still a lot of software that can't run in the browser (due to security, performance, off-line requirements, preserving an investment, and many other reasons). WPF is Microsoft's latest and richest UI framework for writing desktop applications; XAML and its declarative model are at the core of any future presentation stacks at Microsoft. It is easy to see why we needed to go deep and explain the fundamentals of WPF, and who better to present it than Charles Petzold?
      Charles has written half a shelf of Windows programming books. Two WPF books already! He is funny, he is a brilliant writer and speaker, he is very thorough, and he tells it like he sees it. Early in the cycle, I told Charles he could have unlimited access to the WPF architects (most of them were eager to work with Charles), but then Charles said in an email when discussing his abstract "if I could re-write my WPF book, I would start with ...", and there I knew it was time to leave him alone; he has clearly thought it through (end-to-end) many times.

    • Silverlight (by Jeff Prosise). Silverlight is a subset of WPF that brings richness and cross-platform .NET programmability to the web; clearly a disruptive technology. No one has more experience training web developers than Jeff Prosise. He is very critical and tends to take a 'best practices' angle in his talks, so I thought he would be the most appropriate to introduce people to Silverlight. We could have had an MS person deliver this session, but it would have been too "blue". Jeff will bring the perfect balance of experience and insights for people to really understand Silverlight and know when and how to apply it.

    • Data (by Mike Pizzo & Jose Blakeley). This session topic was selected before I found the speakers. Given how rich our data programmability APIs are, there are lots of people asking for advice on when and how to use each technology. There are also people wondering if the one they are using will be deprecated, etc. My guess is that I have read > 3 books, dozens of whitepapers, and even more blog posts on our data APIs, and I still could not authoritatively explain when to use each technology (and why), or what best prepares me for the future. If you want the answers to those questions, you should come to this session and ask Mike & Jose! These two guys have been around since OLE DB. They are both incredibly smart and know data inside out; I doubt there is anyone better to answer those questions and set you straight on the best fit for you.

    • Debugging (by Mario Hewardt and Daniel Pravat). It is a PDC tradition to have a very deep Win32 session; at first I was going to skip this tradition because I had only 10 sessions and felt that "inside the Winxx kernel" had been done enough times, but then I went back to my Win32 days and remembered that, despite getting good at COM and Win32, debugging was always a challenge I never mastered. My guess is that there are others out there in the same situation; the sad part is you never know how much more effective you can be until someone shows you. That is why we have the session: we need the experts to show us that it is easy. The challenge when I decided to pursue it as a session was to find the speakers for the topic. I ordered three debugging books online; when I ordered "Advanced Windows Debugging" I did not know the authors, but it was the highest-ranked book (5 stars). I read 1/3 of each book and it was a no-brainer that these folks were it! Then I met them and was reassured that they were perfect for a debugging session: they were cautious before agreeing to do it, they had a lot of questions, and then they built a plan and have been executing since. They also make a really great team, complementing each other very well. I have read the book and can guarantee it will save you time the next time you need to debug a problem; join the session and also save the time of reading the whole book! I am sure Mario & Daniel will share their best tips at the pre-con.

    • Concurrent programming and parallelism (by Stephen Toub, Joe Duffy & David Callahan). This was a no-brainer: it is a trend in computing, it is still new, and there is a lot of room for guidance toward best practices. I have seen Stephen & Joe present and both of them are great; this is one of those talks where you will learn a lot and be better prepared for the actual PDC sessions. We have been monitoring carefully to make sure there are no overlaps and all the sessions complement each other well.

    • VSTS (by Brian Randell). This session was picked early because the topic is very real-world. Lots of people have bought VSTS but don't know how to use it to its potential; I have been there myself. I went to the VSTS team and asked them to present it, and they said "Brian Randell is the man for this".
      I was surprised they recommended an external speaker, but once I spoke to Brian it was obvious he is full of best practices and advice on avoiding the worst practices. He is perfect for this session. After the session was announced, the VSTS team heard that the Agile session had a panel and they asked to have a panel too; Brian Harry, Sam Guckenheimer, and others are coming for Q&A at the end of Brian's session. This detail might not be in the abstract.

    • Windows Mobile apps (Doug Boling & Jim Wilson). Another trend ready to take off. My guess is there are lots of people who have Windows and .NET skills that they can use on mobile, but that does not fully prepare them for mobile; you have to understand the constraints (CPU, battery, etc.) and the tools (debuggers, emulators, etc.). Doug and Jim are the two best-known Windows Mobile trainers. I am quite excited that we got both of them for a joint session; I know the content they are preparing is new, so anyone who knows them or has seen them before should still get a lot out of this session. This session should also complement any other mobile sessions at PDC.

    • Agile (Mary Poppendieck and Grigori Melnik). Practical advice from the industry and from within Microsoft.
      Mary brings industry experience, best practices, and a lot of insights on going agile and sustaining it. Mary is going to deliver some invaluable advice to help you overcome the most common challenges and pitfalls. In the later part of the seminar, Grigori will share the Microsoft perspective: how our own patterns & practices team does it. Agile is a huge deal for the p&p team. I am excited to hear their advice. For the more 'experienced' agile practitioners, there is also a panel at the end. Peter Provost and Jim Newkirk are joining Mary & Grigori to answer all questions!

    • WCF and REST (Juval Lowy and Ron Jacobs). This session will be full of great advice and fun. Juval is probably the most recognized WCF/SOA speaker outside of Microsoft; he wrote the book on it and has spoken at all major conferences on the topic. But this time we threw him a curve: Ron is going to come and talk about REST. I am confident that between the two of them we will get a good pros & cons of SOAP and REST, how these two can co-exist in a WCF world, etc. These two folks have great chemistry; I really want to see them as a team! I have seen Ron present this REST talk before, but I am sure with Juval in the room the level will be raised immensely.

    • Performance in the .NET framework (Mark Friedman, Joe Hellerstein & Vance Morrison). This session also aims to address a real-world scenario. .NET performance is good, but with perf there is always room for improvement. Vance has been working on .NET from the very beginning and has been a huge part of the team's progress; Mark & Joe joined later, but are pretty deep into areas like thread pooling and garbage collection. I can't wait to hear these folks share their advice; I am certain there will be enough to boost the perf in your app, and to trigger at least three "Aha!"s during the day..  From what I have seen, these three folks work great together; I think the session will come together very cohesively.



    Registering for pre-cons. 
    Now that you have seen the line-up, I hope you consider joining.  It is going to be fun and enlightening, I promise!
    I have to close with an FAQ that has come up several times already: if you already registered for the conference and need to change your registration so you can attend the pre-con, it is possible; just email PDC2008@ustechs.com and they will help you.

    If you have a suggestion for improving the pre-cons, feel free to email me via the blog.  We welcome all suggestions!   


    That is it (for now)!   C U at PDC Pre-cons!

  • Jaime Rodriguez

    Client profile explained..


    As I mentioned on the SP1 cheat sheet, client profile is an exciting new deployment feature in SP1..  
    Troy Martez has an intro paper and a deployment guide for it. 
    Those docs and his upcoming blog are the ultimate reference on client profile, but I wanted to share the whole context and my 2c on the subject [because I have seen a lot of questions on the feature]. 
    [For most of you readers half of this is old news, feel free to skip to the highlighted sentences].

    The motivation for client profile:

    The .NET run-time has gotten big over time.  The growth was positive (we got WPF, WCF, CardSpace, LINQ, etc.) but the trade-off was increased download size and increased install time.
    The bad rep on download size is compounded by how we package the off-line installers.  For example, the off-line 3.5 SP1 installer is ~230 MB, because we ship the x86, x64, and ia64 bits bundled together. The reality is that if you used the 3.5 SP1 bootstrapper and installed online, you would download about a third of that size, via a ~3 MB bootstrapper (instead of the whopping ~230 MB).   

    Introducing client profile

    The idea is simple:
    1) package the subset of the framework that is most commonly used by client apps.
    2) install that subset.
    3) later (preferably in the background), upgrade the installed subset to the full .NET 3.5 SP1. 
    This package in #1 above is the client profile SKU.  Client profile includes WPF, Windows Forms, WCF, the BCL, data access, etc.  (for a full manifest, check Justin's post).
    The net result is that client profile gives you a .NET run-time with an initial download of ~28 MB and a much shorter install time than the full .NET framework. 

    The details on the implementation
    If you read #1 above, client profile is a subset of the framework. The more accurate explanation is that it is a subset of .NET 1.1x + a subset of 2.0 + a subset of 3.x, so unfortunately we cannot install client profile on a machine that already has a .NET framework installed.  

    • Why only install on 'clean' machines?  If you have the full 2.0 framework and we installed a subset of it, that could break your already-installed 2.0 apps.
    • If you think further into this, Vista shipped with the .NET framework 3.0 in the box, so that rules Vista out, and we conclude that client profile is supported on XP machines that do not have any framework installed.  For WS03, since it is a server OS, we don't do client profile.
    • On any machine that is not XP, or that already has a framework, trying to install client profile will install the full .NET 3.5 SP1.  

    If you look above again, step #3 upgrades the framework from client profile to the full .NET 3.5 SP1.. how does that happen?

    • Once you install client profile, the next time Windows Update runs on that system, it will try to upgrade it to the full .NET 3.5 SP1. This is nice because the download and install happen in the background (for me, Windows Update runs at 3 AM every day).
    • There is likely a brief period between client profile getting installed and Windows Update upgrading it to the full 3.5 SP1. What happens during this period?
      • Any apps coded against 3.5 SP1 client profile will run fine.
      • Any apps that need the full framework will prompt for a full framework install when they are launched.  You will be able to install the full framework seamlessly; you just won't get the benefit of it happening in the background (as you would via Windows Update).

    The temporary gotcha
    .NET 3.5 SP1 shipped last week, and client profile is available for download, but Windows Update does not start updating machines to .NET 3.5 SP1 until they complete their testing (4 to 6 weeks from now, we hope).   So if you installed client profile today, you would not be upgraded automatically as planned. For this reason, we are labeling the client profile release as "Preview" until Windows Update begins upgrading systems to 3.5 SP1.

    This does not mean the run-time will change; the run-time is frozen. We have the preview out so people can start testing their apps against client profile and planning their deployment;  we just recommend that you wait and release a client profile app only after Windows Update begins upgrading systems to 3.5 SP1.

    Other FAQs:
    A point of confusion is the packaging.  Client profile is ~28 MB, but if you go to download it, you will see two options available: 

    • The bootstrapper for client profile is a tiny exe (< 300 KB). It should be used for applications that will be online when you install. The bootstrapper looks at your OS and hardware platform (e.g. x86) and downloads only the needed bits (~28 MB). This online scenario is what client profile is designed for.
    • The client profile off-line installer. Most people would expect this to be a 28 MB file, wouldn't you? 
      Well, unfortunately not. It is actually ~250 MB. The reason is that we assume you will be off-line and your application can't fail at install time, so we include the whole 3.5 SP1 package plus the 28 MB client profile; this way, if you try to install it on a machine that already has the framework, we can still install 3.5 and your application will run. 
      • My personal 2c here is that most people who need to install off-line would be better off installing the full 3.5 SP1 framework at once. It is a slightly smaller package and, most importantly, it gets everything out of the way so there is no WU upgrade later; the only gotcha is that the installer will take a bit longer to install than client profile.

    I am sure you will have a lot more questions (like how do I create an app that targets client profile). 
    I will come back to client profile later, or at least point you to Troy's blog since he is working on explaining all of this.   For today, I just wanted to explain the platforms and scenarios that client profile aims to address, its relationship to Windows Update, and the explanation of the off-line installer's size (this was causing confusion, at least for me).  For the record, a benefit I did not tout today is that client profile lets you customize the UI for the install experience; I will have to come back to that one since it is neat-o.

  • Jaime Rodriguez

    On WPF reference applications and the new location for the WPF Hands-on-lab for building the Outlook UI?


    Tim is OOF and his automated response forwards WPF requests to me..   Want to know the most frequently asked question last week?  Where is the hands-on-lab for building the Outlook UI using WPF?.. 

    Answer: It is here.    Give me a few days to look through it and ask Ronnie (the author) if we should post it on windowsclient.net too..  

    There were six requests for it last week,  which is great because it confirms something we are thinking about today: we need more WPF reference applications.  

    We do have Family.Show,  but are thinking of a new one. Should we??

    If so, what is the scenario?   Should it be a LOB or a consumer scenario? High-end graphics?    Do you really need a step-by-step HOL?? Or would a slightly higher-level write-up explaining all the trade-offs and best practices do??  [We are leaning toward the latter]..

    Let me know via comments or email…



    PS – if we move it or add it to windowsclient.net I will put it in the comments for this post to avoid an extra post…   I tried to do that on Tim’s original post but new comments were disabled..

  • Jaime Rodriguez

    cheat-sheet to some of the WPF 3.5 SP1 features..


    .NET 3.5 SP1 buzz peaked very early, at the beta.  At the time I was immersed in Silverlight, so I am now having to catch up; which is a bit of work since the release is packed with lots of new features.  Below is my cheat sheet to date; I tried to group the features by what I saw as the "core" investments.

    The official release notes for 3.5 SP1 are at windowsclient.net/wpf.
    Tim Sneath has a great post that puts the enhancements into context, both on size (# of changes) and on impact (based on customer feedback).




    • Adam Kinney interviewed David Teitlebaum around beta1 time-frame about the graphics improvements: hardware accelerated bitmap effects, D3DImage, WriteableBitmap, etc.
    • Greg Schecter has this great series on hardware accelerated bitmap effects.
    • Dr. WPF's tutorial on D3DImage is very comprehensive.  This is a niche feature but it does enable a lot of scenarios where the previous solution [with airspace] was not ideal.   BTW, happy blog-anniversary Dr. WPF!


    • Adam Kinney has a great interview with Jennifer Lee.   I must say I am a bit surprised there is not more out there on XBAPs.  XBAP got a bad reputation at the 3.0 launch because it did not work on Firefox, and at the time WCF required full trust while XBAPs did not elevate. All of that has been fixed now, so I am excited about the improvements they made for XBAP in 3.5 SP1 (like the HTML splashscreen).
    • Lester has a sample on the WebBrowser control too [this is a very handy feature compared to Frame].



    The list above is not comprehensive, but it can help you catch up.  Please let me know what I missed [I am sure there is lots of that]...

  • Jaime Rodriguez

    Datagrid (part3): styling.


    In this third and final part of the datagrid series (part 1, part 2), we get into styling the datagrid a little bit. 

    This part is not a comprehensive tutorial on styling datagrids; I will just touch on what we did for my sample and share a few tips & tricks.

    Here is the final look. 


    Here is the declaration for the whole data grid.

    <dg:DataGrid ItemsSource="{Binding Data}" Margin="20"
        ColumnHeaderStyle="{StaticResource ColumnHeaderStyle}"
        RowStyle="{StaticResource RowStyle}"
        Grid.RowSpan="1" x:Name="BigKahuna">

    If you look above, I styled the header. All I wanted was a blue background with a white foreground.

    <Style x:Key="ColumnHeaderStyle" TargetType="{x:Type dg:DataGridColumnHeader}"
           BasedOn="{StaticResource {x:Type dg:DataGridColumnHeader}}">
        <Setter Property="Background" Value="{StaticResource DataGrid_Style0_Header}" />
        <Setter Property="Foreground" Value="White" />
        <Setter Property="HorizontalContentAlignment" Value="Center" />
    </Style>
    I also wanted alternating rows, so I set AlternationCount to 2 on the datagrid and then created triggers in the RowStyle.
    <Style x:Key="RowStyle" TargetType="dg:DataGridRow">
        <Style.Triggers>
            <Trigger Property="AlternationIndex" Value="1">
                <Setter Property="Background" Value="{StaticResource DataGrid_Style0_Alt1}" />
            </Trigger>
            <Trigger Property="AlternationIndex" Value="0">
                <Setter Property="Background" Value="{StaticResource DataGrid_Style0_Alt0}" />
            </Trigger>
        </Style.Triggers>
    </Style>

    One customization that I did not do in the demo, but is probably common, is tweaking the selection. By default the DataGrid does a blue highlight; imagine you want to change that color to a green. You would just need to provide a CellStyle for DataGridCell.

    <Style x:Key="CellStyle" TargetType="{x:Type dg:DataGridCell}">
        <Style.Triggers>
            <Trigger Property="IsSelected" Value="True">
                <Setter Property="Background" Value="#FF3FC642" />
            </Trigger>
        </Style.Triggers>
    </Style>
    and of course apply it on the datagrid: CellStyle="{StaticResource CellStyle}"
    In the usual WPF way, styles and templates allow incredible flexibility and leave the designer in control for a visually stunning look with no code.
    The rest is tips & tricks for designers. 

    If you look in the obvious place (Edit Template), Blend does not have a lot of design-time support for all the pieces in the datagrid, because the parts to customize are not quite parts of the ControlTemplate. For the most part these are DataTemplates, so you can work around it by creating any ContentControl and editing its ContentTemplate there.

    To edit the parts of the DataGrid's ControlTemplate, you can actually drop the individual parts (like DataGridHeader) of the template into a regular window and edit their styles there. 


    The only 'tricky' one you might run into is DataGridHeaderBorder (because it does not expose a template). My advice there is to go ahead and select Edit Style -> Create Empty.  Treat it as you would a border.  
    Window4.xaml in the sample code has a small example of editing the pieces. [it is not complete by any means. It is also not pretty].

    That is it in terms of getting my little portfolio data styled, and it closes the DataGrid series. I hope it is useful to someone reading it.  It was fun playing with the DataGrid. You don't realize how big this thing is until you play with it and see all the small, yet meaningful options it has.

    For feedback, best place is the codeplex discussion
    This build is a CTP, but it is quite functional and near completion. The control will ship out of band so don't wait too long; it will be shipping soon.

  • Jaime Rodriguez

    datagrid (part 2) -- Show me some code.


    In part 1, I walked through some of the features in datagrid.
    The source for the series is here.

    In this part, we will create a UI that looks like this:

    I will mostly highlight the interesting parts [instead of going step by step].
    The source is available, and it would be repetitive to go step-by-step. 
    One thing to note is that (approximately) 90% of the work to customize it is in XAML.  Some of it I did in code just to illustrate a point.

    DataGridTextColumn bindings were the simplest ones. 
    You can assign a DataFieldBinding -- which is an actual Binding to the data you want. I liked the approach of passing the full binding [instead of a property name] because it allowed me to pass parameters like StringFormat (see below) and converters and other similar binding features.  Nice!

    <dg:DataGridTextColumn DataFieldBinding="{Binding Quantity}" Header="Quantity" />
    <dg:DataGridTextColumn DataFieldBinding="{Binding Quote, StringFormat={}{0:C}}" Header="Quote" />  
    From above, notice I mostly passed strings into the Header property.  This was my choice; I could have used more complex objects, since headers are content controls and have a HeaderTemplate, but I did not need that here.  

    The Symbol column is a DataGridHyperlinkColumn; I did nothing to customize it. If you compare it to DataGridTextColumn you will see an extra property: on the DataGridHyperlinkColumn, DataFieldBinding expects a Uri, and ContentBinding provides the text that the UI will display.


    <dg:DataGridHyperlinkColumn DataFieldBinding="{Binding SymbolUri}"  
    ContentBinding="{Binding Symbol}" Header="Symbol" SortMemberPath="Symbol"/>

    DataFieldBinding - is where the data (the Uri) comes from.  
    ContentBinding - is the 'text' that is displayed on the hyperlink. 
    SortMemberPath - is the data used for sorting. The datagrid looks at the property this path points to, and if it implements IComparable it will automatically handle the sorting. [In this app most of the columns sort, and I implemented no sorting logic at all :)]  

    If you run the app, you can also see the 'edit' behavior for DataGridHyperlinkColumn: you can edit the Uri, but not the actual text (ContentBinding). You can manipulate the text programmatically, just not from the editing experience.

    I implemented the Today's Change column as a DataGridTemplateColumn.

    <dg:DataGridTemplateColumn CellTemplate="{StaticResource DailyPerformance}" 
    Header="Today's Change" SortMemberPath="DailyDifference" />

    A DataGridTemplateColumn is one where I can apply a CellTemplate to generate the UI.  I first chose it for this column because I wanted to highlight gains (> 0) in green and losses (< 0) in red using a DataTemplate.Trigger, but that did not work, so I ended up using a converter. In hindsight this is likely a better solution [more performant] anyway.

    <DataTemplate x:Key="DailyPerformance">
        <TextBlock Text="{Binding DailyDifference, StringFormat={}{0:C}}"
                   Foreground="{Binding Converter={StaticResource StockToBrushConverter},
                               ConverterParameter=IsDailyPositive}" />
    </DataTemplate>

    Notice that I was still able to use a SortMemberPath on the DataGridTemplateColumn. This is really nice because, regardless of what my UI looks like, I can still sort the data. The Total Gain column uses the same technique as Today's Change.  

    Rating column is a little gaudy on purpose. 

    <dg:DataGridTemplateColumn CellTemplateSelector="{StaticResource StarsTemplateSelector}"
        Header="Rating" SortMemberPath="Rating" />

    Here I used a TemplateSelector just for illustration purposes.

    The selector is trivial. All it does is look for a template in a resource dictionary for the datagrid.

    public class StarsTemplateSelector : DataTemplateSelector
    {
        public override System.Windows.DataTemplate SelectTemplate(object item,
            System.Windows.DependencyObject container)
        {
            StockXAction sac = item as StockXAction;
            FrameworkElement fe = container as FrameworkElement;
            if (sac != null && fe != null)
            {
                string s = sac.Stars.ToString() + "StarsTemplate";
                DataTemplate ret = fe.FindResource(s) as DataTemplate;
                return ret;
            }
            return base.SelectTemplate(item, container);
        }
    }

    From the XAML, you can also notice the SortMemberPath again. The UI now has Star ratings on it, yet I can still sort and did not have to write any code !!

    Separator columns are empty 'dummy' columns I added just to make empty space separating my columns from the autogenerated ones. See Autogenerated Columns below for why.

    Autogenerated columns
    I wanted to leave AutoGenerateColumns="true" so you could see how the 'raw' data turns into the view.  It is also nice because you get to see some of the column types I did not use, for example the ComboBoxColumn -- you can see it in the autogenerated Rating column: it is an enum, and it turns into a ComboBoxColumn.

    Default data for new rows
    If you scroll to the bottom and add a new row [functionality that comes out of the box], you will see this:


    The NaN is a problem. What happens here is that it is trying to calculate Gain, but the data has not been initialized.

    The workaround is to handle the DataGrid's InitializingNewItem event. This will be called as a new record is initialized.

    this.BigKahuna.InitializingNewItem +=
        new InitializingNewItemEventHandler(BigKahuna_InitializingNewItem);

    void BigKahuna_InitializingNewItem(object sender, InitializingNewItemEventArgs e)
    {
        // cast e.NewItem to our type
        StockXAction sa = e.NewItem as StockXAction;
        if (sa != null)
        {
            sa.Symbol = "New data";
            sa.Quantity = 0;
            sa.Quote = 0;
            sa.PurchasePrice = 0.0001;
        }
    }

    Copying data on DataGridTemplateColumns

    Another issue you would notice is that if you do a Copy (Ctrl-C), or right-click into the context menu I added, the templated columns are not copied by default.  What I needed to do for copying to work is pass a binding to ClipboardContentBinding.  So we can tweak the template we had earlier; I will be tricky and pass the enum value (Stars).

    <dg:DataGridTemplateColumn CellTemplateSelector="{StaticResource StarsTemplateSelector}" 
    Header="Rating" SortMemberPath="Rating" ClipboardContentBinding="{Binding Stars}" />

    Now when I copy and paste, I do get the value generated from ToString() on the enum value.

    One more thing to mention around copying is that Data*Column has a CopyingCellClipboardContent event. This is good for overriding the value if I did not have a binding; what I noticed in this build is that if there is no binding set on ClipboardContentBinding, the event does not fire.  This will be fixed by RTM; in the interim, just pass any binding (like {Binding}) and when the event fires you can override, from code, the value that will be copied & pasted.
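    As a sketch of that interim workaround (the ratingColumn variable is hypothetical, and I am assuming the event args expose Item and Content the way they do in the shipping control):

    ```csharp
    // Hypothetical sketch: ratingColumn is assumed to be the Rating
    // DataGridTemplateColumn, with ClipboardContentBinding="{Binding}" set in
    // XAML so the event actually fires on this build.
    ratingColumn.CopyingCellClipboardContent +=
        delegate(object sender, DataGridCellClipboardEventArgs e)
        {
            // e.Content holds the value about to go to the clipboard;
            // replace it with the text we actually want copied.
            StockXAction row = e.Item as StockXAction;
            if (row != null)
                e.Content = row.Stars.ToString();
        };
    ```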

    OK, that covers most of the functionality. In part 3 we can take care of the styling.

  • Jaime Rodriguez

    dabbling around the new WPF datagrid (part 1)


    On Monday, the WPF team released the CTP of their new datagrid control.  
    You can download it from here.  [Note that it requires .NET 3.5 SP1, released monday too]

    I have been playing with it so I created this 3 part series.

    • Part 1 (this write-up) is about the features in the grid and the ones missing from it.
    • Part 2 is a hands-on exercise to apply the features to customize the presentation of data (aka view) of the datagrid.
    • Part 3 includes a few tips & tricks on customizing/styling the datagrid.

    The source for this sample is here.

    Getting started was trivial. 

    1. I got the bits from codeplex,
    2. created a new WPF application, and
    3. added a reference to the WPFToolkit.dll.
    4. From my Window1.xaml, I added the xmlns declaration so I could refer to the datagrid. No need to map the assembly, the tools do that for you.

    <Window x:Class="WpfApplication1.Window1"
        Title="Window1" Height="300" Width="300"    

    I had some 'dummy' data simulating financial transactions so I took advantage of AutoGenerateColumns feature in the grid to get a quick 'fix':
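    The snippet is not reproduced here, but the one-liner would look something like this (ItemsSource binding assumed from the rest of the series):

    ```xml
    <dg:DataGrid x:Name="BigKahuna" ItemsSource="{Binding Data}"
                 AutoGenerateColumns="True" />
    ```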


    The results were quite rewarding for a line of code. I could Reorder the columns, Resize the Columns, Sort by Column, Add New Rows , Edit the data, Select Rows ,  and Copy to Clipboard.

    I then moved on quickly to styling it a little bit..  Like every thing else WPF, the datagrid is incredibly flexible on customization by using styles and templates. 
    In a few minutes, I hand-wrote this

            <SolidColorBrush x:Key="DataGrid_Style0_Header" Color="#FF4F81BD"/>
            <SolidColorBrush x:Key="DataGrid_Style0_Alt0" Color="#FFD0D8E8"/>
            <SolidColorBrush x:Key="DataGrid_Style0_Alt1" Color="#FFE9EDF4"/>
            <Style x:Key="ColumnHeaderStyle" TargetType="{x:Type dg:DataGridColumnHeader}">
                <Setter Property="Background" Value="{StaticResource DataGrid_Style0_Header}" />
                <Setter Property="Foreground" Value="White" />
            </Style>
            <Style x:Key="RowStyle" TargetType="dg:DataGridRow">
                <Style.Triggers>
                    <Trigger Property="AlternationIndex" Value="1">
                        <Setter Property="Background" Value="{StaticResource DataGrid_Style0_Alt1}" />
                    </Trigger>
                    <Trigger Property="AlternationIndex" Value="0">
                        <Setter Property="Background" Value="{StaticResource DataGrid_Style0_Alt0}" />
                    </Trigger>
                </Style.Triggers>
            </Style>

    and got this:


    [We will cover styling in part3, let's first walk through all features]: 

    Selection Unit: 
    Under the hood, you are really selecting cells, but the grid has a couple of nice modes to make it easier for developers to use the concept of selected rows:

    • Cell - selects cells.  In this mode, SelectedCells property has selected cells and SelectedItems is null.
    • FullRow-- when a user selects a cell in a row, all of the cells in that row are selected. In this mode SelectedCells has all the selected cells in that row. SelectedItems has the selected rows.
    • CellOrRowHeader - a mix of the two above, where full-row select happens when clicking on the RowHeader (if it is showing).  In this mode:

        o SelectedCells has all the selected cells, including all the cells in a selected row when a row is selected through the RowHeader

        o SelectedItems has the rows which are selected through the RowHeader, or is empty if only cells are selected (i.e., highlighting all the cells in a row will not add that row to SelectedItems)

    Selection Mode:
    In Single mode, I can choose a single unit (see above for unit) and in Extended Mode I can select multiple units. 
    The usual keyboard navigation short-cuts apply (Shift takes you to the end of current row, Ctrl preserves previous selection, etc.)


    Using the GridLinesVisibility property, I can choose which gridlines are visible: All, Horizontal, Vertical, or None.
    I can also choose the color for the horizontal and vertical gridlines.
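    A small sketch of those settings (the brush choices are mine, not from the post, and I am assuming the gridline brush property names match the shipping control):

    ```xml
    <dg:DataGrid ItemsSource="{Binding Data}"
                 GridLinesVisibility="Horizontal"
                 HorizontalGridLinesBrush="LightGray"
                 VerticalGridLinesBrush="Transparent" />
    ```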

    Headers (Row & Columns)

    By tweaking the HeaderVisibility property, I can choose which headers are visible:  All, Column, Row, None.  
    The headers for each column can be customized/styled using a HeaderTemplate. 

    Column operations

    Autogeneration of columns works quite well.  The default mappings are:

    Data Type    Generated Column
    string       DataGridTextColumn
    Uri          DataGridHyperlinkColumn
    bool         DataGridCheckBoxColumn
    enum         DataGridComboBoxColumn*

    *the ComboBoxColumn is created only if the field is writeable (i.e., the property is not read-only).
    For other types (e.g. DateTime or objects) the DataGridTextColumn will be the default with a ToString() on the object.

    You can customize AutoGeneration of columns by handling the AutoGeneratingColumn event. You will see this in part2.
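    A minimal sketch of such a handler (the property names are hypothetical, taken loosely from the sample data, and I am assuming the event args expose PropertyName, Column, and Cancel):

    ```csharp
    // Hypothetical handler: drop one autogenerated column and
    // retitle another as the columns are generated.
    void BigKahuna_AutoGeneratingColumn(object sender,
        DataGridAutoGeneratingColumnEventArgs e)
    {
        if (e.PropertyName == "SymbolUri")     // assumed property name
            e.Cancel = true;                   // do not generate this column
        else if (e.PropertyName == "Quote")
            e.Column.Header = "Last Quote";    // override the generated header
    }
    ```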

    ReadOnly columns are missing from the CTP, but that is not a huge problem; you can easily accomplish read-only behavior by using DataGridTemplateColumns and replacing the templates with read-only controls (like TextBlocks).

    Column resizing and reordering are implemented out of the box and are toggled on/off via CanUserReorderColumns and CanUserResizeColumns respectively. 
    If you want to control reordering per column, there is a CanUserReorder property on the column itself.

    Frozen columns. A frozen column is one that does not scroll out of view when the user scrolls in the horizontal direction. When a column is frozen, every column displayed to its left is also frozen.  Frozen columns are supported out of the box; you can control them by setting the IsFrozen property on a column.

    You can see frozen columns in the demo I created by right clicking and showing the ContextMenu.

    Row operations

    Adding new rows is supported. You can enable it via CanUserAddRows. Deleting rows is supported too, controlled via CanUserDeleteRows.

    For alternating rows, the datagrid has an AlternationCount property. 
    The way it works is you set AlternationCount to the total number of styles/colors to be used. Usually this is two colors, truly alternating, but it could be more colors if needed [and you like to get funky].  

    Once AlternationCount has been set, on your RowStyle you can create a trigger that checks AlternationIndex (which should be 0 to AlternationCount-1) and set the style there. 

    Editing Cells

    Before getting into editing, I have to comment on entering Edit Mode.
    The Datagrid requires you to have focus in the cell in order to get into edit mode.  To get focus, you can click on a cell, or tab into it.
    Once you have focus, the most common gestures to get into edit mode are supported:

    • Using the Keyboard – cell has focus, start typing, goes into edit
    • Using the Keyboard – cell has focus, enter edit mode command (ex. F2), goes into edit
    • Using the Mouse – cell has focus, click, goes into edit
    • Using the Mouse – cell may or may not have focus, double click goes into edit.

    For programmatically manipulating the cell with focus, or during edit mode, the datagrid has a CurrentCell property, and each DataGridCell instance has an IsEditing property.

    You can customize the Editing experience for any column by providing a CellEditingTemplate.  [If you used one of the stock columns listed above, those automatically provide a template].
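    For example, a template column might provide both a display template and an editing template; a sketch (the Quote binding is assumed from the sample data):

    ```xml
    <dg:DataGridTemplateColumn Header="Quote">
        <dg:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
                <!-- shown when the cell is not in edit mode -->
                <TextBlock Text="{Binding Quote, StringFormat={}{0:C}}" />
            </DataTemplate>
        </dg:DataGridTemplateColumn.CellTemplate>
        <dg:DataGridTemplateColumn.CellEditingTemplate>
            <DataTemplate>
                <!-- swapped in when the cell enters edit mode -->
                <TextBox Text="{Binding Quote, Mode=TwoWay}" />
            </DataTemplate>
        </dg:DataGridTemplateColumn.CellEditingTemplate>
    </dg:DataGridTemplateColumn>
    ```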

    Editing a cell has three commands that are fired as you get in and out of edit mode:

    • BeginEditCommand (any of the gestures above)
    • CancelEditCommand  ( press Esc )
    • CommitEditCommand  (press Enter, Tab to next cell, or change focus using mouse)

    Other features
    Copy to Clipboard is implemented.  Ctrl-C works fine. The data format appears to be tab-delimited, which is nice because it works seamlessly with Excel.
    Keyboard Navigation [by using arrows and tab] works out of the box. 

    Some missing features already announced: 
    The big one is RowDetails.   I hear it is already in later builds, so the expectation is that it will be in by RTM.
    Also ReadOnly columns, and support for hidden columns [though I am thinking there are workarounds for those today].

    Bugs along the way and known issues.

    The only one I ran into is that DataTemplate.Triggers does not work in this build.  I hear it will be working in later builds.

    Show me the code (or demo).
    Playing with all the features above is easy. 
    The sample app I created has a little bit of UI data-bound to the datagrid that lets you manipulate the grid to see most of the features above. 

    In Part2, we start using these features to build a more 'insightful' view of the data.

  • Jaime Rodriguez

    Working with Collections in Deep Zoom.


    In the Deep Zoom run-time, you can load two types of compositions:

    • A single image composition is when you interact with a single image at run-time.  This does not mean you started with a single image, you could start with many images and compose a scene, then you export from Composer and all of the images get 'stitched' into a single composition that you can interact with at run-time.  You interact (like zooming, opacity, etc.) with your image by changing the properties of your MultiScaleImage.
    • A "Collection of Images" is when you compose a scene but export it as a collection of images (duh jaime do u get paid by the word? word, word?)
      At run-time, you can still interact with each of the individual images in your composition.  
      The images are exposed via the SubImages collection of your MultiScaleImage.   You can still set properties on the MultiScaleImage, and these properties will affect all the images in the collection [for example, if I zoom in to 200% in the MSI, that impacts which SubImages are visible], but with collections I also get the benefit of interacting with the SubImages directly.

    Collections have a lot more flexibility of course, but I will also caution you about two small concerns:

    • when dealing with collections,  you likely end up downloading a few more tiles as you go.  Not a huge deal
    • Your images load independently;  this again is not a huge deal unless you have huge discrepancies in size; in that case you will see your small images load earlier than your bigger ones.  [To solve this you could play with Opacity]

    This post is about working with Collections, so let's assume I decided the two issues above are not in play (that is the case for most people). 

    To find out what we can do with a MultiScaleSubImage, all we need is to look at the class definition:

    • AspectRatio -- read-only property == width/height.
    • Opacity -- self-explanatory; 0 == transparent, 1.0 == opaque.
    • ViewportOrigin == top-left corner of the image.  Stay tuned for a lot more below; this is a really interesting property and 1/3 the point of this post.
    • ViewportWidth == width of the area to be displayed.  This value is in logical coordinates. For example:
      • a value of 1 displays the entire image (no zoom),
      • a value of 0.5 is 200% zoomed in, and a value of 0 is completely zoomed in (the user cannot see the image at all).
      • A value above 1 is zooming out from the image. For example, a value of 2 means that the image will take up half the width of the MultiScaleImage control area (50% zoom).
    • ZIndex -- self-explanatory; higher numbers are in front of lower ones.
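
    As a quick sanity check on the ViewportWidth values above, here is a tiny sketch (plain JavaScript; `zoomPercent` is my own helper name, not a Silverlight API):

    ```javascript
    // Convert a Deep Zoom ViewportWidth into the zoom percentage described above:
    // 1 -> 100% (no zoom), 0.5 -> 200% (zoomed in), 2 -> 50% (zoomed out).
    function zoomPercent(viewportWidth) {
      if (viewportWidth <= 0) return Infinity; // 0 == completely zoomed in
      return (1 / viewportWidth) * 100;
    }

    console.log(zoomPercent(1));   // 100
    console.log(zoomPercent(0.5)); // 200
    console.log(zoomPercent(2));   // 50
    ```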

    A less obvious part of dealing with collections is getting at the metadata:

    If you have used Deep Zoom Composer you might have noticed that Composer has a property called "Tag" for each image. As of Beta 2, Tag is not exposed via the MultiScaleSubImage.  So, how can you get to this Tag?

    The tags are stored in the metadata.xml file generated by Deep Zoom Composer. 
    You can easily get to this file using the WebClient or HttpWebRequest networking APIs in SL2.  It is a trivial two-step process:

    1. Make a call to read the XML file.  I did it from ImageOpenSucceeded so as not to compete with the image's download traffic, and to know for sure that when the WebClient callback happened I could access the images in the collection.

       void msi_ImageOpenSucceeded(object sender, RoutedEventArgs e)
       {
           WebClient wc = new WebClient();
           wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
           wc.DownloadStringAsync(new Uri("GeneratedImages/Metadata.xml", UriKind.Relative));
       }
    2. Then we read the results.    The code is below.  I used LINQ to XML, which makes it trivial :)
       The only thing worth highlighting from the code is the "map" between tags and MultiScaleSubImages. 
       metadata.xml has a ZOrder, which is a 1-based index of the position of the image in the collection.   Despite its name (ZOrder), this has nothing to do with MultiScaleSubImage.ZIndex.

       The actual collection is 0-based, so we subtract one from the value read from metadata.xml.    I have put comments on top of the two most relevant lines.

    void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
    {
        if (e.Cancelled == false && e.Error == null)
        {
            string s = e.Result;
            XDocument doc = XDocument.Parse(s);
            var images = from a in doc.Element("Metadata").Descendants("Image")
                         select a;

            foreach (XElement image in images)
            {
                CollectionImage ci = new CollectionImage
                {
                    Height = (double)image.Element("Height"),
                    Width = (double)image.Element("Width"),
                    // here we read the ZOrder from metadata.xml and subtract one
                    ZOrder = ((int)image.Element("ZOrder")) - 1,
                    Tag = (string)image.Element("Tag"),
                    Location = new Point { X = (double)image.Element("x"), Y = (double)image.Element("y") }
                };

                // here we map from the SubImages collection based on the ZOrder we read
                ci.Image = msi.SubImages[ci.ZOrder];
                _images.Add(ci);
            }

            items.ItemsSource = _images;
        }
    }


    If you look at the code, I created a CollectionImage class which aggregates the data from metadata.xml and the corresponding MultiScaleSubImage. 
    This means I could now filter, or do anything else, since the data is merged.  Kirupa has an example of using tags to filter (so I will stop here on that topic and move to viewports). 


    ViewportOrigin represents the left (x), top (y) corner of your SubImage relative to the MultiScaleImage control.   
    The surprise (for me at least) is that:

    • The values are normalized relative to the ViewportWidth of the subimage you are dealing with.
    • Moving towards the right in the horizontal (X) direction is actually a negative offset, and so is moving towards the bottom.

    Got it?? OK! You are done.  
    If you are like me, you might want to see a sample.  Here are some below: 

    [image] This is a Deep Zoom composition w/ two images. 
    Blue is 1200x800  ( Aspect ratio = 1.5 )
    Yellow is 600x400 ( AR = 1.5 )

    At this point ViewportOrigin = 0,0 for both...   Looks good to me.

    It is worth mentioning [if you are trying to make sense as you go] that the
    ViewportWidth for blue == 1.0  [it takes the whole width available] and the
    ViewportWidth for yellow == 2.0  [it takes 1/2 the width available to the control].

    The numbers on the images are approximations; if you read a coordinate of, say, 900,600, that means it is around there, but not quite exact.

    Let's now start changing ViewportOrigin.
    [image] Here I simply changed the ViewportOrigin of the yellow image.

    My first instinct looking at this one would be 1,0... [since it is 600 pixels to the right of 0,0].
    I was wrong!!
    This is actually ViewportOrigin = -1, 0.
    Remember, when you move an image to the right or bottom, the offsets are negative.

    You want to know what would happen if VO was 1,0??
    The demo is at  http://www.cookingwithxaml.com/recipes/DeepZoomCollection/default.html

    You can play with ZIndex, Opacity and ViewportOrigin for each image [their values are databound on the grid].
    [image] Having explained that ViewportOrigin offsets (to the right and bottom) are negative numbers,
    can you guess what the offset is for the image to the right?
    My guess was (0, -1), but then I was wrong again!  
    The ViewportOrigin here is (0, -0.6666...)

    because the offsets are relative to the image's width, and in this case that is 600.
    So a ViewportOrigin of (0, -1) would have been lower on the y axis [instead of at 400, it would be at 600].
    [image] This is 0,-1 and exactly what we expected for this one (after reading the line above).
    [image] I know you have it by now, but just for fun, this is (-1.5, -0.3333), aka (900, 200).
    Notice how half of our yellow image is clipped. 
    [image] This is ViewportOrigin (0.5, -0.3333)... I figured I should show you something with a positive value for ViewportOrigin...
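
    All of the worked examples above follow a single rule: take the pixel offset, normalize it to the scene, multiply by the subimage's ViewportWidth, and negate (equivalently, divide the offset by the subimage's own on-screen width and negate). A small sketch (plain JavaScript; `viewportOriginFor` is my own helper, not part of the Deep Zoom API):

    ```javascript
    // Compute the ViewportOrigin that places a subimage at a given pixel offset.
    // sceneWidthPx is the pixel width of the whole composition; offsets to the
    // right and bottom come out negative, as described above.
    function viewportOriginFor(offsetXPx, offsetYPx, sceneWidthPx, viewportWidth) {
      return {
        x: -(offsetXPx / sceneWidthPx) * viewportWidth,
        y: -(offsetYPx / sceneWidthPx) * viewportWidth,
      };
    }

    // The yellow image (ViewportWidth == 2.0) in the 1200px-wide scene:
    viewportOriginFor(600, 0, 1200, 2);   // x: -1       (600 px to the right)
    viewportOriginFor(0, 400, 1200, 2);   // y: -0.666...(400 px down)
    viewportOriginFor(900, 200, 1200, 2); // x: -1.5, y: -0.333...
    ```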

    Again, you can play with the ugly but hopefully useful sample here.. 
    Just change the ViewportOrigin, or any other property, and see what happens.
    You can use the same sample to play with Opacity, ZIndex and ViewportWidth; this will show you the flexibility in collections.
    Don't get too tricky with the values, as there is no validation.

    Mapping SubImages to Element Coordinates
    Now that we understand ViewportWidth and ViewportOrigin, we can map from logical coordinates to element coordinates so we can put overlays on our MultiScaleImage, or do hit testing, or anything similar.

    What I did is put a small pink rectangle on the page; I listen for MouseMove on the MultiScaleImage and then do a kind of "hit test" to see which image I am over. I used ZIndex to select only the single image in front; if you do not use ZIndex, you can select multiple.

    So, what does the map look like??   The whole code is below, commented in detail..  I hope that makes it easier to explain than my rambling.

    /// <summary>
    /// Gets a rectangle representing the top-most image that the mouse is over
    /// </summary>
    /// <param name="elementPoint">Mouse position, in element coordinates</param>
    /// <returns>Rectangle representing element coordinates for the image, or 0,0,0,0 if not over an image</returns>
    Rect SubImageHitTestUsingElement(Point elementPoint)
    {
        Rect resultRect = new Rect(0, 0, 0, 0);
        int zIndex = 0;
        // We loop through all our images. 
        for (int i = 0; i < _images.Count; i++)
        {
            try
            {
                // Select our MultiScaleSubImage. 
                MultiScaleSubImage subImage = _images[i].Image;
                // NOTICE the scale is a multiplication of the size of our image (1 / subImage.ViewportWidth)
                // and the current zoom level (1 / msi.ViewportWidth) 
                double scaleBy = 1 / subImage.ViewportWidth * 1 / msi.ViewportWidth;
                // The two lines below convert our image size from logical to element coordinates.
                // Notice that for height, we must take the aspect ratio into account. 
                double width = scaleBy * this.msi.ActualWidth;
                double height = scaleBy * this.msi.ActualWidth * (1 / subImage.AspectRatio);
                // Now we convert our ViewportOrigin (logical coords) to element coords.
                // Reminder, this is top-left.  Notice that we multiply by -1 since 
                // we saw the negative math for ViewportOrigin. 
                Point p = msi.LogicalToElementPoint(
                    new Point(
                        -subImage.ViewportOrigin.X * 1 / subImage.ViewportWidth,
                        -subImage.ViewportOrigin.Y * 1 / subImage.ViewportWidth));
                // Now we create a rectangle in element coords. 
                Rect subImageRect = new Rect(p.X, p.Y, width, height);
                // Here we hit-test, using Contains,
                // and we keep track of the front-most element only.                    
                if (subImageRect.Contains(elementPoint))
                {
                    if (subImage.ZIndex >= zIndex)
                    {
                        zIndex = subImage.ZIndex;
                        resultRect = subImageRect;
                    }
                }
            }
            catch (Exception)
            {
            }
        }
        return resultRect;
    }

    I used element coords because that is what I was after. If you want logical coords, it should be easy from the code above: 
    just convert the point to logical, do the scaling for zoom, and hit-test against a logical rect.
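
    If you want to sanity-check the scaling math without Silverlight, the core of the method above is pure arithmetic. Here is a standalone sketch (plain JavaScript; the objects are stand-ins for MultiScaleImage/MultiScaleSubImage, and in place of LogicalToElementPoint I assume the MSI itself sits at ViewportOrigin 0,0 showing logical 0..1 across its ActualWidth):

    ```javascript
    // Compute the element-coordinate rectangle of a subimage, mirroring the C# above.
    // msi: { viewportWidth, actualWidth }
    // sub: { viewportWidth, viewportOrigin: {x, y}, aspectRatio }
    function subImageElementRect(msi, sub) {
      // scale = subimage size (1/sub.viewportWidth) times current zoom (1/msi.viewportWidth)
      const scaleBy = (1 / sub.viewportWidth) * (1 / msi.viewportWidth);
      const width = scaleBy * msi.actualWidth;
      const height = scaleBy * msi.actualWidth * (1 / sub.aspectRatio); // aspect ratio for height
      // Logical top-left of the subimage (negating the ViewportOrigin offsets),
      // then mapped to element coordinates under the 0..1-across assumption.
      const logicalX = -sub.viewportOrigin.x / sub.viewportWidth;
      const logicalY = -sub.viewportOrigin.y / sub.viewportWidth;
      const x = (logicalX / msi.viewportWidth) * msi.actualWidth;
      const y = (logicalY / msi.viewportWidth) * msi.actualWidth;
      return { x, y, width, height };
    }

    // The yellow 600x400 image (ViewportWidth 2, origin -1,0) in an 800px-wide MSI at no zoom:
    const r = subImageElementRect(
      { viewportWidth: 1, actualWidth: 800 },
      { viewportWidth: 2, viewportOrigin: { x: -1, y: 0 }, aspectRatio: 1.5 });
    console.log(r); // x: 400 (half-way across), width: 400, height: 266.66...
    ```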

    Fair enough???   The source is [you guessed it] at Skydrive.

    You can see a few tiny issues I did not care much about:
    1) My math is rounded, so sometimes the 'rectangle' I created is slightly off (adding some extra pixels should do fine)...
    2) I recalculate the rectangle only on mouse move, and I did not do it on pan... so if you zoom using the wheel or you pan, you will need to move the mouse one more time for the rectangle overlay to update.

    That is my part!!  Now it is up to you to build something really cool using real images and hit testing..

  • Jaime Rodriguez

    Installing Silverlight no longer requires a browser re-start… woohoo!!!


    A few weeks ago I noticed changes in browser behavior… 

    1) My IE  7.0.6000  no longer prompts for Click To Activate  :)

    2) My Firefox no longer required a restart after installing Silverlight…  If you had not noticed, using silverlight.js on IE allowed you to instantiate the Silverlight plugin right after installing, but on other browsers it required a restart… 

    I pinged Piotr, our deployment PM, asking if he was seeing the same, and he upped it by sharing that he had a way to instantiate Silverlight on ANY PLATFORM and ANY browser WITHOUT requiring A RESTART…  [call me a geek, but I was happy]..   Today, he posted the blog with the magic call: navigator.plugins.refresh()…   Check it out yourself…  

    I hear the next version of Silverlight.js (when we next release an SDK) will include these changes; in the mean time you can take his advice and use navigator.plugins.refresh() to get around the restart issue and instantiate SL right after installing…  

    [caveat: this does not work if silverlight is being updated, and the plugin was loaded, but that scenario is not common since silverlight auto-updates itself] …
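
    For reference, the shape of the trick is roughly this (a sketch only, not Piotr's actual code; `recreateObjectTag` is a hypothetical callback, and `nav` is passed in instead of touching the global `navigator` so the logic is testable):

    ```javascript
    // After the Silverlight installer reports success, refresh the browser's
    // plugin list so the page can instantiate Silverlight without a restart.
    // `nav` stands in for the browser's `navigator`; `recreateObjectTag` is a
    // hypothetical callback that re-inserts the plugin markup into the page.
    function refreshAndInstantiate(nav, recreateObjectTag) {
      if (nav.plugins && typeof nav.plugins.refresh === "function") {
        nav.plugins.refresh(false); // false: refresh the list without reloading pages
        recreateObjectTag();
        return true;
      }
      return false; // no refresh available: fall back to asking for a restart
    }
    ```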

    Thanks Piotr, and welcome to blogging..

  • Jaime Rodriguez

    Only two weeks to go until WPF Codecamp in North Carolina ( May 17)…


    On May 17,  Karl Shifflett and Josh Smith present their “WPF Multi-Tier Business Application Track at the Enterprise Developers Guild Code Camp”.
    The event is at the CPCC Central Campus located in Charlotte, NC.  

    If you are in the area or within driving/train distance I definitely recommend you try to attend…  Josh and Karl are two of the most knowledgeable and passionate WPF experts out there..   I had the pleasure of seeing Josh present at the 08 WPF bootcamp and he is very good; he has this funny/casual way of sharing a lot of deep experience..    I have not seen Karl speak yet, but given his Mole accomplishments there is no doubt this event is going to be very insightful…   [btw, if MS has another WPF bootcamp in the future, we will have Karl :)]

    I wish I could be there for this, but I have a conflict ;( …. If you are in the area, sign up soon, attend, and have lots of fun …  [there will even be prizes]

    Good luck Josh & Karl!!!   Thanks for volunteering and sharing!


  • Jaime Rodriguez

    Viewbox for Silverlight2


    Viewbox is a pretty handy 'container' in WPF..   It is a decorator that scales its child content to the size available to the Viewbox (if the child is smaller it scales it up >1; if the child is bigger it scales it down so that it fits, based on some stretch direction).

    You can find source code for a sample viewbox here

    If you want to see an ugly (yet still useful for those who know Viewbox) harness, there is a sample here..

    Sorry for no docs; the control itself is straightforward.  Please check the docs for the WPF Viewbox to see how to use it; the SL2 version above mimics WPF.

    Please do review the source if you are using it for a real project.



  • Jaime Rodriguez

    Some of the Silverlight blogs I read..


    This one got stuck on my drafts on 4/10… sorry …   (FYI, I am back from my trip; more on that soon ) …

    - - -

    I am out all of the next 1.5 weeks talking to ISVs in EMEA about WPF and Silverlight… It should be so much fun that I won't have time to blog :(..  In the mean time, here is an OPML of the Silverlight blogs I read…   

    If your blog is missing or you want to recommend some one else’s blog, please email me or leave comments…

    Again, OPML is here.

  • Jaime Rodriguez

    built-in Styling and generic.xaml


    Most people already know (from ScottGu's blog post, for example) that in Silverlight 2 you can override the ControlTemplate for a control and 're-define' its look.  However, I have received a few questions about using generic.xaml to accomplish this same task; I will share a few thoughts to tease you into digging deeper on your own.   If you are short on time, skip to [FAQs on built-in styles] below.

    Some definitions for the recurrent "what is the difference between a style and a template?": 

    A Style is an object (in markup or in code) that sets properties of a control. 
    All a style can do is set the value of existing properties on the control.  Imagine our control was a car: a Style could say something like wheelsize="17", bodycolor="cherry red", windowtreatment="tinted", etc.. 

    A template actually defines the pieces or parts of the car. For example, a template for a cheap convertible might not have a roof at all :)  or a template for a Car can decide if it is two doors or four doors, if it has four wheels or eight, etc. 
    When I explain it I always tell people: the Template defines the skeleton, the Style dresses the pirate (I like the pirate analogy because some pirates have one eye, or one leg, or one arm, etc., making good use of Templates).

    Where things get interesting is that a Style can set any property on the control, and Template is itself a property; so what you see the tools (like Blend) do most often when you want to override the Template of a control is override the whole Style, and the Template property with it. 

    Stay with me… even if the above does not make sense, the rest of the article will help.

    Where generic.xaml comes in is the "magic" that defines the default look for a control.  Let's imagine we want to create a GelButton.. 

    public class GelButton : Button    

    Simple enough. Now we want to use it in our Page.xaml user control; we add the namespace and the control:
    <UserControl x:Class="StylingSample.Page"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:samples="clr-namespace:StylingSample"
        Width="400" Height="300">
        <StackPanel Width="50" >
            <Button Content="Top"  Height="50"/> 
            <samples:GelButton Content="Cream" Height="50"></samples:GelButton>
            <Button Content="Bottom" Height="50"> </Button>
        </StackPanel>
    </UserControl>

    Would you be surprised if the outcome looked like this?


    I can’t tell you if you should be surprised or not (I am undecided myself), but I can tell you what happened!

    The control by default is lookless. You need to define the look for it. This is accomplished by assigning a valid ControlTemplate to the control  [via the Template property in the Control class].

    To assign the Template property, you could do something like:

    public GelButton()
    {
        this.Template = (ControlTemplate)XamlReader.Load(
            "<ControlTemplate xmlns='http://schemas.microsoft.com/client/2007'><Grid ..> </Grid></ControlTemplate>");
    }


    but a better way is to store the control template in a generic.xaml resource dictionary; then, magically, the runtime will pick it up from there. Your template is associated with your control via the TargetType attribute when defining the resource.  This template then becomes what we call the "built-in style".

    Here are the details on creating a built-in style.

    Generic.xaml is a ResourceDictionary -a property bag for resources- that you include in your assembly, at the root of the project.  If you are a WPFer you might be thinking it should be in themes\generic.xaml; I hear that is where it might end up, but for now (Silverlight 2 Beta 1) it needs to be in the root of the project.  The default (empty) generic.xaml could look like this:


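    (The original post showed the empty dictionary as a screenshot; here is a rough reconstruction. The standard namespaces are well known, but treat the exact attributes, and the samples namespace matching the one used later in the post, as my assumption.)

    ```xml
    <ResourceDictionary
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:samples="clr-namespace:StylingSample;assembly=StylingSample">

        <!-- built-in styles for your controls go here -->

    </ResourceDictionary>
    ```
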
    To define the look & feel for our GelButton we need to start with some default template. Long term, this will be a right-click inside Blend (like in WPF); temporarily, since Blend does not yet support styling, what I recommend is using David Anson's handy StyleBrowser application to copy the default Style for Button and paste it into the resource dictionary.

    [unfortunately the default button template is too verbose, so for practical purposes here I am going to use a much simpler template].

        <Style TargetType="samples:GelButton">
            <Setter Property="Background" Value="Black" />
            <Setter Property="Foreground" Value="White" />
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="samples:GelButton">
                        <Grid x:Name="RootElement" >
                            <Ellipse Width="{TemplateBinding Width}" 
                                Height="{TemplateBinding Height}" 
                                Fill="{TemplateBinding Background}" />
                            <ContentPresenter 
                                Content="{TemplateBinding Content}" 
                                ContentTemplate="{TemplateBinding ContentTemplate}" 
                                Foreground="{TemplateBinding Foreground}" 
                                VerticalAlignment="Center" HorizontalAlignment="Center" />
                        </Grid>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    Let’s dissect the work needed to create this template:

    1. Added an xmlns:samples to the resource dictionary.
      Notice the slightly different syntax from the namespaces you add to, say, a UserControl; in generic.xaml I included the assembly (which is the name of my DLL).
      If I had tried xmlns:samples="clr-namespace:StylingSample" without the ;assembly=StylingSample part, it would not work. [Trust me, I make that mistake often.]
    2. Next I defined the Style and told it the TargetType I wanted the Style applied to. You usually do this when defining a Style so your template can discover and validate the properties; but when doing this in generic.xaml, the magic that happens for built-in templates uses this information to create a relationship (I would say a bind) between this Style and the type. Now, whenever the type is instantiated, if no other style is applied, this Style is used as the default style.
    3. The rest is simple styling stuff.  TemplateBinding is probably the most interesting part: it creates a binding between the property we are setting and the actual control's property. For example, <ContentPresenter Foreground="{TemplateBinding Foreground}" > creates a bind between the content presenter's Foreground and the Foreground of the actual control.  This makes our UI styleable from within the tools: in Blend or in XAML you can declare <GelButton Foreground="Red" > or <GelButton Foreground="White" > and get that flexibility, as the template carries the value through.
      For more info on all of this you should watch Karen Corby's MIX presentation on "Rich, Dynamic UIs"..

    Now I can run the same code, changing nothing other than the resource dictionary I added, and I get:


    Since I did create a TemplateBinding for Background/Foreground, I can even have some fun..  After all, I promised some "meat".. Need food!! Sorry about that; it is 1:30 PM ..

    <UserControl x:Class="StylingSample.Page"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:samples="clr-namespace:StylingSample"
        Width="400" Height="300">
        <StackPanel Width="50" Margin="0,20,0,0">
            <samples:GelButton Content="" Height="20.4" RenderTransformOrigin="0.5,0.5" Width="48.8" Canvas.ZIndex="2">
                <samples:GelButton.Background>
                    <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                        <GradientStop Color="#FFF5DEB3"/>
                        <GradientStop Color="#FFE0B05C" Offset="0.826"/>
                    </LinearGradientBrush>
                </samples:GelButton.Background>
            </samples:GelButton>
            <samples:GelButton Content="Ham" Height="16" Canvas.ZIndex="1">
                <samples:GelButton.Background>
                    <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                        <GradientStop Color="#FFD64141"/>
                        <GradientStop Color="#FFE23939" Offset="1"/>
                        <GradientStop Color="#FEDAB6B6" Offset="0.43299999833106995"/>
                    </LinearGradientBrush>
                </samples:GelButton.Background>
            </samples:GelButton>
            <samples:GelButton Content="" Height="16">
                <samples:GelButton.Background>
                    <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                        <GradientStop Color="#FFF5DEB3"/>
                        <GradientStop Color="#FFECC06E" Offset="0.991"/>
                    </LinearGradientBrush>
                </samples:GelButton.Background>
            </samples:GelButton>
        </StackPanel>
    </UserControl>

    So, I just wasted 10 minutes of your time and 40 of mine introducing you to generic.xaml and built-in styles.  I promised to answer a few questions; here they are:

    [REF: FAQs on built-in styles]

    What are the benefits of built-in styles, why use generic.xaml instead of hardcoding the template?

    It is nice to store all your templates in a resource dictionary that you can easily swap, as opposed to having to do it in code.   Imagine you needed to create three themes for your app; doing it with hardcoded templates would be hard.
    Also, if you put your Template in the ResourceDictionary, the template can reference other resources in the dictionary itself.

    Why is it that all the examples I have seen do not use built-in styles?  We are always told to apply the style inline from App.xaml.

    Built-in styles are designed for control authors: when you write a control, you provide the look & feel.  If you look at the example above, I had to inherit from the Button class.  Most samples are purely styling a button, so they take a different approach.

    In the financials demonstrator, you inherited from Button and did nothing other than provide the built-in style; is this a best practice?

    I liked that approach (but I come from an enterprise background where we create bloated frameworks that often inherit just to create an abstraction in case something changes later); the one benefit you get is that you can use your button anywhere without having to explicitly refer to a style. 
    The disadvantage, of course, is that inheriting takes a bit of extra performance and memory; but this is pretty negligible from what I have seen. 

    Again, I don't call it a best practice, more of a personal preference.  If you look at the financials demonstrator now, I ended up adding a property later (UseReflection), so now the button does have a reason to be its own class.

    Built-in styles sound like I can change it all in one place? I don't want to crowd my code with <Button Style="{StaticResource GelButtonStyle}" >.

    That is right, if you can afford inheriting and the classes are not sealed.  That said, after building a few solutions I realized I had a false sense of centralization [yes, I made up the term]. 
    The argument is: with built-in styles, I can change the style in one place. But it is the same using App.xaml; you change the style itself in one place for all. What you replicate a lot is the name of the style, but the style itself is in one place.

    When can I not use built-in styles? 
    If the class is sealed, or the Template property is protected, then you cannot use this approach.

    Can I just create a generic.xaml and override the System.Windows.Controls templates without inheriting?
    Not that I know of. It does not sound like a good idea; I tried it just to see if it worked, and it did not work for me.

    Is applying a built-in style going to break or affect my states and parts?
    No. As long as your style uses the same names, the code will still pick all of that up as if it were an inline Style.

    We would not need built-in styles if you allowed TargetType everywhere, including on regular dictionaries, like WPF does.
    Fair point; these features are all being considered for versions after 2.0, stay tuned. Right now this way works, and it is flexible and comprehensive.

    Will this approach work with Blend? Will I still be able to style in Blend?

    Yes!  Blend works with this already; that is how it picks up the look & feel for System.Windows.Controls today.

    Why do Styles & ControlTemplates always go together? Can I just do my ControlTemplate?
    My personal opinion is that if you need a template and not the style, your template might be too stringent or too hardcoded; it would be the equivalent of writing a ControlTemplate that does not have any TemplateBinding in it. Don't get me wrong, I am not saying this is wrong; I am just saying that 99% of the time this does not happen. As for simply providing the ControlTemplate in generic.xaml, I don't believe that would work.

    In the financials demonstrator, you named your class Button, for something that inherited from System.Windows.Controls.Button.. Is using the same name required?

    Absolutely not. I chose the name because I was going to override all the controls, but I ended up changing my mind, and that made it more confusing. Sorry about that; the name does not matter (as long as it does not conflict). From experience, calling it Button will confuse you; don't do that.

    If I use the built-in style, does that mean a ‘consumer’ of my button will not be able to style the button later?

    No! A consumer will still be able to style the button later and as well as override your template.


    OK, I need to go eat.  This at least answers the questions I had; will try to come back to this at a different time.

  • Jaime Rodriguez

    Silverlight instantiation on 2.0


    On my Deep Zoom post, I recommended that for Deep Zoom applications you instantiate the control using Silverlight.js to avoid Click To Activate.  

    Someone picked up on this and asked why the new default uses OBJECT. Here is what I know (most of it comes from past emails with Piotr Puszkiewicz).

    Using the OBJECT tag has a few advantages: 

    1. Customizing the experience is much easier.  All you do is put your install experience inside the object tag.  For example, the default puts an anchor with the GetSilverlight image:

       <object data="data:application/x-silverlight," …> 
         <!-- This is the install experience -->
         <a href="http://go.microsoft.com/fwlink/?LinkID=108182" style="text-decoration: none;">
           <img src="http://go.microsoft.com/fwlink/?LinkId=108181"
             alt="Get Microsoft Silverlight" style="border-style: none"/>
         </a>
         <!-- end of install -->
       </object>
    2. No JavaScript dependency. If your page is hosted somewhere that does not allow you to bring in JavaScript but does not block the object tag, you can now instantiate with no JavaScript. 
    3. One less file to download, which is a huge deal for high-traffic portals and hosters.
    4. It is much easier to create the 'embed' or send-to-a-friend feature if you are sending an object tag.  This one (and #2) are coupled with the fact that XAPs can be loaded across domains.

    So why did we default to <OBJECT /> ??
    Well, because Click To Activate is going away.  The IE team announced it, and given they said it was coming in April, we are within 28 days of it happening.. [Should we start a pool?]

    For those of us creating samples today, we obviously have a choice for a few more weeks; I will remain partial to silverlight.js. If you want to do the same, here is how I get my instantiation:

    1. I created a dummy Silverlight 1.0 solution using Blend 2.5 March CTP.
      This creates a good default.html that I can reuse.
    2. I then copied the silverlight.js from the "\program files\microsoft sdks\silverlight\v2.0\tools\" directory to my new solution.
      [I did it mostly because I noticed that was the one updated last, and I know the install team put it there.]
    3. Now you edit the default.html to:
      1. Comment out the <script> include for page.xaml.js
        <!-- <script type="text/javascript" src="Page.xaml.js"></script> -->
      2. Comment out the var scene = new Page() declaration.  
        //var scene = new SLOn.Page();
      3. Change the source parameter in the createSilverlight call to point to your XAP instead of the XAML file
                        source: "ClientBin/hello.xap",
      4. Comment out the onLoad handler, which refers to the scene we removed in #2.
        //  onLoad: Silverlight.createDelegate(scene, scene.handleLoad),
    4. An optional step that I usually do is change the control host's style so it takes 100%:

      #silverlightControlHost {
          height: 100%;
          width: 100%;
      }

    5. Another optional step is adding styles for html and body so there is no padding:

      html, body {
          height: 100%;
          overflow: auto;
      }
      body {
          padding: 0;
          margin: 0;
      }
    6. If you were doing this for production you would probably also want to tweak the onError handler in createSilverlight..
    7. After that, I copy my default.html and silverlight.js to the web directory where my solution is at, and browse/navigate to it.

    You can get my sample default.html and silverlight.js from here.   

    Disclaimer: this approach is good for developer samples, for a production app please follow the Silverlight Installation Experience Guide

  • Jaime Rodriguez

    Open source charting library for Silverlight 2..


    Visifire has a free, open-source library for 2D and “3D looking” charts built using Silverlight 2..  Very cool!!

    [images]

    For me, this is obviously handy when creating demos and showing Silverlight capabilities; for commercial use they also have a commercial license..   

    I am emailing them to see if they would be interested in working with us to make their charts styleable in Blend; in the mean time, enjoy the charts and play with them..

  • Jaime Rodriguez

    A deepzoom primer ( explained and coded)..


    I had to learn Deep Zoom recently, and along the way I put together some handy notes.. Below are the notes organized in a near step-by-step explanation format.  This is a very long post, but I hope it has useful insights for anyone wanting to do Deep Zoom, so I recommend you read it all.  If you must skip, the outline will help you.  IMO, parts 3 and 5 are the good stuff. 

    Part 1 – The history and brief explanation on how DeepZoom works.

    Part 2 – Constructing a DeepZoom image using Image Composer

    Part 3 – Introduction to the DeepZoom object model – goes way beyond the docs I hope

    Part 4 – Coding a deepZoom ‘host’ user control with Pan & Zoom

    Part 4.1 – Adding a few extra features to our User Control

    Part 5 – Lessons learned on the code, documenting the gotchas  **must read even if you know Deep Zoom already

    Part 6 – Give me the code, just a zip file w/ the goodies 

    Part 7 – Show me the outcome; what did we build?

    Part 1 – The History & Math behind DeepZoom

    A lot of people equate DeepZoom to SeaDragon (they assume SeaDragon was the code name and DeepZoom is the marketing name). This assumption is not quite right (unless you equate your engine to your car model).  Seadragon is an incubation project resulting from the acquisition of Seadragon Software; the Seadragon team is part of the Live organization and is working on several projects (like Photosynth). DeepZoom is an implementation that exposes some of the SeaDragon technology to Silverlight.

    DeepZoom provides the ability to smoothly present and navigate large amounts of visual information (images) regardless of the size of the data, while optimizing the bandwidth available to download it.  

    How does DeepZoom work?

    DeepZoom accomplishes its goal by partitioning an image (or a composition of images) into tiles.  While tiling the image, the composer also creates a pyramid of lower resolution tiles for the original composition. 

    The image to the right shows such a pyramid: the original image is at the bottom of the pyramid; notice how it is tiled into smaller images, and notice the pyramid also contains lower resolution versions of the image (also tiled).   A few of the docs I read said the tiles are 256x256, but from peeking through the files generated by the composer I am not convinced; I do know from reading through the internal stuff that there is some heavy math involved here, so I trust they pick the right tile size :).

    All of this tiling is performed at design-time and gets accomplished using the DeepZoom composer.

    At run-time, a MultiScaleImage downloads a lower resolution tile of the image first and downloads the other tiles on demand (as you pan or zoom); DeepZoom makes sure the transitions from lower to higher resolution images are smooth and seamless. 


    Given all this, how is DeepZoom different from, say, a ScaleTransform (for zoom) on a high resolution image?

    With a ScaleTransform, you usually download the whole high-res image at once; this delays how quickly the end user sees the image when the page or application loads.  Sometimes people apply a trick where they use images at several resolutions, but since the images are not tiled you will likely end up downloading several big images (consuming more network bandwidth), or the download time will remain high if the initial image is not small enough; also, the transitions from low to higher resolution will be more noticeable unless you write the transitions yourself.

    DeepZoom and its tiling make it possible to see bits sooner and to optimize for bandwidth.  In the worst case, where someone looked at every single tile at the highest resolution, DeepZoom would have an extra overhead of 33% compared to downloading the single highest resolution image at once; but this worst case is almost never hit, and most of the time DeepZoom saves you from downloading too much.   
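    The 33% figure falls out of the pyramid geometry: each level above the full-resolution one has half the width and height, so a quarter of the pixels, and the series 1 + 1/4 + 1/16 + ... converges to 4/3. A quick sketch of that arithmetic (plain Python, just my back-of-the-envelope check of the claim above):

    ```python
    # Each pyramid level holds 1/4 of the pixels of the level below it, so the
    # whole pyramid is a geometric series: 1 + 1/4 + 1/16 + ... -> 4/3.
    def pyramid_overhead(levels):
        """Extra download cost of the tile pyramid, relative to the full-res image."""
        total = sum((1 / 4) ** k for k in range(levels))
        return total - 1.0  # subtract the full-resolution image itself

    print(round(pyramid_overhead(20), 4))  # 0.3333 -- i.e. ~33% overhead
    ```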

    Another feature in DeepZoom is its ability to create ‘collections’ from the composite image.  This lets you compose a scene (a group of images) and optimize it for speed and download while still maintaining the ‘autonomy’ and identity of each image; you can programmatically manipulate (or position) these images from within the DeepZoom collection (more on collections in part 4).

    Part 2 – Constructing a DeepZoom Image using DeepZoom composer 

    1. We begin by downloading the "DeepZoom composer" from Microsoft Downloads.
      If you want a great reference for the tool, try the DeepZoom Composer guide. In the steps below, I am going to keep it to the minimum steps needed and some of the gotchas when using the tool.
    2. After installing the DeepZoom Composer, we launch it. 
      Trivia facts: the Composer is a WPF application, like most of the other Expression products. Also, its codename was Mermaid (you can see this in the export window).
    3. Under the File Menu, select "New Project"
        1. Select a location to store the project.
          I recommend something with a short path, like c:\users\jaimer\. The composer has some usability issues that make working with long paths a little hard, and the composer will append to your path later when you export.
        2. I called it "EasterEggHunt" as that is what my project will be.
    4. Now click "Add Image" to import a few images.
      You can import multiple images at once.  In my case, I am importing 3 images: bummysmall.jpg, eggs.jpg, and world2.jpg. These are in the inputimages directory if you are following along with the source.
      This adds all the images we are going to use in the composer.  All images must be added at design-time.
    5. Click "Compose"  on the Center toolbar to compose the DeepZoom image.
    6. Double click the world image to add it to the 'stage' or design surface.
    7. Click Fit To Screen to maximize our space.
    8. Click on the eggs image  to add it to the stage.
    9. Zoom many times into the image at a place where you want to drop some easter eggs.
      1. Hint: the usual shortcuts Ctrl+ and Ctrl- work for zooming. Unfortunately Pan (h) and Select (v) don't.
    10. Shrink the easter eggs into a small size -- don't worry, with DeepZoom we will be able to Zoom a lot at run-time to find them and see them.
    11. Drop the easter eggs where you want to. Here is an example of mine; I dropped them in Mexico. Notice I am quite zoomed into the map and the eggs are small.


    12. Repeat the steps in 11 for the bunny picture.  In my case, I did it in the Seattle area.
      Note: unfortunately I could not figure out how to drag the same image twice onto the stage area.  The workaround I used was to make a copy of the image with a different name and add it to the image gallery (step 4).
    13. Press Ctrl+0 to see our DeepZoom image without the zooms.  You sized it right if you can't easily see the eggs and bunny on the map.
    14. Click "Export" in the main toolbar. 
    15. Here we enter the settings for output.
    16. Leave "generate collection" unchecked for now.
      What Generate Collection does is export the DeepZoom image with metadata so that at run-time the images can be accessed via the MultiScaleImage.SubImages property.   If you can get to these images, you can move them around the composed image (for layout) and you can also tweak their Opacity.
      The reason I am leaving it unchecked is because there seems to be a bug (at least on my machine) where, if I click Generate Collections, my images at run-time show at an offset from where they are supposed to be.   I have reported it to the DZC team and they are investigating.
    17. Enter a Name  ( "Easter" on the export Dialog).
    18. I leave the Output path untouched.
      This is where having entered a short path earlier pays off, because the path dialog does not wrap and is fairly small. [Kirupa already said this is improving for the next version.] If you opt to change the path, be attentive when you export again; it seems to reset to its default value.

    19. Now, assuming your output looks similar to mine above (Create Collection unchecked), click Export; we are done when it says Export Completed.
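    For the curious, the bookkeeping the export performs can be approximated in a few lines. This is a rough sketch of a DeepZoom-style pyramid, not the composer's actual output format, and it assumes the 256x256 tile size some docs mention (which, as noted in part 1, I have not verified):

    ```python
    import math

    def pyramid(width, height, tile=256):
        """Rough sketch of a DeepZoom-style tile pyramid (tile size is an assumption)."""
        # number of levels: enough halvings to get the largest dimension down to 1px
        levels = (max(width, height) - 1).bit_length() + 1
        out = []
        for level in range(levels):
            scale = 2 ** (levels - 1 - level)   # the top level is full resolution
            w = max(1, math.ceil(width / scale))
            h = max(1, math.ceil(height / scale))
            out.append((level, w, h, math.ceil(w / tile) * math.ceil(h / tile)))
        return out

    # A 1024x1024 source yields 11 levels; the full-res level is cut into 16 tiles:
    print(pyramid(1024, 1024)[-1])  # (10, 1024, 1024, 16)
    ```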

    Part 3 - DeepZoom Object Model

    Once you have a DeepZoom image, you will need an instance of the MultiScaleImage class in your Silverlight application to load it.  Instantiating the MultiScaleImage class can be done from XAML:

    <MultiScaleImage x:Name="DeepZoom" Source="easter/info.bin" />

    or from code:

    MultiScaleImage DeepZoom = new MultiScaleImage();

    DeepZoom.Source = new Uri("easter/info.bin", UriKind.Relative);

    Before  going through the DeepZoom API it makes sense to understand the terminology used:

    • Logical Coordinates – normalized values (0 to 1) representing a coordinate within the image itself (not the control).
    • Element Coordinates – the actual control coordinates. For example, in a MultiScaleImage of Width=800, Height=400, when the mouse is at the center the element coordinates are 400,200.  These coordinates are not normalized.

    Now, we navigate through the interesting properties and methods in MultiScaleImage

    • Source – refers to the Image source; usually info.bin when not using collections or items.bin  if using collections. 
    • SubImages – when using collections, this is a reference to all the images in a composed DeepZoom Image.
    • ViewportWidth – Specifies the width of the parts of the image to be displayed. The value is in Logical coordinates.
      For example: 
      ViewportWidth=2 means the image is zoomed out and only takes half the space available. 
      To zoom in, a ViewportWidth < 1 is required.  A ViewportWidth of 0.5 is a 200% zoom.
    • ViewportOrigin – the top-left corner of the part of the image to be displayed, returned in logical coordinates.  For example, imagine I am panning by 10% each time and I pan twice to the right with no zoom (ViewportWidth = 1); my ViewportOrigin.X will be 0.2.
    • UseSprings – gets or sets whether DeepZoom animates the transitions (like ZoomAboutLogicalPoint, updates to ViewportOrigin, etc.).
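    A tiny sketch of the ViewportWidth/ViewportOrigin arithmetic above (plain Python, only to restate the math; this is not the Silverlight API):

    ```python
    # ViewportWidth is the fraction of the image's width visible in the control,
    # so the effective zoom factor is its reciprocal.
    def zoom_from_viewport(viewport_width):
        return 1.0 / viewport_width

    print(zoom_from_viewport(1.0))  # 1.0 -> whole image visible, no zoom
    print(zoom_from_viewport(0.5))  # 2.0 -> 200% zoom
    print(zoom_from_viewport(2.0))  # 0.5 -> zoomed out; image takes half the space

    # ViewportOrigin pans in the same logical units: panning right by 10%
    # twice (with no zoom) leaves the origin at X = 0.2, as in the example above.
    origin_x = 0.0
    for _ in range(2):
        origin_x += 0.1
    print(origin_x)  # 0.2
    ```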

    The interesting methods are:

    • ElementToLogicalPoint – takes a coordinate in the control and gives you a logical (normalized) coordinate.
      For example, with the mouse at the center of the control, no pan, and ViewportWidth=1, ElementToLogicalPoint() returns (0.5, 0.5).
    • LogicalToElementPoint – takes a logical coordinate (normalized) and returns a point in the MultiScaleImage control where that logical point corresponds to.
    • ZoomAboutLogicalPoint – implements the zoom.  The two parameters are the new zoom multiplier (an increment relative to the current zoom factor of the image) and the logical point to zoom around. 
      An example of the incremental zoom: call ZoomAboutLogicalPoint(1.5, 0.5, 0.5) and I am zoomed in 1.5 times; if I repeat this operation with the same values I am zoomed in at 1.5 * 1.5, which is 2.25 times the size where I started.

    In my opinion, surprisingly missing from the API were:

    • The original width and height of the DeepZoom image (so that I can translate normalized logical coords to physical coords on the image).
    • Zoom – to tell you the current total zoom level. You can work around this by keeping track of any zooms you apply. Another possible workaround: Zoom appears to be 1/ViewportWidth; I can’t think of a scenario where this does not hold, but if there is one, please let me know, and just keep track of your zooms in that case.
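    Since the control exposes the conversions but not their formulas, here is my reconstruction of the math for one axis, including the Zoom = 1/ViewportWidth workaround; this is plain Python illustrating the behavior described above, not the actual Silverlight implementation:

    ```python
    # The viewport (origin + width, in logical 0..1 units) selects which slice of
    # the image the control shows; these convert between pixel and logical coords.
    def element_to_logical(x, control_width, origin_x, viewport_width):
        return origin_x + (x / control_width) * viewport_width

    def logical_to_element(lx, control_width, origin_x, viewport_width):
        return (lx - origin_x) / viewport_width * control_width

    # Mouse at the center of an 800px-wide control, no pan, ViewportWidth = 1:
    print(element_to_logical(400, 800, 0.0, 1.0))  # 0.5
    print(logical_to_element(0.5, 800, 0.0, 1.0))  # 400.0

    # ZoomAboutLogicalPoint takes a RELATIVE factor, so repeated calls multiply;
    # this is why you must track total zoom yourself (or derive it as 1/ViewportWidth).
    total_zoom = 1.0
    for _ in range(2):
        total_zoom *= 1.5
    print(total_zoom)  # 2.25 -- matches the 1.5 * 1.5 example above
    ```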

    Part 4 – Coding a  DeepZoom Host User Control


    The goal here is to code a sample reusable control just to illustrate the points; along the way we will of course implement enough features for our Easter Egg Hunt.  [Update: sorry about the belatedness; I started this on 3/22 but had a trip that prevented me from playing around, so I am late for Easter.]

    1. Inside Visual Studio 2008, create a new Silverlight Application; I called it DeepZoomSample.
    2. Build the application so the Clientbin directory is created.
    3. Copy the output from the DeepZoom Composer to the Clientbin directory of our Silverlight application.
      In my case, I called the output “Easter” so I can go into the output directory from composer and just copy that whole directory to my Silverlight Application’s ClientBin.
    4. Now that we have our image, we can edit the XAML in Page.Xaml, to show the image.
      <UserControl x:Class="DeepZoomSample.Page"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Grid x:Name="LayoutRoot" Background="White">
              <MultiScaleImage x:Name="DeepZoom" Source="easter/info.bin" />
          </Grid>
      </UserControl>
      If you run the application now, you will see the image load, but there is no functionality: zoom and pan have not been implemented. 
      For zoom we need the mouse wheel, but Silverlight has no native support for it. A good workaround is to use Peter Blois’ MouseWheelHelper. This class uses the HTML Bridge to listen to the browser’s mouse wheel event and exposes it to managed code.
    5. Add a new code file to your project, I called it MouseWheelHelper.
    6. Copy Peter’s code below into the MouseWheelHelper file.

      using System;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Documents;
      using System.Windows.Ink;
      using System.Windows.Input;
      using System.Windows.Media;
      using System.Windows.Media.Animation;
      using System.Windows.Shapes;
      using System.Windows.Browser;

      namespace DeepZoomSample
      {
          // this code came from Peter Blois, http://www.blois.us/blog
          // Code ported by Peter Blois from the Javascript version at http://adomas.org/javascript-mouse-wheel/
          public class MouseWheelEventArgs : EventArgs
          {
              private double delta;
              private bool handled = false;

              public MouseWheelEventArgs(double delta)
              {
                  this.delta = delta;
              }

              public double Delta
              {
                  get { return this.delta; }
              }

              // Use Handled to prevent the default browser behavior!
              public bool Handled
              {
                  get { return this.handled; }
                  set { this.handled = value; }
              }
          }

          public class MouseWheelHelper
          {
              public event EventHandler<MouseWheelEventArgs> Moved;
              private static Worker worker;
              private bool isMouseOver = false;

              public MouseWheelHelper(FrameworkElement element)
              {
                  if (MouseWheelHelper.worker == null)
                      MouseWheelHelper.worker = new Worker();

                  MouseWheelHelper.worker.Moved += this.HandleMouseWheel;

                  element.MouseEnter += this.HandleMouseEnter;
                  element.MouseLeave += this.HandleMouseLeave;
                  element.MouseMove += this.HandleMouseMove;
              }

              private void HandleMouseWheel(object sender, MouseWheelEventArgs args)
              {
                  if (this.isMouseOver)
                      this.Moved(this, args);
              }

              private void HandleMouseEnter(object sender, EventArgs e)
              {
                  this.isMouseOver = true;
              }

              private void HandleMouseLeave(object sender, EventArgs e)
              {
                  this.isMouseOver = false;
              }

              private void HandleMouseMove(object sender, EventArgs e)
              {
                  this.isMouseOver = true;
              }

              private class Worker
              {
                  public event EventHandler<MouseWheelEventArgs> Moved;

                  public Worker()
                  {
                      if (HtmlPage.IsEnabled)
                      {
                          HtmlPage.Window.AttachEvent("DOMMouseScroll", this.HandleMouseWheel);
                          HtmlPage.Window.AttachEvent("onmousewheel", this.HandleMouseWheel);
                          HtmlPage.Document.AttachEvent("onmousewheel", this.HandleMouseWheel);
                      }
                  }

                  private void HandleMouseWheel(object sender, HtmlEventArgs args)
                  {
                      double delta = 0;

                      ScriptObject eventObj = args.EventObject;

                      if (eventObj.GetProperty("wheelDelta") != null)
                      {
                          delta = ((double)eventObj.GetProperty("wheelDelta")) / 120;

                          if (HtmlPage.Window.GetProperty("opera") != null)
                              delta = -delta;
                      }
                      else if (eventObj.GetProperty("detail") != null)
                      {
                          delta = -((double)eventObj.GetProperty("detail")) / 3;

                          if (HtmlPage.BrowserInformation.UserAgent.IndexOf("Macintosh") != -1)
                              delta = delta * 3;
                      }

                      if (delta != 0 && this.Moved != null)
                      {
                          MouseWheelEventArgs wheelArgs = new MouseWheelEventArgs(delta);
                          this.Moved(this, wheelArgs);

                          // honor Handled by suppressing the default browser scroll
                          if (wheelArgs.Handled)
                              args.PreventDefault();
                      }
                  }
              }
          }
      }
      MouseWheelHelper fires a Moved event whenever the wheel moves. The EventArgs is a MouseWheelEventArgs, which has a Delta property. Delta is normalized (one wheel notch maps to ±1); for now all we look at is its sign.
      If Delta is greater than 0, the wheel has rotated away from the user; if Delta is negative, the wheel has rotated toward the user.

    7. Before we handle the Moved event, let’s add a ZoomFactor property to our control; this will be the increment/decrement on a wheel operation. The default value is 1.3, which is a 30% increment.  Nothing scientific behind this number; I am pretty much just ‘copying’ what I see every other sample do. I think the number works OK.
       protected double _defaultZoom = 1.3;
       public double DefaultZoomFactor
       {
           get { return _defaultZoom; }
           set { _defaultZoom = value; }
       }
    8. We also add a CurrentTotalZoom property; this will be a cached version of the overall zoom level (since we can’t query this from the MultiScaleImage API).  I also added MaxZoomIn and MaxZoomOut to prevent the image from going too far in (is there such a thing?) or too far out.  Too far out does matter, as the image can disappear if you go too far.  In my case I picked my maximum values arbitrarily.

      private double _currentTotalZoom = 1.0;

      public double CurrentTotalZoom
      {
          get { return _currentTotalZoom; }
          set { _currentTotalZoom = value; }
      }

      private double _maxZoomIn = 5000;
      protected double MaxZoomIn
      {
          get { return _maxZoomIn; }
          set { _maxZoomIn = value; }
      }

      private double _maxZoomOut = 0.001;
      protected double MaxZoomOut
      {
          get { return _maxZoomOut; }
          set { _maxZoomOut = value; }
      }

    9. Now we can add a DoZoom function to our class; this will be called whenever there is a zoom operation.   Its parameters are the new zoom level RELATIVE to where the image is at, and a point in element coordinates (most likely we will be zooming around the mouse, and element coordinates are what mouse events give us).

      /// <summary>
      /// Performs a Zoom operation relative to where the image is at.
      /// Example: calling DoZoom twice with a zoom of 1.25 leads to an image zoomed at
      /// 1.25 after the first call and 1.25 * 1.25 (~1.56) after the second.
      /// </summary>
      /// <param name="relativeZoom">new zoom level; this is a RELATIVE value, not absolute.</param>
      /// <param name="elementPoint">point to zoom around, in element coordinates</param>
      void DoZoom(double relativeZoom, Point elementPoint)
      {
          // ignore the request if it would take us past our zoom limits
          if (_currentTotalZoom * relativeZoom < MaxZoomOut ||
              _currentTotalZoom * relativeZoom > MaxZoomIn)
              return;

          Point p = DeepZoom.ElementToLogicalPoint(elementPoint);
          DeepZoom.ZoomAboutLogicalPoint(relativeZoom, p.X, p.Y);
          this.Zoom = relativeZoom;
          _currentTotalZoom *= relativeZoom;
      }
    10. Now we are ready to handle the MouseWheelHelper.Moved event.   We will do it in three parts: 
      1. We will subscribe to the MouseMove event on the MultiScaleImage so we can keep track of where the mouse is; we need this because MouseWheelHelper.Moved does not give us a mouse position, and there is no way to query the mouse position in Silverlight 2 outside of a mouse event handler.

        // inside the Loaded event for the user control
        DeepZoom.MouseMove += new MouseEventHandler(DeepZoom_MouseMove);
        _lastMousePosition = new Point(DeepZoom.ActualWidth / 2, DeepZoom.ActualHeight / 2);

        protected Point _lastMousePosition;

        void DeepZoom_MouseMove(object sender, MouseEventArgs e)
        {
            _lastMousePosition = e.GetPosition(DeepZoom);
        }
      2. Now we instantiate a MouseWheelHelper and subscribe to its Moved event:

        // inside the Loaded event for the UserControl
        MouseWheelHelper mousewheelhelper = new MouseWheelHelper(this);
        mousewheelhelper.Moved += new EventHandler<MouseWheelEventArgs>(OnMouseWheelMoved);
      3. We add the OnMouseWheelMoved function to the UserControl class:
        void OnMouseWheelMoved(object sender, MouseWheelEventArgs e)
        {
           // e.Delta > 0 == wheel moved away from the user, so zoom in
           if (e.Delta > 0)
              DoZoom(DefaultZoomFactor, _lastMousePosition);
           else
              // zoom out
              DoZoom(1 / DefaultZoomFactor, _lastMousePosition);
        }

      4. NOTE: If you compare the source above with the code in the sample source, they are slightly different.
        In the sample source there are two approaches to handling zoom, controlled by a boolean flag called _useRelatives. If you set _useRelatives to true, it will zoom in relation to the last zoom; I think this makes it more complicated, but for some reason most DeepZoom samples I have seen use this calculation.  I think the behavior is the same as the approach I took, but the math is simpler with the approach in the steps above.   I left both in, in case I later find a scenario addressed by the _useRelatives approach.

    11. At this point we should be able to run the application and get zoom to work (in and out) around the mouse location.  Compile the app and run it to make sure we are making progress.
    12. To pan, we need to detect MouseLeftButtonDown and MouseLeftButtonUp; the idea is that we pan while the mouse is down, in the direction of the mouse movement, and stop panning when the mouse comes up.
      1. Let’s add a handler for MouseLeftButtonDown; we add the listener in the UserControl’s Loaded event.  This handler sets a variable called _isDragging to flag that the mouse is down; we will use this flag in the MouseMove handler.

        // inside the Loaded function, we add code behind our MouseWheelHelper code added earlier..
        DeepZoom.MouseLeftButtonDown += new MouseButtonEventHandler(DeepZoom_MouseLeftButtonDown);
      2. The handler looks like this:
        protected bool _isDragging = false;
        protected Point _lastDragViewportOrigin;

        void DeepZoom_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
        {
            this._lastDragViewportOrigin = DeepZoom.ViewportOrigin;
            this._lastMousePosition = e.GetPosition(DeepZoom);
            this._isDragging = true;
        }
      3. Now we subscribe to MouseLeftButtonUp inside the Loaded function, and add the handler function for it.

        //  The one liner below goes in the Page_Loaded event handler
        DeepZoom.MouseLeftButtonUp += new MouseButtonEventHandler(DeepZoom_MouseLeftButtonUp);

        void DeepZoom_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
        {
            this._isDragging = false;
        }
      4. Now we tweak the code inside MouseMove to change the ViewportOrigin and perform the pan operation.
        void DeepZoom_MouseMove(object sender, MouseEventArgs e)
        {
            if (_isDragging)
            {
                Point newViewport = _lastDragViewportOrigin;
                Point currentMousePosition = e.GetPosition(DeepZoom);
                newViewport.X += (_lastMousePosition.X - currentMousePosition.X)
                    / this.DeepZoom.ActualWidth * this.DeepZoom.ViewportWidth;
                newViewport.Y += (_lastMousePosition.Y - currentMousePosition.Y)
                    / this.DeepZoom.ActualWidth * this.DeepZoom.ViewportWidth;
                this.DeepZoom.ViewportOrigin = newViewport;
                _lastDragViewportOrigin = newViewport;
            }
            // NOTE: it is important this be after the _isDragging check,
            // since this updates the last position, which is used to compare for dragging.
            _lastMousePosition = e.GetPosition(DeepZoom);
        }
      5. We should also detect the MouseLeave event; if we are in the middle of a pan, we need to reset the _isDragging flag.

        // inside the UserControl Loaded handler
        DeepZoom.MouseLeave += new MouseEventHandler(DeepZoom_MouseLeave);

        void DeepZoom_MouseLeave(object sender, MouseEventArgs e)
        {
            this._isDragging = false;
            this.DeepZoom.Cursor = Cursors.Arrow;
        }

    13. That is it for the basics and the ‘hard stuff’. With not too many lines of code we have zoom & pan in our host, and along the way we added a few properties we can reuse to create UI around our DeepZoom image.
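    To recap the drag-to-pan math from step 12 in one place, here is a sketch of the conversion in plain Python (my own restatement of the C# above, with a hypothetical pan_from_drag helper); note that both axes divide by the control's width, because logical coordinates are normalized against the image width:

    ```python
    def pan_from_drag(origin, viewport_width, last_mouse, cur_mouse, control_width):
        """Convert a mouse drag (pixels) into a new viewport origin (logical units)."""
        dx = (last_mouse[0] - cur_mouse[0]) / control_width * viewport_width
        dy = (last_mouse[1] - cur_mouse[1]) / control_width * viewport_width
        return (origin[0] + dx, origin[1] + dy)

    # Dragging 80px to the right on an 800px-wide control at 100% zoom moves the
    # viewport origin 10% to the left, so the image follows the mouse:
    print(pan_from_drag((0.0, 0.0), 1.0, (400, 200), (480, 200), 800))  # (-0.1, 0.0)
    ```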

    Part 4.1 Adding more UI to navigate in a DeepZoom Control.

    In the last sections I took it slow and walked through the code to explain what we were working on.  Below I will pick up the pace a bit, and tweak the original code into a host control with a bit more navigation, plus some troubleshooting advice. 

    We begin by adding a navigation wheel to the UserControl.xaml.  The wheel has four repeat buttons with arrows pointing east, west, north, and south; these buttons will be used to pan in the respective direction.

    At the center of the wheel there is a regular button, which takes you home (to where there is no zoom, no panning, etc.).

    1. Implementing panning is done by changing the ViewportOrigin. We have a choice of panning relative to the control size or relative to the image size. Let me explain:

      If the control’s ViewportWidth is 1.0 and we pan by a logical increment of 0.1, we are panning 10 percent in a direction. This percentage seems reasonable.
      If however we are zoomed in 500% (ViewportWidth = 0.2) and we do a pan of 0.1 (logical), then we are going to pan by a lot (50% of what is visible).  So we need to scale our original 0.1 increment by the ViewportWidth.  Don’t you think?

      Here is what I did:
      1. Added a Property of type double called  PanPercent.   This property holds the increment. You can set it from XAML; default is 0.1  ( aka 10% )
      2. Added a property of type bool called UseViewportScaleOnPan.  If this is true, we will pan by  PanPercent * ViewportWidth; if this is false we pan by PanPercent.
    2. Now we are ready for Panning. We add event handlers for all our Pan RepeatButtons:
      this.PanRight.Click += new RoutedEventHandler(PanRight_Click);
      this.PanLeft.Click += new RoutedEventHandler(PanLeft_Click);
      this.Home.Click += new RoutedEventHandler(Home_Click);
      this.PanBottom.Click += new RoutedEventHandler(PanBottom_Click);
      this.PanTop.Click += new RoutedEventHandler(PanTop_Click);

    3. Each of the event handlers calls the Pan function with their respective direction:
      void Pan(PanDirection direction)
      {
          double percent = PanPercent;
          if (UseViewportScaleOnPan)
              percent *= this.DeepZoom.ViewportWidth;

          switch (direction)
          {
              case PanDirection.East:
                  this.DeepZoom.ViewportOrigin =
                      new Point(this.DeepZoom.ViewportOrigin.X - Math.Min(percent, this.DeepZoom.ViewportOrigin.X),
                          this.DeepZoom.ViewportOrigin.Y);
                  break;
              case PanDirection.West:
                  this.DeepZoom.ViewportOrigin =
                      new Point(this.DeepZoom.ViewportOrigin.X + Math.Min(percent, (1.0 - this.DeepZoom.ViewportOrigin.X)),
                          this.DeepZoom.ViewportOrigin.Y);
                  break;
              case PanDirection.South:
                  this.DeepZoom.ViewportOrigin =
                      new Point(this.DeepZoom.ViewportOrigin.X,
                          this.DeepZoom.ViewportOrigin.Y + Math.Min(percent, 1.0 - this.DeepZoom.ViewportOrigin.Y));
                  break;
              case PanDirection.North:
                  this.DeepZoom.ViewportOrigin =
                      new Point(this.DeepZoom.ViewportOrigin.X,
                          this.DeepZoom.ViewportOrigin.Y - Math.Min(percent, this.DeepZoom.ViewportOrigin.Y));
                  break;
          }
      }

    4. Panning to Home is a combination of setting the ViewportOrigin to 0,0 and setting the ViewportWidth to 1

      void Home_Click(object sender, RoutedEventArgs e)
      {
          this.DeepZoom.ViewportOrigin = new Point(0, 0);
          this.DeepZoom.ViewportWidth = 1;
      }

    5. Next is implementing Zoom In and Zoom Out; these are also trivial. The only decision to make is where to zoom; I needed two approaches:
      When zooming with the keyboard, Ctrl+ and Ctrl- (on Windows), I wanted to zoom at the mouse position. 
      When zooming with the magnifying glass icons I added to the UI, I cannot use the mouse position (the mouse is over the magnifying glass), so I zoom around the center of the control.

      Let’s begin with simple ZoomIn using Click from magnifying glass:

      void btnZoomIn_Click(object sender, RoutedEventArgs e)
      {
          ZoomIn(new Point(DeepZoom.ActualWidth / 2, DeepZoom.ActualHeight / 2));
      }

      /// <summary>
      /// Zooms in around an ELEMENT coordinate.
      /// Technically I did not need this abstraction and could have called DoZoom directly.
      /// </summary>
      /// <param name="p"></param>
      void ZoomIn(Point p)
      {
          if (_useRelatives)
              DoRelativeZoom(Zoom / DefaultZoomFactor, p, ZoomDirection.In);
          else
              DoZoom(DefaultZoomFactor, p);
      }
    6. When zooming from the keyboard, the “gotcha” was that the MultiScaleImage is a FrameworkElement and does not receive focus (in Silverlight, focus lives at the Control class), so what I did was listen for the keyboard event in the Grid that is the top container in my host UserControl. 
      this.LayoutRoot.KeyUp += new KeyEventHandler(DeepZoom_KeyUp);
    7. Another surprise was that + (by pressing Shift+9) on my machine turned out to come through as a keycode, not a standard key.  I am not a keyboard expert in Silverlight (yet), but from what I read keycodes can differ across platforms, so I added code to check whether I am running on a Mac, using the Environment.OSVersion property.  Please double check this code, as I was pretty lazy about what running on Windows or Mac means. 

    8. public bool RunningOnWindows
       {
           get
           {
               return (Environment.OSVersion.Platform == PlatformID.Win32NT ||
                       Environment.OSVersion.Platform == PlatformID.Win32S);
           }
       }
    9. Now we do the zoom. Note that I am not too confident about my Mac keycodes; I got these by sniffing on my Mini, but I have a regular ergonomic Microsoft keyboard on that Mini, so double check with your keyboard just in case.
      void DeepZoom_KeyUp(object sender, KeyEventArgs e)
      {
          if (e.Handled == true)
              return;

          if ((RunningOnWindows && ((Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) &&
               (e.Key == Key.Add || (e.Key == Key.Unknown && e.PlatformKeyCode == 0xBB))) ||
              (RunningOnMac && ((Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) &&
               (e.Key == Key.Add || (e.Key == Key.Unknown && e.PlatformKeyCode == 0x18))))
          {
              ZoomIn(_lastMousePosition);
          }
          else if (
              (RunningOnWindows && ((Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) &&
               (e.Key == Key.Subtract || (e.Key == Key.Unknown && e.PlatformKeyCode == 0xBD))) ||
              (RunningOnMac && ((Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) &&
               (e.Key == Key.Subtract || (e.Key == Key.Unknown && e.PlatformKeyCode == 0x1B))))
          {
              ZoomOut(_lastMousePosition);
          }
          e.Handled = true;
      }
    10. At that point our app is functionally complete, but I want to share a few more findings from my learning, so let me share the DebugSpew; it can be handy for you too.
      1. When I wrote my first DeepZoom app, I took the usual approach of databinding to it so I could reverse engineer it, and found a few issues. Since I am traveling I have not discussed them with the DeepZoom folks in depth; for now take these as “gotchas in beta 1”, and I will try to get someone from the DeepZoom team to confirm whether these are final behaviors (feel free to leave feedback here or at the Expression blog letting them know your preferences).
        • Databinding to MultiScaleImage was a bit flaky. ViewportWidth and ViewportOrigin did not fire notifications for me. The explanation I have seen is that because DeepZoom animates with springs, binding to these properties is not recommended: the values change every frame during a transition. The recommended workaround was to subscribe to the MotionFinished event, which fires at the end of a transition and gives me a nice way to pull the values. In my case (for debugging/learning DeepZoom), the workaround was very acceptable, so I implemented it.

          // in my Loaded event for the page
          DeepZoom.MotionFinished += new RoutedEventHandler(DeepZoom_MotionFinished);

          void DeepZoom_MotionFinished(object sender, RoutedEventArgs e)
          {
              if (DebugSpew.DataContext != null)
              {
                  MultiScaleImageDummyDataWrapper dw = DebugSpew.DataContext as MultiScaleImageDummyDataWrapper;
                  if (dw != null)
                  {
                      PullData(ref dw, ref DeepZoom);
                      MouseLogicalPosition = DeepZoom.ElementToLogicalPoint(_lastMousePosition);
                  }
              }
          }

          void PullData(ref MultiScaleImageDummyDataWrapper data, ref MultiScaleImage msi)
          {
              data.ViewportWidth = msi.ViewportWidth;
              data.ViewportOrigin = msi.ViewportOrigin;
              data.AspectRatio = msi.AspectRatio;
              data.UseSprings = msi.UseSprings;
          }
        • Databinding to the other properties in MultiScaleImage (the ones that are not animated per frame) also gave me a bit of trouble; sometimes the control would not show up. My advice is to not databind for now.
        • Once I had the databinding worked out, I added a handler to pull data on MouseMove so I could show coordinates as the mouse moves. I wanted them in both logical and element coordinates, so I did the translation, and I added an extra call from MotionFinished to translate the point again, since the logical coordinate changes when the viewport changes.

    Part 5 – Lessons learned

    Overall I was quite impressed with DeepZoom. It is pretty cool stuff; I wish I had some cool pictures for a better application, but I did not try since I knew I could not top memorabilia.

    1. My personal advice: do not databind to DeepZoom for now. Pull from it on MotionFinished.

    2. No keyboard input goes into DeepZoom (since it is a FrameworkElement, not a Control). In order to get keyboard input you must have a Control that has focus; since keyboard events bubble, you can handle keyboard input at a higher level (e.g. LayoutRoot; just check that it has not been handled previously).

    3. On my real app (which can’t be shared, as it was a customer’s app) I ran into an issue when using Collections: my images were showing up in the wrong place. I reported it already and they are investigating. In the shower (where I do my best thinking) I came up with the theory that it is the resolution independence in WPF (the 96 to 72 DPI conversion); I have not confirmed it.

    4. I did not discuss collections in the post, so I will try it here. Collections are cool because they give you access to your SubImages so you can manipulate them: move them around, scale them, animate position and opacity. For now, beta1 has only one collection; I think it would be cool to have multiple collections so you can aggregate. This can kind of be simulated via logic, but it would be nice if it was in the control. If you simulate it, the advice I was given is not to do it by overlaying two MultiScaleImage controls one on top of the other; there are a few known issues with interactions on overlays (though, to be honest, I tried it and did not run into issues).
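    As a rough sketch of what manipulating SubImages looks like (the highlight logic and method name are mine, not from the sample; it assumes a MultiScaleImage named DeepZoom whose source is a collection):

```csharp
// Dim every sub-image except the selected one. MultiScaleSubImage exposes
// Opacity, ViewportOrigin and ViewportWidth, so each image in the collection
// can be faded, panned and scaled independently.
void HighlightSubImage(int index)
{
    for (int i = 0; i < DeepZoom.SubImages.Count; i++)
    {
        DeepZoom.SubImages[i].Opacity = (i == index) ? 1.0 : 0.3;
    }
}
```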

    5. UseSprings = true is pretty cool, but depending on how quickly you want to do your panning/zooming, turning it off can make your app appear more responsive. I would not turn UseSprings off for a consumer-facing app, but I would consider doing it for an internal app. For example, I am doing a heatmap with lots of data in it for analytical purposes; since it is drill-through, I am considering it.
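    Toggling this is just a property set (DeepZoom being my MultiScaleImage instance):

```csharp
// Springs animate pans/zooms with easing; turning them off applies the new
// viewport immediately, which feels snappier for data-heavy, drill-through UIs.
DeepZoom.UseSprings = false;
```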

    6. When panning, make sure you handle MouseLeave on your control.
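    A minimal sketch of what I mean (the _dragging flag and handler name are my own convention; the point is simply to abandon the pan gesture when the mouse leaves the control):

```csharp
bool _dragging = false;

void DeepZoom_MouseLeave(object sender, MouseEventArgs e)
{
    // If we keep the dragging state across a MouseLeave, we never see the
    // button-up, and the image pans on its own when the mouse re-enters.
    _dragging = false;
}
```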

    7. Handling the mouse wheel is not available out of the box, but Peter Blois has a great solution. Do not write the code to handle the wheel yourself; Peter’s code has worked great for me so far. Check his blog for updates too; he now has a nice abstraction over the same API.

    8. If you skipped section 3, check it out. Understanding the object model is critical and takes 5 mins.

    9. If you are writing a DeepZoom application, I recommend you use the old instantiation via silverlight.js. Click To Activate will eventually go away in IE, but in the meantime it is pretty annoying for an app that is so visual and so focused on mouse navigation.

    10. If subscribing to MultiScaleImage.ImageOpenSucceeded, make sure you do it in your constructor, right after InitializeComponent. I tried to do it from UserControl.Loaded, and when loading a page with the image already cached, that is too late.

    11. If possible, try to ‘hold’ any operations until ImageOpenSucceeded has fired (no pans or zooms before that). I saw weird results if I accessed properties on MultiScaleImage before this event; in particular, if you access the SubImages collection before ImageOpenSucceeded, you get an empty collection, and when ImageOpenSucceeded fires the collection is not refreshed. So the advice for collections is: don’t touch SubImages before ImageOpenSucceeded fires.
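    Putting points 10 and 11 together, here is a minimal sketch (the _imageOpened flag is my own convention for gating work until the event fires):

```csharp
public partial class Page : UserControl
{
    bool _imageOpened = false;

    public Page()
    {
        InitializeComponent();
        // Subscribe here, not in Loaded: with a cached image the event
        // can fire before Loaded runs and you would miss it.
        DeepZoom.ImageOpenSucceeded += new RoutedEventHandler(DeepZoom_ImageOpenSucceeded);
    }

    void DeepZoom_ImageOpenSucceeded(object sender, RoutedEventArgs e)
    {
        _imageOpened = true;  // SubImages (and pans/zooms) are now safe
    }

    void ZoomIn(Point point)
    {
        if (!_imageOpened)
            return;  // hold all operations until the image has opened
        // ... zoom logic ...
    }
}
```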

    Part 6 – Source

    It is at Skydrive.

    Part 7 – Show me the app.

    You can see it here; it is not visually impressive, but I think it shows a bit of what you can do with DeepZoom and, most importantly, it is functional code you can quickly refactor and reuse. If I missed a common DeepZoom task, let me know.
    I added two extra “easter eggs” beyond the bunny and the eggs mentioned in the walkthrough above.

    1. One is for my college’s NCAA basketball team, which won yesterday and made it to the Final Four. (Hint: the four finalists are Kansas, Memphis, UCLA, and North Carolina.)
    2. The other is a bitmap with dates & locations for my upcoming Europe trip (Germany, Austria). If you are in the area and available on the nights I am there, ping me and we can get together to discuss anything .NET client related (e.g. WPF, Silverlight, and ASPX).

    Part 8 – Thanks

    Thanks to Tim Aidlin, who chose the colors and gave me cooler icons for the map. I butchered them a bit when I turned them into controls, so don’t hold it against him; he is a gifted designer (you can see his real work on the MIX website and anything else MIX08-branded).
