May, 2007

  • Never doubt thy debugger

    Broken line in ASP.NET 2.0 TreeView in IE 7

    • 54 Comments

    Create a very simple page in ASP.NET 2.0, add a TreeView control and set ShowLines=true; now browse the page with Internet Explorer 7: you'll very likely see something like this:

    [Image: TreeView with broken lines in IE 7]

     

    In IE 6 this looks good...

    The point is that Internet Explorer 7 changed its box model: a box that is too small to accommodate its content no longer stretches as it does in all other browsers (including IE6); it tries to stay as small as possible. The problem in this case is that the DIV tags generated by the control are just 1 pixel high, which worked fine until now. Here is how the "View source" for the page above looks:

    [...]
    <table cellpadding="0" cellspacing="0" style="border-width:0;">
        <tr>
            <td>
                <div style="width:20px;height:1px">
                    <img src="/TreeViewSample/WebResource.axd?d=vGK_uMmdWLf5UZMMUhv9tSAl-YQg-jrsZ90xzYAI6TE1&amp;t=632985107273882115" alt="" />
                </div>
            </td>
            <td>
                <a id="TreeView1n1" href="javascript:TreeView_ToggleNode(TreeView1_Data,1,TreeView1n1,'l',TreeView1n1Nodes)">
                    <img src="/TreeViewSample/WebResource.axd?d=vGK_uMmdWLf5UZMMUhv9tfjlKbdZ_ojL4O8CY0ydKO_HFK9lO1t2cZ2AjaDIqJy_0&amp;t=632985107273882115" alt="Collapse New Node" style="border-width:0;" />
                </a>
            </td>
            <td style="white-space:nowrap;">
                <a class="TreeView1_0" href="javascript:__doPostBack('TreeView1','sNew Node\\New Node')" onclick="TreeView_SelectNode(TreeView1_Data, this,'TreeView1t1');" id="TreeView1t1">New Node</a>
            </td>
        </tr>
    [...]

    As you can see, the first DIV tag contains a style definition with "height:1px"; that's the problem.

    And now, here is how we can sort this out:

    • Create a new style definition in your page (or create an external .css file and link it in your pages, depending on your needs)
    • Add the following class definition: ".tree td div {height: 20px !important}" (of course without the quotation marks)
    • On your TreeView control, add the attribute CssClass="tree"

    Note that normally the inline style defined on the DIV takes precedence over the style defined at page level (or in an external .css file), but since in this case we need to override that setting, we can use the !important CSS directive; here is what the modified source looks like:

    [...]
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Untitled Page</title>
        <style>
            .tree td div {
                height: 20px !important
            }
        </style>
    </head>
    <body>
        <form id="form1" runat="server">
            <div>
                <asp:TreeView ID="TreeView1" runat="server" ShowLines="True" CssClass="tree">
                    <Nodes>
                        <asp:TreeNode Text="New Node" Value="New Node">
                            <asp:TreeNode Text="New Node" Value="New Node">
                                <asp:TreeNode Text="New Node" Value="New Node">
    [...]

    And the resulting page:

    [Image: TreeView rendering correctly after the fix]

     

    P.S.: thanks to my colleague Markus Rheker for this one!

     

    Carlo

  • Never doubt thy debugger

    The importance of breaking changes

    • 3 Comments

    Yesterday I closed a case about a migration issue from ASP.NET 1.1 to 2.0. The customer had built an application on ASP.NET 1.1 to generate PDF documents on the fly on the web server and stream the content to the client for reading; the application also served as a sort of archive browser, where online users are able to browse a list of archived PDF files. The customer had thought carefully about his error handling (I think he did a very good job), and he also decided to customize the standard 404 error page into something more friendly and informative for the application's users, so he configured the <customErrors> section in his web.config to point to a specific page that handles the 404 return codes. But he wanted this custom page to be displayed also when a user requested a non-existing PDF file, so he had to add a new mapping in the IIS console to have requests for .pdf files go through the ASP.NET execution channel (aspnet_isapi.dll etc...) and benefit from the advanced features (including security and error handling) granted by the .NET Framework.
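
    Just to give an idea, here is a minimal sketch of the kind of <customErrors> section the customer used; the redirect page names are hypothetical, not the customer's actual ones:

    <!-- sketch only: the redirect pages are hypothetical names -->
    <customErrors mode="On" defaultRedirect="GenericError.aspx">
        <error statusCode="404" redirect="FriendlyNotFound.aspx" />
    </customErrors>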

    [Image: the .pdf mapping in the IIS console]

    This worked pretty well for some time, until the customer decided to migrate his web applications to ASP.NET 2.0. He recompiled the application with Visual Studio 2005, updated the IIS mapping to reference the aspnet_isapi.dll which comes with Framework 2.0, and then at the first run of the upgraded site got this weird behavior: if he tried to browse a non-existing .aspx page he got his custom 404 page, but if he tried to browse a non-existing .pdf file IIS prompted him for authentication credentials; no matter which credentials he entered, after three failed attempts he was redirected to the 401 error page.

    They sent me a sample application, and reproducing the problem on my machine was quite straightforward. To be honest, in my experience this is not a common scenario (maybe you can correct me?) so I didn't remember exactly all the details about how to configure it, but I had some echoes of an ASP.NET training I attended a few years ago (which was part of my MCSD certification, before joining Microsoft) and I thought we somehow had to tell the runtime how to manage this new extension we wanted to add... My mind went to HttpHandlers, but I didn't find any new HttpHandler in the customer's web.config. So I added one to my sample:

    <httpHandlers>
        <add path="*.pdf" verb="*" type="System.Web.StaticFileHandler" validate="true"/>
    </httpHandlers>

    I tested again, and I magically got the custom 404 page also when requesting a non-existing .pdf file.

    The next question is: if the application worked fine for some time with ASP.NET 1.1 without adding that HttpHandler, where is the difference? Well... thinking about how the configuration file mechanism works in ASP.NET, I opened "C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CONFIG\machine.config" for 1.1 and "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG\web.config" for 2.0, and started looking at the differences in the <httpHandlers> section. ASP.NET 2.0 has a longer list of extensions and HttpHandlers preconfigured, but I found what I was looking for at the end of the section:

    machine.config for ASP.NET 1.1

    […]
    <add verb="*" path="*.asp" type="System.Web.HttpForbiddenHandler" />
    <add verb="*" path="*.licx" type="System.Web.HttpForbiddenHandler" />
    <add verb="*" path="*.resx" type="System.Web.HttpForbiddenHandler" />
    <add verb="*" path="*.resources" type="System.Web.HttpForbiddenHandler" />
    <add verb="GET,HEAD" path="*" type="System.Web.StaticFileHandler" />
    <add verb="*" path="*" type="System.Web.HttpMethodNotAllowedHandler" />
    </httpHandlers>
    […]

    root web.config for ASP.NET 2.0

    […]
    <add path="*.refresh" verb="*" type="System.Web.HttpForbiddenHandler" validate="true" />
    <add path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" validate="false" />
    <add path="*" verb="GET,HEAD,POST" type="System.Web.DefaultHttpHandler" validate="true" />
    <add path="*" verb="*" type="System.Web.HttpMethodNotAllowedHandler" validate="true" />
    </httpHandlers>
    […]

    That’s the difference: in ASP.NET 1.1, a request whose extension does not match one of the predefined extensions (and which uses a GET or HEAD verb) matches the “*” path and goes through the StaticFileHandler, while in ASP.NET 2.0 the same request goes through the DefaultHttpHandler, which has a different behavior. So, to be 100% sure, I removed the custom HttpHandler from the application's web.config in the repro and added it to the root web.config, and of course the problem was again resolved.

    Another step forward: is it conceivable that we make such a change without documenting it? Well... I have learnt not to be surprised by anything in life, but before becoming too philosophical I started searching our docs, and I found a clear reference to this behavior in the ASP.NET Run-Time Breaking Changes page:

    Short Description
    When mapping custom extensions to an existing ASP.NET builtin handler, it is now necessary to configure a build provider for that extension.

    Affected APIs: Configuration
    Severity: Low
    Compat Switch Available: No


    Description
    When mapping custom extensions to an existing ASP.NET builtin handler, it is now necessary to configure a build provider for that extension. In version 2.0, the ASP.NET build system requires a build provider to handle compilation. The builtin build providers can be reused but they need to be specified.


    User Scenario
    If an application maps a private file extension (say .misc or .foo or whatever) to an existing ASP.NET handler type (say, the page handler), then it is now necessary to provide configuration that tells ASP.NET how to compile that file type. e.g.

    <system.web>
        <httpHandlers>
            <add verb="*" path="*.misc" type="System.Web.UI.PageHandlerFactory" />
        </httpHandlers>
    </system.web>

    In v1, this would work as is. In version 2.0, they also need to register a BuildProvider. e.g.

    <compilation>
        <buildProviders>
            <add extension=".misc" appliesTo="Web" type="System.Web.Compilation.PageBuildProvider" />
        </buildProviders>
    </compilation>
    Types mapped to the static file handler or star mapped types are not impacted by this, only compiled types.

    Work Around
    Add a configuration directive to tell ASP.NET how to compile that file type.

    I sent everything to the customer and he's now happily displaying his custom 404 page also for non-existing PDF files.

    Lesson learnt with this one: if I had had the breaking changes page at hand, I would very likely have saved the time spent troubleshooting... but hey, at least we solved the problem!

     

    Carlo

  • Never doubt thy debugger

    ASP.NET 2.0 application running in Firefox and not running in IE

    • 1 Comments

     

    Every now and then we get calls from customers who decided to migrate an application from ASP.NET 1.1 to 2.0 (and sometimes those are mission-critical ones...) by just changing the relevant mappings in the IIS console, only to discover that the application no longer works, or has some quite weird behavior that gives them headaches...

    Luckily that was not the case with the call I'll now talk about; to test whether the application could work on ASP.NET 2.0 the customer made the very quick test described above: they simply changed the mappings through the IIS console and immediately noticed that something was going wrong inside the app. They started troubleshooting the problem themselves and were then able to reproduce the same problematic behavior in a minimal application, just a simple .aspx page with some very basic functionality (a command button and a GridView filled with some data coming from SQL Server). This minimal repro worked fine on the dev machine with Cassini, but once it was installed on the target web server (Windows 2003 with IIS 6.0) and browsed with IE, the resulting HTML completely lacked the client-side JavaScript needed to trigger the postback (while this code was present when browsing with Firefox). They were accessing the web server through a proxy, but that was true when using Firefox as well...

    We checked the basics (event log, IIS configuration, web.config etc...) but everything looked fine; we then captured a network trace to monitor the communication between the client and the web server, and there we found that the User-Agent string was a bit different from the one we expected (and this was also confirmed by enabling Tracing for the test .aspx page):

    • IE User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; server; domain|user; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.1)
    • Modified User-Agent: Jaguar Application Client/4.1,Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; server; domain|user; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.1)

    The User-Agent string was being modified by the customer's proxy.

    After some more investigation we found that the User-Agent string no longer matched the one described in "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG\Browsers\ie.browser", which contains the definition of the IE browser capabilities; ASP.NET relies on that file (there are more, for other kinds of browsers) to determine the capabilities of the browser issuing the request and to produce HTML output that browser can understand. Due to the modified User-Agent, ASP.NET was no longer able to recognize IE's capabilities and treated it as a downlevel browser (capable of understanding just plain HTML), therefore it produced just that: simple HTML. Interestingly, this did not happen with Firefox because its User-Agent string (even though modified) still matched a .browser file with JavaScript capabilities, so ASP.NET kept including the client-side scripts and the test page worked as expected.

    As a colleague of mine said at the Lisbon off-site a couple of weeks ago: "Now that we understand what's going wrong, we can fix it". We created a new .browser file (we called it jaguar.browser) and modified the regular expression to match the customer's modified User-Agent string; we preferred to create a new file instead of modifying the IE one because those files are not guaranteed to stay unchanged, and future hotfixes or Service Packs could modify them, thus invalidating our changes (and the customer would suddenly have the problem back). By the way, after you create or modify a .browser file the changes are not immediately available to the runtime, so you must run the following command:

    aspnet_regbrowsers -i

    The command creates the runtime browser capabilities assembly and installs it in the global assembly cache; so in this case the problem was not IE itself, but rather the non-matching User-Agent string.
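
    For reference, here is a minimal sketch of what such a custom .browser file could look like; the id, parentID and the regular expression below are illustrative assumptions, not the exact content of the file we created:

    <!-- jaguar.browser (sketch): placed next to ie.browser in the Framework
         CONFIG\Browsers folder, which is why aspnet_regbrowsers -i is needed.
         The id, parentID and the regex are assumptions to adapt to your case. -->
    <browsers>
        <browser id="JaguarClient" parentID="IE6to9">
            <identification>
                <!-- match the prefix the proxy prepends to the IE User-Agent string -->
                <userAgent match="Jaguar Application Client/[\d\.]+,Mozilla/4\.0 \(compatible; MSIE 6\.0" />
            </identification>
        </browser>
    </browsers>

    The idea is that the new definition inherits all the IE capabilities from its parent definition, so ASP.NET again treats the request as coming from an uplevel browser.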

    Here are a couple of links which may be of interest regarding this issue:

    Carlo

  • Never doubt thy debugger

    Blog title contest

    • 4 Comments

    As I guess many of you have noticed, I'm not a native English speaker, so even if I try to write in a form that's simple and appropriate to the context of this blog (and I should apologize for my mistakes), I sometimes find myself wondering how exactly to express in English something I have in mind and that is of course easy for me to say in Italian... Well, one of the beauties of working in a virtual team spread across different EMEA countries (and of being in touch with people from practically all over the world, wherever Microsoft has an office) is the possibility of reaching all of those different people, languages and cultures quite easily (by email, IM or just ringing someone).

    So, taking advantage of this opportunity, I emailed Doug to ask his opinion on the title of my blog, because I've been wondering for a while now if the title really makes sense in English. I didn’t want to use the word “ramblings” because it sounded a bit stale, and I tried to find something with a similar meaning (“rambling from a developer”) but a bit more refined… but how does it sound?

    As usual Doug was very kind (as he was a few months ago when, thanks to his unbiased help, I won a bet with a friend about the right form of a particular sentence in English... thanks Doug! By the way, that friend still has to fulfill his obligations!) and gave me an explanation:

    • Rambling would more often be used in the context of talking or writing to indicate something that does not have a particular focus, direction or agenda but would also convey a light hearted attitude especially when used by you to refer to your own writing. “A developer’s ramblings” would certainly sound more natural as colloquial English than “A developer’s strayings”. Strayings is not strictly a word, as ‘s’ should only be added to a noun to make it plural whereas straying is a verb but then the same is true of ramblings
    • Strayings implies you are going off a predefined route, i.e. going off topic. (“I am walking the path to enlightenment but I keep straying off”)
    • Waffling would be another one although it is used more for spoken rather than written words. All these have a slightly modest, self-deprecating tone to them, which can be a good thing for a blog title
    • Ponderings or “A developer ponders” would be another option – ponder meaning to think about random stuff in a floaty dreamy kind of way
    • Contemplations or “A developer contemplates” would also be ok – a bit more philosophical sounding but fairly neutral

    I did some research in my English dictionary, and also found a few other candidates for the list above:

    • Mull: in the sense of "to mull over" (an idea etc...)
    • Wander: to wander over...
    • Roam: to roam about...

    So, which do you like most? Any other suggestions? Let me know your thoughts (and this is also a way to see how many interested readers I have...). Let's see if something nice comes out; and what about the prize for this contest? Well... the pride of having helped me choose the new title of my blog!

     

    Carlo

  • Never doubt thy debugger

    Italian MS bloggers, a new community is born

    • 4 Comments

    Last Wednesday I took part in the first (non-)meeting of the new-born Italian bloggers community: we were a bunch of people united by a passion for technology, who happen to work in the same company and who share an interest (if not a passion, in some cases) in sharing their experiences and knowledge on their blogs.

    We are just at the beginning of the journey, so if you can read Italian and want to know more, start by having a look at the Italian MS bloggers list.

     

    Carlo

  • Never doubt thy debugger

    Murphy's law (i.e. never play with your demos right before a presentation!)

    • 0 Comments

    Or at least, if you do, make sure to have a backup copy of the working demo in a safe place for a quick last-minute restore!

    (if you are just curious, you can learn more on Wikipedia and on the Murphy's law site)

     

    Carlo

  • Never doubt thy debugger

    The public/private hotfix debate

    • 1 Comments

    Every now and then the question comes back into the limelight: "Why are some hotfixes publicly available to download, while most of the time I have to call CSS to get one?", "Wouldn't it be easier if we could just download the fix ourselves? At least we would save time, since we'll get it from CSS anyway", and again "Who decides if a fix has to be public or private? How?" etc... feel free to add more if you have any (and I'm sure some of you do!).

    I discussed this topic internally with my colleagues (both Support Engineers and Escalation Engineers) in CSS EMEA and of course there are some different views on it, but there is also a common understanding about some of the principles behind the policy Microsoft adopted. Those "private" fixes do not undergo the same amount of testing that Service Packs or "public" fixes have to pass, and this is the main reason (basically: costs); one of the parameters the Product Team(s) takes into account when producing a fix is the business impact that issue is having (or potentially will have) on customers' applications, but also the risk of introducing regression bugs, the amount of code to change, the severity of the problem etc..., and then someone in the management chain has to decide which kind of fix to produce. Basically the process outlined here for SQL fixes also applies to DevTools products (and to any other product at Microsoft, I guess).

    The logic behind the current policy is that a customer can do some research, find the KB article which points to a specific fix, and call CSS to get it; nevertheless it is sometimes hard to understand why the KB article is publicly available but the hotfix is not. A solution to this would be to keep those articles private as well, so that we at CSS could still find the fix, but the situation would be clearer from the customer's perspective: public articles would refer only to public fixes, available for download.

    Correct? Well, sort of...

    Because sometimes (in my experience too) the requested fix does not fit the customer's needs, either because it refers to a different product (or a previous version), or because the customer already has it, perhaps installed with a Service Pack or a roll-up package: what if a customer finds an article which describes a symptom he's seeing, but one due to a different cause? Of course that fix will not work for him, and he'll have to call CSS anyway...

    Keeping some hotfixes private is necessary for three reasons:

    1. Not all fixes have been extensively tested
    2. We would end up having customers who, by default, would install every fix we release even if they don't need it, and this is not good (see the previous point, and imagine all the implications)
    3. We need to know which customers have installed a private fix: as per point 1, if we discover a problem in one of those fixes we are then able to find out who installed it and when, and we can contact them to take appropriate action if needed

    In my experience, when we get an incorrect fix request from a customer we of course don't send the file (which would be useless anyway), but we try to help the customer with a reasonable commercial effort (as the policy says), keeping in mind that this kind of support call is free of charge; so if we can't find a solution quickly, we'll have to open a "standard" (either incident or payment) call to work on the problem.

    Of course we could improve the technical accuracy of our fix articles to increase customers' ability to correctly identify their problem and the relevant hotfix (if that's the case, and if one is available); but sometimes the articles are technically correct and still the "human factor" generates errors and misunderstandings (and I'm not sure this can be controlled...). Also, those private fixes are already available to Premier customers through a restricted web site they can access; maybe it would be possible to extend that kind of initiative to a larger portion of customers, but we would still need to know who downloaded a fix and when (for the reason at point 3 above).

    Nevertheless, the pilot I blogged about a while ago is still progressing, so maybe something is moving in the direction of the feedback we got from you after all...

    Of course any comments on this are welcome!

     

    Carlo

  • Never doubt thy debugger

    Need to pass cookies between machines on the same LAN?

    • 0 Comments

    Then in your web.config use an authentication section similar to the following:

    <authentication mode="Forms">
        <forms 
               name="myCookie"
               timeout="20"
               loginUrl="Default.aspx"
               defaultUrl="Default.aspx"
               protection="All"
               path="/"
               requireSSL="false"
               slidingExpiration="true"
               cookieless="UseDeviceProfile"
               domain="domain.com"
               enableCrossAppRedirects="true"
        />
    </authentication>

    Internet Explorer will only pass cookies from one site to another if they have the domain cookie attribute set; i.e. the Fully Qualified Domain Name is required. Moreover, all GET/POST requests must use the FQDN, as must any Response.Redirect. The reason behind this is security: to make sure that a cookie is not passed to another site, which would expose you to cookie stealing.

     

    Carlo

  • Never doubt thy debugger

    Are you profiling your application and now you can't debug anymore?

    • 1 Comments

    This is an interesting case I got very recently and that made me scratch my head for a while, wondering what was going wrong on the customer's machine... until I called Doug to the rescue and we (he) drove the call to a happy ending.

    I got this case from a colleague on the Visual Studio team; the customer reported a weird problem with the debugging of their ASP.NET 1.1 application (with Visual Studio 2003): when trying to step into the code line by line with F11, the debugger actually stopped only on the signature of every method, completely skipping the method body; and to add something more, this happened only when they used a specific namespace (e.g. MyCompany.MyNamespace.Group): if they used the exact same code in a different namespace, everything worked fine. Moreover, this happened on that very specific web server only; if they moved the code to another machine the debugger worked like a charm (but moving to a different machine was not an option for them).
    My first thought went to the .pdb files, some kind of mismatch between the code executed in memory and the symbols used for debugging.

    In this regard I also have to say that the customer had an unusual configuration in his development environment: they had a Win2003 machine with Visual Studio installed, and the local IIS had a virtual directory to host the application, but the actual files were on a network disk; the files resided on a shared folder on a different Win2003 server, which was the real web server running the ASP.NET application. Basically the customer used the local IIS (on the DEV machine) just to open the project, edited the code files (saved and compiled on drive Y:, which pointed to the other machine), and when they needed to debug the application they first set a breakpoint in the code, then browsed the target page on their machine (to load the application in the remote worker process), then manually attached to the remote w3wp.exe. It's also worth saying that even if in my opinion this is not exactly the best configuration they could use, it had worked for them for quite a long time.

    We first tried to empty all the temporary folders involved (IE, ASP.NET etc...) but with no luck; we then captured some logs (network traces, Process Monitor, a memory dump of both Visual Studio and the affected w3wp.exe) to see what was happening in the network communication between the two machines, where the .pdb files came from, etc..., but again we got nowhere. The dumps showed they had quite a few strong-named assemblies in the /bin folders of their applications, and this is not good (as Tess explained here).

    We first installed the strong-named assemblies in the GAC and ensured that the symbol files were in the correct folder; the Visual Studio module list showed that the correct symbol files were loaded. We then compiled the assembly without the strong name and deployed it to the /bin folder rather than the GAC, and stepping through the functions worked fine. Sn.exe reported that the checked assemblies were ok, and even testing with a new key pair the problem was still there.

    Doug then had an EasyAssist session with the customer (EasyAssist is a tool based on LiveMeeting, tailored to the needs of a remote assistance session) and found that debugging a local application also failed with the same symptoms... From examining the environment variables we noticed that CLR profiling services were enabled, and this pointed to a component of a third-party diagnostic tool. Further debugging showed this tool was intercepting function calls during the attempted debug session. Disabling the tool allowed debugging with CorDbg to work, and at that point the remote debugging problem was solved as well.
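
    As a side note, a quick way to check whether a CLR profiler is configured is to look at the environment variables the runtime uses to load one; the variable names below are the standard CLR profiling ones, while the commands are just a sketch of the kind of check we did, not the customer's exact steps:

    rem Display the CLR profiling variables, if set (system-wide or for the process)
    set COR_ENABLE_PROFILING
    set COR_PROFILER

    rem Temporarily disable profiling for processes started from this prompt
    set COR_ENABLE_PROFILING=0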

    So the lesson learnt here is: if you need profiling tools, use them, but make sure they don't interfere with Visual Studio and the debugger; if possible keep those services/tools disabled and enable them only when really needed.

    Doug, if you want to add something more on this, you are very welcome!

    Carlo
