Larry Osterman's WebLog

Confessions of an Old Fogey
  • Larry Osterman's WebLog

    On Campus Interviews with Microsoft

    • 12 Comments

    It’s digging-back-into-prehistory time :)  WAY back into prehistoric times.

    Microsoft has always done on-campus interviews, it’s an integral part of the recruiting process.  Gretchen and Zoe have written about it a lot here.

    My on-campus interview story is a bit dated nowadays; it occurred when I was interviewing at Carnegie-Mellon University back in October of 1983 (or thereabouts).

    I had signed up for the interview with Microsoft more or less on a lark; I wasn't particularly interested in Microsoft as an employer, but Microsoft did compilers and operating systems (for toy computers), and I figured that it would be worth an hour or two of my time.  I REALLY wanted a job working in operating systems development on a "real" computer - something like a VAX would be nice.

    I walked into the interview cold - I had no idea who this "Steve Ballmer" was, I figured he was yet another development manager, just like all the other interviewers I had seen.  Boy, was I surprised.

    I walked into the office-let that they used for interviews, and was introduced to this great big strapping slightly balding guy with a voice almost as loud as my own.  He started in asking me the usual questions - what's your favorite course, what do you do in your spare time, etc.  So far, nothing unusual....  Then he started asking me questions - but not programming questions like I'd had before, logic questions.  The one that sticks in my mind was:

    "We're going to play a game.  I'm going to pick a number between 1 and 100, and you get to guess the number.  If you don't get it, I'll tell you if you're too high or too low.  If you guess it, I'll pay you $6 less the number of guesses that you take - so if you get the number on the first guess, you get $5, but if you take 7 guesses, you pay ME $1. 

    Now do you want to play the game or not?"

    I said "Of course not - a binary search on 1 to 100 takes 7 guesses for the worst case, you can pick your numbers such that you will always force 7 guesses". 

    Steve looks at me and says "Are you sure?" 

    Long pause on my part....  "Well, I don't need to choose my first pivot point at 50, any number between 32 and 64 will work just as well, and if I do that, I change the spread, so if I change the start point of the binary search, I can win this game".

    Steve then moved on to describe the work environment at Microsoft - everyone gets their own office, development work is done on Sun workstations, and most of the compiler and language work is done on a DECSYSTEM-20 (since I was a die-hard DEC-20 fan back then (and still am), this was music to my ears).  Steve then described the Microsoft DEC-20 and commented "We love the DEC-20, except when we do our nightly builds - it gets totally unusable when we're doing builds, the load average gets to 10 or 20" (the "load average" on a DEC-20 was a measure of how heavily loaded the machine was - the higher the number, the worse the user response time).  He got quite emphatic about how horrible life was on the mainframe when this was happening.

    At that point, I totally lost it.  "You think that a load average of 10 or 20 is bad?  Man, you are clueless - you have absolutely no idea how bad the load can get on a DEC-20.  On a good day, the load average on our DEC-20 is 50, and when projects are due, it goes up to 100 or 120".  I continued ranting for several more minutes.

    At about that time, the interview ended, but I was convinced that I had blown it (no big deal, as I mentioned - I didn't really care about the job anyway).  On the way out, I started reading the Microsoft literature that I'd been given...  When it came to describing the executive team at Microsoft, I stopped and stared at the brochure.  There was his name and picture - "Steve Ballmer, Vice President, Operations".

    Sigh..  If I had any chance of getting that job, I had surely blown it totally - you just don't tell the guy who’s interviewing you that he's an idiot.  Especially when he's the head of H.R. in a company that's trying to hire you...

    Needless to say, the very next day, I received a telex asking me to come to Redmond and interview with Microsoft.  I remember Valorie running into my compiler design class with the telex in her hand.  Three months later, I interviewed on campus at Microsoft (my first plant trip).  Things must have gone well; I got my very first full time job offer at about 4PM on the day I interviewed.

    Oh, and about that interview question (there was a reason I put it in the story)...  I wasn't happy with the answer I'd given Steve; it kept on niggling away at the back of my mind.  About a week later, I was busy working on the parser for my Compiler Design class, and I decided I needed a break, so I wrote a program to emulate the game, choosing different pivot points (you can tell I am/was a totally obsessive geek for even considering this).  After running through the game, even with pivot points that aren't in the middle, it turns out that you CANNOT win the game - there is no other pivot point that improves on the worst case of a pivot at 50; the alternate pivot points still require more than 6 guesses to find the value.  On the other hand, if the person you’re playing with believes that you’re going to choose a pivot point of 50 and picks his numbers accordingly, you CAN potentially win, but it’s a crap shoot.
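    A modern reconstruction of that break-time experiment might look something like the sketch below (in C rather than whatever my 1983 class project was written in; the payoff rule assumed is "$6 less the number of guesses", as Steve stated it).  It tries every possible first pivot and lets the picker hide the number in whichever remaining sub-range takes longest to search:

    ```c
    #include <stdio.h>

    /* Fewest guesses that guarantee finding a number hidden among n
     * candidates: g guesses can distinguish at most 2^g - 1 values. */
    static int worst_case(int n)
    {
        int guesses = 0, covered = 0;
        while (covered < n) {
            guesses++;
            covered = 2 * covered + 1;  /* covered = 2^guesses - 1 */
        }
        return guesses;
    }

    /* Worst case for the 1..100 game if the FIRST pivot is p: the picker
     * then hides the number in whichever sub-range takes longer. */
    static int worst_with_pivot(int p)
    {
        int left  = worst_case(p - 1);    /* numbers below p */
        int right = worst_case(100 - p);  /* numbers above p */
        return 1 + (left > right ? left : right);
    }

    int main(void)
    {
        int best = 100;
        for (int p = 1; p <= 100; p++)
            if (worst_with_pivot(p) < best)
                best = worst_with_pivot(p);
        /* No first pivot beats 7 guesses, so at $6 minus guesses the
         * picker can always force you to pay $1. */
        printf("best achievable worst case: %d guesses\n", best);
        return 0;
    }
    ```

    Any first pivot from 37 through 64 leaves at most 63 numbers on each side (searchable in 6 more guesses), but no pivot does better than a worst case of 7 - which matches the conclusion above.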

    I ran into Steve in the hall about a year after I had started, and asked him about that (further proof that I’m an uber-geek – I actually followed through with the interviewer and challenged him on his interview question)...  He said "Yeah, I knew that, the point of that interview question is to see if the interview candidate can even consider a pivot point other than 50, I didn't care about what the real answer was"...

    Edit: KC pointed out in private email that I left out a crucial detail - hey, it was more than 20 years ago, you expect me to get every detail right? :)

     

  • Larry Osterman's WebLog

    What's wrong with this code, part 4: the answers

    • 16 Comments

    As I mentioned yesterday, this is a subtle problem.  Apparently it wasn’t subtle enough for the people commenting on the API: without fail, everyone nailed it perfectly.

    But this IS a problem that I run into at least once a month.  Someone comes to my office and says “I’m getting an undeclared external in my application and I don’t see why!”  95% of the time, the problem is exactly the one in this bug.

    Here’s the offending line in the buggy header:

    NIFTY_API_API int MyNiftyAPI(void);

    Not really surprising - heck, it’s the ONLY line in the buggy header.  What’s wrong with the line is actually an error of omission.  The header doesn’t specify the calling convention used for the API.  As a result, in the absence of an explicit calling convention, the compiler assumes that its current default calling convention is the calling convention for the API.

    Unfortunately that’s not the case.  The calling convention for an API is set by the compiler when the code is built; if every part of the project uses the same calling convention, you’re fine, but if anyone compiles their code with a calling convention other than yours, you’re toast.  Raymond goes into some detail on how to diagnose these problems here - as I mentioned yesterday, he’s written a number of posts on the subject.

    The key indicator that there might be a problem was my statement “I’m writing a DLL”.  If this was just a routine in my application, it wouldn’t matter, since all the components in my application are usually compiled with the same set of compiler settings.

    But when you’re dealing with DLLs (or statically linked libraries), the consumer of your code typically isn’t in the same project as you are, so it’s absolutely critical that you specify the calling convention you used, to prevent them from using the “wrong” calling convention in their code.
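    To make the mismatch concrete, here's a hypothetical fragment (MSVC-specific keywords; the comments show the x86 decorated names the compiler emits for each convention, which is why a mismatch surfaces as an unresolved external at link time rather than a compile error):

    ```c
    /* Same signature, three explicit calling conventions (x86 MSVC): */
    int __cdecl    MyNiftyAPI1(void);   /* _MyNiftyAPI1   - caller cleans stack */
    int __stdcall  MyNiftyAPI2(void);   /* _MyNiftyAPI2@0 - callee cleans stack */
    int __fastcall MyNiftyAPI3(void);   /* @MyNiftyAPI3@0 - registers + callee  */
    ```

    If the DLL exports `_MyNiftyAPI2@0` but the consumer's compiler defaults to `__cdecl` and asks the linker for `_MyNiftyAPI2`, you get exactly the "undeclared external" described above.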

    Kudos:

    Grant pointed out the problem initially, followed quickly (and more authoritatively) by Borsis Yanushpolsky.  Everyone else posting comments also picked it up.

    Grant also pointed out (in private email), and Mike Dimmick pointed out in the comments section that there’s another, equally glaring problem with the header.  It is missing an extern “C” to correctly inform the compiler that the API in question shouldn’t use C++ name decoration.  The code should have been wrapped with:

    #ifdef __cplusplus
    extern "C" {            /* Assume C declarations for C++ */
    #endif  /* __cplusplus */ 

    /* ... API declarations go here ... */

    #ifdef __cplusplus
    }                       /* End of extern "C" { */
    #endif  /* __cplusplus */

     So the full version of the header should be:

    // The following ifdef block is the standard way of creating macros which make exporting
    // from a DLL simpler. All files within this DLL are compiled with the NIFTY_API_EXPORTS
    // symbol defined on the command line. this symbol should not be defined on any project
    // that uses this DLL. This way any other project whose source files include this file see
    // NIFTY_API_API functions as being imported from a DLL, whereas this DLL sees symbols
    // defined with this macro as being exported.
    #ifdef NIFTY_API_EXPORTS
    #define NIFTY_API_API __declspec(dllexport)  
    #else
    #define NIFTY_API_API __declspec(dllimport)
    #endif

    #if defined(_STDCALL_SUPPORTED)
    #define STDCALL __stdcall    // Declare our calling convention.
    #else
    #define STDCALL
    #endif // _STDCALL_SUPPORTED

    #ifdef __cplusplus
    extern "C" {     // Don’t use C++ decorations for our API names
    #endif

    NIFTY_API_API int STDCALL MyNiftyAPI(void);

    #ifdef __cplusplus
    }                // Close the extern C.
    #endif

    I have a huge preference for __stdcall APIs.  They have all of the benefits of the __cdecl calling convention (except for the ability to support variable numbers of arguments) and they result in significantly smaller code (since the routine cleans its stack, not the caller).   As I mentioned in a comment in the previous post, the savings that NT achieved when it switched its default calling convention from __cdecl to __stdcall was huge – far more than we had anticipated.

    There’s still one more potential bug: the header file isn’t wrapped in either a #pragma once or an #ifdef _NIFTY_API_INCLUDED_/#define _NIFTY_API_INCLUDED_/#endif // _NIFTY_API_INCLUDED_ guard.  Given the current API header this isn’t a problem, but if the header ever gains definitions and is included from multiple places, those definitions could be seen more than once, which could later result in redefinition errors.
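    The fix looks like this (reusing the guard macro name from above; MSVC supports both forms, and it's common to use them together):

    ```c
    /* MSVC's pragma form: */
    #pragma once

    /* ...or a portable include guard wrapped around the whole header: */
    #ifndef _NIFTY_API_INCLUDED_
    #define _NIFTY_API_INCLUDED_

    /* ... the rest of the header ... */

    #endif // _NIFTY_API_INCLUDED_
    ```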

    McGroarty brought up an interesting point:

    I'm no Windows guy, but I'll put a cautious eye to the generic int being subject to the signedness and size of the day.

    I hadn’t considered this originally, but he has a point.  The size of an int is implementation-defined in C/C++; an int is only guaranteed to be at least as large as a short (which in turn is at least as large as a char, and must be able to hold values from -32767 to 32767).  So an int can be a 16 bit, 32 bit, or 64 bit quantity, depending on the platform.
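    A quick way to see this on any given compiler - and the usual fix when an interface contract needs an exact size - is C99's &lt;stdint.h&gt; exact-width types (the sizes this prints are per-platform, not universal):

    ```c
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* sizeof(int) is implementation-defined; the standard only
         * guarantees that int can hold at least -32767..32767. */
        printf("int:     %zu bits\n", sizeof(int) * CHAR_BIT);

        /* An exact-width type takes the guesswork out of a DLL interface. */
        printf("int32_t: %zu bits\n", sizeof(int32_t) * CHAR_BIT);
        return 0;
    }
    ```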

     

  • Larry Osterman's WebLog

    Every programmer should know assembly language - part two

    • 0 Comments

    Jeremy Kelly pointed me to this post that he made about a debugging session that the Exchange escalation guys did that discovered a rootkit running on a customer's machine.

    It is an awesome detective job, and it’s a great example of exactly why (a) Every Developer needs to know Assembly and (b) Why you need to reformat your machine after you’ve been infected.

    The ONLY way that they discovered that this machine had been rooted was the fact that the rootkit had a bug.  If it hadn’t been for the bug, the poor customer would never have known that he had a problem until much later.

    And yes, stuff like this happens a lot.  We’re very fortunate that we have some really talented escalation engineers working here that can diagnose stuff like this, but it’s a part of the skill set that developers and support people need to have.

     

    Way to Go Jeremy, a great read.

     

  • Larry Osterman's WebLog

    35 years ago today...

    • 6 Comments

    I was seven years old at the time,  and I remember getting woken up by my parents and being brought downstairs to where they had a great big party going on (for some reason I thought it was very late in the evening, although I now realize that it was only about 9:30 eastern time).  There must have been a dozen people clustered tightly around our TV. 

    We all sat there in silence and stared at the blurry images coming from Mission Control.  Walter Cronkite was explaining what was happening in great detail. 

    And then the immortal words came from the speaker.  “Houston, Tranquility Base here.  The Eagle has landed”.  It’s one of my earliest memories and it STILL brings tears to my eyes as I remember it. 

    The world literally changed that day.  We have forgotten so much of the wonder that those early explorations brought, the sense of magic that the images of a man, yes a human being standing on ANOTHER PLANET brought.  Now we get excited when unmanned robots the size of vacuum cleaners scurry over the surface of mars.  Or when a school bus sized observatory goes to Saturn. 

    But it isn’t the same thing.  The visceral reaction to seeing a human being standing on a planet (or performing a space walk or repairing a telescope) adds a level of involvement that cannot be achieved by little scooters.

    My thanks go out today to the crew of Apollo 11, for inspiring a generation.

     

  • Larry Osterman's WebLog

    What's wrong with this code, part 4

    • 17 Comments

    Ok, time for another “what’s wrong with this code” problem.

    This time, I’m writing a DLL.  Nothing complicated, just a plain old DLL.  As is expected, I publish a header file for my api:

    // The following ifdef block is the standard way of creating macros which make exporting
    // from a DLL simpler. All files within this DLL are compiled with the NIFTY_API_EXPORTS
    // symbol defined on the command line. this symbol should not be defined on any project
    // that uses this DLL. This way any other project whose source files include this file see
    // NIFTY_API_API functions as being imported from a DLL, whereas this DLL sees symbols
    // defined with this macro as being exported.
    #ifdef NIFTY_API_EXPORTS
    #define NIFTY_API_API __declspec(dllexport)
    #else
    #define NIFTY_API_API __declspec(dllimport)
    #endif

    NIFTY_API_API int MyNiftyAPI(void);

    You’ll notice that this header is almost identical to the header file that Visual Studio produces when you ask it to make a DLL.  Even so, there’s a bug in this header file.

    Your challenge is to figure out what the bug is.  It’s subtle this time, but important (although Raymond and I have touched on it before).  Btw, the fact that it uses the non-standard __declspec is NOT the bug.  That’s syntactic sugar that could be easily removed without removing the error.

    As usual, answers and kudos tomorrow.

     

  • Larry Osterman's WebLog

    What are Known DLLs anyway?

    • 10 Comments

    In my previous post about DLLs and how they work, I commented that winmm.dll was a KnownDLL in Longhorn.  It turns out that this is a bug in an existing KnownDLL. But what in the heck ARE Known DLLs in the first place?

    Well, it turns out that it’s in the KB, and I’ll summarize.

    KnownDLLs is a mechanism in Windows NT (and Win9x) that allows the system to “cache” commonly used system DLLs.  It was originally added to improve application load time, but it can also be considered a security mechanism, since it prevents people from exploiting weak application directory permissions by dropping in Trojan horse versions of system DLLs (since the key system DLLs are all KnownDLLs, the version of the file in the application directory will be ignored).  It's not a particularly strong security mechanism (if you can write to the directory that contains a program, you can create other forms of havoc), but it helps.

    If you remember from my previous article, when the loader finds a DLL import record in an executable, it opens the file and tries to map the file into memory.  Well, that’s not ENTIRELY the case.  In fact, before that happens the loader looks for an existing section called \KnownDlls\<dll filename>.  If that section exists, then instead of opening the file, the loader simply uses the existing section.   It then follows all the “normal” rules for loading a DLL.

    When the system boots, it looks in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\KnownDLLs and creates a \KnownDlls\<dll filename> section for every DLL listed under that registry key.

    If you compare the HKLM\System\CCS\Control\Session Manager\KnownDLLs registry key with the sections under \KnownDlls (using a viewer like winobj), you’ll notice that the \KnownDlls object container always has more entries in it than the registry key.  This is because the \KnownDlls sections are computed as the transitive closure of the DLLs listed in KnownDLLs.  So if a DLL is listed in KnownDLLs, all of the DLLs that it statically links with ALSO get sections under \KnownDlls.
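    The transitive-closure step can be sketched with a toy import table (the DLL names and edges below are made up for illustration - the real dependency information comes from each binary's import directory):

    ```c
    #include <stdio.h>
    #include <string.h>

    #define MAX 16

    /* Hypothetical static-import edges: deps[i][0] imports deps[i][1]. */
    static const char *deps[][2] = {
        { "kernel32.dll", "ntdll.dll"    },
        { "user32.dll",   "kernel32.dll" },
        { "user32.dll",   "gdi32.dll"    },
        { "gdi32.dll",    "kernel32.dll" },
    };

    static const char *closure[MAX];
    static int count = 0;

    static int seen(const char *name)
    {
        for (int i = 0; i < count; i++)
            if (strcmp(closure[i], name) == 0) return 1;
        return 0;
    }

    /* Add a DLL and, recursively, everything it statically links with. */
    static void add(const char *name)
    {
        if (seen(name)) return;
        closure[count++] = name;
        for (size_t i = 0; i < sizeof(deps) / sizeof(deps[0]); i++)
            if (strcmp(deps[i][0], name) == 0)
                add(deps[i][1]);
    }

    int main(void)
    {
        add("user32.dll");      /* listed in the KnownDLLs registry key */
        for (int i = 0; i < count; i++)
            printf("\\KnownDlls\\%s\n", closure[i]);
        return 0;
    }
    ```

    Listing just one DLL in the registry produces sections for it plus everything it (transitively) imports, which is why the \KnownDlls container always has more entries than the key.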

    Also, if you look in the KnownDLLs registry key, you’ll notice that there’s no path listed for the KnownDLLs.  That’s because all KnownDLLs are assumed to be in the directory pointed to by the HKLM\System\CCS\Control\Session Manager\KnownDLLs\DllDirectory registry value.  Again, this is an aspect of KnownDLLs being a security feature – by requiring all KnownDLLs to be in the same directory, it makes it harder for someone to inject their own Trojan version of one of the KnownDLLs.

    Oh, and if the KnownDLLs processing causes any issues, or if for some other reason you don't want the system to load a DLL as a KnownDll, then you can set HKLM\System\CCS\Control\Session Manager\ExcludeFromKnownDlls to exclude a DLL from the KnownDll processing.  So in my example, until the bug is fixed in the existing KnownDLL, I’m adding winmm.dll to my ExcludeFromKnownDlls list.

     

     

  • Larry Osterman's WebLog

    IE Annoyances..

    • 25 Comments

    No, I’m not going to complain about transparent PNG or CSS support.  Frankly, since I’m not a webmaster, I don’t care about them (sorry).

    This one’s pretty specific, and I’m pretty sure that it’s an IE bug.

    One thing that I’ve noticed while reading other people's blogs is that IE seems to get confused about the text size at which I want my pages rendered.  For some reason, Eric Lippert’s blog seems to be a constant offender there, although I don’t know why.

    For some reason, after reading his ‘blog, I seem to find my text size changed from “Medium” to “Smaller”.  Which is usually too small for me.  And of course, when I close the browser, it happily sets the new font size as the default for my machine.

    I know the setting is kept in the registry somewhere and I could put an ACL on the registry key to prevent it from happening, but I’d love to understand why this is happening.  What’s allowing a web page to change the text size I want to use to view the entire web?

    Edit: Problem discovered: It was an interaction between a buggy internal tool and IE.  Which explains why I didn't find it by googling :)

     

  • Larry Osterman's WebLog

    Milestones (not Millstones)

    • 6 Comments

    In October of 1982 (22 years ago this year!), I met a young woman named Valorie Holden and fell in love with her.  She has been my almost constant companion and partner ever since then; we were married in 1987, and now have two wonderful, gifted children.

    She works harder than anyone I’ve ever known, and is a constant source of inspiration to me.  She is a phenomenal mother, and is currently studying to become a truly extraordinary teacher (she already is an extraordinary teacher, but the state insists that she have this silly piece of paper so…).

    Next fall, she’ll be holding down three jobs: Mom, Teacher's Aide in the PACE 5/6 classroom, AND full-time student.  I don’t know how she manages to do all this and still remain sane, but somehow she does.

    Words really don’t suffice to express how much I love her.  I don’t always do a great job of showing it, but she is constantly on my mind.

    So why am I posting this?  Well, tomorrow’s her birthday.  And while she’s told me that I shouldn’t do anything about it (it’s one of those big ones and she HATES it when people make a fuss about her), I couldn’t resist putting up a post about it.

    So if you run into Valorie, make sure you wish her a happy birthday!

     

  • Larry Osterman's WebLog

    I, Robot

    • 26 Comments

    Michael Gartenberg over at Jupiter Research had a post today about the new movie I, Robot.

    I’ve seen the trailers for this movie, and I think it may be one of the greatest abominations that Hollywood has ever created.  I don’t know WHAT was going through Janet Jeppson's (Isaac’s wife) mind when she authorized the use of Isaac’s stories…  The original I, Robot stories were thoughtful stories about robots coming to take on sentience.  It wasn’t about the evil robots taking over the world.  But darned if that isn’t what the new movie is about.

    Asimov’s robots were ALWAYS constrained by the 3 laws of robotics.  It was a constant throughout the stories that the three laws were NEVER violated.  Having the three laws was a fascinating literary device, because it allowed Asimov to come up with story after story where it appeared that the three laws were being broken when in fact they weren’t.  His “Caves of Steel” and the other R. Daneel Olivaw/Elijah Baley stories are absolute classics in the S.F. Mystery novel genre.

    I looked at the trailer for the new movie and cringed.  Especially at the scenes with all the robots attacking Will Smith, and Will Smith playing Arnold Schwarzenegger with his railgun.

    This one’s a must-miss in my opinion.  If they had kept the original title of “Hardwired”, and avoided the tie-in with Asimov, then maybe it might be worthwhile.  But as long as they’re sullying Isaac’s works with this drivel, I’m staying home. 

     

    In case you think I’m just an Asimov fan-boy, think: Millions of kids will see this movie and think that the Asimov I, Robot stories are just more summer action movie fodder.  Is that really the legacy of one of the most thoughtful of the great science fiction authors?

     

  • Larry Osterman's WebLog

    Sometime soon, this internet thing is going to be really big, ya know.

    • 13 Comments

    But sometimes I wonder if it’s getting TOO big.

    I don’t normally try to do two rants in quick succession, but there was a recent email discussion on an internal mailing list that sparked this rant.

    There’s a trend I’ve been seeing in recent years that makes me believe that people somehow think that there’s something special about the technologies that make up the WWW.  People keep trying to use WWW technologies in places that they don’t really fit.

    Ever since Bill Gates sent out the “Internet Tidal Wave” memo 9 years ago, people seem to believe that every technology should be framed by the WWW. 

    Take RPC, for example.  Why does RPC have to run over HTTP (or rather SOAP layered over HTTP)?  It turns out that HTTP isn’t a particularly good protocol for RPC, especially for connection oriented RPC like Exchange uses.  About the only thing that RPC-over-HTTP-over-TCP brings beyond the capabilities of RPC-over-TCP is that HTTP is often opened through firewalls.  But the downside is that HTTP is typically not connection oriented.  Which means that you either have to re-authenticate the user on every RPC, or the server has to cache the client’s IP address and verify the client that way (or require a unique cookie of some kind).

    Why does .Net remoting even support an HTTP protocol?  Why not just a UDP and a TCP protocol (and I have serious questions about the wisdom of supporting a UDP protocol)?  Again, what does HTTP bring to .Net remoting?  Firewall pass-through?  .Net remoting doesn’t support security at all natively; do you really want unsecured data going through your firewall?  At least HTTP/RPC provides authentication.  And it turns out that supporting connection-less protocols like HTTP caused some rather interesting design decisions in .Net remoting – for instance, it’s not possible to determine if a .Net remoting client has gone away without providing your own ping mechanism.  At least with a connection oriented transport, you can have deterministic connection rundown.

    Why does every identifier in the world need to be a URI?  As a case in point, one of our multimedia libraries needed a string to represent the source and destination of media content – the source was typically a file on the disk (but it could be a resource on the net).  The destination was almost always a local device (think of it as the PnP identifier of the dev interface for the rendering pin – it’s not, but close enough).  Well, the multimedia library decided that the format of the strings they were using was to be a URI.  For both the source and destination.  So, when the destinations didn’t fit the IETF schema for URIs (they had % characters in them, I believe, and our destination strings didn’t have a URI prefix), they started filing bugs against our component to get the names changed to fit the URI schema.  But why were they URIs in the first place?  The strings were never parsed; they were never cracked into prefix and object. 

    Now here’s the thing.  URIs are great for referencing networked resources.  They really are, especially if you’re using HTTP as your transport mechanism.  But they’re not the solution for every problem.  The guys writing this library didn’t really want URIs, they really wanted opaque strings to represent locations.  It wasn’t critical that their URIs meet the URI format, they weren’t ever going to install a URI handler for the identifiers, all they needed to be were strings.

    But since URIs are used on the internet, and the internet by definition is a “good thing” they wanted to use URIs.

    Another example of an over-used internet technology: XML.  For some reason, XML is considered to be the be-all and end-all solution to every problem.  People seem to have decided that the data that’s represented by the XML isn’t important; it’s the fact that it’s represented in XML.  But XML is all ABOUT the data.  It’s a data representation format, for crying out loud.  Now, XML is a very, very nice data representation.  It has some truly awesome features that make representing structured data a snap, and it’s brilliantly extensible.  But if you’re rolling out a new structured document, why is XML the default choice?  Is there never a better choice than XML?  I don’t think so.  Actually, Dare Obasanjo proposed a fascinating XML litmus test here; it makes sense to me.

    When the Exchange team decided to turn Exchange from an email system into a document storage platform that also did email, they decided that the premier mechanism for accessing documents in the store was to be HTTP/DAV.  Why?  Because it was an internet technology.  Not because it was the right solution for Exchange.  Not because it was the right solution for our customers.  But because it was an internet technology.  Btw, Exchange also supported OLEDB access to the store, which, in my opinion made a lot more sense as an access technology for our data store.

    At every turn, I see people deploying solutions that are internet, even when it’s not appropriate.

    There ARE times when it’s appropriate to use an internet technology.  If you’re writing an email store that’s going to interoperate with 3rd party clients, then your best bet is to use IMAP (or if you have to, POP3).  This makes sense.  But it doesn’t have to be your only solution.  There’s nothing WRONG with providing a higher capability non-internet solution if the internet solution doesn’t provide enough functionality.  But if you go the high-fidelity client route without going the standards based route, then you’d better be prepared to write those clients for LOTS of platforms.

    It makes sense to use HTTP when you’re retrieving web pages.  You want to use a standardized internet protocol in that case, because you want to ensure that 3rd party applications can play with your servers (just like having IMAP and POP3 support in your email server is a good idea as mentioned above). 

    URLs make perfect sense when describing resource location over the network.  They even make sense when determining if you want to compose email (mailto:foo@bar.com) or if you want to describe how to access a particular email in an IMAP message store (imap://mymailserver/public%20folders/mail%20from%20me).  But do they make sense when identifying the rendering destination for multimedia content? 

    So internet technologies DO make sense when describing resources on the internet.  But they aren’t always the right solution to every problem.

     

  • Larry Osterman's WebLog

    More on plumbing fixtures...

    • 11 Comments

    Found this on snopes.com, my favorite urban legends site.

    It’s a transparent public toilet installed in a London construction site. 

    Somehow I have potties on the brain today...

    Edit: Fixed title and images, twice (proxy troubles).

     

  • Larry Osterman's WebLog

    Microsoft and plumbing fixtures.

    • 8 Comments

    I was having an email discussion with Ben Slivka the other day, and he asked me what three things were going to make customers enthusiastic about Longhorn.

    My answer to him was as follows:

    I'm not sure.  My guess would be the changes around the Multimedia experience (persistent per application volume control, and improved handling (you can have windows sounds be mp3 files), WinFS means that you can search the metadata on your multimedia as quickly as you can Google, which makes slicing and dicing play lists better). 

    The new UI should be REALLY slick, and should attract a lot of consumers (eye candy always does).

    Beyond that, I'm not sure - the reality is that most of the cool stuff in Longhorn is plumbing - Avalon and the rest of WinFX means that app authors will be able to easily do stuff that they've never been able to do before, which means that there's a host of new cool apps that will be able to be written for longhorn.  That also means that app authors have even more ways of writing annoying applications (if you think skins are bad, consider what app designers will do when they can put video on a button face), so...

    But the thing is that consumers don't see the cool stuff that's going on in the platform.  Unlike Apple, who spends HUGE amounts of time and effort on making the UI cool and flashy (and responsive and consistent, etc), Microsoft tends to work on getting the plumbing right.   But customers don’t see the plumbing.

    Which means that our toilets flush and our sinks drain really, really well, but they're not very pretty.  To continue the plumbing analogy, Apple is Kohler - lots of flash, looks great, works well; Microsoft is Delta - not as much flash, but totally rock-solid reliable.

    The reality is that I just don’t see customers going totally bonkers about things like the games library or parental controls, or the other end-user features of Longhorn.  But man, Longhorn as a platform is going to let developers do really amazing stuff.

     

    Please note: I’m not an evangelist.  I don’t know all the bells and whistles; I work on Windows audio, which is why my answer was multimedia-centric. 

     

  • Larry Osterman's WebLog

    New Exchange 'blog post, this one on push notifications

    • 0 Comments

    Nico over in Exchange just told me that he posted my article on Exchange’s Push Notification feature, so included by reference :).

  • Larry Osterman's WebLog

    Internationalizing Microsoft Products

    • 25 Comments

    The Seattle Times has an interesting article today about Microsoft’s efforts to extend beyond the “basic 30-or-so” languages we already support into languages with somewhat smaller market shares (Urdu, Kiswahili, Nepalese, etc.). 

    It’s actually cool, and I’m really happy that we’re extending our outreach to locales where there are relatively few speakers (ok, Hindi doesn’t have relatively few speakers).

    But I do need to take issue with at least one comment in the article:

    Microsoft began localizing its software in different languages about 15 years ago.

    We’ve been localizing our products for as long as I’ve worked at Microsoft.  At a minimum, Microsoft’s first Japanese field office opened in 1977 (ASCII-Microsoft), 27 years ago, and our Japanese subsidiary was founded in 1986 (18 years ago); both produced localized products for the Japanese market.  In 1985, I was working on MS-DOS 4.0, which was shipped localized in French to Goupil.  I still remember a demo of Arabic Windows 2.0 from the late 1980’s: the person doing the demo started writing in Arabic (a right-to-left language) and the text appeared to the right of the cursor (as would be expected).  He then got a HUGE round of applause when he switched from typing in Arabic to English (a left-to-right language) and the text started appearing to the LEFT of the cursor.

    One of the stock interview questions a friend of mine used to ask was about how you handle cursor up and cursor down motions in GWBasic – it dates from at least 1982.

    So we’ve been doing localization since just about forever; I don’t know why they picked 15 years ago for the Times article. 

    Localization has ALWAYS been a big deal to Microsoft; it’s literally illegal to sell non-localized products in some countries (don’t ask, I don’t know the specifics).  And since we want to ship our software everywhere... :)

    And I’m REALLY glad that we’re finally targeting the smaller languages, it’s cool.  I also love the mechanism that’s being used to do the localization.  Instead of localization being done in a centralized location, the localization is being done by local groups – so instead of Microsoft having to have native speakers of the various languages, we’re engaging communities in those countries to produce the localized content.

    We currently have language packs available for Windows XP in Bulgarian, Catalan, Croatian, Estonian, Hindi, Latvian, Lithuanian, Romanian, Serbian (Cyrillic), Serbian (Latin), Thai, and Ukrainian.  There’s a similar list for Office (only in Catalan or Norwegian currently though).

     

  • Larry Osterman's WebLog

    Jon Wiswall's started blogging...

    • 0 Comments

    I don't normally do “hey, he started blogging” posts, but I just noticed that Jon Wiswall has started a blog.

    Jon's one of those guys who can be counted on for insightful and intelligent answers during internal discussions; I know that when I see one of his responses, his answer's going to be the right one.

    Subscribed.

     

  • Larry Osterman's WebLog

    So how am I doing?

    • 14 Comments

    Well, this is my 100th post to my weblog, and since it’s review time at Microsoft, I figured I’d turn the forum over to my readers.

    I started this weblog 4 months ago after reading Raymond’s ‘blog for several months and marveling at his ability to consistently produce interesting content.  Since I love to talk (or rant) about lots of different topics, I wanted to give this ‘blogging thing a chance.  I may write code for a living, but I’ve always enjoyed technical writing.  I’ve wanted to publish “Larry’s ranting about software” for years now (I went as far as to start work on an outline for an “Inside Windows NT Security” book to pitch to MS-Press, but eventually decided I didn’t have enough time to be a full time author).  Publishing this blog seemed to be an ideal opportunity to move forward with some form of that dream.

    I’ve got to say that this self-publishing thing has been both more challenging and more exciting than I had ever realized.  Valorie can tell you that I’ve gotten pretty obsessive about coming up with new ideas for posts; there have been times that figuring out what to post has somewhat taken over my life.  It’s been a fascinating experience.

    I set myself a goal on day one of producing at least one new post every day I’m at work, and so far I’ve been able to meet that goal, although sometimes it has been hard - there are times (especially when work’s busy) that I don’t post until 4:00PM.  Along the way, I’ve learned a huge amount from the many insightful comments that people have made on the various articles.

    Anyway, enough glurge…

    So, in the spirit of evaluating how we’re serving our customers both inside and outside Microsoft, how AM I doing?  Do the things I post here meet your needs?  What is your favorite thing about what I’m doing?  What’s your least favorite thing?

    And most importantly, what can I do to improve?  Feel free to post comments on this thread, or if you’d rather, send me email directly at LarryO (at) Microsoft.Com.

     

  • Larry Osterman's WebLog

    Microsoft and Art

    • 12 Comments

    Sorry about not posting yesterday, I was out with the kids at Seattle Center (Daniel had rehearsals and I was having fun with Sharron), so no time to write up a post (I’m not as well organized as Raymond).

    Microsoft’s got a pretty impressive art collection.  Some pieces are cool, some are merely controversial.

    We had a huge internal debate the other day about this piece:

    Yes, it’s a piece of notebook paper in a frame.  It isn’t until you get REALLY close to it that you realize that it’s a painting of a piece of notebook paper in a frame…

     

  • Larry Osterman's WebLog

    It's the platform, Silly!

    • 69 Comments

    I’ve been mulling writing this one for a while, and I ran into the comment below the other day which inspired me to go further, so here goes.

    Back in May, Jim Gosling was interviewed by Asia Computer Weekly.  In the interview, he commented:

    One of the biggest problems in the Linux world is there is no such thing as Linux. There are like 300 different releases of Linux out there. They are all close but they are not the same. In particular, they are not close enough that if you are a software developer, you can develop one that can run on the others.

    He’s completely right, IMHO.  Just like the IBM PC’s documented architecture meant that people could create PC’s that were perfect hardware clones of IBM’s PCs (thus ensuring that the hardware was the same across PCs), Microsoft’s platform stability means that you can write for one platform and trust that your application will run on every machine running that platform.

    There are huge numbers of people who’ve forgotten what the early days of the computer industry were like.  When I started working, most software was custom, or was tied to a piece of hardware.  My mother worked as the executive director for the American Association of Physicists in Medicine.  When she started working there (in the early 1980’s), most of the word processing was done on old Wang word processors.  These were dedicated machines that did one thing – they ran a custom word processing application that Wang wrote to go with the machine.  If you wanted to computerize the records of your business, you had two choices: You could buy a minicomputer and pay a programmer several thousand dollars to come up with a solution that exactly met your business needs.  Or you could buy a pre-packaged solution for that minicomputer.  That solution would also cost several thousand dollars, but it wouldn’t necessarily meet your needs.

    A large portion of the reason that these solutions were so expensive is that the hardware cost was so high.  The general purpose computers that were available cost tens or hundreds of thousands of dollars and required expensive facilities to manage.  So there weren’t many of them, which means that companies like Unilogic (makers of the Scribe word processing software, written by Brian Reid) charged hundreds of thousands of dollars for installations and tightly managed their code – you bought a license for the software that lasted only a year or so, after which you had to renew it (it was particularly ugly when Scribe’s license ran out (it happened at CMU once by accident) – the program would delete itself off the hard disk).

    PC’s started coming out in the late 1970’s, but there weren’t that many commercial software packages available for them.  One problem developers encountered was that the machines had limited resources, but beyond that, software developers had to write for a specific platform – the hardware was different for all of these machines, as was the operating system, and introducing a new platform linearly increases the amount of testing required.  If it takes two testers to test one platform, it’ll take four testers to test two platforms, six testers to test three platforms, etc. (this isn’t totally accurate, there are economies of scale, but in general the principle applies – the more platforms you support, the more test resources you require).

    There WERE successful business solutions for the early PCs, Visicalc first came out for the Apple ][, for example.  But they were few and far between, and were limited to a single hardware platform (again, because the test and development costs of writing to multiple platforms are prohibitive).

    Then the IBM PC came out, with a documented hardware design (it wasn’t really open like “open source”, since only IBM contributed to the design process, but it was fully documented).  And with the IBM PC came a standard OS platform, MS-DOS (actually IBM offered three or four different operating systems, including CP/M and the UCSD P-system, but MS-DOS was the one that took off).  Visicalc, btw, was one of the first applications ported to MS-DOS; it was ported to DOS 2.0.  But it wasn’t until 1983ish, with the introduction of Lotus 1-2-3, that the PC was seen as a business tool and people flocked to it. 

    But the platform still wasn’t completely stable.  The problem was that while MS-DOS did a great job of virtualizing the system storage (with the FAT filesystem), keyboard, and memory, it did a lousy job of providing access to the screen and printers.  The only built-in support for the screen was a simple teletype-like console output mechanism.  The only way to get color output or the ability to position text on the screen was to load a replacement console driver, ANSI.SYS.

    Console output through ANSI.SYS was slow, and most ISVs (like Lotus) weren’t willing to deal with that performance issue, so they started writing directly to the video hardware.  On the original IBM PC, that wasn’t that big a deal – there were two choices, CGA or MDA (Color Graphics Adapter and Monochrome Display Adapter).  Two choices, two code paths to test.  So the test cost was manageable for most ISVs.  Of course, the hardware world didn’t stay still.  Hercules came out with their graphics adapter for the IBM monochrome monitor.  Now we have three paths.  Then IBM came out with the EGA and VGA.  Now we have FIVE paths to test.  Most of these were compatible with the basic CGA/MDA, but not all, and they all had different ways of providing their enhancements.  Some had some “unique” hardware features, like the write-only hardware registers on the EGA.

    At the same time as these display adapter improvements were coming, disks were also improving – first 5 ¼ inch floppies, then 10M hard disks, then 20M hard disks, then 30M.  And system memory increased from 16K to 32K to 64K to 256K to 640K.  Throughout all of it, the MS-DOS filesystem and memory interfaces continued to provide a consistent API to code to.  So developers continued to write to the MS-DOS filesystem APIs and grumbled about the costs of testing the various video combinations.

    But even so, vendors flocked to MS-DOS.  The combination of a consistent hardware platform and a consistent software interface to that platform was an unbelievably attractive combination.  At the time, the major competition to MS-DOS was Unix and the various DR-DOS variants, but none of them provided the same level of consistency.  If you wanted to program to Unix, you had to choose between Solaris, 4.2BSD, AIX, IRIX, or any of the other variants.  Each of which was a totally different platform.  Solaris’ signals behaved subtly differently from AIX’s, etc.  Even though the platforms were ostensibly the same, there were enough subtle differences that you either wrote for only one platform, or you took on the burden of running the complete test matrix on EVERY version of the platform you supported.  If you ever look at the source code to an application written for *nix, you can see this quite clearly – there are literally dozens of conditional compilation options for the various platforms.

    On MS-DOS, on the other hand, if your app worked on an IBM PC, your app worked on a Compaq.  Because of the effort put forward to ensure upwards compatibility of applications, if your application ran on DOS 2.0, it ran on DOS 3.0 (modulo some minor issues related to FCB I/O).  Because the platforms were almost identical, your app would continue to run.   This commitment to platform stability has continued to this day – Visicalc from DOS 2.0 still runs on Windows XP.

    This meant that you could target the entire ecosystem of IBM PC compatible hardware with a single test pass, which significantly reduced your costs.   You still had to deal with the video and printer issue however.

    Now along came Windows 1.0.  It virtualized the video and printing interfaces providing, for the first time, a consistent view of ALL the hardware on the computer, not just disk and memory.  Now apps could write to one API interface and not worry about the underlying hardware.  Windows took care of all the nasty bits of dealing with the various vagaries of hardware.  This meant that you had an even more stable platform to test against than you had before.  Again, this is a huge improvement for ISV’s developing software – they no longer had to wonder about the video or printing subsystem’s inconsistencies.

    Windows still wasn’t an attractive platform to build on, since it had the same memory constraints as DOS had.  Windows 3.0 fixed that, allowing for a consistent API that finally relieved the 640K memory barrier.

    Fast forward to 1993 – NT 3.1 comes out providing the Win32 API set.  Once again, you have a consistent set of APIs that abstracts the hardware and provides a consistent API set.  Win9x, when it came out, continued the tradition.  Again, the API is consistent.  Apps written to Win32s (the subset of Win32 intended for Win 3.1) still run on Windows XP without modification.  One set of development costs, one set of test costs.  The platform is stable.  With the Unix derivatives, you still had to either target a single platform or bear the costs of testing against all the different variants.

    In 1995, Sun introduced its new Java technology to the world.  Its biggest promise was that it would, like Windows, deliver platform stability.  In addition, it promised cross-operating system stability.  If you wrote to Java, you’d be guaranteed that your app would run on every JVM in the world.  In other words, it would finally provide application authors the same level of platform stability that Windows provided, and it would go Windows one better by providing the same level of stability across multiple hardware and operating system platforms.

    In Jim Gosling’s post, he’s just expressing his frustration with the fact that Linux isn’t a completely stable platform.  Since Java is supposed to provide a totally stable platform for application development, Java needs to smooth out the differences between operating systems, just like Windows needs to smooth out the differences between the hardware on the PC.

    The problem is that Linux platforms AREN’T totally stable.  While the kernel might be the same on all distributions (and it’s not, since different distributions use different versions of the kernel), the other applications that make up the distribution might not be.  Java needs to be able to smooth out ALL the differences in the platform, since its bread and butter is providing a stable platform.  If some Java facilities require things outside the basic kernel, then they’ve got to deal with all the vagaries of the different versions of the external components.  As Jim commented, “They are all close, but not the same.”  These differences aren’t that big a deal for someone writing an open source application, since the open source methodology fights against packaged software development.  Think about it: How many non open-source software products can you name that are written for open source operating systems?  What distributions do they support?  Does Oracle support Linux distributions other than Red Hat Enterprise?  The reason that there are so few is that the cost of development for the various “Linux” derivatives is close to prohibitive for most shrink-wrapped software vendors; instead they pick a single distribution and use that (thus guaranteeing a stable platform).

    For open source applications, the cost of testing and support is pushed from the developer of the package to the end-user.  It’s no longer the responsibility of the author of the software to guarantee that their software works on a given customer’s machine; since the customer has the source, they can fix the problem themselves.

    In my honest opinion, platform stability is the single biggest thing that Microsoft’s monoculture has brought to the PC industry.  Sure, there’s a monoculture, but that means that developers only have to write to a single API.  They only have to test on a single platform.  The code that works on a Dell works on a Compaq, works on a Sue’s Hardware Special.  If an application runs on Windows NT 3.1, it’ll continue to run on Windows XP.

    And as a result of the total stability of the platform, a vendor like Lotus can write a shrink-wrapped application like Lotus 1-2-3 and sell it to hundreds of millions of users and be able to guarantee that their application will run the same on every single customer’s machine. 

    What this does is to allow Lotus to reduce the price of their software product.  Instead of a software product costing tens of thousands of dollars, software costs have fallen to the point where you can buy a fully featured word processor for under $130.  

    Without this platform stability, the testing and development costs go through the roof, and software costs escalate enormously.

    When I started working in the industry, there was no volume market for fully featured shrink wrapped software, which meant that it wasn’t possible to amortize the costs of development over millions of units sold. 

    The existence of a stable platform has allowed the industry to grow and flourish.  Without a stable platform, development and test costs would rise and those costs would be passed onto the customer.

    Having a software monoculture is NOT necessarily an evil. 

  • Larry Osterman's WebLog

    Why should I even bother to use DLL's in my system?

    • 9 Comments

    At the end of this blog entry, I mentioned that when I drop a new version of winmm.dll on my machine, I need to reboot it.  Cesar Eduardo Barros asked:

    Why do you have to reboot? Can't you just reopen the application that's using the dll, or restart the service that's using it?

    It turns out that in my case, it’s because winmm’s listed in the “Known DLLs” for Longhorn.  And Windows treats “KnownDLLs” as special – if a DLL is a “KnownDLL” then it’s assumed to be used by lots of processes, and it’s not reloaded from the disk when a new process is created – instead the pages from the existing DLL are just remapped into the current process.

    But that and a discussion on an internal alias got me to thinking about DLL’s in general.  This also came up during my previous discussion about the DLL C runtime library.

    At some point in the life of a system, you decide that you’ve got a bunch of code that’s being used in common between the various programs that make up the system. 

    Maybe that code’s only used in a single app – one app, 50 instances.

    Maybe that code’s used in 50 different apps – 50 apps, one instance.

    In the first case, it really doesn’t matter if you refactor the code into a separate library or not.  You’ll get code sharing regardless.

    In the second case, however, you have two choices – refactor the code into a library, or refactor the code into a DLL.

    If you refactor the code into a library, then you’ll save in complexity because the code will be used in common.  But you WON’T gain any savings in memory – each application will have its own set of pages dedicated to the contents of the shared library.

    If, on the other hand, you decide to refactor the code into its own DLL, then you will still save in complexity, and you get the added benefit that the working set of ALL 50 applications is reduced – the pages occupied by the code in the DLL are shared between all 50 instances.

    You see, NT's pretty smart about DLL's (this isn’t unique to NT btw; most other operating systems that implement shared libraries do something similar).  When the loader maps a DLL into memory, it opens the file, and tries to map that file into memory at its preferred base address.  When this happens, memory management just says “The memory from this virtual address to this other virtual address should come from this DLL file”, and as the pages are touched, the normal paging logic brings them into memory.

    When another process loads the DLL, the loader checks whether the DLL’s pages are already in memory.  If they are, it doesn't go to disk to get the pages; it just remaps the pages from the existing file into the new process.  It can do this because the relocation fixups have already been fixed up (the relocation fixup table is basically a table within the executable that contains the address of every absolute jump in the code for the executable – when an executable is loaded in memory, the loader patches up these addresses to reflect the actual base address of the executable), so absolute jumps will work in the new process just like they would in the old.  The pages are backed by the file containing the DLL - if a page containing the DLL’s code is ever discarded from memory, it will simply be reloaded from the DLL file. 

    If the preferred address range for the DLL isn’t available, then the loader has to do more work.  First, it maps the pages from the DLL into the process at a free location in the address space.  It then marks all the pages as copy-on-write so it can perform the fixups without messing up the pristine copy of the DLL (it wouldn’t be allowed to write to the pristine copy of the DLL anyway).  It then proceeds to apply all the fixups to the DLL, which causes a private copy of the pages containing fixups to be created – and thus there can be no sharing of the pages which contain fixups.

    This causes the overall memory consumption of the system to go up.   What’s worse, the fixups are performed every time the DLL is loaded at an address other than its preferred address, which slows down process launch time.
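The fixup pass described above can be sketched as a toy model (hypothetical names, nothing like the real NT loader): every absolute address recorded in the relocation table is adjusted by the delta between the preferred and actual base addresses, and writing those adjustments is exactly what forks the copy-on-write pages.

```python
def apply_fixups(image, relocations, preferred_base, actual_base):
    """Toy model of base relocation.  `image` maps an offset within the
    DLL to the absolute address stored at that offset; `relocations` lists
    the offsets that hold absolute addresses and need patching."""
    delta = actual_base - preferred_base
    if delta == 0:
        return image          # loaded at the preferred base: no pages touched
    patched = dict(image)     # the write forks a private copy of the page
    for offset in relocations:
        patched[offset] += delta
    return patched

# A DLL with preferred base 0x40000, relocated to 0x50000: every stored
# absolute address is shifted up by the 0x10000 delta.
image = {0x10: 0x40123, 0x20: 0x40456}
print(apply_fixups(image, [0x10, 0x20], 0x40000, 0x50000))
```

When the delta is zero the image is returned untouched, which mirrors why loading at the preferred base keeps the code pages shareable.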

    One way of looking at it is to consider the following example.  I have a DLL.  It’s a small DLL; it’s only got three pages in it.  Page 1 is data for the DLL, page 2 contains resource strings for the DLL, and page 3 contains the code for the DLL.  Btw, DLL’s this small are, in general, a bad idea.  I was recently enlightened by some of the Office guys as to exactly how bad this is; at some point I’ll write about it (assuming that Raymond or Eric don’t beat me to it).

    The DLL’s preferred base address is at 0x40000 in memory.  It’s used in two different applications.  Both applications are based starting at 0x10000 in memory, the first one uses 0x20000 bytes of address space for its image, the second one uses 0x40000 bytes for its image.

    When the first application launches, the loader opens the DLL, maps it into its preferred address.  It can do it because the first app uses between 0x10000 and 0x30000 for its image.  The pages are marked according to the protections in the image – page 1 is marked copy-on-write (since it’s read/write data), page 2 is marked read-only (since it’s a resource-only page) and page 3 is marked read+execute (since it’s code).  When the app runs, as it executes code in the 3rd page of the DLL, the pages are mapped into memory.  The instant that the DLL writes to its data segment, the first page of the DLL is forked – a private copy is made in memory and the modifications are made to that copy. 

    If a second instance of the first application runs (or another application runs that also can map the DLL at 0x40000), then once again the loader maps the DLL into its preferred address.  And again, when the code in the DLL is executed, the code page is loaded into memory.  And again, the page doesn’t have to be fixed up, so memory management simply uses the physical memory that contains the page that’s already in memory (from the first instance) into the new application’s address space.  When the DLL writes to its data segment, a private copy is made of the data segment.

    So we now have two instances of the first application running on the system.  The space used for the DLL is consuming 4 pages (roughly, there’s overhead I’m not counting).  Two of the pages are the code and resource pages.  The other two are two copies of the data page, one for each instance.

    Now let’s see what happens when the second application (the one that uses 0x40000 bytes for its image) runs.  The loader can’t map the DLL to its preferred address (since the second application occupies from 0x10000 to 0x50000).  So the loader maps the DLL into memory at (say) 0x50000.  Just like the first time, it marks the pages for the DLL according to the protections in the image, with one huge difference: since the code pages need to be relocated, they’re ALSO marked copy-on-write.  And then, because it knows that it wasn’t able to map the DLL into its preferred address, the loader patches all the relocation fixups.  These writes cause memory management to create a private copy of the page that contains the code.  After the fixups are done, the loader restores the page protection to the value marked in the image.  Now the code starts executing in the DLL.  Since it’s already been mapped into memory (when the relocation fixups were done), the code is simply executed.  And again, when the DLL touches the data page, a new copy is created for the data page.

    Once again, we start a second instance of the second application.  Now the DLL’s using 5 pages of memory – there are two copies of the code page, one for the resource page, and two copies of the data page.  All of which are consuming system resources.

    One thing to keep in mind is that the physical memory page that backs the resource page in the DLL is kept in common among all the instances, since there are no relocations to that page, and the page contains no writable data - thus the page is never modified.

    Now imagine what happens when we have 50 copies of the first application running.  There are 52 pages in memory consumed by the DLL – 50 pages for the DLL’s data, one for the code, and one for the resources.

    And now consider what happens if we have 50 copies of the second application running.  Now we get 101 pages in memory, just from this DLL!  We’ve got 50 pages for the DLL’s data, 50 pages for the relocated code, and still the one remaining for the resources.  Twice the memory consumption, just because the DLL wasn’t rebased properly.
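The page arithmetic in the last few paragraphs can be checked with a tiny model.  This is an illustrative sketch (the `dll_pages` helper is hypothetical), assuming the three-page DLL from the example: one copy-on-write data page, one read-only resource page, and one code page.

```python
def dll_pages(instances, loads_at_preferred_base):
    """Count physical pages consumed by the example 3-page DLL across
    `instances` running copies of an application that uses it."""
    data = instances      # data is copy-on-write: one private page per instance
    resources = 1         # never written, never relocated: always shared
    # Code is shared only when the DLL loads at its preferred base; the
    # relocation fixups otherwise fork a private code page per instance.
    code = 1 if loads_at_preferred_base else instances
    return data + resources + code

print(dll_pages(50, True))    # the first application: code page shared
print(dll_pages(50, False))   # the second application: code page forked
```

The model reproduces the counts in the text: 52 pages when the DLL loads at its preferred base, 101 when every instance has to relocate it.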

    This increase in physical memory isn’t usually a big deal when it happens only once. If, on the other hand, it happens a lot, and you don’t have the physical RAM to accommodate it, then you’re likely to start to page.  And that can result in “significantly reduced performance” (see this entry for details of what can happen if you page on a server).

    This is why it's so important to rebase your DLL's - it guarantees that the pages in your DLL will be shared across processes.  This reduces the time needed to load your process and means your process working set is smaller.   For NT, there’s an additional advantage – we can tightly pack the system DLL’s together when we create the system.  This means that the system consumes significantly less of the application’s address space.  And on a 32 bit processor, application address space is a precious commodity (I never thought I’d ever write that an address space that spans 2 gigabytes would be considered a limited resource, but...).

    This isn’t just restricted to NT by the way.  Exchange has a script that’s run on every build that knows what DLLs are used in what processes, and it rebases the Exchange DLL’s so that they fit into unused slots regardless of the process in which the DLL is used.  I’m willing to bet that SQL server has something similar.

    Credits: Thanks to Landy, Rick, and Mike for reviewing this for technical accuracy (and hammering the details through my thick skull).  I owe you guys big time.

     

  • Larry Osterman's WebLog

    How to stop politicians from asking you for money

    • 8 Comments

    Many years ago, Valorie and I gave money to a gun-control initiative here in Washington State (a friend and former boss was heavily involved in the campaign).  It went down in flames, but not before I got put on the Democratic Party's mailing list as someone who's a likely donor (although I don't understand entirely why donating to a gun control initiative necessarily marks me as a Democrat).

    Well, about 6 months later, at dinner time, the phone rings.  For whatever reason, I answered it.

    It was Jay Inslee, our congressman, asking for money.   I like Jay, I think he's done a pretty good job in Washington (I also liked his Republican predecessor, but who's counting).

    Before he got very far into his pitch, I cut him off (yeah, I'm rude like that - cutting off a U.S. Congressman, but w.t.h. he interrupted my dinner).

    “Jay, if you ever call my house again, I'm immediately going to make a donation to your opponent.”

    He's never called my house again, neither has any other candidate.

     

  • Larry Osterman's WebLog

    Impersonation and named pipes.

    • 2 Comments

    Someone asked on an internal mailing list why the documentation of security impersonation levels has the following quote:

    When the named pipe, RPC, or DDE connection is remote, the flags passed to CreateFile to set the impersonation level are ignored. In this case, the impersonation level of the client is determined by the impersonation levels enabled by the server, which is set by a flag on the server's account in the directory service. For example, if the server is enabled for delegation, the client's impersonation level will also be set to delegation even if the flags passed to CreateFile specify the identification impersonation level.

     

    The reason’s actually fairly simple:  The CIFS/SMB protocol doesn’t have the ability to track the user’s identity dynamically (a capability known as Dynamic Quality of Service, or Dynamic QOS).  As a result, the identity of the user performing an operation on a networked named pipe is set when the pipe is created, and is essentially fixed for the lifetime of the pipe. 

     

    If the application impersonates another user’s token after opening the pipe, the impersonation is ignored (because there’s no way of informing the server that the user’s identity has changed).

     

    Of course, if you’re impersonating another user when you call CreateFile, then that user’s identity will be used when opening the remote named pipe, so you still have some ability to impersonate other users - it’s just not as flexible as it could be.

     

     

  • Larry Osterman's WebLog

    SpaceShipOne pictures

    • 2 Comments

    I don’t normally do “me too” posts, but Robert Scoble posted this link to some truly amazing pictures of SpaceShipOne’s first flight and I wanted to share :) 

  • Larry Osterman's WebLog

    What's wrong with this code, Part 3: The answers

    • 7 Comments

    In yesterday’s post, there was one huge, glaring issue:  It completely ignored internationalization (or i18n in “internet-lingo”).

    The first problem occurs in the very first line of the routine:

                      if (string1.Length != string2.Length)

    The problem is that two strings can be equal even if their lengths aren’t!   My favorite example of this is the German sharp-s character, which is used in the German word straße.  The length of straße is 6 characters, and the length of the upper-case form of the word (STRASSE) is 7 characters.  These strings are considered equal in German, even though the lengths don’t match.
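    This length trap is easy to reproduce outside of .NET as well; for instance, Python 3’s full Unicode case mappings show it directly (a minimal sketch in Python, standing in for the post’s C#):

```python
# The German sharp-s upper-cases to "SS", so the strings'
# lengths differ even though they are case-insensitively equal.
lower = "straße"
upper = lower.upper()          # full case mapping turns ß into SS

print(len(lower))              # 6
print(len(upper))              # 7
print(upper == "STRASSE")      # True
print(lower.casefold() == "STRASSE".casefold())  # True
```

    An early `len()` check would have declared these strings unequal before any real comparison ran.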

    The next problem occurs 2 lines later:

                      string upperString1 = string1.ToUpper();

                      string upperString2 = string2.ToUpper();

    The call to ToUpper() doesn’t specify a culture.  Without a specific culture, String.ToUpper uses the current culture, which may or may not be the culture that created the string.  This can create some truly unexpected results, like the way that Turkish treats the letter “I” as Tim Sneath pointed out in this article. 

    Even if you call String.ToUpper() BEFORE making the length check, it’s still not correct.  If you call String.ToUpper() on a sharp-s character, you get a sharp-s character back, so the lengths of the strings don’t change when you upper-case them.

    The good news is that the .NET Framework provides a culture-neutral culture, System.Globalization.CultureInfo.InvariantCulture.  The invariant culture is similar to the English language, but it’s not associated with any region, so the rules are consistent regardless of the current UI culture.  This allows you to avoid the security bug that Tim pointed out in his article, and allows you to have deterministic results.  This is particularly important if you’re comparing strings against constants (like if you’re checking arguments to a function).  If you call String.Compare(userString, “file:”, true) without specifying a culture, then as Tim pointed out, if the userString contains one of the Turkish ‘I’ characters, you won’t match correctly.  If you use the invariant culture you will.
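    The same "compare against a constant without consulting the current locale" idea can be sketched in Python, where str.casefold() plays a role analogous to an invariant-culture, case-insensitive comparison (the function name here is just for illustration):

```python
# Locale-independent, case-insensitive prefix check against a
# constant.  str.casefold() never consults the current locale,
# so changing the user's locale cannot change the result.
def has_file_prefix(user_string: str) -> bool:
    return user_string.casefold().startswith("file:")

print(has_file_prefix("FILE:secret.txt"))  # True
print(has_file_prefix("http://example"))   # False
```

    The point is determinism: a security check like this must give the same answer on every machine, whatever culture the current thread happens to be running under.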

    As an interesting piece of trivia that I learned while writing the test app for this ‘blog entry: System.String.Compare(“straße”, “STRASSE”, true, System.Globalization.CultureInfo.InvariantCulture) reports the strings as equal (it returns 0) :).

    Btw, in case it wasn’t obvious, the exact same issue exists in unmanaged code (I chose managed code for the example because it was simpler).  If you use lstrcmpi to compare two strings, you’ll have exactly the same internationalization problem: the lstrcmpi routine compares strings in the system locale, which is not likely to be the locale you want.  To get the invariant locale, you want to use CompareStringW, specifying the LOCALE_INVARIANT locale and NORM_IGNORECASE.  Please note that LOCALE_INVARIANT isn’t described on the CompareString API page; it’s documented in the table of language identifiers and in the MAKESORTLCID macro documentation.

    The bottom line: Internationalization is HARD.  You can follow some basic rules to make yourself safe, but… 

    Now for kudos:

    Francois’ answer was the most complete of the responses in the forum; he correctly pointed out that my routine compares only code points (the individual characters) and not the strings as a whole.   He also picked up on the culture issue.

    Mike Dunn’s answer was the first to pick up on the first error; he correctly realized that you can’t do the early length comparison.

    Carlos pointed out the issue of characters that form ligatures, like the Unicode Combining Diacritical characters (props to charmap for giving me the big words that people use to describe things).  The sharp-s character is another example.  In English, the ligatures used for fi, fl, ff, and ffi are another example of combining characters (Unicode defines presentation forms for these ligatures at U+FB00 through U+FB04; some system fonts also place ligature glyphs in the private use area).
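    Carlos’s combining-character point can be demonstrated with Unicode normalization (again a small Python sketch, standing in for the post’s C#): a precomposed character and its base-plus-combining-mark form compare unequal code point by code point until you normalize them to a common form.

```python
import unicodedata

precomposed = "\u00e9"        # é as a single code point
combining = "e\u0301"         # e followed by COMBINING ACUTE ACCENT

print(precomposed == combining)           # False: different code points
print(len(precomposed), len(combining))   # 1 2

# Normalizing both strings to NFC (or NFD) makes a code-point
# comparison agree with what a reader actually sees.
nfc_a = unicodedata.normalize("NFC", precomposed)
nfc_b = unicodedata.normalize("NFC", combining)
print(nfc_a == nfc_b)                     # True
```

    A character-by-character loop like the one in my routine sees two different strings here, even though they render identically.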

    Anon pointed out the Turkish i/I issue.

    Sebastien Lambla pointed out the issue of invariant cultures explicitly.

    Non issues:

    Several readers pointed out that I don’t check parameters for null.  That was intentional (ok, not really, but I don’t think it’s wrong).  In my mind, passing null strings is an error on the part of the caller, and the only reasonable answer is for the routine to throw an exception; checking the parameters for null would just let me throw a different exception.  The routine could also be defined so that a non-null string compared with a null string is unequal, but that’s not what I chose (this is fundamentally stricmp()).

    Also, some people believed that the string comparison routine should ignore whitespace; I’m not sure where that came from :).

    Edit: Fixed stupid typo in the length of german strings.  I can't count :)

    Edit2: tis->this.  It's not a good day :)

     

  • Larry Osterman's WebLog

    What's wrong with this code, part 3

    • 44 Comments

    This time, let’s consider the following routine used to determine whether two strings are equal (case-insensitively).  The code’s written in C#, if it’s not obvious.

                static bool CompareStrings(String string1, String string2)
                {
                      //
                      //    Quick check to see if the strings length is different.  If the length is different, they are different.
                      //
                      if (string1.Length != string2.Length)
                      {
                            return false;
                      }

                      //
                      //    Since we're going to be doing a case insensitive comparison, let's upper case the strings.
                      //
                      string upperString1 = string1.ToUpper();
                      string upperString2 = string2.ToUpper();
                      //
                      //    And now walk through the strings comparing the characters to see if they match.
                      //
                      for (int i = 0 ; i < string1.Length ; i += 1)
                      {
                            if (upperString1[i] != upperString2[i])
                            {
                                  return false;
                            }
                      }
                      return true;
                }

    Yes, the code is less efficient than it could be, but there’s a far more fundamental issue with the code.  Your challenge is to determine what is incorrect about the code.

    Answers (and of course kudos to those who found the issues) tomorrow.

     

  • Larry Osterman's WebLog

    Warning: Minor Political Incorrectness Within...

    • 10 Comments

    Feel free to skip this...

    I was driving home from work today, and noticed something I had never seen before.

    On the other side of the street was a jogger.

    Nothing new there, there are joggers all over Microsoft.  Until I noticed that he had a white cane with a red tip sweeping in front of him.

    Yup, a blind jogger, running.  Next to a major street, assisted only by his cane.

    Pretty cool, IMHO.

     
