July, 2004

Larry Osterman's WebLog

Confessions of an Old Fogey

    The Computer Ate My Vote

    • 15 Comments

    I ran into a radio ad for True Majority on my way into work today.  True Majority is an online liberal advocacy group founded by Ben Cohen (of Ben&Jerry’s).  The ad was for a project of theirs called “The Computer Ate My Vote”.  Unfortunately, I can’t find a copy of their ad online so I can link to it, but it was fascinating.

    While I have severe issues with electronic voting, especially as it’s practiced in America today, I found the tone of their ad disturbing.  The central premise was that EVERYONE knows that computers are unreliable, and thus that electronic voting without a paper trail is inherently unreliable.

    As I said, I support the principle that electronic voting is dangerous.  I love the fact that my precinct votes with a scan card – the card is scanned in at the voting place, and validated and tallied there.  But the card itself is stored in a vault inside the scanner and can be re-tallied afterwards if a recount is needed.

    I do have issues however with the fear mongering about the <scaaaaarry music>“evil computer that’s going to eat your vote”</scaaaaarry music>.  How often do you have computers eat your data nowadays?  I haven’t had a Word or Outlook crash that ate data in YEARS.  Computers are VERY reliable these days; the issues with electronic voting, from my standpoint, are that there is no recountability and that the systems are hacker-prone, NOT that computers per se shouldn’t be involved in voting.  As I said – my precinct votes electronically, WITH a paper trail that I authored.

    I do feel obligated to point out one of the cooler things about the ad: they targeted it at a particular county, and at the chief elections official in that county, by name!  So it wasn’t a shotgun ad; it was carefully targeted.  A nice touch.

     


    Just ran into this on /.: Amit Singh's essay on computer security.

    • 0 Comments

    Which can be found here.

    A truly fascinating read.  I just had a quick read-through this morning, but I definitely want to go back later today and read it in more detail.

    From what I saw earlier, no FUD, just facts.  Very nicely done.

     


    How do I open ports in the Windows Firewall, part 3

    • 14 Comments

    This is the 3rd and final article in my discussion of how the WMC product opened holes in the Windows firewall to enable the WMC clients to access the WMC HTTP server.

    In my last article, I had found an INetConnection object, which had a “guidId” property that I thought might be useful when trying to associate an INetConnection object with an IP address.

    I dug a bit deeper through the SDK, and I discovered that the IP_ADAPTER_ADDRESSES structure contained a string “AdapterName”.  It turns out that the “AdapterName” field is a string-ized representation of the GUID used in the INetConnection!  And, since the IP_ADAPTER_ADDRESSES structure contains the IP address of the connection, I can use that to see if the adapter is associated with the right IP address.

    Once I’d found the right IP_ADAPTER_ADDRESSES entry, all I’d need to do is to call CLSIDFromString on the AdapterName field of the IP_ADAPTER_ADDRESSES structure, and I’d have the GUID I needed. 

    Well, to get an IP_ADAPTER_ADDRESSES structure, I need to call the GetAdaptersAddresses API, which returns a linked list of IP_ADAPTER_ADDRESSES structures.  Now I had all the pieces I needed to pull all of this together.

    For a given IP address, I called into a routine GetAdapterGuid, which called GetAdaptersAddresses.  It then matched the FirstUnicastAddress field in each IP_ADAPTER_ADDRESSES structure against the IP address specified and, if they matched, returned the GUID in the “AdapterName” field of the IP_ADAPTER_ADDRESSES structure.
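
    Here’s a rough sketch of what GetAdapterGuid looked like.  This is my reconstruction from the description above, not the actual WMC source (the routine’s exact signature and the error handling are mine):

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <iphlpapi.h>
    #include <objbase.h>
    #include <stdlib.h>

    // Reconstruction of the GetAdapterGuid helper described above: walk the
    // adapter list, match the requested IPv4 address against each adapter's
    // unicast addresses, then convert the string-ized GUID in AdapterName
    // into a real GUID with CLSIDFromString.
    HRESULT GetAdapterGuid(const IN_ADDR &ipAddress, GUID *adapterGuid)
    {
        ULONG size = 0;
        GetAdaptersAddresses(AF_INET, 0, NULL, NULL, &size);
        PIP_ADAPTER_ADDRESSES addresses = (PIP_ADAPTER_ADDRESSES)malloc(size);
        if (addresses == NULL)
        {
            return E_OUTOFMEMORY;
        }
        HRESULT hr = HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
        if (GetAdaptersAddresses(AF_INET, 0, NULL, addresses, &size) == ERROR_SUCCESS)
        {
            for (PIP_ADAPTER_ADDRESSES adapter = addresses;
                 adapter != NULL && FAILED(hr);
                 adapter = adapter->Next)
            {
                for (PIP_ADAPTER_UNICAST_ADDRESS unicast = adapter->FirstUnicastAddress;
                     unicast != NULL;
                     unicast = unicast->Next)
                {
                    sockaddr_in *address = (sockaddr_in *)unicast->Address.lpSockaddr;
                    if (address->sin_addr.s_addr == ipAddress.s_addr)
                    {
                        // AdapterName is an ANSI "{...}" GUID string; widen it
                        // so CLSIDFromString can parse it.
                        WCHAR wideName[64];
                        MultiByteToWideChar(CP_ACP, 0, adapter->AdapterName, -1, wideName, 64);
                        hr = CLSIDFromString(wideName, adapterGuid);
                        break;
                    }
                }
            }
        }
        free(addresses);
        return hr;
    }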

    Now that I had the adapter GUID, I called into the INetSharingManager object to enumerate all the connections, and found the connection that corresponded to the adapter GUID I was looking for.

    And once I had the INetConnection that matched my IP address, I asked the sharing manager for the INetSharingConfiguration that corresponded to that INetConnection object, and added the port mapping for my port to that IP address.

    And I was done!  A whole lot of work, and a pretty squirrelly API but it got the job done.

    Please note: If I were using the INetFw APIs that were added for XP SP2, this process would have been much easier.  The new firewall API is documented here.  Using the new API, I could just CoCreateInstance an INetFwOpenPort, set the IP addresses for our local subnets (see the initial post for a list of the local subnets) as the remote addresses, set the port and service, and add the INetFwOpenPort to the INetFwOpenPorts collection.  And I’d be done.  The new API also gives you significantly more control over opening ports in the firewall than the old one did, and it appears to be a far more pleasant API to use.  There’s even an example of opening ports in the “Exercising the firewall” C++ sample.  You can download the firewall SDK here.
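
    To give a feel for how much simpler that is, here’s a minimal sketch of the XP SP2 approach (again my own reconstruction, not shipping code – the port is whatever you need to open, and error handling is elided):

    #include <windows.h>
    #include <netfw.h>

    // Open a TCP port through the XP SP2 firewall, scoped to the local
    // subnet, using the new INetFwOpenPort API described above.
    HRESULT OpenFirewallPort(LONG portNumber, BSTR portName)
    {
        // Create and configure the port entry.
        INetFwOpenPort *openPort = NULL;
        CoCreateInstance(__uuidof(NetFwOpenPort), NULL, CLSCTX_INPROC_SERVER,
                         __uuidof(INetFwOpenPort), (void **)&openPort);
        openPort->put_Port(portNumber);
        openPort->put_Protocol(NET_FW_IP_PROTOCOL_TCP);
        openPort->put_Name(portName);

        // "LocalSubnet" restricts the opening to local subnet addresses,
        // replacing the manual address filtering from the first post.
        BSTR remoteAddresses = SysAllocString(L"LocalSubnet");
        openPort->put_RemoteAddresses(remoteAddresses);
        SysFreeString(remoteAddresses);

        // Navigate to the current profile's globally open ports and add it.
        INetFwMgr *manager = NULL;
        INetFwPolicy *policy = NULL;
        INetFwProfile *profile = NULL;
        INetFwOpenPorts *openPorts = NULL;
        CoCreateInstance(__uuidof(NetFwMgr), NULL, CLSCTX_INPROC_SERVER,
                         __uuidof(INetFwMgr), (void **)&manager);
        manager->get_LocalPolicy(&policy);
        policy->get_CurrentProfile(&profile);
        profile->get_GloballyOpenPorts(&openPorts);
        HRESULT hr = openPorts->Add(openPort);

        openPorts->Release();
        profile->Release();
        policy->Release();
        manager->Release();
        openPort->Release();
        return hr;
    }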

     


    It's Opening Night Tomorrow!

    • 1 Comments

    Tomorrow evening, Daniel opens in his first professionally produced show!  He’s appearing in SCT’s summer season production of Joseph and the Amazing Technicolor Dreamcoat. 

    Daniel was one of several thousand kids who auditioned for about a hundred spots in the summer session; we were SO proud when he got his part.

    And to cap it off, Joseph’s been one of my favorite shows since I first did it as a choral piece back in 1970 (yeah, 34 years ago).  This is Daniel’s second time appearing in the show (he did it at his school over the summer last year). 

    I cannot say too much about the Seattle Children’s Theater summer season.  We saw Captain Blood and The Arrogant Kickapod there last Friday; they were awesome, and I’m looking forward to seeing the rest of the summer season.  These are NOT just kid shows; they’re really very high quality productions.  The Arrogant Kickapod was an absolute hoot (think Molière’s Tartuffe, set in a summer camp, with music from a Frankie & Annette movie).

    We’re also really looking forward to seeing Family Game Night this summer, since Daniel was one of the 5 kids who created the show (he was in a class called “Original Works” where the class collaboratively created the play, and Don Fleming, the instructor, polished their ideas into a stageplay).  Daniel was really bummed that he didn’t get a part in that one (since he created the part of Stevie the evil hacker), but he’s pretty stoked about Joseph anyway.

    If you’re interested in seeing some really awesome theater, done by kids, you can get tickets at the SCT box office, or here.

     


    Blogging messes up my IE address bar.

    • 13 Comments

    When writing my blog, I end up putting a lot of links to stuff on the net.  To get the links, I navigate to the page, and select the link in the IE address bar and cut&paste it into Outlook’s compose form (sometimes I need to open in a new window to ensure that I have a good link, but…).  It works pretty well, except for one drawback…

    Apparently, when you highlight a URL and copy it to the clipboard, the control that IE uses for the “Address” combo box decides that the copy-to-clipboard action was SO important that it needs to add the highlighted URL to the list of sites that drops down when you open the combo box.

    Since I use that combo box as a sort of “shorter-than-favorites” bookmark list, it means that the list quickly fills up with junky URLs that I don’t care about.  It can get very frustrating, especially since the list in the combo box is so short (about 40 entries) – the sites I care about quickly drop off the list, and the address bar combo box becomes useless.

    FWIW, Firefox doesn’t do this, but I’m not yet ready to switch…

     

     


    How do I open ports in the Windows Firewall, part 2

    • 5 Comments

    This is the second post in a series of posts that explain how the Windows Media Connect project opened up a particular port through the XP SP2 firewall.

    In the last post, we had figured out how to actually open the port, but I hadn’t discussed how you find the INetSharingConfiguration interface.

    Well, to get an INetSharingConfiguration, you need to have an INetSharingManager.  The INetSharingManager is a top level COM object, and so you can just call CoCreateInstance on the object. 

    The INetSharingManager API is used to determine if the firewall is enabled and to enumerate “connections”.  Given a connection, it will return the sharing configuration for that connection via the INetSharingConfigurationForINetConnection property.  So we need to find an INetConnection. 

    Well, an INetConnection represents an entry in the “Network Connections” shell folder (right-click on “My Network Places”, select Properties, and you’ll get “Network Connections”), so we need to enumerate the INetConnection objects and figure out which one is associated with the IP address we want.

    To enumerate over the connections, I retrieve the INetSharingManager::EnumEveryConnection property.  That returns an INetSharingEveryConnectionCollection object, which implements IEnumVARIANT.

    But just having an INetConnection object doesn’t let us know what IP address it’s associated with.  To get that, we need to call the INetConnection::GetProperties method, which returns a pointer to a NETCON_PROPERTIES structure.
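
    Pulling the enumeration together looks roughly like this.  It’s a sketch reconstructed from the description above rather than the actual WMC code, with error handling elided (NcFreeNetconProperties comes from netshell.dll):

    #include <windows.h>
    #include <netcon.h>

    // Enumerate every connection known to the sharing manager and fetch
    // each one's NETCON_PROPERTIES.
    void EnumerateConnections()
    {
        INetSharingManager *manager = NULL;
        CoCreateInstance(__uuidof(NetSharingManager), NULL, CLSCTX_ALL,
                         __uuidof(INetSharingManager), (void **)&manager);

        // EnumEveryConnection returns a collection; its _NewEnum property
        // hands back the IEnumVARIANT we actually iterate with.
        INetSharingEveryConnectionCollection *connections = NULL;
        manager->get_EnumEveryConnection(&connections);
        IUnknown *enumUnknown = NULL;
        IEnumVARIANT *enumVariant = NULL;
        connections->get__NewEnum(&enumUnknown);
        enumUnknown->QueryInterface(IID_IEnumVARIANT, (void **)&enumVariant);

        VARIANT var;
        VariantInit(&var);
        while (enumVariant->Next(1, &var, NULL) == S_OK)
        {
            INetConnection *connection = NULL;
            if (V_VT(&var) == VT_UNKNOWN &&
                SUCCEEDED(V_UNKNOWN(&var)->QueryInterface(__uuidof(INetConnection),
                                                          (void **)&connection)))
            {
                NETCON_PROPERTIES *properties = NULL;
                if (SUCCEEDED(connection->GetProperties(&properties)))
                {
                    // properties->guidId is the GUID we'll chase in the next
                    // post; properties->pszwName is "Local Area Connection" etc.
                    NcFreeNetconProperties(properties);
                }
                connection->Release();
            }
            VariantClear(&var);
        }
        enumVariant->Release();
        enumUnknown->Release();
        connections->Release();
        manager->Release();
    }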

    So we’re there, right?  Well, no.  If you look through the NETCON_PROPERTIES structure, there are no IP addresses in it.  All you’ve got is the GUID of the connection, the name of the connection (“Local Area Connection”), the media type, status, stuff like that.  Really useful stuff for the shell, but it doesn’t help us.

    But there is that “guidId” field in the structure.  It’s described as being a “Globally-unique identifier (GUID) for this connection”.  Maybe there’s a way to take advantage of that field.

    And that’s tomorrow’s post.

     


    How do I open ports in the Windows Firewall?

    • 22 Comments

    One of the side-projects I recently was assigned to work on was to switch the Windows Media Connect project from using the home-brewed HTTP server that was originally coded for the product, to using HTTP.SYS, which is included in XP SP2.  This was as a part of a company-wide initiative to remove all home-brewed HTTP servers (and there were several) and replace them with a single server.  The thinking was that having a half dozen HTTP servers in the system was a bad idea, because each of them was a potential attack vector.  Now with a single server, we have the ability to roll out fixes in a single common location.

    The HTTP.SYS work was fascinating, and I’ll probably write about it more over time, but I wanted to concentrate on a single aspect of the problem.

    I got the server working relatively quickly – until we picked up a new version of XP SP2.  That one featured additional improvements to the firewall, and all of a sudden the remote devices couldn’t retrieve content from the web server.  The requests weren’t getting to our service at all.  What was weird was that they WERE getting the content directory (the names of the files on the machine), but when they tried to retrieve the files themselves, they failed. 

    Well, we had suspected that this was going to happen; the new build of SP2 moved HTTP.SYS behind the firewall (it had been in front of the firewall previously).  So now we needed to open a hole in the firewall for our process; the UPnP hosting service had already opened their port, which is why the content directory was available.  Over the next several posts, I’ll go through the process that I went through to discover how to do this.  Everything I needed was documented, but it wasn’t always obvious. 

    The first thing we had to deal with was the fact that we only wanted to open the firewall on local subnet addresses.  To prevent users’ multimedia content from going outside their home, WMC will only accept connections from IP addresses that are in the private network IP address range (192.168.x.x) and the AutoIP address range (169.254.x.x).  We also open up the local address ranges of 10.x.x.x and 172.16.x.x (with a netmask of 255.240.0.0).  So we only wanted to open the firewall on private IP addresses; it would be a “bad” thing if we opened the WMC port to public addresses, since that could potentially be used as an attack vector.
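
    The address check itself boils down to a few mask comparisons.  Here’s a minimal sketch of it (my own illustration, assuming addresses in host byte order – this isn’t the WMC code):

    #include <windows.h>

    // Returns TRUE when a (host byte order) IPv4 address falls in one of
    // the ranges WMC treats as local.
    BOOL IsLocalSubnetAddress(DWORD address)
    {
        return ((address & 0xFFFF0000) == 0xC0A80000) ||  // 192.168.0.0/16
               ((address & 0xFFFF0000) == 0xA9FE0000) ||  // 169.254.0.0/16 (AutoIP)
               ((address & 0xFF000000) == 0x0A000000) ||  // 10.0.0.0/8
               ((address & 0xFFF00000) == 0xAC100000);    // 172.16.0.0/12 (mask 255.240.0.0)
    }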

    The Windows firewall has been documented since Windows XP; the first MSDN hit for “internet connection firewall” returns this page that documents the API.  For XP SP2, there’s a new firewall API; if you use an MSDN search for “firewall API”, the first hit is this page, which describes the XP SP2 firewall API in great detail.  For a number of reasons (chief among them that the new firewall API hadn’t been published when I wrote the code), my implementation uses the original firewall API that’s existed since Windows XP.  As a result, the code and techniques I’ve described in the next couple of posts should work just fine on Windows XP as well as on XP SP2.

    Anyway, on with the story.  So, as always, I started with the API documentation.  After groveling through the API for a while, I realized I was going to need to use the INetSharingConfiguration interface’s AddPortMapping API to add the port.  I’d want to use the INetSharingConfiguration API on each of the IP addresses that WMC was using.

    So to add a mapping for my port, I simply called INetSharingConfiguration::AddPortMapping specifying a name (in my case I used the URL for the IP address), internal and external port (the same in my case), and a string with the local IP address.  That API returned an INetSharingPortMapping object, which we have to Enable to make it effective.
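
    In code, that call looks something like this.  It’s a sketch – the port number and addresses are placeholders rather than WMC’s actual values, and it assumes a config variable holding the INetSharingConfiguration, which the next post shows how to find:

    #include <windows.h>
    #include <netcon.h>

    // Add (and enable) a port mapping on one IP address, given an
    // INetSharingConfiguration for that connection.
    HRESULT AddMappingForAddress(INetSharingConfiguration *config)
    {
        BSTR name = SysAllocString(L"http://192.168.1.2:2869/");  // placeholder
        BSTR target = SysAllocString(L"192.168.1.2");             // placeholder
        INetSharingPortMapping *mapping = NULL;
        HRESULT hr = config->AddPortMapping(
            name,
            IPPROTO_TCP,        // ucIPProtocol
            2869, 2869,         // external and internal ports (the same here)
            0,                  // dwOptions
            target,
            ICSTT_IPADDRESS,    // the target string is an IP address, not a name
            &mapping);
        if (SUCCEEDED(hr))
        {
            hr = mapping->Enable();  // the mapping takes effect only once enabled
            mapping->Release();
        }
        SysFreeString(target);
        SysFreeString(name);
        return hr;
    }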

    Tomorrow: How do we get the INetSharingConfiguration?

    Edit: Clarified IP addresses used for WMC after further investigation.

    Edit: Updated link

     


    The suggestion box is now open!

    • 2 Comments

    In my “How am I doing” post, one of the suggestions was that I add a “suggestion box” à la Raymond’s.

    It's a great idea, so I've decided to set one up.  It can be found here.


    On Campus Interviews with Microsoft

    • 12 Comments

    It’s digging back into prehistory time :)  WAY back into pre-historical times.

    Microsoft has always done on-campus interviews; it’s an integral part of the recruiting process.  Gretchen and Zoe have written about it a lot here.

    My on-campus interview story is a bit dated nowadays; it occurred when I was interviewing at Carnegie-Mellon University back in October of 1983 (or thereabouts).

    I had signed up for the interview with Microsoft more or less on a lark; I wasn't particularly interested in Microsoft as an employer, but Microsoft did compilers and operating systems (for toy computers), and I figured that it would be worth an hour or two of my time.  I REALLY wanted a job working in operating systems development on a "real" computer - something like a Vax would be nice.

    I walked into the interview cold - I had no idea who this "Steve Ballmer" was; I figured he was yet another development manager, just like all the other interviewers I had seen.  Boy, was I surprised.

    I walked into the office-let that they used for interviews, and was introduced to this great big strapping slightly balding guy with a voice almost as loud as my own.  He started in asking me the usual questions - what's your favorite course, what do you do in your spare time, etc.  So far, nothing unusual...  Then he started asking me questions - but not programming questions like I'd had before; logic questions.  The one that sticks in my mind was:

    "We're going to play a game.  I'm going to pick a number between 1 and 100, you get to guess the number.  If you don't get it, I'll tell you if you're too high or too low.  If you guess it, i'll pay you $6 less the number of guesses that you take - so if you get the number on the first guess, you get $6, but if you take 7 guesses, you pay ME $1. 

    Now do you want to play the game or not?"

    I said "Of course not - a binary search on 1 to 100 takes 7 guesses for the worst case, you can pick your numbers such that you will always force 7 guesses". 

    Steve looks at me and says "Are you sure?" 

    Long pause on my part....  "Well, I don't need to choose my first pivot point at 50, any number between 32 and 64 will work just as well, and if I do that, I change the spread, so if I change the start point of the binary search, I can win this game".

    Steve then moved on to describe the work environment at Microsoft - everyone gets their own office, development work is done on Sun workstations, and most of the compiler and language work is done on a DECSYSTEM-20 (since I was a die-hard dec-20 fan back then (and still am), this was music to my ears).  Steve then described the Microsoft DEC-20 and commented "We love the DEC-20, except when we do our nightly builds - it gets totally unusable when we're doing builds, the load average gets to 10 or 20" (the "load average" on a DEC-20 was a measure of how busy the machine was - the higher the number, the worse the user response time).  He got quite emphatic about how horrible life was on the mainframe when this was happening.

    At that point, I totally lost it.  "You think that a load average of 10 or 20 is bad?  Man, you are clueless - you have absolutely no idea how bad the load can get on a DEC-20.  On a good day, the load average on our DEC-20 is 50, and when projects are due, it goes up to 100 or 120".  I continued ranting for several more minutes.

    At about that time, the interview ended, but I was convinced that I had blown it (no big deal, as I mentioned - I didn't really care about the job anyway).  On the way out, I started reading the Microsoft literature that I'd been given...  When it came to describing the executive team at Microsoft, I stopped and stared at the brochure.  There was his name and picture - "Steve Ballmer, Vice President, Operations".

    Sigh..  If I had any chance of getting that job, I had surely blown it totally - you just don't tell the guy who’s interviewing you that he's an idiot.  Especially when he's the head of H.R. in a company that's trying to hire you...

    Needless to say, the very next day, I received a telex asking me to come to Redmond and interview with Microsoft.  I remember Valorie running into my compiler design class with the telex in her hand.  Three months later, I interviewed on campus at Microsoft (my first plant trip).  Things must have gone well; I got my very first full time job offer at about 4PM on the day I interviewed.

    Oh, and about that interview question (there was a reason I put it in the story)...  I wasn't happy with the answer to the question that I'd given Steve; it kept on niggling away at the back of my mind.  About a week later, I was busy working on the parser for my Compiler Design class, and I decided I needed a break, so I wrote a program to emulate the game choosing different pivot points (you can tell I am/was a totally obsessive geek for even considering this).  After running through the game, even with first pivot points that aren't in the middle, it turns out that you CANNOT win - no alternate pivot point improves on the worst case of a pivot at 50; the alternates still require more than 6 guesses to find the worst-case values.  On the other hand, if the person you’re playing with believes that you’re going to choose a pivot point of 50 and picks his numbers accordingly, you CAN potentially win, but it’s a crap shoot.
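
    For the curious, here's a tiny modern reconstruction of that program (not the 1983 original, obviously) - it computes the fewest guesses you can guarantee for a range of n numbers by trying every possible first pivot:

    #include <algorithm>
    #include <cstdio>

    // Fewest guesses that GUARANTEE finding a number hidden among n
    // consecutive values, against an adversarial chooser: try every
    // pivot, and take the one whose worse half needs the fewest guesses.
    int WorstCaseGuesses(int n)
    {
        static int memo[101];
        if (n <= 0) return 0;
        if (memo[n] != 0) return memo[n];
        int best = n;  // upper bound: linear search
        for (int pivot = 1; pivot <= n; pivot++)
        {
            best = std::min(best, 1 + std::max(WorstCaseGuesses(pivot - 1),
                                               WorstCaseGuesses(n - pivot)));
        }
        return memo[n] = best;
    }

    int main()
    {
        // Prints 7: with the $6-minus-guesses payoff, an adversary can
        // always force you to pay $1, no matter which pivots you choose.
        printf("Worst case for 1..100: %d guesses\n", WorstCaseGuesses(100));
        return 0;
    }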

    I ran into Steve in the hall about a year after I had started, and asked him about that (further proof that I’m an uber-geek – I actually followed through with the interviewer and challenged him on his interview question)...  He said "Yeah, I knew that, the point of that interview question is to see if the interview candidate can even consider a pivot point other than 50, I didn't care about what the real answer was"...

    Edit: KC pointed out in private email that I left out a crucial detail - hey, it was more than 20 years ago, you expect me to get every detail right? :)

     


    What's wrong with this code, part 4: the answers

    • 16 Comments

    As I mentioned yesterday, this is a subtle problem.  Apparently it wasn’t subtle enough for the people commenting on the API; without fail, everyone nailed it perfectly.

    But this IS a problem that I run into at least once a month.  Someone comes to my office and says “I’m getting an unresolved external in my application and I don’t see why!”  95% of the time, the problem is exactly the one in this bug.

    Here’s the offending line in the buggy header:

    NIFTY_API_API int MyNiftyAPI(void);

    Not really surprising, heck, it’s the ONLY line in the buggy header.  What’s wrong with the line is actually an error of omission.  The header doesn’t specify the calling convention used for the API.  As a result, in the absence of an explicit calling convention, the compiler assumes that the current calling convention is the calling convention for the API.

    Unfortunately that’s not the case.  The calling convention for an API is set by the compiler when the code is built.  If every part of the project uses the same calling convention, you’re fine, but if anyone compiles their code with a calling convention other than yours, you’re toast.  Raymond goes into some detail on how to diagnose these problems here; as I mentioned yesterday, he’s written a number of posts on the subject.

    The key indicator that there might be a problem was my statement “I’m writing a DLL”.  If this was just a routine in my application, it wouldn’t matter, since all the components in my application are usually compiled with the same set of compiler settings.

    But when you’re dealing with DLLs (or statically linked libraries), the consumer of your code typically isn’t in the same project as you are, so it’s absolutely critical that you specify the calling convention you used, to prevent them from using the “wrong” calling convention in their code.
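
    To make the failure mode concrete, here’s a hypothetical illustration of how the same declaration turns into two different linker symbols under x86 MSVC name decoration:

    int MyNiftyAPI(void);   // no explicit calling convention in the header

    // DLL compiled with /Gd (default __cdecl):        exports  _MyNiftyAPI
    // Consumer compiled with /Gz (default __stdcall): expects  _MyNiftyAPI@0
    //
    // Two different decorated names, so the consumer's link fails with an
    // unresolved external, even though the export is sitting in the .lib.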

    Kudos:

    Grant pointed out the problem initially, followed quickly (and more authoritatively) by Borsis Yanushpolsky.  Everyone else posting comments also picked it up.

    Grant also pointed out (in private email), and Mike Dimmick pointed out in the comments section that there’s another, equally glaring problem with the header.  It is missing an extern “C” to correctly inform the compiler that the API in question shouldn’t use C++ name decoration.  The code should have been wrapped with:

    #ifdef __cplusplus
    extern "C" {            /* Assume C declarations for C++ */
    #endif  /* __cplusplus */ 

    #ifdef __cplusplus
    }                       /* End of extern "C" { */
    #endif  /* __cplusplus */
     

     So the full version of the header should be:

    // The following ifdef block is the standard way of creating macros which make exporting
    // from a DLL simpler. All files within this DLL are compiled with the NIFTY_API_EXPORTS
    // symbol defined on the command line. this symbol should not be defined on any project
    // that uses this DLL. This way any other project whose source files include this file see
    // NIFTY_API_API functions as being imported from a DLL, whereas this DLL sees symbols
    // defined with this macro as being exported.
    #ifdef NIFTY_API_EXPORTS
    #define NIFTY_API_API __declspec(dllexport)  
    #else
    #define NIFTY_API_API __declspec(dllimport)
    #endif

    #if defined(_STDCALL_SUPPORTED)
    #define STDCALL __stdcall    // Declare our calling convention.
    #else
    #define STDCALL
    #endif // _STDCALL_SUPPORTED

    #ifdef __cplusplus
    extern "C" {     // Don’t use C++ decorations for our API names
    #endif

    NIFTY_API_API int STDCALL MyNiftyAPI(void);

    #ifdef __cplusplus
    }                // Close the extern C.
    #endif

    I have a huge preference for __stdcall APIs.  They have all of the benefits of the __cdecl calling convention (except for the ability to support variable numbers of arguments) and they result in significantly smaller code (since the routine cleans its stack, not the caller).   As I mentioned in a comment in the previous post, the savings that NT achieved when it switched its default calling convention from __cdecl to __stdcall was huge – far more than we had anticipated.

    There’s still one more potential bug: the header file isn’t wrapped in either a #pragma once or an #ifdef _NIFTY_API_INCLUDED_/#define _NIFTY_API_INCLUDED_/#endif // _NIFTY_API_INCLUDED_ include guard.  Given the current API header this isn’t a problem, but if the header grows and is included more than once in a translation unit, definitions within it could result in multiple-definition errors.

    McGroarty brought up an interesting point:

    I'm no Windows guy, but I'll put a cautious eye to the generic int being subject to the signedness and size of the day.

    I hadn’t considered this originally, but he has a point.  int has no fixed size in C/C++; it’s only guaranteed to be at least as large as a short (which is larger than or equal to a char, and guaranteed to be able to hold values from -32767 to 32767).  So an int can be either a 16 bit, 32 bit or 64 bit quantity.
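
    A quick way to see what your compiler actually gives you (just an illustration, nothing Windows-specific):

    #include <climits>
    #include <cstdio>

    int main()
    {
        // 16-bit compilers (like the DOS-era Microsoft C compilers) had
        // 16-bit ints; Win32 and Win64 both use 32-bit ints.  If an
        // exported API depends on a fixed width, say so explicitly.
        printf("int is %u bits wide, INT_MAX = %d\n",
               (unsigned)(sizeof(int) * CHAR_BIT), INT_MAX);
        return 0;
    }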

     


    Every programmer should know assembly language - part two

    • 0 Comments

    Jeremy Kelly pointed me to this post that he made about a debugging session in which the Exchange escalation guys discovered a rootkit running on a customer’s machine.

    It is an awesome detective job, and it’s a great example of exactly why (a) every developer needs to know assembly language and (b) you need to reformat your machine after you’ve been infected.

    The ONLY way that they discovered that this machine had been rooted was the fact that the rootkit had a bug.  If it hadn’t been for the bug, the poor customer would never have known that he had a problem until much later.

    And yes, stuff like this happens a lot.  We’re very fortunate that we have some really talented escalation engineers working here that can diagnose stuff like this, but it’s a part of the skill set that developers and support people need to have.

     

    Way to go, Jeremy – a great read.

     


    35 years ago today...

    • 6 Comments

    I was seven years old at the time,  and I remember getting woken up by my parents and being brought downstairs to where they had a great big party going on (for some reason I thought it was very late in the evening, although I now realize that it was only about 9:30 eastern time).  There must have been a dozen people clustered tightly around our TV. 

    We all sat there in silence and stared at the blurry images coming from Mission Control.  Walter Cronkite was explaining what was happening in great detail. 

    And then the immortal words came from the speaker.  “Houston, Tranquility Base here.  The Eagle has landed”.  It’s one of my earliest memories and it STILL brings tears to my eyes as I remember it. 

    The world literally changed that day.  We have forgotten so much of the wonder that those early explorations brought, the sense of magic that the images of a man, yes, a human being, standing on ANOTHER WORLD brought.  Now we get excited when unmanned robots the size of vacuum cleaners scurry over the surface of Mars.  Or when a school-bus-sized observatory goes to Saturn. 

    But it isn’t the same thing.  The visceral reaction to seeing a human being standing on another world (or performing a space walk, or repairing a telescope) adds a level of involvement that cannot be achieved by little scooters.

    My thanks go out today to the crew of Apollo 11, for inspiring a generation.

     


    What's wrong with this code, part 4

    • 17 Comments

    Ok, time for another “what’s wrong with this code” problem.

    This time, I’m writing a DLL.  Nothing complicated, just a plain old DLL.  As is expected, I publish a header file for my API:

    // The following ifdef block is the standard way of creating macros which make exporting
    // from a DLL simpler. All files within this DLL are compiled with the NIFTY_API_EXPORTS
    // symbol defined on the command line. this symbol should not be defined on any project
    // that uses this DLL. This way any other project whose source files include this file see
    // NIFTY_API_API functions as being imported from a DLL, whereas this DLL sees symbols
    // defined with this macro as being exported.
    #ifdef NIFTY_API_EXPORTS
    #define NIFTY_API_API __declspec(dllexport)
    #else
    #define NIFTY_API_API __declspec(dllimport)
    #endif

    NIFTY_API_API int MyNiftyAPI(void);

    You’ll notice that this header is almost identical to the header file that Visual Studio produces when you ask it to make a DLL.  Even so, there’s a bug in this header file.

    Your challenge is to figure out what the bug is.  It’s subtle this time, but important (although Raymond and I have touched on it before).  Btw, the fact that it uses the non-standard __declspec is NOT the bug.  That’s syntactic sugar that could easily be removed without removing the error.

    As usual, answers and kudos tomorrow.

     


    What are Known DLLs anyway?

    • 10 Comments

    In my previous post about DLLs and how they work, I commented that winmm.dll was a KnownDLL in Longhorn.  It turns out that this is a bug in an existing KnownDLL. But what in the heck ARE Known DLLs in the first place?

    Well, it turns out that it’s in the KB, and I’ll summarize.

    KnownDLLs is a mechanism in Windows NT (and Win9x) that allows the system to “cache” commonly used system DLLs.  It was originally added to improve application load time, but it can also be considered a security mechanism, since it prevents people from exploiting weak application directory permissions by dropping in Trojan horse versions of system DLLs (since the key system DLLs are all KnownDLLs, the version of the file in the application directory will be ignored).  As a security mechanism it’s not a particularly strong one (if you can write to the directory that contains a program, you can create other forms of havoc), but it helps.

    If you remember from my previous article, when the loader finds a DLL import record in an executable, it opens the file and tries to map the file into memory.  Well, that’s not ENTIRELY the case.  In fact, before that happens the loader looks for an existing section called \KnownDlls\<dll filename>.  If that section exists, then instead of opening the file, the loader simply uses the existing section.   It then follows all the “normal” rules for loading a DLL.

    When the system boots, it looks in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\KnownDLLs and creates a \KnownDlls\<dll filename> section for every DLL listed under that registry key.

    If you compare the HKLM\System\CCS\Control\Session Manager\KnownDLLs registry key with the sections under \KnownDlls (using a viewer like winobj), you’ll notice that the \KnownDlls object container always has more entries in it than the registry key.  This is because the \KnownDlls sections are computed as the transitive closure of the DLLs listed in KnownDLLs.  So if a DLL is listed in KnownDLLs, all of the DLLs that it statically links with ALSO get sections under \KnownDlls.
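
    If you want to poke at the registry side of this yourself, here’s a trivial sketch (just an illustration; remember that the live \KnownDlls object directory will contain more entries than this list):

    #include <windows.h>
    #include <stdio.h>

    // Dump the DLL names registered under the KnownDLLs key.
    int main()
    {
        HKEY key;
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                "System\\CurrentControlSet\\Control\\Session Manager\\KnownDLLs",
                0, KEY_READ, &key) != ERROR_SUCCESS)
        {
            return 1;
        }
        for (DWORD index = 0; ; index++)
        {
            char name[256], data[256];
            DWORD nameLength = sizeof(name), dataLength = sizeof(data), type;
            if (RegEnumValueA(key, index, name, &nameLength, NULL, &type,
                              (LPBYTE)data, &dataLength) != ERROR_SUCCESS)
            {
                break;
            }
            if (type == REG_SZ)
            {
                printf("%s = %s\n", name, data);
            }
        }
        RegCloseKey(key);
        return 0;
    }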

    Also, if you look in the KnownDLLs registry key, you’ll notice that there’s no path listed for the KnownDLLs.  That’s because all KnownDLLs are assumed to be in the directory pointed to by the HKLM\System\CCS\Control\Session Manager\KnownDLLs\DllDirectory registry value.  Again, this is an aspect of KnownDLLs being a security feature – by requiring all KnownDLLs to live in the same directory, it makes it harder for someone to inject their own Trojan version of one of the KnownDLLs.

    Oh, and if the KnownDLLs processing causes any issues, or if for some other reason you don't want the system to load a DLL as a KnownDll, you can add it to the HKLM\System\CCS\Control\Session Manager\ExcludeFromKnownDlls value to exclude it from the KnownDll processing.  So in my example, until the bug in the existing KnownDLL is fixed, I’m adding winmm.dll to my ExcludeFromKnownDlls list.

     

     


    IE Annoyances..

    • 25 Comments

    No, I’m not going to complain about transparent PNG or CSS support.  Frankly, since I’m not a webmaster, I don’t care about them (sorry).

    This one’s pretty specific, and I’m pretty sure that it’s an IE bug.

    One thing that I’ve noticed while reading other people’s blogs is that IE seems to get confused about the text size at which I want my pages rendered.  For some reason, Eric Lippert’s blog seems to be a constant offender, although I don’t know why.

    For some reason, after reading his ‘blog, I find my text size changed from “Medium” to “Smaller”, which is usually too small for me.  And of course, when I close the browser, it happily sets the new font size as the default for my machine.

    I know the setting is kept in the registry somewhere and I could put an ACL on the registry key to prevent it from happening, but I’d love to understand why this is happening.  What’s allowing a web page to change the text size I want to use to view the entire web?

    Edit: Problem discovered: It was an interaction between a buggy internal tool and IE.  Which explains why I didn't find it by googling :)

     


    Milestones (not Millstones)

    • 6 Comments

    In October of 1982 (22 years ago this year!), I met a young woman named Valorie Holden and fell in love with her.  She has been my almost constant companion and partner ever since then; we were married in 1987, and now have two wonderful, gifted children.

    She works harder than anyone I’ve ever known, and is a constant source of inspiration to me.  She is a phenomenal mother, and is currently studying to become a truly extraordinary teacher (she already is an extraordinary teacher, but the state insists that she have this silly piece of paper so…).

    Next fall, she’ll be holding down three jobs: Mom, Teacher’s Aide in the PACE 5/6 classroom, AND full-time student.  I don’t know how she manages to do all this and still remain sane, but somehow she does.

    Words really don’t suffice to express how much I love her.  I don’t always do a great job of showing it, but she is constantly on my mind.

    So why am I posting this?  Well, tomorrow’s her birthday.  And while she’s told me that I shouldn’t do anything about it (it’s one of those big ones and she HATES it when people make a fuss about her), I couldn’t resist putting up a post about it.

    So if you run into Valorie, make sure you wish her a happy birthday!

     


    I, Robot

    • 26 Comments

    Michael Gartenberg over at Jupiter Research had a post today about the new movie I, Robot.

    I’ve seen the trailers for this movie, and I think it may be one of the greatest abominations that Hollywood has ever created.  I don’t know WHAT was going through Janet Jeppson’s (Isaac’s wife) mind when she authorized the use of Isaac’s stories…  The original I, Robot stories were thoughtful stories about robots coming to take on sentience.  They weren’t about evil robots taking over the world.  But darned if that isn’t what the new movie is about.

    Asimov’s robots were ALWAYS constrained by the 3 laws of robotics.  It was a constant throughout the stories that the three laws were NEVER violated.  Having the three laws was a fascinating literary device, because it allowed Asimov to come up with story after story where it appeared that the three laws were being broken when in fact they weren’t.  His “Caves of Steel” and the other R. Daneel Olivaw/Elijah Baley stories are absolute classics in the S.F. Mystery novel genre.

    I looked at the trailer for the new movie and cringed.  Especially at the scenes with all the robots attacking Will Smith, and Will Smith playing Arnold Schwarzenegger with his railgun.

    This one’s a must-miss in my opinion.  If they had kept the original title of “Hardwired”, and avoided the tie-in with Asimov, then maybe it might be worthwhile.  But as long as they’re sullying Isaac’s works with this drivel, I’m staying home. 

     

    In case you think I’m just an Asimov fan-boy, think: Millions of kids will see this movie and think that the Asimov I, Robot stories are just more summer action movie fodder.  Is that really the legacy of one of the most thoughtful of the great science fiction authors?

     


    Sometime soon, this internet thing is going to be really big, ya know.

    • 13 Comments

    But sometimes I wonder if it’s getting TOO big.

    I don’t normally try to do two rants in quick succession, but there was a recent email discussion on an internal mailing list that sparked this rant.

    There’s a trend I’ve been seeing in recent years that makes me believe that people somehow think that there’s something special about the technologies that make up the WWW.  People keep trying to use WWW technologies in places that they don’t really fit.

    Ever since Bill Gates sent out the “Internet Tidal Wave” memo 9 years ago, people seem to believe that every technology should be framed by the WWW. 

    Take RPC, for example.  Why does RPC have to run over HTTP (or rather, SOAP layered over HTTP)?  It turns out that HTTP isn’t a particularly good protocol for RPC, especially for connection oriented RPC like Exchange uses.  About the only thing that RPC-over-HTTP-over-TCP brings beyond the capabilities of RPC-over-TCP is that HTTP is often opened through firewalls.  But the downside is that HTTP is typically not connection oriented.  Which means that you either have to re-authenticate the user on every RPC, or the server has to cache the client’s IP address and verify the client that way (or by requiring a unique cookie of some kind).

    Why does .Net remoting even support an HTTP protocol?  Why not just a UDP and a TCP protocol (and I have serious questions about the wisdom of supporting a UDP protocol)?  Again, what does HTTP bring to .Net remoting?  Firewall pass-through?  .Net remoting doesn’t support security at all natively; do you really want unsecured data going through your firewall?  At least HTTP/RPC provides authentication.  And it turns out that supporting connection-less protocols like HTTP caused some rather interesting design decisions in .Net remoting – for instance, it’s not possible to determine if a .Net remoting client has gone away without providing your own ping mechanism.  At least with a connection oriented transport, you can have deterministic connection rundown.

    Why does every identifier in the world need to be a URI?  As a case in point, one of our multimedia libraries needed a string to represent the source and destination of media content – the source was typically a file on the disk (but it could be a resource on the net).  The destination was almost always a local device (think of it as the PnP identifier of the dev interface for the rendering pin – it’s not, but close enough).  Well, the multimedia library decided that the format of the strings that they were using was to be a URI – for both the source and the destination.  So, when the destinations didn’t fit the IETF scheme for URIs (they had % characters in them, I believe, and our destination strings didn’t have a URI prefix), they started filing bugs against our component to get the names changed to fit the URI scheme.  But why were they URIs in the first place?  The strings were never parsed; they were never cracked into prefix and object. 

    Now here’s the thing.  URIs are great for referencing networked resources.  They really are, especially if you’re using HTTP as your transport mechanism.  But they’re not the solution for every problem.  The guys writing this library didn’t really want URIs; they really wanted opaque strings to represent locations.  It wasn’t critical that their identifiers meet the URI format – they weren’t ever going to install a URI handler for them; all they needed to be were strings.

    But since URIs are used on the internet, and the internet by definition is a “good thing” they wanted to use URIs.

    Another example of an over-used internet technology: XML.  For some reason, XML is considered to be the be-all and end-all solution to every problem.  People seem to have decided that the data that’s represented by the XML isn’t important; it’s the fact that it’s represented in XML.  But XML is all ABOUT the data.  It’s a data representation format, for crying out loud.  Now, XML is a very, very nice data representation.  It has some truly awesome features that make representing structured data a snap, and it’s brilliantly extensible.  But if you’re rolling out a new structured document, why is XML the default choice?  Is XML always the best choice?  I don’t think so.  Actually, Dare Obasanjo proposed a fascinating XML litmus test here; it makes sense to me.

    When the Exchange team decided to turn Exchange from an email system into a document storage platform that also did email, they decided that the premier mechanism for accessing documents in the store was to be HTTP/DAV.  Why?  Because it was an internet technology.  Not because it was the right solution for Exchange.  Not because it was the right solution for our customers.  But because it was an internet technology.  Btw, Exchange also supported OLEDB access to the store, which, in my opinion made a lot more sense as an access technology for our data store.

    At every turn, I see people deploying internet technologies, even when it’s not appropriate.

    There ARE times when it’s appropriate to use an internet technology.  If you’re writing an email store that’s going to interoperate with 3rd party clients, then your best bet is to use IMAP (or if you have to, POP3).  This makes sense.  But it doesn’t have to be your only solution.  There’s nothing WRONG with providing a higher capability non-internet solution if the internet solution doesn’t provide enough functionality.  But if you go the high-fidelity client route without going the standards based route, then you’d better be prepared to write those clients for LOTS of platforms.

    It makes sense to use HTTP when you’re retrieving web pages.  You want to use a standardized internet protocol in that case, because you want to ensure that 3rd party applications can play with your servers (just like having IMAP and POP3 support in your email server is a good idea as mentioned above). 

    URLs make perfect sense when describing resource location over the network.  They even make sense when determining if you want to compose email (mailto:foo@bar.com) or if you want to describe how to access a particular email in an IMAP message store (imap://mymailserver/public%20folders/mail%20from%20me).  But do they make sense when identifying the rendering destination for multimedia content? 

    So internet technologies DO make sense when describing resources on the internet.  But they aren’t always the right solution to every problem.

     


    More on plumbing fixtures...

    • 11 Comments

    Found this on snopes.com, my favorite urban legends site.

    It’s a transparent public toilet installed in a London construction site. 

    Somehow I have potties on the brain today...

    Edit: Fixed title and images, twice (proxy troubles).

     


    Microsoft and plumbing fixtures.

    • 8 Comments

    I was having an email discussion with Ben Slivka the other day, and he asked me what three things were going to make customers enthusiastic about Longhorn.

    My answer to him was as follows:

    I'm not sure.  My guess would be the changes around the multimedia experience (persistent per-application volume control, improved handling of system sounds (you can have Windows sounds be MP3 files), and WinFS, which means that you can search the metadata on your multimedia as quickly as you can Google, which makes slicing and dicing play lists better). 

    The new UI should be REALLY slick, and should attract a lot of consumers (eye candy always does).

    Beyond that, I'm not sure - the reality is that most of the cool stuff in Longhorn is plumbing - Avalon and the rest of WinFX means that app authors will be able to easily do stuff that they've never been able to do before, which means that there's a host of new cool apps that will be able to be written for Longhorn.  That also means that app authors have even more ways of writing annoying applications (if you think skins are bad, consider what app designers will do when they can put video on a button face), so...

    But the thing is that consumers don't see the cool stuff that's going on in the platform.  Unlike Apple, which spends HUGE amounts of time and effort on making the UI cool and flashy (and responsive and consistent, etc.), Microsoft tends to work on getting the plumbing right.  But customers don’t see the plumbing.

    Which means that our toilets flush and our sinks drain really, really well, but they're not very pretty.  To continue the plumbing analogy: Apple is Kohler - lots of flash, looks great, works well; Microsoft is Delta - not as much flash, but totally rock-solid reliable.

    The reality is that I just don’t see customers going totally bonkers about things like the games library or parental controls, or the other end-user features of Longhorn.  But man, Longhorn as a platform is going to let developers do really amazing stuff.

     

    Please note: I’m not an evangelist.  I don’t know all the bells and whistles; I work on windows audio, which is why my answer was multimedia-centric. 

     


    New Exchange 'blog post, this one on push notifications

    • 0 Comments

    Nico over in Exchange just told me that he posted my article on Exchange’s Push Notification feature, so included by reference :).


    Internationalizing Microsoft Products

    • 25 Comments

    The Seattle Times has an interesting article today about Microsoft’s efforts to extend beyond the “basic 30-or-so” languages we already support into languages with somewhat smaller market shares (Urdu, Kiswahili, Nepalese etc). 

    It’s actually cool, and I’m really happy that we’re extending our outreach to locales where there are relatively few speakers (ok, Hindi doesn’t have relatively few speakers).

    But I do need to take issue with at least one comment in the article:

    Microsoft began localizing its software in different languages about 15 years ago.

    We’ve been localizing our products for as long as I’ve worked at Microsoft.  At a minimum, Microsoft’s first Japanese field office (ASCII-Microsoft) opened in 1977, 27 years ago, and our Japanese subsidiary was founded in 1986 (18 years ago); both produced localized products for the Japanese market.  In 1985, I was working on MS-DOS 4.0, which was shipped localized in French to Goupil.  I still remember a demo of Arabic Windows 2.0 from the mid 1980s: the person doing the demo started writing in Arabic (a Right-To-Left language) and the text appeared to the right of the cursor (as would be expected).  He then got a HUGE round of applause when he switched from typing in Arabic to English (an LTR language) and the text started appearing to the LEFT of the cursor.

    One of the stock interview questions a friend of mine used to ask was about how you handle cursor up and cursor down motions in GWBasic – it dates from at least 1982.

    So we’ve been doing localization since just about forever; I don’t know why the Times article picked 15 years ago. 

    Localization has ALWAYS been a big deal to Microsoft; it’s literally illegal to sell non-localized products in some countries (don’t ask, I don’t know the specifics).  And since we want to ship our software everywhere :)

    And I’m REALLY glad that we’re finally targeting the smaller languages, it’s cool.  I also love the mechanism that’s being used to do the localization.  Instead of localization being done in a centralized location, the localization is being done by local groups – so instead of Microsoft having to have native speakers of the various languages, we’re engaging communities in those countries to produce the localized content.

    We currently have language packs available for Windows XP in Bulgarian, Catalan, Croatian, Estonian, Hindi, Latvian, Lithuanian, Romanian, Serbian (Cyrillic), Serbian (Latin), Thai, and Ukrainian.  There’s a similar list for Office (currently only Catalan and Norwegian, though).

     


    Jon Wiswall's started blogging...

    • 0 Comments

    I don't normally do “hey, he started blogging” posts, but I just noticed that Jon Wiswall has started a blog.

    Jon's one of those guys who can be counted on for insightful and intelligent answers during internal discussions; I know that when I see one of his responses, his answer's going to be the right one.

    Subscribed.

     


    So how am I doing?

    • 14 Comments

    Well, this is my 100th post to my weblog, and since it’s review time at Microsoft, I figured I’d turn the forum over to my readers.

    I started this weblog 4 months ago after reading Raymond’s ‘blog for several months and marveling at his ability to consistently produce interesting content.  Since I love to talk (or rant) about lots of different topics, I wanted to give this ‘blogging thing a chance.  I may write code for a living, but I’ve always enjoyed technical writing.  I’ve wanted to publish “Larry’s ranting about software” for years now (I went as far as to start work on an outline for an “Inside Windows NT Security” book to pitch to MS-Press, but eventually decided I didn’t have enough time to be a full time author).  Publishing this blog seemed to be an ideal opportunity to move forward with some form of that dream.

    I’ve got to say that this self-publishing thing has been both more challenging and more exciting than I had ever realized.  Valorie can tell you that I’ve gotten pretty obsessive about coming up with new ideas for posts, there have been times that figuring out what to post has somewhat taken over my life.  It’s been a fascinating experience.

    I set myself a goal on day one of producing at least one new post every day I’m at work, and so far I’ve been able to meet that goal, although sometimes it has been hard.  There are times I don’t post until 4:00PM, especially if work’s busy, but I’ve managed to meet that goal.  Along the way, I’ve learned a huge amount from the many insightful comments that people have made on the various articles.

    Anyway, enough glurge…

    Anyway, in the spirit of evaluating how we’re servicing our customers both inside and outside Microsoft, how AM I doing?  Do the things I post here meet your needs?  What is your favorite thing about what I’m doing?  What’s your least favorite thing?

    And most importantly, what can I do to improve?  Feel free to either post comments on this thread, or if you’d rather send me email, you can use the comments link on this blog or send it directly to LarryO (at) Microsoft.Com.

     


    Microsoft and Art

    • 12 Comments

    Sorry about not posting yesterday; I was out with the kids at Seattle Center (Daniel had rehearsals, and I was having fun with Sharron), so there was no time to write up a post (I’m not as well organized as Raymond).

    Microsoft’s got a pretty impressive art collection.  Some pieces are cool, some are merely controversial.

    We had a huge internal debate the other day about this piece:

    Yes, it’s a piece of notebook paper in a frame.  It isn’t until you get REALLY close to it that you realize that it’s a painting of a piece of notebook paper in a frame…

     
