February, 2006

Larry Osterman's WebLog

Confessions of an Old Fogey

    Katamari 'R Filler

    • 4 Comments
    For Christmas, we got the kids a PS2, and they've been having a great deal of fun with it.

    On Raymond's recommendation, I also got a copy of Katamari Damacy, which IS the most messed-up video game ever.  Daniel's beaten it; I'm only about halfway through it, but it's huge fun.  One of the things I like best about it is its awesome soundtrack.  I keep having the "daa da na na na na na na, na na na Damacy" theme running through my head.

    Valorie, on the other hand, is somewhat addicted to the various Oberon Media games on the MSN Zone; one of her current favorites is a game called "Filler".  She was playing it the other night, and I was watching her play (I like to watch other people play video games), when we looked up at each other and said "Hey, that's the Katamari Damacy music!".

    It’s played with chimes at a much slower tempo, but it’s still the Katamari Damacy music. Cool.


    Threat Modeling, then and now

    • 2 Comments

    Way back in the day, when we were doing the initial threat models for the Vista audio stack, I wrote a couple of articles about threat modeling and the process as I saw it then.

    Well, I've now been through another round of writing threat models (I wrote three new threat models for beta2), two different sets of threat model training, and reviews of all the beta2 threat models, and I've realized that a bunch of my thinking about the threat modeling process has changed.

    I'm still a 100% convert to threat modeling; there are a couple of potential issues in the audio stack that we probably wouldn't have thought of if we hadn't gone through the process.  But I've also come to realize that many of the things I wrote about last year were either naive or unnecessary.

    Some of the aspects of threat modeling I wrote about are critical - data flow diagrams, for example, are the heart of a good threat model.  But others aren't nearly as important (threat trees, for example - it turns out that most people simply don't have the training to build a threat tree reasonably).

    I'm also still a fan of the BTMBM; it helps to validate your threat model and ensure its completeness.  But I'm no longer convinced that it's the core of the threat modeling process - when I wrote that early article, I didn't realize how critical the DFD was in building a good threat model.  Nowadays, I've come to realize that if you've got a good DFD, you can build a good threat model.
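    To make the "DFD first" point concrete, here's a toy sketch (my illustration, not the SDL tooling or anything we actually use) of the STRIDE-per-element idea: once you've enumerated the elements of the data flow diagram, a first-cut list of candidate threats falls out almost mechanically.  The element names below are hypothetical:

```c
#include <stdio.h>

/* The four kinds of elements that appear in a data flow diagram. */
typedef enum { EXTERNAL_INTERACTOR, PROCESS, DATA_STORE, DATA_FLOW } ElementKind;

typedef struct {
    const char *name;
    ElementKind kind;
} Element;

/* STRIDE categories that typically apply to each DFD element kind:
 * S=Spoofing, T=Tampering, R=Repudiation, I=Information disclosure,
 * D=Denial of service, E=Elevation of privilege. */
static const char *StrideFor(ElementKind kind)
{
    switch (kind) {
    case EXTERNAL_INTERACTOR: return "S, R";
    case PROCESS:             return "S, T, R, I, D, E";
    case DATA_STORE:          return "T, R, I, D";
    case DATA_FLOW:           return "T, I, D";
    }
    return "";
}

int main(void)
{
    /* A hypothetical DFD, purely for illustration. */
    Element dfd[] = {
        { "Client application",     EXTERNAL_INTERACTOR },
        { "Audio service",          PROCESS             },
        { "Per-user settings",      DATA_STORE          },
        { "RPC: client -> service", DATA_FLOW           },
    };
    size_t i;

    for (i = 0; i < sizeof(dfd) / sizeof(dfd[0]); i++)
        printf("%-24s consider: %s\n", dfd[i].name, StrideFor(dfd[i].kind));
    return 0;
}
```

    The output is only a starting point, of course - the real work is deciding which of those candidate threats actually matter for your component.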

    Btw, there are now some other resources on threat modeling available from Microsoft, including the applications threat modeling team's blog.  Peter Torr also wrote a great article for IEEE Security & Privacy entitled "Demystifying the Threat-Modeling Process".  Unfortunately, that last one isn't freely available online - IEEE charges $19.00 for the PDF (unless you're an IEEE member, in which case it costs somewhat less) - but you may be able to find it in a library (it's in the September/October 2005 issue).


    The Code Room: Breaking Into Vegas

    • 3 Comments

    Sorry about the lack of posts; I've just not been motivated to post recently.

    Someone pointed out the new episode of "The Code Room", which is focused on security.  It's an interesting perspective.

    I did find it rather hokey (the various security experts wrangled for the job aren't the greatest actors), but the points brought up were valid (I thought it vaguely humorous that Frank Swiderski's first suggestion when dealing with the breach was "Threat Modeling").

    It's worth the half hour it takes to watch it.


    The Clueless Manifesto

    • 11 Comments

    I don't usually echo other people's content, but I ran into this via Scoble:

    "The Clueless Manifesto"

    I'm not entirely sure why it struck a chord, but it did.


    Why does the NT redirector close file handles when the network connection breaks?

    • 33 Comments

    Yesterday, Raymond posted an article about power suspend and its behavior in XP and Vista.  It got almost as many comments as a post on the IE blog does :).

    I want to write about one of the comments made to the article (by "Not Amused Again"):

    James Schend: Nice one. Do you know why the damn programs have to ask before allowing things to go into suspend mode? It's because MS's freakin networking code is freakin *broken*. Open files are supposed to be reconnected after a suspend, and *they are not*, leading to losses in any open files. (Not that saving the files then allowing the suspend to continue works either, as the wonderful opportunistic file locking junk seems to predictably barf after suspends.)


    A long time ago, in a building far, far away, I worked on the first version of the NT network filesystem (a different version was released with Windows 2000).  So I know a fair amount about this particular issue.

    The answer to Not Amused Again's complaint is: "Because the alternative is worse".

    Unlike some other network architectures (think NFS), CIFS attempts to provide a reliable model for client/server networking.  On a CIFS network, the behavior of network files is as close to the behavior of local files as possible.

    That is a good thing, because it means that an application doesn't have to realize that its files are opened over the network.  All the filesystem primitives that work locally also work transparently over the network.  That means that the local file sharing and locking rules are applied to files on the network.

    The problem is that networks are inherently unreliable.  When someone trips over the connector to the key router between your client and the server, the connection between the two is going to be lost.  The client can re-establish the connection to the network share, but what should be done about the files that were opened over the network?

    There are a couple of criteria that any solution to this problem must satisfy:

    First off, the server is OBLIGATED to close the file when the connection with the client is lost.  It has no ability to keep the file open for the client.  So any strategy that involves the server keeping the client's state around is a non-starter (otherwise you have a DoS scenario associated with the client).  Any recovery strategy has to be done entirely on the client.

    Secondly, it is utterly unacceptable to introduce the possibility of data corruption.  If there is a scenario where reopening the file can result in data corruption, then that scenario can't be allowed.

    So let's see if we can figure out the rules for re-opening the file:

    First off, what happens if you can't reopen the file?  Maybe you had the file opened in exclusive mode, and once the connection was lost, someone else got in and opened it exclusively.  How are you going to tell the client that the file open failed?  What happens if someone deleted the file on the share once it was closed?  You can't return "file not found", since the file was already opened.

    The thing is, it turns out that failing to re-open the file is actually the BEST option you have.  The other options are even worse than that scenario.


    Let's say that you succeed in re-opening the file.  Let's consider some other scenarios:

    What happens if you had locks on the file?  Obviously you need to re-apply the locks; that's a no-brainer.  But what happens if they can't be re-applied?  The other thing to consider about locks is that a client that holds a lock on a region of the file assumes that no other client can write to that region (remember: network files look just like local files), so it assumes that nobody else has changed that region.  But what happens if someone else does change that region?  Now you've just introduced a data corruption error by re-opening the file.

    This scenario is NOT far-fetched.  It's actually the usage pattern used by most file-based database applications (R:Base, dBase, Microsoft Access, etc.).  Modern client/server databases just keep their files open all the time, but non-client/server database apps let multiple clients open a single database file and use record locking to ensure that the database's integrity is preserved (the clients lock a region of the file, alter it, then unlock it).  Since the server closed the file when the connection was lost, other applications could have come in, locked a region of the file, modified it, then unlocked it.  But YOUR client doesn't know this happened.  It thinks it still holds the lock on that region of the file, so it owns the contents of that region.
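    To make that usage pattern concrete, here's roughly what the lock/alter/unlock cycle looks like in Win32 terms (a minimal sketch of mine, not code from any real database; the file name and record layout are made up, and most error handling is elided):

```c
#include <windows.h>

#define RECORD_SIZE 128   /* hypothetical fixed-size record */

int main(void)
{
    DWORD written;
    char record[RECORD_SIZE] = "updated record contents";
    OVERLAPPED ov = {0};

    /* Open the shared database file; other clients may read and write it. */
    HANDLE hFile = CreateFileA("\\\\server\\share\\customers.db",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                               NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    /* Lock only the record we intend to modify (record #5 here).  The
     * OVERLAPPED structure holds the byte offset of the locked region. */
    ov.Offset = 5 * RECORD_SIZE;
    if (LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, RECORD_SIZE, 0, &ov)) {
        /* While the lock is held, this client owns these bytes: nobody
         * else can change them, so it's safe to rewrite the record. */
        WriteFile(hFile, record, RECORD_SIZE, &written, &ov);
        UnlockFileEx(hFile, 0, RECORD_SIZE, 0, &ov);
    }

    CloseHandle(hFile);
    return 0;
}
```

    Everything hinges on that byte-range lock remaining valid between the LockFileEx and the UnlockFileEx.  A silent close-and-reopen on the server would break exactly that guarantee.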

    Ok, so you decide that if the client has a lock on the file, we won't allow it to re-open the file.  Not that huge a restriction, but it means we won't re-open database files over the network.  You've just pissed off a bunch of customers who wanted to put their shared database on the server.


    Next, what happens if the client had the file opened exclusively?  That means it knows that nobody else in the world has the file open, so it can assume that the file hasn't been modified by anyone else - but once the connection was lost and the server closed the file, someone else could have opened and modified the file.  That means the client can't re-open the file if it was opened in exclusive mode.

    Next, let's consider the case where the file's not opened exclusively.  There are four cases of interest, involving two share modes and two access modes: FILE_SHARE_READ and FILE_SHARE_WRITE (FILE_SHARE_DELETE isn't very interesting), and FILE_READ_DATA and FILE_WRITE_DATA.

    There are four interesting combinations (the cases with more than one write collapse into the FILE_SHARE_WRITE case), laid out below.

    FILE_READ_DATA + FILE_SHARE_READ: This is effectively the same as exclusive mode - nobody else can write to the file, and the client is only reading it, so the client may cache the contents of the file.

    FILE_READ_DATA + FILE_SHARE_WRITE: The client is only reading data, and it can't cache the data being read (because others can write to the file).

    FILE_WRITE_DATA + FILE_SHARE_READ: This client can write to the file and nobody else can write to it, so it can cache the contents of the file.

    FILE_WRITE_DATA + FILE_SHARE_WRITE: The client is only writing data, and it can't be caching (because others can write to the file).

    To summarize: with FILE_SHARE_READ, others can read the file but nobody else can write to it, so the client can (and will) cache the contents of the file.  With FILE_SHARE_WRITE, no assumptions can be made by the client, so the client can have no information cached about the file.

    So this means that the ONLY circumstance in which it's reliable to re-open the file is when the file has never had any locks taken on it and it was opened in FILE_SHARE_WRITE mode.


    So the number of scenarios where it's safe to re-open the file is pretty slim.  We spent a long time discussing this back in the NT 3.1 days and eventually decided that it wasn't worth the effort to fix this.

    Since we can't re-open the files, the only option is to close the file.

    As a point of information, the Lan Manager 2.0 redirector for OS/2 did have such a feature, but we decided that we shouldn't implement it for NT 3.1.  The main reason was that the majority of files opened on OS/2 were opened for SHARE_WRITE access (it was the default), but on NT the default is to open files in exclusive mode, so the majority of files couldn't be reopened anyway.
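    For the curious, here's what the distinction looks like at the (modern Win32) API level - a sketch of mine, not redirector code.  The dwShareMode parameter to CreateFile determines which of the cases above applies:

```c
#include <windows.h>

/* The NT-style default: a dwShareMode of 0 means nobody else can open
 * the file while we have it - exactly the case that can't be safely
 * reopened after a disconnect. */
HANDLE OpenExclusive(LPCSTR path)
{
    return CreateFileA(path, GENERIC_READ | GENERIC_WRITE,
                       0 /* exclusive */,
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}

/* The OS/2-style default: others may read and write too, so this client
 * can cache nothing - and (absent byte-range locks) a reopen is safe. */
HANDLE OpenShared(LPCSTR path)
{
    return CreateFileA(path, GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ | FILE_SHARE_WRITE,
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}
```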



    Why Threat Model?

    • 17 Comments

    We're currently in the middle of the most recent round of reviews of the threat models we did for all the new features in Vista (these happen periodically as a part of the SDL).

    As usually happens in these kinds of things, I sometimes get reflective, and I've spent some time thinking about the reasons WHY we generate threat models for all the components in the system (and all the new features that are added to the system).

    Way back in college, in my software engineering class, I was taught that there were two major parts to the design of a program.

    You started with the functional specification.  This was the "what" of the program: it described the reasons for the program, what it had to do, and what it didn't have to do :).

    Next came the design specification.  This was the "how" of the program: it described the data structures and algorithms that were going to be used to create the program, the file formats it would write, etc.

    We didn't have to worry about testing our code, because we all wrote perfect code :).  More seriously, none of the programs we worked on were complicated enough to justify a separate testing organization - the developers would suffice to be the testers.

    After coming to Microsoft, and (for the first time) having to deal with a professional testing organization (and components that were REALLY complicated), I learned about the 3rd major part, the test specification.

    The test specification described how to test the program: what aspects were going to be tested and what were not, and it defined the release criteria for the program.

    It turns out that a threat model is the fourth major specification: it's the one that tells you how the bad guys are going to BREAK your program.  The threat model is a key part of what we call SD3 - Secure by Design, Secure by Default, and Secure in Deployment.  The threat model is a large part of how you ensure the "Design" part; it forces you to analyze the components of your program to see how they will react to an attacker.

    Threat modeling is an invaluable tool because it forces you to consider what the bad guys are going to do to use your program to break into the system.  And don't be fooled, the bad guys ARE going to use your program to break into the system.

    By forcing you to consider your program's design from the point of view of the attacker, it forces you to consider a larger set of failure cases than you'd normally consider.  How do you protect against a bad guy replacing one of your DLLs?  How do you protect against the bad guy snooping on the communications between your components?  How do you handle the bad guy corrupting a registry key or reading the contents of your files?

    Maybe you don't care about those threats (they might not be relevant, it's entirely possible).  But for every irrelevant threat, there's another one that's going to cause you major amounts of grief down the line.  And it's way better to figure that out BEFORE the bad guys do :).


    Now, threat modeling doesn't help you with your code: it doesn't prevent buffer overflows, or integer overflows, or heap underruns, or any of the other myriad ways that code can go wrong.  But it does help you know the areas you need to worry about.  It may help you realize that you need to encrypt that communication, or set the ACLs on a file to prevent the bad guys from getting at its contents, etc.
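    As a concrete illustration of that last point (my sketch, not something from an actual threat model), here's one way to "set the ACLs on a file" with the Win32 security APIs: replace the file's DACL with one that grants access only to the Administrators group.  The path is made up, and error handling is minimal:

```c
#include <windows.h>
#include <aclapi.h>

#pragma comment(lib, "advapi32.lib")

int main(void)
{
    char admins[] = "Administrators";           /* BUILTIN\Administrators */
    char path[]   = "C:\\example\\secrets.dat"; /* hypothetical file */
    EXPLICIT_ACCESSA ea;
    PACL pNewDacl = NULL;

    /* One ACE: full control for the Administrators group. */
    BuildExplicitAccessWithNameA(&ea, admins, GENERIC_ALL, SET_ACCESS,
                                 NO_INHERITANCE);

    if (SetEntriesInAclA(1, &ea, NULL, &pNewDacl) != ERROR_SUCCESS)
        return 1;

    /* Replace the file's DACL, and mark it protected so inherited ACEs
     * no longer apply - now only Administrators can touch the file. */
    SetNamedSecurityInfoA(path, SE_FILE_OBJECT,
                          DACL_SECURITY_INFORMATION |
                          PROTECTED_DACL_SECURITY_INFORMATION,
                          NULL, NULL, pNewDacl, NULL);

    LocalFree(pNewDacl);
    return 0;
}
```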


    Btw, before people comment on it: yes, I know I wrote a similar post last year :).  I had another "aha" related to it and figured I'd post again.  Tomorrow, I want to go back and reflect on those early threat model posts :)



    WFP is my new best friend

    • 31 Comments
    I've mentioned our computer setup a couple of times before: Valorie's got her laptop; Daniel, Sharron, and I each have our own desktop computers; and there are a couple of other machines floating around the house.  Since the kids' machines don't have internet access, we've got a dedicated machine sitting in our kitchen whose sole purpose is to let the kids get their email and surf the net.  The theory is that if they're surfing in the kitchen, it's unlikely they'll go to bad places on the net.

    It also means we can easily have them run as non-admins when they surf but still be admins on their own machines (which is necessary for some of the games they play).

    Ok, enough background.  Yesterday evening, I was surfing the web from the kitchen machine, and I noticed that the menu bar in IE had disappeared.  Not only that, but I couldn't right-click on any of the toolbars to enable or disable them.  All the IE settings looked reasonable, and IE wasn't running in full-screen mode; it was just weird.

    Other than this one small behavior (no menus in either IE or other HTML applications, like the user manager and other control panel applets), the machine was working perfectly.  The behavior for HTAs was weird: there was a Windows logo in the middle of the window where the menu bar should have been, but that was it.

    I ran anti-spyware and virus scans and found nothing.  I went to the KB to see if I could find any reason for this happening, but again found nothing.

    I even tried starting a chat session with PSS but it never succeeded in connecting.

    I must have spent about 2 hours trying to figure out what was wrong.

    The first inkling of what was actually wrong came when Daniel asked me to get up so he could read his email - he got a weird message: "Outlook Express could not be started because MSOE.DLL could not be initialized".  That was somewhat helpful, and I went to the KB to look it up.  The KB had lots of examples of this for Win98, but not for XP SP2.  So still no luck.

    And then I had my "Aha!" moment.  I ran chkdsk /f to force a full chkdsk on the drive and rebooted.

    Within a second or so of the reboot, chkdsk started finding corruption on the hard disk.  One of the corrupted files was one of the OE DLLs, another was something related to browsing, and there were a couple of other corrupted files.

    I rebooted after running chkdsk, and now I got a message that msimn.exe was invalid or corrupt.  I looked at the file, and yup, MSIMN.EXE had a length of zero.  Obviously it was one of the files corrupted on the disk.

    So now I had a system that almost was working, but not quite.

    During my trolls through the KB, I'd run into the SFC command.  SFC (the System File Checker) is a utility in XP and Win2K3 that verifies that all the files protected by WFP (Windows File Protection) are valid.  If it finds invalid files, it restores them from the cache directory.  As per the KB article, I ran SFC /SCANNOW and waited for a while.  Darned if it didn't find all the files that had been corrupted and repair them.
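    Incidentally, there's a programmatic side to WFP as well: the SfcIsFileProtected API reports whether a given file is on WFP's protected list (i.e. whether SFC would restore it if it were corrupted).  A quick sketch of mine, using the Outlook Express binary from the story above (assuming the default install path):

```c
#include <windows.h>
#include <sfc.h>
#include <stdio.h>

#pragma comment(lib, "sfc.lib")

int main(void)
{
    /* The Outlook Express binary from the story above, assuming the
     * default install path. */
    LPCWSTR path = L"C:\\Program Files\\Outlook Express\\msimn.exe";

    /* The first parameter (an RPC handle) must be NULL. */
    if (SfcIsFileProtected(NULL, path))
        wprintf(L"%ls is protected by WFP\n", path);
    else
        wprintf(L"%ls is not protected by WFP\n", path);
    return 0;
}
```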

    So Daniel got his email back, IE got its menus back, and the machine seems to be back on its feet again!

    Man, I love it when stuff works the way it's supposed to.


    Btw, my guess is that the corruption had either been there for a while without us noticing, or it was introduced during a rapid series of power failures we had on Saturday and Sunday (this machine isn't currently on a UPS, so...).


    So apparently there's a "football" game going on this weekend?

    • 8 Comments

    I thought we were done with football this year, but apparently not :)


    Actually, this Superbowl is somewhat problematic for me.  I grew up in Albany, NY, which has absolutely no professional sports teams at all, so I never really understood the whole "fandom" thing.  Yeah, there were the Jets and the Giants, and the Mets and the Yankees, but they were all over a hundred miles away, so they weren't particularly relevant to me.

    Then I got to Pittsburgh in the fall of 1980.  The Steelers had just won their fourth Superbowl in six years, and the town was hot with the "One for the Thumb in '81" slogan.  I had never seen such a sports-induced fervor; it was quite amazing to me.  Slowly, I began to become a fan of the Steelers (even though they didn't do particularly well in the early '80s).

    Then I moved to Seattle, with the Sonics, Seahawks, and Mariners.  Ok, the Mariners were a joke, but the Seahawks were doing pretty darned well under Chuck Knox, and the Sonics fielded decent teams.

    Over time, I began to learn to love the Mariners, while the Seahawks sucked more and more (the Nordstrom family sold the team to a guy from California who didn't really want to own the team, so...).


    Fast Forward to 2006, and look what's happening: It's the Seahawks and the Steelers.

    Figures.


    Well, at least I finally will be watching a Superbowl for something other than the ads :).

    Go <Insert Team Here>!!!!!!!



    What Hath God Wrought

    • 11 Comments

    Well, it had to happen: Western Union is finally getting out of the telegram business.

    One of the questions asked in the linked MSNBC article is "Ever get a telegram?" (ok, it's in a fake survey linked along with the article).

    Actually, to my knowledge, I've only ever received one in my life.  It was about 21 years ago, and it came the day after I had a job interview with a big, somewhat balding guy with a loud voice and a name I'd never heard before: Steve Ballmer.  The telegram was an invitation to come out to Seattle to interview with a small software company that almost nobody had heard of.  And the rest is history.

    So it's finally the end of an era: telegrams will no longer be delivered; no more guy in the blue uniform with the funny brimmed hat.

    Oh, and the title of this post?  It's the text of the first telegram sent, from Samuel Morse to Alfred Vail (and no, I didn't know that off the top of my head).



    Firewalls, a history lesson

    • 27 Comments

    Recently, a rather high profile software company has been taken to task about its patching strategy.

    One of the comments that was made by the customers of this company was basically: "We don't have to worry, all our servers are behind a firewall".

    I've got to be honest and wonder why these people believe that their firewall somehow protects their systems.  A firewall is the outside of what is known as "M&M security": hard and crunchy outside, soft and chewy inside.  The basic problem with M&M security is that once a bad guy (or worm, or virus, or malware of any form) gets past the crunchy outside, the game is over.

    George Santayana once said "Those who cannot remember the past are condemned to repeat it.".  And trusting in a firewall is an almost perfect example of this.

    It turns out that there's a real-world example of a firewall that almost perfectly mirrors today's use of firewalls.  It's actually quite uncanny in its accuracy.

    Immediately after WWI, the French, seeing the potential for a threat from Germany, built a series of fortifications known as the "Maginot Line".  These were state-of-the-art fortifications designed to protect against most, if not all, of the threats known at the time.

    (Image stolen from Wikipedia.)

    From all accounts, the Maginot Line was a huge success.  Everywhere the German army engaged the French on the Maginot Line, the line did an excellent job of protecting France.  But it still failed.  Why?  Because instead of attacking the Maginot Line head-on, the Germans chose to cut through where the line was weak: the Saar gap (normally an impenetrable swamp, but unusually dry that year) and the Low Countries (Belgium and the Netherlands, which weren't considered threats), thus bypassing the protection.

    The parallels between the Maginot Line and firewalls are truly eerie.  For instance, take the paragraph above and replace "Maginot Line" with "firewall", "French" with "the servers", "German army" with "hackers", "Saar gap" with "unforeseen cracks", and "Low Countries" with "employees' laptops", and see how it works:

    From all accounts, the firewall was a huge success.  Everywhere the hackers engaged the servers on the line, the firewall did an excellent job of protecting the servers.  But it still failed.  Why?  Because instead of attacking the firewall head-on, the hackers chose to cut through where the firewall was weak: they utilized previously unforeseen cracks (because the company hadn't realized that their WEP-protected network was crackable) and the employees' laptops (because the laptops weren't considered threats), thus bypassing the protection.

    You should never assume that some single external entity is going to protect your critical assets.  If you've got a huge armored front door, I can guarantee that the thieves won't come through the armored front door.  Instead, they're going to pick up a rock and throw it through the glass window immediately next to the door and go through it.

    I'm not dinging firewalls.  They are an important part of your defensive arsenal, and can provide a critical front line of defense.  But they're not a substitute for defense in depth.  And let's be honest: Not everyone configures their firewall correctly.

    If you assume that your firewall protects you from threats, then you're going to be really upset when the bad guys come in through an unprotected venue and steal all your assets.

    Thanks to Stephen Toulouse and Michael Howard for their feedback.
