Larry Osterman's WebLog

Confessions of an Old Fogey

Validating inputs


Derek posted a comment to my previous post about validating inputs to functions that’s worth commenting on.

IMHO, the user shouldn't be able to crash the app. The app should verify all information from any untrustworthy source (user input, file, network, etc.) and provide feedback when the data is corrupt. This makes it a "firewall" of sorts. The app itself is trustworthy, or at least it should be once it's debugged.

The application is better equipped to deal with errors than APIs are, because (1) it knows where the data comes from, and (2) it has a feedback mechanism.

He’s absolutely right.

He’s more than right.  This is (IMHO) the key to most of the security issues that plague the net today.  People don’t always validate their input.  It doesn’t matter where your input comes from: if you don’t validate it, it WILL bite you in the rear.  Just about every form of security bug out there today is caused by one form or another of failing to validate input – SQL injection issues are caused by people not validating items typed into forms by users, and buffer overflows are usually caused by people copying inputs into fixed-size buffers without checks.
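A minimal C sketch of the fixed-size-buffer problem described above. The unsafe pattern is shown only as a comment; `checked_copy` is a hypothetical helper (not from the post) that refuses input that doesn’t fit instead of overflowing:

```c
#include <assert.h>
#include <string.h>

/* Unsafe pattern (do NOT do this): copies untrusted input into a
   fixed buffer with no length check:
       char buf[256];
       strcpy(buf, argv[1]);   // overflows if argv[1] >= 256 bytes
*/

/* Safe alternative: validate the length first, and always NUL-terminate.
   Returns 0 on success, -1 if the input doesn't fit. */
static int checked_copy(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);
    if (len >= dst_size)
        return -1;              /* input too long: refuse, don't overflow */
    memcpy(dst, src, len + 1);  /* +1 copies the terminating NUL */
    return 0;
}
```

The key point is that the caller gets an explicit error for oversized input rather than silent memory corruption.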

This applies to ALL the user’s input.  It applies if you’re reading a source file from disk.  It applies when you’re reading data from a network socket.  It applies when you’re processing a parameter to an RPC function.  It applies when you’re processing a URL in your web server.

What’s fascinating is how many people don’t do that.  For Lan Manager 1.0 and 2.0, validation of incoming packets was only done on our internal debug releases, for example.  Now this was 15 years ago, and Lan Manager’s target machines (20 megahertz 386 boxes) didn’t have the horsepower to do much validation, so there’s a lot of justification for this. Back in those days, the network services that validated their inputs were few and far between – it doesn’t justify the practice but…  There was a huge amount of internal debate when we started working on NT (again, targeted at 33MHz 386 machines).  Chuck Lenzmeier correctly insisted that the NT server had to validate EVERY incoming SMB.  The Lan Manager guys pushed back, saying that it was unnecessary (remember – Lan Manager comes from the days where robustness was an optional feature in systems).  But Chuck stood his ground and insisted that the input validation had to remain.  And it’s still there.  We’ve tightened up the checks on every release since then, adding features like encryption and signing to the CIFS protocol to further reduce the ability to tamper with the incoming data.

Now the big caveat: If (and only if) you’re an API, then some kinds of validation can be harmful – see the post on validating parameters for more details.  To summarize, check your inputs, obsessively, but don’t ever use IsBadXxxPtr to try to verify that the memory’s valid – just let the user’s app crash if they give you garbage. 
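A sketch of what that looks like in practice. `MyApiProcess` and its flag mask are hypothetical (not an API from the post); the point is that an API validates everything it can check reliably (NULLs, flags, ranges) but never probes pointers with IsBadXxxPtr:

```c
#include <errno.h>
#include <stddef.h>

#define MYAPI_FLAG_VALID_MASK 0x3u  /* hypothetical: the only flag bits we accept */

/* Hypothetical API entry point.  Validate what can be checked cheaply and
   reliably; a bogus non-NULL pointer is NOT probed -- if the caller passes
   garbage, the access violation happens in their lap, which is the right
   outcome for an API. */
int MyApiProcess(const unsigned char *data, size_t length, unsigned flags)
{
    if (data == NULL && length != 0)
        return EINVAL;              /* inconsistent arguments */
    if (flags & ~MYAPI_FLAG_VALID_MASK)
        return EINVAL;              /* reserved/unknown flag bits set */
    /* ... safe to use data[0..length-1] per the caller's contract ... */
    return 0;
}
```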

If you’re a system service, you don’t have that luxury.  You can’t crash, under any circumstances.  On the other hand, if you’re a system service, then the memory associated with your inputs isn’t handed to you like it is on an API.  This means you have no reason to ever call IsBadXxxPtr – typically you’ve read the data you’re validating from somewhere, and the thing that did the reading gave you an authoritative length of the amount of data received.  I’m being vague here because there are so many ways a service can get data – for instance, it could be read from a file with ReadFile, it could be read from a socket with recv, it could come from SQL server (I don’t know how SQL results come in, but I’m willing to bet that the length of the response data’s included), it could come from RPC/COM, it could come from a named pipe, etc.
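A sketch of that service-side pattern, under an assumed wire format (a 4-byte little-endian payload length followed by the payload – this format is my invention, not from the post). The transport’s byte count (from ReadFile, recv, etc.) is authoritative; every length *inside* the buffer is untrusted until checked against it:

```c
#include <stddef.h>
#include <stdint.h>

/* Validate a length-prefixed message.  'received' is the authoritative byte
   count the transport reported; 'buf' contents are entirely untrusted.
   Returns 0 and sets *payload_len on success, -1 on malformed input. */
int validate_message(const uint8_t *buf, size_t received, size_t *payload_len)
{
    if (received < 4)
        return -1;                       /* too short to hold the header */

    /* Decode the claimed payload length (little-endian, no alignment assumed). */
    uint32_t claimed = (uint32_t)buf[0] | ((uint32_t)buf[1] << 8) |
                       ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 24);

    if (claimed > received - 4)
        return -1;                       /* claims more data than actually arrived */

    *payload_len = claimed;
    return 0;
}
```

Note that the check compares the *claimed* length against what was *actually* received – trusting the embedded length is exactly the kind of bug that lets a malformed packet walk off the end of the buffer.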

Rule #1: Always validate your input.  If you don’t, you’ll see your name up in lights on bugtraq some day.

 

  • What do you do about valid but extreme inputs?

    For instance, let's say the user tells an image processing app to resize a bitmap to 1000x1000. How about 5000x5000? 128x128256, due to a typo? The image allocation routine might fail. Or the request for a temporary work surface after that. Or the Save As dialog that tries to appear. For the app to be robust in this regard _all_ paths from that dialog have to be robust against memory failure, which is a huge coverage area. If you try to detect and reject outrageous inputs, how do you determine reasonable limits?

    At some point, you have to realize that crashes _will_ occur, and that exception handling strategies are part of making code robust. That includes trapping hardware exceptions around selected vulnerable routines, generating failure reports, and having watchdog+restart strategies.
  • I didn't say that validating inputs was a justification for not writing other error checks. Absolutely you need to continue to check for errors.

    But I can't count the number of bugtraq posts I've seen which are of the form: "Command line buffer overrun in app <foo>". Usually what happens is that there's an app that copied argv[1] into a buffer of 256 bytes.

    For the image processing app, "validating the inputs" means validating the 1000 to ensure that: (a) it's a number, and (b) it's small enough to fit into whatever storage you're using to hold it (if you're going to fit it into a 16-bit word, it'd better be no bigger than 65535, for instance).

    You've also got to be careful of arithmetic overflow failures - what if they pass in 0xffffffff by 0xffffffff to the resize-bitmap command? If you compute the buffer by multiplying x, y, and the bit depth of the image, you're likely to have an arithmetic overflow error.

    What happens after that is up to the app; it knows its inputs are safe.
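The two checks discussed in this comment – range-validating the dimensions and guarding the size multiplication against overflow – can be sketched in C as follows. `compute_image_size` and the `MAX_DIMENSION` limit are hypothetical choices for illustration, not from the post:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_DIMENSION 32768u   /* hypothetical sanity limit for this app */

/* Compute x * y * bytes_per_pixel for an image buffer, rejecting
   out-of-range dimensions and doing the multiply in 64 bits so that
   0xffffffff * 0xffffffff can't silently wrap around.
   Returns 0 and sets *out on success, -1 on invalid input. */
int compute_image_size(uint32_t x, uint32_t y, uint32_t bpp, size_t *out)
{
    if (x == 0 || y == 0 || x > MAX_DIMENSION || y > MAX_DIMENSION)
        return -1;                   /* reject outrageous dimensions up front */
    if (bpp == 0 || bpp > 16)
        return -1;                   /* bytes per pixel out of range */

    uint64_t total = (uint64_t)x * y;  /* 64-bit product of two 32-bit values */
    total *= bpp;                      /* bounded above by the range checks */
    if (total > (uint64_t)SIZE_MAX)
        return -1;                     /* still guard the cast on 32-bit hosts */

    *out = (size_t)total;
    return 0;
}
```

With the range check in front, the 0xffffffff-by-0xffffffff request from the comment is rejected before any allocation is attempted.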