I like it when people send me e-mail with security questions. I like it because it implies two things:
1) The person is thinking about the security implications of their code, and has recognised a possible problem; and
2) The person realises that they need to seek help to get the right answer, rather than just saying "Oh we'll use $MAGIC_SECURITY_FEATURE$ here and it won't be a problem."
Anyway, recently I got the following (paraphrased) question from someone:
In Michael Howard's book Writing Secure Code, it says that you shouldn't tell the attacker too much when you fail, and it gives the specific example of showing an exception stack trace to the user when there is a failure. When our application fails to load an assembly, it shows the user a dialog box with a stack trace in it to help the user track down the problem. I can see why this makes sense in debug builds, but is it a security problem in release builds? Aren't we giving away sensitive information by doing this?
Glad you asked! :-)
The specific advice in question applies when the attacker doesn't have direct access to the code. For example, in an ASP.NET application, the attacker is sitting across the world looking at their web browser and can't read the DLLs off your web server's hard disc. In this scenario, handing out information in a stack trace on an error page is bad because you might leak path names, source code, or other tidbits of information that the attacker can use to stage their attack.
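ASP.NET actually has a built-in setting for exactly this: the customErrors element can show the detailed error page (stack trace and all) only to requests from the local machine, while remote users get a generic page. A minimal web.config sketch (the redirect page name here is just an example):

```xml
<!-- Show detailed errors only to local requests; remote users see a
     generic page instead of a stack trace. -->
<configuration>
  <system.web>
    <customErrors mode="RemoteOnly" defaultRedirect="GenericError.htm" />
  </system.web>
</configuration>
```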
Two other examples of this are a SQL Server application giving out detailed error information to an application running on a separate machine (or in a separate security context), or even a script on a web page talking to an installed ActiveX control and getting an error message back with too much information. In all these cases, you are giving the attacker information they couldn't get any other way, since the code isn't running on their own machines.
But in the case of an application displaying an error dialog to the user, the "user" already has complete access to the code (they're running it on their machine!) so they can already attach a debugger to it, disassemble it, poke it with a sharp stick, etc. if they really want to. No information is being leaked in this case (although you might not want to scare users with a full stack dump by default!).
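So for a desktop application this is a usability question, not a security one. One common compromise, sketched here as a hypothetical WinForms fragment (the message wording and assembly name are illustrative only), is to show a friendly summary by default and let the user opt in to the full details:

```csharp
using System;
using System.Reflection;
using System.Windows.Forms;

// Sketch: friendly message by default, full details on demand.
try
{
    Assembly.Load("MyAddIn, Version=1.0.0.0"); // hypothetical assembly name
}
catch (Exception ex)
{
    DialogResult choice = MessageBox.Show(
        "The add-in could not be loaded. Show technical details?",
        "Load Error",
        MessageBoxButtons.YesNo,
        MessageBoxIcon.Error);

    if (choice == DialogResult.Yes)
    {
        // Safe to show locally: the user already has full access to the code.
        MessageBox.Show(ex.ToString(), "Error Details");
    }
}
```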
A while ago, another person asked me a different question:
We're building an Excel solution with VSTO 2005 and using the new Task Pane functionality [ed: cool!]. Our solution talks to a secured web service to download sensitive information into the spreadsheet. We're worried about what happens when the user saves the spreadsheet -- should we blank the data out of the spreadsheet every time it is saved so that we don't "leak" sensitive information? We automatically refresh the data the next time the workbook is opened so legitimate users will still have access to it.
First thing to do -- as always -- is to consider the threats you are trying to mitigate. The threat which appears to be most salient here is one of information disclosure. For example:
Alice creates a spreadsheet using the VSTO solution and uses it to download some sensitive information into a worksheet. She intends to e-mail the spreadsheet to her co-worker Bob, but accidentally e-mails it to a competitor with the same name. Obviously Competitor Bob doesn't have access to the sensitive information hosted by the web service, but now he can read the persisted data out of the spreadsheet and use it for his own nefarious purposes.
In this case, the person realised that their solution was an "enabling technology" that could allow sensitive data to leak to unauthorised people, and they wanted to take some responsibility to help stop any potential security breaches. That's a great attitude to have, but in this case the suggested solution attempts to protect the user from themselves, and in the process throws the baby out with the bathwater so to speak. It's also very fragile.
Using code to blank out values on save is unreliable -- what if the code crashes during the Save event handler and never completes its task, or a buggy COM Add-In crashes Excel, which then saves an auto-recovered version of the file with the data still in it? Additionally, whilst you may know to blank out cells A1 to C20 because that's where your code puts the data, you have no idea whether the user has manually copy-and-pasted it to some other area of the workbook, so you can't guarantee you'll always blank out every cell that contains the sensitive data.
Even if you could reliably perform this action, blanking out the cells may actually defeat the purpose of the solution. You have to ask yourself "Why are we building this solution on Excel?" One of the benefits of doing so is that you get local access to off-line data, and you can easily move it around. Is it an intended feature of the solution that people who are authorised to see the data can download it to a spreadsheet and forward it to people who are not authorised to see the data? What if two authorised people see different views of the data and they share a spreadsheet; what happens to the data in that case? Is it intended that User B can see User A's data (since it was cached in the spreadsheet), or are they supposed to re-run the queries and download their own data every time? (Remember that User B can always disable the code if they don't want the queries to auto-refresh, thus leaving behind the original data.) If the desired functionality is that the data always auto-refreshes, how does Alice show Bob a snapshot of her data without having Bob physically come over to her desk and look at her monitor?
Really this one comes down to user education. Whenever the user opens or creates the spreadsheet, you could initially have the task pane show some text describing what the application does, and require the user to click a "Download sensitive information" button before you contact the web service (this would also help prevent repurposing attacks). Now the user knows the spreadsheet contains sensitive information, and you have two possibilities:
1) If they are "good" people then they should be careful not to forward the document to the wrong person, just as they should be careful not to forward their Microsoft Money data file or their last Review document. Suggesting to the user that they can use IRM (Information Rights Management) to protect the document might help here.
2) If the user is "bad" and intentionally wants to leak the data to other "bad" people, you can't stop them doing it anyway. They could just copy & paste the data, take a photo of the screen, print it out, etc. Depending on how determined the bad guy is, IRM may not be appropriate here because it is not strictly a security technology.
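The "explicit download" pattern described above might look something like the following in a VSTO task pane control. This is only a sketch under stated assumptions: every identifier here (the control, the service proxy, the helper method) is hypothetical, and the actual web service call and worksheet code would depend on your solution.

```csharp
using System;
using System.Windows.Forms;

// Hypothetical task pane control: no data is fetched until the user
// explicitly asks for it, and the pane explains what the workbook does.
public partial class SensitiveDataPane : UserControl
{
    public SensitiveDataPane()
    {
        InitializeComponent();
        explanationLabel.Text =
            "This workbook downloads sensitive data from the corporate " +
            "web service. Treat saved copies as confidential.";
    }

    private void downloadButton_Click(object sender, EventArgs e)
    {
        // Contact the secured web service only after an explicit user
        // action; the code never fetches data silently on open, which
        // also makes repurposing attacks harder.
        string report = dataService.GetSensitiveReport(); // hypothetical proxy
        WriteToWorksheet(report);                         // hypothetical helper
    }
}
```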
Moral of the story: Threat models are your friend :-)