Following on from my previous post, I thought I'd talk a little bit about the designs of the projects that I am in charge of. Now to any of my Microsoft superiors who may be reading this post, fear not. I have never put any security back doors into my projects. It'd never make it past a spec review anyhow.
Instead, I would like to talk a bit about front-end vs. back-end interfacing with databases. For example, we have a spam rules database with a web front-end for interacting with it. The front-end allows you to do inserts, deletes, and modifications to existing rules, as well as limited search queries. It's a convenient interface.
Yet it is limited. Sometimes the web server goes down. Sometimes existing spam rules take a long time to load. Sometimes a search returns thousands of results, making it nearly impossible to find the one I want. In cases like this, searching the database directly makes sense. I have much more flexibility writing a MySQL query and specifying exactly the search parameters for the rules I want than I have with the web front-end. When I was a spam analyst, the web front-end was all I needed. But as I moved on from analyzing spam to doing research into spam in general, I needed to do more flexible queries. In addition, if I ever wanted to modify a spam rule and the server was down, I could always do a manual db insert.
I call this manual db modification a back-door modification. It's actually useful from time to time, but it is more prone to error. If I ever deleted something but forgot to specify which rule, say by running DELETE FROM spamRules (not the real table name) without a WHERE clause, that would be a disaster. But on the other hand, doing a search query and piping the results into another script I wrote to parse some headers and IPs is incredibly powerful. With great power comes great responsibility.
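To make the query-and-pipe idea concrete, here is a minimal sketch. The table and column names (`spam_rules`, `rule_text`) and the sample rules are purely hypothetical, and SQLite stands in for the real MySQL database, but the pattern is the same: a parameterized search whose results feed straight into a small parser.

```python
import re
import sqlite3

# Hypothetical stand-in for the real spam-rules database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spam_rules (id INTEGER PRIMARY KEY, rule_text TEXT)")
conn.executemany(
    "INSERT INTO spam_rules (rule_text) VALUES (?)",
    [("block mail from 192.0.2.10",),
     ("flag subject 'cheap meds'",),
     ("block mail from 198.51.100.7",)],
)

# A parameterized search -- far more flexible than a fixed web form.
rows = conn.execute(
    "SELECT rule_text FROM spam_rules WHERE rule_text LIKE ?", ("%block%",)
).fetchall()

# "Pipe" the results into a parser that pulls out the IPs.
ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
ips = [ip for (text,) in rows for ip in ip_pattern.findall(text)]
print(ips)  # ['192.0.2.10', '198.51.100.7']
```

In practice the parsing step would be a separate script fed from the query's output, but the flexibility comes from the same place: you control the full WHERE clause, not just whatever fields the front-end exposes.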
I understand the need for direct access to data-mine the database for research purposes. However, would you not agree that the administrator of the database should revoke unnecessary privileges that could allow you to accidentally drop tables? Why run the risk?
Is a server being down really a good enough excuse to do manual, error-prone inserts and updates? It could easily become a stopgap measure rather than building the system for redundancy with failover.
When you mentioned disaster, it brought up a good point. Security can help prevent disasters, but they inevitably happen, and when they do, a recovery plan kicks in. Which I'm sure you have in place for the day someone does drop that table.
I suppose I should have qualified my statements. Access to these databases should be restricted; not just anyone should be able to access them.
In general, the server being down is not a good excuse to do inserts... unless there are special circumstances. If there is a major spam outbreak and a few spam rules will do the trick, then I think that justifies going around established procedures.
Finally, you are correct when it comes to security. Something like having redundant backups of tables will help keep the results of finger slippage from being disastrous.
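The privilege-revocation idea raised above can be sketched as well. In MySQL this would be a REVOKE on the analyst's account; as a small, self-contained illustration, SQLite's authorizer hook (via Python's `sqlite3`) can model the same policy of blocking destructive statements. The table name is hypothetical, and this is a sketch of the idea, not the real setup.

```python
import sqlite3

# Illustrative only: in MySQL you would REVOKE DROP/DELETE from the
# analyst account. SQLite has no accounts, but an authorizer callback
# can enforce the same "no destructive statements" policy.
def deny_destructive(action, arg1, arg2, db_name, trigger):
    if action in (sqlite3.SQLITE_DROP_TABLE, sqlite3.SQLITE_DELETE):
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spam_rules (id INTEGER, rule_text TEXT)")  # hypothetical
conn.set_authorizer(deny_destructive)

conn.execute("SELECT * FROM spam_rules")   # reads still work
try:
    conn.execute("DROP TABLE spam_rules")  # finger slip: refused
except sqlite3.DatabaseError as exc:
    print("blocked:", exc)
```

The point is that day-to-day access simply should not carry the privileges that make a one-line slip catastrophic.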
"I have much more flexibility doing a MySQL search and specifying the search parameters for rules that I want than I have with the web front end."
You do? At Microsoft?
Good stuff Terry.
I have a system somewhat similar to yours, with all of my configuration, rules, and data stored in a database. I have a GUI front-end that our developers wrote, but it has a number of shortcomings.
As a result, I have added two mechanisms for accessing the backend. Most of the time I just need to search, so I have created an account with read-only rights that I use specifically for that purpose.
When the need arises to make a change that I cannot perform through the GUI, I remote into a server in the same domain and connect using my domain account, which has dbo rights.
There is still risk with this solution, but it is greatly reduced compared to running day-to-day queries with an account that could modify the data.