For some reason The Clash's 'Should I Stay or Should I Go?' has been going round in my head today - 'Should I RE-pair or restore? Nah nah NA-na nana nah' - must be something to do with Redmond's good weather.
Anyway, I get asked for help with this a lot - more so it seems over the last year or so as people are moving bigger databases to SQL Server and more people are realizing that they need a good backup strategy rather than relying on luck and repair. Usually I get asked this after someone's been faced with the dilemma under pressure, has just had to make an arbitrary choice (trading off data/work loss against limitation of downtime), and wants to know if there's a more sure-fire way of working out what to do.
The bottom line is that I can't give a general recommendation one way or the other - it all depends. However, by doing a little homework and planning, you can work towards a sensible (and defensible to your management chain) answer fairly easily. The trick is to practice going through this exercise now, rather than trying to do it for the first time when a disaster happens and everyone's in headless-chicken mode trying to get the business going again. If you can just plug the data into a tried and tested set of decision steps, you'll be more confident in your choice and things should go more smoothly.
I'm going to assume you're running SQL Server 2005, as the new online restore technologies allow for much reduced downtime and a wider spectrum of choices when formulating your backup strategy.
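As a taste of what those online restore technologies buy you, here's a minimal sketch of a single-page online restore in SQL Server 2005 Enterprise Edition - the damaged page stays offline while the rest of the database remains available. The database name, page ID, and backup paths are all made up for illustration:

```sql
-- Restore just the damaged page (hypothetical page 1:57) from the last
-- full backup, leaving the rest of the database online.
RESTORE DATABASE Sales PAGE = '1:57'
    FROM DISK = N'F:\Backups\Sales_Full.bak'
    WITH NORECOVERY;

-- Apply the subsequent log backup(s) to bring the page forward.
RESTORE LOG Sales
    FROM DISK = N'F:\Backups\Sales_Log1.trn'
    WITH NORECOVERY;

-- Back up the tail of the log, restore it, and recover the page.
BACKUP LOG Sales TO DISK = N'F:\Backups\Sales_LogTail.trn';
RESTORE LOG Sales
    FROM DISK = N'F:\Backups\Sales_LogTail.trn'
    WITH RECOVERY;
```

Compare that downtime with restoring the whole database and you can see why the 2005 assumption matters when weighing repair against restore.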
First, gather the following information:
You should make sure your database is structured in such a way that it's possible to formulate a restore or repair strategy that satisfies both your downtime and data-loss SLAs. If this isn't possible, you need to push back on the SLAs before you get blamed for not meeting them when a disaster occurs.
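One structural choice that helps here is splitting the data across multiple filegroups, so that the critical portion can be restored (and brought online) first while less urgent data follows later. A sketch, with entirely hypothetical names and paths:

```sql
-- Hypothetical layout: current, business-critical data gets its own
-- filegroup so a piecemeal restore can bring it online first; archive
-- data lives elsewhere and can be restored afterwards.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = SalesPrimary, FILENAME = N'C:\Data\SalesPrimary.mdf'),
FILEGROUP CurrentFG
    (NAME = SalesCurrent, FILENAME = N'C:\Data\SalesCurrent.ndf'),
FILEGROUP ArchiveFG
    (NAME = SalesArchive, FILENAME = N'D:\Data\SalesArchive.ndf')
LOG ON
    (NAME = SalesLog, FILENAME = N'E:\Logs\SalesLog.ldf');
```

If your SLAs say the current data must be back within an hour but the archive can wait until the weekend, a layout like this is what makes that achievable - with everything in one filegroup, you're stuck restoring it all before anything comes online.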
Now, there are some decision steps to go through to work out what to do, using the information gathered above. Of course, you need to factor in doing some kind of root-cause analysis on whatever disaster occurs so you can take whatever preventative steps are necessary to stop it happening again. This may be as simple as copying the database files somewhere else for later analysis.
Usually the disaster is that a query fails because of an I/O error - that's the first sign the DBA gets that something is wrong. Then some kind of DBCC consistency check (typically a DBCC CHECKDB - remember to use the WITH NO_INFOMSGS, ALL_ERRORMSGS options) is run to see what consistency errors the hardware hiccup has caused. So, assuming you have a set of results from a consistency check, here are the steps:
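For reference, the consistency check I mean looks like this (the database name is hypothetical):

```sql
-- Full consistency check: suppress the informational chatter and report
-- every error found, not just the first few per object - you need the
-- complete picture to make the repair-vs-restore call.
DBCC CHECKDB (N'Sales') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

The ALL_ERRORMSGS option matters because the minimum repair level CHECKDB reports, and the extent of the damage, are exactly the inputs the decision steps need.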
I'd be very interested to hear thoughts on this - especially if there's something I've missed out.