Here are some resources to accompany our geekSpeak from March 26 - Tame your Software Dependencies with James Kovacs (click the link to view the recording)
Resources and Tools
- Books on TDD
- Various inversion of control containers
A question on refactoring
Any recommendations for refactoring existing code to insert interfaces? (e.g., what's the best dependency to break first, the database?)
I would highly recommend reading "Working Effectively with Legacy Code" by Michael Feathers. Michael gives you excellent strategies and patterns for safely getting tests around untested code, which is crucial for making behaviour-preserving changes to an existing application (aka refactoring).
As for which dependency to break first: which one is causing you the most pain? In my experience it is typically the database, as round-tripping to the database in every test to fetch data dramatically slows the test suite down. If your tests are slow, you'll run them less often, and tests that aren't run lose much of their value. (N.B. You still want integration tests that access the database. You just don't want each and every unit test to do so.) The database also requires a lot of effort to keep test data consistent, often through test data setup scripts or by rolling back transactions at the end of each test.
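One common way to break the database dependency is to introduce an interface at the seam and supply an in-memory implementation to unit tests. The sketch below uses Python for brevity (the talk's context is .NET, where the same idea applies with C# interfaces); the `Customer`/`CustomerRepository` names are illustrative, not from the talk.

```python
from abc import ABC, abstractmethod

class Customer:
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

class CustomerRepository(ABC):
    """The seam: production code depends on this interface,
    not on a concrete database class."""
    @abstractmethod
    def find_by_id(self, customer_id): ...

class InMemoryCustomerRepository(CustomerRepository):
    """Test double: no database round-trip, so unit tests stay fast
    and need no setup scripts or transaction rollbacks."""
    def __init__(self, customers):
        self._customers = {c.customer_id: c for c in customers}

    def find_by_id(self, customer_id):
        return self._customers.get(customer_id)

def greeting_for(repository, customer_id):
    """Business logic under test; it neither knows nor cares
    whether the repository is backed by a real database."""
    customer = repository.find_by_id(customer_id)
    return f"Hello, {customer.name}!" if customer else "Hello, guest!"
```

In production you'd wire in a database-backed implementation of the same interface (by hand or via one of the IoC containers listed above); the tests swap in the in-memory fake.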
Other areas that often cause pain are integration points - web services, DCOM/Enterprise Services calls, external text files, ... - anywhere your application relies on an external application. Integration points are a problem for tests because, if your tests depend on them directly, those tests will fail whenever the external applications are unavailable due to crashes, service outages, upgrades, network failures, ...

Imagine that your e-commerce website integrates with six external systems (credit card processor, credit check, inventory, sales, address verification, and shipping), and that your development environment integrates with DEV/QA versions of each of these services. Each service has 95% uptime, which translates into roughly 1.5 days of downtime a month for maintenance, upgrades, and unexpected outages. The chance of all systems being available at once is the product of their individual availabilities: 95%*95%*95%*95%*95%*95% = 73.5% uptime for all six integration points. If your tests use these test systems directly, your test suite will fail over 25% of the time. Is that because you introduced a breaking change, or because one of your integration points is temporarily unavailable? Life gets worse as you integrate with more systems or as the availability of those test systems drops: with 12 integration points at 95% availability, your test suite passes only 54% of the time; with six test systems at 90% availability, it passes only 53% of the time. In either case, it's essentially a coin toss whether a failing suite means the change you just made broke the application. Once you start getting a lot of false alarms (tests failing because an integration point is down, not because of your code), you stop trusting your tests, and then you're essentially flying without a parachute.
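The arithmetic above is just the product of independent availabilities; a few lines of Python reproduce the figures:

```python
def suite_pass_rate(availability, n_services):
    """Chance that every one of n independent integration points is up
    at once: the product of their individual availabilities."""
    return availability ** n_services

# The scenarios from the paragraph above:
print(f"6 services at 95%:  {suite_pass_rate(0.95, 6):.1%}")   # 73.5%
print(f"12 services at 95%: {suite_pass_rate(0.95, 12):.1%}")  # 54.0%
print(f"6 services at 90%:  {suite_pass_rate(0.90, 6):.1%}")   # 53.1%
```

This assumes the outages are independent; correlated outages (e.g., a shared network failure) change the numbers but not the conclusion.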
By decoupling from your integration points through interfaces for the majority of your tests, you know that the code base itself is healthy, and you can test the integration points separately.
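Concretely, the decoupling looks the same as with the database: hide the external system behind an interface and hand your unit tests a stub. A minimal sketch, again in Python with hypothetical names (a credit card processor, echoing the e-commerce example above):

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The interface the order code depends on; the production
    implementation would call the external credit card processor."""
    @abstractmethod
    def charge(self, amount): ...

class ApprovingGateway(PaymentGateway):
    """Stub for unit tests: always approves, never touches the network,
    so the suite passes regardless of the external service's uptime."""
    def charge(self, amount):
        return True

class DecliningGateway(PaymentGateway):
    """Stub for exercising the rejection path on demand."""
    def charge(self, amount):
        return False

def place_order(gateway, amount):
    """Business logic under test, isolated from the integration point."""
    return "confirmed" if gateway.charge(amount) else "declined"
```

A handful of separate integration tests then exercise the real gateway implementation against the DEV/QA service, and only those tests are at the mercy of its availability.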