When you have very large data sets that reside in a single table, you can use SQL Server's Table Partitioning to "break" the table on logical boundaries like time, and the system handles "putting the data back together" when you query it. It's a great way to gain performance, do maintenance and more.
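As a minimal sketch of what that looks like (the table, column, and object names here are made up for illustration), a date-based partition setup involves a partition function, a partition scheme, and a table created on that scheme:

```sql
-- Hypothetical example: partition a Sales table by year on OrderDate.
-- RANGE RIGHT means each boundary value belongs to the partition on its right.
CREATE PARTITION FUNCTION pfOrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01', '2011-01-01');

-- Map every partition to a filegroup (all to PRIMARY here, for simplicity;
-- in practice you might spread partitions across filegroups).
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- The table is created ON the scheme, keyed by the partitioning column.
CREATE TABLE dbo.Sales (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psOrderDate (OrderDate);
```

From there, queries against dbo.Sales work as if it were one table, and SQL Server eliminates partitions that can't match your WHERE clause.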
But there are other methods of partitioning the data than just using this feature. Before we had it, I used separate tables and programming code to handle inserts and reads across multiple tables. It takes more thought, but it offers other benefits, such as fine-grained control over how and where you put the data. If you go this route, there are several things to keep in mind. You'll need to consider the "ranges" of your data, and what to do when you need to query across two tables — for instance, using UNION ALL (which doesn't filter for duplicates, and so avoids an unnecessary sort) instead of UNION in your statements.
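A rough sketch of that manual approach (again with hypothetical names) is one table per range, each with a CHECK constraint on the range column, stitched back together with a UNION ALL view:

```sql
-- Hypothetical manual partitioning: one table per year.
-- The CHECK constraints let the optimizer skip tables that
-- can't match a query's WHERE clause (a "partitioned view").
CREATE TABLE dbo.Sales2009 (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
        CHECK (OrderDate >= '2009-01-01' AND OrderDate < '2010-01-01'),
    Amount    money    NOT NULL
);

CREATE TABLE dbo.Sales2010 (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
        CHECK (OrderDate >= '2010-01-01' AND OrderDate < '2011-01-01'),
    Amount    money    NOT NULL
);

-- UNION ALL, not UNION: the constraints guarantee no duplicate rows
-- across tables, so there's nothing to filter.
CREATE VIEW dbo.SalesAll AS
    SELECT OrderID, OrderDate, Amount FROM dbo.Sales2009
    UNION ALL
    SELECT OrderID, OrderDate, Amount FROM dbo.Sales2010;
```

Reads go against the view; your code decides which base table receives each insert based on the range column.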
I'm certainly not advocating that you NOT use the Partitioning features, just that you should think everything through when you are implementing a solution. Most of the time, you have options, so learning and understanding when to use which approach is key. Picking the right strategy is why they pay you the big money.
The major problem our group has with table partitioning is the way MS recommends you deal with older data: add more space. What if I don't want that? So we archive data and move it over to different servers, where it can then be queried and used without impacting performance on the transactional server or taking up more expensive drive space.
The only place it well and truly works for us is as a somewhat easier (more effort on the setup, less effort on the maintenance) rolling window of data.
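For that rolling-window pattern, the usual technique is a metadata-only partition switch: the oldest partition is swapped out into a staging table for archiving, and a new boundary is added at the top. A hedged sketch, assuming a partition function and scheme like the hypothetical pfOrderDate/psOrderDate on an OrderDate column:

```sql
-- Hypothetical rolling window maintenance (run periodically).
-- Staging table must match dbo.Sales exactly and sit on the same filegroup.

-- 1. Switch the oldest partition out; this is a metadata change, near-instant.
ALTER TABLE dbo.Sales
SWITCH PARTITION 1 TO dbo.SalesArchiveStaging;

-- 2. Remove the now-empty oldest boundary.
ALTER PARTITION FUNCTION pfOrderDate()
MERGE RANGE ('2009-01-01');

-- 3. Tell the scheme where the next partition goes, then add
--    a new boundary for the incoming period.
ALTER PARTITION SCHEME psOrderDate NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfOrderDate()
SPLIT RANGE ('2012-01-01');
```

After the switch, dbo.SalesArchiveStaging holds the old rows and can be bulk-copied to the archive server and dropped, all without blocking the transactional table.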
I do this as well - it's called a "Hierarchical Storage Management" process. There are actually lots of ways to do this, and mine is a combination of hardware and SQL Server features. I don't think I would recommend partitioning as the sole solution for HSM.