When the PC becomes the server

I was speaking with a friend yesterday about a requirement he has to separate the maintenance tasks running on his SQL Server from the regular user transactions. Specifically, he wanted to throttle the reindexing tasks while users were connected, since his shop runs 24x7. I explained that in SQL Server 2008 we've addressed this need (and others) directly. I'll go into all of that in another post in the future.
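
In the meantime, here's a rough sketch of the kind of throttling I mean, using the Resource Governor feature in SQL Server 2008. Treat it as an illustration only: the pool, workload group, and login names (MaintPool, MaintGroup, MaintLogin) and the 20 percent CPU cap are made up for this example, not recommendations.

-- Run in the master database. A sketch only, not production code.
USE master;
GO

-- Create a pool that caps maintenance work at roughly 20 percent CPU.
CREATE RESOURCE POOL MaintPool
    WITH (MAX_CPU_PERCENT = 20);
GO

-- Create a workload group that uses that pool.
CREATE WORKLOAD GROUP MaintGroup
    USING MaintPool;
GO

-- A classifier function routes sessions from a hypothetical maintenance
-- login into MaintGroup; everything else falls through to the default group.
CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'MaintLogin'
        SET @grp = N'MaintGroup';
    RETURN @grp;
END;
GO

-- Point Resource Governor at the classifier and apply the changes.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

With something like that in place, whatever the maintenance login runs during the reindexing window is held to its slice of the machine, while user transactions keep the rest.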

But this brought up an interesting point. The root cause of this issue is that servers grew up from personal computers. I'm old enough to remember when the computer I used was a mainframe that I accessed over a terminal, which was just a fancy modem with a screen and keyboard attached. On a mainframe, no single user ever got 100% of the processor, memory, or just about any other component. A mainframe operator assigned percentages of those resources to each user or workload.

A PC, however, is different. Both the hardware and the software were designed with only one user in mind, so each user and process assumes it gets 100% of everything. Sure, we have multitasking operating systems now (really just time-slicing), but that still doesn't change the basic architecture the system was designed around. So if you take one of those personal computers and grow it up into a server, it follows that a lot of those architectural decisions come right along with it.

It seems to me that the more things change, the more they stay the same. When you think about the requirements of large-scale computing, you'll notice that you run into the same problems that mainframe systems dealt with decades ago. And I think you'll start seeing some of the same solutions they came up with as well.
