Larry Osterman's WebLog

Confessions of an Old Fogey

What's the big deal with the Moore's law post?

In yesterday's article, Jeff made the following comment:

I don't quite get the argument. If my applications can't run on current hardware, I'm dead in the water. I can't wait for the next CPU.

The thing is, that's the way people have worked for the past 20 years.  A little story goes a long way toward describing how the mentality works.

During the NT 3.1 ship party, a bunch of us were standing around Dave Cutler while he was expounding on something (aside: Have you ever noticed this phenomenon?  Where everybody at a party clusters around the bigwig?  Sycophancy at its finest).  The topic at hand at the time (1993) was Windows NT's memory footprint.

When we shipped Windows NT, the minimum memory requirement for the system was 8M, the recommended was 12M, and it really shined at somewhere between 16M and 32M of memory.

The thing was that Windows 3.1 and OS/2 2.0 were both targeted at machines with between 2M and 4M of RAM.  We were discussing why NT was so big.

Cutler's response was something like "It doesn't matter that NT uses 16M of RAM - computer manufacturers will simply start selling more RAM, which will put pressure on the chip manufacturers to drive their RAM prices down, which will make this all moot". And the thing is, he was right - within 18 months of NT 3.1's shipping, memory prices had dropped to the point where it was quite reasonable for machines to come out with 32M or more of RAM. Of course, the fact that we put NT on a severe diet for NT 3.5 didn't hurt (NT 3.5 was almost entirely about performance enhancements).

It's not been uncommon for application vendors to ship applications that only ran well on cutting-edge machines, on the assumption that most of their target customers would upgrade their machines within the lifetime of the application (3-6 months for games - games are special, since gaming customers tend to have bleeding-edge machines and games have always pushed the envelope; 1-2 years for productivity applications; 3-5 years for server applications), and thus it wouldn't matter if their app was slow on current machines.

It's a bad tactic, IMHO - an application should run well on both the current generation and the previous generation of computers (and so should an OS, btw).  I previously mentioned one tactic that was used (quite effectively) to ensure this - for the development of Windows 3.0, the development team was required to use 386/20's, even though most of the company was using 486s.

But the point of Herb's article is that this tactic is no longer feasible.  From now on, individual CPU cores won't keep getting exponentially faster.  Instead, CPUs will improve in power by getting more and more parallel (and by having more and more cache, etc.).  Hyper-threading will continue to improve, and while the OS will be able to take advantage of this, applications won't unless they're modified.
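To make this concrete, here's a minimal sketch (standard C++, purely illustrative - the workload, sizes, and names are all made up) of the difference: a CPU-bound loop written as a single thread only gets faster when a single core gets faster, while the same work explicitly divided among threads can use however many cores the new chips provide.

    // Illustrative only: split a CPU-bound computation across hardware threads.
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Some CPU-bound work over a slice of the data (a stand-in for real work).
    static uint64_t SumSlice(const std::vector<uint64_t>& data, size_t begin, size_t end)
    {
        return std::accumulate(data.begin() + begin, data.begin() + end, uint64_t{0});
    }

    int main()
    {
        std::vector<uint64_t> data(50000000, 1);

        // Single-threaded version: its speed depends entirely on one core.
        uint64_t serial = SumSlice(data, 0, data.size());

        // Parallel version: divide the range among however many cores exist.
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 2;

        std::vector<std::thread> workers;
        std::vector<uint64_t> partial(cores, 0);
        size_t chunk = data.size() / cores;

        for (unsigned i = 0; i < cores; ++i) {
            size_t begin = i * chunk;
            size_t end = (i == cores - 1) ? data.size() : begin + chunk;
            workers.emplace_back([&, i, begin, end] { partial[i] = SumSlice(data, begin, end); });
        }
        for (auto& t : workers) t.join();

        uint64_t parallel = std::accumulate(partial.begin(), partial.end(), uint64_t{0});
        std::printf("serial=%llu parallel=%llu\n",
                    (unsigned long long)serial, (unsigned long long)parallel);
    }

The point isn't the arithmetic; it's that the second half of that program is the part nobody had to write for the last twenty years, and the part everybody will have to write from now on.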

Interestingly (and quite coincidentally) enough, it's possible that this performance wall will affect *nix applications more than it will affect Windows applications (and it will especially affect *nix derivatives that don't have a preemptive kernel and fully asynchronous I/O, as current versions of Linux do).  Since threading has been built into Windows from day one, most of the high concurrency application space is already multithreaded.  I'm not sure that that's the case for *nix server applications - for example, applications like the UW IMAP daemon (and other daemons that run under inetd) may have quite a bit of difficulty being ported to a multithreaded environment, since they were designed to be single threaded (other IMAP daemons (like Cyrus) don't have this limitation, btw).  Please note that platforms like Apache don't have this restriction since (as far as I know) Apache fully supports threads.
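For illustration, here's a rough sketch of what moving an inetd-style, process-per-connection daemon toward threads can look like (this is a toy, not any real daemon: POSIX sockets plus C++ threads, with a made-up port and banner).  The server runs its own accept loop and hands each connection to a thread; the per-connection protocol code can often stay close to the original single-threaded code, as long as it doesn't touch writable global state.

    // Illustrative only: a thread-per-connection accept loop, the simplest
    // step up from "inetd forks one single-threaded process per connection".
    // Error handling is omitted to keep the sketch short.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    static void HandleClient(int clientFd)
    {
        // The existing single-threaded protocol code would live here, largely
        // unchanged, provided it doesn't share writable globals between clients.
        const char banner[] = "* OK toy server ready\r\n";
        write(clientFd, banner, sizeof(banner) - 1);
        close(clientFd);
    }

    int main()
    {
        int listenFd = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(14300);   // placeholder port, not a real service

        bind(listenFd, (sockaddr*)&addr, sizeof(addr));
        listen(listenFd, SOMAXCONN);

        for (;;) {
            int clientFd = accept(listenFd, nullptr, nullptr);
            if (clientFd < 0) continue;
            // One thread per connection: easy to retrofit, though (as noted in
            // the comments below) it stops scaling at a few thousand clients.
            std::thread(HandleClient, clientFd).detach();
        }
    }

That's the easy retrofit; the comments below get into why even one-thread-per-client eventually stops scaling.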

This posting is provided "AS IS" with no warranties, and confers no rights.

  • 1/5/2005 1:29 AM Tom M

    > on Alpha VMS systems where everything was
    > run as a separate process, and
    > synchronisation was done by kernel objects
    > and messages. They were nice and clean [...]
    > I've always seen multi-threaded apps as a
    > bit of a hack because of the high process
    > startup cost on Windows.

    On VAX VMS systems everything was run as a separate process but processes were more expensive than on Unix. I thought the high cost of processes on WNT resembled WNT's predecessor[*] rather well.

    In VMS as well as in other OSes there existed other kinds of resources besides heaps, and some of them were just as painful to administer. Mailboxes were convenient for some things but were not so convenient for other things.

    Of course the parts of it that were done right were done right. It had shabby edges instead of being shabby all the way through. The high cost of processes was indeed one of those shabby edges though.

    [* WNT-- might be clear but inaccurate;
    W--N--T-- might be accurate but unclear.]
  • This post is all wrong. You've failed to name one performance-sensitive UNIX app that can't use threads. Furthermore, there's good standards-based support for threading on UNIX.

    IMAPD, QPOPPER, FTPD are performance-critical and would benefit more from multithreading? What are you talking about??
  • Seun,
    Let me try again. You're taking one paragraph in a post and making claims about the entire post based on that.

    Here's my (and Herb's) point: ALL single-threaded applications are going to start having issues because CPUs aren't going to be getting any faster.

    This is true, REGARDLESS of platform - As I noted, single threaded Win32 apps have the same issues. IMAPD, QPOPPER and FTPD are simply examples of INETD based apps that are single threaded.

    IMAPD is a performance-sensitive *nix app - it has several operations (SEARCH and THREAD) that are (a) potentially highly parallelizable, and (b) totally CPU bound. And clients that issue search verbs to the IMAP server are forced to wait until the server returns, which makes those verbs performance critical. For the past 20 years, as newer and newer machines came out, the IMAP server would get faster and faster, without any work on the part of Mark Crispin.

    The *nix design philosophy of launching one process per client encourages this design. On Win32, one process per client is an utter performance disaster, and I know of no production Win32 server that uses it (because they simply can't scale beyond a couple of thousand clients). Instead, Win32 server apps have been forced (due to the relatively high cost of processes) to implement some form of asynchronous scheduling mechanism (a rough sketch of one such approach, a bounded worker pool, appears after these comments). One thread per client doesn't scale either - at one megabyte of stack space per thread, you can only fit about 2000 stacks in a 2G address space, which again limits your scalability.

    My point here is simply that any single threaded server application (and as I explained in the previous paragraph, many *nix servers tend to fall into this pattern) will cease to see performance improvements simply from buying new hardware.

    Instead, the authors of those applications will have to redesign their applications to take advantage of the various techniques that will allow them to exploit the inherent parallelism of the new processors. Whether it's multithreading or OpenMP, or whatever, it doesn't matter.

    For the most part, Win32 servers have already dealt with this problem (because the Win32 platform forced them to deal with it earlier); now *nix apps will also have to deal with the issue.

    Btw, I never once said that *nix didn't have threading - I know there's "good standards-based threading" for *nix platforms. But any app that runs under INETD is unlikely to take advantage of it (it might, but up to now it hasn't needed to).
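The sketch referred to in the comment above: a minimal, purely illustrative version (portable C++ with a plain queue, not real Win32 I/O completion ports - all names here are invented) of a bounded pool of worker threads servicing requests from many clients, instead of one process, or one thread and one stack, per client.

    // Illustrative only: a bounded pool of worker threads draining a queue of
    // work items.  A real Win32 server would use I/O completion ports; this is
    // just the shape of the idea, in portable C++.
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class WorkQueue {
    public:
        explicit WorkQueue(unsigned workerCount)
        {
            for (unsigned i = 0; i < workerCount; ++i)
                workers_.emplace_back([this] { Run(); });
        }

        ~WorkQueue()
        {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                stopping_ = true;
            }
            cv_.notify_all();
            for (auto& t : workers_) t.join();
        }

        // Called from any thread (e.g. whatever accepts connections).
        void Post(std::function<void()> item)
        {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                items_.push(std::move(item));
            }
            cv_.notify_one();
        }

    private:
        void Run()
        {
            for (;;) {
                std::function<void()> item;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return stopping_ || !items_.empty(); });
                    if (items_.empty()) return;   // only true once we're stopping
                    item = std::move(items_.front());
                    items_.pop();
                }
                item();   // handle one client request on one of the few workers
            }
        }

        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> items_;
        std::vector<std::thread> workers_;
        bool stopping_ = false;
    };

    int main()
    {
        // A handful of threads service "requests" from thousands of logical
        // clients; the thread count tracks the CPU count, not the client count.
        WorkQueue queue(4);
        for (int client = 0; client < 10000; ++client)
            queue.Post([client] { /* parse and answer one request */ (void)client; });
    }

The design point is that the number of threads tracks the number of processors, not the number of clients, which is what lets a server handle tens of thousands of connections without tens of thousands of stacks.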
