Larry Osterman's WebLog

Confessions of an Old Fogey

Moore's Law Is Dead, Long Live Moore's Law

Herb Sutter has an insightful article that will be published in Dr. Dobb's in March, but he's been given permission to post it to the web ahead of time.  IMHO, it's an absolute must-read.

In it, he points out that developers will no longer be able to count on the fact that CPUs are getting faster to cover their performance issues.  In the past, it was ok to have slow algorithms or bloated code in your application because CPUs got exponentially faster - if your app was sluggish on a 2GHz PIII, you didn't have to worry, the 3GHz machines would be out soon, and they'd be able to run your code just fine.

Unfortunately, this is no longer the case - the CPU manufacturers have hit a wall, and are (for the foreseeable future) unable to make faster processors.

What does this mean?  It means that (as Herb says) the free lunch is over. Intel (and AMD) isn't going to be able to fix your app's performance problems - you've got to fall back on solid engineering: smart and efficient design, extensive performance analysis and tuning.

It means that using STL or other large template libraries in your code may no longer be acceptable, because they hide complexity.

It means that you've got to understand what every line of code is doing in your application, at the assembly language level.

It means that you need to investigate to discover if there is inherent parallelism in your application that you can exploit.  As Herb points out, CPU manufacturers are responding to the CPU performance wall by adding more CPU cores - this increases overall processor power, but if your application isn't designed to take advantage of it, it won't get any faster.
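
As a purely hypothetical sketch of what "exploiting inherent parallelism" can look like in unmanaged code (the function and data here are made up for illustration), here's a loop whose halves are independent, split across two Win32 threads so a dual-core machine can run both halves at once:

    #include <windows.h>
    #include <process.h>

    struct Work { float* samples; int count; };

    // Worker: processes its half of the buffer.  The two halves don't
    // overlap, so the threads never touch the same data.
    unsigned __stdcall NormalizeHalf(void* arg)
    {
        Work* w = (Work*)arg;
        for (int i = 0; i < w->count; i++)
            w->samples[i] *= 0.5f;
        return 0;
    }

    void NormalizeAll(float* samples, int count)
    {
        Work halves[2] = { { samples, count / 2 },
                           { samples + count / 2, count - count / 2 } };
        HANDLE threads[2];
        for (int t = 0; t < 2; t++)
            threads[t] = (HANDLE)_beginthreadex(NULL, 0, NormalizeHalf,
                                                &halves[t], 0, NULL);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        for (int t = 0; t < 2; t++)
            CloseHandle(threads[t]);
    }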

Much as the financial world enjoyed a 20-year bull market that recently ended (ok, it ended in 1999), the software engineering world enjoyed a 20-year-long holiday that is about to end.

The good news is that some things are still improving - memory bandwidth continues to increase, and hard disks are continuing to get larger (but not faster).  CPU manufacturers are also going to continue to add more L1 cache to their CPUs, which should continue to help.

Compiler writers are also getting smarter - they're building better and better optimizers, which can do some quite clever analysis of your code to detect parallelism that you didn't realize was there.  Extensions like OpenMP (in VS 2005) also help here.
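
With OpenMP, all of the explicit thread management in the sketch above collapses into a single pragma.  A minimal sketch, assuming a compiler with OpenMP support (like VS 2005 with /openmp):

    // With OpenMP enabled, the pragma tells the compiler to split the
    // loop iterations across the available CPU cores.
    void ScaleSamples(float* samples, int count, float gain)
    {
        #pragma omp parallel for
        for (int i = 0; i < count; i++)
        {
            samples[i] *= gain;   // iterations are independent, so this is safe
        }
    }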

But the bottom line is that the bubble has popped and now it's time to pay the piper (I'm REALLY mixing metaphors today).  CPUs aren't going to be getting any faster anytime soon, and we're all going to have to deal with it.

This posting is provided "AS IS" with no warranties, and confers no rights.

  • So if CPUs have hit the wall in terms of performance, aren't we just going to see scalability outwards with multi-CPU systems becoming more commonplace? (with the OS abstracting away multi-CPU issues)

    I can possibly see this as a problem for a small fraction of developers out there (game developers, scientific app devs, imaging developers, etc.) where performance is crucial, but will this really affect internal corporate developers writing web apps or developers writing standalone business software?
  • Ryan: Absolutely - that's the point of Herb's article.

    But multiple CPU cores will only help so much, and only if your application can take advantage of them (by either being multithreaded or by using OpenMP to expose finer-grained parallelisms in your application).
  • "It means that using STL or other large template libraries in your code may no longer be acceptable, because they hide complexity.

    It means that you've got to understand what every line of code is doing in your application, at the assembly language level."

    So no more .net Framework then? ;-)


  • If your app's written to the .Net framework today, and you're happy with its performance, then there's no reason not to continue to use it.

    If you write new code, you should write to the .Net framework, because of the improved security/manageability (especially if you're deploying network-facing applications).

    But if you're going to switch from unmanaged code to managed code, you can't count on Moore's Law getting you out of the 5%-10% slowdown you're going to get by going to managed code.

    And you need to be extra careful about your performance. You need to understand the performance ramifications of the various collection containers (System.Collections.ArrayList, System.Collections.Hashtable). Managed code is much easier to use, but part of the reason for its ease of use is that complexity is hidden - it's really easy to write managed applications that perform poorly.
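
    To make the container point concrete in unmanaged terms, here's a hypothetical C++ sketch (the types and costs differ from the managed containers, but the lesson is the same - the cost of a lookup is invisible at the call site, so you have to know your container):

        #include <map>
        #include <string>
        #include <vector>

        // Linear scan of a vector: O(n) per lookup.
        int FindPrice(const std::vector< std::pair<std::string, int> >& v,
                      const std::string& key)
        {
            for (size_t i = 0; i < v.size(); i++)
                if (v[i].first == key) return v[i].second;
            return -1;   // not found
        }

        // Tree lookup in a map: O(log n) per lookup.
        int FindPrice(const std::map<std::string, int>& m,
                      const std::string& key)
        {
            std::map<std::string, int>::const_iterator it = m.find(key);
            return (it != m.end()) ? it->second : -1;
        }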
  • I think you're absolutely right for systems developers and the people Ryan mentions: people on the edge of the performance envelope. For everybody else (the vast majority of developers) I'm not sure I see how it matters all that much.
  • mschaef, you might be right.

    But the only kind of developer that I can think of whose code won't end up on the bleeding edge of the performance envelope are hobbyist developers and those who deal with a very limited set of customers.

    If you're writing asp.net applications, then what happens when your asp.net application gets /.'ed?

    Remember the IBM ads where a small web company goes live and the product orders start coming in? They got 10, then 20, then 100, then 1000, then tens of thousands of orders. The IBM ad was basically about whether or not your web services platform could handle unexpected new traffic. The implication in those ads was that if you went with IBM you wouldn't have this problem - but their point is still valid. If you've got a CPU bottleneck in your web application, you won't be able to buy a faster box to run it on.
  • Larry...
    "If you've got a CPU bottleneck in your web application, you won't be able to buy a faster box to run it on."

    Luckily web applications are highly parallel and are multi-threaded by nature (at least in every technology I've used). Each request can run on its own thread. Multi-CPU servers will allow you to scale out this type of application. Heck, most large web applications run just fine in server clusters.

    That said, I agree with the premise of the article. I've noticed that since 2003 desktop CPUs have not gotten any faster (and it looks like it will be that way for 2005). Intel has recently stated that in the next few years, we'll see new CPUs that have 10x the performance of current products. I'll believe it when I see it. I wonder how long they can keep selling the same P4s with different product numbers?
  • Either we can revert to programming in pseudo-assembly, or we could let the machines do the work for us instead. I'm voting for declaring shared-state concurrency a bankrupt idea and putting our efforts into developing more modern languages.
    There is an interesting discussion on the exact same topic over on Lambda the Ultimate: http://lambda-the-ultimate.org/node/view/458
  • I don't think we've got to go back to programming in assembly.

    But we DO need to understand the consequences of our code.

    Do you use System.StringBuilder to concatenate strings or System.String.operator+=? It's actually not clear which is more efficient - for some string values, StringBuilder's more efficient, for others operator+= is.
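
    The same hidden cost shows up in unmanaged code. A hypothetical C++ sketch (the function names are made up, but the pattern is the classic one):

        #include <string>
        #include <vector>

        // "s = s + piece" builds a brand-new string on every iteration,
        // copying everything accumulated so far - O(n^2) overall.
        std::string JoinSlow(const std::vector<std::string>& pieces)
        {
            std::string s;
            for (size_t i = 0; i < pieces.size(); i++)
                s = s + pieces[i];
            return s;
        }

        // "s += piece" appends into the existing buffer, which is
        // amortized O(n) - the same win StringBuilder usually gives you.
        std::string JoinFast(const std::vector<std::string>& pieces)
        {
            std::string s;
            for (size_t i = 0; i < pieces.size(); i++)
                s += pieces[i];
            return s;
        }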
  • "But the only kind of developer that I can think of whose code won't end up on the bleeding edge of the performance envelope are hobbyist developers"

    Well, it seems pretty clear (based on a simple perusal of software store aisles, and custom projects I'm familiar with) that it's possible to build interesting and commercially viable software on systems with significantly "obsolete" hardware. For software that does need cutting edge performance, I'd expect that most projects do things like leverage existing engines, and thereby leave the truly heavy lifting to the specialists.

    "and those who deal with a very limited set of customers. "

    Compared to a developer on Windows, that's pretty much everybody. ;-)

    "But we DO need to understand the consequences of our code. "

    I'm not advocating ignoring the consequences of our code, just making the statement that modern machines are powerful enough that naive approaches can be surprisingly effective (and cheaper to implement).
  • mschaef, you're right, modern machines ARE fast enough.

    But the key takeaway (IMHO) is that the days of assuming that we can ship bloatware and rely on Moore's Law to cover our mistakes are over - apps aren't going to get significantly faster on new machines without source code changes, and that means that a lot of apps may be burned. For example, the Microsoft Picture It! team won't be able to ship a pokey version assuming that in the near future machines will be fast enough to run their code well (I'm just picking on the Picture It! team here, it's actually pretty fast on my machine at home).
  • This is a huge counterargument against managed code. The working set of managed code is huge, and STL and the majority of C++ features are blazingly fast compared to the penalties incurred by the .Net runtime.

    Is Herb Sutter shooting down his own case for the managed code here?

  • Amit,
    I actually disagree with it being an argument against managed code. Working set isn't the issue here - Herb's not talking about memory bandwidth, he's talking about CPU bandwidth. Memory bandwidth is likely to continue to improve (and to be more important as time goes on).

    And it's possible to write high performance ASP.NET applications (it's also possible to write poorly performing ASP.NET applications).
  • I don't quite get the argument. If my applications can't run on current hardware, I'm dead in the water. I can't wait for the next CPU.

    And this... "Concurrency is the next major revolution in how we write software." Uh, call me crazy, but isn't that every Web app made today? It sure would suck if our apps were only handling one request at a time!

  • Jeff,
    I think I'd like to answer that question in more depth tomorrow.

    But the simple answer is that people have "known" for the past 20 years that if their app was just tolerable on the current generation of hardware that all they'd have to do is to wait for the next generation and it'd work just fine. So they'd ship apps that were pokey on current hardware because they knew that it'd be better on new hardware.

    The thing is that that assumption is no longer true.

    The world isn't just web apps - and even web apps often have shared state that limits concurrency.
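
    A hypothetical sketch of what that looks like in practice - one piece of shared state serializing every request, no matter how many CPUs you throw at it (Win32, with made-up names):

        #include <windows.h>

        // One counter shared by every request on the server.
        // InitializeCriticalSection(&g_lock) is called once at startup.
        CRITICAL_SECTION g_lock;
        int g_nextOrderNumber = 0;

        int AssignOrderNumber()
        {
            EnterCriticalSection(&g_lock);    // every request thread queues up
            int order = ++g_nextOrderNumber;  // here, one at a time, regardless
            LeaveCriticalSection(&g_lock);    // of how many cores the box has
            return order;
        }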