Larry Osterman's WebLog

Confessions of an Old Fogey

Larry's rules of software engineering part 1, redux



One of my early blog posts was entitled “Every software engineer should know roughly what assembly language their code generates”.  It got a lot of commentary.

Well, I just ran into (via /.) the following article by Randall Hyde, entitled “Why learning assembly language is still a good idea”.

His article is much more in-depth and better written than my post, but essentially restates my premise.  I'm always happy when other people post stuff that agrees with me :)

 

 

  • I just got 'Debugging .NET and Windows Applications' by John Robbins; it has an excellent chapter on reading x86 assembly, as well as a chapter on MSIL. Well worth a read.
  • If you just write for .NET, is it worth learning x86, x86-64, and IA-64 because customers might choose to run your code on any one of them, or is learning MSIL enough?
  • I agree that programmers should know assembly language. But I think the best justification is Joel Spolsky's concept of "Leaky Abstractions" (http://www.joelonsoftware.com/articles/LeakyAbstractions.html).

    The observation that application performance has not kept up with Moore's Law is certainly true. However, I think it is wrong to conclude that programmers have been negligent of performance. The most important reasons that applications haven't followed Moore's Law, as I see it, are:

    1) Memory access time has not improved as fast as Moore's Law.

    2) Programmers are taking advantage of higher-level techniques like garbage collection that sacrifice performance to develop code faster.

    3) Users' expectations of software performance have risen faster than Moore's Law. (Today's word processor is not 16,384 times faster than yesterday's, but yesterday's word processor didn't have to perform Unicode-aware, mixed left-to-right and right-to-left text reflow in real time.)

    That said, I do agree with Hyde's comment that improving an existing algorithm's running time can be more important than switching to an algorithm with better asymptotic performance. Given the choice between an instant 2x speedup and going from O(n^x) to O(n^(x-1)) in the software I use every day, I'd probably take the 2x speedup (although I'm pretty sure my software has good asymptotic scaling anyway :). See the sketch after the comments for a rough back-of-the-envelope comparison of the two options.
  • I touched on this in a post a while back dealing with what differentiates a "good" programmer from a "bad" one or "good" code from "bad" code.

    http://www.lazycoder.com/article.php?story=20040131151944229

    Knowing what the compiler is doing isn't just about the assembly code it generates. It can mean understanding what the code does for you under the hood, even without knowing the exact instructions involved. It's all about knowing what's going on.
  • Hey Dan, in a lot of cases, garbage collection can be *FASTER* than malloc/free!

    And IMO, learning MSIL is enough... has anybody LOOKED at the assembly language generated from MSIL? Let's just say I've written a lot of assembly language in my day, and MSIL JIT'd to x86 is becoming almost as verbose as an unabridged dictionary.

    However, understanding the MSIL would still surface the copy constructor scenario brought up in the original post (a small C++ sketch of that scenario follows the comments).
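
To make the "work the compiler does for you under the hood" point and the copy constructor scenario concrete, here is a minimal C++ sketch. The Widget type and the UseByValue/UseByRef function names are invented for illustration and don't come from either post; the only point is that a pass-by-value parameter makes the compiler emit a copy-constructor call that never appears in the source.

    #include <cstdio>
    #include <string>

    struct Widget {
        std::string name;
        Widget(const char* n) : name(n) { std::puts("construct"); }
        Widget(const Widget& other) : name(other.name) { std::puts("copy"); }  // the hidden cost
    };

    // Pass by value: the compiler silently inserts a copy-constructor call at
    // every call site, even though nothing in the function body asks for a copy.
    void UseByValue(Widget w)      { std::printf("%s\n", w.name.c_str()); }

    // Pass by const reference: no copy is generated.
    void UseByRef(const Widget& w) { std::printf("%s\n", w.name.c_str()); }

    int main() {
        Widget w("example");
        UseByValue(w);   // prints "copy" - work the source code never spells out
        UseByRef(w);     // no copy
        return 0;
    }

Looking at the generated assembly (or the MSIL, for the managed equivalent) is one way to notice that extra call; knowing the language rules well enough to expect it is another.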
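
And to put some made-up numbers on Dan's 2x-versus-big-O trade-off: the sketch below assumes a hypothetical baseline cost of n^3 operations (x = 3 chosen arbitrarily) and identical constant factors for both algorithms, none of which comes from the posts above.

    #include <cstdio>
    #include <cmath>

    int main() {
        // Compare a fixed 2x constant-factor win against dropping one power of n.
        const double sizes[] = {10.0, 1000.0, 100000.0};
        for (double n : sizes) {
            double baseline   = std::pow(n, 3);   // original algorithm, ~n^3 operations
            double twiceFast  = baseline / 2.0;   // same algorithm with a 2x speedup
            double betterBigO = std::pow(n, 2);   // hypothetical O(n^(x-1)) replacement
            std::printf("n=%9.0f   2x speedup: %.3g ops   better big-O: %.3g ops\n",
                        n, twiceFast, betterBigO);
        }
        return 0;
    }

Under these assumptions the asymptotic improvement wins by a factor of n/2, so the 2x speedup only comes out ahead when n is small or when the better algorithm carries much larger constants, which is exactly the caveat in Dan's comment.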