Algorithm versus implementation

When I first started making games, most things were written in C, with critical pieces optimized in assembly language. A skilled assembly programmer could beat the C compilers of the day by a factor of two or more, so this was an important optimization (for instance the blit routines in Allegro were all written in assembly).

But assembly language could be a dangerous pitfall. Rewriting a function in assembly language changed the details of the way it worked (the how, or implementation), but did not fundamentally alter its purpose (the what, or algorithm). Doing the same basic thing in a more efficient way could give a 2x, 3x, or even 4x speed boost, but larger gains could only be achieved by switching to a smarter algorithm.

The danger of assembly language was that it made code faster, but also more complex and harder to understand. By obscuring the underlying algorithm, this 2x optimization often hid the potential for much more significant algorithmic improvements.

I was reminded of this by a recent thread on the creators.xna.com forums, where a word search algorithm was taking more than half a second. Early suggestions focused on optimizing the implementation:

  • Use StringBuilder to reduce garbage
  • Pack short text strings into 64-bit integers to speed comparison (sketched below)
  • Use threads to parallelize the work across multiple CPU cores
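
As a hypothetical illustration of the second suggestion (a sketch of my own, not code from the thread), a short ASCII word can be packed into a ulong so that comparing two words becomes a single integer compare instead of a character-by-character walk:

    // Hypothetical sketch: pack a short ASCII word (up to 8 characters)
    // into a 64-bit integer, one byte per character.
    static ulong PackWord(string word)
    {
        ulong packed = 0;

        for (int i = 0; i < word.Length && i < 8; i++)
        {
            packed = (packed << 8) | (byte)word[i];
        }

        return packed;
    }

    // bool same = PackWord(a) == PackWord(b);  // one compare, no char loop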

These techniques are the C# equivalent of rewriting something in assembly language. They did indeed work, reducing the half-second search time to 80 milliseconds (a 6x gain), but there is a fairly low upper bound on how much such things can help. For instance on a quad core machine, threaded code can run at most 4x faster than a single core version (in practice the gain is usually far less).

Then Jon Watte suggested a better algorithm, using a tree structure to replace the original O(N!) (factorial) algorithm with an O(log N) version.

Result? The search now completes in less than 1 millisecond. That's more than a 500x improvement!
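
Jon's actual code isn't reproduced here, but the core idea of a prefix tree (trie) looks roughly like this (a minimal sketch under my own assumptions, not his implementation): each node maps a character to a child node, so looking up a word costs one step per character rather than a scan over every candidate string.

    using System.Collections.Generic;

    // Minimal trie sketch (an assumed illustration, not the thread's code).
    class TrieNode
    {
        public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
        public bool IsWord;
    }

    class Trie
    {
        TrieNode root = new TrieNode();

        // Insert a word, creating nodes as needed.
        public void Add(string word)
        {
            TrieNode node = root;

            foreach (char c in word)
            {
                TrieNode child;

                if (!node.Children.TryGetValue(c, out child))
                {
                    child = new TrieNode();
                    node.Children.Add(c, child);
                }

                node = child;
            }

            node.IsWord = true;
        }

        // Check whether a word exists: one dictionary step per character.
        public bool Contains(string word)
        {
            TrieNode node = root;

            foreach (char c in word)
            {
                if (!node.Children.TryGetValue(c, out node))
                    return false;
            }

            return node.IsWord;
        }
    }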

Just goes to show that, while languages and programming techniques continue to evolve, there are still few things as important as the fundamentals of algorithms and data structures.

  • I think you are too polite in your statements. Programmers who have never seen books like Cormen's "Introduction to Algorithms" are a disaster!

  • That was a good thread and I offer praise to Jon Watte for giving that suggestion. Finding a better algorithm for what you have is usually tough to do.

  • Thanks for posting this. Too many people are so worried about learning the syntax and semantics of a particular language - and that's a huge job - that they forget the part about the "what am I trying to do". In fact, I'm hoping that the part they were trying to speed up was indeed the part that was slowing them down the most, and that they didn't optimize the wrong piece first.

  • I was the one with the issue, and Jon's suggestion was terribly useful. The final time before his idea was actually 10 ms, so a 50x gain, but still (and that was only gained via different algorithmic changes).

    To respond to GuyWithDogs, the algorithm gets used fairly frequently and couldn't be removed/called less, so speeding it up reduced a lot of noticeable jerks. I did do some profiling beforehand to make sure that's where the issue was.

    To be fair, I wouldn't say calling technique #1 or #3 the C# equivalent of assembly is accurate. The original implementation *was* rubbish, throwing strings around like candy, causing immediate pain (lots of string construction) and later pain (lots of garbage for the GC). Avoiding that in general, I think, is just good practice (see the sketch after this comment). Technique #3 wouldn't really have been about speed; it would have been about calling the method early so the results were ready by the time they were needed. That's pretty powerful when there are no clever data structures available (i.e. resource loading).

    Technique #2, though, might as well have been assembly.  ^^
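
    As a hypothetical sketch of technique #1 (not the poster's actual code), reusing a single StringBuilder keeps string allocation, and therefore GC garbage, off the critical path:

        using System.Text;

        // Hypothetical sketch: reuse one StringBuilder instead of
        // concatenating strings, so each candidate word costs a single
        // allocation (the final ToString) rather than many.
        static StringBuilder scratch = new StringBuilder();

        static string BuildCandidate(char[] letters, int start, int length)
        {
            scratch.Length = 0; // reset in place without allocating

            for (int i = 0; i < length; i++)
            {
                scratch.Append(letters[start + i]);
            }

            return scratch.ToString();
        }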

  • I'd add that Cormen's Introduction to Algorithms is probably a better bet for the interested amateur than Knuth's vast text. It's more readable (e.g. pseudocode versus Knuth's assembly language for the examples) and gives a great introduction to 'Big O' notation and algorithm analysis:

    http://mitpress.mit.edu/algorithms/

    Maybe you or MS could create a virtual bookshelf of recommended volumes and online resources for those who have played about with XNA but want to progress to the next stage. Or a wiki where we could all contribute. What do you think?

  • I forgot that Scott Mitchell's excellent six-part 'An Extensive Examination of Data Structures for .NET 2.0' may also be useful. It's on the MSDN 'C# Algorithms and Data Structures' page, which unfortunately hasn't been updated in a while:

    http://msdn.microsoft.com/en-us/vcsharp/aa336800.aspx

  • I think it is possible to go too far along this line of thought, however...

    Sometimes it is not possible (given time constraints) to come up with a better algorithm, assuming that you have looked at all the relevant literature.

    Also, considering only the algorithm can lead to something that is theoretically faster but does not take advantage of the available hardware or the specifics of the data set.

    For example, a radix sort is linear time but has a large setup cost, so for smaller lists an n log n sort like quicksort is often better. One type of sort may also win due to hardware details, for example the number of Load-Hit-Stores it generates (a hybrid sketch follows this comment).

    This sort of knowledge is often gained only by considering the low-level details, and perhaps by threading or coding in assembly, etc.
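
    As a hypothetical illustration of that tradeoff (a sketch of my own, not from the thread), a hybrid sort can fall back to a simple insertion sort for small ranges, where quicksort's overhead doesn't pay off:

        // Hypothetical sketch: quicksort for large ranges, insertion
        // sort below a small cutoff where its low overhead wins.
        static void HybridSort(int[] items, int left, int right)
        {
            const int Cutoff = 16; // tuning value, found by measurement

            if (right - left <= Cutoff)
            {
                // Insertion sort: O(n^2), but very cheap on tiny ranges.
                for (int i = left + 1; i <= right; i++)
                {
                    int value = items[i];
                    int j = i - 1;

                    while (j >= left && items[j] > value)
                    {
                        items[j + 1] = items[j];
                        j--;
                    }

                    items[j + 1] = value;
                }
            }
            else
            {
                // Quicksort with a simple last-element pivot.
                int pivot = items[right];
                int store = left;

                for (int i = left; i < right; i++)
                {
                    if (items[i] < pivot)
                    {
                        int tmp = items[i];
                        items[i] = items[store];
                        items[store] = tmp;
                        store++;
                    }
                }

                int t = items[store];
                items[store] = items[right];
                items[right] = t;

                HybridSort(items, left, store - 1);
                HybridSort(items, store + 1, right);
            }
        }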

  • Isn't this about getting the best value for your money [return on investment (ROI)]?

    The assembly analogy is saying that most of the low-level techniques require a lot of investment [of time, lines of code, reduced maintainability] and the return is sometimes marginal.

    Most high-level changes [change of concept or algorithm] usually have a much better return and don't require that much investment.

    You shouldn't ignore either; both are important, and the best ROI could be in either. However, you should always consider what gives you the best ROI - it's your time, use it wisely.

    To finish on a badly engineered pun: it's like using a priority queue in real life :)

  • While this is true, I think when you are working at the cutting edge, developing new algorithms and the associated analysis can be orders of magnitude more expensive and time-consuming. Think of the amount of money pumped into universities to achieve this.

    Assuming that you are successful, which is by no means guaranteed.

    That said, most problems can be solved by adapting existing algorithms.

    Anyone want to come up with a more efficient sorting algorithm?

  • I also think that most people are more familiar with programming languages than with algorithms and data structures, because they assume algorithms are hard due to the complex math involved. Language syntax and semantics seem more approachable by comparison. Needless to say, math always looks more complicated than it actually is.

  • I agree completely with this. I've been working on a C# x86+DOS emulator similar to DOSBox - things like eliminating garbage generation on the critical path are certainly important, but I realized the biggest performance improvements when I fundamentally improved operand decoding algorithms. Still, I've also done more "rewriting stuff in assembly" than I should probably admit...
