Who Cares How Fast the Code Is?

Ironically, one of the problems teachers can have with teaching about optimizing programs is that computers are a) so fast now and b) getting faster all the time. Students often do not see the need to create more efficient algorithms because they assume that what they have is fast enough, and if it isn't, the next computer they buy will "fix" the problem by being faster. And truth be told, with most of the toy programs we are forced to use in a classroom (there is not enough time for really large, complicated projects), things do work out that way. But real life is more complicated than that, so we really owe it to students to discuss code optimization (and refactoring, which is closely related).

I've had a couple of conversations with students over the years that basically took the form of "yeah, it is slow, but it works and I'll never need to run it again." There is some logic to that, of course. I once had a program that I ran maybe once a week. I had thrown it together in a hurry to meet an immediate need. Once I realized I would need it more often, I also realized that it was very inefficient, and I saw several ways it could be made a lot faster. But I did some math. It took about a minute longer than it needed to (it could probably run in seconds), and it would take me at least an hour to rewrite it. At a minute saved per run, the rewrite would pay for itself only after 60 runs. Was I going to run it 60 times? Probably not, so where was the payback for my time? I did get a new computer shortly thereafter that was fast enough to cut the run time in half, which pushed the break-even point out to 120 more runs; the fix would really cost me more time than it would save. That sort of math takes place more often than many people think, by the way. But sometimes it comes out very differently.
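That back-of-the-envelope calculation is worth making explicit for students. Here is a minimal sketch of it; the function name is mine and the numbers simply mirror the story above:

```python
def payback_runs(rewrite_cost_minutes, minutes_saved_per_run):
    """How many runs before an optimization pays for itself."""
    return rewrite_cost_minutes / minutes_saved_per_run

# An hour to rewrite, about a minute saved per run:
print(payback_runs(60, 1.0))  # 60.0 runs to break even
# After the new computer cut the excess run time in half:
print(payback_runs(60, 0.5))  # 120.0 runs to break even
```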

Sometimes the issue is applications that really need to be fast. Other times it is hardware with limitations that have to be taken into account because changing the hardware is not an option. Cy Khormaee recently talked to Paul Oliver of Legendary Studios to come up with a list of optimizations that should be taken into account when creating games for the Zune device. The Zune was designed as a music player, not a game device. Since XNA Game Studio 3.0 (now available in preview) lets programmers create games for the Zune, this creates an interesting learning opportunity. Specifically, the hardware limitations have to be taken into account if one wants to create a game that performs well enough for people to really enjoy. This is a chance for a real "teachable moment." The list Paul and Cy put together makes for a good read and the start of some interesting discussions.
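I will not reproduce their list here, but as a purely hypothetical illustration of the kind of trade-off involved: on hardware with weak floating-point performance, a classic move is trading memory for speed with a lookup table.

```python
import math

# Hypothetical example (not from Paul and Cy's list): precompute sine values
# once at startup instead of calling math.sin on every frame.
TABLE_SIZE = 360
SIN_TABLE = [math.sin(math.radians(d)) for d in range(TABLE_SIZE)]

def fast_sin(degrees):
    """Trade a little accuracy for a cheap list index."""
    return SIN_TABLE[int(degrees) % TABLE_SIZE]

print(fast_sin(30), math.sin(math.radians(30)))  # close enough for a sprite
```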

Also on performance: Dare Obasanjo, who deals with some very large, data-intensive social networking applications, recently took a look at some of Twitter's scaling problems on his blog. He examines how basic design choices can make the difference between an application that really works and one that collapses under the weight of its input/output needs. It is a discussion worth having with students, because many of the most important applications revolve around how data is saved and retrieved.
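One way to bring that point into the classroom is to contrast a linear scan with an indexed lookup. This sketch is my own invented illustration of the design choice, not anything from Dare's post or Twitter's actual code:

```python
from collections import defaultdict

# Invented data: (author, text) pairs, repeated to make the cost visible.
messages = [("alice", "hi"), ("bob", "yo"), ("alice", "lunch?")] * 100_000

# Naive: scan every message on every request -- O(n) per lookup.
def messages_for_scan(user):
    return [text for (author, text) in messages if author == user]

# Indexed: pay once to build a dictionary, then each lookup is cheap.
index = defaultdict(list)
for author, text in messages:
    index[author].append(text)

def messages_for_indexed(user):
    return index[user]
```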

  • I don't know - should it delay a project? I guess I've just run into over-optimizers in my line of work who can't seem to turn anything in on time.

    Just out of curiosity: when you were teaching and had a project that was due on Friday... did you ever give students the option of turning the project in a week late provided that they added some additional features?

    That's the best real-world IT scenario I can come up with!

  • I used to offer better grades (sort of like more money) for more features, but not very often more time for more features. Part of that is related to school schedules and when grades have to be in.

    When I wrote code for a living I remember being asked for more features in less time as often as they were willing to give me more time. :-)

  • If you want students to gain *experience* optimizing code (and leave the debate about when to do it as a cost-benefit analysis for the workplace) then all you need to do is set up a situation where you have a computation that demands efficiency.

    Your physics simulation works with 10 bouncing balls?  Great, throw in 500.  Everything's fine in your flocking algorithm with 25 boids?  Great, make it 250.

    The trick is creating specs or situations that lend themselves to a naive implementation that *can* be optimized (see the sketch just below this comment).  I don't think this is too difficult in the classroom, even without getting students into large codebases or super-complex projects.
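To make that concrete, here is a minimal sketch of such a setup (all names and numbers are invented for illustration): the all-pairs neighbor check students write first works fine for 25 boids, and a radius-sized spatial grid is the optimization that 250 boids invites.

```python
import math
import random
from collections import defaultdict

def neighbors_naive(boids, radius):
    """All-pairs check: easy to write, fine for 25 boids, O(n^2) overall."""
    result = []
    for i, (x1, y1) in enumerate(boids):
        near = [j for j, (x2, y2) in enumerate(boids)
                if i != j and math.hypot(x1 - x2, y1 - y2) < radius]
        result.append(near)
    return result

def neighbors_grid(boids, radius):
    """Bucket boids into radius-sized cells; only compare neighboring cells."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(boids):
        grid[(int(x // radius), int(y // radius))].append(i)
    result = []
    for i, (x1, y1) in enumerate(boids):
        cx, cy = int(x1 // radius), int(y1 // radius)
        near = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), []):
                    x2, y2 = boids[j]
                    if i != j and math.hypot(x1 - x2, y1 - y2) < radius:
                        near.append(j)
        result.append(near)
    return result

boids = [(random.random() * 100, random.random() * 100) for _ in range(250)]
naive = neighbors_naive(boids, 5.0)
fast = neighbors_grid(boids, 5.0)
assert all(sorted(a) == sorted(b) for a, b in zip(naive, fast))
```

The grid version does the same distance test for nearby pairs but skips the distant ones entirely, which is exactly where the O(n²) cost lives.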

  • It isn't a faster computer anymore; it is more cores thrown at the problem.  Their solution will have to throw more threads at the problem, not more raw CPU speed, to overcome a purely CPU-bound issue.

    Having a student learn to debug and optimize code sooner rather than later will allow them to see better paths and execute their designs better in the future.

    If they always go for an O(2^n) solution, who cares if there is an O(n) solution out there, right?  A faster computer will fix that ... (see the Fibonacci sketch below)
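For what it is worth, the classic classroom illustration of that gap (my example, not the commenter's) is Fibonacci: the naive recursion takes exponential time, a simple loop takes linear time, and no hardware upgrade closes that distance.

```python
def fib_naive(n):
    """Exponential time: recomputes the same subproblems over and over."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_linear(n):
    """Linear time: one pass with two running values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_linear(90))  # returns instantly
# fib_naive(90) would not finish in a lifetime, on any computer you can buy.
```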
