Turning a bug into a feature


I was amused to read this post about an arithmetic bug which accidentally turned into an AI feature (found via Raymond’s recent link clearance). It reminded me of a story that my friend Lars told me about working on a certain well-known first-person shooter back in the day.

The developers of this game had a problem: an enemy computer-controlled character would see a fatal threat (say, a thrown grenade) and run away from it. Trouble is, the half-dozen other enemies in the area would see the same threat and they would all try to leave via the same best exit path. They’d bump into each other, shuffle around, reverse direction… and the result was an unrealistic-looking mess that called attention to the artificiality of the world. Basically, the AI had a bug; it was not smart enough to find efficient paths for everyone, and was thereby making the game less fun.

You might try to solve this problem by implementing a more complex and nuanced escape route finding algorithm for cases where multiple AI characters are all faced with threats. However, machine cycles are a scarce resource in first-person shooters; this solution probably doesn’t fit into the performance budget and it’s a lot of dev work. They finally solved the problem by writing a cheap “reverse detector” that detected when an AI character had radically changed direction more than once in a short time period. When the detector notices that an AI character has been running in two different directions in quick succession, the character’s default behavior is changed to some variation on “crouch down and cover your head”.
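The actual implementation is of course not public, but the detector described above is simple enough to sketch. The following is a minimal, hypothetical version (all names and thresholds are invented): it remembers recent sharp turns and signals duck-and-cover once a character has reversed direction more than once inside a short window.

```python
import math

class ReversalDetector:
    """Flags an AI character for duck-and-cover when it reverses direction
    more than once in a short time window. (Hypothetical sketch; the names
    and thresholds here are invented, not from the actual game.)"""

    def __init__(self, window=1.0, max_reversals=1, angle_threshold=math.radians(120)):
        self.window = window                    # seconds to remember reversals
        self.max_reversals = max_reversals      # reversals tolerated before ducking
        self.angle_threshold = angle_threshold  # how sharp a turn counts as a reversal
        self.last_heading = None
        self.reversal_times = []

    def update(self, heading, now):
        """heading: (x, y) unit vector of current movement; now: a timestamp.
        Returns True when the character should switch to duck-and-cover."""
        if self.last_heading is not None:
            dot = heading[0] * self.last_heading[0] + heading[1] * self.last_heading[1]
            # An angle wider than the threshold between successive headings
            # counts as a reversal (dot product below cos(threshold)).
            if dot < math.cos(self.angle_threshold):
                self.reversal_times.append(now)
        self.last_heading = heading
        # Forget reversals that have aged out of the window.
        self.reversal_times = [t for t in self.reversal_times if now - t <= self.window]
        return len(self.reversal_times) > self.max_reversals
```

The appeal of this approach is its cost: one dot product per frame per character, versus a multi-agent pathfinding search. Tuning the window and angle threshold controls how twitchy the duck-and-cover trigger is.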

With this policy in place an enemy might run away from your grenade, bump into another fleeing enemy, turn back to try to find a new route, and that would trigger the duck-and-cover behaviour. The net result is that not only does this look like realistic human behavior, it is very satisfying to the player who threw the grenade.

I also noticed that Shawn’s post about the arithmetic bug was part of a feature whereby the AI of the racing game was tuned not to make the AI strive to win the game, but rather to make the AI strive to produce a more enjoyable playing experience for the human player. Lars wrote a short paper on the subject of “artificial stupidity” – that is, deliberately designing flaws into AI systems so that they appear intelligent while at the same time creating fun situations rather than frustrating situations. Quite an interesting read. (See this book for more thoughts on game AI design.)

Sadly, bugs in compilers seldom turn out to actually be desirable features, though it does occasionally happen.

  • This post (especially at the end) reads like you are trying to further justify the wrong behavior in the C# compiler that your last blog post talked about.

    First, my previous post was my April Fool's Day post. I assume you are talking about the post about the semantics of base calls.

    Second, the behaviour I describe is not wrong. It is by design. You might disagree that this was a good design decision; apparently most people do. I certainly see that point of view and sympathize with it, but I continue to maintain that the choice made by the language designers is the best choice that could be made given the alternatives. (Which were: preserve a crashing bug, create a subtle breaking change, or do expensive and complex work to enable a scenario we think is a bad idea in the first place. Whether the first is the lesser or greater evil than the second is a judgment call; I don't expect everyone to agree with judgment calls. That's what makes them judgment calls.)

    And third, this post has nothing whatsoever to do with the post about base calls. This post is about two things, first, that sometimes a bug accidentally causes a desirable behaviour (in the racing game case) or can be cheaply transformed into a desirable behaviour (as in the action game case), and second, that game AIs are designed to be fun, not smart.

    I have no idea why you would associate either of those two things with a post about an undesirable crashing bug in the runtime leading to a subtle form of the brittle base class problem. I think we all agree that all the behaviours discussed in my post about base classes are in one way or another undesirable. And there was never any accident involved in my earlier post. And my earlier post was about the brittle base class problem, not how to design an efficient AI to make a fun first-person shooter.

    Fourth, I am not that subtle. Were I attempting to make the paragraph you refer to into a justification, I would have begun the paragraph with "This provides yet another justification for the behaviour I mentioned in my earlier post..." What precisely made you think that I was implicitly referring back to that post? I desire to be a good writer, and that means communicating my intent clearly. Apparently I have failed to do so here, so I'd be interested to know what I did wrong.

    -- Eric

     

  • > Sadly, bugs in compilers seldom turn out to actually be desirable features, though it does occasionally happen.

    Do you have an interesting example?

    Sure, here's one: http://blogs.msdn.com/ericlippert/archive/2006/05/24/type-inference-woes-part-one.aspx -- here we accidentally implemented a subtly different rule than the C# specification requires, and I believe that the actual implementation behaviour is better than the specified behaviour. This has happened numerous times in the compiler. Another example would be a bug I made in the method type inference engine; the spec says that in a particular scenario we are to consider the built-in conversions between all members of a set of types; the behaviour I implemented by accident was to consider all built-in and user-defined conversions. Upon reflection, we decided that the actual implemented behaviour was better than what we'd originally specified. -- Eric

  • Stefan,

    Please, give Eric a break. He did not mention anything about the last post and you don't have to draw un-necessary conclusions.

  • Oh, come on, of course they are! (I'm talking about bugs in compilers)

    It is always a lot of joy to see that it is not you who is stupid; it is a bug in the compiler. Besides, some bugs lead to enjoyable communications with your other software development sites.

    A good example is the MIDL compiler bug with upper/lowercase. The fact that neither you nor your remote colleagues broke the interface between the modules, but the MIDL compiler did, promotes international peace and friendship :)

  • As a class exercise (a long, long time ago) I wrote a Tic-Tac-Toe game in Java.  My kids were relatively young at the time so I let them have a go at it.  They got pretty bored with it, though, when they realized that the best they could do was tie.  A classic need for "artificial stupidity" if ever there was one.    They were pretty impressed that Dad could write a game that couldn't be beat; I confess that I didn't go overboard in explaining how simple it was.

  • Funny story about the FPS AI. I like it, and I agree as a player it would be entertaining to see the AI characters all crouching as if that would save them from my deadly grenade!  :)

    It's funny though…detecting the repeated bad behavior is not how I'd have addressed the issue.  But it seems like a lot of programming goes that way: "oh, my code is doing the wrong thing; instead of making it do the right thing, I will try to detect it doing the wrong thing, and do something else in an attempt to correct for the wrong behavior".

    I would have just had the code select a leader (perhaps the AI character nearest the exit, for example), and then rather than having the other AI characters all try to head for the same exit, have them follow the leader (and no, not all follow the same leader…a similar selection process for picking the leader would also order the followers, so each followed the one that was next-nearest the exit).

    I'm not sure my solution would have made for as entertaining a game, but it still seems like the technically superior approach to me.  :)
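    For what it's worth, the leader-selection idea pete.d describes can be sketched in a few lines (hypothetical names; just an illustration of the ordering, not real engine code): sort the fleeing characters by distance to the exit, send the nearest one to the exit, and have each of the others follow the character that is next-nearest.

```python
def assign_follow_targets(characters, exit_pos):
    """Order fleeing characters by distance to the exit; the nearest heads
    for the exit, and every other character follows the next-nearest one.
    characters: list of (name, (x, y)) tuples. Returns {follower: target},
    where the target is either another character's name or "exit"."""
    def dist(pos):
        return ((pos[0] - exit_pos[0]) ** 2 + (pos[1] - exit_pos[1]) ** 2) ** 0.5

    ordered = sorted(characters, key=lambda c: dist(c[1]))
    targets = {ordered[0][0]: "exit"}  # the leader goes straight for the exit
    for (name, _), (leader, _) in zip(ordered[1:], ordered):
        targets[name] = leader  # everyone else follows the one just ahead
    return targets
```

    This produces an orderly single-file evacuation, which is arguably the "technically superior" behaviour, and arguably also exactly the kind of too-smart coordination that Lars' paper warns makes the AI look less human.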

  • Bugs are like mutations in the genetic code of a cell, most of them are bad, some few turn out to be good. The organism tries to eliminate the bad ones and keep the good ones through the process of natural selection.

    Moral of the story: the more bugs you have, the higher the chance some of them will be beneficial ;).

  • Eric, you've officially been Chenned.  You'll need to start your own Nitpicker's Corner now!

    @pete.d - If you read Lars' write-up, you'll see that it's all about how the "technically superior" approach is often the antithesis of FUN.  Players don't really want the AI to *be smart*, they just want it to *not look too dumb*.

  • I introduced a somewhat similar bug in the first release of my iPhone game (http://www.vconqr.com). The game is a version of Risk and I was pulling a final all-nighter (I should have known that could not end well) trying to make the (rather hastily constructed) A.I. stronger by getting it to favour defending territories that are bordered by their own territories. In my testing (with 1 or 2 A.I. players) it all worked fine, but I failed to watch too closely when I scaled up to 4+ A.I. players. What happened was that the A.I. players would fight a bit to get connected territories, then just sit on them, because the weightings always indicated that that was a better bet than attacking!

    Of course I got slated in the early reviews because of it (and despite having a fix within two days I had to wait two weeks for it to go live!) - but one amusing aspect was that some myths started to spring up - such as "they attack if you choose blue!"

    My "fix" was actually to slot in an entirely rewritten A.I. engine - but I did make sure that I added a sufficient level of randomness in that it shouldn't fall into stalemate situations so easily again!

    You might be able to avoid "static" situations and provide a more fun experience by writing three different AIs, rather than have the same AI for each computer player. I used to play a lot of Risk as a child and each of my friends had a different strategy, which made the game very amusing. The Drew strategy was yours: control a large connected continent for as long as possible, engaging only in minor skirmishes until you have enough strength to take over another continent in one sweep. Repeat until you win. The John strategy was basically random; attack all over the place and overextend yourself. The Eric strategy was to get bogged down in skirmishes with Drew fighting over Europe until finally making a last stand with two hundred armies on Iceland and Great Britain. And so on. Remember, your aim is not to find the Nash Equilibrium strategy for Risk, but rather to produce a fun experience.

    I think the ideal would be to find three strategies that have Rock Paper Scissors properties -- each one is vulnerable to another. You'll notice that games like Age Of Empires use this game design: cavalry beats archers, archers beat footmen, footmen beat cavalry. -- Eric
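    That counter structure is small enough to sketch directly (a toy illustration with made-up names, not actual game data): a cyclic "beats" table, plus a lookup for which unit counters a given enemy.

```python
# Rock Paper Scissors style counter cycle, as in the Age Of Empires
# example above. (Toy illustration; unit names are just placeholders.)
COUNTERS = {"cavalry": "archers", "archers": "footmen", "footmen": "cavalry"}

def best_response(enemy_unit):
    """Return the unit type that beats the given enemy unit type."""
    for unit, beats in COUNTERS.items():
        if beats == enemy_unit:
            return unit
    raise ValueError("unknown unit: %s" % enemy_unit)
```

    Because the cycle has no dominant strategy, an A.I. that picks any one of the three can always be countered, which keeps the human player engaged rather than stalemated.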

  • This reminds me of the infamous 256th "split screen" level of Pac-Man (aka the "Final Level"). Although in that situation, it wasn't an AI issue, it was a bug in the fruit-drawing routine which would cause the right half of the screen to be filled with garbage symbols.

    A really old but goofy bug was in an Apple II program called "SAM" (Software Automated Mouth) which would take strings and attempt to pronounce them using speech synthesizer routines. Every so often something would go wrong and it would start out saying your string, followed by "Syntax Error", and then add a long string of spoken garbage text and symbols. It would progressively get worse, eventually getting stuck in a loop, trying to read out loud large portions of memory.

  • @Eric Yes, that's exactly what my rewritten A.I. does. In fact it has more than three strategies available and can mutate between them at different stages of the game. I wouldn't say they are Rock Paper Scissors complementary, but not far off.

    Whenever I make an A.I. change now I run my soak test mode at full speed overnight. Early on I did get cases where someone would get stuck in Australasia and the game would never end.

  • @Phil Nash said: one amusing aspect was that some myths started to spring up - such as "they attack if you choose blue!"

    You've reminded me of a story about some experiments done with birds (I can't remember where I read about this). They'd be put in a box with a button, and would get a peanut whenever they stamped on the button.

    So the test would get more and more complex (double-click the button, wait two seconds, press it again) and the bird would always figure out the pattern.

    So then they made it just spit out a peanut at random intervals. Poor bird... when they came back to check on him, he was standing in the corner on one leg, or turning on the spot while flapping one wing, having become certain that this is what makes the peanut appear.

    (It would probably be controversial to draw a parallel with the emergence of human religious rituals at this point, so I won't...)

  • "...seldom turn out to actually be desirable features, though it does occasionally happen."

    Sounds like there's a story to be told there.

  • I dabble on the periphery of SecondLife, a free multiuser platform that allows regular users to run their own (sandboxed) code in a 3d multiuser environment. My involvement is mostly in writing documentation for the scripting language (LSL)... but from the community side. LL, the producers of SL, have a bad record of writing the documentation and have never written a spec for the language, let alone the library of functions. This has resulted in a substantial amount of functionality becoming lava-flowed... and in a few cases bugs became features, or better still accidental features. As you might imagine, it makes documenting the platform and language difficult. I'm always asking the question: Is this a bug or a feature? It's even worse for the users.

    Without transparency, users may come to depend so heavily upon bugs that the bug becomes a feature: a misfeature. This is something that MSDN could work on.

  • @Daniel Earwicker - Good story.  It can be amusing when people or animals are convinced of patterns that do not exist.  While reading your post, the first thing that came to my mind was climate scientists :)

    (I then thought of many of the other "origin" scientific theories, but those are probably too controversial to bring up.)
