We recently had a good discussion internally about the nature of ASSERTs in code. I, being the rebel that I am, took the time-honored stance of: if you've hit it, triaged it, and changed the code to handle that condition gracefully (at least as gracefully as possible), it's safe to remove that ASSERT. The other camp had a few different views, but the basis of their retort was, "just because you hit it doesn't mean you've covered all scenarios in which that particular condition can be triggered, and removing it lessens the ability to catch those scenarios." That's a very valid argument as well, but my problem with that stance starts right at the entry point. The scope of an ASSERT should be as narrow as possible; ASSERTs should not be there to validate that each caller avoids triggering that particular condition. If the changes made to handle the condition gracefully cause a calling function to fail, or misbehave in some other fashion, it's up to the owner of that caller to fix their code. If that caller happens to be another block of your own code, c'est la vie.
I’ve been there, as I’m sure all of you have as well. But my point is that the removal of an ASSERT should only happen if that condition is now handled AND that condition has been invoked, triaged, and understood. And hopefully all of the aforementioned comes from a well-defined test case triggering the ASSERT. Here again, the counter-argument provided: “the retention of the ASSERT provides another layer of protection against regressions in the callers, as the immediacy of the regression is felt when the ASSERT is triggered.” I would simply contest that by saying the test is just as immediate a regression trap, and it will also catch the problem under both FRE and CHK builds. Sure, there is a chance that the nature of the test case will change to where it ceases to act as a valid regression test, but the onus should be on the owner of the tests to ensure that any changes made do not remove or redefine those test cases.
Both sides of the debate sound reasonable, whatever your preference for using ASSERTs. And I’d agree with you 50% of the time on either side. :)
Now, there are two extra data points that may or may not sway you to either side of this discussion, and these lend themselves to reaffirming that there is an exception to break every rule:
Why am I bringing this all up? Well, primarily, I haven’t blogged in a while (*g*). But it was a good go-around amongst us, and the discussion also opened up some more possibilities for test tools and test cases. So even though the two sides of this debate agreed to disagree, the end result is that we’ve identified more areas to improve existing tests and toolsets, allowing developers the means to choose what they feel is best for the product without blocking test automation in the process.
See, there is no such thing as a “failed experiment” in science. ;)
Now Playing – Jonathan Coulton – Skullcrusher Mountain