Eric writes:

How can I help my devs help me determine what is ready to test so I can help them?

I’m a tester on an agile dev team.  Every three weeks I get a new build which includes stuff that is supposed to work and stuff that is not supposed to work (e.g., it is only partially complete or it depends on other non-complete stuff).

<snarky>An agile team that hands a build off to Test only once every three weeks?</snarky>

With each build, I struggle to test only those features that are supposed to work.  There is little value in logging endless numbers of bugs against stuff that isn't code complete.  However, I'm having a tough time translating what the devs believe to be complete into what I can actually test.

I see lots of value in looking at code that isn't officially done. More on that in a moment...

We use VSTS to track dev tasks for various features.  For example, to complete FeatureA the devs may need to complete two tasks: 1) a web service layer task, and 2) a UI layer task.  Originally, I used the heuristic that tasks marked "Complete" and promoted to a build are testable in that build.  Apparently, this heuristic is wrong, because although the devs may complete all work for a given feature or task, it may throw obnoxious errors when tested because it is dependent on some other incomplete task.

Is there any tester anywhere who has not experienced this at some point? I find that even teams who try to follow that heuristic often cannot!

My previous testing experience has always been with a waterfallish approach.  Those teams normally handed me a build and a corresponding spec.  So my gut tells me the devs or PMs or somebody should be responsible for telling me what stuff works, and if I find that stuff does not work, I can log a bug!  But my brain tells me I can be more valuable to my team if I take some responsibility and somehow determine what is actually testable on my own.  Any advice?

Eric, are you saying that the specifications you received in your previous experience actually had anything to do with the product you were given? I didn't think so. <g/>

I have not yet found a tester whose product and spec exactly matched each other. Ever. Anywhere. Often there is some correspondence between the two, but I have never seen them match exactly. Thus, Eric, the world in which you now find yourself is merely some degrees of difference from the world in which you used to live. You always have a spec. How did you handle discrepancies between the spec and the product then? Is there any reason why you cannot do the same now?  That's adding value!

Another way you can provide value to your team is to help your developers learn how to test. By, for example, introducing them to my Testing For Developers checklist. And by pair testing with them before they check in. And by reviewing their code and having them review yours. And by asking for early and frequent buddy builds so that you can give them early and frequent feedback regarding issues that you find. And by building mutual respect.
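If "learn how to test" sounds abstract, here is the flavor of pre-check-in smoke test a developer might run before calling a task "Complete". The function under test (`parse_quantity`) is entirely hypothetical, standing in for whatever the task actually built; the point is the shape of the habit, not this particular code.

```python
# Hypothetical pre-check-in smoke test a developer could run before
# declaring a task "Complete". parse_quantity is a stand-in function.
import unittest

def parse_quantity(text):
    """Parse a user-entered quantity; reject blanks and negatives."""
    value = int(text.strip())       # blank or non-numeric input raises ValueError
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class SmokeTest(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_quantity(" 3 "), 3)

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")

    def test_rejects_blank(self):
        with self.assertRaises(ValueError):
            parse_quantity("")

if __name__ == "__main__":
    unittest.main()
```

Even three tiny checks like these, run before check-in, catch the "obnoxious errors" class of bug before it ever reaches the tester - and writing them together, pairing, is exactly the kind of teaching the paragraph above is about.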

You can add value to your team by testing the user stories, and the documentation, and every discussion anyone has about any portion of your product.

You can have a discussion with your team regarding what "testable" means. Have y'all defined the bar code must meet before a developer can call it "complete"? Have you helped your team understand the pain you experience when they hand you "complete" code that they seem never to have run? Have you asked them how they think you can better help them? Have you discussed with them the ways in which knowing which features will come online when enables you to better plan your time and thus provide better service to your team?

Read up on automated testing, exploratory testing, Rapid Software Testing, and every other testing technique and technology you can find. No one technique is ever always the correct answer (Tester Quiz: name at least three scenarios in which this statement is incorrect); deciding which to use when is a way you can uniquely add value.

Your gut is correct: if someone else tells you what stuff is supposed to work - which is to say, when you have a guaranteed-good oracle - identifying inconsistencies and discrepancies and issues - i.e., bugs - tends to become simpler. Your brain is also correct: whatever you can do to test your product, and to do so as early as possible, adds value. Agitating for the former can be useful insofar as it helps your team produce higher quality software (whatever "quality" means for y'all). Ditto for working through the latter.

Just about anything you do can add value to your team. Try something. If it works, can you make it better? If it doesn't work, might it if you change something? What else could you try? Lather, rinse, and repeat your way to your optimum value add.