[rob] Since Ron has been entering thoughts and comments on compiler testing, I’d like to add a perspective on IDE testing from VC.  I’ll leave it in notation format to encourage discussion of points for which readers would like more clarification.

 

Issues to address through IDE testing:

  • Changing specifications over time (some features are completely ambiguous until devs implement them; others have varying levels of specs prior to implementation)
  • Library of regression test cases whose authors have moved on to other teams; QA work is more cumulative than dev work in that regression testing currently takes a larger toll than code-base maintenance.
  • Dependencies (teams depend on us, we depend on them)
  • Platform coverage
  • SKU coverage (platform and SKU combinations feed the coverage matrix; see the sketch after this list)
  • Multiple paths through UI
  • CodeModel path around UI
  • Localization: data handling and UI
  • SxS (side-by-side), compatibility
  • Performance
  • XP Logo 
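
To make the platform and SKU coverage items concrete, here is a minimal sketch (in Python, purely illustrative) of how a coverage matrix can be enumerated from its dimensions.  The dimension names and values are hypothetical placeholders, not the actual VC test matrix; the point is how quickly the combinations multiply.

    # Hypothetical coverage-matrix sketch; the real dimensions and values differ.
    from itertools import product

    dimensions = {
        "os":      ["Win98", "WinNT4", "Win2000", "WinXP"],     # platform coverage
        "sku":     ["Standard", "Professional", "Enterprise"],  # SKU coverage
        "locale":  ["ENU", "JPN", "DEU"],                       # localization
        "install": ["clean", "side-by-side"],                   # SxS / compatibility
    }

    # Every combination is one cell in the matrix; automated passes aim at all of
    # them, while manual passes take a prioritized subset.
    matrix = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]
    print(len(matrix), "configurations")   # 4 * 3 * 3 * 2 = 72 already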

Process:

  • Theory
    • Estimate schedule and cost for the milestone before you begin (sounds like a line from Willy Wonka: [school teacher] “…switch our Friday schedule to Monday, which means that the test we take each Friday on what we learned during the week will now take place on Monday before we’ve learned it.”)  Update the schedule as the milestone progresses to measure against target dates.
    • Review feature specifications from PM team, providing feedback (expect updates during cycle).
    • Review implementation specification from Dev team, providing feedback (updates expected).
    • QA drafts a high-level test plan and submits it for review by the Dev and PM teams; update.
      • Reference previous product and similar feature test plans
      • Incorporate customer feedback and market expectations for setting priorities
      • Test team contracts (agreements on who covers what) attempt to prevent assumptions from leading to test holes across team boundaries
    • Author test cases according to test plan; update plan as needed.
    • Author test case automation
    • Run automation test passes on coverage matrix
    • Run manual test passes on matrix subset
    • Log bugs, analyze trends, verify fixes.
    • Analyze automation code coverage (block level) results from instrumented builds.
      • Combine results with test plan holes to target the next automation set (see the gap-analysis sketch below).
    • ‘Dogfood’ the product
      • (Semi-)stable builds are selected by individual testers to use for developing automation (some data driven, some code; see the data-driven sketch below).
      • App week:  break into teams and develop short term projects (some become team tools).
    • Themed bug hunts:  “Bug Bash”
      • Team targeting of a specific category: Logo, accessibility, localization, and so on.
      • Periodic re-education of feature owners on testing aspects
    • Iterate and ship!
  • Reality, as enforced by those pesky market needs and restrictions…
    • The level of clear specification varies greatly by feature and product cycle; this is not (historically) something QA can count on.  Working closely with the PM team lets the necessary information be pulled together into effective test plans.  Various unexpected needs (from marketing to division to group to feature) continuously challenge the proposed schedule (you know, that thing we all agreed to before the first feature specification was actually written?).  Adapt and overcome; make priority trade-offs.
    • Implementation plans tend to be the actual code from Dev.  Monitoring check-ins and dev check-in test reviews helps to prepare for proper coverage.
    • Some high-level test plans get bogged down by the lack of feature specifications, and test cases develop ad hoc during exploratory test passes.
    • Authoring automation competes with running manual test cases.  The need to report the current product status, in a sense, delays the ability to report a more complete status more often... (Catch 22, but without the airplanes).
    • Automation requires a test harness. 
      • A test harness requires development and maintenance.
      • VC has moved from an internal tool to a divisional tool
        • Still requires development/maintenance participation
        • Supporting/following the gander isn’t what the goose used to do
    • The coverage matrix bulges and balloons: OS varieties, SKU variations, side-by-side installations, run-time compatibility issues, pre-release forks, internal drops, priority customer drops, priority customer quick fixes, alphas, betas; somewhere in there we have the primary branch we plan to ship…
    • Again and again, activities we want to do (Pri 2 and 3 test cases, driving Pri 2 and 3 bug fixes, driving Pri 2 and 3 features) are restricted by schedule needs (competing in the market).
    • Dependencies get out of sync, and one team may be blocked on another.
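
For the test case automation mentioned above, here is a minimal sketch of what a data-driven test case can look like.  The case data and the execute hook are hypothetical, not the actual VC harness; the point is the separation between one coded runner and the data rows that drive it.

    # Hypothetical data-driven test cases: data rows drive a single coded runner.
    cases = [
        # (description,              input data,                 expected result)
        ("new console project",      {"wizard": "ConsoleApp"},   "builds"),
        ("project with bad setting", {"wizard": "ConsoleApp",
                                      "option": "/bogus"},       "build error"),
    ]

    def run_all(execute):
        """'execute' is supplied by the harness and drives the product for one data row."""
        failures = []
        for description, data, expected in cases:
            actual = execute(data)            # e.g. create project, build, capture result
            if actual != expected:
                failures.append((description, expected, actual))
        for description, expected, actual in failures:
            print(f"FAIL: {description}: expected {expected!r}, got {actual!r}")
        return not failures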
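
And for the coverage-analysis step (combining block-level coverage from instrumented builds with test plan holes), here is a sketch of the kind of gap analysis involved; the feature areas, numbers, and threshold are made up for illustration.

    # Hypothetical gap analysis: block coverage per feature area vs. planned test cases.
    block_coverage = {"wizards": 0.82, "codemodel": 0.35, "editor": 0.60}  # from an instrumented build
    planned_areas = {"wizards", "editor"}                                  # areas with test plan cases

    # Target the next automation set where coverage is low AND nothing is planned.
    targets = sorted(area for area, cov in block_coverage.items()
                     if cov < 0.50 and area not in planned_areas)
    print(targets)   # -> ['codemodel']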

Next installment:  Tactics...