Software Engineering, Project Management, and Effectiveness
When I need to quickly analyze a product and give actionable feedback, I use scenario evaluations. A scenario evaluation is simply an organized set of scenarios and criteria to test and evaluate against. The approach is generic, so you can tailor it to your situation. Here's an example of the frame I used to evaluate the usage of Code Analysis (FxCop) in some security usage scenarios:
Scenario Evaluation Matrix
[Table: scenarios organized by development life cycle and security category, e.g. Input and Data Validation]
In this case, I organized the scenarios by life cycle, application type, and security category. This makes a pretty simple table. Explicitly listing the scenarios helps you see where the solution fits and where it does not, and it helps identify opportunities. A key to effective scenario evaluation is finding the right matrix of scenarios. For this exercise, some of the scenarios focus on the user experience of using the tool, while others focus on how well the tool addresses the recommendations. What's not shown here is that I also list personas and priorities next to each scenario, which is extremely helpful for scoping.
Things got interesting when I applied criteria to the scenarios above. For example:
I then walked the scenarios, testing and evaluating against the criteria. This produced a nicely organized set of actionable feedback on how well the solution is working (or not). I think part of today's product development challenge isn't a lack of feedback, but a lack of actionable feedback that's organized and prioritized.
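To make the walk concrete, here is a minimal sketch in Python of how such a matrix can be represented and traversed. The scenario names, criteria, and the toy "judge" function are illustrative assumptions, not taken from the original evaluation:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    category: str   # e.g. a security category like "Input and Data Validation"
    persona: str    # who performs the scenario
    priority: str   # helps with scoping

# Criteria are the questions you ask of every scenario (illustrative names).
CRITERIA = ["Recommendation coverage", "Ease of use", "Actionable output"]

@dataclass
class Evaluation:
    scenario: Scenario
    results: dict = field(default_factory=dict)  # criterion -> verdict/notes

def walk(scenarios, criteria, judge):
    """Walk each scenario against each criterion, collecting feedback."""
    evaluations = []
    for s in scenarios:
        ev = Evaluation(s)
        for c in criteria:
            ev.results[c] = judge(s, c)  # record a verdict or a note
        evaluations.append(ev)
    return evaluations

# Hypothetical run: one scenario, and a judge that flags one known gap.
scenarios = [Scenario("Validate untrusted input", "Input and Data Validation",
                      "Developer", "High")]
report = walk(scenarios, CRITERIA,
              judge=lambda s, c: "needs review" if c == "Ease of use" else "pass")
for ev in report:
    for criterion, verdict in ev.results.items():
        print(f"{ev.scenario.name} / {criterion}: {verdict}")
```

The point of the structure is that the output is already organized by scenario and criterion, so the feedback is prioritized and actionable rather than a flat list of complaints.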
The beauty of this approach is that you can use it to evaluate your own solutions as well as others'. If you're evaluating somebody else's solution, this helps quite a bit because you can avoid making it personal and argue from the data.
The other beauty is that you can scale this approach along your product line. Create the frames that organize the tests and "outsource" the execution of the scenario evaluations to people you trust.
I've seen variations of this approach scale down to customer applications and scale up to full-blown platform evaluations for analysts. Personally, I've used it mostly for performance and security evaluations of various technologies. It helps me quickly find holes I might otherwise miss, and it helps me communicate what I find.