Software Engineering, Project Management, and Effectiveness
Manage energy, not time, to get more things done ... This concept really resonates with me. I also like it because it can be counterintuitive or non-obvious.
One way to try to get more done is to jam more into your schedule. Yuck! Unfortunately, that's a fairly common practice.
I actually have lots of practices for managing time (outcome-based work breakdown structures, managing outcomes vs. activities, prioritizing outcomes based on usage and value, avoiding over-managing minutia, using outcome-based agendas for meetings, distinguishing getting results vs. building connections in meetings, using time-boxes to deliver incremental results in projects, "zero-mail in the inbox" practice … etc.) While I'm always open to new time management practices, I think I was getting diminishing returns from yet more time management techniques.
So stepping back, here's the situation ... I was using a full arsenal of time management techniques, I was known for getting results, and yet I wanted to reach the next level. What happened next was that I noticed a common thread among a few very different trainings and books on leadership and results. Energy was a recurring theme.
Of course, then it made total sense (the beauty of 20/20 hindsight!). We've all had that great hour of brilliance or that unproductive work week. I did a reality check against several past projects. It was easy for me to see the connection between energy and results, when all else was equal. The problem was, I didn't have an arsenal of practices for managing energy. It turns out, I didn't really need to. Simply knowing what drains me or what catalyzes me helped a lot.
Now that I've been aware of this underlying concept for a while, I have learned a few practices along the way. One practice I use is explicitly asking the team when and how often they want to deliver customer results (i.e., how often do they want to see the fruits of their effort?). I balance this with capability, customer demand, project constraints, and a bunch of other drivers, but the fact that I explicitly try to leverage energy and rhythm helps crank the energy up a notch (and, as a bonus, results).
When you're working on an R&D project, how do you shorten the cycles around testing your user experience models?... Be the Software
That's the advice John Socha-Leialoha, father of Norton Commander, gave me, and it worked like a champ.
We faced a lot of user experience design issues early in our R&D project. For example ...
Initially, we did a bunch of whiteboard modeling, talk-throughs, and prototyping. The problem was the prototypes weren't efficient. I had a distributed team so it was tough to paint a good picture of the prototype, even when we all agreed to the scenarios and requirements. The other problem was customer reviews were tough because it was easy to rat-hole or get distracted by partial implementations. The worst case was when we would finish a prototype and it would be a do-over.
We experimented with two techniques:
This radically improved customer verification of the user experience and kept our dev team building out the right experience.
Mocking up in slides is nothing new. The trick was making it efficient and effective:
For example, here's the slide list for one deck:
What originally took a week to prototype, we could mock up in an hour if not minutes. Do-overs were no longer a problem. In fact, mocking up alternate solutions was a breeze. The beauty was we could keep our release rhythm of every two weeks, while we explored solution paths in the background, with less disruption to the dev team.
The other beauty was we could use the same deck to walk through with customers and the dev team. The customers would bang on the user experience. The developers would bang on the technical feasibility. For example, show a catalog to customers and they evaluated the best way to browse and filter. Show the same screen to the devs and they would evaluate the performance of the catalog. We would also brainstorm the "what-ifs", such as how will the catalog perform when there's a billion items in it ... etc. We got better at teasing out the key risks before we hit them.
Building the software became more an exercise of instantiating the user experience versus leaving too much to be made up on the fly.
To "be the software", it's as simple as letting the user walk through the user experience of performing a task (via the slides), and, as John put it, "you be the software ... you simply state how the software would respond." If slides are too heavy, draw on paper or use a whiteboard. The outcome is the user gets a good sense of what it's like to use your solution, while you get a sense of the user's more specific needs. The interactive approach produces way more benefits than a simple spec review or half-baked prototype.
In .NET 1.1, we timed managed code by wrapping QueryPerformanceCounter and QueryPerformanceFrequency. The following How To shows how:
In .NET 2.0, you can use the Stopwatch Class. I found the following references useful:
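The Stopwatch class wraps those same high-resolution counters for you. As an illustration of the underlying pattern (not the .NET API itself), here is an analogous sketch in Python, where `time.perf_counter` plays the role of the performance counter:

```python
import time

def time_it(fn, *args):
    """Time a callable using the high-resolution performance counter.

    Analogous to starting/stopping a Stopwatch around the call.
    """
    start = time.perf_counter()  # monotonic, high-resolution clock
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Example: time a simple computation
result, elapsed = time_it(sum, range(1_000_000))
print(f"sum took {elapsed:.6f} seconds")
```

The key point in either environment is to use the monotonic high-resolution counter rather than wall-clock time, which can jump around.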
This is a follow up to my post, Manage Energy, Not Time. A few folks have asked me how I figure out energy drains and catalysts.
For me, clarity came when I broke it down into:
On the task side ... This hit home for me when one of the instructors gave some example scenarios:
He asked, "how do you feel?" He said some people will have "energy" for some of these. Others won't. Some people will be excited by the chance to drill into data and cells. He said others will be excited by painting the broader strokes. He then gave more examples, such as, the irony of how you might have the energy to go skiing, but not to go to the movies.
The point he was making was that energy was relative and that you should be aware of what gives you energy or takes it away.
On the people side ... I pay more attention to people now in terms of catalysts and drains. With some individuals, I'm impressed at their ability to sap energy. (I can almost hear Gauntlet in the background ... "Your life force is running out ..."). Other individuals are clearly catalysts, giving me energy to move mountains.
It's interesting for me now to think of both people and tasks in terms of catalysts and drains. Now I consciously spend more time with catalysts, and less time with drains, and I enjoy the results.
I found a way to explore more and churn less on incubation (i.e. R&D) projects. It helps to think of your project experiments and key risks in terms of these three categories, in this order:
1. User experience
2. Technical feasibility
3. Business value
Sequence matters. If you don't get the user experience right first, who cares if it's technically feasible? Once you get the user experience right, meaning customers get value, the business value will follow.
Here's how I learned this the hard way ...
My project was time-boxed and budget constrained. To keep our stakeholders happy, my strategy was to deliver incremental value. This translated to short ship cycles to test with customers. We used a rhythm of shipping every two weeks. This let us track whether we were trending towards or away from the right solutions.
While this was a relatively short feedback cycle, it wasn't actually efficient. Most of our prototyping was around exploring user experiences, although we didn't know this at the time. We were focused on shipping prioritized customer scenarios and features. Delivering these scenarios and features mixed exploring user experience, technical feasibility, and business value. It's not a bad mix -- it just wasn't the most efficient.
Necessity is the mother of invention. When we weren't "learning" at the pace we expected, we had to find a better way. We moved to rapid prototyping of the user experience with slideware and walkthroughs. This meant faster feedback and fewer do-overs than our software prototypes. It also meant that, in our software prototypes, we would consciously and explicitly focus on technical feasibility.
User experience was the real challenge and held the most value. Spending a week to build a software prototype to test technical feasibility and identify engineering risks makes sense. Spending a week to build a software prototype to test user experience sucks. In other words, what previously took a week or more to build out and test (the user experience), we could now do in a few hours.
In hindsight, it's easy to see that incubation was about user experience, tech feasibility and business value, even though I didn't realize it at the time. It's also easy to see now that the dominant challenge was usually user experience.
The moral of the story isn't that you can use slideware for all your user experience testing. Instead, the lesson I would pass along is be aware of whether you are really testing user experience, tech feasibility or business value. By knowing which category you're exploring, you can then pick the right approach.
One of the most effective approaches I've found for chunking up a project for incremental value is using a Scenario and Feature Matrix.
A Scenario and Feature Matrix organizes scenarios and features into a simple view. The scenarios are your rows. The features are your columns. You list your scenarios in order of "MUST", "SHOULD", and "COULD" (or Pri 1, 2, and 3) ... through vNext. You list your features by cross-cutting and vertical. By cross-cutting, I mean that the feature applies to multiple scenarios. By vertical, I mean that the feature applies to just one scenario. It helps to think of scenarios in this case as goals customers achieve. It helps to think of the features as chunks of value that support the scenario. The features are a bridge between the customer's scenario and the developer's work. You can make this frame on a whiteboard before baking it into slides or docs.
You now have a simple frame where you can see your baseline release, your "cuttable" scenarios, and your dependencies. You can quickly analyze some basic questions:
Because it's visual, it's an easy tool to get the team on board and communicate in terms of value, before getting mired in detail. When you get mired in detail, as you figure out features and dependencies, you can ground yourself back in the scenarios.
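To make the shape concrete, here is a minimal sketch of the matrix as a data structure. The scenario and feature names are hypothetical, purely to illustrate rows (scenarios), columns (features), and the cross-cutting vs. vertical distinction:

```python
# Hypothetical scenarios (rows), listed in MUST/SHOULD/COULD priority order.
scenarios = ["Browse catalog", "Place order", "Track shipment"]

# Hypothetical features (columns), mapped to the scenarios they support.
features = {
    "Authentication": ["Browse catalog", "Place order", "Track shipment"],  # cross-cutting
    "Shopping cart":  ["Place order"],                                      # vertical
    "Status e-mail":  ["Track shipment"],                                   # vertical
}

def matrix(scenarios, features):
    """Return {scenario: [features that support it]} -- the rows of the matrix."""
    return {s: [f for f, supported in features.items() if s in supported]
            for s in scenarios}

for scenario, feats in matrix(scenarios, features).items():
    print(f"{scenario}: {', '.join(feats)}")
```

Reading the rows tells you what you must build for each scenario; reading the columns tells you which features you can cut without orphaning a scenario.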
From what I've seen over time, most projects can't cut scope without messing up quality, because they weren't designed to. Cutting the leg off your table doesn't help save time or quality; it just makes a bad table. If you didn't have enough time or resources to make four legs, should you have started? Should you build the four legs first and get the table standing, before you add that extra widget?
A Scenario and Feature Matrix makes analyzing and communicating these problems simpler because you create a visual strawman. Anytime you can quickly bring more eyes to the table, it helps. I also like to think of this as "Axiomatic" Project Management at heart, because I used simplified axiomatic design principles for the approach. If you're starting a new project, challenge yourself by asking if you can incrementally deliver value and if you can cut chunks of work without ruining your deliverable (or your team), and see if a Scenario and Feature Matrix doesn't help.
If you use a principle-based approach, you can get rid of whole classes of security issues. SQL injection, cross-site scripting, and other flavors of input injection attacks are possible because of some bad practices. Here are a few of the bad practices:
The key to input and data validation is to use a principle-based approach. Here are some of the core principles and practices:
If you use a principle-based approach, you don't have to chase every new threat or attack or its variation. Here are a few resources to get you started:
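As one illustration of the principle (constrain input first, then keep data out of the code path), a whitelist check plus a parameterized query defeats SQL injection without chasing individual attack strings. This is a hedged sketch using Python's sqlite3 module; the table and names are hypothetical:

```python
import re
import sqlite3

def find_user(conn, username):
    """Validate input against a whitelist, then use a parameterized query."""
    # Principle: constrain input -- accept only known-good characters.
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", username):
        raise ValueError("invalid username")
    # Principle: never build SQL by string concatenation --
    # the ? placeholder keeps the data out of the SQL grammar entirely.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# Hypothetical setup, just to exercise the function.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))
# find_user(conn, "x'; DROP TABLE users;--")  # rejected up front with ValueError
```

Notice that neither defense knows anything about specific attack payloads; that's the point of working from principles instead of signatures.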
In general, "scenario" usually means a possible sequence of events.
In the software industry, "scenario" usually means one of the following:
1. Same as a use case
2. Path through a use case
3. Instance of a use case
#3 is generally preferred because it provides a testable instance with specific results.
Around Microsoft, we use "scenarios" quite a bit ...
1. At customer events, it's common to ask, "What's your scenario?" This is another way of asking, "What's your context?" and "What are you trying to accomplish?"
2. In specs, scenarios up front set the context for the feature descriptions.
3. Marketing teams often use scenarios to encapsulate and communicate key customer pains/problems.
4. Testing teams often use scenarios for test cases.
At the end of the day, what I think is important about scenarios is they help keep things grounded, tangible and human. I like them because they can stretch to fit, from fine-grained activities to large-scale, end-to-end outcomes.
When I need to quickly analyze a product and give actionable feedback, I use scenario evaluations. Scenario evaluations are basically an organized set of scenarios and criteria I test and evaluate against. It's a pretty generic approach, so you can tailor it for your situation. Here's an example of the frame I used to evaluate the usage of Code Analysis (FxCop) in some security usage scenarios:
Scenario Evaluation Matrix (excerpt): scenarios grouped by development life cycle stage and security category, such as Input and Data Validation.
In this case, I organized the scenarios by life cycle, app type, and security categories. This makes a pretty simple table. Explicitly listing the scenarios helps you see where the solution fits in and where it does not, as well as identify opportunities. A key aspect of effective scenario evaluation is finding the right matrix of scenarios. For this exercise, some of the scenarios are focused on the user experience of using the tool, while others are focused on how well the tool addresses recommendations. What's not shown here is that I also list personas and priorities next to each scenario, which are also extremely helpful for scoping.
Things got interesting when I applied criteria to the scenarios above. For example:
I then walked the scenarios, testing and evaluating against the criteria. This produced a nicely organized set of actionable feedback against how well the solution is working (or not). I think part of today's product development challenge isn't a lack of feedback, but rather a lack of actionable feedback that's organized and prioritized.
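The walk itself is mechanical once the frame exists: every scenario gets checked against every criterion, and the pass/fail grid is the actionable feedback. A minimal sketch, with hypothetical scenarios and criteria standing in for the real ones:

```python
# Hypothetical scenarios and criteria, purely to show the walk.
scenarios = ["Run tool on intranet web app", "Review input validation findings"]
criteria = {
    "Recommended": lambda s: "intranet" in s.lower(),  # stand-in check
    "Explained":   lambda s: "findings" in s.lower(),  # stand-in check
}

def evaluate(scenarios, criteria):
    """Walk each scenario against each criterion; collect pass/fail feedback."""
    feedback = []
    for s in scenarios:
        for name, check in criteria.items():
            feedback.append((s, name, "pass" if check(s) else "fail"))
    return feedback

for scenario, criterion, verdict in evaluate(scenarios, criteria):
    print(f"{scenario} / {criterion}: {verdict}")
```

The value isn't in the code; it's that the output is already organized by scenario and criterion, so the feedback arrives prioritized rather than as a pile of observations.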
The beauty of this approach is that you can use it to evaluate your own solutions as well as others'. If you're evaluating somebody else's solution, this actually helps quite a bit because you can avoid making it personal and argue from the data.
The other beauty is that you can scale this approach along your product line. Create the frames that organize the tests and "outsource" the execution of the scenario evaluations to people you trust.
I've seen variations of this approach scale down to customer applications and scale up to full-blown platform evaluations for analysts. Personally, I've used it mostly for performance and security evaluations of various technologies and it helps me quickly find holes I might otherwise miss and it helps me communicate what I find.
Alik is out in the field helping customers bake security into their product cycles. Of course, customers ask how much it costs to implement Security Engineering practices. The answer is, of course, ... it depends. The flip side is, what's the cost of NOT doing it?
I think understanding the cost of NOT doing it is important because it gets you thinking about risk and impact. This sets the stage for an informed business case for security. While your business case mileage may vary, you'll get further with it, than without it.
Scenarios and Solutions are basically whiteboard solutions that quickly depict key engineering decisions. You can think of them as baselines for your own design. We have a set of solutions that show the most common end-to-end ASP.NET 2.0 authentication and authorization patterns: Intranet
The advantage of starting with these is that you quickly see what combinations have worked for many others.