Software Engineering, Project Management, and Effectiveness
I've highlighted my take-aways from the "World Class Testing" section of Managing the Design Factory by Donald G. Reinertsen. It's an insightful book whether you're optimizing your product line or designing a new product. It's packed with empirical punch, counter-intuition, and practical techniques for re-shaping your engineering results.
Viewing Testing as an Asset
Ways to Optimize Testing
What I like about this particular book is that it doesn't prescribe a one-size-fits-all solution. Instead, you get a pick list of options and strategies, depending on what you're trying to optimize. It's effectively a choose-your-own-adventure book for product developers.
Here's me trying to explain threat modeling (actually core modeling) to a customer …
The core theme of my modeling is this:
This is the approach I use whether it's security or performance or any other quality attribute. In the case of threat modeling, vulnerabilities are the key. These go in your bug database and help scope your testing.
This is such a fundamental question. It has an enormous impact on your product design and how you structure your product life cycle.
For example, are you optimizing time? ... money? ... impact? ... innovation? ... resource utilization? If you don't answer this question first, it's very easy to pick the wrong hammer for your screws.
To figure out what to optimize, I identify my objectives, identify my constraints, and look for possible high-ROI paths. I always want more out of what I do. The trick is to know when doing more gets you less. Your objectives keep you grounded along the way.
What I like about this question is it universally applies to any activity you do, including how you design your day. Are you optimizing around results, or connecting with people? Are you optimizing for enjoyment along the way or for reward in the end?
We've used TFS for more than a year, so it's interesting to see what we're doing in practice. If you looked at our source control, you'd see something like the following:
+ Project X version 1
+ Project X version 2
- Project Y
    |----Branches
    |----Releases
    |----Spikes
    |----TeamStuff
    |----Trunk
         |----Build
         |----Docs
         |----Keys
         |----Source
         |----Tools
While there's some variations across projects, the main practices are:
I'll be taking a look at how customers and different groups in Microsoft have been using TFS for source control. If you have some practices you'd like to share, I'd like to hear them. Comment here or send mail to VSGuide@microsoft.com.
I'm paving paths through the VSTS body of knowledge for my team. I like tags and tag clouds. They quickly tell me what people are thinking and talking about. Here's a list of tags I put together for my team:
Here's a quick set of steps for using Live.com (http://www.Live.com) as your RSS reader. What I like about it is that I can log in to it from anywhere. What I like most is that I can create multiple pages to organize my feeds. This lets me focus my information.
Here are the steps for creating pages and adding feeds to them (you need to log in to Live.com for these options):
Adding a New Page
Tip - If I need to search for a feed, I use a separate browser, do my search, and then paste the path next to the Subscribe button. I don't like the Search for feeds option because I lose my context.
I like Live.com for my Web-based RSS reading experience. I use JetBrains Omea Pro for my desktop experience, where I do my heavy processing. I think of this like my Outlook Web Access and Outlook desktop experiences. My Web experience is optimized for reach; my desktop experience is optimized for rich.
Below is a walkthrough of the steps I took to install VSTF (Visual Studio Team Foundation) on a VPC. I found a path that worked for me. I'm providing it as a reference both for myself and for others that might need it. I ran into issues during my initial installation. These turned out to be problems with my base installation of Windows on my VPC, not VSTF.
Here are the steps I took:
I don't think all the reboots I did were actually necessary. Personally, I've found that interspersing reboots during product installations helps me avoid common installation hiccups.
If you want a glimpse into our workspace and how we work at patterns & practices, watch the patterns & practices - A Team of Thieves video on Channel9. Rory interviews Ed and Peter from our team.
During the interview they bring up our previous team name, PAG. PAG, at one point stood for Prescriptive Architecture Guidance, and then became Platform Architecture Guidance. What I liked about our PAG days was that we got used as a noun and a verb:
I thought it might be helpful to share how I think about the problem of "policy verification through the life cycle." I use policy as a mapping for "rules", "building codes" or requirements.
For simplicity, I think about requirements as user, system, or business requirements. I also break them down into business requirements, operational constraints, technological requirements, and organizational and industry compliance. From a life cycle perspective, I break the rules up into design, implementation, and deployment. This helps me very quickly parse and prioritize the space. It also helps me use the right tool for the job and right-size my efforts.
How does this help? It helps when you evaluate your approaches.
Brian Foote gave an insightful talk about dynamic languages to our patterns & practices team. I walked away with a few insights from the delivery and from the content.
On the delivery side of things, I liked the way he used short stories, metaphors, and challenging questions to make his points. The beauty of his approach was that I could either take it at face value or re-examine my assumptions and paradigms. I think I ended up experiencing recursive insight.
On the content side, I liked Brian's points:
A highlight for me was when Brian asked the question, what would or should have caught the error? (The example he showed was a significant blunder measured in millions.) There are a lot of factors across the people, process, and technology spectrum. The problem is: are specs executable? ... are processes executable? ... are engineers executable? ... etc.
After the talk, I had to ask Brian what led him to Big Ball of Mud. I wasn't familiar with it, but more importantly I wanted Brian's take. My guess was it was a combination of a big ball of spaghetti that was as clear as mud. He said ball of mud was actually a fairly common expression at the time, and it hit home.
Following one ponderable thought after another, we landed on the topic of copy and paste reuse in terms of code. We all know the downsides of copy and paste, but Brian mentioned the upside that copy and paste localizes changes (you could make a change here, without breaking code there). Dragos and I also brought up the issue of over-engineering reuse and how sometimes reverse engineering is more expensive than a fresh start. In other words, sometimes so much energy and effort has been put into a great big reusable gob of goo that when you just need to do one thing, that one thing is tough to do. I did point out that the copy-and-paste-ability factor seemed to go up if you found the recurring domain problems inside of recurring application features inside of recurring application types.
Things got really interesting when we explored how languages could go from generalized to optimized as you walk the stack from lower-level framework code up to more domain-specific applications. We didn't talk specifically about domain-specific languages, but we did play out the concept, which turned into metaphors of one-size-fits-all mammals versus trucks with hoses that put out fires and trucks that dump dirt (i.e., use the right, optimized tool for the job).
We've now published 24 ASP.NET 2.0 Security FAQs in Guidance Explorer. You'll find them under the patterns & practices library node. We pushed the FAQs into Guidance Explorer because one of our consultants in the field, Alik, is busy building out a customer's security knowledge base using Guidance Explorer.
Don't let the FAQ name fool you. FAQ can imply high-level or introductory. These FAQs actually reflect some deeper issues. In retrospect, we should have named this class of guidance modules "Questions and Answers."
Each FAQ takes a question and answer format, where we summarize the solution and then point to more information.
Do people understand what you need from them? Do people get your point? A quick way to check is to say, "echo it back to me." Variations might include:
You might be surprised by the results. I've found this to be an effective way to narrow the communication gap for common scenarios.
Although Guidance Explorer (GE for short) was designed to showcase patterns & practices guidance, you can use it to create your own personal knowledge base. It's a simple authoring environment, much like an offline blog. The usage scenario is you create and store guidance items in "My Library".
Using GE day to day, I noticed a simple but important user experience issue. I think we should have optimized around creating free-form notes that you could later turn into more structured guidance modules. There's too much friction in going right to guidance; in practice, you start with some notes, then refine to more structure.
To optimize around creating free-form notes in GE, I think the New Item menu that you get when you right-click the MyLibrary node should have been:
1. Note - This would be a simple scratch pad so you could store unstructured notes.
2. Guidance Module - This menu option would list the current module types (i.e. Checklist Item, Code Example, … etc.)
We actually did include the "Technote" type for this scenario. A "Technote" is an unstructured note that you can tag with meta-data to help sort and organize in your library. The problem is that this is not obvious and gets lost among the other structured guidance types in the list.
The benefit of 20/20 hindsight strikes again!
On a good note, I've been getting emails from various customers who are using Guidance Explorer. They like that they get something of a structured Wiki experience, but with a lot more capability around sorting and viewing chunks of guidance. They also like that you get templates for key types (so when five folks create a guideline, all the guidelines have the same basic structure). I'll post soon about some of the key learnings that can help reshape creating, managing, and sharing your personal, team, and community knowledge in today's landscape.
I was blog-tagged by Ed, so here are 5 things you probably don't know about me ...
I'm tagging Alik, Rico, Ron, Srinath and Wojtek to post their 5 things.
How do you learn a problem space? I've had to chunk up problem spaces to give advice for the last several years, so I've refined my approach over time. In fact, when I find myself churning or don't have the best answers, I usually find that I've missed an important step.
Problem Domain Analysis
1. What are the best sources of information?
Finding the best sources of information is key to saving time. I cast a wide net, then quickly spiral down to find the critical, trusted sources of information in terms of people, blogs, sites, aliases, forums, ... etc. Sources are effectively the key nodes in my knowledge network.
2. What are the key questions for this problem space?
Identifying the questions is potentially the most crucial step. If I'm not getting the right answers, I'm not asking the right questions. Questions also focus the mind, and no problem withstands sustained thinking (thinking is simply asking and answering questions).
3. What are the key buckets or categories in this problem space?
It's not long before questions start to fall into significant buckets or categories. I think of these categories as a frame of reference for the problem space. This is how we created our "Security Frame" for simplifying how we analyzed application security.
4. What are the possible answers for the key questions?
When identifying the answers, the first step is simply identifying how it's been solved before. I always like to know if this problem is new and if not, what are the ways it's been solved (the patterns). If I think I have a novel problem, I usually haven't looked hard enough. I ask myself who else would have this problem, and I don't limit myself to the software industry. For example, I've found great insights for project management and for storyboarding software solutions by borrowing from the movie industry.
One pitfall to avoid is that just because a solution worked in one case doesn't mean it's right for you. The biggest differences are usually context. I try to find the "why" and "when" behind the solution, so that I can understand what's relevant for me, as well as tailor it as necessary. When I'm given blanket advice, I'm particularly curious what's beneath the blanket.
5. What are the empirical results to draw from?
Nothing beats empirical results. Specifically I mean reference examples. Reference examples are short-cuts for success. Success leaves clues. I try to find the case studies and the people behind them. This way I can model from their success and learn from their failure (failure is just another lesson in how not to do something).
6. Who can be my sounding board?
One assumption I make when solving a problem is that there's always somebody better than me for that problem. So I ask who that is, and I seek them out. It's a chance to learn from the best, and it forces me to grow my network. This is also how I build up a sounding board of experts. A sounding board is simply a set of people I trust to have a useful perspective on a problem, even if it's nothing more than improving my own questions.
7. What are the best answers for the key questions?
The answers that I value the most are the principles. These are my gems. A principle is simply a fundamental law. I'd rather know a single principle than a bunch of rules. By knowing a single principle, I can solve many variations of a problem.
Now, while I've left some details out, I've hopefully highlighted enough for you here that you find something you can use in your own problem domain analysis.
It's not 9 new guidelines, it's actually 70. It looks like my Guidance Explorer wasn't done synching when I wrote my previous post.
Prashant sent me a quick note. Here is the complete status for Dec and Jan
You should see 9 new performance guidelines in Guidance Explorer (GE). Well, not entirely new, but refactored and cleaned up. Prashant Bansode (from our original Improving .NET Performance guide team) was busy while I was out of office for the holidays. What you'll notice is that many of the guidelines are missing problem and solution examples. Job #1 is first putting the guidance into this form. Our new schema for guidelines is more elaborate than the original guidance, which means we'll have information holes. Fleshing out the missing information would be job next.
BTW - if you're using Guidance Explorer and have an interesting story on how you've used it, please share it with us at firstname.lastname@example.org.
Have you noticed the transition from guides to guidance modules over time? My first few guidance projects were actually guides:
While the chapters in the guides were modular, the overall outcome was an end-to-end guide. On the upside, there was a lot of cohesion among the chapters. On the downside, the guides were overwhelming for many customers who just wanted bits and pieces of the information. That's the challenge of making a full guide available in html, pdf and print.
Examples of Guidance Modules
.NET 2.0 Security Guidance was the first project to use "Guidance Modules". Guidance Modules are effectively modular types of guidance:
Benefits of Guidance Modules
The benefits of modules include:
The Chunking Has Just Begun
While the initial chunking of guidance has certainly helped, there's more to go. Customers have asked for even smaller chunks. For example, rather than have all the guidelines in a single module, chunk each guideline into its own page.
Dealing with Chunks
Chunking up the guidance creates new challenges. How do you find, gather, and organize the right set of modules for your scenario? This is a good problem to have. Assuming there are guidance modules that have a great community around them and are prescriptive in nature (they prescribe vs. describe solutions), the next step is to improve how you can leverage the modular information. That's where Guidance Explorer comes in. It was an experiment to explore new ways of creating, finding, and using guidance modules. We learned a great deal about user experience, which I'll share in a future post.
John Socha-Leialoha wrote up a nice bit of insight on how Users are Idiomatic. John writes:
"First, different users will have different definitions of "intuitive." ... Second, and this isn't conveyed directly by the definition of idiomatic, users actually expect inconsistent behavior."
In my experience, I've found this to be true (user experience walkthroughs with customers are very revealing and insightful).
I first got introduced to idiomatic design for user experience several years back. One of my colleagues challenged me to improve my user interface design by trading what might seem like intuitive paradigms for more useful idioms. He used the example of a car. He said the placement of the gas/brake pedals was not intuitive, but idiomatic.
He argued that what's important is that the pedals are placed where they are efficient and effective, not necessarily where they're intuitive. His point was that I should make design decisions by thinking through long-term user effectiveness and efficiency vs. just the up-front discoverability of intuitive models. He added that sometimes intuitive placement makes sense, as long as you're not trading away overall user experience.
User experience in software is challenging so I enjoy distinctions like this that make me think of the solution from different angles.
Scenarios and Solutions are basically whiteboard solutions that quickly depict key engineering decisions. You can think of them as baselines for your own design. We have a set of solutions that show the most common end-to-end ASP.NET 2.0 authentication and authorization patterns: Intranet
The advantage of starting with these is that you get to quickly see what combinations have worked for many others.
If you use a principle-based approach, you can get rid of classes of security issues. SQL injection, cross-site scripting and other flavors of input injection attacks are possible because of some bad practices. Here's a few of the bad practices:
The key to input and data validation is to use a principle-based approach. Here's some of the core principles and practices:
If you use a principle-based approach, you don't have to chase every new threat or attack or its variations. Here's a few resources that help get you started:
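To make the principle concrete, here's a minimal sketch of "treat input as data, not code" using parameterized queries. It's illustrative only: the table, names, and use of Python's sqlite3 module are my own assumptions, not something from the guidance itself.

```python
import sqlite3

# Toy in-memory database for illustration (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Bad practice: building SQL by string concatenation invites injection:
    #   conn.execute("SELECT role FROM users WHERE name = '" + name + "'")
    # Principle: treat input as data, never as code. Parameterized queries
    # pass the value through a placeholder, so it is never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # [('admin',)]
print(find_user(conn, "' OR '1'='1"))  # [] -- hostile input stays plain data
```

The same principle applies regardless of data access technology; the placeholder syntax changes, but the idea of keeping code and data separate does not.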
This is a follow up to my post, Manage Energy, Not Time. A few folks have asked me how I figure out energy drains and catalysts.
For me, clarity came when I broke it down into:
On the task side ...
This hit home for me when one of the instructors gave some example scenarios:
He asked, "How do you feel?" He said some people will have "energy" for some of these; others won't. Some people will be excited by the chance to drill into data and cells. He said others will be excited by painting the broader strokes. He then gave more examples, such as the irony of how you might have the energy to go skiing, but not to go to the movies.
The point he was making was that energy was relative and that you should be aware of what gives you energy or takes it away.
On the people side ...
I pay more attention to people now in terms of catalysts and drains. With some individuals, I'm impressed at their ability to sap energy. (I can almost hear Gauntlet in the background: "Your life force is running out ...") Other individuals are clearly catalysts, giving me energy to move mountains.
It's interesting for me now to think of both people and tasks in terms of catalysts and drains. Now I consciously spend more time with catalysts, and less time with drains, and I enjoy the results.
In general, "scenario" usually means a possible sequence of events.
In the software industry, "scenario" usually means one of the following:
1. Same as a use case
2. Path through a use case
3. Instance of a use case
#3 is generally preferred because it provides a testable instance with specific results.
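To see why the testable instance matters, here's a hedged sketch: a toy use case ("withdraw cash") with a scenario pinned to specific values. The function and numbers are hypothetical, purely for illustration.

```python
def withdraw(balance, amount):
    """Toy withdraw operation for illustration (names are made up)."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Use case: "Customer withdraws cash" -- too abstract to test directly.
# Scenario (instance of the use case): "Customer with $100 withdraws $40,
# leaving $60." Specific inputs, specific expected result -- testable.
assert withdraw(100, 40) == 60

# A path through the use case (the error path), also pinned to values:
try:
    withdraw(100, 150)
except ValueError:
    pass  # insufficient funds, exactly as the scenario specifies
```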
Around Microsoft, we use "scenarios" quite a bit ...
1. At customer events, it's common to ask, "What's your scenario?" This is another way of asking, "What's your context?" and "What are you trying to accomplish?"
2. In specs, scenarios up front set the context for the feature descriptions.
3. Marketing teams often use scenarios to encapsulate and communicate key customer pain/problems.
4. Testing teams often use scenarios for test cases.
At the end of the day, what I think is important about scenarios is they help keep things grounded, tangible and human. I like them because they can stretch to fit, from fine-grained activities to large-scale, end-to-end outcomes.
One of the most effective approaches I've found for chunking up a project for incremental value is using a Scenario and Feature Matrix.
A Scenario and Feature Matrix organizes scenarios and features into a simple view. The scenarios are your rows. The features are your columns. You list your scenarios in order of "MUST", "SHOULD", and "COULD" (or Pri 1, 2, and 3) .. through vNext. You list your features by cross-cutting and vertical. By cross-cutting, I mean that feature applies to multiple scenarios. By vertical, I mean that feature applies to just one scenario. It helps to think of scenarios in this case as goals customers achieve. It helps to think of the features as chunks of value that support the scenario. The features are a bridge between the customer's scenario and the developer's work. You can make this frame on a whiteboard before baking into slides or docs.
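The structure above can be sketched in a few lines of code. This is a hypothetical illustration (the scenario names, features, and dependency map are all made up), showing how the matrix lets you ask "if I cut this scenario, which features go with it?":

```python
# Rows: scenarios in MUST/SHOULD/COULD priority order (hypothetical examples).
scenarios = [
    ("Customer places an order", "MUST"),
    ("Customer tracks an order", "SHOULD"),
    ("Customer reviews a product", "COULD"),
]

# Columns: features, marked cross-cutting (support many scenarios)
# or vertical (support just one).
features = {
    "Login": "cross-cutting",
    "Checkout": "vertical",
    "Order history": "vertical",
    "Product ratings": "vertical",
}

# The matrix: which features each scenario depends on.
matrix = {
    "Customer places an order": ["Login", "Checkout"],
    "Customer tracks an order": ["Login", "Order history"],
    "Customer reviews a product": ["Login", "Product ratings"],
}

def cuttable_features(cut_scenario):
    """Features that can be cut with a scenario: those it depends on
    that no remaining scenario still needs."""
    remaining = [s for s, _ in scenarios if s != cut_scenario]
    still_needed = {f for s in remaining for f in matrix[s]}
    return [f for f in matrix[cut_scenario] if f not in still_needed]

# Cutting the COULD scenario drops only its vertical feature;
# the cross-cutting Login feature survives because others need it.
print(cuttable_features("Customer reviews a product"))  # ['Product ratings']
```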
You now have a simple frame where you can see your baseline release, your "cuttable" scenarios, and your dependencies. You can quickly analyze some basic questions:
Because it's visual, it's an easy tool to get the team on board and communicate in terms of value, before getting mired in detail. When you get mired in detail, as you figure out features and dependencies, you can ground yourself back in the scenarios.
From what I've seen over time, most projects can't cut scope without messing up quality, because they weren't designed to. Cutting the leg off your table doesn't save time or quality; it just makes a bad table. If you didn't have enough time or resources to make four legs, should you have started? Should you build the four legs first and get the table standing before you add that extra widget?
A Scenario and Feature Matrix makes analyzing and communicating these problems simpler because you create a visual strawman. Anytime you can quickly bring more eyes to the table, it helps. I also like to think of this as "Axiomatic" Project Management at heart, because I used simplified axiomatic design principles for the approach. If you're starting a new project, challenge yourself by asking if you can incrementally deliver value and if you can cut chunks of work without ruining your deliverable (or your team), and see if a Scenario and Feature Matrix doesn't help.