Let’s say you are talking about a program with someone. Imagine the stakes are somewhat high. Suddenly one of these two phrases gets thrown out:

1. "It could be faster"

In any but the healthiest of teams, these are great words to put folks on the defensive and stop rational conversations about code. Suddenly the neurons spin and we wonder: is there something we missed? Is my code wrong? Did I screw it up all over the place? Does this guy/gal have some privileged piece of information? When someone tells you "It could be faster" - what's your typical reaction…?

2. "It's for security reasons"

That's another one. And it's funny I even dare to bring this up, being from Microsoft and all that. Who would be brave enough to enter a debate about code being written for such an honorable purpose? Who would be enough of a security expert to dare question the existence of code that is there to protect us from evil, shame and regret? Do you want your grandchildren to remember you for being the dude/dudette that left that gaping security hole open? The one exploited by that worm that took civilization back to the 19th century in one fell swoop? I know I don't want to be associated with an accidental return to the hunter/gatherer lifestyle ...so we block the issue in our heads and move on.

What's going on here?

These are just two examples of what I call 'invoking ghosts' in discussions about software. We bring up a topic that is vague, crosscutting, and obscure in order to:

1) Provide an excuse to do (or not do) work, regardless of how sensible it actually is

2) Win a debate against The Other Idea

3) Raise FUD levels to gain control of the situation or reduce others' credibility

4) Basically steer the conversation away from the code and into people manipulation

Now, don't get me wrong. Perf and security are important (Doh! I sound like an idiot....I guess you can quote me on that...No, the first phrase). Things can be faster, and you could be doing things for real security reasons - but how can you tell? What do you do when someone invokes these topics as ghosts in a conversation, and how do you steer it back to a productive place?

Fortunately, I think there are good patterns and tools to communicate around these issues. A huge one is to be transparent about the goals and unknowns. This basically puts on the table: 1. what it is that you are trying to achieve, and 2. an acknowledgment that the understanding of the problem and the shape of the solution are still evolving.

What has helped me is to get to a shared (implicit or explicit) commitment to an approach:

1) Establish the needs & the priorities

2) Analyze the current situation and set a goal for an improvement

3) Work on improving the system, and checkpoint against the goals

4) Do it all again
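To make the loop concrete, here is a minimal sketch of steps 2 and 3 in Python. The workload, the numbers, and the "20% faster" goal are all invented for illustration; the point is that the goal is agreed on and checkable, not a ghost.

```python
import time

def measure_latency(fn, runs=50):
    """Average wall-clock seconds per call of fn over several runs."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

def current_version():
    # Stand-in for the code under discussion.
    return sum(range(100_000))

def improved_version():
    # Closed-form replacement for the loop above.
    n = 100_000
    return (n - 1) * n // 2

# Step 2: analyze the current situation and set a goal.
baseline = measure_latency(current_version)
goal = baseline * 0.8  # "20% faster" - a number we agreed on, not "it could be faster"

# Step 3: checkpoint the improvement against the goal.
meets_goal = measure_latency(improved_version) < goal
```

Once the goal is a number everyone signed up for, the conversation becomes "did we meet it?" rather than a debate about credibility.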

This actually sounds quite obvious, but it has helped me bring sanity to discussions about quality attributes such as performance & scalability, security, manageability, maintainability, and other areas where 'ghosts' tend to live. Quality attributes tend to be particularly good 'haunted houses' because of their crosscutting, distributed nature, and because we tend to have less than ideal tools to think about tests of success.

If you look at the four steps above, the first one is about establishing needs - figuring out your tests, if you will. There has been a lot of work for many years around helping people figure out their tests/needs for quality attributes.

For security you have Threat Modeling, for manageability you have Health Modeling, for performance you have Perf Modeling, and I hope good frameworks come up for other quality attributes as well. But what I particularly like about these models is that they aren't models of the solution, but models of tests (and steps 2, 3 & 4 are just process guidance to help us make them pass…the “green and refactor” of the “red, green, refactor” cycle).
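As a toy illustration of "models of tests", here is a threat-model entry phrased as something you can check rather than as a solution. The `Threat` type and both entries are invented for this sketch; a real threat model carries far more detail (assets, attack vectors, risk ratings).

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    mitigations: list = field(default_factory=list)

    def is_mitigated(self) -> bool:
        # A threat "passes" only when at least one mitigation is in place.
        return bool(self.mitigations)

model = [
    Threat("SQL injection via the search box", ["parameterized queries"]),
    Threat("Session token sent over plain HTTP"),  # no mitigations yet - still 'red'
]

# The failing entries are exactly what steps 2-4 then work to turn 'green'.
still_red = [t.description for t in model if not t.is_mitigated()]
```

Notice that nothing here says *how* to fix anything - it only makes the unmet needs visible, which is what keeps the discussion about the system instead of about people.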

In my opinion the focus on tests first for these complex areas makes these approaches and tools intrinsically suitable for expressing partial, incomplete or overlapping goals (which tends to happen whenever humans are involved…), and helping people iteratively collaborate around defining, verifying and refining them.

By starting with tests, we support an inductive approach to building the solution, akin to what you do with TDD. Now maybe we still don't have good tools to automatically test our systems based on all these testing models, but I hope that's just a matter of time (base DSL for tests that can be specialized for specific domains, anyone?). Eventually, these tools could become part of the suite of tests that drive the creation of the solution, treated as acceptance tests tied to business needs, and maybe even verifiable against systems at runtime.
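In the spirit of that "base DSL for tests" daydream, here is a toy sketch of one registry that different quality attributes could specialize. Every name here is invented for illustration, and the checks are stubs standing in for real measurements.

```python
REGISTRY = []

def quality_test(attribute):
    """Decorator that files a check under a quality attribute (perf, security, ...)."""
    def decorate(fn):
        REGISTRY.append((attribute, fn))
        return fn
    return decorate

@quality_test("performance")
def search_meets_latency_goal():
    return True  # stand-in: run the workload and compare against the agreed goal

@quality_test("security")
def anonymous_writes_are_rejected():
    return True  # stand-in: exercise the endpoint without credentials

# A runner could then report red/green status per attribute:
report = [(attr, fn.__name__, fn()) for attr, fn in REGISTRY]
```

Domain-specific layers (a perf model, a threat model, a health model) would each plug their own checks into the same registry, so one runner could drive them all.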

…In the meantime it takes toil, tinkering, and dabbling in mental experiments. But most of all I think it takes clear communication between people. Here are some resources around these topics that I’ve used to help me scare the ghosts away from conversations and keep them productive, collaborative, and smart:

What makes a good threat model – by J.D. Meier

Performance & Scalability Checkpoint to improve your software engineering – by J.D. too

MSDN page with lots of threat modeling material (the generic portal)

Rico Mariani’s blog – entry with the foreword of the p&p perf & scale guide, featuring Donald Knuth quoting Tony Hoare: “premature optimization is the root of all evil”

Health Modeling – whitepaper from the Windows Server folks. They’ve done us all the favor of dumping the health model of a printer in XML. Whee! (But can you write a test runner for it?)

(Whither my simplicity model paper? Related to maintainability of code? I digress)

Kudos: It’s clear that J.D. has done a lot of work in this area. He’s the best I know at distilling these areas in significant, actionable, and insightful ways.

(OK, my cereal is going something like "Ed - W-T-F-is-this-fluffy-post? Why don't you tell us about dependency injection, aspects, executable models, Object Builder, using XAML to define types dynamically, gps, CAB running in mobile devices, workflow and UI or concurrency or whatever?" A: "I believe writing software is a social activity. I believe that if we learn to communicate, improve how we reason, become more self-aware, and question assumptions in more effective ways, we will quite simply write much better software and enjoy it more along the way. Plus - it's my blog. And my doctor told me I shouldn't talk to cereal.")