J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness


    Brian Foote and Dynamic Languages



    Brian Foote gave an insightful talk about dynamic languages to our patterns & practices team. I walked away with a few insights from the delivery and from the content. 

    On the delivery side of things, I liked the way he used short stories, metaphors, and challenging questions to make his points.  The beauty of his approach was that I could either take it at face value, or re-examine my assumptions and paradigms.  I think I ended up experiencing recursive insight.

    On the content side, I liked Brian's points:

    • Leave your goalie there.  No matter how good your game strategy is, would you risk playing without your goalie?
    • Do it when it counts.  If you're going to check it later do you need to check it now?  Are you checking it at the most effective time?  Are you checking when it matters most?  Are you checking too much or too often?
    • End-to-end testing.  What happens when there's a mistake in your model?  Did you simulate enough in your end-to-end testing to spot the problem?
    • Readable code.  If you can't eyeball your code, who's going to catch what the compiler won't?

    A highlight for me was when Brian asked the question: what would or should have caught the error?  (The example he showed was a significant blunder measured in millions.)  There are a lot of factors across the people, process, and technology spectrum.  The problem is: are specs executable? ... are processes executable? ... are engineers executable? ... etc.

    After the talk, I had to ask Brian what led him to Big Ball of Mud.  I wasn't familiar with it, but more importantly I wanted Brian's take.  My guess was that it was a combination of a big ball of spaghetti that was as clear as mud.  He said "ball of mud" was actually a fairly common expression at the time, and it hit home.

    Following one ponderable thought after another, we landed on the topic of copy-and-paste reuse in code.  We all know the downsides of copy and paste, but Brian mentioned the upside: cut and paste localizes changes (you can make a change here without breaking code there).  Dragos and I also brought up the issue of over-engineering reuse, and how sometimes reverse engineering is more expensive than a fresh start.  In other words, sometimes so much energy and effort have been put into a great big reusable gob of goo that when you just need to do one thing, that one thing is tough to do.  I did point out that the copy-and-paste-ability factor seemed to go up if you found the recurring domain problems inside of recurring application features inside of recurring application types.

    Things got really interesting when we explored how languages could go from generalized to optimized as you walk the stack from lower-level framework code up to more domain-specific applications.  We didn't talk specifically about domain-specific languages, but we did play out the concept, which turned into metaphors of one-size-fits-all mammals versus trucks with hoses that put out fires and trucks that dump dirt (i.e., use the right, optimized tool for the job).

    24 ASP.NET 2.0 Security FAQs Added to Guidance Explorer

    We published 24 ASP.NET 2.0 Security FAQs in Guidance Explorer.  You'll find them under the patterns & practices library node.  We pushed the FAQs into Guidance Explorer because one of our consultants in the field, Alik, is busy building out a customer's security knowledge base using Guidance Explorer.

    Don't let the FAQ name fool you.  FAQ can imply high-level or introductory.  These FAQs actually reflect some deeper issues.  In retrospect, we should have named this class of guidance modules "Questions and Answers."

    Each FAQ takes a question and answer format, where we summarize the solution and then point to more information.

    Enjoy!

    Echo It Back To Me

    Do people understand what you need from them?  Do people get your point?  A quick way to check is to say, "echo it back to me."  Variations might include:

    • Tell me what you heard so far to make sure we're all on the same page ...
    • I want to make sure I've communicated properly, spit it back out to me ...

    You might be surprised by the results.  I've found this to be an effective way to narrow the communication gap for common scenarios.

    Guidance Explorer as Your Personal Guidance Store

    Although Guidance Explorer (GE for short) was designed to showcase patterns & practices guidance, you can use it to create your own personal knowledge base.  It's a simple authoring environment, much like an offline blog.  The usage scenario is that you create and store guidance items in "My Library."

    While using GE day to day, I noticed a simple but important user experience issue.  I think we should have optimized around creating free-form notes that you could later turn into more structured guidance modules.  There's too much friction in going straight to structured guidance; in practice, you start with some notes, then refine them into more structure.

    To optimize around creating free-form notes in GE, I think the New Item menu that you get when you right-click the My Library node should have been:
    1.  Note - This would be a simple scratch pad so you could store unstructured notes of information.
    2.  Guidance Module - This menu option would list the current module types (i.e. Checklist Item, Code Example, … etc.)

    We actually did include the "Technote" type for this scenario.  A "Technote" is an unstructured note that you can tag with metadata to help sort and organize your library.  The problem is that this is not obvious, and it gets lost among the other structured guidance types in the list.

    The benefit of 20/20 hindsight strikes again!

    On a good note, I have been getting emails from various customers who are using Guidance Explorer.  They like the fact that they get something of a structured wiki experience, but with a lot more capability around sorting and viewing chunks of guidance.  They also like the fact that you get templates for key types (so when five folks create a guideline, all the guidelines have the same basic structure).  I'll post soon about some of the key learnings that can help reshape creating, managing, and sharing your personal, team, and community knowledge in today's landscape.

    Five Things You Didn't Know About Me

    I was blog-tagged by Ed, so here are 5 things you probably don't know about me ...

    • I've trained in Muay Thai kickboxing (head-butting and all), which reminds me I need to work on my splits again.
    • I have lineage to a king long ago, in a country I don't live in.  (This one actually surprised me!)
    • Robert Redford accidentally stepped on my foot while shooting The Quiz Show.  (A related long story told short: I found out two weeks too late that I had been offered a speaking role for an HBO special.)
    • I've applied Neuro-Linguistic Programming (NLP) to shape software engineering and guidance.
    • I have a pet wallaby.


    I'm tagging Alik, Rico, Ron, Srinath and Wojtek to post their 5 things.

    Context is Key

    I was browsing Rico's blog and came across his post Do Performance Analysis in Context.  I couldn't agree more.  When it comes to evaluation, context is key.  If you don't know the scenarios and context, you can't trust the merits of your data or solutions.  To spread the idea and its importance at work, I've coined the term Context-Precision.

    Analyzing a Problem Space

    How do you learn a problem space?  I've had to chunk up problem spaces to give advice for the last several years, so I've refined my approach over time.  In fact, when I find myself churning or don't have the best answers, I usually find that I've missed an important step.

    Problem Domain Analysis

    1. What are the best sources of information?
    2. What are the key questions for this problem space?
    3. What are the key buckets or categories in this problem space?
    4. What are the possible answers for the key questions?
    5. What are the empirical results to draw from?
    6. Who can be my sounding board?
    7. What are the best answers for the key questions?

    1. What are the best sources of information?
    Finding the best sources of information is key to saving time.  I cast a wide net, then quickly spiral down to find the critical, trusted sources of information in terms of people, blogs, sites, aliases, forums, ... etc.  Sources are effectively the key nodes in my knowledge network.

    2. What are the key questions for this problem space?
    Identifying the questions is potentially the most crucial step.  If I'm not getting the right answers, I'm not asking the right questions.  Questions also focus the mind, and no problem withstands sustained thinking (thinking is simply asking and answering questions).

    3. What are the key buckets or categories in this problem space?
    It's not long before questions start to fall into significant buckets or categories.  I think of these categories as a frame of reference for the problem space.  This is how we created our "Security Frame" for simplifying how we analyzed application security.

    4. What are the possible answers for the key questions?
    When identifying the answers, the first step is simply identifying how it's been solved before.  I always like to know whether the problem is new, and if not, the ways it's been solved (the patterns).  If I think I have a novel problem, I usually haven't looked hard enough.  I ask myself who else would have this problem, and I don't limit myself to the software industry.  For example, I've found great insights for project management and for storyboarding software solutions by borrowing from the movie industry.

    One pitfall to avoid: just because a solution worked in one case doesn't mean it's right for you.  The biggest difference is usually context.  I try to find the "why" and "when" behind the solution, so that I can understand what's relevant for me, as well as tailor it as necessary.  When I'm given blanket advice, I'm particularly curious what's beneath the blanket.

    5. What are the empirical results to draw from?
    Nothing beats empirical results.  Specifically, I mean reference examples.  Reference examples are shortcuts to success.  Success leaves clues.  I try to find the case studies and the people behind them.  This way I can model from their success and learn from their failure (failure is just another lesson in how not to do something).

    6. Who can be my sounding board?
    One assumption I make when solving a problem is that there's always somebody better than me at that problem.  So I ask who that is, and I seek them out.  It's a chance to learn from the best, and it forces me to grow my network.  This is also how I build up a sounding board of experts.  A sounding board is simply a set of people I trust to have useful perspective on a problem, even if it's nothing more than improving my own questions.

    7. What are the best answers for the key questions?
    The answers that I value the most are the principles.  These are my gems.  A principle is simply a fundamental law.  I'd rather know a single principle than a bunch of rules.  By knowing a single principle, I can solve many variations of a problem.

    Now, while I've left some details out, I've hopefully highlighted enough here that you'll find something you can use in your own problem domain analysis.

    844 Guidance Items in Guidance Explorer

    It's not 9 new guidelines; it's actually 70.  It looks like my Guidance Explorer wasn't done synching when I wrote my previous post.

    Prashant sent me a quick note.  Here is the complete status for Dec and Jan:

    • Dec – 33 Guidelines items (.NET 2.0)
    • Jan – 37 Guidelines items (ADO.NET 2.0)

    • Total Items – 844
    • Total Guidelines Items – 547

    9 New Perf Guidelines in Guidance Explorer

    You should see 9 new performance guidelines in Guidance Explorer (GE).  Well, not entirely new, but refactored and cleaned up.  Prashant Bansode (from our original Improving .NET Performance guide team) was busy while I was out of office for the holidays.  What you'll notice is that many of the guidelines are missing problem and solution examples.  Job #1 was getting the guidance into this form; our new schema for guidelines is more elaborate than the original guidance, which means we'll have information holes.  Fleshing out the missing information is the next job.

    BTW - if you're using Guidance Explorer and have an interesting story on how you've used it, please share it with us at getool@microsoft.com.

    From Guides to Guidance Modules

    Have you noticed the transition from guides to guidance modules over time?  My first few guidance projects were actually guides.

    While the chapters in the guides were modular, the overall outcome was an end-to-end guide.  On the upside, there was a lot of cohesion among the chapters.  On the downside, the guides were overwhelming for many customers who just wanted bits and pieces of the information.  That's the challenge of making a full guide available in HTML, PDF, and print.

    Examples of Guidance Modules
    .NET 2.0 Security Guidance was the first project to use "Guidance Modules."  Guidance Modules are effectively modular types of guidance.

    Benefits of Guidance Modules
    The benefits of modules include:

    • "right-size" the solution to the problem
    • easier to author and test the guidance (templates and test cases)
    • easier to consume (you can grab just the guidance you need)
    • release as we go vs. wait until the end
    • you can build guides and books from the modules (strong building blocks for guides)

    The Chunking Has Just Begun
    While the initial chunking of guidance has certainly helped, there's more to go.  Customers have asked for even smaller chunks.  For example, rather than have all the guidelines in a single module, chunk each guideline into its own page.

    Dealing with Chunks
    Chunking up the guidance creates new challenges.  How do you find, gather, and organize the right set of modules for your scenario?  This is a good problem to have.  Assuming there are guidance modules that have a great community around them and are prescriptive in nature (they prescribe vs. describe solutions), the next step is to improve how you can leverage the modular information.  That's where Guidance Explorer comes in.  It was an experiment to explore new ways of creating, finding, and using guidance modules.  We learned a great deal about user experience, which I'll share in a future post.

    Idioms and Paradigms

    John Socha-Leialoha wrote up a nice bit of insight on how Users are Idiomatic.  John writes:

    "First, different users will have different definitions of "intuitive." ... Second, and this isn't conveyed directly by the definition of idiomatic, users actually expect inconsistent behavior."

    In my experience, I've found this to be true (user experience walkthroughs with customers are very revealing and insightful).

    I first got introduced to idiomatic design for user experience several years back.  One of my colleagues challenged me to improve my user interface design by trading what might seem like intuitive paradigms for more useful idioms.  He used the example of a car.  He said the placement of the gas/brake pedals was not intuitive, but idiomatic.

    He argued that what's important is that the pedals are placed where they are efficient and effective, not necessarily where they're intuitive.  His point was that I should make design decisions by thinking through user effectiveness and efficiency in the long term vs. just thinking of the up-front discoverability of intuitive models.  He added that sometimes intuitive placement makes sense, as long as you're not trading away overall user experience.

    User experience in software is challenging so I enjoy distinctions like this that make me think of the solution from different angles.

    ASP.NET 2.0 Security Scenarios and Solutions

    Scenarios and Solutions are basically whiteboard solutions that quickly depict key engineering decisions.  You can think of them as baselines for your own design.  We have a set of solutions that show the most common end-to-end ASP.NET 2.0 authentication and authorization patterns:
    Intranet

    Internet

    The advantage of starting with these is that you quickly see what combinations have worked for many others.

    Input Validation Principles and Practices


    If you use a principle-based approach, you can get rid of entire classes of security issues.  SQL injection, cross-site scripting, and other flavors of input injection attacks are possible because of some bad practices.  Here are a few of them:

    Bad Practices

    • you're relying on client-side input validation
    • you're not validating input
    • you're ignoring that input includes query strings, cookies, and file and URL paths
    • you're making security decisions based on user input
    • you're not "sanitizing" (i.e., making safe) output


    The key to input and data validation is to use a principle-based approach.  Here are some of the core principles and practices (a quick code sketch follows the list):

    Good Practices

    • validate length, range, format and type
    • use whitelisting techniques over blacklisting
    • keep user input out of the control path
    • don't make security decisions from client input
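
    To make the whitelisting practice concrete, here's a minimal C# sketch (my own illustration, not from the patterns & practices guidance; the class name, field name, and length limits are hypothetical):

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    public static class InputValidator
    {
        // Whitelist: letters, digits, spaces, and hyphens only, 1-50 characters.
        // Reject anything outside the whitelist rather than trying to "clean" it.
        private static readonly Regex NamePattern =
            new Regex(@"^[a-zA-Z0-9 \-]{1,50}$", RegexOptions.Compiled);

        public static bool IsValidName(string input)
        {
            // Validate for length, range, format, and type on the server,
            // regardless of any client-side checks.
            return !String.IsNullOrEmpty(input) && NamePattern.IsMatch(input);
        }
    }
    ```

    The same check applies whether the value arrives via a form field, query string, or cookie: validate at the trust boundary.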

    If you use a principle-based approach, you don't have to chase every new threat or attack or its variations.  Here are a few resources to help get you started:

    Catalysts and Drains

    This is a follow up to my post, Manage Energy, Not Time.  A few folks have asked me how I figure out energy drains and catalysts.

    For me, clarity came when I broke it down into:

    • Tasks
    • People

    On the task side ...
    This hit home for me when one of the instructors gave some example scenarios:

    • You have to analyze a few thousand rows of data in a spreadsheet
    • You have to give a last-minute presentation to a few thousand people in an hour
    • You have a whiteboarding session to design a product
    • You have to code a thousand lines to solve an important problem

    He asked, "How do you feel?"  He said some people will have "energy" for some of these; others won't.  Some people will be excited by the chance to drill into data and cells.  He said others will be excited by painting the broader strokes.  He then gave more examples, such as the irony of how you might have the energy to go skiing, but not to go to the movies.

    The point he was making was that energy is relative, and that you should be aware of what gives you energy or takes it away.

    On the people side ...
    I pay more attention to people now in terms of catalysts and drains.  With some individuals, I'm impressed at their ability to sap energy (I can almost hear Gauntlet in the background: "Your life force is running out ...").  Other individuals are clearly catalysts, giving me energy to move mountains.

    It's interesting for me now to think of both people and tasks in terms of catalysts and drains.  I consciously spend more time with catalysts and less with drains, and I enjoy the results.

    What's a Scenario

    In general, "scenario" usually means a possible sequence of events.

    In the software industry, "scenario" usually means one of the following:
    1. Same as a use case
    2. Path through a use case
    3. Instance of a use case

    #3 is generally preferred because it provides a testable instance with specific results.

    Around Microsoft, we use "scenarios" quite a bit ...
    1.  At customer events, it's common to ask, "What's your scenario?"  This is another way of asking, "What's your context?" and "What are you trying to accomplish?"
    2.  In specs, scenarios up front set the context for the feature descriptions.
    3.  Marketing teams often use scenarios to encapsulate and communicate key customer pain/problems.
    4.  Testing teams often use scenarios for test cases.

    At the end of the day, what I think is important about scenarios is they help keep things grounded, tangible and human.  I like them because they can stretch to fit, from fine-grained activities to large-scale, end-to-end outcomes.

    Scenario and Feature Matrixes

    One of the most effective approaches I've found for chunking up a project for incremental value is using a Scenario and Feature Matrix.

    A Scenario and Feature Matrix organizes scenarios and features into a simple view.  The scenarios are your rows.  The features are your columns.  You list your scenarios in order of "MUST," "SHOULD," and "COULD" (or Pri 1, 2, and 3) ... through vNext.  You list your features by cross-cutting and vertical.  By cross-cutting, I mean the feature applies to multiple scenarios.  By vertical, I mean the feature applies to just one scenario.  It helps to think of the scenarios in this case as goals customers achieve, and the features as chunks of value that support the scenarios.  The features are a bridge between the customer's scenario and the developer's work.  You can make this frame on a whiteboard before baking it into slides or docs.
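
    Purely as a sketch (the scenario and feature names here are made up), the frame might look like this on the whiteboard:

    ```
                         Feature A         Feature B         Feature C
                         (cross-cutting)   (cross-cutting)   (vertical)
    Scenario 1 (MUST)        X                 X
    Scenario 2 (SHOULD)      X                 X                 X
    Scenario 3 (COULD)       X
    ```

    Reading a row shows what a scenario depends on; reading a column shows what you'd lose by cutting that feature.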

    You now have a simple frame where you can see your baseline release, your "cuttable" scenarios, and your dependencies.  You can quickly analyze some basic questions:

    • Do you have a good baseline set of scenarios?
    • Do you have an incremental story?
    • Do you have cuttable scenarios?
    • Can you cut a feature without cutting value for this release?

    Because it's visual, it's an easy tool for getting the team on board and communicating in terms of value before getting mired in detail.  When you do get mired in detail, as you figure out features and dependencies, you can ground yourself back in the scenarios.

    From what I've seen over time, most projects can't cut scope without messing up quality, because they weren't designed to.  Cutting the leg off your table doesn't help save time or quality; it just makes a bad table.  If you didn't have enough time or resources to make four legs, should you have started?  Should you build the four legs first and get the table standing, before you add that extra widget?

    A Scenario and Feature Matrix makes analyzing and communicating these problems simpler because you create a visual strawman.  Anytime you can quickly bring more eyes to the table, it helps.  I also like to think of this as "Axiomatic" Project Management at heart, because I used simplified axiomatic design principles for the approach.  If you're starting a new project, challenge yourself by asking whether you can incrementally deliver value and whether you can cut chunks of work without ruining your deliverable (or your team), and see if a Scenario and Feature Matrix doesn't help.

    What's the Cost of Not Doing Security Engineering


    Alik is out in the field helping customers bake security into their product cycles.  Of course, customers ask how much it costs to implement Security Engineering practices.  The answer is, of course ... it depends.  The flip side is: what's the cost of NOT doing it?

    I think understanding the cost of NOT doing it is important because it gets you thinking about risk and impact.  This sets the stage for an informed business case for security.  While your business case mileage may vary, you'll get further with one than without it.

    Manage Energy, Not Time

    Manage energy, not time, to get more things done ...  This concept really resonates with me.  I also like it because it can be counter-intuitive or non-obvious.

    One way to try to get more things done is to jam more into your schedule.  Yuck!  Unfortunately, that's a fairly common practice.

    I actually have lots of practices for managing time (outcome-based work breakdown structures, managing outcomes vs. activities, prioritizing outcomes based on usage and value, avoiding over-managing minutia, using outcome-based agendas for meetings, distinguishing getting results vs. building connections in meetings, using time-boxes to deliver incremental results in projects, "zero-mail in the inbox" practice … etc.)   While I'm always open to new time management practices, I think I was getting diminishing returns from yet more time management techniques.

    So stepping back, here's the situation … I was using a full arsenal of time management techniques, I was known for getting results, and yet I wanted to reach the next level.  What happened next was that I noticed a common thread among a few very different trainings and books around leadership and results.  Energy was a recurring theme.

    Of course, then it made total sense (the beauty of 20/20 hindsight!).  We've all had that great hour of brilliance or that unproductive work week.  I did a reality check against several past projects.  It was easy for me to see the connection between energy and results, when all else was equal.  The problem was, I didn't have an arsenal of practices for managing energy.  It turns out I didn't really need one.  Simply knowing what drains me or catalyzes me helped a lot.

    Now that I've been aware of this underlying concept for a while, I have learned a few practices along the way.  One practice I use is to explicitly ask the team when and how often they want to deliver customer results (i.e., how often do they want to see the fruits of their effort?).  I balance this with capability, customer demand, project constraints, and a bunch of other drivers, but the fact that I explicitly try to leverage energy and rhythm helps crank the energy up a notch (and, as a bonus, results).

    User Experience, Tech Feasibility and Business Value

    I found a way to explore more and churn less on incubation (i.e., R&D) projects.  It helps to think of your project experiments and key risks in terms of these three categories, in this order:
    1. user experience
    2. technical feasibility
    3. business value

    Sequence matters.  If you don't get the user experience right first, who cares if it's technically feasible?  Once you get the user experience right, meaning customers get value, the business value will follow.

    Here's how I learned this the hard way ...

    My project was time-boxed and budget constrained.  To keep our stakeholders happy, my strategy was to deliver incremental value.  This translated to short ship cycles to test with customers.   We used a rhythm of shipping every two weeks.  This let us track whether we were trending towards or away from the right solutions.

    While this was a relatively short feedback cycle, it wasn't actually efficient.  Most of our prototyping was around exploring user experiences, although we didn't know this at the time.  We were focused on shipping prioritized customer scenarios and features.  Delivering these scenarios and features mixed exploring user experience, tech feasibility, and business value.  It's not a bad mix -- it just wasn't the most efficient.

    Necessity is the mother of invention.  When we weren't "learning" at the pace we expected, we had to find a better way.  We moved to rapidly prototyping the user experience with slideware and walkthroughs.  This meant faster feedback and fewer do-overs than our software prototypes.  It also meant that, in our software prototypes, we could consciously and explicitly focus on technical feasibility.

    User experience was the real challenge and where the most value was.  Spending a week building a software prototype to test technical feasibility and identify engineering risks makes sense.  Spending a week building a software prototype to test user experience sucks.  In other words, what previously took a week or more to build out and test (the user experience), we could now do in a few hours.

    In hindsight, it's easy to see that incubation was about user experience, tech feasibility and business value, even though I didn't realize it at the time.  It's also easy to see now that the dominant challenge was usually user experience.

    The moral of the story isn't that you can use slideware for all your user experience testing.  Instead, the lesson I would pass along is to be aware of whether you are really testing user experience, tech feasibility, or business value.  By knowing which category you're exploring, you can pick the right approach.

    Timing Managed Code in .NET 2.0

    In .NET 1.1, we timed managed code by wrapping QueryPerformanceCounter and QueryPerformanceFrequency.  The following How To shows how:

    In .NET 2.0, you can use the Stopwatch Class.  I found the following references useful:
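
    As a quick illustration, here's a minimal sketch of Stopwatch in use (my own example, not code from those references):

    ```csharp
    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            // Stopwatch uses the high-resolution performance counter
            // under the covers when the hardware supports one.
            Stopwatch watch = Stopwatch.StartNew();

            DoWork();  // the code you want to time

            watch.Stop();
            Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
            Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
        }

        static void DoWork()
        {
            // Placeholder workload for illustration.
            System.Threading.Thread.Sleep(100);
        }
    }
    ```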

    Scenario Evaluations for Product Design and Feedback


    When I need to quickly analyze a product and give actionable feedback, I use scenario evaluations.  Scenario evaluations are basically an organized set of scenarios and criteria that I test and evaluate against.  It's a pretty generic approach, so you can tailor it for your situation.  Here's an example of the frame I used to evaluate the usage of Code Analysis (FX Cop) in some security usage scenarios:

    Scenario Evaluation Matrix
    Development life cycle  

    • Scenario: Dev lead integrates FX Cop in build process.  
    • Scenario: Dev lead integrates FX Cop in design process.  
    • Scenario: Developer uses FX Cop in their development process.  
    • Scenario: Tester integrates FX Cop in testing process.  
    • Scenario: Developer integrates FX Cop in deployment process.  
    • Scenario: Dev lead creates a new FX Cop rule to support custom policies.

    Application type  

    • Scenario: Developer uses FX Cop to evaluate security of web applications.  
    • Scenario: Developer uses FX Cop to evaluate security of desktop applications.  
    • Scenario: Developer uses FX Cop to evaluate security of components.  
    • Scenario: Developer uses FX Cop to evaluate security of web services.

    Input and Data Validation  

    • Scenario: Identify database input that is not validated  
    • Scenario: Input data is constrained and validated for type, length, format, and range.  
    • Scenario: Identify output fields sent to untrusted sources that are not encoded

    Sensitive Data  

    • Scenario: Check secrets are not hard coded  
    • Scenario: Check plain text secrets are not stored in memory for extended periods of time  
    • Scenario: Check sensitive data is not serialized.

    ... etc.


    In this case, I organized the scenarios by life cycle, app type, and security categories.  This makes a pretty simple table.  Explicitly listing the scenarios helps you see where the solution fits in and where it does not, as well as identify opportunities.  A key aspect of effective scenario evaluation is finding the right matrix of scenarios.  For this exercise, some of the scenarios are focused on the user experience of using the tool, while others are focused on how well the tool addresses recommendations.  What's not shown here is that I also list personas and priorities next to each scenario, which is also extremely helpful for scoping.

    Criteria

    Things got interesting when I applied criteria to the scenarios above; for example (a sample scoring grid follows the list):

    • Recommended practice compliance
    • Implementation complexity
    • Quality of documentation/code
    • Developer competence
    • Time to implement
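
    For instance, walking a few of the scenarios above against three of the criteria might produce a grid like this (the ratings are purely hypothetical, just to show the shape of the output):

    ```
    Scenario                                          Compliance   Complexity   Time
    Dev lead integrates FX Cop in build process       High         Low          Low
    Developer evaluates security of web applications  Medium       Medium       Medium
    Check secrets are not hard coded                  High         Low          Low
    ```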

    I then walked the scenarios, testing and evaluating against the criteria.  This produced a nicely organized set of actionable feedback on how well the solution works (or doesn't).  I think part of today's product development challenge isn't a lack of feedback, but rather a lack of actionable feedback that's organized and prioritized.

    The beauty of this approach is that you can use it to evaluate your own solutions as well as others'.  If you're evaluating somebody else's solution, this actually helps quite a bit because you can avoid making it personal and argue from the data.

    The other beauty is that you can scale this approach along your product line.  Create the frames that organize the tests and "outsource" the execution of the scenario evaluations to people you trust.

    I've seen variations of this approach scale down to customer applications and scale up to full-blown platform evaluations for analysts.  Personally, I've used it mostly for performance and security evaluations of various technologies; it helps me quickly find holes I might otherwise miss, and it helps me communicate what I find.

    Be the Software

    When you're working on an R&D project, how do you shorten the cycles around testing your user experience models?
    ... Be the Software

    That's the advice John Socha-Leialoha, father of Norton Commander, gave me, and it worked like a champ.


    We faced a lot of user experience design issues early in our R&D project.  For example ...

    • how to filter a large picklist of items
    • how to optimize views based on type of item
    • how to integrate social software features (tagging, rating, ...)

    Initially, we did a bunch of whiteboard modeling, talk-throughs, and prototyping.  The problem was the prototypes weren't efficient.  I had a distributed team, so it was tough to paint a good picture of the prototype, even when we all agreed on the scenarios and requirements.  The other problem was that customer reviews were tough, because it was easy to rat-hole or get distracted by partial implementations.  The worst case was when we would finish a prototype and it would be a do-over.

    We experimented with two techniques:

    1. Build modular slideware for visual walkthroughs of task-based features.
    2. Be the software.

    This radically improved customer verification of the user experience and kept our dev team building out the right experience.

    Mocking up in slides is nothing new.  The trick was making it efficient and effective:

    1. We prioritized the scenarios that carried the most user experience risk.
    2. We created modular slide decks.  Each deck focused on exactly one scenario-based task (and scenarios were outcome based).  Modular slide decks are easier to build, review, and update.  Our average deck was around six slides.
    3. Each slide in a deck was a single step in the task from the user's perspective.
    4. Each slide had a visual mock-up of what the user would see.
    5. To paint some of the bigger stories, we did larger wrapper decks, but only after getting the more fine-grained scenarios right.  Our house was made of stone instead of straw.  In practice, I see a lot of beautiful end-to-end scenario decks that are too big, too fragile, and too make-believe.

    For example, here's the slide list for one deck:

    1. scenario - User subscribes to a guidance feed
    2. summary of steps (flat list of the steps)
    3. Step 1.  user finds a relevant item
    4. Step 2.  user subscribes to view
    5. Step 3.  user displays view in RSS reader

    What originally took a week to prototype, we could now mock up in an hour, if not minutes.  Do-overs were no longer a problem.  In fact, mocking up alternate solutions was a breeze.  The beauty was that we could keep our release rhythm of shipping every two weeks while we explored solution paths in the background, with less disruption to the dev team.

    The other beauty was that we could use the same deck to walk through with both customers and the dev team.  The customers would bang on the user experience.  The developers would bang on the technical feasibility.  For example, show a catalog to customers and they would evaluate the best way to browse and filter.  Show the same screen to the devs and they would evaluate the performance of the catalog.  We would also brainstorm the "what-ifs," such as how the catalog will perform when there's a billion items in it ... etc.  We got better at teasing out the key risks before we hit them.

    Building the software became more an exercise of instantiating the user experience versus leaving too much to be made up on the fly.

    To "be the software", it's as simple as letting the user walk through the user experience of performing that task (via the slides), and, as John put it, "you be the software ... you simply state how the software would respond."   If slides are too heavy, draw on paper or use a whiteboard.  The outcome is the user gets a good sense of what it's like to use your solution, while you get a sense of the user's more specific needs.  The interactive approach produces way more benefits than a simple spec review or 1/2-baked prototype.

    MyLifeBits vs. Mental Snapshots

    Yesterday's snowfall in Redmond was interesting for me.  During my drive home, it was pretty dark, icy and cold.  As I came up Old Redmond Road, I saw an object coming towards me, moving somewhat erratically, that looked too small to be a car.

    It wasn't a car at all.  It was a cross-country skier making his way down the middle of the road, followed by a trail of cars.  I'm not sure at what point the street looked like good skiing and I don't know if he had an exit strategy, but he did seem to be having fun and going the speed limit.

    I had my digital camera with me, but I forgot to use it.  I was more focused on skating my car down the right side of the road.  By the time I got home, I had a bunch of "mental snapshots" of various scenes along the way home, but nothing in hand to share. 

    That got me thinking of the MyLifeBits project.  MyLifeBits is effectively software for "lifelogging," or archiving your life on disk.  Although it's a bit extreme for me, there are times when I wish I automatically had more than just the mental snapshots.

    Test-Driven Guidance

    When I last met with Rob Caron to walk him through Guidance Explorer, one of the concepts that piqued his interest was test cases for content.  He suggested I blog it, since it's not common practice and could benefit others.  I agreed.

    If you're an author or a reviewer, this technique may help you.  You can create explicit test cases for the content.  Simply put, these are the "tests for success" for a given piece of content.  Here's an example of a few test cases for a guideline:

    Test Cases for Guidelines

    Title

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?

    Applies To

    • Do you list technology and version? (e.g. ASP.NET 2.0)

    What to Do

    • Do you state the action to take?
    • Do you avoid stating more than the action to take?

    Why

    • Do you provide enough information for the user to make a decision?
    • Do you state the negative consequences of not following this guideline?

    When

    • Do you state when the guideline is applicable?
    • Do you state when not to use this guideline?

    How

    • Do you state enough information to take action?
    • Do you provide explicit steps that are repeatable?

    Problem Example

    • Do you show a real world example of the problem from experience?
    • If there are variations of the problem, do you show the most common?
    • If this is an implementation guideline, do you show code?

    Solution Example

    • Does the example show the resulting solution if the problem example is fixed?
    • If this is a design guideline is the example illustrated with images and text?
    • If this is an implementation guideline is the example in code?

    Additional Resources

    • Are the links from trusted sites?
    • Are the links correct in context of the guideline?

    Related Items

    • Are the correct items linked in the context of the guideline?

    Additional Tests to Consider When Writing a Guideline

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?
    • If the item is a MUST, meaning it is prevalent and high impact, is Priority = p1?
    • If the item is a SHOULD, meaning it has less impact or is only applicable in narrower circumstances, is Priority = p2?
    • If the item is a COULD, meaning it is nice to know about but isn't highly prevalent or impactful, is Priority = p3?
    • If this item will have cascading impact on application design, is Type = Design?
    • If this item should be followed just before deployment, or is concerned with configuration details or runtime behavior, is Type = Deployment?
    • If this item is still in progress or not fully reviewed, is Status = Beta?

    Benefits to Authors and Reviewers
    The test cases serve as checkpoints that help both authors and reviewers produce more effective guidance.  While you probably implicitly ask many of these questions, making them explicit turns them into a repeatable practice for yourself or others.  I've found questions to be the best encapsulation of the tests because they set the right frame of mind.  If you're an author, you can start writing guidance by addressing the questions.  If you're a reviewer, you can efficiently check for the most critical pieces of information.  How much developer guidance exists that does not answer the why or the when?  Too much.  As I sift through the guidance I've produced over the years, I can't believe how many times I've missed making the why or when explicit.

    I'm a fan of the test-driven approach to guidance, and here are my top reasons why:

    • I can tune the guidance across a team.  As I see patterns of problems in the quality, I can weed them out by making an explicit test case.
    • I can tailor test cases based on usage scenarios.  For example, in order to use our checklist items for tooling scenarios, our problem and solution examples need to have certain traits.  I can burn this into the test cases.
    • I can bound the information.  When is it done and what does "good enough" look like?  The test case sets a bar for the information.
    • I can improve the precision and accuracy of the information.  By precision, I mean filtering out everything that's not relevant.  When it comes to technical information to do my job, I'm a fan of density (lots of useful information per square inch of text).  Verbosity is for story time.

    Examples of Test Cases for Guidance
    I've posted examples of our test cases for guidance on Channel 9.

    238 New Items in Guidance Explorer

    Today we published 238 new guidance items in Guidance Explorer.  If you use the offline client, it should automatically synchronize to our online store.

    We're in the process of performing a guidance sweep.  The approach to the sweep is twofold:
    1.  Make existing guidance available in Guidance Explorer.
    2.  Identify user experience issues with the information models and tool design.

    Benefits in GE
    Making existing guidance available in Guidance Explorer involves refactoring existing security guidance and performance guidance.  The benefits of having the guidance available in Guidance Explorer include:

    • you can view across topics (for example, you can see across the security and the performance guidance)
    • you can filter down to exactly the guidance items you need for a given scenario or task
    • you can build multiple custom views based on how you need to use the guidance
    • you can build guides on the fly (you can save a view as a Word doc or HTML files for example)
    • you can tailor the guidance to your scenario (e.g. save an item into your library in GE and edit the guidance to your liking)
    • you can supplement the guidance for your scenario (because GE is also an authoring environment, you can write your own guidance)

    How We Improve Our Guidance
    An underlying strategy in GE was to help users quickly hunt and gather relevant items, rather than try to guess your context and what you need.  In other words, it's a tool to help smart people versus a smart tool that might get in your way.  This was actually an important decision, because we had to pick a problem we knew we could directly help solve and add value to.

    The feedback from customers on existing guidance was that it was great stuff, but there were 3 key problems:
    1.  it's a copy+paste exercise to grab just the guidance you need
    2.  it's not atomic enough (monoliths over bite-sized chunks)
    3.  many of the items, while they read well, were not actionable enough

    That's why we took the following measures with our guidance:

    • split the guidelines and checklists into individual items (we chunked the guidance into units of action)
    • cleaned up our templates for the various guidance types (we gave the chunked items a common look and feel)
    • made the schema explicitly include answers to "why" and "how," as well as problem examples and solution examples (we made the chunks more actionable and verifiable)

    As we port existing guidance to our updated schemas, we often find guidance items lacking key information such as why or how, or example code.

    Guidance Explorer in Practice
    What's been great so far is that some folks in the field have let me know how they've been using it for customer engagements.  Apparently the ability to customize guidance has resonated very well.  One consultant in particular has used Guidance Explorer on several engagements to save time and effort.  He uses GE as a general-purpose rules and guidelines store.  He's also tailored guidelines and checklists for different audience levels (executives, development leads, architects, developers, PMs) and for different activities (design reviews, code reviews, and deployment reviews).

    A few customers have let me know they are using the UNC share scenario to create guidance libraries for their team development.  They told me they like the idea that it's like a simple typed wiki that you can act on.  The fact that they can create views and print docs from the library has been the main appeal.

    The other benefit that more customers are appreciating is the templates for guidelines and checklists.  They like the fact that it starts to simplify authoring as well as sharing prescriptive guidance.  Anybody who has authored guidelines or checklists knows that it's challenging to write actionable guidance that can be reused.  What we're sharing in Guidance Explorer is the benefit of experience and lessons learned over years of producing reusable guidance for various audiences.

    R&D Project
    As a reminder, and to keep things in perspective, Guidance Explorer is an R&D project.  While there are immediately tangible benefits, the real focus is on the learnings around user experience, so that patterns & practices can improve its ability to author and share guidance, and make progress on helping debottleneck the creation of prescriptive guidance for the software industry.

    Feedback
    You can send feedback on GE directly to the team at getool@microsoft.com.
