J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

February 2007

  • J.D. Meier's Blog

    My Personal Approach for Daily Results

    • 16 Comments

    I'm dedicating this post to anybody who's faced with task saturation, or needs some new ideas on managing their days or weeks... 

    One of the most important techniques I share with those I mentor is how to manage To Dos.  It's too easy to experience churn or task saturation.  It's also too easy to confuse activities with outcomes.  At Microsoft, I have to get a lot done, I have to know what's important vs. what's urgent, and I have to get results.

    My approach is effective and efficient for me.  I think it's effective because it's simple and it's a system, rather than a silver bullet.  Here's my approach in a nutshell:

    1. Monday Vision.
    2. Daily Outcomes.
    3. Friday Reflection.

    Monday Vision
    Monday Vision is simply a practice where each Monday, I identify the most important outcomes for the week.  This lets me work backwards from the end in mind.  I focus on outcomes, not activities.  I ask questions such as, "if this were Friday, what would I feel good about having accomplished?" ... "if this were Friday, what would suck most if it wasn't done?" ... etc.  I also use some questions from Flawless Execution.

    Daily Outcomes
    Daily Outcomes is where each day, I make a short To Do list.  I title it by date (i.e. 02-03-07).  I start by listing my MUST items.  Next, I list my SHOULD and COULD items.  I use this list throughout the day, as I fish my various streams for action.  My streams include meetings, email, conversations, or bursts of brilliance throughout the day.  Since I do this at the start of my day, I have a good sense of priorities.  This helps me deal with potentially randomizing scenarios.  It also helps me batch my work.  For example, if I know there's a bunch of folks I need to talk to in my building, I can walk the halls efficiently rather than have email dialogues with them.  On the other hand, if there's a lot of folks I need to email, I can batch that as well.
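
    To make the format concrete, here's a minimal sketch of a Daily Outcomes list as a tiny script.  The date title and the MUST/SHOULD/COULD buckets mirror the practice above; the helper name and the sample items are made up.

        from datetime import date

        # Hypothetical helper: render today's Daily Outcomes list as plain text.
        # The MUST/SHOULD/COULD buckets mirror the practice described above.
        def daily_outcomes(must, should, could):
            title = date.today().strftime("%m-%d-%y")  # e.g. 02-03-07
            lines = [title, ""]
            for label, items in (("MUST", must), ("SHOULD", should), ("COULD", could)):
                lines += [f"{label}:"] + [f"  - {item}" for item in items]
            return "\n".join(lines)

        print(daily_outcomes(
            must=["Ship the guidance CTP"],
            should=["Walk the halls to batch my in-building conversations"],
            could=["Draft a blog post"],
        ))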

    Friday Reflection
    Friday Reflection is a practice where I evaluate what I got done or didn't, and why.  Because I have a flat list of chunked up To Do lists by day, it's very easy to review a week's worth and see patterns for improvement.  It's actually easy for me to do this for months as well.  Trends stand out.  Analyzing is easy, particularly with continuous weekly practice.  My learnings feed into Monday Vision.

    It Works for Teams Too
    Well, that's my personal results framework, but it works for my teams too.  On Mondays, I ask my teams what they'd like to get done, as well as what MUST get done.  I try to make sure my team enjoys the rhythm of their results.  Then each day, in our daily 10-minute calls, we reset MUSTs, SHOULDs, and COULDs.  On Fridays, I do a team-based Lessons Learned exercise (I send an email where we reply all with lessons we each personally learned).

    Why This Approach Works for Me ...

    • It's self-correcting and I can course correct throughout the week.
    • I don't keep noise in my head (the buzz of all the little MUSTs, SHOULDs, COULDs that could float around)
    • Unimportant items slough off (I just don't carry them forward -- if they're important, I'll rehydrate them when needed)
    • I manage small and simple lists -- never a big bloated list.
    • It's not technology bound.  When I'm not at my desk, pen and paper work fine.
    • Keeping my working set small lets me prioritize faster or course correct as needed.
    • It's a system with simple habits and practices.  It's a system of constantly checkpointing course, allowing for course correction, and integrating lessons learned.
    • My next actions are immediate and obvious in relation to SHOULDs and COULDs.

    Why Some Approaches I've Tried Don't ....

    • They were too complex or too weird.
    • They ended up in monolithic lists or complicated slicing and dicing to get simple views for next actions.
    • I got lost in activity instead of driving by outcome.
    • They didn't account for the human side.
    • Keeping the list or lists up to date and managing status was often more work than some of the actual items.
    • Stuff that should slough off wouldn't, and would have a snowball effect, ultimately making the approach unwieldy.

    I've been using this approach now for many months.  I've simplified it as I've shown others over time.  While I learn every day, I particularly enjoy my Friday Reflections.  I've also found a new enjoyment in Mondays because I'm designing my days and driving my weeks.


  • J.D. Meier's Blog

    Using Scannable Outcomes with My Results Approach

    • 15 Comments

    Some readers asked to hear more on how I use my Scannable Outcome Lists in conjunction with My Personal Approach for Daily Results.  Here's the workflow in a nutshell ...

    Mondays
    On Mondays, I figure out my key outcomes for the week.  To do this:

    • I remind myself what I learned from last Friday's reflections.
    • I scan my calendar
    • I scan my inbox for new information
    • I scan my Scannable Outcome List for each category

    I keep my inbox completely empty, so the only items are what came in over the weekend.  The empty inbox is particularly important for me.  I get ~150 mails directly to me each day, and I send about that many, so I can't be a paper shuffler.  For my Scannable Outcome Lists, I use a flat list of posts in Outlook.  I name each post according to category: Body, Career, Mind, Project X, Project Y ... etc.

    As I scan, I use four guiding questions:

    1. What must be done? ... what should be done? ... what could be done?
    2. What customer value am I delivering? (I measure in value delivered vs. activity performed)
    3. How am I improving myself in key areas: career, mind, body, financial, relationships?
    4. What are the things that if I don't get done ... I'm screwed?  (By using the principle of contrast, I paint a picture of where I don't want to be.)

    As I scan, I also do some quick shuffling.

    I get a few outcomes from this:

    • Most importantly, I have a mental picture for the week's outcomes (notice outcomes vs. activity)
    • I know my big risks for the week
    • I know my MUSTs vs. SHOULDs vs. COULDs
    • I have my list of outcomes for the day -- my Daily Outcomes.

    I have weekly iteration meetings with my team on Mondays, so this information helps me shape the outcomes with my team.

    Daily
    Each day, I construct my Daily Outcomes list.  Since I did the bulk of the work on Monday for identifying key priorities, this is a fast exercise.  In fact, it's usually 5 minutes.  It's as fast as it takes me to open a new post in Outlook, name it the current day (e.g. 02-25-07), and write the key outcomes down.  Throughout the day, I add to this.  I fish my email stream throughout the day for relevant actions and add these to the current day's Daily Outcomes.  If it's a longer-term outcome, I list it under the relevant Scannable Outcome List.

    Fridays
    This is the day where I do more reflection.  To do this:

    • I scan my Daily Outcomes for the past week.  (This is fast because, for each day, I have a single post named by date.  For example: 02-19-07, 02-20-07, 02-21-07, 02-22-07, 02-23-07)   
    • I scan accomplishments
    • I scan my backlog

    As I scan, I ask some guiding questions:

    • If something's not getting done, then why not? ... Is there a habit or practice I need to change for efficiency or effectiveness?
    • Do I need to change my approach for myself or the team?
    • What key lessons learned need to carry forward?
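
    One reason the flat, date-named posts make the Friday scan fast is that a week is trivially enumerable.  Here's a sketch of the idea, assuming the daily lists were saved as date-named text files in one flat folder (the folder name, extension, and "[done]" marker are all assumptions, not my actual Outlook setup):

        import glob
        import os

        # Assumption: each Daily Outcomes list is a text file named by date
        # (e.g. 02-19-07.txt) in a single flat folder, with finished items
        # marked "[done]".
        def scan_week(folder):
            for path in sorted(glob.glob(os.path.join(folder, "*.txt"))):
                day = os.path.splitext(os.path.basename(path))[0]
                with open(path) as f:
                    done = [line for line in f if line.strip().endswith("[done]")]
                print(f"{day}: {len(done)} items marked done")

        scan_week("daily_outcomes")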

    I'll note that underlying my approach is my belief that important things should float to the top, less important things should slough off, and I should be able to deal with change.  Having my Scannable Outcomes keeps me grounded in what's important vs. urgent.  This, to me, is the key to driving versus reacting.  If an area is slipping that I want to improve, I narrow my focus and concentrate on that.  There are few problems that withstand sustained focus.

    Well, that's the heart of the approach.  What I like most about this approach is that it's low-overhead and it works.  I've done away with over-engineered approaches, where you die the death of a thousand paper cuts in administration.  I also like this approach because it's systematic, yet holistic and flexible.  Basically, it's designed for getting real results, in real life.

  • J.D. Meier's Blog

    MSF Agile Persona Template

    • 1 Comment

    I was looking for examples of persona templates, and I came across Personas: Moving Beyond Role-Based Requirements Engineering by Randy Miller and Laurie Williams.  I found it to be insightful and practical.  I also like the fact they included a snapshot of a persona template example from MSF Agile ...

    MSF Agile Persona Template

    • Name - Enter a respectful, fictitious name for the persona.
    • Status and Trust Level - Favored or disfavored and level of credentials.
    • Role - Place the user group in which the persona belongs.
    • Demographics - Age and personal details (optional).
    • Knowledge, skills and abilities - Group real but generalized information about the capabilities of the persona.
    • Goals, motives, and concerns - Describe the real needs of the users in the user group represented by the persona. If multiple groupings exist, write a persona for each grouping.
    • Usage Patterns - Write the frequency and usage patterns of the system by the persona. Develop a detailed understanding of what functions would be most used. Look for any challenges that the system must help the persona overcome. Note the learning and interaction style if the system is new. Does the persona explore the system to find new functionality or need guidance? Keep this area brief but accurate.
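
    If you wanted to keep personas alongside your specs in source control, the template translates naturally into a small data structure.  Here's a sketch; the field names mirror the template above, and the example values paraphrase the Art persona sample from a later post on this page.

        from dataclasses import dataclass

        # Fields mirror the MSF Agile persona template described above.
        @dataclass
        class Persona:
            name: str                    # respectful, fictitious name
            status_and_trust: str        # favored/disfavored, level of credentials
            role: str                    # user group the persona belongs to
            demographics: str            # age and personal details (optional)
            knowledge_skills: str        # real but generalized capabilities
            goals_motives_concerns: str  # real needs of the represented users
            usage_patterns: str          # frequency, most-used functions, learning style

        art = Persona(
            name="Art",
            status_and_trust="Favored; highly credentialed",
            role="Enterprise Architect",
            demographics="(optional)",
            knowledge_skills="Programming/platforms expert",
            goals_motives_concerns="Influences technology direction; defines standards",
            usage_patterns="Intermittent; oversees the design of several applications",
        )
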
  • J.D. Meier's Blog

    How patterns and practices Does Source Control with Team Foundation Server (TFS)

    • 8 Comments

    We've used TFS for more than a year, so it's interesting to see what we're doing in practice.  If you looked at our source control, you'd see something like the following:

    + Project X version 1
    + Project X version 2
    - Project Y
        |----Branches
        |----Releases
        |----Spikes
        |----TeamStuff
        |----Trunk
            |----Build
            |----Docs
            |----Keys
            |----Source
            |----Tools

    While there are some variations across projects, the main practices are:

    • Major versions get their own team project (Project X version 1, Project X version 2 ...)
    • The "Trunk" folder contains the main source tree.
    • Spikes go in a separate "Spikes" folder -- not for shipping, but are kept with the project.
    • QFEs use "Branches".  Branches are effectively self-contained snapshots of code.
    • Bug fixes go in the main "Source" under the "Trunk" folder.
    • If we're shipping a CTP (Customer Tech Preview) next week but have a good build this week, we create a "shelveset" for the CTP.
    • Within each release, we have a number of CTP releases; we "Label" those within the structure so we can go back and find any of them at any point in time.
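
    To make the layout concrete, here's a rough sketch that stamps out the folder skeleton shown above for a new team project.  It's illustrative only (a plain directory tree, not a tf.exe workflow):

        import os

        # Folder skeleton mirroring the team-project layout shown above.
        LAYOUT = {
            "Branches": [],
            "Releases": [],
            "Spikes": [],
            "TeamStuff": [],
            "Trunk": ["Build", "Docs", "Keys", "Source", "Tools"],
        }

        def create_project(root):
            for top, children in LAYOUT.items():
                for path in [os.path.join(root, top)] + [
                    os.path.join(root, top, child) for child in children
                ]:
                    os.makedirs(path, exist_ok=True)

        create_project("Project Y")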

    I'll be taking a look at how customers and different groups in Microsoft have been using TFS for Source Control.  If you have some practices you'd like to share, I'd like to hear them.  Comment here or send mail to VSGuide@microsoft.com.

  • J.D. Meier's Blog

    Scannable Outcome Lists

    • 13 Comments

    I realized another key for helping manage To Dos.  It's having scannable lists of outcomes.  I keep flat lists of outcomes chunked by area or project.  These aren't the next actions.  They're the results I want to accomplish.  They act as prompts to help me quickly identify next actions.

    I keep lists for all my various areas for outcomes:

    • Continuous Improvement: mind, body, career, relationships, financial
    • Projects
    • Ideas
    • Goals and commitments
    • Recurring items (such as backup, status reports)
    • Habits or practices I'm developing
    • Training
    • Information sources (places or people that I routinely browse or pull information from)

    In a single view, I can first scan all of my areas.  I can then quickly scan any particular area for outcomes.  What I like about this approach is that I get a bird's-eye view of all the areas that I'm working on.  Because I like to focus on a given area for results, I could easily neglect other areas.  This approach keeps important things on my radar and helps keep me balanced.
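
    As a sketch of the data shape -- a flat list of outcomes per area, outcomes rather than next actions -- it looks something like the following.  The categories come from the list above; the example outcomes are invented.

        # Flat lists of outcomes, chunked by area -- outcomes, not next actions.
        outcomes = {
            "Mind": ["Finish Managing the Design Factory"],
            "Body": ["Run three mornings a week"],
            "Project X": ["Ship the source control guidance"],
            "Recurring": ["Weekly backup", "Status report"],
        }

        # The bird's-eye scan: first all the areas, then any one area's outcomes.
        for area in outcomes:
            print(area)
        print()
        for outcome in outcomes["Project X"]:
            print("-", outcome)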

    I use my scannable outcome lists in conjunction with my personal approach for daily results.


  • J.D. Meier's Blog

    World Class Testing

    • 2 Comments

    I've highlighted my take-aways from the "World Class Testing" section of Managing the Design Factory by Donald G. Reinertsen.  It's an insightful book whether you're optimizing your product line or designing a new product.  It's packed with empirical punch, counter-intuitive insights, and practical techniques for re-shaping your engineering results.

    Viewing Testing as an Asset

    • View testing as an asset, not a problem.  If you don't, you'll likely have an under-resourced and under-managed test department.
    • Testing is typically 30 to 60 percent of the overall dev cycle - treat it as a major design activity.
    • The mismatch between theory and the real world is often unexpected - test results have inherently high information content.

    Ways to Optimize Testing

    • Distinguish between design testing and manufacturing testing.  Manufacturing testing is done to identify mistakes in the manufacturing process. 
    • Design testing is done to generate information about the design.
    • Test at the level in the product structure where you can find defects most efficiently.
    • Identify what you're optimizing in your testing - expense, unit cost of impact, performance, or time. 
    • Use economic analysis to help choose what to optimize.
    • To reduce cost, eliminate duplicate testing, test at the most efficient subsystem level, automate testing processes, and avoid overtesting the product.
    • Avoid overtesting by branching test plans.  If the product fails certain tests, follow different paths.  Don't blindly run the remaining tests when they no longer have significance.
    • To reduce the unit cost impact of testing, you can eliminate product features that exist only to make the product easier to test, and use testing as a tool to fine tune product costs.  Sometimes a design improved through testing is cheaper than a do-it-right-the-first-time design.
    • To optimize product performance, you can either increase your test coverage or enhance the validity of your tests.  When you increase test coverage, it's more practical to test probable applied use vs. all the possible permutations (which is usually statistically impossible or inefficient).  To improve your test validity, generate the same types of failures in your labs that you see in the field.
    • To decrease testing time, increase the amount of parallel testing, use reliability prediction, decrease testing batch sizes, or monitor and manage queues.  To use reliability prediction, begin downstream activities when you can predict that you will likely achieve your targeted reliability for the product.
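
    The "test at the most efficient level" and "use economic analysis" points can be made concrete with a toy calculation: pick the level in the product structure with the lowest cost per defect found.  All numbers below are invented for illustration.

        # Toy economic analysis: choose the test level with the lowest cost
        # per defect found. All numbers are invented for illustration.
        levels = {
            "unit":      {"cost_per_run": 1,   "defects_found": 5},
            "subsystem": {"cost_per_run": 20,  "defects_found": 40},
            "system":    {"cost_per_run": 500, "defects_found": 60},
        }

        def cost_per_defect(name):
            d = levels[name]
            return d["cost_per_run"] / d["defects_found"]

        for name in levels:
            print(f"{name}: {cost_per_defect(name):.2f} cost units per defect")

        print("Most efficient level:", min(levels, key=cost_per_defect))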

    What I like about this particular book is that it doesn't prescribe a one-size-fits-all approach.  Instead, you get a pick list of options and strategies, depending on what you're trying to optimize.  It's effectively a choose-your-own-adventure book for product developers.

  • J.D. Meier's Blog

    patterns and practices Visual Studio Team System Guidance Now Available

    • 6 Comments

    This is the first release of our Visual Studio Team System Guidance.  This project is a collaborative effort with VSTS team members, customers, the field, and industry experts.

    What you'll see so far is Practices at a Glance and Questions and Answers as we tackle source control / versioning.  They are designed to give you a quick path through the lessons learned and emerging practices.  These are works in progress.  As we learn, we update.  What you'll also see is a section that includes artifacts we create to help us build the guidance.  You'll also see a Team System Resources Index, which lists the various public resources we use.  One of the first steps we took to ramp up our team was to gather and catalog the available resources.  There are a lot!

    Once we tackle source control, we'll be moving through other high priority areas, such as build, work items, and reporting.   We're evaluating customer success using scenarios, organized as Scenario Frames.  Here's our emerging Source Control Scenario Frame. The working plan at this point is to build community around our Visual Studio Team System Guidance, while we tune and prune our guidance.  We have a fast publishing path in CodePlex and we can experiment with usability and various guidance form factors.  As the guidance matures, we can make key guidance broadly available in the MSDN library.

    Our team includes previous members from security and performance guidance: Alex Mackman, Jason Taylor and Prashant Bansode.  We're working closely with Jeff Beehler, Rob Caron, Graham Barry, and Bijan Javidi on the VSTS side to bring you the guidance you need.  Send your feedback, fan mail and hate mail to VSGuide@microsoft.com.

  • J.D. Meier's Blog

    Task-Analysis Grid for Communicating Product Design

    • 1 Comment

    How do you communicate design decisions? … Srinath sent me a helpful link on the Task-Analysis Grid.  A Task-Analysis Grid is effectively columns of scenarios along with the sub-tasks needed to complete each task.

    Here are the key points:

    • The columns are organized by Before, After, and Future
    • The sub-tasks are prioritized and color coded.

    In practice, I think the Task-Analysis Grid is useful for communicating user experiences and high-level product design.  For driving engineering and project management, I use Scenario Grids.  Scenario Grids are useful for figuring out baseline releases, incremental releases, dependencies, as well as doing scenario-based evaluations and competitive assessments.  I'll post more on building Scenario Grids another day.

  • J.D. Meier's Blog

    2 Key Process Pitfalls

    • 1 Comment

    If I had to pick two easily corrected issues I see show up time and again, I'd pick:

    1. Optimizing the wrong thing.
    2. Optimizing your process, when what you need is a different process.

    Two questions I think help:

    1. "What do you want to optimize?" (time? money? resource utilization? impact? innovation?)
    2. "Is your ladder up against the right wall?" (equivalent to barking up the wrong tree)

    A few well-timed and well-placed questions go a long way.

  • J.D. Meier's Blog

    Actors, Personas, and Roles

    • 4 Comments

    In user modeling, I usually come across actors, personas, and roles (user roles).  I thought it would be helpful to distinguish these so that I can use the right tool for the job, or at least understand their relationships, strengths and weaknesses.

    Summary
    Actor

    • Defined - someone or something that "acts" on or with the system.
    • Sample - customer, fulfillment, credit approval.

    Roles

    • Defined - a set of needs, behaviors, and expectations.
    • Use - develop an initial task model.
    • Keys - three Cs of context, characteristics, and criteria (i.e. overall responsibilities, patterns of interaction, and design objectives)
    • Sample - regular buyer, incidental buyer, casual browser.

    Personas

    • Defined - fictitious characters that represent user types.
    • Use - create familiar faces to help inspire and focus project teams.
    • Sample - The Enterprise Architect is Art, a Programming/Platforms Expert who influences the technology direction, defines technology standards, and oversees (from a technology perspective, not a managerial perspective) the design of several applications in his business unit.

    Analysis
    Personas

    • Pros - They can encourage empathy among technology-focused designers and developers. 
    • Cons - They can encourage projection and overly concrete thinking.  They may not be truly representative.  It can be tricky for the engineer or designer to figure out what matters and what doesn't.

    Roles

    • Pros - they abstract the user roles efficiently and make them easy to work with.
    • Cons - they can be abstract to the point where you lose empathy.

    They All Have Their Place
    At the end of the day, actors, roles and personas have their place.  I like the example from forUse: The Electronic Newsletter of Usage-Centered Design
    #15, August 2001: http://www.foruse.com/newsletter/foruse15.htm

    "For a simplified example, a business-to-business e-commerce application might be modeled with three actors: Customer, Fulfillment, and Credit Approval. The Customer might be differentiated into several roles: Regular Buying Role, Incidental Buying Role, and Casual Browsing Role. The latter might be described as: not necessarily in the industry and buying may not be sole or primary responsibility (CONTEXT), typically intermittent and unpredictable use, often merely for information regarding varied lines and products, driven by curiosity as much as need (CHARACTERISTICS); may need enticements to become customer, linkage to others from same account, access to retail sources and pricing (CRITERIA)."

    What to Do
    In practice, I've found the following guidance helpful: Create your full range of user roles, then create the personas for selected roles.


  • J.D. Meier's Blog

    It's Between Your Ears

    • 4 Comments
    A friend of mine told me a story the other day.  I liked his reminder that your job satisfaction is more about your perspective than the job.
     
    The story goes like this.  As he was walking to his jet, on a picture-perfect day, he thought to himself, how boring ... one more routine solo flight.  Then it hit him.  He's doing a job that other people only dream of.  He realized that day, and ever since, that it's not your job that determines what you enjoy ... it's what's between your ears.
  • J.D. Meier's Blog

    patterns and practices Performance Testing Guidance

    • 4 Comments

    We have a new Performance Testing Guidance project in progress.  Our team includes some of the original members from Improving .NET Application Performance as well as some new faces.  We're tackling various flavors of performance testing (stress, load, capacity) as well as how to bake performance testing into your life cycle.  We're also tackling how to use Visual Studio 2005 for effective performance testing.

    We're building our performance testing BOG (Body of Guidance) in three parts:

    1. Methodologies 
    2. Techniques
    3. Visual Studio 2005 Performance Testing Guidance

    This factoring lets us create both a focused set of timeless performance testing practices, as well as a set of very specific practices for getting the most out of the tool.  You can pick the modules you need for your tasks at hand.
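
    As a minimal illustration of what the "load" flavor boils down to mechanically -- fire concurrent requests, record latencies, report percentiles -- here's a sketch.  The operation under test and all the numbers are placeholders, not anything from the guidance itself.

        import statistics
        import time
        from concurrent.futures import ThreadPoolExecutor

        # Placeholder for the operation under test (e.g. an HTTP request).
        def operation():
            time.sleep(0.01)

        def load_test(concurrency=10, requests=100):
            def timed(_):
                start = time.perf_counter()
                operation()
                return time.perf_counter() - start
            with ThreadPoolExecutor(max_workers=concurrency) as pool:
                latencies = sorted(pool.map(timed, range(requests)))
            print("median:", statistics.median(latencies))
            print("p95:", latencies[int(0.95 * len(latencies)) - 1])

        load_test()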

  • J.D. Meier's Blog

    Personas at patterns and practices

    • 4 Comments

    At patterns & practices, we introduced personas a few years back to help design user experience for our deliverables.  Personas helped with a few things:

    • Understanding demographics.
    • Building empathy by putting a face behind the user role.
    • Building a common set of customer examples we can all talk about in meetings. (...Is this for "Art", "Bert", "Mort" or "Elvis"?)

    I think of a persona as a specific (yet generalized) instance of a role, to "personify" and represent what users who play that role might be like.  While we originally argued over the details of the personas, a great by-product was that we focused on the distinctions across our various customer sets.  This helped reduce ambiguity during product design.  It also helped us make calls on where to put our resources and effort.

    One important lesson we learned was that personas weren't as reusable across groups as they originally seemed they might be.  In other words, we couldn't just grab a set of personas from another group, and call them our own.  Instead, it meant time and effort to build a set that had specific meaning for our group in the context of what we build.  While our naming overlapped with other groups, we had our own set of reference examples.

    Here are the core personas we originally used:

    • ART (ARCHITECT / PLATFORMS AND PROGRAMMING EXPERT)
    • BERT (DEV LEAD / CORE COMPONENT DEVELOPER)
    • MORT (DEVELOPER)

    Here are the additional personas we used:

    • ELVIS - THE PRAGMATIC PROGRAMMER
    • ISAAC - BUSINESS APPLICATION DEVELOPER
    • SIMON - SYSTEM IMPLEMENTER 

    For sharing the persona information, we used a simple template:

    • Persona
    • Background
    • Environment
    • Job Description
    • Attributes
    • User Experience goals
    • Information Sources
    • What does it mean to create a patterns & practices deliverable for this persona?

    While that was a practical set of info for quick sharing, the research behind the personas included:

    • Overview
    • Household and Leisure Activities
    • A Day in the Life
    • Work Activities
    • Communication and Collaboration
    • Skills, Knowledge, and Abilities
    • Goals, Fears and Aspirations
    • Primary Roles
    • Secondary Roles
    • Fears
    • Career Aspirations
    • Computer Skills, Knowledge and Abilities
    • Technology Attitudes and User Experience Values
    • Tools
    • Issues
    • International Considerations
    • Opportunities
    • Market Influence
    • Demographic Attributes
    • References

    Since our earlier days, I think we've shifted from persona-based design to more customer-connected engineering.  We have a lot more direct customer involvement throughout the engineering process.

  • J.D. Meier's Blog

    Avoiding Do Overs - Testing Your Key Engineering Decisions

    • 1 Comment

    I noticed Rico has a Performance Problems Survey.  From what I've seen, the most important problem is failure to test and explore key engineering decisions.  By key engineering decisions, I mean the decisions that have cascading engineering impact.  By testing, I mean doing quick end-to-end tests with minimal code that give you insight into the costs and glass ceilings of different strategies.

    When I was working in our developer labs, I would work with around 50 customers in a week.  I had to quickly find the potential capability killers.  To do so, I had to find the decisions that could easily result in expensive do-overs.  If I took care of the big rocks, the little rocks fell into place.

    Here are the categories that I found to be the most useful for finding key engineering decisions:

    • Authentication
    • Authorization
    • Auditing and Logging
    • Caching
    • Configuration
    • Data Access
    • Debugging
    • Exception Management
    • Input and Data Validation
    • Instrumentation
    • Monitoring
    • State Management

    As you can see, there's a lot of intersection among quality attributes, such as security, performance, and reliability.  One of my favorite, and often overlooked, capabilities is supportability (configurable levels of logging, instrumentation of key scenarios, ... etc.)  This intersection is important.  If you only look at a strategy from one perspective, such as performance, you might miss your security requirements.  On the other hand, sometimes security requirements will constrain your performance options, so this will help you narrow down your set of potential strategies.  In fact, my challenge was usually to help customers build scalable and secure applications.

    By using this set, I could quickly find the most important end-to-end tests.  For example, data access was a great category because I could quickly test strategies for paging records, which could make or break the application.  I could also test the scalability impact of flowing the caller to the database or using a trusted service account to make the calls.
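
    As an illustration of the kind of spike I mean (minimal code, two strategies, one timing), here's a sketch of a record-paging comparison.  SQLite stands in for the real database; the table, sizes, and the two strategies compared are invented for the example.

        import sqlite3
        import time

        # Minimal end-to-end spike: compare OFFSET paging vs. keyset paging.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(i, "x" * 100) for i in range(200_000)])

        def timed(label, sql, args):
            start = time.perf_counter()
            rows = conn.execute(sql, args).fetchall()
            print(f"{label}: {time.perf_counter() - start:.4f}s, {len(rows)} rows")

        # Page deep into the table both ways; the difference informs the design.
        timed("offset", "SELECT * FROM orders ORDER BY id LIMIT 50 OFFSET ?", (190_000,))
        timed("keyset", "SELECT * FROM orders WHERE id > ? ORDER BY id LIMIT 50", (190_000,))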

    To contrast this approach of end-to-end design tests with typical prototyping: it wasn't about making a feature work or showing user experiences.  These architectural proofs/spikes were for evaluating alternative strategies and litmus testing capabilities to avoid or limit downstream surprises and do-overs.

  • J.D. Meier's Blog

    4 Questions to Cap Your Day

    • 3 Comments

    At the end of each day, I ask myself the following:

    1. What did I learn?
    2. What did I improve?
    3. What did I enjoy?
    4. What kind act did I do?

    I use these questions to reflect on daily improvements as well as course correct.  I also use them to appreciate life's little lessons each day.  It's a simple practice, but it helps make sure I don't slip into life's auto-pilot mode.  What's interesting, too, is that this simple practice can actually raise your happiness thermometer, according to Do You Have What It Takes to Lead a Happier Life?

  • J.D. Meier's Blog

    patterns & practices Enterprise Library Test Guide

    • 1 Comment

    We released the Enterprise Library Test Guide.  I plan on analyzing the sections on security and performance testing.  I do like the focus on testing for compliance with recommended practice over testing for all the permutations of bad input (whitelist over blacklist testing).

  • J.D. Meier's Blog

    What Are You Optimizing

    • 4 Comments

    This is such a fundamental question.  It has an enormous impact on your product design and how you structure your product life cycle. 

    For example, are you optimizing time? ... money? ... impact? ... innovation? ... resource utilization? If you don't answer this question first, it's very easy to pick the wrong hammer for your screws. 

    To figure out what to optimize, I figure out my objectives, figure out my constraints, and look for possible high-ROI paths.  I always want more out of what I do.  The trick is to know when doing more gets you less.  Your objectives keep you grounded along the way.

    What I like about this question is it universally applies to any activity you do, including how you design your day.  Are you optimizing around results, or connecting with people? Are you optimizing for enjoyment along the way or for reward in the end?

  • J.D. Meier's Blog

    Scenario Frames for Guidance

    • 5 Comments

    When I tackle a problem domain, I first frame out the space.  To do this, I list out scenarios and sub-scenarios.  I group the scenarios under categories.  Sometimes categories come first, sometimes scenarios do.  I call the result a Scenario Frame.

    I use Scenario Frames to evaluate platforms, tools, and guidance.  I also use them for product design, innovation, competitive assessments, subject matter expert reviews, architecture and design reviews, and as a way to build shared understanding of a problem space.

    Here's a Scenario Frame example my team is creating to enumerate and evaluate Source Control scenarios in VSTS 2005.
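
    As a hypothetical illustration of the shape (not our actual frame), a Scenario Frame is just categories with scenarios underneath:

        # Hypothetical scenario frame: categories -> scenarios.
        # Illustrative only, not the actual VSTS source control frame.
        scenario_frame = {
            "Branching": [
                "Branch for a QFE",
                "Merge a branch back to the trunk",
            ],
            "Labeling": [
                "Label a CTP build",
                "Retrieve sources for a past label",
            ],
            "Shelving": [
                "Shelve pending changes for a release",
            ],
        }

        for category, scenarios in scenario_frame.items():
            print(category)
            for scenario in scenarios:
                print("  -", scenario)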

    What's your favorite tool for framing out problem spaces?

  • J.D. Meier's Blog

    How I Explain Threat Modeling to Customers

    • 5 Comments

    Here's me trying to explain threat modeling (actually, core modeling) to a customer …

    The core theme of the modeling is this:

    • Define what good looks like (e.g. objectives)
    • Establish boundaries of good (constraints, goals -- what can't happen, what needs to happen, what's nice to happen)
    • Identify tests for success (define criteria ... entry criteria and exit criteria ... how do I know when it's good enough)
    • Model to play 'what if' scenarios before going down long-winded dead ends
    • Identify and prototype the high risk end-to-end engineering decisions (to provide feedback, inform the direction, update the objectives)
    • Use an information model (e.g. the web app security frame -- use 'buckets' to organize both decomposition as well as package up the principles, practices, and patterns) ... another trick here is that the frame encapsulates 'actionable' categories ... you're modeling to inform choices and build on others' knowledge
    • Leverage community knowledge. (The information model/frame also helps leverage community knowledge - you don't have to start from scratch or be a subject matter expert - to speak to the dev, you can use patterns, anti-patterns, code samples)
    • Model just enough to reduce your key risks and make informed decisions (look before you leap)
    • Incrementally render your information (you basically spiral down risk reduction ... you identify what you know and what you need to know next)
    • Use a set of complementary activities over a single silver bullet (use case analysis is complementary to data flow analysis is complementary to subject object matrix ... etc.; threat modeling does not replace security design inspection or code inspection or deployment inspection)
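
    Here's a sketch of what the 'buckets' idea might look like in practice, reusing category names from the engineering-decision list earlier on this page; the questions per bucket are illustrative, not a complete frame.

        # Illustrative information model: 'actionable' buckets that organize
        # decomposition and package up the associated questions.
        frame = {
            "Authentication": [
                "How are credentials stored?",
                "Where do credentials cross the wire?",
            ],
            "Input and Data Validation": [
                "Is input validated at every trust boundary?",
            ],
            "Auditing and Logging": [
                "Which key scenarios are instrumented?",
            ],
        }

        for bucket, questions in frame.items():
            print(bucket)
            for question in questions:
                print("  -", question)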

    This is the approach I use whether it's security or performance or any other quality attribute.  In the case of threat modeling, vulnerabilities are the key.  These go in your bug database and help scope testing.

  • J.D. Meier's Blog

    Visual Studio Team System Technotes

    • 1 Comment
    I found the Visual Studio Team System Technotes while hunting and gathering resources for my team.  I like the Technotes because they are short and focused on a specific concept or task.  I also like the fact that they are written first-hand by many of the product team members.