J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness


    Actors, Personas, and Roles


    In user modeling, I usually come across actors, personas, and roles (user roles).  I thought it would be helpful to distinguish these so that I can use the right tool for the job, or at least understand their relationships, strengths and weaknesses.


    Actors
    • Defined - someone or something that "acts" on or with the system.
    • Sample - customer, fulfillment, credit approval.


    User Roles
    • Defined - a set of needs, behaviors, and expectations.
    • Use - develop an initial task model.
    • Keys - the three Cs of context, characteristics, and criteria (i.e., overall responsibilities, patterns of interaction, and design objectives).
    • Sample - regular buyer, incidental buyer, casual browser.


    Personas
    • Defined - fictitious characters that represent user types.
    • Use - create familiar faces to help inspire and focus project teams.
    • Sample - The Enterprise Architect is Art, a Programming/Platforms Expert who influences the technology direction, defines technology standards, and oversees (from a technology perspective, not a managerial perspective) the design of several applications in his business unit.


    Personas - Pros and Cons
    • Pros - they can encourage empathy among technology-focused designers and developers.
    • Cons - they can encourage projection and overly concrete thinking.  They may not be truly representative.  It can be tricky for the engineer or designer to figure out what matters and what doesn't.


    Roles - Pros and Cons
    • Pros - they abstract the user roles efficiently and make them easy to work with.
    • Cons - they can be abstract to the point where you lose empathy.

    They All Have Their Place
    At the end of the day, actors, roles, and personas all have their place.  I like the example from forUse: The Electronic Newsletter of Usage-Centered Design #15, August 2001 (http://www.foruse.com/newsletter/foruse15.htm):

    "For a simplified example, a business-to-business e-commerce application might be modeled with three actors: Customer, Fulfillment, and Credit Approval. The Customer might be differentiated into several roles: Regular Buying Role, Incidental Buying Role, and Casual Browsing Role. The latter might be described as: not necessarily in the industry and buying may not be sole or primary responsibility (CONTEXT), typically intermittent and unpredictable use, often merely for information regarding varied lines and products, driven by curiosity as much as need (CHARACTERISTICS); may need enticements to become customer, linkage to others from same account, access to retail sources and pricing (CRITERIA)."

    What to Do
    In practice, I've found the following guidance helpful: Create your full range of user roles, then create the personas for selected roles.
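That guidance can be sketched as a small data model: build the full range of roles first, then attach personas only to the roles you select.  The class shapes, role fields, and sample values below are my own illustration, not from the original post:

```python
from dataclasses import dataclass

@dataclass
class UserRole:
    """A role: a set of needs, behaviors, and expectations."""
    name: str
    context: str          # overall responsibilities
    characteristics: str  # patterns of interaction
    criteria: str         # design objectives

@dataclass
class Persona:
    """A fictitious character that personifies a selected role."""
    name: str
    role: UserRole
    background: str = ""

# Step 1: create the full range of user roles.
roles = [
    UserRole("Regular Buyer", "buying is a primary duty",
             "frequent, routine use", "speed of reorder"),
    UserRole("Casual Browser", "buying is not a sole duty",
             "intermittent, curiosity-driven use", "enticements to become a customer"),
]

# Step 2: create personas only for the selected, most important roles.
personas = [Persona("Art", roles[0], "experienced purchasing agent")]
```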

    I've found the following references helpful on this topic.

  • J.D. Meier's Blog

    Personas at patterns and practices


    At patterns & practices, we introduced personas a few years back to help design user experience for our deliverables.  Personas helped with a few things:

    • Understanding demographics.
    • Building empathy by putting a face behind the user role.
    • Building a common set of customer examples we can all talk about in meetings. (...Is this for "Art", "Bert", "Mort", or "Elvis"?)

    I think of a persona as a specific (yet generalized) instance of a role that "personifies" and represents what users who play that role might be like.  While we originally argued over the details of the personas, a great by-product was that we focused on the distinctions across our various customer sets.  This helped reduce ambiguity during product design.  It also helped us make calls on where to put our resources and effort.

    One important lesson we learned was that personas weren't as reusable across groups as they originally seemed they might be.  In other words, we couldn't just grab a set of personas from another group, and call them our own.  Instead, it meant time and effort to build a set that had specific meaning for our group in the context of what we build.  While our naming overlapped with other groups, we had our own set of reference examples.

    Here are the core personas we originally used:


    Here are the additional personas we used:


    For sharing the persona information, we used a simple template:

    • Persona
    • Background
    • Environment
    • Job Description
    • Attributes
    • User Experience goals
    • Information Sources
    • What does it mean to create a patterns & practices deliverable for this persona?

    While that was a practical set of info for quick sharing, the research behind the personas included:

    • Overview
    • Household and Leisure Activities
    • A Day in the Life
    • Work Activities
    • Communication and Collaboration
    • Skills, Knowledge, and Abilities
    • Goals, Fears and Aspirations
    • Primary Roles
    • Secondary Roles
    • Fears
    • Career Aspirations
    • Computer Skills, Knowledge and Abilities
    • Technology Attitudes and User Experience Values
    • Tools
    • Issues
    • International Considerations
    • Opportunities
    • Market Influence
    • Demographic Attributes
    • References

    Since those earlier days, I think we've shifted from persona-based design to more customer-connected engineering.  We have a lot more direct customer involvement throughout the engineering process.


    My Personal Approach for Daily Results


    I'm dedicating this post to anybody who's faced with task saturation, or needs some new ideas on managing their days or weeks... 

    One of the most important techniques I share with those I mentor is how to manage To Dos.  It's too easy to experience churn or task saturation.  It's also too easy to confuse activities with outcomes.  At Microsoft, I have to get a lot done, I have to know what's important vs. what's urgent, and I have to get results.

    My approach is effective and efficient for me.  I think it's effective because it's simple and it's a system, rather than a silver bullet.  Here's my approach in a nutshell:

    1. Monday Vision.
    2. Daily Outcomes.
    3. Friday Reflection.

    Monday Vision
    Monday Vision is simply a practice where, each Monday, I identify the most important outcomes for the week.  This lets me work backwards from the end in mind.  I focus on outcomes, not activities.  I ask questions such as, "If this were Friday, what would I feel good about having accomplished?" ... "If this were Friday, what would suck most if it wasn't done?" ... etc.  I also use some questions from Flawless Execution.

    Daily Outcomes
    Daily Outcomes is where, each day, I make a short To Do list.  I title it by date (i.e. 02-03-07).  I start by listing my MUST items.  Next, I list my SHOULD and COULD items.  I use this list throughout the day, as I fish my various streams for action.  My streams include meetings, email, conversations, or bursts of brilliance throughout the day.  Since I do this at the start of my day, I have a good sense of priorities.  This also helps me deal with potentially randomizing scenarios.  It also helps me batch my work.  For example, if I know there's a bunch of folks I need to talk to in my building, I can walk the halls efficiently rather than have email dialogues with them.  On the other hand, if there's a lot of folks I need to email, I can batch that as well.
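To make the list format concrete, here's a minimal sketch of a dated MUST/SHOULD/COULD list; the helper name and sample items are my own illustration, not part of the original practice:

```python
from datetime import date

def daily_outcomes(musts, shoulds, coulds, day=None):
    """Build a dated To Do list: MUST items first, then SHOULDs, then COULDs."""
    day = day or date.today()
    lines = [day.strftime("%m-%d-%y")]  # title the list by date, e.g. 02-03-07
    for label, items in (("MUST", musts), ("SHOULD", shoulds), ("COULD", coulds)):
        lines += [f"{label}: {item}" for item in items]
    return "\n".join(lines)

print(daily_outcomes(
    musts=["Ship the CTP build"],
    shoulds=["Walk the halls to batch conversations"],
    coulds=["Read Flawless Execution notes"],
    day=date(2007, 2, 3),
))
```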

    Friday Reflection
    Friday Reflection is a practice where I evaluate what I got done or didn't, and why.  Because I have a flat list of chunked-up To Do lists by day, it's very easy to review a week's worth and see patterns for improvement.  It's actually easy for me to do this for months as well.  Trends stand out.  Analyzing is easy, particularly with continuous weekly practice.  My learnings feed into Monday Vision.

    It Works for Teams Too
    Well, that's my personal results framework, but it works for my teams too.  On Mondays, I ask my teams what they'd like to get done, as well as what MUST get done.  I try to make sure my team enjoys the rhythm of their results.  Then each day, in our daily 10-minute calls, we reset MUSTs, SHOULDs, and COULDs.  On Fridays, I do a team-based Lessons Learned exercise (I send an email where we reply-all with lessons we each personally learned).

    Why This Approach Works for Me ...

    • It's self-correcting and I can course correct throughout the week.
    • I don't keep noise in my head (the buzz of all the little MUSTs, SHOULDs, COULDs that could float around)
    • Unimportant items slough off (I just don't carry them forward -- if they're important, I'll rehydrate them when needed)
    • I manage small and simple lists -- never a big bloated list.
    • It's not technology bound.  When I'm not at my desk, pen and paper work fine.
    • Keeping my working set small lets me prioritize faster or course correct as needed.
    • It's a system with simple habits and practices.  It's a system of constantly checkpointing course, allowing for course correction, and integrating lessons learned.
    • My next actions are immediate and obvious, in relation to SHOULDs and COULDs. 

    Why Some Approaches I've Tried Don't ....

    • They were too complex or too weird
    • They ended up in monolithic lists or complicated slicing and dicing to get simple views for next actions.
    • I got lost in activity instead of driving by outcome.
    • They didn't account for the human side.
    • Keeping the list or lists up to date and managing status was often more work than some of the actual items.
    • Stuff that should have sloughed off wouldn't, and would have a snowball effect, ultimately making the approach unwieldy.

    I've been using this approach now for many months.  I've simplified it as I've shown others over time.  While I learn every day, I particularly enjoy my Friday Reflections.  I've also found a new enjoyment in Mondays, because I'm designing my days and driving my weeks.

    My Related Post


    Avoiding Do Overs - Testing Your Key Engineering Decisions


    I noticed Rico has a Performance Problems Survey.  From what I've seen, the most important problem is failure to test and explore key engineering decisions.  By key engineering decisions, I mean the decisions that have cascading engineering impact.  By testing, I mean doing quick end-to-end tests with minimal code that give you insight into the costs and glass ceilings of different strategies.

    When I was working in our developer labs, I would work with around 50 or so customers in a week.  I had to quickly find the potential capability killers.  To do so, I had to find the decisions that could easily result in expensive do-overs.  If I took care of the big rocks, the little rocks fell into place.

    Here are the categories I found most useful for finding key engineering decisions:

    • Authentication
    • Authorization
    • Auditing and Logging
    • Caching
    • Configuration
    • Data Access
    • Debugging
    • Exception Management
    • Input and Data Validation
    • Instrumentation
    • Monitoring
    • State Management

    As you can see, there's a lot of intersection among quality attributes, such as security, performance, and reliability.  One of my favorite, and often overlooked, capabilities is supportability (configurable levels of logging, instrumentation of key scenarios, ... etc.)  This intersection is important.  If you only look at a strategy from one perspective, such as performance, you might miss your security requirements.  On the other hand, sometimes security requirements will constrain your performance options, which will help you narrow down your set of potential strategies.  In fact, my challenge was usually to help customers build scalable and secure applications.

    By using this set, I could quickly find the most important end-to-end tests.  For example, data access was a great category because I could quickly test strategies for paging records, which could make or break the application.  I could also test the scalability impact of flowing the caller to the database or using a trusted service account to make the calls.
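As an illustration of this kind of quick end-to-end test (not the actual lab code), here is a minimal sketch that compares a naive OFFSET paging strategy against keyset paging on a throwaway database; the table, row counts, and function names are all made up for the example:

```python
import sqlite3
import time

# A throwaway database standing in for the real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.executemany("INSERT INTO orders (item) VALUES (?)",
                 [(f"item-{i}",) for i in range(100_000)])

PAGE = 50

def page_by_offset(page_number):
    """Naive strategy: OFFSET scans and discards all earlier rows."""
    return conn.execute(
        "SELECT id, item FROM orders ORDER BY id LIMIT ? OFFSET ?",
        (PAGE, page_number * PAGE)).fetchall()

def page_by_key(last_seen_id):
    """Keyset strategy: seek past the last key; cost stays flat on deep pages."""
    return conn.execute(
        "SELECT id, item FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_seen_id, PAGE)).fetchall()

# Quick end-to-end comparison on a deep page.
for name, call in [("offset", lambda: page_by_offset(1_500)),
                   ("keyset", lambda: page_by_key(75_000))]:
    start = time.perf_counter()
    rows = call()
    print(f"{name}: {len(rows)} rows in {time.perf_counter() - start:.4f}s")
```

The point is not the numbers themselves but that a few dozen lines, written before committing to a design, surface the cost curve of each strategy.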

    To contrast this approach of end to end design tests versus typical prototyping, it wasn't about making a feature work or showing user experiences.  These architectural proofs/spikes were for evaluating alternative strategies and litmus testing capabilities to avoid or limit downstream surprises and do-overs.


    2 Key Process Pitfalls


    If I had to pick two easily corrected issues I see show up time and again, I'd pick:

    1. Optimizing the wrong thing.
    2. Optimizing your process, when what you need is a different process.

    Two questions I think help:

    1. "What do you want to optimize?" (time? money? resource utilization? impact? innovation?)
    2. "Is your ladder up against the right wall?" (equivalent to barking up the wrong tree)

    A few well-timed and well placed questions go a long way.


    World Class Testing


    I've highlighted my take-aways from the "World Class Testing" section of Managing the Design Factory by Donald G. Reinertsen.  It's an insightful book whether you're optimizing your product line or designing a new product.  It's packed with empirical punch, counter-intuition, and practical techniques for re-shaping your engineering results.

    Viewing Testing as an Asset

    • View testing as an asset, not a problem.  If you don't, you'll likely have an under-resourced and undermanaged test department.
    • Testing is typically 30 to 60 percent of the overall dev cycle - treat it as a major design activity.
    • The mismatch between theory and the real world is often unexpected - test results have inherently high information content.

    Ways to Optimize Testing

    • Distinguish between design testing and manufacturing testing.  Manufacturing testing is done to identify mistakes in the manufacturing process. 
    • Design testing is done to generate information about the design.
    • Test at the level in the product structure where you can find defects most efficiently.
    • Identify what you're optimizing in your testing - expense, unit cost of impact, performance, or time. 
    • Use economic analysis to help choose what to optimize.
    • To reduce cost, eliminate duplicate testing, test at the most efficient subsystem level, automate testing processes, and avoid overtesting the product.
    • Avoid overtesting by branching test plans.  If the product fails certain tests, follow different paths.  Don't blindly run tests that no longer have significance.
    • To reduce the unit cost impact of testing, you can eliminate product features that exist only to make the product easier to test, and use testing as a tool to fine-tune product costs.  Sometimes a design improved through testing is cheaper than a do-it-right-the-first-time design.
    • To optimize product performance, you can either increase your test coverage or enhance the validity of your tests.  When you increase test coverage, it's more practical to test probable applied use vs. all the possible permutations (which is usually statistically impossible or inefficient).  To improve your test validity, generate the same types of failures in your labs that you see in the field.
    • To decrease testing time, increase the amount of parallel testing, use reliability prediction, decrease testing batch sizes, or monitor and manage queues.  To use reliability prediction, begin downstream activities when you can predict that you will likely achieve your targeted reliability for the product.
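The "branch your test plans" point above can be sketched as a simple dependency check: when a prerequisite test fails, skip the downstream tests that no longer have significance.  The test names and dependencies here are invented for illustration:

```python
def run_plan(tests, depends_on):
    """Run tests in order, skipping any whose prerequisite didn't pass."""
    results = {}
    for name, test in tests:
        prereq = depends_on.get(name)
        if prereq and results.get(prereq) != "pass":
            results[name] = "skipped"  # no significance once the prereq failed
            continue
        results[name] = "pass" if test() else "fail"
    return results

tests = [
    ("power_on",      lambda: True),
    ("boot_firmware", lambda: False),  # simulated failure
    ("network_stack", lambda: True),   # never run: depends on boot_firmware
    ("throughput",    lambda: True),   # never run: depends on network_stack
]
depends_on = {
    "boot_firmware": "power_on",
    "network_stack": "boot_firmware",
    "throughput": "network_stack",
}

print(run_plan(tests, depends_on))
```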

    What I like about this particular book is that it doesn't prescribe a one-size-fits-all approach.  Instead, you get a pick list of options and strategies, depending on what you're trying to optimize.  It's effectively a choose-your-own-adventure book for product developers.


    How I Explain Threat Modeling to Customers


    Here's me trying to explain threat modeling (actually, core modeling) to a customer …

    My core theme of the modeling is this:

    • Define what good looks like (e.g. objectives)
    • Establish boundaries of good (constraints, goals -- what can't happen, what needs to happen, what's nice to happen)
    • Identify tests for success (define criteria ... entry criteria and exit criteria ... how do I know when it's good enough?)
    • Model to play 'what if' scenarios before going down long-winded dead ends
    • Identify and prototype the high risk end-to-end engineering decisions (to provide feedback, inform the direction, update the objectives)
    • Use an information model (e.g. the web app security frame -- use 'buckets' to organize both decomposition as well as package up the principles, practices, and patterns) ... another trick here is that the frame encapsulates 'actionable' categories ... you're modeling to inform choices and build on others' knowledge
    • Leverage community knowledge. (The information model/frame also helps leverage community knowledge - you don't have to start from scratch or be a subject matter expert - to speak to the dev, you can use patterns, anti-patterns, code samples)
    • Model just enough to reduce your key risks and make informed decisions (look before you leap)
    • Incrementally render your information (you basically spiral down risk reduction ... you identify what you know and what you need to know next)
    • Use a set of complementary activities over a single silver bullet (use case analysis is complementary to data flow analysis is complementary to a subject-object matrix ... etc.; threat modeling does not replace security design inspection or code inspection or deployment inspection)
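The 'buckets' idea above can be sketched as a simple frame: actionable categories that organize both your decomposition and the packaged-up knowledge.  The category names echo the web app security frame from earlier in this post; the bucket fields and the sample threat are my own illustration:

```python
# A frame: actionable categories that organize findings and community knowledge.
security_frame = {
    "Input and Data Validation": {"threats": [], "patterns": [], "checklist": []},
    "Authentication":            {"threats": [], "patterns": [], "checklist": []},
    "Authorization":             {"threats": [], "patterns": [], "checklist": []},
    "Exception Management":      {"threats": [], "patterns": [], "checklist": []},
}

def record_threat(frame, category, threat):
    """Decompose findings into the frame so they map to known countermeasures."""
    frame[category]["threats"].append(threat)

record_threat(security_frame, "Input and Data Validation",
              "SQL injection via unvalidated search box")

# The frame makes it obvious where the risks cluster.
for category, bucket in security_frame.items():
    if bucket["threats"]:
        print(category, "->", bucket["threats"])
```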

    This is the approach I use whether it's security or performance or any other quality attribute.  In the case of threat modeling, vulnerabilities are the key.  These go in your bug database and help scope testing.


    What Are You Optimizing


    This is such a fundamental question.  It has an enormous impact on your product design and how you structure your product life cycle. 

    For example, are you optimizing time? ... money? ... impact? ... innovation? ... resource utilization? If you don't answer this question first, it's very easy to pick the wrong hammer for your screws. 

    A few things help me figure out what to optimize: I figure out my objectives, I figure out my constraints, and I look for my possible high-ROI paths.  I always want more out of what I do.  The trick is to know when doing more gets you less.  Your objectives keep you grounded along the way.

    What I like about this question is it universally applies to any activity you do, including how you design your day.  Are you optimizing around results, or connecting with people? Are you optimizing for enjoyment along the way or for reward in the end?


    How patterns and practices Does Source Control with Team Foundation Server (TFS)


    We've used TFS for more than a year, so it's interesting to see what we're doing in practice.  If you looked at our source control, you'd see something like the following:

    + Project X version 1
    + Project X version 2
    - Project Y

    While there's some variations across projects, the main practices are:

    • Major versions get their own team project (Project X version 1, Project X version 2 ...)
    • The "Trunk" folder contains the main source tree.
    • Spikes go in a separate "Spikes" folder -- not for shipping, but are kept with the project.
    • QFEs use "Branches".  Branches are effectively self-contained snapshots of code.
    • Bug fixes go in the main "Source" under the "Trunk" folder
    • If we're shipping a CTP (Customer Tech Preview) next week but have a good build this week, we create a "shelveset" for the CTP.
    • Within each release we have a number of CTP releases; we "Label" those within the structure so we can go back and find them at any point in time.

    I'll be taking a look at how customers and different groups in Microsoft have been using TFS for source control.  If you have some practices you'd like to share, I'd like to hear them.  Comment here or send mail to VSGuide@microsoft.com.


    Tags for Visual Studio 2005, Team System, and Team Foundation Server


    I'm paving paths through the VSTS body of knowledge for my team.  I like tags and tag clouds.  They quickly tell me what people are thinking and talking about.  Here's a list of tags I put together for my team:


    Code Plex

    MSDN Tags



    Using Live.com for RSS


    Here's a quick set of steps for using Live.com (http://www.Live.com) as your RSS reader.  What I like about it is that I can log in to it from anywhere.  What I like most is that I can create multiple pages to organize my feeds.  This lets me focus my information.

    Here are the steps for creating pages and adding feeds to them (you need to log in to Live.com for these options):

    Adding a New Page

    1. Click Add Page.
    2. Click the drop-down arrow next to the new page you just created.
    3. Click Rename and type the name of your page (for example, VS.NET).

    Adding Feeds

    1. Click Add stuff (upper left)
    2. Click Advanced options
    3. Paste the path of the RSS feed you want (for example, http://blogs.msdn.com/jmeier/rss.xml) next to the Subscribe button.
    4. Click Subscribe.  This will add the feed to your page.
    5. Either add more feeds or click Add stuff again (upper left) to close the options.

    Tip - If I need to search for a feed, I use a separate browser, do my search, and then paste the path next to the Subscribe button.  I don't like the Search for feeds option because I lose my context.

    I like Live.com for my Web-based RSS reading experience.  I use JetBrains Omea Pro for my desktop experience, where I do my heavy processing.  I think of this like my Outlook Web Access and Outlook desktop experiences.  My Web experience is optimized for reach; my desktop experience is optimized for rich.


    VPC with VSTF Single-Server Deployment


    Below is a walkthrough of the steps I took to install VSTF (Visual Studio Team Foundation) on a VPC.  I found a path that worked for me.  I'm providing it as a reference both for myself and for others that might need it.  I ran into issues during my initial installation. These turned out to be problems with my base installation of Windows on my VPC, not VSTF.

    Here are the steps I took:

    1. Downloaded the most current version of the VSTF installation guide from http://go.microsoft.com/fwlink/?LinkId=40042  
    2. Created a VPC and installed Windows Server 2003 with Service Pack 1 (SP1), Enterprise Edition.
    3. Created three local service accounts.  I created the following custom accounts: "tfsetup", "tfserviceaccount", and "tfreportingaccount."  I added "tfsetup" to the local Administrators group.  I added "tfserviceaccount" and "tfreportingaccount" to the IIS_WPG group.
    4. I logged out of Windows and then logged in using my "tfsetup" account.
    5. Installed IIS 6 and selected ASP.NET during the installation.
    6. Rebooted.
    7. Ran Windows Update at http://windowsupdate.microsoft.com.  I ran the Express check.
    8. Installed SQL Server 2005 Enterprise.  On the SQL Server Database Services page, I installed the following components: Analysis Services, Integration Services, Reporting Services.  On the Feature Selection page, under Client Components, I selected to install Management.  On the Service Account page, I selected to use the built-in System account.  I selected Local System.  In Start services at the end of setup, I selected all services: SQL Server, SQL Server Agent, Analysis Services, Reporting Services, and SQL Browser.  On the Authentication Mode screen, I chose Windows Authentication Mode.  On the Report Server Installation Options page, I chose Install default configuration.  On the Error and Usage Report Settings page, I chose "Automatically send Error reports for SQL Server 2005 to Microsoft or your corporate error reporting server" and "Automatically send Feature Usage data for SQL Server 2005 to Microsoft."
    9. Rebooted
    10. Installed Microsoft SQL Server 2005 SP1 (SQLServer2005SP1-KB913090-x86-ENU.exe) from http://go.microsoft.com/fwlink/?LinkId=57414.  SP1 included the hotfix that updates SQL Server Analysis Services to support reporting more efficiently.  When I encountered locked files during the install, I stopped the SQL services using the SQL Server Configuration Manager.
    11. Rebooted.
    12. Installed the .NET Framework 2.0 HotFix.  I installed KB913393 (NDP20-KB913393-X86.exe) from the VSTF installation media.
    13. Installed SharePoint Services with Service Pack 2 (http://go.microsoft.com/fwlink/?linkid=55087).  On the Type of Installation page, I selected Server Farm.  After setup completed, I browsed to http://localhost:13667/configadminvs.aspx.  I didn't make any changes.  I went to Windows Update and checked for critical updates using the Express option.
    14. Rebooted.
    15. Installed Team Foundation Server.  I chose Single-Server Installation.  On the System Health Check, I ran into an error.  The error message told me that the SQL Server Agent service was stopped.  I enabled the SQL Server Agent using the Services applet from the Control Panel and set it to Automatic.  On the Service Account page, I used my custom local account: "tfserviceaccount."  On the Reporting Data Source Account page, I used my custom local account: "tfreportingaccount."  I completed the installation without further issues.
    16. Rebooted.

    I don't think all the reboots I did were actually necessary.  Personally, though, I've found that interspersing reboots during product installations helps me avoid common installation hiccups.


    patterns & practices - A Team of Thieves


    If you want a glimpse into our workspace and how we work at patterns & practices, watch the patterns & practices - A Team of Thieves video on Channel9.  Rory interviews Ed and Peter from our team. 

    During the interview, they bring up our previous team name, PAG.  PAG at one point stood for Prescriptive Architecture Guidance, and then became Platform Architecture Guidance.  What I liked about our PAG days was that our name got used as both a noun and a verb:

    • We need guidance on that! ... can you "PAG it"?!!!
    • Sounds good ... "PAG it!"!!
    • We need a PAG on that! (meaning build a set of guidance)



    Policy Verification Through the Life Cycle


    I thought it might be helpful to share how I think about the problem of "policy verification through the life cycle."  I use policy as a mapping for "rules", "building codes" or requirements.

    For simplicity, I think about requirements as either user, system, or business requirements.  I also break it down by business requirements, operational constraints, technological requirements, and organizational and industry compliance.  From a life cycle perspective, I break the rules up into design, implementation, and deployment.  This helps me very quickly parse and prioritize the space.  It also helps me use the right tool for the job and right-size my efforts.

    How does this help?  It helps when you evaluate your approaches.

    • What are the most effective ways to verify design rules? (for example manual design inspections)
    • What are the most effective ways to verify implementation rules? (for example, FxCop and Code Analysis for low-hanging fruit, combined with manual code inspections)
    • What are the most effective ways to verify deployment rules? (for example, deployment inspections)
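One way to sketch this breakdown is a small lookup of rules keyed by life-cycle phase, each paired with its most effective verification approach.  The phases and verification examples follow the post; the structure and the sample rules are my own illustration:

```python
# Rules ("building codes") parsed by life-cycle phase, each with the
# verification approach that is most effective for that phase.
policy = {
    "design": {
        "verify_with": "manual design inspections",
        "rules": ["authenticate at the trust boundary"],
    },
    "implementation": {
        "verify_with": "static analysis (e.g. FxCop) plus manual code inspections",
        "rules": ["parameterize all database queries"],
    },
    "deployment": {
        "verify_with": "deployment inspections",
        "rules": ["disable unused services on the server"],
    },
}

def rules_for(phase):
    """Right tool for the job: look up how to verify the rules for a phase."""
    entry = policy[phase]
    return entry["verify_with"], entry["rules"]

how, rules = rules_for("implementation")
print(how)
```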

    Brian Foote and Dynamic Languages


    Brian Foote gave an insightful talk about dynamic languages to our patterns & practices team. I walked away with a few insights from the delivery and from the content. 

    On the delivery side of things, I liked the way he used short stories, metaphors, and challenging questions to make his points.  The beauty of his approach was that I could either take it at face value, or re-examine my assumptions and paradigms.  I think I ended up experiencing recursive insight.

    On the content side, I liked Brian's points:

    • Leave your goalie there.  No matter how good your game strategy is, would you risk playing without your goalie?
    • Do it when it counts.  If you're going to check it later do you need to check it now?  Are you checking it at the most effective time?  Are you checking when it matters most?  Are you checking too much or too often?
    • End-to-end testing.  What happens when there's a mistake in your model?  Did you simulate enough in your end-to-end tests to spot the problem?
    • Readable code.  If you can't eyeball your code, who's going to catch what the compiler won't?

    A highlight for me was when Brian asked the question, "What would or should have caught the error?"  (The example he showed was a significant blunder measured in millions.)  There's a lot of factors across the people, process, and technology spectrum.  The problem is: are specs executable? ... are processes executable? ... are engineers executable? ... etc.

    After the talk, I had to ask Brian what led him to Big Ball of Mud.  I wasn't familiar with it, but more importantly I wanted Brian's take.  My guess was it was a combination of a big ball of spaghetti that was as clear as mud.  He said "ball of mud" was actually a fairly common expression at the time, and it hit home.

    Following one ponderable thought after another, we landed on the topic of copy-and-paste reuse in code.  We all know the downsides of copy and paste, but Brian mentioned the upside that it localizes changes (you can make a change here without breaking code there).  Dragos and I also brought up the issue of over-engineering reuse and how sometimes reverse engineering is more expensive than a fresh start.  In other words, sometimes so much energy and effort has been put into a great big reusable gob of goo that when you just need to do one thing, that one thing is tough to do.  I did point out that the copy-and-paste-ability factor seemed to go up if you found the recurring domain problems inside of recurring application features inside of recurring application types.

    Things got really interesting when we explored how languages could go from generalized to optimized as you walk the stack from lower-level framework type code up to more domain specific applications.  We didn't talk specifically about domain specific languages, but we did play out the concept which turned into metaphors of one-size fits all mammals versus trucks with hoses that put out fires and trucks that dump dirt (i.e. use the right, optimized tool for the job).


    24 ASP.NET 2.0 Security FAQs Added to Guidance Explorer


    We've now published 24 ASP.NET 2.0 Security FAQs in Guidance Explorer.  You'll find them under the patterns & practices library node.  We pushed the FAQs into Guidance Explorer because one of our consultants in the field, Alik, is busy building out a customer's security knowledge base using Guidance Explorer.

    Don't let the FAQ name fool you.  FAQ can imply high-level or introductory.  These FAQs actually reflect some deeper issues.  In retrospect, we should have named this class of guidance modules "Questions and Answers."

    Each FAQ takes a question and answer format, where we summarize the solution and then point to more information.



    Echo It Back To Me


    Do people understand what you need from them?  Do people get your point?  A quick way to check is to say, "echo it back to me."  Variations might include:

    • Tell me what you heard so far to make sure we're all on the same page ...
    • I want to make sure I've communicated properly, spit it back out to me ...

    You might be surprised by the results.  I've found this to be an effective way to narrow the communication gap for common scenarios.


    Guidance Explorer as Your Personal Guidance Store


    Although Guidance Explorer (GE for short) was designed to showcase patterns & practices guidance, you can use it to create your own personal knowledge base.  It's a simple authoring environment, much like an offline blog.  The usage scenario is you create and store guidance items in "My Library".

    Using GE day to day, I noticed a simple but important user experience issue.  I think we should have optimized around creating free-form notes that you could later turn into more structured guidance modules.  There's too much friction in going straight to structured guidance; in practice, you start with some notes, then refine them into more structure.

    To optimize around creating free-form notes in GE, I think the New Item menu that you get when you right-click the My Library node should have been:
    1.  Note - This would be a simple scratch pad so you could store unstructured notes of information.
    2.  Guidance Module - This menu option would list the current module types (i.e. Checklist Item, Code Example, … etc.)

    We actually did include the "Technote" type for this scenario.  A "Technote" is an unstructured note that you can tag with meta-data to help sort and organize in your library.  The problem is that this is not obvious and gets lost among the other structured guidance types in the list.
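    To make the distinction concrete, here is a toy data model of the two item kinds described above.  The class and field names are hypothetical (Guidance Explorer's actual schema isn't shown here); this is just a sketch of how an unstructured note might later be promoted into a templated guidance module:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Technote:
        """Free-form note: just text, plus optional tags for sorting."""
        title: str
        body: str
        tags: list = field(default_factory=list)

    @dataclass
    class Guideline:
        """Structured guidance module: fixed template fields."""
        title: str
        problem: str
        solution: str
        example: str = ""

    # A quick note starts out unstructured...
    note = Technote("Validate input", "Never trust user input.", tags=["security"])

    # ...and can later be refined into a structured guideline.
    guideline = Guideline(
        title=note.title,
        problem="Untrusted input can lead to injection attacks.",
        solution=note.body,
    )
    ```

    The point of the template fields is exactly the benefit mentioned below: when five folks create a guideline, all five guidelines share the same basic structure.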

    The benefit of 20/20 hindsight strikes again!

    On a good note, I've been getting emails from various customers who are using Guidance Explorer.  They like that they get something of a structured wiki experience, but with a lot more capability around sorting and viewing chunks of guidance.  They also like that you get templates for key types (so when five folks create a guideline, all the guidelines have the same basic structure).  I'll post soon about some of the key learnings that can help reshape creating, managing, and sharing your personal, team, and community knowledge in today's landscape.


    Five Things You Didn't Know About Me


    I was blog-tagged by Ed, so here are 5 things you probably don't know about me ...

    • I've trained in Muay Thai kickboxing (head-butting and all), which reminds me I need to work on my splits again.
    • I have lineage to a king long ago, in a country I don't live in.  (This one actually surprised me!)
    • Robert Redford accidentally stepped on my foot while shooting Quiz Show.  (A related long story told short: I found out two weeks too late that I had been offered a speaking role for an HBO special.)
    • I've applied Neuro-Linguistic Programming (NLP) to shape software engineering and guidance.
    • I have a pet wallaby.

    I'm tagging Alik, Rico, Ron, Srinath and Wojtek to post their 5 things.


    Context is Key

    I was browsing Rico's blog and I came across his post Do Performance Analysis in Context.  I couldn't agree more.  When it comes to evaluation, context is key.  If you don't know the scenarios and context, you can't trust the merits of your data or solutions.  To spread the idea and importance at work, I've coined the term Context-Precision.

    Analyzing a Problem Space


    How do you learn a problem space?  I've had to chunk up problem spaces to give advice for the last several years, so I've refined my approach over time.  In fact, when I find myself churning or don't have the best answers, I usually find that I've missed an important step.

    Problem Domain Analysis

    1. What are the best sources of information?
    2. What are the key questions for this problem space?
    3. What are the key buckets or categories in this problem space?
    4. What are the possible answers for the key questions?
    5. What are the empirical results to draw from?
    6. Who can be my sounding board?
    7. What are the best answers for the key questions?

    1. What are the best sources of information?
    Finding the best sources of information is key to saving time.  I cast a wide net, then quickly spiral down to find the critical, trusted sources of information in terms of people, blogs, sites, aliases, forums, and so on.  Sources are effectively the key nodes in my knowledge network.

    2. What are the key questions for this problem space?
    Identifying the questions is potentially the most crucial step.  If I'm not getting the right answers, I'm not asking the right questions.  Questions also focus the mind, and no problem withstands sustained thinking (thinking is simply asking and answering questions).

    3. What are the key buckets or categories in this problem space?
    It's not long before questions start to fall into significant buckets or categories.  I think of these categories as a frame of reference for the problem space.  This is how we created our "Security Frame" for simplifying how we analyzed application security.

    4. What are the possible answers for the key questions?
    When identifying the answers, the first step is simply identifying how it's been solved before.  I always like to know if this problem is new and if not, what are the ways it's been solved (the patterns).  If I think I have a novel problem, I usually haven't looked hard enough.  I ask myself who else would have this problem, and I don't limit myself to the software industry.  For example, I've found great insights for project management and for storyboarding software solutions by borrowing from the movie industry.

    One pitfall to avoid is that just because a solution worked in one case doesn't mean it's right for you.  The biggest differences are usually context.  I try to find the "why" and "when" behind the solution, so that I can understand what's relevant for me, as well as tailor it as necessary.  When I'm given blanket advice, I'm particularly curious what's beneath the blanket.

    5. What are the empirical results to draw from?
    Nothing beats empirical results.  Specifically I mean reference examples.  Reference examples are short-cuts for success.  Success leaves clues.  I try to find the case studies and the people behind them.  This way I can model from their success and learn from their failure (failure is just another lesson in how not to do something).

    6. Who can be my sounding board?
    One assumption I make when solving a problem is that there's always somebody better than me for that problem.  So I then ask, well who is that and I seek them out.  It's a chance to learn from the best and force me to grow my network.  This is also how I build up a sounding board of experts.  A sounding board is simply a set of people I trust to have useful perspective on a problem, even if it's nothing more than improving my own questions.

    7. What are the best answers for the key questions?
    The answers that I value the most are the principles.  These are my gems.  A principle is simply a fundamental law.  I'd rather know a single principle than a bunch of rules.  By knowing a single principle, I can solve many variations of a problem.

    Now, while I've left some details out, I've hopefully highlighted enough for you here that you find something you can use in your own problem domain analysis.
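    The seven questions above lend themselves to a simple, reusable checklist that you instantiate per problem space.  A minimal sketch (the structure and function names here are mine, purely illustrative):

    ```python
    # The seven problem-domain-analysis questions, in order.
    ANALYSIS_QUESTIONS = [
        "What are the best sources of information?",
        "What are the key questions for this problem space?",
        "What are the key buckets or categories in this problem space?",
        "What are the possible answers for the key questions?",
        "What are the empirical results to draw from?",
        "Who can be my sounding board?",
        "What are the best answers for the key questions?",
    ]

    def new_analysis(problem_space):
        """Return a blank checklist: each question maps to an unanswered slot."""
        return {"problem_space": problem_space,
                "answers": {q: None for q in ANALYSIS_QUESTIONS}}

    def open_items(analysis):
        """List the questions still unanswered -- the steps you may have missed."""
        return [q for q, a in analysis["answers"].items() if a is None]

    a = new_analysis("application security")
    a["answers"]["What are the best sources of information?"] = "OWASP, p&p library"
    ```

    The value of a checklist like this matches the observation above: when you're churning, an unanswered slot usually points to the step you skipped.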


    844 Guidance Items in Guidance Explorer


    It's not 9 new guidelines, it's actually 70.  It looks like my Guidance Explorer wasn't done syncing when I wrote my previous post.

    Prashant sent me a quick note.  Here's the complete status for Dec and Jan:

    • Dec – 33 Guidelines items (.NET 2.0)
    • Jan – 37 Guidelines items (ADO.NET 2.0)


    • Total Items – 844
    • Total Guidelines Items – 547

    9 New Perf Guidelines in Guidance Explorer


    You should see 9 new performance guidelines in Guidance Explorer (GE).  Well, not entirely new, but refactored and cleaned up.  Prashant Bansode (from our original Improving .NET Performance guide team) was busy while I was out of the office for the holidays.  What you'll notice is that many of the guidelines are missing problem and solution examples.  Job #1 is putting the guidance into this form.  Our new schema for guidelines is more elaborate than the original guidance, which means we'll have information holes.  Fleshing out the missing information is job next.

    BTW - if you're using Guidance Explorer and have an interesting story on how you've used it, please share it with us at getool@microsoft.com.


    From Guides to Guidance Modules


    Have you noticed the transition from guides to guidance modules over time?  My first few guidance projects were actually guides:

    While the chapters in the guides were modular, the overall outcome was an end-to-end guide.  On the upside, there was a lot of cohesion among the chapters.  On the downside, the guides were overwhelming for many customers who just wanted bits and pieces of the information.  That's the challenge of making a full guide available in HTML, PDF, and print.

    Examples of Guidance Modules
    .NET 2.0 Security Guidance was the first project to use "Guidance Modules".  Guidance Modules are effectively modular types of guidance:

    Benefits of Guidance Modules
    The benefits of modules include:

    • "right-size" the solution to the problem
    • easier to author and test the guidance (templates and test cases)
    • easier to consume (you can grab just the guidance you need)
    • release as we go vs. wait until the end
    • you can build guides and books from the modules (strong building blocks for guides)

    The Chunking Has Just Begun
    While the initial chunking of guidance has certainly helped, there's more to go.  Customers have asked for even smaller chunks.  For example, rather than have all the guidelines in a single module, chunk each guideline into its own page.

    Dealing with Chunks
    Chunking up the guidance creates new challenges.  How do you find, gather, and organize the right set of modules for your scenario?  This is a good problem to have.  Assuming there are guidance modules that have great community around them and are prescriptive in nature (they prescribe vs. describe solutions), the next step is to improve how you can leverage the modular information.  That's where Guidance Explorer comes in.  It was an experiment to explore new ways of creating, finding, and using guidance modules.  We learned a great deal about user experience, which I'll share in a future post.


    Idioms and Paradigms


    John Socha-Leialoha wrote up a nice bit of insight on how Users are Idiomatic.  John writes:

    "First, different users will have different definitions of "intuitive." ... Second, and this isn't conveyed directly by the definition of idiomatic, users actually expect inconsistent behavior."

    In my experience, I've found this to be true (user experience walkthroughs with customers are very revealing and insightful).

    I first got introduced to idiomatic design for user experience several years back.  One of my colleagues challenged me to improve my user interface design by trading what might seem like intuitive paradigms for more useful idioms.  He used the example of a car.  He said the placement of the gas and brake pedals is not intuitive, but idiomatic.

    He argued that what's important is that the pedals are placed where they are efficient and effective, not necessarily where they are intuitive.  His point was that I should make design decisions by thinking through long-term user effectiveness and efficiency vs. just the up-front discoverability of intuitive models.  He added that sometimes intuitive placement makes sense, as long as you're not trading away overall user experience.

    User experience in software is challenging so I enjoy distinctions like this that make me think of the solution from different angles.
