J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

  • J.D. Meier's Blog

    Update on Key Projects

    • 0 Comments

    While I've been quiet on my blog, we've been busy behind the scenes.  Here's a rundown on key things:

    • Arch Nuggets. These don't exist yet.  I'm socializing an idea to create small, focused guidance for key engineering decisions.  I'm picturing small articles with insight and chunks of code.  The code would be less about reuse and more about helping you quickly prototype and test.  You can think of them as focused architectural spikes or tests.  The scope would be cross-technology, cross-cutting concerns, and application infrastructure scenarios such as data access, exception management, logging, ... etc.  They'll be lightweight and focused on helping you figure out how to put our platform Legos together.  For a concrete example, imagine more articles and code examples such as How To: Page Records in .NET Applications.
    • CodePlex Site - Performance Testing Guidance.  This is our online knowledge base for performance testing nuggets.  We're refactoring nuggets from our performance testing guide.  We'll then create new modules that show you how to make the most out of Visual Studio.
    • CodePlex Site - VSTS Guidance.  This is our online knowledge base for Visual Studio Team Foundation Server guidance.  We're refactoring nuggets from our TFS guide.
    • Guidance Explorer.  This is where we store our reusable guidance nuggets.  On the immediate radar, we're making some fixes to improve performance, as well as to improve browsing of our catalog of nuggets.
    • Guide - Getting Results.  As a pet project, I'm putting together what I've learned over the past several years getting results at Microsoft.  It's going to include what I learned from the school of hard knocks.  I'm approaching it the way I approach any guide and I'm focusing on the principles, practices and patterns for effectiveness.
    • Guide - Performance Testing Guidance for Web Applications.  We're wrapping this up this week.  We're finishing the final edits and then building a new PDF.
    • Guide - Team Development with Visual Studio Team Foundation Server.   We’re basically *guidance complete.*  Since the Beta release, we added guidelines and practices for build, project management, and reporting.  We also revamped the deployment chapter, as well as improved the process guidance.  It's a substantial update. 
    • MSDN port of the guidance.  We have enough critical mass in terms of VSTS and Performance Testing guidance to justify porting to MSDN.  While many customers have used the guidance from the CodePlex site as is, for greater reach, we need to start the process of making the guidance a part of the MSDN library.  This will be an interesting exercise.
    • SharePoint test for our guidance store.  We're testing the feasibility of using SharePoint for our back-end (our guidance store) and our online Web application.  The key challenges we're hitting are creating effective publishing and consuming user experiences.  It's interesting and insightful, and there's lots to learn.

    I'll have more to say soon.

  • J.D. Meier's Blog

    Security Inspections

    • 2 Comments

    Inspections are among my favorite tools for improving security.   I like them because they’re so effective and efficient.  Here’s why:

    • If you know what to look for, you have a better chance of finding it.  (The reverse is also true: if you don’t know what you’re looking for, you’re not going to see it.)
    • You can build your inspection criteria from common patterns.  (Security issues tend to stem from common patterns.)
    • You can share your inspection criteria
    • You can prioritize your inspection criteria
    • You can chunk your inspection criteria

    Bottom line -- you can identify, catalog and share security criteria faster than new security issues come along.

    Security Frame
    Our Security Frame is simply a set of categories we use to “frame” out, organize, and chunk up security threats, attacks, vulnerabilities and countermeasures, as well as principles, practices and patterns.  The categories make it easy to distill and share the information in a repeatable way. 

    Security Design Inspections
    Performing a Security Design Inspection involves evaluating your application’s architecture and design in relation to its target deployment environment from a security perspective.  You can use the Security Frame to help guide your analysis.   For example, you can walk the categories (authentication, authorization, … etc.) for the application.  You can also use the categories to do a layer-by-layer analysis.  Design inspections are a great place to checkpoint your core strategies, as well as identify what sort of end-to-end tests you need to verify your approach.

    Here's the approach in a nutshell (a small sketch follows the steps):

    • Step 1.  Evaluate the deployment and infrastructure. Review the design of your application as it relates to the target deployment environment and the associated security policies. Consider the constraints imposed by the underlying infrastructure-layer security and the operational practices in use.
    • Step 2.  Evaluate key security design using the Security Frame. Review the security approach that was used for critical areas of your application. An effective way to do this is to focus on the set of categories that have the most impact on security, particularly at an architectural and design level, and where mistakes are most often made. The Security Frame describes these categories. They include authentication, authorization, input validation, exception management, and other areas. Use the Security Frame as a road map so that you can perform reviews consistently, and to make sure that you do not miss any important areas during the inspection.
    • Step 3.  Perform a layer-by-layer analysis. Review the logical layers of your application, and evaluate your security choices within your presentation, business, and data access logic.
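
    To make the category walk concrete, here's a minimal sketch (in Python, for illustration) of driving a layer-by-layer review from the Security Frame.  The categories, questions, and layers below are illustrative placeholders, not our official lists:

        # A minimal sketch of a question-driven design inspection.
        # The categories and questions are illustrative placeholders,
        # not the official patterns & practices Security Frame lists.
        SECURITY_FRAME = {
            "Authentication": ["How are credentials stored and transmitted?"],
            "Authorization": ["Where are the trust boundaries, and who enforces them?"],
            "Input Validation": ["Is input checked for length, range, format, and type?"],
            "Exception Management": ["Do error messages leak internal details?"],
        }

        LAYERS = ["Presentation", "Business", "Data Access"]

        def walk_inspection(frame, layers):
            """Yield (layer, category, question) tuples for a layer-by-layer review."""
            for layer in layers:
                for category, questions in frame.items():
                    for question in questions:
                        yield layer, category, question

        for layer, category, question in walk_inspection(SECURITY_FRAME, LAYERS):
            print(f"[{layer} / {category}] {question}")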

    For more information, see our patterns & practices Security Design Inspection Index.

    Security Code Inspections
    This is truly a place where inspections shine.  While static analysis will catch a lot of the low-hanging fruit, manual inspection will find many of the important security issues that are context dependent.  Because it’s a manual exercise, it’s important to set objectives, and to prioritize based on what you’re looking for.  Whether you do your inspections in pairs, in groups, or individually, checklists in the form of criteria or inspection questions are helpful.

    Here's the approach in a nutshell:

    • Step 1. Identify security code review objectives. Establish goals and constraints for the review.
    • Step 2. Perform a preliminary scan. Use static analysis to find an initial set of security issues and to improve your understanding of where the security issues are most likely to be discovered through further review (see the sketch after this list).
    • Step 3. Review the code for security issues. Review the code thoroughly with the goal of finding security issues that are common to many applications. You can use the results of step two to focus your analysis.
    • Step 4. Review for security issues unique to the architecture. Complete a final analysis looking for security issues that relate to the unique architecture of your application. This step is most important if you have implemented a custom security mechanism or any feature designed specifically to mitigate a known security threat.
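
    To illustrate Step 2, here's a minimal sketch (in Python, with made-up patterns) of a preliminary scan that flags lines worth a closer manual look.  A real static analysis tool goes far deeper; the point is just to focus the manual review:

        import re
        from pathlib import Path

        # Illustrative patterns for a preliminary scan; real static analysis
        # tools go far deeper.  Hits mark code worth a careful manual review.
        RISKY_PATTERNS = {
            "possible SQL built by string concatenation": re.compile(r'"SELECT .*"\s*\+'),
            "possible hard-coded secret": re.compile(r'(password|secret)\s*=\s*"', re.IGNORECASE),
        }

        def preliminary_scan(root, extensions=(".cs", ".aspx")):
            """Print file:line hits to prioritize the manual review pass."""
            for path in Path(root).rglob("*"):
                if path.suffix not in extensions:
                    continue
                for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                    for label, pattern in RISKY_PATTERNS.items():
                        if pattern.search(line):
                            print(f"{path}:{lineno}: {label}")

        preliminary_scan("src")  # "src" is a placeholder for your source root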

    For more information on Security Code Inspections, see our patterns & practices Security Code Inspection Index.  For examples of inspection questions, see Security Question List: Managed Code (.NET Framework 2.0) and Security Question List: ASP.NET 2.0.

    Security Deployment Inspections
    Deployment Inspections are particularly effective for security because this is where the rubber meets the road.  In a deployment inspection, you walk the various knobs and switches that impact the security profile of your solution.  This is where you check things such as accounts, shares, protocols, … etc. 

    The following server security categories are key when performing a security deployment inspection (a small configuration-check sketch follows the list):

    • Patches and Updates
    • Accounts
    • Auditing and Logging
    • Files and Directories
    • Ports
    • Protocols
    • Registry
    • Services
    • Shares
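
    Here's a minimal sketch (in Python; the baseline values are illustrative, not recommendations) of walking those categories as a configuration check against an agreed baseline:

        # A minimal sketch of a deployment inspection check: compare the
        # observed configuration against a baseline, category by category.
        BASELINE = {
            "Services": {"Telnet": "disabled", "W3SVC": "running"},
            "Ports": {"80": "open", "1433": "blocked from internet"},
            "Shares": {"C$": "admin only"},
        }

        def inspect_deployment(observed):
            """Report every deviation from the baseline, grouped by category."""
            for category, expected in BASELINE.items():
                for setting, want in expected.items():
                    got = observed.get(category, {}).get(setting, "<missing>")
                    if got != want:
                        print(f"{category}/{setting}: expected {want!r}, found {got!r}")

        # Example: observed settings gathered by your own scripts or tools.
        inspect_deployment({"Services": {"Telnet": "running", "W3SVC": "running"}})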

    For more information, see our patterns & practices Security Deployment Inspection Index.


  • J.D. Meier's Blog

    Performance Inspections

    • 1 Comment

    In this post, I'll focus on design, code, and deployment inspections for performance.  Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections at key stages in your life cycle, such as design, implementation and deployment.

    Keys to Effective Inspections

    • Know what you're looking for.
    • Use scenarios to illustrate a problem.
    • Bound the acceptance criteria with goals and constraints.

    Performance Frame
    The Performance Frame is a set of categories that helps you organize and focus on performance issues.   You can use the frame to organize principles, practices, patterns and anti-patterns.  The categories are also effective for organizing sets of questions to use during inspections.  By using the categories in the frame, you can chunk up your inspections.   The frame is also good for finding low-hanging fruit.    

    Performance Design Inspections
    Performance design inspections focus on the key engineering decisions and strategies.  Basically, these are the decisions that have cascading impact and that you don't want to make up on the fly.  For example, your candidate strategies for caching per user and application-wide data, paging records, and exception management would be good to inspect.  Effective performance design inspections include analyzing the deployment and infrastructure, walking the performance frame, and doing a layer-by-layer analysis.  Question-driven inspections are good because they help surface key risks and they encourage curiosity.
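
    For example, the per-user versus application-wide caching decision largely comes down to how you scope the cache key.  Here's a minimal sketch (in Python; the names and TTL are illustrative) of the trade-off:

        import time

        _cache = {}

        def get_cached(key, loader, ttl_seconds=60):
            """Return the cached value for key, loading and storing it on a miss."""
            entry = _cache.get(key)
            if entry and time.time() - entry[1] < ttl_seconds:
                return entry[0]
            value = loader()
            _cache[key] = (value, time.time())
            return value

        # Application-wide: one entry serves every user, so it must be user-neutral.
        products = get_cached("app:products", lambda: ["widget", "gadget"])

        # Per-user: keying by identity keeps personalized data isolated,
        # at the cost of one entry per user.
        user_id = "alice"
        prefs = get_cached(f"user:{user_id}:prefs", lambda: {"theme": "dark"})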

    While there are underlying principles and patterns that you can consider, you need to temper your choices with prototypes, tests and feedback.  Performance decisions are usually trade-offs with other quality attributes, such as security, extensibility, or maintainability.  Performance Modeling helps you make trade-off decisions by focusing on scenarios, goals and constraints. 

    For more information, see Architecture and Design Review of a .NET Application for Performance and Scalability and Performance Modeling.

    Performance Code Inspections
    Performance code inspections focus on evaluating coding techniques and design choices. The goal is to identify potential performance and scalability issues before the code is in production.  The key to effective performance code inspections is to use a profiler to localize and find the hot spots.  The anti-pattern is blindly trying to optimize the code.  Again, a question-driven technique used in conjunction with measuring is key.
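
    Our guidance targets .NET, but the measure-first idea is universal.  Here's a minimal sketch using Python's built-in profiler as a stand-in; profile your real entry point instead of the toy workload:

        import cProfile
        import pstats

        def slow_report():
            """Stand-in workload; profile your real entry point instead."""
            return sum(i * i for i in range(1_000_000))

        # Measure first: let the profiler tell you where the hot spots are,
        # rather than guessing and optimizing blindly.
        cProfile.run("slow_report()", "profile.out")
        stats = pstats.Stats("profile.out")
        stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time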

    For more information, see Performance Code Inspection.

    Performance Deployment Inspections
    Performance deployment inspections focus on tuning the configuration for your deployment scenario.  To do this, you need to have measurements and runtime data to know where to look.  This includes simulating your deployment environment and workload.  You also need to know the knobs and switches that influence the runtime behavior.  You also need to be bounded by your quality of service requirements so you know when you're done.  Scenarios help you prioritize.
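
    Here's a minimal sketch (in Python; the thresholds are illustrative) of using quality of service requirements as the "done" test for a tuning pass:

        # Quality-of-service requirements bound the tuning exercise: you're
        # done when measurements fall inside them.  Thresholds are illustrative.
        QOS = {
            "response_time_p95_ms": 2000,
            "throughput_rps": 100,
            "cpu_utilization_pct": 75,
        }

        def tuning_done(measurements):
            """Return unmet requirements; an empty list means stop tuning."""
            unmet = []
            for metric, limit in QOS.items():
                observed = measurements.get(metric)
                if metric == "throughput_rps":
                    ok = observed is not None and observed >= limit
                else:
                    ok = observed is not None and observed <= limit
                if not ok:
                    unmet.append((metric, limit, observed))
            return unmet

        print(tuning_done({"response_time_p95_ms": 2400,
                           "throughput_rps": 120,
                           "cpu_utilization_pct": 68}))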


  • J.D. Meier's Blog

    Inspections

    • 3 Comments

    Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections as part of your testing process at key stages, such as design, implementation and deployment.

    Design Inspections
    In a design inspection, you evaluate the key engineering decisions.  This helps avoid expensive do-overs.  Think of inspections as a dry run of the design assumptions.  Here are some practices I’ve found to be effective for design inspections:

    • Use inspections to checkpoint your strategies before going too far down the implementation path.
    • Use inspections to expose the key engineering risks.
    • Use scenarios to keep the inspections grounded.  You can’t evaluate the merits of a design or architecture in a vacuum.
    • Use a whiteboard when you can.  It’s easy to drill into issues, as well as step back as needed.
    • Tease out the relevant end-to-end test cases based on risks you identify.
    • Build pools of strategies (i.e. design patterns) you can share.  It’s likely that for your product line or context, you’ll see recurring issues.
    • Balance user goals, business goals, and technical goals.  The pitfall is to do a purely technical evaluation.  Designs are always trade-offs.

    Code Inspections
    In a code inspection, you focus on the implementation.  Code inspections are particularly effective for finding lower-level issues, as well as for balancing trade-offs.  For example, a lot of security issues are implementation level, and they require trade-off decisions.  Here are some practices I’ve found to be effective for code inspections:

    • Use checklists to share the “building codes.”  For example, the .NET Design Guidelines are one set of building codes.  There are also building codes for security, performance ... etc.
    • Use scenarios and objectives to bound and test.  This helps you avoid arbitrary optimization or blindly applying recommendations.
    • Focus the inspection.  I’ve found it’s better to do multiple, short-burst, focused inspections than a large, general inspection.
    • Pair with an expert in the area you’re inspecting.
    • Build and draw from a pool of idioms (i.e., patterns/anti-patterns).

    Deployment Inspections
    Deployment is where the application meets the infrastructure.  Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability, and manageability.  Here are some practices I’ve found to be effective for deployment inspections:

    • Use scenarios to help you prioritize.
    • Know the knobs and switches that influence runtime behavior.
    • Use checklists to help build and share expertise.  Knowledge of knobs and switches tends to be low-level and art-like.
    • Focus your inspections.  I’ve found it more productive and effective to do focused inspections.  Think of it as divide and conquer.

    Additional Considerations

    • Set objectives.  Without objectives, it's easy to go all over the board.
    • Keep a repository.  In practice, one of the most effective approaches is to have a common share that all teams can use as a starting point.  Each team then tailors for their specific project.
    • Integrate inspections with your quality assurance efforts for continuous improvement.
    • Identify skill sets you'll need for further drill-downs (e.g., detailed design, coding, troubleshooting, maintenance).  If you don't involve the right people, you won't produce effective results.
    • Use inspections as part of your acceptance testing for security and performance.
    • Use checklists as starting points.  Refine and tailor them for your context and specific deliverables.
    • Leverage tools to automate the low-hanging fruit.  Focus manual inspections on more context-sensitive or more complex issues, where you need to make trade-offs.
    • Tailor your checklists for application types (Web application, Web service, desktop application, component), for verticals (manufacturing, financial ... etc.), and for project context (Internet-facing, high security, ... etc.).  A small sketch of this tailoring follows.
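
    Here's a minimal sketch (in Python; every checklist item is an illustrative placeholder) of composing a tailored checklist from a shared base:

        # A minimal sketch of tailoring checklists: start from a shared base,
        # then layer on application-type and project-context items.
        BASE = ["Objectives set?", "Scenarios identified?"]

        BY_APP_TYPE = {
            "web application": ["Input validated at every trust boundary?"],
            "web service": ["Message-level security considered?"],
        }

        BY_CONTEXT = {
            "internet-facing": ["Anonymous access surface minimized?"],
            "high security": ["Threat model reviewed this iteration?"],
        }

        def build_checklist(app_type, contexts):
            """Compose the base checklist plus items for the app type and contexts."""
            items = list(BASE)
            items += BY_APP_TYPE.get(app_type, [])
            for ctx in contexts:
                items += BY_CONTEXT.get(ctx, [])
            return items

        print(build_checklist("web application", ["internet-facing"]))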

    In the future, I'll post some more specific techniques for security and performance.

  • J.D. Meier's Blog

    MSF Agile Frame (Workstreams and Key Activities)

    • 1 Comment

    When I review an approach, I find it helpful to distill it to a simple frame so I can get a bird's-eye view.  For MSF Agile, I found the most useful frame to be the workstreams and key activities.  According to MSF, workstreams are simply groups of activities that flow logically together and are usually associated with a particular role.  I couldn't find this view in MSF Agile, so I created one:

    Workstream | Role | Key Activities
    Capture Project Vision | Business Analyst | Write Vision Statement; Define Personas; Refine Personas
    Create a Quality of Service Requirement | Business Analyst | Brainstorm Quality of Service Requirements; Develop Lifestyle Snapshot; Prioritize Quality of Service Requirements List; Write Quality of Service Requirements; Identify Security Objectives
    Create a Scenario | Business Analyst | Brainstorm Scenarios; Develop Lifestyle Snapshot; Prioritize Scenario List; Write Scenario Description; Storyboard a Scenario
    Guide Project | Project Manager | Review Objectives; Assess Progress; Evaluate Test Metric Thresholds; Triage Bugs; Identify Risk
    Plan an Iteration | Project Manager | Determine Iteration Length; Estimate Scenario; Estimate Quality of Service Requirements; Schedule Scenario; Schedule Quality of Service Requirement; Schedule Bug Fixing Allotment; Divide Scenarios into Tasks; Divide Quality of Service Requirements into Tasks
    Guide Iteration | Project Manager | Monitor Iteration; Mitigate a Risk; Conduct Retrospectives
    Create a Solution Architecture | Architect | Partition the System; Determine Interfaces; Develop Threat Model; Develop Performance Model; Create Architectural Prototype; Create Infrastructure Architecture
    Build a Product | Developer | Start a Build; Verify a Build; Fix a Build; Accept Build
    Fix a Bug | Developer | Reproduce the Bug; Locate the Cause of a Bug; Reassign a Bug; Decide on a Bug Fix Strategy; Code the Fix for a Bug; Create or Update a Unit Test; Perform a Unit Test; Refactor Code; Review Code
    Implement a Development Task | Developer | Cost a Development Task; Create or Update a Unit Test; Write Code for a Development Task; Perform Code Analysis; Perform a Unit Test; Refactor Code; Review Code; Integrate Code Changes
    Close a Bug | Tester | Verify a Fix; Close the Bug
    Test a Quality of Service Requirement | Tester | Define Test Approach; Write Performance Tests; Write Security Tests; Write Stress Tests; Write Load Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    Test a Scenario | Tester | Define Test Approach; Write Validation Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    Release a Product | Release Manager | Execute a Release Plan; Validate a Release; Create Release Notes; Deploy the Product

  • J.D. Meier's Blog

    How To Share Lessons Learned

    • 2 Comments

    I'm a fan of sharing lessons learned along the way.  One lightweight technique I use with a distributed team is a simple mail of Do's and Don'ts.  At the end of the week, or as needed, I start the mail with a list of do's and don'ts I learned, and then ask the team to reply-all with their lessons learned.

    Example of a Lessons Learned Mail

    Collaboration

    • Do require daily live syncs to keep the team on the same page and avoid churn in mail.
    • Do reduce the friction to be able to spin up Live Meetings as needed.

    Guidance Engineering

    • Do index product docs to help build categories and to know what's available.
    • Do scenario frames to learn and prioritize the problem space.
    • Do use Scenarios, Questions and Answers, Practices at a Glance, and Guidelines to build and capture knowledge as we go.
    • Do use Scenarios as a scorecard for the problem space.
    • Do use Questions and Answers as a chunked set of focused answers, indexed by questions.
    • Do use Practices at a Glance as a frame for organizing task-based nuggets (how to blah …)
    • Do use Guidelines for recommended practices (do this, don't do this … etc.)
    • Do create the "frame"/categories earlier vs. later.

    Personal Effectiveness

    • Do blog as I go versus over-engineer entries.
    • Do sweep across bodies of information and compile indexes up front versus ad-hoc (for example, compile bloggers, tags, doc indexes, articles, sites … etc.)

    Project Management

    • Don't split the team across areas.  Let the team first cast a wide net to learn the domain, but then focus everybody on the same area for collaboration, review, pairing …etc.

    Tools

    • Do use CodePlex as a channel for building community content projects.

    Guidelines Help Carry Lessons Forward
    While this approach isn't perfect, I've found it makes it easier to carry lessons forward, since each lesson is a simple guideline.  I prefer this technique to approaches where there's a lot of dialogue but no results.  I also like it because it's a simple enough forum for everybody to share their ideas and focus on objective learnings versus finger-pointing and dwelling.  I also find it easy to go back through my projects and quickly thumb through the lessons learned.

    Do's and Don'ts Make Great Wiki Pages Too
    Note that this approach actually works really well in Wikis too.  That's where I actually started it.  On one project, my team created a lot of lessons learned in a Wiki, where each page was dedicated to something we found useful.  The problem was, it was hard to browse the lessons in a useful way.  It was part rant, part diatribe, with some ideas on improvements scattered here or there.  We then decided to name each page as a Do or Don't and suddenly we had a Wiki of valuable lessons we could act on.

  • J.D. Meier's Blog

    Quick and Dirty Getting Things Done

    • 4 Comments

    If you're backlogged and you want to get out, here's a quick, low tech, brute force approach.  On your whiteboard, first write your key backlog items.  Next to it, write down To Do.  Under To Do, write the three most valuable things you'll complete today.  Not tomorrow or in the future, but what you'll actually get done today.  Don't bite off more than you can chew.  Bite off enough to feel good about what you accomplished when the day is done.

    If you don't have a whiteboard, substitute a sheet of paper.  The point is to keep it visible and simple.  Each day this week, grab a new set of three.  When you nail the three, grab more.  Again, only bite off as much as you can chew for the day.  At the end of the week, you'll feel good about what you got done.

    This is a technique I've seen work for many colleagues, and it's stood the test of time.  There are a few reasons why this tends to work:

    • Whiteboards make it easy to step back, yet keep focus.
    • You only bite off a chunk at a time, so you don't feel swamped.
    • As you get things done, you build momentum.
    • You have constant visual feedback of your progress.
    • Unimportant things slough off.


  • J.D. Meier's Blog

    VSTS Guidance Projects Roundup

    • 5 Comments

    Here's a quick rundown of our patterns & practices VSTS-related guidance projects.  It's a combination of online knowledge bases, guides, video-based guidance, and a community Wiki for public participation.  We're using CodePlex for agile releases before baking the guidance into MSDN for the longer term.

    Guides

    Knowledge Bases

    • patterns & practices Performance Testing Guidance Wiki - This project is focused on creating an online knowledge base of how-tos, guidelines, and practices for performance testing, including performance testing using Visual Studio Team System. It's a collaborative effort between industry experts, Microsoft ACE, patterns & practices, Premier, VSTS team members, and customers.
    • patterns & practices Visual Studio Team System Guidance Wiki - This project is focused on creating an online knowledge base of how-tos, guidelines, and practices for Microsoft Visual Studio Team System. It's a collaborative effort between patterns & practices, Team System team members, industry experts, and customers.

    Video-Based Guidance

    Community Wiki

    Note that we're busy wrapping up the guides.  Once the guides are complete, we'll do a refresh of the online knowledge bases.  We'll also push some updated modules to Guidance Explorer.



  • J.D. Meier's Blog

    Get Lean, Eliminate Waste

    • 4 Comments

    If you want to tune your software engineering, take a look at Lean.  Lean is a great discipline with a rich history and proven practices to draw from.  James has a good post on applying Lean principles to software engineering.  I think he summarizes a key concept very well:

    "You let quality drive your speed by building in quality up front and with increased speed and quality comes lower cost and easier maintenance of the product moving forward."

    7 Key Principles in Lean
    James writes about 7 key principles in Lean:

    1. Eliminate waste.
    2. Focus on learning.
    3. Build quality in.
    4. Defer commitment.
    5. Deliver fast.
    6. Respect people.
    7. Optimize the whole.

    Example of Deferring Commitment
    I think the trick with any principles is knowing when to use them and how to apply them in context.  James gives an example of how Toyota defers commitment until the last possible moment:

    "Another key idea in Toyota's Product Development System is set-based design. If a new brake system is needed for a car, for example, three teams may design solutions to the same problem. Each team learns about the problem space and designs a potential solution. As a solution is deemed unreasonable, it is cut. At the end of a period, the surviving designs are compared and one is chosen, perhaps with some modifications based on learning from the others - a great example of deferring commitment until the last possible moment. Software decisions could also benefit from this practice to minimize the risk brought on by big up-front design."

    Examples in Software Engineering
    From a software perspective, what I've seen teams do is prototype multiple solutions to a problem and then pick the best fit.  The anti-pattern that I've seen is committing to one path too early without putting other options on the table.

    A Lean Way of Life
    How can you use Lean principles in your software development effort?  ... your organization?  ... your life?

    More Information

  • J.D. Meier's Blog

    Clearing Your Inbox

    • 9 Comments

    Today I helped a colleague clear their inbox.  I've kept a zero mail inbox for a few years.  I forgot this wasn't common practice until a colleague said to me, "wow, your inbox doesn't scroll."

    I didn't learn the zen of the zero mail inbox overnight.  As pathetic as this sounds, I've actually compared email practices over the years with several people to find some of the best practices that work over time.  The last thing I wanted to do was waste time in email if there were better ways.  Some of my early managers also instilled in me that to be effective, I needed to master the basics.  Put another way, don't let administration get in the way of results.

    Key Steps for a Clear Inbox
    My overall approach is to turn actions into next steps and to keep stuff I've already seen out of the way of my incoming mail.  Here are the key steps:

    1. Filter out everything that's not directly to you.  To do so, create an inbox rule to remove everything that doesn't have you directly on the To or Cc line.  As an exception, I do let my immediate team aliases fall through.
    2. Create a folder for everything that's read.  I have a folder where I move everything I've read and acted on.  This is how I make way for incoming mail.
    3. Create a list for your actions.  Having a separate list means you can order the actions in the sequence that makes sense for you, versus letting the sequence in your inbox drive you.

    Part of the key is acting on mail versus shuffling it.  For a given mail, if I can act on it immediately, I do.  If now's not the time, I add it to my list of actions.  If it will take a bit of time, then I drag it to my calendar and schedule the time.
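
    Here's a minimal sketch (in Python) of that triage decision for a single mail.  The time thresholds and folder names are my own illustration, not a rule:

        # A minimal sketch of the triage decision for one mail.  The
        # two-minute threshold and destinations are illustrative.
        def triage(message, my_address, team_aliases, action_list, calendar):
            recipients = message["to"] + message["cc"]
            directly_mine = my_address in recipients or any(
                alias in recipients for alias in team_aliases)
            if not directly_mine:
                return "filtered"                       # rule moves it out of the inbox
            if message["estimated_minutes"] <= 2:
                return "act now, then move to Read"     # act on it immediately
            if message["estimated_minutes"] <= 15:
                action_list.append(message["subject"])  # queue it on my action list
                return "added to action list"
            calendar.append(message["subject"])         # big items get scheduled time
            return "scheduled on calendar"

        actions, cal = [], []
        print(triage({"to": ["me@example.com"], "cc": [],
                      "subject": "Review draft", "estimated_minutes": 30},
                     "me@example.com", ["team@example.com"], actions, cal))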

    Anti-Patterns
    I think it's important to note the anti-patterns:

    1. Using your inbox as a large collection of action and semi-action items with varying priorities
    2. Using your inbox as a pool of interspersed action and reference items
    3. Adopting complicated mail and task management systems

    My Related Posts

    1. Scannable Outcome Lists
    2. Using Scannable Outcomes with My Results Approach
  • J.D. Meier's Blog

    How To Do Tasks More Efficiently

    • 2 Comments

    Here's my short-list of techniques I use for improving efficiency on a given task:

    • Increase the frequency.  If I'm not efficient at something and I need to be, I start doing it more.  A lot more.  Frequency helps me get over resistance.  I also get more chances to learn little things each time that help me improve.   
    • Reduce friction.  This is important and goes hand in hand with increasing the frequency.  When I do something more, I can quickly find the friction points.  For example, I was finding that pictures were piling up on my camera.  The problem was I needed my camera's cradle to transfer my pics.  When I got my new camera, I could transfer pics through the memory disk without the cradle, and the friction was gone.  It was a world of difference.  I now pay attention to friction points in all the recurring tasks I need to do.
    • Model the best.  If I look around, I can usually find somebody who's doing what I want to do, better than I'm doing it.  I learn from them.  For example, when I was doing an improvement sprint on making videos, I learned from Jason Taylor, Alik Levin, and Alex Mackman, since they were all doing videos for some time and had lessons to share.
    • Batch the tasks.  There are two ways I batch tasks.  First, I gather enough so that when I do them, I'll learn in a batch.  Second, I look for ways to split the work and to batch the workstreams.  For example, when I was working on an improvement sprint for speech-to-text, I made very little progress if I tried to dictate and edit at the same time.  I made much more progress when I dictated in batch, and then edited in batch.  It was a simple shift in strategy, but it made a world of difference.

    While each technique is useful, I find I improve faster when I'm using them together.  It's synergy in action, where the sum is better than the parts.


  • J.D. Meier's Blog

    Timebox Your Day

    • 5 Comments

    Grigori Melnik joined our team recently.  He's new to Microsoft so I shared some tips for effectiveness.  Potentially, the most important advice I gave him was to timebox his day.  If you keep time a constant (by ending your day at a certain time), it helps with a lot of things:

    • Worklife balance (days can chew into nights can chew into weekends)
    • Figuring out where to optimize your day
    • Prioritizing (time is a great forcing function)

    To start, I think it helps to carve up your day into big buckets (e.g. administration, work time, think time, connect time), and then figure out how much time you're willing to give them.  If you're not getting the throughput you want, you can ask yourself:

    • are you working on the right things?
    • are you spending too much time on lesser things?
    • are there some things you can do more efficiently or effectively?

    To make the point hit home, I pointed out that without a timebox, you can easily spend all day reading mail, blogs, and aliases, doing self-training, ... etc., and then wonder where your day went.  Microsoft is a technical playground with lots of potential distractions for curious minds that want to grow.  Using timeboxes helps strike a balance.  Timeboxes also help with pacing.  If I only have so many hours to produce results, I'm very careful to spend my high-energy hours on the right things.


  • J.D. Meier's Blog

    How To Research Efficiently

    • 7 Comments

    Building guidance takes a lot of research.  Over the years, I've learned how to do this faster and easier.  One of the most important things I do is set up my folders (whether in the file system or in Groove).

    Initial Folders

    /Project X
    	/Drafts
    	/Research
    	/Reference
    
    Folders Over Time
    Over time, this ends up looking more like:
    /Project X
    	/Builds
    		/2007_05_26
    		/2007_05_27
    	/Drafts
    	/Reference
    		/Articles
    		/Blogs
    		/Bugs
    		/CaseStudies
    		/Docs
    		/Slides
    		/Source X
    		/Source Y
    		/Source Z
    	/Research
    		/Braindumps
    		/DataPoints
    		/QuestionsLists
    		/Topic X
    		/Topic Y
    		/Topic Z
    	/Tests
    		/Tests X
    		/Tests Y
    		/Tests Z
    	/Whiteboards
    		/Topic X
    		/Topic Y
    		/Topic Z
    


    Key Points

    • Factor reference from research.  Reference is stuff I pull in from various sources, such as slides, blogs, articles ... etc.  Research holds the notes and docs I create.  This way I avoid mixing information I create with information that I reference.  Having a place to store my reference information helps me optimize when I'm hunting and gathering resources in batch mode.  I also find that it saves me time when I have to go back and figure out where information came from.
    • Factor stages of information.  In my basic workflow, I move information from research to drafts to builds (where builds are the guides).  Keeping them separate makes it very easy for me to know the current state of the information, and it gives me a safe place to refactor and make changes.  Research is effectively my sandbox to create documents and organize my notes as I see fit.  Drafts is where I have to make decisions on what and how to share the information.  Builds is where I produce a shareable set of information.
    • Have a place for whiteboard captures.  Whiteboards is where I dump pics from whiteboarding sessions.  I'm a fan of doing braindumps at the whiteboard and quickly dumping to a place to reference.  If it's just text, I write it down.  If it's visual, I take a pic and file it.

    I use this approach whether I'm doing personal learning or building 1200+ page guides.  This approach helps me spend more time researching and less time figuring out where to put the information.
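
    If you want to stand up the structure in one step, a small script can scaffold it.  Here's a minimal sketch in Python (folder names taken from the trees above):

        from pathlib import Path

        # Scaffold the research folder structure shown above for a new project.
        FOLDERS = [
            "Builds",
            "Drafts",
            "Reference/Articles", "Reference/Blogs", "Reference/Docs", "Reference/Slides",
            "Research/Braindumps", "Research/DataPoints", "Research/QuestionsLists",
            "Tests",
            "Whiteboards",
        ]

        def scaffold(project_name, root="."):
            """Create the project folder tree; safe to re-run on an existing tree."""
            base = Path(root) / project_name
            for folder in FOLDERS:
                (base / folder).mkdir(parents=True, exist_ok=True)
            return base

        scaffold("Project X")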


  • J.D. Meier's Blog

    Performance Testing Guide Beta 1 is Available

    • 6 Comments

    Today we released Beta 1 of our Performance Testing Guidance for Web Applications guide.  It shows you an end-to-end approach for implementing performance testing, based on lessons learned from applied use in customer scenarios.  Whether you're new to performance testing or looking for ways to improve your current approach, you'll find insights you can use.

    Contents at a Glance

    • Part I, Introduction to Performance Testing
    • Part II, Exemplar Performance Testing Approaches
    • Part III, Identify the Test Environment
    • Part IV, Identify Performance Acceptance Criteria
    • Part V, Plan and Design Tests
    • Part VI, Execute Tests
    • Part VII, Analyze Results and Report
    • Part VIII, Performance Testing Techniques


    Chapters

    • Introduction
    • Ch 01 - Fundamentals of Web Application Performance Testing
    • Ch 02 - Types of Performance Testing
    • Ch 03 - Risks Performance Testing Addresses
    • Ch 04 - Core Activities
    • Ch 05 - Managing an Agile Performance Test Cycle
    • Ch 06 - Coordinate Performance Testing with an Iteration-Based Process
    • Ch 07 - Managing the Performance Testing Cycle in a CMMI Environment
    • Ch 08 - Evaluating Systems to Improve Performance Testing
    • Ch 09 - Performance Testing Objectives
    • Ch 10 - Quantifying End User Response Time Goals
    • Ch 11 - Consolidate Various Types of Performance Acceptance Criteria
    • Ch 12 - Modeling Application Usage
    • Ch 13 - Modeling User Variances
    • Ch 16 - Test Execution
    • Ch 17 - Performance Testing Math
    • Ch 18 - Reporting Fundamentals
    • Ch 19 - Load Testing Web Applications
    • Ch 20 - Stress Testing Web Applications

    About Our Team

    • Carlos Farre - Carlos is our performance and security specialist in patterns & practices.  He helps make sure our patterns & practices code follows our performance and security guidance.
    • Prashant Bansode - When Prashant's on a project, you can be sure he's ripping through the technical accuracy and improving the customer focus.  This is the same Prashant from Guidance Explorer, Security Guidance, and VSTS Guidance.
    • Scott Barber - Scott brings his many years of performance testing experience to the table.  If you do performance testing for a living, you probably know his name, his articles and the trails he's blazed.  Scott’s worked with us previously on Improving .NET Application Performance and Scalability.
    • Dennis Rea - Dennis brings his years of editorial experience to the table.  He worked with us previously on our Security Guidance.
  • J.D. Meier's Blog

    TFS Guide Beta 1 is Available

    • 20 Comments

    Today we released Beta 1 of our Team Development with Visual Studio Team Foundation Server guide.  It's our Microsoft playbook for TFS.  This is our guide to help show you how to make the most of Team Foundation Server.  It's a distillation of many lessons learned.  It's a collaborative effort among product team members, the field, industry experts, MVPs, and customers.

    Contents at a Glance

    • Part I, Fundamentals
    • Part II, Source Control
    • Part III, Builds
    • Part IV, Large Project Considerations
    • Part V, Project Management
    • Part VI, Process Guidance
    • Part VII, Reporting
    • Part VIII, Setting Up and Maintaining the Team Environment


    Chapters

    • Introduction
    • Ch 01 - Introducing the Team Environment
    • Ch 02 - Team Foundation Server Architecture
    • Ch 03 - Structuring Projects and Solutions
    • Ch 04 - Structuring Projects and Solutions in Team Foundation Server
    • Ch 05 - Defining Your Branching and Merging Strategy
    • Ch 06 - Managing Source Control Dependencies in Visual Studio Team System
    • Ch 07 - Team Build Explained
    • Ch 08 - Setting Up Continuous Integration with Team Build
    • Ch 09 - Setting Up Scheduled Builds with Team Build
    • Ch 10 - Large Project Considerations
    • Ch 11 - Project Management Explained
    • Ch 12 - Work Items Explained
    • Ch 13 - MSF Agile Projects
    • Ch 14 - Process Templates Explained
    • Ch 15 - Reporting Explained
    • Ch 16 - Team Foundation Server Deployment
    • Ch 17 - Providing Internet Access to Team Foundation Server

    About Our Team

    • Prashant Bansode - Prashant's an experienced guidance builder and a master of execution.  He's a solid pillar on the team.
    • Jason Taylor - Jason's a master of results.  I've worked with Jason across a few projects.  He always hits the ground running and accelerates from there.
    • Alex Mackman - I worked with Alex on Building Secure ASP.NET Applications and Improving .NET Application Performance and Scalability, so it's great to have him back.
    • Kevin Jones - Kevin is new to our team, but getting up to speed fast.  He brings a wealth of Visual Studio Team System experience to the table.


    Contributors and Reviewers
    Here are our contributors and reviewers so far:

    • Microsoft: Ajay Sudan; Ajoy Krishnamoorthy; Alan Ridlehoover; Alik Levin; Bijan Javidi; Buck Hodges; Burt Harris; Doug Neumann; Edward Jezierski; Eric Charran; Graham Barry; Jeff Beehler; Julie MacAller; Ken Perilman; Mario Rodriguez; Marc Kuperstein; Matthew Mitrik; Michael Puleio; Nobuyuki Akama; Paul Goring; Pete Coupland; Peter Provost; Rob Caron; Robert Horvick; Rohit Sharma; Sajee Mathew; Siddharth Bhatia; Tom Hollander; Venky Veeraraghavan
    • External: David P. Romig, Sr; Eric Blanchet; Leon Langleyben; Martin Woodward; Quang Tran; Sarit Tamir; Tushar More; Vaughn Hughes; Michael Rummier


  • J.D. Meier's Blog

    Put Your Thinking Hat On

    • 1 Comment

    I'm a fan of using different techniques for improving thinking. Here's a write-up on Six Thinking Hats.  This book presents a simple and effective thinking framework.  What I like about the approach is that it's both effective for individuals as well as a team.  What I also like about the approach is that rather than focus on trying to change personalities, it creates a way for different personalities to play well together.  Imagine the time you'll save in meetings!

    Because Six Thinking Hats uses the hats as a metaphor, nobody gets a label.  Instead, the entire team can put on the relevant hat for the task at hand: white, red, black, yellow, green, or blue.  Imagine the surprises you get when the dominantly data-driven put on their green hats and get creative.  Better yet, imagine what happens when the overly optimistic put on their black hats and play devil's advocate.

    What's interesting is that this type of mode switching already happens.  For example, in security we use white hats and black hats.  On my team, I often ask, "What's your gut say?" to tap into intuition and emotions.  If I see the team getting too optimistic, I ask, "Why won't this work?"

    I think having a simple set of metaphorical hats and rules for the game will really help improve thinking and collaboration, and avoid the stalemates that can often happen in meetings.  As the author puts it, you "think your way forward versus judge your way forward."

  • J.D. Meier's Blog

    Feed Readers

    • 4 Comments

    Darren asks Which Feed Reader is Best?  I was going to just add a comment, but it quickly turned into a post.

    I've used Bloglines, Google.com, Google Reader, Live.com, Newzie, OMEA Reader, and RSS Bandit.  I know I've used more that I'm forgetting.  They all have their strengths and weaknesses, so finding the right match for my scenarios is the key.  They all seem to continue to improve, so I find I also have to go back and re-evaluate from time to time.

    For the rich desktop experience, I ended up using Newzie.  Rob pointed me to it; I know he does a lot of feed reading, and he too had tried a lot of readers.  What's interesting about Newzie is its use of color-coding to flag by time.  I also like the fact that it has multiple views, including a tree view, list view, news ticker view, and a today view.

    For my "webtop" experience, I mostly end up using Live.com so I can get to my feeds from any desktop.  I created pages for different topics.  This lets me chunk up my reading experience and never get overwhelmed.  The nice thing about a page view is that it's easy to scan across.

    When I help somebody get started reading feeds, if they have a Windows Live account, I show them how to add pages and feeds to Live.com, since I don't think it's obvious.  If they don't have a Windows Live account, I have them download Newzie, help them add a few feeds for their favorite topics, and then show them how to switch views.

    My Related Posts

  • J.D. Meier's Blog

    The Better Adapted You Are, the Less Adaptable You Tend To Be

    • 10 Comments

    I was skimming The Secrets of Consulting and I came across this nugget: 

    “...Many years ago, Sir Ronald Fisher noted that every biological system had to face the problem of present versus future, and that the future was always less certain than the present. To survive, a species had to do well today, but not so well that it didn’t allow for possible change tomorrow. His Fundamental Theorem of Natural Selection said that the more adapted an organism was to present conditions, the less adaptable it tended to be to unknown future conditions. We can apply the theorem to individuals, small groups of people, large organizations, organizations of people and machines, and even complex systems of machinery, and can generalize it as follows: The better adapted you are, the less adaptable you tend to be...”
    Source: Gerald M. Weinberg, The Secrets of Consulting (New York: Dorset House Publishing, 1985), pp. 29-30.

    Along the same lines, I was scanning Lean Software Engineering and came across this nugget:

    "... When it comes to large-scale, creative engineering, the right processes for all the various teams in an organization depends on both people and situation — both of which are constantly changing. You can’t just adopt a particular process and be done with it.  So really the only “bad process” is one that doesn’t provide framework to reflect and permission to adapt..."
    Source: Avoid Dogma When Herding Cats

    This reminded me of a quote from Heraclitus: "Nothing endures but change."

    I'm a fan of adaptability and continuous improvement.  I think adaptability is a key ingredient for effectiveness.  I always reflect on and test how adaptable is my mindset? ... my approach? ... my tools? ... my teams? ... my organization? ... my company? ... etc.

  • J.D. Meier's Blog

    ARCast.net - Defending the Application

    • 1 Comment

    Ron talks security with Alik in ARCast.net - Defending the Application.  If you want to hear some practical advice on security, listen to Alik.  He's in the field doing security every day with customers.  It doesn't get any more real-world than that.

    The key take-away for me is the focus on proven practices.  I have a belief that focusing on a set of core practices is more effective than chasing all the variations of bad symptoms.  For example, if you adopt a practice of constraining, rejecting and sanitizing input, and you verify input for length, range, format and type, you tackle injection issues (cross-site scripting, SQL injection, SQL truncation ... etc.) at the source.
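
    In code, constrain-first validation looks something like this minimal sketch (in Python; the rules themselves are illustrative):

        import re

        # A minimal sketch of constrain-first input validation: check type,
        # length, format, and range before the value reaches a query or a page.
        USERNAME_FORMAT = re.compile(r"^[A-Za-z0-9_]{1,32}$")  # illustrative rule

        def validate_username(value):
            if not isinstance(value, str):              # type
                raise ValueError("username must be a string")
            if not 1 <= len(value) <= 32:               # length
                raise ValueError("username length out of range")
            if not USERNAME_FORMAT.match(value):        # format (whitelist, not blacklist)
                raise ValueError("username contains disallowed characters")
            return value

        def validate_quantity(value):
            qty = int(value)                            # type (raises on bad input)
            if not 1 <= qty <= 100:                     # range
                raise ValueError("quantity out of range")
            return qty

        validate_username("jd_meier")
        validate_quantity("5")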

    At one point in the interview, Ron mentions that attackers share information all the time.  Unfortunately, security is a game where what you don't know can hurt you.  That's why I think community efforts and knowledge bases are a must.  I'm glad to see more information sharing in blogs.  I'm also glad to see efforts like the Open Web Application Security Project (OWASP).  It's also why I try to share as much as possible through patterns & practices security guidance, Guidance Explorer, and SecurityGuidanceShare.com.


  • J.D. Meier's Blog

    Per's Blogging

    • 1 Comment

    Per Vonge Nielsen is blogging!  He's been my manager for several years at patterns & practices.  He's also been a mentor to me and many others, so it's great to see him share his learnings more broadly.  Per has a way of distilling information down into the essential insights, which is a treat in today's information-overloaded world.

    Enjoy Per's first post - Divide and Conquer – one step at a time.

  • J.D. Meier's Blog

    Security Guidance Share Experiment

    • 3 Comments

    SecurityGuidanceShare.com is an experiment.  I'm testing different ways to maintain and share a large body of guidance.  I'm also exploring ways to factor and maintain a comprehensive set of more stable principles and practices, while dealing with more volatile, technology-specific information.

    I'd like your feedback on

    1. Overall organization of the information (it's a massive body)
    2. Usability of the chunks (can you grab just what you need? are the chunks right-sized?)
    3. Ability to find your way around

    My two favorite features:

    1. All Pages - this lets me quickly see the knowledge base at a glance.
    2. Inspection Questions - these are factored so you can chunk up your inspections.

    Comment here or send mail to SecNet.

  • J.D. Meier's Blog

    Jason Taylor is Blogging

    • 1 Comment

    Are you experiencing anxiety, self-doubt, or guilt?  It might not be your fault.  A parasite might be controlling your mind.  Jason explains how in Mind Control and the Friendly Mouse.

    I've worked with Jason for a few years from building software to writing guidance.  He's fast and effective.  We regularly swap techniques for getting results.  He's got a gift for distilling insights into action.  He shares that gift in his blog.

    Check out Jason Taylor's blog - The Good Life, to learn:

    • How to be an effective manager
    • How to be an effective leader
    • How to prioritize tough decisions

    You can also use his blog to learn how to recover from repetitive stress injuries.

    Jason's currently working with me and Prashant on the patterns & practices Visual Studio Team System Guidance project.

  • J.D. Meier's Blog

    Incremental Environments for Performance Excellence

    • 2 Comments

    Mark Tomlinson shared an emerging industry practice with me.  Customers are setting up incremental environments.  The environments are incremental steps from a developer environment to production.
     
    Incremental Environments

    1. Component-level performance testing (close to dev).  The lab is set up with debuggers and profilers - anything a developer would need to investigate issues.
    2. Application performance testing.  A single "slice" of the architecture (good for scale-up and optimization/tuning); usually dedicated to optimizing or tuning a single application/system; debuggers and profilers are still set up.
    3. Performance integration.  Still the basic "slice" of the architecture, but now other applications or systems come into play; usually has multiple applications and supporting technologies that get loaded together (e.g., IIS and AD); network diagnostic tools and debuggers may sometimes be used here.
    4. System performance and stress.  Larger performance testing with scale-out scenarios, load balancing, and failover; larger-sized systems get more load, so you see more stressing of entire system resources, especially the network; often just for one application, but also for multiple-application integration testing.
    5. Large-scale integration and performance.  Multiple applications, with everything needed to prove business needs will be met; usually without some security and perhaps some 3rd-party integrations; usually not a stress-testing environment - e.g., virtual users are set to generate real-world pacing and load.
    6. Pre-production simulation.  This is just like the real thing - a full-sized system, with full security and network topology (only not production); used both for internally built applications and for 3rd-party solutions that must be integrated; production repros, patches, fixes, etc. can be tested here safely.

    There's no strict rule for how many of each type of environment to have, and the most sophisticated setups have multiple physical environments/labs that can be used for any of these purposes.  The beauty of this approach is that instead of having a great big wall to throw your application over, there's a series of incremental hurdles.  Each hurdle represents increasing requirements and constraints.
     
    This approach is also great for Centers of Excellence.  A Center of Excellence team can build the environment to reflect and codify their practices.   The Center of Excellence team can also harvest and share the lessons learned to help teams over each incremental step.

  • J.D. Meier's Blog

    Baking Performance Into the Life Cycle

    • 2 Comments

    To engineer for performance, you need to embed a performance culture in your development life cycle, and you need a methodology. When you use a methodology, you know where to start, how to proceed, and when you are finished.

    Keys to Performance Engineering
    These are fundamental concepts to performance engineering:

    • Set objectives and measure.
    • Performance modeling helps you design performance for your scenarios.
    • Measuring continues throughout the life cycle and helps you determine whether you are moving towards your objectives.

    High ROI Techniques
    These are some of the most effective techniques we use to directly impact performance results:

    • Performance Objectives
    • Performance Design Guidelines
    • Performance Modeling
    • Performance Design Inspections
    • Performance Code Inspections
    • Performance Testing
    • Performance Tuning
    • Performance Deployment Inspections

    Key Notes

    • Think about performance up front versus after the fact.  If performance isn't a part of your scenarios, you're ignoring your users' experience and your business's needs.  Don't expect users to ask for performance.  They just expect it.
    • Use objectives and constraints to set boundaries.  Objectives tell you how much to invest in performance and what good looks like (for users, for the system, and for the business).
    • Use objective-driven inspections over code reviews.  Don't tune your code for tuning's sake.  Know what good looks like.  Model and measure to know where to spend your time.  (Make sure your ladder is up against the right wall!)
    • Use design guidelines to make performance actionable.  Build a repository for your performance knowledge.  Wikis are great for this.  Capture your insights as principles, patterns, guidelines, ... etc.  Don't think of this as a blanket set of rules to follow.  Think of it as a knowledge base that you and your teams can draw from when designing solutions, doing inspections, tuning performance ... etc.

    More Information
    You can find more about the concepts above at:

  • J.D. Meier's Blog

    Lean Software Engineering

    • 1 Comment

    I'm jazzed to see Corey and Bernie on the blog scene.  They're partners in crime on a Lean Software Engineering blog.  They have real advice for real people doing software.

    Why listen to what Corey and Bernie have to say?  They know what they're talking about from experience.  They have the knowledge that can turn your software engineering around, if you need it.  A lot of what they know is not well known (or at least not applied), so their blog is something of a gateway to a world of better software engineering.

    Whether you shape software, build it, or manage it, you'll find insights you can use.  Here are some of the things you'll learn:

    • How do you determine the minimum deployable feature set?
    • What essential principle allows Lean development to be something more than Agile?
    • How to take an evolutionary approach to software process change?
    • Why is quality NOT the fourth variable in the project triangle?
    • How do you aggregate decentralized knowledge?
    • What happens when single-piece flow meets the V model?