J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

January 2011


    Methodologies at a Glance

    • 6 Comments

    I'm a fan of simple maps to help drill in. After all, it's hard to explore the concepts if you don’t know they exist, or you don’t know what they are called.  Below is a work in progress.  I’m making a quick, simple map of the key activities for some software project-relevant processes.

    I’m sure I’m missing key practices, and some of the names have changed.  So I’m sharing it, so that folks can share what they know, and we can get to a map that includes the right top-level names of the key practices.

    Process / Practices
    Kanban
    • Limit WIP (Work in Progress)
    • Measure the Lead Time
    • Visualize the workflow (see the Kanban sketch at the end of this post)
    Scrum
    • Burndown Chart
    • Daily Scrum
    • Definition of Done
    • Estimation
    • Impediment Backlog
    • Product Backlog
    • Product Owner
    • Retrospective
    • Scrum Master
    • Scrum Team
    • Sprint
    • Sprint Backlog
    • Sprint Demo
    • Sprint Planning Meeting
    • Velocity
    XP
    • Coding Standard
    • Collective Code Ownership
    • Continuous Integration
    • On-Site Customer
    • Pair Programming
    • Planning Game
    • Refactoring
    • Simple Design
    • Small Releases
    • Sustainable Pace
    • System Metaphor
    • Test-Driven Development
    MSF Agile Activities
    • Build a Product
    • Capture Project Vision
    • Close a Bug
    • Create a Quality of Service Requirement
    • Create a Scenario
    • Create a Solution Architecture
    • Fix a Bug
    • Guide Iteration
    • Guide Project
    • Implement a Development Task
    • Plan an Iteration
    • Test a Quality of Service Requirement
    • Test a Scenario
    Artifacts (Work Products)
    • Architectural Prototypes
    • Bug Reports
    • Change Sets
    • Check In Notes
    • Classes
    • Development Tasks
    • Interface Models
    • Personas
    • Scenarios
    • Storyboards
    • System Architecture (See DSI and Whitehorse)
    • Test Plan (see Context-Driven Testing)
    • Test Cases
    • Unit Tests
    • Vision Statement
    RUP

    Activities

    • Analyze Runtime Behavior (Implementer)
    • Architectural Analysis (Architect)
    • Assess viability of Architectural Proof-of-Concept (Architect)
    • Capsule Design (Capsule Designer)
    • Class Design (Designer)
    • Construct Architectural Proof-of-Concept (Architect)
    • Database Design (Database Designer)
    • Describe Distribution (Architect)
    • Describe the Run-time Architecture (Architect)
    • Design Testability Elements (Designer)
    • Design the User-Interface (User-Interface Designer)
    • Develop Installation Artifacts (Implementer)
    • Execute Developer Test (Implementer)
    • Identify Design Mechanisms (Architect)
    • Identify Design Elements (Architect)
    • Implement Design Elements (Implementer)
    • Implement Developer Test (Implementer)
    • Implement Testability Elements (Implementer)
    • Implementation Model (Software Architect)
    • Incorporate Existing Design Elements (Architect)
    • Integrate Subsystem (Integrator)
    • Integrate System (Integrator)
    • Plan Subsystem Integration (Integrator)
    • Plan System Integration (Integrator)
    • Prototype the User Interface (User-Interface Designer)
    • Review Code (Technical Reviewer)
    • Review the Architecture (Technical Reviewer)
    • Review the Design (Technical Reviewer)
    • Structure the Implementation Model (Software Architect)
    • Subsystem Design (Designer)
    • Use-Case Analysis (Designer)
    • Use-Case Design (Designer)

    Artifacts

    • Analysis Class
    • Analysis Model
    • Architectural Proof-of-Concept
    • Build
    • Capsule
    • Data Model
    • Deployment Model
    • Design Class
    • Design Model
    • Design Package
    • Design Subsystem
    • Event
    • Implementation Element
    • Implementation Model
    • Implementation Subsystem
    • Integration Build Plan
    • Interface
    • Navigation Map
    • Protocol
    • Reference Architecture
    • Signal
    • Software Architecture Document
    • Test Design
    • Test Stub
    • Testability Class
    • Testability Element
    • Use-Case Realization
    • User-Interface Prototype
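
    To make one row of the map concrete, here is a minimal sketch of two of the Kanban practices above: limiting WIP and measuring lead time.  It's an illustration under my own assumptions (the class and method names are hypothetical), not part of the Kanban method itself.

    ```python
    from datetime import timedelta

    class KanbanBoard:
        """Toy board illustrating 'Limit WIP' and 'Measure the Lead Time'."""

        def __init__(self, wip_limit):
            self.wip_limit = wip_limit
            self.requested = {}      # card -> time the request arrived
            self.in_progress = set()
            self.lead_times = []     # arrival-to-delivery times, per finished card

        def add(self, card, now):
            self.requested[card] = now

        def pull(self, card):
            # "Limit WIP": you have to finish something before starting more.
            if len(self.in_progress) >= self.wip_limit:
                raise RuntimeError("WIP limit reached; finish something first")
            self.in_progress.add(card)

        def deliver(self, card, now):
            # "Measure the Lead Time": from request arrival to delivery.
            self.in_progress.discard(card)
            self.lead_times.append(now - self.requested.pop(card))

        def average_lead_time(self):
            return sum(self.lead_times, timedelta(0)) / len(self.lead_times)
    ```

    For example, with datetime.now() as the clock, a board created with KanbanBoard(wip_limit=3) refuses to pull a fourth concurrent card, and average_lead_time() reports how long cards take from request to delivery.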

    You Against Your Lists

    • 2 Comments

    As a PM (Program Manager) at Microsoft, one of the things I end up doing a lot is making lists.  Lists of priorities, lists of features, lists of scenarios, lists of open issues, lists of ideas, etc.

    I know a lot of people make lists.  But what's the difference that makes the difference?

    I think it's three things:

    1. priorities
    2. precision
    3. action

    As the joke goes, a plan is a list of things you'll never do.  That's what happens when you fall into analysis-paralysis or don't take action.  (BTW - action and timeboxing are the cure for analysis-paralysis.)

    A "laundry list" is not an actionable list because it's just a random dump of things.  The laundry list becomes actionable when you rank and prioritize the items, turning it into an "ordered list."

    Precision is an important attribute.  Precision simply means filtering out everything that's not directly relevant.  I find the most valuable lists are precise.  I’d rather have two precise lists than one mixed-up list.  A precise list of actions, or a precise list of ideas, or a precise list of issues is a thing of beauty.  It’s the elegance before the action.

    There are list makers and there are list doers.  Having a list is a start, but action is what really makes any list valuable.  An effective list is a springboard for the right actions.

    If you're an avid list maker, challenge yourself to be a skilled list doer.  It's a key to making things happen, and action is the difference that makes the difference.


    Guidance Product Model for Domain “X”

    • 0 Comments

    Here is a sketch of the mental model I use when thinking through how to address a space with prescriptive guidance:

    [Image: the guidance product model stack]

    At a high level, it’s a “stack,” and by having a model of the stack, you can choose how far up the stack to go:

    • Domain Knowledge – This is about breaking the problem space down into useful nuggets.  The most useful nuggets I’ve found are: frames, application types, qualities, hot spots, design guidelines, principles, patterns, and capabilities.  Frames are simply mental models or ways to look at the space.  Hot spots are areas within the problem space that get a lot of attention, either because they create a lot of pain, create a lot of opportunity, or are simply high-use.  Qualities are quality attributes like security, performance, reliability, etc.  These cross-cutting concerns shape the solutions within the space.
    • “Blue Books” – This is a way to package, share, and scale the expertise within a given domain.  It’s a collection of the key principles, patterns, and practices for that domain, packaged into a cohesive whole.  This action-guide, or prescriptive guidance, creates a bird’s-eye view of guiding principles that help govern solutions within the space.  By sharing the common application types, hot spots, frames, deployment patterns, technology maps, etc., it creates a way to make sense of the problem domain, and serves as a firm foundation for more specific solutions.  It’s like the driver’s guide for the space.  See The Power of Blue Books for Platform Impact.  They have been the most effective way I’ve been able to transfer large amounts of know-how to the developer and architect communities.
    • Playbooks – These are effectively “thin” Blue Books.  A playbook can be used to target a specific scenario or set of scenarios.  Playbooks build on the generalized guidance and make it more specific, by walking through instances or examples to light up the guidance.
    • Knowledge Base (KB) – I’m a fan of a “thin guide” + “thick KB” approach.  The knowledge base serves as a clearing house for the nuggets of insight and action within the domain.  It’s more of a random-access collection of useful solutions or insights into key concepts, without having to be packaged into a guide.
    • Tooling – There are many ways to provide tooling.  The ideal scenario is to have flexible tools that codify the patterns and practices by having better defaults, starter templates, starter code, guiding principles, baselines that can easily be tailored, etc.  Ideally, the tooling is connected to a live stream, where it can continuously evolve based on the knowledge from the community.  For example, “building code” would be very specific guidelines, such as technology recommendations or scenario-based patterns.  Ideally, these can turn into rule-checkers, or at least provide support for manual inspection (see the small rule-checker sketch after this list).
    • Community – This is where the magic can happen.  By providing enough of these raw materials in the form of mental models, principles, patterns, etc., it’s easier for folks in the trenches, thought leaders, and anybody with a passion in the space to build on the knowledge, stretch and evolve it in new directions, and continue to advance the knowledge in the domain.  A key to enabling this, though, is having places to go where people feel like their contributions can actually make a difference, and help influence or impact the greater good.
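
    Here's a minimal sketch of the rule-checker idea from the Tooling bullet.  The rules, settings, and names are hypothetical; the point is just that “building code” guidelines can be encoded as small, runnable checks:

    ```python
    # Encode a few hypothetical "building code" guidelines as rules,
    # then run them against a project's settings.
    def check_debug_disabled(settings):
        if settings.get("debug", False):
            return "Turn off debug mode in production builds."

    def check_timeout_set(settings):
        if "request_timeout_seconds" not in settings:
            return "Set an explicit request timeout."

    RULES = [check_debug_disabled, check_timeout_set]

    def run_rules(settings):
        # Collect the messages from every rule that fires.
        return [msg for rule in RULES if (msg := rule(settings)) is not None]

    findings = run_rules({"debug": True})
    # -> ["Turn off debug mode in production builds.",
    #     "Set an explicit request timeout."]
    ```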

    One thing I didn’t explicitly show in the model is the idea of media, such as videos and slides, and train-the-trainer material.  To really get adoption, the media and train-the-trainer material help spread the word.  They make it easier for raving fans to adopt the guidance, to help spread it, and to help teach others.

    Together, all these parts work to create a “platform” and an “ecosystem” for prescriptive guidance.  While it’s not a hard and fast model, it has helped me both figure out the opportunities and evaluate the competition, and it helps me see where various types of deliverables fit into a bigger backdrop for impact.


    Just Enough

    • 0 Comments

    I happened to look over to my bookshelf and noticed that I have two books that landed together by chance:

    1. Just Enough Project Management
    2. Just Enough Software Architecture

    I’m a fan of “just enough.”  One of my mentors liked to quiz me with the question: 

    “How much process do you need?”

    The answer was always, “just enough.”

    The question, of course, then becomes, how much is “just enough?”  The answer to that is, it depends: what’s the risk?  What’s at stake?  The process should be commensurate with the risk.

    I always liked the example we gave regarding how much to invest in performance modeling:

    “The time, effort, and money you invest up front in performance modeling should be proportional to project risk. For a project with significant risk, where performance is critical, you may spend more time and energy up front developing your model. For a project where performance is less of a concern, your modeling approach might be as simple as white-boarding your performance scenarios.”

    Just enough not only helps you eliminate waste, in the form of unnecessary overhead, but it also frees you up to better balance your other trade-offs and priorities.


    Why Does Culture Matter?

    • 0 Comments

    I saw the Facebook privacy issue on the news.  I remember somebody saying that developers should just be responsible.  A common practice is to "make it work, then make it right."  The problem is, you don't always get a chance to "make it right."  That very much depends on what your organization values.  The values define the culture.

    I flashed back to our early values in patterns & practices.  The thing to know about values, is that values flow down.  It's what the leaders say, it's what they reward and punish.  It reminded me of why our collective set of values was so important.

    If you value cost …

    1. You might find that nobody who makes the stuff cares about the stuff.
    2. Your customers only love you while you're the cheapest.
    3. You might be fighting a losing battle, or win a battle only to lose the price war ... another person, team, company, or country is always cheaper.
    4. If it's all about cost, nobody will be excited about making great things, or fixing the stuff that gets in the way of great.

    If you value execution …

    1. You might ship a bunch of stuff.
    2. You might spend all your time on the wrong things.
    3. You might find nobody cares about the stuff, including your customers.
    4. You might ship at a rate that nobody can absorb.
    5. You might ship a bunch of problems that your users have to deal with.
    6. You might ship a bunch of stuff, but lack the impact that counts.
    7. You give yourself a chance to iterate your way to goodness and greatness.

    If you value learning …

    1. You might find the problems before your customers do.
    2. You might fix the problems your customers tell you about.
    3. You might prevent your problems in the first place.
    4. You might evolve your process and your product enough to stick around.
    5. You attract continuous learners, who like to improve what's around them.
    6. Your learning loops create a path to greatness.
    7. You might invest in innovation and R&D, in a way that changes the game, or at least your game.

    If you value quality …

    1. You might change your game.
    2. You attract people with a passion for excellence.
    3. You create trust and reputation (good things in a reputation world).
    4. You improve your people, process, and product as a natural by-product.
    5. You prioritize time and energy to "make it right."
    6. Your quality becomes a differentiator that's hard to copy.

    If you value customer-connection ...

    1. You might value what your customers value.
    2. You might know how to price your stuff.
    3. You might figure out what they don't like, maybe even before they do.
    4. You might ship the things that your customers care about.
    5. You might find more ways to create value, in ways that match your customer's world.

    When I look back to the values we had in patterns & practices, I see how they helped pave the way for great:

    • Continuous Learning - Continuous learning, innovation, and improvement - we have a bias toward action (over more planning) and customer engagement and feedback (over more analysis).
    • Customer-Connected - Open, collaborative relationships with customers, the Microsoft field, partners, and Microsoft teams.
    • Execution - We take strategic bets, but we hold ourselves accountable for creating value, shipping early and often, and delivering results that have impact with customers and in Microsoft.
    • Transparent - Explicit, transparent, and direct communication with customers, with our team, and with others in our company.
    • Quality - Quality over scope - no guidance is better than bad guidance.

    If you don't think you, your team, your company, etc. are on a path to great, check the values for clues.  It’s not about having this value or that (after all, all values are … well … valuable) ... the magic is in the blend, and often the difference is in what’s missing or out of balance.


    Lessons Learned from Bill Gates

    • 0 Comments

    I shared some lessons learned from Bill Gates.  He sets a high bar and pushes the envelope of what's possible in a lifetime.  That's what's great.  Thinking back, he was one of the key reasons I joined Microsoft.  He'd rather be making impact than sitting on the beach.  His passion is contagious.  He set the bar for “smart and gets results.”

    Read lessons learned from Bill Gates and if you have a lesson or insight from Bill, be sure to share.


    Lessons Learned in Execution

    • 0 Comments

    I’ve been thinking about execution and the lessons learned.   I’ve summarized some insights and reminders.

    I’ve been lucky enough to grow up with patterns & practices over the last 10 years, so I’ve been able to see what works, what doesn’t, and the difference that makes the difference. 

    The Vital Few
    Here are the vital few lessons:

    1. Portfolios, Programs, and Projects.  The portfolio helps paint the map of investments at a glance.  It’s your heat map of opportunity: where to invest, and where to de-invest.  Programs connect the projects to simple themes and big bets.  Your execution is gated by how many projects can run in parallel at a healthy rate.  For example, “each year, we can produce 5 big projects and 3 small ones”, or “every six months, we can do 3 big things or 10 little ones”, etc.
    2. Product Line and Catalog.  Internally, you have a product line – the “things” you make.  Externally, you have a “catalog,” which organizes your products in a meaningful way for customers.  Customers can ask for “xyz” in the catalog.  An effective model for catalogs is organizing by “topic or category” and “type of thing.”  The closer you can map your portfolio to your catalog, the easier it is to see execution, results, and customer impact.  Your product line helps you know at a glance roughly how long a given product takes to build.  A good product line is also a way to ensure that the interfaces across your product line and the key relationships among your products are well understood.
    3. Project cycles and product cycles.  Having a distinction between the project cycle and the product cycle helps you optimize and use the right tool for the job.  For a simple example, Scrum is more of a project process, while XP is more of a product development process.  The project cycle is important at the business level.  It’s the cadence of the projects.  It’s where the vital few milestones are established in terms of start, key checks, ship, and post-mortem.  Product cycles, on the other hand, are geared towards product development.  The secret sauce here is that the work breakdown structure (WBS) is shaped by the product line, and maturing your work breakdown structure is how you streamline execution (see the WBS sketch after this list).  The WBS is also a way to share tribal knowledge and promote success across teams.  It’s how we know how to build patterns, or build a guide, or build a Reference Implementation, versus make it up as we go, do it ad-hoc, or just wing it.  It’s also how we know the right people and skills to have on the project, understand the nature of the work, know the key bottlenecks, and know the basic timeline.
    4. Vendor partners you trust.  This is the key to scaling execution.  It’s the key to keeping the work engaging.  It’s the key to moving up the stack.  It’s the key to keeping up with a changing landscape.  The partnership itself is key, though, versus throwing work over the wall.  Sharing values, principles, patterns, and practices helps optimize the execution.  One way to improve execution here is to have the right relationships in place (such as an account owner).  Another way to dramatically improve results is simple checklists that help share tribal know-how.
    5. Project teams and resource pools.  If project work is how we get things done, then the model for the project team is essential.  Time and again, projects fail because they didn’t have the right skills or capabilities on the team, or had too many dependencies or risks that weren’t obvious.  The key, though, is having resource pools of the right disciplines that support having the right project teams.  The keys to effective project teams are: roles, responsibilities, capabilities, accountabilities, empowerment, processes, and tools.
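
    Here is a minimal sketch of the WBS idea from point 3.  The product types, task names, and day estimates are all hypothetical; the point is that a product line gives each “thing” a reusable breakdown, and estimates roll up from the leaves:

    ```python
    # A product line gives each "thing" we make a reusable breakdown.
    # Leaf values are made-up estimates in days.
    WBS = {
        "Guide": {
            "Scenario maps": 5,
            "Guidelines": 15,
            "Checklists": 5,
            "Review and test": 10,
        },
        "Reference Implementation": {
            "Architecture spike": 10,
            "Implementation": 30,
            "Test cases": 15,
        },
    }

    def estimate(node):
        """Roll leaf estimates up the tree."""
        if isinstance(node, dict):
            return sum(estimate(child) for child in node.values())
        return node

    days_for_guide = estimate(WBS["Guide"])   # 35
    days_for_portfolio = estimate(WBS)        # 90
    ```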

    20 Additional Game-Changers …
    Here are some additional ways to improve execution:

    1. Planning Frameworks.  This is the heart of portfolio planning, and it creates buy-in from the top down and an execution framework for the bottom up.  What’s important is that you have an agreed planning framework that’s easy to change.  Additionally, it should be responsive to learning from the bottom up and from people in the trenches.
    2. Organizational model.  This is the heart of the execution engine.  The key here is reducing decision making complexity, pushing autonomy to the end leaf nodes, and providing a clear escalation path for cross-team issues.
    3. Feature/Scenario crews.  This is the unit of execution for projects.  It’s the assembly of the key roles and disciplines into a functional team, for a product, feature, or scenario.  In patterns & practices, our “solution” or feature crews consisted of program manager, product manager, architect, developers, testers, user education, and vendor support.
    4. Cadence and communication.  This is where project milestones, checkpoints, and communiqués or newsletters, as well as quarterly business reviews, help make a difference.
    5. Know your Worst Bottleneck.  TOC (Theory of Constraints) boils down to knowing what you’re gated by: ideas? people? money? time? … Where’s the friction?  If you had X, how much more Y could you do?
    6. Rhythm of the Business (ROBs).   ROBs help create a cadence and a framework for enforcing accountability.  Minimally, this translates into quarterly business reviews.
    7. Measures, metrics, and scorecards.  The key here is having a small frame around both internal success and external success.  For example, external success might measure awareness, adoption, and satisfaction.  Internal success might be around product impact, execution excellence, and team health.
    8. Dashboards.  Dashboards are a simple way to reflect back to everybody how we’re doing.  A good dashboard confirms what you know, reveals surprises, and helps create a spirit of momentum and learning.
    9. Customer Success Metrics.  This is crucial for online success.  It’s about having one measure that you can use to evaluate customer success.  For example, Amazon can measure completed transactions.  That means somebody found what they wanted and voted for it by paying, and Amazon successfully fulfilled the need.  When you have the one simple measure of success, then you can experiment with your online strategy and do A/B testing to see whether you improve or reduce success against the one guiding metric – it’s your North Star of online success (see the metrics sketch after this list).
    10. The Pie and the Slices.  This is about knowing at a glance how big the pie is, and which slices are yours to impact.  This shapes the charter, helps establish boundaries and collaboration, and reduces or changes competition in a healthy way.
    11. Innovate in your process and product.  Innovation in your process is what enables innovation in your product … otherwise, you end up too expensive or get pushed out by a competitor’s approach.
    12. Pilots and experiments.  This is a way to reduce risk, while setting the stage for innovation.  The simplest way to reduce the risk is to timebox it, constrain resources, set a budget limit, or some combination thereof.  The key to getting results is knowing what your pilot or experiment is testing for.  Start with hypotheses so you can guide your work and make the most of it.
    13. Raving fans.  Measure results by building raving fans and brand loyalty.  Net Promoter Score and customer satisfaction scores are key (see the metrics sketch after this list).
    14. Customer Proof-Points.  These give you a quick way to tell stories of customer success, and to boil results down to two simple numbers: 1) a change in satisfaction, and 2) a change in customer confidence.
    15. Customer-Connected Engineering.  Co-create the product with the customer.  Rather than build it and they will come, or throw it over the wall to see what sticks, pair with customers up front.  See Customer-Connected Engineering.
    16. Surveys are the short-cut.  Surveying customers up front to influence the investments and where to invest helps ensure hitting the customer sweet spots.  It’s part of Customer-Connected Engineering for creating feedback loops.
    17. Scenario Maps.  Create maps of questions and tasks from customers.  This is one of the most effective ways to gain a solid handle on the problem space.  What you lose in time from execution, you make up in time by working on the right things (and more importantly, by not wasting time on the wrong things.)  It’s a focus on the vital few, while having a shared map of the larger context of the problem space.
    18. Measure against effectiveness.  This is the short-cut to intrinsic value.  Rather than chase perception, you can cut right through and measure customers against performing concrete/actionable tasks with the product.  This provides actionable feedback to improve your product, and it improves customer success, effectiveness, and satisfaction along the way.
    19. Quality gates and inspections.  A simple way to streamline the execution and to avoid downstream do-overs is to inject just enough quality gates and inspections that bake in the learnings.
    20. Templates and tools.  Templates speed up projects.  By having templates for scorecards, vision scopes, and checkpoints, it’s easier to scale execution across the team and to make success more systematic and repeatable.  It also makes it easier to fine-tune the process, since there is a common backdrop.
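
    Here is a minimal sketch of the metrics ideas from points 9 and 13.  The counts and ratings are made up; the A/B comparison is a standard two-proportion z-test, and the NPS formula is the usual promoters-minus-detractors, neither of which is specific to this list:

    ```python
    from math import sqrt

    def ab_lift(success_a, total_a, success_b, total_b):
        """Compare variants A and B on the single 'North Star' success metric."""
        pa, pb = success_a / total_a, success_b / total_b
        pooled = (success_a + success_b) / (total_a + total_b)
        se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
        return pb - pa, (pb - pa) / se  # lift, z-score (|z| > 1.96 ~ 95%)

    def net_promoter_score(ratings):
        """NPS = % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(r >= 9 for r in ratings)
        detractors = sum(r <= 6 for r in ratings)
        return 100 * (promoters - detractors) / len(ratings)

    # Hypothetical numbers: variant B completes more transactions.
    lift, z = ab_lift(200, 1000, 240, 1000)                 # lift = 0.04, z ≈ 2.16
    score = net_promoter_score([10, 9, 8, 6, 10, 7, 3, 9])  # 25.0
    ```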