J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

    Any Activity Can Be Turned into a Game

    Any activity can be turned into a game, if it meets the right criteria.  Wise words from Dan Cook:

         “If an activity can be learned…

         If the player’s performance can be measured…

         If the player can be rewarded or punished in a timely fashion…

         Then any activity that meets these criteria can be turned into a game.”

    Gamification is hot.  I called it out in my Trends for 2013 roundup.   When all things are equal, fun is a differentiating factor.

    You Might Also Like

    Microsoft Secret Stuff

    The Gamification of Education

    Wearable Computing

    Agile Security Engineering

    “It is not necessary to change. Survival is not mandatory.” — W. Edwards Deming

    I gave a talk for the developer security MVPs at the Microsoft 2010 MVP Summit last week.  While I focused primarily on Azure Security, I did briefly cover Agile Security Engineering.  Here is the figure I used to help show how we do Agile Security Engineering in patterns & practices:

    [Figure: Agile Security Engineering]

    What’s important about the figure is that it shows an example of how you can overlay security-specific techniques on an existing life cycle.  In this case, we simply overlay some security activities on top of an Agile software cycle.

    Rather than making security a big up-front design, doing it all at the end, or using other security approaches that don’t work, we’re baking security into the life cycle.  The key here is integrating security into your iterations.

    Here is a summary of the key security activities and how they play in an agile development cycle:

    • Security Objectives – This is about getting clarity on your goals, objectives, and constraints so that you effectively prioritize and invest accordingly.
    • Security Spikes – In Agile, a spike is simply a quick experiment in code for the developer to explore potential solutions.  A security spike is focused on exploring potential security solutions with the goal of reducing technical risk.  During exploration, you can spike on some of the cross-cutting security concerns for your solution.
    • Security Stories – In Agile, a story is a brief description of the steps a user takes to perform a goal.  A security story is simply a security-focused scenario.  This might be an existing “user” story, but you apply a security lens, or it might be a new “system” story that focuses on a security goal, requirement, or constraint.  Identify security stories during exploration and during your iterations.
    • Security Guidelines – To help guide the security practices throughout the project, you can create a distilled set of relevant security guidelines for the developers.  You can fine tune them and make them more relevant for your particular security stories.
    • Threat Modeling – Use threat modeling to shape your software design.  A threat model is a depiction of potential threats and attacks against your solution, along with vulnerabilities.  Think of a threat as a potential negative effect and a vulnerability as a weakness that exposes your solution to the threat or attack.  You can threat model at the story level during iterations, and you can threat model at the macro level during exploration.
    • Security Design Inspections – Similar to a general architecture and design review, this is a focus on the security design.  Security questions and criteria guide the inspection.  The design inspection is focused on higher-level, cross-cutting, and macro-level concerns.
    • Security Code Inspections – Similar to a general code review, this is a focus on inspecting the code for security issues.  Security questions and criteria guide your inspection.
    • Security Deployment Inspections – Similar to a general deployment review, this is a focus on inspecting for security issues of your deployed solution.  Physical deployment is where the rubber meets the road and this is where runtime behaviors might expose security issues that you didn’t catch earlier in your design and code inspections.

    The sum of these activities is more than the parts, and using a collection of proven, lightweight activities that you can weave into your life cycle helps you stack the deck in your favor.  This is in direct contrast to relying on one big silver bullet.

    Note that we originally used “reviews,” which are more exploratory, but we later optimized around “inspections.”  The distinction is that inspections use criteria (e.g., a 12-point inspection).  We share the criteria using security checklists for design, coding, and deployment inspections.

    There are two keys to chunking up security so that you can effectively focus on it during iterations:

    1. Security stories
    2. Security frame

    Stories are a great way of chunking up value.  Each story represents a user performing a useful goal.  As such, you can also chunk up your security work by focusing on the security concerns of a story.

    A security frame is a lens for security.  It’s simply a set of categories or “hot spots” (e.g. auditing and logging, authentication, authorization, configuration management, cryptography, exception management, sensitive data, session management.)   Each category is a container for principles, patterns, and anti-patterns.  By grouping your security practices into these buckets, you can more effectively consolidate and leverage your security know-how during each iteration.  For example, one iteration might have stories that involve authentication and authorization, while another iteration might have stories that involve input and data validation.

    Together, stories and security frames help you chunk up security and bake it into the life cycle, while learning and responding along the way.
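    To make the idea concrete, here is a minimal sketch (in C#, purely illustrative and not from any p&p tooling) of one way to tag security stories with security frame categories so that each iteration can pull in the guidelines, checklists, and inspection criteria for just the hot spots it touches.  The type names and story titles are made up for the example.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hot spots from the security frame described above.
    enum HotSpot
    {
        AuditingAndLogging, Authentication, Authorization, ConfigurationManagement,
        Cryptography, ExceptionManagement, InputAndDataValidation, SensitiveData, SessionManagement
    }

    // A security story is just a story viewed through a security lens,
    // tagged with the hot spot it exercises and the iteration it lands in.
    record SecurityStory(string Title, HotSpot HotSpot, int Iteration);

    class IterationPlanning
    {
        static void Main()
        {
            var backlog = new List<SecurityStory>
            {
                new("User signs in with an organizational account", HotSpot.Authentication, 1),
                new("Admin restricts report access by role", HotSpot.Authorization, 1),
                new("User uploads a CSV that is validated before import", HotSpot.InputAndDataValidation, 2),
            };

            // Group iteration 1's stories by hot spot so the relevant guidelines
            // and inspection criteria can be pulled in for that iteration.
            foreach (var group in backlog.Where(s => s.Iteration == 1).GroupBy(s => s.HotSpot))
            {
                Console.WriteLine(group.Key);
                foreach (var story in group)
                    Console.WriteLine($"  - {story.Title}");
            }
        }
    }
    ```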

    For more information on security engineering, see patterns & practices Security Engineering Explained.

    Windows Azure Developer Guidance Map


    If you’re a Windows Azure developer or you want to learn Windows Azure, this map is for you.   Microsoft has an extensive collection of developer guidance available in the form of Code Samples, How Tos, Videos, and Training.  The challenge is -- how do you find all of the various content collections? … and part of that challenge is knowing *exactly* where to look.  This is where the map comes in.  It helps you find your way around the online jungle and gives you short-cuts to the treasure troves of available content.

    The Windows Azure Developer Guidance Map helps you kill a few birds with one stone:

    1. It shows you the key sources of Windows Azure content and where to look (“teach you how to fish”)
    2. It gives you an index of the main content collections (Code Samples, How Tos, Videos, and Training)
    3. You can also use the map as a model for creating your own map of developer guidance.

    Download the Windows Azure Developer Guidance Map

    Contents at a Glance

    • Introduction
    • Sources of Windows Azure Developer Guidance
    • Topics and Features Map (a “Lens” for Finding Windows Azure Content)
    • Summary Table of Topics
    • How The Map is Organized (Organizing the “Content Collections”)
    • Getting Started
    • Architecture and Design
    • Code Samples
    • How Tos
    • Videos
    • Training

    Mental Model of the Map
    The map is a simple collection of content types from multiple sources, organized by common tasks, common topics, and Windows Azure features:

    [Figure: mental model of the map]

    Special Thanks …
    Special thanks to David Aiken, James Conard, Mike Tillman, Paul Enfield, Rob Boucher, Ryan Dunn, Steve Marx, Terri Schmidt, and Tobin Titus for helping me find and round up our various content collections.

    Enjoy and share the map with a friend.

    ADO.NET Code Samples Collection


    The ADO.NET Code Samples Collection is a roundup and map of data access code samples from various sources, including the MSDN library, Code Gallery, CodePlex, and Microsoft Support.

    You can add to the code examples collection by sharing in the comments or emailing me at FeedbackAndThoughts at live.com.

    Common Categories for ADO.NET Code Samples
    The ADO.NET Code Samples Collection is organized using the following categories:


    ADO.NET Code Samples Collection

    • Data Binding: MSDN Library
    • Data Models: Code Gallery, Microsoft Support
    • DataReader: MSDN Library
    • DataSet: MSDN Library
    • DataTable: MSDN Library
    • Entity Framework: All-in-One Code Framework, Code Gallery
    • General: All-in-One Code Framework, MSDN Library
    • LINQ to DataSet: MSDN Library
    • LINQ to Entities: MSDN Library
    • LINQ to Objects: All-in-One Code Framework
    • LINQ to SQL: All-in-One Code Framework, Code Gallery
    • N-Tier: Code Gallery
    • O/RM Mapping: Code Gallery
    • OData: Code Gallery
    • POCO
    • Silverlight: Code Gallery
    • SQL Server: MSDN Library
    • Streaming: Code Gallery
    • WCF Data Services: All-in-One Code Framework, Code Gallery


    Pruning or Preserving a Synapse

    How can you keep your brain from throwing out a perfectly good behavior? Positive feedback. David Rock and Jeffrey Schwartz write about how positive feedback can preserve important synapses in their article, "The Neuroscience of Leadership", in "strategy+business" magazine.

    Positive Feedback for Preserving a Synapse
    Rock and Schwartz write:

    "In a world with so many distractions, and with new mental maps potentially being created every second in the brain, one of the biggest challenges is being able to focus enough attention on any one idea. Leaders can make a big difference by gently reminding others about their useful insights, and thus eliciting attention that otherwise would not be paid. Behaviorists may recognize this type of reminder as "positive feedback," or a deliberate effort to reinforce behavior that already works, which, when conducted skillfully, is one aspect of behaviorism that has beneficial congnitive effect. In a brain that is constantly pruning connections while making new ones, positive feedback may play a key functional role as "a signal to do more of something." As neuroscientist Thomas B. Czerner notes, "The encouraging sounds of 'yes, good, that's it' help to mark a synapse for preservation rather than pruning."

    Key Takeaways
    I think this is similar to "you get what you measure", but in this case, you get more of what you reward.

    Scrum at a Glance (Visual)

    I’ve shared a Scrum Flow at a Glance before, but it was not visual.

    I think it’s helpful to know how to whiteboard a simple view of an approach so that everybody can quickly get on the same page. 

    Here is a simple visual of Scrum:

    [Figure: Scrum at a glance]

    There are a lot of interesting tools and concepts in scrum.  The definitive guide on the roles, events, artifacts, and rules is The Scrum Guide, by Jeff Sutherland and Ken Schwaber.

    I like to think of Scrum as an effective Agile project management framework for shipping incremental value.  It works by splitting big teams into smaller teams, big work into smaller work, and big time blocks into smaller time blocks.

    I try to keep whiteboard visuals pretty simple so that they are easy to do on the fly, and so they are easy to modify or adjust as appropriate.

    I find the visual above is pretty helpful for getting people on the same page pretty fast, to the point where they can go deeper and ask more detailed questions about Scrum, now that they have the map in mind.

    You Might Also Like

    Agile vs. Waterfall

    Agile Life-Cycle Frame

    Don’t Push Agile, Pull It

    Scrum Flow at a Glance

    The Art of the Agile Retrospective

    What Makes a Good Threat Model

    While trying to create a threat model template for customers, I analyzed many threat models inside and outside Microsoft.  It was insightful to see the patterns of what was useful across threat models and what was noise.

    A good threat model has the following components:

    • Security objectives.  What must you do vs. what's nice to do?  These set the boundaries of what's in scope vs. what's out of scope.  
    • Key scenarios.  Where and how will your software be used?  These put your software in context and give you perspective while evaluating.
    • Security mechanisms.  These shine the spotlight on explicit security engineering decisions.
    • Trust boundaries.  These help you focus on critical places where security trust levels change.  These also help prioritize entry points.
    • Data flows.  These help you trace data through the system, to expose potential issues.
    • Entry points.   Where do you accept input?  These are primary attack vectors.
    • Exit points.  Where do you write output?
    • Threats.  A list of these helps you keep perspective when ranking vulnerabilities.  What's the worst that can happen?  What can you live with?
    • Vulnerabilities.  A list of these helps you identify actionable places in your software to address security concerns.

    A good threat model serves the following purposes:

    • Informs your design
    • Scopes your security testing
    • Helps reviewers evaluate your security decisions

    By far, the most tangible output of the threat modeling activity is a prioritized list of vulnerabilities.  These are action items for your developers and input for your testers.  The developer makes a call on whether and how to fix, and the tester will test the fix.
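    As a rough sketch of how those components hang together (illustrative C#, not the p&p template itself; all names are made up), you can think of the threat model as a small document structure whose most actionable output is the vulnerability list, ranked by the impact of the threat each one exposes:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative skeleton of the threat model components described above.
    record Threat(string Description, int Impact);              // impact: worst case if realized (1 = low, 3 = high)
    record Vulnerability(string Description, Threat Exposes);   // a weakness that makes the threat possible

    class ThreatModel
    {
        public List<string> SecurityObjectives { get; } = new();  // must do vs. nice to do; sets scope
        public List<string> KeyScenarios { get; } = new();        // where and how the software is used
        public List<string> SecurityMechanisms { get; } = new();  // explicit security engineering decisions
        public List<string> TrustBoundaries { get; } = new();     // where trust levels change
        public List<string> DataFlows { get; } = new();           // how data moves through the system
        public List<string> EntryPoints { get; } = new();         // where input is accepted
        public List<string> ExitPoints { get; } = new();          // where output is written
        public List<Vulnerability> Vulnerabilities { get; } = new();

        // The most tangible output: vulnerabilities ranked by the impact of the threat they expose.
        public IEnumerable<Vulnerability> PrioritizedVulnerabilities() =>
            Vulnerabilities.OrderByDescending(v => v.Exposes.Impact);
    }

    class Demo
    {
        static void Main()
        {
            var model = new ThreatModel();
            var theft = new Threat("Credential theft via network eavesdropping", Impact: 3);
            model.Vulnerabilities.Add(new Vulnerability("Clear text credentials passed over the network", theft));

            foreach (var v in model.PrioritizedVulnerabilities())
                Console.WriteLine($"[{v.Exposes.Impact}] {v.Description}");
        }
    }
    ```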


    This sample Template for a Web Applications Threat Model comes very close to showing what I've empirically seen to be useful, though there's always a gap between reality and real-time.

    How To Use Getting Results the Agile Way with Evernote

    One of the most common questions I get with Getting Results the Agile Way is, “What tools do I use to implement it?”

    The answer is, it depends on how "lightweight" or "heavy" I need to be for a given scenario.  The thing to keep in mind is that the system is stretch-to-fit because it's based on a simple set of principles, patterns, and practices.  See Values, Principles, and Practices of Getting Results the Agile Way.

    That said, I have a few key scenarios:

    1. Just me.
    2. Pen and Paper.
    3. Evernote.

    The Just Me Scenario
    In the "Just Me" scenario, I don't use any tools.  I just take "mental notes."  I use The Rule of Three to identify three outcomes for the day.  I simply ask the question, "What are the three most important results for today?"  Because it's three things, it's easy to remember, and it helps me stay on track.  Because it's results or outcomes, not activities, I don't get lost in the minutia.

    The Pen and Paper Scenario
    In the Pen and Paper scenario, I carry a little yellow sticky pad.  I like yellow stickies because they are portable and help me free up my mind by writing things down.  The act of writing it down also forces me to get a bit more clarity.  As a practice, I either write the three results I want for the day on the first note, or I write one outcome per note.  The main reason I write one result per sticky note is so that I can either jot down supporting notes, such as tasks, or throw the note away when I've achieved that particular result.  It's a simple way to game my day and build a sense of progress.

    I do find that writing things down, even as a simple reference, helps me stay on track way more than just having it in my head.

    The Evernote Scenario
    The Evernote scenario is my high-end scenario.  This is for when I'm dealing with multiple projects, leading teams, etc.  It's still incredibly light-weight, but it helps me stay on top of my game, while juggling many things.  It also helps me quickly see when I have too much open work, or when I'm splitting my time and energy across too many things.  It also helps me see patterns by flipping back through my daily outcomes, weekly outcomes, etc.

    It's hard to believe, but I've already been using Evernote with Getting Results the Agile Way for years.  I just checked the dates of my daily outcomes, and I had switched to Evernote back in 2009.  Time sure flies.  It really does.

    Anyway, I put together a simple step-by-step How To to walk you through setting up Getting Results the Agile Way in Evernote.  Here it is:

    OneNote
    If you’re a OneNote user, and you want to see how to use Getting Results the Agile Way with OneNote, check out Anu’s post on using Getting Results the Agile Way with OneNote.

    Cloud Security Threats and Countermeasures at a Glance

    Cloud security has been a hot topic with the introduction of Microsoft’s Windows Azure platform.  One of the quickest ways to get your head around security is to cut to the chase and look at the threats, attacks, vulnerabilities, and countermeasures.  This post is a look at threats and countermeasures from a technical perspective.

    The thing to keep in mind with security is that it’s a matter of people, process, and technology.  However, focusing on a specific slice, in this case the technical slice, can help you get results.  The thing to keep in mind about security from a technical side is that you also need to think holistically in terms of the application, network, and host, as well as how you plug it into your product or development life cycle.  For information on plugging it into your life cycle, see the Security Development Lifecycle.

    While many of the same security issues that apply to running applications on-premise also apply to the cloud, the context of running in the cloud does change some key things.  For example, it might mean taking a deeper look at claims for identity management and access control.  It might mean rethinking your approach to storage.  It can mean thinking more about how you access and manage virtualized computing resources.  It can mean thinking about how you make calls to services or how you protect calls to your own services.

    Here is a fast path through looking at security threats, attacks, vulnerabilities, and countermeasures for the cloud …

    Objectives

    • Learn a security frame that applies to the cloud
    • Learn top threats/attacks, vulnerabilities and countermeasures for each area within the security frame
    • Understand differences between threats, attacks, vulnerabilities and countermeasures

    Overview
    It is important to think like an attacker when designing and implementing an application. Putting yourself in the attacker’s mindset will make you more effective at designing mitigations for vulnerabilities and coding defensively.  Below is the cloud security frame. We use the cloud security frame to present threats, attacks, vulnerabilities and countermeasures to make them more actionable and meaningful.

    You can also use the cloud security frame to effectively organize principles, practices, patterns, and anti-patterns in a more useful way.

    Threats, Attacks, Vulnerabilities, and Countermeasures
    These terms are defined as follows:

    • Asset. A resource of value such as the data in a database, data on the file system, or a system resource.
    • Threat. A potential occurrence – malicious or otherwise – that can harm an asset.
    • Vulnerability. A weakness that makes a threat possible.
    • Attack. An action taken to exploit a vulnerability and realize a threat.
    • Countermeasure. A safeguard that addresses a threat and mitigates risk.

    Cloud Security Frame
    The following key security concepts provide a frame for thinking about security when designing applications to run on the cloud, such as Windows Azure. Understanding these concepts helps you put key security considerations such as authentication, authorization, auditing, confidentiality, integrity, and availability into action.

    • Auditing and Logging. Cloud auditing and logging refers to how security-related events are recorded, monitored, audited, exposed, compiled, and partitioned across multiple cloud instances. Examples include: Who did what and when and on which VM instance?
    • Authentication. Authentication is the process of proving identity, typically through credentials, such as a user name and password. In the cloud this also encompasses authentication against varying identity stores.
    • Authorization. Authorization is how your application provides access controls for roles, resources and operations. Authorization strategies might involve standard mechanisms, utilize claims and potentially support a federated model.
    • Communication. Communication encompasses how data is transmitted over the wire. Transport security, message encryption, and point-to-point communication are covered here.
    • Configuration Management. Configuration management refers to how your application handles configuration and administration of your applications from a security perspective. Examples include: Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured?
    • Cryptography. Cryptography refers to how your application enforces confidentiality and integrity. Examples include: How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong? Certificates and cert management are in this domain as well.
    • Input and Data Validation. Validation refers to how your application filters, scrubs, or rejects input before additional processing, or how it sanitizes output. It's about constraining input through entry points and encoding output through exit points. Message validation refers to how you verify the message payload against schema, as well as message size, content and character sets. Examples include: How do you know that the input your application receives is valid and safe? Do you trust data from sources such as databases and file shares?
    • Exception Management. Exception management refers to how you handle application errors and exceptions. Examples include: When your application fails, what does your application do? Does it support graceful failover to other application instances in the cloud? How much information do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller?
    • Sensitive Data. Sensitive data refers to how your application handles any data that must be protected either in memory, over the network, or in persistent stores. Examples include: How does your application handle sensitive data? How is sensitive data shared between application instances?
    • Session Management. A session refers to a series of related interactions between a user and your application. Examples include: How does your application handle and protect user sessions?

     

    Threats and Attacks

     

    Auditing and Logging
    • Repudiation. An attacker denies performing an operation, exploits an application without trace, or covers his or her tracks.
    • Denial of service (DoS). An attacker overwhelms logs with excessive entries or very large log entries.
    • Disclosure of confidential information. An attacker gathers sensitive information from log files.
    Authentication
    • Network eavesdropping. An attacker steals identity and/or credentials off the network by reading network traffic not intended for them.
    • Brute force attacks. An attacker guesses identity and/or credentials through the use of brute force.
    • Dictionary attacks. An attacker guesses identity and/or credentials through the use of common terms in a dictionary designed for that purpose.
    • Cookie replay attacks. An attacker gains access to an authenticated session through the reuse of a stolen cookie containing session information.
    • Credential theft. An attacker gains access to credentials through data theft; for instance, phishing or social engineering.
    Authorization
    • Elevation of privilege. An attacker enters a system as a lower-level user, but is able to obtain higher-level access.
    • Disclosure of confidential data. An attacker accesses confidential information because of authorization failure on a resource or operation.
    • Data tampering. An attacker modifies sensitive data because of authorization failure on a resource or operation.
    • Luring attacks. An attacker lures a higher-privileged user into taking an action on their behalf. This is not an authorization failure but rather a failure of the system to properly inform the user.
    • Token stealing. An attacker steals the credentials or token of another user in order to gain authorization to resources or operations they would not otherwise be able to access.
    Communication
    • Failure to encrypt messages. An attacker is able to read message content off the network because it is not encrypted.
    • Theft of encryption keys. An attacker is able to decrypt sensitive data because he or she has the keys.
    • Man-in-the-middle attack. An attacker can read and then modify messages between the client and the service.
    • Session replay. An attacker steals messages off the network and replays them in order to steal a user's session.
    • Data tampering. An attacker modifies the data in a message in order to attack the client or the service.
    Configuration Management
    • Unauthorized access to configuration stores. An attacker gains access to configuration files and is able to modify binding settings, etc.
    • Retrieval of clear text configuration secrets. An attacker gains access to configuration files and is able to retrieve sensitive information such as database connection strings.
    Cryptography
    • Encryption cracking. Breaking an encryption algorithm and gaining access to the encrypted data.
    • Loss of decryption keys. Obtaining decryption keys and using them to access protected data.
    Exception Management
    • Information disclosure. Sensitive system or application details are revealed through exception information.
    • Denial of service. An attacker uses error conditions to stop your service or place it in an unrecoverable error state.
    • Elevation of privilege. Your service encounters an error and fails to an insecure state; for instance, failing to revert impersonation.
    Input and Data Validation
    • Canonicalization attacks. Canonicalization attacks can occur anytime validation is performed on a different form of the input than that which is used for later processing. For instance, a validation check may be performed on an encoded string, which is later decoded and used as a file path or URL.
    • Cross-site scripting. Cross-site scripting can occur if you fail to encode user input before echoing back to a client that will render it as HTML.
    • SQL injection. Failure to validate input can result in SQL injection if the input is used to construct a SQL statement, or if it will modify the construction of a SQL statement in some way.
    • Cross-site request forgery (CSRF). CSRF attacks involve forged transactions submitted to a site on behalf of another party.
    • XPath injection. XPath injection can result if the input sent to the Web service is used to influence or construct an XPath statement. The input can also introduce unintended results if the XPath statement is used by the Web service as part of some larger operation, such as applying an XQuery or an XSLT transformation to an XML document.
    • XML bomb. XML bomb attacks occur when specific, small XML messages are parsed by a service resulting in data that feeds on itself and grows exponentially. An attacker sends an XML bomb with the intent of overwhelming a Web service’s XML parser and resulting in a denial of service attack.
    Sensitive Data
    • Memory dumping. An attacker is able to read sensitive data out of memory or from local files.
    • Network eavesdropping. An attacker sniffs unencrypted sensitive data off the network.
    • Configuration file sniffing. An attacker steals sensitive information, such as connection strings, out of configuration files.
    Session Management
    • Session hijacking. An attacker steals the session ID of another user in order to gain access to resources or operations they would not otherwise be able to access.
    • Session replay. An attacker steals messages off the network and replays them in order to steal a user’s session.
    • Man-in-the-middle attack. An attacker can read and then modify messages between the client and the service.
    • Inability to log out successfully. An application leaves a communication channel open rather than completely closing the connection and destroying any server objects in memory relating to the session.
    • Cross-site request forgery. Cross-site request forgery (CSRF) is where an attacker tricks a user into performing an action on a site where the user actually has a legitimate authorized account.
    • Session fixation. An attacker uses CSRF to set another person’s session identifier and thus hijack the session after the attacker tricks a user into initiating it.
    • Load balancing and session affinity. When sessions are transferred from one server to balance traffic among the various servers, an attacker can hijack the session during the handoff.

    Vulnerabilities

    Auditing and Logging
    • Failing to audit failed logons.
    • Failing to secure log files.
    • Storing sensitive information in log files.
    • Failing to audit across application tiers.
    • Failure to throttle log files.
    Authentication
    • Using weak passwords.
    • Storing clear text credentials in configuration files.
    • Passing clear text credentials over the network.
    • Permitting prolonged session lifetime.
    • Mixing personalization with authentication.
    • Using weak authentication mechanisms (e.g., using basic authentication over an untrusted network).
    Authorization
    • Relying on a single gatekeeper (e.g., relying on client-side validation only).
    • Failing to lock down system resources against application identities.
    • Failing to limit database access to specified stored procedures.
    • Using inadequate separation of privileges.
    • Connection pooling.
    • Permitting over privileged accounts.
    Configuration Management
    • Using insecure custom administration interfaces.
    • Failing to secure configuration files on the server.
    • Storing sensitive information in clear text.
    • Having too many administrators.
    • Using over privileged process accounts and service accounts.
    Communication
    • Not encrypting messages.
    • Using custom cryptography.
    • Distributing keys insecurely.
    • Managing or storing keys insecurely.
    • Failure to use a mechanism to detect message replays.
    • Not using either message or transport security.
    Cryptography
    • Using custom cryptography.
    • Failing to secure encryption keys.
    • Using the wrong algorithm or a key size that is too small.
    • Using the same key for a prolonged period of time.
    • Distributing keys in an insecure manner.
    Exception Management
    • Failure to use structured exception handling (try/catch).
    • Revealing too much information to the client.
    • Failure to specify fault contracts with the client.
    • Failure to use a global exception handler.
    Input and Data Validation
    • Using non-validated input to generate SQL queries.
    • Relying only on client-side validation.
    • Using input file names, URLs, or usernames for security decisions.
    • Using application-only filters for malicious input.
    • Looking for known bad patterns of input.
    • Trusting data read from databases, file shares, and other network resources.
    • Failing to validate input from all sources including cookies, headers, parameters, databases, and network resources.
    Sensitive Data
    • Storing secrets when you do not need to.
    • Storing secrets in code.
    • Storing secrets in clear text in files, registry, or configuration.
    • Passing sensitive data in clear text over networks.
    Session Management
    • Passing session IDs over unencrypted channels.
    • Permitting prolonged session lifetime.
    • Having insecure session state stores.
    • Placing session identifiers in query strings.

     

    Countermeasures

     

    Auditing and Logging
    • Identify malicious behavior.
    • Know your baseline (know what good traffic looks like).
    • Use application instrumentation to expose behavior that can be monitored.
    • Throttle logging.
    • Strip sensitive data before logging.
    Authentication
    • Use strong password policies.
    • Do not store credentials in an insecure manner.
    • Use authentication mechanisms that do not require clear text credentials to be passed over the network.
    • Encrypt communication channels to secure authentication tokens.
    • Use Secure HTTP (HTTPS) only with forms authentication cookies.
    • Separate anonymous from authenticated pages.
    • Use cryptographic random number generators to generate session IDs.
    Authorization
    • Use least-privileged accounts.
    • Tie authentication to authorization on the same tier.
    • Consider granularity of access.
    • Enforce separation of privileges.
    • Use multiple gatekeepers.
    • Secure system resources against system identities.
    Configuration Management
    • Secure your custom administration interfaces.
    • Secure configuration files on the server.
    • Do not store sensitive information in clear text; encrypt it.
    • Limit the number of administrators.
    • Use least-privileged process accounts and service accounts.
    Communication
    • Use message security or transport security to encrypt your messages.
    • Use proven platform-provided cryptography.
    • Periodically change your keys.
    • Use any platform-provided replay detection features.
    • Consider creating custom code if the platform does not provide a detection mechanism.
    • Turn on message or transport security.
    Cryptography
    • Do not develop and use proprietary algorithms (XOR is not encryption); use established cryptography such as RSA.
    • Avoid key management.
    • Use the RNGCryptoServiceProvider class to generate cryptographically strong random numbers.
    • Periodically change your keys.
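    For example, here is what using the platform-provided random number generator looks like in .NET (a minimal sketch; the token size and usage are illustrative):

    ```csharp
    using System;
    using System.Security.Cryptography;

    class RandomTokenSample
    {
        static void Main()
        {
            // Use the platform CSPRNG (RNGCryptoServiceProvider / RandomNumberGenerator),
            // not System.Random, for anything security-sensitive: session IDs, tokens, salts, IVs.
            byte[] token = new byte[32];
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(token);
            }
            Console.WriteLine(Convert.ToBase64String(token));
        }
    }
    ```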
    Exception Management
    • Use structured exception handling (by using try/catch blocks).
    • Catch and wrap exceptions only if the operation adds value/information.
    • Do not reveal sensitive system or application information.
    • Implement a global exception handler.
    • Do not log private data such as passwords.
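    A minimal sketch of the idea (illustrative C#; the logging call is a stand-in for whatever secured logging you actually use): catch at a boundary, keep the details in the log, and hand the caller only a friendly message.

    ```csharp
    using System;

    class ExceptionShieldingSample
    {
        public static string ProcessOrder(string orderId)
        {
            try
            {
                // ... real work that might throw ...
                throw new InvalidOperationException("Simulated failure with internal details");
            }
            catch (Exception ex)
            {
                Log(ex.ToString());   // full details (stack trace, internals) stay in the secured log
                return "Sorry, we could not process your order. Please try again later."; // nothing leaked
            }
        }

        // Stand-in for a secured, access-controlled log store; never log passwords or other private data.
        static void Log(string details) { }

        static void Main() => Console.WriteLine(ProcessOrder("1234"));
    }
    ```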
    Sensitive Data
    • Do not store secrets in software.
    • Encrypt sensitive data over the network.
    • Secure the channel.
    • Encrypt sensitive data in configuration files.
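    As one illustration of “encrypt it rather than storing it in the clear,” here is a local sketch using the Windows data protection API (ProtectedData); in an Azure-hosted application you would more likely lean on certificates or a platform secret store, so treat this purely as an example of the principle:

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class SecretAtRestSample
    {
        static void Main()
        {
            // Illustrative secret; in real code this would come from (and go back to) secured storage.
            byte[] secret = Encoding.UTF8.GetBytes("Server=...;Password=...");

            // Encrypt at rest without managing keys yourself (DPAPI, Windows only).
            byte[] protectedSecret = ProtectedData.Protect(secret, null, DataProtectionScope.CurrentUser);

            // Decrypt only when the value is actually needed, and keep it in memory as briefly as possible.
            byte[] roundTripped = ProtectedData.Unprotect(protectedSecret, null, DataProtectionScope.CurrentUser);
            Console.WriteLine(Encoding.UTF8.GetString(roundTripped));
        }
    }
    ```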
    Session Management
    • Partition the site by anonymous, identified, and authenticated users.
    • Reduce session timeouts.
    • Avoid storing sensitive data in session stores.
    • Secure the channel to the session store.
    • Authenticate and authorize access to the session store.
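    In classic ASP.NET (System.Web), much of this shows up as cookie hygiene; a small illustrative sketch (the cookie name and timeout are made up):

    ```csharp
    using System;
    using System.Web;

    class SessionCookieSample
    {
        // Illustrative: keep session and authentication cookies off unencrypted channels,
        // out of reach of client script, and short-lived.
        public static HttpCookie CreateSessionCookie(string sessionId)
        {
            return new HttpCookie("MyApp_Session", sessionId)
            {
                Secure = true,                           // only sent over HTTPS
                HttpOnly = true,                         // not readable from JavaScript (limits theft via XSS)
                Expires = DateTime.UtcNow.AddMinutes(20) // keep session lifetime short
            };
        }
    }
    ```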
    Input and Data Validation
    • Do not trust client input.
    • Validate input: length, range, format, and type.
    • Validate XML streams.
    • Constrain, reject, and sanitize input.
    • Encode output.
    • Restrict the size, length, and depth of parsed XML messages.
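    Pulling a few of these together, here is an illustrative sketch: constrain the input, use a parameterized query so the input can never rewrite the SQL, and encode the output at the exit point.  The table, column, and connection-string names are made up for the example.

    ```csharp
    using System;
    using System.Data.SqlClient;
    using System.Text.RegularExpressions;
    using System.Web;

    class ValidationSample
    {
        public static string LookupDisplayName(string userName, string connectionString)
        {
            // Constrain: check length, range, format, and type against a whitelist pattern.
            if (!Regex.IsMatch(userName, @"^[a-zA-Z0-9._-]{1,32}$"))
                throw new ArgumentException("Invalid user name.", nameof(userName));

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT DisplayName FROM Users WHERE UserName = @userName", connection))
            {
                // Parameterized: the input is passed as data, never concatenated into SQL text.
                command.Parameters.AddWithValue("@userName", userName);
                connection.Open();

                object result = command.ExecuteScalar();
                string displayName = result as string ?? "(unknown)";

                // Encode at the exit point so the value renders as text, not as markup or script.
                return HttpUtility.HtmlEncode(displayName);
            }
        }
    }
    ```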

    Threats and Attacks Explained

    1.  Brute force attacks. Attacks that use the raw computer processing power to try different permutations of any variable that could expose a security hole. For example, if an attacker knew that access required an 8-character username and a 10-character password, the attacker could iterate through every possible (256 multiplied by itself 18 times) combination in order to attempt to gain access to a system. No intelligence is used to filter or shape for likely combinations.
    2. Buffer overflows. The maximum size of a given variable (string or otherwise) is exceeded, forcing unintended program processing. In this case, the attacker uses this behavior to cause insertion and execution of code in such a way that the attacker gains control of the program in which the buffer overflow occurs. Depending on the program’s privileges, the seriousness of the security breach will vary.
    3. Canonicalization attacks. There are multiple ways to access the same object and an attacker uses a method to bypass any security measures instituted on the primary intended methods of access. Often, the unintended methods of access can be less secure deprecated methods kept for backward compatibility.
    4. Cookie manipulation. Through various methods, an attacker will alter the cookies stored in the browser. Attackers will then use the cookie to fraudulently authenticate themselves to a service or Web site.
    5. Cookie replay attacks. Reusing a previously valid cookie to deceive the server into believing that a previously authenticated session is still in progress and valid.
    6. Credential theft. Stealing the verification part of an authentication pair (identity + credentials = authentication). Passwords are a common credential.
    7. Cross-Site Request Forgery (CSRF). Interacting with a web site on behalf of another user to perform malicious actions. A site that assumes all requests it receives are intentional is vulnerable to a forged request.
    8. Cross-site scripting (XSS). An attacker is able to inject executable code (script) into a stream of data that will be rendered in a browser. The code will be executed in the context of the user’s current session and will gain privileges to the site and information that it would not otherwise have.
    9. Connection pooling. The practice of creating and then reusing a connection resource as a performance optimization. In a security context, this can result in either the client or server using a connection previously used by a highly privileged user for a lower-privileged user or purpose. This can potentially expose a vulnerability if the connection is not reauthorized when used by a new identity.
    10. Data tampering. An attacker violates the integrity of data by modifying it in local memory, in a data-store, or on the network. Modification of this data could provide the attacker with access to a service through a number of the different methods listed in this document.
    11. Denial of service. Denial of service (DoS) is the process of making a system or application unavailable. For example, a DoS attack might be accomplished by bombarding a server with requests to consume all available system resources, or by passing the server malformed input data that can crash an application process.
    12. Dictionary attack. Use of a list of likely access methods (usernames, passwords, coding methods) to try and gain access to a system. This approach is more focused and intelligent than the “brute force” attack method, so as to increase the likelihood of success in a shorter amount of time.
    13. Disclosure of sensitive/confidential data. Sensitive data is exposed in some unintended way to users who do not have the proper privileges to see it. This can often be done through parameterized error messages, where an attacker will force an error and the program will pass sensitive information up through the layers of the program without filtering it. This can be personally identifiable information (i.e., personal data) or system data.
    14. Elevation of privilege. A user with limited privileges assumes the identity of a privileged user to gain privileged access to an application. For example, an attacker with limited privileges might elevate his or her privilege level to compromise and take control of a highly privileged and trusted process or account. More information about this attack in the context of Windows Azure can be found in the Security Best Practices for Developing Windows Azure Applications at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=0ff0c25f-dc54-4f56-aae7-481e67504df6
    15. Encryption. The process of taking sensitive data and changing it in such a way that it is unrecognizable to anyone but those who know how to decode it. Different encryption methods have different strengths based on how easy it is for an attacker to obtain the original information through whatever methods are available.
    16. Information disclosure. Unwanted exposure of private data. For example, a user views the contents of a table or file that he or she is not authorized to open, or monitors data passed in plaintext over a network. Some examples of information disclosure vulnerabilities include the use of hidden form fields, comments embedded in Web pages that contain database connection strings and connection details, and weak exception handling that can lead to internal system-level details being revealed to the client. Any of this information can be very useful to the attacker.
    17. Luring attacks. An attacker lures a higher-privileged user into taking an action on his or her behalf. This is not an authorization failure but rather a failure of the system to properly inform the user.
    18. Man-in-the-middle attacks. A person intercepts both the client and server communications and then acts as an intermediary between the two without each ever knowing. This gives the “middle man” the ability to read and potentially modify messages from either party in order to implement another type of attack listed here.
    19. Network eavesdropping. Listening to network packets and reassembling the messages being sent back and forth between one or more parties on the network. While not an attack itself, network eavesdropping can easily intercept information for use in specific attacks listed in this document.
    20. Open Redirects. Attacker provides a URL to a malicious site when allowed to input a URL used in a redirect. This allows the attacker to direct users to sites that perform phishing attacks or other malicious actions.
    21. Password cracking. If the attacker cannot establish an anonymous connection with the server, he or she will try to establish an authenticated connection. For this, the attacker must know a valid username and password combination. If you use default account names, you are giving the attacker a head start. Then the attacker only has to crack the account’s password. The use of blank or weak passwords makes the attacker’s job even easier.
    22. Repudiation. The ability of users (legitimate or otherwise) to deny that they performed specific actions or transactions. Without adequate auditing, repudiation attacks are difficult to prove.
    23. Session hijacking. Also known as man-in-the-middle attacks, session hijacking deceives a server or a client into accepting the upstream host as the actual legitimate host. Instead, the upstream host is an attacker’s host that is manipulating the network so the attacker’s host appears to be the desired destination.
    24. Session replay. An attacker steals messages off of the network and replays them in order to steal a user’s session.
    25. Session fixation. An attacker sets (fixates) another person’s session identifier artificially. The attacker must know that a particular Web service accepts any session ID that is set externally; for example, the attacker sets up a URL such as http://unsecurewebservice.com/?sessionID=1234567. The attacker then sends this URL to a valid user, who clicks on it. At this point, a valid session with the ID 1234567 is created on the server. Because the attacker determines this ID, he or she can now hijack the session, which has been authenticated using the valid user’s credentials.
    26. Spoofing. An attempt to gain access to a system by using a false identity. This can be accomplished by using stolen user credentials or a false IP address. After the attacker successfully gains access as a legitimate user or host, elevation of privileges or abuse using authorization can begin.
    27. SQL injection. Failure to validate input in cases where the input is used to construct a SQL statement or will modify the construction of a SQL statement in some way. If the attacker can influence the creation of a SQL statement, he or she can gain access to the database with privileges otherwise unavailable and use this in order to steal or modify information or destroy data.
    28. Throttling. The process of limiting resource usage to keep a particular process from bogging down and/or crashing a system. Relevant as a countermeasure in DoS attacks, where an attacker attempts to crash the system by overloading it with input.

    Countermeasures Explained

    1. Assume all input is malicious. Assuming all input is malicious means designing your application to validate all input. User input should never be accepted without being filtered and/or sanitized.
    2. Audit and log activity through all of the application tiers. Log business critical and security sensitive events. This will help you track security issues down and make sense of security problems. Skilled attackers attempt to cover their tracks, so you’ll want to protect your logs.
    3. Avoid storing secrets. Design so that you do not need to store secrets. When a secret cannot be avoided, sometimes storing a one-way hash of it instead is sufficient.
    4. Avoid storing sensitive data in the Web space. Anything exposed to the public Internet is considered “web space.” Sensitive data stored in a location that might be compromised by any member of the public places it at much higher risk.
    5. Back up and regularly analyze log files. Some attacks can occur over time. Regular analysis of logs will allow you to recognize them with sufficient time to address them. Performing regular backups lowers the risk of an attacker covering his tracks by deleting the logs of his activities.
    6. Be able to disable accounts. The ability to reactively defend an attack by shutting out a user should be supported through the ability to disable an account.
    7. Be careful with canonicalization issues. Predictable naming of file resources is convenient for programming, but it is also very convenient for malicious parties to attack. Application logic should not be exposed to users in this manner. Instead, use file names derived from the original names or fed through a one-way hashing algorithm.
    8. Catch exceptions. Unhandled exceptions are at risk of passing too much information to the client. Handle exceptions when possible.
    9. Centralize your input and data validation. Input and data validation should be performed using a common set of code such as a validation library.
    10. Consider a centralized exception management framework. Exception handling frameworks are available publically and provide an established and tested means for handling exceptions.
    11. Consider authorization granularity. Every object needs to have an authorization control that authorizes access based on the identity of the authenticated party requesting access. Fine grained authorization will control access to each resource, while coarse grained authorization will control access to groups of resources or functional areas of the application.
    12. Consider identity flow. Auditing should be traceable back to the authenticated party. Take note of identity transitions imposed by design decisions like impersonation.
    13. Constrain input. Limit user input to expected ranges and formats.
    14. Constrain, reject, and sanitize your input. Constrain, reject and sanitize should be primary techniques in handling input data.
    15. Cycle your keys periodically. Expiring encryption keys lowers the risk of stolen keys.
    16. Disable anonymous access and authenticate every principal. When possible, require all interactions to occur as an authenticated party as opposed to an anonymous one. This will help facilitate more effective auditing.
    17. Do not develop your own cryptography. Custom cryptography is not difficult for experts to crack. Established cryptography is preferred because it is known to be safe.
    18. Do not leak information to the client. Exception data can potentially contain sensitive data or information exposing program logic. Provide clients only with the error data they need for the UI.
    19. Do not log private data such as passwords. Log files are an attack vector for malicious parties. Limit the risk of their being compromised by not logging sensitive data in the log.
    20. Do not pass sensitive data using the HTTP-GET protocol. Data passed using HTTP GET is appended to the querystring. When users share links by copying and pasting them from the browser address bar, sensitive data may also be inadvertently passed. Pass sensitive data in the body of a POST to avoid this.
    21. Do not rely on client-side validation. Any code delivered to a client is at risk of being compromised. Because of this, it should always be assumed that input validation on the client might have been bypassed.
    22. Do not send passwords over the wire in plaintext. Authentication information communicated over the wire should always be encrypted. This may mean encrypting the values, or encrypting the entire channel with SSL.
    23. Do not store credentials in plaintext. Credentials are sometimes stored in application configuration files, repositories, or sent over email. Always encrypt credentials before storing them.
    24. Do not store database connections, passwords, or keys in plaintext. Configuration secrets should always be stored in encrypted form, external to the code.
    25. Do not store passwords in user stores. In the event that the user store is compromised, an attacker should never be able to access passwords. A derivative of a password should be stored instead. A common approach is to store a one-way hash of the password computed with a salt. Upon authentication, the hash can be re-generated with the stored salt and the result can be compared to the stored hash (see the sketch at the end of this post).
    26. Do not store secrets in code. Secrets such as configuration settings are convenient to store in code, but are more likely to be stolen. Instead, store them in a secure location such as a secret store.
    27. Do not store sensitive data in persistent cookies. Persistent cookies are stored client-side and provide attackers with ample opportunity to steal sensitive data, be it through encryption cracking or any other means.
    28. Do not trust fields that the client can manipulate (query strings, form fields, cookies, or HTTP headers). All information sent from a client should always be assumed to be malicious. All information from a client should always be validated and sanitized before it is used.
    29. Do not trust HTTP header information. HTTP header manipulation is a threat that can be mitigated by building application logic that assumes HTTP headers are compromised and validates the HTTP headers before using them.
    30. Encrypt communication channels to protect authentication tokens. Authentication tokens are often the target of eavesdropping, theft or replay type attacks. To reduce the risk in these types of attacks, it is useful to encrypt the channel the tokens are communicated over. Typically this means protecting a login page with SSL encryption.
    31. Encrypt sensitive cookie state. Sensitive data contained within cookies should always be encrypted.
    32. Encrypt the contents of the authentication cookies. In the case where cookies are compromised, they should not contain clear-text session data. Encrypt sensitive data within the session cookie.
    33. Encrypt the data or secure the communication channel. Sensitive data should only be passed in encrypted form. This can be accomplished by encrypting the individual items that are sent over the wire, or encrypting the entire channel as with SSL.
    34. Enforce separation of privileges. Avoid building generic roles with privileges to perform a wide range of actions. Roles should be designed for specific tasks and provided the minimum privileges required for those tasks.
    35. Enforce unique transactions. Identify each transaction from a client uniquely to help prevent replay and forgery attacks.
    36. Identify malicious behavior. By monitoring site interactions that fall outside of normal usage patterns, you can quickly identify malicious behavior. This is closely related to “Know what good traffic looks like.”
    37. Keep unencrypted data close to the algorithm. Use decrypted data as soon as it is decrypted, and then dispose of it promptly. Unencrypted data should not be held in memory in code.
    38. Know what good traffic looks like. Active auditing and logging of a site will allow you to recognize what regular traffic and usage patterns look like. This is a required step in order to be able to identify malicious behavior.
    39. Limit session lifetime. Longer session lifetimes provide greater opportunity for Cross-Site Scripting or Cross-Site Request Forgery attacks to add activity onto an old session.
    40. Log detailed error messages. Highly detailed error message logging can provide clues to attempted attacks.
    41. Log key events. Profile your application and note key or sensitive operations and/or events, and log these events during application operation.
    42. Maintain separate administration privileges. Consider granularity of authorization in the administrative interfaces as well. Avoid combining administrator roles with distinctly different roles such as development, test or deployment.
    43. Make sure that users do not bypass your checks. Bypassing checks can be accomplished by canonicalization attacks, or by bypassing client-side validation. Application design should avoid exposing application logic and avoid segregating validation into a flow that can be interrupted; for example, an ASPX page that performs only validations and then redirects. Instead, validation routines should be tightly bound to the data they are validating.
    44. Pass Forms authentication cookies only over HTTPS connections. Cookies are at risk of theft and replay type attacks. Encrypting them with SSL helps reduce the risk of these types of attacks.
    45. Protect authentication cookies. Cookies can be manipulated with Cross-Site Scripting attacks; encrypt sensitive data in cookies, and use browser features such as the HttpOnly cookie attribute.
    46. Provide strong access controls on sensitive data stores. Access to secret stores should be authorized. Protect the secret store as you would other secure resources by requiring authentication and authorization as appropriate.
    47. Reject known bad input. Rejecting known bad input involves screening input for values that are known to be problematic or malicious. NOTE: Rejecting should never be the primary means of screening bad input; it should always be used in conjunction with input sanitization.
    48. Require strong passwords. Enforce password complexity requirement by requiring long passwords with a combination of upper case, lower case, numeric and special (for example punctuation) characters. This helps mitigate the threat posed by dictionary attacks. If possible, also enforce automatic password expiry.
    49. Restrict user access to system-level resources. Users should not be touching system resources directly. This should be accomplished through an intermediary such as the application. System resources should be restricted to application access.
    50. Retrieve sensitive data on demand. Sensitive data stored in application memory provides attackers another location they can attempt to access the data. Often this data is used in unencrypted form also. To minimize risk of sensitive data theft, sensitive data should be used immediately and then cleared from memory.
    51. Sanitize input. Sanitizing input is the opposite of rejecting bad input. Sanitizing input is the process of filtering input data to only accept values that are known to be safe. Alternatively, input can be rendered innocuous by converting it to safe output through output encoding methods.
    52. Secure access to log files. Log files should only be accessible to administrators, auditors, or administrative interfaces. An attacker with access to the logs might be able to glean sensitive data or program logic from logs.
    53. Secure the communication channel for remote administration. Eavesdropping and replay attacks can target administration interfaces as well. If using a web based administration interface, use SSL.
    54. Secure your configuration store. The configuration store should require authenticated access and should store sensitive settings or information in an encrypted format.
    55. Secure your encryption keys. Encryption keys should be treated as secrets or sensitive data. They should be secured in a secret store or key repository.
    56. Separate public and restricted areas. Applications that contain both a public front end and content that requires authentication should be partitioned to match. Host public-facing pages in a separate file structure, directory, or domain from private content.
    57. Store keys in a restricted location. Protect keys with authorization policies.
    58. Support password expiration periods. User passwords and account credentials are commonly compromised. Expiration policies help mitigate attacks from stolen accounts, or disgruntled employees who have been terminated.
    59. Use account lockout policies for end-user accounts. Cap the number of failed login attempts; once the cap is exceeded, the account should be locked against further login attempts. Lockout helps prevent dictionary and brute-force attacks.
    60. Use application instrumentation to expose behavior that can be monitored. Application transactions that are likely targets for malicious interaction should be logged or monitored, for example by adding logging code to an exception handler or logging individual API calls. Providing a means to watch these transactions gives you a much better chance of identifying malicious behavior quickly.
    61. Use authentication mechanisms that do not require clear text credentials to be passed over the network. A variety of authentication approaches exist for web-based applications; some use tokens, while others pass user credentials (user name/ID and password) over the wire. When possible, it is safer to use an authentication mechanism that does not pass the credentials. If credentials must be passed, encrypt them and/or send them over an encrypted channel such as SSL.
    62. Use least privileged accounts. The privileges granted to the authenticated party should be the minimum required to perform all required tasks. Be careful of using existing roles that have permissions beyond what is required.
    63. Use least privileged process and service accounts. Allocate accounts specifically for process and service accounts. Lock down the privileges of these accounts separately from other accounts.
    64. Use multiple gatekeepers. Passing the authentication system should not provide a golden ticket to any/all functionality. System and/or application resources should have restricted levels of access depending on the authenticated party. Some design patterns might also enforce multiple authentications, sometimes distributed through application tiers.
    65. Use SSL to protect session authentication cookies. Session authentication cookies contain data that can be used in a number of attacks, such as replay, Cross-Site Scripting, or Cross-Site Request Forgery. Protecting these cookies helps mitigate those risks (see the cookie sketch after this list).
    66. Use strong authentication and authorization on administration interfaces. Always require authenticated access to administrative interfaces. When applicable, also enforce separation of privileges within the administrative interfaces.
    67. Use structured exception handling. A structured approach to exception handling lowers the risk of unexpected exceptions going unhandled (see the exception-handling sketch after this list).
    68. Use the correct algorithm and correct key length. Different encryption algorithms are preferred for varying data types and scenarios.
    69. Use tried and tested platform features. Many cryptographic features are available through the .NET Framework. These are proven implementations and should be used in favor of custom methods (see the encryption sketch after the SDL note below).
    70. Validate all values sent from the client. As with not relying on client-side validation, any input from a client should be assumed to have been tampered with and should always be validated before it is used. This encompasses user input, cookie values, HTTP headers, and anything else sent over the wire from the client.
    71. Validate data for type, length, format, and range. Data validation should encompass these primary tenets: validate for data type, string length, string or numeric format, and numeric range (see the validation sketch after this list).
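
    Example: Input Validation (Guidelines 47, 51, 70, 71)
    Here is a minimal C# sketch of server-side validation for type, length, format, and range, using an allow-list regular expression.  The field names, the 50-character limit, and the age range are illustrative assumptions, not values prescribed by the guidelines above.

    ```csharp
    using System.Text.RegularExpressions;

    public static class InputValidator
    {
        // Allow-list pattern: letters, spaces, hyphens, and apostrophes only (format check).
        private static readonly Regex NamePattern =
            new Regex(@"^[a-zA-Z '\-]{1,50}$", RegexOptions.Compiled);

        public static bool TryParseName(string input, out string name)
        {
            name = null;
            if (string.IsNullOrEmpty(input)) return false;   // presence
            if (input.Length > 50) return false;             // length
            if (!NamePattern.IsMatch(input)) return false;   // format (allow-list, not deny-list)
            name = input.Trim();
            return true;
        }

        public static bool TryParseAge(string input, out int age)
        {
            // Type and range: must parse as an integer between 0 and 130.
            return int.TryParse(input, out age) && age >= 0 && age <= 130;
        }
    }
    ```

    Reject anything that fails these checks rather than trying to repair it, and apply output encoding when the value is rendered back to the browser.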
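
    Example: Cookie Protection (Guidelines 44, 45, 65)
    Here is a minimal sketch of hardening a cookie in an ASP.NET (System.Web) application.  The cookie name and the 20-minute lifetime are hypothetical; for Forms authentication specifically, the same behavior can be enforced declaratively in web.config with <forms requireSSL="true" /> and <httpCookies httpOnlyCookies="true" />.

    ```csharp
    using System;
    using System.Web;

    public static class CookieHelper
    {
        public static void AddProtectedCookie(HttpResponse response, string name, string value)
        {
            var cookie = new HttpCookie(name, value)
            {
                HttpOnly = true,                          // hidden from client-side script (limits XSS theft)
                Secure = true,                            // sent only over HTTPS (limits eavesdropping and replay)
                Expires = DateTime.UtcNow.AddMinutes(20)  // a short lifetime limits the replay window
            };
            response.Cookies.Add(cookie);
        }
    }
    ```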
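
    Example: Structured Exception Handling with Logging (Guidelines 40, 41, 67)
    Here is a minimal sketch that wraps a key operation in structured exception handling, logs the detailed error on the server, and returns only a generic message to the caller.  OrderService and ProcessOrder are hypothetical placeholders, and the logging here uses System.Diagnostics.Trace; any log store that only administrators can read would do.

    ```csharp
    using System;
    using System.Diagnostics;

    public class OrderService
    {
        public string SubmitOrder(string orderXml)
        {
            try
            {
                return ProcessOrder(orderXml);   // key operation: instrument it so it can be monitored
            }
            catch (Exception ex)
            {
                // Log full details (message, stack trace) to a store only administrators can read.
                Trace.TraceError("SubmitOrder failed: {0}", ex);

                // Return a generic message so an attacker learns nothing useful from the response.
                return "The order could not be processed. Please try again later.";
            }
        }

        // Hypothetical placeholder for the real order-processing logic.
        private string ProcessOrder(string orderXml)
        {
            throw new NotImplementedException();
        }
    }
    ```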

    SDL Considerations
    For more information on preferred encryption algorithms and key lengths, see the Security Development Lifecycle at http://www.microsoft.com/security/sdl/.
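
    Example: Platform Encryption (Guidelines 68, 69)
    Here is a minimal sketch of leaning on the platform’s tested cryptography rather than a custom algorithm: AES through the .NET Framework’s Aes class.  Key management is out of scope here; the 256-bit key would come from a protected key store (guidelines 55 and 57), and the choice of algorithm and key length should follow current SDL guidance.

    ```csharp
    using System.IO;
    using System.Security.Cryptography;

    public static class CryptoExample
    {
        public static byte[] Encrypt(byte[] plaintext, byte[] key, out byte[] iv)
        {
            using (var aes = Aes.Create())       // platform-provided, vetted implementation
            {
                aes.Key = key;                   // expects a 256-bit (32-byte) key from a secure store
                aes.GenerateIV();
                iv = aes.IV;                     // store the IV alongside the ciphertext

                using (var ms = new MemoryStream())
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(plaintext, 0, plaintext.Length);
                    cs.FlushFinalBlock();
                    return ms.ToArray();
                }
            }
        }
    }
    ```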

  • J.D. Meier's Blog

    Lessons Learned in 2008

    • 5 Comments

    I posted my Lessons Learned in 2008 on Sources of Insight.  2008 was a pretty insightful year for me.  I met a lot of great people, read a lot of books, and learned a lot along the way.  I recapped my top 10 lessons here.

    Top Ten Lessons for 2008

    • Adapt, adjust, or avoid situations. Learn how to read situations. Some situations you should just avoid.  Some situations you should adapt yourself, as long as you play to your strengths.  Some situations you should adjust the situation to set yourself up for success.  See The Change Frame.
    • Ask questions over make statements.  If you want to get in an argument, make statements.  If you want to avoid arguments, ask questions.
    • Character trumps emotion trumps logic.  Don’t just go for the logical win.  Win the heart and the mind follows.  Build rapport.  Remember the golden rule of “rapport before influence.”  Have the right people on your side.   If you win the right pillars first, it’s a domino effect.  It’s part of social influence.  See Character Trumps Emotion Trumps Logic.
    • Develop a routine for exceptional thinking.  Create a pre-performance routine that produces consistent and dependable thinking.  Work backwards from the end in mind.  Know what it’s like when you’re at your best.  Model from your best experiences.  Success leaves clues.  Turn them into a routine.  Set time boundaries.  Don’t let things take as long as they take; work has a way of filling the available hours.  Set a timebox and improve your routine until you can shift gears effectively within your time boundaries.  See Design a Routine for Exceptional Thinking.
    • Give your best where you have your best to give.   Design your time to spend most of your time on your strengths.  Limit the time you spend in your weaknesses.   Play to your strengths.  When you play to your strengths, if you get knocked down, it’s easier to get up again.  It’s also how you unleash your best.  See Give Your Best Where You Have Your Best to Give.
    • Label what is right with things.  There’s been too much focus on what’s wrong with things.  Find and label what’s right with you.  We all have a deep need to know what’s right with us.  Shift from labeling what’s wrong, to labeling what’s right. See Label What is Right with Things.
    • One pitch at a time.  Focus on one pitch at a time.  Hook on to one thing.  Be absorbed in the moment, no matter what’s at stake.  Let results be the by-product of what you’re doing.  Don’t judge yourself while you’re performing.  Don’t rearrange your work; rearrange your focus.  See One Pitch at a Time.
    • Spend 75 percent on your strengths.  Very few people spend the majority of their time on their strengths.  Create timeboxes for your non-negotiables.  You’re not your organization’s greatest asset until you spend your time on your strengths.  Activities that you don’t like hurt less if you compartmentalize them into a smaller chunk of your day.  See Spend 75 Percent on Your Strengths.
    • Ask Solution-focused questions.   Ask things like “how do we make the most of this?” … “what’s the solution?” … “if we knew the solution, what might it be?”  Believe it or not, a lot of folks get stuck unless you add the “if you did know the solution …” or “what might it be?”  See Solution-Focused Questions.
    • Use stress to be your best.  It’s not what happens to you, it’s what you make of it.  Distinguish stress from anxiety.  Stress is your body’s response.  Anxiety is your mind’s response.   See Use Stress to Be Your Best.
  • J.D. Meier's Blog

    Writing Books on Time and on Budget

    • 1 Comments

    [Figure: Application Architecture Guide 2.0]

    One of the questions I get asked is how did we execute our patterns & practices Application Architecture Guide 2.0 project, on time and on budget?  It was a six month project, during which we ....

    2 Keys to Success
    We used two keys to success:

    • Fix time, Flex scope
    • Agile Guidance Engineering

    Fix Time, Flex Scope
    One of the most successful patterns I've used for years now is to fix time and flex scope.  The idea is to deliver incremental value and find a way to flow value along the way, rather than wait for one big bang at the end.  This lets you deliver the most timely and relevant value while keeping a healthy work-life balance, and it reduces project risk along the way.  More importantly, it helps get your stakeholders on board by showing them results, versus asking them to just trust you until the end.  Scope is the best thing to flex because it has the least precision or accuracy up front, and flexing it lets you respond to market or stakeholder concerns more effectively.

    Agile Guidance Engineering
    This is the secret sauce.  I call it Agile Guidance Engineering:

    [Figure: Agile Guidance Engineering]

    In a nutshell, Agile Guidance Engineering is about building guidance using nuggets of specific types (how tos, guidelines, checklists ... etc.) and composing them into books.  The books themselves are actually an information model.  The information model is designed to both structure the content as well as structure the problem domain.  We vet the nuggets as we go for feedback, and we prioritize, tune, and improve them along the way.

    I've used Agile Guidance Engineering successfully to build the following Microsoft patterns & practices Blue Books:

    My Related Posts

  • J.D. Meier's Blog

    patterns and practices Complete Catalog

    • 5 Comments

    What's the full patterns & practices catalog?  I created a quick index of the patterns & practices catalog since I've needed to hunt down a few things.  I figured this might be useful to share.

    Views

    Blocks

    Enterprise Library

    Factories

    Guides

    Patterns

    Tools

  • J.D. Meier's Blog

    WCF Security Resources

    • 7 Comments

    If you're building Web services or implementing SOA on the Microsoft platform, then you're probably either working with or exploring WCF (Windows Communication Foundation).   When we started our patterns & practices WCF Security Guidance project, one of the first things I did was compile a list of WCF security resources for our team.  This helped us quickly ramp up, as well as see gaps.  One thing that surprised me is how much is available in the product documentation, if you know where to look.  Here's a preliminary look at our WCF Security resources index, which we'll include in our WCF Security Guide:

    Getting Started

    Community

    Articles

    Microsoft

    Community

    Blogs

    Microsoft

    Community

    Channel9

    Podcasts

    ARCast.TV

    Videos

    Tags

    Documentation (MSDN Product Documentation)

    Overview

    Guidance

    Scenarios

    Threats and Countermeasures

    Topics

    How Tos

    Guides

    Community

    Posts

    Microsoft

    Community

    patterns & practices

    Product Support Services (PSS)

    Samples

    Microsoft

    Community

    Videos

    Web Casts

    MSDN Support WebCasts

  • J.D. Meier's Blog

    3 Keys to Agile Results

    • 0 Comments

    Agile Results is the name of the system I talk about in Getting Results the Agile Way.   It’s a simple time management system for meaningful results.  The focus is on meaningful results, not doing more things.  There are three keys to the Agile Results system:

    1. The Rule of Three
    2. Monday Vision, Daily Wins, Friday Reflection
    3. Hot Spots

    The Rule of 3
    The Rule of 3 helps you avoid getting overwhelmed.  It’s also a guideline that helps you prioritize and scope. Rather than bite off more than you can chew, you bite off three meaningful things. You can use The Rule of 3 at different levels by picking three wins for the day, three wins for the week, three wins for the month, and three wins for the year. This helps you see the forest for the trees since your three wins for the year are at a higher level than your three wins for the month, and your three wins for the week are at a higher level than your three wins for the day.  You can easily zoom in and out to help balance your perspective on what’s important, for the short term and the longer term.

    Monday Vision, Daily Wins, Friday Reflection
    Monday Vision, Daily Wins, Friday Reflection is a weekly results pattern.  This is a simple “time-based” pattern. Each week is a fresh start. On Mondays, you think about three wins you would like for the week.  Each day you identify three wins you would like for the day. On Fridays, you reflect on lessons learned; you ask yourself, “What three things are going well, and what three things need improvement?”  This weekly results pattern helps you build momentum.

    Hot Spots
    Hot Spots are a way to heat map your life.  They help you map out your results by identifying what’s hot. Hot Spots become both your levers and your lens to help you identify and focus on what’s important in your life. They can represent areas of pain or opportunity. You can use Hot Spots as your main dashboard.  You can organize your Hot Spots by work, personal, and the “big picture” of your life. At a glance, you should be able to quickly see the balls you are juggling and what’s on your plate. To find your Hot Spots, simply make a list of the key things that need your time and energy. Then, for each of these key things, create a simple list, a “tickler list,” that answers the question, “What do you want to accomplish?” Once you know the wins you want to achieve in your Hot Spots, you have the ultimate map for your meaningful results.

    You can use Agile Results for work, home, or anywhere you need to improve your results in life. Agile Results is compatible with, and can enhance the results of, any productivity or time management system you already use.  That’s because the foundation of the Agile Results platform is a core set of principles, patterns, and practices for getting results.

    The simplest way to get started with Agile Results is to read Getting Started with Agile Results, and take the 30 Day Boot Camp for Getting Results.

  • J.D. Meier's Blog

    Software Architecture Best Practices at a Glance

    • 3 Comments

    Today we posted our updated software architecture best practices at a glance to CodePlex, as part of our patterns & practices Application Architecture Guide 2.0 project:

    They’re essentially a brief collection of problems and solutions around building software applications.  The answers are short and sweet so that you can quickly browse them.  You can think of them as a bird’s-eye view of the problem space we tackled.  When we add them to the Application Architecture Guide 2.0, we'll provide quick links into the guide for elaboration.

    This is your chance to bang on the set of problems and solutions before Beta 2 of the guide.

  • J.D. Meier's Blog

    Agile Guidance

    • 2 Comments

    When I ramp new folks on the team, I find it helpful to whiteboard how I build prescriptive guidance.  Here's a rough picture of the process:

    [Figure: Agile Guidance Engineering]

    Examples
    I've used the same process for Performance Testing Guidance, Team Development with Visual Studio Team Foundation Server, and WCF Security.

    Here's a brief explanation of what happens along the way:

    Design
    The dominant focus here is identifying candidate problems, candidate solutions, and figuring out key risks, as well as testing paths to explore.  The best outcome is a set of scenarios we can execute against.

    • Research - finding the right people, the right problems, and the right solutions.
    • Prototypes - experiment and test areas of high risk to prove the path.  This can include innovating on how we build prescriptive guidance.  We also use these to test with customers and get feedback on the approach.
    • Question Lists - building organized lists of one-liner user questions.
    • Task Lists - building organized lists of one-liner user tasks.
    • Scenario Frames - organizing scenarios into meaningful buckets.  See Scenario Frame Example.
    • Information Models - framing out the problem space and creating useful ways to organize, share, and act on the information.  See Web Services Security Frame.
    • Guidance Types  - testing which guidance types to use (how tos, checklists, guidelines, patterns, ... etc.)

    Execution
    The dominant focus here is product results.  It's scenario-driven.  Each week we pick scenarios to execute against.

    • Development - building prescriptive guidance, including coding, testing, and writing.
    • Backlog - our backlog is a prioritized set of scenarios and guidance modules.
    • Iterations - picking sets of scenarios to focus development on and test against.
    • Refactoring - tuning and pruning the guidance to improve effectiveness.  This includes breaking the content up and rewriting it.  For example, a common refactoring is factoring reference information from action.  We try to keep reference information in our Explained modules and action information in our How Tos.
    • Testing -  step through the guidance against the scenario.  The first pass is about making sure it works.  It should be executable by a human.  Next, it's about reducing friction and making sure the guidance really hits the what, why and how.  We test the guidance against objectives and scenarios with acceptance criteria so we know when we're done.
    • Problem Repros - creating step by step examples that reproduce a given problem.
    • Solution Repros - creating step by step examples that reproduce a given solution.

    Release
    We produce a Knowledge Base (KB) of guidance modules and a guide.  The guidance modules are modular and can be reused.   The guide includes chapters in addition to the guidance modules.  Here are examples from our WCF Security Guide:

    Agile Publishing
    We release our guidance modules along the way to test reactions, get feedback and reshape the approach as needed.

    • CodePlex - we publish our guidance modules to CodePlex so we can share early versions of the guidance and get customer feedback, as well as to test the structure of the guidance, and experiment with user experience.
    • Guidance Explorer - we publish guidance modules to Guidance Explorer so users can do their own guidance mash ups and build their own personalized guides.  Our field also uses this to build customized sets of guidance for customers.

    Stable Reference
    Once we've tested and vetted the guidance and have made it through a few rounds of customer feedback, we push the guidance to MSDN.

    • MSDN - this is the trusted site that users expect to see our prescriptive guidance in final form.
    • Visual Studio/ Visual Studio Team System - once we're a part of the MSDN distribution, we can automatically take advantage of the VS/VSTS documentation integration.

    My Related Posts

  • J.D. Meier's Blog

    If You’re Afraid of Your To-Do List, It’s Not Working

    • 0 Comments

    If you’re afraid to look at your To-Do list, it’s not working.  Your To-Do list should inspire you.

    One of the things that happens a lot with To-Do lists is that they get overwhelming.  It’s easy to pile on more things.  Eventually, you’re afraid to even look at your To-Do list.   What once started out as a great list of things to make happen has now become a laundry list that hurts more than it helps.

    Worse, it’s easy to spawn a lot of lists that are full of once great intentions, so the problem spreads.

    There are multiple ways to hack the problem down to size, but here are the three I use the most:

    1. New Lists.  I create a new list each day and each week.   This gives me a fresh start.  This way I can have a master list (a “list of lists”), or all up project lists, but then carve out a specific list of outcomes and actions for a given segment of time, whether it’s a day, a week, a month, etc.
    2. Prioritizing.   A quick way to make the list more useful is to make sure that priorities float to the top.  By floating your priorities to the top, you can squeeze out the lower priorities, and let them slough off.  I find it’s easier to figure out the few great things to do, than it is to try and figure out all the things not worth doing.  So I use my short-list of priorities (“the vital few” in Covey speak), to help crowd out the lesser things.
    3. Three Wins at the top.   This is by far the most useful method to reshape a To-Do list into something more meaningful, more rewarding, and less intimidating.   Simply add your Three Wins to the top of your To-Do list.

    Here is a simple visual that shows adding Three Wins to the top of your To-Do list:

    [Figure: Three Wins at the top of a To-Do list]

    Identify the 3 most important results you want to accomplish today and bubble them to the top of your To-Do list.  Prioritize your day against those 3 results, whether you’re handling incoming requests or working through the backlog on your To-Do list.

    You can use this approach to chop any To-Do list down to size and make it more consumable.

    This tip on building better To-Do lists is from the book, Getting Results the Agile Way: A Personal Results System for Work and Life (Amazon).

  • J.D. Meier's Blog

    10 Years at patterns & practices

    • 1 Comments

    I never imagined I would invest 10 years on the patterns & practices team at Microsoft.  Life is short and I always imagined I would spend it across so many more adventures.  What surprised me is how much you can grow yourself, and grow the job in the process.  While I sometimes wonder about the path not taken, there’s no doubt I’ve built a deep set of capabilities, achievements, and experiences as a direct result of investing my time in patterns & practices.  I’ve shared some of my best lessons learned at patterns & practices, as well as my proven practices for individual contributors.

    I think my biggest takeaway lesson is to follow your heart, follow the growth, and invest in yourself (mind, heart, body, emotions, career, financial, relationships, and fun).

    Why patterns & practices?
    There are lots of reasons why I chose patterns & practices.  At the end of the day, it was the people, the values, and the mission.

    Our Mission
    While we’ve had various flavors of the mission, I like to think of it as …. “Customer success on the Microsoft platform” … or … “Proven practices for the platform.”  I had the toughest time explaining to my Aunt what I do, until finally I said, “I help customers put the legos together.”  She then said, “ahhh, I get it.”

    Goals
    In patterns & practices, the goals are simple:

    • Simplify the customer experience of building quality solutions on the Microsoft platform.
    • Improve the customer value of Microsoft products and technologies through customer connection and solution engineering.
    • Grow the professional knowledge and capability of the Microsoft development community.
    • Help customers and partners build their LOB (line-of-business) applications and services faster and more predictably than any platform in the world.

    Values
    In patterns & practices, we value:

    • Continuous learning, innovation and improvement - We have a bias toward action (over more planning) and customer engagement and feedback (over more analysis.)
    • Open, collaborative, relationships with customers, Microsoft field, partners, and Microsoft teams.
    • Execution - we take strategic bets, but we hold ourselves accountable for creating value, shipping early and often, and delivering results that have impact with customers and in Microsoft.
    • Explicit, transparent, and direct communication with customers and with our team and others in our company.
    • Quality over scope - no guidance is better than bad guidance.

    Principles
    We use the following principles to guide our work:

    • Start with the end in mind; think about end to end scenarios and how the products we produce fit in the solution architecture and into the patterns & practices catalog.
    • Help the customer succeed with their intent - the results they want to achieve, not just what they are trying to do.
    • Find the minimal solution required for a good result and ship it.
    • Our tools platforms are assets that expand the types of guidance we can express - use all of what they provide where it naturally fits the scenario.
    • Constructive tension between customer needs and Microsoft product and business strategy is expected - when we do our job well, this tension is healthy.

    Capabilities, Achievements, and Experience
    How do you measure the impact of the time you spend down a given career path?  I’ve been looking for an effective lens, and I think it boils down to capabilities, achievements, and experience.   It’s the simplest way that I can organize and reflect on where I am, based on where I’ve been.   Capabilities are simply my skills.  They are the things I’ve learned how to do, from soft skills to technical abilities.  Achievements are my results.  This includes my impact on Microsoft, the software industry, and customers.  I lump my books, patents I filed, and the methodologies I’ve baked into the platform and tools here.  In terms of experience, I think of the job roles and activities I’ve had along the way.

    Key Themes
    I think  I can boil my impact and results down into 3 key themes:

    • Project management.  I drive projects from pitch to ship.  I’ve built dream teams that go on amazing adventures to change the world.  I’ve consistently shipped projects on time and on budget year over year.  I’ve mentored many project managers and PMs at Microsoft to share the best of the best of what I’ve learned about shipping, execution, impact, and results in patterns & practices.  I’ve had unique experiences here, especially since we adopted Agile practices early on, and I’ve led distributed teams around the world since 2001.  I’ve learned a lot in terms of managing innovation, delivering incremental value, fixing time while flexing scope, and experience-driven development (my latest thinking on software development).  I think my biggest achievement here was helping shape the patterns & practices catalog, the programs, and the execution.  See Writing Books on Time and On Budget.
    • Software engineering.   I’ve invested the bulk of my time in application life cycle management, process improvement, quality attributes (security, performance, etc.), and application architecture.  Most of my talks and writings have been focused on security, performance, and software architecture, but I’ve done a lot more behind the scenes.  One of the big things I’ve focused on at Microsoft is “solution engineering,” which is really about problem solving while satisficing the user, business, and technology perspectives.  I think my biggest achievements here were baking security and performance into the life cycle, and into Visual Studio Team Foundation Server.
    • Effectiveness.  I’m a fan of continuous improvement.  I’m not a productivity junkie though.  I’m all about impact and results.  I’ve learned from the best of the best around Microsoft.  I’ve hunted and gathered patterns and practices for effectiveness over the span of several years.  More importantly, I’ve bounced the ideas and techniques against reality to see what sticks.  In the last few years, I’ve regularly carried 8 mentees.  I’ve given talks to our Xbox team on productivity and results systems.  Effectiveness is an art and a science, and I’m trying to bridge the gap between state of the art and state of the practice.  See 7 Habits of Highly Effective PMs and Effectiveness Post Roundup.

    Years at a Glance
    I think browsing by years is a healthy reality check against impact over time.  Looking back is the simplest way for me to respond to the question, “If I had it to do over again, what would I do differently?”  Where the answer is “nothing” – those are the sweet spots.  Where the answer is “everything” – those are the lessons :)

    2009
    Books
    • Application Architecture Guide 2.0
    Projects
    • Azure Security Guidance Project
    • Core Systems Information Model
    • Cloud Architecture Scenarios
    • Customer-Connected Engineering
    • Productivity coach for the Xbox team.

    2008
    Books
    • Improving Web Services Security
    Projects
    • Line-of-Business (LOB) Frame
    • Catalog Sweep
    • Visual Studio Add-In for Guidance Explorer

    2007
    Books
    • Performance Testing Guidance for Web Applications
    • Team Development with Visual Studio Team Foundation Server

    2006
    • 8 patents filed (security, performance, and info models for software life cycles and application life cycle management).
    Projects
    • ASP.NET Security RI (Reference Implementation)
    • Competitive Assessment for Security Engineering
    • Defending Your Code
    • Guidance Explorer
    • PDL (Performance Development Life Cycle)
    • Practices Checker
    • Scenario Evaluation Framework
    • Security Case Studies
    • Security Code Examples
    • Security Toolbar

    2005
    Books
    • Security Engineering Explained
    Projects
    • Security Engineering in VSTS
    • Threat Modeling Web Applications
    • Whidbey Security Guidance

    2004
    Books
    • Improving .NET Application Performance and Scalability

    2003
    Books
    • Improving Web Application Security

    2002
    Books
    • Building Secure ASP.NET Applications

    Books
    My books at a glance:

    Pocket Guides
    My pocket guides at a glance:

    • Agile Architecture Method Pocket Guide
    • Mobile Architecture Pocket Guide
    • Performance Pocket Guide
    • RIA Architecture Pocket Guide
    • Rich Client Architecture Pocket Guide
    • Security Pocket Guide
    • Service Architecture Pocket Guide
    • Web Architecture Pocket Guide

    Projects
    My projects at a glance:

    • Application Architecture Guide 2.0 – A guide, knowledge base, information model and methodologies for the Microsoft platform.
    • ASP.NET Security Reference Implementation - Sample application for ASP.NET 2.0.
    • Building Secure ASP.NET Applications – A guide for designing authentication and authorization in end-to-end application scenarios.
    • Catalog Sweep – Information model for organizing the complete patterns & practices catalog of code and content assets.
    • Customer Connected Engineering – Methodology for engaging customers throughout the life cycle (“patterns & practices secret sauce.”)
    • Defending Your Code – An online knowledge base for software security.
    • Guidance Explorer – An online knowledge base for prescriptive guidance ("ITunes for knowledge.")
    • Improving .NET Application Performance and Scalability – A guide and methodology for baking performance into the life cycle.
    • Improving Web Application Security – A guide for threats, attacks, vulnerabilities and countermeasures for LOB applications.
    • Improving Web Services Security – A guide for threats, attacks, vulnerabilities and countermeasures for Web services.
    • Performance Testing Guidance for Web Applications – A guide and testing methodology for testing Web application performance.
    • PDL (Performance Development Life Cycle) – Methodology, activities and artifacts for baking performance into the life cycle.
    • Practices Checker – An application that checks software against patterns & practices recommendations.
    • Scenario Evaluation Framework – Assessment technique for design, implementation and deployment “building codes.”
    • Security Case Studies – A model and examples for sharing business impact from patterns & practices security guidance.
    • Security Code Examples – 60 security code examples in VB.NET / C#.
    • Security Engineering Explained – A guide and methodology for baking security into the life cycle.
    • Security Engineering in VSTS – Baked security engineering into VSTS / MSF.
    • Security Information Model – A unified model for Microsoft’s security guidance.
    • Security Toolbar – A toolbar for browsing patterns & practices guidance from Visual Studio.
    • Threat Modeling Web Applications – A technique to identify relevant threats and vulnerabilities for your scenario to help you shape your application's security design.
    • Visual Studio Add-In for Guidance Explorer – Find, create, and share prescriptive guidance inside Visual Studio.
    • Whidbey Security Guidance – A collection of guidelines, checklists, and step-by-step how tos for improving software security based on the .NET Framework 2.0.

    Where do we go from here?  You write your future a page at a time.  If there’s one thing I’ve learned, it’s continue to reinvent yourself, reinvent your job, and make the most of what you’ve got.

    My Related Posts

  • J.D. Meier's Blog

    What is a PM?

    • 2 Comments

    What is a PM (Program Manager)?  While the Program Manager role seems unique to Microsoft, in general, when you map it to other companies, it’s a product manager, or a project manager, or a combination of the two.  At Microsoft, there are various flavors of PMs (“design” PM, “project” PM, “process” PM, etc.) and the PM discipline can be very different in different groups.  I’ve also seen the PM title used as a general job title, in the absence of something more specific.

    At Microsoft it’s a role that means many things to many people.  In general though, when you meet a PM at Microsoft, you expect somebody who has vision, can drive a project to completion, can manage scope and resources, coordinate work across a team, bridge the customer, the business, and the technology, act as a customer champ, and influence without authority.  From a metaphor standpoint, they are often the hub to the spokes, they drive ideas to done, they take the ball and run with it, or find out who should run with the ball.  Some PMs are better at thought leadership, some are better at people leadership, and the best are great at both. 

    Here is a roundup of some of my favorite points that elaborate, clarify, and distill what a PM is, what a PM does, and how to be one.

    Attributes and Qualities of a PM
    Here is a list of key attributes from Steven Sinofsky’s post -- PM at Microsoft:

    • “Strong interest in what computing can do -- a great interest in technology and how to apply that to problems people have in life and work”
    • “Great communication skills -- you do a lot of writing, presenting, and convincing”
    • “Strong views on what is right, but very open to new ideas -- the best PMs are not necessarily those with the best ideas, but those that insure the best ideas get done”
    • “Selfless -- PM is about making sure the best work gets done by the team, not that your ideas get done. “
    • “Empathy -- As a PM you are the voice of the customer so you have to really understand their point of view and context “
    • “Entrepreneur -- as a PM you need to get out there and convince others of your ideas, so being able to is a good skill”

    Here is an example of PM qualities from Sean Lyndersay’s post -- Exchange team defines a Program Manager:

    • You are passionate about working directly with customers, able to clearly articulate customers’ requirements and pains to executives, architects and developers.
    • You have experience building strong teams and have a passion for mentoring.
    • The ability to work with multiple teams to develop a plan to provide this value to the customers.
    • You understand how software features will impact and/or modify the marketplace once they are shipped.
    • You have a love of innovation and the ability to think through big, long term ideas.
    • Demonstrated expertise at prioritization and project management.
    • You are hands-on with software development, and passionate about user experience, both as an end-user and administrator.

    Microsoft Careers site on Program Manager
    Here is a description of a Program Manager from the Microsoft Careers site:
    “As a Program Manager, you’ll drive the technical vision, design, and implementation of next-generation software solutions. You’ll transform the product vision into elegant designs that will ultimately turn into products used by Microsoft customers. Managing feature sets throughout the product lifecycle, you’ll have the chance to see your design through to completion. You’ll also work directly with other key team members including Software Development Engineers and Software Development Engineers in Test. Program Managers are advocates for end-users, so your passion for anticipating customer needs and creating outside-the-box solutions for them will really help you shine in this role. As a Program Manager you will have the ability to lead within a product’s life cycle using evangelism, empathy, and negotiation to define and deliver results. You will also be responsible for authoring technical specifications, including envisaged usage cases, customer scenarios, and prioritized requirements lists.” 

    Chris Pratley on Program Manager 
    Here are some points on Program Management from Chris Pratley’s post -- Program Manager:

    • “One way to describe PMs is that they not only "pick up and run with the ball, they go find the ball". That really defines the difference between "knowing what to do and doing it", and "not knowing what to do, but using your own wits to decide what to do, then doing it". That means as a PM you are constantly strategizing and rethinking what is going on to find out if there is something you are missing, or the team is missing. You’re also constantly deciding what is important to do, and whether action needs to be taken.”
    • "... In Office, there are "design" PMs who mainly work on designing the products, "process" PMs who mainly work on driving processes that make sure we get things done, localization PMs who are sort of like process PMs but also sort of like developers in that they sometimes produce code and make bits that ship in the product..."
    • “This ‘jack of all trades’ or ‘renaissance man’ … acted as a hub of communication, and made the marketers job easier, since they had someone to talk to who could take the time to really understand the outside world, and the devs could talk to someone who spoke their language and was not ridiculous …”
    • “You’re also constantly deciding what is important to do, and whether action needs to be taken. The number of such decisions is staggering. I might make 50 a day, sometimes more than 100 or even 200. Most of the time there is not nearly enough hard data to say for certain what to do, and in any case almost all these decisions could never have hard data anyway - you need to apply concentration and clear thinking.”

    Here are the stages of your first year as a PM, according to Pratley:

    1. Start off with excitement and enthusiasm for the new job.
    2. About 4 weeks into the job, you start to feel strange. People keep asking you to decide things you don’t know anything about, as if you’re some kind of expert.
    3. By month two, you're convinced you are the dumbest person on the team by far.
    4. By month four, you have lived through a torture of feeling incompetent and a dead weight on your team.
    5. By month six, you have a great moment.
    6. By month 12, you have developed your network of contacts that pass information to you, you are a subject matter expert on your area, and people on the team are relying on you because you know lots of things they don't know. You have made it.

    Joel Spolsky on Program Manager
    Here are points from Joel’s post on How To Be a Program Manager:

    • “Having a good program manager is one of the secret formulas to making really great software”
    • According to Joel, Charles Simonyi invented the job “Program Manager,” but Jabe Blumenthal, a programmer on the Mac Excel team in the late 80s, reinvented the job completely.
    • What does a Program Manager do?  “Design UIs, write functional specs, coordinate teams, serve as the customer advocate.”
    • Recommended reading from How To Be a Program Manager: Making Things Happen by Scott Berkun, Don’t Make Me Think by Steve Krug, User Interface Design for Programmers by Joel Spolsky, and How To Win Friends and Influence People by Dale Carnegie.

    Ray Schraff on Program Manager
    Here are points from the comments on Chris Pratley’s post, Program Management:

    • “Once they find the ball, PMs don't pick up every ball themselves... but they own the task of making sure that every ball is picked up and carried to the correct endzone by SOMEBODY. “
    • “PMs translate technology to English “
    • “PMs translate English to technology”

    Sean Lyndersay on Program Manager

    • “My favorite definition involves an analogy: A program manager is to a software project what an architect is to a building.”  See Reflections Program Management at Microsoft.
    • “… the PM occupies a unique position in most software engineering structures – sort of the hub of a bumpy wheel (with dev, QA/test, design, usability, marketing, planning, customer support, etc. being the spokes).”  See Someone has an Interesting View of PMs (at least at Yahoo!).
    • "You are the center of the hurricane, the eye of the product development storm. You have passion and more, you have vigor. You are fueled by pure energy and endless drive. You are reading this and wondering why it's not in bullet-points because that would have been more efficient. You are working on a game that merges poker with chess in your spare time because neither game uses your full capabilities and talents. You have no use for extraneous clutter or mundane activities and you wonder what is holding up the full-scale integration of robots into the home – its 2007 already and doing the dishes remains as mundane and inefficient as it ever was. You are thinking that this job description is taking too long to read, and you are right. So, here is the rest of it in bullet points."  See Exchange team defines a Program Manager.
    • “With regard to where ideas come from, what we like to say is that the job of program management is not to have all the great ideas but to make sure all the great ideas are ultimately picked. The best program managers make sure the best ideas get done, no matter where they come from.”  See The Job of Program Management.

    Steven Sinofsky on Program Manager
    Here are points from Sinofsky’s post on PM at Microsoft:

    • “Learn, convince, spec, refine”
    • “PM is one equal part of a whole system.”
    • “PM is a role that has a lot of responsibility but you also define it--working in partnership with expert designers, clever developers, super smart testers, etc. you all work together to define the product AND the responsibilities each of you have”
    • “… the PM role at Microsoft is both unique and one that has unlimited potential -- it is PMs that can bring together a team and rally them around a new idea and bring it to market (like OneNote, InfoPath).”
    • “It is PMs that can tackle a business problem and work with marketing and sales to really solve a big customer issue with unique and innovative software (like SharePoint Portal Server).”
    • “Where developers were focused on code, architecture, performance, and engineering, the PM would focus on the big picture of "what are we trying to do" and on the details of the user experience, the feature set, the way the product will get used.”
    • “A key benefit of program management is that we are far more agile because we have program management.  That can be counter-intuitive (even for many developers at Microsoft who might be waiting for their PM to iron out the spec).  But the idea that you can just start writing code without having a clear view of the details and specification is a recipe for a poorly architected features.”
    • “A great PM knows when the details are thought through enough to begin and a great developer knows when they can start coding even without the details for sure.  But like building a house--you don't start without the plans.  “
    • “A good book that describes the uniqueness of PM at Microsoft is Michael Cussumano's book Microsoft Secrets or his new book, The Business of Software.”

    My Related Posts

  • J.D. Meier's Blog

    Microsoft Cloud Case Studies at a Glance

    • 0 Comments

    Cloud computing is hot.  As customers make sense of what the Microsoft cloud story means to them, one of the first things they tend to do is look for case studies of the Microsoft cloud platform.  They like to know what their peers, partners, and other peeps are doing.

    Internally, I get to see a lot of what our customers are doing across various industries and how they are approaching the cloud behind the scenes.  It’s amazing the kind of transformations that cloud computing brings to the table and makes possible.  Cloud computing is truly blending and connecting business and IT (Information Technology), and it’s great to see the connection.  In terms of patterns, customers are using the cloud to either reduce cost, create new business opportunities and agility, or compete in ways they haven’t been able to before.  One of the most significant things cloud computing does is force people to truly understand what business they are in and what their strategy actually is.

    Externally, luckily, we have a great collection of Microsoft cloud case studies available at Windows Azure Case Studies.

    I find having case studies of the Microsoft cloud story makes it easy to see patterns and to get a sense of where some things are going.  Here is a summary of some of the case studies available, and a few direct links to some of the studies.

    Advertising Industry
    Examples of the Microsoft cloud case studies in advertising:

    Air Transportation Services
    Examples of the Microsoft cloud case studies in air transportation services:

    Capital Markets and Securities Industry
    Examples of the Microsoft cloud case studies in capital markets and securities:

    Education
    Examples of the Microsoft cloud case studies in education:

    Employment Placement Agencies
    Examples of the Microsoft cloud case studies in employment agencies:

    • OCC Match - Job-listing web site scales up solution, reduces costs by more than U.S. $500,000.


    Energy and Environmental Agencies
    Examples of the Microsoft cloud case studies in energy and environmental agencies:

    • European Environment Agency (EEA) - Environment agency's pioneering online tools bring revolutionary data to citizens.

    Financial Services Industry
    Examples of the Microsoft cloud case studies in the financial services industry:

    • eVision Systems - Israeli startup offers cost-effective, scalable procurement system using cloud services.
    • Fiserv - Fiserv evaluates cloud technologies as it enhances key financial services offerings.
    • NVoicePay - New company tackles big market with cloud-based B2B payment solution.
    • RiskMetrics - Financial risk-analysis firm enhances capabilities with dynamic computing.


    Food Service Industry
    Examples of the Microsoft cloud case studies in the food service industry:

    • Outback Steakhouse - Outback Steakhouse boosts guest loyalty with Facebook and Windows Azure.

    Government Agencies
    Examples of the Microsoft cloud case studies in government agencies:

    Healthcare Industry
    Examples of the Microsoft cloud case studies in healthcare:

    • Vectorform - Digital design and technology firm supports virtual cancer screening with 3-D viewer.

    High Tech and Electronics Manufacturing
    Examples of the Microsoft cloud case studies in high tech and electronics manufacturing:

    • 3M - 3M launches Web-based Visual Attention Service to heighten design impact. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000005768
    • GXS Trading Grid - Electronic services firm reaches out to new markets with cloud-based solution.
    • iLink Systems - Custom developer reduces development time, cost by 83 percent for Web, PC, mobile target.
    • Microsoft Worldwide Partner Group - Microsoft quickly delivers interactive cloud-based tool to ease partner transition.
    • Sharpcloud - Software startup triples productivity, saves $500,000 with cloud computing solution.
    • Symon Communications - Digital innovator uses cloud computing to expand product line with help from experts.
    • VeriSign - Security firm helps customers create highly secure hosted infrastructure solutions.
    • Xerox - Xerox cloud print solution connects mobile workers to printers around the world.

    Hosting
    Examples of the Microsoft cloud case studies in hosting:

    • Izenda - Hosted business intelligence solution saves companies up to $250,000 in IT support and development costs.
    • Mamut - Hosting provider uses scalable computing to create hybrid backup solution.
    • Metastorm - Partner opens new market segments with cloud-based business process solution.
    • Qlogitek - Supply chain integrator relies on Microsoft platform to facilitate $20 billion in business.
    • SpeechCycle - Next generation contact center solution uses cloud to deliver software-plus-services.
    • TBS Mobility - Mobility software provider opens new markets with software-plus-services.

    Insurance Industry
    Examples of the Microsoft cloud case studies in the insurance industry:

    IT Services
    Examples of the Microsoft cloud case studies in IT services:

    • BEDIN Shop Systems - Luxury goods retailer gains point-of-sale solution in minutes with cloud-based system. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000008195
    • Broad Reach Mobility - Firm streamlines field-service tasks with cloud solution. - http://www.microsoft.com/casestudies/Windows-Azure/Broad-Reach-Mobility/Firm-Streamlines-Field-Service-Tasks-with-Cloud-Solution/4000008493
    • Codit - Solution provider streamlines B2B connections using cloud services. - http://www.microsoft.com/casestudies/Microsoft-BizTalk-Server/Codit/Solution-Provider-Streamlines-B2B-Connections-Using-Cloud-Services/4000008528
    • Cumulux - Software developer focuses on innovation, extends cloud services value for customers. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000007947
    • eCraft - IT firm links web applications to powerful business management software.
    • EdisonWeb - Web firm saves $30,000 annually, expands global growth with cloud database service.
    • Epicor - Software developer saves money, enhances application with Internet-based platform.
    • ESRI - GIS provider lowers cost of customer entry, opens new markets with hosted services.
    • Formotus - Forms automation company uses cloud storage to lower mobile data access costs.
    • FullArmor - FullArmor PolicyPortal Technical Brief: A Windows Azure/Software-plus-Services Solution
    • Gcommerce - Service provider transforms special-order process with a hybrid cloud and on-premises inventory solution.
    • GoGrid - Hosting provider extends service offerings, attracts customers with "cloud" platform.
    • Guppers - Mobile data services quickly and cost-effectively scale with cloud services solution.
    • HCL Technologies - IT firm delivers carbon-data management in the cloud, lowers barriers for customers.
    • HubOne - Australian development firm grows business exponentially with cloud services.
    • IMPACTA - IT security firm delivers low cost, high protection with online file transfer service.
    • Infosys - Infosys creates cloud-based solution for auto dealers using SQL data services.
    • InterGrid GreenButton - GreenButton super computing power in the cloud.
    • InterGrid - Software developers offer quick processing of compute-heavy tasks with cloud services.
    • ISHIR Infotech - IT company attracts new customers at minimal cost with cloud computing solution.
    • K2 - Software firm moves business processes and line-of-business data into the cloud.
    • Kelly Street Digital - Digital marketing startup switches cloud providers and saves $4,200 monthly.
    • Kompas Xnet - IT integrator delivers high-availability website at lower cost with online services.
    • LINQPad - Software developers gain the ease of LINQ data queries to compelling cloud content.
    • Meridium - Asset performance management solution increases performance and interoperability.
    • metaSapiens - ISV optimizes data browsing tool for online data, expects to enter global marketplace.
    • Microsoft - Microsoft IT moves auction tool to the cloud, makes it easier for employees to donate.
    • NeoGeo New Media - Digital media asset management solution gains scalability with SQL Data Services.
    • Paladin Data Systems - Software provider reduces operating costs with cloud-based solution.
    • Persistent Systems - Software services provider delivers cost-effective e-government solution.
    • Propelware - Intuit integration provider reduces development time, cost by 50 percent.
    • Quilink - Innovative technology startup creates contact search solution, gains global potential.
    • Siemens - Siemens expands software delivery service, significantly reduces TCO.
    • Sitecore - Company builds compelling web experiences in the cloud for multinational organizations.
    • SOASTA - Cloud services help performance-testing firm simulate real-world Internet traffic.
    • Softronic - Firm meets demand for streamlined government solutions using cloud platform.
    • SugarCRM - CRM vendor quickly adapts to new platform, adds global, scalable delivery channel.
    • Synteractive - Solution provider uses cloud technology to create novel social networking software.
    • Transactiv - Software start-up scales for demand without capital expenses by using cloud services.
    • Umbraco - Web content management system provider moves solution to the cloud to expand market.
    • Volantis - Mobile services ISV gains seamless scalability with Windows Azure platform.
    • Wipro - IT services company reduces costs, expands opportunities with new cloud platform.
    • Zitec - IT consultancy saves up to 90 percent on relational database costs with cloud services.
    • Zmanda - Software company enriches cloud-based backup solution with structured data storage.

    Life Sciences
    Examples of the Microsoft cloud case studies in life sciences:

    Manufacturing
    Examples of the Microsoft cloud case studies in manufacturing:

    Media and Entertainment Industry
    Examples of the Microsoft cloud case studies in media and entertainment:

    • OriginDigital - Video services provider to reduce transcoding costs up to half.
    • Sir Speedy - Publishing giant creates innovative web based service for small-business market.
    • STATS - Sports data provider saves $1 million on consumer market entry via cloud services.
    • TicketDirect - Ticket seller finds ideal business solution in hosted computing platform.
    • TicTacTi - Advertising company adopts cloud computing, gets 400 percent improvement.
    • Tribune - Tribune transforms business for heightened relevance by embracing cloud computing.
    • VRX Studios - Global photography company transforms business with scalable cloud solution.

    Metal Fabrication Industry
    Examples of the Microsoft cloud case studies in metal fabrication:

    • ExelGroup - ExelGroup achieves cost reduction and efficiency increase with Soft1 on Windows Azure.


    Nonprofit Organizations
    Examples of the Microsoft cloud case studies in non-profit organizations:

    • Microsoft Disaster Response Team - Helping governments recover from disasters: Microsoft and partners provide technology and other assistance following natural disasters in Haiti and Pakistan.

    Oil and Gas Industry
    Examples of the Microsoft cloud case studies in oil and gas:

    • The Information Store (iStore) - Solution developer expects to boost efficiency with software-plus-services strategy.

    Professional Services
    Examples of the Microsoft cloud case studies in professional services:

    Publishing Industry
    Examples of the Microsoft cloud case studies in publishing:

    • MyWebCareer - Web startup saves $305,000, sees ever-ready scalability—without having to manage IT.


    Retail Industry
    Examples of the Microsoft cloud case studies in retail:

    • Glympse.com - Location-sharing solution provider gains productivity, agility with hosted services.
    • höltl Retail Solutions - German retail solutions firm gains new customers with cloud computing solution.

    Software Engineering
    Examples of the Microsoft cloud case studies in software:

    Telecommunications Industry
    Examples of the Microsoft cloud case studies in telecommunications:

    • IntelePeer - Telecommunications firm develops solution to speed on-demand conference calls.
    • SAPO - Portugal telecom subsidiary helps ensure revenue opportunities in the cloud.
    • T-Mobile USA - Mobile operator speeds time-to-market for innovative social networking solution.
    • T-Systems - Telecommunications firm reduces development and deployment time with hosting platform.

    Training Industry
    Examples of the Microsoft cloud case studies in training:

    • Point8020 - Learning content provider uses cloud platform to enhance content delivery.

    Transportation and Logistics Industry
    Examples of the Microsoft cloud case studies in transportation:

    • TradeFacilitate - Trade data service scales online solution to global level with "cloud" services model.

    My Related Posts

  • J.D. Meier's Blog

    My Projects on MSDN

    • 5 Comments

    This post is a simple way to browse the bulk of my patterns & practices work on MSDN and CodePlex.   After I walk customers through things, the next question is usually, "OK, so where do we find this?"  This is the link I'll be sharing.

    Guides

    Performance

    Books / Guides

    Methods

    Guidelines

    Checklists

    Practices at a Glance

    How Tos

    Security

    Guides

    Methods

    Threats and Countermeasures

    Cheat Sheets

    Guidelines

    Checklists

    Practices at a Glance

    Questions and Answers

    Explained

    Application Scenarios

    ASP.NET Security How Tos

    WCF Security How Tos

    Visual Studio Team System

    Guides

    Guidelines

    Practices at a Glance

    Questions and Answers

    How Tos

    My Related Posts

  • J.D. Meier's Blog

    Motivation Techniques and Motivation Theories

    • 3 Comments

    "To hell with circumstances; I create opportunities." – Bruce Lee

    Motivation is a key to making things happen, whether you’re developing software, leading teams, or just getting yourself out of bed and on with your day.

    It's hard to change the world, or even just your world for that matter, if you lack the motivation or drive.  In a world where there is plenty that can bring you down, the best thing you can do is arm yourself with motivation techniques that work, and motivation theories that explain *why* they work.

    Motivation Techniques
    You can motivate yourself with skill, as well as others, if you know the key motivation techniques.  Here is my latest collection of motivation techniques and methods at a glance:

    You can use the motivation techniques to motivate yourself and others.

    Motivation Theories
    There are a lot of relevant motivation theories, and some have evolved over the years.  Maslow’s Hierarchy of Needs is useful for understanding some basic drivers.  It’s also useful to know David McClelland’s Theory of Needs, which focuses on achievement, affiliation, and power as key drivers.

    It’s also useful to distinguish between intrinsic and extrinsic motivation.  For example, if you depend on other people to carrot or stick you, you’re driving from extrinsic motivation.  If instead, you’re doing something because it makes you feel alive or unleashes your passion or simply just for a job well done, then you’re driving from intrinsic motivation, and that is a powerful place to be. 

    It’s also useful to know that at the end of the day, purpose is the most powerful driver, and if you can connect what you do to your purpose, then you bring out your best and you’re a powerful force to be reckoned with.  Purpose, passion, and persistence change the game.

    Timeline of Motivation Theories, Studies, and Models
    Here is a timeline of some interesting work on the study of motivation:

    • 1939 - The Hawthorne studies focused on supervision, incentives, and working conditions.
    • 1957 - Argyris focused on the congruence between individual's needs and organizational demands.
    • 1959 - Focused on sources of work satisfaction and on designing work to make it enriching and rewarding (Herzberg, Mausner, and Snyderman).
    • 1964 - Valence-instrumentality-expectancy model (Vroom).
    • 1975 - Organizational behavior modification - Focused on the automatic role of rewards and feedback on work motivation, but downplayed the impact of psychological processes such as goals and self-efficacy.
    • 1977 - Self-efficacy (Locke).
    • 1980 - Focused on the ways specific work characteristics and psychological processes increase employee satisfaction (Hackman and Oldham).
    • 1986 - Goals and self-efficacy (Bandura).
    • 1986 - Social-cognitive theory (Bandura).
    • 1986 - Attribution theory - Focuses on how the ways you make attributions affect your future choices and actions (Weiner).
    • 1987 - Goal theory - Focuses on the effects of conscious goals as motivators of task performance (Lord and Hanges).
    • 1997 - Self-efficacy has a powerful motivating effect on task performance (Bandura).
    • 2002 - Goal-setting theory (Locke and Latham)

    Motivation Quotes
    If you need some inspiring words of wisdom, be sure to explore my collection of motivation quotes.

  • J.D. Meier's Blog

    patterns & practices Performance Engineering

    • 4 Comments

    As part of our patterns & practices App Arch Guide 2.0 project, we're consolidating our information on our patterns & practices Performance Engineering.  Our performance engineering approach is simply a collection of performance-focused techniques that we found to be effective for meeting your performance objectives.  One of the keys to its effectiveness is our performance frame.  Our performance frame is a collection of "hot spots" that organize principles, patterns, and practices, as well as anti-patterns.  We use the frame to drive effective performance design and code inspections.  Here's a preview of our cheat sheet so far.  You'll notice a lot of similarity with our patterns & practices Security Engineering.  That's by design, so that you can use a consistent approach for handling both security and performance.

    Performance Overlay
    This is our patterns & practices Performance Overlay:

    PerformanceEngineering

    Key Activities in the Life Cycle
    This Performance Engineering approach extends the proven core activities of your existing life cycle with performance-specific activities.  These include:

    • Performance Objectives. Setting objectives helps you scope and prioritize your work by setting boundaries and constraints. Setting performance objectives helps you identify where to start, how to proceed, and when your application meets your performance goals.
    • Budgeting. Budget represents your constraints and enables you to specify how much you can spend (resource-wise) and how you plan to spend it.
    • Performance Modeling. Performance modeling is an engineering technique that provides a structured and repeatable approach to meeting your performance objectives.
    • Performance Design Guidelines. Applying design guidelines, patterns and principles which enable you to engineer for performance from an early stage.
    • Performance Design Inspections. Performance design inspections are an effective way to identify problems in your application design. By using pattern-based categories and a question-driven approach, you simplify evaluating your design against root cause performance issues.
    • Performance Code Inspections. Many performance defects are found during code reviews. Analyzing code for performance defects includes knowing what to look for and how to look for it. Performance code inspections identify inefficient coding practices that could lead to performance bottlenecks.
    • Performance Testing. Load and stress testing is used to generate metrics and to verify application behavior and performance under normal and peak load conditions.
    • Performance Tuning.  Performance tuning is an iterative process that you use to identify and eliminate bottlenecks until your application meets its performance objectives. You start by establishing a baseline. Then you collect data, analyze the results, and make configuration changes based on the analysis. After each set of changes, you retest and measure to verify that your application has moved closer to its performance objectives.  (See the sketch after this list.)
    • Performance Health Metrics.  Identify the measures, measurements, and criteria for evaluating the health of your application from a performance perspective.
    • Performance Deployment Inspections. During the deployment phase, you validate your model by using production metrics. You can validate workload estimates, resource utilization levels, response time, and throughput.
    • Capacity Planning. You should continue to measure and monitor when your application is deployed in the production environment. Changes that may affect system performance include increased user loads, deployment of new applications on shared infrastructure, system software revisions, and updates to your application to provide enhanced or new functionality. Use your performance metrics to guide your capacity and scaling plans.
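    To make the baseline-measure-retest loop concrete, here is a minimal sketch in Python (an illustration only, not part of the patterns & practices guidance).  The function name process_order and the 5 ms budget are hypothetical stand-ins for the operation under test and its agreed objective.

```python
# Minimal sketch: measure a code path against an explicit performance objective,
# the way the Performance Objectives / Testing / Tuning activities describe.
import statistics
import time


def process_order(order_id: int) -> None:
    # Placeholder workload; swap in the real code path being measured.
    sum(i * i for i in range(10_000))


def median_latency_ms(fn, runs: int = 50) -> float:
    """Return the median latency in milliseconds over `runs` invocations."""
    samples = []
    for i in range(runs):
        start = time.perf_counter()
        fn(i)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)


if __name__ == "__main__":
    OBJECTIVE_MS = 5.0                      # hypothetical budget for this operation
    baseline = median_latency_ms(process_order)
    print(f"median latency: {baseline:.2f} ms (objective: {OBJECTIVE_MS} ms)")
    if baseline > OBJECTIVE_MS:
        print("Objective missed - profile, tune the top bottleneck, then re-measure.")
```

    The point is simply that an objective you cannot measure against cannot drive tuning decisions.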

    Performance Frames
    Performance Frames define a set of patterns-based categories that can organize repeatable problems and solutions. You can use these categories to divide your application architecture for further analysis and to help identify application performance issues. The categories within the frame represent the critical areas where mistakes are most often made.

    • Caching – What and where to cache?  Caching refers to how your application caches data.  The main points to consider are per-user versus application-wide caching and data volatility.  (See the sketch after this list.)
    • Communication – How to communicate between layers?  Communication refers to choices for transport mechanism, boundaries, remote interface design, round trips, serialization, and bandwidth.
    • Concurrency – How to handle concurrent user interactions?  Concurrency refers to choices for transactions, locks, threading, and queuing.
    • Coupling / Cohesion – How to structure your application?  Coupling and cohesion refers to structuring choices that lead to loose coupling and high cohesion among components and layers.
    • Data Access – How to access data?  Data access refers to choices and approaches for schema design, paging, hierarchies, indexes, amount of data, and round trips.
    • Data Structures / Algorithms – How to handle data?  Data structures and algorithms refers to the choice of algorithms and of arrays versus collections.
    • Exception Management – How to handle exceptions?  Exception management refers to choices and approaches for catching and throwing exceptions.
    • Resource Management – How to manage resources?  Resource management refers to the approach for allocating, creating, destroying, and pooling application resources.
    • State Management – What and where to maintain state?  State management refers to how your application maintains state.  The main points to consider are per-user versus application-wide state, persistence, and location.
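    As a sketch of the Caching hot spot decisions (what to cache, where, and for how long), here is a hypothetical, simplified application-wide cache with a time-based expiration policy; it is an illustration under my own assumptions, not code from the guide.

```python
# Minimal sketch: an application-wide cache with time-based expiration.
import time
from typing import Any, Callable, Dict, Tuple


class TtlCache:
    def __init__(self, ttl_seconds: float) -> None:
        self._ttl = ttl_seconds
        self._items: Dict[str, Tuple[float, Any]] = {}

    def get_or_add(self, key: str, loader: Callable[[], Any]) -> Any:
        now = time.monotonic()
        entry = self._items.get(key)
        if entry is not None and now - entry[0] < self._ttl:
            return entry[1]              # fresh hit: skip the round trip to the source
        value = loader()                 # miss or expired: reload and re-cache
        self._items[key] = (now, value)
        return value


# Usage: cache relatively static, application-wide data.
catalog_cache = TtlCache(ttl_seconds=300)
products = catalog_cache.get_or_add("catalog", lambda: ["widget", "gadget"])
print(products)
```

    Per-user or highly volatile data would call for a different scope and expiration policy, which is exactly the trade-off this category asks you to decide up front.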


    Architecture and Design Issues
    Use the diagram below to help you think about performance-related architecture and design issues in your application.

    PerformanceIssues

    The key areas of concern for each application tier are:

    • Browser.  Blocked or unresponsive UI.
    • Web Server.  Using state affinity.  Wrong data types.  Fetching per request instead of caching.  Poor resource management.
    • Application Server.  Blocking operations.  Inappropriate choice of data structures and algorithms.  Not pooling database connections.
    • Database Server.  Chatty calls instead of batch processing.  Contention, isolation levels, locking, and deadlocks.  (See the sketch after this list.)
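    To illustrate the "chatty instead of batch" issue on the database tier, here is a minimal sketch (my example, not from the original post) using Python's standard sqlite3 module; against a remote server, the batched form saves one round trip per row.

```python
# Minimal sketch: per-row ("chatty") statements versus a single batched call.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
rows = [(i, i * 9.99) for i in range(1000)]

# Chatty: one statement (and, against a remote server, one round trip) per row.
for row in rows:
    conn.execute("INSERT INTO orders (id, total) VALUES (?, ?)", row)
conn.execute("DELETE FROM orders")       # reset so the batch insert below starts clean

# Batched: one call carries the whole parameter set.
conn.executemany("INSERT INTO orders (id, total) VALUES (?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```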

    Design Process Principles
    Consider the following principles to enhance your design process:

    • Set objective goals. Avoid ambiguous or incomplete goals that cannot be measured such as "the application must run fast" or "the application must load quickly." You need to know the performance and scalability goals of your application so that you can (a) design to meet them, and (b) plan your tests around them. Make sure that your goals are measurable and verifiable. Requirements to consider for your performance objectives include response times, throughput, resource utilization, and workload. For example, how long should a particular request take? How many users does your application need to support? What is the peak load the application must handle? How many transactions per second must it support? You must also consider resource utilization thresholds. How much CPU, memory, network I/O, and disk I/O is it acceptable for your application to consume?
    • Validate your architecture and design early. Identify, prototype, and validate your key design choices up front. Beginning with the end in mind, your goal is to evaluate whether your application architecture can support your performance goals. Some of the important decisions to validate up front include deployment topology, load balancing, network bandwidth, authentication and authorization strategies, exception management, instrumentation, database design, data access strategies, state management, and caching. Be prepared to cut features and functionality or rework areas that do not meet your performance goals. Know the cost of specific design choices and features.
    • Cut the deadwood. Often the greatest gains come from finding whole sections of work that can be removed because they are unnecessary. This often occurs when (well-tuned) functions are composed to perform some greater operation. It is often the case that many interim results from the first function in your system do not end up getting used if they are destined for the second and subsequent functions. Elimination of these "waste" paths can yield tremendous end-to-end improvements.
    • Tune end-to-end performance. Optimizing a single feature could take away resources from another feature and hinder overall performance. Likewise, a single bottleneck in a subsystem within your application can affect overall application performance regardless of how well the other subsystems are tuned. You obtain the most benefit from performance testing when you tune end-to-end, rather than spending considerable time and money on tuning one particular subsystem. Identify bottlenecks, and then tune specific parts of your application. Often performance work moves from one bottleneck to the next bottleneck.
    • Measure throughout the life cycle. You need to know whether your application's performance is moving toward or away from your performance objectives. Performance tuning is an iterative process of continuous improvement with hopefully steady gains, punctuated by unplanned losses, until you meet your objectives. Measure your application's performance against your performance objectives throughout the development life cycle and make sure that performance is a core component of that life cycle. Unit test the performance of specific pieces of code and verify that the code meets the defined performance objectives before moving on to integrated performance testing (see the sketch after this list).  When your application is in production, continue to measure its performance. Factors such as the number of users, usage patterns, and data volumes change over time. New applications may start to compete for shared resources.
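    The "measure throughout the life cycle" principle can be pushed down to the unit-test level.  Here is a minimal sketch (an assumption, not from the original guidance) that treats a latency budget as a pass/fail criterion; render_report and the 50 ms budget are hypothetical examples.

```python
# Minimal sketch: a unit test that fails when a performance objective is missed.
import time
import unittest


def render_report(items):
    """Hypothetical code under test."""
    return "\n".join(f"{name}: {qty}" for name, qty in items)


class RenderReportPerformanceTest(unittest.TestCase):
    BUDGET_SECONDS = 0.05                  # agreed objective for this operation

    def test_meets_latency_budget(self):
        items = [(f"item-{i}", i) for i in range(5_000)]
        start = time.perf_counter()
        render_report(items)
        elapsed = time.perf_counter() - start
        self.assertLess(
            elapsed, self.BUDGET_SECONDS,
            f"render_report took {elapsed:.3f}s, budget is {self.BUDGET_SECONDS}s")


if __name__ == "__main__":
    unittest.main()
```

    Wall-clock assertions like this can be noisy on shared build machines, so treat them as an early-warning signal rather than a precise benchmark.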

    Design Guidelines
    These guidelines give application architects a starting point for performance design and for improving performance design inspections.

    • Caching – Decide where to cache data.  Decide what data to cache.  Decide the expiration policy and scavenging mechanism.  Decide how to load the cache data.  Avoid distributed coherent caches.
    • Communication – Choose the appropriate remote communication mechanism.  Design chunky interfaces.  Consider how to pass data between layers.  Minimize the amount of data sent across the wire.  Batch work to reduce calls over the network.  Reduce transitions across boundaries.  Consider asynchronous communication.  Consider message queuing.  Consider a "fire and forget" invocation model.
    • Coupling / Cohesion – Design for loose coupling.  Design for high cohesion.  Partition application functionality into logical layers.  Use early binding where possible.  Evaluate resource affinity.
    • Data Structures / Algorithms – Choose an appropriate data structure.  Pre-assign size for large dynamic growth data types.  Use value and reference types appropriately.
    • Resource Management – Treat threads as a shared resource.  Pool shared or scarce resources.  Acquire late, release early.  Consider efficient object creation and destruction.  Consider resource throttling.  (See the sketch after this list.)
    • State Management – Evaluate stateful versus stateless design.  Consider your state store options.  Minimize session data.  Free session resources as soon as possible.  Avoid accessing session variables from business logic.
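    As a sketch of the Resource Management guidance ("pool shared or scarce resources; acquire late, release early"), here is a hypothetical, simplified pool.  It is illustrative only; a real database driver's built-in pooling would normally be used instead.

```python
# Minimal sketch: pool a scarce resource and release it as soon as work is done.
import queue
from contextlib import contextmanager


class ExpensiveConnection:
    """Stand-in for a costly-to-create resource such as a database connection."""
    def query(self, text: str) -> str:
        return f"result of {text!r}"


class ConnectionPool:
    def __init__(self, size: int) -> None:
        self._pool: "queue.Queue[ExpensiveConnection]" = queue.Queue()
        for _ in range(size):
            self._pool.put(ExpensiveConnection())

    @contextmanager
    def acquire(self):
        conn = self._pool.get()            # blocks when the pool is exhausted
        try:
            yield conn
        finally:
            self._pool.put(conn)           # always return the resource to the pool


pool = ConnectionPool(size=4)
with pool.acquire() as conn:               # acquire late...
    print(conn.query("SELECT 1"))
# ...release early: the connection is back in the pool as soon as the block exits.
```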

    Additional Resources

    My Related Posts

  • J.D. Meier's Blog

    Career Growth

    • 2 Comments

    Do you have an effective approach for thinking about your career growth?   With things like a “jobless economic recovery,” careers ending, and a “skills-for-hire” economy, it’s even more important to focus on growth while managing your career.  At the end of the day, you play the most important role in your career growth – own it.

    This past year reminded me of a very valuable lesson – follow the growth.  This means follow your own growth and growth in the marketplace.  When there’s no growth, make some.

    Career Development, Professional Development, and Personal Development
    Steve Elston, our print and web publications manager on the patterns & practices team, shared this simple frame for differentiating and thinking about development paths:

    • Career Development – Become a stronger leader.
    • Professional Development – Become a better craftsman.
    • Personal Development – Become a more capable person.

    I think an effective way to think of this is …

    “Are you the person, the professional, the manager, or the executive you want to be?”

    Make Yourself Bigger
    In terms of personal development, I think “become a more *capable* person” is a great distinction over something like “become a better person.”   Rather than question self-worth or value, you put the focus on improving your effectiveness and capabilities.  It reminds me of a quote ...

    “You don’t overcome challenges by making them smaller but by making yourself bigger.” -- John Maxwell

    Career Growth, Professional Growth, and Personal Growth
    Steve shared some quick ways to think about who you can leverage for your growth and what sort of awareness you need for effective growth:

    • Career Growth – Requires awareness of business trends and industry trends.  Who helps: mentors, leaders, colleagues, and your manager.
    • Professional Growth – Requires awareness of organizational trends and industry trends.  Who helps: mentors, your manager, and colleagues.
    • Personal Growth – Requires awareness of self.  Who helps: friends and family, leaders and mentors, and role models.

    As you can see, the key to career growth is awareness of the business, the key to professional growth is awareness of organizational trends, and the key to personal growth is self-awareness.

    What, Who, and How

    Steve also shared a sample way to think about contributing factors to overall job satisfaction.

    • What You Do – The industry, the company, the organization, the manager, and the job.
    • Who You Do It With – Co-workers, partners, customers, and mentors.
    • How You Do It – Technology, process, philosophy, organization culture.

    Steve provides some cutting questions for thinking through these concerns:

    • What matters most to you?
    • Who has the power to improve the situation?
    • How can you influence your job satisfaction?

    Knowledge, Attitude, Skills and Habits (KASH model)
    Steve shared the KASH box model with our team:

    • Knowledge – what you know.
    • Attitude – your attitudes, along with your underlying values and beliefs.
    • Skills – your capabilities.
    • Habits – what you actually do.

    The KASH box is a performance coaching tool and it’s a simple way to look at the gap between knowing and doing and the “transfer of training” problem.  People know what to do, but they don’t do it, or don’t want to.  A lot of people are hired for “skills” and “knowledge,” but fired for “attitude” and “habits.”  In other words, it’s easy to focus on knowledge and skills but often it is people's attitudes and habits that limit them.

    Interestingly, if you know what to do but you’re not doing what you know, closing that gap is one of the simplest and most effective ways to unleash your growth.  Just start acting on what you know and testing your results.

    There’s a video on the KASH box at Kashbox Coaching.com.

    Mentors are the Short-Cuts
    The right mentors can help you avoid the chutes and climb the ladders more effectively.  John deVadoss, our patterns & practices team Product Unit Manager, shared his key tips on how to effectively leverage mentors:

    • Know what you want and what you want from the relationship.
    • Be proactive – you need to drive the meetings and ask the right questions.
    • Keep an open mind regarding who this person might be.
    • Think about people who have been your mentors in the past.
    • You can have more than one mentor.

    This reflects a lot of my own experience.  One of my most important lessons learned is that mentors really are the short-cuts.  If you find somebody who’s “been there” and “done that,” it’s like having a tour guide.  Their maps from experience can save you a lot of wasted time and help you avoid obstacles, as well as find shorter paths to your destinations.

    A mentor can also be great for helping you find your blind spots, as well as giving you more objective feedback on the attitudes and habits that might be limiting you.  This means finding mentors who are committed to your success and whose feedback and perspective you trust.  Usually a good place to look is in your past.  You can draw from people who have helped you before.

    I make it a habit to use a sounding board of multiple mentors for growth in different areas.  I have a few vital mentors for ongoing growth, and then I supplement with mentors for specific things I need to learn.

    I also give back and I mentor others to help them optimize their growth and get results.   A lot of times, life is like Chutes and Ladders. You can climb up ladders only to slide back down.

    Whose Job Do You Want?
    One of my mentors uses the question, “Whose job do you want?” as a great forcing function:

    • Do you know what you want?
    • Is there a proven path?
    • What experiences and skills do you need to get there?

    The other beauty of this is it gives your managers and support network a good mental model for your career path, starting with the end in mind.

    Putting It All Together
    Steve outlined a simple roadmap for putting it all together:

    1. Know what you want.
    2. Get a mentor.
    3. Build your plan.
    4. Ask for support.

    Every day is the perfect day to become more of the person, professional, manager, or executive you want to be.  Enjoy the process and remind yourself it’s the journey and the destination, and remember to periodically check that the ladder you’re climbing is up against the right wall.

  • J.D. Meier's Blog

    Extreme Programming (XP) at a Glance

    • 3 Comments

    Extreme Programming (XP) is a lightweight software development methodology based on principles of simplicity, communication, feedback, and courage.   I like to be able to scan methodologies to compare approaches.  To do so, I create a skeleton of the activities, artifacts, principles, and practices.    Here are my notes on XP:

    Activities

    • Coding
    • Testing
    • Listening
    • Designing

    Artifacts

    • Acceptance tests
    • Code
    • Iteration plan
    • Release and iteration plans
    • Stories
    • Story cards
    • Statistics about the number of tests, stories per iteration, etc.
    • Unit tests
    • Working code every iteration

    12 Practices
    Here are the 12 XP practices:

    • Coding Standards
    • Collective Ownership
    • Continuous Integration
    • On-Site Customer
    • Pair Programming
    • Planning Game
    • Refactoring
    • Short Releases
    • Simple Design
    • Sustainable Pace
    • System Metaphor
    • Test-Driven Development (see the sketch below)

    For a visual of the XP practices, see a picture of the Practices and Main Cycles of XP.
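    To make the Test-Driven Development practice concrete, here is a minimal sketch (my illustration, not from the original post): the test is written first and fails, then just enough code is added to make it pass before refactoring.  slugify is a hypothetical example function.

```python
# Minimal sketch of the TDD rhythm: red (failing test), green (make it pass), refactor.
import re
import unittest


def slugify(title: str) -> str:
    """Just enough implementation to make the test below pass."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


class SlugifyTest(unittest.TestCase):
    # Written before slugify existed; it failed until the implementation was added.
    def test_replaces_punctuation_and_lowercases(self):
        self.assertEqual(
            slugify("Extreme Programming (XP) at a Glance"),
            "extreme-programming-xp-at-a-glance")


if __name__ == "__main__":
    unittest.main()
```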

    5 Values (Extreme Programming Explained)

    • Communication
    • Courage
    • Feedback
    • Respect
    • Simplicity

    Phases
    The following are phases of an XP project life cycle.

    • Exploration Phase
    • Planning Phase
    • Iteration to Release Phase
    • Productionizing Phase
    • Maintenance Phase

    For a visual overview, see Agile Modeling Throughout the XP Lifecycle.

    12 Principles (Agile Manifesto)

    These are the 12 Agile principles according to the  Agile Manifesto:

    • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
    • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
    • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
    • Business people and developers must work together daily throughout the project.
    • Build projects around motivated individuals.  Give them the environment and support they need, and trust them to get the job done.
    • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
    • Working software is the primary measure of progress.
    • Agile processes promote sustainable development.   The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
    • Continuous attention to technical excellence and good design enhances agility.
    • Simplicity–the art of maximizing the amount of work not done–is essential.
    • The best architectures, requirements, and designs emerge from self-organizing teams.
    • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

    4 Values (Agile Manifesto)

    These are the four Agile values according to the Agile Manifesto:

    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan

    Additional Resources

    My Related Posts