J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness


    ADO.NET Code Samples Collection



    The ADO.NET Code Samples Collection is a roundup and map of data access code samples from various sources, including the MSDN library, Code Gallery, CodePlex, and Microsoft Support.

    You can add to the code examples collection by sharing in the comments or emailing me at FeedbackAndThoughts at live.com.

    Common Categories for ADO.NET Code Samples
    The ADO.NET Code Samples Collection is organized using the following categories:


    • Data Binding – MSDN Library
    • Data Models – Code Gallery, Microsoft Support, MSDN Library
    • Entity Framework – All-in-One Code Framework, Code Gallery, MSDN Library
    • LINQ to DataSet – MSDN Library
    • LINQ to Entities – MSDN Library
    • LINQ to Objects – All-in-One Code Framework
    • LINQ to SQL – All-in-One Code Framework, Code Gallery
    • O/RM Mapping – Code Gallery
    • SQL Server – MSDN Library, Code Gallery
    • WCF Data Services – All-in-One Code Framework, Code Gallery


    Getting Results the Agile Way - The Book on Getting Results



    “Are you getting results? …”

    Over Christmas break, I committed to finishing the writing for a book that I expect to change a lot of people's lives.  It's my first non-technical book.  The working title is, Getting Results the Agile Way.  It's all about getting results in work and life.  It's the playbook I wish somebody had given me long ago for finding work/life balance, managing time, playing to my strengths, and making the most of what I've got.

    Why Getting Results
    The world is a tough place.  Between layoffs, the economy, and simply the unknown, a lot of people are having a really tough time in their lives.  New challenges keep coming at a pace that's tough to keep up with.  Worse, I don't think you learn a lot of these skills in school or on the job, except through the school of hard knocks.

    This is my playbook for you.  For more than 10 years at Microsoft I've tested and evaluated ways to get results.  I've had to find things that not only work for me, but that could work for the people I mentor inside and outside the company, as well as for large teams around the world.  I'm a big believer that everybody can get great results if they have the right know-how.

    What Sorts of Problems Does It Tackle
    The book is a system and a playbook for some of these common challenges:

    • How to find work/life balance
    • How to shift from tasks and activities to meaningful results and outcomes
    • How to use stories and scenario-driven results to carve out value in your life
    • How to overwhelm your challenges with fierce results
    • How to defeat perfectionism
    • How to avoid analysis paralysis and take action a simple story at a time
    • How to find your flow state for more engaging work
    • How to find your passion and purpose
    • How to play to your strengths for more energy and better results
    • How to conquer fear and avoid learned helplessness
    • How to motivate yourself in ways that make you feel you can move mountains
    • How to focus on what really counts
    • How to prioritize more effectively
    • How to create more value for yourself and others
    • How to spend more time on what you want, and less time on what you don’t

    It helps with a lot of things because mostly it gets you spending the right time, on the right things, with the right energy, the right way.  This is the key to your best results.

    My Story
    When I first joined Microsoft, it was sink or swim.  I saw a lot of people fail.  Among the chaos, I also saw many people thrive.  I wanted to know their secrets.  I started with people on my team, but the next thing you know I was studying success patterns around the company.  If somebody was known for getting results, I hunted them down and studied their ways.

    I learned so many simple things that actually worked.  For example, instead of managing time, the real key is managing your energy.  I'd rather have four power hours than a week of just going through the motions.  The secret of work/life balance is setting up your own artificial boundaries, whether it's "dinner on the table at 5:30" or "no work on the weekends."  Finding your passion can be as simple as connecting to your values.  For example, I use metaphors to make my project an epic adventure and I have the team create the movie poster of what great results will look like.  How's that for wanting to show up and give your best every day, knowing you're working on blockbuster results?

    What is Agile Results?
    You'll hear me talk about Agile Results quite a bit.  It's the name I gave the system that serves as the foundation for the Getting Results guide.  Agile is all about responding to change.  It's agility in action: making progress while the world changes under your feet.

    My Agile Results system borrows the best principles, patterns, and practices across a variety of disciplines: sports, positive psychology, personal productivity, Agile development, Scrum, project management, time management, leadership skills, and strengths-based development.  It's more than a mash up -- I've tested and honed the system to work for individuals and teams while refining it over years of deliberate practice.  To me, great results for the team always start with unleashing an individual’s best.  Having fun is contagious and getting results spreads like wildfire.

    Agile Results in a Nutshell
    Here is the Agile Results system at a glance:

    • The Rule of 3 – You can apply the Rule of 3 to work and life to avoid overwhelming yourself while carving out value, a day at a time, a story at a time.  See The Rule of 3.
    • Monday Vision, Daily Outcomes, and Friday Reflection – This is a simple weekly pattern for results.  On Mondays, figure out your 3 compelling results for the week.  Each day, figure out your 3 best results for the day.  On Fridays, identify 3 things going well and 3 things to improve.  See Monday Vision, Daily Outcomes, and Friday Reflection.
    • Hot Spots – This is your heat map.  Hot Spots are a simple lens to look at your life as a portfolio you invest in: mind, body, emotions, career, financial, relationships, and fun.  It’s under-investing or over-investing in these areas that can get in the way of great results.  See Hot Spots.

    How to Get Started
    Getting started is really easy.  If you write down 3 results you want for today, you're doing Agile Results.  Is there more to it? … Sure, but take it at your own pace.  Here’s a one-page guide for getting started with Agile Results.

    How To Follow Along for the Ride
    You can read Getting Results for free online in HTML.  I’ll continue to shape the guide over the next several weeks based on feedback.  I’ll also be making March a focus on getting results, so if you’ve been looking for a jumpstart for your life, this is a great month to make it happen.  I’ll be sharing nuggets for getting results at my effectiveness blog, Sources of Insight.

    If you're not getting the results you want in your life, you just need the skills.  Use my guide to stuff your bag of tricks with some new tools that will change your game and help you unleash your best.


    New Release: patterns & practices WCF Security Guide


    Today we released our patterns & practices Improving Web Services Security: Scenarios and Implementation Guidance for WCF on MSDN.  Using end-to-end application scenarios, this guide shows you how to design and implement authentication and authorization in WCF.  You'll learn how to improve the security of your WCF services through prescriptive guidance, including guidelines, a Q&A, practices at a glance, and step-by-step How To articles.  The guide is the result of a collaborative effort between patterns & practices, WCF team members, and industry experts.

    Key Scenarios
    The guide is designed for:

    • A development team that wants to adopt WCF.
    • A software architect or developer looking to get the most out of WCF, with regard to designing their application security.
    • Interested parties investigating the use of WCF who don’t know how well it would work for their deployment scenarios and constraints.
    • Individuals tasked with learning WCF security.

    It covers:

    • Authentication, authorization, and communication design for your services
    • Solution patterns for common distributed application scenarios using WCF
    • Principles, patterns, and practices for improving key security aspects in services

    Contents at a Glance

    • Part I: Security Fundamentals for Web Services
    • Part II: Fundamentals of WCF Security
    • Part III: Intranet Application Scenarios
    • Part IV: Internet Application Scenarios


    • Foreword by Nicholas Allen
    • Foreword by Rockford Lhotka
    • Chapter 1: Security Fundamentals for Web Services
    • Chapter 2: Threats and Countermeasures for Web Services
    • Chapter 3: Security Design Guidelines for Web Services
    • Chapter 4: WCF Security Fundamentals
    • Chapter 5: Authentication, Authorization, and Identities in WCF
    • Chapter 6: Impersonation and Delegation in WCF
    • Chapter 7: Message and Transport Security
    • Chapter 8: Bindings
    • Chapter 9: Intranet - Web to Remote WCF Using Transport Security (Original Caller, TCP)
    • Chapter 10: Intranet - Web to Remote WCF Using Transport Security (Trusted Subsystem, HTTP)
    • Chapter 11: Intranet - Web to Remote WCF Using Transport Security (Trusted Subsystem, TCP)
    • Chapter 12: Intranet - Windows Forms to Remote WCF Using Transport Security (Original Caller, TCP)
    • Chapter 13: Internet - WCF and ASMX Client to Remote WCF Using Transport Security (Trusted Subsystem, HTTP)
    • Chapter 14: Internet - Web to Remote WCF Using Transport Security (Trusted Subsystem, TCP)
    • Chapter 15: Internet – Windows Forms Client to Remote WCF Using Message Security (Original Caller, HTTP)

    Our Team

    • J.D. Meier
    • Carlos Farre
    • Jason Taylor
    • Prashant Bansode
    • Steve Gregersen
    • Madhu Sundararajan
    • Rob Boucher

    Contributors / Reviewers

    • External Contributors / Reviewers: Andy Eunson; Anil John; Anu Rajendra; Brandon Bohling; Chaitanya Bijwe; Daniel Root; David P. Romig, Sr.; Dennis Rea; Kevin Lam; Michele Leroux Bustamante; Parameswaran Vaideeswaran; Rockford Lhotka; Rudolph Araujo; Santosh Bejugam
    • Microsoft Contributors / Reviewers: Alik Levin; Brandon Blazer; Brent Schmaltz; Curt Smith; David Bradley; Dmitri Ossipov; Jan Alexander; Jason Hogg; Jason Pang; John Steer; Marc Goodner; Mark Fussell; Martin Gudgin; Martin Petersen-Frey; Mike de Libero; Mohammad Al-Sabt; Nobuyuki Akama; Ralph Squillace; Richard Lewis; Rick Saling; Rohit Sharma; Scott Mason; Sidd Shenoy; Sidney Higa; Stuart Kwan; Suwat Chitphakdibodin; T.R. Vishwanath; Todd Kutzke; Todd West; Vijay Gajjala; Vittorio Bertocci; Wenlong Dong; Yann Christensen; Yavor Georgiev

    Scrum at a Glance (Visual)


    I’ve shared a Scrum Flow at a Glance before, but it was not visual.

    I think it’s helpful to know how to whiteboard a simple view of an approach so that everybody can quickly get on the same page. 

    Here is a simple visual of Scrum:


    There are a lot of interesting tools and concepts in Scrum.  The definitive guide on the roles, events, artifacts, and rules is The Scrum Guide, by Jeff Sutherland and Ken Schwaber.

    I like to think of Scrum as an effective Agile project management framework for shipping incremental value.  It works by splitting big teams into smaller teams, big work into smaller work, and big time blocks into smaller time blocks.

    I try to keep whiteboard visuals pretty simple so that they are easy to do on the fly, and so they are easy to modify or adjust as appropriate.

    I find the visual above helpful for getting people on the same page fast, to the point where they can go deeper and ask more detailed questions about Scrum, now that they have the map in mind.

    You Might Also Like

    Agile vs. Waterfall

    Agile Life-Cycle Frame

    Don’t Push Agile, Pull It

    Scrum Flow at a Glance

    The Art of the Agile Retrospective


    How To Turn IT into an Asset Rather than a Liability


    Why do some companies survive and thrive in the face of global competition?  What do some companies do differently so they are growing and making money, having more productive employees, getting more from their investments, and having more success with their strategic initiatives?

    They digitize their core processes.

    In the book, Enterprise Architecture as Strategy, Jeanne Ross, Peter Weill, and David Robertson write about how to turn IT into an asset rather than a liability.

    Digitize Your Core Operations

    Companies that survive and thrive can execute their core operations in a reliable and efficient way.  They can do so because they’ve digitized their core operations.  In this way, they’ve turned their IT investment into an asset versus a liability, and they’ve achieved business agility.   Ross, Weill, and Robertson write:

    “We believe these companies execute better because they have a better foundation for execution.  They have embedded technology in their processes so that they can efficiently and reliably execute their core operations of the company.  These companies have made tough decisions about what they must execute well, and they've implemented the IT systems they need to digitize those operations.  These actions have made IT an asset rather than a liability and have created a foundation for business agility.”

    More Value from IT Investments AND Lower IT Costs

    Digitizing your core operations can lead to higher profitability, faster time to market, and more value from IT investments.  And, lower IT costs.   Ross, Weill, and Robertson write:

    “We surveyed 103 U.S. and European companies about their IT and IT-enabled business processes.  Thirty-four percent of those companies have digitized their core processes. Relative to their competitors, these companies have higher profitability, experience a faster time to market, and get more value from their IT investments.  They have better access to shared customer data, lower risk of mission-critical systems failures, and 80 percent higher senior management satisfaction with technology.  Yet, companies who have digitized their core processes have 25 percent lower IT costs.  These are the benefits of an effective foundation for execution.”

    Cutting Waste, but Not Adding Value

    Some companies just don’t get it.   They cut, but they don’t create value.  Meanwhile, the companies that do get it, pull further ahead.  Ross, Weill, and Robertson write:

    “In contrast, 12 percent of the companies we studied are frittering away management attention and technology investments on a myriad of (perhaps) locally sensible projects that don't support enterprise-wide objectives.  Another 48 percent of the companies are cutting waste from their IT budgets but haven't figured out how to increase value from IT.  Meanwhile, a few leading-edge companies are leveraging a foundation for execution to pull further and further ahead.”

    A Strong Foundation for Execution Accelerates Your Advantage

    With a strong foundation for execution, you can achieve business agility and profitable growth.  Ross, Weill, and Robertson write:

    “As such statistics show, companies with a good foundation for execution have an increasing advantage over those that don't.  In this book, we describe how to design, build, and leverage a foundation for execution.  Based on survey and case study research at more than 400 companies in the United States and Europe, we provide insights, tools, and a language to help managers recognize their core operations, digitize their core to more efficiently support their strategy, and exploit their foundation for execution to achieve business agility and profitable growth.”

    I’m lucky enough to be on the Enterprise Strategy team at Microsoft.   A focus in Enterprise Strategy is to help a company identify their core business and IT capabilities and to pick the best opportunities to transform their business in a scenario-based way.   Changes at the capability level have a ripple effect across people, process, and technology.   It’s effectively the business of business transformation.  The goal is to accelerate business value and help these companies survive and thrive in changing times.

    Speaking of changing times, the Cloud is really a great forcing function for business transformation.   More and more companies are looking at what they should do to make the most of what Cloud, mobile, social, and big data bring to the table.   I’ve been watching the transformations unfold and hearing the stories from the trenches.  

    I’ll be talking more about the book, Enterprise Architecture as Strategy, in the near future, as it’s one of the best books on how to create a foundation for business execution, and I get to see it in action.    Our Enterprise Architects tell me stories about how they are leading their customers on journeys to the Cloud and transforming IT for competitive advantage.

    These are truly exciting times to be at the leading edge of business transformation.


    Security Code Examples Project


    I'm working with the infamous Frank Heidt, George Gal and Jonathan Bailey to create a suite of modular, task-based security code examples.  They happen to be experts at finding mistakes in code.  Part of making good code is knowing what bad code looks like and more importantly what makes it bad, or what the trade-offs are.  I've also pulled in Prashant Bansode from my core security team to help push the envelope on making the examples consumable.  Prashant doesn't hold back when it comes to critical analysis and that's what we like about him.

    For this exercise, I'm time-boxing the effort to see what value we produce within the time-box.  We carved out a set of candidate code examples by identifying common mistakes in key buckets, including input/data validation, authentication, authorization, auditing and logging, exception management, and a few others.  We then prioritized the list and now do daily drops of code.  The outcome should be some useful examples and an approach for others to contribute examples.

    Sharing a chunk of code is easy.  We quickly learned that sharing insights with the code is not.  Exposing the thinking behind the code is the real value.  We want to make that repeatable.  I think the key is a schema with test cases.

    Here's our emerging schema and test cases ....

    Code Example Schema (Short Form)

    • Title
    • Summary
    • Applies To
    • Objectives
    • Solution Example
    • Problem Example
    • Test Case
    • Expected Results
    • More Information
    • Additional Resources

    For more information on the schema and test cases, see Code Example Schema for Sharing Code Insights.
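    To show how the schema fields hang together, here's a rough sketch of what one instance might look like.  I'm using Python as a neutral sketch language, and the sample content (a whitelist input-validation example) is hypothetical; it's not one of our actual examples.

```python
import re

# A minimal, hypothetical instance of the code example schema (short form).
# The field names come from the schema; the content is for illustration only.
code_example = {
    "Title": "Validate free-form input with a whitelist",
    "Summary": "Constrain input to known-good characters instead of "
               "trying to filter out known-bad ones.",
    "Applies To": "Any code that accepts untrusted input",
    "Objectives": ["Reject unexpected characters at the entry point"],
    "Problem Example": "name = request_value  # used without any validation",
    "Solution Example": "re.fullmatch(r'[A-Za-z0-9 .\\-]{1,64}', request_value)",
    "Test Case": "Submit input containing <script> tags",
    "Expected Results": "Input is rejected before further processing",
}

def is_valid_name(value: str) -> bool:
    """The 'Solution Example' above, runnable: whitelist validation."""
    return re.fullmatch(r"[A-Za-z0-9 .\-]{1,64}", value) is not None

print(is_valid_name("Alice Smith"))                 # True
print(is_valid_name("<script>alert(1)</script>"))   # False
```

    The point is that the problem/solution pair travels with a test case and expected results, so the thinking behind the code stays with the code.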

    Today we had a deeply insightful review with Tom Hollander, Jason Taylor, and Paul Saitta.  Jason and Paul are on site while we're solving another class of problems for customers.  They each brought a lot to the table and collectively I think we have a much better understanding of what makes a good, reusable piece of code. 

    We made an important decision to optimize around "show me the code" and then explain it, versus a lot of build up and then the code.  Our emerging schema has its limits and does not take the place of a How To, guidelines, or a larger reusable block of code, but it will definitely help as we try to share more modular code examples that demonstrate proven practices.


    Agile Security Engineering


    “It is not necessary to change. Survival is not mandatory.” —W. Edwards Deming

    I gave a talk for the developer security MVPs at the Microsoft 2010 MVP Summit last week.  While I focused primarily on Azure Security, I did briefly cover Agile Security Engineering.  Here is the figure I used to help show how we do Agile Security Engineering in patterns & practices:

    Agile Security Engineering

    What’s important about the figure is that it shows an example of how you can overlay security-specific techniques on an existing life cycle.  In this case, we simply overlay some security activities on top of an Agile software cycle.

    Rather than make security a big up front design or doing it all at the end or other security approaches that don’t work, we’re baking security into the life cycle.  The key here is integrating security into your iterations.

    Here is a summary of the key security activities and how they play in an agile development cycle:

    • Security Objectives – This is about getting clarity on your goals, objectives, and constraints so that you effectively prioritize and invest accordingly.
    • Security Spikes – In Agile, a spike is simply a quick experiment in code for the developer to explore potential solutions.  A security spike is focused on exploring potential security solutions with the goal of reducing technical risk.  During exploration, you can spike on some of the cross-cutting security concerns for your solution.
    • Security Stories – In Agile, a story is a brief description of the steps a user takes to perform a goal.  A security story is simply a security-focused scenario.  This might be an existing “user” story, but you apply a security lens, or it might be a new “system” story that focuses on a security goal, requirement, or constraint.  Identify security stories during exploration and during your iterations.
    • Security Guidelines – To help guide the security practices throughout the project, you can create a distilled set of relevant security guidelines for the developers.  You can fine tune them and make them more relevant for your particular security stories.
    • Threat Modeling – Use threat modeling to shape your software design.  A threat model is a depiction of potential threats and attacks against your solution, along with vulnerabilities.  Think of a threat as a potential negative effect and a vulnerability as a weakness that exposes your solution to the threat or attack.  You can threat model at the story level during iterations, and you can threat model at the macro level during exploration.
    • Security Design Inspections – Similar to a general architecture and design review, this is a focus on the security design.  Security questions and criteria guide the inspection.  The design inspection is focused on higher-level, cross-cutting, and macro-level concerns.
    • Security Code Inspections – Similar to a general code review, this is a focus on inspecting the code for security issues.  Security questions and criteria guide your inspection.
    • Security Deployment Inspections – Similar to a general deployment review, this is a focus on inspecting for security issues of your deployed solution.  Physical deployment is where the rubber meets the road and this is where runtime behaviors might expose security issues that you didn’t catch earlier in your design and code inspections.

    The sum of these activities is more than the parts and using a collection of proven, light-weight activities that you can weave into your life cycle helps you stack the deck in your favor.  This is in direct contrast to relying on one big silver bullet.

    Note that we originally used “reviews,” which are more exploratory, but we later optimized around “inspections.”  The distinction is that inspections use criteria (e.g., a 12-point inspection).  We share the criteria using security checklists for design, coding, and deployment inspections.
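    To make the overlay concrete, here is a sketch of how the security activities described above might be scheduled across exploration, iterations, and release.  This is illustrative Python, not a tool we ship; the phase names and groupings are my own reading of the figure.

```python
# Overlaying security activities on an Agile cycle: some activities happen
# once during exploration, some repeat every iteration, and some happen at
# release.  The activity names come from the post; the scheduling is a sketch.

EXPLORATION_ACTIVITIES = ["Security objectives", "Security spikes",
                          "Macro-level threat modeling"]
PER_ITERATION_ACTIVITIES = ["Security stories", "Story-level threat modeling",
                            "Security design inspection",
                            "Security code inspection"]
RELEASE_ACTIVITIES = ["Security deployment inspection"]

def plan(iterations: int):
    """Build a simple phase-by-phase schedule of security activities."""
    schedule = [("Exploration", EXPLORATION_ACTIVITIES)]
    for i in range(1, iterations + 1):
        schedule.append((f"Iteration {i}", PER_ITERATION_ACTIVITIES))
    schedule.append(("Release", RELEASE_ACTIVITIES))
    return schedule

for phase, activities in plan(2):
    print(phase, "->", ", ".join(activities))
```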

    There are two keys to chunking up security so that you can effectively focus on it during iterations:

    1. Security stories
    2. Security frame

    Stories are a great way of chunking up value.  Each story represents a user performing a useful goal.  As such, you can also chunk up your security work by focusing on the security concerns of a story.

    A security frame is a lens for security.  It’s simply a set of categories or “hot spots” (e.g. auditing and logging, authentication, authorization, configuration management, cryptography, exception management, sensitive data, session management.)   Each category is a container for principles, patterns, and anti-patterns.  By grouping your security practices into these buckets, you can more effectively consolidate and leverage your security know-how during each iteration.  For example, one iteration might have stories that involve authentication and authorization, while another iteration might have stories that involve input and data validation.

    Together, stories and security frames help you chunk up security and bake it into the life cycle, while learning and responding along the way.
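    Here is a sketch of the chunking idea in code.  The category names come from the security frame above; the stories and checklist items are made up purely for illustration (Python as a neutral sketch language).

```python
# A security frame maps categories ("hot spots") to reusable checks.
# Stories are tagged with the categories they touch, so each iteration
# pulls in only the security know-how it actually needs.

SECURITY_FRAME = {
    "authentication": ["Are credentials sent only over protected channels?"],
    "authorization": ["Are access checks enforced on every resource?"],
    "input and data validation": ["Is input constrained with a whitelist?"],
}

# Hypothetical stories for one iteration, tagged with frame categories.
iteration_stories = [
    {"story": "User logs in", "categories": ["authentication"]},
    {"story": "User uploads a file",
     "categories": ["input and data validation", "authorization"]},
]

def checks_for_iteration(stories, frame):
    """Collect the security checks relevant to this iteration's stories."""
    checks = []
    for story in stories:
        for category in story["categories"]:
            for item in frame.get(category, []):
                checks.append((story["story"], category, item))
    return checks

for story, category, item in checks_for_iteration(iteration_stories, SECURITY_FRAME):
    print(f"{story} [{category}]: {item}")
```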

    For more information on security engineering, see patterns & practices Security Engineering Explained.


    Cloud Security Threats and Countermeasures at a Glance


    Cloud security has been a hot topic with the introduction of Microsoft's Windows Azure platform.  One of the quickest ways to get your head around security is to cut to the chase and look at the threats, attacks, vulnerabilities, and countermeasures.  This post is a look at threats and countermeasures from a technical perspective.

    The thing to keep in mind with security is that it’s a matter of people, process, and technology.  However, focusing on a specific slice, in this case the technical slice, can help you get results.  On the technical side, you also need to think holistically in terms of the application, network, and host, as well as how you plug security into your product or development life cycle.  For information on plugging it into your life cycle, see the Security Development Lifecycle.

    While many of the same security issues that apply to running applications on-premise also apply to the cloud, the context of running in the cloud does change some key things.  For example, it might mean taking a deeper look at claims for identity management and access control.  It might mean rethinking how you think about your storage.  It can mean thinking more about how you access and manage virtualized computing resources.  It can mean thinking about how you make calls to services or how you protect calls to your own services.

    Here is a fast path through security threats, attacks, vulnerabilities, and countermeasures for the cloud:


    • Learn a security frame that applies to the cloud
    • Learn top threats/attacks, vulnerabilities and countermeasures for each area within the security frame
    • Understand differences between threats, attacks, vulnerabilities and countermeasures

    It is important to think like an attacker when designing and implementing an application. Putting yourself in the attacker’s mindset will make you more effective at designing mitigations for vulnerabilities and coding defensively.  Below is the cloud security frame. We use the cloud security frame to present threats, attacks, vulnerabilities and countermeasures to make them more actionable and meaningful.

    You can also use the cloud security frame to effectively organize principles, practices, patterns, and anti-patterns in a more useful way.

    Threats, Attacks, Vulnerabilities, and Countermeasures
    These terms are defined as follows:

    • Asset. A resource of value such as the data in a database, data on the file system, or a system resource.
    • Threat. A potential occurrence – malicious or otherwise – that can harm an asset.
    • Vulnerability. A weakness that makes a threat possible.
    • Attack. An action taken to exploit a vulnerability and realize a threat.
    • Countermeasure. A safeguard that addresses a threat and mitigates risk.
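    To make the relationships between these terms concrete, here is a small sketch in Python.  The session-hijacking example entries are hypothetical, purely to illustrate how the terms connect.

```python
from dataclasses import dataclass, field

# One threat targets one asset; vulnerabilities make it possible, attacks
# exploit those vulnerabilities, and countermeasures mitigate the risk.
@dataclass
class Threat:
    description: str                                      # potential occurrence that can harm an asset
    asset: str                                            # resource of value at risk
    vulnerabilities: list = field(default_factory=list)   # weaknesses that make the threat possible
    attacks: list = field(default_factory=list)           # actions that exploit a vulnerability
    countermeasures: list = field(default_factory=list)   # safeguards that mitigate the risk

# Hypothetical example entries, for illustration only.
cookie_theft = Threat(
    description="Session hijacking",
    asset="Authenticated user session",
    vulnerabilities=["Session cookie sent over unencrypted HTTP"],
    attacks=["Network eavesdropping followed by cookie replay"],
    countermeasures=["Require HTTPS", "Mark cookies Secure and HttpOnly"],
)

def is_mitigated(threat: Threat) -> bool:
    """A threat is unmitigated if it has vulnerabilities but no countermeasures."""
    return bool(threat.countermeasures) or not threat.vulnerabilities

print(is_mitigated(cookie_theft))  # True
```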

    Cloud Security Frame
    The following key security concepts provide a frame for thinking about security when designing applications to run on the cloud, such as Windows Azure. Understanding these concepts helps you put key security considerations such as authentication, authorization, auditing, confidentiality, integrity, and availability into action.

    • Auditing and Logging – Cloud auditing and logging refers to how security-related events are recorded, monitored, audited, exposed, compiled, and partitioned across multiple cloud instances.  Examples include: Who did what, when, and on which VM instance?
    • Authentication – Authentication is the process of proving identity, typically through credentials, such as a user name and password.  In the cloud, this also encompasses authentication against varying identity stores.
    • Authorization – Authorization is how your application provides access controls for roles, resources, and operations.  Authorization strategies might involve standard mechanisms, utilize claims, and potentially support a federated model.
    • Communication – Communication encompasses how data is transmitted over the wire.  Transport security, message encryption, and point-to-point communication are covered here.
    • Configuration Management – Configuration management refers to how your application handles configuration and administration from a security perspective.  Examples include: Who does your application run as?  Which databases does it connect to?  How is your application administered?  How are these settings secured?
    • Cryptography – Cryptography refers to how your application enforces confidentiality and integrity.  Examples include: How are you keeping secrets (confidentiality)?  How are you tamper-proofing your data or libraries (integrity)?  How are you providing seeds for random values that must be cryptographically strong?  Certificates and cert management are in this domain as well.
    • Input and Data Validation – Validation refers to how your application filters, scrubs, or rejects input before additional processing, or how it sanitizes output.  It's about constraining input through entry points and encoding output through exit points.  Message validation refers to how you verify the message payload against schema, as well as message size, content, and character sets.  Examples include: How do you know that the input your application receives is valid and safe?  Do you trust data from sources such as databases and file shares?
    • Exception Management – Exception management refers to how you handle application errors and exceptions.  Examples include: When your application fails, what does your application do?  Does it support graceful failover to other application instances in the cloud?  How much information do you reveal?  Do you return friendly error information to end users?  Do you pass valuable exception information back to the caller?
    • Sensitive Data – Sensitive data refers to how your application handles any data that must be protected either in memory, over the network, or in persistent stores.  Examples include: How does your application handle sensitive data?  How is sensitive data shared between application instances?
    • Session Management – A session refers to a series of related interactions between a user and your application.  Examples include: How does your application handle and protect user sessions?
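    As a concrete example of the exception management hot spot, here is a minimal Python sketch of error shielding: log the full details internally, and return only a friendly, non-revealing message to the caller.  The function and messages are hypothetical, for illustration only.

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request(raw_amount: str) -> str:
    """Process a request, shielding internal error details from the caller."""
    try:
        amount = int(raw_amount)
        return f"Charged {amount} credits"
    except Exception:
        # Full details (including the traceback) go to the internal log.
        log.exception("Failed to process request")
        # The caller sees only a friendly, non-revealing message.
        return "Sorry, something went wrong. Please try again."

print(handle_request("42"))     # Charged 42 credits
print(handle_request("oops"))   # Sorry, something went wrong. Please try again.
```

    The same idea applies regardless of platform: decide up front how much information each failure reveals, and to whom.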


    Threats and Attacks


    Category Items
    Auditing and Logging
    • Repudiation. An attacker denies performing an operation, exploits an application without trace, or covers his or her tracks.
    • Denial of service (DoS). An attacker overwhelms logs with excessive entries or very large log entries.
    • Disclosure of confidential information. An attacker gathers sensitive information from log files.
    Authentication
    • Network eavesdropping. An attacker steals identity and/or credentials off the network by reading network traffic not intended for them.
    • Brute force attacks. An attacker guesses identity and/or credentials through the use of brute force.
    • Dictionary attacks. An attacker guesses identity and/or credentials through the use of common terms in a dictionary designed for that purpose.
    • Cookie replay attacks. An attacker gains access to an authenticated session through the reuse of a stolen cookie containing session information.
    • Credential theft. An attacker gains access to credentials through data theft; for instance, phishing or social engineering.
    Authorization
    • Elevation of privilege. An attacker enters a system as a lower-level user, but is able to obtain higher-level access.
    • Disclosure of confidential data. An attacker accesses confidential information because of authorization failure on a resource or operation.
    • Data tampering. An attacker modifies sensitive data because of authorization failure on a resource or operation.
    • Luring attacks. An attacker lures a higher-privileged user into taking an action on their behalf. This is not an authorization failure but rather a failure of the system to properly inform the user.
    • Token stealing. An attacker steals the credentials or token of another user in order to gain authorization to resources or operations they would not otherwise be able to access.
    Communication
    • Failure to encrypt messages. An attacker is able to read message content off the network because it is not encrypted.
    • Theft of encryption keys. An attacker is able to decrypt sensitive data because he or she has the keys.
    • Man-in-the-middle attack. An attacker can read and then modify messages between the client and the service.
    • Session replay. An attacker steals messages off the network and replays them in order to steal a user's session.
    • Data tampering. An attacker modifies the data in a message in order to attack the client or the service.
    Configuration Management
    • Unauthorized access to configuration stores. An attacker gains access to configuration files and is able to modify binding settings, etc.
    • Retrieval of clear text configuration secrets. An attacker gains access to configuration files and is able to retrieve sensitive information such as database connection strings.
    Cryptography
    • Encryption cracking. Breaking an encryption algorithm and gaining access to the encrypted data.
    • Loss of decryption keys. Obtaining decryption keys and using them to access protected data.
    Exception Management
    • Information disclosure. Sensitive system or application details are revealed through exception information.
    • Denial of service. An attacker uses error conditions to stop your service or place it in an unrecoverable error state.
    • Elevation of privilege. Your service encounters an error and fails to an insecure state; for instance, failing to revert impersonation.
    Input and Data Validation
    • Canonicalization attacks. Canonicalization attacks can occur anytime validation is performed on a different form of the input than that which is used for later processing. For instance, a validation check may be performed on an encoded string, which is later decoded and used as a file path or URL.
    • Cross-site scripting. Cross-site scripting can occur if you fail to encode user input before echoing back to a client that will render it as HTML.
    • SQL injection. Failure to validate input can result in SQL injection if the input is used to construct a SQL statement, or if it will modify the construction of a SQL statement in some way.
    • Cross-site request forgery (CSRF). CSRF attacks involve forged transactions submitted to a site on behalf of another party.
    • XPath injection. XPath injection can result if the input sent to the Web service is used to influence or construct an XPath statement. The input can also introduce unintended results if the XPath statement is used by the Web service as part of some larger operation, such as applying an XQuery or an XSLT transformation to an XML document.
    • XML bomb. XML bomb attacks occur when specific, small XML messages are parsed by a service resulting in data that feeds on itself and grows exponentially. An attacker sends an XML bomb with the intent of overwhelming a Web service’s XML parser and resulting in a denial of service attack.
    Sensitive Data
    • Memory dumping. An attacker is able to read sensitive data out of memory or from local files.
    • Network eavesdropping. An attacker sniffs unencrypted sensitive data off the network.
    • Configuration file sniffing. An attacker steals sensitive information, such as connection strings, out of configuration files.
    Session Management
    • Session hijacking. An attacker steals the session ID of another user in order to gain access to resources or operations they would not otherwise be able to access.
    • Session replay. An attacker steals messages off the network and replays them in order to steal a user’s session.
    • Man-in-the-middle attack. An attacker can read and then modify messages between the client and the service.
    • Inability to log out successfully. An application leaves a communication channel open rather than completely closing the connection and destroying any server objects in memory relating to the session.
    • Cross-site request forgery. Cross-site request forgery (CSRF) occurs when an attacker tricks a user into performing an action on a site where the user has a legitimate, authenticated account.
    • Session fixation. An attacker uses CSRF to set another person’s session identifier and thus hijack the session after the attacker tricks a user into initiating it.
    • Load balancing and session affinity. When sessions are transferred from one server to balance traffic among the various servers, an attacker can hijack the session during the handoff.
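The session fixation threat above is typically defeated by regenerating the session identifier at the moment of authentication, so a pre-login ID planted by an attacker never becomes an authenticated session. A minimal sketch in Python; the in-memory dict stands in for whatever server-side session store an application actually uses:

```python
import secrets

sessions = {}  # session_id -> user; illustrative stand-in for a session store

def start_anonymous_session() -> str:
    """Issue a session ID for an unauthenticated visitor."""
    sid = secrets.token_urlsafe(32)
    sessions[sid] = None
    return sid

def login(old_sid: str, user: str) -> str:
    """Discard the pre-login ID and issue a fresh one at authentication,
    so a fixated (attacker-chosen) ID never carries authenticated state."""
    sessions.pop(old_sid, None)
    new_sid = secrets.token_urlsafe(32)
    sessions[new_sid] = user
    return new_sid

fixated = start_anonymous_session()  # an ID an attacker could have planted
authed = login(fixated, "alice")
assert fixated not in sessions       # the planted ID is now worthless
assert sessions[authed] == "alice"
```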


    Vulnerabilities


    Category Items
    Auditing and Logging
    • Failing to audit failed logons.
    • Failing to secure log files.
    • Storing sensitive information in log files.
    • Failing to audit across application tiers.
    • Failure to throttle log files.
    Authentication
    • Using weak passwords.
    • Storing clear text credentials in configuration files.
    • Passing clear text credentials over the network.
    • Permitting prolonged session lifetime.
    • Mixing personalization with authentication.
    • Using weak authentication mechanisms (e.g., using basic authentication over an untrusted network).
    Authorization
    • Relying on a single gatekeeper (e.g., relying on client-side validation only).
    • Failing to lock down system resources against application identities.
    • Failing to limit database access to specified stored procedures.
    • Using inadequate separation of privileges.
    • Connection pooling.
    • Permitting overprivileged accounts.
    Configuration Management
    • Using insecure custom administration interfaces.
    • Failing to secure configuration files on the server.
    • Storing sensitive information in clear text.
    • Having too many administrators.
    • Using overprivileged process accounts and service accounts.
    Communication
    • Not encrypting messages.
    • Using custom cryptography.
    • Distributing keys insecurely.
    • Managing or storing keys insecurely.
    • Failure to use a mechanism to detect message replays.
    • Not using either message or transport security.
    Cryptography
    • Using custom cryptography.
    • Failing to secure encryption keys.
    • Using the wrong algorithm or a key size that is too small.
    • Using the same key for a prolonged period of time.
    • Distributing keys in an insecure manner.
    Exception Management
    • Failure to use structured exception handling (try/catch).
    • Revealing too much information to the client.
    • Failure to specify fault contracts with the client.
    • Failure to use a global exception handler.
    Input and Data Validation
    • Using non-validated input to generate SQL queries.
    • Relying only on client-side validation.
    • Using input file names, URLs, or usernames for security decisions.
    • Using application-only filters for malicious input.
    • Looking for known bad patterns of input.
    • Trusting data read from databases, file shares, and other network resources.
    • Failing to validate input from all sources including cookies, headers, parameters, databases, and network resources.
    Sensitive Data
    • Storing secrets when you do not need to.
    • Storing secrets in code.
    • Storing secrets in clear text in files, registry, or configuration.
    • Passing sensitive data in clear text over networks.
    Session Management
    • Passing session IDs over unencrypted channels.
    • Permitting prolonged session lifetime.
    • Having insecure session state stores.
    • Placing session identifiers in query strings.
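The first validation vulnerability above (non-validated input used to generate SQL queries) comes down to building statements by string concatenation. A minimal sketch in Python using the standard sqlite3 module (the table and data are illustrative; the same parameterized pattern applies to any database API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the payload rewrite the query itself.
injected = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: a parameterized query treats the payload as a literal value.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(injected)       # every row comes back: [('alice',), ('bob',)]
print(parameterized)  # no row matches the literal string: []
```

The parameterized form also pairs naturally with the "limit database access to specified stored procedures" item above: in both cases the input can never change the shape of the statement.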




    Countermeasures


    Category Items
    Auditing and Logging
    • Identify malicious behavior.
    • Know your baseline (know what good traffic looks like).
    • Use application instrumentation to expose behavior that can be monitored.
    • Throttle logging.
    • Strip sensitive data before logging.
    Authentication
    • Use strong password policies.
    • Do not store credentials in an insecure manner.
    • Use authentication mechanisms that do not require clear text credentials to be passed over the network.
    • Encrypt communication channels to secure authentication tokens.
    • Use Secure HTTP (HTTPS) only with forms authentication cookies.
    • Separate anonymous from authenticated pages.
    • Use cryptographically strong random number generators to generate session IDs.
    Authorization
    • Use least-privileged accounts.
    • Tie authentication to authorization on the same tier.
    • Consider granularity of access.
    • Enforce separation of privileges.
    • Use multiple gatekeepers.
    • Secure system resources against system identities.
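The authentication item above about generating session IDs with cryptographically strong random number generators can be sketched with Python's standard secrets module (the 32-byte length is an illustrative choice):

```python
import secrets

def new_session_id() -> str:
    # 32 random bytes from the OS CSPRNG, URL-safe encoded.
    # Unlike random.random(), this output cannot be predicted
    # from previously observed session IDs.
    return secrets.token_urlsafe(32)

sid = new_session_id()
assert len(sid) >= 32           # 32 bytes encode to ~43 URL-safe characters
assert sid != new_session_id()  # collisions are vanishingly unlikely
```

The same module is the natural choice wherever the frame calls for "seeds for random values that must be cryptographically strong."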
    Configuration Management
    • Secure custom administration interfaces (require authentication and a secure channel).
    • Secure configuration files on the server.
    • Encrypt sensitive information in configuration stores.
    • Limit the number of administrators.
    • Use least-privileged process accounts and service accounts.
    Communication
    • Use message security or transport security to encrypt your messages.
    • Use proven platform-provided cryptography.
    • Periodically change your keys.
    • Use any platform-provided replay detection features.
    • Consider creating custom code if the platform does not provide a detection mechanism.
    • Turn on message or transport security.
    Cryptography
    • Do not develop and use proprietary algorithms (XOR is not encryption; use established cryptography such as RSA).
    • Avoid key management.
    • Use the RNGCryptoServiceProvider class to generate random numbers.
    • Periodically change your keys.
    Exception Management
    • Use structured exception handling (by using try/catch blocks).
    • Catch and wrap exceptions only if the operation adds value/information.
    • Do not reveal sensitive system or application information.
    • Implement a global exception handler.
    • Do not log private data such as passwords.
    Sensitive Data
    • Do not store secrets in software.
    • Encrypt sensitive data over the network.
    • Secure the channel.
    • Encrypt sensitive data in configuration files.
    Session Management
    • Partition the site by anonymous, identified, and authenticated users.
    • Reduce session timeouts.
    • Avoid storing sensitive data in session stores.
    • Secure the channel to the session store.
    • Authenticate and authorize access to the session store.
    Input and Data Validation
    • Do not trust client input.
    • Validate input: length, range, format, and type.
    • Validate XML streams.
    • Constrain, reject, and sanitize input.
    • Encode output.
    • Restrict the size, length, and depth of parsed XML messages.
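The validation and encoding items above (constrain input; encode output) can be sketched with Python's standard library. The username pattern is an illustrative constraint, not a universal rule; the point is the allow-list-then-encode shape:

```python
import html
import re

# Constrain input: length, range, format, and type in one allow-list pattern.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value: str) -> bool:
    """Accept only values matching the known-good pattern."""
    return bool(USERNAME_RE.fullmatch(value))

def render_comment(comment: str) -> str:
    """Encode output so browser-bound text cannot execute as HTML."""
    return "<p>" + html.escape(comment) + "</p>"

assert validate_username("alice_01")
assert not validate_username("alice'; DROP TABLE users;--")
assert render_comment("<script>alert(1)</script>") == \
    "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
```

Validation constrains what enters through the entry points; encoding neutralizes whatever must leave through the exit points, so neither check has to be perfect on its own.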

    Threats and Attacks Explained

    1.  Brute force attacks. Attacks that use the raw computer processing power to try different permutations of any variable that could expose a security hole. For example, if an attacker knew that access required an 8-character username and a 10-character password, the attacker could iterate through every possible (256 multiplied by itself 18 times) combination in order to attempt to gain access to a system. No intelligence is used to filter or shape for likely combinations.
    2. Buffer overflows. The maximum size of a given variable (string or otherwise) is exceeded, forcing unintended program processing. In this case, the attacker uses this behavior to cause insertion and execution of code in such a way that the attacker gains control of the program in which the buffer overflow occurs. Depending on the program’s privileges, the seriousness of the security breach will vary.
    3. Canonicalization attacks. There are multiple ways to access the same object and an attacker uses a method to bypass any security measures instituted on the primary intended methods of access. Often, the unintended methods of access can be less secure deprecated methods kept for backward compatibility.
    4. Cookie manipulation. Through various methods, an attacker will alter the cookies stored in the browser. Attackers will then use the cookie to fraudulently authenticate themselves to a service or Web site.
    5. Cookie replay attacks. Reusing a previously valid cookie to deceive the server into believing that a previously authenticated session is still in progress and valid.
    6. Credential theft. Stealing the verification part of an authentication pair (identity + credentials = authentication). Passwords are a common credential.
    7. Cross-Site Request Forgery (CSRF). Interacting with a web site on behalf of another user to perform malicious actions. A site that assumes all requests it receives are intentional is vulnerable to a forged request.
    8. Cross-site scripting (XSS). An attacker is able to inject executable code (script) into a stream of data that will be rendered in a browser. The code will be executed in the context of the user’s current session and will gain privileges to the site and information that it would not otherwise have.
    9. Connection pooling. The practice of creating and then reusing a connection resource as a performance optimization. In a security context, this can result in a connection previously used by a highly privileged user being reused for a lower-privileged user or purpose. This can expose a vulnerability if the connection is not reauthorized when used by a new identity.
    10. Data tampering. An attacker violates the integrity of data by modifying it in local memory, in a data-store, or on the network. Modification of this data could provide the attacker with access to a service through a number of the different methods listed in this document.
    11. Denial of service. Denial of service (DoS) is the process of making a system or application unavailable. For example, a DoS attack might be accomplished by bombarding a server with requests to consume all available system resources, or by passing the server malformed input data that can crash an application process.
    12. Dictionary attack. Use of a list of likely access methods (usernames, passwords, coding methods) to try and gain access to a system. This approach is more focused and intelligent than the “brute force” attack method, so as to increase the likelihood of success in a shorter amount of time.
    13. Disclosure of sensitive/confidential data. Sensitive data is exposed in some unintended way to users who do not have the proper privileges to see it. This can often be done through parameterized error messages, where an attacker will force an error and the program will pass sensitive information up through the layers of the program without filtering it. This can be personally identifiable information (i.e., personal data) or system data.
    14. Elevation of privilege. A user with limited privileges assumes the identity of a privileged user to gain privileged access to an application. For example, an attacker with limited privileges might elevate his or her privilege level to compromise and take control of a highly privileged and trusted process or account. More information about this attack in the context of Windows Azure can be found in the Security Best Practices for Developing Windows Azure Applications at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=0ff0c25f-dc54-4f56-aae7-481e67504df6
    15. Encryption. The process of taking sensitive data and changing it in such a way that it is unrecognizable to anyone but those who know how to decode it. Different encryption methods have different strengths based on how easy it is for an attacker to obtain the original information through whatever methods are available.
    16. Information disclosure. Unwanted exposure of private data. For example, a user views the contents of a table or file that he or she is not authorized to open, or monitors data passed in plaintext over a network. Some examples of information disclosure vulnerabilities include the use of hidden form fields, comments embedded in Web pages that contain database connection strings and connection details, and weak exception handling that can lead to internal system-level details being revealed to the client. Any of this information can be very useful to the attacker.
    17. Luring attacks. An attacker lures a higher-privileged user into taking an action on his or her behalf. This is not an authorization failure but rather a failure of the system to properly inform the user.
    18. Man-in-the-middle attacks. A person intercepts both the client and server communications and then acts as an intermediary between the two without each ever knowing. This gives the “middle man” the ability to read and potentially modify messages from either party in order to implement another type of attack listed here.
    19. Network eavesdropping. Listening to network packets and reassembling the messages being sent back and forth between one or more parties on the network. While not an attack itself, network eavesdropping can easily intercept information for use in specific attacks listed in this document.
    20. Open redirects. An attacker provides a URL to a malicious site when allowed to input a URL used in a redirect. This allows the attacker to direct users to sites that perform phishing attacks or other malicious actions.
    21. Password cracking. If the attacker cannot establish an anonymous connection with the server, he or she will try to establish an authenticated connection. For this, the attacker must know a valid username and password combination. If you use default account names, you are giving the attacker a head start. Then the attacker only has to crack the account’s password. The use of blank or weak passwords makes the attacker’s job even easier.
    22. Repudiation. The ability of users (legitimate or otherwise) to deny that they performed specific actions or transactions. Without adequate auditing, repudiation attacks are difficult to prove.
    23. Session hijacking. Also known as man-in-the-middle attacks, session hijacking deceives a server or a client into accepting the upstream host as the actual legitimate host. Instead, the upstream host is an attacker’s host that is manipulating the network so the attacker’s host appears to be the desired destination.
    24. Session replay. An attacker steals messages off of the network and replays them in order to steal a user’s session.
    25. Session fixation. An attacker sets (fixates) another person’s session identifier artificially. The attacker must know that a particular Web service accepts any session ID that is set externally; for example, the attacker sets up a URL such as http://unsecurewebservice.com/?sessionID=1234567. The attacker then sends this URL to a valid user, who clicks on it. At this point, a valid session with the ID 1234567 is created on the server. Because the attacker determines this ID, he or she can now hijack the session, which has been authenticated using the valid user’s credentials.
    26. Spoofing. An attempt to gain access to a system by using a false identity. This can be accomplished by using stolen user credentials or a false IP address. After the attacker successfully gains access as a legitimate user or host, elevation of privileges or abuse using authorization can begin.
    27. SQL injection. Failure to validate input in cases where the input is used to construct a SQL statement or will modify the construction of a SQL statement in some way. If the attacker can influence the creation of a SQL statement, he or she can gain access to the database with privileges otherwise unavailable and use this in order to steal or modify information or destroy data.
    28. Throttling. The process of limiting resource usage to keep a particular process from bogging down and/or crashing a system. Relevant as a countermeasure in DoS attacks, where an attacker attempts to crash the system by overloading it with input.
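The canonicalization attack described above (item 3) reduces to comparing paths in different forms. The standard defense is canonicalize-then-check: resolve the input to one canonical form before making any security decision on it. A minimal sketch in Python, with an illustrative base directory:

```python
import os

BASE_DIR = "/var/app/uploads"  # illustrative document root

def is_safe_path(user_supplied: str) -> bool:
    """Canonicalize first, then decide: the resolved path must stay under BASE_DIR."""
    full = os.path.normpath(os.path.join(BASE_DIR, user_supplied))
    return os.path.commonpath([full, BASE_DIR]) == BASE_DIR

assert is_safe_path("report.pdf")
assert is_safe_path("2024/summary.txt")
assert not is_safe_path("../../etc/passwd")  # traversal resolves outside BASE_DIR
```

Checking the raw string for ".." before normalization would be the mistake the item warns about: validation performed on a different form of the input than the one later used.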

    Countermeasures Explained

    1. Assume all input is malicious. Assuming all input is malicious means designing your application to validate all input. User input should never be accepted without being filtered and/or sanitized.
    2. Audit and log activity through all of the application tiers. Log business critical and security sensitive events. This will help you track security issues down and make sense of security problems. Skilled attackers attempt to cover their tracks, so you’ll want to protect your logs.
    3. Avoid storing secrets. Design around storing secrets. In some cases, storing a one-way hash of the secret instead of the secret itself avoids the need to store it at all.
    4. Avoid storing sensitive data in the Web space. Anything exposed to the public Internet is considered “web space.” Sensitive data stored in a location that might be compromised by any member of the public places it at much higher risk.
    5. Back up and regularly analyze log files. Some attacks occur over time; regular log analysis allows you to recognize them with sufficient time to respond. Performing regular backups lowers the risk of an attacker covering his tracks by deleting logs of his activities.
    6. Be able to disable accounts. The ability to reactively defend an attack by shutting out a user should be supported through the ability to disable an account.
    7. Be careful with canonicalization issues. Predictable naming of file resources is convenient for programming, but it is also very convenient for malicious parties to attack. Application logic should not be exposed to users in this manner. Instead, use file names derived from the original names or fed through a one-way hashing algorithm.
    8. Catch exceptions. Unhandled exceptions are at risk of passing too much information to the client. Handle exceptions when possible.
    9. Centralize your input and data validation. Input and data validation should be performed using a common set of code such as a validation library.
    10. Consider a centralized exception management framework. Exception handling frameworks are available publically and provide an established and tested means for handling exceptions.
    11. Consider authorization granularity. Every object needs to have an authorization control that authorizes access based on the identity of the authenticated party requesting access. Fine grained authorization will control access to each resource, while coarse grained authorization will control access to groups of resources or functional areas of the application.
    12. Consider identity flow. Auditing should be traceable back to the authenticated party. Take note of identity transitions imposed by design decisions like impersonation.
    13. Constrain input. Limit user input to expected ranges and formats.
    14. Constrain, reject, and sanitize your input. Constrain, reject and sanitize should be primary techniques in handling input data.
    15. Cycle your keys periodically. Expiring encryption keys lowers the risk of stolen keys.
    16. Disable anonymous access and authenticate every principal. When possible, require all interactions to occur as an authenticated party as opposed to an anonymous one. This will help facilitate more effective auditing.
    17. Do not develop your own cryptography. Custom cryptography is not difficult for experts to crack. Established cryptography is preferred because it is known to be safe.
    18. Do not leak information to the client. Exception data can potentially contain sensitive data or information exposing program logic. Provide clients only with the error data they need for the UI.
    19. Do not log private data such as passwords. Log files are an attack vector for malicious parties. Limit the risk of their being compromised by not logging sensitive data in the log.
    20. Do not pass sensitive data using the HTTP-GET protocol. Data passed using HTTP GET is appended to the querystring. When users share links by copying and pasting them from the browser address bar, sensitive data may also be inadvertently passed. Pass sensitive data in the body of a POST to avoid this.
    21. Do not rely on client-side validation. Any code delivered to a client is at risk of being compromised. Because of this, it should always be assumed that input validation on the client might have been bypassed.
    22. Do not send passwords over the wire in plaintext. Authentication information communicated over the wire should always be encrypted. This may mean encrypting the values, or encrypting the entire channel with SSL.
    23. Do not store credentials in plaintext. Credentials are sometimes stored in application configuration files, repositories, or sent over email. Always encrypt credentials before storing them.
    24. Do not store database connections, passwords, or keys in plaintext. Configuration secrets should always be stored in encrypted form, external to the code.
    25. Do not store passwords in user stores. In the event that the user store is compromised, an attacker should never be able to recover passwords. Store a derivative of the password instead. A common approach is to store a one-way hash of the password combined with a salt. Upon authentication, the hash can be recomputed with the stored salt and the result compared to the stored value.
    26. Do not store secrets in code. Secrets such as configuration settings are convenient to store in code, but are more likely to be stolen. Instead, store them in a secure location such as a secret store.
    27. Do not store sensitive data in persistent cookies. Persistent cookies are stored client-side and provide attackers with ample opportunity to steal sensitive data, be it through encryption cracking or any other means.
    28. Do not trust fields that the client can manipulate (query strings, form fields, cookies, or HTTP headers). All information sent from a client should always be assumed to be malicious. All information from a client should always be validated and sanitized before it is used.
    29. Do not trust HTTP header information. HTTP header manipulation is a threat that can be mitigated by building application logic that assumes HTTP headers are compromised and validates them before use.
    30. Encrypt communication channels to protect authentication tokens. Authentication tokens are often the target of eavesdropping, theft or replay type attacks. To reduce the risk in these types of attacks, it is useful to encrypt the channel the tokens are communicated over. Typically this means protecting a login page with SSL encryption.
    31. Encrypt sensitive cookie state. Sensitive data contained within cookies should always be encrypted.
    32. Encrypt the contents of the authentication cookies. In the case where cookies are compromised, they should not contain clear-text session data. Encrypt sensitive data within the session cookie.
    33. Encrypt the data or secure the communication channel. Sensitive data should only be passed in encrypted form. This can be accomplished by encrypting the individual items that are sent over the wire, or encrypting the entire channel as with SSL.
    34. Enforce separation of privileges. Avoid building generic roles with privileges to perform a wide range of actions. Roles should be designed for specific tasks and provided the minimum privileges required for those tasks.
    35. Enforce unique transactions. Identify each transaction from a client uniquely to help prevent replay and forgery attacks.
    36. Identify malicious behavior. By monitoring site interactions that fall outside of normal usage patterns, you can quickly identify malicious behavior. This is closely related to “Know what good traffic looks like.”
    37. Keep unencrypted data close to the algorithm. Use decrypted data as soon as it is decrypted, and then dispose of it promptly. Unencrypted data should not be held in memory in code.
    38. Know what good traffic looks like. Active auditing and logging of a site will allow you to recognize what regular traffic and usage patterns look like. This is a required step in order to be able to identify malicious behavior.
    39. Limit session lifetime. Longer session lifetimes provide greater opportunity for Cross-Site Scripting or Cross-Site Request Forgery attacks to add activity onto an old session.
    40. Log detailed error messages. Highly detailed error message logging can provide clues to attempted attacks.
    41. Log key events. Profile your application and note key or sensitive operations and/or events, and log these events during application operation.
    42. Maintain separate administration privileges. Consider granularity of authorization in the administrative interfaces as well. Avoid combining administrator roles with distinctly different roles such as development, test or deployment.
    43. Make sure that users do not bypass your checks. Checks can be bypassed through canonicalization attacks or by circumventing client-side validation. Application design should avoid exposing application logic or segregating it into a flow that can be interrupted; for example, an ASPX page that performs only validations and then redirects. Instead, bind validation routines tightly to the data they validate.
    44. Pass Forms authentication cookies only over HTTPS connections. Cookies are at risk of theft and replay type attacks. Encrypting them with SSL helps reduce the risk of these types of attacks.
    45. Protect authentication cookies. Cookies can be manipulated through cross-site scripting attacks. Encrypt sensitive data in cookies, and use browser features such as the HttpOnly cookie attribute.
    46. Provide strong access controls on sensitive data stores. Access to secret stores should be authorized. Protect the secret store as you would other secure resources by requiring authentication and authorization as appropriate.
    47. Reject known bad input. Rejecting known bad input involves screening input for values that are known to be problematic or malicious. NOTE: Rejecting should never be the primary means of screening bad input; it should always be used in conjunction with input sanitization.
    48. Require strong passwords. Enforce password complexity requirements by requiring long passwords with a combination of uppercase, lowercase, numeric, and special characters (for example, punctuation). This helps mitigate the threat posed by dictionary attacks. If possible, also enforce automatic password expiry.
    49. Restrict user access to system-level resources. Users should not access system resources directly; access should go through an intermediary such as the application, and system resources should be restricted to application access.
    50. Retrieve sensitive data on demand. Sensitive data held in application memory gives attackers another location where they can attempt to access it, and it is often held in unencrypted form. To minimize the risk of sensitive data theft, use sensitive data immediately and then clear it from memory.
    51. Sanitize input. Sanitizing input is the opposite of rejecting bad input. Sanitizing input is the process of filtering input data to only accept values that are known to be safe. Alternatively, input can be rendered innocuous by converting it to safe output through output encoding methods.
    52. Secure access to log files. Log files should only be accessible to administrators, auditors, or administrative interfaces. An attacker with access to the logs might be able to glean sensitive data or program logic from logs.
    53. Secure the communication channel for remote administration. Eavesdropping and replay attacks can target administration interfaces as well. If using a web based administration interface, use SSL.
    54. Secure your configuration store. The configuration store should require authenticated access and should store sensitive settings or information in an encrypted format.
    55. Secure your encryption keys. Encryption keys should be treated as secrets or sensitive data. They should be secured in a secret store or key repository.
    56. Separate public and restricted areas. Applications that contain public front-ends as well as content that requires authentication to access should be partitioned in the same manner. Public facing pages should be hosted in a separate file structure, directory or domain from private content.
    57. Store keys in a restricted location. Protect keys with authorization policies.
    58. Support password expiration periods. User passwords and account credentials are commonly compromised. Expiration policies help mitigate attacks from stolen accounts, or disgruntled employees who have been terminated.
    59. Use account lockout policies for end-user accounts. Account login attempts should have a cap on failed attempts. After the cap is exceeded the account should prevent further login attempts. Lockout helps prevent dictionary and brute force attacks.
    60. Use application instrumentation to expose behavior that can be monitored: Application transactions that are more likely to be targeted by malicious interactions should be logged or monitored. Examples of this might be adding logging code to an exception handler, or logging individual API calls. By providing a means to watch these transactions you have a higher likelihood of being able to identify malicious behavior quickly.
    61. Use authentication mechanisms that do not require clear text credentials to be passed over the network: A variety of authentication approaches exist for web-based applications; some use tokens, while others pass user credentials (user name/ID and password) over the wire. When possible, it is safer to use an authentication mechanism that does not pass the credentials. If credentials must be passed, it is preferable to encrypt them and/or send them over an encrypted channel such as SSL.
    62. Use least privileged accounts. The privileges granted to the authenticated party should be the minimum required to perform all required tasks. Be careful of using existing roles that have permissions beyond what is required.
    63. Use least privileged process and service accounts. Allocate accounts specifically for process and service accounts. Lock down the privileges of these accounts separately from other accounts.
    64. Use multiple gatekeepers. Passing the authentication system should not provide a golden ticket to any/all functionality. System and/or application resources should have restricted levels of access depending on the authenticated party. Some design patterns might also enforce multiple authentications, sometimes distributed through application tiers.
    65. Use SSL to protect session authentication cookies. Session authentication cookies contain data that can be used in a number of different attacks such as replay, Cross-Site Scripting or Cross-Site Request Forgery. Protecting these cookies helps mitigate these risks.
    66. Use strong authentication and authorization on administration interfaces. Always require authenticated access to administrative interfaces. When applicable, also enforce separation of privileges within the administrative interfaces.
    67. Use structured exception handling. A structured approach to exception handling lowers the risk of unexpected exceptions from going unhandled.
    68. Use the correct algorithm and correct key length. Different encryption algorithms are preferred for varying data types and scenarios.
    69. Use tried and tested platform features. Many cryptographic features are available through the .NET Framework. These are proven features and should be used in favor of custom methods.
    70. Validate all values sent from the client. Similar to not relying on client-side validation, any input from a client should always be assumed to have been tampered with. This input should always be validated before it is used. This encompasses user input, cookie values, HTTP headers, and anything else sent over the wire from the client.
    71. Validate data for type, length, format, and range. Data validation should encompass these primary tenets. Validate for data type, string lengths, string or numeric formats, and numeric ranges.
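    Many of the guidelines above (for example, 47, 51, and 71) boil down to allow-list validation: accept only values that match a known-safe type, length, format, and range. The guidance above targets the .NET platform, but as a rough, hedged sketch (the function names here, such as validate_part_number, are made up for illustration), the idea looks like this in Python:

```python
import re

# Allow-list pattern: only letters, digits, and hyphens, 1-20 characters.
# Anything outside this known-safe set is rejected (guidelines 47 and 51),
# and the pattern enforces type, length, and format in one check (guideline 71).
PART_NUMBER_PATTERN = re.compile(r"^[A-Za-z0-9-]{1,20}$")

def validate_part_number(value) -> bool:
    """Validate type, length, and format with a single allow-list check."""
    return isinstance(value, str) and bool(PART_NUMBER_PATTERN.fullmatch(value))

def validate_quantity(value) -> bool:
    """Validate type and numeric range: reject anything outside 1..999."""
    try:
        quantity = int(value)
    except (TypeError, ValueError):
        return False
    return 1 <= quantity <= 999
```

    A value like "AB-123" passes the type, length, and format checks, while something like "<script>" is rejected outright rather than cleaned up, which is the safer default.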
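    Guideline 59 (account lockout) reduces to a failed-attempt counter with a cap. The sketch below is illustrative rather than a production design; the class name FailedLoginTracker and the default cap of 5 attempts are assumptions, and a real implementation would also need persistent storage and a lockout duration:

```python
class FailedLoginTracker:
    """Track failed login attempts and lock accounts past a cap (guideline 59)."""

    def __init__(self, max_attempts: int = 5):
        self.max_attempts = max_attempts
        self.failures = {}  # username -> failed attempt count

    def record_failure(self, username: str) -> None:
        """Count a failed login attempt against the account."""
        self.failures[username] = self.failures.get(username, 0) + 1

    def record_success(self, username: str) -> None:
        """Reset the counter on a successful login."""
        self.failures.pop(username, None)

    def is_locked(self, username: str) -> bool:
        """Report whether the account has exceeded the failed-attempt cap."""
        return self.failures.get(username, 0) >= self.max_attempts
```

    Pairing this with guideline 41 (log key events) by logging each lockout also gives you the monitoring hook that guideline 60 calls for.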

    SDL Considerations
    For more information on preferred encryption algorithms and key lengths, see the Security Development Lifecycle at http://www.microsoft.com/security/sdl/ .
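    In the same spirit as guidelines 68 and 69 (use the correct algorithm, and prefer tried-and-tested platform features over custom cryptography), here is a hedged sketch of password storage built on the Python standard library's PBKDF2 primitive rather than a home-grown scheme; the iteration count shown is an assumed value and should be set according to current SDL guidance:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # assumed value; tune according to current guidance

def hash_password(password: str):
    """Hash a password with a fresh random salt using a proven platform primitive."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

    The design choice is the point: salting, key stretching, and constant-time comparison all come from vetted library code, so nothing cryptographic is invented in the application.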

  • J.D. Meier's Blog

    If You’re Afraid of Your To-Do List, It’s Not Working


    If you’re afraid to look at your To-Do list, it’s not working.  Your To-Do list should inspire you.

    One of the things that happens a lot with To-Do lists is they can get overwhelming.  It’s easy to pile on more things.  Eventually, you’re afraid to even look at your To-Do list.   What once started out as a great list of things to make happen has now become a laundry list that hurts more than it helps.

    Worse, it’s easy to spawn a lot of lists that are full of once great intentions, so the problem spreads.

    There are multiple ways to hack the problem down to size, but here are the three I use the most:

    1. New Lists.  I create a new list each day and each week.   This gives me a fresh start.  This way I can keep a master list (a “list of lists”) or all-up project lists, but then carve out a specific list of outcomes and actions for a given segment of time, whether it’s a day, a week, a month, etc.
    2. Prioritizing.   A quick way to make the list more useful is to make sure that priorities float to the top.  By floating your priorities to the top, you can squeeze out the lower priorities, and let them slough off.  I find it’s easier to figure out the few great things to do, than it is to try and figure out all the things not worth doing.  So I use my short-list of priorities (“the vital few” in Covey speak), to help crowd out the lesser things.
    3. Three Wins at the top.   This is by far the most useful method to reshape a To-Do list into something more meaningful, more rewarding, and less intimidating.   Simply add your Three Wins to the top of your To-Do list.

    Here is a simple visual that shows adding Three Wins to the top of your To-Do list:


    Identify the 3 most important results you want to accomplish today and bubble them to the top of your To Do list.  Prioritize your day against those 3 results you want to achieve, whether it’s incoming requests or you’re making your way through your backlog of things to do on your To-Do list.

    You can use this approach to chop any To-Do list down to size and make it more consumable.

    This tip on building better To-Do lists is from the book, Getting Results the Agile Way: A Personal Results System for Work and Life (Amazon).

  • J.D. Meier's Blog

    Windows Azure Developer Guidance Map



    If you’re a Windows Azure developer or you want to learn Windows Azure, this map is for you.   Microsoft has an extensive collection of developer guidance available in the form of Code Samples, How Tos, Videos, and Training.  The challenge is -- how do you find all of the various content collections? … and part of that challenge is knowing *exactly* where to look.  This is where the map comes in.  It helps you find your way around the online jungle and gives you short-cuts to the treasure troves of available content.

    The Windows Azure Developer Guidance Map helps you kill a few birds with one stone:

    1. It shows you the key sources of Windows Azure content and where to look (“teach you how to fish”)
    2. It gives you an index of the main content collections (Code Samples, How Tos, Videos, and Training)
    3. You can also use the map as a model for creating your own map of developer guidance.

    Download the Windows Azure Developer Guidance Map

    Contents at a Glance

    • Introduction
    • Sources of Windows Azure Developer Guidance
    • Topics and Features Map (a “Lens” for Finding Windows Azure Content)
    • Summary Table of Topics
    • How The Map is Organized (Organizing the “Content Collections”)
    • Getting Started
    • Architecture and Design
    • Code Samples
    • How Tos
    • Videos
    • Training

    Mental Model of the Map
    The map is a simple collection of content types from multiple sources, organized by common tasks, common topics, and Windows Azure features:


    Special Thanks …
    Special thanks to David Aiken, James Conard, Mike Tillman, Paul Enfield, Rob Boucher, Ryan Dunn, Steve Marx, Terri Schmidt, and Tobin Titus for helping me find and round up our various content collections.

    Enjoy and share the map with a friend.

  • J.D. Meier's Blog

    How To Use Getting Results the Agile Way with Evernote


    One of the most common questions I get with Getting Results the Agile Way is, “What tools do I use to implement it?”

    The answer is, it depends on how "lightweight" or "heavy" I need to be for a given scenario.  The thing to keep in mind is that the system is stretch-to-fit because it's based on a simple set of principles, patterns, and practices.  See Values, Principles, and Practices of Getting Results the Agile Way.

    That said, I have a few key scenarios:

    1. Just me.
    2. Pen and Paper.
    3. Evernote.

    The Just Me Scenario
    In the "Just Me" scenario, I don't use any tools.  I just take "mental notes."  I use The Rule of Three to identify three outcomes for the day.  I simply ask the question, "What are the three most important results for today?"  Because it's three things, it's easy to remember, and it helps me stay on track.  Because it's results or outcomes, not activities, I don't get lost in the minutia.

    The Pen and Paper Scenario
    In the Pen and Paper scenario, I carry a little yellow sticky pad.  I like yellow stickies because they are portable and help me free up my mind by writing things down.  The act of writing it down also forces me to get a bit more clarity.  As a practice, I either write the three results I want for the day on the first note, or I write one outcome per note.  The main reason I write one result per sticky note is so that I can jot supporting notes, such as tasks, or throw it away when I've achieved that particular result.  It's a simple way to game my day and build a sense of progress.

    I do find that writing things down, even as a simple reference, helps me stay on track way more than just having it in my head.

    The Evernote Scenario
    The Evernote scenario is my high-end scenario.  This is for when I'm dealing with multiple projects, leading teams, etc.  It's still incredibly light-weight, but it helps me stay on top of my game, while juggling many things.  It also helps me quickly see when I have too much open work, or when I'm splitting my time and energy across too many things.  It also helps me see patterns by flipping back through my daily outcomes, weekly outcomes, etc.

    It's hard to believe, but I've already been using Evernote with Getting Results the Agile Way for years.  I just checked the dates of my daily outcomes, and I had switched to Evernote back in 2009.  Time sure flies.  It really does.

    Anyway, I put together a simple step-by-step How To to walk you through setting up Getting Results the Agile Way in Evernote.  Here it is:

    If you’re a OneNote user, and you want to see how to use Getting Results the Agile Way with OneNote, check out Anu’s post on using Getting Results the Agile Way with OneNote.

  • J.D. Meier's Blog

    Key Software Trends


    What you don’t know can hurt you. Sometimes the world can change under your feet and you never saw it coming.  I like to anticipate and stay ahead of the curve where I can. As part of our patterns & practices Application Architecture Guide 2.0 project, I’ve been hunting and gathering trends that influence software development. Rather than make this exhaustive, I wanted to share "good enough" for now, and leave you room to tell me what I've missed and share what you’re seeing in your world.

    Key Notes On Trends

    • Trends aren’t fads.  Trends tend to have depth and staying power, whereas fads tend to be short-lived.
    • Some fads are future trends in disguise.
    • In my experience, consumer trends influence Enterprise trends.
    • Use trends to help you avoid surprises and to sound hip at the water cooler when you too can speak the buzz.

    Trend "Hot Spots"
    Rather than distinguish between trends and fads, I decided to focus on “hot spots” and simply identify the topics that keep showing up in various contexts with customers, in Microsoft, in the industry, … etc.

  • Business Process Management (BPM)
  • Composite / Mash Ups (Server-side, Client-side)
  • Dynamic Languages
  • Functional Programming
  • Health
  • Model-Driven
  • Representational State Transfer (REST)
  • Software plus Services / Software as a Service / Platform as a Service (S+S / SaaS / PaaS)
  • Service Oriented Architecture (SOA)
  • Rich Internet Applications (RIA)
  • Testability
  • User Empowerment (shift in power from business and tech to the user)
  • User Experience (not to be confused with UI)
  • Infrastructure
  • Cloud Computing
  • Green IT
  • Virtualization
  • Very Large Databases
  • Performance
  • Grid
  • High Performance Computing (HPC)
  • Many-core / Multi-core
  • Parallel Computing
  • Software Development
  • Application Life-Cycle Management (ALM)
  • Distributed Teams
  • Lean
  • Scrum
  • User-Lead
  • XP
    I know the list looks simple enough, but it actually took a bit of vetting to spiral down on the ones that have wood behind the arrow and show signs of a trajectory.

    David Chou on Trends ...
    In this post, Cloud Computing and Microsoft, David Chou identifies the following trends:

    • Consumerization of the Web, and use of browsers
    • Application development efforts shifting towards thin clients and server-side programming
    • Improvements in network bandwidth, anywhere wireless access, etc.
    • Increased maturity in open source software
    • Proliferation and advancement of mobile devices
    • Service Oriented Architecture
    • Software-as-a-Service
    • Utility/Grid Computing

    Simon Guest on Trends ...
    In his talk, An Architectural Overview of Software + Services, Simon Guest outlines the following industry trends:

    • Trend 1: Service Oriented Architecture (SOA)
    • Trend 2: Software as a Service (SaaS)
    • Trend 3: Web 2.0
    • Trend 4: Rich Internet Applications (RIA)
    • Trend 4: Cloud Computing

    Simon connects the dots from the trends to how they support Software + Services:

    • SOA - Reuse and agility
    • RIA: Rich Internet Applications - Experience
    • SaaS: Software as a Service - Flexible pricing and delivery
    • Cloud Computing - Service Utility
    • Web 2.0 - Network Effect

    Phillip Winslow on Trends …
    In an Interop Keynote Panel on Current Software Trends, Phillip Winslow responded to the request to predict the biggest IT story for 2008 as follows:

    “ … decoupling of the end user platform. Virtualization and desktop virtualization and layering on SAAS… Salesforce.com, gmail, etc how we think about the desktop sitting on our desk will be much different.”

    TrendWatching.com on Trends ...
    Here's a snapshot of interesting tidbits from an earlier version of this trend report page on TrendWatching.com:

    • HappyNomics - In fact, this year may be a good time to move 'HAPPYNOMICS' from the academic world to your ideation team.
    • 12 Themes - The 20+ trends covered in the report are part of bigger themes:
      'REAL', 'BEST', 'STORY', 'UNREAL', 'UNFIXED', 'TIME', 'GREEN', 'DOMAIN', 'ONLINE', '(R)ETAIL', 'ASSIST' and 'PARTICIPATE'.   These themes are all about what will EXCITE consumers in the near future, and can be read/presented independently, or as a coherent 'story'.

    Here's the cool part.  I saw GREEN on their page well before I noticed any of the Green IT initiatives show up.  It struck me as an example of consumer trends influencing Enterprise trends.

    Key Posts
    Here are some of the posts that I found useful for understanding the impact and influences behind some of the trends:

    Additional Resources
    I know this looks like a laundry list of links but these are actually helpful to quickly get what some of the topics are about:

    My Related Posts

  • J.D. Meier's Blog

    New Release: patterns & practices App Arch Guide 2.0 Beta 2


    Today we released our patterns & practices Application Architecture Guide 2.0 Beta 2.  This is our Microsoft playbook for the application platform.  It's our guide to help solution architects and developers make the most of the Microsoft platform.  It's a distillation of many lessons learned.  It’s principle-based and pattern-oriented to provide a durable, evolvable backdrop for application architecture.  It's a collaborative effort among product team members, field, industry experts, MVPs, and customers.  This is the guide that helps you understand our platform, choose among the technologies, and build applications based on lessons learned and proven practices.

    Key Changes in Beta 2
    Beta 2 is a significant overhaul of the entire guide.  We carried the good forward.  We made some key additions:

    • Added a foreword by S. Somasegar.
    • Added technology considerations throughout the guide.
    • Added technology matrixes for choosing technologies, including Presentation, Data Access, Workflow, and Integration technologies.
    • Added a new Agile Architecture Method (see Chapter 4).
    • Tuned and pruned the recommendations across the entire guide.
    • Restructured the guide for simpler parts – fundamentals, design, layers and archetypes.

    4 Parts

    • Part I, “Fundamentals”
    • Part II, “Design”
    • Part III, “Layers”
    • Part IV, “Archetypes"



    Key Scenarios
    The guide helps you address the following scenarios:

    • Choose the right architecture for your application.
    • Choose the right technologies.
    • Make more effective choices for key engineering decisions.
    • Map appropriate strategies and patterns.
    • Map relevant patterns & practices solution assets.

    Key Features

    • Canonical app frame - describes, at a meta-level, the tiers and layers that an architect should consider. Each tier/layer is described in terms of its focus, function, capabilities, common design patterns, and technologies.
    • App Types.  Canonical application archetypes to illustrate common application types.  Each archetype is described in terms of the target scenarios, technologies, patterns and infrastructure it contains. Each archetype will be mapped to the canonical app frame. They are illustrative of common app types and not comprehensive or definitive.
    • Arch Frame - a common set of categories for hot spots for key engineering decisions.
    • Quality Attributes - a set of qualities/abilities that shape your application architecture: performance, security, scalability, manageability, deployment, communication, etc.
    • Principles, patterns and practices - Using the frames as backdrops, the guide overlays relevant principles, patterns, and practices.
    • Technologies and capabilities - a description/overview of the Microsoft custom app dev platform and the main technologies and capabilities within it.

    Conceptual Framework
    At a high level, the guide is based on the following conceptual framework for application architecture:


    Reference Application Architecture
    We used the following reference application architecture as a backdrop for explaining how to design effective layers and components:


    Key Links

    Core Dev Team

    • J.D. Meier, Alex Homer, David Hill, Jason Taylor, Prashant Bansode, Lonnie Wall, Rob Boucher, Akshay Bogawat

    Contributors / Reviewers

    • Test team: Rohit Sharma, Praveen Rangarajan
    • Edit team: Dennis Rea.
    • External Contributors/Reviewers. Adwait Ullal; Andy Eunson; Christian Weyer; David Guimbellot; David Weller; Derek Greer; Eduardo Jezierski; Evan Hoff; Gajapathi Kannan; Jeremy D. Miller; John Kordyback; Keith Pleas; Kent Corley; Mark Baker; Paul Ballard; Peter Oehlert; Norman Headlam; Ryan Plant; Sam Gentile; Sidney G Pinney; Ted Neward; Udi Dahan
    • Microsoft Contributors / Reviewers. Ade Miller; Anoop Gupta; Bob Brumfield; Brad Abrams; Brian Cawelti; Bhushan Nene; Burley Kawasaki; Carl Perry; Chris Keyser; Chris Tavares; Clint Edmonson; David Hill; Denny Dayton; Diego Dagum; Dmitri Martynov; Dmitri Ossipov; Don Smith; Dragos Manolescu; Elisa Flasko; Eric Fleck; Erwin van der Valk; Faisal Mohamood; Francis Cheung; Gary Lewis; Glenn Block; Gregory Leake; Ian Ellison-Taylor; Ilia Fortunov; J.R. Arredondo; John deVadoss; Joseph Hofstader; Koby Avital; Loke Uei Tan; Manish Prabhu; Meghan Perez; Mehran Nikoo; Michael Puleio; Mike Walker; Mubarak Elamin; Nick Malik; Nobuyuki Akama; Ofer Ashkenazi; Pablo Castro; Pat Helland; Phil Haack; Reed Robison; Rob Tiffany; Ryno Rijnsburger; Scott Hanselman; Serena Yeoh; Srinath Vasireddy; Tom Hollander; Wojtek Kozaczynski

    My Related Posts

  • patterns & practices App Arch Guide 2.0 Project
  • App Arch Guide 2.0 Beta 1 Release
  • App Arch Guide 2.0 Overview Slides
  • Abstract for Application Architecture Guide 2.0
  • App Arch Meta-Frame
  • App Types
  • Architecture Frame
  • App Arch Guidelines
  • Layers and Components
  • Key Software Trends
  • Cheat Sheet: patterns & practices Catalog at a Glance Posted to CodePlex
  • Cheat Sheet: patterns & practices Pattern Catalog Posted to CodePlex
  • J.D. Meier's Blog

    3 Keys to Agile Results


    Agile Results is the name of the system I talk about in Getting Results the Agile Way.   It’s a simple time management system for meaningful results.  The focus is on meaningful results, not doing more things.  There are three keys to the Agile Results system:

    1. The Rule of Three
    2. Monday Vision, Daily Wins, Friday Reflection
    3. Hot Spots

    The Rule of 3
    The Rule of 3 helps you avoid getting overwhelmed.  It’s also a guideline that helps you prioritize and scope. Rather than bite off more than you can chew, you bite off three meaningful things. You can use The Rule of 3 at different levels by picking three wins for the day, three wins for the week, three wins for the month, and three wins for the year. This helps you see the forest for the trees since your three wins for the year are at a higher level than your three wins for the month, and your three wins for the week are at a higher level than your three wins for the day.  You can easily zoom in and out to help balance your perspective on what’s important, for the short term and the longer term.

    Monday Vision, Daily Wins, Friday Reflection
    Monday Vision, Daily Wins, Friday Reflection is a weekly results pattern.  This is a simple “time-based” pattern. Each week is a fresh start. On Mondays, you think about three wins you would like for the week.  Each day you identify three wins you would like for the day. On Fridays, you reflect on lessons learned; you ask yourself, “What three things are going well, and what three things need improvement?”  This weekly results pattern helps you build momentum.

    Hot Spots
    Hot Spots are a way to heat map your life.  They help you map out your results by identifying “what’s hot.” Hot Spots become both your levers and your lens to help you identify and focus on what’s important in your life. They can represent areas of pain or opportunity. You can use Hot Spots as your main dashboard.  You can organize your Hot Spots by work, personal, and the “big picture” of your life. At a glance, you should be able to quickly see the balls you are juggling and what’s on your plate. To find your Hot Spots, simply make a list of the key things that need your time and energy. Then, for each of these key things, create a simple list, a “tickler list,” that answers the question, “What do you want to accomplish?” Once you know the wins you want to achieve in your Hot Spots, you have the ultimate map for your meaningful results.

    You can use Agile Results for work or home or anywhere you need to improve your results in life. Agile Results is compatible with, and can enhance the results of, any productivity or time management system you already use.  That’s because the foundation of the Agile Results platform is a core set of principles, patterns, and practices for getting results.

    The simplest way to get started with Agile Results is to read Getting Started with Agile Results, and take the 30 Day Boot Camp for Getting Results.

  • J.D. Meier's Blog

    E-Shaped People, Not T-Shaped


    Are you I-shaped, T-shaped, or E-shaped?

    I is depth, T is breadth, and E executes.  (Of course there’s overlap, but you get the idea.)

    If you think of yourself as a mini-business, can you “execute” your ideas (either yourself, with others, or through others) and amplify your impact?  And, by the way, just how important is the ability to execute?   Well, it’s important enough that Gartner uses “Ability to Execute” as a key criterion in its Magic Quadrants.

    Ability to execute + high-impact ideas are a recipe for value.

    After all, what good are a bunch of ideas if you can’t make them happen?

    Keep these mental models in mind as you design your career path and grow your capabilities, skills, and experiences.

    With that in mind, let’s explore a little more …

    T-Shaped People

    Here's what Wikipedia says about T-shaped people:

    "The concept of T-shaped skills, or T-shaped persons is a metaphor used in job recruitment to describe the abilities of persons in the workforce. The vertical bar on the T represents the depth of related skills and expertise in a single field, whereas the horizontal bar is the ability to collaborate across disciplines with experts in other areas and to apply knowledge in areas of expertise other than one's own."

    (The earliest reference to T-shaped people is by David Guest in  "The hunt is on for the Renaissance Man of computing," in The Independent, September 17, 1991.)

    I-Shaped People

    By contrast, I-shaped people have narrow but deep, expert skills in one specific area.

    E-Shaped People

    According to Sarah Davanzo, E-Shaped people are those that "execute":

    “People (workers) today also need to be able to execute.  As they say, 'ideas are like noses, everyone has one.'  I’m tired of people coming to me with a great idea or invention, but with no clue how to bring it to fruition. Real genius is being able to execute ideas.”

    An "Ideas Person" Isn't Good Enough

    According to Sarah Davanzo, an “ideas person” isn’t good enough.  You need the ability to execute.  Here’s what Sarah says:

    “Being an experienced, expert, exploratory “ideas person” isn’t good enough in today’s culture, and here’s why.  The trends clearly favor those with “breadth” and “depth”, as well as the tangible (execution) and intangible (exploration), implying having both a big-picture outlook and an attention to detail from being a practitioner.”

    4-Es: Experience, Expertise, Exploration, and Execution

    According to Sarah Davanzo, “E-shaped” people have a combo of 4 Es: 

    “’E-Shaped People’ have a combination of ‘4-E’s’: experience and expertise, exploration and execution.   The last two traits – exploration and execution – are really necessary in the current and future economy.”

    Note – If you want to work on your ability to execute, Getting Results the Agile Way is a good way to start.  (It’s the playbook I wish somebody had given me long ago.)

    Is Your CQ (Curiosity Quotient) the Key to Your Future Success?

    According to Sarah Davanzo, your CQ matters more than your IQ and EQ:

    “Exploration = curiosity. Innovation and creative problem solving is tied to one’s “curiosity quotient” (CQ). In this day and age of constant change (think: Moore’s Law), one’s CQ is more useful than one’s IQ or EQ.”

    Side note – Edward de Bono wrote about how exploring ideas is how to have a beautiful mind.

    Bill Buxton on I-Shaped People

    Bill Buxton puts a spin on “I-shaped” people in his article on Innovation Calls for I-Shaped People:

    “But while I love Bill's (Bill Moggridge) notion of T-shaped people, things are just not that simple. So as both compliment and complement, I propose I-shaped people. These have their feet firmly planted in the mud of the practical world, and yet stretch far enough to stick their head in the clouds when they need to. Furthermore, they simultaneously span all of the space in between.”

    Three Pillars:  Business, User Experience, and Technology

    According to Bill Buxton, at Microsoft we purposefully plan for equal levels of competence and creativity in business, design, and technology:

    “When you slide multiple Ts together, their cross bars all overlap, indicating that the various Ts have a common language, and, ideally, their combined base can be broad enough to cover the domain of the problem that you are addressing. At Microsoft (MSFT), we try to make sure that in looking at new product or services ideas, we have at least three Ts, which we call BXT, reflecting equal levels of competence and creativity in three domains: business, design, and technology. These are three interdependent and interwoven pillars we see as the foundation for what we do.”

    Abstract + Concrete = Outstanding

    Outstanding people can generalize and abstract, as well as get specific and make things actually work.  They bridge the head in the clouds and feet on the ground with other people.   Bill Buxton writes: 

    “I once asked him (Brian Shackel) if he had noticed any particular attributes that distinguished the students that went on to do remarkable things compared with the rest. His answer was as immediate as it was insightful. He said: ‘The outstanding students all had an outstanding capacity for abstract thinking, yet they also had a really strong grounding in physical materials and tools.’ By this, he meant that they could rise above the specifics of a particular problem to think about them in a more abstract, and in some ways, more general way.”

    Expand Your T-Shape with Personal Effectiveness

    One way to grow your T-Shape is to grow your personal effectiveness capabilities.   The U.S. Department of Labor actually has a Competency Model Clearinghouse.   You can easily browse different industries and find a list of Personal Effectiveness capabilities.  For example, in the Information Technology Competency Model, they list the following Personal Effectiveness capabilities:

    • Interpersonal Skills and Teamwork
    • Integrity
    • Professionalism
    • Initiative
    • Adaptability and Flexibility
    • Dependability and Reliability
    • Lifelong Learning

    BTW – did you notice that Personal Effectiveness is at the base of the pyramid in the competency model?  The higher you go, the narrower and more specific the competencies get.  The lower you go, the broader and more general they are.  Personal Effectiveness is at the base of all the pyramids.  That should tell you something.

    Note – If you want to work on your personal effectiveness skills, I have a knowledge base at Sources of Insight that focuses on personal effectiveness, personal development, leadership, productivity, emotional intelligence, time management, strengths, motivation, and more.

    Additional Resources

    You Might Also Like

    Ability to Execute

    Agile Downsizing: Why Agile Skills Improve a Project Manager’s Job Security

    Anatomy of a High-Potential

    Generalists vs. Specialists

    The Microsoft Career Survival Guide

  • J.D. Meier's Blog

    Motivation Techniques and Motivation Theories


    "To hell with circumstances; I create opportunities." – Bruce Lee

    Motivation is a key to making things happen, whether you’re developing software, leading teams, or just getting yourself out of bed and on with your day.

    It's hard to change the world, or even just your world for that matter, if you lack the motivation or drive.  In a world where there is plenty that can bring you down, the best thing you can do is arm yourself with motivation techniques that work, and motivation theories that explain *why* they work.

    Motivation Techniques
    You can motivate yourself with skill, as well as others, if you know the key motivation techniques.  Here is my latest collection of motivation techniques and methods at a glance:

    You can use the motivation techniques to motivate yourself and others.

    Motivation Theories
    There are a lot of motivation theories that are relevant, and some have evolved over the years.  Maslow’s Hierarchy of Needs is useful for understanding some basic drivers.  It’s also useful to know David McClelland’s Theory of Needs, which focuses on achievement, affiliation, and power as key drivers.

    It’s also useful to distinguish between intrinsic and extrinsic motivation.  For example, if you depend on other people to carrot or stick you, you’re driving from extrinsic motivation.  If instead, you’re doing something because it makes you feel alive or unleashes your passion or simply just for a job well done, then you’re driving from intrinsic motivation, and that is a powerful place to be. 

    It’s also useful to know that at the end of the day, purpose is the most powerful driver, and if you can connect what you do to your purpose, then you bring out your best and you’re a powerful force to be reckoned with.  Purpose, passion, and persistence change the game.

    Timeline of Motivation Theories, Studies, and Models
    Here is a timeline of some interesting work on the study of motivation:

    • 1939 - The Hawthorne studies focused on supervision, incentives, and working conditions.
    • 1957 - Argyris focused on the congruence between individual needs and organizational demands.
    • 1959 - Focused on sources of work satisfaction to design work that is enriching and rewarding (Herzberg, Mausner, and Snyderman)
    • 1964 - Valence-instrumentality-expectancy model (Vroom)
    • 1975 - Organizational behavior modification - Focused on the automatic role of rewards and feedback in work motivation, but downplayed the impact of psychological processes such as goals and self-efficacy.
    • 1977 - Self-efficacy (Locke)
    • 1980 - Focused on the ways specific work characteristics and psychological processes increase employee satisfaction (Hackman and Oldham)
    • 1986 - Goals and self-efficacy (Bandura)
    • 1986 – Social-cognitive theory (Bandura)
    • 1986 - Attribution theory - Focuses on how the way you make attributions affects your future choices and actions (Weiner)
    • 1987 - Goal theory - Focuses on the effects of conscious goals as motivators of task performance (Lord and Hanges)
    • 1997 – Self-efficacy has a powerful motivating effect on task performance (Bandura)
    • 2002 - Goal-setting theory (Locke and Latham)

    Motivation Quotes
    If you need some inspiring words of wisdom, be sure to explore my collection of motivation quotes.

  • J.D. Meier's Blog

    Microsoft Cloud Analysis Tool and Infrastructure Optimization Tool


    I had 20 minutes before my meeting so I did a quick step through of the new Microsoft Cloud Analysis Tool and the Infrastructure Optimization Self-Assessment Tool.  

    The Microsoft Cloud Analysis Tool helps you build a roadmap to the Cloud based on your business needs, constraints, and desired attributes.  It’s a “what if” for the Cloud that lets you play out the possibilities by changing your parameters.  That’s a mighty powerful thing if you are trying to cycle through various options and understand the trade-offs.  In fact, independent of the actual content in the tool, I think the most valuable part is the framing of the decisions.  If you use nothing else, you can at least use the frames to accelerate your own Cloud decision making and make more informed choices.

    I limited my words and focused on screen captures so that you can quickly scan the end-to-end to see the inputs and the outputs.

    Here is a summary of the tools:

    • Infrastructure Optimization Self-Assessment Tool – You can use the assessment tool to understand the company’s current “as is” environment and at the same time set the desired “to be” roadmap.  The Infrastructure Optimization Self-Assessment Tool provides a personalized Optimization score for the organization.  The tool generates reports that can serve as the baseline for planning an effective roadmap and as an incentive for optimizing your IT infrastructure.  The detailed roadmap plan is generated as part of the Discovery tools.
    • Cloud Analysis Tool - This tool helps you plan the migration to the cloud deployment of your choice based on prioritized business and IT needs, services and applications, and risks and constraints.  The Cloud Analysis Tool evaluates what you select against your current environment’s maturity level and provides the information required to make decisions about which cloud architecture is right for you -- and more importantly -- how to get there.  Selecting your preferred cloud deployment option generates a transformation plan including project information, ROI/TCO data, and architecture diagrams – the information you need to plan and achieve your cloud transformation.
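    To make the idea of an Optimization score concrete, here is a minimal sketch of how per-capability assessment answers might roll up into a single score, and into a gap between the “as is” and “to be” states.  The capability names, weights, and formula are hypothetical illustrations of the general approach, not the tool’s actual model.

```python
# Illustrative sketch only: roll per-capability maturity answers (1-4)
# up into a single weighted score. Capability names and weights are
# hypothetical, not the Self-Assessment Tool's actual model.

def optimization_score(answers, weights):
    """Weighted average of per-capability maturity levels."""
    total_weight = sum(weights[c] for c in answers)
    return sum(level * weights[c] for c, level in answers.items()) / total_weight

weights = {"identity": 0.3, "data_protection": 0.4, "management": 0.3}
as_is = {"identity": 1, "data_protection": 2, "management": 1}   # current state
to_be = {"identity": 3, "data_protection": 4, "management": 3}   # desired state

gap = optimization_score(to_be, weights) - optimization_score(as_is, weights)
print(f"as-is: {optimization_score(as_is, weights):.1f}, gap: {gap:.1f}")
```

    The gap between the two scores is the kind of signal the generated reports and roadmap can be built around.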

    Here is the home page of the Microsoft Cloud Analysis Tool and Microsoft Infrastructure Optimization Self-Assessment Tool:


    Optimization Assessment and Discovery Step Through

    Step 1 - Create an Account


    Step 2 – Create a Profile




    Profile a Workload



    Step 3 – Choose a Discovery Activity



    Cloud Analysis Tool

    The output includes:

    • Cloud Decision Summary (PowerPoint)
    • Cloud ROI (PowerPoint)
    • Cloud Decision Summary
    • Cloud Transformation Plan
    • Microsoft Project Output
    • Architecture Diagram (Visio)

    Cloud Decision Summary (PowerPoint)

    Cloud ROI (PowerPoint)

    Cloud Decision Summary

    Cloud Transformation Plan

    Microsoft Project Output




    Architecture Diagram (Visio)


  • J.D. Meier's Blog

    Writing Books on Time and on Budget



    One of the questions I get asked is how we executed our patterns & practices Application Architecture Guide 2.0 project on time and on budget.  It was a six month project, during which we ....

    2 Keys to Success
    We used two keys to success:

    • Fix time, Flex scope
    • Agile Guidance Engineering

    Fix Time, Flex Scope
    One of the most successful patterns I've used for years now is to fix time and flex scope.  The idea is to deliver incremental value and find a way to flow value along the way, rather than wait for one big bang at the end.  This allows you to deliver the most timely and relevant value with a healthy work/life balance, and it helps reduce project risk along the way.  More importantly, it helps get your stakeholders on board by showing them results, versus asking them to just trust you until the end.  Scope is the best thing to flex because it has the least precision or accuracy up front, and flexing it enables you to respond to market or stakeholder concerns more effectively.

    Agile Guidance Engineering
    This is the secret sauce.  I call it Agile Guidance Engineering:


    In a nutshell, Agile Guidance Engineering is about building guidance using nuggets of specific types (how tos, guidelines, checklists ... etc.) and composing them into books.  The books themselves are actually an information model.  The information model is designed to both structure the content as well as structure the problem domain.  We vet the nuggets as we go for feedback, and we prioritize, tune, and improve them along the way.
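    To make the nugget idea concrete, here is a minimal sketch (all names hypothetical) of guidance nuggets of specific types being composed into chapters through a simple information model:

```python
# Minimal sketch (names hypothetical) of guidance nuggets composed into
# chapters through a simple information model.
from dataclasses import dataclass, field

@dataclass
class Nugget:
    title: str
    kind: str   # e.g. "how-to", "guideline", "checklist"
    body: str

@dataclass
class Chapter:
    name: str
    nuggets: list = field(default_factory=list)

def compose_book(nuggets, outline):
    """Group vetted nuggets into chapters; the outline maps a chapter
    name to the ordered nugget titles it should contain."""
    by_title = {n.title: n for n in nuggets}
    return [Chapter(name, [by_title[t] for t in titles if t in by_title])
            for name, titles in outline.items()]

nuggets = [
    Nugget("Validate input", "guideline", "..."),
    Nugget("How to encrypt a connection string", "how-to", "..."),
]
outline = {"Security": ["Validate input", "How to encrypt a connection string"]}
book = compose_book(nuggets, outline)
print(book[0].name, len(book[0].nuggets))  # Security 2
```

    The payoff of keeping nuggets separate from the outline is that each nugget can be vetted, prioritized, and improved independently, then recomposed at any time.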

    I've used Agile Guidance Engineering successfully to build the following Microsoft patterns & practices Blue Books:

    My Related Posts

  • J.D. Meier's Blog

    Software Architecture Best Practices at a Glance


    Today we posted our updated software architecture best practices at a glance to CodePlex, as part of our patterns & practices Application Architecture Guide 2.0 project:

    They’re essentially a brief collection of problems and solutions around building software applications.  The answers are short and sweet so that you can quickly browse them.  You can think of them as a bird’s-eye view of the problem space we tackled.  When we add them to the Application Architecture Guide 2.0, we'll provide quick links into the guide for elaboration.

    This is your chance to bang on the set of problems and solutions before Beta 2 of the guide.

  • J.D. Meier's Blog

    NLP Patterns and Practices for High-Performance Teams and Achievers


    “The winners in life think constantly in terms of I can, I will, and I am. Losers, on the other hand, concentrate their waking thoughts on what they should have or would have done, or what they can’t do.” – Dennis Waitley

    One of the ways I set better goals and achieve them at Microsoft is by using well-defined outcomes.   It’s a way to begin with the end in mind.  An outcome is simply something that follows as a result or consequence.

    Maybe the best way to think of an outcome is that it answers the question: “What do you want?”

    (If you want to just jump to the recipe and full expanded explanation of how to set better goals, go here: How To Set Better Goals with Well-Defined Outcomes)

    Outcomes are the Key to High-Performance and Outstanding Results in  Work and Life

    NLP (Neuro-Linguistic Programming) has popularized the use of outcomes over the years to help people achieve better results.  You can think of NLP as a way to model excellence and replicate it from one person to another.  It’s a way to program your mind, body, and emotions using advanced skills for high-performance.  (Tip – if you don’t program yourself, somebody else will.)

    Imagine if you could model what the most successful people think, feel, and do, and get that on your side.

    If you like languages or the idea of language to share and express concepts, you’ll especially appreciate the power of NLP.  NLP helps you create a lot of precision in how you see things, how you articulate things, how you filter information, and how you distill feedback into actionable insights.

    NLP is for Continuous Learning, Agile Personal Development, and Business Results

    NLP is like Agile Personal Development, with continuous learning fundamental to its core.

    NLP is probably the most powerful set of techniques I’ve ever come across for personal development, personal effectiveness, leadership, and high-performance.   The techniques effectively help you find better, faster, easier ways to accomplish outstanding results, while helping you bring out your best.   Tony Robbins popularized NLP back in the 80’s, but it’s more mainstream today.

    In fact, I know a lot of executives and highly effective Softies that use NLP to get the edge in work and life.   I also know a lot of developers that have NLP under their belt and it helps them clarify what they want, set better goals, take more effective action, and communicate more effectively to themselves and others.  In fact, some say that NLP is simply a set of advanced communication techniques.

    Developers Love NLP When They Stumble Upon It

    Developers often find a special place in their hearts for NLP because of its precision and how it helps to “codify” behaviors.   Specialists often use NLP to model high-performance behaviors and break them down into a recipe.   These recipes for results help guide your thoughts, feelings, and actions in a more powerful way.

    Anyway, what makes NLP powerful when you are setting goals is that it helps you really identify the end in mind.  It brings your full senses to bear, so instead of imagining a fuzzy scene of what success looks like with loosey-goosey language, it forces you to get specific and use precision, and to really get clarity on what you actually want to achieve. 

    After all, it’s a lot easier to get to where you are going, if you know what you really want to accomplish.

    In addition to helping you create compelling scenes of success, or mental movies of your future victories, NLP also helps you break your goal down into actionable chunks.  It also sets you up for success by teaching you to focus on feedback as a way to improve, not a sign of failure.  In this way, you keep refining your actions and your outcome until you achieve your goal.

    Patterns and Practices for High-Performance and Personal Development

    What most people don’t know about NLP is that it’s been an effective tool for years for building a great big body of knowledge around high-performance patterns for individuals, teams, and leaders.  The NLP framework provides a way to capture and share very detailed patterns of behavior that help people improve their performance.  Whether you want to improve your leadership skills, or your relationship skills, or whatever, there is a bountiful catalog of very specific patterns that help you do that. 

    And, the beauty of patterns in NLP is that they tend to be very prescriptive, very specific, and easy to follow and try out.  This makes it easy to test and adapt until you find what works for you.  (I’m a fan of don’t take things at face value – test them for yourself and judge from results.  I’m also a fan of Bruce Lee’s timeless wisdom: “Absorb what is useful, Discard what is not, Add what is uniquely your own.")

    One of the best books I’ve found is The Big Book of NLP Techniques, by Shlomo Vaknin.  Surprisingly, it actually delivers what it says on the cover.  There are more than 350 patterns at your fingertips.  I wrote about one of the patterns, Well-Defined Outcomes, in my post on How To Set Better Goals with Well-Defined Outcomes.  You need to see it to believe it.  It really is detailed, so if you’ve ever struggled with setting goals, this might be your big breakthrough.

    The Big Breakthrough in Goal Setting

    Here’s the real breakthrough in goal setting, though.  Aside from making sure you have goals that inspire you, and that they are aligned with what you really want, the power of the goal is ultimately in moving you in the right direction.  It’s not a perfect or precise path where you can simply do A and get B.  In fact, the irony is that if you really want B, your best strategy is to first act as if you already have B.  This will help you think, feel, and act from a more effective perspective so that your actions come from the right place, and help you produce more effective results (or at least guide you in the right direction).

    That’s why you often hear people say that you have to BE-DO-HAVE, not HAVE-DO-BE.  With HAVE-DO-BE, the idea is that when you get what you want, then you’ll start doing the things that go with it, and finally you’ll act the part.  This is like saying that you won’t show up like a leader or act like a leader until somebody appoints you to a leadership role.  This creates a negative loop – why should anybody put you in a role when you don’t act the part?

    The right thought pattern is BE-DO-HAVE because then your thoughts, feelings, and actions support your end results.

    Are you acting like what you want? 

    If you’re not getting what you want, what does your feedback tell you to change?

    You Might Also Like

    The Guerilla Guide to Getting a Better Performance Review at Microsoft

    Think in 3 Wins

    E-Shaped People, Not T-Shaped

  • J.D. Meier's Blog

    Lessons Learned in 2008


    I posted my Lessons Learned in 2008 on Sources of Insight.  2008 was a pretty insightful year for me.  I met a lot of great people, read a lot of books, and learned a lot along the way.  I recapped my top 10 lessons here.

    Top Ten Lessons for 2008

    • Adapt, adjust, or avoid situations. Learn how to read situations. Some situations you should just avoid.  Some situations you should adapt yourself, as long as you play to your strengths.  Some situations you should adjust the situation to set yourself up for success.  See The Change Frame.
    • Ask questions over make statements.  If you want to get in an argument, make statements.  If you want to avoid arguments, ask questions.
    • Character trumps emotion trumps logic.  Don’t just go for the logical win.  Win the heart and the mind follows.  Build rapport.  Remember the golden rule of “rapport before influence.”  Have the right people on your side.  If you win the right pillars first, it’s a domino effect.  It’s part of social influence.  See Character Trumps Emotion Trumps Logic.
    • Develop a routine for exceptional thinking.  Create a pre-performance routine that creates consistent and dependable thinking.  Work backwards from the end in mind.  Know what it’s like when you’re at your best.  Model from your best experiences.  Success leaves clues.  Turn them into a routine.  Set time boundaries.  Don’t let yourself take as long as it takes.  Work has a way of filling the available hours.  Set a timebox and improve your routine until you can shift gears effectively within your time boundaries.  See Design a Routine for Exceptional Thinking.
    • Give your best where you have your best to give.   Design your time to spend most of your time on your strengths.  Limit the time you spend in your weaknesses.   Play to your strengths.  When you play to your strengths, if you get knocked down, it’s easier to get up again.  It’s also how you unleash your best.  See Give Your Best Where You Have Your Best to Give.
    • Label what is right with things.  There’s been too much focus on what’s wrong with things.  Find and label what’s right with you.  We all have a deep need to know what’s right with us.  Shift from labeling what’s wrong, to labeling what’s right. See Label What is Right with Things.
    • One pitch at a time.  Focus on one pitch at a time.  Hook on to one thing.  Be absorbed in the moment, no matter what’s at stake.  Let results be the by-product of what you’re doing.  Don’t judge yourself while you’re performing.  Don’t rearrange your work; rearrange your focus.  See One Pitch at a Time.
    • Spend 75 percent on your strengths.  Very few people spend the majority of their time on their strengths.  Create timeboxes for your non-negotiables.  You’re not your organization’s greatest asset until you spend your time on your strengths.  Activities that you don’t like, hurt less, if you compartmentalize them to a smaller chunk of your day.  See Spend 75 Percent on Your Strengths.
    • Ask Solution-focused questions.   Ask things like “how do we make the most of this?” … “what’s the solution?” … “if we knew the solution, what might it be?”  Believe it or not, a lot of folks get stuck unless you add the “if you did know the solution …” or “what might it be?”  See Solution-Focused Questions.
    • Use stress to be your best.  It’s not what happens to you, it’s what you make of it.  Distinguish stress from anxiety.  Stress is your body’s response.  Anxiety is your mind’s response.   See Use Stress to Be Your Best.
  • J.D. Meier's Blog

    Silverlight Videos Map


    Here is a simple map of the Silverlight videos available from www.Silverlight.net.

    While making our map, I was surprised to see how many Silverlight videos there are compared to code samples or How Tos.  If you need a Silverlight video, you’re in luck.  Currently this is a flat list, and there are many possible improvements.  For example, it should now be easy to bubble up the favorite videos, or to add a “Getting Started” section at the top that groups all the videos related to getting started.

    The categories are simply groups based on the existing videos mapped to common areas of concern.  We used the following categories to chunk up the video collection into common tasks and topics:

    • Animations
    • Controls
    • Data Access
    • Data Binding
    • Data Validation
    • Deep Zoom
    • Deployment
    • Events and Delegates
    • General
    • Graphics and 3-D
    • HTML Bridge
    • Layout, Input, and Printing Security
    • Networking and Communication
    • Performance
    • Styles and Templates
    • Text and Rich Text
    • User Controls
    • Video and Audio
    • WCF RIA Services
    • XAML
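    As a rough illustration of how a flat video list gets chunked into categories like these, here is a small sketch; the titles and keyword rules are hypothetical, since the actual mapping was curated by hand against common areas of concern:

```python
# Rough sketch of grouping a flat video list into categories by keyword.
# Titles and keyword rules here are hypothetical; the real mapping was
# curated by hand.
from collections import defaultdict

CATEGORY_KEYWORDS = {
    "Data Binding": ["binding"],
    "Deep Zoom": ["deep zoom"],
    "Styles and Templates": ["style", "template"],
}

def categorize(video_titles):
    """Assign each title to the first category whose keyword it contains,
    falling back to a 'To Be Sorted' bucket."""
    groups = defaultdict(list)
    for title in video_titles:
        lower = title.lower()
        category = next((name for name, keywords in CATEGORY_KEYWORDS.items()
                         if any(k in lower for k in keywords)), "To Be Sorted")
        groups[category].append(title)
    return dict(groups)

grouped = categorize(["Binding to a ListBox", "Intro to Deep Zoom",
                      "Using the VideoBrush"])
print(grouped)
```

    A keyword pass like this would only be a starting point – whatever it can’t match lands in the “To Be Sorted” bucket for manual triage.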



    Data Access

    Data Binding

    Data Validation


    Deep Zoom


    Events and Delegates


    Graphics and 3-D

    HTML Bridge

    Layout, Input, and Printing Security

    Networking and Communication


    Styles and Templates

    Text and Rich Text

    User Controls

    Video and Audio

    WCF RIA Services


    To Be Sorted …

  • J.D. Meier's Blog

    Microsoft Cloud Case Studies at a Glance


    Cloud computing is hot.  As customers make sense of what the Microsoft cloud story means to them, one of the first things they tend to do is look for case studies of the Microsoft cloud platform.  They like to know what their peers, partners, and other peeps are doing.

    Internally, I get to see a lot of what our customers are doing across various industries and how they are approaching the cloud behind the scenes.  It’s amazing the kind of transformations that cloud computing brings to the table and makes possible.  Cloud computing is truly blending and connecting business and IT (Information Technology), and it’s great to see the connection.  In terms of patterns, customers are using the cloud to either reduce cost, create new business opportunities and agility, or compete in ways they haven’t been able to before.  One of the most significant things cloud computing does is force people to truly understand what business they are in and what their strategy actually is.

    Externally, luckily, we have a great collection of Microsoft cloud case studies available at Windows Azure Case Studies.

    I find having case studies of the Microsoft cloud story makes it easy to see patterns and to get a sense of where some things are going.  Here is a summary of some of the case studies available, and a few direct links to some of the studies.

    Advertising Industry
    Examples of the Microsoft cloud case studies in advertising:

    Air Transportation Services
    Examples of the Microsoft cloud case studies in air transportation services:

    Capital Markets and Securities Industry
    Examples of the Microsoft cloud case studies in capital markets and securities:

    Education
    Examples of the Microsoft cloud case studies in education:

    Employment Placement Agencies
    Examples of the Microsoft cloud case studies in employment agencies:

    • OCC Match - Job-listing web site scales up solution, reduces costs by more than U.S. $500,000.

    Energy and Environmental Agencies
    Examples of the Microsoft cloud case studies in energy and environmental agencies:

    • European Environment Agency (EEA) - Environment agency's pioneering online tools bring revolutionary data to citizens.

    Financial Services Industry
    Examples of the Microsoft cloud case studies in the financial services industry:

    • eVision Systems - Israeli startup offers cost-effective, scalable procurement system using cloud services.
    • Fiserv - Fiserv evaluates cloud technologies as it enhances key financial services offerings.
    • NVoicePay - New company tackles big market with cloud-based B2B payment solution.
    • RiskMetrics - Financial risk-analysis firm enhances capabilities with dynamic computing.

    Food Service Industry
    Examples of the Microsoft cloud case studies in the food service industry:

    • Outback Steakhouse - Outback Steakhouse boosts guest loyalty with Facebook and Windows Azure.

    Government Agencies
    Examples of the Microsoft cloud case studies in government agencies:

    Healthcare Industry
    Examples of the Microsoft cloud case studies in healthcare:

    • Vectorform - Digital design and technology firm supports virtual cancer screening with 3-D viewer.

    High Tech and Electronics Manufacturing
    Examples of the Microsoft cloud case studies in high tech and electronics manufacturing:

    • 3M - 3M launches Web-based Visual Attention Service to heighten design impact. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000005768
    • GXS Trading Grid - Electronic services firm reaches out to new markets with cloud-based solution.
    • iLink Systems - Custom developer reduces development time, cost by 83 percent for Web, PC, mobile target.
    • Microsoft Worldwide Partner Group - Microsoft quickly delivers interactive cloud-based tool to ease partner transition.
    • Sharpcloud - Software startup triples productivity, saves $500,000 with cloud computing solution.
    • Symon Communications - Digital innovator uses cloud computing to expand product line with help from experts.
    • VeriSign - Security firm helps customers create highly secure hosted infrastructure solutions.
    • Xerox - Xerox cloud print solution connects mobile workers to printers around the world.

    Hosting
    Examples of the Microsoft cloud case studies in hosting:

    • Izenda - Hosted business intelligence solution saves companies up to $250,000 in IT support and development costs.
    • Mamut - Hosting provider uses scalable computing to create hybrid backup solution.
    • Metastorm - Partner opens new market segments with cloud-based business process solution.
    • Qlogitek - Supply chain integrator relies on Microsoft platform to facilitate $20 billion in business.
    • SpeechCycle - Next generation contact center solution uses cloud to deliver software-plus-services.
    • TBS Mobility - Mobility software provider opens new markets with software-plus-services.

    Insurance Industry
    Examples of the Microsoft cloud case studies in the insurance industry:

    IT Services
    Examples of the Microsoft cloud case studies in IT services:

    • BEDIN Shop Systems - Luxury goods retailer gains point-of-sale solution in minutes with cloud-based system. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000008195
    • Broad Reach Mobility - Firm streamlines field-service tasks with cloud solution. - http://www.microsoft.com/casestudies/Windows-Azure/Broad-Reach-Mobility/Firm-Streamlines-Field-Service-Tasks-with-Cloud-Solution/4000008493
    • Codit - Solution provider streamlines B2B connections using cloud services. - http://www.microsoft.com/casestudies/Microsoft-BizTalk-Server/Codit/Solution-Provider-Streamlines-B2B-Connections-Using-Cloud-Services/4000008528
    • Cumulux - Software developer focuses on innovation, extends cloud services value for customers. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000007947
    • eCraft - IT firm links web applications to powerful business management software.
    • EdisonWeb - Web firm saves $30,000 annually, expands global growth with cloud database service.
    • Epicor - Software developer saves money, enhances application with Internet-based platform.
    • ESRI - GIS provider lowers cost of customer entry, opens new markets with hosted services.
    • Formotus - Forms automation company uses cloud storage to lower mobile data access costs.
    • FullArmor - FullArmor PolicyPortal Technical Brief: A Windows Azure/Software-plus-Services Solution
    • Gcommerce - Service provider transforms special-order process with a hybrid cloud and on-premises inventory solution.
    • GoGrid - Hosting provider extends service offerings, attracts customers with "cloud" platform.
    • Guppers - Mobile data services quickly and cost-effectively scale with cloud services solution.
    • HCL Technologies - IT firm delivers carbon-data management in the cloud, lowers barriers for customers.
    • HubOne - Australian development firm grows business exponentially with cloud services.
    • IMPACTA - IT security firm delivers low cost, high protection with online file transfer service.
    • Infosys - Infosys creates cloud-based solution for auto dealers using SQL data services.
    • InterGrid GreenButton - GreenButton super computing power in the cloud.
    • InterGrid - Software developers offer quick processing of compute-heavy tasks with cloud services.
    • ISHIR Infotech - IT company attracts new customers at minimal cost with cloud computing solution.
    • K2 - Software firm moves business processes and line-of-business data into the cloud.
    • Kelly Street Digital - Digital marketing startup switches cloud providers and saves $4,200 monthly.
    • Kompas Xnet - IT integrator delivers high-availability website at lower cost with online services.
    • LINQPad - Software developers gain the ease of LINQ data queries to compelling cloud content.
    • Meridium - Asset performance management solution increases performance and interoperability.
    • metaSapiens - ISV optimizes data browsing tool for online data, expects to enter global marketplace.
    • Microsoft - Microsoft IT moves auction tool to the cloud, makes it easier for employees to donate.
    • NeoGeo New Media - Digital media asset management solution gains scalability with SQL Data Services.
    • Paladin Data Systems - Software provider reduces operating costs with cloud-based solution.
    • Persistent Systems - Software services provider delivers cost-effective e-government solution.
    • Propelware - Intuit integration provider reduces development time, cost by 50 percent.
    • Quilink - Innovative technology startup creates contact search solution, gains global potential.
    • Siemens - Siemens expands software delivery service, significantly reduces TCO.
    • Sitecore - Company builds compelling web experiences in the cloud for multinational organizations.
    • SOASTA - Cloud services help performance-testing firm simulate real-world Internet traffic.
    • Softronic - Firm meets demand for streamlined government solutions using cloud platform.
    • SugarCRM - CRM vendor quickly adapts to new platform, adds global, scalable delivery channel.
    • Synteractive - Solution provider uses cloud technology to create novel social networking software.
    • Transactiv - Software start-up scales for demand without capital expenses by using cloud services.
    • Umbraco - Web content management system provider moves solution to the cloud to expand market.
    • Volantis - Mobile services ISV gains seamless scalability with Windows Azure platform.
    • Wipro - IT services company reduces costs, expands opportunities with new cloud platform.
    • Zitec - IT consultancy saves up to 90 percent on relational database costs with cloud services.
    • Zmanda - Software company enriches cloud-based backup solution with structured data storage.

    Life Sciences
    Examples of the Microsoft cloud case studies in life sciences:

    Manufacturing Industry
    Examples of the Microsoft cloud case studies in manufacturing:

    Media and Entertainment Industry
    Examples of the Microsoft cloud case studies in media and entertainment:

    • OriginDigital - Video services provider expects to reduce transcoding costs by up to half.
    • Sir Speedy - Publishing giant creates innovative web-based service for small-business market.
    • STATS - Sports data provider saves $1 million on consumer market entry via cloud services.
    • TicketDirect - Ticket seller finds ideal business solution in hosted computing platform.
    • TicTacTi - Advertising company adopts cloud computing, gets 400 percent improvement.
    • Tribune - Tribune transforms business for heightened relevance by embracing cloud computing.
    • VRX Studios - Global photography company transforms business with scalable cloud solution.

    Metal Fabrication Industry
    Examples of the Microsoft cloud case studies in metal fabrication:

    • ExelGroup - ExelGroup achieves cost reduction and efficiency increase with Soft1 on Windows Azure.

    Nonprofit Organizations
    Examples of the Microsoft cloud case studies in non-profit organizations:

    • Microsoft Disaster Response Team - Helping governments recover from disasters: Microsoft and partners provide technology and other assistance following natural disasters in Haiti and Pakistan.

    Oil and Gas Industry
    Examples of the Microsoft cloud case studies in oil and gas:

    • The Information Store (iStore) - Solution developer expects to boost efficiency with software-plus-services strategy.

    Professional Services
    Examples of the Microsoft cloud case studies in professional services:

    Publishing Industry
    Examples of the Microsoft cloud case studies in publishing:

    • MyWebCareer - Web startup saves $305,000 and sees ever-ready scalability, without having to manage IT.

    Retail Industry
    Examples of the Microsoft cloud case studies in retail:

    • Glympse.com - Location-sharing solution provider gains productivity, agility with hosted services.
    • höltl Retail Solutions - German retail solutions firm gains new customers with cloud computing solution.

    Software Engineering
    Examples of the Microsoft cloud case studies in software:

    Telecommunications Industry
    Examples of the Microsoft cloud case studies in telecommunications:

    • IntelePeer - Telecommunications firm develops solution to speed on-demand conference calls.
    • SAPO - Portugal telecom subsidiary helps ensure revenue opportunities in the cloud.
    • T-Mobile USA - Mobile operator speeds time-to-market for innovative social networking solution.
    • T-Systems - Telecommunications firm reduces development and deployment time with hosting platform.

    Training Industry
    Examples of the Microsoft cloud case studies in training:

    • Point8020 - Learning content provider uses cloud platform to enhance content delivery.

    Transportation and Logistics Industry
    Examples of the Microsoft cloud case studies in transportation:

    • TradeFacilitate - Trade data service scales online solution to global level with "cloud" services model.


  • J.D. Meier's Blog

    How I Use Agile Results


    This past January, more than 20,000 people got the book that’s changing lives, and changing the workplace:

    Getting Results the Agile Way:  A Personal Results System for Work and Life

    You’re going to want to read this if you want to level up in work and life, or share it with a friend you want to give an edge.

    I’m going to walk through how I use Agile Results  to show you how YOU can seriously and significantly amplify your impact, get better performance reviews, and spend more time doing what YOU enjoy.  (So, while this post might seem all about me, it’s really about you.)

    I’m not going to make it look easy.  I’m going to make it real.  I care way more that you get the full power of the system in your hands so you can do amazing things and get exponential results.   Agile Results is not a fly-by-night.   It was more than ten years in the making.

    Keep in mind, it’s an ultra-competitive world, and what you don’t know can hurt you.  On the flip side, what you do know can instantly boost your creativity, productivity, and impact in unfair ways.

    Use Agile Results as your unfair advantage.

    Now then, let’s roll up our sleeves and get to it.  But first, some context …

    I use Agile Results as a personal productivity and time management system

    In one line, it's my "personal results system for work and life." 

    I also use it to lead distributed teams around the world.  I use it to drive high-impact projects, and for projects at home. 

    This post is a detailed walkthrough of how I use Agile Results as a time management and productivity system for making things happen.

    Before we dive into the details, I want to make an important point ...

    The simplest way I use Agile Results is as follows:

    I write down Three Wins that I want to accomplish for the day on paper.

    Yes, that’s it, and it is that simple (or at least, that’s how simple it is to start using Agile Results).

    If ever I get off track (and I do), the simple way I get back on track with Agile Results is to write my three wins for the day down on a piece of paper.  Agile Results is both forgiving and instantly useful.

    The main goal of Agile Results is to help me spend more time where it counts.  I needed a light-weight and flexible system that I could use for myself or for others.  For several years, I had to build up a new team every six months.  I needed to build high-performance teams under the gun, as quickly as possible.  And, at the same time, I wanted work to be a place of self-expression, where you live your values, give your best where you have your best to give, and experience flow and continuous learning on a regular basis.

    I needed to get "Special Forces" results, from individuals, and from the larger team.  So I needed a system that could stretch to fit ... either scale up for a team, or simply help an individual get exponential results.  I wanted it to be based on timeless and self-evident principles, rather than tools or fads.  And I wanted it to "play well with others" ... where if somebody already had an existing system, or favorite tools, Agile Results could just ride on top and help them get more of what they already use.

    Above all, it had to be as simple as possible.

    Having a system that’s as simple as possible helps support you while you do the impossible.

    With that in mind, let's dive in.  So here is how I use Agile Results ...

    Daily Startup Routine

    My favorite startup routine is:

    1. Wake up
    2. Throw on my shoes and run for 30 minutes
    3. Take a shower
    4. Eat breakfast slow
    5. Take the back way to work, play my favorite songs, and figure out my three wins for the day

    It's a simple routine.  I've learned that one of the keys is carving out time for what's important, first thing in the morning.  What I like about this routine is that it's not chaotic.  It's serene by design.  I've had chaotic startup patterns.  This is the one where I purposefully made the morning about exercising, eating, and setting the stage for a great day.  I don't turn on the TV.  I don't watch the news.  I don't check my computer.  All of that can wait until I'm in the office. 

    It's how I charge up.

    Monday Vision

    Monday is all about vision for the week. 

    For example, if the week were over, and you were looking back, what would be the three big things you want under your belt?

    It's such a simple thing, but I make the most of the week by starting with what I want out of the week.  On Monday mornings, my main starting point is Three Wins for the Week.  I identify the top Three Wins that would make this week great.  To do so, I jump ahead and imagine it's Friday, and ask what I would want to rattle off as my three wins under my belt.  I do this on my way to work, while listening to my favorite songs.  I play around with possibilities.  I think of what big wins would look like.  I also think about the big, hairy problems that need attention.  I try to balance between addressing pain, and acting on opportunities.

    If I really get stuck, I try to think of the three things that are top of mind and really need my attention.  If I'm going to invest the next week of my life, I want to make sure that I'm nailing the things that matter.

    The key is that I use very simple words.  I'm effectively choosing labels for my wins. For example, "Vision is draft complete" is simple enough to say, and simple enough to remember.  If I can't say it, it's not sticky.

    When I get to work, I scan my mail.  I think of my inbox as a stream of *potential* action.  I walk the halls to beat the street. I absorb what I learn against what I set out to do for the week.  If necessary, I readjust.  If I catch my manager, I do a quick sanity check to find out his Three Wins for the Week, and how I'm mapping to what's on the radar.

    For each project on my plate, I have a simple list of work items.  This gives me "One Place to Look."  This also helps me identify the "Next Best Thing" to do.  It's this balance of the lists with what's top of mind, that keeps me grounded.  I try to support my mind, with just enough scaffolding, but let it do what it does best.  If I can identify the big outcomes for the week, I don't have to get caught up in the overhead of tracking minutia.

    On my computer, I keep Notepad open so that I can list my three wins for the week at the top, my three wins for the day below that, and any tasks or things on my mind below those.  It's important that I keep my mind fresh and ready for anything.  It's also where I do my brain dump at the end of the day ("Dump Your State"), which is simply a dump of anything on my mind or pending issues, so that I don't take work home with me, and I can pick up from where I left off, or start fresh the next day.
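    The Notepad layout above is just plain text, but if you like to script things, here's a minimal sketch of generating that daily sheet.  (The `daily_sheet` function and the exact section labels are my own illustration, not part of Agile Results itself.)

```python
from datetime import date

def daily_sheet(wins_week, wins_day):
    """Build the plain-text daily sheet: Three Wins for the Week at the top,
    Three Wins for the Day below that, then room for tasks and the
    end-of-day brain dump ("Dump Your State")."""
    lines = [f"== {date.today():%Y-%m-%d} =="]
    lines.append("Three Wins for the Week:")
    lines.extend(f"  {i}. {win}" for i, win in enumerate(wins_week, 1))
    lines.append("Three Wins for Today:")
    lines.extend(f"  {i}. {win}" for i, win in enumerate(wins_day, 1))
    lines.extend(["Tasks / on my mind:", "", "Dump Your State (end of day):", ""])
    return "\n".join(lines)
```

    The point of the layout is the same as the paper version: the wins stay at the top, and everything else stacks below them.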

    Daily Wins

    Each day of the week, the most important thing I do at the start of the day, is identify Three Wins that I want for that day. I write them down.  I cross-check them against the Three Wins that I want for the week. 

    First I brainstorm on what I want or need to achieve for the day.  This is just a rapid brain dump.  If I'm at my desk, I write it down on paper.  When I home in on what seem to be my three key wins for the day, I say them out loud.  Verbalizing them is important, because it's how I simplify and internalize them.  Being able to say them keeps them at my mental fingertips.  It's like having the scoreboard right in plain view.  I want them front and center so that I can use them to help me prioritize and focus throughout the day.

    Worst Things First

    I try to put my "Worst Things First", either at the start of the week, or at the start of the day.  The worst thing is to have something looming over me all day or all week.  The other way I look at this is: if I jump my worst hurdle, then the rest of the day or the week is a glide path.

    If my worst thing is time consuming, then I might need to "Timebox" it, such as spending no more than an hour max on it.  If the work is intensive, I split it up, work through it in 20-minute batches, and take 10-minute breaks.  If I'm on a roll, I might go straight for an hour.  If this is regular work that I need to do, but really don't enjoy doing, then I try to either get it off my plate, or find a way to make it fun, or "Pair Up" with somebody.  I find somebody who loves to do what I hate doing, and see if they might like to show me either why they love it, or how to do it better, faster, and easier.  This practice has taught me so many new tricks, and it's also helped me appreciate some of the deep skills that others have.

    Power Hours

    I know my peak times and my down times during the day.  For example, at around 11:00 AM, I have lunch on my mind, and 3:00 PM is effectively siesta time.

    My best hours tend to be 8:00, 10:00, 2:00, and 4:00.

    They are the hours where I am in the zone and firing on all cylinders.   I’m generally more “productive” earlier in the day, and more “creative” later in the day.   I don’t know all the reasons why, but what I do know is it’s a pattern.  And by knowing that pattern, I can leverage it.

    What I do is push my heavy lifting into those hours as best as I can.  I use my best horsepower to plow through my work and turn mountains into molehills.   When I don’t use those peak hours, somehow molehills turn into mountains, and it’s slow going.  It’s the difference between riding a wave and pushing rocks uphill.

    To get to this point, I simply had to notice during the week, when my best hours really are, not just when I want them to be.  Now that I know my best times for peak performance, I have to defend those hours as best I can, or at least know what I am trading off.

    When it comes to defending your calendar, you need to know what’s worth it.  Once you know your best Power Hours, you know what’s worth it.

    Aside from spending more time in my high-ROI activities and playing to my strengths, leveraging my Power Hours amplifies my productivity more than anything else.

    Creative Hours

    This is the space of creative breakthroughs and innovation.  It’s not that I’m not creative throughout the day, but I generally have a pattern where I’m more creative at night, or in the quiet hours of the morning.  I’m also more creative on Fridays and Saturdays.

    I can try to change the pattern, but I can also first notice the pattern and leverage what already exists.  If I know the times when I’m most creative, I can start to use this time to think and brainstorm more freely.

    And, I do.

    That’s how I come up with ways to do things better, faster, and cheaper.  It’s how I figure out ways to change the business, or ways to change my approach, and ways to take things to the next level.

    When I’m in my creative zone, I do more exploration.  I follow my thoughts and play out “what if” scenarios.  I value the fact that my Creative Hours lead the ideas that help me learn and improve whatever I do.

    A simple check, if I’m not flowing enough ideas or if I’m feeling too nose-to-the-grindstone, is to ask myself, “How many Creative Hours did I spend this week?”   If it’s not at least two, I try to up the count.

    Creative Hours are my best way to decompress, absorb, and synthesize, which ultimately leads to my greatest breakthroughs.


    Daily Shutdown Routine

    Day is done, gone the Sun.  From the lakes, from the hills, from the sky.

    But how do you put it to rest?

    I like a deliberate switch from work-mode to home-mode.  I don’t want to bring my work home with me, and have it seep into everything I do.  When I’m at work, I work hard (and play hard, too … especially because I treat work like play, and drive it with a passion.)

    But when I shut down my work day, I need a way to unwind.

    I found the best way to free my mind is to dump it down.   So I simply dump it to Notepad, or my little yellow sticky pad: any open issues, challenges, or things on my mind.  I can always pick them back up.  Or, I can let them go.

    But the last thing I want is for a bunch of problems to be swirling around in my head.

    Besides, if you stop swirling problems around in your head, you make space for creative insights, and the answers start to pop out of the woodwork.

    Another pattern I’ve adopted is to use a metaphorical tree in my mind to hang my hat of problems on.  Again, I can always pick them up again tomorrow, but for now, I’ll stuff my problems in this hat, hang them on the tree, and free my mind.

    Friday Reflection

    What if every Friday you could get smarter about your productivity and effectiveness?

    You can.

    I know it sounds simple, and it is, but remember that one of the big keys in life is not just knowing what to do, but doing what you know.

    Friday Reflection is a perfect chance to ask myself two simple questions:

    1. What are three things going well?
    2. What are three things to improve?

    That’s how it starts.

    I keep a simple recurring 20-minute appointment with myself for each Friday morning.   It’s often the most valuable 20 minutes I spend each week.  It’s where I actually reflect on my performance.  Not in a critical way, but a constructive way.  I explore with simple questions:

    1. Am I biting off too much?
    2. Am I biting off the right things?
    3. Am I making the right impact?
    4. Are there better activities I could spend more time on?
    5. Are there soul-sucking activities that I could spend less time on?

    Friday Reflection is how I learn to master my capacity and be more realistic about my own expectations.   I tend to overestimate what’s possible in a week (and underestimate what’s possible in a month).   This little feedback loop helps me see the good, the bad, and the downright fugly.

    The most important outcome of my Friday Reflection is three things to try out next week to do a little better.

    The little better adds up.

    The main thing to keep in mind is that Friday Reflection gives you deeper insight into your strengths and weaknesses in a way that you instantly benefit from.   The key is to carry the good forward, and let the rest go, and to treat it as a continuous learning loop.

    You only fail when you give up or stop learning or stop trying.

    Monthly Focus

    To make my month more meaningful and to add a dash of focus to it, I identify my Three Wins for the Month.  At the month level, I can take a step back and look at the bigger picture.   Asking myself, “What do I want under my belt when the month is over?” is a powerful and swift way to create clarity, and identify compelling outcomes.

    Since I'm leading a team, I go a step further.  I think of Three Wins for the team.  Based on everything that's on our plate, I try to identify what the Three Wins for the team should be.  I try to figure out things that would be easy to share with my manager.  This makes it easy to check alignment, and it makes it easy for them to sell our impact up the management chain.  (Read – It helps you get better performance reviews.)

    When I get to work, I send out a short mail to the team, with the subject line: WEEKLY WINS: 2012-07-23.  It's simply WEEKLY WINS, plus the current date.  I briefly summarize the drivers, the threats, and the hot issues on our plate, then list the Three Wins identified.   I follow this by asking the team for their input, and whether we need to recalibrate.  At the bottom, I simply do an A-Z bulleted list to dump the full working set of work in flight.  It helps people see the full scope, helps us rationalize whether we bit off the right things, and helps everyone stay on top of all the work.  It's like a team To-Do list.  Sometimes it's a crazy list, but the three wins at the top help keep our sanity and focus at all times.

    It's a simple approach, but it works great for distributed teams, and it gives us something to go back and check at the end of the week, or throughout the week to remind ourselves of what we set out to do.
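    If you want to script that mail skeleton, here's a rough sketch.  (The `weekly_wins_mail` function and the body labels are my own, hypothetical choices; only the WEEKLY WINS subject format comes from the post.)

```python
from datetime import date

def weekly_wins_mail(wins, work_in_flight, on=None):
    """Compose the team mail: a 'WEEKLY WINS: <date>' subject line, the
    Three Wins up top, and the full working set as an A-Z bulleted list."""
    on = on or date.today()
    subject = f"WEEKLY WINS: {on:%Y-%m-%d}"
    body = ["Three Wins for the Week:"]
    body.extend(f"  {i}. {win}" for i, win in enumerate(wins, 1))
    body.append("Work in flight (A-Z):")
    # Case-insensitive sort so the dump reads like one alphabetized list.
    body.extend(f"  - {item}" for item in sorted(work_in_flight, key=str.lower))
    return subject, "\n".join(body)
```

    Keeping the wins at the top and the alphabetized dump at the bottom mirrors the mail described above: sanity and focus first, full scope second.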

    Since my manager adopted Agile Results too, he shares his three wins for the week with the team in a simple mail.  Folks across the team simply add their wins for the week.  It's nothing formal ... it's more like a simple assertion of our intended victories.

    During our team meeting, our manager goes around the team, and we share our three wins from last week, and our three wins we plan for this week.  This helps everybody across the team stay connected to what's going on.

    Ten at Ten

    I need to throw in this tip, because it’s the single most effective way I’ve found to get a team on the same page, and avoid a bunch of email.  And, it’s a simple way to create clarity, and avoid confusion.

    It also builds the discipline of execution.

    All you do is meet for ten minutes each day, Monday through Thursday.  I call it Ten at Ten.

    I found Ten at Ten to be one of the most effective times in the day to do a sync.  That said, because I always have distributed teams, I’ve had to vary this.   But for the most part, I like Ten at Ten as a reminder to have a quick sync-up with the team, focused on creating clarity, debottlenecking any issues, and taking note of small wins and progress.

    The way it works is this:

    1. I schedule ten minutes for Monday through Thursday, at whatever time the team can agree to, but in the AM.
    2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
    3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

    You’d be surprised at how quickly people start to pay attention to what they’re working on, and to what’s worth working on.  It also helps team members very quickly see each other’s impact and results.  And it helps people raise their bar, especially when they get to hear and experience what good looks like from their peers.

    Most importantly, it shines the light on little, incremental progress.  Progress is the key to happiness in work and life.

    One thing I’ll point out is that the Monday meeting is actually 30 minutes, not 10 minutes, since it’s more of a level set for the week, and it’s a chance to figure out the Three Wins for the Week.

    Well, there it is.

    It might not look like a simple system for meaningful results, but given all the synthesis behind it, it is remarkably effective.

    The way to keep it simple is to always start simple.   Whenever you forget what to do, go back to the basics.  Simply ask yourself,

    “What are Three Wins I want for today?”

    - OR -

    “What are Three Wins I want for this week?”

    - OR -

    “What are Three Wins I want for this month?”

    - OR -

    … if you’re feeling really bold, and want to go for the gold, “What are Three Wins I want for this year?”

    Hopefully, this little walkthrough helps you easily see how you can apply Agile Results to your workflow, and get more out of the time you already spend.  If nothing else, remember this:

    Value is the ultimate short-cut.

    When you know what’s valued, you can target your effort.  When you know the high value activities, you can focus on those.

    What Agile Results does is streamline your ability to flow value, for yourself and others. 

    Pure and simple.

    And that’s how getting results should be … elegance in action.


  • J.D. Meier's Blog

    Agile Guidance


    When I ramp new folks on the team, I find it helpful to whiteboard how I build prescriptive guidance.  Here's a rough picture of the process:


    I've used the same process for Performance Testing Guidance, Team Development with Visual Studio Team Foundation Server, and WCF Security.

    Here's a brief explanation of what happens along the way:

    The dominant focus here is identifying candidate problems, candidate solutions, and figuring out key risks, as well as testing paths to explore.  The best outcome is a set of scenarios we can execute against.

    • Research - finding the right people, the right problems, and the right solutions.
    • Prototypes - experiment and test areas of high risk to prove the path.  This can include innovating on how we build prescriptive guidance.  We also use these to test with customers and get feedback on the approach.
    • Question Lists - building organized lists of one-liner user questions.
    • Task Lists - building organized lists of one-liner user tasks.
    • Scenario Frames - organizing scenarios into meaningful buckets.  See Scenario Frame Example.
    • Information Models - framing out the problem space and creating useful ways to organize, share, and act on the information.  See Web Services Security Frame.
    • Guidance Types - testing which guidance types to use (how tos, checklists, guidelines, patterns, etc.)

    The dominant focus here is product results.  It's scenario-driven.  Each week we pick scenarios to execute against.

    • Development - building prescriptive guidance, including coding, testing, and writing.
    • Backlog - our backlog is a prioritized set of scenarios and guidance modules.
    • Iterations - picking sets of scenarios to focus development on and test against.
    • Refactoring - tuning and pruning the guidance to improve effectiveness.  This includes breaking the content up and rewriting it.  For example, a common refactoring is factoring reference information from action.  We try to keep reference information in our Explained modules and action information in our How Tos.
    • Testing - step through the guidance against the scenario.  The first pass is about making sure it works.  It should be executable by a human.  Next, it's about reducing friction and making sure the guidance really hits the what, why, and how.  We test the guidance against objectives and scenarios with acceptance criteria so we know when we're done.
    • Problem Repros - creating step by step examples that reproduce a given problem.
    • Solution Repros - creating step by step examples that reproduce a given solution.

    We produce a Knowledge Base (KB) of guidance modules and a guide.  The guidance modules are modular and can be reused.   The guide includes chapters in addition to the guidance modules.  Here are examples from our WCF Security Guide:

    Agile Publishing
    We release our guidance modules along the way to test reactions, get feedback and reshape the approach as needed.

    • CodePlex - we publish our guidance modules to CodePlex so we can share early versions of the guidance and get customer feedback, as well as to test the structure of the guidance, and experiment with user experience.
    • Guidance Explorer - we publish guidance modules to Guidance Explorer so users can do their own guidance mash ups and build their own personalized guides.  Our field also uses this to build customized sets of guidance for customers.

    Stable Reference
    Once we've tested and vetted the guidance and have made it through a few rounds of customer feedback, we push the guidance to MSDN.

    • MSDN - this is the trusted site that users expect to see our prescriptive guidance in final form.
    • Visual Studio/ Visual Studio Team System - once we're a part of the MSDN distribution, we can automatically take advantage of the VS/VSTS documentation integration.

