J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

  • J.D. Meier's Blog

    My Projects on MSDN

    • 5 Comments

    This post is a simple way to browse the bulk of my patterns & practices work on MSDN and CodePlex.   After I walk customers through things, the next question is usually, "OK, so where do we find this?"  This is the link I'll be sharing.

    Guides

    Performance

    Books / Guides

    Methods

    Guidelines

    Checklists

    Practices at a Glance

    How Tos

    Security

    Guides

    Methods

    Threats and Countermeasures

    Cheat Sheets

    Guidelines

    Checklists

    Practices at a Glance

    Questions and Answers

    Explained

    Application Scenarios

    ASP.NET Security How Tos

    WCF Security How Tos

    Visual Studio Team System

    Guides

    Guidelines

    Practices at a Glance

    Questions and Answers

    How Tos

    My Related Posts

  • J.D. Meier's Blog

    What is a PM?

    • 2 Comments

    What is a PM (Program Manager)?  While the Program Manager role seems unique to Microsoft, in general, when you map it to other companies, it’s a product manager, or a project manager, or a combination of the two.  At Microsoft, there are various flavors of PMs (“design” PM, “project” PM, “process” PM, etc.) and the PM discipline can be very different in different groups.  I’ve also seen the PM title used as a general job title, in the absence of something more specific.

    At Microsoft it’s a role that means many things to many people.  In general though, when you meet a PM at Microsoft, you expect somebody who has vision, can drive a project to completion, manage scope and resources, coordinate work across a team, bridge the customer, the business, and the technology, act as a customer champ, and influence without authority.  From a metaphor standpoint, they are often the hub to the spokes, they drive ideas to done, they take the ball and run with it, or find out who should run with the ball.  Some PMs are better at thought leadership, some are better at people leadership, and the best are great at both.

    Here is a roundup of some of my favorite points that elaborate, clarify, and distill what a PM is, what a PM does, and how to be one.

    Attributes and Qualities of a PM
    Here is a list of key attributes from Steven Sinofsky’s post -- PM at Microsoft:

    • “Strong interest in what computing can do -- a great interest in technology and how to apply that to problems people have in life and work”
    • “Great communication skills -- you do a lot of writing, presenting, and convincing”
    • “Strong views on what is right, but very open to new ideas -- the best PMs are not necessarily those with the best ideas, but those that insure the best ideas get done”
    • “Selfless -- PM is about making sure the best work gets done by the team, not that your ideas get done. “
    • “Empathy -- As a PM you are the voice of the customer so you have to really understand their point of view and context “
    • “Entrepreneur -- as a PM you need to get out there and convince others of your ideas, so being able to is a good skill”

    Here is an example of PM qualities from Sean Lyndersay’s post -- Exchange team defines a Program Manager:

    • You are passionate about working directly with customers, able to clearly articulate customers’ requirements and pains to executives, architects, and developers.
    • You have experience building strong teams and have a passion for mentoring.
    • The ability to work with multiple teams to develop a plan to provide this value to the customers.
    • You understand how software features will impact and/or modify the marketplace once they are shipped.
    • You have a love of innovation and the ability to think through big, long term ideas.
    • Demonstrated expertise at prioritization and project management.
    • You are hands-on with software development, and passionate about user experience, both as an end-user and administrator.

    Microsoft Careers site on Program Manager
    Here is a description of a Program Manager from the Microsoft Careers site:
    “As a Program Manager, you’ll drive the technical vision, design, and implementation of next-generation software solutions. You’ll transform the product vision into elegant designs that will ultimately turn into products used by Microsoft customers. Managing feature sets throughout the product lifecycle, you’ll have the chance to see your design through to completion. You’ll also work directly with other key team members including Software Development Engineers and Software Development Engineers in Test. Program Managers are advocates for end-users, so your passion for anticipating customer needs and creating outside-the-box solutions for them will really help you shine in this role. As a Program Manager you will have the ability to lead within a product’s life cycle using evangelism, empathy, and negotiation to define and deliver results. You will also be responsible for authoring technical specifications, including envisaged usage cases, customer scenarios, and prioritized requirements lists.” 

    Chris Pratley on Program Manager 
    Here are some points on Program Management from Chris Pratley’s post -- Program Manager:

    • “One way to describe PMs is that they not only "pick up and run with the ball, they go find the ball". That really defines the difference between "knowing what to do and doing it", and "not knowing what to do, but using your own wits to decide what to do, then doing it". That means as a PM you are constantly strategizing and rethinking what is going on to find out if there is something you are missing, or the team is missing. You’re also constantly deciding what is important to do, and whether action needs to be taken.”
    • "... In Office, there are "design" PMs who mainly work on designing the products, "process" PMs who mainly work on driving processes that make sure we get things done, localization PMs who are sort of like process PMs but also sort of like developers in that they sometimes produce code and make bits that ship in the product..."
    • “This ‘jack of all trades’ or ‘renaissance man’ … acted as a hub of communication, and made the marketers job easier, since they had someone to talk to who could take the time to really understand the outside world, and the devs could talk to someone who spoke their language and was not ridiculous …”
    • “You’re also constantly deciding what is important to do, and whether action needs to be taken. The number of such decisions is staggering. I might make 50 a day, sometimes more than 100 or even 200. Most of the time there is not nearly enough hard data to say for certain what to do, and in any case almost all these decisions could never have hard data anyway - you need to apply concentration and clear thinking.”

    Here are the stages of your first year as a PM, according to Pratley:

    1. Start off with excitement and enthusiasm for the new job.
    2. About 4 weeks into the job, you start to feel strange. People keep asking you to decide things you don’t know anything about, as if you’re some kind of expert.
    3. By month two, you're convinced you are the dumbest person on the team by far.
    4. By month four, you have lived through a torture of feeling incompetent and a dead weight on your team.
    5. By month six, you have a great moment.
    6. By month 12, you have developed your network of contacts that pass information to you, you are a subject matter expert on your area, and people on the team are relying on you because you know lots of things they don't know. You have made it.

    Joel Spolsky on Program Manager
    Here are points from Joel’s post on How To Be a Program Manager:

    • “Having a good program manager is one of the secret formulas to making really great software”
    • According to Joel, Charles Simonyi invented the “Program Manager” job, but Jabe Blumenthal, a programmer on the Mac Excel team in the late 80s, reinvented the job completely.
    • What does a Program Manager do?  “Design UIs, write functional specs, coordinate teams, serve as the customer advocate.”
    • Recommended reading from How To Be a Program Manager: Making Things Happen by Scott Berkun, Don’t Make Me Think by Steve Krug, User Interface Design for Programmers by Joel Spolsky, and How To Win Friends and Influence People by Dale Carnegie.

    Ray Schraff on Program Manager
    Here are points from the comments on Chris Pratley’s post, Program Management:

    • “Once they find the ball, PMs don't pick up every ball themselves... but they own the task of making sure that every ball is picked up and carried to the correct endzone by SOMEBODY. “
    • “PMs translate technology to English “
    • “PMs translate English to technology”

    Sean Lyndersay on Program Manager

    • “My favorite definition involves an analogy: A program manager is to a software project what an architect is to a building.”  See Reflections on Program Management at Microsoft.
    • “… the PM occupies a unique position in most software engineering structures – sort of the hub of a bumpy wheel (with dev, QA/test, design, usability, marketing, planning, customer support, etc. being the spokes).”  See Someone has an Interesting View of PMs (at least at Yahoo!).
    • "You are the center of the hurricane, the eye of the product development storm. You have passion and more, you have vigor. You are fueled by pure energy and endless drive. You are reading this and wondering why it's not in bullet-points because that would have been more efficient. You are working on a game that merges poker with chess in your spare time because neither game uses your full capabilities and talents. You have no use for extraneous clutter or mundane activities and you wonder what is holding up the full-scale integration of robots into the home – its 2007 already and doing the dishes remains as mundane and inefficient as it ever was. You are thinking that this job description is taking too long to read, and you are right. So, here is the rest of it in bullet points."  See Exchange team defines a Program Manager.
    • “With regard to where ideas come from, what we like to say is that the job of program management is not to have all the great ideas but to make sure all the great ideas are ultimately picked. The best program managers make sure the best ideas get done, no matter where they come from.”  See The Job of Program Management.

    Steven Sinofsky on Program Manager
    Here are points from Sinofsky’s post on PM at Microsoft:

    • “Learn, convince, spec, refine”
    • “PM is one equal part of a whole system.”
    • “PM is a role that has a lot of responsibility but you also define it--working in partnership with expert designers, clever developers, super smart testers, etc. you all work together to define the product AND the responsibilities each of you have”
    • “… the PM role at Microsoft is both unique and one that has unlimited potential -- it is PMs that can bring together a team and rally them around a new idea and bring it to market (like OneNote, InfoPath).”
    • “It is PMs that can tackle a business problem and work with marketing and sales to really solve a big customer issue with unique and innovative software (like SharePoint Portal Server).”
    • “Where developers were focused on code, architecture, performance, and engineering, the PM would focus on the big picture of "what are we trying to do" and on the details of the user experience, the feature set, the way the product will get used.”
    • “A key benefit of program management is that we are far more agile because we have program management.  That can be counter-intuitive (even for many developers at Microsoft who might be waiting for their PM to iron out the spec).  But the idea that you can just start writing code without having a clear view of the details and specification is a recipe for poorly architected features.”
    • “A great PM knows when the details are thought through enough to begin and a great developer knows when they can start coding even without the details for sure.  But like building a house--you don't start without the plans.  “
    • “A good book that describes the uniqueness of PM at Microsoft is Michael Cusumano's book Microsoft Secrets or his new book, The Business of Software.”

    My Related Posts

  • J.D. Meier's Blog

    patterns & practices Performance Engineering

    • 4 Comments

    As part of our patterns & practices App Arch Guide 2.0 project, we're consolidating our information on our patterns & practices Performance Engineering.  Our performance engineering approach is simply a collection of performance-focused techniques that we found to be effective for meeting your performance objectives.  One of the keys to its effectiveness is our performance frame.  Our performance frame is a collection of "hot spots" that organize principles, patterns, and practices, as well as anti-patterns.  We use the frame to perform effective performance design and code inspections.  Here's a preview of our cheat sheet so far.  You'll notice a lot of similarity with our patterns & practices Security Engineering.  That's by design, so that you can use a consistent approach for handling both security and performance.

    Performance Overlay
    This is our patterns & practices Performance Overlay:

    (Figure: patterns & practices Performance Engineering overlay)

    Key Activities in the Life Cycle
    This Performance Engineering approach extends these proven core activities to create performance-specific activities.  These include:

    • Performance Objectives. Setting objectives helps you scope and prioritize your work by setting boundaries and constraints. Setting performance objectives helps you identify where to start, how to proceed, and when your application meets your performance goals.
    • Budgeting. A budget represents your constraints and enables you to specify how much you can spend (resource-wise) and how you plan to spend it.
    • Performance Modeling. Performance modeling is an engineering technique that provides a structured and repeatable approach to meeting your performance objectives.
    • Performance Design Guidelines. Applying design guidelines, patterns and principles which enable you to engineer for performance from an early stage.
    • Performance Design Inspections. Performance design inspections are an effective way to identify problems in your application design. By using pattern-based categories and a question-driven approach, you simplify evaluating your design against root cause performance issues.
    • Performance Code Inspections. Many performance defects are found during code reviews. Analyzing code for performance defects includes knowing what to look for and how to look for it. Performance code inspections identify inefficient coding practices that could lead to performance bottlenecks.
    • Performance Testing. Load and stress testing is used to generate metrics and to verify application behavior and performance under normal and peak load conditions (see the measurement sketch after this list).
    • Performance Tuning.  Performance tuning is an iterative process that you use to identify and eliminate bottlenecks until your application meets its performance objectives. You start by establishing a baseline. Then you collect data, analyze the results, and make configuration changes based on the analysis. After each set of changes, you retest and measure to verify that your application has moved closer to its performance objectives.
    • Performance Health Metrics. Identify the measures, measurements, and criteria for evaluating the health of your application from a performance perspective.
    • Performance Deployment Inspections. During the deployment phase, you validate your model by using production metrics. You can validate workload estimates, resource utilization levels, response time, and throughput.
    • Capacity Planning. You should continue to measure and monitor when your application is deployed in the production environment. Changes that may affect system performance include increased user loads, deployment of new applications on shared infrastructure, system software revisions, and updates to your application to provide enhanced or new functionality. Use your performance metrics to guide your capacity and scaling plans.
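
    To make the objectives and testing activities concrete, here is a minimal C# sketch that measures an operation against a response-time objective.  The ProcessOrder operation and the two-second objective are hypothetical placeholders, not part of the guidance itself.

    using System;
    using System.Diagnostics;
    using System.Threading;

    public static class ResponseTimeCheck
    {
        public static void Main()
        {
            TimeSpan objective = TimeSpan.FromSeconds(2);   // agreed response-time objective

            Stopwatch watch = Stopwatch.StartNew();
            ProcessOrder();                                 // operation under test
            watch.Stop();

            Console.WriteLine("Elapsed: {0} ms, objective: {1} ms, met: {2}",
                watch.ElapsedMilliseconds,
                objective.TotalMilliseconds,
                watch.Elapsed <= objective);
        }

        // Stand-in for the real operation being measured (hypothetical).
        private static void ProcessOrder()
        {
            Thread.Sleep(100);
        }
    }

    The same kind of measurement belongs in your unit and load tests, so the objective is checked throughout the life cycle rather than only at the end.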

    Performance Frames
    Performance Frames define a set of patterns-based categories that can organize repeatable problems and solutions. You can use these categories to divide your application architecture for further analysis and to help identify application performance issues. The categories within the frame represent the critical areas where mistakes are most often made.

    • Caching. What and where to cache? Caching refers to how your application caches data. The main points to consider are per-user versus application-wide caching and data volatility.
    • Communication. How to communicate between layers? Communication refers to choices for transport mechanism, boundaries, remote interface design, round trips, serialization, and bandwidth.
    • Concurrency. How to handle concurrent user interactions? Concurrency refers to choices for transactions, locks, threading, and queuing.
    • Coupling / Cohesion. How to structure your application? Coupling and cohesion refers to structuring choices that lead to loose coupling and high cohesion among components and layers.
    • Data Access. How to access data? Data access refers to choices and approaches for schema design, paging, hierarchies, indexes, amount of data, and round trips.
    • Data Structures / Algorithms. How to handle data? Data structures and algorithms refers to the choice of algorithms and of arrays versus collections.
    • Exception Management. How to handle exceptions? Exception management refers to choices and approaches for catching and throwing exceptions.
    • Resource Management. How to manage resources? Resource management refers to the approach for allocating, creating, destroying, and pooling application resources.
    • State Management. What and where to maintain state? State management refers to how your application maintains state. The main points to consider are per-user versus application-wide state, persistence, and location.


    Architecture and Design Issues
    Use the diagram below to help you think about performance-related architecture and design issues in your application.

    (Figure: performance-related architecture and design issues by application tier)

    The key areas of concern for each application tier are:

    • Browser.  Blocked or unresponsive UI.
    • Web Server.  Using state affinity.  Wrong data types.  Fetching per request instead of caching.  Poor resource management.
    • Application Server.  Blocking operations.  Inappropriate choice of data structures and algorithms.  Not pooling database connections.
    • Database Server.  Chatty instead of batch processing.  Contention, isolation levels, locking and deadlock.

    Design Process Principles
    Consider the following principles to enhance your design process:

    • Set objective goals. Avoid ambiguous or incomplete goals that cannot be measured such as "the application must run fast" or "the application must load quickly." You need to know the performance and scalability goals of your application so that you can (a) design to meet them, and (b) plan your tests around them. Make sure that your goals are measurable and verifiable. Requirements to consider for your performance objectives include response times, throughput, resource utilization, and workload. For example, how long should a particular request take? How many users does your application need to support? What is the peak load the application must handle? How many transactions per second must it support? You must also consider resource utilization thresholds. How much CPU, memory, network I/O, and disk I/O is it acceptable for your application to consume?
    • Validate your architecture and design early. Identify, prototype, and validate your key design choices up front. Beginning with the end in mind, your goal is to evaluate whether your application architecture can support your performance goals. Some of the important decisions to validate up front include deployment topology, load balancing, network bandwidth, authentication and authorization strategies, exception management, instrumentation, database design, data access strategies, state management, and caching. Be prepared to cut features and functionality or rework areas that do not meet your performance goals. Know the cost of specific design choices and features.
    • Cut the deadwood. Often the greatest gains come from finding whole sections of work that can be removed because they are unnecessary. This often occurs when (well-tuned) functions are composed to perform some greater operation. It is often the case that many interim results from the first function in your system do not end up getting used if they are destined for the second and subsequent functions. Elimination of these "waste" paths can yield tremendous end-to-end improvements.
    • Tune end-to-end performance. Optimizing a single feature could take away resources from another feature and hinder overall performance. Likewise, a single bottleneck in a subsystem within your application can affect overall application performance regardless of how well the other subsystems are tuned. You obtain the most benefit from performance testing when you tune end-to-end, rather than spending considerable time and money on tuning one particular subsystem. Identify bottlenecks, and then tune specific parts of your application. Often performance work moves from one bottleneck to the next bottleneck.
    • Measure throughout the life cycle. You need to know whether your application's performance is moving toward or away from your performance objectives. Performance tuning is an iterative process of continuous improvement with hopefully steady gains, punctuated by unplanned losses, until you meet your objectives. Measure your application's performance against your performance objectives throughout the development life cycle and make sure that performance is a core component of that life cycle. Unit test the performance of specific pieces of code and verify that the code meets the defined performance objectives before moving on to integrated performance testing.  When your application is in production, continue to measure its performance. Factors such as the number of users, usage patterns, and data volumes change over time. New applications may start to compete for shared resources.

    Design Guidelines
    The following is a set of performance design guidelines for application architects. Use it as a starting point for performance design and to improve performance design inspections.

    • Caching. Decide where to cache data. Decide what data to cache. Decide the expiration policy and scavenging mechanism. Decide how to load the cache data. Avoid distributed coherent caches. (See the caching sketch after this list.)
    • Communication. Choose the appropriate remote communication mechanism. Design chunky interfaces. Consider how to pass data between layers. Minimize the amount of data sent across the wire. Batch work to reduce calls over the network. Reduce transitions across boundaries. Consider asynchronous communication. Consider message queuing. Consider a "fire and forget" invocation model.
    • Coupling / Cohesion. Design for loose coupling. Design for high cohesion. Partition application functionality into logical layers. Use early binding where possible. Evaluate resource affinity.
    • Data Structures / Algorithms. Choose an appropriate data structure. Pre-assign size for large dynamic growth data types. Use value and reference types appropriately.
    • Resource Management. Treat threads as a shared resource. Pool shared or scarce resources. Acquire late, release early. Consider efficient object creation and destruction. Consider resource throttling.
    • State Management. Evaluate stateful versus stateless design. Consider your state store options. Minimize session data. Free session resources as soon as possible. Avoid accessing session variables from business logic.
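
    To illustrate the caching guidelines, here is a minimal C# sketch using MemoryCache from System.Runtime.Caching (.NET 4.0).  The cached product list, the cache key, and the ten-minute expiration are assumptions for illustration only; your expiration policy should follow your own data volatility.

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;   // .NET 4.0

    public static class ProductCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static List<string> GetProductNames()
        {
            // Application-wide cache entry for relatively static reference data.
            var products = Cache.Get("productNames") as List<string>;
            if (products == null)
            {
                products = LoadProductNamesFromStore();   // expensive call worth caching
                Cache.Set("productNames", products, new CacheItemPolicy
                {
                    // Expiration policy: the entry is scavenged ten minutes after it is added.
                    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
                });
            }
            return products;
        }

        // Stand-in for the real data access call (hypothetical).
        private static List<string> LoadProductNamesFromStore()
        {
            return new List<string> { "Widget", "Gadget" };
        }
    }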

    Additional Resources

    My Related Posts

  • J.D. Meier's Blog

    Microsoft Developer Guidance Maps

    • 2 Comments

    As part of creating an "information architecture" for developer guidance at Microsoft, one of the tasks is mapping out what we already have.  That means mapping our Microsoft developer content assets across Channel9, MSDN Developer Centers, MSDN Library, Code Gallery, CodePlex, the All-in-One Code Framework, etc.

    You can browse our Developer Guidance Maps at http://innovation.connect.microsoft.com/devguidancemaps

    One of my favorite features is the one-click access that bubbles up high value code samples, how tos, and walkthroughs from the product documentation.  Here is an example of showcasing the ASP.NET documentation team’s ASP.NET Product Documentation Map.  Another of my favorite features is one-click access to consolidated training maps.  Here is an example showcasing Microsoft Developer Platform Evangelism’s Windows Azure Training Map.

    Content Types
    Here are direct jumps to pages that let you browse by content type:

    Developer Guidance Maps
    Here are direct jumps to specific Developer Guidance maps:

    The Approach
    Rather than boil the ocean, we’ve used a systematic and repeatable model.  We’ve focused on topics, features, and content types for key technologies.  Here is how we prioritized our focus:

    1. Content Types: Code, How Tos, Videos, Training
    2. App Platforms for Key Areas of Focus: Cloud, Data, Desktop, Phone, Service, Web
    3. Technology Building Blocks for the stories above:  ADO.NET, ASP.NET, Silverlight, WCF, Windows Azure, Windows Client, Windows Phone

    The Maps are Works in Progress
    Keep in mind these maps are works in progress and they help us pressure test our simple information architecture (“Simple IA”) for developer guidance at Microsoft.  Creating the maps helps us test our models, create a catalog of developer guidance, and easily find the gaps and opportunities.   While the maps are incomplete, they may help you find content and sources of content that you didn’t even know existed.  For example, the All-In-One Code Framework has more than 450 code examples that cover 24 Microsoft development technologies such as Windows Azure, Windows 7, Silverlight, etc. … and the collection grows by six samples per week.

    Here’s another powerful usage scenario.  Use the maps as a template to create your own map for a particular technology.  By creating a map or catalog of content for a specific technology, and  organizing it by topic, feature, and content type, you can dramatically speed up your ability to map out a space and leverage existing knowledge. (… and you can share your maps with a friend ;)

  • J.D. Meier's Blog

    New Release: Microsoft Enterprise Library 5.0

    • 1 Comment

    patterns & practices Enterprise Library 5.0 is now available on MSDN.  Microsoft Enterprise Library (EntLib) is a collection of application blocks (reusable software components) designed to address common cross-cutting concerns (data access, exception handling, logging, validation, etc.).  EntLib provides source code, test cases, and docs that you can use "as is" or extend as you see fit.  The goal of EntLib is to codify Microsoft recommended and proven practices for .NET application development.

    What's New in 5.0
    Enterprise Library 5.0 brings the following to the table:

    • Major architectural refactoring that provides improved testability and maintainability through full support of the dependency injection style of development
    • Dependency injection container independence (Unity ships with Enterprise Library, but you can replace Unity with a container of your choice; see the usage sketch after this list)
    • Programmatic configuration support, including a fluent configuration interface and an XSD schema to enable IntelliSense
    • Redesign of the configuration tool to provide a more usable and intuitive look and feel; extensibility improvements through metadata-driven configuration visualizations that replace the requirement to write design-time code; and a wizard framework that can help to simplify complex configuration tasks
    • Data accessors for more intuitive processing of data query results
    • Asynchronous data access support
    • Honoring of validation attributes from both the Validation Application Block and DataAnnotations
    • Integration with Windows Presentation Foundation (WPF) validation mechanisms
    • Support for complex configuration scenarios, including additive merge from multiple configuration sources and hierarchical merge
    • Optimized cache scavenging
    • Better performance when logging
    • Support for the .NET 4.0 Framework and integration with Microsoft Visual Studio 2010
    • Improvements to Unity: Streamlined configuration schema, a simplified API for static factories and interception, the capability to add interface implementation through interception, additional types of lifetime manager, deferred resolution (automatic factories), a reduction of the number of assemblies
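
    Here is a minimal usage sketch showing the container-based resolution style in Enterprise Library 5.0.  It assumes the Data Access Application Block assemblies are referenced and that a connection string named "MyDatabase" and a Customers table exist in your configuration and database; adjust the names to your own setup.

    using System;
    using System.Data;
    using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
    using Microsoft.Practices.EnterpriseLibrary.Data;

    public static class EntLibSample
    {
        public static void Main()
        {
            // Resolve a configured Database instance through the Enterprise Library container
            // (backed by Unity by default, but replaceable with another DI container).
            Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("MyDatabase");

            object count = db.ExecuteScalar(CommandType.Text, "SELECT COUNT(*) FROM Customers");
            Console.WriteLine("Customer count: {0}", count);
        }
    }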

    Key Links

  • J.D. Meier's Blog

    Now Available: Layered Architecture Sample for .NET 4.0

    • 0 Comments

    Serena Yeoh just released her Layered Architecture Sample for .NET 4.0 (July 2010), which targets the .NET Framework 4.0.  Serena is one of our MCS (Microsoft Consulting Services) consultants in the field working with customers on a regular basis, and she was a key contributor to our Microsoft patterns & practices Application Architecture Guide.

    Here is a description of the project according to Serena:
    The Layered Architecture Sample is designed to demonstrate how to apply various .NET Technologies such as Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), Windows Workflow Foundation (WF), Windows Forms, ASP.NET and ADO.NET Entity Framework to the Layered Architecture Design Pattern. It is aimed at illustrating how code of similar responsibilities can be factored into multiple logical layers which are applicable in most of today's enterprise applications.   The primary objective of the sample is to focus on layering and therefore, certain cross-cutting functionalities have been omitted to maintain its simplicity.
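
    To make the layering idea concrete, here is a stripped-down C# sketch with hypothetical names (it is not code from Serena's sample): a service facade delegates to a business component, and the business component depends only on a data-layer interface.

    using System;
    using System.Collections.Generic;

    // Data layer: isolates persistence behind an interface the business layer depends on.
    public interface IExpenseRepository
    {
        void Save(string employee, decimal amount);
    }

    public class InMemoryExpenseRepository : IExpenseRepository
    {
        private readonly List<string> store = new List<string>();

        public void Save(string employee, decimal amount)
        {
            store.Add(employee + ":" + amount);
        }
    }

    // Business layer: owns the rules; knows nothing about UI or transport.
    public class ExpenseComponent
    {
        private readonly IExpenseRepository repository;

        public ExpenseComponent(IExpenseRepository repository)
        {
            this.repository = repository;
        }

        public void SubmitExpense(string employee, decimal amount)
        {
            if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
            repository.Save(employee, amount);
        }
    }

    // Service layer: thin facade that a WCF service (or other host) would expose.
    public class ExpenseService
    {
        private readonly ExpenseComponent component =
            new ExpenseComponent(new InMemoryExpenseRepository());

        public void SubmitExpense(string employee, decimal amount)
        {
            component.SubmitExpense(employee, amount);
        }
    }

    Each layer references only the layer directly below it, which is the property the sample demonstrates at full scale across WPF, WCF, WF, ASP.NET, and the Entity Framework.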

    What’s New for This Version of the Sample

    • Business Entities now use ADO.NET Entity Framework POCO
    • Upgraded Workflow Services to WF4
    • Data Context moved to Data Layer
    • Layer Diagram included
    • Upgraded ASP.NET Web client using new VS template
    • Auto-refresh in ASP.NET Web clients
    • New WPF Expense Submitter client

    Key Links

    I’d be curious to hear three things you like about it and three things you would improve.

  • J.D. Meier's Blog

    patterns & practices WCF 3.5 Security Guidelines Now Available

    • 10 Comments

    For this week's release in our patterns & practices WCF Security Guidance project, we released our first version of our WCF 3.5 Security Guidelines.  Each guideline is a nugget of what to do, why, and how.  The goal of the guideline format is to take a lot of information, compress it down, and turn insight into action.

    The downside is that it's tough to create prescriptive guidelines that are generic enough to be reusable, but specific enough to be helpful.  The upside is that customers find the guidelines help them cut through a lot of information and take action.  We contextualize the guidelines as much as we can, but ultimately you're in the best position to do the pattern matching to find which guidelines are relevant for your scenarios, and how you need to tailor them.

    Here's a snapshot of the guidelines, but you can see our security guidelines explained at our WCF Security Guidance project site.

    Categories
    Our WCF Security guidelines are organized using the following buckets:

    • Auditing and Logging
    • Authentication
    • Authorization
    • Binding
    • Configuration Management
    • Exception Management
    • Hosting
    • Impersonation and Delegation
    • Input/Data Validation
    • Proxy Considerations
    • Deployment Considerations

    Auditing and Logging

    • Use WCF auditing to audit your service
    • If non-repudiation is important, consider setting the SuppressAuditFailure property to false (see the sketch after this list)
    • Use message logging to log operations on your service
    • Instrument for user management events
    • Instrument for significant business operations
    • Protect log files from unauthorized access
    • Do not log sensitive information
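
    Here is a minimal C# sketch of turning on WCF security auditing from hosting code; the OrderService contract and address are hypothetical, and the same settings are usually applied through the serviceSecurityAudit behavior in configuration instead.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    public static class AuditingSetup
    {
        public static ServiceHost CreateHost()
        {
            var host = new ServiceHost(typeof(OrderService), new Uri("http://localhost:8000/orders"));

            // Reuse the existing audit behavior if present, otherwise add one.
            var audit = host.Description.Behaviors.Find<ServiceSecurityAuditBehavior>();
            if (audit == null)
            {
                audit = new ServiceSecurityAuditBehavior();
                host.Description.Behaviors.Add(audit);
            }

            audit.AuditLogLocation = AuditLogLocation.Application;
            audit.ServiceAuthorizationAuditLevel = AuditLevel.SuccessOrFailure;
            audit.MessageAuthenticationAuditLevel = AuditLevel.SuccessOrFailure;

            // Fail the operation if an audit record cannot be written (supports non-repudiation).
            audit.SuppressAuditFailure = false;

            return host;
        }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string GetStatus(int orderId);
    }

    public class OrderService : IOrderService
    {
        public string GetStatus(int orderId) { return "Pending"; }
    }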

    Authentication

    • Know your authentication options
    • Use Windows Authentication when you can
    • If you support non-WCF clients using Windows authentication and message security, consider using the Kerberos direct option
    • If your users are in AD, but you can’t use Windows authentication, consider using username authentication
    • If your clients have certificates, consider using client certificate authentication
    • If you need to streamline certificate distribution to your clients for message encryption, consider using the negotiate credentials option
    • If your users are in a custom store, consider using username authentication with a custom validator (see the sketch after this list)
    • If your users are in a SQL membership store, use the SQL Membership Provider
    • If your partner applications need to be authenticated when calling WCF services, use client certificate authentication.
    • If you are using username authentication, use SQL Server Membership Provider instead of custom authentication
    • If you need to support intermediaries and a variety of transports between client and service, use message security to protect credentials
    • If you are using username authentication, validate user login information
    • Do not store passwords directly in the user store
    • Enforce strong passwords
    • Protect access to your credential store
    • If you are using Windows Forms to connect to WCF, do not cache credentials
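
    For the custom-store scenario above, here is a minimal C# sketch of a custom username/password validator; the credential check is a placeholder only, and you wire the validator up through the userNameAuthentication settings of the serviceCredentials behavior.

    using System;
    using System.IdentityModel.Selectors;
    using System.IdentityModel.Tokens;

    public class CustomUserNameValidator : UserNamePasswordValidator
    {
        public override void Validate(string userName, string password)
        {
            // Replace with a lookup against your custom user store; never store plaintext passwords.
            bool valid = userName == "demo" && password == "demo!Pass";   // placeholder check only

            if (!valid)
            {
                // Rejecting the token fails authentication for the call.
                throw new SecurityTokenException("Unknown user name or incorrect password.");
            }
        }
    }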

    Authorization

    • If you use ASP.NET roles, use the ASP.NET Role Provider
    • If you use Windows groups for authorization, use ASP.NET Role Provider with AspNetWindowsTokenRoleProvider
    • If you store role information in SQL, consider using the SQL Server Role Provider for roles authorization
    • If you store role information in Windows groups, consider using the WCF PrincipalPermissionAttribute class for roles authorization
    • If you need to authorize access to WCF operations, use declarative authorization (see the sketch after this list)
    • If you need to perform fine-grained authorization based on business logic, use imperative authorization
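
    Here is a minimal C# sketch of declarative, role-based authorization on a WCF operation.  The contract, operation, and "Managers" role are hypothetical, and the sketch assumes roles are available on the caller's principal (Windows groups by default, or an ASP.NET role provider).

    using System;
    using System.Security.Permissions;
    using System.ServiceModel;

    [ServiceContract]
    public interface IPayrollService
    {
        [OperationContract]
        void ApproveBonus(int employeeId, decimal amount);
    }

    public class PayrollService : IPayrollService
    {
        // Declarative check: only callers in the Managers role can invoke this operation.
        [PrincipalPermission(SecurityAction.Demand, Role = "Managers")]
        public void ApproveBonus(int employeeId, decimal amount)
        {
            // Imperative, fine-grained checks based on business logic can be layered on top here.
        }
    }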

    Binding

    • If you need to support clients over the Internet, consider using wsHttpBinding (see the sketch after this list).
    • If you need to expose your WCF service to legacy clients as an ASMX web service, use basicHttpBinding.
    • If you need to support remote WCF clients within an intranet, consider using netTcpBinding.
    • If you need to support local WCF clients, consider using netNamedPipeBinding.
    • If you need to support disconnected queued calls, use netMsmqBinding.
    • If you need to support bidirectional communication between WCF Client and WCF service, use wsDualHttpBinding.
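
    Here is a minimal C# sketch that maps the first few scenarios above to bindings; the CatalogService contract and addresses are hypothetical.  In production, prefer defining bindings and endpoints in configuration, as the Configuration Management guidelines below recommend.

    using System;
    using System.ServiceModel;

    public static class BindingChoices
    {
        public static ServiceHost CreateHost()
        {
            var host = new ServiceHost(typeof(CatalogService));

            // Internet clients: wsHttpBinding (message security by default).
            host.AddServiceEndpoint(typeof(ICatalogService),
                new WSHttpBinding(SecurityMode.Message), "http://localhost:8000/catalog");

            // Remote intranet WCF clients: netTcpBinding (transport security by default).
            host.AddServiceEndpoint(typeof(ICatalogService),
                new NetTcpBinding(), "net.tcp://localhost:8001/catalog");

            // Same-machine WCF clients: netNamedPipeBinding.
            host.AddServiceEndpoint(typeof(ICatalogService),
                new NetNamedPipeBinding(), "net.pipe://localhost/catalog");

            return host;
        }
    }

    [ServiceContract]
    public interface ICatalogService
    {
        [OperationContract]
        string Ping();
    }

    public class CatalogService : ICatalogService
    {
        public string Ping() { return "pong"; }
    }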

    Configuration Management

    • Use Replay detection to protect against message replay attacks
    • If you host your service in a Windows service, expose a metadata exchange (mex) binding
    • If you don’t want to expose your WSDL, turn off HttpGetEnabled and metadata exchange (mex)
    • Manage bindings and endpoints in config not code
    • Associate names with the service configuration when you create service behavior, endpoint behavior, and binding configuration
    • Encrypt configuration sections that contain sensitive data, as shown in the sketch below
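
    Here is a minimal C# sketch of encrypting the connectionStrings section of an executable's configuration file with the DPAPI provider; it assumes a reference to the System.Configuration assembly.  For IIS-hosted services, the aspnet_regiis.exe tool performs the same job against web.config.

    using System;
    using System.Configuration;

    public static class ConfigProtection
    {
        public static void Main()
        {
            Configuration config =
                ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

            ConfigurationSection section = config.GetSection("connectionStrings");
            if (!section.SectionInformation.IsProtected)
            {
                // DPAPI provider: ties the encrypted section to this machine.
                section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
                config.Save(ConfigurationSaveMode.Modified);
            }
        }
    }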

    Exception Management

    • Use structured exception handling
    • Do not divulge exception details to clients in production
    • Use a fault contract to return error information to clients (see the sketch after this list)
    • Use a global exception handler to catch unhandled exceptions
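
    Here is a minimal C# sketch of shielding exception details behind a fault contract; the StatusService contract and the ServiceFault type are hypothetical.  Internal exceptions are caught and only a sanitized, structured fault goes back to the client.

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class ServiceFault
    {
        [DataMember]
        public string Message { get; set; }
    }

    [ServiceContract]
    public interface IStatusService
    {
        [OperationContract]
        [FaultContract(typeof(ServiceFault))]
        string GetStatus(int orderId);
    }

    public class StatusService : IStatusService
    {
        public string GetStatus(int orderId)
        {
            try
            {
                if (orderId <= 0) throw new ArgumentOutOfRangeException("orderId");
                return "Pending";
            }
            catch (Exception)
            {
                // Log the full exception server-side, then shield the details from the client.
                var fault = new ServiceFault { Message = "The order status could not be retrieved." };
                throw new FaultException<ServiceFault>(fault);
            }
        }
    }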

    Hosting

    • If you are hosting your service in a Windows Service, use a least privileged custom domain account
    • If you are hosting your service in IIS, use a least privileged service account
    • Use IIS to host your service unless you need to use a transport that IIS does not support

    Impersonation and Delegation

    • Know the impersonation options
    • If you have to flow the original caller, use constrained delegation
    • Consider LogonUser when you need to impersonate but you don’t have trusted delegation
    • Consider S4U when you need a Windows token and you don’t have the original caller’s credentials
    • Use programmatic impersonation to impersonate based on business logic
    • When impersonating programmatically, be sure to revert to the original context
    • Only impersonate on operations that require it
    • Use OperationBehavior to impersonate declaratively, as shown in the sketch below
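
    Here is a minimal C# sketch of both declarative and programmatic impersonation inside a service; the contract and operations are hypothetical, and it assumes callers present Windows credentials so an impersonation token is available.

    using System;
    using System.Security.Principal;
    using System.ServiceModel;

    [ServiceContract]
    public interface IReportService
    {
        [OperationContract]
        string ReadGreeting();

        [OperationContract]
        string BuildReport();
    }

    public class ReportService : IReportService
    {
        // Declarative: the entire operation runs under the caller's identity.
        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        public string ReadGreeting()
        {
            return "Hello, " + WindowsIdentity.GetCurrent().Name;
        }

        // Programmatic: impersonate only for the section that needs it, then revert.
        public string BuildReport()
        {
            WindowsIdentity caller = ServiceSecurityContext.Current.WindowsIdentity;
            using (WindowsImpersonationContext context = caller.Impersonate())
            {
                // Work done here runs under the caller's token; keep this scope small.
                return "Report generated for " + WindowsIdentity.GetCurrent().Name;
            }   // Disposing the context reverts to the service's own identity.
        }
    }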

    Input/Data Validation

    • If you need to validate parameters, use parameter inspectors (see the sketch after this list)
    • If your service has operations that accept message or data contracts, use schemas to validate your messages
    • If you need to do schema validation, use message inspectors
    • Validate operation parameters for length, range, format and type
    • Validate parameter input on the server
    • Validate service responses on the client
    • Do not rely on client-side validation
    • Avoid user-supplied file name and path input
    • Do not echo untrusted input
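
    Here is a minimal C# sketch of a parameter inspector that validates string inputs on the server; the 256-character limit is an arbitrary assumption.  You attach an inspector like this to DispatchOperation.ParameterInspectors through a custom endpoint or operation behavior.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Dispatcher;

    public class StringLengthParameterInspector : IParameterInspector
    {
        private const int MaxLength = 256;

        public object BeforeCall(string operationName, object[] inputs)
        {
            // Reject over-long string parameters before the operation runs.
            foreach (object input in inputs)
            {
                string text = input as string;
                if (text != null && text.Length > MaxLength)
                {
                    throw new FaultException("A parameter exceeds the allowed length.");
                }
            }
            return null;   // no correlation state needed
        }

        public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState)
        {
            // No post-call validation in this sketch.
        }
    }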

    Proxy Considerations

    • Publish your metadata over HTTPS to protect your clients from proxy spoofing
    • If you turn off mutual authentication, be aware of service spoofing

    Deployment Considerations

    • Do not use temporary certificates in production
    • If you are using a custom domain account in the identity pool for your WCF application, create an SPN for Kerberos to authenticate the client.
    • If you are using a custom service account and need to be trusted for delegation, create an SPN
    • If you are hosting your service in a Windows Service, using a custom domain identity, and ASP.NET needs to use constrained delegation when calling the service, create an SPN
    • Use IIS to host your service unless you need to use a transport that IIS does not support
    • Use a least privileged account to run your WCF service
    • Protect sensitive data in your configuration files

    My Related Posts

  • J.D. Meier's Blog

    MSF Agile Persona Template

    • 1 Comment

    I was looking for examples of persona templates, and I came across Personas: Moving Beyond Role-Based Requirements Engineering by Randy Miller and Laurie Williams.  I found it to be insightful and practical.  I also like the fact they included a snapshot of a persona template example from MSF Agile ...

    MSF Agile Persona Template

    • Name - Enter a respectful, fictitious name for the persona.
    • Status and Trust Level - Favored or disfavored and level of credentials.
    • Role - Place the user group in which the persona belongs.
    • Demographics - Age and personal details (optional).
    • Knowledge, skills and abilities - Group real but generalized information about the capabilities of the persona.
    • Goals, motives, and concerns - Describe the real needs of the users in the user group represented by the persona. If multiple groupings exist, write a persona for each grouping.
    • Usage Patterns - Write the frequency and usage patterns of the system by the persona. Develop a detailed understanding of what functions would be most used. Look for any challenges that the system must help the persona overcome. Note the learning and interaction style if the system is new. Does the persona explore the system to find new functionality or need guidance? Keep this area brief but accurate.
  • J.D. Meier's Blog

    patterns & practices Security Engineering

    • 4 Comments

    As part of our patterns & practices App Arch Guide 2.0 project, we're consolidating our information on our patterns & practices Security Engineering.  Our security engineering approach is simply a collection of security-focused techniques that we found to be effective.  One of the keys to its effectiveness is our security frame.  Our security frame is a collection of "hot spots" that organize principles, patterns, and practices, as well as anti-patterns.  We use the frame to perform security code and design inspections.  Here's a preview of our cheat sheet so far.

    Security Overlay
    This is our patterns & practices Security Overlay:

    (Figure: patterns & practices Security Engineering overlay)

    It simply shows a common set of activities that customers already do, and then we overlay a set of security techniques.

    Summary of Key Activities in the Life Cycle 
    Our patterns & practices Security Engineering approach extends these proven core activities to create security-specific activities. These activities include:

    • Security Objectives. Setting objectives helps you scope and prioritize your work by setting boundaries and constraints. Setting security objectives helps you identify where to start, how to proceed, and when you are done.
    • Threat Modeling. Threat modeling is an engineering technique that can help you identify threats, attacks, vulnerabilities, and countermeasures that could affect your application. You can use threat modeling to shape your application's design, meet your company's security objectives, and reduce risk.
    • Security Design Guidelines. Creating design guidelines is a common practice at the start of an application project to guide development and share knowledge across the team. Effective design guidelines for security organize security principles, practices, and patterns by actionable categories.
    • Security Design Inspection. Security design inspections are an effective way to identify problems in your application design. By using pattern-based categories and a question-driven approach, you simplify evaluating your design against root cause security issues.
    • Security Code Inspection. Many security defects are found during code reviews. Analyzing code for security defects includes knowing what to look for and how to look for it. Security code inspections optimize inspecting code for common security issues.
    • Security Testing. Use a risk-based approach and use the output from the threat modeling activity to help establish the scope of your testing activities and define your test plans.
    • Security Deployment Inspection. When you deploy your application during your build process or staging process, you have an opportunity to evaluate runtime characteristics of your application in the context of your infrastructure. Deployment reviews for security focus on evaluating your security design and configuration of your application, host, and network.

    Security Frame
    Security frames define a set of patterns-based categories that can organize repeatable problems and solutions. You can use these categories to divide your application architecture for further analysis and to help identify application vulnerabilities. The categories within the frame represent the critical areas where mistakes are most often made.

    • Auditing and Logging. Who did what and when? Auditing and logging refer to how your application records security-related events.
    • Authentication. Who are you? Authentication is the process where an entity proves the identity of another entity, typically through credentials such as a user name and password.
    • Authorization. What can you do? Authorization is how your application provides access controls for resources and operations.
    • Configuration Management. Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings protected? Configuration management refers to how your application handles these operations and issues.
    • Cryptography. How are you handling secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong? Cryptography refers to how your application enforces confidentiality and integrity.
    • Exception Management. When a method call in your application fails, what does your application do? How much do you reveal? Do you return friendly information to end users? Do you pass valuable exception information back to the caller? Does your application fail gracefully?
    • Input and Data Validation. How do you know that the input your application receives is valid and safe? Input validation refers to how your application filters, scrubs, or rejects input before additional processing. Consider constraining input through entry points and encoding output through exit points. Do you trust data sources such as databases and file shares?
    • Sensitive Data. How does your application handle sensitive data? Sensitive data refers to how your application handles any data that must be protected either in memory, over the network, or in persistent stores.
    • Session Management. How does your application handle and protect user sessions? A session refers to a series of related interactions between a user and your Web application.

    Architecture and Design Issues
    Use the diagram below to help you think about architecture and design issues in your application.

    (Figure: security architecture and design issues by application tier)

    The key areas of concern for each application tier are:

    • Browser. Authenticating users on the client. Protecting sensitive data on the wire.  Preventing common attacks such as parameter manipulation and session hijacking.
    • Web Server. Validating untrusted input. Exception handling. Authorizing your users. Securing the configuration.
    • Application Server. Authenticating and Authorizing users. Auditing and logging. Protecting sensitive data on the wire. Securing configuration.
    • Database Server. Protecting sensitive data in the database. Securing configuration. Locking down database users.

    Design Guidelines
    The following is a set of secure design guidelines for application architects. Use it as a starting point for secure design and to improve security design inspections.

    • Auditing and Logging. Identify malicious behavior. Know what good traffic looks like. Audit and log activity through all of the application tiers. Secure access to log files. Back up and regularly analyze log files.
    • Authentication. Partition the site by anonymous, identified, and authenticated areas. Use strong passwords. Support password expiration periods and account disablement. Do not store credentials; use one-way hashes with salt (see the sketch after this list). Encrypt communication channels to protect authentication tokens. Pass Forms authentication cookies only over HTTPS connections.
    • Authorization. Use least privileged accounts. Consider authorization granularity. Enforce separation of privileges. Restrict user access to system-level resources.
    • Configuration Management. Use least privileged process and service accounts. Do not store credentials in plaintext. Use strong authentication and authorization on administration interfaces. Do not use the LSA. Secure the communication channel for remote administration. Avoid storing sensitive data in the Web space.
    • Cryptography. Do not develop your own. Use tried and tested platform features. Keep unencrypted data close to the algorithm. Use the right algorithm and key size. Avoid key management (use DPAPI). Cycle your keys periodically. Store keys in a restricted location.
    • Exception Management. Use structured exception handling. Do not reveal sensitive application implementation details. Do not log private data such as passwords. Consider a centralized exception management framework.
    • Input and Data Validation. Do not trust input; consider centralized input validation. Do not rely on client-side validation. Be careful with canonicalization issues. Constrain, reject, and sanitize input. Validate for type, length, format, and range.
    • Parameter Manipulation. Encrypt sensitive cookie state. Do not trust fields that the client can manipulate (query strings, form fields, cookies, or HTTP headers). Validate all values sent from the client.
    • Sensitive Data. Avoid storing secrets. Encrypt sensitive data over the wire. Secure the communication channel. Provide strong access controls on sensitive data stores. Do not store sensitive data in persistent cookies. Do not pass sensitive data using the HTTP GET protocol.
    • Session Management. Limit the session lifetime. Secure the channel. Encrypt the contents of authentication cookies. Protect session state from unauthorized access.
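
    To illustrate the "one-way hashes with salt" guideline under Authentication, here is a minimal C# sketch using Rfc2898DeriveBytes on .NET 4.0.  The salt size, hash size, and iteration count are assumptions, and production code should also prefer a constant-time comparison.

    using System;
    using System.Security.Cryptography;

    public static class PasswordHasher
    {
        private const int SaltSize = 16;
        private const int HashSize = 32;
        private const int Iterations = 10000;

        public static string HashPassword(string password)
        {
            using (var derive = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
            {
                byte[] salt = derive.Salt;
                byte[] hash = derive.GetBytes(HashSize);
                // Store salt and hash together; the plaintext password is never persisted.
                return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
            }
        }

        public static bool Verify(string password, string stored)
        {
            string[] parts = stored.Split(':');
            byte[] salt = Convert.FromBase64String(parts[0]);
            byte[] expected = Convert.FromBase64String(parts[1]);

            using (var derive = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                byte[] actual = derive.GetBytes(expected.Length);
                // Note: a constant-time comparison is preferable in production code.
                return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
            }
        }
    }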

    Patterns
    Design patterns in this context refer to generic solutions that address commonly occurring application design problems.  Some of the patterns identified below are well known design patterns. Their use in certain scenarios enables better security as a secondary goal. Some of the main patterns that help improve security are summarized below:

    • Brokered Authentication. Use brokered authentication where the application validates the credentials presented by the client, without the need for a direct relationship between the two parties. An authentication broker that both parties trust independently issues a security token to the client. The client can then present credentials, including the security token, to the application.
    • Direct Authentication. Use direct authentication where the application acts as an authentication service to validate credentials from the client. The credentials, which include proof-of-possession that is based on shared secrets, are verified against an identity store.
    • Role-Based Authorization. Role-based authorization is used to associate clients and groups with the permissions that they need to perform particular functions or access resources. When a user or group is added to a role, the user or group automatically inherits the various security permissions.
    • Resource-Based Authorization. Resource-based authorization is performed on a resource, depending on the type of the resource and the mechanism used to perform authorization. Resource-based authorization can be based on access control lists (ACLs) or URLs.
    • Trusted Subsystem. The application acts as a trusted subsystem to access additional resources. It uses its own credentials instead of the user's credentials to access the resource. The application must perform appropriate authentication and authorization of all requests that enter the subsystem. Remote resources should also be able to verify that the midstream caller is a trusted subsystem and not an upstream user of the application that is trying to bypass access to the trusted subsystem.
    • Impersonation and Delegation. The application uses the original user’s credentials to access the resource. The application must perform appropriate authentication and authorization of all requests that enter the subsystem, and then use impersonation or delegation while accessing resources. Remote resources should also be able to verify that individual users are trusted to access the resource.
    • Transfer Security. Sensitive data passed between layers or remote tiers should be encrypted and signed to ensure confidentiality and integrity of the data.
    • Exception Shielding. Sanitize unsafe exceptions by replacing them with exceptions that are safe by design. Return only those exceptions to the client that have been sanitized or exceptions that are safe by design. Exceptions that are safe by design do not contain sensitive information in the exception message, and they do not contain a detailed stack trace, either of which might reveal sensitive information about the application’s inner workings.

    Additional Resources

    My Related Posts

  • J.D. Meier's Blog

    Microsoft Cloud Case Studies at a Glance

    • 0 Comments

    Cloud computing is hot.  As customers make sense of what the Microsoft cloud story means to them, one of the first things they tend to do is look for case studies of the Microsoft cloud platform.  They like to know what their peers, partners, and other peeps are doing.

    Internally, I get to see a lot of what our customers are doing across various industries and how they are approaching the cloud behind the scenes.  It’s amazing the kind of transformations that cloud computing brings to the table and makes possible.  Cloud computing is truly blending and connecting business and IT (Information Technology), and it’s great to see the connection.  In terms of patterns, customers are using the cloud to either reduce cost, create new business opportunities and agility, or compete in ways they haven’t been able to before.  One of the most significant things cloud computing does is force people to truly understand what business they are in and what their strategy actually is.

    Externally, luckily, we have a great collection of Microsoft cloud case studies available at Windows Azure Case Studies.

    I find having case studies of the Microsoft cloud story makes it easy to see patterns and to get a sense of where some things are going.  Here is a summary of some of the case studies available, and a few direct links to some of the studies.

    Advertising Industry
    Examples of the Microsoft cloud case studies in advertising:

    Air Transportation Services
    Examples of the Microsoft cloud case studies in air transportation services:

    Capital Markets and Securities Industry
    Examples of the Microsoft cloud case studies in capital markets and securities:

    Education
    Examples of the Microsoft cloud case studies in education:

    Employment Placement Agencies
    Examples of the Microsoft cloud case studies in employment agencies:

    • OCC Match - Job-listing web site scales up solution, reduces costs by more than U.S. $500,000.


    Energy and Environmental Agencies
    Examples of the Microsoft cloud case studies in energy and environmental agencies:

    • European Environment Agency (EEA) - Environment agency's pioneering online tools bring revolutionary data to citizens.

    Financial Services Industry
    Examples of the Microsoft cloud case studies in the financial services industry:

    • eVision Systems - Israeli startup offers cost-effective, scalable procurement system using cloud services.
    • Fiserv - Fiserv evaluates cloud technologies as it enhances key financial services offerings.
    • NVoicePay - New company tackles big market with cloud-based B2B payment solution.
    • RiskMetrics - Financial risk-analysis firm enhances capabilities with dynamic computing.


    Food Service Industry
    Examples of the Microsoft cloud case studies in the food service industry:

    • Outback Steakhouse - Outback Steakhouse boosts guest loyalty with Facebook and Windows Azure.

    Government Agencies
    Examples of the Microsoft cloud case studies in government agencies:

    Healthcare Industry
    Examples of the Microsoft cloud case studies in healthcare:

    • Vectorform - Digital design and technology firm supports virtual cancer screening with 3-D viewer.

    High Tech and Electronics Manufacturing
    Examples of the Microsoft cloud case studies in high tech and electronics manufacturing:

    • 3M - 3M launches Web-based Visual Attention Service to heighten design impact. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000005768
    • GXS Trading Grid - Electronic services firm reaches out to new markets with cloud-based solution.
    • iLink Systems - Custom developer reduces development time, cost by 83 percent for Web, PC, mobile target.
    • Microsoft Worldwide Partner Group - Microsoft quickly delivers interactive cloud-based tool to ease partner transition.
    • Sharpcloud - Software startup triples productivity, saves $500,000 with cloud computing solution.
    • Symon Communications - Digital innovator uses cloud computing to expand product line with help from experts.
    • VeriSign - Security firm helps customers create highly secure hosted infrastructure solutions.
    • Xerox - Xerox cloud print solution connects mobile workers to printers around the world.

    Hosting
    Examples of the Microsoft cloud case studies in hosting:

    • Izenda - Hosted business intelligence solution saves companies up to $250,000 in IT support and development costs.
    • Mamut - Hosting provider uses scalable computing to create hybrid backup solution.
    • Metastorm - Partner opens new market segments with cloud-based business process solution.
    • Qlogitek - Supply chain integrator relies on Microsoft platform to facilitate $20 billion in business.
    • SpeechCycle - Next generation contact center solution uses cloud to deliver software-plus-services.
    • TBS Mobility - Mobility software provider opens new markets with software-plus-services.

    Insurance Industry
    Examples of the Microsoft cloud case studies in the insurance industry:

    IT Services
    Examples of the Microsoft cloud case studies in IT services:

    • BEDIN Shop Systems - Luxury goods retailer gains point-of-sale solution in minutes with cloud-based system. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000008195
    • Broad Reach Mobility - Firm streamlines field-service tasks with cloud solution. - http://www.microsoft.com/casestudies/Windows-Azure/Broad-Reach-Mobility/Firm-Streamlines-Field-Service-Tasks-with-Cloud-Solution/4000008493
    • Codit - Solution provider streamlines B2B connections using cloud services. - http://www.microsoft.com/casestudies/Microsoft-BizTalk-Server/Codit/Solution-Provider-Streamlines-B2B-Connections-Using-Cloud-Services/4000008528
    • Cumulux - Software developer focuses on innovation, extends cloud services value for customers. - http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000007947
    • eCraft - IT firm links web applications to powerful business management software.
    • EdisonWeb - Web firm saves $30,000 annually, expands global growth with cloud database service.
    • Epicor - Software developer saves money, enhances application with Internet-based platform.
    • ESRI - GIS provider lowers cost of customer entry, opens new markets with hosted services.
    • Formotus - Forms automation company uses cloud storage to lower mobile data access costs.
    • FullArmor - FullArmor PolicyPortal Technical Brief: A Windows Azure/Software-plus-Services Solution
    • Gcommerce - Service provider transforms special-order process with a hybrid cloud and on-premises inventory solution.
    • GoGrid - Hosting provider extends service offerings, attracts customers with "cloud" platform.
    • Guppers - Mobile data services quickly and cost-effectively scale with cloud services solution.
    • HCL Technologies - IT firm delivers carbon-data management in the cloud, lowers barriers for customers.
    • HubOne - Australian development firm grows business exponentially with cloud services.
    • IMPACTA - IT security firm delivers low cost, high protection with online file transfer service.
    • Infosys - Infosys creates cloud-based solution for auto dealers using SQL data services.
    • InterGrid GreenButton - GreenButton super computing power in the cloud.
    • InterGrid - Software developers offer quick processing of compute-heavy tasks with cloud services.
    • ISHIR Infotech - IT company attracts new customers at minimal cost with cloud computing solution.
    • K2 - Software firm moves business processes and line-of-business data into the cloud.
    • Kelly Street Digital - Digital marketing startup switches cloud providers and saves $4,200 monthly.
    • Kompas Xnet - IT integrator delivers high-availability website at lower cost with online services.
    • LINQPad - Software developers gain the ease of LINQ data queries to compelling cloud content.
    • Meridium - Asset performance management solution increases performance and interoperability.
    • metaSapiens - ISV optimizes data browsing tool for online data, expects to enter global marketplace.
    • Microsoft - Microsoft IT moves auction tool to the cloud, makes it easier for employees to donate.
    • NeoGeo New Media - Digital media asset management solution gains scalability with SQL Data Services.
    • Paladin Data Systems - Software provider reduces operating costs with cloud-based solution.
    • Persistent Systems - Software services provider delivers cost-effective e-government solution.
    • Propelware - Intuit integration provider reduces development time, cost by 50 percent.
    • Quilink - Innovative technology startup creates contact search solution, gains global potential.
    • Siemens - Siemens expands software delivery service, significantly reduces TCO.
    • Sitecore - Company builds compelling web experiences in the cloud for multinational organizations.
    • SOASTA - Cloud services help performance-testing firm simulate real-world Internet traffic.
    • Softronic - Firm meets demand for streamlined government solutions using cloud platform.
    • SugarCRM - CRM vendor quickly adapts to new platform, adds global, scalable delivery channel.
    • Synteractive - Solution provider uses cloud technology to create novel social networking software.
    • Transactiv - Software start-up scales for demand without capital expenses by using cloud services.
    • Umbraco - Web content management system provider moves solution to the cloud to expand market.
    • Volantis - Mobile services ISV gains seamless scalability with Windows Azure platform.
    • Wipro - IT services company reduces costs, expands opportunities with new cloud platform.
    • Zitec - IT consultancy saves up to 90 percent on relational database costs with cloud services.
    • Zmanda - Software company enriches cloud-based backup solution with structured data storage.

    Life Sciences
    Examples of the Microsoft cloud case studies in life sciences:

    Manufacturing
    Examples of the Microsoft cloud case studies in manufacturing:

    Media and Entertainment Industry
    Examples of the Microsoft cloud case studies in media and entertainment:

    • OriginDigital - Video services provider to reduce transcoding costs up to half.
    • Sir Speedy - Publishing giant creates innovative web based service for small-business market.
    • STATS - Sports data provider saves $1 million on consumer market entry via cloud services.
    • TicketDirect - Ticket seller finds ideal business solution in hosted computing platform.
    • TicTacTi - Advertising company adopts cloud computing, gets 400 percent improvement.
    • Tribune - Tribune transforms business for heightened relevance by embracing cloud computing.
    • VRX Studios - Global photography company transforms business with scalable cloud solution.

    Metal Fabrication Industry
    Examples of the Microsoft cloud case studies in metal fabrication:

    • ExelGroup - ExelGroup achieves cost reduction and efficiency increase with Soft1 on Windows Azure.


    Nonprofit Organizations
    Examples of the Microsoft cloud case studies in non-profit organizations:

    • Microsoft Disaster Response Team - Helping governments recover from disasters: Microsoft and partners provide technology and other assistance following natural disasters in Haiti and Pakistan.

    Oil and Gas Industry
    Examples of the Microsoft cloud case studies in oil and gas:

    • The Information Store (iStore) - Solution developer expects to boost efficiency with software-plus-services strategy.

    Professional Services
    Examples of the Microsoft cloud case studies in professional services:

    Publishing Industry
    Examples of the Microsoft cloud case studies in publishing:

    • MyWebCareer - Web startup saves $305,000, sees ever-ready scalability—without having to manage IT.


    Retail Industry
    Examples of the Microsoft cloud case studies in retail:

    • Glympse.com - Location-sharing solution provider gains productivity, agility with hosted services.
    • höltl Retail Solutions - German retail solutions firm gains new customers with cloud computing solution.

    Software Engineering
    Examples of the Microsoft cloud case studies in software:

    Telecommunications Industry
    Examples of the Microsoft cloud case studies in telecommunications:

    • IntelePeer - Telecommunications firm develops solution to speed on-demand conference calls.
    • SAPO - Portugal telecom subsidiary helps ensure revenue opportunities in the cloud.
    • T-Mobile USA - Mobile operator speeds time-to-market for innovative social networking solution.
    • T-Systems - Telecommunications firm reduces development and deployment time with hosting platform.

    Training Industry
    Examples of the Microsoft cloud case studies in training:

    • Point8020 - Learning content provider uses cloud platform to enhance content delivery.

    Transportation and Logistics Industry
    Examples of the Microsoft cloud case studies in transportation:

    • TradeFacilitate - Trade data service scales online solution to global level with "cloud" services model.

    My Related Posts

  • J.D. Meier's Blog

    Project Management Body of Knowledge (PMBOK) Framework

    • 0 Comments

    Here is a quick map of the process groups, knowledge areas, and processes in the PMBOK (Project Management Body of Knowledge).  Regardless of whether you pursue PMI certification, I think it's useful to know how project management knowledge is organized by experts and professionals.  This will help you navigate the space more effectively and learn project management at a faster pace, because you can better organize the information in your mind.

    If you are a program manager or a project manager, the categories are especially helpful for checking your knowledge and for thinking of projects more holistically.   You can also use the knowledge areas to grow your skills by exploring each area and building your catalog of principles, patterns, and practices.

    Process Groups and Knowledge Areas
    Here is a quick map of the process groups and knowledge areas in the Project Management Body of Knowledge:

    Process Groups

    1. Initiating
    2. Planning
    3. Executing
    4. Monitoring and Controlling
    5. Closing

    Knowledge Areas

    1. Project Integration Management
    2. Project Scope Management
    3. Project Time Management
    4. Project Cost Management
    5. Project Quality Management
    6. Project Human Resource Management
    7. Project Communications Management
    8. Project Risk Management
    9. Project Procurement Management

    Knowledge Areas and Processes
    Here is a quick topology view of the Knowledge Areas and the processes:

    Project Integration Management

    • Develop Project Charter
    • Develop Preliminary Project Scope Statement
    • Develop Project Management Plan
    • Direct and Manage Project Execution
    • Monitor and Control Project Work
    • Integrated Change Control
    • Close Project

    Project Scope Management

    • Scope Planning
    • Scope Definition
    • Create WBS
    • Scope Verification
    • Scope Control

    Project Time Management

    • Activity Definition
    • Activity Sequencing
    • Activity Resource Estimating
    • Activity Duration Estimating
    • Schedule Development
    • Schedule Control

    Project Cost Management

    • Cost Estimating
    • Cost Budgeting
    • Cost Control

    Project Quality Management

    • Quality Planning
    • Perform Quality Assurance
    • Perform Quality Control

    Project Human Resource Management

    • Human Resource Planning
    • Acquire Project Team
    • Develop Project Team
    • Manage Project Team

    Project Communications Management

    • Communications Planning
    • Information Distribution
    • Performance Reporting
    • Manage Stakeholders

    Project Risk Management

    • Risk Management Planning
    • Risk Identification
    • Qualitative Risk Analysis
    • Quantitative Risk Analysis
    • Risk Response Planning
    • Risk Monitoring and Control

    Project Procurement Management

    • Plan Purchases and Acquisitions
    • Plan Contracting
    • Request Seller Responses
    • Select Sellers
    • Contract Administration
    • Contract Closure

  • J.D. Meier's Blog

    patterns and practices WCF Security Application Scenarios

    • 7 Comments

    We published an updated set of our WCF Security application scenarios yesterday, as part of our patterns & practices WCF Security guidance project.   Application Scenarios are visual "blueprints" of skeletal solutions for end-to-end deployment scenarios.  Each application scenario includes a before and after look at working solutions.  While you still need to prototype and test for your scenario, this gives you potential solutions and paths at a glance, rather than starting from scratch.  It's a catalog of application scenarios that you can look through and potentially find your match.

    Intranet
    Common Intranet patterns:

    Internet
    Common Internet patterns:

    One Size Does Not Fit All
    We know that one size doesn't fit all, so we create a collection of application scenarios that you can quickly sort through and pattern match against your scenario.  It's like a visual menu at a restaurant.  The goal is to find a good fit against your parameters versus a perfect fit.  It gives you a baseline to start from.  They effectively let you preview solutions before embarking on your journey.

    How We Make Application Scenarios
    First, we start by gathering all the deployment scenarios we can find from customers with working solutions.  We use our field, product support, product teams, subject matter experts, and customers.  We also check with our internal line-of-business application solutions.  While there are a lot of variations, we look for the common denominators.  There are only so many ways to physically deploy servers, so we start there.  We group potential solutions into big buckets. 

    In order to make the solutions meaningful, we pick a focus.  For example, with WCF Security, key overarching decisions include authentication, authorization, and secure communication.  These decisions span the layers and tiers.  We also pay attention to factors that influence your decisions.  For example, your role stores and user stores are a big factor.  The tricky part is throwing out the details of customer specific solutions, while retaining the conceptual integrity that makes the solution useful.

    Next, we create prototypes and we test the end-to-end scenarios in our lab.  We do a lot of whiteboarding during this stage for candidate solutions.  This is where we spend the bulk of our time, testing paths, finding surprises, and making things work.  It's one thing to know what's supposed to work; it's another to make it work in practice. 

    From our working solution, we highlight the insights and actions within the Application Scenario so you can quickly prototype for your particular context.  We then share our candidate guidance modules on CodePlex, while we continue reviews across our review loops including field, PSS, customers, product team members, and subject matter experts. 

  • J.D. Meier's Blog

    Cloud Security Threats and Countermeasures at a Glance

    • 0 Comments

    Cloud security has been a hot topic since Microsoft introduced the Windows Azure platform.  One of the quickest ways to get your head around security is to cut to the chase and look at the threats, attacks, vulnerabilities, and countermeasures.  This post is a look at threats and countermeasures from a technical perspective.

    Keep in mind that security is a matter of people, process, and technology.  However, focusing on a specific slice, in this case the technical slice, can help you get results.  On the technical side, you still need to think holistically in terms of the application, network, and host, as well as how you plug security into your product or development life cycle.  For information on plugging it into your life cycle, see the Security Development Lifecycle.

    While many of the same security issues that apply to running applications on-premises also apply to the cloud, the context of running in the cloud does change some key things.  For example, it might mean taking a deeper look at claims for identity management and access control.  It might mean rethinking your approach to storage.  It can mean thinking more about how you access and manage virtualized computing resources.  It can mean thinking about how you make calls to services or how you protect calls to your own services.

    Here is a fast path through looking at security threats, attacks, vulnerabilities, and countermeasures for the cloud …

    Objectives

    • Learn a security frame that applies to the cloud
    • Learn top threats/attacks, vulnerabilities and countermeasures for each area within the security frame
    • Understand differences between threats, attacks, vulnerabilities and countermeasures

    Overview
    It is important to think like an attacker when designing and implementing an application. Putting yourself in the attacker’s mindset will make you more effective at designing mitigations for vulnerabilities and coding defensively.  Below is the cloud security frame. We use the cloud security frame to present threats, attacks, vulnerabilities and countermeasures to make them more actionable and meaningful.

    You can also use the cloud security frame to effectively organize principles, practices, patterns, and anti-patterns in a more useful way.

    Threats, Attacks, Vulnerabilities, and Countermeasures
    These terms are defined as follows:

    • Asset. A resource of value such as the data in a database, data on the file system, or a system resource.
    • Threat. A potential occurrence – malicious or otherwise – that can harm an asset.
    • Vulnerability. A weakness that makes a threat possible.
    • Attack. An action taken to exploit a vulnerability and realize a threat.
    • Countermeasure. A safeguard that addresses a threat and mitigates risk.
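
    To make these terms concrete, it can help to capture threat-model entries in a simple, consistent structure.  The sketch below is purely illustrative (the type and field names are mine, not part of any Microsoft guidance); it just shows how an asset, a threat, its vulnerabilities, the attacks that exploit them, and the countermeasures relate to one another, using Python for brevity.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ThreatModelEntry:
            """One row of a lightweight threat model, using the terms defined above."""
            asset: str                                                 # resource of value, e.g. a database
            threat: str                                                # potential occurrence that can harm the asset
            vulnerabilities: List[str] = field(default_factory=list)  # weaknesses that make the threat possible
            attacks: List[str] = field(default_factory=list)          # actions taken to exploit the vulnerabilities
            countermeasures: List[str] = field(default_factory=list)  # safeguards that mitigate the risk

        # Hypothetical entry, built from items that appear later in this post.
        entry = ThreatModelEntry(
            asset="Customer database",
            threat="Disclosure of confidential data",
            vulnerabilities=["Storing clear text credentials in configuration files"],
            attacks=["Configuration file sniffing"],
            countermeasures=["Encrypt sensitive data in configuration files",
                             "Secure your configuration store"],
        )

        for countermeasure in entry.countermeasures:
            print(f"Countermeasure for '{entry.threat}': {countermeasure}")

    Keeping entries in a structure like this makes it easy to sort and filter them by the categories in the security frame that follows.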

    Cloud Security Frame
    The following key security concepts provide a frame for thinking about security when designing applications to run on the cloud, such as Windows Azure. Understanding these concepts helps you put key security considerations such as authentication, authorization, auditing, confidentiality, integrity, and availability into action.

    • Auditing and Logging. Cloud auditing and logging refers to how security-related events are recorded, monitored, audited, exposed, compiled, and partitioned across multiple cloud instances. Examples include: Who did what, when, and on which VM instance?
    • Authentication. Authentication is the process of proving identity, typically through credentials, such as a user name and password. In the cloud, this also encompasses authentication against varying identity stores.
    • Authorization. Authorization is how your application provides access controls for roles, resources, and operations. Authorization strategies might involve standard mechanisms, utilize claims, and potentially support a federated model.
    • Communication. Communication encompasses how data is transmitted over the wire. Transport security, message encryption, and point-to-point communication are covered here.
    • Configuration Management. Configuration management refers to how your application handles configuration and administration of your applications from a security perspective. Examples include: Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured?
    • Cryptography. Cryptography refers to how your application enforces confidentiality and integrity. Examples include: How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong? Certificates and certificate management are in this domain as well.
    • Input and Data Validation. Validation refers to how your application filters, scrubs, or rejects input before additional processing, or how it sanitizes output. It's about constraining input through entry points and encoding output through exit points. Message validation refers to how you verify the message payload against schema, as well as message size, content, and character sets. Examples include: How do you know that the input your application receives is valid and safe? Do you trust data from sources such as databases and file shares?
    • Exception Management. Exception management refers to how you handle application errors and exceptions. Examples include: When your application fails, what does it do? Does it support graceful failover to other application instances in the cloud? How much information do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller?
    • Sensitive Data. Sensitive data refers to how your application handles any data that must be protected in memory, over the network, or in persistent stores. Examples include: How does your application handle sensitive data? How is sensitive data shared between application instances?
    • Session Management. A session refers to a series of related interactions between a user and your application. Examples include: How does your application handle and protect user sessions?
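
    One way to put the frame to work is as a review checklist: walk each hot spot and record the questions you asked and the answers you settled on.  Here is a minimal sketch of that idea in Python; the hot-spot names come from the frame above, but the abbreviated questions and the helper function are my own illustration, not an official checklist.

        # Map each hot spot in the cloud security frame to example review questions.
        CLOUD_SECURITY_FRAME = {
            "Auditing and Logging": ["Who did what, when, and on which VM instance?"],
            "Authentication": ["Which identity stores do we authenticate against?"],
            "Authorization": ["Do we use roles, claims, or a federated model?"],
            "Communication": ["Is transport or message security applied on the wire?"],
            "Configuration Management": ["Who does the application run as?", "How are settings secured?"],
            "Cryptography": ["How are secrets kept confidential and data kept tamper-proof?"],
            "Input and Data Validation": ["Is input constrained at entry points and output encoded at exit points?"],
            "Exception Management": ["How much information do errors reveal?"],
            "Sensitive Data": ["How is data protected in memory, on the wire, and at rest?"],
            "Session Management": ["How are user sessions protected?"],
        }

        def unanswered(answers: dict) -> list:
            """Return the hot spots that a security review has not covered yet."""
            return [hot_spot for hot_spot in CLOUD_SECURITY_FRAME if hot_spot not in answers]

        # Example: a review that has only covered two hot spots so far.
        print(unanswered({"Authentication": "Claims via federation",
                          "Communication": "HTTPS everywhere"}))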

     

    Threats and Attacks

     

    Auditing and Logging
    • Repudiation. An attacker denies performing an operation, exploits an application without trace, or covers his or her tracks.
    • Denial of service (DoS). An attacker overwhelms logs with excessive entries or very large log entries.
    • Disclosure of confidential information. An attacker gathers sensitive information from log files.
    Authentication
    • Network eavesdropping. An attacker steals identity and/or credentials off the network by reading network traffic not intended for them.
    • Brute force attacks. An attacker guesses identity and/or credentials through the use of brute force.
    • Dictionary attacks. An attacker guesses identity and/or credentials through the use of common terms in a dictionary designed for that purpose.
    • Cookie replay attacks. An attacker gains access to an authenticated session through the reuse of a stolen cookie containing session information.
    • Credential theft. An attacker gains access to credentials through data theft; for instance, phishing or social engineering.
    Authorization
    • Elevation of privilege. An attacker enters a system as a lower-level user, but is able to obtain higher-level access.
    • Disclosure of confidential data. An attacker accesses confidential information because of authorization failure on a resource or operation.
    • Data tampering. An attacker modifies sensitive data because of authorization failure on a resource or operation.
    • Luring attacks. An attacker lures a higher-privileged user into taking an action on their behalf. This is not an authorization failure but rather a failure of the system to properly inform the user.
    • Token stealing. An attacker steals the credentials or token of another user in order to gain authorization to resources or operations they would not otherwise be able to access.
    Communication
    • Failure to encrypt messages. An attacker is able to read message content off the network because it is not encrypted.
    • Theft of encryption keys. An attacker is able to decrypt sensitive data because he or she has the keys.
    • Man-in-the-middle attack. An attacker can read and then modify messages between the client and the service.
    • Session replay. An attacker steals messages off the network and replays them in order to steal a user's session.
    • Data tampering. An attacker modifies the data in a message in order to attack the client or the service.
    Configuration Management
    • Unauthorized access to configuration stores. An attacker gains access to configuration files and is able to modify binding settings, etc.
    • Retrieval of clear text configuration secrets. An attacker gains access to configuration files and is able to retrieve sensitive information such as database connection strings.
    Cryptography
    • Encryption cracking. Breaking an encryption algorithm and gaining access to the encrypted data.
    • Loss of decryption keys. Obtaining decryption keys and using them to access protected data.
    Exception Management
    • Information disclosure. Sensitive system or application details are revealed through exception information.
    • Denial of service. An attacker uses error conditions to stop your service or place it in an unrecoverable error state.
    • Elevation of privilege. Your service encounters an error and fails to an insecure state; for instance, failing to revert impersonation.
    Input and Data Validation
    • Canonicalization attacks. Canonicalization attacks can occur anytime validation is performed on a different form of the input than that which is used for later processing. For instance, a validation check may be performed on an encoded string, which is later decoded and used as a file path or URL.
    • Cross-site scripting. Cross-site scripting can occur if you fail to encode user input before echoing back to a client that will render it as HTML.
    • SQL injection. Failure to validate input can result in SQL injection if the input is used to construct a SQL statement, or if it will modify the construction of a SQL statement in some way.
    • Cross-Site Request Forgery: CSRF attacks involve forged transactions submitted to a site on behalf of another party.
    • XPath injection. XPath injection can result if the input sent to the Web service is used to influence or construct an XPath statement. The input can also introduce unintended results if the XPath statement is used by the Web service as part of some larger operation, such as applying an XQuery or an XSLT transformation to an XML document.
    • XML bomb. XML bomb attacks occur when specific, small XML messages are parsed by a service resulting in data that feeds on itself and grows exponentially. An attacker sends an XML bomb with the intent of overwhelming a Web service’s XML parser and resulting in a denial of service attack.
    Sensitive Data
    • Memory dumping. An attacker is able to read sensitive data out of memory or from local files.
    • Network eavesdropping. An attacker sniffs unencrypted sensitive data off the network.
    • Configuration file sniffing. An attacker steals sensitive information, such as connection strings, out of configuration files.
    Session Management
    • Session hijacking. An attacker steals the session ID of another user in order to gain access to resources or operations they would not otherwise be able to access.
    • Session replay. An attacker steals messages off the network and replays them in order to steal a user’s session.
    • Man-in-the-middle attack. An attacker can read and then modify messages between the client and the service.
    • Inability to log out successfully. An application leaves a communication channel open rather than completely closing the connection and destroying any server objects in memory relating to the session.
    • Cross-site request forgery. Cross-site request forgery (CSRF) is where an attacker tricks a user into performing an action on a site where the user actually has a legitimate authorized account.
    • Session fixation. An attacker uses CSRF to set another person’s session identifier and thus hijack the session after the attacker tricks a user into initiating it.
    • Load balancing and session affinity. When sessions are transferred from one server to balance traffic among the various servers, an attacker can hijack the session during the handoff.
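
    As one example of countering the session-related threats above (cross-site request forgery and session fixation in particular), many applications issue a per-session anti-forgery token and verify it on every state-changing request.  The sketch below uses only the Python standard library and is illustrative rather than framework-specific; the function names and key handling are assumptions for the example.

        import hashlib
        import hmac
        import secrets

        SECRET_KEY = secrets.token_bytes(32)   # in practice, load this from a protected secret store

        def issue_csrf_token(session_id: str) -> str:
            """Derive an anti-forgery token bound to the user's session."""
            return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

        def verify_csrf_token(session_id: str, submitted_token: str) -> bool:
            """Compare the submitted token against the expected value in constant time."""
            expected = issue_csrf_token(session_id)
            return hmac.compare_digest(expected, submitted_token)

        # The token rendered into a form is later verified when the form is posted back.
        session_id = secrets.token_urlsafe(32)          # unpredictable session identifier
        token = issue_csrf_token(session_id)
        print(verify_csrf_token(session_id, token))     # True  -> legitimate request
        print(verify_csrf_token(session_id, "forged"))  # False -> forged request is rejected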

    Vulnerabilities

    Auditing and Logging
    • Failing to audit failed logons.
    • Failing to secure log files.
    • Storing sensitive information in log files.
    • Failing to audit across application tiers.
    • Failure to throttle log files.
    Authentication
    • Using weak passwords.
    • Storing clear text credentials in configuration files.
    • Passing clear text credentials over the network.
    • Permitting prolonged session lifetime.
    • Mixing personalization with authentication.
    • Using weak authentication mechanisms (e.g., using basic authentication over an untrusted network).
    Authorization
    • Relying on a single gatekeeper (e.g., relying on client-side validation only).
    • Failing to lock down system resources against application identities.
    • Failing to limit database access to specified stored procedures.
    • Using inadequate separation of privileges.
    • Connection pooling.
    • Permitting over privileged accounts.
    Configuration Management
    • Using insecure custom administration interfaces.
    • Failing to secure configuration files on the server.
    • Storing sensitive information in clear text.
    • Having too many administrators.
    • Using over privileged process accounts and service accounts.
    Communication
    • Not encrypting messages.
    • Using custom cryptography.
    • Distributing keys insecurely.
    • Managing or storing keys insecurely.
    • Failure to use a mechanism to detect message replays.
    • Not using either message or transport security.
    Cryptography
    • Using custom cryptography
    • Failing to secure encryption keys
    • Using the wrong algorithm or a key size that is too small
    • Using the same key for a prolonged period of time
    • Distributing keys in an insecure manner
    Exception Management
    • Failure to use structured exception handling (try/catch).
    • Revealing too much information to the client.
    • Failure to specify fault contracts with the client.
    • Failure to use a global exception handler.
    Input and Data Validation
    • Using non-validated input to generate SQL queries.
    • Relying only on client-side validation.
    • Using input file names, URLs, or usernames for security decisions.
    • Using application-only filters for malicious input.
    • Looking for known bad patterns of input.
    • Trusting data read from databases, file shares, and other network resources.
    • Failing to validate input from all sources including cookies, headers, parameters, databases, and network resources.
    Sensitive Data
    • Storing secrets when you do not need to.
    • Storing secrets in code.
    • Storing secrets in clear text in files, registry, or configuration.
    • Passing sensitive data in clear text over networks.
    Session Management
    • Passing session IDs over unencrypted channels.
    • Permitting prolonged session lifetime.
    • Having insecure session state stores.
    • Placing session identifiers in query strings.
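
    To avoid the session management vulnerabilities just listed (session identifiers in query strings, session IDs passed over unencrypted channels), the usual approach is an unpredictable session ID carried in a cookie marked Secure and HttpOnly, with a short lifetime.  A minimal sketch with the Python standard library, for illustration only:

        import secrets
        from http.cookies import SimpleCookie

        def build_session_cookie() -> str:
            """Create a Set-Cookie header value for a new session."""
            cookie = SimpleCookie()
            cookie["session_id"] = secrets.token_urlsafe(32)  # cryptographically strong, not guessable
            cookie["session_id"]["secure"] = True       # only sent over HTTPS, never in clear text
            cookie["session_id"]["httponly"] = True     # not readable by script, which limits XSS impact
            cookie["session_id"]["path"] = "/"
            cookie["session_id"]["max-age"] = 20 * 60   # short session lifetime (20 minutes)
            return cookie["session_id"].OutputString()

        # The session ID travels only in this header, never in a query string or URL.
        print("Set-Cookie: " + build_session_cookie())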

     

    Countermeasures

     

    Auditing and Logging
    • Identify malicious behavior.
    • Know your baseline (know what good traffic looks like).
    • Use application instrumentation to expose behavior that can be monitored.
    • Throttle logging.
    • Strip sensitive data before logging.
    Authentication
    • Use strong password policies.
    • Do not store credentials in an insecure manner.
    • Use authentication mechanisms that do not require clear text credentials to be passed over the network.
    • Encrypt communication channels to secure authentication tokens.
    • Pass forms authentication cookies only over Secure HTTP (HTTPS).
    • Separate anonymous from authenticated pages.
    • Use cryptographic random number generators to generate session IDs.
    Authorization
    • Use least-privileged accounts.
    • Tie authentication to authorization on the same tier.
    • Consider granularity of access.
    • Enforce separation of privileges.
    • Use multiple gatekeepers.
    • Secure system resources against system identities.
    Configuration Management
    • Secure custom administration interfaces (require authentication and use encrypted channels).
    • Secure configuration files on the server.
    • Encrypt sensitive information rather than storing it in clear text.
    • Limit the number of administrators.
    • Use least privileged process accounts and service accounts.
    Communication
    • Use message security or transport security to encrypt your messages.
    • Use proven platform-provided cryptography.
    • Periodically change your keys.
    • Use any platform-provided replay detection features.
    • Consider creating custom code if the platform does not provide a detection mechanism.
    • Turn on message or transport security.
    Cryptography
    • Do not develop and use proprietary algorithms (XOR is not encryption; use established cryptography such as RSA).
    • Avoid key management.
    • Use the RNGCryptoServiceProvider class to generate cryptographically strong random numbers.
    • Periodically change your keys.
    Exception Management
    • Use structured exception handling (by using try/catch blocks).
    • Catch and wrap exceptions only if the operation adds value/information.
    • Do not reveal sensitive system or application information.
    • Implement a global exception handler.
    • Do not log private data such as passwords.
    Sensitive Data
    • Do not store secrets in software.
    • Encrypt sensitive data over the network.
    • Secure the channel.
    • Encrypt sensitive data in configuration files.
    Session Management
    • Partition the site by anonymous, identified, and authenticated users.
    • Reduce session timeouts.
    • Avoid storing sensitive data in session stores.
    • Secure the channel to the session store.
    • Authenticate and authorize access to the session store.
    Input and Data Validation
    • Do not trust client input.
    • Validate input: length, range, format, and type.
    • Validate XML streams.
    • Constrain, reject, and sanitize input.
    • Encode output.
    • Restrict the size, length, and depth of parsed XML messages.
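
    The validation countermeasures above translate fairly directly into a small, centralized validation helper: constrain what you accept (type, length, range, and format), reject everything else, and encode anything you echo back.  Here is a hedged sketch of that idea in Python; the field names and limits are assumptions for the example, not prescribed values.

        import html
        import re

        # Whitelist pattern: constrain the input rather than hunting for known bad values.
        USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

        def validate_username(raw: str) -> str:
            """Constrain by type, length, and format; reject anything else."""
            if not isinstance(raw, str) or not USERNAME_PATTERN.fullmatch(raw):
                raise ValueError("invalid username")  # reject; do not try to repair malicious input
            return raw

        def validate_quantity(raw: str) -> int:
            """Constrain a numeric field by type and range."""
            value = int(raw)                          # raises ValueError for non-numeric input
            if not 1 <= value <= 100:
                raise ValueError("quantity out of range")
            return value

        def render_greeting(display_name: str) -> str:
            """Encode output at the exit point so it cannot run as script in the browser."""
            return "<p>Hello, " + html.escape(display_name) + "</p>"

        print(validate_username("jd_meier"))
        print(validate_quantity("7"))
        print(render_greeting("<script>alert('xss')</script>"))  # rendered inert by encoding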

    Threats and Attacks Explained

    1.  Brute force attacks. Attacks that use raw computer processing power to try different permutations of any variable that could expose a security hole. For example, if an attacker knew that access required an 8-character username and a 10-character password, the attacker could iterate through every possible combination of those 18 characters (256 multiplied by itself 18 times, if each character can take 256 values) in order to attempt to gain access to a system. No intelligence is used to filter or shape for likely combinations.
    2. Buffer overflows. The maximum size of a given variable (string or otherwise) is exceeded, forcing unintended program processing. In this case, the attacker uses this behavior to cause insertion and execution of code in such a way that the attacker gains control of the program in which the buffer overflow occurs. Depending on the program’s privileges, the seriousness of the security breach will vary.
    3. Canonicalization attacks. There are multiple ways to access the same object and an attacker uses a method to bypass any security measures instituted on the primary intended methods of access. Often, the unintended methods of access can be less secure deprecated methods kept for backward compatibility.
    4. Cookie manipulation. Through various methods, an attacker will alter the cookies stored in the browser. Attackers will then use the cookie to fraudulently authenticate themselves to a service or Web site.
    5. Cookie replay attacks. Reusing a previously valid cookie to deceive the server into believing that a previously authenticated session is still in progress and valid.
    6. Credential theft. Stealing the verification part of an authentication pair (identity + credentials = authentication). Passwords are a common credential.
    7. Cross-Site Request Forgery (CSRF). Interacting with a web site on behalf of another user to perform malicious actions. A site that assumes all requests it receives are intentional is vulnerable to a forged request.
    8. Cross-site scripting (XSS). An attacker is able to inject executable code (script) into a stream of data that will be rendered in a browser. The code will be executed in the context of the user’s current session and will gain privileges to the site and information that it would not otherwise have.
    9. Connection pooling. The practice of creating and then reusing a connection resource as a performance optimization. In a security context, this can result in a connection previously used by a highly privileged user being reused for a lower-privileged user or purpose, potentially exposing a vulnerability if the connection is not reauthorized when used by a new identity.
    10. Data tampering. An attacker violates the integrity of data by modifying it in local memory, in a data-store, or on the network. Modification of this data could provide the attacker with access to a service through a number of the different methods listed in this document.
    11. Denial of service. Denial of service (DoS) is the process of making a system or application unavailable. For example, a DoS attack might be accomplished by bombarding a server with requests to consume all available system resources, or by passing the server malformed input data that can crash an application process.
    12. Dictionary attack. Use of a list of likely access methods (usernames, passwords, coding methods) to try and gain access to a system. This approach is more focused and intelligent than the “brute force” attack method, so as to increase the likelihood of success in a shorter amount of time.
    13. Disclosure of sensitive/confidential data. Sensitive data is exposed in some unintended way to users who do not have the proper privileges to see it. This can often be done through parameterized error messages, where an attacker will force an error and the program will pass sensitive information up through the layers of the program without filtering it. This can be personally identifiable information (i.e., personal data) or system data.
    14. Elevation of privilege. A user with limited privileges assumes the identity of a privileged user to gain privileged access to an application. For example, an attacker with limited privileges might elevate his or her privilege level to compromise and take control of a highly privileged and trusted process or account. More information about this attack in the context of Windows Azure can be found in the Security Best Practices for Developing Windows Azure Applications at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=0ff0c25f-dc54-4f56-aae7-481e67504df6
    15. Encryption. The process of taking sensitive data and changing it in such a way that it is unrecognizable to anyone but those who know how to decode it. Different encryption methods have different strengths based on how easy it is for an attacker to obtain the original information through whatever methods are available.
    16. Information disclosure. Unwanted exposure of private data. For example, a user views the contents of a table or file that he or she is not authorized to open, or monitors data passed in plaintext over a network. Some examples of information disclosure vulnerabilities include the use of hidden form fields, comments embedded in Web pages that contain database connection strings and connection details, and weak exception handling that can lead to internal system-level details being revealed to the client. Any of this information can be very useful to the attacker.
    17. Luring attacks. An attacker lures a higher-privileged user into taking an action on his or her behalf. This is not an authorization failure but rather a failure of the system to properly inform the user.
    18. Man-in-the-middle attacks. A person intercepts both the client and server communications and then acts as an intermediary between the two without each ever knowing. This gives the “middle man” the ability to read and potentially modify messages from either party in order to implement another type of attack listed here.
    19. Network eavesdropping. Listening to network packets and reassembling the messages being sent back and forth between one or more parties on the network. While not an attack itself, network eavesdropping can easily intercept information for use in specific attacks listed in this document.
    20. Open Redirects. Attacker provides a URL to a malicious site when allowed to input a URL used in a redirect. This allows the attacker to direct users to sites that perform phishing attacks or other malicious actions.
    21. Password cracking. If the attacker cannot establish an anonymous connection with the server, he or she will try to establish an authenticated connection. For this, the attacker must know a valid username and password combination. If you use default account names, you are giving the attacker a head start. Then the attacker only has to crack the account’s password. The use of blank or weak passwords makes the attacker’s job even easier.
    22. Repudiation. The ability of users (legitimate or otherwise) to deny that they performed specific actions or transactions. Without adequate auditing, repudiation attacks are difficult to prove.
    23. Session hijacking. Also known as man-in-the-middle attacks, session hijacking deceives a server or a client into accepting the upstream host as the actual legitimate host. Instead, the upstream host is an attacker’s host that is manipulating the network so the attacker’s host appears to be the desired destination.
    24. Session replay. An attacker steals messages off of the network and replays them in order to steal a user’s session.
    25. Session fixation. An attacker sets (fixates) another person’s session identifier artificially. The attacker must know that a particular Web service accepts any session ID that is set externally; for example, the attacker sets up a URL such as http://unsecurewebservice.com/?sessionID=1234567. The attacker then sends this URL to a valid user, who clicks on it. At this point, a valid session with the ID 1234567 is created on the server. Because the attacker determines this ID, he or she can now hijack the session, which has been authenticated using the valid user’s credentials.
    26. Spoofing. An attempt to gain access to a system by using a false identity. This can be accomplished by using stolen user credentials or a false IP address. After the attacker successfully gains access as a legitimate user or host, elevation of privileges or abuse using authorization can begin.
    27. SQL injection. Failure to validate input in cases where the input is used to construct a SQL statement or will modify the construction of a SQL statement in some way. If the attacker can influence the creation of a SQL statement, he or she can gain access to the database with privileges otherwise unavailable and use this in order to steal or modify information or destroy data.
    28. Throttling. The process of limiting resource usage to keep a particular process from bogging down and/or crashing a system. Relevant as a countermeasure in DoS attacks, where an attacker attempts to crash the system by overloading it with input.
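
    Several of the attacks above (brute force, dictionary attacks, and password cracking) are blunted by combining the throttling idea in item 28 with an account lockout policy.  The sketch below is a simplified, in-memory illustration of those two ideas; a production system would persist the counters, consider distributed attacks, and tune the thresholds.

        import time
        from collections import defaultdict

        MAX_FAILURES = 5            # failed attempts allowed before lockout
        LOCKOUT_SECONDS = 15 * 60   # how long the failure window / lockout lasts

        _failures = defaultdict(list)   # account name -> timestamps of recent failed logons

        def record_failed_logon(account: str) -> None:
            _failures[account].append(time.time())

        def is_locked_out(account: str) -> bool:
            """Lock the account once too many failures occur inside the window."""
            cutoff = time.time() - LOCKOUT_SECONDS
            recent = [t for t in _failures[account] if t > cutoff]
            _failures[account] = recent     # throttle the counter list so it cannot grow unbounded
            return len(recent) >= MAX_FAILURES

        # Example: six bad passwords in a row trips the lockout.
        for _ in range(6):
            record_failed_logon("alice")
        print(is_locked_out("alice"))   # True -> reject further attempts without even checking the password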

    Countermeasures Explained

    1. Assume all input is malicious. Assuming all input is malicious means designing your application to validate all input. User input should never be accepted without being filtered and/or sanitized.
    2. Audit and log activity through all of the application tiers. Log business critical and security sensitive events. This will help you track security issues down and make sense of security problems. Skilled attackers attempt to cover their tracks, so you’ll want to protect your logs.
    3. Avoid storing secrets. Design around storing secrets. If necessary, sometimes they can be avoided by storing them after using a one-way hash algorithm.
    4. Avoid storing sensitive data in the Web space. Anything exposed to the public Internet is considered “web space.” Sensitive data stored in a location that might be compromised by any member of the public places it at much higher risk.
    5. Back up and regularly analyze log files. Some attacks occur over time; regular analysis of logs will allow you to recognize them with sufficient time to respond. Performing regular backups lowers the risk of an attacker covering his tracks by deleting logs of his activities.
    6. Be able to disable accounts. The ability to reactively defend against an attack by shutting out a user should be supported through the ability to disable an account.
    7. Be careful with canonicalization issues. Predictable naming of file resources is convenient for programming, but it is also very convenient for malicious parties to attack. Application logic should not be exposed to users in this manner. Instead, use file names derived from the original names or fed through a one-way hashing algorithm.
    8. Catch exceptions. Unhandled exceptions are at risk of passing too much information to the client. Handle exceptions when possible.
    9. Centralize your input and data validation. Input and data validation should be performed using a common set of code such as a validation library.
    10. Consider a centralized exception management framework. Exception handling frameworks are available publicly and provide an established and tested means for handling exceptions.
    11. Consider authorization granularity. Every object needs to have an authorization control that authorizes access based on the identity of the authenticated party requesting access. Fine grained authorization will control access to each resource, while coarse grained authorization will control access to groups of resources or functional areas of the application.
    12. Consider identity flow. Auditing should be traceable back to the authenticated party. Take note of identity transitions imposed by design decisions like impersonation.
    13. Constrain input. Limit user input to expected ranges and formats.
    14. Constrain, reject, and sanitize your input. Constrain, reject and sanitize should be primary techniques in handling input data.
    15. Cycle your keys periodically. Expiring encryption keys lowers the risk of stolen keys.
    16. Disable anonymous access and authenticate every principal. When possible, require all interactions to occur as an authenticated party as opposed to an anonymous one. This will help facilitate more effective auditing.
    17. Do not develop your own cryptography. Custom cryptography is not difficult for experts to crack. Established cryptography is preferred because it is known to be safe.
    18. Do not leak information to the client. Exception data can potentially contain sensitive data or information exposing program logic. Provide clients only with the error data they need for the UI.
    19. Do not log private data such as passwords. Log files are an attack vector for malicious parties. Limit the risk of their being compromised by not logging sensitive data in the log.
    20. Do not pass sensitive data using the HTTP-GET protocol. Data passed using HTTP GET is appended to the querystring. When users share links by copying and pasting them from the browser address bar, sensitive data may also be inadvertently passed. Pass sensitive data in the body of a POST to avoid this.
    21. Do not rely on client-side validation. Any code delivered to a client is at risk of being compromised. Because of this, it should always be assumed that input validation on the client might have been bypassed.
    22. Do not send passwords over the wire in plaintext. Authentication information communicated over the wire should always be encrypted. This may mean encrypting the values, or encrypting the entire channel with SSL.
    23. Do not store credentials in plaintext. Credentials are sometimes stored in application configuration files, repositories, or sent over email. Always encrypt credentials before storing them.
    24. Do not store database connections, passwords, or keys in plaintext. Configuration secrets should always be stored in encrypted form, external to the code.
    25. Do not store passwords in user stores. In the event that the user store is compromised, an attack should never be able to access passwords. A derivative of a password should be stored instead. A common approach to this is to encrypt a version of the password using a one-way hash with a SALT. Upon authentication, the encrypted password can be re-generated with the SALT and the result can be compared to the original encrypted password.
    26. Do not store secrets in code. Secrets such as configuration settings are convenient to store in code, but are more likely to be stolen. Instead, store them in a secure location such as a secret store.
    27. Do not store sensitive data in persistent cookies. Persistent cookies are stored client-side and provide attackers with ample opportunity to steal sensitive data, be it through encryption cracking or any other means.
    28. Do not trust fields that the client can manipulate (query strings, form fields, cookies, or HTTP headers). All information sent from a client should always be assumed to be malicious. All information from a client should always be validated and sanitized before it is used.
    29. Do not trust HTTP header information. HTTP header manipulation is a threat that can be mitigated by building application logic that assumes HTTP headers are compromised and validates the HTTP headers before using them.
    30. Encrypt communication channels to protect authentication tokens. Authentication tokens are often the target of eavesdropping, theft or replay type attacks. To reduce the risk in these types of attacks, it is useful to encrypt the channel the tokens are communicated over. Typically this means protecting a login page with SSL encryption.
    31. Encrypt sensitive cookie state. Sensitive data contained within cookies should always be encrypted.
    32. Encrypt the contents of the authentication cookies. In the case where cookies are compromised, they should not contain clear-text session data. Encrypt sensitive data within the session cookie.
    33. Encrypt the data or secure the communication channel. Sensitive data should only be passed in encrypted form. This can be accomplished by encrypting the individual items that are sent over the wire, or encrypting the entire channel as with SSL.
    34. Enforce separation of privileges. Avoid building generic roles with privileges to perform a wide range of actions. Roles should be designed for specific tasks and provided the minimum privileges required for those tasks.
    35. Enforce unique transactions. Identify each transaction from a client uniquely to help prevent replay and forgery attacks.
    36. Identify malicious behavior. By monitoring site interactions that fall outside of normal usage patterns, you can quickly identify malicious behavior. This is closely related to "Know what good traffic looks like."
    37. Keep unencrypted data close to the algorithm. Use decrypted data as soon as it is decrypted, and then dispose of it promptly. Unencrypted data should not be held in memory in code.
    38. Know what good traffic looks like. Active auditing and logging of a site will allow you to recognize what regular traffic and usage patterns look like. This is a required step in order to be able to identify malicious behavior.
    39. Limit session lifetime. Longer session lifetimes provide greater opportunity for Cross-Site Scripting or Cross-Site Request Forgery attacks to add activity onto an old session.
    40. Log detailed error messages. Highly detailed error message logging can provide clues to attempted attacks.
    41. Log key events. Profile your application and note key or sensitive operations and/or events, and log these events during application operation.
    42. Maintain separate administration privileges. Consider granularity of authorization in the administrative interfaces as well. Avoid combining administrator roles with distinctly different roles such as development, test or deployment.
    43. Make sure that users do not bypass your checks. Checks can be bypassed through canonicalization attacks or by bypassing client-side validation. Application design should avoid exposing application logic and avoid segregating validation into a flow that can be interrupted; for example, an ASPX page that performs only validations and then redirects. Instead, validation routines should be tightly bound to the data they are validating.
    44. Pass Forms authentication cookies only over HTTPS connections. Cookies are at risk of theft and replay type attacks. Encrypting them with SSL helps reduce the risk of these types of attacks.
    45. Protect authentication cookies. Cookies can be manipulated with Cross-Site Scripting attacks, so encrypt sensitive data in cookies and use browser features such as the HttpOnly cookie attribute.
    46. Provide strong access controls on sensitive data stores. Access to secret stores should be authorized. Protect the secret store as you would other secure resources by requiring authentication and authorization as appropriate.
    47. Reject known bad input. Rejecting known bad input involves screening input for values that are known to be problematic or malicious. NOTE: Rejecting should never be the primary means of screening bad input; it should always be used in conjunction with input sanitization.
    48. Require strong passwords. Enforce password complexity requirements by requiring long passwords with a combination of uppercase, lowercase, numeric, and special characters (for example, punctuation). This helps mitigate the threat posed by dictionary attacks. If possible, also enforce automatic password expiry.
    49. Restrict user access to system-level resources. Users should not be touching system resources directly. This should be accomplished through an intermediary such as the application. System resources should be restricted to application access.
    50. Retrieve sensitive data on demand. Sensitive data held in application memory gives attackers another location where they can attempt to access it, and it is often held there in unencrypted form. To minimize the risk of sensitive data theft, use sensitive data immediately and then clear it from memory.
    51. Sanitize input. Sanitizing input is the opposite of rejecting bad input. Sanitizing input is the process of filtering input data to only accept values that are known to be safe. Alternatively, input can be rendered innocuous by converting it to safe output through output encoding methods.
    52. Secure access to log files. Log files should only be accessible to administrators, auditors, or administrative interfaces. An attacker with access to the logs might be able to glean sensitive data or program logic from logs.
    53. Secure the communication channel for remote administration. Eavesdropping and replay attacks can target administration interfaces as well. If using a web based administration interface, use SSL.
    54. Secure your configuration store. The configuration store should require authenticated access and should store sensitive settings or information in an encrypted format.
    55. Secure your encryption keys. Encryption keys should be treated as secrets or sensitive data. They should be secured in a secret store or key repository.
    56. Separate public and restricted areas. Applications that contain public front-ends as well as content that requires authentication to access should be partitioned in the same manner. Public facing pages should be hosted in a separate file structure, directory or domain from private content.
    57. Store keys in a restricted location. Protect keys with authorization policies.
    58. Support password expiration periods. User passwords and account credentials are commonly compromised. Expiration policies help mitigate attacks that use stolen accounts, including those of disgruntled employees who have been terminated.
    59. Use account lockout policies for end-user accounts. Cap the number of failed login attempts; once the cap is exceeded, lock the account against further attempts. Lockout helps prevent dictionary and brute-force attacks.
    60. Use application instrumentation to expose behavior that can be monitored. Log or monitor the application transactions that are most likely to be targeted by malicious interactions, for example by adding logging code to an exception handler or logging individual API calls. Watching these transactions gives you a better chance of identifying malicious behavior quickly.
    61. Use authentication mechanisms that do not require clear text credentials to be passed over the network. A variety of authentication approaches exist for web-based applications; some use tokens, while others pass user credentials (user name/ID and password) over the wire. When possible, it is safer to use an authentication mechanism that does not pass the credentials at all. If credentials must be passed, encrypt them and/or send them over an encrypted channel such as SSL.
    62. Use least privileged accounts. The privileges granted to the authenticated party should be the minimum required to perform all required tasks. Be careful of using existing roles that have permissions beyond what is required.
    63. Use least privileged process and service accounts. Allocate accounts specifically for process and service accounts. Lock down the privileges of these accounts separately from other accounts.
    64. Use multiple gatekeepers. Passing authentication should not be a golden ticket to all functionality. System and application resources should have restricted levels of access depending on the authenticated party. Some designs might also enforce multiple authentications, sometimes distributed through application tiers.
    65. Use SSL to protect session authentication cookies. Session authentication cookies contain data that can be used in a number of different attacks such as replay, Cross-Site Scripting or Cross-Site Request Forgery. Protecting these cookies helps mitigate these risks.
    66. Use strong authentication and authorization on administration interfaces. Always require authenticated access to administrative interfaces. When applicable, also enforce separation of privileges within the administrative interfaces.
    67. Use structured exception handling. A structured approach to exception handling lowers the risk of unexpected exceptions from going unhandled.
    68. Use the correct algorithm and correct key length. Different encryption algorithms are preferred for varying data types and scenarios.
    69. Use tried and tested platform features. Many cryptographic features are available through the .NET Framework. These are proven features and should be used in favor of custom methods.
    70. Validate all values sent from the client. Similar to not relying on client-side validation, any input from a client should be assumed to have been tampered with and should always be validated before it is used. This encompasses user input, cookie values, HTTP headers, and anything else sent over the wire from the client.
    71. Validate data for type, length, format, and range. Data validation should cover these primary tenets: validate the data type, string length, string or numeric format, and numeric range (see the sketch after this list).
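
    To make items 51, 70, and 71 concrete, here is a minimal C# sketch of server-side validation that checks a single value for type, length, format, and range. The field name, pattern, and limits are illustrative assumptions, not part of the original guidance.

        using System;
        using System.Text.RegularExpressions;

        public static class InputValidator
        {
            // Allow-list pattern: digits only (format check). Illustrative assumption.
            private static readonly Regex QuantityFormat = new Regex(@"^\d{1,4}$", RegexOptions.Compiled);

            public static bool TryParseQuantity(string rawInput, out int quantity)
            {
                quantity = 0;

                // Length check before anything else.
                if (string.IsNullOrEmpty(rawInput) || rawInput.Length > 4)
                    return false;

                // Format check against an allow-list pattern (accept known good, rather than reject known bad).
                if (!QuantityFormat.IsMatch(rawInput))
                    return false;

                // Type check.
                if (!int.TryParse(rawInput, out quantity))
                    return false;

                // Range check (illustrative business rule).
                return quantity >= 1 && quantity <= 1000;
            }
        }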

    SDL Considerations
    For more information on preferred encryption algorithms and key lengths, see the Security Development Lifecycle at http://www.microsoft.com/security/sdl/.

  • J.D. Meier's Blog

    ASP.NET Security Scenarios on Azure

    • 8 Comments

    As part of our patterns & practices Azure Security Guidance project, we’re putting together a series of Application Scenarios and Solutions.  Our goal is to show the most common application scenarios on the Microsoft Azure platform.  This is your chance to give us feedback on whether we have the right scenarios, and whether you agree with the baseline solution.

    ASP.NET Security Scenarios on Windows Azure
    We’re taking a crawl, walk, run approach and starting with the basic scenarios first.  This is our application scenario set for ASP.NET:

    • ASP.NET Forms Auth to Azure Storage
    • ASP.NET Forms Auth to SQL Azure
    • ASP.NET to AD with Claims
    • ASP.NET to AD with Claims (Federation)

    ASP.NET Forms Auth to Azure Storage

    Scenario

    [scenario diagram]

    Solution

    [solution diagram]

    Solution Summary Table

    Area Notes
    Authentication
    • ASP.NET application authenticates users with Forms authentication.
    • ASP.NET accesses the membership store through the TableStorageMembershipProvider.
    • ASP.NET authenticates against Azure Storage using a shared key.
    Authorization
    • ASP.NET accesses the Role store in Azure Storage through the TableStorageRoleProvider.
    • ASP.NET application performs role checks (see the sketch after this table).
    Communication
    • Protect credentials over the wire using SSL.
    • A shared key protects communication between ASP.NET and Azure Storage.
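
    As a rough sketch of the authorization row above, here is how a Forms-authenticated ASP.NET page might perform a role check through whatever role provider is configured (the TableStorageRoleProvider in this scenario). The page name and the "Manager" role are illustrative assumptions.

        using System;
        using System.Web.Security;
        using System.Web.UI;

        public partial class Reports : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Forms authentication has already established the caller's identity;
                // the role check is routed to the configured RoleProvider.
                if (!Roles.IsUserInRole(User.Identity.Name, "Manager"))
                {
                    // Deny access instead of rendering restricted content.
                    Response.StatusCode = 403;
                    Response.End();
                }
            }
        }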

    ASP.NET Forms Authentication to SQL Azure

    Scenario

    [scenario diagram]

    Solution

    [solution diagram]

    Solution Summary Table

    Area Notes
    Authentication
    • Authenticate users with Forms Authentication.
    • Store users in SQL Azure.
    • ASP.NET connects to SQL Azure using a SQL user account.
    • Application identity is mapped to SQL account.
    Authorization
    • Store roles in SQL Azure.
    • ASP.NET checks roles through the SqlRoleProvider.
    Communication
    • Protect credentials over the wire with SSL.
    • ASP.NET connects to SQL Azure over port 1433 (see the connection sketch after this table).
    • SQL authentication occurs over secure TDS.
    • SQL Azure is configured to accept connections only from expected client IP addresses.
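
    To illustrate the communication row above, here is a minimal sketch of opening an encrypted connection to SQL Azure over port 1433. The server, database, and credential values are placeholders; in practice the connection string lives in configuration and should be protected.

        using System.Data.SqlClient;

        public static class SqlAzureConnectionFactory
        {
            public static SqlConnection Open()
            {
                // Encrypt=True forces the tabular data stream (TDS) traffic to be
                // encrypted; SQL Azure accepts connections only on port 1433.
                const string connectionString =
                    "Server=tcp:yourserver.database.windows.net,1433;" +
                    "Database=Membership;" +
                    "User ID=appuser@yourserver;" +
                    "Password=<placeholder>;" +
                    "Encrypt=True;TrustServerCertificate=False;";

                var connection = new SqlConnection(connectionString);
                connection.Open();
                return connection;
            }
        }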

    ASP.NET to AD with Claims

    Scenario

    [scenario diagram]

    Solution

    [solution diagram]

    Solution Summary Table

    Area Notes
    Authentication
    • Authenticate users against Active Directory.
    • Obtain user credentials as claims.
    • Use ADFS to provide claims.
    • Authenticate users in application using claims.
    • Use Windows Identity Foundation in ASP.NET app to manage SAML tokens.
    Authorization
    • Authorize users against claims (see the sketch after this table).
    • Authorize in application logic.
    • Store additional claims beyond what AD can provide in a local SQL Server database.
    Communication
    • Claims are passed using WS-* protocols.
    • Protect claims over the wire using Security Assertion Markup Language (SAML).
    • Protect SAML tokens with SSL.
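
    As a sketch of the claims-based checks above, here is how application code might read a role claim through Windows Identity Foundation (the pre-.NET 4.5 Microsoft.IdentityModel API). The role value is an illustrative assumption.

        using System.Linq;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;   // Windows Identity Foundation

        public static class ClaimsAuthorization
        {
            public static bool IsInRole(string role)
            {
                // After WIF validates the SAML token from ADFS, it places a
                // claims-aware principal on the current thread.
                var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
                if (principal == null)
                    return false;

                var identity = principal.Identity as IClaimsIdentity;
                if (identity == null)
                    return false;

                // Authorize against a role claim issued by the STS.
                return identity.Claims.Any(c =>
                    c.ClaimType == ClaimTypes.Role && c.Value == role);
            }
        }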

    ASP.NET to AD with Claims (Federation)

    Scenario

    [scenario diagram]

    Solution

    [solution diagram]

    Solution Summary Table

    Area Notes
    Authentication
    • Authenticate client browser against Active Directory.
    • Obtain user credentials as claims.
    • Use Active Directory Federation Services (ADFS) to provide claims.
    • Authenticate users in application using claims.
    • Establish trust relationship between ASP.NET app and Azure hosted Secure Token Service (STS).
    • Establish trust relationship between ADFS and Azure STS.
    Authorization
    • Authorize users against claims.
    • Authorize in application logic.
    • Store additional claims beyond what AD can provide in a local SQL Server database.
    Communication
    • Claims are passed using WS-* protocols.
    • Protect claims over the wire using Security Assertion Markup Language (SAML).
    • Protect SAML tokens with SSL.
  • J.D. Meier's Blog

    Cheat Sheet: patterns & practices Pattern Catalog Posted to CodePlex

    • 2 Comments

    As part of our patterns & practices Application Architecture Guide 2.0 project, we've been hunting and gathering our patterns from across our patterns & practices catalog.  Here's an initial draft of our patterns & practices Patterns Catalog at a Glance:

    Pattern Catalog
    To collect the patterns, we first identified the key projects that focused on patterns:

    Next, we organized the patterns and summarized them in tables.  You can browse the tables below to see which patterns are associated with which project.

    Composite Application Guidance for WPF

    Category Patterns
    Composite User Interface Patterns: Adapter; Command; Composite and Composite View
    Modularity: Event Aggregator; Facade; Separated Interface and Plug In; Service Locator
    Testability: Inversion of Control; Dependency Injection; Separated Presentation; Supervising Controller; Presentation Model

    Data Patterns

    Category Patterns
    Data Movement Patterns: Data Replication; Master-Master Replication; Master-Slave Replication; Master-Master Row-Level Synchronization; Master-Slave Snapshot Replication; Capture Transaction Details; Master-Slave Transactional Incremental Replication; Master-Slave Cascading Replication
    Pattlets: Maintain Data Copies; Application-Managed Data Copies; Extract-Transform-Load (ETL); Topologies for Data Copies

     Enterprise Solution Patterns

    Category Patterns
    Deployment Patterns: Deployment Plan; Layered Application; Three-Layered Services Application; Tiered Distribution; Three-Tiered Distribution
    Distributed Systems: Broker; Data Transfer Object; Singleton
    Performance and Reliability: Server Clustering; Load-Balanced Cluster; Failover Cluster
    Services Patterns: Service Interface; Service Gateway
    Web Presentation Patterns: Model-View-Controller; Page Controller; Front Controller; Intercepting Filter; Page Cache; Observer
    Pattlets: Abstract Factory; Adapter; Application Controller; Application Server; Assembler; Bound Data Control; Bridge; Command(s); Decorator; Façade; Four-Tiered Distribution; Gateway; Layer Supertype; Layers; Mapper; Mediator; MonoState; Observer; Naming Service; Page Data Caching; Page Fragment Caching; Presentation-Abstraction-Controller; Remote Façade; Server Farm; Special Case; Strategy; Table Data Gateway; Table Module; Template Method

    Integration Patterns

    Category Patterns
    Integration Layer: Entity Aggregation; Process Integration; Portal Integration
    System Connections: Data Integration; Function Integration; Service-Oriented Integration; Presentation Integration
    Additional Integration Patterns: Pipes and Filters; Gateway

    Web Services Security Patterns

    Category Patterns
    Authentication: Brokered Authentication; Brokered Authentication: Kerberos; Brokered Authentication: X509 PKI; Brokered Authentication: STS; Direct Authentication
    Authorization: Protocol Transition with Constrained Delegation; Trusted Subsystem
    Exception Management: Exception Shielding
    Message Encryption: Data Confidentiality
    Message Replay Detection: Message Replay Detection
    Message Signing: Data Origin Authentication
    Message Validation: Message Validator
    Deployment: Perimeter Service Router

    Pattern Summaries
    The following pattern summaries are brief descriptions of each pattern, along with a link to the relevant MSDN pages.

    Composite Application Guidance for WPF

    Modularity

    • Service Locator. Create a service locator that contains references to the services and that encapsulates the logic to locate them. In your classes, use the service locator to obtain service instances.
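
    A minimal, framework-agnostic C# sketch of the idea (not the Composite Application Library implementation): register service instances against their interface type, then resolve them from consuming classes. The class and method names are illustrative assumptions.

        using System;
        using System.Collections.Generic;

        public class ServiceLocator
        {
            private readonly Dictionary<Type, object> services = new Dictionary<Type, object>();

            // Register a service instance under its interface type.
            public void Register<TService>(TService instance)
            {
                services[typeof(TService)] = instance;
            }

            // Look the instance up again wherever it is needed.
            public TService Resolve<TService>()
            {
                return (TService)services[typeof(TService)];
            }
        }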

    Testability

    • Dependency Injection. Do not instantiate the dependencies explicitly in your class. Instead, declaratively express dependencies in your class definition. Use a Builder object to obtain valid instances of your object's dependencies and pass them to your object during the object's creation and/or initialization.
    • Inversion of Control. Delegate the function of selecting a concrete implementation type for the classes' dependencies to an external component or source.
    • Presentation Model. Separate the responsibilities for the visual display and the user interface state and behavior into different classes named, respectively, the view and the presentation model. The view class manages the controls on the user interface and the presentation model class acts as a façade on the model with UI-specific state and behavior, by encapsulating the access to the model and providing a public interface that is easy to consume from the view (for example, using data binding).
    • Separated Presentation. Separate the presentation logic from the business logic into different artifacts. The Separated Presentation pattern can be implemented in multiple ways, such as Supervising Controller or Presentation Model.
    • Supervising Controller. Separate the responsibilities for the visual display and the event handling behavior into different classes named, respectively, the view and the presenter. The view class manages the controls on the user interface and forwards user events to a presenter class. The presenter contains the logic to respond to the events, update the model (business logic and data of the application) and, in turn, manipulate the state of the view.
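
    A minimal, UI-framework-agnostic sketch of Supervising Controller; the view members (Quantity, Status, SaveClicked) and class names are illustrative assumptions, so treat this only as an illustration of the shape, not the Composite WPF implementation.

        using System;

        // The view exposes state and raises events; it contains no logic.
        public interface IOrderView
        {
            string Quantity { get; }            // raw user input
            string Status { set; }              // state the presenter controls
            event EventHandler SaveClicked;
        }

        public class OrderModel
        {
            public int Quantity { get; set; }
        }

        // The presenter responds to view events, updates the model,
        // and pushes state back to the view.
        public class OrderPresenter
        {
            private readonly IOrderView view;
            private readonly OrderModel model;

            public OrderPresenter(IOrderView view, OrderModel model)
            {
                this.view = view;
                this.model = model;
                this.view.SaveClicked += (sender, e) => OnSave();
            }

            private void OnSave()
            {
                int quantity;
                if (int.TryParse(view.Quantity, out quantity))
                {
                    model.Quantity = quantity;   // update the model
                    view.Status = "Saved";       // manipulate view state
                }
                else
                {
                    view.Status = "Invalid quantity";
                }
            }
        }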

    Data Movement Patterns

    • Data Replication.  Build on the data movement building block as described in Move Copy of Data by adding refinements that are appropriate to replication.
    • Master-Master Replication. Copy data from the source to the target and detect and resolve any update conflicts that have occurred since the last replication (due to changes to the same data on the source and target). The solution consists of two replication links between the source and the target, in opposite directions; both links transmit the same replication set. Such a pair of replication links is referred to as related links in the more detailed patterns.
    • Master-Slave Replication. Copy data from the source to the target without regard to updates that may have occurred to the replication set at the target since the last replication.
    • Master-Master Row-Level Synchronization. Create a pair of related replication links between the source and target. Additionally, create a synchronization controller to manage the synchronization and connect the links. This solution describes the function of one of these replication links. The other replication link behaves the same way, but in the opposite direction. To synchronize more than two copies of the replication set, create the appropriate replication link pair for each additional copy.
    • Master-Slave Snapshot Replication. Make a copy of the source replication set at a specific time (this is known as a snapshot), replicate it to the target, and overwrite the target data. In this way, any changes that may have occurred to the target replication set are replaced by the new source replication set.
    • Capture Transaction Details. Create additional database objects, such as triggers and (shadow) tables, to record changes to all tables belonging to the replication set.
    • Master-Slave Transactional Incremental Replication. Acquire the information about committed transactions from the source and replay the transactions in the correct sequence when they are written to the target.
    • Master-Slave Cascading Replication. Increase the number of replication links between the source and target by adding one or more intermediary targets between the original source and the end target databases. These intermediaries are data stores that take a replication set from the source, and thus act as a target in a first replication link. Then they act as sources to move the data to the next replication link and so on until they reach the cascade end targets (CETs).

    Enterprise Solution Patterns

    Deployment Patterns

    • Deployment Plan. Create a deployment plan that describes which tier each of the application's components will be deployed to. While assigning components to tiers, if it’s found that a tier is not a good match for a component, determine the cost and benefits of modifying the component to better work with the infrastructure, or modifying the infrastructure to better suit the component.
    • Layered Application. Separate the components of your solution into layers. The components in each layer should be cohesive and at roughly the same level of abstraction. Each layer should be loosely coupled to the layers underneath.
    • Three-Layered Services Application. Base your layered architecture on three layers: presentation, business, and data. This pattern presents an overview of the responsibilities of each layer and the components that compose each layer.
    • Tiered Distribution. Structure your servers and client computers into a set of physical tiers and distribute your application components appropriately to specific tiers.
    • Three-Tiered Distribution. Structure your application around three physical tiers: client, application, and database.

    Distributed Systems

    • Broker. Use the Broker pattern to hide the implementation details of remote service invocation by encapsulating them into a layer other than the business component itself.
    • Data Transfer Object. Create a data transfer object (DTO) that holds all data that is required for the remote call. Modify the remote method signature to accept the DTO as the single parameter and to return a single DTO parameter to the client. After the calling application receives the DTO and stores it as a local object, the application can make a series of individual procedure calls to the DTO without incurring the overhead of remote calls.
    • Singleton. Singleton provides a global, single instance by making the class create a single instance of itself, allowing other objects to access this instance through a globally accessible class method that returns a reference to the instance, and declaring the class constructor as private so that no other object can create a new instance.
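
    A minimal C# sketch of Singleton as described above; the class name is an illustrative assumption.

        public sealed class Configuration
        {
            // The single instance; static initialization is thread-safe in .NET.
            private static readonly Configuration instance = new Configuration();

            // Private constructor prevents other objects from creating instances.
            private Configuration() { }

            // Globally accessible accessor that returns a reference to the instance.
            public static Configuration Instance
            {
                get { return instance; }
            }
        }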

    Performance and Reliability

    • Server Clustering. Design your application infrastructure so that your servers appear to users and applications as virtual unified computing resources. One means by which to achieve this virtualization is by using a server cluster. A server cluster is the combination of two or more servers that are interconnected to appear as one, thus creating a virtual resource that enhances availability, scalability, or both.
    • Load-Balanced Cluster. Install your service or application onto multiple servers that are configured to share the workload. This type of configuration is a load-balanced cluster. Load balancing scales the performance of server-based programs, such as a Web server, by distributing client requests across multiple servers. Load balancing technologies, commonly referred to as load balancers, receive incoming requests and redirect them to a specific host if necessary. The load-balanced hosts concurrently respond to different client requests, even multiple requests from the same client.
    • Failover Cluster. Install your application or service on multiple servers that are configured to take over for one another when a failure occurs. The process of one server taking over for a failed server is commonly known as failover. A failover cluster is a set of servers that are configured so that if one server becomes unavailable, another server automatically takes over for the failed server and continues processing. Each server in the cluster has at least one other server in the cluster identified as its standby server.

    Services Patterns

    • Service Interface. Design your application as a collection of software services, each with a service interface through which consumers of the application may interact with the service.
    • Service Gateway. Encapsulate the code that implements the consumer portion of the contract into its own Service Gateway component. Service gateways play a similar role when accessing services as data access components do for access to the application's database. They act as proxies to other services, encapsulating the details of connecting to the source and performing any necessary translation.

    Web Presentation Patterns

    • Model-View-Controller. The Model-View-Controller (MVC) pattern separates the modeling of the domain, the presentation, and the actions based on user input into three separate classes. The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). The view manages the display of information. The controller interprets the mouse and keyboard inputs from the user, informing the model and/or the view to change as appropriate.
    • Page Controller. Use the Page Controller pattern to accept input from the page request, invoke the requested actions on the model, and determine the correct view to use for the resulting page. Separate the dispatching logic from any view-related code. Where appropriate, create a common base class for all page controllers to avoid code duplication and increase consistency and testability.
    • Front Controller. Front Controller solves the decentralization problem present in Page Controller by channeling all requests through a single controller. The controller itself is usually implemented in two parts: a handler and a hierarchy of commands. The handler receives the HTTP POST or GET request from the Web server and retrieves relevant parameters from the request. The handler uses the parameters first to choose the correct command and then transfers control to the command for processing. The commands themselves are also part of the controller; they represent the specific actions as described in the Command pattern.
    • Intercepting Filter. Create a chain of composable filters to implement common pre-processing and post-processing tasks during a Web page request.
    • Page Cache. Use a page cache for dynamic Web pages that are accessed frequently, but change less often.
    • Observer. Use the Observer pattern to maintain a list of interested dependents (observers) in a separate object (the subject). Have all individual observers implement a common Observer interface to eliminate direct dependencies between the subject and the dependent objects.
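
    A minimal C# sketch of Observer as summarized above; the stock-ticker subject and the observer interface are illustrative assumptions.

        using System;
        using System.Collections.Generic;

        // Observers implement a common interface so the subject has no
        // direct dependency on concrete observer types.
        public interface IStockObserver
        {
            void PriceChanged(string symbol, decimal newPrice);
        }

        // The subject maintains the list of interested observers and
        // notifies each of them when its state changes.
        public class StockTicker
        {
            private readonly List<IStockObserver> observers = new List<IStockObserver>();

            public void Attach(IStockObserver observer)
            {
                observers.Add(observer);
            }

            public void UpdatePrice(string symbol, decimal newPrice)
            {
                foreach (var observer in observers)
                    observer.PriceChanged(symbol, newPrice);
            }
        }

        public class ConsoleDisplay : IStockObserver
        {
            public void PriceChanged(string symbol, decimal newPrice)
            {
                Console.WriteLine("{0}: {1}", symbol, newPrice);
            }
        }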

    Integration Patterns

    Integration Layer   

    • Entity Aggregation. Introduce an Entity Aggregation layer that provides a logical representation of the entities at an enterprise level, with physical connections that support access to, and updates of, their respective instances in back-end repositories.
    • Process Integration. Define a business process model that describes the individual steps that make up the complex business function. Create a separate process manager component that can interpret multiple concurrent instances of this model and that can interact with the existing applications to perform the individual steps of the process.
    • Portal Integration. Create a portal application that displays the information retrieved from multiple applications in a unified user interface. The user can then perform the required tasks based on the information displayed in this portal.

    Integration Topologies   

    • Message Broker. Extend the integration solution by using Message Broker. A message broker is a physical component that handles the communication between applications. Instead of communicating with each other, applications communicate only with the message broker. An application sends a message to the message broker, providing the logical name of the receivers. The message broker looks up applications registered under the logical name and then passes the message to them.
    • Message Bus. Connect all applications through a logical component known as a message bus. A message bus specializes in transporting messages between applications. A message bus contains three key elements: a set of agreed-upon message schemas, a set of common command messages, and a shared infrastructure for sending bus messages to recipients.
    • Publish / Subscribe. Extend the communication infrastructure by creating topics or by dynamically inspecting message content. Enable listening applications to subscribe to specific messages. Create a mechanism that sends messages to all interested subscribers.
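
    To make the Publish/Subscribe idea concrete, here is a minimal in-process C# sketch; a real integration scenario would use messaging infrastructure rather than in-memory delegates, and the topic and message types are illustrative assumptions.

        using System;
        using System.Collections.Generic;

        public class SimplePublishSubscribe
        {
            // Subscribers registered per topic.
            private readonly Dictionary<string, List<Action<string>>> subscribers =
                new Dictionary<string, List<Action<string>>>();

            public void Subscribe(string topic, Action<string> handler)
            {
                List<Action<string>> handlers;
                if (!subscribers.TryGetValue(topic, out handlers))
                {
                    handlers = new List<Action<string>>();
                    subscribers[topic] = handlers;
                }
                handlers.Add(handler);
            }

            public void Publish(string topic, string message)
            {
                // Send the message to every subscriber interested in the topic.
                List<Action<string>> handlers;
                if (subscribers.TryGetValue(topic, out handlers))
                {
                    foreach (var handler in handlers)
                        handler(message);
                }
            }
        }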

    System Connections   

    • Data Integration. Integrate applications at the logical data layer by allowing the data in one application (the source) to be accessed by other applications (the target).
    • Function Integration. Integrate applications at the business logic layer by allowing the business function in one application (the source) to be accessed by other applications (the target).
    • Service-Oriented Integration. To integrate applications at the business logic layer, enable systems to consume and provide XML-based Web services. Use Web Services Description Language (WSDL) contracts to describe the interfaces to these systems. Ensure interoperability by making your implementation compliant with the Web Services family of specifications.
    • Presentation Integration. Access the application's functionality through the user interface by simulating a user's input and by reading data from the screen display.

    Web Services Security Patterns

    Authentication

    • Brokered Authentication. The Web service validates the credentials presented by the client, without the need for a direct relationship between the two parties. An authentication broker that both parties trust independently issues a security token to the client. The client can then present credentials, including the security token, to the Web service.
    • Brokered Authentication: Kerberos. Use the Kerberos protocol to broker authentication between clients and Web services.
    • Brokered Authentication: X509 PKI.  Use brokered authentication with X.509 certificates issued by a certificate authority (CA) in a public key infrastructure (PKI) to verify the credentials presented by the requesting application.
    • Brokered Authentication: STS.  Use brokered authentication with a security token issued by a Security Token Service (STS). The STS is trusted by both the client and the Web service to provide interoperable security tokens.
    • Direct Authentication. The Web service acts as an authentication service to validate credentials from the client. The credentials, which include proof-of-possession that is based on shared secrets, are verified against an identity store.

    Authorization   

    • Protocol Transition with Constrained Delegation.  Use the Kerberos protocol extensions in Windows Server. The extensions require the user ID but not the password. You still need to establish trust between the client application and the Web service; however, the application is not required to store or send passwords.
    • Trusted Subsystem.  The Web service acts as a trusted subsystem to access additional resources. It uses its own credentials instead of the user's credentials to access the resource.

    Exception Management   

    • Exception Shielding.  Sanitize unsafe exceptions by replacing them with exceptions that are safe by design. Return only those exceptions to the client that have been sanitized or exceptions that are safe by design. Exceptions that are safe by design do not contain sensitive information in the exception message, and they do not contain a detailed stack trace, either of which might reveal sensitive information about the Web service's inner workings.
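
    A minimal sketch of Exception Shielding at a service boundary; the service and method names are illustrative assumptions, and a real WCF service would typically throw a fault contract and log through its own logging infrastructure rather than the console.

        using System;

        public class OrdersService
        {
            public string SubmitOrder(string orderXml)
            {
                try
                {
                    return ProcessOrder(orderXml);
                }
                catch (Exception ex)
                {
                    // Log the full details privately for administrators.
                    Console.Error.WriteLine(ex);

                    // Return only a sanitized, safe-by-design error to the caller:
                    // no stack trace, no connection strings, no internal type names.
                    throw new ApplicationException(
                        "The order could not be processed. Reference: " + Guid.NewGuid());
                }
            }

            private static string ProcessOrder(string orderXml)
            {
                // Placeholder for the real processing logic.
                if (string.IsNullOrEmpty(orderXml))
                    throw new ArgumentException("orderXml is required.");
                return "OK";
            }
        }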

    Message Encryption   

    • Data Confidentiality.  Use encryption to protect sensitive data that is contained in a message. Unencrypted data, which is known as plaintext, is converted to encrypted data, which is known as cipher-text. Data is encrypted with an algorithm and a cryptographic key. Cipher-text is then converted back to plaintext at its destination.

    Message Replay Detection   

    • Message Replay Detection.  Cache an identifier for incoming messages, and use message replay detection to identify and reject messages that match an entry in the replay detection cache.
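
    A minimal sketch of a replay detection cache as described above; the expiry window and the in-memory dictionary are illustrative assumptions (a real service would need a thread-safe, bounded cache).

        using System;
        using System.Collections.Generic;

        public class ReplayDetectionCache
        {
            private readonly Dictionary<string, DateTime> seenMessages = new Dictionary<string, DateTime>();
            private readonly TimeSpan window;

            public ReplayDetectionCache(TimeSpan window)
            {
                this.window = window;
            }

            // Returns true if the message is fresh; false if it matches a cached entry (a replay).
            public bool TryAccept(string messageId, DateTime utcNow)
            {
                // Purge identifiers that have aged out of the detection window.
                var expired = new List<string>();
                foreach (var entry in seenMessages)
                {
                    if (utcNow - entry.Value > window)
                        expired.Add(entry.Key);
                }
                foreach (var key in expired)
                    seenMessages.Remove(key);

                if (seenMessages.ContainsKey(messageId))
                    return false;                  // replay detected, reject the message

                seenMessages[messageId] = utcNow;  // remember the identifier
                return true;
            }
        }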

    Message Signing   

    • Data Origin Authentication.  Use data origin authentication, which enables the recipient to verify that messages have not been tampered with in transit (data integrity) and that they originate from the expected sender (authenticity).

    Message Validation   

    • Message Validator.  The message validation logic enforces a well-defined policy that specifies which parts of a request message are required for the service to successfully process it. It validates the XML message payloads against an XML schema (XSD) to ensure that they are well-formed and consistent with what the service expects to process. The validation logic also measures the messages against certain criteria by examining the message size, the message content, and the character sets that are used. Any message that does not meet the criteria is rejected.
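
    A minimal sketch of a message validator that enforces a size limit and validates the payload against an XML schema; the size limit is an illustrative assumption.

        using System.IO;
        using System.Xml;
        using System.Xml.Schema;

        public static class MessageValidator
        {
            private const int MaxMessageChars = 64 * 1024;   // illustrative size limit

            public static bool IsValid(string messageXml, XmlSchemaSet schemas)
            {
                // Reject empty or oversized messages before parsing.
                if (string.IsNullOrEmpty(messageXml) || messageXml.Length > MaxMessageChars)
                    return false;

                var settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;
                settings.Schemas.Add(schemas);

                bool valid = true;
                settings.ValidationEventHandler += (sender, e) => { valid = false; };

                using (var reader = XmlReader.Create(new StringReader(messageXml), settings))
                {
                    try
                    {
                        while (reader.Read()) { }   // walking the document triggers schema validation
                    }
                    catch (XmlException)
                    {
                        return false;               // not well-formed
                    }
                }
                return valid;
            }
        }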

    Deployment   

    • Perimeter Service Router.  Design a Web service intermediary that acts as a perimeter service router. The perimeter service router provides an external interface on the perimeter network for internal Web services. It accepts messages from external applications and routes them to the appropriate Web service on the private network.

    My Related Posts

  • patterns & practices App Arch Guide 2.0 Project
  • App Arch Guide 2.0 Overview Slides
  • Abstract for Application Architecture Guide 2.0
  • App Arch Meta-Frame
  • App Types
  • Architecture Frame
  • App Arch Guidelines
  • Layers and Components
  • Key Software Trends
  • Cheat Sheet: patterns & practices Catalog at a Glance Posted to CodePlex
  • J.D. Meier's Blog

    How To Use Getting Results the Agile Way with Evernote

    • 0 Comments

    One of the most common questions I get with Getting Results the Agile Way is, “What tools do I use to implement it?”

    The answer is, it depends on how "lightweight" or "heavy" I need to be for a given scenario.  The thing to keep in mind is that the system is stretch-to-fit because it's based on a simple set of principles, patterns, and practices.  See Values, Principles, and Practices of Getting Results the Agile Way.

    That said, I have a few key scenarios:

    1. Just me.
    2. Pen and Paper.
    3. Evernote.

    The Just Me Scenario
    In the "Just Me" scenario, I don't use any tools.  I just take "mental notes."  I use The Rule of Three to identify three outcomes for the day.  I simply ask the question, "What are the three most important results for today?"  Because it's three things, it's easy to remember, and it helps me stay on track.  Because it's results or outcomes, not activities, I don't get lost in the minutia.

    The Pen and Paper Scenario
    In the Pen and Paper scenario, I carry a little yellow sticky pad.  I like yellow stickies because they are portable and help me free up my mind by writing things down.  The act of writing it down also forces me to get a bit more clarity.  As a practice, I either write the three results I want for the day on the first note, or I write one outcome per note.  The main reason I write one result per sticky note is so that I can jot supporting notes, such as tasks, or throw the note away when I've achieved that particular result.  It's a simple way to game my day and build a sense of progress.

    I do find that writing things down, even as a simple reference, helps me stay on track way more than just having it in my head.

    The Evernote Scenario
    The Evernote scenario is my high-end scenario.  This is for when I'm dealing with multiple projects, leading teams, etc.  It's still incredibly light-weight, but it helps me stay on top of my game, while juggling many things.  It also helps me quickly see when I have too much open work, or when I'm splitting my time and energy across too many things.  It also helps me see patterns by flipping back through my daily outcomes, weekly outcomes, etc.

    It's hard to believe, but I've already been using Evernote with Getting Results the Agile Way for years.  I just checked the dates of my daily outcomes, and I switched to Evernote back in 2009.  Time sure flies.  It really does.

    Anyway, I put together a simple step-by-step How To to walk you through setting up Getting Results the Agile Way in Evernote.  Here it is:

    OneNote
    If you’re a OneNote user, and you want to see how to use Getting Results the Agile Way with OneNote, check out Anu’s post on using Getting Results the Agile Way with OneNote.

  • J.D. Meier's Blog

    Silverlight Developer Guidance Map

    • 1 Comments


    If you’re a Silverlight developer or you want to learn Silverlight, this map is for you.   Microsoft has an extensive collection of developer guidance available in the form of Code Samples, How Tos, Videos, and Training.  The challenge is -- how do you find all of the various content collections? … and part of that challenge is knowing *exactly* where to look.  This is where the map comes in.  It helps you find your way around the online jungle and gives you short-cuts to the treasure troves of available content.

    The Silverlight Developer Guidance Map helps you kill a few birds with one stone:

    1. It shows you the key sources of Silverlight content and where to look (“teach you how to fish”)
    2. It gives you an index of the main content collections (Code Samples, How Tos, Videos, and Training)
    3. You can also use the map as a model for creating your own map of developer guidance.

    Download the Silverlight Developer Guidance Map

    Contents at a Glance

    • Introduction
    • Sources of Silverlight Developer Guidance
    • Topics and Features Map (a “Lens” for Finding Silverlight Content)
    • Summary Table of Topics
    • How The Map is Organized (Organizing the “Content Collections”)
    • Getting Started
    • Architecture and Design
    • Code Samples
    • How Tos
    • Videos
    • Training

    Mental Model of the Map
    The map is a simple collection of content types from multiple sources, organized by common tasks, common topics, and Silverlight features:

    [map diagram]

    Special Thanks …
    Special thanks to Jesse Liberty, Joe Stagner, Paul Enfield, Pete Brown, Sam Landstrom, and Scott Hanselman for helping me find and round up our various content collections.

    Enjoy and share the map with a friend.

  • J.D. Meier's Blog

    Extreme Programming (XP) at a Glance

    • 3 Comments

    Extreme Programming (XP) is a lightweight software development methodology based on principles of simplicity, communication, feedback, and courage.   I like to be able to scan methodologies to compare approaches.  To do so, I create a skeleton of the activities, artifacts, principles, and practices.    Here are my notes on XP:

    Activities

    • Coding
    • Testing
    • Listening
    • Designing

    Artifacts

    • Acceptance tests
    • Code
    • Iteration plan
    • Release and iteration plans
    • Stories
    • Story cards
    • Statistics about the number of tests, stories per iteration, etc.
    • Unit tests
    • Working code every iteration

    12 Practices
    Here are the 12 XP practices:

    • Coding Standards
    • Collective Ownership
    • Continuous Integration
    • On-Site Customer
    • Pair Programming
    • Planning Game
    • Refactoring
    • Short Releases
    • Simple Design
    • Sustainable Pace
    • System Metaphor
    • Test-Driven Development

    For a visual of the XP practices, see a picture of the Practices and Main Cycles of XP.

    5 Values (Extreme Programming Explained)

    • Communication
    • Courage
    • Feedback
    • Respect
    • Simplicity

    Phases
    The following are phases of an XP project life cycle.

    • Exploration Phase
    • Planning Phase
    • Iteration to Release Phase
    • Productionizing Phase
    • Maintenance Phase

    For a visual overview, see Agile Modeling Throughout the XP Lifecycle.

    12 Principles (Agile Manifesto)

    These are the 12 Agile principles according to the  Agile Manifesto:

    • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
    • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
    • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
    • Business people and developers must work together daily throughout the project.
    • Build projects around motivated individuals.  Give them the environment and support they need, and trust them to get the job done.
    • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
    • Working software is the primary measure of progress.
    • Agile processes promote sustainable development.   The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
    • Continuous attention to technical excellence and good design enhances agility.
    • Simplicity–the art of maximizing the amount of work not done–is essential.
    • The best architectures, requirements, and designs emerge from self-organizing teams.
    • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

    4 Values (Agile Manifesto)

    These are the four Agile values according to the Agile Manifesto:

    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan

    Additional Resources

    My Related Posts
  • J.D. Meier's Blog

    Cheat Sheet: patterns & practices Catalog at a Glance Posted to CodePlex

    • 5 Comments

    As part of our patterns & practices Application Architecture Guide 2.0 project, we've been hunting and gathering our patterns & practices solution assets.  Here's our initial draft of our catalog at a glance:

    You can use it to get a quick sense of the types and range of solution assets from blocks to guides.

    Architecture Meta-Frame 
    We used our Architecture Meta-Frame (AMF) as a lens to help slice and dice the catalog:

    [Architecture Meta-Frame diagram]

    Here are some examples to illustrate:

    • App Types - Factories such as Web Client Software Factory and Web Services Software Factory map to the application types.  You can think of them as product-line engineering.
    • Quality Attributes - Various guides address quality attributes such as security, performance and manageability. 
    • Architecture Frame - Enterprise Library assets such as the Validation block and the Logging block map to the Architecture Frame.  The Architecture Frame represents hot spots and common cross-cutting concerns when building line of business applications.

    The frame is easily extensible.  For example, if we include our Engineering Practices Frame, we can group our process, activity, and artifact related guidance.

    Catalog at a Glance
    Here's a quick list of key patterns & practices solution assets at a glance:

    Product Line Solution Assets
    Enterprise Library
  • Enterprise Library
  • Caching Application Block
  • Cryptography Application Block
  • Data Access Application Block
  • Exception Handling Application Block
  • Logging Application Block
  • Policy Injection Application Block
  • Security Application Block
  • Unity Application Block
  • Validation Application Block
  • Composite Application Guidance for WPF
    Individual Blocks
  • Composite Application Guidance for WPF
  • Smart Client – Composite UI Application Block
  • Unity Application Block
    Archived Blocks
  • Asynchronous Invocation Application Block
  • Aggregation Application Block for .NET
  • Smart Client Offline Application Block
  • Updater Application Block – Version 2.0
  • User Interface Application Block for .NET
  • User Interface Process (UIP) Application Block – Version 2.0
  • Web Service Façade for Legacy Applications
    Factories
  • Mobile Client Software Factory
  • Smart Client Software Factory
  • Web Client Software Factory
  • Web Service Software Factory
    Guides
  • Application Architecture for .NET: Designing Applications and Services
  • Application Interoperability: Microsoft .NET and J2EE
  • Authentication in ASP.NET: .NET Security Guidance
  • Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication
  • Caching Architecture Guide for .NET Framework Applications
  • Deploying .NET Framework-based Applications
  • Describing the Enterprise Architectural Space
  • Design and Implementation Guidelines for Web Clients
  • Designing Application-Managed Authorization
  • Designing Data Tier Components and Passing Data Through Tiers
  • Guidelines for Application Integration
  • Improving .NET Application Performance and Scalability
  • Improving Web Application Security: Threats and Countermeasures
  • Microsoft .NET /COM Migration and Interoperability
  • Microsoft ESB Guidance for BizTalk Server 2006 R2
  • Monitoring in .NET Distributed Application Design
  • Performance Testing Guidance for Web Applications
  • Production Debugging for .NET Framework Applications
  • Security Engineering Explained
  • Smart Client Architecture and Design Guide
  • Team Development with Visual Studio .NET and Visual SourceSafe
  • Team Development with Visual Studio Team Foundation Server
  • Testing .NET Application Blocks - Version 1.0
  • Threat Modeling Web Applications
  • Upgrading Visual Basic 6.0 Applications to Visual Basic .NET and Visual Basic 2005
    Archived Guides
  • .NET Data Access Architecture Guide
  • Exception Management Architecture Guide
  • Testing Software Patterns
    Patterns
  • Data Patterns
  • Enterprise Solution Patterns Using Microsoft .NET
  • Integration Patterns
  • Web Service Security Guidance: Scenarios, Patterns, and Implementation Guidance for Web Services Enhancements (WSE) 3.0
    Reference Implementations
  • Global Bank Scenario
  • WS-I Basic Security Profile 1.0 Reference Implementation: Final Release for .NET Framework 2.0
    Archived Reference Implementations
  • Applied Integration Baseline Reference Implementation

  • Application Types
    Guidance assets listed by application type.

    Category Solution Assets
    Mobile
  • Mobile Client Software Factory
    Rich Client
  • Composite Application Guidance for WPF
  • Smart Client Architecture and Design Guide
  • Smart Client Software Factory
    Service
  • Improving Web Services Security: Scenarios and Implementation Guidance for WCF
  • Web Service Security Guidance: Scenarios, Patterns, and Implementation Guidance for Web Services Enhancements (WSE) 3.0
  • Web Service Software Factory
    Web Client
  • Design and Implementation Guidelines for Web Clients
  • Improving .NET Application Performance and Scalability
  • Improving Web Application Security: Threats and Countermeasures
  • Web Client Software Factory


  • Quality Attributes
    Guidance assets listed by quality attributes.
    Category Solution Assets
    Integration
  • Enterprise Solution Patterns Using Microsoft .NET
  • Guidelines for Application Integration
  • Integration Patterns
    Interoperability
  • Application Interoperability: Microsoft .NET and J2EE
  • Enterprise Solution Patterns Using Microsoft .NET
  • Microsoft .NET /COM Migration and Interoperability
    Flexibility
  • Policy Injection Application Block
  • Unity Application Block
    Manageability
  • Deploying .NET Framework-based Applications
  • Enterprise Solution Patterns Using Microsoft .NET
  • Monitoring in .NET Distributed Application Design
  • Production Debugging for .NET Framework Applications
    Performance
  • Improving .NET Application Performance and Scalability
  • Performance Testing Guidance for Web Applications
    Scalability
  • Improving .NET Application Performance and Scalability
    Security
  • Designing Application-Managed Authorization
  • Improving Web Application Security: Threats and Countermeasures
  • Improving Web Services Security: Scenarios and Implementation Guidance for WCF
  • Security Engineering Explained
  • Security Guidance for .NET Framework 2.0
  • Threat Modeling Web Applications
  • Web Service Security Guidance: Scenarios, Patterns, and Implementation Guidance for Web Services Enhancements (WSE) 3.0

  • Engineering Practices
    Guidance assets organized by engineering practices.
    Category Solution Assets
    Deployment
  • Deploying .NET Framework-based Applications
  • Enterprise Solution Patterns Using Microsoft .NET
  • Monitoring in .NET Distributed Application Design
  • Production Debugging for .NET Framework Applications
    Performance Engineering
  • Performance Testing Guidance for Web Applications
    Security Engineering
  • Security Engineering Explained
  • Threat Modeling Web Applications
    Team Development
  • Team Development with Visual Studio Team Foundation Server
    Testing
  • Performance Testing Guidance for Web Applications

    My Related Posts

  • patterns & practices App Arch Guide 2.0 Project
  • App Arch Guide 2.0 Overview Slides
  • Abstract for Application Architecture Guide 2.0
  • App Arch Meta-Frame
  • App Types
  • Architecture Frame
  • App Arch Guidelines
  • Layers and Components
  • Key Software Trends
  • J.D. Meier's Blog

    Perspectives Frame

    • 1 Comments

    Building software involves a lot of communication.  Behind this communication lie perspectives.  These perspectives often get lost somewhere between initial goals and final product, which can lead to failed software.  I found that by using a simple Perspectives Frame, I improve my chances for success.

    Perspectives Frame

    • Industry Perspective - industry constraints and standards
    • Business Perspective - business goals and constraints
    • Technical Perspective - technological requirements, technical standards and practices
    • User Perspective - User experiences and goals

    In Practice
    I could easily over-engineer it, but in meetings and hallways, this quick, memorable frame of four categories helps.  OK, so it looks simple enough, but how do I use it? Here's how I use it in practice:

    • Understanding goals - First things first, I want to know goals and drivers from the different perspectives.  Knowing which bucket they fall in, helps more than a random collection of requirements.
    • Understanding priorities - Which perspectives take precedence?  For example, corporate line of business applications tend to optimize around industry and business at the expense of the user experience, since users don't have much choice.  On the other hand, an emerging breed of social software applications, puts the user front and center.  In another case, e-commerce applications have to get the user experience right, since users do have choices.  
    • Checkpointing representation - Is my customer representing the user, business, technical or industry perspective?   Do I have the different perspectives represented?  
    • Rationalizing decisions - If I know that for a scenario, user experience takes precedence, I can make more effective decisions, moving towards the goal.
    • Rationalizing feedback - If I know which perspective feedback is coming from, I can have a more meaningful prioritization discussion.  If the team knows that for this case the success of the user experience is key to the business success, that's a different story than if the feedback were coming from a lower-priority perspective.
    • Choosing the right techniques and tools - Some techniques tend to be optimized for a particular perspective.  That's a good thing.  The trick is to know that and explicitly decide if it's the right tool.  For example, performing Kano Analysis can help you identify user satisfiers and dissatisfiers.  On the other hand, Taguchi methods will optimize around technical perspectives.

    This perspectives frame becomes even more powerful when you combine it with MUST vs. SHOULD vs. COULD and What Are You Optimizing.

  • J.D. Meier's Blog

    Growth Mind-set Over Fixed Mind-set

    • 6 Comments

    Do you have to be great at everything?  If this stops you from doing things you want to try, then it's a limiting belief.  Scott Berkun spells this out in Why You Should Be Bad at Something.

    Keys to Growing Your Skills
    Here's a set of practices and mind-sets that I've found to be effective for getting in the game, or getting back in the game, or learning a new game versus just watching from the side-lines. 

    • Swap out a fixed mind-set with a growth mind-set. (innate ability vs. learning) See The Effort Effect.
    • Call it an "experiment."  This sounds like a trivial frame game, but I see it work for myself and others.
    • Treat perfection as a path, not a destination.  If you're a "perfectionist" (like I "was", er, "am", er ... still fighting it), you know what I mean.
    • Use little improvements over time.  Focus on little improvements and distinctions over time, versus instant success.  It's consistent action over time that produces the greatest results.  You're probably a master of your craft, whatever it is you do each day, every day.  John Wooden focused his team on continuous, individual improvement and created the winningest team in history.
    • Remind yourself you're growing or dying.  You're either climbing or sliding, there's no in-between (and the slide down is faster than the climb up!)
    • Try again.  If at first you don't succeed, don't just give up.  Remember folks like Thomas Edison, who "failed" many, many times before finding "success" (it's a part of innovation).
    • Focus on lessons over failures.  Remind yourself there are no failures; only lessons (one more way how "not" to do something)
    • Fail fast.  The faster you "fail", the faster you learn.
    • Don't take yourself or life too seriously.  If you take yourself too seriously, you'll never get out alive!
    • Learn to bounce back.  It's not that you don't get knocked down, it's that you get back up.  (Just like the song, "I get knocked down, but I get up again")
    • Give yourself time.  A lot of times the difference between results is time.  If you only chase instant successes, you miss out on opportunities. Walk, crawl, run.  Or, if you're like me, sprint and sprint again ;) 
    • Start with something small.  Build momentum.  Jumping an incremental set of hurdles is easier than scaling a giant wall.   
    • Build on what you know.  No matter where you are or what you do, you take yourself with you.  Bring your game wherever you go. 
    • Learn to like what growth feels like.   I used to hate the pain of my workouts.  Now, I know that's what growth feels like.  The better I got at some things, the more I hated how awkward I was at some new things.  Now I like awkward and new things.  It's growth.
    • Find a mentor and coach.  It doesn't have to be official.  Find somebody who's great at what you want to learn.  Most people like sharing how they got good at what they do.  It's their pride and joy.  I used to wonder where the "mentors" are. Then I realized, they're all around me every day.
    • Have a learning approach.  For me, I use 30 Day Improvement Sprints.  Timeboxes, little improvements at a time, and focus go a long way for results.

    There's a lot more I could say, but I think this is a bite-sized working set to chew on for now.

    More Information

    • The Effort Effect - This article exposes the truth behind "you can't teach an old dog new tricks" and whether all-stars are born or made (nature vs. nurture.)  If you have kids, this article is particularly important.  Adopting a growth mind-set over a fixed mind-set can have enormous impact.

    My Related Posts

  • J.D. Meier's Blog

    How NOT To Make Money Online

    • 14 Comments

    I mentor several folks on how to make money online, either because they are trying to supplement their income, take their game to the next level, or simply reduce the worry around losing their job.  

    An interesting pattern is that many of the folks I know who make a second (3rd, 4th, 5th) income online show up strong in many ways.     Their second source of income is always a “passion business.”   They find a way to monetize what they love in a way that’s sustainable and creates a ton of value for their tribe of raving fans.  

    They end up spending more time in their art, so they recharge and renew, and show up fresh at work because they found a way to spend more time doing what they love.  (It’s an interesting question when you ask, “What do you want to spend more time doing?”, and then actually do it.)

    One of the most important success patterns I see is that people do what they would do for free, but pay attention to what people would pay them for.   This does two things:

    1. It forces them to figure out what they really do love and can do day in and day out (where can they be strong, all day long)
    2. It forces them to be smarter at business (otherwise, it’s not sustainable and it slowly dies)

    I see people succeed at making money online by doing lots of experimentation and continuous learning.  The ones that do the best learn from successes AND failures.   The ones that create truly outstanding success learn the patterns of failure to avoid, and the patterns of success to do more of.

    Lucky for me, I got to see several people right around me making $10,000, $20,000, etc. a month online, and they happily shared with me what they were doing, including what was working and what was not.   The variety was pretty amazing, until I started to see the patterns.  As I started to see the patterns, what surprised me the most is how so many people fail to make money online because “they try to make money online” – it’s like chasing happiness, and having it always evade your grasp.

    How ironic.

    There are so many ways NOT to make money online.  In fact, they are worth enumerating because people still try them and get incredibly frustrated and give up.

    Here are 50 Ways How NOT To Make Money Online.

    It’s serious stuff.

    I took a pattern-based approach, so that it’s easy to see the principle behind each recipe for failure. 

    You can actually apply many of the insights whether it’s an online or offline business, and whether you are a one-man band, or a business partnership, or working in a corporation.  

    It puts a distillation of many business basics, great business lessons, and business skills at your fingertips.

    I’m hoping that more people can be entrepreneurs and create their financial freedom by doing more of what they love, in a business-smart way.

    Also, I’m hoping this helps more people get their head around the idea that we’re in a new digital economy and the ways to make a living are changing under our feet.

    The future is here and it belongs to those that create it and shape it.

    Own your destiny.

  • J.D. Meier's Blog

    Motivation Quotes

    • 5 Comments

    I added a set of my favorite motivation quotes to Sources of Insight.   You never know where you might find just the inspiration you need. 

  • J.D. Meier's Blog

    Security Approaches That Don't Work

    • 3 Comments

    If it’s not broken, then don’t fix it ...

    The problem is, you may have an approach that isn’t working, or it’s not as efficient as it could be, but you may not even know it.  Let’s take a quick look at some broken approaches and get to the bottom of why they fail.  If you understand why they fail, you can then take a look at your own approach and see what, if anything, you need to change.  The more prevalent broken approaches include:

    • The Bolt on Approach
    • The Do It All Up Front Approach
    • The Big Bang Approach
    • The Buckshot Approach
    • The All or Nothing Approach

    The Bolt on Approach
    Make it work, and then make it right.  This is probably the most common approach to security that I see, and it almost always results in failure, or at least inefficiency.  This approach results in a development process that ignores security until the end, usually the testing phase, and then tries to make up for mistakes made earlier in the development cycle.  This is effectively the bolt on approach.  

    The assumption is that you can bolt on security at the end, just enough to get the job done.  While the bolt on approach is a common practice, the prevalence of security issues in Web applications that use this approach is not a coincidence.

    The real weakness in the bolt on approach is that some of the more important design decisions that impact the security of your application have a cascading impact on the rest of your application’s design. If you’ve made a poor decision early in design, later you will be faced with some unpleasant choices.  You can either cut corners, further degrading security, or you can extend the development of your application and miss project deadlines.  If you make your most important security design decisions at the end, how can you be confident you’ve implemented and tested your application to meet your objectives?

    The Do It All Up Front Approach
    The opposite of the bolt on approach is the do it all up front approach.   In this case, you attempt to address all of your security design up front.  There are two typical failure scenarios:

    1. You get overwhelmed and frustrated and give up, or
    2. You feel like you’ve covered all your bases and then don’t touch security again until you see your first vulnerability come in on the released product.

    While considering security up front is a wise choice, you can’t expect to do it all at once.  For one thing, you can’t expect to know everything up front.  More importantly, this approach can’t deal with the security-impacting decisions made throughout application development the way an approach that integrates security considerations throughout the life cycle can.

    The Big Bang Approach
    This can be similar to the do it all up front approach.  The big bang approach is where you depend on a single big effort, technique, or activity to produce all of your security results.  Depending on where you drop your hammer, you can certainly accomplish some security benefits, since some security effort is better than none.  However, similar to the do it all up front approach, a small set of focused activities outshines the single big bang.

    The typical scenario is a shop that ignores security (or pays it less than the optimal amount of attention) until the test phase.  Then they spend a lot of time/money on a security test pass that tells them all the things that are wrong.  They then make hard decisions on what to fix vs. what to leave and try to patch an architecture that wasn’t designed properly for security in the first place.

    The Buckshot Approach
    The buckshot approach is where you try a bunch of security techniques on your application, hoping you somehow manage to hit the right ones.  For example, it’s common to hear, “we’re secure, we have a firewall”, or “we’re secure, we’re using SSL”.  The hallmark of this approach is that you don’t know what your target is, and the effort expended is random and without clear benefit.  Beware the security zealot who is quick to apply everything in their tool belt without knowing what issue they are actually defending against.  More security doesn’t necessarily equate to good security.  In fact, you may well be trading off usability, maintainability, or performance, without improving security at all.

    You can’t get the security you want without a specific target.  Firing all over the place (even with good weapons) isn’t likely to get you a specific result.  Sure, you’ll kill stuff, but who knows what.
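
    To make the point about knowing your target concrete, here is a minimal sketch (the table and function names are made up for illustration, not from any real system): SSL encrypts the traffic on the wire, but it does nothing about a SQL injection flaw inside the application, while a parameterized query counters that specific threat.

        # Illustrative sketch: transport security (SSL) does not address an
        # application-level threat like SQL injection; a targeted fix does.
        import sqlite3

        def find_user_unsafe(conn: sqlite3.Connection, name: str):
            # Buckshot thinking says "we're secure, we use SSL", yet untrusted
            # input is concatenated into the SQL, so this query is injectable.
            return conn.execute(
                f"SELECT id, name FROM users WHERE name = '{name}'"
            ).fetchall()

        def find_user_safe(conn: sqlite3.Connection, name: str):
            # Targeted countermeasure: parameterize the query so the input is
            # treated as data, not as SQL code.
            return conn.execute(
                "SELECT id, name FROM users WHERE name = ?", (name,)
            ).fetchall()

    The point isn’t the particular fix; it’s that the countermeasure maps to a named threat, instead of piling on controls and hoping one of them happens to hit.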

    The All or Nothing Approach
    With the all or nothing approach, you used to do nothing to address security, and now you do it all.  Over-reacting to a previous failure is one reason you might see a switch to “all”.  Another is someone truly trying to do the best they can to solve a real problem, without realizing they are biting off more than they can chew.

    While it can be a noble effort, it’s usually a path to disaster.  There are multiple factors, aside from your own experience, that impact success, including the maturity of your organization and buy-in from team members.  Injecting a dramatic change helps get initial momentum, but if you take on too much at once, you may not create a lasting approach and will eventually fall back to doing nothing.

  • J.D. Meier's Blog

    MUST vs. SHOULD vs. COULD

    • 6 Comments

    Whether I'm dealing with software requirements or prioritizing my personal to-dos, I think in terms of MUST, SHOULD, and COULD.  It's simple but effective.

    Here's an example of some scenarios and usage:

    • getting a quick handle on my day - What MUST I do today?  What SHOULD I do?  What COULD I do?
    • prioritizing my personal backlog - What MUST I do today?  What MUST I do this week?  What SHOULD I do?  What COULD I do?
    • focusing my teams - What MUST we release this week?  What SHOULD we release this week?  What COULD we release this week?
    • brainstorming sessions - What COULD we do?  What SHOULD we do?  What MUST we do?
    • determining an incremental release - What are the MUSTs for this software release?  What are the SHOULDs?  What are the COULDs?
    • helping a customer identify their security objectives - What security constraints MUST be met for this software?
    • helping a customer identify their performance objectives - What performance constraints MUST be met for this software?

    It's easy to get lost among SHOULDs and COULDs.  I find factoring MUSTs from the SHOULDs and COULDs helps get clarity around immediate action.
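
    Here is a minimal sketch of that factoring in code (the backlog items and names below are made up for illustration): tag each item with MUST, SHOULD, or COULD, and pulling out the immediate work becomes trivial.

        # Minimal sketch of MUST vs. SHOULD vs. COULD triage.
        from enum import IntEnum

        class Priority(IntEnum):
            MUST = 1      # has to happen in this time box
            SHOULD = 2    # important, but the time box survives without it
            COULD = 3     # nice to have

        backlog = [
            ("Fix the login security bug", Priority.MUST),
            ("Add export to CSV", Priority.SHOULD),
            ("Polish the icon set", Priority.COULD),
        ]

        # Factor the MUSTs out from the SHOULDs and COULDs to see the
        # immediate work, then keep the rest sorted by priority.
        musts = [item for item, priority in backlog if priority is Priority.MUST]
        rest = sorted(
            (entry for entry in backlog if entry[1] is not Priority.MUST),
            key=lambda entry: entry[1],
        )
        print("Do now:", musts)
        print("Then consider:", rest)

    The same tagging works just as well for a team backlog or a release plan as it does for a personal to-do list.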
