J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

April 2006

    Performance and Scalability Checkpoint to Improve Your Software Engineering

    When a patterns & practices deliverable was ready to ship, our General Manager (GM) would ask me to sign off on performance and security.  I was usually spread thin, so I needed a way to scale.  To do so, I created a small checkpoint for performance and scalability.  The checkpoint is simply a set of questions that act as a forcing function to make sure you've addressed the basics (and avoid a lot of "do-overs").  Here's what we used internally:

    Checkpoint: Performance and Scalability

    Customer Validation

    1. List 3-5 customers that say performance/scalability for the product is a "good deal" (i.e., they would pay to play)

    Product Alignment

    1. Do you have alignment with the future directions of the product team?
    2. Who from the product team agrees?

    Supportability

    1. Has Product Support Services (PSS) reviewed and signed off?
    2. Which newsgroups would a customer go to if performance and scalability problems occur?

    Performance Planning

    1. Performance model created (performance modeling template)?
    2. Budget.  These are performance and scalability constraints (see the sketch after this list).  What are the maximum acceptable values for the following?
      1. Response time?
      2. Maximum operating CPU usage?
      3. Maximum network bandwidth usage?
      4. Maximum memory usage?
    3. Performance Objectives
      1. Workload
      2. % of overhead over relevant baselines (e.g. Within 5% performance degradation from version 1 to version 2)
      3. Response Time
      4. Throughput
      5. Resource Utilization (CPU, Network, Disk, Memory)
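
    To make the budget and objectives above actionable, it helps to capture them as data that an automated check can compare against measured results.  Here's a minimal sketch in Python; the metric names and numbers are illustrative placeholders, not recommendations:

      # Hypothetical performance budget expressed as data.
      # All limits are placeholders - derive yours from the objectives above.
      PERFORMANCE_BUDGET = {
          "response_time_ms": 2000,   # maximum acceptable response time
          "cpu_percent": 75,          # maximum operating CPU usage
          "network_mbps": 40,         # maximum network bandwidth usage
          "memory_mb": 500,           # maximum memory usage
      }

      def check_budget(measured, budget=PERFORMANCE_BUDGET):
          """Return (metric, measured, limit) for every metric that is over budget."""
          return [(name, measured[name], limit)
                  for name, limit in budget.items()
                  if measured.get(name, 0) > limit]

      # Example: numbers a load-test run might report.
      measured = {"response_time_ms": 2300, "cpu_percent": 68,
                  "network_mbps": 22, "memory_mb": 410}
      for name, value, limit in check_budget(measured):
          print(f"Over budget: {name} = {value} (limit {limit})")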

    Hardware/Software Requirements

    1. List requirements for customer installation.
      1. Hardware?
      2. Minimum hardware requirements?
      3. Ideal hardware requirements?
      4. Minimum software requirements?

    Performance Testing

    1. Lab Environment.  What is the deployment scenario configuration you used for testing?
      1. Hardware?
      2. CPU?
      3. # of CPUs?
      4. RAM?
      5. Network Speed?
    2. Peak conditions.  What does peak condition look like?
      1. How many users?
      2. Response time?
      3. Resource Utilization?
      4. Memory?
      5. CPU?
      6. Network I/O?
    3. Capacity.  How many users until your response time or resource utilization budget is exceeded?  (See the load-step sketch after this list.)
      1. What is the glass ceiling? (e.g. the breaking point)
      2. Number of users?
      3. Response time?
      4. Resource Utilization?
      5. CPU?
      6. Network?
      7. Memory?
    4. Failure.  What does failure look like in terms of performance and scalability?
      1. Does the application fail gracefully?
      2. What fails and how do you know?
      3. Response time exceeds a threshold?
      4. Resource utilization exceeds thresholds? (CPU too much?)
      5. What diagnostic/monitoring clues do you see:
      6. Exceptions?
      7. Event Log entries?
      8. Performance counters to watch?
    5. Stress Scenarios.  What does stress look like and how do you respond?
      1. Contention?
      2. Memory Leaks?
      3. Deadlocks?
      4. What are the first bad signs under stress?
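
    One way to answer the capacity and failure questions is a stepped load test: increase the number of concurrent users until response time or resource utilization exceeds the budget.  The sketch below is illustrative only; run_load_step is a hypothetical stand-in for whatever load-testing harness you actually use:

      # Illustrative capacity search: step up the load until the budget is exceeded.
      def run_load_step(users):
          """Run the workload at the given concurrency and return measured metrics.
          Hypothetical stub - replace with calls into your load-testing tool."""
          raise NotImplementedError

      def find_glass_ceiling(budget, start=50, step=50, max_users=5000):
          last_within_budget = None
          for users in range(start, max_users + 1, step):
              metrics = run_load_step(users)
              if (metrics["response_time_ms"] > budget["response_time_ms"]
                      or metrics["cpu_percent"] > budget["cpu_percent"]):
                  # The previous step was the last load level that stayed within budget.
                  return last_within_budget, users
              last_within_budget = users
          return last_within_budget, None  # budget never exceeded in the tested range

    The point is less the code than the discipline: capacity becomes a number you measure and track, not a guess.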

    Instrumentation

    1. What is the technology/approach used for instrumenting the codebase?
      1. Are the key performance scenarios instrumented?
      2. Is the instrumentation configurable (On/off? Levels of granularity?)?
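
    A common way to meet the configurability bar is to route instrumentation through a logging facility whose on/off state and granularity come from configuration rather than code.  A minimal Python sketch, assuming a hypothetical TRACE_LEVEL setting:

      import logging
      import os

      # On/off and granularity are controlled by configuration, not code changes.
      # TRACE_LEVEL is a hypothetical setting: OFF, INFO, or DEBUG.
      level_name = os.environ.get("TRACE_LEVEL", "OFF")
      logger = logging.getLogger("app.perf")
      if level_name == "OFF":
          logger.disabled = True                 # instrumentation off
      else:
          logging.basicConfig(level=getattr(logging, level_name))

      def process_order(order_id):
          logger.info("process_order start: %s", order_id)   # key scenario instrumented
          # ... real work here ...
          logger.debug("cache hit for order %s", order_id)   # finer-grained detail
          logger.info("process_order end: %s", order_id)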

    The checkpoint helped the engineering team shape their approach, and it simplified my job when I had to review.  You can imagine how some of these questions can shape your strategies.  This is by no means exhaustive, but it was effective enough to tease out big project risks.  For example, do you know when your software is going to hit capacity?  Did you completely ignore the customer's practical environment and use up all their resources?  Do you have a way to instrument for when things go bad, and is it configurable?  When your software is in trouble, what sort of support did you enable for troubleshooting and diagnostics?

    While I think the original checkpoint was helpful, I think a pluggable set of checkpoints based on application types would be even more helpful and more precise.  For example, if I'm building a Web application, what are the specific metrics or key instrumentation features I should have?  If I'm building a smart client, what sort of instrumentation and metrics should I bake in? … etc.  If and when I get to a point where I can do more checkpoints, I'll use a strategy of modular, type-specific, scenario-based checkpoints to supplement the baseline checkpoint above.

    Strategic Stories

    I'm realizing more and more how stories help you drive a point home.  It's one thing to make a point; it's another for your story to make the point for you.  If your ideas aren't sticking, or you're not getting buy-in, maybe a compelling story is the answer.

    Crafting useful stories is an art and, now, apparently a science.  Srinath pointed me to Stories at Work on 50Lessons.com.  The video shares a story about using stories as a catalyst for change, along with a recipe for good strategic stories:

    1. Make stories short (1-2 minutes) so they can be retained
    2. Limit stories to no more than 2 or 3 characters, so they're easy to follow
    3. Build your story around a singular message
    4. Tell your story in the present tense so participants can relate
    5. Use powerful images to tie to a theme
    6. Repeat a phrase or word that is the essence of your message

    The value of stories is that they help you engage people, and they have more powerful recall than slides, facts, and figures.

    Security Innovation Security Engineering Study

    The Security Innovation security engineering study, Comparing Security in the Application Lifecycle - Microsoft and IBM Development Platforms Compared, is timely, given the emerging industry emphasis on integrating security into the life cycle.

    My favorite quote in the study is "The patterns & practices security guidance covers the key security engineering activities better than any other resource we’ve found."  I think this reflects the fact that we have more than 2,500 pages of security guidance (see Security Guidance, Security Engineering, Threat Modeling, and Improving Web Application Security), and that we've integrated our guidance into MSF/VS 2005 (see MSF/VS 2005 and p&p Integration).

    The study was available from the MSDN Security DevCenter for a while but seems to have fallen off.  I've summarized the study here for quick reference:

    Overview
    Security Innovation evaluated the guidance and tools of Microsoft's and IBM's development platforms.  The study compared the support available to a development team via security guidance, documentation, and security-focused features in the life-cycle tool suites.  Gartner reviewed the approach.


    Evaluation Criteria

    • Coverage.  How well do the provided tools and guidance cover the key set of security areas?
    • Quality.  How effective and accurate are the tools and guidance?
    • Visibility.  How easy is it to find the tools and guidance and then apply them to your security needs?
    • Usability.  Are the tools and guidance precise, comprehensive, and easy to use?

    Ratings

    • Outstanding: 81-100%
    • Good: 61-80%
    • Average: 41-60%
    • Below Average: 21-40%
    • Poor: 0-20%

    Scorecard Categories

    • Basic Platform Security.  When used in accordance with its documentation, a platform should be inherently secure.
    • Platform Security Services.  A mature platform should include services that make it easier for developers to implement security features in their applications.
    • Platform Security Guidance. A secure platform is much less useful if it lacks proper guidance.
    • Software Security Engineering Guidance.  It is not possible to develop a secure application unless security is a focus during every phase of the development lifecycle.
    • Security Tools.  A secure platform should include tools that make it easier to define, design, implement, test, and deploy a secure application.

    Results of the Study

    First, here are a couple of key points; the detailed summaries follow:

    • Microsoft beat IBM in every category around guidance.
    • Microsoft beat IBM in three out of four categories around tools.


    IBM

    1. Platform Overall
      1. Overall: 36%
      2. Coverage: 62%
      3. Quality: 70%
      4. Visibility: 17%
      5. Usability: 72%
    2. Platform Security Guidance
      1. Overall: 50%
      2. Coverage: 81%
      3. Quality: 85%
      4. Visibility: 17%
      5. Usability: 84%
    3. Security Engineering Guidance
      1. Overall: 25%
      2. Coverage: 50%
      3. Quality: 64%
      4. Visibility: 17%
      5. Usability: 69%
    4. Security Tools
      1. Overall: 32%
      2. Coverage: 55%
      3. Quality: 59%
      4. Visibility: 56%
      5. Usability: 63%

    Microsoft

    1. Platform Overall
      1. Overall: 67%
      2. Coverage: 88%
      3. Quality: 85%
      4. Visibility: 61%
      5. Usability: 80%
    2. Platform Security Guidance
      1. Overall: 76%
      2. Coverage: 93%
      3. Quality: 85%
      4. Visibility: 67%
      5. Usability: 91%
    3. Security Engineering Guidance
      1. Overall: 78%
      2. Coverage: 100%
      3. Quality: 89%
      4. Visibility: 67%
      5. Usability: 79%
    4. Security Tools
      1. Overall: 47%
      2. Coverage: 71%
      3. Quality: 78%
      4. Visibility: 50%
      5. Usability: 68%

    Quotes from the Study

    • Microsoft’s overall rating of 67% reflects the impressive level of focus Microsoft has applied to application security in the past several years.
    • IBM’s overall score of 36% is the result of a more disjointed approach to security.  Security guidance is spread throughout the IBM web site and is difficult to discover.
    • The patterns & practices security guidance covers the key security engineering activities better than any other resource we’ve found.

    More Information
    For more information, see Comparing Security in the Application Lifecycle - Microsoft and IBM Development Platforms Compared at Security Innovation's site.  They created four documents that take you through the details and results: Executive Summary, Research Overview, Full Detailed Reports and Results, and Methodology.

    OpenHack 4 (eWeek Labs): Web Application Security

    Whenever I bring up the OpenHack 4 competition, most people aren't aware of it.  It was an interesting study because it was effectively an open "hack me with your best shot" competition.

    I happened to know the folks on the MS side, like Erik Olson and Girish Chander, who helped secure the application, so it had some of the best available security engineering.  In fact, customers commented that it's great that Microsoft can secure its applications ... but what about its customers?  That comment was the inspiration for our Improving Web Application Security: Threats and Countermeasures guide.

    I've summarized OpenHack 4 here so it's easier for me to reference.

    Overview of OpenHack 4
    In October 2002, eWeek Labs launched its fourth annual OpenHack online security contest.  It was designed to test enterprise security by exposing systems to the real-world rigors of the Web.  Microsoft and Oracle were given a sample Web application by eWeek and were asked to redevelop the application using their respective technologies.  Individuals were then invited to attempt to compromise the security of the resulting sites.  Acceptable breaches included cross-site scripting attacks, dynamic Web page source code disclosure, Web page defacement, posting malicious SQL commands to the databases, and theft of credit card data from the databases used.
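
    For context, the two attack classes most readers recognize here, SQL injection and cross-site scripting, are typically countered with parameterized queries and output encoding.  The OpenHack application itself was built on .NET; the generic Python sketch below (with made-up table and column names) just illustrates the pattern:

      import html
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")

      def save_bio(name, bio):
          # Parameterized query: input is bound as data, never spliced into SQL text.
          conn.execute("INSERT INTO users (name, bio) VALUES (?, ?)", (name, bio))

      def render_bio(name):
          row = conn.execute("SELECT bio FROM users WHERE name = ?", (name,)).fetchone()
          # Output encoding: HTML-escape user data before writing it into a page.
          return "<p>" + html.escape(row[0]) + "</p>" if row else ""

      save_bio("mallory", "<script>alert('xss')</script>")
      print(render_bio("mallory"))   # the markup is rendered inert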

    Outcome of the Competition
    The Web site built by Microsoft engineers using the Microsoft .NET Framework, Microsoft Windows 2000 Advanced Server, Internet Information Services 5.0, and Microsoft SQL Server 2000 successfully withstood over 82,500 attempted attacks to emerge from the eWeek OpenHack 4 competition unscathed.

    More Information

    For more information on implementation details of the Microsoft Web application and configuration used for the OpenHack competition, see "Building and Configuring More Secure Web Sites: Security Best Practices for Windows 2000 Advanced Server, Internet Information Services 5.0, SQL Server 2000, and the .NET Framework"

    @Stake Security Study: .NET 1.1 vs. WebSphere 5.0

    I like competitive studies.  I'm usually more interested in the methodology than the outcome.  The methodology acts as a blueprint for what's important in a particular problem space. 

    One of my favorite studies was the original @Stake study comparing .NET 1.1 vs. IBM's WebSphere security, not just because our body of guidance made a direct and substantial difference in the outcome, but because @Stake used a comprehensive set of categories and an evaluation criteria matrix that demonstrated a lot of depth.

    Because the information from the original report can be difficult to find and distill, I'm summarizing it below:

    Overview of Report
    In June 2003, @Stake, Inc., an independent security consulting firm, released results of a Microsoft-commissioned study that found Microsoft's .NET platform to be superior to IBM's WebSphere for secure application development and deployment.  @stake performed an extensive analysis comparing security in the .NET Framework 1.1, running on Windows Server 2003, to IBM WebSphere 5.0, running on both Red Hat Linux Advanced Server 2.1 and a leading commercial distribution of Unix.


    Findings
    Overall, @stake found that:

    • Both platforms provide infrastructure and effective tools for creating and deploying secure applications
    • The .NET Framework 1.1 running on Windows Server 2003 scored slightly better with respect to conformance to security best practices 
    •  The Microsoft solution scored even higher with respect to the ease with which developers and administrators can implement secure solutions

    Approach
    @stake evaluated the level of effort required for developers and system administrators to create and deploy solutions that implement security best practices, and to reduce or eliminate most common attack surfaces.


    Evaluation Criteria

    • Best practice compliance.  For a given analysis topic, to what degree did the platform permit implementation of best practices?
    • Implementation complexity.   How difficult was it for the developer to implement the desired feature?
    • Documentation and examples.  How appropriate was the documentation? 
    • Implementor competence.  How skilled did the developer need to be in order to implement the security feature?
    • Time to implement.  How long did it take to implement the desired security feature or behavior? 


    Ratings for the Evaluation Criteria

    1. Best Practice Compliance Ratings
      1. Not possible
      2. Developer implement
      3. Developer extend
      4. Wizard
      5. Transparent
    2. Implementation Complexity Ratings
      1. Large amount of code
      2. Medium amount of code
      3. Small amount of code
      4. Wizard +
      5. Wizard
    3. Quality of Documentation and Sample Code Ratings
      1. Incorrect or Insecure
      2. Vague or Incomplete
      3. Adequate
      4. Suitable
      5. Best Practice Documentation
    4. Developer/Administrator Competence Ratings
      1. Expert (5+ years of experience)
      2. Expert/intermediate (3-5 years of experience)
      3. Intermediate
      4. Intermediate/novice
      5. Novice (0-1 years of experience)
    5. Time to Implement
      1. High (More than 4 hours)
      2. Medium to High (1 to 4 hours)
      3. Medium (16-60 minutes)
      4. Low to Medium (6-15 minutes)
      5. Low (5 minutes or less)


    Scorecard Categories
    The scorecard was organized into application server, host/operating system, and Web server categories.  Each category was divided into smaller categories to test the evaluation criteria (best practice compliance, implementation complexity, quality of documentation, developer competence, and time to implement).

    Application Server Categories

    1. Application Logging Services
      1. Exception Management
      2. Logging Privileges
      3. Log Management
    2. Authentication and Access Control
      1. Login Management
      2. Role Based Access Control
      3. Web Server Integration
    3. Communications
      1. Communication Security
      2. Network Accessible Services
    4. Cryptography
      1. Cryptographic Hashing
      2. Encryption Algorithms
      3. Key Generation
      4. Random Number Generation
      5. Secrets Storage
      6. XML Cryptography
    5. Database Access
      1. Database Pool Connection Encryption
      2. Data Query Safety
    6. Data Validation (see the sketch after this list)
      1. Common Validators
      2. Data Sanitization
      3. Negative Data Validation
      4. Output Filtering
      5. Positive Data Validation
      6. Type Checking
    7. Information Disclosure
      1. Error Handling
      2. Stack Traces and Debugging
    8. Runtime Container Security
      1. Code Security
      2. Runtime Account Privileges
    9. Web Services
      1. Credentials Mapping
      2. SOAP Router Data Validation
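
    To make a couple of the data validation categories concrete: positive validation checks input against what is allowed (an allow list), while negative validation checks against known-bad patterns.  A minimal Python sketch of the positive approach, using an invented username rule:

      import re

      # Positive (allow-list) validation: accept only input that matches what you
      # expect, rather than trying to enumerate everything that is dangerous.
      USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,20}")   # illustrative rule

      def validate_username(value):
          if not USERNAME_PATTERN.fullmatch(value):
              raise ValueError("username must be 3-20 letters, digits, or underscores")
          return value

      validate_username("jd_meier")      # passes the allow list
      # validate_username("<script>")    # raises ValueError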

    Host and Operating System Categories

    1. IP Stack Hardening
      1. Protocol Settings
    2. Service Minimization
      1. Installed Packages
      2. Network Services

    Web Server Categories

    1. Architecture
      1. Security Partitioning
    2. Authentication
      1. Authentication Input Validation
      2. Authentication Methods
      3. Credential Handling
      4. Digital Certificates
      5. External Authentication
      6. Platform Integrated Authentication
    3. Communication Security
      1. Session Encryption
    4. Information Disclosure
      1. Error Messages and Exception Handling
      2. Logging
      3. URL Content Protection
    5. Session Management
      1. Cookie Handling
      2. Session Identifier
      3. Session Lifetime

    More Information
    For more information on the original @stake report, see the eWeek.com article, .Net, WebSphere Security Tested.

    Flawless Execution

    In the book Flawless Execution, James D. Murphy shares techniques used by fighter pilots to achieve peak performance, accelerate the learning curve, and make performance more predictable and repeatable.

    The essence of the execution engine is a set of iterative steps:

    • Plan.  Develop multiple courses of action, evaluate them, and take the best parts.
    • Brief.  Tell everybody how we're going to carry out the plan and what we're going to do today.
    • Execute.  Act out the script you created in the brief.
    • Debrief.  Evaluate execution errors and successes.

    Murphy connects the execution framework to the strategy.  If they aren't aligned, you can win the battle but lose the war.  He distinguishes strategy from tactics by saying strategy is about four things:

    1. Where are we going to be?
    2. What are we going to apply resources for or against?
    3. How are we going to do this?
    4. When are we going to stop doing this?

    Murphy is very prescriptive.  For every technique, there's a set of steps and checkpoints.  I've successfully scaled down some of the techniques, such as Future Picture, to meet my needs. 

    What I like about the overall execution framework is that its practices are drawn from life-and-death scenarios.  Fighter pilots need to learn what works from their missions and share it as quickly as possible.  What I also like is that Murphy illustrates how ordinary people are capable of execution excellence.

    Meeting with Gabriel and Sebastian from PreEmptive

    I met with Gabriel Torok and Sebastian Holst of PreEmptive Solutions the other day.  PreEmptive makes obfuscator products, including the Dotfuscator that comes with Visual Studio.  Gabriel founded PreEmptive more than 10 years ago, and it was originally a code optimization company (the dual focus on performance and security resonates with me).

    I was familiar with obfuscation and its limitations.  I wasn't as familiar with some of the internals of specific obfuscation techniques, such as identifier renaming, control flow obfuscation, metadata removal, and string encryption, or with how you can tweak or tune these.  One surprise for me was that obfuscation in some scenarios could yield a 30-40% reduction in size (the result of shortening identifier names and "pruning" libraries that are never called).
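
    To give a feel for the simplest of those techniques, identifier renaming, here's a before/after illustration.  It's hand-written Python purely for illustration; real obfuscators such as Dotfuscator work on compiled .NET assemblies, not source code:

      # Before renaming: names carry intent and make reverse engineering easier.
      def calculate_discount(order_total, customer_tier):
          if customer_tier == "gold":
              return order_total * 0.15
          return order_total * 0.05

      # After renaming: identical behavior, but the intent is much harder to read.
      # Shorter names (and pruning unused code) are also where the size savings come from.
      def a(b, c):
          if c == "gold":
              return b * 0.15
          return b * 0.05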

    Gabriel's interested in creating obfuscation guidance for the community.  I gave him my wish list:

    • Scenarios.  Illustrate how obfuscation fits in, including high level, when to use or not to use, as well as lower-level choices among obfuscation techniques.
    • Anatomy of obfuscation.  From a developer mindset, this means knowing how things work.  Walk me through the bits and pieces and the execution flow.
    • Impact and surprises.  If I introduce obfuscation, tell me things I might run into.  For example, is there impact to my build process?  What about servicing my codebase?  Is there a difference between obfuscating with ASP.NET using dynamic compilation vs. pre-compilation?  What if you need to GAC your assemblies?  What if you sign your code with an X.509 cert?
    • Performance considerations.  Take me through trade-offs and scenarios where performance can be improved or degraded.  Give me insight into how to influence or tweak my obfuscation approach.

    Sebastian and I exchanged some metaphors.  In reference to the limits of obfuscation, I said that just because door locks don't stop car thieves, that doesn't mean cars should come without locks.  Sebastian related it to smoke alarms.  In the grand scheme of things, smoke alarms play a key role in saving lives and limiting damage, but to the individual there's not a lot of value until a fire occurs.  The fact that smoke alarms are low-cost and simple helps justify their common use.  He added that risk varies by context, so the value to hotels or restaurants may be more obvious.

    It was an interesting and insightful meeting, and I look forward to Gabriel's whitepaper.
