June, 2011

  • Canadian Solution Developers' Blog

    Why You Should Never Use DATETIME Again!

    • 12 Comments

    Dates: we store them everywhere. DateOrdered, DateEntered, DateHired, DateShipped, DateUpdated, and on and on it goes. Up to and including SQL Server 2005, you really didn’t have much choice about how you stored your date values. But in SQL Server 2008 and higher you have alternatives to DATETIME, and they are all better than the original.

    DATETIME stores a date and time, it takes 8 bytes to store, and has a precision of .00333 seconds (values are rounded to increments of .000, .003, or .007 seconds).

    In SQL Server 2008 you can use DATETIME2. It stores a date and time, takes 6-8 bytes to store, and has a precision of 100 nanoseconds, so anyone who needs greater time precision will want DATETIME2. What if you don’t need the precision? Most of us don’t even need milliseconds. In that case you can specify DATETIME2(0), which takes only 6 bytes to store and has a precision of one second. If you want to store essentially the same value you had in DATETIME, choose DATETIME2(3): you get millisecond precision (slightly finer, in fact, than DATETIME’s 3.33 ms) and it only takes 7 bytes to store the value instead of 8. I know a lot of you are thinking, what’s one byte, memory is cheap. But it’s not a question of how much space you have on your disk. When you are performance tuning, you want to store as many rows on a page as you can for each table and index. That means fewer pages to read for a table or query, and more rows you can keep in cache. Many of our tables have multiple date columns and millions of rows. That one-byte saving for every date value in your database is not going to make your users go ‘Wow, everything is so much faster now’, but every little bit helps.

    If you are building any brand-new tables in SQL Server 2008, I recommend staying away from DATETIME and DATETIME2 altogether. Instead, go for DATE and TIME. Yes, one of my happiest moments when I first started learning about SQL Server 2008 was discovering I could store the date without the time! How many times have you used GETDATE() to populate a date column and then had problems trying to find all the records entered on ‘05-JUN-06’, getting no results back because of the time component? We end up truncating the time element before we store it, or stripping it out when we query so the time component is ignored. Now we can store a date in a column of datatype DATE. If you do want to store the time, store it in a separate column of datatype TIME. By storing the date and time in separate columns you can search by date or time, and you can index by date and/or time as well! This will allow you to do much faster searches for date and time ranges.
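
    To make that concrete, here is a minimal T-SQL sketch (the Orders table and its column names are made up for illustration) showing separate DATE and TIME columns, plus a query that no longer has to fight the time component:

    -- Hypothetical table: the date and the time of day live in separate columns.
    CREATE TABLE Orders
    (
        OrderId   INT IDENTITY(1,1) PRIMARY KEY,
        OrderDate DATE         NOT NULL,  -- 3 bytes, no time portion to trip over
        OrderTime TIME(0)      NOT NULL,  -- seconds precision is plenty here
        EnteredAt DATETIME2(0) NOT NULL   -- combined value, still only 6 bytes
    );

    -- Finding everything entered on a given day is now a simple equality match.
    SELECT OrderId, OrderTime
    FROM   Orders
    WHERE  OrderDate = '2006-06-05';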

    Since we are talking about the date and time datatypes, I should also mention that there is another date datatype called DATETIMEOFFSET that is time zone aware. But that is a blog for another day if you are interested.

    Here is a quick comparison of the different date and time datatypes:

    Datatype         Range                                       Precision        Bytes  User-Specified Precision
    SMALLDATETIME    1900-01-01 to 2079-06-06                    1 minute         4      No
    DATETIME         1753-01-01 to 9999-12-31                    .00333 seconds   8      No
    DATETIME2        0001-01-01 to 9999-12-31 23:59:59.9999999   100 ns           6-8    Yes
    DATE             0001-01-01 to 9999-12-31                    1 day            3      No
    TIME             00:00:00.0000000 to 23:59:59.9999999        100 ns           3-5    Yes
    DATETIMEOFFSET   0001-01-01 to 9999-12-31 23:59:59.9999999   100 ns           8-10   Yes

    Today’s My 5 is of course related to the Date and Time datatypes.

    My 5 Important Date functions and their forward and backwards compatibility

    1. GETDATE() – Time to STOP using GETDATE(). It still works in SQL Server 2008, but it only returns DATETIME precision (about 3 milliseconds) because it was developed for the DATETIME datatype.
    2. SYSDATETIME() – Time to START using SYSDATETIME(). It returns a precision of 100 nanoseconds because it was developed for the DATETIME2 datatype, and it also works for populating DATETIME columns (see the sketch after this list).
    3. DATEDIFF() – This is a great little function that returns the number of minutes, hours, days, weeks, months, or years between two dates, it supports the new date datatypes.
    4. ISDATE() – This function is used to validate DATETIME values. It returns a 1 if you pass it a character string containing a valid date. However if you pass it a character string representing a datetime that has a precision greater than milliseconds it will consider this an invalid date and will return a 0.
    5. DATEPART() – This popular function returns a portion of a date, for example you can return the year, month, day, or hour. This date function supports all the new date datatypes.
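
    If you want to see the difference for yourself, here is a small sketch you can run in Management Studio; the literal dates are just examples:

    -- GETDATE() returns DATETIME (~3 ms precision); SYSDATETIME() returns DATETIME2(7).
    SELECT GETDATE()     AS OldPrecision,
           SYSDATETIME() AS NewPrecision;

    -- DATEDIFF() and DATEPART() accept the new types as well.
    SELECT DATEDIFF(DAY, '2011-06-01', SYSDATETIME()) AS DaysSinceJune1,
           DATEPART(HOUR, SYSDATETIME())              AS CurrentHour;

    -- ISDATE() rejects strings more precise than DATETIME can hold.
    SELECT ISDATE('2011-06-01 10:30:00.123')     AS FineForDatetime,  -- returns 1
           ISDATE('2011-06-01 10:30:00.1234567') AS TooPrecise;       -- returns 0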

    Also, one extra note, because I know there are some former Oracle developers who use this trick: if you have any SELECT statements where you write OrderDate+1 to add one day to the date, that will not work with the new date and time datatypes, so you need to use the DATEADD() function instead.
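
    For example, a minimal sketch using the hypothetical Orders table from earlier (OrderDate+1 would fail against a DATE or DATETIME2 column, but DATEADD works across all of the date and time datatypes):

    -- Add one day the supported way.
    SELECT OrderId,
           DATEADD(DAY, 1, OrderDate) AS FollowUpDate
    FROM   Orders;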

  • Canadian Solution Developers' Blog

    Why do IT Projects Fail?

    • 2 Comments

    I read an interesting article in this morning’s Ottawa Citizen. It says the auditor general has stated the government is mismanaging a number of IT projects. The auditor general’s office found a history of cost overruns, delays, and failures to deliver on a project’s planned purpose. There are a myriad of reasons that can cause a project to run over budget or miss deadlines. I’d like to hone in on one specific point from the article: “Not delivering on the project’s planned purpose.”

    This one hits close to home for me because for the past few years I was training not just developers but also IT business analysts. A business analyst gathers the requirements for an IT system. A systems analyst on the other hand determines the specifications for an IT system. What’s the difference?

    Requirements describe WHAT the user needs from the system. For example, the user needs to be able to know who reported a bug and needs to be able to track the progress of the bug they reported.

    Specifications describe HOW the system will meet that need. For example, there will be a SQL Server database with a User table containing a ReportedBy field of type VARCHAR, and there will be a mobile application, written using Silverlight and a web service, which will allow users to view the status of their bugs.

    Some projects have dedicated business analysts who gather the requirements and hand them off to the technical team. On other projects, members of the technical team are asked to gather the requirements. Sometimes requirements gathering is skipped and we just start building because “we know what the user needs.” You can be successful gathering requirements with business analysts or with technical team members, but you should never skip requirements gathering because you think you know what the user needs. Even in the agile methodologies, the user is the one who drives the content of the sprints.

    With our technical skills we are very good at building solutions to meet a specific need or requirement. I have been blown away over and over again by the ability of programmers and system architects to come up with creative ways to meet users’ needs even when faced with technical limitations. But if the users’ needs aren’t well understood in the first place, you are headed for trouble.

    Indulge me for a moment as I relate a short story from my own work life. I was managing a few programmers and was asked to build a small application for another team to use internally. I sat down for about an hour with the team lead who wanted the system, to get a feel for what they were after. Then I walked away, sat down with my team, and we designed a solution to meet their needs. That first meeting was the only meeting I had with the other team. We were on a project that was already behind schedule and over budget, so there was a lot of pressure to get this application out the door. Since it was an internal application there was no requirement from management for lengthy documentation or sign-off. With a lot of long hours and creative programming we turned around the application in two weeks! We were patting ourselves on the back for a job well done, especially with the time constraints! We walked into a meeting with our customers (the other team) to show them this wonderful new application we had built for them. Within five minutes it was clear that although our application worked fine, it did not meet the other team’s needs! They got angry because the system didn’t meet their needs. We got angry because we had just killed ourselves for two weeks getting this completely functional solution built and out the door. The result was increased tension and anger on a project that was already under stress, and a solution that didn’t actually help the other team’s productivity. Why did it happen? Because we didn’t fully understand the requirements!

    I can’t teach you how to correctly gather requirements in one blog post, but I can tell you that gathering requirements is a critical success factor for any project, and requirements gathering is the theme for today’s My 5.

    My 5 tips for successful requirements gathering

    1. Your requirements will be wrong – No matter how much time you spend on requirements, no matter how many users you talk to, and how many meetings you hold, you cannot get the requirements 100% correct. That doesn’t mean you should just skip requirements gathering because the requirements will be wrong, it just means you need to accept the fact you can’t get it 100% right.
    2. When gathering requirements your goal is to find as many mistakes in the requirements as you can with the time you have – Since you can’t get requirements 100% correct, you want to get as close as you can to 100% with the time you have. How do you find mistakes in the requirements? See Number 3.
    3. Engage the users early and often – Users will forget to tell you something in that first meeting, or one user will be aware of a requirement that another user misses. Engaging different users, and following up with emails, phone calls, or additional meetings, increases your chances of finding mistakes or gaps in the requirements.
    4. Get requirements signed off before you start testing – It is not unusual for a project to start development when the requirements are still in flux, but how can you test to see if requirements are met if you haven’t agreed on the requirements yet?
    5. Document in one place only – Since the requirements are wrong, you know you will have to update them as you find the mistakes and gaps. Get creative in how you document your requirements. If you find yourself doing a cut and paste from one requirements document to another, ask yourself how can I document this in one place so that *when* it changes, I only have to update the requirement in a single place.

    For more information about requirements gathering, I suggest you check out the International Institute of Business Analysis (IIBA). They organize conferences on requirements gathering, and they even have a business analysis certification based on the information in the Business Analysis Body of Knowledge (BABOK), which is chock full of best practices for gathering requirements. Some of the tips in today’s My 5 come from material prepared by Noble Inc.

  • Canadian Solution Developers' Blog

    Next Generation ALM with Visual Studio vNext

    • 0 Comments

    Last month, during TechEd North America, we were introduced to some of the new features that were coming in the next version of Visual Studio, specifically in the area of Application Lifecycle Management (ALM). Susan was at TechEd and blogged about some of those details. If you weren’t able to get out to TechEd and haven’t yet watched sessions on-demand that talk about Visual Studio vNext, you absolutely must.


    Tech·Ed North America 2011 Keynote Address
    Jason Zander, Robert Wahbe

    If you fast forward to 1:01:15 in the video of the Keynote address, you can watch Jason Zander, Corporate VP, Visual Studio talk about what to expect from Visual Studio vNext.

       

    The Future of Microsoft Visual Studio Application Lifecycle Management
    Cameron Skinner

    This demo-heavy session offers early insights into the future of Application Lifecycle Management and agile development that are incorporated in the next release of Visual Studio.

    And if those two sessions were not enough to get you excited, our friends at TechNet Edge captured some great conversations on the ground at TechEd, specifically, this conversation between Richard Campbell and Jason Zander about the upcoming ALM features. 


    Jason Zander on ALM

    Jason Zander talks to Richard Campbell about Visual Studio v.Next, specifically the upcoming ALM features. Jason talks about the next generation of ALM and how it brings stakeholders, developers and IT Pros closer together with new tooling.

    Want to dive deeper and find out more? Download the Visual Studio vNext whitepaper. It’ll provide you with additional context and an outline of the problems that Visual Studio vNext is working to solve, how they relate to problems faced across the industry, and how Visual Studio vNext will improve the effectiveness of ALM.

    So what’s next? Stay tuned! As more information becomes available, we’ll make sure that you hear about it here. In the meantime, join me in this LinkedIn conversation around the features announced in these sessions, share your thoughts, and perhaps recommend a feature or two that you would like to see in Visual Studio vNext.

  • Canadian Solution Developers' Blog

    Improving the SharePoint User Experience with Custom Web Part Editors

    • 0 Comments

    Today’s blog comes courtesy of Christopher Harrison (@GeekTrainer), a Microsoft Certified Trainer (MCT) teaching mainly SharePoint and SQL Server, and the owner and Head Geek at GeekTrainer Inc. Christopher has been training for the past 12+ years, with a couple of breaks in the middle to take on full-time jobs. These days he focuses mainly on training, presenting at conferences, and consulting. I had the pleasure of attending a SharePoint developer session presented by Christopher in Ottawa, and he agreed to share some insight on a SharePoint developer feature. Christopher even included his own My 5 in this blog post; in fact, to give credit where credit is due, it was one of his posts where I first saw the My 5 concept. So a big thank you for a great blog, complete with a link at the end so you can download the code. Enjoy! Now, take it away Christopher…

    Custom Web Part Editors

    One of the most critical aspects to any successful SharePoint deployment is achieving a high level of user adoption. There are many factors that will aid in driving people to SharePoint, but one of the biggest is ensuring users have access to their data in their desired format and structure. This is where web parts come into play.

    Web parts are of course little boxes, applets, gadgets, widgets, or whatever you like to call them, that perform some action or display some bit of data. Users love web parts because they allow them to easily create and edit pages without having to actually write any code.

    But in order for a web part’s true potential to be realized, it needs to be easily edited by the user. You can make basically any property editable by the user by adding a couple of simple attributes to it. Consider the following web part[1]:

    [WebBrowsable(true)]
    [Category("Configuration")]
    [WebDisplayName("Sales Person ID")]
    [WebDescription("ID of the desired sales person's orders")]
    [Personalizable(PersonalizationScope.User)]
    public Int32 SalesPersonId { get; set; }

    private GridView gvRecentOrders;
    private String connectionString = //AdventureWorks connection string
    private String sql = @"SELECT TOP 10 OrderDate
                                , TotalDue
                           FROM Sales.SalesOrderHeader
                           WHERE SalesPersonID = @SalesPersonID
                           ORDER BY OrderDate DESC";

    protected override void OnPreRender(EventArgs e)
    {
        if (SalesPersonId == 0) {
            DisplayEditLink();
        } else {
            DisplayRecentOrders();
        }
    }

    private void DisplayEditLink()
    {
        HyperLink editLink = new HyperLink();
        editLink.NavigateUrl = String.Format(
            "javascript:ShowToolPane2Wrapper('Edit','129','{0}');", this.ID);
        editLink.ID = String.Format("MsoFrameworkToolpartDefmsg_{0}", this.ID);
        editLink.Text = "Click here to edit web part";
        this.Controls.Add(editLink);
    }

    private void DisplayRecentOrders()
    {
        gvRecentOrders = new GridView();

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection)) {
            connection.Open();
            command.Parameters.AddWithValue("@SalesPersonID", SalesPersonId);

            using (SqlDataReader reader = command.ExecuteReader()) {
                gvRecentOrders.DataSource = reader;
                gvRecentOrders.DataBind();
            }
        }

        Controls.Add(gvRecentOrders);
    }

     

    A couple of quick notes:

    • First – there’s nothing overly fancy about what I’m doing. I have a property (SalesPersonID) that I want to use to filter the orders. DisplayRecentOrders() executes the query and binds the data to the GridView.
    • Second – I’ve taken a couple of short cuts, hard coding my SQL and my connection string. Not a good idea for production, but just fine for blog posts. :-)
    • Third – I’ve included a cool little bit of boilerplate code in DisplayEditLink. This will check to see if a value has been set on SalesPersonId and, if not, provide a link the user can click to open the tool pane. That’s much more intuitive for users new to SharePoint than having to click the little drop-down arrow in the upper right corner of the web part.

    Here’s the problem. When the tool pane is created, it’s going to use a default editor for SalesPersonId. Because SalesPersonId is an integer, the default editor is going to be a textbox, which would require the user to type in the ID of the desired sales person. That’s obviously not ideal. While I could create a connected web part and allow the user to choose the sales person from there, that would mean I’d have to add another web part to the page and configure the connection, which is a bit more than I really need for a simple single-value property.

    The solution? Customize the editor.

    SharePoint allows you to create your own editor for a web part if you are not happy with the one created for you. The first step is to create a new class that inherits from EditorPart. EditorPart is similar to a normal web control, except it has two methods that need to be implemented:

    • ApplyChanges() – Responsible for updating the web part with the values chosen through the editor
    • SyncChanges() – Responsible for updating the editor with the values from the web part

    Creating the EditorPart involves overriding CreateChildControls() (like a normal control), and then overriding those two methods. The end result looks a bit like the following:

    private String connectionString = //AdventureWorks connection string
    private String sql =
        @"SELECT c.FirstName + ' ' + c.LastName AS Name
               , sp.SalesPersonId
          FROM Person.Contact c
          INNER JOIN HumanResources.Employee e
                ON c.ContactID = e.ContactID
          INNER JOIN Sales.SalesPerson sp
                ON e.EmployeeID = sp.SalesPersonID
          ORDER BY Name";

    private DropDownList ddlSalesPeople;
    private Label lblSalesPeople;

    protected override void CreateChildControls()
    {
        this.Title = "Recent Sales Properties";

        lblSalesPeople = new Label();
        lblSalesPeople.Text = "<b>Sales Person:</b>";
        Controls.Add(lblSalesPeople);

        ddlSalesPeople = new DropDownList();
        ddlSalesPeople.DataValueField = "SalesPersonId";
        ddlSalesPeople.DataTextField = "Name";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection)) {
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader()) {
                ddlSalesPeople.DataSource = reader;
                ddlSalesPeople.DataBind();
            }
        }
        Controls.Add(ddlSalesPeople);

        base.CreateChildControls();
        ChildControlsCreated = true;
    }

    // Save settings to the web part
    public override bool ApplyChanges()
    {
        EnsureChildControls();
        RecentSales webPart = (RecentSales) this.WebPartToEdit;
        webPart.SalesPersonId = Int32.Parse(ddlSalesPeople.SelectedValue);
        return true;
    }

    // Retrieve settings from the web part
    public override void SyncChanges()
    {
        EnsureChildControls();
        RecentSales webPart = (RecentSales) this.WebPartToEdit;
        ddlSalesPeople.SelectedValue = webPart.SalesPersonId.ToString();
    }

     

    A couple of notes about my new editor:

    • Once again I’ve hardcoded my connection string and SQL query. I’m ok with that here. :-)
    • CreateChildControls() is relatively straightforward, adding in the label for display purposes, and then the drop-down list.
    • ApplyChanges() and SyncChanges() work just as prescribed, reading the WebPartToEdit property to access the web part.

    After creating the EditorPart, the last step is to configure the web part to use the new editor. This is done by performing the following actions:

     

    1. Update the web part to implement IWebEditable. IWebEditable has two members:

    • CreateEditorParts() – a method used to create an instance of the desired editor part
    • WebBrowsableObject – a property used to return the current instance of the web part for access in the editor

    For our example an implementation would look like this:

     

    EditorPartCollection IWebEditable.CreateEditorParts()
    {
        List<EditorPart> editors = new List<EditorPart>();
        RecentSalesEditor editor = new RecentSalesEditor();
        editor.ID = this.ID + "EditorPart";
        editors.Add(editor);
        return new EditorPartCollection(editors);
    }

    object IWebEditable.WebBrowsableObject
    {
        get { return this; }
    }

    2. Remove every attribute from the property except Personalizable; or, more specifically, make sure that WebBrowsable is gone. While leaving the other ones behind won’t hurt anything, they’re superfluous since we won’t be counting on SharePoint to create the editor for us. The resulting property looks like this:

    [Personalizable(PersonalizationScope.User)]
    public Int32 SalesPersonId { get; set; }

    The end result in the editor will look a little like this:

    [Screenshot: the Recent Sales Properties tool pane showing the Sales Person drop-down list]

    Creating usable web parts will go a long way to ensuring reuse and adoption of your SharePoint implementation. With that, here’s today’s My 5.

    My 5 components to a good web part

    1. Configurable – A good web part should be configurable by the user. Each user will have different needs for certain web parts, and by allowing users the ability to configure them they’ll become more valuable.
    2. Self-documenting – Users may or may not actually read any documentation that’s separate from the web part. Make sure your properties are clearly named, and provide any supporting documentation on the editor.
    3. Connectable – When appropriate, take advantage of the ability to connect web parts. This allows your users to create parent/child views with ease.
    4. Validates User Input – Either implicitly through controls like drop down lists, or explicitly by using validation controls, make sure you validate every value to ensure it is acceptable.
    5. Provide Custom Error Messages – Unhandled exceptions in a web part will crash the entire page. Make sure you catch all exceptions and display friendly error messages (a short sketch follows this list).
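
    As a rough illustration of that last point (a hypothetical sketch, not code from the sample download), the web part’s rendering logic can be wrapped so an exception produces a friendly message instead of taking down the page:

    // Uses System.Web.UI.WebControls; DisplayRecentOrders() is the method from the web part above.
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        try
        {
            DisplayRecentOrders();   // any call that might throw
        }
        catch (Exception ex)
        {
            // Swap the broken content for a friendly message rather than letting
            // the exception bubble up and break the whole page.
            Controls.Clear();
            Controls.Add(new Literal { Text = "Sorry, recent orders are unavailable right now." });

            // Record the details somewhere an administrator will actually see them.
            System.Diagnostics.Trace.WriteLine(ex.ToString());
        }
    }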

    Oh – and before I forget. You can download the source code!

    [1] In my example, I’m using a custom web part. I did this merely for simplicity of the blog post. I generally favour visual web parts, but adding properties to a visual web part requires a couple of additional steps. The steps I walked through above can be implemented with a visual web part using the same techniques, remembering that the property needs to be added to the web part class, and the user control will be responsible for reading and using the value. If you’re interested, I wrote a blog post on adding properties to visual web parts.

  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Part 4 - Secure Architecture

    • 0 Comments

     

    In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous posts, so check those out (links below) and then join us back here. If you’ve already read the previous posts, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerabilities
    Part 3: Secure Design and Analysis in Visual Studio 2010 and Other Tools
    Part 4: Secure Architecture (This Post)

    In this session of our conversation, The Basics of Securing Applications, Steve provides us with great architectural guidance. As usual when it comes to architecture, there is a lot to cover, so today’s discussion is a bit longer; on the other hand, the security of your application is a big deal, and getting the architecture right is worth the up-front investment.

    Steve, back to you.

    Absolutely!

    Before you start to build an application you need to start with its design. Previously, I stated that bugs introduced at this stage of the process are the most expensive to fix throughout the lifetime of the project. For this reason we need a good foundation for security; otherwise it'll be expensive (not to mention a pain) to fix later. Keep in mind this isn't an agile-versus-the-world discussion, because no matter what, at some point you still need a design for the application.

    Before we can design an application, we need to know that there are two basic types of code/modules:

    1. That which is security related; e.g. authentication, crypto, etc.
    2. That which is not security related (but should still be secure nonetheless); e.g. CRUD operations.

    They can be described as privileged or unprivileged.

    Privileged

    Whenever a piece of code is written that deals with things like authentication or cryptography, we have to be very careful with it because it should be part of the secure foundation of the application. Privileged code is the authoritative core of the application. Careless design here will render your application highly vulnerable, and needless to say, we don't want that. We want a foundation we can trust. So, we have three options:

    • Don't write secure code, and be plagued with potential security vulnerabilities.  Less cost, but you cover the risk.
    • Write secure code, test it, have it verified by an outside source.  More cost, you still cover the risk.
    • Make someone else write the secure code.  Range of cost, but you don't cover the risk.

    In general, from a cost/risk perspective, our costs and risks decrease as we move from the top of the list to the bottom.  This should therefore be a no-brainer: DO NOT BUILD YOUR OWN PRIVILEGED SECURITY MODULES.  Do not invent a new way of doing things if you don't need to, and do not rewrite modules that have already been vetted by security experts.  This may sound harsh, but seriously, don't.  If you think you might need to, stop thinking.  If you still think you need to, contact a security expert. PLEASE!

    This applies to both coding and architecture. When we talked about vulnerabilities, we did not come up with a novel way of protecting our inputs; we used well-known libraries or methods. Now we want to apply the same thinking to the application architecture.

    Authentication

    Let's start with authentication.

    A lot of times an application has a need for user authentication, but its core function has nothing to do with user authentication. Yes, you might need to authenticate users for your mortgage calculator, but the core function of the application is calculating mortgages, and it has very little to do with users. So why would you put that application in charge of authenticating users? It seems like a fairly simple argument, but whenever you let your application use something like a SqlMembershipProvider you are letting the application manage authentication. Not only that, you are letting the application manage the entire identity of the user. How much of that identity information is duplicated in multiple databases? Is this really the right way to do things? Probably not.

    From an architectural perspective, we want to create an abstract relationship between the identity of the user and the application.  Everything this application needs to know about this user is part of this identity, and (for the most part) the application is not an authority on any of this information because it's not the job of the application to be the authority.

    Let's think about this another way.

    Imagine for a moment that you want to get a beer at the bar. In theory the bartender should ask you for proof of age. How do you prove it? Well, one option is to have the bartender cut you in half and count the number of rings, but there could be some problems with that. The other option is for you to write down your birthday on a piece of paper to which the bartender approves or disapproves. The third option is to go to the government, get an ID card, and then present the ID to the bartender.

    Some may have laughed at the idea of just writing your birthday on a piece of paper, but this is essentially what is happening when you authenticate users within your application, because it is up to the bartender (or your application) to trust the piece of paper. However, we trust the government's assertion that the birthday on the ID is valid, and that the ID belongs to the person requesting the drink. The bartender knows nothing about you except your date of birth, because that's all the bartender needs to know. Now, the bartender could store information that they think is important to them, like your favorite drink, but the government doesn't care (as it isn't the authoritative source for it), so the bartender stores that information in his own way.

    Now this raises the question: how do you prove your identity/age to the government, or how do you authenticate against this external service? Frankly, it doesn't matter, because that's the core function of the external service and not of our application. Our application just needs to trust that the assertion is valid, and trust that the authentication mechanism behind it is secure.

    In developer speak, this is called Claims Based Authentication:

    • A claim is an arbitrary piece of information about an identity, such as age, and is bundled into a collection of claims, to be part of a token.
    • A Security Token Service (STS) generates the token, and our application consumes it.  It is up to the STS to handle authentication. 

    Both Claims Based Authentication and the Kerberos Protocol are built around the same model, although they use different terms.  If you are looking for examples, Windows Live/Hotmail use Claims via the WS-Federation protocol.  Google, Facebook, and Twitter use Claims via the OAuth protocol.  Claims are everywhere.

    Alright, less talking, more diagramming:

    [Diagram: the user's browser authenticates with the STS, receives a token, and POSTs it to the application]

    The process goes something like this:

    1. Go to STS and authenticate (this is usually a web page redirect + the user entering their credentials)
    2. The STS tells the user's browser to POST the token to the application
    3. The application verifies the token and verifies whether it trusts the STS
    4. The Application consumes the token and uses the claims as it sees fit

    Now we get back to asking how the heck does the STS handle authentication?  The answer is that it depends (Ah, the consultant’s answer).  The best case scenario is that you use an STS and identity store that already exist.  If you are in an intranet scenario use Active Directory and Active Directory Federation Services (a free STS for Active Directory).  If your application is on the internet use something like Live ID or Google ID, or even Facebook, simplified with Windows Azure Access Control Services.  If you are really in a bind and need to create your own STS, you can do so with the Windows Identity Foundation (WIF).  In fact, use WIF as the identity library in the diagram above.  Making a web application claims-aware involves a process called Federation.  With WIF it's really easy to do.

    Accessing claims within the token is straightforward because you are only accessing a single object, the identity within the CurrentPrincipal:

    private static TimeSpan GetAge()
    {
        IClaimsIdentity ident = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
    
        if (ident == null)
            throw new ApplicationException("Isn't a claims based identity");
    
        var dobClaims = ident.Claims.Where(c => c.ClaimType == ClaimTypes.DateOfBirth);
    
        if(!dobClaims.Any())
            throw new ApplicationException("There are no date of birth claims");
    
        string dob = dobClaims.First().Value;
    
        TimeSpan age = DateTime.Now - DateTime.Parse(dob);
    
        return age;
    }

    There is a secondary benefit to Claims Based Authentication: you can also use it for authorization. WIF supports the concept of a ClaimsAuthorizationManager, which you can use to authorize access to site resources. Instead of writing your own authorization module, you simply define the rules for access, which is very much a business problem, not a technical one.
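
    As a rough sketch of what that can look like (the claim type check and the age-of-majority rule below are made-up examples, and the manager still has to be registered in the application's WIF configuration), a custom ClaimsAuthorizationManager overrides CheckAccess:

    // Uses Microsoft.IdentityModel.Claims (WIF) and System.Linq, as in the earlier snippet.
    public class DrinkAuthorizationManager : ClaimsAuthorizationManager
    {
        public override bool CheckAccess(AuthorizationContext context)
        {
            IClaimsIdentity identity = context.Principal.Identity as IClaimsIdentity;
            if (identity == null)
                return false;

            // Only the hypothetical "PourDrink" action is restricted in this example.
            bool isPourDrink = context.Action.Any(a => a.Value == "PourDrink");
            if (!isPourDrink)
                return true;

            // Business rule, not plumbing: require a date-of-birth claim at least 19 years old.
            Claim dobClaim = identity.Claims
                .FirstOrDefault(c => c.ClaimType == ClaimTypes.DateOfBirth);

            return dobClaim != null
                && DateTime.Parse(dobClaim.Value).AddYears(19) <= DateTime.Now;
        }
    }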

    Once authentication and authorization are dealt with, the final two architectural nightmares revolve around privacy and cryptography.

    Privacy

    Privacy is the control of Personally Identifiable Information (PII), which is defined as anything you can use to personally identify someone (good definition, huh?).  This can include information like SIN numbers, addresses, phone numbers, etc.  The easiest solution is to simply not use the information.  Don't ask for it and don't store it anywhere.  Since this isn't always possible, the goal should be to use (and request) as little as possible.  Once you have no more uses for the information, delete it.

    This is a highly domain-specific problem and it can't be solved in a general discussion on architecture and design.  Microsoft Research has an interesting solution to this problem by using a new language designed specifically for defining the privacy policies for an application:

    Preferences and policies are specified in terms of granted rights and required obligations, expressed as assertions and queries in an instance of SecPAL (a language originally developed for decentralized authorization). This paper further presents a formal definition of satisfaction between a policy and a preference, and a satisfaction checking algorithm. Based on the latter, a protocol is described for disclosing PIIs between users and services, as well as between third-party services.

    Privacy is also a measure of access to information in a system. Authentication and authorization are core components of proper privacy controls. There needs to be access control on user information. Further, access to this information needs to be audited: any time someone reads, updates, or deletes personal information, it should be recorded somewhere for review later. There are quite a number of logging frameworks available, such as log4net or ELMAH.
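
    For example (a hypothetical helper; the method and message format are mine, and log4net is assumed to be configured elsewhere in the application), every code path that touches personal information could funnel through a single audit call:

    // Uses log4net; call Record() from any code that reads, updates, or deletes PII.
    public static class PiiAudit
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(PiiAudit));

        public static void Record(string userName, string action, string recordId)
        {
            // Who did what to which record, and when: the minimum for a useful audit trail.
            Log.InfoFormat("PII {0} by {1} on record {2} at {3:o}",
                           action, userName, recordId, DateTime.UtcNow);
        }
    }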

    Cryptography

    For the love of all things holy, do not do any custom crypto work. Rely on publicly vetted libraries like Bouncy Castle and formats like OpenPGP.

    • Be aware of how you are storing your private keys. Don't store them in the application as magic-strings, or store them with the application at all. If possible store them in a Hardware Security Module (HSM).
    • Make sure you have proper access control policies for the private keys.
    • Centralize all crypto functions so different modules aren't using their own implementations.
    • Finally, if you have to write custom encryption wrappers, make sure your code is capable of switching encryption algorithms without requiring recompilation.  The .NET platform has made it easy to change.  You can specify a string:
    public static byte[] SymmetricEncrypt(byte[] plainText, byte[] initVector, byte[] keyBytes)
    {
        if (plainText == null || plainText.Length == 0)
            throw new ArgumentNullException("plainText");
    
        if (initVector == null || initVector.Length == 0)
            throw new ArgumentNullException("initVector");
    
        if (keyBytes == null || keyBytes.Length == 0)
            throw new ArgumentNullException("keyBytes");
    
        using (SymmetricAlgorithm symmetricKey = SymmetricAlgorithm.Create("algorithm")) // e.g.: 'AES'
        {               
            return CryptoTransform(plainText, symmetricKey.CreateEncryptor(keyBytes, initVector));
        }
    }
    
    private static byte[] CryptoTransform(byte[] payload, ICryptoTransform transform)
    {
        using (MemoryStream memoryStream = new MemoryStream())
        using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
        {
            cryptoStream.Write(payload, 0, payload.Length);
    
            cryptoStream.FlushFinalBlock();
    
            return memoryStream.ToArray();
        }
    }

    Microsoft provides a list of all supported algorithms, as well as how to specify new algorithms for future use. By following these design guidelines you should have a fairly secure foundation for your application. Now let's look at unprivileged modules.

    Unprivileged

    When we talked about vulnerabilities, there was a single, all-encompassing, hopefully self-evident solution to most of them: sanitize your inputs. Most vulnerabilities, one way or another, are the result of bad input. This is therefore going to be a very short section.

    • Don't let input touch queries directly.  Build business objects around data and encode any strings.  Parse all incoming data.  Fail gracefully on bad input.
    • Properly lock down access to resources through authorization mechanisms in privileged modules, and audit all authorization requests.
    • If encryption is required, call into a privileged module.
    • Validate the need for SSL. If necessary, force it (a minimal sketch follows this list).
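
    For that last point, one common way to force SSL (a sketch only; it assumes an ASP.NET application and that the site answers on the default HTTPS port) is a redirect in Global.asax:

    // Global.asax.cs: push any plain-HTTP request over to its HTTPS equivalent.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        if (!Request.IsSecureConnection)
        {
            UriBuilder secureUrl = new UriBuilder(Request.Url)
            {
                Scheme = Uri.UriSchemeHttps,
                Port = 443   // assumes the default HTTPS port
            };
            Response.Redirect(secureUrl.Uri.AbsoluteUri, true);
        }
    }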

    Final Thoughts

    Bugs during this phase of development are the most costly and hardest to fix, as the design and architecture of the application is the most critical step in making sure it isn't vulnerable to attack. Throughout our conversation, we have looked at how to develop a secure application.  Next time, we will look at how to respond to threats and mitigate the damage done.

    About Steve Syfuhs

    Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology. Part developer, part IT Pro, part Consultant working for ObjectSharp. Steve spends most of his time in the security stack with special interests in Identity and Federation. He recently received a Microsoft MVP award in Developer Security. You can find his ramblings about security at www.steveonsecurity.com
  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Vulnerabilities

    • 0 Comments

    In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous post, so check that out (links below) and then join us back here. If you’ve already read the previous post, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerability Deep Dive (This Post)

    In this session of our conversation, The Basics of Securing Applications, Steve goes through the primers of writing secure code, focusing specifically on the most common threats to web-based applications.

    Steve, back to you.

    Great, thank you. When we started the conversation last week, I stated that knowledge is key to writing secure code:

    Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.

    In order to truly protect our applications and the data within them, we need to know about the threats in the real world. The OWASP top 10 list gives us a good starting point for understanding some of the more common vulnerabilities that malicious users can use for attack:

    1. Injection
    2. Cross-Site Scripting (XSS)
    3. Broken Authentication and Session Management
    4. Insecure Direct Object References
    5. Cross-Site Request Forgery (CSRF)
    6. Security Misconfiguration
    7. Insecure Cryptographic Storage
    8. Failure to Restrict URL Access
    9. Insufficient Transport Layer Protection
    10. Unvalidated Redirects and Forwards

    It's important to realize that this list is strictly for the web; we aren't talking about client applications, though some run into the same problems. We'll focus on web-based applications for now. It's also helpful to keep in mind that we aren't talking about just Microsoft-centric vulnerabilities; these also exist in applications running on the LAMP (Linux/Apache/MySQL/PHP) stack and variants in between. Finally, it's very important to note that just following these instructions won't automatically give you a secure code base; these are just primers on some ways of writing secure code.

    Injection

    Injection is a way of changing the flow of a procedure by introducing arbitrary changes. An example is SQL injection. Hopefully you've heard of SQL injection, but let's take a look at this bit of code for those who don't know about it:

    string query = string.Format("SELECT * FROM UserStore WHERE UserName = '{0}' AND PasswordHash = '{1}'", username, password);

    If we passed it into a SqlCommand we could use it to see whether or not a user exists, and whether or not their hashed password matches the one in the table. If so, they are authenticated. Well what happens if I enter something other than my username? What if I enter this:

    '; --

    It would modify the SQL Query to be this:

    SELECT * FROM UserStore WHERE UserName = ''; -- AND PasswordHash = 'TXlQYXNzd29yZA=='

    This has essentially broken the query into a single WHERE clause, asking for a user with a blank username because the single quote closed the parameter, the semicolon finished the executing statement, and the double dash made anything following it into a comment.

    Hopefully your user table doesn't contain any blank records, so let's extend that a bit:

    ' OR 1=1; --

    We've now appended a new clause, so the query looks for records with a blank username OR where 1=1. Since 1 always equals 1, it will return true, and since the query looks for any filter that returns true, it returns every record in the table.

    If our SqlCommand just looked for at least one record in the query set, the user is authenticated. Needless to say, this is bad.

    We could go one step further and log in as a specific user:

    administrator';  --

    We've now modified the query in such a way that it is just looking for a user with a particular username, such as the administrator.  It only took four characters to bypass a password and log in as the administrator.

    Injection can also work in a number of other places such as when you are querying Active Directory or WMI. It doesn't just have to be for authentication either. Imagine if you have a basic report query that returns a large query set. If the attacker can manipulate the query, they could read data they shouldn't be allowed to read, or worse yet they could modify or delete the data.

    Essentially our problem is that we don't sanitize our inputs.  If a user is allowed to enter any value they want into the system, they could potentially cause unexpected things to occur.  The solution is simple: sanitize the inputs!

    If we use a SqlCommand object to execute our query above, we can use parameters.  We can write something like:

    string query = "SELECT * FROM UserStore WHERE UserName = @username AND PasswordHash = @passwordHash";
                
    SqlCommand c = new SqlCommand(query);
    c.Parameters.Add(new SqlParameter("@username", username));
    c.Parameters.Add(new SqlParameter("@passwordHash", passwordHash));

    This does two things: it makes .NET handle the string handling, and it makes .NET properly sanitize the parameter values, so

    ' OR 1=1; --

    is converted to

    ''' OR 1=1; --'

    In the SQL language, two single quote characters act as an escape sequence for a single quote, so in effect the query looks for the literal value as entered, quote and all.

    The other option is to use a readily available Object Relational Mapper (ORM) like the Entity Framework or NHibernate, so you don't have to write error-prone SQL queries. You could write something like this with LINQ:

    var users = from u in entities.UserStore 
                where u.UserName == username && u.PasswordHash == passwordHash select u;

    It looks like a SQL query, but it's compilable C#. It solves our problem by abstracting away the ungodly mess that is SqlCommands, DataReaders, and DataSets.

    Cross-Site Scripting (XSS)

    XSS is a way of adding a chunk of malicious JavaScript into a page via flaws in the website. This JavaScript could do a number of different things such as read the contents of your session cookie and send it off to a rogue server. The person in control of the server can then use the cookie and browse the affected site with your session. In 2007, 80% of the reported vulnerabilities on the web were from XSS.

    XSS is generally the result of not properly validating user input. Conceptually it usually works this way:

    1. A query string parameter contains a value: ?q=blah
    2. This value is outputted on the page
    3. A malicious user notices this
    4. The malicious user inserts a chunk of JavaScript into the URL parameter: ?q=<script>alert("pwned");</script>
    5. This script is outputted without change to the page, and the JavaScript is executed
    6. The malicious user sends the URL to an unsuspecting user
    7. The user clicks on it while logged into the website
    8. The JavaScript reads the cookie and sends it to the malicious user's server
    9. The malicious user now has the unsuspecting user's authenticated session

    This occurs because we don't sanitize user input. We don't remove or encode the script so it can't execute.  We therefore need to encode the inputted data.  It's all about the sanitized inputs.

    The basic problem is that we want to display the content the user submitted on a page, but the content can be potentially executable. Well, how do we display JavaScript textually? We encode each character with HtmlEncode, so the < (left angle bracket) is outputted as &lt; and the > (right angle bracket) is outputted as &gt;. In .NET you have some helpers in the HttpUtility class:

    HttpUtility.HtmlEncode("<script>alert(\"hello!\");</script>");

    This works fairly well, but you can bypass it by doing multiple layers of encoding (encoding an encoded value that was encoded with another formula).  This problem exists because HtmlEncode uses a blacklist of characters, so whenever it comes across a specific character it will encode it.  We want it to do the opposite – use a whitelist.  So whenever it comes across a known character, it doesn't encode it (such as the letter 'a'), but otherwise it encodes everything else.  It's generally far easier to protect something if you only allow known good things instead of blocking known threats (because threats are constantly changing).

    Microsoft released a toolkit to solve this encoding problem, called the AntiXss toolkit. It's now part of the Microsoft Web Protection Library, which also actually contains some bits to help solve the SQL injection problem.  To use this encoder, you just need to do something like this:

    string encodedValue = Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(userInput);

    There is another step, which is to set the cookie to server-only, meaning that client side scripts cannot read the contents of the cookie.  Only newer browsers support this, but all we have to do is write something like this:

    HttpCookie cookie = new HttpCookie("name", "value");
    cookie.HttpOnly = true;

    For added benefit while we are dealing with cookies, we can also do this:

    cookie.Secure = true;

    Setting Secure to true requires that the cookie only be sent over HTTPS.

    This should be the last step in the output.  There shouldn't be any tweaking to the text or cookie after this point.  Call it the last line of defense on the server-side.

    Cross-Site Request Forgery (CSRF)

    Imagine a web form that has a couple fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc. You need to log in, fill in the details, and click submit. That submit POSTs the data back to the server, and the server processes it. In ASP.NET WebForms, the only validation that goes on is whether the ViewState hasn’t been tampered with.  Other web frameworks skip the ViewState bit, because well, they don't have a ViewState.

    Now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat. Yay, kittehs! Anyway, on that page is a simple set of hidden form tags with malicious data in it. Something like their account number, and an obscene number for cash transfer. On page load, JavaScript POST’s that form data to the transfer page, and since you are already logged in, the server accepts it. Sneaky.

    There is actually a pretty elegant way of solving this problem. We need to create a value that changes on every page request, and send it as part of the response. Once the server receives the postback, it validates the value and if it's bad it throws an exception. In the cryptography world, this is called a nonce. In ASP.NET WebForms we can solve this problem by encrypting the ViewState. We just need a bit of code like this in the page (or master page):

    void Page_Init (object sender, EventArgs e) 
    { 
        ViewStateUserKey = Session.SessionID; 
    }

    When we set the ViewStateUserKey property on the page, the ViewState is encrypted based on this key.  This key is only valid for the length of the session, so this does two things.  First, since the ViewState is encrypted, the malicious user cannot modify their version of the ViewState since they don't know the key.  Second, if they use an unmodified version of a ViewState, the server will throw an exception since the victim's UserKey doesn't match the key used to encrypt the initial ViewState, and the ViewState parser doesn't understand the value that was decrypted with the wrong key.  Using this piece of code depends entirely on whether or not you have properly set up session state though.  To get around that, we need to set the key to a cryptographically random value that is only valid for the length of the session, and is only known on the server side.  We could for instance use the modifications we made to the cookie in the XSS section, and store the key in there.  It gets passed to the client, but client script can't access it.  This places a VERY high risk on the user though, because this security depends entirely on the browser version.  It also means that any malware installed on the client can potentially read the cookie – though the user has bigger problems if they have a virus. 

    Security is complex, huh?  Anyway…

    In MVC we can do something similar except we use the Html.AntiForgeryToken().

    This is a two step process.  First we need to update the Action method(s) by adding the ValidateAntiForgeryToken attribute to the method:

    [AcceptVerbs(HttpVerbs.Post)]
    [ValidateAntiForgeryToken]
    public ActionResult Transfer(WireTransfer transfer)
    {
        try
        {
            if (!ModelState.IsValid)
                return View(transfer); 
    
            context.WireTransfers.Add(transfer);
            context.SubmitChanges();
    
            return RedirectToAction("Transfers");
        }
        catch
        {
            return View(transfer);
        }
    }

    Then we need to add the AntiForgeryToken to the page:

    <%= Html.AntiForgeryToken() %>

    This helper will output a nonce that gets checked by the ValidateAntiForgeryToken attribute.

    Insecure Cryptographic Storage

    I think it's safe to say that most of us get cryptography-related stuff wrong most of the time at first. I certainly do, mainly because crypto is hard to do properly. If you noticed above in my SQL injection query, I used this value for my hashed password: TXlQYXNzd29yZA==.

    It's not actually hashed; it's encoded using Base64 (the trailing double equals sign is a dead giveaway). The decoded value is 'MyPassword'. The difference is that hashing is a one-way process: once I hash something (with a cryptographic hash), I can't un-hash it. Second, if I happen to get hold of someone else's user table, I can look for hashed passwords that look the same. So for anyone else who has "TXlQYXNzd29yZA==" stored in the table, I know their password is 'MyPassword'. This is where a salt comes in handy. A salt is just a chunk of data appended to the unhashed password before hashing. Each user has a unique salt, and therefore will have a unique hash.
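
    To make that concrete, here is a minimal sketch of salted password storage; it uses PBKDF2 via .NET's Rfc2898DeriveBytes rather than a bare hash, and the salt size and iteration count are illustrative choices of mine, not a recommendation from the original post:

    // Uses System.Security.Cryptography and System.Linq.
    public static class PasswordHasher
    {
        public static string Hash(string password)
        {
            using (var pbkdf2 = new Rfc2898DeriveBytes(password, 16, 10000))
            {
                byte[] salt = pbkdf2.Salt;         // 16 random bytes, unique per user
                byte[] hash = pbkdf2.GetBytes(32); // 256-bit derived key

                // Store salt and hash together; both are needed to verify later.
                return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
            }
        }

        public static bool Verify(string password, string stored)
        {
            string[] parts = stored.Split(':');
            byte[] salt = Convert.FromBase64String(parts[0]);
            byte[] expected = Convert.FromBase64String(parts[1]);

            using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, 10000))
            {
                // A constant-time comparison is preferable; SequenceEqual keeps the sketch short.
                return pbkdf2.GetBytes(32).SequenceEqual(expected);
            }
        }
    }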

    Then, in the last section on CSRF, I talked about using a nonce. Nonces are valuable to the authenticity of a request: they prevent replay attacks, where an attacker captures a message and resends it, because without a nonce the replayed message would look identical to the original. It is extremely important that the attacker not be able to predict how this nonce is generated.

    Which leads to the question: how do you properly secure encryption keys? Well, that's a discussion in and of itself. Properly using cryptography in an application is really hard to do; proper cryptography in an application is a topic fit for a book.

    Final Thoughts

    We’ve touched on only four of the items in the OWASP top 10 list, as they are directly solvable using publicly available frameworks. The other six items in the list can be solved through the use of tools as well as by designing a secure architecture, both of which we will talk about in the future.

    Looking forward to continuing the conversation.

    About Steve Syfuhs

    Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology. Part developer, part IT Pro, part Consultant working for ObjectSharp. Steve spends most of his time in the security stack with special interests in Identity and Federation. He recently received a Microsoft MVP award in Developer Security. You can find his ramblings about security at www.steveonsecurity.com
  • Canadian Solution Developers' Blog

    Why Employers Should Pay for Training and Certification

    • 0 Comments


    As you know, I had the chance to sit down with Canadian MVP Mitch Garvis (@MGarvis) recently to talk about training and certification. So far, we’ve discussed:

    Not Studying for a Certification Exam
    Why Employers Should Pay for Training and Certification (This Post)

    Getting certified is an investment of time and, further, an investment of money. There is a fee for the exam, preparatory books, and courses, all of which are needed to review the material and prepare for the certification exam. Mitch believes that these are expenses your employer should cover, or at the very least assist with. So the question is: why?

    The following is Mitch’s answer, offering up guidance to help you build your case for your employer:

    That’s a great question. There are so many ways that we can learn a product… I’m going to use a horrible expression, but “there’s more than one way to skin a cat”. I’ll explain by using an example.

In desktop deployment, you, as an IT Pro, would know how to install Windows 7 on your computer. You do whatever you need to do and three and a half hours later, your computer is done [on a good day]. I was at a client yesterday and they told me that they have their deployments down to an hour and forty-five minutes. “We took the time to learn it and how to do it – we did the research.” they told me. I asked: “what did you learn?” and they gave me a list of products that they learned, each of which was a generation or two removed from the current. I looked at them and I said “well if you just learned this, which is the Microsoft Deployment Toolkit, the hour and forty-five minutes becomes 17 minutes. Did you know that?!” The gentleman with whom I was speaking was shocked – he was the desktop deployment guru for his company and he realized that he had learned it the wrong way.

So, you can learn something and it will get the job done, or you can learn something and do it right and get the job done better. You have to invest the time in training. You know, my 13-year-old son knows everything that he knows and has no concept of what he doesn’t know. Let’s take “13-year-old son” out of that sentence. You know what you know very well, but you will have very little concept of what you don’t know if you don’t know it.

    Let’s relate that to development. You want (or need) to learn Windows Azure, let’s say. Windows Azure is a huge platform and as with any complex platform, there are many different ways to architect solutions targeted at the platform. Some ways are more performant than others, while some ways are more cost-effective than others. There are also ways to have a balance between the two (“do it right and get the job done better” as Mitch says above); however, finding out how to do so would be challenging unless you either have the experience of previous trial-and-error attempts or take training from others who have done it and learn from their experiences.

Imagine the IT pro or developer who doesn’t participate in community events (user groups) or go to events held by Microsoft or Microsoft partners – and by the way, there is nothing wrong with that – but then how would you know that there are things that you don’t know? By telling your boss “hey look, there’s something out there that I don’t know, and therefore I don’t know the proper way to do it.” This is why training and certification matter. By the way, if you don’t think that you can have that conversation with your boss and convince them that you need those training days, bring me in. I'll have that conversation for you and, in no time, I’ll convince him or her.

You know what? CxOs don’t care about what’s cool out there. They care about one very simple equation: how do I increase ROI and reduce cost of ownership? The answer to that is – train your people properly, invest in their training.

I remember, back in the day, I used to have this discussion with one of my old employers and somehow, the discussion always came down to learning something on my own. The thought was that if I could learn it on my own, that would be more cost effective, and therefore additional paid training and certification were not approved.

    I did say before that there is more than one way to skin a cat. There is more than one way to learn something the right way. I didn’t say there was more than one way to do something the right way. There is more than one way to learn how to do something the right way. If you can’t get the X days off to take a course – your boss says to you “learn it on your own time” – say to him/her “you know what boss? I could probably learn it almost as well as the course, but I still need books and still need the certification afterwards.” and get him/her to invest in that. You can learn the right way from books, online forums, and articles. Just make sure that you don’t hack your way through it and learn it the way you think you should be learning it. Don’t just get the frameworks and just start going at it and think you’re going to be an excellent, excellent developer.

    Now It’s Your Turn

    How have you approached training and certification with your employer? How did you pitch your business case? Which approaches have worked and which haven’t? Share your thoughts.

    Conversation Continued

    Stay tuned for more insights from my conversation with Mitch as we chat about actually taking exams, some tips and tricks, and what to do after an exam, whether you pass or fail.

    Mitch Garvis

    Mitch Garvis is a Renaissance Man of the IT world with a passion for community.  He is an excellent communicator which makes him the ideal trainer, writer, and technology evangelist. Having founded and led two major Canadian user groups for IT Professionals, he understands both the value and rewards of helping his peers. After several years as a consultant and in-house IT Pro for companies in Canada, he now works with various companies creating and delivering training for Microsoft to its partners and clients around the world. He is a Microsoft Certified Trainer, and has been recognized for his community work with the prestigious Microsoft Most Valuable Professional award. He is an avid writer, and blogs at http://garvis.ca.

  • Canadian Solution Developers' Blog

    Maybe You Do Support IE9 and No-One Told You

    • 0 Comments

I got an interesting email from a developer the other day who had attended one of the Internet Explorer 9 code camps. He told me he was having trouble using a particular website with IE9. When he called their support line, he was told “We don’t support IE9, you should uninstall IE9 or use another browser.” Whoa! Stop right there! Just because you haven’t done complete regression testing in IE9 doesn’t mean you should just abandon the members of the public who have moved to IE9 and want to access your site. At least try compatibility mode first! Compatibility mode is the feature in IE9 designed for backwards compatibility, so if you or someone you are talking to has trouble browsing a site in IE9, it is your first line of defense. Please forward this article to your service desk so the next time an IE9 user calls, they have a chance at solving that user’s issue with the click of a single button. I understand the service desk may not be able to say they support IE9 simply because the corporation has not completed full testing on the browser, but many users will be satisfied with a “You can try this but we don’t officially support it” solution like Compatibility View.

    There are 2 ways to access compatibility mode.

    1. Use the Compatibility mode button in the toolbar.
    2. Select Compatibility mode from the menu.

Let’s take a look at each.

    We’ll start with a website that doesn’t work in IE9. I am not a Beastie Boys lover or hater, I just happen to know that the web site promoting their new album does not work in IE9. When I visit the site in IE9, the main screen is just an image, and the menu options at the bottom of the screen are all jumbled on top of each other.

    BrokenWebsiteImage

    Let’s use our first option, Use the Compatibility mode button in the toolbar, to fix the site. If you look at the toolbar, you will notice an icon of a broken page.

    CompatibilityIcon

    If you click on this icon you will enter compatibility view. This will display the page as if you were using Internet Explorer 7. Voila the site now works!

    FixedWebsiteImage

    IE9 will remember the sites where you selected compatibility view. If you return to that site in another browsing session it will automatically be displayed in compatibility view.

IE9 actually examines the source code of the web site to determine whether it should display the compatibility view icon in the toolbar. But it is possible that one day you will visit a site that does not work in IE9, and there is no compatibility button on the toolbar. In the immortal words of The Hitchhiker’s Guide to the Galaxy: “Don’t Panic.”

    You simply need to use our second option, Select Compatibility mode from the menu, to select compatibility view.

    You may have noticed the menu bar is not displayed by default in IE9. Just press the <ALT> key to display the menu. To enter compatibility view choose Tools | Compatibility View.

    IE9ToolsMenu

That’s it! If you’ve read this far, that shows dedication, so you deserve an extra tip: take a look at the menu option Tools | Compatibility View Settings. It brings up the following window.

    CompatibilityViewSettingsWindow

    You can use the Compatibility View Settings Window to:

    • Add or remove websites from Compatibility View
    • Include updated website lists from Microsoft – automatically display in Compatibility View those websites Microsoft has identified and registered as requiring it
    • Display intranet sites in Compatibility View – show all of your intranet sites in Compatibility View, so you may be able to start using IE9 before all the internal corporate sites have been upgraded and tested for IE9
    • Display all websites in Compatibility View – show every website in Compatibility View. I would not recommend this setting because more and more websites are taking advantage of HTML5 and pinned site features that require a current browser such as IE9.

    So there you have it, before you tell someone you don’t support IE9, give Compatibility View a try!

    Today’s My 5:

    5 Reasons you want to use IE9 to browse web sites. (in no particular order)

    1. Full hardware acceleration - IE9 leverages the Graphics Processing Unit (GPU) as well as the CPU, which means graphics and video display faster and more smoothly than ever before. So everyday sites with graphics and video will be better.
    2. Faster JavaScript execution – The new JavaScript engine, Chakra, is a huge improvement in speed over IE8. Don’t believe me? Check out the SunSpider results for JavaScript performance on IE9.
    3. Favourites on steroids (aka pinned sites) – If you have a favourite website, you can just drag its tab down to the taskbar and pin it. Now whenever you start up your computer, you have an icon you can use to launch that site. Some websites customize the pinning with their own icons and jump lists (check out tsn.ca, cbc.ca, or theglobeandmail.com).
    4. More screen space for the website – When you browse a website, you want to see the content, not the browser. With the new layout in IE9, more screen space is devoted to the website.
    5. F12 Developer Tools – Hey, we are developers after all; the developer tools that appear when you press F12 are fantastic web debugging tools built right into the browser. Yes, we had the F12 tools in IE8, but the new Network and Profiler tabs add even more tools to our developer toolkit! I like the developer tools so much, I wrote a blog post about the Console.
  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Part 3 - Secure Design and Analysis in Visual Studio 2010 and Other Tools

    • 0 Comments

     

In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous post, so check that out (links below) and then join us back here. If you’ve already read the previous post, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerabilities
    Part 3: Secure Design and Analysis in Visual Studio 2010 and Other Tools (This Post)

    In this session of our conversation, The Basics of Securing Applications, Steve shows us how we can identify potential security issues in our applications by using Visual Studio and other security tools.

    Steve, back to you.

    Every once in a while someone says to me, “hey Steve, got any recommendations on tools for testing security in our applications?” It's a pretty straightforward question, but I find it difficult to answer because a lot of times people are looking for something that will solve all of their security problems, or more likely, solve all the problems they know they don't know about because, well, as we talked about last week (check out last week’s discussion), security is hard to get right.  There are a lot of things we don't necessarily think about when building applications.

    Last time we only touched on four out of ten types of vulnerabilities that can be mitigated through the use of frameworks.  That means that the other six need to be dealt with another way.  Let's take a look at what we have left:

    1. Injection
    2. Cross-Site Scripting (XSS)
    3. Broken Authentication and Session Management
    4. Insecure Direct Object References
    5. Cross-Site Request Forgery (CSRF)
    6. Security Misconfiguration
    7. Insecure Cryptographic Storage
    8. Failure to Restrict URL Access
    9. Insufficient Transport Layer Protection
    10. Unvalidated Redirects and Forwards

None of these can actually be fixed by tools either.  Well, that's a bit of a downer.  Let me rephrase that a bit: these vulnerabilities cannot be fixed by tools, but some can be found by tools.  This brings up an interesting point: there is a major difference between identifying vulnerabilities with tools and fixing vulnerabilities with tools. We fix vulnerabilities by writing secure code, but we can find them with the help of tools.

Remember what I said in our conversation two weeks ago (paraphrased): the SDL is about defense in depth. This is the concept of providing security measures at multiple layers of an application, so that if one measure fails, another is still in place to protect the application or data. It stands to reason, then, that there is no one single thing we can do to secure our applications and, by extension, no one single tool we can use to find all of our vulnerabilities.

    There are different types of tools to use at different points in the development lifecycle. In this article we will look at tools that we can use within three of the seven stages of the lifecycle: Design, implementation, and verification:

Design → Implementation → Verification

    Within these stages we will look at some of their processes, and some tools to simplify the processes.

    Design

    Next to training, design is probably the most critical stage of developing a secure application because bugs that show up here are the most expensive to fix throughout the lifetime of the project.

    Once we've defined our secure design specifications, e.g. the privacy or cryptographic requirements, we need to create a threat model that defines all the different ways the application can be attacked.  It's important for us to create such a model because an attacker will absolutely create one, and we don't want to be caught with our pants down.  This usually goes in-tow with architecting the application, as changes to either will usually affect the other.  Here is a simple threat model:

    image_thumb2

    It shows the flow of data to and from different pieces of the overall system, with a security boundary (red dotted line) separating the user's access and the process. This boundary could, for example, show the endpoint of a web service. 

    However, this is only half of it.  We need to actually define the threat.  This is done through what's called the STRIDE approach.  STRIDE is an acronym that is defined as:

    Name – Security Property – Huh?
    Spoofing – Authentication – Are you really who you say you are?
    Tampering – Integrity – Has this data been compromised or modified without knowledge?
    Repudiation – Non-repudiation – The ability to prove (or disprove) the validity of something like a transaction.
    Information Disclosure – Confidentiality – Are you only seeing data that you are allowed to see?
    Denial of Service – Availability – Can you access the data or service reliably?
    Elevation of Privilege – Authorization – Are you allowed to actually do something?

    We then analyze each point on the threat model for STRIDE.  For instance:

    Data: Spoofing = True, Tampering = True, Repudiation = True, Information Disclosure = True, Denial of Service = False, Elevation of Privilege = True

    It's a bit of a contrived model because it's fairly vague, but it makes us ask these questions:

    • Is it possible to spoof a user when modifying data?  Yes: There is no authentication mechanism.
    • Is it possible to tamper with the data? Yes: the data can be modified directly.
    • Can you prove the change was done by someone in particular? No: there is no audit trail.
    • Can information be leaked out of the configuration? Yes: you can read the data directly.
    • Can you disrupt service of the application? No: the data is always available (actually, this isn't well described – a big problem with threat models which is a discussion for another time).
    • Can you access more than you are supposed to? Yes: there is no authorization mechanism.

    For more information you can check out the Patterns and Practices Team article on creating a threat model.

This can get tiresome very quickly.  Thankfully Microsoft came out with a tool called the SDL Threat Modeling Tool.  You start with a drawing board to design your model (as shown above) and then you define its STRIDE characteristics:

    image_thumb4

This is basically just a grid of fill-in-the-blanks that opens up into a form:

    image_thumb7

Once you create the model, the tool will auto-generate a few well-known characteristics of the types of interactions between elements, as well as provide a bunch of useful questions to ask for less common interactions.  It's at this point that we can start to get a feel for where the application's weaknesses exist, or will exist.  If we compare this to our six vulnerabilities above, we've done a good job of finding a couple.  We've already come across an authentication problem and potentially a problem with direct access to data.  Next week, we will look at ways of fixing these problems.

After a model has been created we need to define the attack surface of the application, and then figure out how to reduce it.  This is a fancy way of saying that we need to know all the different ways that other things can interact with our application.  This could be a set of APIs, or generic endpoints like port 80 for HTTP:

    image_thumb11

    Attack surface could also include changes in permissions to core Windows components.  Essentially, we need to figure out the changes our application introduces to the known state of the computer it's running on.  Aside from analysis of the threat model, there is no easy way to do this before the application is built.  Don't fret just yet though, because there are quite a few tools available to us during the verification phase which we will discuss in a moment.  Before we do that though, we need to actually write the code.

    Implementation

Visual Studio 2010 has some pretty useful features that help with writing secure code. First up are the code analysis tools:

    image_thumb2_thumb_thumb

    There is a rule set specifically for security:

    image_thumb5_thumb_thumb

    If you open it you are given a list of 51 rules to validate whenever you run the test. This encompasses a few of the OWASP top 10, as well as some .NET specifics.

    When we run the analysis we are given a set of warnings:

    image_thumb15

    They are a good sanity check whenever you build the application, or check it into source control.

    In prior versions of Visual Studio you had to run FxCop to analyze your application, but Visual Studio 2010 calls into FxCop directly.  These rules were migrated from CAT.NET, a plugin for Visual Studio 2008. There is a V2 version of CAT.NET in beta hopefully to be released shortly.
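As a hedged illustration of the kind of issue these rules flag (for example, the rule that asks you to review SQL queries built from possibly untrusted input), here is data access code written the way the analyzer wants to see it – parameterized rather than concatenated. The table, column, and class names are made up for the example:

    using System.Data.SqlClient;

    public class AccountRepository
    {
        private readonly string connectionString;

        public AccountRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        // The user-supplied value travels as a parameter, never as SQL text,
        // which is exactly what the SQL injection-related rules check for.
        public int GetAccountId(string userName)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT AccountId FROM Accounts WHERE UserName = @userName", connection))
            {
                command.Parameters.AddWithValue("@userName", userName);
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }

Had the query been built with string concatenation, the security rule set would raise a warning on build.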

    If you are writing unmanaged code, there are a couple of extras that are available to you too.

One major source of security bugs in unmanaged code is a well-known set of functions that manipulate memory and are easy to call incorrectly.  These are collectively called the banned functions, and Microsoft has released a header file that deprecates them:

    #pragma deprecated (_mbscpy, _mbccpy)
    #pragma deprecated (strcatA, strcatW, _mbscat, StrCatBuff, StrCatBuffA, StrCatBuffW, StrCatChainW, _tccat, _mbccat)

Microsoft also released an ATL template that allows you to restrict the domains or security zones in which your ActiveX control can execute.

Finally, there is a set of rules that you can run for unmanaged code too.  You can enable these to run on build:

    image_thumb18

    These tools should be run often, and the warnings should be fixed as they appear because when we get to verification, things can get ugly.

    Verification

    In theory, if we have followed the guidelines for the phases of the SDL, verification of the security of the application should be fairly tame.  In the previous section I said things can get ugly; what gives?  Well, verification is sort of the Rinse-and-Repeat phase.  It's testing.  We write code, we test it, we get someone else to test it, we fix the bugs, and repeat.

    The SDL has certain requirements for testing.  If we don't meet these requirements, it gets ugly.  Therefore we want to get as close as possible to secure code during the implementation phase.  For instance, if you are writing a file format parser you have to run it through 100,000 different files of varying integrity.  In the event that something catastrophically bad happens on one of these malformed files (which equates to it doing anything other than failing gracefully), you need to fix the bug and re-run the 100,000 files – preferably with a mix of new files as well.  This may not seem too bad, until you realize how long it takes to process 100,000 files.  Imagine it takes 1 second to process a single file:

    100,000 files * 1 second = 100,000 seconds
    100,000 seconds / 60 seconds = ~1,667 minutes
    1,667 minutes / 60 minutes = ~28 hours

This is a time-consuming process.  Luckily Microsoft has provided some tools to help.  First up is MiniFuzz.

    image_thumb21

    Fuzzing is the process of taking a good copy of a file and manipulating the bits in different ways; some random, some specific.  This file is then passed through your custom parser… 100,000 times.  To use this tool, set the path of your application in the process to fuzz path, and then a set of files to fuzz in the template files path.  MiniFuzz will go through each file in the templates folder and randomly manipulate them based on aggressiveness, and then pass them to the process.
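To make the idea concrete, here is a minimal sketch of template-based fuzzing in C#. This is not MiniFuzz itself; the paths, file type, iteration count, and "flip roughly 1% of the bytes" aggressiveness are all made-up assumptions for the example:

    using System;
    using System.Diagnostics;
    using System.IO;

    // A naive fuzzing loop: mutate a known-good template file and feed it to the parser.
    class NaiveFuzzer
    {
        static void Main()
        {
            var random = new Random();
            byte[] template = File.ReadAllBytes(@"C:\fuzz\templates\sample.doc");

            for (int i = 0; i < 1000; i++)   // a real run would use far more iterations
            {
                byte[] mutated = (byte[])template.Clone();

                // "Aggressiveness": flip roughly 1% of the bytes at random positions.
                int flips = Math.Max(1, mutated.Length / 100);
                for (int j = 0; j < flips; j++)
                {
                    mutated[random.Next(mutated.Length)] = (byte)random.Next(256);
                }

                string fuzzedPath = Path.Combine(Path.GetTempPath(), "fuzzed.doc");
                File.WriteAllBytes(fuzzedPath, mutated);

                // Hand the malformed file to the parser and watch for anything other
                // than a graceful failure (hang, crash, unexpected exit code).
                using (var process = Process.Start(@"C:\fuzz\parser.exe", "\"" + fuzzedPath + "\""))
                {
                    if (!process.WaitForExit(5000))
                    {
                        process.Kill();
                        Console.WriteLine("Iteration {0}: parser hung", i);
                    }
                    else if (process.ExitCode != 0)
                    {
                        Console.WriteLine("Iteration {0}: parser exited with code {1}", i, process.ExitCode);
                    }
                }
            }
        }
    }

MiniFuzz does all of this for you, with smarter mutation strategies and crash logging; the sketch is only meant to show the shape of the loop.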

    Regular expressions also run into a similar testing requirement, so there is a Regex Fuzzer too.

    An often overlooked part of the development process is the compiler.  Sometimes we compile our applications with the wrong settings.  BinScope Binary Analyzer can help us as it will verify a number of things like compiler versions, compiler/linker flags, proper ATL headers, use of strongly named assemblies, etc.  You can find more information on these tools on the SDL Tools site.

    Finally, we get back to attack surface analysis.  Remember how I said there aren't really any tools to help with analysis during the design phase?  Luckily, there is a pretty useful tool available to us during the verification phase, aptly named the Attack Surface Analyzer.  It's still in beta, but it works well.

    The analyzer goes through a set of tests and collects data on different aspects of the operating system:

    image_thumb24

The analyzer is run a couple of times: once before your application is installed or deployed, and once after.  The analyzer then does a delta on all of the data it collected and returns a report showing the changes.

The goal is to reduce attack surface, so we can use the report as a baseline and adjust our application's default settings to reduce the number of changes it introduces.

It turns out that our friends on the SDL team released a new tool just hours ago.  It's the Web Application Configuration Analyzer v2 – so while not technically a new tool, it's a new version of a tool.  Cool!  Let's take a look:

    image_thumb5

    It's essentially a rules engine that checks for certain conditions in the web.config files for your applications.  It works by looking at all the web sites and web applications in IIS and scans through the web.config files for each one.

    The best part is that you can scan multiple machines at once, so you know your entire farm is running with the same configuration files.  It doesn't take very long to do the scan of a single system, and when you are done you can view the results:

    image_thumb111

It's interesting to note that this was a scan of my development machine (don't ask why it's called TroyMcClure), and there are quite a number of failed tests.  All developers should run this tool on their development machines, not only for their own security, but so that your application can be developed in the most secure environment possible.

    Final Thoughts

    It's important to remember that tools will not secure our applications.  They can definitely help find vulnerabilities, but we cannot rely on them to solve all of our problems.

    Looking forward to continuing the conversation.

    About Steve Syfuhs

    Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology. Part developer, part IT Pro, part Consultant working for ObjectSharp. Steve spends most of his time in the security stack with special interests in Identity and Federation. He recently received a Microsoft MVP award in Developer Security. You can find his ramblings about security at www.steveonsecurity.com
  • Canadian Solution Developers' Blog

    Where do you get your “In Person” fix?

    • 0 Comments

My team is in the midst of our planning cycle right now and a question came up that has us scratching our heads. “What conferences or face to face activities do developers and architects in Canada attend in order to get training and keep up to date on what’s out there?” Times have changed – there used to be a lot of conferences and activities years ago – but what about now? Oh – I am also talking about ones NOT PUT ON BY MICROSOFT. We’re trying to think outside of the box here. Drop me an email (jonathan.rozenblit@microsoft.com), tweet me a reply (@jrozenblit), chime in on Twtsurvey or LinkedIn, or comment below!

    There’s the big conferences such as TechDays and Prairie Dev Con, or alternatively, your company might have internal lunch and learns and team knowledge building activities, but there has to be more out there.

This is part of a larger conversation that we’ve had together. You’ve heard me talk about investing in yourself and your skills – external training, info sessions, and partner events are one avenue to stretch that training budget, though they still take a withdrawal from your personal time budget. Going out and participating in community-driven technology meetups, user groups, and professional associations also taxes your personal time, but it’s a necessary evil.

    Do you take the time to make this type of investment? If so – where do you go? I ask once again: “What conferences or face to face activities do developers and architects in Canada attend in order to get training and keep up to date on what’s out there?” Drop me an email (jonathan.rozenblit@microsoft.com), tweet me a reply (@jrozenblit), chime in on Twtsurvey or LinkedIn, or comment below!

    I’ll make sure to post the replies back to share with all of you.

  • Canadian Solution Developers' Blog

    Weekly Events Calendar June 27–July 23

    • 0 Comments

Your Microsoft Canada team is always looking into opportunities to bring you training that will help you grow your skills and make you more successful in your career. We’ve put together this weekly events calendar update to make sure you’re always up to date on training opportunities happening in a city near you or online, and can therefore schedule work in order to be able to attend. Look forward to the weekly event calendar update every Sunday.

    New Events To Mention

    The following events have been added to the events calendar this week:

    Rock, Paper Azure Challenge Finals Online Event Wednesday, July 13 2011
    7:00 PM – 8:00 PM Eastern

    Upcoming Events

    Monday, June 27 – Ottawa IT Community Social Night (Ottawa)
    Wednesday, July 13 – Rock, Paper Azure Challenge Finals (online)
  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Part 5 – Incident Response Management Using Team Foundation Server

    • 0 Comments

In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous post, so check that out (links below) and then join us back here. If you’ve already read the previous post, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerabilities
    Part 3: Secure Design and Analysis in Visual Studio 2010 and Other Tools
    Part 4: Secure Architecture
    Part 5: Incident Response Management Using Team Foundation Server (This Post)

      In this session of our conversation, The Basics of Securing Applications, Steve talks about the differences between protocols and procedures when it comes to security and how to be ready in an event of a security situation.

      Steve, back to you.

There are only a few certainties in life: death, taxes, and one of your applications getting attacked.  Throughout its lifetime an application will undergo a barrage of attacks – especially if it's public facing.  If you followed the SDL, tested properly, coded securely, and managed well, you will have gotten most of the bugs out.

      Most.

There will always be bugs in production code, and there will very likely always be a security bug in production code.  Further, if there is a security bug in production code, an attacker will probably find it.  Perhaps the best metric for security is along the lines of mean-time-to-failure.  Or rather, mean-time-to-breach.  All safes for storing valuables are rated by how long they can withstand certain types of attacks – not whether they can, but how long they can.  There is no single thing we can do to prevent an attack, and we cannot prevent all attacks.  It's just not in the cards.  So, it stands to reason then that we should prepare for something bad happening.  The final stage of the SDL requires that an Incident Response Plan is created.  This is the procedure to follow in the event of a vulnerability being found.

      In security parlance, there are protocols and procedures.  The majority of the SDL is all protocol.  A protocol is the usual way to do things.  It's the list of steps you follow to accomplish a task that is associated with a normal working condition, e.g. fuzzing a file parser during development.  You follow a set of steps to fuzz something, and you really don't deviate from those steps.  A procedure is when something is different.  A procedure is reactive.  How you respond to a security breach is a procedure.  It's a set of steps, but it's not a normal condition.

      An Incident Response Plan (IRP - the procedure) serves a few functions:

      • It has the list of people to contact in the event of the emergency
      • It is the actual list of steps to follow when bad things happen
      • It includes references to other procedures for code written by other teams

This may be one of the more painful parts of the SDL, because it's mostly process over anything else.  Luckily there are two things from Microsoft that help: Team Foundation Server and an SDL process template that plugs into it.  For those of you who just cringed, bear with me.

      Microsoft released the MSF-Agile plus Security Development Lifecycle Process Template for VS 2010 (it also takes second place in the longest product name contest) to make the entire SDL process easier for developers.  There is the SDL Process Template for 2008 as well.

It's useful for each stage of the SDL, but we want to take a look at how it can help with managing the IRP.  First though, let's define the IRP.

      Emergency Contacts (Incident Response Team)

      The contacts usually need to be available 24 hours a day, seven days a week.  These people have a range of functions depending on the severity of the breach:

      • Developer – Someone to comprehend and/or triage the problem
      • Tester – Someone to test and verify any changes
      • Manager – Someone to approve changes that need to be made
      • Marketing/PR – Someone to make a public announcement (if necessary)

      Each plan is different for each application and for each organization, so there may be ancillary people involved as well (perhaps an end user to verify data).  Each person isn't necessarily required at each stage of the response, but they still need to be available in the event that something changes.

      The Incident Response Plan

Over the years I've written a few Incident Response Plans (never mind that most times I was asked to do it after an attack – you WILL go out and create one after reading this, right?).  Each plan was unique in its own way, but there were commonalities as well.

      Each plan should provide the steps to answer a few questions about the vulnerability:

      • How was the vulnerability disclosed?  Did someone attack, or did someone let you know about it?
      • Was the vulnerability found in something you host, or an application that your customers host?
      • Is it an ongoing attack?
      • What was breached?
      • How do you notify your customers about the vulnerability?
      • When do you notify them about the vulnerability?

      And each plan should provide the steps to answer a few questions about the fix:

      • If it's an ongoing attack, how do you stop it?
      • How do you test the fix?
      • How do you deploy the fix?
      • How do you notify the public about the fix?

      Some of these questions may not be answerable immediately – you may need to wait until a post-mortem to answer them. 

      This is the high level IRP for example:

      1. The Attack – It's already happened
      2. Evaluate the state of the systems or products to determine the extent of the vulnerability
        1. What was breached?
        2. What is the vulnerability?
      3. Define the first step to mitigate the threat
        1. How do you stop the threat?
        2. Design the bug fix
      4. Isolate the vulnerabilities if possible
        1. Disconnect targeted machine from network
        2. Complete forensic backup of system
        3. Turn off the targeted machine if hosted
      5. Initiate the mitigation plan
        1. Develop the bug fix
        2. Test the bug fix
      6. Alert the necessary people
        1. Get Marketing/PR to inform clients of breach (don't forget to tell them about the fix too!)
        2. If necessary, inform the proper legal/governmental bodies
      7. Deploy any fixes
        1. Rebuild any affected systems
        2. Deploy patch(es)
        3. Reconnect to network
      8. Follow up with legal/governmental bodies if prosecution of attacker is necessary
        1. Analyze forensic backups of systems
      9. Do a post-mortem of the attack/vulnerability
        1. What went wrong?
        2. Why did it go wrong?
        3. What went right?
        4. Why did it go right?
        5. How can this class of attack be mitigated in the future?
        6. Are there any other products/systems that would be affected by the same class?

Some of these procedures can be done in parallel, hence the need for people to be on call.

      Team Foundation Server

So now that we have a basic plan created, we should make it easy to implement.  The SDL Process Template (mentioned above) creates a set of task lists and bug types within TFS projects that are used to define things like security bugs, SDL-specific tasks, exit criteria, etc.

      image

      While these can (and should) be used throughout the lifetime of the project, they can also be used to map out the procedures in the IRP.  In fact, a new project creates an entry in Open SDL Tasks to create an Incident Response Team:

      image

      A bug works well to manage incident responses.

      image
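If you want to wire incident response into your own tooling, the TFS client object model can create these work items programmatically. Here is a rough sketch; the server URL, project name, and work item type names ("Bug" and "Task" here – the SDL template may expose its own security bug type instead) are assumptions for the example:

      using System;
      using Microsoft.TeamFoundation.Client;
      using Microsoft.TeamFoundation.WorkItemTracking.Client;

      class IncidentLogger
      {
          static void Main()
          {
              // Connect to the team project collection (the URL is a placeholder).
              var collection = new TfsTeamProjectCollection(
                  new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
              var store = collection.GetService<WorkItemStore>();

              // Project and work item type names are assumptions; adjust to your template.
              Project project = store.Projects["MyApplication"];
              WorkItemType bugType = project.WorkItemTypes["Bug"];

              var bug = new WorkItem(bugType)
              {
                  Title = "Security incident: vulnerability reported in wire transfer page",
                  Description = "Triage per the Incident Response Plan; see linked tasks."
              };
              bug.Save();

              // Create a follow-up task and link it back to the bug.
              WorkItemType taskType = project.WorkItemTypes["Task"];
              var task = new WorkItem(taskType)
              {
                  Title = "Develop and test fix for reported vulnerability"
              };
              task.Links.Add(new RelatedLink(bug.Id));
              task.Save();

              Console.WriteLine("Created bug {0} and task {1}", bug.Id, task.Id);
          }
      }

In practice you would let the process template's own forms and queries (shown in the screenshots) do this work; the sketch just shows that the same work items are reachable from code if your IRP calls for automation.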

      Once a bug is created we can link a new task with the bug.

      image

      And then we can assign a user to the task:

      image

      Each bug and task are now visible in the Security Exit Criteria query:

      image

      Once all the items in the Exit Criteria have been met, you can release the patch.

      Conclusion

      Security is a funny thing. A lot of times you don't think about it until it's too late. Other times you follow the SDL completely, and you still get attacked.

In this conversation (which spanned a few weeks) we looked at writing secure software from a pretty high level.  We touched on common vulnerabilities and their mitigations, tools you can use to test for vulnerabilities, some thoughts to apply to architecting the application securely, and finally we looked at how to respond to problems after release.  By no means will this conversation automatically make you write secure code, but hopefully you’ve gained an understanding of what goes into writing secure code.  It's a lot of work, and sometimes it's hard work.

      Finally, there is an idea I like to put into the first section of every Incident Response Plan I've written, and I think it applies to writing software securely in general:

      Something bad just happened.  This is not the time to panic, nor the time to place blame.  Your goal is to make sure the affected system or application is secured and in working order, and your customers are protected.

      Something bad may not have happened yet, and it may not in the future, but it's important to plan accordingly because your goal should be to protect the application, the system, and most importantly, the customer.

      About Steve Syfuhs

      Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology. Part developer, part IT Pro, part Consultant working for ObjectSharp. Steve spends most of his time in the security stack with special interests in Identity and Federation. He recently received a Microsoft MVP award in Developer Security. You can find his ramblings about security at www.steveonsecurity.com