• Canadian Solution Developers' Blog

    Why You Should Never Use DATETIME Again!


    Dates, we store them everywhere: DateOrdered, DateEntered, DateHired, DateShipped, DateUpdated, and on and on it goes. Up to and including SQL Server 2005, you really didn’t have much choice about how you stored your date values. But in SQL Server 2008 and higher you have alternatives to DATETIME, and they are all better than the original.

    DATETIME stores a date and time, takes 8 bytes to store, and has a precision of .00333 seconds (values are rounded to increments of .000, .003, or .007 seconds).

    In SQL Server 2008 you can use DATETIME2: it stores a date and time, takes 6-8 bytes to store, and has a precision of 100 nanoseconds. So anyone who needs greater time precision will want DATETIME2. What if you don’t need the precision? Most of us don’t even need milliseconds. You can specify DATETIME2(0), which takes only 6 bytes to store and has a precision of seconds. If you want to store the exact same precision you had in DATETIME, just choose DATETIME2(3): you get the same precision, but it takes only 7 bytes to store the value instead of 8. I know a lot of you are thinking, what’s one byte, memory is cheap. But it’s not a question of how much space you have on your disk. When you are performance tuning, you want to store as many rows on a page as you can for each table and index. That means fewer pages to read for a table or query, and more rows you can store in cache. Many of our tables have multiple date columns and millions of rows. That one-byte saving for every date value in your database is not going to make your users go ‘Wow, everything is so much faster now’, but every little bit helps.
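To make the storage trade-off concrete, here is a sketch of how those declarations look (the table and column names are made up for illustration):

```sql
-- Illustrative only: table and column names are hypothetical
CREATE TABLE OrderTracking
(
    DateOrdered DATETIME,       -- 8 bytes, ~3 ms precision
    DateShipped DATETIME2(0),   -- 6 bytes, 1 second precision
    DateUpdated DATETIME2(3)    -- 7 bytes, 1 ms precision (same as DATETIME, one byte smaller)
);
```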

    If you are building any brand new tables in SQL Server 2008, I recommend staying away from DATETIME and DATETIME2 altogether. Instead, go for DATE and TIME. Yes, one of my happiest moments when I first started learning about SQL Server 2008 was discovering I could store the DATE without the time! How many times have you used GETDATE() to populate a date column and then had problems trying to find all the records entered on ‘05-JUN-06’, getting no results back because of the time component? We end up truncating the time element before we store it, or stripping the time component when we query. Now we can store a date in a column of datatype DATE. If you do want to store the time, store that in a separate column of datatype TIME. By storing the date and time in separate columns you can search by date or time, and you can index by date and/or time as well! This will allow you to do much faster searches for time ranges.
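A quick sketch of the separate-column approach (table and column names are hypothetical):

```sql
-- Illustrative only: table and column names are hypothetical
CREATE TABLE OrderEntry
(
    DateEntered DATE,
    TimeEntered TIME(0)
);

-- Finds every row entered on that day, with no time component to trip over
SELECT *
FROM OrderEntry
WHERE DateEntered = '2006-06-05';
```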

    Since we are talking about the date and time datatypes, I should also mention that there is another date datatype called DATETIMEOFFSET that is time zone aware. But that is a post for another day, if you are interested.

    Here is a quick comparison of the different date and time datatypes:

    Datatype        Range                                      Precision       Nbr Bytes  User-Specified Precision
    SMALLDATETIME   1900-01-01 to 2079-06-06                   1 minute        4          No
    DATETIME        1753-01-01 to 9999-12-31                   .00333 seconds  8          No
    DATETIME2       0001-01-01 to 9999-12-31 23:59:59.9999999  100 ns          6-8        Yes
    DATE            0001-01-01 to 9999-12-31                   1 day           3          No
    TIME            00:00:00.0000000 to 23:59:59.9999999       100 ns          3-5        Yes
    DATETIMEOFFSET  0001-01-01 to 9999-12-31 23:59:59.9999999  100 ns          8-10       Yes

    Today’s My 5 is of course related to the Date and Time datatypes.

    My 5 Important Date functions and their forward and backwards compatibility

    1. GETDATE() – Time to STOP using GETDATE(). It still works in SQL Server 2008, but it only returns a precision of about 3 milliseconds because it was developed for the DATETIME datatype.
    2. SYSDATETIME() – Time to START using SYSDATETIME(). It returns a precision of 100 nanoseconds because it was developed for the DATETIME2 datatype, and it also works for populating DATETIME columns.
    3. DATEDIFF() – This is a great little function that returns the number of minutes, hours, days, weeks, months, or years between two dates, and it supports the new date datatypes.
    4. ISDATE() – This function is used to validate DATETIME values. It returns 1 if you pass it a character string containing a valid date. However, if you pass it a character string representing a datetime with a precision greater than milliseconds, it will consider it an invalid date and return 0.
    5. DATEPART() – This popular function returns a portion of a date; for example, you can return the year, month, day, or hour. This date function supports all the new date datatypes.
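A quick illustration of the difference between the first two functions:

```sql
SELECT GETDATE()     AS OldFunction,  -- DATETIME, ~3 ms precision
       SYSDATETIME() AS NewFunction;  -- DATETIME2(7), 100 ns precision
```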

    Also one extra note, because I know there are some former Oracle developers who use this trick: if you have any SELECT statements where you use OrderDate+1 to add one day to a date, that will not work with the new date and time datatypes. You need to use the DATEADD() function instead.
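For example, assuming a hypothetical Orders table:

```sql
-- OrderDate + 1 works for DATETIME but fails for DATE and DATETIME2 columns
SELECT DATEADD(DAY, 1, OrderDate) AS DayAfterOrder
FROM Orders;
```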

  • Canadian Solution Developers' Blog

    How do I use HTML5 in Visual Studio 2010?


    In this post, I’ll share what I learned about how to get started writing HTML5 code in Visual Studio 2010.

    HTML5 seems to be everywhere these days! I started trying it myself a few months back, and I quickly decided that, if possible, I wanted to play with it in Visual Studio. I’ve been working with Visual Studio for years; it’s got to be simpler to keep working with the developer tool I already know and love rather than moving to a new tool. Besides, I want to be able to incorporate HTML5 into ASP.NET applications! It took me a bit of messing about to get up and running with HTML5. I know there will be greater support for HTML5 in Visual Studio 11, but for now I am working with Visual Studio 2010. I thought I would share what I learned, so hopefully it will be easier for you.

    Here’s what you want to do:

    • Add HTML5 validation and intellisense
    • Create an HTML5 project
    • Set up for <video> and <audio>
    • Play!

    Add HTML5 Validation and Intellisense

    You will definitely want to make sure you have Service Pack 1 installed! By installing Service Pack 1 you get both IntelliSense (can’t live without that anymore) and validation for HTML5. Don’t forget, after you install Service Pack 1, to go to Tools | Options | Text Editor | HTML | Validation and set the validation to HTML5 or XHTML5, or the HTML5 validation won’t work.

    First of all, there is a really great blog post by Burke Holland on how to use the MVC HTML5 template for Visual Studio 2010 here.

    Create an HTML5 project in Visual Studio

    You have a couple of choices here.

    • Modify an existing template to be HTML5 or create your own template. There is a great blog describing how to do that here.
    • Download the MotherEffin ASP.NET MVC HTML5 template that Jacob Gable was kind enough to post on the Visual Studio Gallery.
    • Download the mobile-ready ASP.NET MVC HTML5 template that Sammy Ageil was kind enough to post on the Visual Studio Gallery.

    Set up for <video> and <audio>

    The first tags I started playing with in Visual Studio were the video and audio tags. I immediately had problems getting an actual video to display on my web page, and it was really frustrating. Here is what I had to do to get everything working. The basic problem was with the MIME types: when an .avi or .mp3 file was used on my website, the web server didn’t recognize that those were video and audio files. To get it working, I had to edit my web.config file and make sure I had IIS Express running in the development environment instead of the development server built into Visual Studio, to ensure that my web.config file was being used to figure out the MIME types. You need to do this for the WOFF fonts as well.

    • Install IIS Express
    • Specify the mime types you will be using in your web.config file. Here’s an example:
              <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
              <mimeMap fileExtension=".m4v" mimeType="video/mp4" />
              <mimeMap fileExtension=".woff" mimeType="application/x-woff" />
              <mimeMap fileExtension=".webm" mimeType="video/webm" />
              <mimeMap fileExtension=".ogg" mimeType="video/ogg" />
              <mimeMap fileExtension=".ogv" mimeType="video/ogg" />
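For context, those mimeMap entries belong inside the staticContent section of web.config; the surrounding structure looks roughly like this:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
      <mimeMap fileExtension=".webm" mimeType="video/webm" />
      <!-- remaining mimeMap entries from the list above -->
    </staticContent>
  </system.webServer>
</configuration>
```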

    Change the project settings by right-clicking on the project and changing the settings to Use IIS Express when debugging in Visual Studio.


    Once you have it up and running you can start exploring the world of HTML5. There are some great resources on learning HTML5 here. Make sure you read up on feature detection since different browsers will support different HTML5 features and because you will need this for backwards compatibility as well!

    If you want to experiment with <video>, I found it handy to just download Big Buck Bunny since you can get it in multiple formats so it’s great for experimenting with the fallback features of HTML5 <video> for different browsers.

    Since a big part of HTML5 is the cross browser support, make sure you try it out in different browsers, or use the F12 developer tools in Internet Explorer to test how your code will work in different browsers or older browsers.

    Most of all have fun!

  • Canadian Solution Developers' Blog

    How to Move Data from One Table to Another


    I recently saw a post on Stack Overflow asking how do you copy specific table data from one database to another in SQL Server. It struck me I should share the solution to this with others because it is such a handy trick. Often I set up test data and want to quickly copy it to another table, or a co-worker wants a copy of my data, or I want to copy some data from production to a local database.

    If all you want to do is copy data from one table to another in the same database, just use an INSERT INTO ... SELECT statement.

    INSERT INTO PlayerBackups
    SELECT * FROM NhlPlayer
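One caution: SELECT * only works if both tables have the same columns in the same order. If they differ, list the columns explicitly (the column names here are made up for illustration):

```sql
-- Column names are hypothetical
INSERT INTO PlayerBackups (PlayerId, PlayerName, TeamName)
SELECT PlayerId, PlayerName, TeamName
FROM NhlPlayer;
```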

    If you do not have a second table and you want to make a quick and dirty backup of some test data, you can create a table based on the data you choose in your select statement.

    SELECT * INTO PlayerBackups
    FROM NhlPlayer

    If you want to move data between tables across databases, you will have to use a fully qualified name:

    INSERT INTO YourDatabase.YourSchema.PlayerBackups
    SELECT * FROM MyDatabase.MySchema.NhlPlayer

    If you want to move data across servers, you will need to set up a linked server. This will require working with the DBA, because there are authentication issues around linked servers (how will your account log in to the other server? What permissions will you have on the other server?). Once you have a linked server set up, you can just use the fully qualified name, which now includes the server name.

    INSERT INTO YourServer.YourDatabase.YourSchema.PlayerBackups
    SELECT * FROM MyServer.MyDatabase.MySchema.NhlPlayer
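For reference, a linked server is typically created with the sp_addlinkedserver system stored procedure; a minimal sketch, assuming the remote machine is another SQL Server instance (the server name is hypothetical, and your DBA may well want different security options):

```sql
-- Hypothetical server name; security configuration is up to your DBA
EXEC sp_addlinkedserver
    @server = N'MyServer',
    @srvproduct = N'SQL Server';
```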

    I am always forgetting the syntax for these commands, so I thought I would share them. Don’t forget: if you know SQL, you know SQL Azure! Try it out now.

  • Canadian Solution Developers' Blog

    Free Training on .NET Framework 4 and Visual Studio 2010 on YOUR schedule


    The Visual Studio Team has released a new Visual Studio 2010 and .NET Framework 4 Training Course. You can complete it online, or, if you prefer, you can download the training kit and work offline. For those who don’t have the training budget to take an in-person course, these training kits are a fantastic resource! This course is designed for people who have already been playing with .NET but want to get up and running with the new features in .NET 4 and Visual Studio 2010.

    The course includes presentations, hands-on labs, and demos. Whether you develop on Windows or the web, with VB or C#, there is material here to interest you. You will get a chance to look at C# 4, Parallel Extensions, Silverlight 4, ASP.NET 4, enhancements to the data platform, WCF and Workflow updates, the Application Lifecycle Management tools in Visual Studio, Windows Azure, and more!

    You do need Visual Studio 2010 in order to complete the labs. If you don’t have it yet you can download a full trial version.

    Since we are in the planning stages for TechDays Canada 2011, I thought I would use TechDays as the inspiration for this week’s My 5.

    5 Great Sessions from TechDays 2010 you can watch to learn more about .NET Framework 4 and Visual Studio 2010

    1. Getting Your Return on Investment with Microsoft .NET Framework 4 – this is an overview session of the new features in .NET Framework 4 across the different technologies
    2. Microsoft Visual Studio 2010 Tips and Tricks – For anyone who works with Visual Studio, pick up a few tricks to impress your boss and co-workers.
    3. New IDE and Language Features in Microsoft Visual Studio 2010 Using Visual Basic and C# – this session is focused on the new features in the programming languages themselves.
    4. Web Load Testing with Microsoft Visual Studio 2010 – Learn how to use VS2010 to make it easier to do your web load testing
    5. Microsoft Visual Studio 2010 for Web Development – improve your web deployment process
  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Vulnerabilities


    In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs: The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous post, so check that out (links below) and then join us back here. If you’ve already read the previous post, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerability Deep Dive (This Post)

    In this session of our conversation, The Basics of Securing Applications, Steve goes through the primers of writing secure code, focusing specifically on the most common threats to web-based applications.

    Steve, back to you.

    Great, thank you. When we started the conversation last week, I stated that knowledge is key to writing secure code:

    Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.

    In order to truly protect our applications and the data within, we need to know about the threats in the real world. The OWASP top 10 list gives us a good starting point in understanding some of the more common vulnerabilities that malicious users can use for attack:

    1. Injection
    2. Cross-Site Scripting (XSS)
    3. Broken Authentication and Session Management
    4. Insecure Direct Object References
    5. Cross-Site Request Forgery (CSRF)
    6. Security Misconfiguration
    7. Insecure Cryptographic Storage
    8. Failure to Restrict URL Access
    9. Insufficient Transport Layer Protection
    10. Unvalidated Redirects and Forwards

    It's important to realize that this list is strictly for the web – we aren't talking about client applications, though some do run into the same problems. We’ll focus on web-based applications for now.  It's also helpful to keep in mind that we aren't talking about just Microsoft-centric vulnerabilities.  These also exist in applications running on the LAMP (Linux/Apache/MySQL/PHP) stack and variants in between.  Finally, it's very important to note that just following these instructions won't automatically give you a secure code base – these are just primers on some ways of writing secure code.


    Injection

    Injection is a way of changing the flow of a procedure by introducing arbitrary changes. An example is SQL injection. Hopefully you’ve heard of SQL injection, but let’s take a look at this bit of code for those who don't know about it:

    string query = string.Format("SELECT * FROM UserStore WHERE UserName = '{0}' AND PasswordHash = '{1}'", username, password);

    If we passed it into a SqlCommand, we could use it to see whether or not a user exists and whether or not their hashed password matches the one in the table. If so, they are authenticated. Well, what happens if I enter something other than my username? What if I enter this:

    '; --

    It would modify the SQL Query to be this:

    SELECT * FROM UserStore WHERE UserName = ''; -- AND PasswordHash = 'TXlQYXNzd29yZA=='

    This has essentially broken the query into a single WHERE clause, asking for a user with a blank username because the single quote closed the parameter, the semicolon finished the executing statement, and the double dash made anything following it into a comment.

    Hopefully your user table doesn't contain any blank records, so let’s extend that a bit:

    ' OR 1=1; --

    We've now appended a new clause, so the query looks for records with a blank username OR where 1=1. Since 1 always equals 1, the condition is true for every row, so the query returns every record in the table.

    If our SqlCommand just looked for at least one record in the query set, the user is authenticated. Needless to say, this is bad.

    We could go one step further and log in as a specific user:

    administrator';  --

    We've now modified the query in such a way that it is just looking for a user with a particular username, such as the administrator.  It only took four characters to bypass a password and log in as the administrator.

    Injection can also work in a number of other places such as when you are querying Active Directory or WMI. It doesn't just have to be for authentication either. Imagine if you have a basic report query that returns a large query set. If the attacker can manipulate the query, they could read data they shouldn't be allowed to read, or worse yet they could modify or delete the data.

    Essentially our problem is that we don't sanitize our inputs.  If a user is allowed to enter any value they want into the system, they could potentially cause unexpected things to occur.  The solution is simple: sanitize the inputs!

    If we use a SqlCommand object to execute our query above, we can use parameters.  We can write something like:

    string query = "SELECT * FROM UserStore WHERE UserName = @username AND PasswordHash = @passwordHash";
    SqlCommand c = new SqlCommand(query);
    c.Parameters.Add(new SqlParameter("@username", username));
    c.Parameters.Add(new SqlParameter("@passwordHash", passwordHash));

    This does two things.  One, it makes .NET handle the string manipulation, and two, it makes .NET properly sanitize the parameters, so

    ' OR 1=1; --

    is converted to

    ''' OR 1=1; --'

    In the SQL language, two single quote characters act as an escape sequence for a single quote, so in effect the query is looking for the value as is, quote included.

    The other option is to use an Object Relational Mapper (ORM) like the Entity Framework or NHibernate, where you don't have to write error-prone SQL queries.  You could write something like this with LINQ:

    var users = from u in entities.UserStore 
                where u.UserName == username && u.PasswordHash == passwordHash select u;

    It looks like a SQL query, but it's compilable C#.  It solves our problem by abstracting away the ungodly mess that is SqlCommands, DataReaders, and DataSets.

    Cross-Site Scripting (XSS)

    XSS is a way of adding a chunk of malicious JavaScript into a page via flaws in the website. This JavaScript could do a number of different things such as read the contents of your session cookie and send it off to a rogue server. The person in control of the server can then use the cookie and browse the affected site with your session. In 2007, 80% of the reported vulnerabilities on the web were from XSS.

    XSS is generally the result of not properly validating user input. Conceptually it usually works this way:

    1. A query string parameter contains a value: ?q=blah
    2. This value is outputted on the page
    3. A malicious user notices this
    4. The malicious user inserts a chunk of JavaScript into the URL parameter: ?q=<script>alert("pwned");</script>
    5. This script is outputted without change to the page, and the JavaScript is executed
    6. The malicious user sends the URL to an unsuspecting user
    7. The user clicks on it while logged into the website
    8. The JavaScript reads the cookie and sends it to the malicious user's server
    9. The malicious user now has the unsuspecting user's authenticated session

    This occurs because we don't sanitize user input. We don't remove or encode the script so it can't execute.  We therefore need to encode the inputted data.  It's all about the sanitized inputs.

    The basic problem is that we want to display the content the user submitted on a page, but the content can be potentially executable.  Well, how do we display JavaScript textually?  We encode each character with HtmlEncode, so the < (left angle bracket) is outputted as &lt; and the > (right angle bracket) is outputted as &gt;. In .NET you have some helpers in the HttpUtility class.
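For example, here is a minimal sketch using HttpUtility.HtmlEncode (a console program just for illustration):

```csharp
using System;
using System.Web; // HttpUtility lives here; requires a reference to System.Web.dll

class EncodingDemo
{
    static void Main()
    {
        string userInput = "<script>alert(\"pwned\");</script>";

        // HtmlEncode turns markup characters into HTML entities, so the
        // browser renders them as text instead of executing them
        string encoded = HttpUtility.HtmlEncode(userInput);
        Console.WriteLine(encoded);
        // &lt;script&gt;alert(&quot;pwned&quot;);&lt;/script&gt;
    }
}
```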


    This works fairly well, but you can bypass it by doing multiple layers of encoding (encoding an encoded value that was encoded with another formula).  This problem exists because HtmlEncode uses a blacklist of characters: whenever it comes across a specific character, it encodes it.  We want it to do the opposite – use a whitelist.  So whenever it comes across a known good character (such as the letter 'a'), it doesn't encode it, but it encodes everything else.  It's generally far easier to protect something if you only allow known good things instead of blocking known threats (because threats are constantly changing).

    Microsoft released a toolkit to solve this encoding problem, called the AntiXss toolkit. It's now part of the Microsoft Web Protection Library, which also contains some bits to help solve the SQL injection problem.  To use this encoder, you just need to do something like this:

    string encodedValue = Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(userInput);

    There is another step, which is to set the cookie to server-only, meaning that client-side scripts cannot read the contents of the cookie.  Only newer browsers support this, but all we have to do is write something like this:

    HttpCookie cookie = new HttpCookie("name", "value");
    cookie.HttpOnly = true;

    For added benefit while we are dealing with cookies, we can also do this:

    cookie.Secure = true;

    Setting Secure to true requires that the cookie only be sent over HTTPS.

    This should be the last step in the output.  There shouldn't be any tweaking to the text or cookie after this point.  Call it the last line of defense on the server-side.

    Cross-Site Request Forgery (CSRF)

    Imagine a web form that has a couple of fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc. You need to log in, fill in the details, and click submit. That submit POSTs the data back to the server, and the server processes it. In ASP.NET WebForms, the only validation that goes on is checking that the ViewState hasn’t been tampered with.  Other web frameworks skip the ViewState bit because, well, they don't have a ViewState.

    Now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat. Yay, kittehs! Anyway, on that page is a simple set of hidden form tags with malicious data in them – something like their account number, and an obscene number for the cash transfer. On page load, JavaScript POSTs that form data to the transfer page, and since you are already logged in, the server accepts it. Sneaky.

    There is actually a pretty elegant way of solving this problem.  We need to create a value that changes on every page request and send it as part of the response.  Once the server receives the response, it validates the value, and if it's bad it throws an exception.  In the cryptography world, this is called a nonce.  In ASP.NET WebForms we can solve this problem by encrypting the ViewState.  We just need a bit of code like this in the page (or master page):

    void Page_Init(object sender, EventArgs e)
    {
        ViewStateUserKey = Session.SessionID;
    }

    When we set the ViewStateUserKey property on the page, the ViewState is encrypted based on this key.  This key is only valid for the length of the session, so this does two things.  First, since the ViewState is encrypted, the malicious user cannot modify their version of the ViewState since they don't know the key.  Second, if they use an unmodified version of a ViewState, the server will throw an exception since the victim's UserKey doesn't match the key used to encrypt the initial ViewState, and the ViewState parser doesn't understand the value that was decrypted with the wrong key.  Using this piece of code depends entirely on whether or not you have properly set up session state though.  To get around that, we need to set the key to a cryptographically random value that is only valid for the length of the session, and is only known on the server side.  We could for instance use the modifications we made to the cookie in the XSS section, and store the key in there.  It gets passed to the client, but client script can't access it.  This places a VERY high risk on the user though, because this security depends entirely on the browser version.  It also means that any malware installed on the client can potentially read the cookie – though the user has bigger problems if they have a virus. 

    Security is complex, huh?  Anyway…

    In MVC we can do something similar, except we use the Html.AntiForgeryToken() helper.

    This is a two-step process.  First, we need to update the action method(s) by adding the ValidateAntiForgeryToken attribute:

    [ValidateAntiForgeryToken]
    public ActionResult Transfer(WireTransfer transfer)
    {
        if (!ModelState.IsValid)
            return View(transfer);

        // process the transfer here, then redirect
        return RedirectToAction("Transfers");
    }

    Then we need to add the AntiForgeryToken to the page:

    <%= Html.AntiForgeryToken() %>

    This helper will output a nonce that gets checked by the ValidateAntiForgeryToken attribute.

    Insecure Cryptographic Storage

    I think it's safe to say that most of us get cryptography-related stuff wrong most of the time at first. I certainly do, mainly because crypto is hard to do properly. If you noticed above in my SQL injection query, I used this value for my hashed password: TXlQYXNzd29yZA==.

    It's not actually hashed; it's encoded using Base64 (the double equals sign is a dead giveaway). The decoded value is 'MyPassword'. The difference is that hashing is a one-way process: once I hash something (with a cryptographic hash), I can't de-hash it. Also, if I happen to get hold of someone else's user table, I can look for hashed passwords that look the same. For anyone else that has the password "TXlQYXNzd29yZA==" in the table, I know their password is 'MyPassword'. This is where a salt comes in handy. A salt is just a chunk of data appended to the unhashed password before it is hashed. Each user has a unique salt, and therefore will have a unique hash.
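A sketch of that salt-then-hash flow, using SHA256 from .NET's System.Security.Cryptography (the salt size and console output are arbitrary illustration choices, not a recommendation):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class PasswordHashing
{
    static void Main()
    {
        // generate a unique random salt for this user
        byte[] salt = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }

        // append the salt to the password bytes, then hash the combination
        byte[] password = Encoding.UTF8.GetBytes("MyPassword");
        byte[] salted = new byte[password.Length + salt.Length];
        Buffer.BlockCopy(password, 0, salted, 0, password.Length);
        Buffer.BlockCopy(salt, 0, salted, password.Length, salt.Length);

        byte[] hash;
        using (var sha = SHA256.Create())
        {
            hash = sha.ComputeHash(salted);
        }

        // store the salt alongside the hash; two users with the same
        // password now end up with different hashes
        Console.WriteLine(Convert.ToBase64String(salt));
        Console.WriteLine(Convert.ToBase64String(hash));
    }
}
```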

    Then, in the last section on CSRF, I talked about using a nonce.  Nonces are valuable to the authenticity of a request.  They prevent replay attacks, where a captured response is resent and would otherwise look identical to a legitimate one.  It is extremely important that the attacker not know how this nonce is generated.

    Which leads to the question: how do you properly secure encryption keys? Well, that’s a discussion in and of itself. Properly using cryptography in an application is really hard to do – it's a topic fit for a book.

    Final Thoughts

    We’ve touched on only four of the items in the OWASP top 10 list, as they are directly solvable using publicly available frameworks.  The other six items in the list can be solved through the use of tools as well as by designing a secure architecture, both of which we will talk about in the future.

    Looking forward to continuing the conversation.

    About Steve Syfuhs

    Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology: part developer, part IT Pro, part consultant working for ObjectSharp. Steve spends most of his time in the security stack with special interests in Identity and Federation. He recently received a Microsoft MVP award in Developer Security. You can find his ramblings about security at www.steveonsecurity.com.
  • Canadian Solution Developers' Blog

    How to Get Consensus in a Meeting with the “Fist of Five”


    Have you ever been in a meeting and asked “Is everyone okay with this plan?” only to be answered with silence? You prompt again: “Any concerns or questions?” Again, nothing. Finally you announce that you are going to assume silence means consent and move on to the next topic. But here is the big question: is silence consent?

    In order to answer that question, think about what happens after the meeting where you assumed silence was consent. After the meeting, did one of the team members turn to another and start pointing out the flaws in the plan? In the coming days and weeks, did anyone keep bringing up that same topic because of additional concerns? In the worst-case scenario, when you go ahead with the plan and it does not work, is there someone who stands up at the follow-up meeting and says “I knew this plan would never work”? Those are all signs that the team did not reach a consensus.

    Merriam-Webster defines consensus as “general agreement” and “group solidarity in sentiment and belief”. It’s that second definition, group solidarity, that you want to reach with your team.  Group solidarity means we all agree to support this decision or plan going forward. That means we won’t walk out of the room and start telling everyone this is a bad idea and we don’t think it will work. It means that even if the plan fails, you will stand up and say you agreed it was worth trying.

    Consensus is not a group vote where majority rules. It is the entire group agreeing to support a decision or idea. There is a very simple technique you can use to check for consensus: the Fist of Five.

    When you are ready to ask “Is everyone okay with this plan?”, each team member responds by raising their fist with one to five fingers.

    • Five fingers – You think this is a very good plan and you fully support it
    • Four Fingers – You support this plan, it’s not perfect, but you strongly support it
    • Three fingers – This plan may not be your first choice, you have some concerns about it, but you understand the arguments presented in the meeting and you agree that given the current circumstances it is a reasonable plan moving forward and you support it
    • Two fingers – There is an issue you feel must be resolved before you can support the plan, further discussion or follow up is required before you can support it
    • One finger – You do not support this proposal, you do not think it will work, it is going to take some serious convincing to get you to change your mind

    If everyone is showing three or more fingers, then you have consensus. If anyone is showing fewer than three fingers, ask them to explain their concern, and develop a plan to address and follow up on that concern.

    It’s simple, and it works. In fact, we had a team meeting this week where we put this technique to good use as we discussed how to work with user communities, developers, and IT professionals across Canada. Besides, what could be better than a meeting technique that sounds like the title of a Kung Fu movie?

    All those in favour?


  • Canadian Solution Developers' Blog

    AzureFest: Cloud Development Crash Course



    Finding the time to learn about the Cloud and how to extend your skills to build solutions for the Cloud can be a challenge when your schedule is defined by application build, test, and deployment milestones. With this type of schedule, chances are you find learning new concepts in one condensed sprint much easier than trying to squeeze in an hour here and an hour there between your other tasks. If that’s the case for you, you’ll want to check out AzureFest, a hands-on educational event designed by Canada’s own MVPs Cory Fowler (@SyntaxC4) and Barry Gervin (@bgervin) from ObjectSharp. At AzureFest, you’ll see how developing and deploying applications to Windows Azure is fast and easy, leveraging the skills you already have (.NET, Java, PHP, or Ruby) and the tools you already know (Visual Studio, Eclipse, etc.).

    AzureFest sessions are delivered with examples using .NET and Visual Studio, but the concepts are the same regardless of the language and tools. You’ll learn everything you need to know to get up and running with Windows Azure quickly, including:

    • Overview of Cloud Computing and Windows Azure
    • Setting up the Windows Azure SDK and Windows Azure Tools for Visual Studio
    • Setting up your Windows Azure account
    • Migrating a traditional on-premises ASP.NET MVC application to the Cloud
    • Deploying solutions to Windows Azure

    AzureFest is a hands-on event. This means that you’ll be following along in your own development environment and actually deploying your solution during the event. In order to get the most out of the experience, you’ll need to bring a laptop with you that is running Windows Vista SP1 or Windows 7 and has the Windows Azure Tools for Visual Studio installed.

    You’ll also need to bring a credit card. Windows Azure activations require a credit card even for the trial period, but don’t worry, nothing will be charged to your credit card as the last part of the event shows you how to take down all of your Windows Azure instances.

    We’re taking AzureFest across Canada, and will be coming to a city near you. Check out the listings below to get all the information you need about each of the cities. Don’t see a city that’s near you? Keep checking back as we will be adding more cities and dates as we confirm them. If you’d like to help organize an AzureFest in your city or at your user group, please contact me via email, Twitter, or LinkedIn.

    Where and When

    Downtown Toronto
    Microsoft Canada
    Ernst & Young Tower
    222 Bay Street, 12th floor
    Wednesday, March 30, 2011 6:00PM – 9:00PM
    Click here to register

    Presenters: Cory Fowler (@SyntaxC4), Barry Gervin (@bgervin)

    Microsoft Canada
    1950 Meadowvale Boulevard
    Thursday, March 31, 2011 6:00PM - 9:00PM 
    Click here to register

    Presenters: Cory Fowler (@SyntaxC4), Barry Gervin (@bgervin)

    BCIT, Burnaby Campus
    3700 Willingdon Avenue
    Tuesday, April 5, 2011
    6:00PM – 9:00PM
    Click here to register

    Presenters: Jonathan Rozenblit (@jrozenblit)

    Algonquin College Campus
    1385 Woodroffe Avenue, Ottawa
    Saturday, April 16, 2011 12:45PM – 1:30PM
    Click here to register

    Presenters: Christian Beauclair (@cbeauclair)

    University of Calgary, Rm 121 ICT Building
    2500 University Drive NW, Calgary, AB
    Saturday, April 30, 2011 9:00 AM – 5:00 PM
    Click here to register

    Presenters: Michael Diehl (@MikeDiehl_Wpg) and Tyler Doerksen (@Tyler_gd)

    Greater Moncton Chamber of Commerce Board Room – First Floor
    1273 Main Street, Suite 200, Moncton, NB
    Friday, May 6, 2011 6:00 PM - 10:00 PM
    Click here to register

    Presenters: Cory Fowler (@SyntaxC4)

    UNB Campus
    Room 317, ITC
    Saturday, May 7, 2011 9:00 AM – 12:00 PM
    Click here to register

    Presenters: Cory Fowler (@SyntaxC4)

    The Hub
    1673 Barrington St., 2nd Floor, Halifax, NS
    Sunday, May 8, 2011 1:30PM – 4:30 PM
    Click here to register

    Presenters: Cory Fowler (@SyntaxC4)

    Quebec City
    l'École National d'Administration Publique (ENAP), salle 4114
    555, boul.Charest Est, Québec, QC
    Thursday, May 12, 2011 5:00 PM – 8:30 PM
    Click here to register

    Presenters: Frédéric Harper (@fharper)

    Microsoft Canada
    2000 McGill College, Montreal, QC
    Tuesday, May 17, 2011 6:00 PM – 9:00 PM
    Click here to register

    Presenters: Frédéric Harper (@fharper)

    Online Business Systems, Assiniboine Room, 2nd Floor
    200-115 Bannatyne Ave., Winnipeg, MB
    Thursday, May 19, 2011 5:30 PM – 9:00 PM
    Click here to register

    Presenters: John Bristowe (@jbristowe)

    Holiday Inn - Kitchener
    30 Fairway Road South, Kitchener, ON
    Tuesday, May 31, 2011 8:30 AM – 12:00 PM
    Click here to register 

    Presenters: Jonathan Rozenblit (@jrozenblit)

    Virtual Lunch and Learn Series
    Part 1: Thursday, May 19, 2011 12:00 PM – 1:00 PM EST, CST, MST, PST
    Part 2: Thursday, May 26, 2011 12:00 PM – 1:00 PM EST, CST, MST, PST
    Part 3: Thursday, June 2, 2011 12:00 PM – 1:00 PM EST, CST, MST, PST
    Click here to register

    Presenters: Jonathan Rozenblit (@jrozenblit)

    Make sure you register early as space is limited, and be sure to find me when you’re at the event – it will be an opportunity for us to chat about what you’re working on, possible projects to move to the Cloud, and how I can help you grow your skills and career.

  • Canadian Solution Developers' Blog

    Do you hate SharePoint? Part 1 of 4


    If the answer is yes, could your hatred be caused by your local implementation? In this blog series we look at four common problems with SharePoint implementations and how you can address them.

    SharePoint is one of those tools where the line blurs between the developer and the administrator, much like SQL Server. And much like SQL Server, SharePoint is everywhere! So even though this post is not about coding for SharePoint, I thought it had some great information that many of us could use when dealing with SharePoint implementations, either as a developer supporting an implementation, or even as an end user (did I mention I use SharePoint at work? Hey boss, you reading this?). A huge thank you to Neil McIsaac, SharePoint trainer extraordinaire (bio at the end of the blog), for putting this together. Happy reading!

    SharePoint is an interesting platform, and as it grows as a product, with its already incredible adoption, it is an important cornerstone for many organizations. But ask the people that work with it, and you will find a divided, love-it-or-hate-it passion for the product.

    Why hate it?

    It’s my experience (which dates back to the Site Server/dashboard days) that many customers have difficulty handling the product, and I mean this in a number of ways. Here’s the issue:

    SharePoint will amplify your problems.

    So why do we hate it? I would hate anything that made my problems larger. But did SharePoint create the problem? That would be like blaming the carpenter’s hammer for building a crooked house. The problems are our own doing in the majority of cases. In my experience, the most common problems SharePoint seems to amplify are the following:

    1. Information Management
    2. Project Management
    3. Information Security
    4. Business Intelligence

    Without a doubt, this is not a definitive list of problem areas, but from my experience, these are the key ones that help make or break your experience with SharePoint. So let's take a look at them.

    1. Information Management

    In my mind, this is the biggest problem area, and by a considerable margin. Why? Well, if you think about information management, it really encompasses all of the other areas. It is a really broad topic. For an industry whose core revolves around titles such as Information Management and Information Technology, you would think that we'd be better at it. Let's look at an example: the shared documents library within the default team site is fairly widely used by organizations. At face value it seems like a perfect solution for the sharing of documents. After all, it is called the 'shared documents' library.

    When I was a kid, I remember going to the library. I am talking about the real one that had shelves and shelves of books that you couldn't carry around in your pocket. I won't refer to those times as 'the good old days' because they simply weren't. What fascinated me was the organization. I had the power as a kid, to walk in to the library and find various books on a topic that interested me, and to browse some additional information about each book before ever finding the book on the shelf. You might be thinking that I am referring to the ability to sit down in front of a computer and search, but I'm older than that. I'm referring to the cataloguing system called the Dewey Decimal system.

    That's right, no computers. Yet I could search amongst a huge amount of material systematically and rapidly (for the times). 135 years later, and I'm watching organizations fumble with taxonomy and metadata like newborns driving a car.

    So what's the problem?

    If we look at the shared documents library like a real library, and a document like a book, then letting your employees simply start saving their documents in the library is almost the equivalent of opening up the front door of a library and chucking your book into the building. Imagine trying to find that book a week later. For the first hundred books or so, you might be OK, but what about the first thousand? Every time you see the default shared documents library being used, you should picture a real library with nothing more than a mound of books in the middle of the room and people frantically trying to find things in the pile.

    The first thing that might come to many people's minds is "Well, that is what we have Search for!" No we don't. Well, not exactly. Search doesn't organize our data for us; it makes retrieval faster in larger systems. If you don't believe me, do an internet search for a topic such as Shakespeare and tell me what the most current and correct material is on the subject.

    So how do we go from a pile of books on the floor to nicely organized books on the proper shelves? The answer is two-thirds metadata and one-third taxonomy.

    Metadata is data that describes data. In the case of the Dewey Decimal system, that data helped to organize books into categories such as fiction or non-fiction, and provide additional tags such as animals, psychology, religion etc. so that you could much more easily identify basic keywords that described the material. In the library system, that information is collected, identified, and then recorded when the book is first brought into the library so that the material can be properly placed as well as be identified within a cataloguing system to be more easily retrieved. Do your SharePoint libraries behave like that?

    Taxonomy is the organization of metadata. In the example of the library, who determined that fiction and non-fiction should be one of the primary organizational metadata to categorize books? Why not hard cover and soft cover? Within your own organization, the determination of metadata and the taxonomy surrounding it is purely yours. It needs to reflect your organizational goals, which is why companies like Microsoft can't exactly make that an out of the box feature. YOU have to address it, and unless you like sorting through a million books, you need to address it yesterday.
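    To make the metadata-and-taxonomy point concrete, here is a minimal, hypothetical sketch in plain Python (not SharePoint code; all field names and values are illustrative): each document carries metadata, and the taxonomy is the decision about which fields organize retrieval.

```python
# Hypothetical sketch: metadata turns a pile of documents into a catalogue,
# the way the Dewey Decimal system did for books. Field names are illustrative.

documents = [
    {"title": "Q3 Budget", "category": "finance", "year": 2011},
    {"title": "Hiring Policy", "category": "hr", "year": 2010},
    {"title": "Q4 Budget", "category": "finance", "year": 2011},
]

def find(docs, **criteria):
    """Return documents whose metadata matches every criterion."""
    return [d for d in docs if all(d.get(k) == v for k, v in criteria.items())]

# The taxonomy is the choice of which metadata organizes retrieval:
# here, category first, then year.
budgets_2011 = find(documents, category="finance", year=2011)
# Without metadata, the only option is scanning every document: the pile of books.
```

    Search can make the scan faster, but only metadata makes the second query line possible at all.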

    If you haven't already addressed it, let me help you with a few tips.

    Focus on process

    Data is a byproduct of process. Data simply wouldn't exist if it didn't have somewhere to go or something to be done to it. Knowing and understanding the key processes in your organization is a must. What can be more difficult is identifying the key areas where your processes will likely change, or where you would like them to change in the future. The reason we need to identify this as best we can is so that we can better lay the groundwork now. In other words, after we know what the current process is, we need to ask "What is likely to change? What additional information might be needed to identify problems or opportunities that we could leverage to further improve the process?" As an example, if we examine a simple project management site where we record change requests and have their statuses updated, could you easily identify the total amount of time it took to go from request to resolution? Could I easily identify the chain of events that happened after receiving a change request? And are either of those two details important to me now, or will they be important to me in the future? Questions such as those will help take you beyond simply recording a change request and marking it as 'resolved'. Better metadata = better taxonomy = better processes.
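    The change-request question can be sketched in a few lines. This is a hypothetical illustration, assuming each request records its status transitions; nothing here is SharePoint-specific, and the IDs, statuses, and dates are made up:

```python
# Hypothetical sketch: if each change request records its status transitions,
# request-to-resolution time falls out for free. Field names are illustrative.
from datetime import datetime

history = [
    ("CR-101", "requested", datetime(2011, 3, 1, 9, 0)),
    ("CR-101", "approved",  datetime(2011, 3, 2, 14, 0)),
    ("CR-101", "resolved",  datetime(2011, 3, 4, 11, 0)),
]

def resolution_time(events, request_id):
    """Elapsed time from 'requested' to 'resolved' for one change request."""
    times = {status: when for rid, status, when in events if rid == request_id}
    return times["resolved"] - times["requested"]

elapsed = resolution_time(history, "CR-101")  # 3 days, 2 hours
```

    A list that only stores a final 'resolved' flag can never answer this question; the extra metadata is what makes the process measurable later.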

    Have Multiple Taxonomies

    Taxonomy is fairly simple in concept in that it is leveraged metadata. I think I've already established the importance of having some type of taxonomy. Although what I am about to say is really two versions of the same thing, for the sake of the SharePoint argument I am going to separate taxonomies into two types: navigational taxonomies and categorical taxonomies. The reason for the separation is so they can be planned according to their primary usage, in that users are either finding the data they need, or working with the data to make decisions. By focusing on their usage, we can hopefully make a better taxonomy.

    With navigational taxonomies our focus should be on the Use Cases that you have established for the project. By focusing on what people do with the site, we can streamline their access to their data. You won't be able to establish that unless you understand what people do with your site, and Use Cases are the best way to establish that.

    You should also support more than one navigational taxonomy, since there isn't only one way to complete a task. The goal of the menu navigation should be task focused, so how do we add a second navigational taxonomy? By adding more menus? No. In SharePoint, we can add these extra navigational taxonomies through the introduction of a Site Directory focused site, and/or through the use of custom search pages and results. Both of these options are relatively easy to implement and will allow your users a second or third way to find a location in your growing architecture.

    Categorical taxonomy can be a bit harder to implement since it deals directly with content. We need to collect metadata on content to better describe it, but what should that metadata be? How should it best be structured? Great questions, and the first answer lies in understanding the various processes surrounding your data: how it will be used, what decisions need to be made on it, etc. The metadata from this is typically well understood, and most organizations have little trouble establishing what the metadata is; rather, they have trouble establishing how to best implement it within SharePoint.

    Let me give you some tips for establishing categorical taxonomies:

    Use Content Types

    Content types are a way of establishing a common structure that can be shared amongst lists and libraries. Use them if you want to establish some consistency.

    Use the Managed Metadata Service (MMS)

    You can think of the MMS as a place to store the common vocabulary for your organization, which can be used and shared in a number of ways. Another advantage is that you can delegate the administration of the terms to the people that use them rather than to IT. Be aware that the MMS interface within the Document Information Panel is only supported within Office 2010.

    Support Views

    Views are a great way to change the look and organization of a list or library. They work by changing the display of the data, such as the sort order, which columns are shown, etc. Good views require good metadata.

    Support Soft Metadata

    Hard metadata is metadata that directly fulfills a business requirement. In other words, it really needs to be there, usually in a very structured way where we control the terms and their usage. Soft metadata, on the other hand, is metadata that doesn't have a direct business relationship but can offer some insight into the content. A good example is the way that we tag photos. Quite often we will need some hard metadata, such as the date the photo was taken and the location, but we want to support soft metadata so that users are able to tag the photo with open terms, such as 'wildlife' or 'Christmas Party'. But why do we want to support this? To which my answer is, 'Do we really want to turn away free information?' Granted, there is a minimal support cost to this. In the end, we have content that is simply more usable and, with any luck, could be leveraged one day, so I often tout that the support costs are minimal with a potential for much gain – so why not? SharePoint 2010 can implement this in many ways, including using keywords and/or open MMS term stores.
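    The photo example can be sketched as a tiny, hypothetical model (plain Python, not SharePoint's API; the vocabulary and field names are made up): hard metadata is required and validated against a controlled term set, soft metadata is open text we simply keep.

```python
# Hypothetical sketch of hard vs. soft metadata on a photo record: hard fields
# are required and validated against a controlled vocabulary; soft tags are
# free text we accept because it is free information. All names are made up.

ALLOWED_LOCATIONS = {"Toronto", "Montreal", "Halifax"}  # the controlled terms

def tag_photo(date_taken, location, soft_tags=()):
    """Build a photo record; hard metadata is enforced, soft metadata is open."""
    if location not in ALLOWED_LOCATIONS:
        raise ValueError("unknown location: " + location)
    return {
        "date_taken": date_taken,   # hard: a direct business requirement
        "location": location,       # hard: restricted to the term store
        "tags": set(soft_tags),     # soft: open terms like 'wildlife'
    }

photo = tag_photo("2011-12-15", "Toronto", ["Christmas Party", "wildlife"])
```

    The structured fields are what views and filters depend on; the open tags cost almost nothing to keep and may pay off later.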


    Have an Archival Strategy

    This has been a thorn in my side almost everywhere I go. We work in the information age and are the so-called masters of information technologies, so why are we so bad at archiving strategies? A common dialogue I often have with my clients goes something like this: "Our data retrieval is slow because we have a lot of it, over a million rows." "Why do you have over a million rows in your table?" "We need to keep our data for X years." "Did anyone say you need to keep it in the same storage medium as the daily production data?" "Ummm, no." Archived data does not have to be offline; it can be online and accessible. It simply has a different purpose than your live, day-to-day data, and most importantly, it should be separated. Every time you create a new location where users can add content, whether it be a list, a library, a database, or a file share, you should ask yourself "How does this content retire?" and "When does it change its purpose?" After that, automate the process. Without an archival strategy you are set up for failure, you just don't know when. By accumulating data over time, you cause the live, day-to-day data to slowly become harder to use when it is left in the same storage medium. Retrieving data will be slow, and it will often get in the way of users trying to find the correct content while they are trying to accomplish their day-to-day tasks.
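    The "how does this content retire?" question, once answered, is easy to automate. Here is a minimal, hypothetical sketch (plain Python; the retention period and record shape are illustrative, not a recommendation) of partitioning records into live and archive stores by age:

```python
# Hypothetical sketch of an automated retirement rule: partition records into
# live and archive stores by age, so day-to-day queries stay small. The
# one-year boundary and record fields are illustrative only.
from datetime import date, timedelta

RETIREMENT_AGE = timedelta(days=365)  # illustrative retention boundary

def partition(records, today):
    """Split records into (live, archive) based on their creation date."""
    live, archive = [], []
    for rec in records:
        (archive if today - rec["created"] > RETIREMENT_AGE else live).append(rec)
    return live, archive

rows = [
    {"id": 1, "created": date(2009, 6, 1)},
    {"id": 2, "created": date(2011, 5, 1)},
]
live, archive = partition(rows, today=date(2011, 6, 1))
# id 2 stays in the live store; id 1 moves to the archive store
```

    The archive remains online and queryable; it just no longer sits in the path of everyone's daily work.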

    Next week, Part 2: Project Management…

    Neil McIsaac (MCPD, MCITP, MCTS, MCSD, MCDBA, MCSE, MCSA, MCT) is an accomplished educator, consultant, and developer who specializes in enterprise application development and integration, application architecture, and business intelligence. As an instructor, Neil shares his knowledge and years of experience with students on a wide range of topics including SharePoint, BizTalk, SQL, .NET development, and PowerShell. He recently did an interview about SharePoint in the Cloud with .NET Rocks.

    Neil is an owner of BlueGreen Information Technologies Inc., and has over 18 years of experience working in the IT industry in both the private and public sectors. His focus on large-scale application development and integration keeps Neil involved almost exclusively with enterprise-level companies. However, he also works at every level of government.

    Neil lives in Moncton, New Brunswick Canada. In his spare time, Neil enjoys downhill skiing, golf and a new motorcycle.

  • Canadian Solution Developers' Blog

    SQL Server Roadmap to Denali Webcast Series


    SQL Server Denali offers so many great features for reporting, high availability, business intelligence, and more. If you haven’t had time to sit down and start learning about what’s new in the latest release of SQL Server, join Microsoft Canada for a weekly webcast series to help you discover and understand the benefits of the latest release. Starting on November 9th and continuing every week until December 7th at 1:00 pm EST, you can join in on a series of webcasts all about SQL Server Denali.

    Enterprise Information Management November 9, 2011, 1 – 2pm EST

    Presented by Darren King, Technical Specialist – Data Platform, Microsoft Canada Inc.
    An overview and demonstration of the technology, and a chance to see for yourself how SQL Server can empower your EIM strategy today.

    BI Semantic Model November 16, 2011, 1 – 2pm EST
    Presented by Howard Morgenstern, Technical Specialist – Business Intelligence, Microsoft Canada Inc.
    Learn how organizations can scale from small personal BI solutions to the largest organizational BI needs with the BI Semantic Model.

    SQL Azure – Reporting Services November 23, 2011, 1 – 2pm EST
    Presented by Richard Iwasa, Senior Consultant, Solution Architect, Ideaca
    Learn how SQL Azure Database can help you reduce costs by integrating with existing toolsets and providing symmetry with on-premises and cloud databases. Discover what SQL Azure can do for you.

    Appliances November 30th, 2011, 1 – 2pm EST
    Presented by Doug Harrison, Solution Specialist – Platform, Microsoft Canada Inc.
    An overview of Microsoft's vision for appliances, with details on the workloads supported today and in the future.

    Mission Critical Confidence – Enable mission critical environments December 7th, 2011, 1 – 2pm EST
    Presented by Marc Theoret, Technical Specialist – Data Platform, Microsoft Canada Inc.
    Find out how to enable Mission Critical Environments (focusing on availability and performance) with manageable costs.

    As an added bonus, attendees that register and attend all five (5) modules of the webcast will receive a SQL Server Code Name "Denali" USB key and a copy of Windows Azure Step by Step (May 2011) to be mailed 4-6 weeks after the final webcast. Limit one per person.

    Attend the webcasts, download the Denali evaluation and get yourself up and running with SQL Server Denali. Register today!

  • Canadian Solution Developers' Blog

    Security requires a prison not a fortress


    Rob Labbe, Senior Security Program Manager at Microsoft, challenges us to think about security in a new way: does it matter if someone gets into your system if you stop them from taking anything out?

    One of the best parts of doing webcasts is people often ask you questions you hadn’t quite thought about, or make you think about issues in different ways. I just finished recording a security webcast on .NET Rocks with Carl Franklin and Richard Campbell and it got me thinking. Shame they never warned me I’d have to think.

    The information security “industry” has turned out thousands of security products, tools, methodologies, and processes, you name it… some of it is even pretty good. Given all that innovation, why is it that, at a macro level, we appear not to be getting any better, as an IT industry or as developers, at securing our systems and preventing large-scale compromise?

    It can’t be a lack of tools, platforms and processes can it? Given the huge advances in all those areas, I think it is pretty safe to say we have the tools in our toolbox to be more secure. So, if it isn’t the tools, could it be us? Could it be that as developers and IT pros we’re simply looking at security the wrong way?

    Since the beginning of time, when we have something of value we try to protect it by ensuring the bad guys can’t get to it. We build castles, moats, and walls… Banks build vaults, we bury important military installations in the middle of mountains or solid chunks of granite. All that thinking carried over to our IT systems.  We focus on firewalls to keep the bad guys out, intrusion detection to let us know if they find a way in, and all manner of systems and technologies to do it. We’re building a big, digital vault to keep our company crown jewels locked up. In theory it’s a great plan. If we keep all the bad guys out, then there is no way they can steal our stuff.

    Before we have a chance to finish our perfect, high-security vault, we get a wrinkle, one I like to call “The Business”. They have requirements; they want to let people into our vault. All sorts of people. Good people, bad people, people we don’t even know about. By the time we’re finished poking holes in our vault for all the services and access users want, a complex system can have thousands upon thousands of endpoints, holes, and possible entry points.

    I think we need a new analogy. Rather than building a vault or fortress, perhaps we need to design our networks as prisons. Classify our applications and data according to their sensitivity (minimum security right through super max) and flip a lot of our security and detective controls inward. Let’s focus on controlling the known: our data, and the relatively few users and systems that are within our span of control. Time has proven that we can’t prevent the determined human adversary from finding some foothold in, but we can make great strides in limiting the impact. Does it matter who gets in, if the important data doesn’t get out?
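    The prison model boils down to an egress rule keyed on classification. Here is a minimal, hypothetical sketch (plain Python; the labels, levels, and threshold are invented for illustration, not any real product's policy):

```python
# Hypothetical sketch of the prison model: classify assets by sensitivity and
# put the control on the way OUT, not just the way in. Labels are illustrative.

CLASSIFICATION = {"press_release": 0, "customer_list": 2, "crown_jewels": 3}
MAX_EXPORTABLE = 1  # anything above this level never leaves, whoever asks

def may_export(asset):
    """Egress check: deny by sensitivity, regardless of who got in."""
    return CLASSIFICATION.get(asset, MAX_EXPORTABLE + 1) <= MAX_EXPORTABLE

assert may_export("press_release")
assert not may_export("crown_jewels")   # an intruder inside still can't take it
assert not may_export("unknown_blob")   # unclassified data defaults to locked down
```

    Note the default: anything without a classification is treated as supermax, which is the inward-facing posture the analogy argues for.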

    For IT pros, most of this comes down to doing a really good job with hygiene tasks: keep machines patched and do a good job with identity management (particularly privileged identity) and you’re 90% of the way there. For developers, again it is partly hygiene, but we need to remember that our applications need to be installed on systems, so we need to work with the IT pros to ensure least privilege, and intelligent encryption and data protection based on the risk and data classification.

    To enable good application security hygiene, developers need a good Security Development Lifecycle (SDL). A good SDL will help you build that prison for your key data: it helps you identify the key assets, identify risks, and design security controls to protect those assets. Regardless of what is going on “out there”, the SDL will help you manage and identify the places where you need to work with IT to come up with one big plan.

    Over the next several guest blog posts, I’m going to walk through the SDL from a developer’s perspective, using the prison as the analogy. We’ll look at how following good SDL practices will help us not only build a more secure application, but also do it in a way that has minimal impact on the project budget and schedule. It should be a fun ride – stay tuned!

  • Canadian Solution Developers' Blog

    Bridging the Gap Between Developers and Testers Using Visual Studio 2010



    VS One on One - Bridging The Gap

    In this first One on One with Visual Studio conversation, we’re joined by two of Canada’s experts on Visual Studio and ALM (application lifecycle management). Etienne Tremblay and Vincent Grondin will be talking to us about how Visual Studio’s extensive integrated tools make you more effective and efficient, allowing you to better collaborate with the rest of your project team. They’ll show you how testing tools like MS Test, IntelliTrace, and Coded UI make it easier for you to test – and test often – while not increasing your workload; about Team Foundation Server’s work item management and how it facilitates better communication between you, testers, project managers, and development managers; Team Foundation Server’s version control, lab management, and team build tools and how they can automate many of the manual tasks that you’re required to do today; and lastly, about Visual Studio Test Professional’s Test Manager and Test Runner so that you can do your part in reducing testing efforts and as a result, deliver to market faster.

    Before we get started, a question to Etienne and Vincent. Etienne, Vincent, what inspired you to have this conversation with developers and testers?

    We’ve been toying with the idea of creating a video series of interesting tips and tricks for TFS and VS for quite a while, but time always seemed to prevent us from doing it. Back in November 2010, Vincent Grondin, Mario Cardinal, Mathieu Szablowski, and I did a full day session at our local user group in Montreal about how developers and testers could be good friends with all the new VS 2010 tooling. Jean-René Roy was in attendance and he liked it so much (so did the attendees) that he approached Vincent and me to redo it in Ottawa. He said that he would try and see if he could get a crew to record it! Well, it was in the cards, because he did get a real professional crew to film us while we were re-enacting our play (improved from the first showing), and that’s what we’re going to take a look at here.

    We wanted to make a more “team oriented” presentation right from the beginning; we didn’t want to do yet another PowerPoint drone session for 8 hours. We wanted the audience (and now you) to feel like they were one of us and could relate to the problems we were facing and trying to resolve with the tools. We came up with the idea of making it a technical play: we actually talk to each other and the audience “spies” on us. It worked out great. Most people really enjoyed the format and asked us to do it again for other subjects. We won’t lie to you, it was a lot of work – more than 70 hours went into preparing and rehearsing in order to deliver the final product.

    What do you hope that we’ll take away from this conversation?

    There is a lot of information on the tools available on MSDN but it’s sometimes hard to find or make sense of in the context of real projects.  This conversation is an intro to this material in the real world (as much as it can be) and will provide you a good starting point on the various technologies and tools.  At the end, you can then go off to MSDN and find deeper information and see if the tooling works for you and your team.

    We both hope you enjoy it.

    Thank you, Etienne and Vincent. Our conversation will be broken down into sessions of 20-40 minutes to allow you to work it into your busy schedule. I’d recommend that you go in order, as each segment builds on the previous one.

    Before getting into the sessions, in this introductory video, Etienne and Vincent introduce themselves and set the stage for the rest of the conversation. Take a look.

    Etienne and Vincent, on behalf of myself and the developers in Canada, thank you for having this conversation with us and for all the effort you put into helping us understand what’s possible.

    Remember, this conversation is bidirectional. Share your comments and feedback on our Ignite Your Coding LinkedIn discussion. Etienne, Vincent, and I will be monitoring the discussion and will be happy to hear from you.

    About Etienne Tremblay


    Etienne Tremblay is an Associate Director in charge of the Microsoft technologies center at DMR-Fujitsu in Montreal. He has over 20 years of experience in the IT industry and has specialized in Microsoft technologies for the last 12 years, specifically in managing the development process. He also has expertise in the mining and manufacturing industries. He has spoken at DevTeach since 2005, is a member of the Microsoft Team System Advisory Council, and is a Microsoft ALM MVP. You can reach him at bridging.the.gap@live.com.

    Additional links: Etienne’s blog, Microsoft MVP Profile, Email

    About Vincent Grondin


    Vincent Grondin has over 12 years of experience in the software development field and has been using .NET in enterprise projects for more than 8 years now. He was involved in many enterprise projects for large corporations like Desjardins, Domtar, Cascades and Alcoa but he was also part of a few projects for various government branches. He likes to learn new technologies related to .NET, use the new tools that are designed for .NET and he also loves to share it all with his peers. Yes, he’s a confessed .NET addict and currently works at DMR-Fujitsu as a Senior .NET Consultant.

    Additional links: Vincent’s blog, Microsoft MVP Profile, Email

    The Bridging the Gap Between Developers and Testers Using Visual Studio 2010 series was graciously sponsored by:

  • Canadian Solution Developers' Blog

    Do You Really Need that Web Part? SharePoint 2010 Business Connectivity Services, long name, AWESOME FEATURE!


    Business Connectivity Services is a feature I have heard about but never had a chance to try. In the hands-on labs at TechEd, I sat down, launched a virtual machine, and followed step-by-step instructions to try it out. There were even proctors around to give me a hand when I got stuck. The perfect time to try out BCS!

    BCS (sometimes referred to as BDC, Business Data Connectivity) allows you to connect external data to SharePoint. I know you can do that with web parts, but then you have to show your users how to access and use the web parts – they already know lists! With the BCS features in SharePoint 2010 you can create lists that read data from external data sources like flat files and databases. You could do it in SharePoint 2007, but it was a LOT of work to set it up for read and write. It’s a *lot* easier in SharePoint 2010. You have two choices for setting up BCS: SharePoint Designer or Visual Studio. In the lab I was able to try and compare both methods.

    In the first part of the lab I set up a list that pointed to a database table using SharePoint Designer. It took me about 5-10 minutes. The list allowed the user to read and update the database table. Not bad for 10 minutes' work! I've summarized the steps below; sorry there aren't any screenshots, that's the one drawback to using someone else's hardware. (I found a video that shows you how to create the external content type with SharePoint Designer if you want to try it yourself.)

    1. Edit your site in SharePoint Designer.
    2. In SharePoint designer, add a new External Content Type.
    3. Set the external system of your content type to point to your data source (for example, a SQL Server database).
    4. Use the Data Source Explorer tab to select the table you wish to access.
    5. Right-click the table name and select Create All Operations to create the operations that will be used to read and update the underlying data source.
    6. Use the properties section to map the columns in your data source to your client. (For example, if you are reading contacts from a database table that you want to access from Outlook, you need to specify which column in the database table contains the e-mail address.)
    7. Because your database table may contain millions of records, you should also add a filter to limit the number of records returned; it acts like a WHERE clause.
    8. Now create an External List using your External Content Type. (NOTE: You will need to ensure your users have the necessary permissions to access the External Content Type in the Business Data Connectivity Service)

    During the second part of the lab, I created a list from a flat file using Visual Studio. It took me 50 minutes, and that was following step by step instructions that provided all the code in detail! I can tell you right now: if you have a choice, use SharePoint Designer! The Visual Studio approach is powerful and more flexible, but just a heads up, if you are going to use it to set up your external list, be prepared to spend a fair bit of time going through Help, blogs, and your code to make sure you haven't missed anything. (I found a video on how to set up BDC in Visual Studio 2010.)

    Today's My 5 are tips for SharePoint 2010 developers working with BCS, but they aren't really mine; these tips come from Penny Coventry (@pjcov), one of the Microsoft Certified Trainers working as a Technical Learning Guide in the Hands On Labs. She knows more about this feature than I do!

    Penny’s 5 Tips for SharePoint Developers getting started with BCS (in no particular order)

    1. Check out the book Professional Business Connectivity Services in SharePoint 2010 by Scott Hillier and Brad Stevenson.
    2. If you are going to do a lot of BCS, check out the tool BCS Meta Man from Lightning Tools.
    3. Work closely with your administrator, to ensure you have the necessary permissions to create your business data connectivity model and to ensure that your users can use your model.
    4. You may need to configure Secure Store Services, because in most situations Windows Authentication cannot be used to connect to your data source (and of course most of us have our SQL databases set up to use Integrated Security!). This makes it very difficult to use BCS in SharePoint Foundation because it does not come with Secure Store Services. You can do it, but be prepared for a rather long and arduous battle.
    5. If you need to use the data in workflows, stick to web parts to access your external data, but if all you want to do is give users a way to view and update that data, Business Connectivity Services is the way to go.

    Okay, it's Penny's 5, but I am adding number 6: Penny is quite the SharePoint guru and has written her share of SharePoint books, so I recommend you check out some of her books as well.

  • Canadian Solution Developers' Blog

    Quick SQL Tips – Indexed Views


    Although not a new feature, Indexed views can still be a useful tool for increasing query performance. Of course you have to be careful of the trade-offs. Just like indexing a table, indexing a view may speed up a query, but will increase your storage requirements and slow down insert and update operations. So make sure you benchmark performance before and after you add an index to a view.

    With that caveat in mind, I’d like to do a review of Indexed Views because of a post I saw from a SQL User having trouble creating a Fulltext index on an indexed view.

    When we create a view we simply specify the select statement that will return the data we want displayed in the view.

    CREATE VIEW OrderInfo 
    AS SELECT od.SalesOrderId, od.productid, od.unitprice, od.orderqty, p.name
    FROM Sales.SalesOrderDetail od
    INNER JOIN Production.product p
    ON od.productid = p.productid

    If you want to add indexes to a view, you must make the view schema-bound by adding the WITH SCHEMABINDING clause to the CREATE VIEW statement, and you must specify the schema for each table referenced in the select statement for the view.

    CREATE VIEW OrderInfo WITH SCHEMABINDING
    AS SELECT od.SalesOrderId, od.productid, od.unitprice, od.orderqty, p.name
    FROM Sales.SalesOrderDetail od
    INNER JOIN Production.product p
    ON od.productid = p.productid

    The first index you create on a view must be a unique clustered index, so in this example I create a unique clustered index on the combination of SalesOrderId and ProductId:

    CREATE UNIQUE CLUSTERED INDEX idx_orderinfo_salesOrderid 
    ON orderinfo(SalesOrderid,productid)

    I can now add additional nonclustered indexes to the view as desired:

    CREATE NONCLUSTERED INDEX idx_orderinfo_unitprice
    ON orderinfo(unitprice)

    Now, coming back to the question posted on reddit: can you create a fulltext index on the view? The steps for full text indexes changed quite a bit from SQL Server 2005 to SQL Server 2008. The steps listed below are for SQL Server 2008 and higher, where full text indexing no longer requires a separate service and is enabled automatically.

    In order to create a full text index you first need a full text catalog (unless you have already created one for other fulltext indexes in your database):

    CREATE FULLTEXT CATALOG OrderCatalog AS DEFAULT

    Next I try to create a fulltext index on the product name column. You must specify the name of the column you wish to index and the name of the unique clustered index for the view.

    CREATE FULLTEXT INDEX ON dbo.orderinfo(name) 
    KEY INDEX idx_orderinfo_salesorderid

    At this point I receive an error message, because there are restrictions on the key indexes used for creating full text indexes. The key index must be:

    • Unique
    • Non-nullable
    • Single-column
    • Online
    • Cannot be on a non-deterministic column
    • Cannot be on a nonpersisted computed column
    • Cannot be a filtered index
    • Cannot be based on a column that exceeds 900 bytes

    My key index is based on two columns, so I am unable to create a full text index for this view. So can you create a full text index on a view? It depends. If my view above had a key index that met the requirements listed above then yes! If my key index does not meet the requirements I may need to redesign my index or my view so that I can create a key index that meets the requirements.
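    The restrictions above are really just a checklist, which you can encode and apply mechanically. Here is a small illustrative Python sketch (the dictionary field names are made up for illustration, not a real SQL Server API; in practice you would read this metadata from catalog views such as sys.indexes):

```python
def valid_fulltext_key_index(ix):
    """Apply the full text KEY INDEX restrictions listed above.

    The dictionary fields are illustrative stand-ins for index metadata."""
    return (ix["unique"]
            and not ix["nullable"]
            and len(ix["columns"]) == 1          # single-column
            and ix["online"]
            and not ix["filtered"]
            and ix["max_key_bytes"] <= 900)

# idx_orderinfo_salesOrderid covers two columns, so it fails the check:
print(valid_fulltext_key_index({
    "unique": True, "nullable": False,
    "columns": ["SalesOrderid", "productid"],
    "online": True, "filtered": False, "max_key_bytes": 8,
}))  # False
```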

    So we finish with everyone's favourite answer: it depends. Don't forget, if you know SQL, you know SQL Azure; read about the differences between on-premise SQL Server and SQL Azure database development and you will find it's easier than you think.

  • Canadian Solution Developers' Blog

    Table Valued Parameters Save Code and Traffic


    Do you use Stored Procedures? I hope so, they are great for performance and security. But, on early versions of SQL Server you could only pass one record at a time to a stored procedure. So those of us who nobly followed best practice and used stored procedures for inserting, updating, and deleting records often found ourselves having to write loop logic to call the same stored procedure over and over to insert, update, or delete multiple records. With Table Valued Parameters you can pass a set of records to a stored procedure. Yay! This feature was long overdue and is one of my favourite developer features added in SQL Server 2008.

    In SQL Server 2005, if you had a table that contained a list of Hockey Players, and you wanted to load three records into that table using a stored procedure, your code would look something like this:

    CREATE TABLE HockeyPlayer
    (id   INT         NOT NULL,
     name VARCHAR(50) NULL,
     team VARCHAR(50) NULL)

    CREATE PROCEDURE InsertPlayer
    (@id int, @name VARCHAR(50), @team VARCHAR(50))
    AS
    INSERT HockeyPlayer (id, name, team)
    VALUES (@id, @name, @team)

    EXECUTE InsertPlayer 1, 'Michel', 'Ottawa'
    EXECUTE InsertPlayer 2, 'Mike', 'Toronto'
    EXECUTE InsertPlayer 3, 'Alexei', 'Vancouver'

    We had to execute the stored procedure once for every record we wanted to insert.

    In SQL Server 2008, they added a new feature called Table Valued Parameters. This feature allows you to pass one or more records to a stored procedure. In order to create a stored procedure using Table Valued Parameters, you must perform two steps.

    1. Create a Table Type in the database that defines the structure of the records you will pass to the stored procedure.

    CREATE TYPE PlayerTableType AS TABLE
    (id INT, name VARCHAR(50), team VARCHAR(50))

    2. Create your stored procedure and declare an input parameter with the type set to the Table Type you created. This input variable must be declared as READONLY. (I know my stored procedure isn’t very useful, but you get the idea)

    CREATE PROCEDURE InsertManyPlayers (@PlayerRecords PlayerTableType READONLY)
    AS
       INSERT INTO HockeyPlayer (id, name, team)
       SELECT * FROM @PlayerRecords

    Now you have a stored procedure that can accept a record set. So how do you call it?

    Calling your stored procedure from T-SQL

    To call a stored procedure with a table valued parameter from T-SQL you have to:

    1. Create a variable based on your Table Type
    2. Populate that table variable with the records you want to pass to your stored procedure.
    3. Execute your stored procedure, passing in your table variable

    DECLARE @MyPlayers PlayerTableType

    INSERT INTO @MyPlayers
    VALUES (4, 'Tim', 'Ottawa'),
           (5, 'Joe', 'Toronto'),
           (6, 'Ken', 'Vancouver')

    EXECUTE InsertManyPlayers @MyPlayers

    Calling your stored procedure from .NET

    You can pass a DataTable, a DbDataReader, or a list of SqlDataRecord objects to a table valued parameter. When you declare the command parameter in .NET you must specify SqlDbType as Structured, and TypeName as the name of your table type, then execute the call to your stored procedure.

    In the example below, I create a data table with the same structure as the table type and populate the data table with the records I want to send to the stored procedure.

    Dim dt As New DataTable("player")
    dt.Columns.Add("id", System.Type.GetType("System.Int32"))
    dt.Columns.Add("name", System.Type.GetType("System.String"))
    dt.Columns.Add("team", System.Type.GetType("System.String"))

    Dim newRow As DataRow = dt.NewRow()
    newRow("id") = 7
    newRow("name") = "Chris"
    newRow("team") = "New York"
    dt.Rows.Add(newRow)

    newRow = dt.NewRow()
    newRow("id") = 8
    newRow("name") = "Chris"
    newRow("team") = "New York"
    dt.Rows.Add(newRow)

    A DataTable can be passed directly as the value of a table valued parameter. The code below defines the SqlParameter that will pass the data table and executes the stored procedure.

    Dim Cmd As New SqlCommand("InsertManyPlayers", myCon)
    Cmd.CommandType = System.Data.CommandType.StoredProcedure

    Dim tvp As New SqlParameter
    tvp.ParameterName = "@PlayerRecords"
    tvp.SqlDbType = SqlDbType.Structured
    tvp.TypeName = "dbo.PlayerTableType"
    tvp.Value = dt
    Cmd.Parameters.Add(tvp)

    myCon.Open()
    Cmd.ExecuteNonQuery()
    myCon.Close()


    You have now passed multiple records to a stored procedure in the database with a single call using the magic of table valued parameters. SQL rocks!

    Of course I can’t forget My 5. This week:

    My 5 Places you can go learn something new about SQL Server (in no particular order)

    1. SQLTeach May 30 – June 3, 2011, Montreal, driving distance for Eastern Canada and lots of great content!
    2. TechEd North America May 16-19, Atlanta, don’t forget most of the content is also available online after the conference, even if you did not attend the show!
    3. SQLPass October 11-14th, Seattle, fabulous SQL conference for those on the West Coast.
    4. Greg Low’s blog (this guy knows his SQL Server, and he is a great presenter if you ever get a chance to catch one of his sessions)
    5. SQL Server Developer Center has links to lots of great resources, including a page to compare SQL Server editions and their features. I frequently get asked what the difference is between Express and Standard, or between Standard and Enterprise; this is where I always look it up!

    I know many of us are just upgrading to SQL Server 2008, which is why I brought up this particular feature, but I am curious: what do you want to read about in the blog? Would you like to be reminded of what is in SQL 2008, or do you want to know what's new in Denali? Both? Let us know!

  • Canadian Solution Developers' Blog

    The Basics of Securing Applications


    In this next One on One with Visual Studio conversation, we’re joined by Steve Syfuhs (@stevesyfuhs), who is a Canadian Developer Security MVP. I’ve asked him to chat with us about code security – more specifically, what we can easily do to ensure that our code is safe from common threats and how we can use Visual Studio and Team Foundation Server to do that.

    As in the past, our conversation will be broken down into parts to allow you to squeeze the conversation into your busy schedule. I’d recommend that you go in order as each segment builds on the previous.

    Remember, this conversation is bidirectional. Share your comments and feedback in this Ignite Your Coding LinkedIn discussion. Steve and I will be monitoring the discussion threads and will be happy to hear from you and answer any questions you may have.

    Steve, on behalf of myself and the developers in Canada, I would like to thank you for taking the time to have this conversation with us and for the effort you put into it, allowing us to better understand how to secure our applications.

    So without further ado, Steve, take it away.

    Thanks Jonathan, and hello Canadian solution developers.

    Every year or so a software security advocacy group creates a top 10 list of the security flaws developers introduce into their software.  This is something I affectionately refer to as the "stupid things we do when building applications" list.  The group is OWASP (Open Web Application Security Project) and the list is the OWASP Top 10 Project (I have no affiliation with either).  In this conversation, we will dig into some of the ways we can combat the ever-growing list of security flaws in our applications.

    Security is a trade-off.  We need to balance the requirements of the application with the time and budget constraints of the project. A lot of the time, though, nobody has enough forethought to think that security should be a feature, or more importantly, that security should just be a central design requirement for the application regardless of what the time or budget constraints may be (do I sound bitter?).

    This of course leads to a funny problem.  What happens when your application gets attacked? There is no easy way to say it: the developers get blamed.  Or if it's a seriously heinous breach the boss gets arrested because they were accountable for the breach.  In any case it doesn't end well for the organization.

    Part of the problem with writing secure code is that you just can't look for the bugs at the end of a development cycle, fix them, and move on.  It just doesn't work.  Microsoft introduced the Security Development Lifecycle (SDL) to combat this problem; it introduces processes during the development lifecycle to aid developers in writing secure code.

    Conceptually it's pretty simple: defense in depth.


    In order to develop secure applications, we need to adapt our development model to include security requirements from the very beginning (developer training), all the way up to application release, as well as in how we respond to vulnerabilities after launch. Now, Microsoft, for example, has a vested interest in writing secure code, so it went all in with the SDL.  Companies that haven't made this decision may have considerably more trouble implementing the SDL simply because it costs money to do so.  Luckily we don't have to implement the entire process all at once.

    During this discussion we'll touch on some of the key aspects of the SDL and how we can fit it into the development lifecycle.

    Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.  This is where the top 10 list from OWASP comes in handy: 

    1. Injection
    2. Cross-Site Scripting (XSS)
    3. Broken Authentication and Session Management
    4. Insecure Direct Object References
    5. Cross-Site Request Forgery (CSRF)
    6. Security Misconfiguration
    7. Insecure Cryptographic Storage
    8. Failure to Restrict URL Access
    9. Insufficient Transport Layer Protection
    10. Unvalidated Redirects and Forwards
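    To give a flavour of what the mitigations look like, the standard defence against number 2, Cross-Site Scripting, is to encode user-supplied text before rendering it. A minimal Python sketch (render_comment is a made-up helper; in ASP.NET you would reach for an encoding library such as AntiXSS):

```python
import html

def render_comment(comment: str) -> str:
    # Escape user-supplied text so any markup in it is displayed, not executed.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert('xss')</script>"))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```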

    Next up, we'll take a look at a few of these vulnerabilities up close and some of the libraries available to us to help combat attackers. We'll see how different steps in the SDL process can help find and mitigate these vulnerabilities. After that, we'll take a look at some of the tools Microsoft has created to aid the process of secure design and analysis. Then, we'll dig into some of the architectural considerations of developing secure applications. Lastly, we'll take a look at how we can use Team Foundation Server to help us manage incident responses for future vulnerabilities.

    Looking forward to the discussion.

    About Steve Syfuhs

    Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology. Part developer, part IT Pro, part Consultant working for ObjectSharp.  Steve spends most of his time in the security stack with special interests in Identity and Federation.  He recently received a Microsoft MVP award in Developer Security.  You can find his ramblings about security at www.steveonsecurity.com
  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Part 4 - Secure Architecture



    In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous posts, so check those out (links below) and then join us back here. If you’ve already read the previous posts, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerabilities
    Part 3: Secure Design and Analysis in Visual Studio 2010 and Other Tools
    Part 4: Secure Architecture (This Post)

    In this session of our conversation, The Basics of Securing Applications, Steve provides us with great architectural guidance. As usual, when it comes to architecture, there is a lot to cover, and as such, today’s discussion is a bit longer. On the other hand, the security of your application is a big deal, and getting the architecture right is worth the up-front investment.

    Steve, back to you.


    Before you start to build an application, you need to start with its design.  Previously, I stated that bugs introduced at this stage of the process are the most expensive to fix throughout the lifetime of the project.  It is for this reason that we need a good foundation for security; otherwise it'll be expensive (not to mention a pain) to fix. Keep in mind this isn't an agile vs. the world discussion, because no matter what, at some point you still need to have a design for the application.

    Before we can design an application, we need to know that there are two basic types of code/modules:

    1. That which is security related; e.g. authentication, crypto, etc.
    2. That which is not security related (but should still be secure nonetheless); e.g. CRUD operations.

    They can be described as privileged or unprivileged.


    Whenever a piece of code is written that deals with things like authentication or cryptography, we have to be very careful with it, because it should be part of the secure foundation of the application. Privileged code is the authoritative core of the application.  Careless design here will render your application highly vulnerable.  Needless to say, we don't want that.  We want a foundation we can trust.  So, we have three options:

    • Don't write secure code, and be plagued with potential security vulnerabilities.  Less cost, but you cover the risk.
    • Write secure code, test it, have it verified by an outside source.  More cost, you still cover the risk.
    • Make someone else write the secure code.  Range of cost, but you don't cover the risk.

    In general, from a cost/risk perspective, our costs and risks decrease as we move from the top of the list to the bottom.  This should therefore be a no-brainer: DO NOT BUILD YOUR OWN PRIVILEGED SECURITY MODULES.  Do not invent a new way of doing things if you don't need to, and do not rewrite modules that have already been vetted by security experts.  This may sound harsh, but seriously, don't.  If you think you might need to, stop thinking.  If you still think you need to, contact a security expert. PLEASE!

    This applies to both coding and architecture. When we talked about vulnerabilities, we did not come up with a novel way of protecting our inputs, we used well known libraries or methods.  Well now we want to apply this to the application architecture. 


    Let's start with authentication.

    A lot of the time an application has a need for user authentication, but its core function has nothing to do with user authentication.  Yes, you might need to authenticate users for your mortgage calculator, but the core function of the application is calculating mortgages, which has very little to do with users.  So why would you put that application in charge of authenticating users?  It seems like a fairly simple argument, but whenever you let your application use something like a SqlMembershipProvider you are letting the application manage authentication.  Not only that, you are letting the application manage the entire identity for the user.  How much of that identity information is duplicated in multiple databases?  Is this really the right way to do things?  Probably not.

    From an architectural perspective, we want to create an abstract relationship between the identity of the user and the application.  Everything this application needs to know about this user is part of this identity, and (for the most part) the application is not an authority on any of this information because it's not the job of the application to be the authority.

    Let's think about this another way.

    Imagine for a moment that you want to get a beer at the bar. In theory the bartender should ask you for proof of age. How do you prove it? Well, one option is to have the bartender cut you in half and count the number of rings, but there could be some problems with that. The other option is for you to write down your birthday on a piece of paper, which the bartender approves or disapproves. The third option is to go to the government, get an ID card, and then present the ID to the bartender.

    Some may have laughed at the idea of just writing your birthday on a piece of paper, but this is essentially what is happening when you are authenticating users within your application because it is up to the bartender (or your application) to trust the piece of paper. However, we trust the government's assertion that the birthday on the ID is valid, and the ID is for the person requesting the drink.  The bartender knows nothing about you except your date of birth because that's all the bartender needs to know.  Now, the bartender could store information that they think is important to them, like your favorite drink, but the government doesn't care (as it isn't the authoritative source), so the bartender stores that information in his own way.

    Now this raises the question: how do you prove your identity/age to the government, or how do you authenticate against this external service?  Frankly, it doesn't matter, as that is the core function of the external service and not of our application.  Our application just needs to trust that the assertion is valid, and trust that it is a secure authentication mechanism.

    In developer speak, this is called Claims Based Authentication.

    • A claim is an arbitrary piece of information about an identity, such as age, and is bundled into a collection of claims, to be part of a token.
    • A Security Token Service (STS) generates the token, and our application consumes it.  It is up to the STS to handle authentication. 

    Both Claims Based Authentication and the Kerberos Protocol are built around the same model, although they use different terms.  If you are looking for examples, Windows Live/Hotmail use Claims via the WS-Federation protocol.  Google, Facebook, and Twitter use Claims via the OAuth protocol.  Claims are everywhere.

    Alright, less talking, more diagramming:


    The process goes something like this:

    1. Go to STS and authenticate (this is usually a web page redirect + the user entering their credentials)
    2. The STS tells the user's browser to POST the token to the application
    3. The application verifies the token and checks whether it trusts the STS
    4. The Application consumes the token and uses the claims as it sees fit
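    Those four steps can be sketched in miniature in Python. This is purely illustrative: the function names are invented, and a real STS establishes trust with certificates and a protocol such as WS-Federation rather than a shared HMAC key, but the shape of the trust relationship is the same:

```python
import hashlib
import hmac
import json

# Stand-in for the trust relationship between the STS and the application.
STS_KEY = b"sts-signing-key"

def sts_issue_token(claims: dict) -> dict:
    """The STS authenticates the user (not shown) and issues a signed token."""
    payload = json.dumps(claims, sort_keys=True)
    signature = hmac.new(STS_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def app_consume_token(token: dict) -> dict:
    """The application verifies the token came from the trusted STS,
    then uses the claims as it sees fit."""
    expected = hmac.new(STS_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        raise ValueError("Token was not issued by a trusted STS")
    return json.loads(token["payload"])

token = sts_issue_token({"name": "Alice", "dateofbirth": "1985-06-05"})
claims = app_consume_token(token)
print(claims["dateofbirth"])  # 1985-06-05; the app never saw the user's credentials
```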

    Now we get back to asking how the heck does the STS handle authentication?  The answer is that it depends (Ah, the consultant’s answer).  The best case scenario is that you use an STS and identity store that already exist.  If you are in an intranet scenario use Active Directory and Active Directory Federation Services (a free STS for Active Directory).  If your application is on the internet use something like Live ID or Google ID, or even Facebook, simplified with Windows Azure Access Control Services.  If you are really in a bind and need to create your own STS, you can do so with the Windows Identity Foundation (WIF).  In fact, use WIF as the identity library in the diagram above.  Making a web application claims-aware involves a process called Federation.  With WIF it's really easy to do.

    Accessing claims within the token is straightforward because you are only accessing a single object, the identity within the CurrentPrincipal:

    private static TimeSpan GetAge()
    {
        IClaimsIdentity ident = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

        if (ident == null)
            throw new ApplicationException("Isn't a claims based identity");

        var dobClaims = ident.Claims.Where(c => c.ClaimType == ClaimTypes.DateOfBirth);

        if (!dobClaims.Any())
            throw new ApplicationException("There are no date of birth claims");

        string dob = dobClaims.First().Value;
        TimeSpan age = DateTime.Now - DateTime.Parse(dob);
        return age;
    }

    There is a secondary benefit to Claims Based Authentication: you can also use it for authorization.  WIF supports the concept of a ClaimsAuthorizationManager, which you can use to authorize access to site resources.  Instead of writing your own authorization module, you simply define the rules for access, which is very much a business problem, not a technical one.
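    The pay-off is that authorization becomes data rather than code. A hypothetical sketch in Python shows the idea (this illustrates the concept only; it is not the WIF ClaimsAuthorizationManager API):

```python
def authorize(claims, resource_rules, resource):
    """Grant access when the identity carries the claim the rule demands."""
    rule = resource_rules[resource]
    return claims.get(rule["type"]) == rule["value"]

# The rules are configuration (a business decision), not compiled code:
rules = {"/reports": {"type": "role", "value": "manager"}}

print(authorize({"role": "manager"}, rules, "/reports"))  # True
print(authorize({"role": "clerk"}, rules, "/reports"))    # False
```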

    Once authentication and authorization are dealt with, the two final architectural nightmares revolve around privacy and cryptography.


    Privacy is the control of Personally Identifiable Information (PII), which is defined as anything you can use to personally identify someone (good definition, huh?).  This can include information like SIN numbers, addresses, phone numbers, etc.  The easiest solution is to simply not use the information.  Don't ask for it and don't store it anywhere.  Since this isn't always possible, the goal should be to use (and request) as little as possible.  Once you have no more uses for the information, delete it.

    This is a highly domain-specific problem and it can't be solved in a general discussion on architecture and design.  Microsoft Research has an interesting solution to this problem by using a new language designed specifically for defining the privacy policies for an application:

    Preferences and policies are specified in terms of granted rights and required obligations, expressed as assertions and queries in an instance of SecPAL (a language originally developed for decentralized authorization). This paper further presents a formal definition of satisfaction between a policy and a preference, and a satisfaction checking algorithm. Based on the latter, a protocol is described for disclosing PIIs between users and services, as well as between third-party services.

    Privacy is also a measure of access to information in a system. Authentication and authorization are a core component of proper privacy controls.  There needs to be access control on user information.  Further, access to this information needs to be audited.  Anytime someone reads, updates, or deletes personal information, it should be recorded somewhere for review later.  There are quite a number of logging frameworks available, such as log4net or ELMAH.
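    The auditing requirement can be sketched in a few lines of Python (the helper names are invented; in the .NET stack the log sink would be log4net or ELMAH rather than an in-memory list):

```python
from datetime import datetime, timezone

audit_log = []  # in practice this would go to a logging framework

def audited(action):
    """Record every read/update/delete of personal information for later review."""
    def wrap(fn):
        def inner(user, record_id, *args):
            audit_log.append({
                "when": datetime.now(timezone.utc).isoformat(),
                "who": user,
                "action": action,
                "record": record_id,
            })
            return fn(user, record_id, *args)
        return inner
    return wrap

people = {42: {"phone": "555-0100"}}

@audited("read")
def read_phone(user, record_id):
    return people[record_id]["phone"]

print(read_phone("alice", 42))   # 555-0100 -- the access itself
print(audit_log[-1]["action"])   # read -- ...and the audit trail entry
```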


    For the love of all things holy, do not do any custom crypto work.  Rely on publicly vetted libraries like Bouncy Castle and formats like OpenPGP.

    • Be aware of how you are storing your private keys. Don't store them in the application as magic strings; in fact, don't store them with the application at all. If possible, store them in a Hardware Security Module (HSM).
    • Make sure you have proper access control policies for the private keys.
    • Centralize all crypto functions so different modules aren't using their own implementations.
    • Finally, if you have to write custom encryption wrappers, make sure your code is capable of switching encryption algorithms without requiring recompilation.  The .NET platform makes this easy: you can specify the algorithm as a string:
    public static byte[] SymmetricEncrypt(byte[] plainText, byte[] initVector, byte[] keyBytes)
    {
        if (plainText == null || plainText.Length == 0)
            throw new ArgumentNullException("plainText");
        if (initVector == null || initVector.Length == 0)
            throw new ArgumentNullException("initVector");
        if (keyBytes == null || keyBytes.Length == 0)
            throw new ArgumentNullException("keyBytes");

        using (SymmetricAlgorithm symmetricKey = SymmetricAlgorithm.Create("algorithm")) // e.g.: "AES"
        {
            return CryptoTransform(plainText, symmetricKey.CreateEncryptor(keyBytes, initVector));
        }
    }

    private static byte[] CryptoTransform(byte[] payload, ICryptoTransform transform)
    {
        using (MemoryStream memoryStream = new MemoryStream())
        using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
        {
            cryptoStream.Write(payload, 0, payload.Length);
            cryptoStream.FlushFinalBlock(); // flush the final block, or the ciphertext is truncated
            return memoryStream.ToArray();
        }
    }
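    The same configuration-driven selection shows up in other stacks too. For instance, Python's standard hashlib picks a digest algorithm from a string at run time, which is the same agility principle the wrapper above relies on:

```python
import hashlib

def digest(data: bytes, algorithm_name: str) -> str:
    """The algorithm is chosen by a configuration string at run time,
    so swapping algorithms requires no recompilation."""
    return hashlib.new(algorithm_name, data).hexdigest()

print(len(digest(b"payload", "sha256")))  # 64 hex characters
print(len(digest(b"payload", "sha512")))  # 128 hex characters
```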

    Microsoft provides a list of all supported algorithms, as well as how to specify new algorithms for future use. By following these design guidelines you should have a fairly secure foundation for your application.  Now let's look at unprivileged modules.


    When we talked about vulnerabilities, there was a single, all-encompassing, hopefully self-evident solution to most of them: sanitize your inputs.  Most vulnerabilities, one way or another, are the result of bad input.  This is therefore going to be a very short section.

    • Don't let input touch queries directly.  Build business objects around data and encode any strings.  Parse all incoming data.  Fail gracefully on bad input.
    • Properly lock down access to resources through authorization mechanisms in privileged modules, and audit all authorization requests.
    • If encryption is required, call into a privileged module.
    • Validate the need for SSL.  If necessary, force it.
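
    To make the first point concrete, here is a hedged sketch in T-SQL (the Customers table and @searchTerm variable are made up for illustration): instead of concatenating user input into a query string, pass it as a typed parameter so it is always treated as data, never as executable code.

    ```sql
    DECLARE @searchTerm nvarchar(100) = N'O''Brien'; -- imagine this came from user input

    -- Risky: concatenated input becomes part of the executable query text
    -- EXEC (N'SELECT * FROM Customers WHERE LastName = ''' + @searchTerm + N'''');

    -- Safer: sp_executesql keeps the input as a parameter value
    EXEC sp_executesql
        N'SELECT CustomerID, LastName FROM Customers WHERE LastName = @name',
        N'@name nvarchar(100)',
        @name = @searchTerm;
    ```

    The same principle applies whatever data access library you use: let the driver bind parameters rather than building query strings yourself.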

    Final Thoughts

    Bugs introduced during this phase of development are the most costly and hardest to fix, because the design and architecture of the application are the foundation for making sure it isn't vulnerable to attack. Throughout our conversation, we have looked at how to develop a secure application.  Next time, we will look at how to respond to threats and mitigate the damage done.

    About Steve Syfuhs

    Steve Syfuhs is a bit of a Renaissance Kid when it comes to technology. Part developer, part IT Pro, part Consultant working for ObjectSharp. Steve spends most of his time in the security stack with special interests in Identity and Federation. He recently received a Microsoft MVP award in Developer Security. You can find his ramblings about security at www.steveonsecurity.com
  • Canadian Solution Developers' Blog

    How to Study for a Certification Exam


    The most common question I get when talking about certification is what should I study? Well, before you crack open a book, take the time to make sure you are taking the right exam, and figure out what you don’t know so your study time will be spent as effectively as possible. If you want to get certified, there is a simple plan you can follow:

    1. Choose your certification goal/exam
    2. Figure out what you don’t know
    3. Fill in the gaps
    4. Take the exam

    In the last blog, “How to Prepare for a Certification Exam” I talked about how to figure out what you don’t know. That blog explained how to determine what topics are on the exam, and how to prioritize the topics to study. Now it’s time to look at Step Three: Filling in the Gaps.

    You’ve completed step two “Figure out what you don’t know”, so you should now have a list of the topics you need to study. But where will you find suitable study materials? There are many, many resources out there to help you prepare for exams. Let’s review a few key resources.

    Preparation Materials tab in the Exam Guide

    Every exam guide has a section called Preparation Materials that lists the MS Press books, e-learning, and courses that include content covered in the exam. It’s important to note that just because a course or book is listed here does not mean that taking that course or reading that book guarantees you will pass the exam. You know what topics you need to study; these are suggested resources. Look at the course outline to see if it covers the areas you need to study, and check the book’s table of contents to see if it has sections on the areas YOU need to study. The courses and books are a great overall review, but unless you take a boot camp course designed to help you prepare for a specific exam, you should assume the course only covers some of the exam content. So you still need to find your gaps and fill them. Some exams, like the SharePoint 2010 exam, also have Exam Coaching Sessions you can watch, with tips on that specific exam.


    Training Kits

    The more popular exams have books called Self-Paced Training Kits that are specifically designed to help you prepare for an exam. They include exercises and practice tests to help you study and check your knowledge at the same time. If you are going to invest in a book to help you prepare, and there is a training kit for your exam, these are often a worthwhile investment. If you already have deep knowledge of the product and just need to learn a few specific topics, the training kit is probably more than you need.

    Practice Exams

    Practice Exams are also a great study tool. Both MeasureUp and SelfTest give you the option of completing a practice test in a learning or study mode. They are particularly valuable because after you answer each question you can check your answer and then read an explanation of why each answer was wrong or right. For maximum value from a practice test, I recommend customizing your test. Both SelfTest and MeasureUp give you the option of specifying which topics you want on your test. If there is a specific content area you know you need to study, customize your test to ask you all the questions on that topic.

    Training Catalog

    The Microsoft Learning Training Catalog is another great place to search for exam preparation materials. You can search this catalog to get a list of learning materials related to your technology. You can even create a My Learning account to help you remember which resources you want to review and help you keep track of which white papers or articles you have read. You can search by technology or by exam number, and you can customize the search to change what types of resources are returned in the search results.


    Learning Plans

    One of the resources you may see displayed in your training catalog search results is a Learning Plan. Many exams have Learning Plans associated with them. Learning plans are designed to identify resources to help you achieve a goal like passing an exam or learning a new version of a tool. Learning plans are an often overlooked resource and can point you to articles or whitepapers that you might otherwise have missed. To follow the links on a learning plan, sign in with a Windows Live ID and save the learning plan to My Learning; then you can go to the articles and keep track of which content you have already read.


    Try features out for yourself

    Reading is great, but actually trying out a feature is always the best way to learn if you have time. Many product teams provide virtual machines you can download, and TechDays Canada will be hosting a number of Hands On Labs on developer and infrastructure tools throughout the year you can complete online.

    Today’s My 5 covers a situation that occasionally comes up with new exams:

    When a certification exam is first released, sometimes you go to the Preparation Materials tab and there are no books, no courses, and no practice tests. So now what? Well first of all you have earned my respect because you are probably going to pass this exam before most of your peers. But what can you study?

    5 Places to look for study materials when there are no preparation materials listed for the exam

    1. MSDN & TechNet, sometimes reading one or two pages on the topic in MSDN or TechNet is enough to grasp the basic concept and commands
    2. Try it for yourself, there is nothing quite like trying out the technology to master it, if you don’t have the software, search the team blogs to see if there are any Virtual Machines you can download or labs you can try
    3. Visit the team blogs (e.g. Windows, SharePoint, Exchange, Azure) – they often have posts explaining the latest features
    4. Bing  - try searching for the specific topics you need to study, you may find someone out there has written a great blog post or article that explains the very feature you are trying to understand.
    5. Visit the Microsoft Learning Training Catalog, search by product, there may be whitepapers or articles you can read

    Next blog we will wrap up our certification discussion and talk about what to expect on the day of the exam and a few tips to increase your chances of walking out with a passing score.

  • Canadian Solution Developers' Blog

    SQL Azure Essentials for the Database Developer



    I admit it, I am a SQL geek. I really appreciate a well designed database. I believe a good index strategy is a thing of beauty and a well written stored procedure is something to show off to your friends and co-workers. What I, personally, do not enjoy is all the administrative stuff that goes with it. Backup and recovery, clustering, and installation are all important, but they're just not my thing. I am first and foremost a developer. That’s why I love SQL Azure. I can jump right in to the fun stuff: designing my tables, writing stored procedures, and writing code to connect to my awesome new database, and I don’t have to deal with planning for redundancy in case of disk failures or keeping up with security patches.

    There are lots of great videos out there to explain the basics: What is SQL Azure and Creating a SQL Azure Database. In fact there is an entire training kit to help you out when you have some time to sit down and learn. I’ll be providing a few posts over the coming weeks to talk about SQL Azure features and tools for database developers. What I’d like to do today is jump right in and talk about some very specific things an experienced database developer should be aware of when working with SQL Azure.

    You can connect to SQL Azure using ANY client with a supported connection library such as ADO.NET or ODBC

    This could include an application written in Java or PHP. Connecting to SQL Azure with OLE DB is NOT supported right now. SQL Azure supports tabular data stream (TDS) version 7.3 or later. There is a JDBC driver you can download to connect to SQL Azure, and Brian Swan has written a post on how to get started with PHP and SQL Azure. The .NET Framework Data Provider for SQL Server (System.Data.SqlClient) from .NET Framework 3.5 Service Pack 1 or later can be used to connect to SQL Azure, and the Entity Framework from .NET Framework 3.5 Service Pack 1 or later can also be used with SQL Azure.
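
    As a rough sketch (the server name, database, and credentials below are placeholders, not real values), an ADO.NET connection string for SQL Azure generally follows this shape – note the user@server login format and that encryption should be enabled:

    ```
    Server=tcp:yourserver.database.windows.net,1433;Database=yourdatabase;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;
    ```

    The same server address and credentials work from other TDS clients, with the syntax adjusted to that client's connection library.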

    You can use SQL Server Management Studio (SSMS) to connect to SQL Azure

    In many introduction videos for SQL Azure, they spend all their time using the SQL Azure tools. That is great for small companies or folks building a database for their photography business who may not have a SQL Server installation. But for those of us who do have SQL Server Management Studio, you can use it to manage your database in SQL Azure. When you create the server in SQL Azure, you will be given a Fully Qualified DNS Name. Use that as your Server name when you connect in SSMS. For those of you in the habit of using Server Explorer in Visual Studio to work with the database, Visual Studio 2010 allows you to connect to a SQL Azure database through Server Explorer.



    The System databases have changed

    • Your tempdb is hiding – Surprise, there is no tempdb listed under system databases. That doesn’t mean it’s not there. You are running on a server managed by someone else, so you don’t manage tempdb. Your session can use up to 5 GB of tempdb space; if a session exceeds that limit, it is terminated with error code 40551.
    • The master database has changed – When you work with SQL Azure, there are some system views that you simply do not need because they provide information about aspects of the database you no longer manage. For example there is no sys.backup_devices view because you don’t need to do backups (if you are really paranoid about data loss, and I know some of us are, there are ways to make copies of your data). On the other hand there are additional system views to help you manage aspects you only need to think about in the cloud. For example sys.firewall_rules is only available in SQL Azure because you define firewall rules for each SQL Azure server but you wouldn’t do that for a particular instance of SQL Server on premise.
    • SQL Server Agent is NOT supported – Did you notice msdb is not listed in the system databases? There are third-party tools and community projects that address this issue. Check out SQL Azure Agent on CodePlex to see an example of how to create similar functionality. You can also run SQL Server Agent on your on-premise database and connect to a SQL Azure database.
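
    Since you manage firewall rules per server rather than per instance, the views and procedures mentioned above can be used with T-SQL against the master database. A hedged sketch (the rule name and address range are invented for illustration):

    ```sql
    -- Run against the master database of your SQL Azure server
    SELECT name, start_ip_address, end_ip_address
    FROM sys.firewall_rules;

    -- Add or update a rule (name and IP range are examples only)
    EXEC sp_set_firewall_rule N'OfficeNetwork', '131.107.0.1', '131.107.0.255';
    ```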


    You don’t know which server you will connect to when you execute a query

    When you create a database in SQL Azure, three copies of the database are actually made on different servers. This helps provide higher availability, failover, and load balancing. Most of the time it doesn’t matter, as long as we can request a connection to the database and read and write to our tables. However, this architecture does have some ramifications:

    • No four-part names in queries – Since you do not know which server will execute a query, four-part names that specify the server name are not allowed.
    • No USE command or cross-database queries – When you create two databases, there is no guarantee that they will be stored on the same physical server. That is why the USE command and cross-database queries are not supported.
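
    A quick sketch of what this means in practice (the server, database, and table names are made up):

    ```sql
    USE OtherDatabase;                        -- not supported: no USE command
    SELECT * FROM MyServer.MyDb.dbo.Orders;   -- not supported: four-part name
    SELECT * FROM OtherDb.dbo.Orders;         -- not supported: cross-database query
    -- Instead, open a separate connection to each database you need to query.
    ```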

    Every database table must have a clustered index

    You can create a table without a clustered index, but you won’t be able to insert data into the table until you create the clustered index. This has never affected my database design because I always have a clustered index on my tables to speed up searches.
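
    For example, here is a sketch of a hypothetical Orders table whose primary key doubles as the required clustered index:

    ```sql
    CREATE TABLE Orders
    (
        OrderID    int  NOT NULL,
        OrderDate  date NOT NULL,
        CustomerID int  NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
    );
    -- INSERTs succeed now; without a clustered index they would fail in SQL Azure.
    ```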

    Some Features are not currently supported

    • Integrated Security – SQL Server authentication is used for SQL Azure, which makes sense given you are managing the database but not the server.
    • No Full Text Searches – For now at least, full text searches are not supported by SQL Azure. If this is an issue for you, there is an interesting article in the TechNet Wiki on a .NET implementation of a full text search engine that can connect to SQL Azure.
    • CLR is not supported – You have access to .NET through Windows Azure, but you can’t use .NET to define your own types and functions in the database. You can still create your own functions and types with T-SQL.

    You can connect to SQL Azure from your Business Intelligence Solutions

    • SQL Server Analysis Services - Starting with SQL Server 2008 R2 you can use SQL Azure as a data source when running SQL Server Analysis Services on-premise.
    • SQL Server Reporting Services – Starting with SQL Server 2008 R2, you can use SQL Azure as a data source when running SQL Server Reporting Services on-premise.
    • SQL Server Integration Services – You can use the ADO.NET Source and Destination components to connect to SQL Azure, and in SQL Server 2008 R2 there was a “Use Bulk Insert” option added to the Destination to improve SQL Azure performance.

    Today’s My 5 of course has to relate to SQL Azure!

    5 Steps to get started with SQL Azure

    1. Create a trial account and login
    2. Create a new SQL Azure server – choose Database | Create a new SQL Azure Server and choose your region (for Canada, North Central US is the closest)
    3. Specify an Administrator account and password and don’t forget it! – some account names such as admin, administrator, and sa are not allowed as administrator account names
    4. Specify the firewall rules – these are the IP addresses that are allowed to access your Database Server. I recommend selecting “Allow other Windows Azure services to access this server” so you can use Windows Azure services to connect to your database.
    5. Create a Database and start playing – You can either create the database using T-SQL from SSMS, or use Create New Database in the Windows Azure Platform tool, which gives you a wizard to create the database.

    Now you know the ins and outs, so go try it out and come back next week to learn more about life as a database developer in SQL Azure.

  • Canadian Solution Developers' Blog

    Why Employers Should Pay for Training and Certification


    As you know, I had the chance to sit down with Canadian MVP Mitch Garvis (@MGarvis) recently to talk about training and certification. So far, we’ve discussed:

    Not Studying for a Certification Exam
    Why Employers Should Pay for Training and Certification (This Post)

    Getting certified is an investment of time and, further, an investment of money. There is a fee for the exam, preparatory books, and courses. However, these are needed in order to review the material and prepare for the certification exam. Mitch believes that these are expenses your employer should cover, or at the very least assist with. So the question is “why?”.

    The following is Mitch’s answer, offering up guidance to help you build your case for your employer:

    That’s a great question. There are so many ways that we can learn a product… I’m going to use a horrible expression, but “there’s more than one way to skin a cat”. I’ll explain by using an example.

    In desktop deployment, you, as an IT Pro, would know how to install Windows 7 on your computer. You do whatever you need to do and three and a half hours later, your computer is done [on a good day]. I was at a client yesterday and they told me that they have their deployments down to an hour and forty-five minutes. “We took the time to learn it and how to do it – we did the research.” they told me. I asked: “what did you learn?” and they gave me a list of products that they learned, each of which was a generation or two removed from the current. I looked at them and I said “well if you just learned this, which is the Microsoft Deployment Toolkit, the hour and forty-five minutes becomes 17 minutes. Did you know that?!” The gentleman with whom I was speaking was shocked – he was the desktop deployment guru for his company and he realized that he learned it the wrong way.

    So, you can learn something and it will get the job done, or you can learn something and do it right and get the job done better. You have to invest the time in training. You know, my 13-year-old son knows everything that he knows and has no concept of what he doesn’t know. Let’s take “13-year-old son” out of that sentence. You know what you know very well, but you will have very little concept of what you don’t know if you don’t know it.

    Let’s relate that to development. You want (or need) to learn Windows Azure, let’s say. Windows Azure is a huge platform and as with any complex platform, there are many different ways to architect solutions targeted at the platform. Some ways are more performant than others, while some ways are more cost-effective than others. There are also ways to have a balance between the two (“do it right and get the job done better” as Mitch says above); however, finding out how to do so would be challenging unless you either have the experience of previous trial-and-error attempts or take training from others who have done it and learn from their experiences.

    Imagine the IT pro or developer that doesn’t participate in community events (user groups) or doesn’t go to events held by Microsoft or Microsoft partners, and by the way, there is nothing wrong with that, but then how would you know that there are things that you don’t know? By telling your boss “hey look, there’s something out there that I don’t know, and therefore I don’t know what is the proper way to do it.” This is why training and certification matters. By the way, if you don’t think that you can have that conversation with your boss, and convince them that you need those training days, bring me in. I'll have that conversation for you and in no time, I’ll convince him or her.

    You know what? CxOs don’t care about what’s cool out there. They care about one very simple equation – how do I increase ROI and reduce cost of ownership. The answer to that is – train your people properly, invest in their training.

    I remember, back in the day, I used to have this discussion with one of my old employers and somehow, the discussion always came down to learning something on my own. The thought was that if I could learn it on my own, that was more cost effective, and therefore additional paid training and certification were not approved.

    I did say before that there is more than one way to skin a cat. There is more than one way to learn something the right way. I didn’t say there was more than one way to do something the right way. There is more than one way to learn how to do something the right way. If you can’t get the X days off to take a course – your boss says to you “learn it on your own time” – say to him/her “you know what boss? I could probably learn it almost as well as the course, but I still need books and still need the certification afterwards.” and get him/her to invest in that. You can learn the right way from books, online forums, and articles. Just make sure that you don’t hack your way through it and learn it the way you think you should be learning it. Don’t just get the frameworks and just start going at it and think you’re going to be an excellent, excellent developer.

    Now It’s Your Turn

    How have you approached training and certification with your employer? How did you pitch your business case? Which approaches have worked and which haven’t? Share your thoughts.

    Conversation Continued

    Stay tuned for more insights from my conversation with Mitch as we chat about actually taking exams, some tips and tricks, and what to do after an exam, whether you pass or fail.

    Mitch Garvis

    Mitch Garvis is a Renaissance Man of the IT world with a passion for community.  He is an excellent communicator which makes him the ideal trainer, writer, and technology evangelist. Having founded and led two major Canadian user groups for IT Professionals, he understands both the value and rewards of helping his peers. After several years as a consultant and in-house IT Pro for companies in Canada, he now works with various companies creating and delivering training for Microsoft to its partners and clients around the world. He is a Microsoft Certified Trainer, and has been recognized for his community work with the prestigious Microsoft Most Valuable Professional award. He is an avid writer, and blogs at http://garvis.ca.

  • Canadian Solution Developers' Blog

    The Basics of Securing Applications: Part 5 – Incident Response Management Using Team Foundation Server


    In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous post, so check that out (links below) and then join us back here. If you’ve already read the previous post, welcome!

    Part 1: Development Security Basics
    Part 2: Vulnerabilities
    Part 3: Secure Design and Analysis in Visual Studio 2010 and Other Tools
    Part 4: Secure Architecture
    Part 5: Incident Response Management Using Team Foundation Server (This Post)

      In this session of our conversation, The Basics of Securing Applications, Steve talks about the differences between protocols and procedures when it comes to security and how to be ready in an event of a security situation.

      Steve, back to you.

      There are only a few certainties in life: death, taxes, and one of your applications getting attacked.  Throughout the lifetime of an application it will undergo a barrage of attacks – especially if it's public facing.  If you followed the SDL, tested properly, coded securely, and managed well, you will have gotten most of the bugs out.


      There will always be bugs in production code, and there will very likely always be a security bug in production code.  Further, if there is a security bug in production code, an attacker will probably find it.  Perhaps the best metric for security is along the lines of mean-time-to-failure.  Or rather, mean-time-to-breach.  All safes for storing valuables are rated by how long they can withstand certain types of attacks – not whether they can, but how long they can.  There is no one single thing we can do to prevent an attack, and we cannot prevent all attacks.  It's just not in the cards.  So, it stands to reason that we should prepare for something bad happening.  The final stage of the SDL requires that an Incident Response Plan be created.  This is the procedure to follow in the event of a vulnerability being found.

      In security parlance, there are protocols and procedures.  The majority of the SDL is all protocol.  A protocol is the usual way to do things.  It's the list of steps you follow to accomplish a task that is associated with a normal working condition, e.g. fuzzing a file parser during development.  You follow a set of steps to fuzz something, and you really don't deviate from those steps.  A procedure is when something is different.  A procedure is reactive.  How you respond to a security breach is a procedure.  It's a set of steps, but it's not a normal condition.

      An Incident Response Plan (IRP - the procedure) serves a few functions:

      • It has the list of people to contact in the event of the emergency
      • It is the actual list of steps to follow when bad things happen
      • It includes references to other procedures for code written by other teams

      This may be one of the more painful parts of the SDL, because it's mostly process over anything else.  Luckily there is a wonderful product from Microsoft that helps: Team Foundation Server.  For those of you who just cringed, bear with me.

      Microsoft released the MSF-Agile plus Security Development Lifecycle Process Template for VS 2010 (it also takes second place in the longest product name contest) to make the entire SDL process easier for developers.  There is the SDL Process Template for 2008 as well.

      It's useful for each stage of the SDL, but we want to take a look at how it can help with managing the IRP.  First though, let's define the IRP.

      Emergency Contacts (Incident Response Team)

      The contacts usually need to be available 24 hours a day, seven days a week.  These people have a range of functions depending on the severity of the breach:

      • Developer – Someone to comprehend and/or triage the problem
      • Tester – Someone to test and verify any changes
      • Manager – Someone to approve changes that need to be made
      • Marketing/PR – Someone to make a public announcement (if necessary)

      Each plan is different for each application and for each organization, so there may be ancillary people involved as well (perhaps an end user to verify data).  Each person isn't necessarily required at each stage of the response, but they still need to be available in the event that something changes.

      The Incident Response Plan

      Over the years I've written a few Incident Response Plans (never mind that I was asked to do it after an attack most times – you WILL go out and create one after reading this, right?).  Each plan was unique in its own way, but there were commonalities as well.

      Each plan should provide the steps to answer a few questions about the vulnerability:

      • How was the vulnerability disclosed?  Did someone attack, or did someone let you know about it?
      • Was the vulnerability found in something you host, or an application that your customers host?
      • Is it an ongoing attack?
      • What was breached?
      • How do you notify your customers about the vulnerability?
      • When do you notify them about the vulnerability?

      And each plan should provide the steps to answer a few questions about the fix:

      • If it's an ongoing attack, how do you stop it?
      • How do you test the fix?
      • How do you deploy the fix?
      • How do you notify the public about the fix?

      Some of these questions may not be answerable immediately – you may need to wait until a post-mortem to answer them. 

      This is the high level IRP for example:

      1. The Attack – It's already happened
      2. Evaluate the state of the systems or products to determine the extent of the vulnerability
        1. What was breached?
        2. What is the vulnerability?
      3. Define the first step to mitigate the threat
        1. How do you stop the threat?
        2. Design the bug fix
      4. Isolate the vulnerabilities if possible
        1. Disconnect targeted machine from network
        2. Complete forensic backup of system
        3. Turn off the targeted machine if hosted
      5. Initiate the mitigation plan
        1. Develop the bug fix
        2. Test the bug fix
      6. Alert the necessary people
        1. Get Marketing/PR to inform clients of breach (don't forget to tell them about the fix too!)
        2. If necessary, inform the proper legal/governmental bodies
      7. Deploy any fixes
        1. Rebuild any affected systems
        2. Deploy patch(es)
        3. Reconnect to network
      8. Follow up with legal/governmental bodies if prosecution of attacker is necessary
        1. Analyze forensic backups of systems
      9. Do a post-mortem of the attack/vulnerability
        1. What went wrong?
        2. Why did it go wrong?
        3. What went right?
        4. Why did it go right?
        5. How can this class of attack be mitigated in the future?
        6. Are there any other products/systems that would be affected by the same class?

      Some of these procedures can be done in parallel, hence the need for people to be on call.

      Team Foundation Server

      So now that we have a basic plan created, we should make it easy to implement.  The SDL Process Template (mentioned above) creates a set of task lists and bug types within TFS projects that are used to define things like security bugs, SDL-specific tasks, exit criteria, etc.


      While these can (and should) be used throughout the lifetime of the project, they can also be used to map out the procedures in the IRP.  In fact, a new project creates an entry in Open SDL Tasks to create an Incident Response Team:


      A bug works well to manage incident responses.


      Once a bug is created we can link a new task with the bug.


      And then we can assign a user to the task:


      Each bug and task are now visible in the Security Exit Criteria query:


      Once all the items in the Exit Criteria have been met, you can release the patch.


      Security is a funny thing. A lot of times you don't think about it until it's too late. Other times you follow the SDL completely, and you still get attacked.

      In this conversation (which spanned a few weeks) we looked at writing secure software from a pretty high level.  We touched on common vulnerabilities and their mitigations, tools you can use to test for vulnerabilities, some thoughts to apply to architecting the application securely, and finally we looked at how to respond to problems after release.  By no means will this conversation automatically make you write secure code, but hopefully you’ve received guidance around understanding what goes into writing secure code.  It's a lot of work, and sometimes it's hard work.

      Finally, there is an idea I like to put into the first section of every Incident Response Plan I've written, and I think it applies to writing software securely in general:

      Something bad just happened.  This is not the time to panic, nor the time to place blame.  Your goal is to make sure the affected system or application is secured and in working order, and your customers are protected.

      Something bad may not have happened yet, and it may not in the future, but it's important to plan accordingly because your goal should be to protect the application, the system, and most importantly, the customer.

    • Canadian Solution Developers' Blog

      Your FREE Trial is Actually FREE, Now…


      Too often I get asked whether the Windows Azure trial is actually FREE because you have to enter a credit card when signing up. With the introduction of Spending Limits, yes, yes, it is.


      Here’s the situation – you’ve been following the Canadian Developer Connection, my blog, or wherever you get your updates on what’s new and exciting in the developer world. You’re sitting down to give Windows Azure a try (check out the Windows Azure Challenge for a fun way to get started with Windows Azure). You go to create your free Windows Azure trial and boom – it asks you for a credit card. You scratch your head and say: “If I put my card in there, I’m going to get charged – but it’s a free trial…”

      The truth is that, up until recently, it was technically possible to get charged for your Windows Azure usage if you went over the resources that come with the free trial. But now – no more. Your FREE trial is actually FREE because of a new feature that was added to Windows Azure called a Spending Limit, and the nice thing is that it is enabled automatically to ensure that you’re protected!

      Now when you sign up for a new trial subscription and deploy applications to Windows Azure, the spending limit, which is by default set to $0 (meaning you don’t want to spend any money) will prevent you from being charged! If and when your usage exceeds the monthly amounts included in your subscription, Windows Azure will disable your service for the remainder of that billing month, which includes removing any apps you have running (though your data in your storage accounts and databases will be accessible as read-only). At the beginning of the next billing month, your subscription will be re-enabled and you will be able to re-deploy your Windows Azure apps and have full access to your storage accounts and databases. Perfect for ensuring that you are not charged for playing around and getting comfortable with Windows Azure.

      Here’s what you’ll see as you approach the limit of your subscription:


      and then when you’ve reached the limit, rather than charging your credit card, the subscription is disabled:


      No more worries of being charged

      With spending limits in place, there are no more excuses as to why you can’t give Windows Azure a try. Here are two great ways to do so:

      Windows Azure Camp Challenge
      Download the tools and a hands-on lab to complete on your own computer. You can then reward yourself with a few drinks on us.
      Go >>
      Basics of Application Development for Windows Azure on TechDays Online
      Use our virtual environments to complete the lab. You won’t have to download or install the tools. Just the remote viewer.
      Go >>

      Join the Conversation

      I’d love to hear all about your first experiences with Windows Azure – what compelled you to give it a try? What was your first time like? Did you have any “ah-ha” moments? Did you come to any realizations or conclusions about Windows Azure? Once you’ve gone through one of the above (and/or the many other hands-on labs on TechDays Online), join the conversation about first experiences in the Canadian Developer Connection group on LinkedIn.

    • Canadian Solution Developers' Blog

      Bridging The Gap Between Developers and Testers Using Visual Studio 2010: Part 1 of 13 - Migrating VSS to TFS




      In this post, we’re continuing our One on One with Visual Studio conversation from March 13 with Canadian MVPs Etienne Tremblay and Vincent Grondin, Bridging the Gap Between Developers and Testers Using Visual Studio 2010. If you’ve just joined us, the conversation builds on the previous post, so check it out and then join us back here. If you’re re-joining us, welcome back!

      In this session of our conversation, Bridging The Gap Between Developers and Testers Using Visual Studio 2010, Etienne and Vincent give us a brief introduction to Team Foundation Server, its tools, and how it integrates with the tools in your environment that you’re already familiar with, such as Visual Studio, Office, and SharePoint. They’ll also show us how migration from an existing Visual SourceSafe repository to Team Foundation Server can be automated so that you don’t have to migrate everything by hand.

      With that, Etienne and Vincent, we’re all yours.

      Etienne and Vincent, where can we get more information on the items that you discussed in this session?

      Remember, this conversation is bidirectional. Share your comments and feedback on our Ignite Your Coding LinkedIn discussion. Etienne, Vincent, and I will be monitoring the discussion and will be happy to hear from you.

    • Canadian Solution Developers' Blog

      The Basics of Securing Applications: Part 3 - Secure Design and Analysis in Visual Studio 2010 and Other Tools



      In this post, we’re continuing our One on One with Visual Studio conversation from May 26 with Canadian Developer Security MVP Steve Syfuhs, The Basics of Securing Applications. If you’ve just joined us, the conversation builds on the previous posts, so check those out (links below) and then join us back here. If you’ve already read them, welcome back!

      Part 1: Development Security Basics
      Part 2: Vulnerabilities
      Part 3: Secure Design and Analysis in Visual Studio 2010 and Other Tools (This Post)

      In this session of our conversation, The Basics of Securing Applications, Steve shows us how we can identify potential security issues in our applications by using Visual Studio and other security tools.

      Steve, back to you.

      Every once in a while someone says to me, “hey Steve, got any recommendations on tools for testing security in our applications?” It's a pretty straightforward question, but I find it difficult to answer because a lot of times people are looking for something that will solve all of their security problems, or more likely, solve all the problems they know they don't know about because, well, as we talked about last week (check out last week’s discussion), security is hard to get right.  There are a lot of things we don't necessarily think about when building applications.

      Last time we only touched on four out of ten types of vulnerabilities that can be mitigated through the use of frameworks.  That means that the other six need to be dealt with another way.  Let's take a look at what we have left:

      1. Injection
      2. Cross-Site Scripting (XSS)
      3. Broken Authentication and Session Management
      4. Insecure Direct Object References
      5. Cross-Site Request Forgery (CSRF)
      6. Security Misconfiguration
      7. Insecure Cryptographic Storage
      8. Failure to Restrict URL Access
      9. Insufficient Transport Layer Protection
      10. Unvalidated Redirects and Forwards

      None of these can actually be fixed by tools either.  Well, that's a bit of a downer.  Let me rephrase that a bit: these vulnerabilities cannot be fixed by tools, but some can be found by tools.  This brings up an interesting point: there is a major difference between identifying vulnerabilities with tools and fixing vulnerabilities with tools.  We fix vulnerabilities by writing secure code, but we can find vulnerabilities with the use of tools.

      Remember what I said in our conversation two weeks ago (paraphrased): the SDL is about defense in depth. This is the concept of layering security measures throughout an application, so that if one measure fails, another is still in place to protect the application or data. It stands to reason, then, that there is no single thing we can do to secure our applications, and by extension no single tool that will find all of our vulnerabilities.

      There are different types of tools to use at different points in the development lifecycle. In this article we will look at tools that we can use within three of the seven stages of the lifecycle: Design, implementation, and verification:


      Within these stages we will look at some of their processes, and some tools to simplify the processes.


      Next to training, design is probably the most critical stage of developing a secure application, because bugs introduced here are the most expensive to fix over the lifetime of the project.

      Once we've defined our secure design specifications, e.g. the privacy or cryptographic requirements, we need to create a threat model that defines all the different ways the application can be attacked.  It's important for us to create such a model because an attacker will absolutely create one, and we don't want to be caught with our pants down.  This usually goes hand in hand with architecting the application, as changes to either will usually affect the other.  Here is a simple threat model:


      It shows the flow of data to and from different pieces of the overall system, with a security boundary (red dotted line) separating the user's access and the process. This boundary could, for example, show the endpoint of a web service. 

      However, this is only half of it.  We need to actually define the threat.  This is done through what's called the STRIDE approach.  STRIDE is an acronym that is defined as:

      Name | Security Property | Huh?
      Spoofing | Authentication | Are you really who you say you are?
      Tampering | Integrity | Has this data been compromised or modified without knowledge?
      Repudiation | Non-repudiation | The ability to prove (or disprove) the validity of something like a transaction.
      Information Disclosure | Confidentiality | Are you only seeing data that you are allowed to see?
      Denial of Service | Availability | Can you access the data or service reliably?
      Elevation of Privilege | Authorization | Are you allowed to actually do something?

      We then analyze each point on the threat model for STRIDE.  For instance:

      Element | Spoofing | Tampering | Repudiation | Info Disclosure | DoS | Elevation
      Data | True | True | True | True | False | True

      It's a bit of a contrived model because it's fairly vague, but it makes us ask these questions:

      • Is it possible to spoof a user when modifying data?  Yes: There is no authentication mechanism.
      • Is it possible to tamper with the data? Yes: the data can be modified directly.
      • Can you prove the change was done by someone in particular? No: there is no audit trail.
      • Can information be leaked out of the application? Yes: you can read the data directly.
      • Can you disrupt service of the application? No: the data is always available (actually, this isn't well described – a big problem with threat models which is a discussion for another time).
      • Can you access more than you are supposed to? Yes: there is no authorization mechanism.
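The questions above are really just the six STRIDE flags applied to one element. A minimal sketch of that bookkeeping in Python (the element name, flags, and questions are illustrative assumptions, not the SDL Threat Modeling tool's data model):

```python
# Each STRIDE threat paired with the question it makes us ask.
STRIDE = [
    ("Spoofing", "Is there an authentication mechanism?"),
    ("Tampering", "Can the data be modified directly?"),
    ("Repudiation", "Is there an audit trail?"),
    ("Information Disclosure", "Can the data be read directly?"),
    ("Denial of Service", "Can service be disrupted?"),
    ("Elevation of Privilege", "Is there an authorization mechanism?"),
]

def analyze(flags):
    """Keep only the threats that apply, paired with their questions."""
    return [(name, question) for (name, question), applicable
            in zip(STRIDE, flags) if applicable]

# The 'Data' element analyzed above: everything applies except DoS.
threats = analyze([True, True, True, True, False, True])
for name, question in threats:
    print(f"{name}: {question}")
```

The payoff of writing it down this way is that nothing gets skipped: every element in the model gets all six questions asked of it.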

      For more information you can check out the Patterns and Practices Team article on creating a threat model.

      This can get tiresome very quickly.  Thankfully, Microsoft came out with a tool called the SDL Threat Modeling tool.  You start with a drawing board to design your model (as shown above) and then you define its STRIDE characteristics:


      It's basically just a grid of fill-in-the-blanks which opens up into a form:


      Once you create the model, the tool will auto-generate a few well-known characteristics of the types of interactions between elements, as well as provide a bunch of useful questions to ask for less common interactions.  It's at this point that we can start to get a feel for where the application's weaknesses exist, or will exist.  If we compare this to our six vulnerabilities above, we've done a good job of finding a couple: we've already come across an authentication problem and potentially a problem with direct access to data.  Next week, we will look at ways of fixing these problems.

      After a model has been created we need to define the attack surface of the application, and then figure out how to reduce it.  This is a fancy way of saying that we need to know all the different ways other things can interact with our application.  This could be a set of APIs, or generic endpoints like port 80 for HTTP:


      Attack surface could also include changes in permissions to core Windows components.  Essentially, we need to figure out the changes our application introduces to the known state of the computer it's running on.  Aside from analysis of the threat model, there is no easy way to do this before the application is built.  Don't fret just yet though, because there are quite a few tools available to us during the verification phase which we will discuss in a moment.  Before we do that though, we need to actually write the code.


      Visual Studio 2010 has some pretty useful features that help with writing secure code. First is the analysis tools:


      There is a rule set specifically for security:


      If you open it you are given a list of 51 rules to validate whenever you run the test. This encompasses a few of the OWASP top 10, as well as some .NET specifics.

      When we run the analysis we are given a set of warnings:


      They are a good sanity check whenever you build the application, or check it into source control.

      In prior versions of Visual Studio you had to run FxCop to analyze your application, but Visual Studio 2010 calls into FxCop directly.  These rules were migrated from CAT.NET, a plugin for Visual Studio 2008. There is a V2 version of CAT.NET in beta hopefully to be released shortly.

      If you are writing unmanaged code, there are a couple of extras that are available to you too.

      One major source of security bugs in unmanaged code is a well-known set of functions that manipulate memory and are easy to call incorrectly.  These are collectively called the banned functions, and Microsoft has released a header file that deprecates them:

      #pragma deprecated (_mbscpy, _mbccpy)
      #pragma deprecated (strcatA, strcatW, _mbscat, StrCatBuff, StrCatBuffA, StrCatBuffW, StrCatChainW, _tccat, _mbccat)

      Microsoft also released an ATL template that allows you to restrict the domains or security zones in which your ActiveX control can execute.

      Finally, there is also a set of rules that you can run for unmanaged code.  You can enable these to run on build:


      These tools should be run often, and the warnings should be fixed as they appear because when we get to verification, things can get ugly.


      In theory, if we have followed the guidelines for the phases of the SDL, verification of the security of the application should be fairly tame.  In the previous section I said things can get ugly; what gives?  Well, verification is sort of the Rinse-and-Repeat phase.  It's testing.  We write code, we test it, we get someone else to test it, we fix the bugs, and repeat.

      The SDL has certain requirements for testing.  If we don't meet these requirements, it gets ugly.  Therefore we want to get as close as possible to secure code during the implementation phase.  For instance, if you are writing a file format parser you have to run it through 100,000 different files of varying integrity.  In the event that something catastrophically bad happens on one of these malformed files (which equates to it doing anything other than failing gracefully), you need to fix the bug and re-run the 100,000 files – preferably with a mix of new files as well.  This may not seem too bad, until you realize how long it takes to process 100,000 files.  Imagine it takes 1 second to process a single file:

      100,000 files * 1 second = 100,000 seconds
      100,000 seconds / 60 seconds per minute = ~1,667 minutes
      1,667 minutes / 60 minutes per hour = ~28 hours

      This is a time consuming process.  Luckily, Microsoft has provided some tools to help.  First up is MiniFuzz:


      Fuzzing is the process of taking a good copy of a file and manipulating the bits in different ways; some random, some specific.  This file is then passed through your custom parser… 100,000 times.  To use this tool, set the path of your application in the process to fuzz path, and then a set of files to fuzz in the template files path.  MiniFuzz will go through each file in the templates folder and randomly manipulate them based on aggressiveness, and then pass them to the process.
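The core mutation idea can be sketched in a few lines of Python. This is an illustration of the general technique (random bit flips controlled by an aggressiveness ratio), not MiniFuzz's actual algorithm; the template bytes and parameter names are invented:

```python
import random

def fuzz(template: bytes, aggressiveness: float = 0.01) -> bytes:
    """Return a mutated copy of a good template file by flipping
    random bits. Higher aggressiveness flips more bits."""
    data = bytearray(template)
    flips = max(1, int(len(data) * aggressiveness))  # always flip at least one bit
    for _ in range(flips):
        position = random.randrange(len(data))
        data[position] ^= 1 << random.randrange(8)   # flip one random bit
    return bytes(data)

random.seed(42)                    # deterministic for the demo
good = b"RIFF....WAVEfmt "         # pretend template file contents
bad = fuzz(good)                   # same length, at least one bit different
```

Each fuzzed copy is then fed to the parser under test, and the test harness watches for anything other than a graceful failure.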

      Regular expressions also run into a similar testing requirement, so there is a Regex Fuzzer too.

      An often overlooked part of the development process is the compiler.  Sometimes we compile our applications with the wrong settings.  BinScope Binary Analyzer can help us as it will verify a number of things like compiler versions, compiler/linker flags, proper ATL headers, use of strongly named assemblies, etc.  You can find more information on these tools on the SDL Tools site.

      Finally, we get back to attack surface analysis.  Remember how I said there aren't really any tools to help with analysis during the design phase?  Luckily, there is a pretty useful tool available to us during the verification phase, aptly named the Attack Surface Analyzer.  It's still in beta, but it works well.

      The analyzer goes through a set of tests and collects data on different aspects of the operating system:


      The analyzer is run twice: once before your application is installed or deployed, and again after.  The analyzer will then do a delta on all of the data it collected and return a report showing the changes.

      The goal is to reduce attack surface. Therefore we can use the report as a baseline to modify the default settings to reduce the number of changes.
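The before/after delta at the heart of this approach is just a set difference over the collected snapshots. A minimal sketch, where the snapshot entries are invented examples rather than the Attack Surface Analyzer's real data model:

```python
# System state before installing the application...
before = {"port:80", "service:Spooler", "acl:C:\\Windows\\System32"}

# ...and after. The install opened a port and registered a service.
after = {"port:80", "port:8080", "service:Spooler", "service:MyAppSvc",
         "acl:C:\\Windows\\System32"}

added = after - before    # new attack surface introduced by the install
removed = before - after  # surface that disappeared

print("Added:", sorted(added))
print("Removed:", sorted(removed))
```

Everything in `added` is a change your application introduced to the known state of the machine, and therefore a candidate for reduction.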

      It turns out that our friends on the SDL team have released a new tool just hours ago.  It's the Web Application Configuration Analyzer v2 – so while not technically a new tool, it's a new version of a tool.  Cool!  Let's take a look:


      It's essentially a rules engine that checks for certain conditions in the web.config files for your applications.  It works by looking at all the web sites and web applications in IIS and scans through the web.config files for each one.
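To make the rules-engine idea concrete, here is a sketch of two checks such a scanner might apply to a web.config, using Python's standard XML parser. The rules and the sample config are illustrative assumptions; they are not the tool's actual rule set:

```python
import xml.etree.ElementTree as ET

# A deliberately insecure sample web.config fragment.
WEB_CONFIG = """\
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <customErrors mode="Off" />
  </system.web>
</configuration>"""

def check_config(xml_text):
    """Apply two example rules and return a list of findings."""
    root = ET.fromstring(xml_text)
    findings = []
    compilation = root.find("system.web/compilation")
    if compilation is not None and compilation.get("debug") == "true":
        findings.append("compilation debug='true' should be 'false' in production")
    errors = root.find("system.web/customErrors")
    if errors is not None and errors.get("mode") == "Off":
        findings.append("customErrors mode='Off' exposes error details to visitors")
    return findings

for finding in check_config(WEB_CONFIG):
    print("FAIL:", finding)
```

A real scanner runs dozens of rules like these against every site and application it finds in IIS.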

      The best part is that you can scan multiple machines at once, so you know your entire farm is running with the same configuration files.  It doesn't take very long to do the scan of a single system, and when you are done you can view the results:


      It's interesting to note that this was a scan of my development machine (don't ask why it's called TroyMcClure), and there are quite a number of failed tests.  All developers should run this tool on their development machines, not only for their own security, but also so that the application can be developed in the most secure environment possible.

      Final Thoughts

      It's important to remember that tools will not secure our applications.  They can definitely help find vulnerabilities, but we cannot rely on them to solve all of our problems.

      Looking forward to continuing the conversation.

    • Canadian Solution Developers' Blog

      By The Way There is Still No Documentation…


      A couple of weeks ago I talked about the challenges of finding up to date documentation and showed you how to generate dependency graphs to help you figure out the structure of your code using Visual Studio 2010 Ultimate Edition.

      This week I want to show you another useful feature in Visual Studio 2010 to help you generate documentation from existing code. The dependency graphs are great for big picture analysis, which assemblies are referenced and the classes in each assembly, but what if I need a lower level of detail?

      Once again Visual Studio 2010 Ultimate Edition comes to the rescue with the sequence diagram generator. Maybe you’ve been asked to investigate an error message received by a user when they click on a particular button. You can just go into the event handler and generate a sequence diagram to determine the method calls from that event handler. This is much simpler than manually walking through the code class by class, method by method. Simply place the cursor in the code editor window, within the method for which you want to generate a sequence diagram, and right-click. Choose Generate Sequence Diagram from the context menu as shown in Figure 1.


      Figure 1 Generating Sequence Diagram

      When you select Generate Sequence Diagram you get a window which allows you to control the level of detail you want to include in the sequence diagram.


      Figure 2 Generate Sequence Diagram Pop Up Window


      Let’s take a quick look at the different options:

      • Maximum call depth controls how many calls deep you want the sequence diagram to draw. If you have a lot of nested calls and different classes you probably want to increase this from the default value of 3.
      • Include calls in Current project will display only calls to methods in the same project as the method you selected
      • Include calls in Current Solution will display only calls to methods in the same solution as the method you selected
      • Include calls in Solution and external references will display all calls to all methods regardless of the assembly or location where they reside
      • Exclude calls to Properties and events will leave out calls to get and set methods for properties, and to event handlers. Unless you suspect the error is in a get or set method, the diagram is usually easier to read without these calls.
      • Exclude calls to System namespace will leave out calls to any methods that are part of the System namespace. Since most of the time the bugs are in our code, we often exclude these calls, though there are times when seeing System namespace calls can be helpful, for example when debugging localization issues or database connections.
      • Exclude calls to Other namespaces allows you to explicitly list namespaces whose calls you want excluded from the diagram. This allows you to focus in on specific sections of code.
      • Add diagram to current project allows you to save the diagram in your project with a .sequencediagram extension

      If I choose OK and generate the sequence diagram with the default values shown in Figure 2, Visual Studio generates the sequence diagram shown in Figure 3.



      Figure 3 Default Sequence Diagram

      You can see my event handler instantiates a new Student object, and then calls the Save method of the Student class which instantiates an instance of the StudentData class and calls the Add method. This is a very simple example, but shows how quickly you can outline method calls from the event handler. Generating a sequence diagram does not take very long so you can experiment with different settings to get the right level of information for your needs.

      So once again, without using any external tools, we have the ability to generate documentation for our undocumented project! You already have Visual Studio, it is so much more than just a code editor! For just a quick sense of how much more it can do, just take a minute to look at the Visual Studio 2010 feature comparison chart. What features are you using? Testing? Database development? Version Control? Build Automation?

      Today’s My 5 is related to documentation

      5 ways to make your code more readable

      1. Do not abbreviate variable names. With autocomplete and IntelliSense features (especially the Pascal case IntelliSense in Visual Studio 2010), you don’t need to keep your variable names 3 characters long anymore!
      2. Do not call your variable “id” unless it is REALLY obvious what id is stored in that variable. The number of times I have had to read through code to figure out whether id represented a StudentId, CourseId, CategoryId or ClassId! This is one of my pet peeves.
      3. Start using LINQ to SQL. Building T-SQL statements by concatenating strings is really confusing to read and prone to syntax errors like missing spaces or commas; LINQ to SQL is easier to read and debug.
      4. Use meaningful parameter names. Once again, IntelliSense is a great feature, and it works for your methods as well as the built-in ones, so use meaningful parameter names so that anyone calling your method can easily figure out the expected parameter values.
      5. Be consistent with casing. Are you going to use uppercase for constants? Pascal casing or camel casing for public variables? Be consistent, and everyone will immediately recognize the different types of variables from their names alone.
    • Canadian Solution Developers' Blog

      Maybe You Do Support IE9 and No-One Told You


      I got an interesting email from a developer the other day who had attended one of the Internet Explorer 9 code camps. He told me he was having trouble using a particular website with IE9. When he called their support line, he was told “We don’t support IE9, you should uninstall IE9 or use another browser.” Whoa! Stop Right There! Just because you haven’t done complete regression testing in IE9 doesn’t mean you should just abandon the members of the public who have moved to IE9 and want to access your site. At least try compatibility mode first! Compatibility mode is the feature in IE9 designed for backwards compatibility, so if you or someone you are talking to has trouble browsing a site in IE9, it is your first line of defense. Please forward this article to your service desk so the next time an IE9 user calls, they have a chance at solving that user’s issue with the click of a single button. I understand the service desk may not be able to say they support IE9 simply because the corporation has not completed full testing on the browser, but many users will be satisfied with a “You can try this but we don’t officially support it” solution like compatibility view.

      There are 2 ways to access compatibility mode.

      1. Use the Compatibility mode button in the toolbar.
      2. Select Compatibility mode from the menu.

      Let’s take a look at each.

      We’ll start with a website that doesn’t work in IE9. I am not a Beastie Boys lover or hater, I just happen to know that the web site promoting their new album does not work in IE9. When I visit the site in IE9, the main screen is just an image, and the menu options at the bottom of the screen are all jumbled on top of each other.


      Let’s use our first option, Use the Compatibility mode button in the toolbar, to fix the site. If you look at the toolbar, you will notice an icon of a broken page.


      If you click on this icon you will enter compatibility view, which displays the page as if you were using Internet Explorer 7. Voila! The site now works!


      IE9 will remember the sites where you selected compatibility view. If you return to that site in another browsing session it will automatically be displayed in compatibility view.

      IE9 actually examines the source code of the website to determine whether it should display the compatibility view icon in the toolbar. But it is possible that one day you will visit a site that does not work in IE9 and there is no compatibility button on the toolbar. In the immortal words of The Hitchhiker’s Guide to the Galaxy: “Don’t Panic.”
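One of the signals IE9 looks for is the X-UA-Compatible document compatibility directive. If you own the site, you can declare it yourself in the page head (or send it as an HTTP response header) so that IE9 renders the page in an older document mode automatically and your visitors never need the button at all:

```html
<!-- In the page's <head>: renders the page in IE7 document mode in
     IE8 and later, so visitors don't need Compatibility View. -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
```

This is a stopgap, of course; the real fix is to update the site for standards mode.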

      You simply need to use our second option, Select Compatibility mode from the menu, to select compatibility view.

      You may have noticed the menu bar is not displayed by default in IE9. Just press the <ALT> key to display the menu. To enter compatibility view choose Tools | Compatibility View.


      That’s it! Now, if you’ve read this far, that shows dedication, so you deserve an extra tip: take a look at the menu option Tools | Compatibility View Settings. It brings up the following window.


      You can use the Compatibility View Settings Window to:

      • Add or remove websites from Compatibility View
      • Include updated website lists from Microsoft - this automatically displays, in Compatibility View, websites that Microsoft has identified and registered as requiring it
      • Display intranet sites in Compatibility View - this displays all your intranet sites in Compatibility View, so you may be able to start using IE9 before all the internal corporate sites have been upgraded and tested for IE9
      • Display all websites in Compatibility View - I would not recommend this setting, because more and more websites are taking advantage of HTML5 and pinned site features that require a current browser such as IE9.

      So there you have it, before you tell someone you don’t support IE9, give Compatibility View a try!

      Today’s My 5:

      5 Reasons you want to use IE9 to browse web sites. (in no particular order)

      1. Full hardware acceleration - IE9 leverages the Graphics Processor Unit (GPU) as well as the CPU, which means graphics and video display faster and more smoothly than ever before. So everyday sites with graphics and video will be better.
      2. Faster JavaScript execution – The new JavaScript engine, Chakra, is a huge improvement in speed over IE8. Don’t believe me? Check out the SunSpider results for JavaScript performance on IE9.
      3. Favourites on steroids (aka pinned sites) – If you have a favourite website, you can just drag the tab down to the taskbar and pin it there. Now whenever you start up your computer, you have an icon you can use to launch that site. Some websites customize the pinning with their own icons and jump lists (check out tsn.ca, cbc.ca, or theglobeandmail.com).
      4. More screen space for the website – When you browse a website, you want to see the content, not the browser. With the new layout in IE9, more screen space is devoted to the website.
      5. F12 Developer Tools – Hey, we are developers after all! The developer tools that display when you press F12 are fantastic web debugging tools built right into the browser. Yes, we had the F12 tools in IE8, but the new Network and Profiler tabs add even more tools to our developer toolkit! I like the developer tools so much, I wrote a blog post about the Console tab.