Vittorio Bertocci

Scatter thoughts

Setting up a quick & dirty STS which supports smartcard backed managed cards... using Zermatt


Just back from vacation. The tan barely started to fade, and here I am already playing with the new shiny toy :-). Did you experiment with Zermatt by now? As Kim mentions the samples (and the documentation) are an excellent way to start, and I am sure that blog posts & tutorials will soon start mushrooming here and there in the blogosphere: here I begin my humble contribution with my first technical post about Zermatt.

I had *absolutely* no hesitation when deciding which scenario I should tackle first: an active STS which handles requests backed by smartcards. I received asks about it from many segments (especially about eID management from governments and high authentication levels for finance) and from pretty much everywhere in the world (especially Europe and Asia): I am really delighted to finally have a chance to give you something about that scenario that you can compile in Visual Studio, as opposed to the usual whiteboard sketches :-)

Before we dive into the code, let me disclaim the disclaimable:

  • as usual, the code you see in this blog is just an example and is by no means production-ready code. My purpose here is to introduce you to new ideas, so I favor readability and clarity over completeness
  • If you consider the definition of best practices as "A technique or methodology that, through experience and research, has proven to reliably lead to a desired result", I think I can safely say that there are no established best practices yet. Sure, there are some fixed points (use AppliesTo, always check issuers & revocations, encrypt & sign as appropriate, sanitize everything including incoming claims...) and I am happy to share my experience here, but I think that the "experience and research" of you guys using the product in your business scenarios is what will drive the emergence of best practices. That's why it's so crucial that we get your feedback, and why you should not take what I write here as the gold standard, but maintain a healthy skeptical attitude :)

Ooook, let's get to work. We should start by better defining the problem: we want to develop a Security Token Service (STS) that can be invoked via WS-Trust (hence active); furthermore, we want our STS to process Request for Security Token (RST) messages that have been secured by using a smartcard. Later we will also want to talk about managed cards, but let's forget about that for the time being.

 

Active STS project structure in Zermatt

What is an active STS in Zermatt? You guessed right: it is simply a WCF service. Zermatt leverages the factory based extensibility model of WCF for hiding much of the protocol heavy lifting, and expects you to weave your custom logic by overriding specific methods in some derivation of a couple of base classes. Sounds complicated? On the contrary. Consider the diagram below:

[image: the four files that make up a typical Zermatt active STS project]

Those four files are the basis for the typical Zermatt active STS project. Let's start from the right and move to the left:

Service.svc and web.config

We mentioned that an STS is a common WCF service: hence, we can describe it via a .svc file and an associated web.config. The content of the .svc file will look like the following:

<%@ServiceHost  
    language="c#"
    Factory="Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceHostFactory" 
    Service="MySTSConfig" %>

The Factory attribute points to WSTrustServiceHostFactory, the factory offered by Zermatt for hosting STSes in IIS. The factory generates a service called Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract, whose parameters can be influenced by creating the proper service entry in the web.config: below is a snippet of the first few lines of such config settings.

<system.serviceModel>
    <services>
      <service name="Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract"
         behaviorConfiguration="MySTSBehavior">
        <endpoint address=""
                  binding="customBinding"
                  bindingConfiguration="X509Binding"
                  contract="Microsoft.IdentityModel.Protocols.WSTrust.IWSTrustFeb2005SyncContract"/>
        <endpoint address="https://localhost/STS/Service.svc/Mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
      </service>
.....

The Service attribute of the .svc file points to a custom class, here called MySTSConfig.

MySTSConfig.cs

The STS programming model requires you to derive a class from SecurityTokenServiceConfiguration: here we call the resulting subclass MySTSConfig. The purpose of this class is to gather in a single entity the information that must be fed to the factory for instantiating the STS service, namely:

  • intended address
  • signing certificate
  • custom issuance logic

The last element is provided by referencing in MySTSConfig another custom class, here called MySTS.

MySTS.cs

The custom issuance logic is injected into the system by deriving from (surprise surprise) the class SecurityTokenService, and by overriding two methods: GetScope and GetOutputSubjects. We will see those in detail later; for now, suffice it to say that the former performs some validation and prepares some crypto necessary for the issuance, while the latter contains the actual claim value retrieval logic.

 

As you can see, this is not complicated at all. Let's go ahead and apply the above to develop our smartcard-based STS.

Writing the STS

Let's start by firing up Visual Studio and creating a new web site; I suggest putting it in the local IIS and checking the "Use Secure Sockets Layer" checkbox. In this example I called the IIS app "STS".

[image: the New Web Site dialog in Visual Studio]

VS will create a Default.aspx page for you: ignore it for now, but don't delete it; it will come in handy in the next post.

In the former section we saw which files are needed for defining an STS. Let's create them in reverse order, starting from MySTS.cs.

Add references to System.ServiceModel and Microsoft.IdentityModel; add the App_Code ASP.NET folder to the web site, and add to it a copy of CertificateUtil.cs (you can find it in the <Program Files>\Microsoft Code Name Zermatt\Samples\Basic\ActiveSTSWithManagedCard sample); then, create the file MySTS.cs.

MySTS.cs

The using blocks, class declaration & constructor look like the following:

using System.Collections.Generic;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;

using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Configuration;
using Microsoft.IdentityModel.Services;
using Microsoft.IdentityModel.Services.SecurityTokenService;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using System.Runtime.Remoting.Metadata.W3cXsd2001;
using System.Linq;


public class MySTS : SecurityTokenService
{
    public MySTS(SecurityTokenServiceConfiguration config)
        : base(config)
    {
    }

Nothing transcendental here. We use a bunch of namespaces, we derive from SecurityTokenService, and we just call the base constructor. Then we finally start doing something interesting: remember the overrides I mentioned before? The first one we will implement is GetScope, which takes care of verifying that we are happy with the RP for which the subject is requesting a token. Let's take a look at the code:

    /// <summary>
    /// verifies that the intended RP is one for which we are willing to issue a token;
    /// retrieves the corresponding certificate so that the resulting token can be encrypted accordingly
    /// </summary>
    protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
    {
        //base
        Scope scope = base.GetScope(principal, request);
        //checks if we are happy with the intended RP
        ValidateAppliesTo(request.AppliesTo);
        //retrieves the corresponding cert and embeds it in the scope
        string RPhostname = request.AppliesTo.Uri.Host;
        scope.EncryptingCredentials = new X509EncryptingCredentials(CertificateUtil.GetCertificate(StoreName.My, StoreLocation.LocalMachine, "CN=" + RPhostname));

        return scope;
    }
 

The input parameters are the principal of the requestor (the subject) and the request itself, already nicely deserialized in a handy class (the heroic pioneers of the do-it-yourself STS era can tell you how much code that takes if you have to do it by hand).

Here we mainly want to do 2 things:

  1. we want to make sure that the requested token is scoped for an RP that we like. That means that we need to verify that we have a non-empty AppliesTo in the request, and that its content matches one entry in our list of approved RPs. The call to ValidateAppliesTo takes care of it
  2. We want to make sure that the token we will issue will be encrypted for the intended RP: that means retrieving the certificate associated with the RP and assigning it to the EncryptingCredentials element of our current scope. The code that performs this is pretty self-explanatory, if a bit crude: we assume that our LocalMachine cert store contains certificates following the convention "CN=RPhostname"

An example of ValidateAppliesTo can be easily obtained from the above-mentioned ActiveSTSWithManagedCard sample.
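Just to fix ideas, below is a minimal sketch of what such a ValidateAppliesTo could look like. The approved RP list is made up, and I'm assuming the AppliesTo surfaces as an EndpointAddress with a Uri property (as the GetScope code above suggests); check the sample for the real thing.

```csharp
/// <summary>
/// Hypothetical sketch of ValidateAppliesTo: rejects requests with no
/// AppliesTo, or with an AppliesTo outside our approved RP list.
/// The approvedRPs array is an illustrative assumption, not the sample's code.
/// </summary>
static readonly string[] approvedRPs = { "https://localhost/RP/Service.svc" };

void ValidateAppliesTo(EndpointAddress appliesTo)
{
    // a missing or empty AppliesTo means we cannot scope (or encrypt) the token
    if (appliesTo == null || appliesTo.Uri == null)
        throw new InvalidRequestException("The request must contain an AppliesTo element.");

    // issue tokens only for RPs we explicitly approved
    foreach (string rp in approvedRPs)
        if (string.Equals(appliesTo.Uri.AbsoluteUri, rp, StringComparison.OrdinalIgnoreCase))
            return;

    throw new InvalidRequestException(
        "This STS does not issue tokens scoped to " + appliesTo.Uri.AbsoluteUri);
}
```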

Before we move on, let me make a couple of comments:

  • The fact that we mandate the presence of AppliesTo is a good practice, but it is by no means a universal rule. A non-auditing STS would not need (or even want) an AppliesTo.
  • The cert that goes in EncryptingCredentials here comes from the local cert store; however it is also possible to take the cert that is sent in the request, and use that instead of requiring the corresponding certificate to be present in the local store from a prior time. The tradeoff is, naturally, agility vs. security.
  • Some would advocate that GetScope should also contain the subject authentication logic (that is, if that logic didn't already run before reaching the MySTS class). That makes a lot of sense, and would also justify the presence of the principal parameter: GetScope is the first override invoked in the request processing pipeline, hence if the user didn't present valid credentials, the sooner we fail the fewer resources we waste. On the other hand, GetScope seems to be about "validating" the RP, hence putting user auth logic there may be perceived as a side effect. For the clarity vs. completeness tenet introduced above, here I keep GetScope focused on RP-related tasks and perform subject authentication elsewhere; however I invite you to ponder the considerations in this bullet point, and make decisions accordingly in your own projects.

The next method is GetOutputSubjects: as mentioned, its purpose is to actually fetch the requested claim values. Here's the code:

 

/// <summary>
/// authenticates the user requesting the token
/// retrieves the values of the requested claims
/// </summary>
public override ClaimsIdentityCollection GetOutputSubjects(Scope scope, IClaimsPrincipal principal, 
    RequestSecurityToken request)
{        
    //verifies that the incoming RST has been secured by one smartcard among the ones we recognize
    AuthenticateCredentials((IClaimsIdentity)principal.Identity);

    //goes through the list of requested claims and retrieves the corresponding values from the stores
    List<Claim> claims = new List<Claim>();
    foreach (RequestClaim requestClaim in request.Claims)
    {
        claims.Add(new Claim(requestClaim.ClaimType, RetrieveClaimValue(principal, requestClaim.ClaimType)));            
    }
    ClaimsIdentityCollection collection = new ClaimsIdentityCollection();
    collection.Add(new ClaimsIdentity(claims));

    return collection;
}

The parameters are the same as GetScope's, plus the Scope itself.

The first thing we do is authenticate the incoming subject, by examining the incoming principal (for details on the principal-claims structures, see Keith's whitepaper). Here's the code of AuthenticateCredentials:

  /// <summary>
  ///  verifies that the incoming RST was secured by the certificate associated to the managed card we generated;
  ///  normally this check would be performed against a database, here it's all hardcoded
  /// </summary>
  void AuthenticateCredentials(IClaimsIdentity cert)
  {
      string thumbstring = (from c in cert.Claims
                            where c.ClaimType == WSIdentityConstants.ClaimTypes.Thumbprint
                            select c.Value).Single();
      string savedthumb = System.Convert.ToBase64String(SoapHexBinary.Parse("xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx").Value); 
      if (thumbstring != savedthumb)
      {
          throw new FailedAuthenticationException("The certificate credentials you presented are not recognized.");
      }
  }

This is the first time that we have any technology-specific code. Here I assume that the incoming IClaimsIdentity has been created from an X509 certificate; hence I use a LINQ expression for retrieving the base64 encoding of the incoming certificate thumbprint (do you have feedback about the logic we use for accessing claims?). The savedthumb assignment line is a terrible hack and a sublime trick at the same time. Here I am basically establishing which certificate I am willing to accept: the way in which I obtain it is 1) going to the certificate MMC, selecting the certificate of the smartcard I am willing to accept and copying the thumbprint string from the cert properties dialog (the "xx xx .." censored string above) and 2) using this arcane SoapHexBinary.Parse method (from the esoteric System.Runtime.Remoting.Metadata.W3cXsd2001 namespace) which chews a hex string and gives back a byte[].

If the two thumbprints match, it means that the request was secured by using the smartcard I chose, hence the user is authenticated. Obviously a real system would have a database of accepted smartcards/soft certificates.
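If you want to convince yourself of what that hex-to-base64 trick actually does, here is a tiny standalone sketch; the two-byte "thumbprint" is obviously made up (a real one is 20 bytes), and I strip the spaces the MMC dialog inserts before parsing.

```csharp
using System;
using System.Runtime.Remoting.Metadata.W3cXsd2001;

class ThumbprintDemo
{
    static void Main()
    {
        // thumbprint as copied from the MMC certificate properties dialog
        // (illustrative two-byte value; a real thumbprint is 20 bytes)
        string mmcThumbprint = "0a 1b";

        // SoapHexBinary.Parse chews a hex string and gives back a byte[];
        // stripping the spaces first keeps the parser happy
        byte[] raw = SoapHexBinary.Parse(mmcThumbprint.Replace(" ", "")).Value;

        // the Thumbprint claim carries the same bytes, base64 encoded
        Console.WriteLine(Convert.ToBase64String(raw));  // prints "Chs="
    }
}
```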

Back to GetOutputSubjects: the central foreach goes through the claims requested in the RST and retrieves the corresponding values. Note that here I don't handle optional claims for the sake of clarity; you can refer to the ActiveSTSWithManagedCard sample if you are curious about it. The retrieval legwork is performed by the helper function RetrieveClaimValue: the code is below.

/// <summary>
/// retrieves the value of a claim associated to a certain principal
/// </summary>
private string RetrieveClaimValue(IClaimsPrincipal principal, string claimType)
{
    //here I would normally use something of the principal for looking up attributes values in one (or more) stores
    //here i ignore it and return hardcoded values
    switch (claimType)
    {
        case WSIdentityConstants.ClaimTypes.DateOfBirth: return "01/30/70";
        case WSIdentityConstants.ClaimTypes.Name: return "Gion";
        case "http://www.maseghepensu.it/hairlenght": return "50";
        default: throw new InvalidRequestException(claimType + " is not a claim issued by this STS.");
    }
}

A real attribute retrieval function would use the principal for querying an attribute store (or more than one) and return the value of the requested claim for that specific user. Here we don't do that; in fact we just have three hardcoded values that correspond to three claim types. Those claim types are the ones that our STS will support: the first two are the common DateOfBirth and Name, the third one is the classic custom claim that I use in every STS sample: HairLenght. Any request asking for a claim that does not belong to this set of three gets an InvalidRequestException.
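Purely to fix ideas, a store-backed version could look like the sketch below. The in-memory dictionary stands in for a real attribute store (a database, an LDAP directory...), and the use of principal.Identity.Name as the lookup key is my own assumption for illustration.

```csharp
/// <summary>
/// Hypothetical store-backed RetrieveClaimValue: the principal identifies the
/// row, the claim type identifies the column. The dictionary is a stand-in
/// for a real attribute store and is not part of the original sample.
/// </summary>
private static readonly Dictionary<string, Dictionary<string, string>> attributeStore =
    new Dictionary<string, Dictionary<string, string>>
    {
        { "Gion", new Dictionary<string, string>
            {
                { WSIdentityConstants.ClaimTypes.DateOfBirth, "01/30/70" },
                { WSIdentityConstants.ClaimTypes.Name, "Gion" },
                { "http://www.maseghepensu.it/hairlenght", "50" }
            }
        }
    };

private string RetrieveClaimValue(IClaimsPrincipal principal, string claimType)
{
    // in a real system the lookup key would come from the authenticated subject
    string subject = principal.Identity.Name;

    Dictionary<string, string> userAttributes;
    string value;
    if (!attributeStore.TryGetValue(subject, out userAttributes) ||
        !userAttributes.TryGetValue(claimType, out value))
        throw new InvalidRequestException(claimType + " is not a claim issued by this STS.");

    return value;
}
```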

That's all: all our custom logic is there, the rest is just infrastructure.

MySTSConfig.cs

Create the file MySTSConfig.cs, and paste in it the following:

using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Configuration;
using Microsoft.IdentityModel.Services.SecurityTokenService;

/// <summary>
/// Configuration for our STS: address, signing credentials and custom logic type
/// </summary>
public class MySTSConfig : SecurityTokenServiceConfiguration
{
    public const string issuerAddress = "http://localhost/STS/Service.svc";

    public MySTSConfig()
        : base(issuerAddress, new X509SigningCredentials(CertificateUtil.GetCertificate(StoreName.My, StoreLocation.LocalMachine, "CN=localhost")))
    {
        SecurityTokenService = typeof(MySTS);
    }
}

This is exactly as we described in the STS project structure section: our custom config class assigns an address to the issuer, establishes which certificate will be used for signing the issued tokens, and embeds our custom logic by assigning the MySTS type to the SecurityTokenService property, so that the factory knows what to instantiate.

Service.svc

Create the Service.svc file and paste in it the lines we already saw:

<%@ServiceHost  
    language="c#"
    Factory="Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceHostFactory" 
    Service="MySTSConfig" %>

web.config

The last piece we need to complete our STS is the service configuration. The web site already has a web.config file: we just need to add a serviceModel section to it, so that we can associate with the Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract service a binding suitable for invoking the service using a smartcard (or soft cert) as credentials. Below is the serviceModel section:

<system.serviceModel>
    <services>
      <service name="Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract"
         behaviorConfiguration="MySTSBehavior">
        <endpoint address=""
                  binding="customBinding"
                  bindingConfiguration="X509Binding"
                  contract="Microsoft.IdentityModel.Protocols.WSTrust.IWSTrustFeb2005SyncContract"/>
        <endpoint address="https://localhost/STS/Service.svc/Mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
      </service>
    </services>
    <bindings>
      <customBinding>
        <binding name="X509Binding">
          <security authenticationMode="MutualCertificate" />
          <httpTransport />
        </binding>
      </customBinding>
    </bindings>
    <behaviors>
      <serviceBehaviors>
        <behavior name="MySTSBehavior">
          <serviceMetadata httpGetEnabled="true"/>
          <serviceDebug includeExceptionDetailInFaults="true"/>
          <serviceCredentials>
            <serviceCertificate findValue="localhost"
                     storeLocation="LocalMachine"
                     storeName="My"
                     x509FindType="FindBySubjectName" />

            <issuedTokenAuthentication allowUntrustedRsaIssuers="true" />
          </serviceCredentials>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>

Look familiar? It should. It is pretty much the same config we were using with the simpleSTS.

Next steps

We are done! The steps above were enough for setting up a fully functional, albeit simple, active STS which authenticates requests for security tokens secured via smartcards/soft certificates. Now we need to test it: in the next couple of posts we will

  1. Add UI and logic for issuing cards associated to this STS and backed by smartcards
  2. Create an RP that will require tokens from our STS

As you can imagine, both tasks are going to be much easier than the already semi-trivial tutorial described in this post. At the end of the series I'll probably post the complete solution as an attachment.

Well, that was FUN! Would you have ever imagined that one day we would say that about writing an STS? ;-) hehehe
