Does the DisplayToken Violate the First Law of Identity?

29 October 2007 3 Comments

I have been following along with the Identity story for some time now.
CardSpace, as an Identity Selector, supports two basic models:

  1. Self-issued cards – in which you essentially act as your own security token service, and
  2. Managed cards – in which a trusted third party acts as an Identity Provider, making assertions about your identity.

I have seen many examples leveraging self-issued cards but relatively few incorporating managed cards. There is a sample STS available on the http://cardspace.netfx3.com website, but due to its complexity I’ve found it challenging to set up and use. The message boards are full of issues and questions involving managed cards.

To mitigate this I’ve put together a managed STS and will be hosting it here on my own website in the coming days. It’ll allow you to set up a relying party, define claims and test values for them, and even download a managed card.

I’ll also provide a generic test harness that’ll simulate your relying party and allow you to test the end-to-end interactions. Lastly, it’ll show you the RST and RSTR structures passed around as XML along the way.

I hope this’ll be a useful service and learning tool, and there’ll be more to come on that in a few days. (As a side note, I’m surprised Serrack or Microsoft hasn’t set this up already.)

But of course there is a selfish agenda to all my work; the main reason I did this is that I wanted to understand the inner workings of a security token service. This (painful) process has shown me…

  1. how an STS processes the Request for Security Token (RST),
  2. how it generates the Request for Security Token Response (RSTR),
  3. how the CardSpace Identity Selector processes that, and lastly
  4. how to consume the token on the Relying Party (RP) side.

When an RP indicates it needs a claim, let’s say
http://schemas.francisshanahan.com/sts/superclaim

CardSpace includes that as a required (or optional) claim in an RST. The Security Token Service reads this, (presumably) locates the value for this claim, and then includes that value in an RSTR.
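To picture what that request looks like on the wire, here’s a rough sketch of the claims portion of an RST. This is illustrative only – element names are simplified from the WS-Trust and InfoCard schemas, and namespace declarations are omitted:

```xml
<!-- Illustrative sketch only; simplified from the WS-Trust /
     InfoCard schemas, namespaces omitted. -->
<wst:RequestSecurityToken>
  <wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>
  <wst:Claims>
    <!-- the claim the RP asked for, marked required or optional -->
    <ic:ClaimType
        Uri="http://schemas.francisshanahan.com/sts/superclaim"
        Optional="false" />
  </wst:Claims>
</wst:RequestSecurityToken>
```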

One thing I was surprised to learn is that the CardSpace Identity Selector doesn’t actually display this value! The Identity Selector instead displays values from what’s called a "Display Token" – the values in the Display Token are what get shown to the user. Here’s where things begin to break down (for me)…
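For reference, the Display Token rides back in the RSTR alongside the issued security token. Again, this is an illustrative sketch only – simplified from the InfoCard schema, namespaces omitted:

```xml
<!-- Illustrative sketch only; simplified from the InfoCard schema. -->
<wst:RequestSecurityTokenResponse>
  <wst:RequestedSecurityToken>
    <!-- the encrypted token with the real claim values goes here -->
  </wst:RequestedSecurityToken>
  <ic:RequestedDisplayToken>
    <ic:DisplayToken xml:lang="en-us">
      <ic:DisplayClaim Uri="http://schemas.francisshanahan.com/sts/superclaim">
        <ic:DisplayTag>Super claim</ic:DisplayTag>
        <!-- this is what the user sees, NOT necessarily the real value -->
        <ic:DisplayValue>foo@bar.com</ic:DisplayValue>
      </ic:DisplayClaim>
    </ic:DisplayToken>
  </ic:RequestedDisplayToken>
</wst:RequestSecurityTokenResponse>
```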

So tying back to the Laws of Identity: The user should have knowledge and control over what gets sent to any Relying Party.

The DisplayToken seems to violate this law as follows…

  1. There is nothing that prevents the STS from including claims in the RSTR that were not requested in the RST. Thus an STS could
    • ignore the "isOptional" attribute of each claim and include that information regardless, or
    • worse still, include claim values that WERE NEVER requested. I’ve tried this with my own STS and CardSpace happily forwards these on to the RP for decryption.
  2. There is nothing that prevents the STS from including values in the Display Token that are DIFFERENT from the values in the actual security token. For example, the user may be shown that they are passing an email address of "foo@bar.com" when in reality the value being sent to the RP is "mypersonal@emailAddress.com". At best, the user wouldn’t find out until the RP processed the RSTR.
  3. Whilst the security token is encrypted and bundled up nicely to protect its information, the DisplayToken is sent in the clear (to allow the CardSpace selector to display it). Now what’s the point of protecting your claims in a security token if you go ahead and put those same claims in a Display Token? How can we have user control and consent (Law #1) without violating the security of the data itself?
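To make scenario 2 concrete, here’s a toy sketch in Python. This is not the real CardSpace API – the function names are hypothetical and the claim URI and values are just the ones from the example above – but it shows how a display token can silently disagree with the security token, and how a simple check would expose that:

```python
# Toy model (hypothetical, NOT the real CardSpace/WS-Trust API) of an
# STS whose display token disagrees with the security token.

def issue_tokens():
    """A misbehaving STS returns a security token and a display token
    whose claim values disagree (scenario 2 above)."""
    security_claims = {
        "http://schemas.francisshanahan.com/sts/superclaim":
            "mypersonal@emailAddress.com",   # what the RP actually receives
    }
    display_claims = {
        "http://schemas.francisshanahan.com/sts/superclaim":
            "foo@bar.com",                   # what the user is shown
    }
    return security_claims, display_claims

def mismatched_claims(security_claims, display_claims):
    """Return the claim URIs whose displayed value differs from the
    value actually sent to the relying party."""
    return [uri for uri, shown in display_claims.items()
            if security_claims.get(uri) != shown]

security, display = issue_tokens()
print(mismatched_claims(security, display))
```

The rub, of course, is that only the STS ever sees both sides at once: the Identity Selector can’t run this check because it can’t decrypt the security token, which is exactly the gap described above.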

So it seems once again I have confounded myself with Identity by delving into the details. Perhaps it would be better to just go along with the whiteboard conversations and ws-trust what I’m being ws-told rather than ws-implement it?

It would appear, based on my rudimentary investigations, that there’s potential for the first Law to be broken either through
a) unwittingly implementing an STS that pumps claims into an RSTR without looking at the RST, or
b) a malicious STS misrepresenting claims to an end user and secretly passing different information to an RP.

This for me was the kind of "Aha!" moment that would typically not be uncovered until knee-deep in an implementation, and could potentially derail a project. I’m not saying this is the fault of CardSpace, or even the Identity Metasystem. Rather, I think this is a problem that’s inherent in Law #1. As with anything, I tend to find there is a lot of buzz around the high-level solution, but sometimes when you dive a level deeper you come to find out there may actually be a problem [LINK].

That’s why it’s always better to be an "I-know-it-works-because-I-tried-it,-look-my-hands-are-dirty" architect than an "I-don’t-know-what-the-problem-could-be,-it-compiled-on-the-whiteboard" architect.

I will talk to Kim[LINK] about this…

3 Comments »

  • Anonymous said:

    You’re bang on for the most part.

    First, the only error I saw in your text: The DisplayToken isn’t transmitted in the clear. It is secured either by virtue of a secure transport layer (STS on HTTPS, e.g.) or message security (encrypted with a temporary key negotiated with the STS’s mex endpoint).

    Second, you’re right: the STS can include things in the DisplayToken that are not in the actual token, and vice versa. But, as you say, this happens when:

    “a) Unwittingly implementing an STS that pumps in claims into an RSTR without looking at the RST.
    b) A malicious STS misrepresenting claims to an end user and secretly passing different information to an RP.”

    I would argue that (a) is a code defect and (b) is a sign you shouldn’t be trusting that STS to handle your private data — you can think of this like someone phoning up your employer to ask for your email address, and your employer happily giving them not only that but your SSN as well.

    In my eyes, it’s the fact that there is this structured exchange of claims that satisfies the first law. The RP has had to come clean about what it wants from you — if the STS is buggy/malicious, that is a separate concern.

  • Matt Ellis said:

    I don’t think the display token violates the 1st law. The STS certainly can (by design or error) send more claims than required (breaching law #2), or send different data in the display token than in the security token, but in these cases you’ve got a bigger problem – the STS is broken, and it’s already got your credentials and your personal data. But the key thing here is that the STS is violating the laws, not the display token. I’d even go so far as to say that the laws don’t mandate how to work with a buggy or even malicious implementation. I reckon they’re best viewed as laws that an exemplar solution should satisfy – i.e. not a way to build an impossible-to-get-wrong solution, but a checklist that should be ticked off when a good solution is built.

    The previous commenter has already mentioned that the display token isn’t sent in the clear (although I didn’t know about the message level security – thanks!), but you could also apply law #2 to it (minimal disclosure). The way I understand it is that the display token can be used to provide a textual representation of the value of a claim. It needn’t be the exact value of a claim. A good example would be credit card number. The encrypted security token could contain the full number, while the display token might want to display “**** **** **** 1234”. As long as there was enough information for the user to be in control, law #1 isn’t broken by sending a different data representation.

    Cheers
    Matt
