Getting Started With Information Rights Management (IRM) Integration With SharePoint

What Is the Purpose of Integrating Information Rights Management With SharePoint?
Distributing internal information through unwanted channels is one of the largest problems within a SharePoint environment. Because SharePoint is meant to give users broad facilities for sharing and working with arbitrary business data, users can sometimes share information that should otherwise not be shared.

A major method of procuring added assurance, one that helps eliminate intentional and/or accidental redistribution of sensitive or classified business information, is to persistently protect the business data under multiple circumstances and across multiple environments.

A common incident is someone sending a piece of confidential information to the wrong person, for example by picking the wrong entry out of an address book. These situations are commonplace in an environment that builds out virtual teams focused on collaboration, where sensitive business information stored in mediums such as Microsoft Office documents is easily shared, accidentally or intentionally.

These types of information leaks can be costly because of:

  • loss of revenue
  • loss of competitive advantage
  • loss of customer confidence

MOSS is tailored to control access to documents, and to govern usage even after a document has been downloaded. For an organization that has to adhere to certain legal / business requirements, this can be an invaluable piece of functionality.

What is Information Rights Management and What Can It Protect?

Information Rights Management (IRM) is a component of the Microsoft Office SharePoint Server and Microsoft Office product suite. Although its base technology derives from Windows Rights Management, it has heavy ties into the Microsoft Office product suite and direct hooks into the Microsoft Office SharePoint Server system.
IRM allows document authors to specify who can read their document, what they are able to do with the document, and when they are able to do it. IRM can be applied to Outlook e-mails, Word documents, Excel spreadsheets, and PowerPoint presentations (along with other formats that implement a customized “protector”). While the Microsoft Office SharePoint Server environment is meant to promote collaboration on documents between virtual teams, IRM provides offline methods of working with the arbitrary Office documents.
Some of the key features that one should look to implement in an offline protection implementation are:

  • Implement A Protection Scheme That Travels With An Arbitrary File
    • Protection that exists at the file level
    • Protection that will bind and travel with the file, wherever the file goes
  • Controls Access To The Document, and How the Document Can be Used
    • Leverages encryption methods that control usage
    • Implements usage policies bound to the document that translate to the native client application
    • Expires relevant content when it is deemed no longer necessary
  • The Protection System Should Be Easy For End Users
    • Easy for clients to implement protection for business data
    • Tightly integrated with Microsoft Office clients that in turn are relevant to SharePoint
  • Policies That Are Managed By The Enterprise
    • Permission policies that are organizationally consistent
    • The organization owns overall access

In a typical SharePoint environment, documents are controlled at a very granular level while they are stored at the web level; however, once a client downloads the arbitrary document, the overall permission levels are lost. MOSS and IRM work together to translate roles on the SharePoint server into permission levels as they are specified within IRM.
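As a rough illustration of this translation, the sketch below maps a few hypothetical SharePoint permission levels to the kind of IRM rights a downloaded copy might retain. The level names and rights strings here are assumptions for illustration, not the actual MOSS mapping.

[csharp]
using System;
using System.Collections.Generic;

// Hypothetical sketch: SharePoint role -> IRM rights the downloaded copy retains
public static class IrmRoleTranslation
{
    private static readonly Dictionary<string, string> RoleToRights =
        new Dictionary<string, string>
        {
            { "Full Control", "View, Edit, Print, Copy" },
            { "Contribute",   "View, Edit" },
            { "Read",         "View" } // no print, copy, or forward
        };

    public static string TranslateRole(string sharePointRole)
    {
        string rights;
        // Unknown roles fall back to no rights at all
        return RoleToRights.TryGetValue(sharePointRole, out rights)
            ? rights
            : "No Access";
    }

    public static void Main()
    {
        Console.WriteLine(TranslateRole("Read"));    // View
        Console.WriteLine(TranslateRole("Visitor")); // No Access
    }
}
[/csharp]

The point of the sketch is simply that the server-side role, not the client, decides what the offline document permits.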

In a SharePoint environment where no IRM functionality is implemented, documents circulated electronically are uncontrolled and can feasibly be printed, copied, and forwarded to anyone. Transmission of e-mails and documents over secure networks may protect the information in transit, but offers no control over what recipients do with the information. Password protection for documents can easily be circumvented if the password is also provided.

IRM can be used to prevent the printing or forwarding of e-mails and to make them inaccessible to the recipient after a specified expiry date. IRM can make documents unreadable by anyone other than the specified recipients.

Deploying Information Rights Management and IRM Requirements
Deployment of Information Rights Management across an organization is typically performed by the server/SharePoint administrator. In addition to installing the Microsoft Office 2003/2007 client software (since these are the default protectors provided by IRM), some other services and software need to be installed and configured to support the IRM infrastructure:

  • Microsoft Windows Server 2003 Enterprise Edition (prerequisites for SharePoint)
  • Microsoft Windows Rights Management Server for Windows Server 2003
  • Microsoft Active Directory Services
  • Microsoft Internet Information Services
  • Microsoft SQL Server 2000/2005
  • Microsoft Windows Rights Management Client software to be installed on all WFEs

The relevant servers and clients that will access the IRM-enabled document repositories need the Rights Management Update for Windows installed. For the encryption service to function correctly, public and private keys for creators and readers are created when users enroll to use the Rights Management Service (RMS). Microsoft Office (or the MOSS interface) is required to create rights-protected documents, but they can be viewed with other editions of Microsoft Office, or with the IRM add-on for Microsoft Internet Explorer.

By default, when Microsoft Office is installed, IRM is not enabled. Without the additional software listed above, end users will not be able to create rights-protected material even though it is enabled on the MOSS server.

IRM Protection Policies
The policies that RMS will leverage are formulated, enforced, and populated by SharePoint or network administrators. After a policy has been established, the client still has to apply the appropriate policy to the document they are sending, by pressing a button and specifying the rights that are available for that document. MOSS will translate roles from the site if there are no rights bound directly to the arbitrary piece of documentation.

What are some of the benefits of IRM?
There are several benefits of using IRM for various environments.

  • Documents created with MS Office with IRM are encrypted using Windows RMS (Rights Management Services). Restrictions can then be set to limit recipients’ rights to view, copy, print, and distribute MS Office 2003 documents, including Outlook e-mail messages, and to set a time limit on the readability of the document.
  • Use of this technology restricts access to records; applied carelessly by internal or external organizations interacting with a third party, it may prevent those organizations from creating, maintaining, and disposing of electronic records in a legal and proper fashion. Furthermore, use of this technology may prevent agencies from producing such records to competent external authorities, such as in response to legal requests. This is something to weigh when planning a deployment.
  • An organization can create and enforce a policy to deal with the receipt of MS Office IRM-restricted files sent by internal and external organizational users, in order to ensure such files comply with the accessibility requirements of an arbitrary organization.

Getting Started With IRM
The reason most people have trouble with IRM is that the requirements for MOSS can be rather confusing; however, if you inventory all of the required portions and ensure that they are properly implemented within your environment, it is a relatively painless procedure. The important thing to take from the first steps of implementing IRM is to ensure that you meet all of the requirements. The actual process of getting IRM going is relatively painless, and you can be up and running in about 30 minutes (depending on whether you need to write / implement any custom protectors [the methods by which IRM actually implements its protection policies]). Obviously, we are not talking about client-based rights management; the IRM that we are going to be enabling exists on the server, although it provides hooks into the client portions of IRM.

The first requirement for IRM is that you must have a server with RMS enabled; by this I mean a Windows Server 2003 machine with SP1 or later running Windows Rights Management Services, since it provides the backbone framework for the IRM services. Next, since IRM has to be enabled on all of the WFEs throughout the web farm, the client can be downloaded from here:

http://www.microsoft.com/downloads/details.aspx?familyid=A154648C-881A-41DA-8455-042D7033372B&displaylang=en

This is how your MOSS services will hook into the main IRM server; it provides the functionality needed for the document libraries you are using to become, in essence, IRM enabled. As well, for your users to work with the IRM features that are available in an offline format (those features that apply after a document is downloaded from a document library and placed in its native encrypted format), they will need to have the client installed.

Once the prerequisites are defined, and appropriately enabled on all of your servers, you can begin the actual implementation of the service.


Biometric Encryption

Well, so we have been talking about biometrics for a while, and eventually the concept of biometric encryption (fuzzy commitment schemes, secure sketching, biometric key binding, bioHashing, biometric signatures, etc. [it goes by many names]) was going to come up. To be honest, the subject is actually relatively fascinating. I have just started researching it, so I figured I would share my findings. Maybe I will make up my own term like Super Adam Bio Hashing. That sounds sexy (said for the MVP group, who knows all about my desire to make every SharePoint feature as sexy as possible). Anyways, this is just going to be a general overview to introduce the concept and give readers a better idea about the aggregate field.
Like some other things that come up on this blog, it really isn't going to have a large relevance to SharePoint discussions, but it is interesting either way. I will look into maybe expanding this into the CryptoCollaboration concept sometime (which I am going to update in the next post with the programmatic sequence diagrams and a general architectural overview), but honestly it is currently beyond my skill set until I find some good research that describes the mathematics behind BE. I haven't been able to pinpoint this type of material yet.

As stated, we already have a pretty good idea about the overall concept of biometrics: measuring quantifiable attributes of a principal, either physical or behavioral parameters. We know that biometrics generally runs on fundamental comparison operations, where a template (biometric blueprint) sample is taken from the principal and this blueprint is then employed for the actual identification and verification of the principal. This template is often referred to as the biometric template (which is why you see the fingerprintTemplateParams property in the BioPoint API documentation) or a biometric principal blueprint. People often refer to the person that the sample (for template generation) is taken from as the enrollee of the template (since a lot of people consider a biometric system to be a publish / subscribe architecture), but I prefer to refer to the template person as the principal, so let's refer to the user as the principal for the remainder of this post.

So, let’s talk a little bit about encryption, and we will then get into Biometric Encryption. We know a few things about encryption. We know that one of the more common cryptographic schemes that we see in general business applications consist of using a public and private key.
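To ground the public / private key idea before we move on, here is a minimal self-contained sketch using the .NET RSACryptoServiceProvider. It is purely illustrative, nothing biometric or SharePoint-specific: data encrypted with the public key can only be recovered with the private key.

[csharp]
using System;
using System.Security.Cryptography;
using System.Text;

public static class RsaSketch
{
    public static string RoundTrip(string message)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            // Encrypting is a public-key operation; anyone with the public key can do it.
            byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes(message), false);

            // Decrypting requires the private key, which only this key pair's owner holds.
            byte[] plain = rsa.Decrypt(cipher, false);
            return Encoding.UTF8.GetString(plain);
        }
    }

    public static void Main()
    {
        Console.WriteLine(RoundTrip("confidential business data")); // confidential business data
    }
}
[/csharp]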

The concept of biometric encryption was originally coined by Dr. George Tomko, who really started to put the concept to work when leveraging fingerprinting biometrics. Although there have been several grants in the past couple of years that have allowed this area of research to expand, and some brief searching turns up several smaller research firms that are integrating biometric encryption into their research schemes, it is IMHO still somewhat untapped. I think that it will be an explosive field once the true business value of it is tapped, though.

However, I digress….

So what the hell could we use biometrics for in terms of encryption? We know that encryption is dependent on various types of strings, so how could we use biometrics as a string representation parameter and subject? That is an excellent question. The answer, at first glance, is a plain no: it is not feasible to use biometrics in the straightforward manner in which orthodox encryption routines function. A biometric sample is not representative of a string; it is an object, and therefore won't work well within typical encryption schemes.

Where BE becomes possible is when you slightly extend this concept into a more abstract sense. Although a template is going to be composed of a large bit representation, and it would be possible to extract a string from this representation, that is, IMHO, a poor approach. Furthermore, acquiring consistently exact biometric samples is never really possible, because managing the distortion rates between the samples would be improbable regardless of whether noise compensators are factored into the software. Cryptography is a rather exact science, and with that in mind, noise disturbances within the empirical data would not be acceptable and the related algorithms required for descrambling would fail.
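A quick way to see why noisy input and exact cryptography don't mix: feed a standard hash two samples that differ by a single bit (think one pixel of sensor noise) and the outputs are completely unrelated. This is just a sketch of the avalanche effect, not a BE implementation.

[csharp]
using System;
using System.Security.Cryptography;

public static class AvalancheSketch
{
    // Returns a hex digest of the sample bytes
    public static string Digest(byte[] sample)
    {
        using (var sha = SHA256.Create())
        {
            return BitConverter.ToString(sha.ComputeHash(sample));
        }
    }

    public static void Main()
    {
        byte[] template = { 0x12, 0x34, 0x56, 0x78 };
        byte[] noisy    = { 0x12, 0x34, 0x56, 0x79 }; // one bit of "sensor noise"

        Console.WriteLine(Digest(template));
        Console.WriteLine(Digest(noisy)); // an entirely different digest
    }
}
[/csharp]

Any scheme that derived a key directly from raw samples would therefore fail on every slightly noisy read.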

It is also possible to use traditional cryptography against the stored biometric blueprints in whatever medium they are subject to, depending on organizational data storage standards. This would be a fairly customary situation, and not much different from traditional cryptographic scenarios. The problem with this is that there is still a point where someone, be it a server / database administrator, would have access to the source template. Because the templates would have a period in which they are exposed to some user, there is a possibility for a breach. This is a large concern with biometric data, because that user would have access to some of the most sensitive information available regarding a person, information that could possibly be used to fabricate future biometric templates.

So, we have explored two options that are not very desirable, now let’s actually get to the point where BE shows us an actual favorable circumstance. If you want to view some more of the circumstances that arise from the use of biometrics and cryptography, A Study on PKI and Biometrics by FIDIS is pretty interesting and a lot more in-depth.

Rather, it is possible to instead unite an encryption key with a biometric template. This would, in essence, result in both the key, as well as the biometric template, becoming inaccessible and sheltered.

A cryptographic key is generated from a set of user input, usually something like a passphrase that can be remembered. For example, I could use the string sharepointsecurity in order to generate a cryptographic key, and because it is the name of my site, it is easy for me to remember. That cryptographic key can then be used to turn the encrypted string back into plain text. As opposed to taking this approach, we can gather the unique biometric parameters of a principal empirically, identically to how it is done for normal template verification. Once the empirical data is harvested from the principal, it is united with the cryptographic key. In order to regenerate the cryptographic key, the correct sample must be provided to the system. As opposed to providing a passphrase which generates the cryptographic key, the key is generated when the principal first generates the template. It does not use the biometric template as a parameter for the key generation, and therefore the two units, although united, are not tangibly related. Because of this segregation between the biometric template and the key, the key can be built, rebuilt, and destroyed regardless of the biometric sample.

This is neat, and results in the key being protected by the biometric template. Sometimes people refer to this template as a private template or protected template. While the key is united with the biometric template, in its segregated state, as opposed to the key being used to encrypt a string representation, the biometric is used to encrypt the key. Quite an adjustment, and somewhat confusing at first, but really quite clever. This type of functionality has to be written by the user, as the algorithm doesn't adhere to normal cryptographic criteria, and so can branch in a variety of fashions.
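A toy version of this key binding, loosely in the spirit of a fuzzy commitment scheme but minus the error correction that real systems need, can be sketched as: XOR a randomly generated key with the template bits to produce helper data, store only the helper data, and recover the key by XORing again with a matching sample. All names and values here are illustrative.

[csharp]
using System;

public static class KeyBindingSketch
{
    // Bind: helper = key XOR template. Neither the key nor the template
    // is recoverable from the helper data alone.
    public static byte[] Bind(byte[] key, byte[] template)
    {
        var helper = new byte[key.Length];
        for (int i = 0; i < key.Length; i++)
            helper[i] = (byte)(key[i] ^ template[i]);
        return helper;
    }

    // Unbind: presenting the same template bits yields the original key;
    // any other sample yields garbage.
    public static byte[] Unbind(byte[] helper, byte[] sample)
    {
        var key = new byte[helper.Length];
        for (int i = 0; i < helper.Length; i++)
            key[i] = (byte)(helper[i] ^ sample[i]);
        return key;
    }

    public static void Main()
    {
        byte[] key      = { 0xDE, 0xAD, 0xBE, 0xEF };
        byte[] template = { 0x01, 0x02, 0x03, 0x04 };

        byte[] helper    = Bind(key, template);
        byte[] recovered = Unbind(helper, template);

        Console.WriteLine(BitConverter.ToString(recovered)); // DE-AD-BE-EF
    }
}
[/csharp]

Real schemes layer error-correcting codes on top of this idea so that a slightly noisy sample still recovers the key, which is exactly the distortion problem discussed below.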

Now, the concept of the two layers of keys is still consistent; there is a scrambling and descrambling process, otherwise the cryptographic process being presented wouldn't be exceptionally impressive, because it would only halfway work.

When the actual process occurs, the principal will approach a biometric device in order to provide a sample. This sample will be used to unlock the biometrically encrypted template, and the custom-written algorithm will be used to present the sample. This algorithm will be architected so the presentation is captured and used to decrypt the key. This is pretty neat, because as opposed to any keys being used to complete the cyclic encryption routines, the biometric template is used for the decryption; it's kind of backwards, however interesting.

There are some problems that have to be cleverly approached in regards to the biometric sample. When using biometric encryption, you are going to be querying against principal samples that tend to contain various amounts of distortion. This type of variation has to be compensated for with custom code, and can be a pain in the ass without some sort of SDK provided with the biometric device to help code compensation for the variations. In other posts I have talked about fuzzy logic, and the same thing is true here, because there is a rather grey area that must be compensated for by the template comparison algorithm.

Once the key is extracted using this process, the actual application process can occur concurrently, with the recovered key feeding the relevant cryptographic routines.

In any respect, this is just an overview of what BE is, and I am tired of writing about it. There are a lot of other sources that will discuss it more in-depth if you really want to learn more about it.


CardSpace and SharePoint – Part II

In the Rabbit Hole, Keep Your Head Above Water!

In a previous post discussing CardSpace and SharePoint, we covered some of the basics of Windows CardSpace and how it can possibly interact with SharePoint as a type of authentication mechanism (term used in a loose sense). In that post, I was attempting to introduce you, at a high level, to the operations of CardSpace, and how it plays an important role in building an interactive piece (always remember, CardSpace is a piece of this!) of an identity metasystem, easily one of the most exciting security concepts to consider in the realm of current collaboration security technology. We talked about some interesting security concepts, which hopefully whet your appetite a little more for the proposal of integrating CardSpace and SharePoint into a cohesive solution that exploits the benefits of each of these assets to their maximum potential, while integrating their unique attributes into a singular system.

In this post, I will delve a little more in depth into the concepts behind CardSpace, and we will begin to see some of the more granular innards of how CardSpace functions. As well, in this discussion you will begin to understand how the CardSpace controls for SharePoint work and how using CardSpace enabled features can play the role of an authentication mechanism, as the basics of how the identity selector is instantiated from SharePoint will become clearer. This will give you a clearer understanding of what is happening under the hood with this whole hybrid CardSpace / SharePoint environment stuff.

Before we get started: working with the associated OM assets (System.IdentityModel.Claims, etc.) required when building solutions that tap into CardSpace is no big deal and nothing to shy away from. For example, iterating through CardSpace claims is a relatively straightforward process, since each ClaimType object can be reached through a ClaimTypesCollection.

[csharp]
// quasi-pseudo code for ClaimType iteration
foreach (ClaimType claim in claimTypesCollection)
{
    // Do something with claim
}
[/csharp]

As we saw in the previous post, adding CardSpace support to SharePoint is not an exceptionally difficult task, as the steps for adding CardSpace support to normal ASP.NET 2.0 web applications (to which SharePoint architecturally belongs) are not technically convoluted. Initially, it may appear that CardSpace solely functions by adding an HTML OBJECT tag, which plays the pivotal role in calling the CardSpace identity selector (however, it should be noted that it is possible to use XHTML). There are a couple of things that happen during this process, though, that deserve some attention. Firstly, we have the OBJECT tag that CardSpace requires, which we will examine a little more closely in a minute. Secondly, the form must be submitted, which will invoke the identity selector; this is essentially calling the object. The CardSpace server code (controls) massages the encrypted token in order to compute the claims as appropriate, pulling apart and decrypting the token according to the private key of the SSL certificate. Then the encrypted (note that this token is encrypted per WS-Security), signed XML token can be sent to SharePoint. There are some more minor processes that occur, but this is a good high-level overview regardless.

There are a bunch of identity selector parameters; however, I am just going to go over the ones that I think are important. Ones that I don't really want to type about right now (like issuerPolicy and privacyUrl) I will discuss in a separate post. Each of these parameters is important, however, when architecting your CardSpace implementation, because they will be used for the WS-SecurityPolicy data.

The information that is contained in the OBJECT tag is pretty self-explanatory. The first portion of it is the object MIME type declaration. This is important to understand, because when working with cross-browser scenarios in relation to InfoCard support, you will end up exploring InfoCard MIME type existence testing to elegantly handle SharePoint users (simple comparisons on whether the user can leverage CardSpace, environmentally pending).

[html]
<object type="application/x-informationcard" name="xmlToken">
[/html]
This is just the beginning of the OBJECT declaration that is required in order to invoke the identity selector. We can see that the object tag is decorated with its MIME type, in this case the InfoCard type, along with the name attribute.
The second part of the OBJECT declaration is going to tell you the type of token that is being passed.
[html]
<param name="tokenType" value="urn:oasis:names:tc:SAML:1.0:assertion" />
[/html]

This is interesting: what is a SAML assertion? More along those lines, what is SAML? As SharePoint developers / architects, we don't typically encounter SAML, because most of the time it is typical to just use the default authentication / identity management schemes that are provided and run with them. SAML stands for Security Assertion Markup Language; it is a standard from the OASIS SSTC, and it provides methods that allow the exchange of relevant security information, namely identity and authentication information, building upon a widely understood standard, in this case XML. Because SAML is based on XML, it pairs naturally with other technologies such as SOAP, HTTP, etc. XML is the ideal data format for the backbone of SAML, as it makes assertions directly consumable via a web browser without requiring any client software. The main goal of SAML is to reduce the overall cost and management associated with user administration by providing a widely understood, easy-to-use standard that implements identity federation.

A SAML assertion can basically be thought of as a small container of information that is tiered around security data. There are two types of providers in a SAML environment: an identity provider and a service provider. Assertions move from the identity provider to the service provider through a finite number of paths. The purpose of both providers is pretty straightforward. A SharePoint user subscribes to an identity provider; this provider is responsible for authentication services. At this point, the assertion enters the argument. The assertion is generated by the identity provider. The generated assertion is passed to the service provider when the SharePoint user deems it appropriate. Then, as the service provider decides, the user is granted access. This SAML assertion security data contains some very intriguing values. The first of these is the header information. The header information houses some general data regarding the assertion, namely the identity provider's formal name, expiration dates, etc.

There are two types of statements that are important to SAML in regards to CardSpace: authentication statements and attribute statements, which are carried in the SAML assertion supplementing the header. The first of these, the authentication statement, tells the service provider that the SharePoint user at some point in time authenticated against the identity provider. The second, the attribute statement, simply contains metadata regarding the subject in name/value format that defines further details about the subject.
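A simplified sketch of what such an assertion might look like, with both statement types present (the identifiers, issuer, and attribute values here are illustrative placeholders):

[xml]
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
                AssertionID="_a75adf55" Issuer="https://sts.example.com"
                IssueInstant="2007-01-01T12:00:00Z">
  <saml:Conditions NotBefore="2007-01-01T12:00:00Z" NotOnOrAfter="2007-01-01T12:05:00Z" />
  <!-- authentication statement: the subject authenticated against the identity provider -->
  <saml:AuthenticationStatement AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password"
                                AuthenticationInstant="2007-01-01T12:00:00Z">
    <saml:Subject>
      <saml:NameIdentifier>user@example.com</saml:NameIdentifier>
    </saml:Subject>
  </saml:AuthenticationStatement>
  <!-- attribute statement: name/value metadata about the subject -->
  <saml:AttributeStatement>
    <saml:Subject>
      <saml:NameIdentifier>user@example.com</saml:NameIdentifier>
    </saml:Subject>
    <saml:Attribute AttributeName="givenname"
                    AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
      <saml:AttributeValue>Adam</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
[/xml]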

When working with these SAML statements (which are essentially represented as XmlElement objects out of System.Xml), you are going to need to check their validity. This is essential when coding against SAML, and easy to do using some fairly standard XML functionality against the relevant XML nodes. You can see in the conditional test we are building that we are using the IXmlElement interface as a parameter to our static method. We are declaring this method as static because it belongs to the type itself rather than to a specific object. We use the IXmlElement interface because it helps when working with XML objects. You can see that we are using the LocalName property because it returns the node's qualified name, which we match against the string AuthenticationStatement or AttributeStatement (I am only showing one of each, but the string values are the only things that would vary between the code statements).

[csharp]
public static bool IsValid(IXmlElement xmlElement)
{
    if (!((XmlElement) xmlElement).LocalName.Equals("AuthenticationStatement"))
    {
        return false;
    }
    return ((XmlElement) xmlElement).NamespaceURI.Equals("urn:oasis:names:tc:SAML:1.0:assertion");
}
[/csharp]
You can use IXmlElement in similar fashions when working with SAML. For example, you can use it when pushing values to XML.
[csharp]
IXmlElement xmlElement = (IXmlElement) ((XmlDocument) xmlDocument).CreateElement("saml", "Attribute", "urn:oasis:names:tc:SAML:1.0:assertion");
[/csharp]

When working with X.509 certificates, you can also use the IXmlElement interface to get the certificate, using two static methods that both take an IXmlElement as a parameter. We are going to be working with X.509 v3 certificates (the certificate itself actually isn't encrypted; it is encoded in base64) because there has to be a mechanism in place that allows the user to verify that the SAML token was issued by a trustworthy party, so the token gets signed. For example, it would look like the following to get the relevant signature, which is required because the signature value is the encrypted digest value. This is what you will decrypt to verify the digest. Anyways, here is the method:

[csharp]
public static IXmlElement GetSignature(IXmlElement xmlElement)
{
    return (IXmlElement) ((XmlElement) xmlElement).SelectSingleNode(
        "*[local-name(.) = 'Signature' and namespace-uri(.) = 'http://www.w3.org/2000/09/xmldsig#']");
}
[/csharp]

The second method will use the first method in order to get the certificate out of the XmlElement object.

[csharp]
public static IXmlElement GetX509Certificate(IXmlElement xmlElement)
{
    IXmlElement oxmlElement = XmlSignature.GetSignature(xmlElement);
    if (oxmlElement != null)
    {
        return (IXmlElement) ((XmlElement) oxmlElement).SelectSingleNode(
            "//*[local-name(.) = 'X509Certificate' and namespace-uri(.) = 'http://www.w3.org/2000/09/xmldsig#']");
    }
    return null;
}
[/csharp]

There is another statement type, the authorization decision statement. We are not going to talk about this too much, but the concept it provides in the realm of SAML is nonetheless important to understand. The authorization decision statement basically states what actions a user is permitted to perform on certain arbitrary objects. For example, let's look at some code to get a better idea.

[csharp]
string myDecision;
if ((myDecision = text) != null)
{
    switch (myDecision)
    {
        case "Permit":
            return Decision.Permit;
        case "Deny":
            return Decision.Deny;
        case "Indeterminate":
            return Decision.Indeterminate;
    }
}
// fall-through for null or unrecognized decision strings
return Decision.Indeterminate;
[/csharp]

We can see that there is a simple Permit / Deny gate being set up in regards to a decision string passed into the switch. This should give you a better idea of how authorization decision statements function in SAML.
Anyyyyyways, I could go on with working with SAML, XML, and C# all day, so let’s get back to CardSpace. It will probably be the subject of a future post, so stay tuned.

The next value that exists in the OBJECT tag is the issuer (Man, did I deviate, are we still talking about CardSpace?).

[html]
<param name="issuer" value="http://schemas.xmlsoap.org/ws/2005/05/identity/issuer/self" />
[/html]

We talked about issuers in a previous post. Within CardSpace, there are generally two types of issuers: ones that are self-issued (as in the sample value above), and ones that are issued from external companies. As we discussed, CardSpace will query the issuer of the identity to obtain a digitally signed, encrypted XML token. People will often refer to the issuer as the STS (Security Token Service). The value contained in this parameter is the URI of the issuing entity.
The next parameter lists the claims that are required. This can vary heavily. We already talked about what claims are; these are simply the ones the relying party demands. In order for an InfoCard to become selectable within the CardSpace UI, the card must satisfy all of the required claims. Otherwise, the InfoCard is typically greyed out and not selectable for the SharePoint instance. This is decorated as a space-separated list of URIs.

[html]
<param name="requiredClaims" value="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" />
[/html]

The next element is the optionalClaims parameter, which simply contains the claims that are optional. These are claims that are not necessarily required in order to process the user's identity card for access to the SharePoint instance. For example, myFavoriteBeer = Blue Moon would generally be an optional claim.

[html]
<param name="optionalClaims" value="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/webpage" />
[/html]

Those are the only parameters that I want to talk about right now.

Now, let’s first talk about some of the requirements of CardSpace, and why they exist. The most obvious requirement that you will see when using CardSpace with SharePoint is that it requires the use of SSL (Secure Sockets Layer). This is because CardSpace requires, at the very least, read access to the SSL private key. This doesn’t necessarily mean that your entire SharePoint instance has to be SSL enabled (although that is the most typical implementation); rather, the only page that must be secured with SSL is the page where the identity selector is invoked. This is for good reason: given the type of information that is going to be transferred over the transport layer with CardSpace, the client needs to be able to verify the identity of the relying party through the information in the SSL certificate.

CardSpace needs at least read access to the private key because, when CardSpace relays the XML token, it is encrypted on the client using the SSL certificate's public key. In order for decryption to happen, read access to the private key is required so that the claims included in the SSL-encrypted XML token can actually be processed.
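As a rough sketch of that decryption step (this is conceptual, not a full token processor; LoadSslCertificate is a hypothetical helper standing in for however you retrieve the site certificate, and a production implementation would unwrap the EncryptedKey element explicitly rather than rely on a key-name mapping):

[csharp]
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Xml;

public static class TokenSketch
{
    // Conceptual sketch: decrypt the posted xmlToken using the SSL
    // certificate's private key. The app pool account must have read
    // access to that private key for this to work.
    public static XmlDocument DecryptToken(string xmlToken)
    {
        X509Certificate2 sslCert = LoadSslCertificate(); // hypothetical helper

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;
        doc.LoadXml(xmlToken);

        EncryptedXml encryptedXml = new EncryptedXml(doc);
        // Make the RSA private key available so EncryptedXml can unwrap
        // the symmetric key that protects the token body.
        encryptedXml.AddKeyNameMapping("rsaKey", (RSA)sslCert.PrivateKey);
        encryptedXml.DecryptDocument();

        return doc; // plaintext SAML token containing the claims
    }

    private static X509Certificate2 LoadSslCertificate()
    {
        // Placeholder: in practice, load from the LocalMachine store
        // by thumbprint or subject name.
        throw new System.NotImplementedException();
    }
}
[/csharp]

The key point is simply that without read access to the private key, the claims in the token can never be recovered server-side.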

So when you are sending a card to your CardSpace-enabled SharePoint site, what exactly is being sent? In a typical FBA-enabled or Windows-based site, we are used to sending some basic stuff, just objects like a username and password, nothing real complex. CardSpace is a little different, however. The most important unique value that CardSpace will generate is a concept called a PPID, the Private Personal Identifier. The PPID is essentially an ID that identifies a specific card for a certain relying party. The PPID is similar to the other claims sent in the encrypted XML token in that it is a rather complex claim, complex in the sense that the numeric generated is rather lengthy. The PPID that you send to one CardSpace-enabled SharePoint instance will not be the same as the PPID that you send to another CardSpace-enabled SharePoint instance; rather, they will differ greatly, because the relying party's certificate is used as a parameter when the numeric is generated. The PPID is generated using the relying party certificate and something that is unique about the card. This is beneficial because it prevents replay attacks across multiple sites that have CardSpace-enabled authentication, since each site's certificate feeds into the generation of its PPID.
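The actual PPID derivation is internal to CardSpace, but the idea can be illustrated with a simple hash over the card's unique identifier plus the relying party's certificate thumbprint. This is a conceptual illustration only, not the real algorithm:

[csharp]
using System;
using System.Security.Cryptography;
using System.Text;

public static class PpidSketch
{
    // Conceptual illustration: because the relying party's certificate
    // is mixed into the hash, the same card yields a different PPID at
    // every site, so a PPID captured at one site is useless at another.
    public static string ConceptualPpid(string cardUniqueId,
                                        string relyingPartyCertThumbprint)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(
                cardUniqueId + "|" + relyingPartyCertThumbprint));
            return Convert.ToBase64String(hash);
        }
    }
}
[/csharp]

With this sketch, ConceptualPpid(card, siteAThumbprint) and ConceptualPpid(card, siteBThumbprint) differ for the same card, which is exactly the replay-resistance property described above.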

So what does the PPID have to do with user authentication? Well, how are we used to authenticating users to our SharePoint instance? We use a username and password, and any number of .NET 2.0 authentication providers, be it Windows, Forms, or Web Single Sign-On, which bind into the SharePoint security system. When a user first hits your SharePoint instance that is enabled with CardSpace, it will follow the process described in this post. At a more granular level, however, when the token operations proceed, the PPID will be saved locally for that user. Therefore, when the user next visits your SharePoint instance, the PPID that is presented will be compared against the one in the database. If the comparison between the two returns true, then the user is authenticated to the SharePoint instance without the need for any type of re-authentication. If the user does not have a matching PPID, then the operation continues with the first-time-visit flow, which would essentially follow the diagram described in the previous post. It is important to realize that the PPID should always be pulled from an encrypted, signed XML token; it should not be used in mixed environments, since this could lead to the PPID becoming exposed.
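That returning-visitor check boils down to a simple lookup-and-compare. In this sketch, MembershipStore and its GetPpid method are hypothetical stand-ins for wherever your custom membership provider persists the registered PPID:

[csharp]
public static class PpidAuthenticationSketch
{
    // Sketch of the returning-visitor flow described above.
    public static bool AuthenticateByPpid(string presentedPpid, string userName)
    {
        // Look up the PPID stored when the user first registered the card.
        string storedPpid = MembershipStore.GetPpid(userName); // hypothetical store

        if (storedPpid != null && storedPpid == presentedPpid)
        {
            // Known card: authenticate without re-prompting the user.
            return true;
        }

        // No match: fall back to the first-time-visit registration flow.
        return false;
    }
}
[/csharp]

In a real deployment this comparison would live inside a custom membership provider, and the presented PPID would come only from the decrypted, signature-verified token.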

Ultimately, this is a much better process for authenticating your users, since it eliminates password fatigue and negates the need for multiple logins to several places. The following quote is a great way to consider current password schemes:

"We should all remember that a secret passed unencrypted via a public medium is no longer a secret; it's a fact waiting for someone else to learn it." – Richard Turner, Product Manager for Microsoft's Identity Platform Developer Marketing group

However, CardSpace is MORE than authentication; it is verification of an IDENTITY. An identity is more than just your username and password. It is all sorts of metadata that is associated with you.

For example, imagine that sharepointsecurity.com were a MOSS instance; to log in, I would need the following authentication attributes:

Username : Adam

Password : Password

But an identity contains more information than just this, it contains things like:

Name : Adam Buenz

Username : Adam

Password: Password

Location : Eglin Air Force Base, Fort Walton Beach, FL

Favorite Food: Tacos

Favorite Beer: Blue Moon

Favorite Time To Work : Midnight to 4:00 a.m.

Etc. etc. etc.

An identity can contain a lot of information!

Ok, that is enough for this post on CardSpace, SAML, and everything else that we talked about. How random was this content, oh well. Based on how far I got on this today, I am probably going to need a few more posts about it before this series is closed up. Arg, I enjoy coding more than I enjoy writing. Oh well, happy CardSpacing!
