ADAM (Active Directory Application Mode) Custom Role Provider

Introduction to the Standard ADAM Role Provider
ADAM (Active Directory Application Mode) has become increasingly common for companies that want a lightweight directory service to use within their SharePoint environment. Setting up role-based security principals with ADAM is common because its extensible architecture provides the benefits of Active Directory without the management and implementation overhead, and it also supports binding roles to operations. This is a very helpful function since, beyond acting as a role and user data store, the ADAM instance can become an engine by which to run a mixed, integrated SharePoint and miscellaneous application environment. The AzMan role provider, however, limits the users that you can integrate with the LDAP pluggable provider to those that resolve to a tangible Windows identity (it must be a domain account), which of course is a problem for people who wish to use ADAM-only users. With increased support for ADAM in SharePoint 2007, the requirement from customers has become even more prevalent, and for some, the lack of role support has become a cause for concern.

The Need for a Role Provider
In other articles on the site (notably here) there have been detailed explanations of using the ActiveDirectoryMembershipProvider in your SharePoint environment, and this provider is indeed central when implementing ADAM for user authentication. However, it is necessary to implement a custom provider that offers the same role functionality available to other providers to those same ADAM users, allowing one to bypass the limitation of only supporting role assignment for actual domain accounts.

The provider that ships for use with ADAM is AuthorizationStoreRoleProvider, which couples with the functionality provided by Authorization Manager (AzMan). The actual communication happens through a primary interop assembly (microsoft.interop.security.azroles.dll), a COM wrapper (discussed shortly) imported from the type library AZROLESLib. The membership functions of ADAM are supported in the latest releases, and storing principals without resolving to Windows identities works fine.
Role Provider Functionality
There are various layers of supported functionality in the default role provider that are extended in the custom provider available here: notably AddUsersToRoles, CreateRole, DeleteRole, FindUsersInRole, GetAllRoles, GetRolesForUser, GetUsersInRole, IsUserInRole, RemoveUsersFromRoles, RoleExists, and other lower-level functions such as getting application names. Most of the methods in the ADAM role provider resemble those of other providers built to support data stores of other origins; whether the backend is Oracle, flat text, DB2, or Informix, a role provider will share several of the same characteristics.

There are some important concepts that must be considered with the ADAM role provider. The first is to take the concept of application name into consideration. In this case it is handled by referencing the ADAM application, which in turn provides the methods needed to talk back and forth between the role provider and the ADAM backend data store. This is important because it allows segregation of data by application name, so that multiple applications can coexist. Because multiple applications can be used by a provider, an attribute can explicitly select which one the provider targets. If an application name is not specified in the configuration files, then the default application name determined by the .NET Framework is used. This can be extended further with ADAM application scopes, which allow you to drill down even further.
Since scopes are a lot less common than application names, let's see a brief overview of getting an application name.


The first thing we have to do is initialize a new authorization store object by calling new AzAuthorizationStoreClass(). Following that, we get the values we require out of the configuration sections (the connection strings that are specified), and then call OpenApplication() from microsoft.interop.security.azroles.dll and return the application name.
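The steps above can be sketched roughly as follows. This is a sketch only, not the article's actual implementation: the method name and parameters are illustrative, and it assumes a project reference to microsoft.interop.security.azroles.dll and a connection string already pulled from configuration.

[csharp]
// Sketch under stated assumptions: requires a reference to the AzMan
// primary interop assembly (microsoft.interop.security.azroles.dll).
using Microsoft.Interop.Security.AzRoles;

public string GetApplicationName(string storeConnectionString, string applicationName)
{
    // Initialize the authorization store COM wrapper
    AzAuthorizationStoreClass store = new AzAuthorizationStoreClass();

    // Open the store using the connection string read from the
    // provider's configuration section (e.g. an msxml/msldap URL)
    store.Initialize(0, storeConnectionString, null);

    // OpenApplication returns the IAzApplication for the configured name
    IAzApplication application = store.OpenApplication(applicationName, null);
    return application.Name;
}
[/csharp]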

Firstly, the provider will have to inherit from the RoleProvider base class, which lives in the System.Web.Security namespace. Unlike other concepts introduced on the site, such as using Façade patterns (factory patterns that develop a generic interface for providing a certain set of functionality, see here) for transactional provider interaction, the ADAM role provider is more specific; as opposed to database factory patterns, which are abstract, the methods introduced here are concrete by nature.

Role providers are quite easy to write and implement, because they are typically just concrete classes over an abstract base, and the parameters they handle are very simple: role names, and the users associated with those role names. There are a couple of methods that should always be implemented, and others that, although optional, are nonetheless very important.
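As a sketch of what this inheritance looks like, the shell below shows every abstract member of RoleProvider that a custom provider must override. The class name AdamRoleProvider is illustrative, and each body would be filled in against the ADAM store.

[csharp]
// Shell only: every override below would be implemented against ADAM.
using System;
using System.Web.Security;

public class AdamRoleProvider : RoleProvider
{
    public override string ApplicationName { get; set; }

    public override void CreateRole(string roleName) { throw new NotImplementedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotImplementedException(); }
    public override bool RoleExists(string roleName) { throw new NotImplementedException(); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); }
    public override bool IsUserInRole(string username, string roleName) { throw new NotImplementedException(); }
    public override string[] GetRolesForUser(string username) { throw new NotImplementedException(); }
    public override string[] GetUsersInRole(string roleName) { throw new NotImplementedException(); }
    public override string[] GetAllRoles() { throw new NotImplementedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotImplementedException(); }
}
[/csharp]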


As an example, let's examine the most obvious method: creating roles. This is done with the CreateRole method, which simply creates a new role in the backend data store. There are of course some conditions that must be checked before connecting to the ADAM data store and committing the new role. The first is to examine illegal character conditions, and the second is that the role name doesn't already exist. Checking whether the name already exists can be achieved by simply iterating through the available roles and determining whether the role names being returned match the one passed in as a parameter:


The second condition to check is whether there are illegal characters in the role name. This is done with an if statement using IndexOf to tell whether the offending character is present in the role name. The only character that is technically illegal for a role provider is the comma; however, if you have standards you must implement to conform to a role-name requirement, you can use the same technique.

After the conditions are met, you can actually add the role.
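Putting the two checks and the creation step together, a CreateRole override might look like the following sketch. The AzMan calls (CreateRole and Submit on the interop interfaces) and the app field holding the opened IAzApplication are assumptions about how the provider was initialized, not the article's actual code.

[csharp]
// Sketch only: assumes a field "app" of type IAzApplication obtained
// when the provider initialized, plus System.Configuration.Provider
// for ProviderException.
public override void CreateRole(string roleName)
{
    if (roleName == null || roleName.Length == 0)
        throw new ProviderException("Role name cannot be empty.");

    // Commas are the one character a role provider cannot accept,
    // since the runtime uses them as role-name delimiters
    if (roleName.IndexOf(',') != -1)
        throw new ArgumentException("Role names cannot contain commas.");

    // Make sure the role does not already exist
    foreach (string existingRole in GetAllRoles())
    {
        if (string.Equals(existingRole, roleName, StringComparison.OrdinalIgnoreCase))
            throw new ProviderException("Role already exists.");
    }

    // Both conditions met: create the role in the ADAM-backed store
    // and persist the change
    IAzRole newRole = app.CreateRole(roleName, null);
    newRole.Submit(0, null);
}
[/csharp]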
As you can see, the methods that are needed are pretty simple and easy to implement. Although this article won't detail them all, there are some central concepts that should be covered.

Configuration Elements
As with the other providers implemented for your SharePoint environment, one of the most important changes you will make is to the configuration elements of the web application. This is done in two places: the connection strings element, so that the LDAP connection can be made, and the role manager section, which specifies the provider type, a reference to the connection string settings, and the application name, since it helps define scopes for your business applications.
The first of these is to set up the connection string, which defines the connections made to the ADAM data store.

[xml]
<connectionStrings>
  <add name="ADAMStoreConnection"
       connectionString="LDAP://ADAM.sharepointsecurity.com/OU=Example,DC=sharepointsecurity,DC=com" />
</connectionStrings>
[/xml]

The second step is to enable the Role Manager, set the default provider to the ADAM role provider, and then set the application name in the configuration elements. The type name is the custom role provider name, in this case ARB.ADAM.RoleProvider.

[xml]
<roleManager enabled="true" defaultProvider="ADAMRoleProvider">
  <providers>
    <clear />
    <add name="ADAMRoleProvider"
         type="ARB.ADAM.RoleProvider"
         connectionStringName="ADAMStoreConnection"
         applicationName="SharePoint" />
  </providers>
</roleManager>
[/xml]


The Definitive Guide To MOSS Pluggable Authentication Providers

Want To Skip Directly to Implementation? Check out the Universal Provider Framework (free of course) for Universal Membership, Role, and Profile provider schemas, getting you up and running with nearly any custom database type in less than 30 minutes. The following providers were defined using the framework for your convenience:

There are a total of six classes that make up the Universal Provider Framework.

Download Visual Studio Project File


SharePointMembershipProvider.cs – View Online | Download Class File


SharePointProfileProvider.cs – View Online | Download Class File


SharePointRoleProvider.cs – View Online | Download Class File


SharePointUsersProvider.cs – View Online | Download Class File


GeneralUtilities.cs – View Online | Download Class File


UserData.cs – View Online | Download Class File

SQL Pluggable Provider Management
If you are looking for a way to interact with the SQL provider database, you can find the ASP.NET 2.0 Provider Manager and the business layer classes that it uses here.

Restricting Security Features in Previous Versions of SharePoint and Improvements
In previous versions of SharePoint, there were many built-in security mechanisms and features that allowed a granular collaboration environment with varying types of application architecture. Within this antiquated security framework, one of the most frustrating restrictions was that SharePoint user accounts were required to resolve back to a Windows identity, severely impacting application extensibility, particularly for perimeter-facing deployments. There were workarounds; creating multiple Active Directory trees and local user accounts was exceedingly common. For large extranets, however, this was not only a management nightmare but led to poor security protocol management.


MOSS 2.0 Security Request Flow and IIS Handshaking

The IIS / ASP.NET 2.0 security process flow is relatively straightforward from when a client initiates the authentication handshake for proper verification and routing to the relevant MOSS web application and MOSS zone.

  1. When the request first begins the handshaking process, IIS will ensure that the incoming client request comes from an IP/host that is allowed access to the domain. If this condition is not met, the packets are dropped and the request is rejected by the MOSS server.
  2. Assuming that there are relevant IIS authentication routines (such as basic, integrated, digest, etc.) the MOSS IIS instance will perform that specific authentication.
    1. If you select anonymous authentication, IIS does not perform any authentication by default. There is further configuration required to get anonymous authentication to work with MOSS.
    2. If you select basic authentication, users must provide a Windows username and password to connect. This information is sent across the network in clear text, making it natively insecure. Therefore, SSL is typically used alongside it.
    3. If you select digest authentication, users must still provide a Windows username and password to connect. The difference between this and basic authentication is that the password is hashed before it is sent by the user. Since this is still an IIS authentication routine, the users' Windows accounts will still need to be stored in a network-accessible Active Directory.
    4. If you select Windows integrated authentication, users still have a Windows username and password; however, the authentication routine will depend on Kerberos or typical challenge/response (NTLM). There are some further settings needed in Internet Explorer to make the user experience seamless when leveraging Integrated Windows Authentication.
  3. If there is no configuration done in IIS, it will leverage anonymous access, so client authentication handshake requests will be automatically passed through and authenticated as legitimate users (although this will not allow access to MOSS, since the Windows membership provider will be used by default). These types of authentication are set on a per-virtual-server basis, whereas the MOSS authentication providers can be set on a per-web-application basis (via zones), and multiple site collections can exist in varying web applications on n number of virtual servers (it is important to remember that the requirement of always resolving to a Windows identity doesn't necessarily have to exist in MOSS).
  4. The first thing that MOSS will do when the authentication request is passed to it is see whether impersonation is enabled. This is how MOSS functions with the varying authentication providers, which allows ASP.NET to act as an authenticated user.
  5. Finally, the identity from the previous step is used to authorize resources from the Microsoft Office Server System. This is based on the authentication providers that are configured, as well as varying assets that can affect the authentication providers, such as zones and web application policies. The MOSS resources obtained don't even have to be restricted to the MOSS webforms, since Code Access Security (CAS) can enable exposure to such things as keys, disks, and various other server resources.

One of the chief improvements in the new revision of SharePoint (Microsoft Office Server System, or MOSS) is the membership and user model, which builds off the revamped ASP.NET 2.0 membership model providing user credentials and user roles functionality (with the addition of seven new server-side controls, membership classes to retrieve and update user information within a database, and role management functionality). This new SharePoint / ASP.NET 2.0 membership model presents state-of-the-art provider APIs that allow a SharePoint environment to talk with a variety of backend corporate user account systems, some providers being provided by default whereas others require creating a custom provider, or simply downloading an already created one from sharepointsecurity.com. Forms-based authentication, the subject of another article, integrates extremely well with this pluggable model; however, they are not dependent on each other and can work as independent pieces of technology depending on requirements.

The membership architecture can be depicted well visually, as shown in the following diagram:


So How Does This Membership Model Exactly Work?


There are three main pieces that build the application architecture of a membership provider in ASP.NET 2.0: the membership API, the membership provider, and the provider-specific storage. The actual logic process of the membership model is very simple because of its relatively straightforward design pattern, which provides a high layer of abstraction. Rather than restricting the developer to fixed methods for tapping into a data store, the provider API is flexible and can be molded by a developer, along with the definition of the user member storage mechanism.
[csharp]
public abstract class MembershipProvider : ProviderBase
{
    // Public properties
    public abstract string ApplicationName { get; set; }
    public abstract bool EnablePasswordReset { get; }
    public abstract bool EnablePasswordRetrieval { get; }
    public abstract bool RequiresQuestionAndAnswer { get; }

    // Public methods
    public override void Initialize (string name, NameValueCollection config);
    public abstract bool ValidateUser (string name, string password);
    public abstract bool ChangePassword (…);
    public abstract MembershipUser CreateUser (…);
    public abstract bool DeleteUser (string name, bool deleteAllRelatedData);
    public abstract string GetPassword (string name, string answer);
    public abstract MembershipUser GetUser (string name, bool userIsOnline);
    public abstract string ResetPassword (string name, string answer);
    public abstract void UpdateUser (MembershipUser user);
}

[/csharp]

The main class within the membership API is the Membership class. The Membership class contains only static methods and doesn't require an object instance. Though the controls handle the majority of desired functionality, ASP.NET 2.0 provides public methods on the Membership class to expand the developer's control. A few of these include:

CreateUser
Adds an arbitrary user to the MOSS membership data store

DeleteUser
Removes an arbitrary user from the MOSS membership data store

GeneratePassword
Generates a random password of a specified length for access into MOSS

GetAllUsers
Retrieves a collection of MembershipUser objects representing all currently registered users for the pluggable based MOSS environment

GetUser
Retrieves a MembershipUser object representing a user

UpdateUser
Updates information for a specified user

ValidateUser
Validates logins based on user names and passwords

ChangePassword
Changes a MOSS user's password

ChangePasswordQuestionAndAnswer
Changes the MOSS user's question and answer used for password recovery

GetPassword
Retrieves a MOSS user password

ResetPassword
Resets a MOSS user password by setting it to a new random password
These members reference the MembershipUser class, which represents a user stored in the membership data system. As stated before, the MembershipProvider class can be extended to create custom providers for a variety of systems, allowing you to extend the way users can authenticate to your SharePoint environment.
As an example of how these methods work, let's look at how one could programmatically create a user for your MOSS environment to consume. This method could be hooked to any number of pluggable providers, depending on your backend membership data store:
For the first portion of the code, you simply need to call the CreateUser method and fill in the appropriate string entries to populate the membership database with relevant information about the user:
[csharp]
try {
Membership.CreateUser ("Adam", "Buenz", "adam@sharepointsecurity.com");
}

[/csharp]
Following, you should allow some flexibility for handling the case where a user does not meet the standard criteria. This can be accomplished by leveraging MembershipCreateUserException.StatusCode along with a switch statement.

[csharp]
catch (MembershipCreateUserException e)

[/csharp]
Following, build out the switch statement to handle the various types of exceptions that may occur when you are creating the user. These can vary heavily, so it is best to be as robust as possible while writing them.
For example, we could start with the switch statement to parse the status code:
[csharp]
switch (e.StatusCode)

[/csharp]
Following, we can build out the various types of exceptions that may occur; some of them are the more obvious:

Check whether there is a duplicate username that exists
MembershipCreateStatus.DuplicateUserName

Check whether there is a duplicate email address that exists
MembershipCreateStatus.DuplicateEmail

Check whether there is an invalid password entered
MembershipCreateStatus.InvalidPassword
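Assembled, the fragments above fit together as the following sketch; the comments in each case are placeholders for whatever handling your implementation requires.

[csharp]
try
{
    Membership.CreateUser ("Adam", "Buenz", "adam@sharepointsecurity.com");
}
catch (MembershipCreateUserException e)
{
    switch (e.StatusCode)
    {
        case MembershipCreateStatus.DuplicateUserName:
            // A user with this name already exists in the store
            break;
        case MembershipCreateStatus.DuplicateEmail:
            // Another account is already registered with this address
            break;
        case MembershipCreateStatus.InvalidPassword:
            // The password failed the provider's format requirements
            break;
        default:
            // Remaining MembershipCreateStatus values (InvalidEmail,
            // ProviderError, UserRejected, and so on)
            break;
    }
}
[/csharp]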

The above exception, where we check the user password, also has to be implemented programmatically, and can be bound to some of the initial sign-on events to validate the password format as users enter the MOSS instance. For example, it is common for passwords within an organization to require a certain number of characters. This can be achieved by using regular expressions. Regular expressions are just a way to implement pattern matching within your code, and are supported by all .NET-compliant languages.

 

[csharp]
void OnValidateCredentials (Object sender, CancelEventArgs e)

[/csharp]
Then, just create an if statement to start the regex pattern:
[csharp]
if (!Regex.IsMatch (LoginControl.Password, "[a-zA-Z0-9]{8,}"))

[/csharp]
If the password doesn't meet the requirements, you cancel the event by setting e.Cancel = true (the full setting is CancelEventArgs.Cancel), and you can simply display a message using LoginControl.InstructionText, such as:
[csharp]
LoginControl.InstructionText = "Passwords must be at least 8 characters";

[/csharp]
Then set e.Cancel (defined on CancelEventArgs) to true:
[csharp]
e.Cancel = true;

[/csharp]
Multiple other types of exceptions could be handled the same way, depending on your implementation.
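Combined, the validation fragments above form one handler along the following lines. The LoginControl name and the eight-character alphanumeric policy are carried over from the discussion and are assumptions, not fixed requirements.

[csharp]
// Sketch of the complete credential-validation handler
using System.ComponentModel;
using System.Text.RegularExpressions;

void OnValidateCredentials (Object sender, CancelEventArgs e)
{
    // Require at least eight alphanumeric characters
    if (!Regex.IsMatch (LoginControl.Password, "[a-zA-Z0-9]{8,}"))
    {
        LoginControl.InstructionText = "Passwords must be at least 8 characters";
        e.Cancel = true;   // abort the login attempt
    }
}
[/csharp]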

The schema when using the SQL membership provider provides some great insight into how the database is structured with a pluggable membership provider:



In relation to the membership model for MOSS under pluggable authentication schemes, there are some unique events that are fired to facilitate the login process (these events can all be manipulated if you want to provide a custom login procedure to users):

Authenticate

Event that is triggered when a user trips the login process in order to authenticate the user by validating arbitrary passed credentials to MOSS

LoggedIn
Event that can optionally be tripped following a login to MOSS

LoggingIn
Prevalidation of user credentials in order to ensure properly formed entry

LoginError
Event that is tripped if a user does not enter appropriate credentials to MOSS

For your login page, MOSS will provide a default one if you use the internet presence template. However, if you need to create your own, there are a variety of Visual Studio controls available for you to build with, and a variety that will allow you to customize and enhance a very standard login procedure.

ChangePassword
User Interface for altering MOSS pluggable passwords

CreateUserWizard
User Interface for creating new MOSS pluggable user accounts

Login
User Interface for entering and validating MOSS pluggable user names and passwords

LoginName
Displays authenticated MOSS user names

LoginStatus
User Interface for logging in and logging out of the MOSS instance

LoginView
Displays different views based on MOSS login status and roles

PasswordRecovery
User Interface for recovering forgotten MOSS instance passwords  

The Role Management Provider
The role management model offers three built-in providers:

  • AuthorizationStoreRoleProvider
  • SqlRoleProvider
  • WindowsTokenRoleProvider

The role provider can be depicted visually as well:


These providers provide far more functionality than anything available within previous versions of SharePoint, as well as traditional .NET 1.1 applications (roles typically required the use of the Application_AuthenticateRequest method, which still required developers to pass custom roles through the ASP.NET HTTP pipeline). The purpose of roles is fairly straightforward: they give a developer the option of creating rules that control access to various pieces of content within your MOSS environment. A user does not have to be bound to just one role, but can in fact belong to n number of roles as they are defined by a developer. Programmatically, adding a user to your MOSS role provider is straightforward.

Firstly, you have to declare a string to store the unique name that will later be passed into the AddUserToRole method. This is done by just declaring a string as such:
[csharp]
string name = Membership.GetUser ("Adam Buenz").UserName;

[/csharp]
In the above we are declaring a string named name, then using the GetUser method and passing in the username "Adam Buenz" to get that user. Following, we have to add that user to the MOSS role provider by using the AddUserToRole method:
[csharp]
Roles.AddUserToRole (name, "I Love SharePoint Security");

[/csharp]
In the above we are simply passing in the string that we declared before, then declaring which group we are going to add the user to, in this case the role "I Love SharePoint Security".
Roles are not a required feature; however, when using a pluggable membership provider they are easy to implement and manage, allowing you very granular control over your access security.
In order to get the role manager working, it is necessary to enable it. In a MOSS installation this is disabled, since MOSS will initially target Windows identities (even the initial authentication provider will be as such). To enable it, locate the roleManager section and change the "false" value to true, as depicted below:

[xml]
<configuration>
  <system.web>
    <roleManager enabled="false" />
  </system.web>
</configuration>
[/xml]

enable it so it is true:

[xml]
<configuration>
  <system.web>
    <roleManager enabled="true" />
  </system.web>
</configuration>
[/xml]

 

This can either be done through the WAT (Website Administration Tool) or through directly editing the web.config.

 

One of the features discussed in the overview article is that role cookies can be encrypted. This cookie comes from the Roles class, which exposes a role caching property. If the user's roles are too large to fit in the cookie, or there are some other restrictions, the cookie will store the most frequently used roles.

There are several role methods that relate to the new functionality that can now be exposed in SharePoint through the role provider:

AddUserToRole
Adds a MOSS user to a role

CreateRole
Creates a new MOSS role (see example for instance of using this)

DeleteRole
Deletes an existing MOSS role

GetRolesForUser
Gets a collection of MOSS roles for which a user resides

GetUsersInRole
Gets the collection of MOSS users belonging to a specified MOSS role

IsUserInRole
Indicates whether a MOSS user belongs to a specified MOSS role

RemoveUserFromRole
Removes a MOSS user from the specified MOSS role

When a user enters a SharePoint environment where role checking is used, SharePoint will first check whether the encrypted cookie mentioned before is available for consumption. This is because frequent checks against the role manager service would result in performance issues; if instead all the relevant roles of the user are held in a tamper-proof cookie (the cookie must remain encrypted and tamper-proof, otherwise someone could spoof it), it is much quicker for SharePoint to access this information. By default, this cookie will be held for 30 minutes (as shown in the below attributes); however, how long it is held can be manipulated through the web.config file of the arbitrary SharePoint site.
Within the web.config, there are several settings related to how the SharePoint cookie will interact with the user. Some of these are fairly self-evident; however, an explanation of each has been included for completeness:

[xml]
<roleManager enabled="true" cacheRolesInCookie="true" />
[/xml]

To enable the cookie, you must enable the cacheRolesInCookie attribute, as shown above. The following attributes are also available for your MOSS cookies:

  • cookieName=".MOSSROLES" – what is the name of this cookie?
  • cookieTimeout="30" – how long should this cookie last (what is the cookie lifetime)?
  • cookiePath="/" – what is the path to the cookie?
  • cookieRequireSSL="false" – does this cookie require SSL?
  • cookieSlidingExpiration="true" – should expired cookies be renewed?
  • createPersistentCookie="false" – should we implement persistence for the MOSS cookies?
  • cookieProtection="All" – what is the cookie protection level?
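Putting the attributes above together, a complete roleManager element might look like the following (the values shown are the defaults described above):

[xml]
<roleManager
  enabled="true"
  cacheRolesInCookie="true"
  cookieName=".MOSSROLES"
  cookieTimeout="30"
  cookiePath="/"
  cookieRequireSSL="false"
  cookieSlidingExpiration="true"
  createPersistentCookie="false"
  cookieProtection="All" />
[/xml]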

If this cookie is not located on the client machine, SharePoint will instead make calls through the API the role provider leverages in order to determine the roles of the user and match the request against the roles in the environment. If the lookup is successful, the result is written back to the encrypted cookie (if the cookie is specified to be encrypted; this, again, is not a requirement), replacing the last role stored in the cookie with the most recently requested role. Using this model, the role lookup process is automated and can be immediately consumed by a SharePoint environment with no custom development needed, only configuration and implementation.

Going Beyond Using The Provider For Authentication
Once you have the provider set up, you can begin to build some pretty neat administration tools that wrap the data layer to consume the data. One really neat use is building management applications that take advantage of the user options that exist in the environment. For example, sometimes you want to give other administrators in your environment the option of managing the pluggable users within an easy-to-use client. Let's assume that you are going to build a WinForms client to accomplish this task.

Here are the data access layer class files that I used to access the SQL pluggable provider and build out the WinForms client that allows management of all the custom users added to the SharePoint environment.

Once you have these classes defined to build the data access layer, you can start to do some pretty neat management from within a WinForms client, increasing the visibility that you have over the users in your SharePoint environment.


Introduction To Hybrid SharePoint Using SELinux

Hybrid SELinux / SharePoint Environment

Before I get into how to architect a proper MAC environment using SELinux, there is one major misconception I would like to get out of the way. SELinux is not complicated. I am primarily a C# developer and I didn't find it that complicated, and my talents with Linux technologies are less than ideal. If you find Linux as a product complicated, SELinux will appear daunting and you will only be adding insult to injury; but in general, the security concepts that SELinux supplements the Linux platform with are very easy to understand once the general concepts of the platform are tackled.

The purpose behind implementing a hybrid Linux solution with SharePoint is to bring enhanced kernel-level security, such as that provided by SELinux, to the most comprehensive collaboration and communications solution, coupling several layers of technology. The overall concept can be seen in the diagram below. In aggregate, we will establish two separate SharePoint environments: one that is a general environment, and another that is slated for confidential SharePoint-centric information. Although there are two separate environments, VMware will allow us to leverage a single piece of hardware. The general environment will still be able to contact public networks via the internet, whereas the confidential environment will not, since it is meant for sheltered content. Keeping the two environments separate allows them to be truly segregated in order to prevent unauthorized data migration as well as malware propagation from various untrusted sources.

The purpose of using a hybrid SELinux / SharePoint solution is to build a multi-level security capable SharePoint environment. Multi-Level Security is an important concept for several flavors of industry, but most importantly those that exist within the federal government. The concept of Multi-Level Security is built upon models that were first based on the Bell-LaPadula model, which, in general, simply laid the foundation for read/write access across logical boundaries. We can see the concept of multi-level security in the below diagram:

We can see that there are two main actors. The first of those are the requestors: those that are querying to access a specific asset that exists on the MOSS server at a high level, which is noted as Sigma to keep the context of the diagram somewhat bland for applicability purposes. SharePoint has the capability to store user information, as well as access controls to the backend databases, and therefore provides the medium between the actual user requests and the serving of the requested object. The second are the receivers, the low-level requestors, which can be arbitrary in number as well. Hence, we have high systems denoted as S, and low systems denoted as R. The communication between the two is all mediated with SharePoint as the general medium between the two objects.
As with any security model, there are two important concepts typically involved: the object, the asset that a requesting party wants access to, and the subject, the party querying for that specific asset. Regardless of how the request is routed, these two concepts are a constant within any security model proof.
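The read/write rules that Bell-LaPadula lays down across these logical boundaries can be sketched in a few lines. This is a minimal illustration, not production code; the level names and their numeric ordering are assumptions chosen for the example, not taken from any specific SELinux policy.

```python
# Minimal sketch of Bell-LaPadula style checks: "no read up, no write down".
# The level names and numeric ordering below are illustrative assumptions.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # "No read up": a subject may read only objects at or below its own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # "No write down": a subject may write only objects at or above its own level.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

These two rules are what keep information from leaking downward: a high-level (S) subject can read low-level content but cannot write into it, while a low-level (R) subject can pass information up but never read it back.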

The overall architecture of SELinux is composed of the kernel, which makes the decisions regarding allowed and denied access; the SELinux shared library, which provides the overall support library for process execution under SELinux; and the SELinux security policy, which drives the decision-making process.
One can sum up the packages that are included with SELinux as:

  • The standard Linux kernel
  • The Linux Security Module (LSM) provided by SELinux
  • SELinux kernel modules that provide native hooking
  • Kernel patches for SELinux that need to be applied (depending on distro)
  • SELinux management programs (those used to build policy files etc.)
  • A distribution that includes policy files (others can be easily created in policy language)
  • Some variations on the standard Linux programs, to make these applications aware of SELinux's existence. For example, mkdir is replaced with an SE-enhanced version provided by SELinux.

The architecture of a hybrid SharePoint/SELinux environment is depicted in the diagram below. There are several benefits gained from the separation of the two SharePoint environments.

  1. Information cannot be leaked from the confidential SharePoint environment to the general SharePoint environment (staying in line with the MLS Bell-LaPadula model standards that are common within federal and highly regulated sectors)
  2. Viruses and malware can’t be passed from the general environment to the confidential environment, increasing availability and mitigating possible risk to the highly sensitive information stored on the confidential SharePoint environment
  3. The general environment can only pass information that meets certain sensitivity label requirements from the general environment to the confidential environment
  4. Subjects can only access the internet from the general environment, not from the confidential environment, preventing possible information exposure

What is SELinux?

SELinux is an NSA-developed set of kernel modifications and supporting tools that implements finely tuned Mandatory Access Control based on the Flask architecture, integrating directly into the Linux kernel. Because of its direct hooks into the kernel, it has the option of implementing system-wide, administrator-managed policies that affect all objects in the base operating system that SharePoint runs on. The security objects are based on the concept of sensitivity labels, which deem whether content fulfills a certain category of information, be it secret, top-secret, classified, unclassified, or any number of other labels. The overall purpose of SELinux here is to minimize the possible breaches that could occur within a SharePoint environment.

SELinux provides a good parent environment for SharePoint when coupled with virtual machine technology, because the security of SharePoint needs to focus not only on the web application itself, but on the security of the root operating system. Providing administrative control through SELinux policies, which hook directly into the root kernel, delivers the highest level of security; otherwise it is left up to the users to decide where content resides, as opposed to an environmental, administrative decision that can assimilate organizational access policies through the use of sensitivity labels.

SELinux isn’t a separate OS, and can be used across multiple Linux platforms as a hooked architecture. It is implemented as a Linux Security Module (LSM), in that it rides on top of an independent OS and inserts hooks into the kernel. As processes execute on the server, they are checked against the SELinux policy database, which can choose to allow or halt each process.

Within SELinux, there are two major types of objects: transient and persistent. Transient objects exist within the system and are destroyed once their purpose has been served; therefore, they are never given a persistent SID (Security Identifier) from the memory pool. Persistent objects are those that do have a persistent SID. The SID plays a vital role in the decision-making process; the tuple of objects and subjects uses it during execution.
The permissions within SELinux are based on a tuple architecture similar to what we see with pluggable providers in SharePoint: there is an identity and a role. For each subject and object, there is an identity, regardless of the context of either asset. Linux typically stores accounts in /etc/passwd; in SELinux, however, each subject has its own identity database, promoting a powerful level of segregation that is highly administratively controlled. Users are also assigned roles, which define the actions a user can take, the actions considered legal by the system. Each role has privileges and actions assigned to it, which allows a many-to-many relationship between users and roles: there can be multiple roles containing multiple users, or the singular counterpart.
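In SELinux policy language, this many-to-many user/role relationship is expressed with `user` and `role` declarations. The following fragment is a hypothetical sketch: the user names and the `staff_t` type are invented for illustration, though the declaration syntax follows the standard policy language.

```
# Hypothetical policy declarations illustrating the many-to-many
# user/role relationship: alice holds two roles, bob holds one,
# and staff_r is authorized for the (assumed) staff_t domain.
user alice roles { user_r staff_r };
user bob   roles { user_r };
role staff_r types staff_t;
```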
SELinux ships with several default roles:

  • staff_r – users permitted to transition to sysadm_r
  • sysadm_r – users permitted for system administrators
  • system_r – system process roles
  • user_r – ordinary user role


A History of Flask

The history of Flask is rich within the federal sector. Flask grew out of work begun in 1992 through 1993 by the National Security Agency and Secure Computing Corporation, when design was kicked off for a controllable access control system combining Distributed Trusted Mach and LOCK. This evolved into the Distributed Trusted Operating System (DTOS), which spawned a viable system applicable to security-sensitive military and university research projects. The development was later rolled into the Fluke project under way at the University of Utah, when the NSA and SCC brought the DTOS security architecture into Fluke, combining their concept of security policies with the Fluke OS, which in turn, after further development, spawned the SELinux project.

Flask is a crucial portion of the overall execution because it allows a distinct separation of the security policy from the enforcement logic. There are two major components that make up the Flask architecture: the security server, which makes the relevant security decisions, and the object manager, which governs the security attributes of objects and enforces the decisions handed down by the security server. These build up to the concept of Mandatory Access Control, which allows transparent policy enforcement when defining the default security behavior.

Mandatory Access Controls

In several articles on this site, the concept of Mandatory Access Control is heavily discussed, since it plays such a crucial role in environments that hold both highly sensitive information and information fit for public consumption. For brevity, Mandatory Access Control spawns two main concepts: subjects and objects. A subject is simply a user or system that accesses an object, which is simply something that exists, such as a document, file, piece of information, or process. Each subject is grouped into a domain. These domains are classifications, such as secure or unsecure, classified or unclassified, sensitive or insensitive. The domains can be any number of terms that are typically industry-specific, though within the federal sector they are commonly classified and unclassified. Each object is assigned a type, and just as each subject is assigned a domain, the object's type determines which domain of user can have access to it.
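In SELinux policy language, the domain/type pairing described above is expressed as type-enforcement `allow` rules: access is denied unless a rule explicitly grants it. The type names below are hypothetical, chosen only to mirror the classified/unclassified example; the rule syntax is the standard TE form.

```
# Hypothetical type-enforcement rule: subjects running in the (assumed)
# secret_t domain may read files labeled with the (assumed) secret_doc_t
# type. No rule exists for unclassified_t, so those subjects are denied
# by default.
allow secret_t secret_doc_t : file { read getattr };
```

Because the policy, not the file's owner, grants this access, an administrator controls the mapping centrally, which is the key difference from discretionary UNIX or Windows permissions.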

In this, all files that SharePoint holds are still first bound to a Windows identity, or a custom user identity populated from a custom membership provider. The concept of item-level security will still exist for each particular SharePoint instance. It is important to realize, however, that there is not one singular SharePoint instance: there is one dedicated to unclassified, general information, and another dedicated to classified, sensitive information. Leveraging MAC within the classified SharePoint environment, a subject has the option to access a classified file, but the sensitivity label will prevent users that don’t meet the required type from opening or transferring the file. This is very different from typical OS permissions, such as in UNIX or Windows, where permissions are instead controlled by the owner of the file.

Decision Trees In SELinux

Decision trees in SELinux are built on the concept of security attributes, which were discussed previously. The various security attributes are merged to generate the global security context for the aggregate SharePoint environment. As we have covered thus far, there are three main properties that build out the security attribute: the user, the user’s role, and the type. When server processes are executed, a context is assigned to each. This context is fed from the security server, and defines such things as which security rules should be applied, which process spawned the child process, and which subject (user ID) triggered the process, whether a system account or a tangible user. In this, the security server is central to the decision making during the execution of an SELinux-controlled process. This may seem to be a lengthy process; however, a caching mechanism, the Access Vector Cache (AVC), is implemented to circumvent this otherwise cumbersome lookup. If the security context changes, the affected AVC entries are marked invalid, and new cache entries are generated.
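The caching behavior can be illustrated with a short sketch: decisions from the security server are memoized per (source context, target context, object class), and the cache is flushed when policy changes. This is an assumed, simplified model of the mechanism, not the kernel's actual AVC implementation; the class and function names are invented for the example.

```python
# Illustrative sketch of an Access Vector Cache: repeated permission
# checks for the same (source, target, class) triple are answered from
# the cache instead of re-consulting the security server.

class AccessVectorCache:
    def __init__(self, security_server):
        # security_server: callable (scon, tcon, cls) -> bool decision
        self._server = security_server
        self._cache = {}

    def check(self, scon, tcon, cls):
        key = (scon, tcon, cls)
        if key not in self._cache:
            # Cache miss: ask the security server and remember the answer.
            self._cache[key] = self._server(scon, tcon, cls)
        return self._cache[key]

    def invalidate(self):
        # Called on a policy change: all cached decisions become stale.
        self._cache.clear()


# Usage: the server is consulted once; the second check hits the cache.
calls = []
def toy_server(scon, tcon, cls):
    calls.append((scon, tcon, cls))
    return scon == "user_u:user_r:user_t"

avc = AccessVectorCache(toy_server)
avc.check("user_u:user_r:user_t", "system_u:object_r:etc_t", "file")
avc.check("user_u:user_r:user_t", "system_u:object_r:etc_t", "file")
# len(calls) == 1: only the first check reached the security server
```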

Downloading SELinux
There are several flavors and types of SELinux available for download depending on your environmental requirements (Gentoo, Debian, SuSE, etc.), all available at http://selinux.sourceforge.net/distros/.
