Item Level Security Model (ILS), Securable Objects (SO), and Content Structure (SharePoint Site Definitions, Lists, Features, and Solutions)

One of the largest sources of complaints in previous versions of SharePoint was the limited set of Securable Objects (SO) available, which only allowed end users to secure items at the library level. Within SharePoint 2007, the concept of Securable Objects is fully exposed, allowing end users to bind a specific identity to a specific object. Several different objects within MOSS are securable, producing an environment that allows a very granular level of permissions:
  1. Web (Site)
  2. Library
  3. List
  4. Item
Therefore, a user can come into a site and bind identities to any of these objects. Before walking through a scenario, note that several out-of-the-box (OOB) permission levels exist:
  • Full Control: Has full control.
  • Design: Can edit lists, document libraries, and pages in the Web site.
  • Contribute: Can view pages and edit list items and documents.
  • Read: Can view pages, list items, and documents.
  • Limited Access: Can view specific lists, document libraries, list items, folders, or documents when given permissions.
  • Approve: Can edit and approve pages, list items, and documents.
  • Manage Hierarchy: Can create sites and edit pages, list items, and documents.
  • Restricted Read: Can view pages and documents, but cannot view historical versions or review user rights information.
SharePoint, however, gives you the option of organizing users into groups that you can use to more easily manage the access granted to your site. These groups follow the concept of Active Directory (AD) groups in terms of aggregation, but are vastly different in functionality since they exist only at the SharePoint level. When working with Securable Objects, you can optionally bind a group instead of an individual person:
  • Approvers: Members of this group can edit and approve pages, list items, and documents.
  • Designers: Members of this group can edit lists, document libraries, and pages in the site.
  • Hierarchy Managers: Members of this group can create sites, and they can edit pages, list items, and documents.
  • Quick Deploy Users: Members of this group can schedule Quick Deploy jobs.
  • Restricted Readers: Members of this group can view pages and documents, but cannot view historical versions or review user rights information.
  • Members: Use this group to give people contribute permissions to the SharePoint site.
  • Owners: Use this group to give people full control permissions to the SharePoint site.
  • Visitors: Use this group to give people read permissions to the SharePoint site.
  • NT AUTHORITY\Authenticated Users: Windows built-in group which represents all authenticated users.
Each of these groups has a default association to the permission levels mentioned above. This allows the structure of a typical environment to be set up initially with little or no work:
SharePoint Group: Default Permission Level(s)
  • Approvers: Approve, Limited Access
  • Designers: Design, Limited Access
  • Hierarchy Managers: Manage Hierarchy, Limited Access
  • Quick Deploy Users: Limited Access
  • Restricted Readers: Restricted Read, Limited Access
  • Members: Contribute
  • Owners: Full Control
  • Visitors: Read
  • NT AUTHORITY\Authenticated Users: Limited Access

Scenario of Multiple Users and Item Level Security

We have two users, user A and user B, both heavy users of our collaboration environment running MOSS (SharePoint 2007). These users are in different divisions and geographically disparate locations; user A is a member of the marketing group, and user B is a .NET developer. However, they have been merged into a project group that is going to develop a custom SharePoint WebPart for reporting on marketing trends with regression analysis. The site is set up with the following SharePoint assets:

  • An announcements list for important project announcements
  • An event list for team building events
  • A task list for overall project tasks
  • Two document libraries, one for functional design specifications and the other for performance reports for management metrics
In order to isolate this site from the rest of the collaboration environment so that only the users who need access to it can get to it (in the current context, user A and user B will be the only people to access the site), we can either make a group for them and add them to it after assigning the appropriate permissions, or explicitly add them as users, with certain permission levels, to the site.
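The group route can also be scripted against the WSS 3.0 object model. The following is a sketch only, running on a server in the farm; the site URL, group name, and account names are hypothetical placeholders:

```csharp
using Microsoft.SharePoint;

// Sketch: create a project group and bind it to the site with
// Contribute rights. URL, group, and accounts are placeholders.
using (SPSite site = new SPSite("http://server/sites/project"))
using (SPWeb web = site.OpenWeb())
{
    // Create the group, owned by the current user.
    web.SiteGroups.Add("Project Members", web.CurrentUser,
        web.CurrentUser, "User A and user B's project group");
    SPGroup group = web.SiteGroups["Project Members"];

    // Add both project members to the group.
    group.AddUser(web.EnsureUser(@"DOMAIN\userA"));
    group.AddUser(web.EnsureUser(@"DOMAIN\userB"));

    // Bind the group to the site with the Contribute permission level.
    SPRoleAssignment assignment = new SPRoleAssignment(group);
    assignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Contribute"]);
    web.RoleAssignments.Add(assignment);
}
```

Binding the group rather than the individual accounts means membership changes later do not require touching the role assignments.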

Afterwards, there are sensitive materials that are being placed into the collaboration environment, notably things that the developer might not need the marketing group to see, and things that the marketing group may not want the developer to see. Recall that there are two document libraries in the site, one for development functional design specifications and another for performance reports that the marketing department as the project sponsor are going to submit to management regarding the work done by the developers.

In the development document library, we are going to detach permissions from the parent so that unique identities can be bound to the library or to objects in the document library. For a functional design specification, there are typically two versions that developers keep: one is "sanitized" and the other is "dirty". Dirty functional design specifications are usually what developers use between themselves, since the terminology in them may be past the comprehension of the client; therefore, we would bind a unique identity to this document by selecting "Manage Permissions" on the object and setting it to the developer's account. First, select the Manage Permissions link from the context menu of the object in order to bring up the Permissions page, which allows us to break down and assign permissions at a very granular level.
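Breaking inheritance and binding the developer's identity can equally be done through the object model. Again, this is a sketch under assumptions: the URL, library name, item index, and account are placeholders.

```csharp
using Microsoft.SharePoint;

// Sketch: give an item in the development library unique permissions
// bound to the developer's account. All names are placeholders.
using (SPSite site = new SPSite("http://server/sites/project"))
using (SPWeb web = site.OpenWeb())
{
    SPList library = web.Lists["Functional Design Specifications"];
    SPListItem dirtySpec = library.Items[0];

    // Detach the item's permissions from the parent library,
    // copying the current role assignments as a starting point.
    dirtySpec.BreakRoleInheritance(true);

    // Bind the developer's identity to the item with Contribute rights.
    SPUser developer = web.EnsureUser(@"DOMAIN\userB");
    SPRoleAssignment assignment = new SPRoleAssignment(developer);
    assignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Contribute"]);
    dirtySpec.RoleAssignments.Add(assignment);
}
```

Passing true to BreakRoleInheritance copies the parent's role assignments first, so you then trim away the identities that should not see the item.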

Site Definition and List Breakdown Structure
Site definitions (STS and MPS, along with the SPS-prefixed definitions) were the most typical way in WSS 2.0 to provide flexibility and control over an entire site, from design to WebPart provisioning, through the ONET.xml file. Site templates, although heavily manual to modify (whether the ASP.NET WebForms or the relevant XML files), were the most beneficial option in terms of performance, and gave power over the overall feel and functionality of the site. Those that have worked with these before know the pains of working with CAML (Collaborative Application Markup Language) in terms of validating and testing modifications and enhancements, and the repetitive changes that are needed to promote uniform branding across relevant files.
The Two Largest Differences in MOSS
The two largest changes to the concept of site definitions are the introduction of features and solutions, each of which serves a very different purpose, making SharePoint site developers' lives much easier. In order to create a site definition in WSS 2.0, it was often necessary to copy a complete site definition, i.e. making a copy of the STS folder and renaming it to something more relevant to your project, and then creating a new WEBTEMP.XML file that would make SharePoint aware of the new directory in order to populate it on the templatepick.aspx page. This creates an entirely new site definition, and therefore a fair amount of work to complete the task. The introduction of features cuts down on the amount of work needed for a developer to introduce changes into the SharePoint environment by componentizing packages to push against a site. Developers will be comfortable with the format of a feature, since it highly resembles that of a site definition, with similar file formats: XML files based on CAML, and ASP.NET WebForms. Instead of having to create a new site definition to create a list template, or make modifications to the default WSS site directory, features allow you to package one change and deploy that change to a single site, or to multiple sites, depending on your requirement.
The Old Way Of Doing Definition Switches
Many people are aware of the trick of switching a site definition by modifying the Site ID in the _SITES database in order to convert an existing site, which carries its own implications since it is not a Microsoft-supported technique and is not always 100% effective. Features solve this problem by allowing you to apply them to any existing site within a farm. The method of deployment can vary depending on requirements, and can be done through:
  • Command Line
  • Code
  • GUI
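As a sketch of the command-line route, WSS 3.0 ships with stsadm.exe operations for installing and activating features; the feature folder name and site URL below are placeholders:

```
stsadm -o installfeature -filename MyFeature\feature.xml
stsadm -o activatefeature -name MyFeature -url http://server/sites/project
```

Installation makes the farm aware of the feature; activation then turns it on at a particular scope, which is what the code and GUI routes accomplish as well.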
This obviously has implications for how development of site definitions should be structured and planned, since features can be referenced across a farm from any site. List types can be spread and referenced from differing sites, providing a container of reusability and cutting down on the amount of work required for a developer to make sites and site collections that are more intelligent and tailored towards business purposes. As a developer, this is a must-have feature with immediate ROI. Typically, to make new list types, the process described above (copying the STS site definition, etc.) was needed; leveraging WSS 3.0 allows you to develop a single feature without having to make new definitions, and reference that feature throughout differing portions of the farm.
Deploying New Site Definitions
Developing and deploying features is not that different from creating new site definitions, so it should be familiar to those who have created site definitions in WSS 2.0 (besides the introduction of the 12 hive). Features in WSS 3.0 are created by creating a folder in
C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES
When you create a new folder, you can place in it all the relevant feature files that you wish to include; however, the one file that MUST exist is feature.xml. The feature.xml file is the basis for the entire feature, providing its structure by exposing base properties and other supporting files. Within the feature.xml file, you can point to other relevant assets that build up your aggregate feature, such as rendering resources or assembly files. Your feature folder can also contain only the feature.xml file, depending on the requirements of your project and what type of logic is needed in order to complete your feature.
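A minimal feature.xml looks roughly like the following; the Id, titles, and manifest file name are placeholders (generate your own GUID with a tool such as guidgen.exe):

```xml
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="11111111-2222-3333-4444-555555555555"
         Title="Sample List Feature"
         Description="Sketch of a minimal feature definition."
         Version="1.0.0.0"
         Scope="Web">
  <!-- Optional: point to other assets that build up the feature. -->
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>
```

The Scope attribute controls where the feature can be activated, which ties directly into the scope discussion below.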
Breakdown A Feature, and Then Build A New One
Features are really easy to dissect because, unless it is a very intensive feature, the number of files that exist within them is typically very small. As mentioned before, this may be just the feature.xml file, which is the only file that is actually required for the feature to be implemented within the SharePoint 2007 environment. Provisioning this file out into your environment as described above is rather easy and unproblematic, and can be done in a variety of fashions depending on user preference.
Before you get started writing the feature, though, it is best to define exactly who you are writing the feature for. Is it for a site? Is it for the whole server to be able to activate? (Remember, this is going to be available to users throughout the SharePoint GUI, so it is best to plan the feature scope.)
There are four main scopes that exist in relation to features: site (Web), site collection (Site), virtual server (WebApplication), and server farm (Farm). The differences should be rather apparent; however, for the sake of completeness, here is a little breakdown.
Assume you are developing a list feature that establishes a different type of view that applies to a product inventory list within your company. This feature doesn’t have much application in relation to other sites since this list really only exists at one site within your entire environment, most likely on your inventory management site (or site collection, which we will get to in a minute).

Solutions, Site Definitions, and Features

The other major change to site definitions is the solution, whose structure should be very familiar to WebPart developers. The idea of a solution replaces that of using a .CAB file (deployed typically using the WPPackager method) for WebPart deployment, and extends the possibility of packaging other SharePoint assets such as site definitions. So why should the structure be familiar? Within WSS 2.0, a WebPart typically had a manifest file, a .dwp file, and a related assembly that acted as a container of business logic. The .dwp played the role of establishing the connection between the presentation layer and the assembly, describing things such as the Title, TypeName, and Assembly name. The manifest handled many roles, most importantly making the SafeControl entry in the web.config file so that the WebPart could actually run. A solution uses the same concept: an XML manifest file within a .CAB package describes the contents and the method of unpackaging and delivering the assets onto the server. Typically, however, with WebParts, the WPPackager method had to be run to drop the assembly and relevant assets onto each front-end web server. This is no longer the case, since WSS 3.0, as described in other sections, is more dependent on the database for storage of assets that would otherwise be stored in other locations in WSS 2.0. When the solution is deployed onto one of the servers in the farm, it is housed within the configuration database, after which a job is triggered that deploys the WebPart to the remaining front-end web servers in the SharePoint farm.
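As a hedged sketch, a solution's manifest.xml ties these pieces together: the assemblies with their SafeControl entries, plus any feature manifests to provision. All identifiers below are placeholders:

```xml
<Solution xmlns="http://schemas.microsoft.com/sharepoint/"
          SolutionId="66666666-7777-8888-9999-000000000000">
  <!-- Features packaged with the solution. -->
  <FeatureManifests>
    <FeatureManifest Location="SampleFeature\feature.xml" />
  </FeatureManifests>
  <!-- The WebPart assembly, with the SafeControl entry that the
       WSS 2.0 manifest/packaging step used to handle. -->
  <Assemblies>
    <Assembly DeploymentTarget="GlobalAssemblyCache"
              Location="SampleWebPart.dll">
      <SafeControls>
        <SafeControl Assembly="SampleWebPart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdef0123456789"
                     Namespace="SampleWebPart" TypeName="*" Safe="True" />
      </SafeControls>
    </Assembly>
  </Assemblies>
</Solution>
```

The package is then added to the configuration database with stsadm -o addsolution and pushed to the front-end web servers with stsadm -o deploysolution.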

Auditing List Changes With A Workflow

A common requirement within a collaborative environment is to implement a workflow so that critical assets can be routed and intelligently automated throughout an enterprise. More often than not, the asset is a Microsoft Office document of some nature, and in most businesses typically a Microsoft Word document. Wrapping certain documents and tasks within a defined and standardized process is typically a largely manual task, often resulting in redundant information being sent to all parties. The process may also live largely inside one person's head, not transparent to the rest of the parties involved in the business process, and therefore remain loosely defined and prone to mistakes.

Windows Work Flow Foundation (WinFX/.NET 3.0)
WSS 3.0 solves this common dilemma by introducing a new technology called Windows Workflow Foundation (part of WinFX/.NET 3.0), which forms a basis of methods at a workflow developer's disposal to build intelligent foundations that automate these business processes. There are many types of workflows, which break down further when examining how the workflow is structured around the human element. The two workflow types supported on the WSS 3.0 platform are sequential and state machine workflows, both of which can be tailored around arbitrary business processes, the latter being well suited for tasks that largely involve a human element. A sequential workflow is like a software development lifecycle: you define requirements, build the software, test, and go to production with the final build. It builds up a series of events that happen one after another, each executing when the previous one completes. A state machine workflow operates on different states: an event may occur if a certain state is adjusted, whereas the same event may not occur otherwise, establishing a grey area and therefore the introduction of the human element.
Using a workflow within a SharePoint site can be extended in many different fashions, such as on a document that exists within a document library or on an item that exists within a list. One of the most typical processes is an approval routing workflow, whereby a document is sent between different parties for signoff until it hits executive signoff, which ends the workflow. This can be routed in multiple ways: serial, where a document goes one by one through a workflow route, or parallel (also known as shotgunning), where the approval is sent to multiple parties for signoff after an event is triggered. Assume that there is a sales document that has to go through multiple parties, originating in the sales department, then going through the graphics department for design, the marketing department for corporate conformity checks, and the financial department for verification of the document's metrics and statistics, finally getting executive signoff before the document goes to production. This is an example of a serial route, where the document is routed to each department in single steps, getting signoff until it reaches executive management, where the final threshold of the workflow is satisfied and the cycle ends.
The built-in workflows when first using WSS 3.0 are fairly rudimentary, but they let you explore the options available through Windows Workflow Foundation, since they are built upon the same technology. One of those workflows implements the example given above: setting up an approval route on an arbitrary document that you wish to route through your company in whatever fashion you deem appropriate for the given requirement.
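To give a feel for the programming model, a bare skeleton of a sequential SharePoint workflow might look like the following. This is a sketch under assumptions: the class, field, and handler names are hypothetical, and a real workflow's activity tree would be composed in the Visual Studio workflow designer.

```csharp
using System.Workflow.Activities;
using Microsoft.SharePoint.Workflow;

// Sketch: skeleton of a serial approval workflow. In practice the
// activity tree (OnWorkflowActivated, CreateTask, OnTaskChanged, ...)
// is laid out in the Visual Studio 2005 workflow designer.
public class SerialApprovalWorkflow : SequentialWorkflowActivity
{
    // Populated by SharePoint when the workflow starts on an item.
    public SPWorkflowActivationProperties WorkflowProperties =
        new SPWorkflowActivationProperties();

    // Handler wired to an activity: runs when a department signs off.
    private void OnDepartmentSignoff(object sender, System.EventArgs e)
    {
        // Route the document to the next department in the serial chain.
    }
}
```

Each department's signoff in the serial route above corresponds to one step in this sequence; a parallel route would instead fan tasks out to all parties at once.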
Workflow Across Relevant MS Sister Server Systems
SharePoint by design has always had the ability to integrate with sister server platforms offered by Microsoft, and Windows Workflow Foundation provides the same types of facilities. Because Microsoft Exchange has close ties with how workflow functions within a company, it also provides the hooks so that the workflow can be integrated across relevant client applications. This extends further to the entire 2007 Microsoft Office suite, allowing you to build workflows intelligently integrated directly into your office applications.
Windows Workflow Foundation Run-Time Engine
The heart of SharePoint workflow is a component known as the Windows Workflow Foundation run-time engine, the same entity that is responsible for executing workflow elements across the entire WinFX stack. The reason there is one entity at the heart of WinFX is that it is specifically built to keep workflows alive during periods of inactivity that other programmatic elements might have trouble surviving, such as when your SharePoint server reboots. In essence, WinFX plugs into SharePoint like a puzzle piece: there are two sides of the equation that are unique to each other, but they share common edges provided by both ends. The workflow run-time is the base engine, whereas SharePoint is the higher-level functionality that plugs into it to implement its own custom routines. It is possible to mimic this type of functionality through the SharePoint API by exposing programmatic elements accordingly, so you are not restricted to building just one type of workflow to conform to a SharePoint standard. This is my task right now!
Fortunately, creating these workflows is easy through the Visual Studio 2005 interface; there is even a visual designer that cuts down significantly on the programmatic effort required.

Enhancing SharePoint With Forefront AV Vendors Aggregation (MEM) and a Proper Update Policy

Corporate SharePoint antivirus protection is only as good as the engines your AV vendors build and how well you assimilate updates to those scanning engines. For that reason, Forefront for SharePoint has built-in mechanisms that allow you to aggregate various AV vendors' scan engines into one cohesive unit, protecting your SharePoint content repositories in a method that conforms to your enterprise antivirus policy. This is one of the most important features of Forefront for SharePoint: you don't have to buy sister software platforms, you can use your current AV software platforms, and you can purchase additional AV software as your metrics from the Forefront for SharePoint reporting modules determine.

Default Forefront for SharePoint Engines to Use With SharePoint

If you have other engines that you wish to implement with Forefront Security for SharePoint, all licensed engines can be assimilated into your Forefront Security for SharePoint framework using the scanner updates option. Forefront for SharePoint is somewhat indifferent to the engines you wish to implement; arbitrary engine implementation is therefore one of the greatest features that Forefront for SharePoint promotes.

Updating Arbitrary Forefront for SharePoint Scan Engines

Executing updates on the assorted AV scan engines that Forefront for SharePoint has digested from miscellaneous vendors is a rather straightforward process, and is completed through the Settings menu in the FSSP client (the first pane when launching the FSSP client; see the other article for options for working with the FSSP client application). This will allow you to attach to the appropriate server and display its current update schedule, depending on your configuration. If you wish to update manually through this interface, that option is also available using the Update Now feature, which allows you to trigger an instantaneous update of your elected AV scanning engine. The relevant engines are updated by means of a component within Forefront for SharePoint called the Forefront for SharePoint Updater Server (AntEngUp), which facilitates the updating process for the relevant scan engines and pertinent AV signature files.

For each of the AV scanning engines within your SharePoint environment, simply select the server that you want to configure for updates. In the bottom portion, a small details pane will populate, presenting the:

  • Engine Version
  • Signature Version
  • Update Version
  • Last Checked
  • Last Updated

This should give you all the relevant information regarding the current status of the arbitrary scanning instance, which should allow you to make intelligent decisions about your scanning engine update policy. To bring an AV scan engine up to date, on a schedule or with the Update Now option, there has to be a particular path for the Forefront for SharePoint service to retrieve the AV update file from. This can be the FSSP FTP or HTTP site, or, if you have a central SharePoint server that captures relevant updates to populate throughout your SharePoint environment (typically still from the Sybari FTP or HTTP site), you can enter that information into the update path. The latter is fairly normal, moreover recommended, since it means that only one of your front-end web servers running SharePoint has to query outside of your network while the others remain unaffected.

Using a Proxy With Forefront for SharePoint

If you use a proxy within your network to gain external access, you can use the proxy settings dialog, invoked through the Use Proxy Server checkbox, which allows you to specify the:

  • IP
  • Port
  • Username
  • Password 

settings of your proxy server so that you can successfully receive updates with your network's proxy configuration.

Using the Remaining Forefront for SharePoint Dialog Options

The rest of the options within this dialog are pretty straightforward for tailoring an AV scanning engine update policy in Forefront for SharePoint. You can use the date option to set when to check for updates, the time of the update, and the frequency of the update; the repeat option to select a schedule of repeated update checks; enable updates for your arbitrary scan engine; and set up multiple servers to assimilate updates. It is also best to choose the option to perform updates when the Forefront for SharePoint service starts, so that whenever your AV services begin they have the most current scan engines. Your SharePoint antivirus policy is only as good as your scan engines; having an antivirus solution in place without a policy by which to update those engines doesn't offer adequate protection for your SharePoint environment. The way that you schedule the updates should be based on your corporate antivirus policy, so it should conform to your standards in an adaptive environment.

There will be a small lapse from when you initially get a new update within the Forefront for SharePoint framework while the new files are adapted to your environment. Your current scan jobs will temporarily suspend themselves while they assimilate the newly gained data.


Why Microsoft Data Protection Manager Will Replace Your SharePoint Tape Backups

* This article was written in the context of System Center Data Protection Manager 2006 (SCDPM), a technology now considered deprecated with the introduction of System Center Data Protection Manager 2007. Variations may exist. *

Typically, within organizations it is common to have a backup strategy where your critical SharePoint data is backed up to tape and either taken to a secure on-site location or to a designated off-site sheltered facility. Tape backups have been a reliable way to back up SharePoint data for an extended period of time; however, this type of disaster recovery, although typically reliable, tends to be slow when restoring crucial business processes.
The Three Types of Backups Processes
There are three main types of backups that exist for SharePoint (several others exist; however, these three are the relevant ones in the context of this particular article):
  • Disk-to-Tape (DtT)
  • Disk-to-Disk (DtD)
  • Disk-to-Disk-to-Tape (DtDtT)
The latter of the three is the most advanced, and the most relevant to a DPM implementation protecting a SharePoint environment. Although legacy networks are most familiar with DtT backups, this method alone is not advantageous for a SharePoint environment, which needs a more agile disaster recovery framework so that business processes and the environment that information workers are used to can be ensured.
The second of the three, Disk-to-Disk backups, is much different than Disk-to-Tape backups for one overriding reason: instead of writing backup material to a tape directly, it is copied to another server within your network, typically a network/file share. Similarly, within a Disk-to-Disk-to-Tape strategy, your SharePoint data is backed up to a network share, and then pulled off that share onto a tape for off-site storage, while being maintained on the file share for agile restores.
Why Combine Tapes with a Disk-To-Disk Strategy
Why are these two methods being combined anyway? It seems that, in the long run, a Disk-to-Disk-to-Tape strategy involves a mixture of steps that could otherwise be handled with a simple Disk-to-Tape backup strategy. While this is true, one of the benefits of implementing Microsoft Data Protection Manager is that it automates these steps in order to protect your SharePoint environment.
Picture first your SharePoint environment. Assume that you are involved with a medium-sized company of around 5,000 employees, each of whom is heavily dependent on your SharePoint implementation for line-of-business applications and for facilitating communications and collaboration within virtual teams in your organization. Your SharePoint implementation is a medium server farm consisting of two front-end web servers, a separate server that facilitates indexing and job functions, and a backend SQL server. Within your SharePoint implementation, several file shares are exposed as well, housing content that doesn't necessitate the revision controls provided by SharePoint, such as .iso and .exe installation files (maintained in the blocked files list to protect the portal from malware). Within your SharePoint environment you also have one server dedicated to DPM processes that help facilitate disaster recovery, providing full-fidelity backups for your 250 SharePoint site collections.
These site collections are critical for your business operations for multiple reasons, including however not limited to document repositories, revision controls, task management, and integration with a Team Foundation Server implementation providing your developers and program managers insight into your Software Development Lifecycle (SDLC) and work item tracking.
In the legacy backup strategy, your environment database files are placed on tape and moved off-site every morning at 2:00 a.m. in order to harvest the most recent data and not interfere with user activity.
Your CEO just uploaded a critical document to a document library whose subject is the quarterly fiscal budget, also including a PowerPoint presentation that is going to be shown to shareholders. Without these vital metrics, there will be less interest in the company and it is feasible that some of the shareholders may pull their funding and throw the company into a financial disarray.
And Then, a Catastrophe Occurs
Disaster strikes. Another user accidentally uploaded a document infected with a piece of malware that essentially turned your SharePoint server into a large paperweight, corrupting several pieces of functional SharePoint data and bringing down your farm. Your CEO is in a state of panic because of the implications of not having the presentation and document available, and he is holding you responsible.
You tell him not to worry, because as the SharePoint administrator you have rights to access the tape backups. However, the CEO uploaded the document at 9:00 a.m. this morning after working on it feverishly all evening, well after the 2:00 a.m. tape run, making it impossible for you to actually reload the document. His work has been lost, and now there is a possibility of shareholders not seeing the relevant metrics and losing interest in the organization.
With DPM, this situation could be avoided. Using DPM, you can make a full backup of your SharePoint data (after export) and file stores so that if any relevant data is lost at any time during the day, it can be restored, even in hourly increments. Once data has been modified, in coordination with the synchronization schedule it is pushed block by block into your backup files, ensuring that business-critical data can immediately be pushed back into your environment.
This means that the CEO will be able to bring up the corporate portal during his meeting with shareholders, and even though his file was uploaded to the document library at 9:00, it can still be restored in enough time that he will have all of the relevant assets he needs to assure his shareholders that they are making the right investment. Even better, since the CEO should have access to the relevant backups, he can even invoke the DPM UI and restore the backup himself. Other users can take advantage of this feature as well, depending on the permissions that you set up. If the CEO didn't have access (assuming he is not incredibly tech savvy and therefore his access is restricted to certain resources), you will most likely be responsible for restoring his relevant system state. This is easily done through the DPM UI, a Windows Explorer-type snap-in that is easy to use both for you as the administrator of the SharePoint environment and for your users.
The Shrinking Window of Data Backups
Eliminating this 2:00 a.m. tape process also eradicates the shrinking window of database backups. More and more data relating to your SharePoint environment needs to be backed up, and there is less and less time during a 24-hour window for you to create these backups. The shrinking window isn't a large concern when you have an implementation of DPM, since the data is constantly backed up for you to restore whenever problems may occur with your portal, which is quite useful for proper disaster recovery.
As described before, as the SharePoint administrator responsible for your network, you are responsible for proper disk allocation and how your backups are stored. Having space for multiple versions of large SharePoint environments might seem disadvantageous for an environment based on disk-to-disk data storage; it would take up a fair amount of space if you have multiple SharePoint site collections with large content repositories!
Adaptive Copies Within Microsoft Data Protection Manager
DPM handles this type of allocation quite nicely by using adaptive copies, moving only the changes. This saves disk space in an environment whose backups can be incredibly large, and which is often already pegged with network bandwidth allocation issues since users typically rely heavily on SharePoint for virtual team environments. Even more relevant to SharePoint is that backups of the file stores can still be made while users are hitting your portal environment, which is crucial for a communications and collaboration platform that is typically under constant use.
Unhealthy Storage Limits Within A Disaster Recovery System
DPM will also warn you if you are exceeding an unhealthy storage limit, which by DPM standards is a threshold of 75%. This is an atypical situation, and should realistically prompt two questions:
1. What are my physical storage options, do I have proper disk allocation?
2. Would my DPM configurations be causing this issue?
DPM has several inherent calculations built into it that will help you as the SharePoint administrator. Using a disk-to-disk-to-tape backup solution should further ensure that you do not get these types of messages, since your legacy data should eventually work its way off the disk-to-disk portion of the backup solution and move to tape for off-site storage.
Errors Due To Lack of Space
Within a SharePoint environment, the cause of these types of errors is that, within a platform that promotes virtual teams, data is changing constantly; DPM, upon initialization, makes intelligent estimates of how fast your backup data will change. If the data changes faster than this threshold, the shadow copies of the data will grow too quickly and cause DPM problems.
Solving this issue is quick and easy: add more space. The two options for adding more space are:
1. Add more disks
2. Increase the storage allocation of the DPM server