Building Secure SharePoint Service Oriented Farms

There are several types of SharePoint 2010 server deployments that have to be considered before implementation occurs. In this post I will go through the common designs so that better, more informed decisions can be made.

The simplest design is a single farm with a single, default group of service applications. This default group is used by all web applications within the farm, so all sites have access to all of the service applications deployed within that farm. There are pros and cons to this scenario that need to be explored. In this design, all of the service applications are available to all of the web applications and all of the service applications are centrally managed. Farm resources are used efficiently and the architecture is simple to deploy.

However, the service application data can't be isolated, and departments or teams aren't able to manage service applications independently. A single service application group on a single farm is a pretty common configuration. It should be the initial setup, and it works well when you want to host a variety of sites for a given company on the same farm.
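To see what the default group contains on a given farm, you can inspect it from the SharePoint 2010 Management Shell. This is a minimal sketch; the proxy names it returns are specific to your environment.

    # List every service application proxy group in the farm and the proxies it contains.
    # The default group is the one new web applications use unless you pick a custom group.
    Get-SPServiceApplicationProxyGroup | Format-List FriendlyName, Proxies

    # Or grab just the default group and look at its member proxies
    $defaultGroup = Get-SPServiceApplicationProxyGroup -Default
    $defaultGroup.Proxies | Select-Object DisplayName, TypeName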

This type of configuration helps you meet the goals of optimizing the resources required to operate service applications on a farm and of sharing content and profile data across multiple sites. It is appropriate when you don't need to isolate service application data or performance for security reasons.

When you have a team in need of a dedicated service application, you can build an architecture that uses one or more customized groups of service applications. There are specific steps that you must take in order for this to function properly. They include:

  • Deploy specific service applications for dedicated use by one or more teams in the organization.
  • Make sure that the dedicated service applications aren't part of the default group.
  • Create at least one web application that uses a custom group of service applications. The SharePoint administrator chooses which service applications are included in any custom group (see the sketch after this list).
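As a sketch of those steps in the SharePoint 2010 Management Shell (the group, proxy, account, and web application names below are hypothetical, and I'm assuming the dedicated service application and its proxy already exist):

    # Create a custom proxy group that will hold the dedicated service applications
    $group = New-SPServiceApplicationProxyGroup -Name "Finance Services"

    # Add an existing, dedicated service application proxy to the custom group
    # (hypothetical proxy name; it was created outside the default group)
    $proxy = Get-SPServiceApplicationProxy | Where-Object { $_.DisplayName -eq "Finance Managed Metadata Proxy" }
    Add-SPServiceApplicationProxyGroupMember -Identity $group -Member $proxy

    # Create a web application that consumes the custom group instead of the default one
    New-SPWebApplication -Name "Finance Portal" -Port 80 -HostHeader "finance.contoso.com" `
        -ApplicationPool "FinancePortalAppPool" `
        -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\sp_apppool") `
        -ServiceApplicationProxyGroup $group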

It is possible to create more than one custom service application group. Service applications deployed for a specific purpose can share the same application pool, or you can deploy a separate application pool for each of them if you would like to do so.

A service application group can contain multiple Managed Metadata Service applications, so the sites within the web application can display social tagging, taxonomy, and the other features these service applications provide. Under this architecture, service data can be isolated, many organizational goals can be accommodated in the same farm, sites can be configured to use a subset of service applications, and teams can manage the service applications dedicated to their use. However, this environment is more complex to configure and manage, and supporting multiple instances of service applications consumes more farm resources, which can reduce overall performance.

An architecture that includes multiple service application groups works very well for any company that has several teams working with their own data. It is also a format that works well for partnerships. Configuring multiple groups of service applications lets each team have control over the service applications that are isolated for its use.

Under this type of design, there are several service applications that can be deployed for dedicated use by a team. They include:

  • Excel Services: Allows performance to be optimized for a given team and sensitive data to be isolated.
  • Business Data Connectivity: Teams can connect to their own line-of-business data systems and isolate that data from the rest of the organization.
  • Managed Metadata: Allows a team to manage its own keywords, hierarchies, and taxonomy. In SharePoint 2010 the results from multiple Managed Metadata service applications are combined, so they can be shared across many areas of a given organization (a provisioning sketch follows this list).
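As a rough sketch, a dedicated Managed Metadata service application for one team might be provisioned like this (the names, application pool, and database below are hypothetical):

    # Provision a Managed Metadata service application dedicated to a single team
    $mms = New-SPMetadataServiceApplication -Name "HR Managed Metadata" `
        -ApplicationPool "SharePoint Service Applications" `
        -DatabaseName "HR_ManagedMetadata_DB"

    # Create its proxy; leaving out -DefaultProxyGroup keeps it out of the default group,
    # so only a custom group (and the web applications using it) will see this service
    New-SPMetadataServiceApplicationProxy -Name "HR Managed Metadata Proxy" -ServiceApplication $mms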

It is also possible to leverage a dedicated enterprise services farm: a farm whose role is to host service applications for the organization. Several other farms can then consume services from that enterprise services farm. When all of the service applications are remote, you have a published-content-only farm, deployed without any local service applications; everything is consumed from the separate services farm. This configuration works very well for published content. It reduces the administrative effort required to host the content farm and lets the organization benefit from centrally managed service applications. It is a good configuration to follow if your organization has profiles, search, metadata, and other centrally managed resources you want to integrate, and you want to optimize the content-hosting farm's resources for serving content instead of running service applications.

Extending this concept, a combination of local and remote service applications can be used. Service applications that can't be shared across farms, including the client-related service applications, are hosted and optimized locally, while all of the cross-farm service applications are consumed from an enterprise services farm. Such farms can consume services from more than one remote farm; for example, the Managed Metadata service might come from a specialized department farm, integrating the taxonomy and social tagging managed by that department. When there are multiple Managed Metadata service applications in place, one of them has to be designated as the primary service application, which hosts the corporate taxonomy. The others are considered secondary and contribute additional data alongside the primary service application's data. By default, Web Parts display data drawn from all of the Managed Metadata service applications.
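On the consuming farm, the department's Managed Metadata service would be connected by creating a local proxy against the published service. This is only a sketch: the topology service address is hypothetical, the published URI is a placeholder you copy from the publishing farm, and it assumes a trust has already been established between the two farms.

    # Discover the service applications published by the department farm
    # (hypothetical topology service address; farm trust must already be configured)
    Receive-SPServiceApplicationConnectionInfo -FarmUrl "https://dept-farm.contoso.com:32844/Topology/topology.svc"

    # Create a local proxy that points at the published Managed Metadata service application,
    # then add that proxy to whichever proxy group should consume it
    New-SPMetadataServiceApplicationProxy -Name "Dept Managed Metadata (remote)" `
        -Uri "<published service application URI copied from the publishing farm>"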

This configuration is recommended when you want to optimize administrative and farm resources at the enterprise level for hosting services, optimize resources at the farm level for hosting collaborative sites, and integrate organization-wide metadata, search, profiles, and other centrally managed resources alongside the metadata created by a specialized team.

A mix of local and remote service applications suits organizations that have specialized departments. It lets an architect isolate specific service data while still managing most service applications centrally, and it lets a specialized team manage its own metadata separately from the rest of the organization. This is a best-practice design when the requirements mandate that specific service data be isolated and managed separately from the rest of the organization, and that a specialized team manage its own metadata.

There are times when you may want to deploy specialized service farms. They optimize farm resources for a specific type of service application and make it possible to scale up hardware to optimize performance for that particular service application.

Search is the primary example of a service application that may require a dedicated farm, because search has unique performance and capacity requirements. When the search service application is offloaded to a dedicated farm, the remaining resources can be optimized for all of the other cross-farm service applications.

Service applications can be shared across farms anywhere in the organization; sharing is not limited to enterprise services farms. There are some scenarios where you may want to consider doing so. They include:

  • Providing enterprise-wide service applications without the need for a dedicated enterprise services farm
  • Sharing resources across farms while avoiding duplication of service applications that have already been deployed (a publishing sketch follows this list)
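On the farm that hosts the service application, publishing it for cross-farm consumption is a single cmdlet once farm trust is in place. A minimal sketch; the service application name below is hypothetical.

    # On the publishing farm: share an existing service application with other farms
    $svc = Get-SPServiceApplication | Where-Object { $_.DisplayName -eq "Enterprise Managed Metadata" }
    Publish-SPServiceApplication -Identity $svc

    # The URI that consuming farms need is exposed on the service application object
    $svc.Uri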

List View Thresholds And Blocked Operations In SharePoint 2010

There have been several past posts that deal with the list view threshold, such as here. When a list exceeds the list view threshold, some operations are blocked. The big problem with this is that the default list view can't be used to access the list, bad news bear! Views have to be properly configured before they can work with a large list. The list view threshold blocks database operations that affect more items than the threshold allows, not just operations that return more than that number of items.
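The threshold values live on the web application object and can be inspected from the SharePoint 2010 Management Shell; the URL below is a placeholder for your own web application.

    # Inspect the throttling settings for a web application (URL is hypothetical)
    $webApp = Get-SPWebApplication "http://intranet.contoso.com"

    $webApp.MaxItemsPerThrottledOperation              # the list view threshold, 5,000 by default
    $webApp.MaxItemsPerThrottledOperationOverride      # threshold for auditors and administrators, 20,000 by default
    $webApp.MaxItemsPerThrottledOperationWarningLevel  # level at which a warning appears, 3,000 by default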

There are two classifications that come into the picture when you have large lists: "List Exceeds The List View Threshold" and "Container Exceeds The List View Threshold".

Operations can be blocked when the size of the entire list exceeds the list view threshold, even if the items are organized into folders. These include managing and checking versions, operations on all items, and recursive queries. Views that return all items without folders can also be prevented, as can operations that affect the complete list, such as adding a column or deleting indexes.

Operations can also be prevented when a folder in the list contains more items than the list view threshold allows; you won't be able to rename or delete that folder, so you do need to be careful. The list view threshold can prevent you from performing some common actions when you set up your list, which is why you should configure the columns and indexes for a list before its size grows beyond the list view threshold (a small indexing sketch follows).
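Here is the indexing sketch referenced above: adding a column index from PowerShell while the list is still small. The site URL, list name, and column name are hypothetical.

    # Index a column while the list is still below the list view threshold
    $web  = Get-SPWeb "http://intranet.contoso.com/sites/team"   # hypothetical site
    $list = $web.Lists["Project Documents"]                      # hypothetical list

    $list.FieldIndexes.Add($list.Fields["Modified"])   # add a single-column index
    $list.Update()
    $web.Dispose()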

If a list might exceed the list view threshold, plan to configure it correctly and set up views and navigation options well in advance. Lists can still grow beyond the list view threshold, and that will require some action from you; for example, creating a column or indexing a column in a large list takes time. Operations that the list view threshold prevents can be performed during the daily time window, or by farm and computer administrators.

These operations need to be planned well in advance. If the list is too big you will need to use the daily time window, and an administrator with the right privileges may be needed to perform the necessary operations. A list can also become so large that some operations time out when performed through a Web browser (a sketch of the daily time window configuration follows).
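Here is the daily time window sketch referenced above, set on the web application. The URL and the window itself are hypothetical, and the property names are the SPWebApplication throttling settings as I recall them, so verify them in your environment before relying on this.

    # Open a daily window at 10 PM, two hours long, during which throttled operations are allowed
    $webApp = Get-SPWebApplication "http://intranet.contoso.com"    # hypothetical URL

    $webApp.UnthrottledPrivilegedOperationWindowEnabled = $true
    $webApp.DailyStartUnthrottledPrivilegedOperationsHour = 22      # window opens at 22:00 local time
    $webApp.DailyUnthrottledPrivilegedOperationsDuration = 2        # window stays open for two hours
    $webApp.Update()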

List Exceeds The List View Threshold

  • Add/Remove/Update a list column: Blocked for all column types, including lookup and calculated columns. Some updates, such as a name change, aren't blocked because they don't affect all of the items in the list.
  • Add/Remove/Update a list content type: Affects every item in the list, so it is blocked if the list has more items than the list view threshold.
  • Create/Remove an index: Blocked for any list that has more items than the list view threshold because it affects each item in the list.
  • Manage files: The non-indexed query fails for any list that has more items than the list view threshold.
  • Non-indexed queries: Includes filters and sorts on non-indexed columns. The operation fails if the list is larger than the list view threshold, because without an index a full scan of the list occurs; all items are scanned and folders are ignored.
  • Cross-list query: Includes the queries issued by the Content Query Web Part. It follows the list view threshold for auditors and administrators, which defaults to 20,000 items; operations above that threshold fail.
  • Lookup columns that enforce relationship behavior: You can't create a lookup column of this kind if the list it references contains more items than the list view threshold.
  • Delete a list: Blocked if the list has more items than the list view threshold because it affects every item in the list.
  • Delete a site: Affects all of the items in the site's lists, so it is blocked if a list contains more items than the list view threshold.
  • Save the list as a template: Affects all of the items in the list, so it is blocked for any list with more items than the list view threshold.
  • Show totals in list views: Performs a query against all of the items in the list, so it is blocked for a list that has more items than the list view threshold.
  • Enable/Disable attachments for a list: Affects all of the items in the list, so it is blocked when the list has more items than the list view threshold.

Container Exceeds The List View Threshold

  • Delete/Rename/Copy a folder: Fails if the folder contains more items than the list view threshold because too many rows would be affected.
  • Queries that filter on non-indexed columns: Fail if the folder or list has more items than the list view threshold; without an index, a full scan of the entire folder occurs.
  • Fine-grained security permissions: Setting fine-grained permissions fails when the list or folder being set contains more items than the list view threshold because too many rows are affected. You can still use fine-grained permissions on individual documents in a large list, but you can't set permissions on the list or on folders that contain more items than the list view threshold.
  • Open with Explorer: Won't show any items, other than subfolders, if a container has more items than the list view threshold. If the root of the list contains more items than the list view threshold, "Open with Explorer" shows nothing. To use Open with Explorer, the items need to be organized into folders so that fewer items than the list view threshold sit in the root of any given container.

When Best Practices Aren’t Best Practices

I have a decent tap into the SharePoint community, and something that I see on practically a daily basis is people taking actions labeled as best practices without understanding the underlying issue or the potential consequences of implementation. This seems extraordinarily ubiquitous in SharePoint, way more so than in other areas. I believe that a lot of best practices are taken as holy doctrine because they abstract away a lot of the understanding conventionally required to implement those actions; for some reason, the resulting approach is then taken as divine creed.

Before you read any further, let me state up front that I am not saying that implementing best practices for SharePoint is bad; I am merely going to raise some concerns about how they are adopted and used.

I think there are three important points that have to be taken into consideration when deliberating, exploring, and implementing best practices for SharePoint. The first is about industry verticals, the second about company culture, and the third about understanding the underlying process.

More Often Than Not a Best Practice Will Not Account For Industry Verticals

Best practices are commonly constructed as boilerplate templates that can be stamped out at an arbitrary organization with similar results expected; implementing one should result in a more efficient approach to getting the best results from a particular task. However, blindly implementing a best practice without considering the industry vertical is one of the largest, and most frequent, mistakes I see. Completely understanding a particular vertical can take years, and the vast majority of large enterprises are dynamic, living entities subject to change, adaptation, and evolution. A person developing and promoting a best practice is generally agnostic to the industry, whereas industry-sensitive collaboration deployments require an operational SharePoint practice that responds to the industry and to industry-bound users' expectations. The primary obstacle is the tendency to view the issues a best practice tackles from a holistic perspective; to compensate for industry differences, it is better to package vertical specifics within cottage industries, since they essentially function independently and uniquely. The current SharePoint best-practices perspective assumes an organizational focus that accounts for variability at levels where it fundamentally does not exist.

More Often Than Not a Best Practice Will Not Account For Company Culture

Introducing collaboration software, and then enforcing its use, generally creates fear that the exercise will lead to criticism and loss of business for someone. One of the largest friction points develops when a SharePoint architect implements an information taxonomy following a best practice that presupposes it generalizes to all organizations. Such a best practice does not account for organizational carve-outs, which would instead call for procedures that delineate culture-sensitive practices for implementing the information architecture. The result is that SharePoint stops being a competitive piece of software within the organization, because it becomes difficult to demonstrate its pragmatic effectiveness.

More Often Than Not a Best Practice Will Not Teach You the Underlying Process

Best practices normally demonstrate a procedure that results in a more efficient approach to getting the best results, so very little attention is paid to the underlying considerations below the surface. This creates a considerable maintenance obstacle, since the organization may not have the expertise to measure and interpret variability within its SharePoint implementation. As a result, data collected to maintain the best practice commonly comes from immediate sources, such as encounter reports and built-in measures. When the underlying issues behind the best practice are understood, however, it is easier to generate meaningful information that allows SharePoint administrators to improve their indicators and overall maintenance.

I am not arguing that best practices are bad; I think they are good! I am simply pointing out a frequent trend I have been noticing that is disturbing, and that I think needs to be rectified by making accommodations when a best practice is implemented in an organization.
