I ran into this issue recently at a client with a new, fairly basic ADFS environment acting as the identity provider through an ADFS proxy server relay. I couldn't wrap my head around why it wasn't working: the groups themselves were resolving correctly, and were even showing up in the identity data sources in the people picker. However, members of the AD security group (NOT the SharePoint group) were still being denied access to the SharePoint site.
To fix this issue follow these steps:
- Open the federation server box.
- Open the ADFS management console.
- Edit the claim rules for the relying party trust.
- Select the Issuance Transform Rules tab.
- Edit the Send LDAP Attributes as Claims rule.
- This should have:
- Claim Rule Name : Pass-through LDAP Claims
- Attribute Store : Active Directory
- Set: Token-Groups - Unqualified Names (don't use Token-Groups - Qualified by Domain Name) | Outgoing Claim Type: Role
- Set: User-Principal-Name | Outgoing Claim Type: UPN
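For reference, the same rule can be expressed in the ADFS 2.0 claim rule language and pushed with the ADFS PowerShell snap-in. This is a sketch only: the relying party trust name "SharePoint" is a placeholder, and since Set-ADFSRelyingPartyTrust replaces the whole rule set, compare against what the GUI generated before applying it:

```powershell
# Sketch: apply the pass-through LDAP claims rule via the ADFS 2.0 snap-in.
# "SharePoint" is a hypothetical relying party trust name; adjust for your farm.
Add-PSSnapin Microsoft.Adfs.PowerShell -ErrorAction SilentlyContinue

# tokenGroups is the LDAP attribute behind "Token-Groups - Unqualified Names"
$rules = @'
@RuleName = "Pass-through LDAP Claims"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory",
          types = ("http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
                   "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"),
          query = ";tokenGroups,userPrincipalName;{0}", param = c.Value);
'@

# NOTE: this overwrites any existing issuance transform rules on the trust.
Set-ADFSRelyingPartyTrust -TargetName "SharePoint" -IssuanceTransformRules $rules
```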
- Open the SharePoint Management Shell and run the following PowerShell script to create the token issuer. If you already have an issuer, remove it first or run the corresponding update commands instead. Make sure you replace $certPath, $realm, and any of the string literals within $ap.
$certPath = "<path to your token-signing certificate>"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("$certPath")
$map1 = New-SPClaimTypeMapping "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$map2 = New-SPClaimTypeMapping "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname" -IncomingClaimTypeDisplayName "Login" -SameAsIncoming
$map3 = New-SPClaimTypeMapping "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
$map4 = New-SPClaimTypeMapping "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" -IncomingClaimTypeDisplayName "Account ID" -SameAsIncoming
$realm = "urn:" + $env:ComputerName + ":adfs"
$signinurl = "https://yoursigninurl"
$ap = New-SPTrustedIdentityTokenIssuer -Name "ADFS" -Description "ADFS 2.0" -Realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map1,$map2,$map3,$map4 -SignInUrl $signinurl -IdentifierClaim $map1.InputClaimType
If you want to update an existing issuer rather than create a new one, you can use the following. (In the script above, you may not need all of those claim mappings, depending on how your relying party trust's issuance rules are set up.)
$issuer = Get-SPTrustedIdentityTokenIssuer "ADFS"
$map = New-SPClaimTypeMapping "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
Add-SPClaimTypeMapping -Identity $map -TrustedIdentityTokenIssuer $issuer
Now you HAVE to apply the hotfix from KB 2536591, located here: http://support.microsoft.com/kb/2536591/en-us
Boom! Groups will work.
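Once the issuer exists and the hotfix is in place, you grant the AD security group access on the SharePoint side through its role claim rather than by picking the group directly. A sketch, where the site URL, SharePoint group, and AD group name ("SharePoint Users") are placeholders for your own values:

```powershell
# Grant an AD security group access to a site via its role claim.
# The group name, URL, and target SharePoint group below are hypothetical.
$ap = Get-SPTrustedIdentityTokenIssuer "ADFS"
$claim = New-SPClaimsPrincipal -ClaimValue "SharePoint Users" `
    -ClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" `
    -TrustedIdentityTokenIssuer $ap
$web = Get-SPWeb "https://yoursharepointsite"
New-SPUser -UserAlias $claim.ToEncodedString() -Web $web -Group "Visitors"
```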
When working with timer jobs, feature receivers, and other SharePoint assets that use the service application architecture components you may encounter the error:
UnauthorizedAccessException: “The current user has insufficient permissions to perform this operation”
even when using a highly privileged account such as a domain administrator. While this post uses the Term Store as its example, the error can happen with anything that subscribes to the 2010 service application architecture.
Consider the sample code below, which attempts to instantiate a few objects from the Term Store:
TaxonomySession session = new TaxonomySession(site);
// TermStores is a collection; index by position or by Metadata Term Store name
TermStore store = session.TermStores[0];
// Bad news bear happens here
GroupCollection groups = store.Groups;
// Or other actions can cause this error
Group group = store.CreateGroup("Group");
TermSet set = group.CreateTermSet("Term Set 1");
set.CreateTerm("Term 1", 1033);
store.CommitAll();
The second half of the sample is there for posterity; focus on the first piece, since it is a pretty simple collection hydration. TaxonomySession is the entry point to all data associated with TermStore objects, as you can see from the use of the TaxonomySession.TermStores property. Once those proxy objects are in place, you can do pretty much whatever you need.
Because of how service application interaction works in SharePoint 2010, process accounts running things like timer jobs and feature event receivers still go through a service application proxy for connectivity. This was quite different in 2007, where things were more blanket and the timer process account was leveraged liberally across processes. In 2010 it becomes increasingly important to ensure that accounts are specifically designated for particular services.
Given this, you need to determine which account is actually running as SharePoint\System (your app pool and farm account; I know you can also mask accounts in a similar format via web application policies) and add it to the Term Store Administrators, or whichever service permission group you need. It should run just fine afterwards.
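That fix can also be scripted rather than clicked through Central Administration. A sketch, assuming the server object model is available and using placeholder values for the site URL and account name:

```powershell
# Add the application pool / farm account as a Term Store administrator.
# URL and account name are hypothetical; adjust for your farm.
$site = Get-SPSite "https://yoursharepointsite"
$session = Get-SPTaxonomySession -Site $site
$store = $session.TermStores[0]   # or index by your Managed Metadata Service name
$store.AddTermStoreAdministrator("DOMAIN\sp_apppool")
$store.CommitAll()
```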
Mirroring a TFS data tier (DT) is a very common practice within large environments, with redundancy and failover obviously being the primary catalysts for such an implementation, though there are plenty of other reasons it is beneficial.
At a current client of mine, where I was setting up initial, trial mirroring for the TFS instance, it became evident that the TfsIntegration Maintenance Job was causing issues: a long completion time that eventually resulted in TFS responding uncharacteristically slowly. This job does two things. First, it re-indexes the TFS databases (TfsIntegration, TfsWarehouse, TfsWorkItemTracking, and TfsVersionControl) via the Re-indexing:TfsIntegration step, which calls exec TfsIntegration.dbo.Prc_OptimizeTfsDatabases. Second, it removes deleted process templates by calling exec TfsIntegration.dbo.prc_deleteTemplates. The first of these is the culprit for this issue; if you run the stored procedure directly, you will see a Stop Request being issued, which results in the re-index not being executed.
Nine times out of ten, because index rebuilds and defragmentation generate very hefty transaction logs, the cause is a transaction log backup job (TLog backup job) running while the re-index executes. While TLog backups help keep the log file in check, they can cause these issues when run simultaneously with an index rebuild.
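One way to confirm the collision is to check the data tier for an in-flight log backup while the maintenance job is running, then reschedule one of the two jobs so they no longer overlap. A sketch, assuming sqlcmd is available and using a placeholder server name:

```powershell
# Check whether a transaction log backup is currently running on the TFS data tier.
# "TFSDATATIER" is a hypothetical server name; adjust for your environment.
sqlcmd -S TFSDATATIER -E -Q @"
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP LOG%';
"@
```

If this returns a row while Prc_OptimizeTfsDatabases is executing, move either the TLog backup schedule or the maintenance job window so the two no longer run at the same time.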