
Forefront UAG (TMG) Remote SQL Logging Database Size


A while ago I did a basic install of UAG and enabled both Firewall and Web Proxy logging to SQL. I configured a trunk and published an application. One month later I checked the size of the SQL database which holds the logging information: it was 1.4 GB… Not spectacular in itself, but taking into account that during that month I visited the published application maybe 5 times, it’s a lot…

So just out of curiosity I tried finding out whether any records were being logged which I didn’t care about.

In the database I selected the top 1000 rows and glanced at the rule names:

[screenshot]

At first sight I saw a lot of [System] rules. To be honest I really don’t care if my UAG servers are accessing the SQL server configured for logging, or if they contact Active Directory for authentication. So I executed the following query:

[screenshot of the query]

This would delete all entries related to the logging configured in the TMG System Policy rules. Here’s the size before and after:

Before:

[screenshot]

After:

[screenshot]

So I only gained about 122 MB. Not really as much as I’d expected. After looking at the SQL data some more I executed this query:

[screenshot of the query]

This would delete all events logged because the request was denied by the “default deny” rule in TMG. The database lost another 880 MB! Now we’re talking!

[screenshot]
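
Since the actual queries only survive as screenshots above, here’s a hedged reconstruction of what they roughly looked like. The table and column names are assumptions based on the default TMG SQL logging scripts (a FirewallLog/WebProxyLog table with a rule column), and the server, database and rule names are placeholders; double-check everything against your own logging database:

Invoke-Sqlcmd -ServerInstance "SQLSRV01" -Database "TMGLogging" -Query @"
-- Delete the entries generated by the TMG System Policy rules
-- (the [ needs escaping inside a LIKE pattern)
DELETE FROM FirewallLog WHERE rule LIKE '[[]System]%';
-- Delete the entries generated by the default deny rule
DELETE FROM FirewallLog WHERE rule = 'Default rule';
"@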

So it seems to me that a large amount of data is related to the “default deny” rule in TMG. If you feel like you don’t need this information, you could disable the logging for this rule in the TMG console:

[screenshot]

However this seems to be impossible:

[screenshot]

There are some articles explaining a way around this, but I don’t like those approaches. When troubleshooting you might want to have logging enabled again for this rule, and I feel more like ticking a checkbox than importing and exporting the configuration of a TMG server in a production environment. Here’s an example of an article explaining how to alter the default rule: ISAServer.org: Tweaking the configuration of Forefront TMG with customized TMG XML configuration files

So what I did, for now, is to create my own rule which is positioned just in front of the default deny rule. It’s almost exactly the same, but it has logging disabled:

[screenshot]

Whether or not this is a good idea you have to decide for yourself; it all depends on what you want the logs for. If you want statistics about your published applications, this might be fine. If you want statistics about the amount of undesired traffic hitting your UAG servers, you might prefer the default behavior. If you feel you need some, but not all, of the default deny rule logging, here’s an additional idea: configure a deny rule in front of the rule we just created, but configure that one to actually log requests. In this rule you can specify to only log requests concerning HTTP(S) traffic, or only requests hitting the external interface. This would avoid all the chatter happening on the LAN side.


Quick Tips: October Edition #1


Tip #1 (IIS): appcmd and IIS bindings:

Some more IIS (configuration) awesomeness: you can easily view the bindings for an IIS site using the following command:

  • appcmd list site /site.name:"MySite"

Now obviously just viewing isn’t that cool, but you can also set them! This is extremely useful for those environments where you work according to “Development -> Test -> Acceptance -> Production” or similar variants. I hate doing the same task (manually) multiple times.

So here’s how you can “push” your bindings to a site called “MySite” in IIS.

Syntax option #1: just a host header, available on all IPs (*):

  • appcmd set site /site.name:"MySite" /bindings:"http://mysite.contoso.com:80","http://mysite:80"

Syntax option #2: a host header bound to one specific IP:

  • appcmd set site /site.name:"MySite" /bindings:"http/192.168.50.11:80:mysite.contoso.com","http/192.168.50.11:80:mysite"

Mind the difference in the bindings parameter syntax: http:// in the first option versus http/ in the second.

Now what if you want to change one specific binding to another value?

  • appcmd set site /site.name:"MySite" /bindings.[protocol='http',bindingInformation=':80:mysite.contoso.com'].bindingInformation:192.168.50.11:80:mysite.contoso.com

In this example I changed the binding which was listening on all IP addresses to listen on one specific IP address only.

Or what about adding a binding without modifying the existing bindings?

  • appcmd set site /site.name:"MySite" /+"bindings.[protocol='https',bindingInformation='192.169.16.105:443:mysite.contoso.com']"

P.S. appcmd is an executable which you can find in the c:\windows\system32\inetsrv directory.
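
If you prefer PowerShell over appcmd, the WebAdministration module (which ships with IIS 7.5 on Windows 2008 R2) offers similar capabilities; a small sketch, assuming a site called "MySite":

Import-Module WebAdministration
# List the current bindings for the site
Get-WebBinding -Name "MySite"
# Add an extra binding without touching the existing ones
New-WebBinding -Name "MySite" -Protocol http -IPAddress "192.168.50.11" -Port 80 -HostHeader "mysite.contoso.com"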

Tip #2 (SQL), is SQL Full Text Search installed?:

One of the prerequisites when installing the FIM Service is that the SQL Full Text Search feature is installed on the SQL instance hosting your database. There are two easy ways to see whether this is the case:

  • Using the services.msc MMC: check whether there’s a service named SQL Server FullText Search ([instance]), where [instance] is the name of your instance
  • Using the following SQL query: IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled')) PRINT 'INSTALLED' ELSE PRINT 'NOT INSTALLED'

My source: serverfault.com: How to detect if full text search is installed in SQL Server

Tip #3 (Visual Studio): auto-increment version info:

When you create a class library in C# or your preferred language, you might want to have some version information on the DLL you build. There’s an easy way to configure your solution/project to auto-increment the build version every time you compile.

You can do this by directly editing the AssemblyInfo.cs file below the project’s Properties node.

[screenshot]

// Version information for an assembly consists of the following four values:
//
//      Major Version
//      Minor Version
//      Build Number
//      Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly:AssemblyVersion("1.0.*")]
//[assembly: AssemblyFileVersion("1.0.0.0")]

In words: make sure to set the AssemblyVersion to something like “1.0.*” or “1.0.0.*” and comment out the AssemblyFileVersion line. As I tested around a bit I noticed the following things (you can verify the result with the snippet after this list):

  • The AssemblyFileVersion does not work with the *
  • If the AssemblyFileVersion is not commented out, the AssemblyVersion is ignored and the AssemblyFileVersion “wins”.
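
To verify what the compiler actually stamped on the DLL, you can read both versions back with PowerShell; a quick sketch (the path is hypothetical):

# Assembly version (the auto-incremented one when using "1.0.*")
[System.Reflection.AssemblyName]::GetAssemblyName("C:\Build\MyLibrary.dll").Version
# File version (what the Explorer properties dialog shows)
[System.Diagnostics.FileVersionInfo]::GetVersionInfo("C:\Build\MyLibrary.dll").FileVersion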


Tip #4 (Certificate Authority): RPC traffic check

Often when playing around with certificates I’m hitting gpupdate like hell in order to trigger auto-enrollment. But if you want to make sure your CA is actually reachable from a given endpoint over RPC/DCOM, you can easily check this using the certutil utility, which is available out of the box.

  • certutil -ping -config "ca-server-name\ca-name"
  • Example: certutil -ping -config "SRVCA01\Customer Root CA"

UAG: Failed to run FedUtil when activating configuration


I’ve been testing a UAG setup where the trunk is authenticated using either Active Directory or Active Directory Federation Services. For this particular setup I had both configured some months ago. Now I wanted to reconfigure my trunk from AD to AD FS again. When I tried to activate the configuration I was greeted with the following error:

[screenshot]

In words: Failed to run FedUtil from location C:\Program Files\Microsoft Forefront Unified Access Gateway\Utils\ConfigMgr\Fedutil.exe with parameters /u "C:\Program Files\Microsoft Forefront Unified Access Gateway\von\InternalSite\ADFSv2Sites\secure\web.config".

[screenshot]

In the event log I saw the above error. I started with the most obvious things, like a reboot, but all in vain. I also tried creating a completely new trunk, but that didn’t work out either. Finally I started thinking that some patch was being uncool. I verified the installed updates and saw that a patch round had occurred a few days earlier. I uninstalled all patches from that day, and after a reboot I was able to activate the configuration again! Now you’re probably hoping I’ll tell you which specific patch is the culprit? Well, for now I don’t know that yet… But here’s the list of patches I uninstalled:

There are a lot… Good luck! I still might have hit something else, but I sure did try a few reboots before actually going the uninstall-patches route… and that one definitely did it for me.

FIM: Calling FIM Automation cmdlets from within a PowerShell Activity


I’m currently setting up a FIM solution where users should be preregistered for Self-Service Password Reset (SSPR). Their email address is managed in a system outside of FIM and is pushed to the correct attribute in the FIM Portal: msidmOneTimePasswordEmailAddress. After some googling I quickly realized that for a user to be properly registered, flowing the mail attribute wouldn’t be enough. So Register-AuthenticationWorkflow to the rescue! Using this PowerShell cmdlet you can perform the proper registration from an administrator’s perspective. In order to automate this, I combined it with a custom PowerShell activity in the Portal. This activity executes a PowerShell script with some parameters (attributes from the FIM Portal object) upon execution.

The trigger: whenever the msidmOneTimePasswordEmailAddress attribute is modified, the workflow will be executed.

The script (I left out some logging):

Param($domain, $name, $mail)
Add-PSSnapin FIMAutomation

try {
    # Clone the registration template of the SSPR authentication workflow
    $template = Get-AuthenticationWorkflowRegistrationTemplate -AuthenticationWorkflowName "Password Reset AuthN Workflow"
    $userTemplate = $template.Clone()
    # Fill the user's email address into the one-time password email gate
    $userTemplate.GateRegistrationTemplates[0].Data[0].Value = $mail

    Register-AuthenticationWorkflow -UserName "$domain\$name" -AuthenticationWorkflowRegistrationTemplate $userTemplate
}
catch {
    $errorDetail = $_.Exception.Message
}

However calling this script from within a workflow seemed to result in the following error:

Unexpected error occurred when registering Password Reset Registration Workflow for DOMAIN\USER with email address EMAIL, detailed message: The type initializer for 'Microsoft.ResourceManagement.WebServices.Client.ResourceManagementClient' threw an exception.

In the event log I found the following:

[screenshot]

In words:

Requestor: Internal Service
Correlation Identifier: e98bcce4-54e7-4fd3-a234-7f7b5c7146d3
Microsoft.ResourceManagement.Service: Microsoft.ResourceManagement.WebServices.Exceptions.UnwillingToPerformException: IdentityIsNotFound
   at Microsoft.ResourceManagement.WebServices.ResourceManagementService.GetUserFromSecurityIdentifier(SecurityIdentifier securityIdentifier)
   at Microsoft.ResourceManagement.WebServices.ResourceManagementService.GetCurrentUser()
   at Microsoft.ResourceManagement.WebServices.ResourceManagementService.Enumerate(Message request)

Somewhere I found a forum thread or a wiki article which suggested modifying the FIM Service configuration file. The file is located in the FIM Service installation folder and is called Microsoft.ResourceManagement.Service.exe.config. The section we need to modify:

  • Before: <resourceManagementClient resourceManagementServiceBaseAddress="fqdn" /> (depending on your installation it can also be localhost)
  • After: <resourceManagementClient resourceManagementServiceBaseAddress="http://fqdn:5725" /> (depending on your installation, use the FQDN or localhost)

After retriggering my workflow I received the following error:

[screenshot]

In words: GetCurrentUserFromSecurityIdentifier: No such user DEMO\s_fim_service, S-1-5-21-527237240-xxxxxxxxxx-839522115-10842

This is easily resolved by adding the FIM Service account as a user in the Portal. I’d make sure it’s filtered in the FIM MA, or at least double-check that no attribute flows can break this AD account.


Pass an Array to a PowerShell Script in a TFS Build Definition


In one of my current projects I had to configure a TFS deployment for the customizations I wrote for AD FS and FIM. Setting up a deployment (build) in TFS seems pretty straightforward. The most complex thing I found was the XAML file which contains the logic to actually build & deploy the solutions. I picked a rather default template and cloned it so I could change it to my needs. You can edit such a file with Visual Studio, which gives you a visual representation of the various flows, checks and decisions. After being introduced to the content of such a file by a colleague of mine I was still overwhelmed. The amount of complexity in there seems to shout: change me as little as possible.

As I have some deployments which require .NET DLLs to be deployed on multiple servers, I had two options: modify the XAML so it’s capable of taking an array as input and executing the script multiple times, or modify the parameters so I could pass an array to the script. I opted for the second option.

My first attempt consisted of adding an attribute of the type String[] to the XAML for that deployment step. In my build definition this gave me a multi-valued parameter where I could enter multiple servers. However, in my script I kept getting the value “System.String[]” where I’d expect something along the lines of Server01,Server02. This actually made sense: TFS probably has no idea it needs to convert the input to a PowerShell array.

So I figured I’d use a String parameter in the build and feed it something like @("Server01","Server02"), which is the PowerShell way of defining an array, hoping it would be interpreted as such.

[screenshot]

Well, it got passed along, but not exactly like we want it. The quotes actually screwed it up and it was only partially available in the script. So we had to do some magic. Passing the parameters to the script means you pass through some VB.NET code. This is some simple text handling code, and all we need to do for this to work is add some quote handling magic. Here’s my test console application which takes an array as input and makes sure I get the required amount of quotes on the output.

[screenshot]

Here’s the TFS parameter section where we specify the arguments for the scripts. The magic happens in the “Servers.Replace” section: we ensure that the quotes “survive” being passed along to the PowerShell script.

String.Format(" ""& '{0}\{1}' '{2}' {3} "" ", ScriptsFolderMapping, BuildScriptName, BinariesDirectory, Servers.Replace("""", """"""""))

In the GUI this goes into the “Arguments” field:

[screenshot]

This allows us to configure the build definition like this, which is actually pretty simple. Just put the array as you’d put it in PowerShell.

[screenshot]
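
For completeness, here’s a sketch of what the receiving end could look like. The parameter names are hypothetical; the point is that the array still arrives as plain text, so the script has to evaluate it into a real PowerShell array:

Param($BinariesDirectory, $Servers)

# $Servers arrives as the literal text @("Server01","Server02"),
# so evaluate it to obtain an actual array
$serverList = Invoke-Expression $Servers

foreach ($server in $serverList) {
    Write-Output "Deploying from $BinariesDirectory to $server"
}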

P.S. Make sure to either copy paste or count those quotes twice ; )

FIM 2010 R2: Create FIM MA error


Recently I came across the following error when trying to import a FIM Synchronization Server configuration:

[screenshot]

In words: Failed to connect to the specified database. The extension operation aborted due to an internal error in FIM Synchronization Service.

Not only was I seeing this when importing the configuration, but also when manually trying to create a FIM MA. Whilst the message says it has problems connecting, it has nothing to do with either the database or the FIM Service base address: even filling in random values results in this error immediately. So something had to be wrong with the Synchronization Service or the management console.

The odd thing was that I didn’t have this error when migrating from Development to Test or from Test to Acceptance. So what was off with the Production server?! After a bit of googling I stumbled upon this post:

TechNet Forums: FIM 2010 R2 Error when creating FIM MA

Well, I can tell you, I wasn’t going to install .NET 4.0 on the Dev, Test and Acceptance environments just because a freshly installed FIM server was behaving oddly. After looking around on the server and verifying all installed updates I couldn’t find anything specific, until I checked the installed software. Somewhere in the staging process of the server the “Microsoft .NET Framework 4 Client Profile” got installed. I didn’t see it on the other servers, so I went ahead, uninstalled it, rebooted, and voilà!

I’m not sure whether a lot of people will stumble on this, but for those that do, I hope this post helps!

Windows 2008 R2 Certificate Authority Application Pool Crashes


Recently I had a customer with a Certificate Authority in a lab environment and one in a production environment. At first sight both seemed to function correctly. However, SCOM (System Center Operations Manager, a monitoring solution) was showing various application pool crash events for both environments. The application pool belonged to the CA Web Enrollment pages. When investigating the event log on those machines we found the following events to be recurring:

[screenshot]

In words:

Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: scrdenrl.dll_unloaded, version: 0.0.0.0, time stamp: 0x4a5bc7f2
Exception code: 0xc0000005
Fault offset: 0x000007fef9402594
Faulting process id: 0x10b4
Faulting application start time: 0x01cdee76c8747cfb
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: scrdenrl.dll
Report Id: 795f74be-5a8c-11e2-8b2c-005056ac0079

And also:

[screenshot]

In words:

A process serving application pool 'DefaultAppPool' terminated unexpectedly. The process id was '4276'. The process exit code was '0xff'.

The events were recurring, but not at exact intervals. Sometimes it was about every 5 minutes, sometimes once an hour, or even once a day. The only thing we could say was that it occurred at least once a day. Besides those events, I couldn’t find anything out of the ordinary on those machines. So off to plan B: google gave me this: TechNet Forums: Prolific number of Windows error reports pertaining to 2008 R2 certificate services.

This seemed to match my problem exactly. I tried the suggestion: removed the CA web components/IIS, rebooted, reinstalled. The events quickly reappeared. The second thing I noticed there was that SCOM was also involved. Of course, it could be the one causing it or just the one noticing it… Disabling the SCOM agent didn’t help, so I started digging deeper and went through the IIS logs. I could see that whilst the CA isn’t visited that regularly, a lot of requests were still being logged at frequent intervals. The user agent of those requests mentioned SCOM, so it was pretty obvious this was part of some monitoring configuration.

I asked the guy responsible for SCOM, and besides the regular host-based monitoring, they had also added URL monitoring (for /certsrv). After disabling this URL monitoring the events stopped occurring. So somehow SCOM doesn’t play too nice with its requests. I’m not sure why it causes the application pool to crash, maybe it’s something which has to be fixed on the Certificate Authority side, but I’m glad I at least found out WHAT was causing it!

UAG 2010: SP2 ADFS Behavior Change


I’m currently involved in a project where we publish multiple SharePoint sites using UAG 2010. These SharePoint sites require users to be authenticated using claims, which are provided by an AD FS 2.0 farm. When we first applied SP2 for UAG in our lab, we noticed that our Single Sign-On experience was broken. When we visited a SharePoint URL, we expected to be greeted with the UAG logon form followed by the SharePoint site itself.

Here’s the UAG logon form:

[screenshot]

However, after choosing login we saw the following login form, coming from the “ADFS” server we defined in UAG. No matter what credentials we entered, we didn’t get past this form.

[screenshot]

After reverting the virtual machine to its snapshot (yes, I took one! ; ) ) I could see that SSO was working as expected again. So I switched back to the snapshot where SP2 was installed and started troubleshooting. I tried several things, but nothing worked, so in the end we logged a case with Microsoft. One of the things I had tried, and which the engineer asked me to do as well, was the following.

This is how I had initially set up the authentication of one of the SharePoint sites:

[screenshot]

This is how he asked me to set it up:

[screenshot]

After changing this setting we got rid of the second form, but now we got an additional “basic authentication” login popup… After sending over some debug logs, the engineer suggested reverting the snapshot to before SP2 was installed and reinstalling SP2. And I still have no idea why, but everything worked after reinstalling SP2.

Bottom line: when publishing a site which uses claims for authentication, you shouldn’t specify AD FS as an SSO server for the published site. The web application will redirect the user to the AD FS service, and UAG handles SSO towards the AD FS service. But with UAG 2010 pre-SP2 either approach worked fine, so I never questioned this.

Here you can see another (unrelated) change in the AD FS configuration when using claims-based authentication for the UAG trunks. Before SP2 you had to explicitly check “allow unauthenticated access to the web server”; after SP2 the checkbox is gone.

[screenshot: before SP2]

[screenshot: after SP2]


UAG: You have attempted to access a restricted URL


One of the things I noticed during my latest UAG project is that users seemed to be redirected to some sort of error page. In short: if they logged on to a SharePoint site published over UAG and then had their session time out, after clicking “OK” they’d be presented with the “You have attempted to access a restricted URL” error.

[screenshot]

Using the UAG Web Monitor I was able to get a more specific error:

A request from source IP address x.x.x.x, user to trunk secure; Secure=1 for application Internal Site of type InternalSite failed because
the URL /InternalSite/SessionTimeout.asp?site_name=secure&secure=1 must not contain parameters. The method is GET.

[screenshot]

So I started investigating the URL Set of the trunk where I was seeing the issue. I could indeed see that the rule didn’t expect (allow) parameters. The rule in question was for the URL /InternalSite/SessionTimeout.asp

[screenshot]

I peeked around and found out that the rule for /InternalSite/setpolicy.asp had these exact two parameters. This made adding them to our rule pretty easy: just click each parameter and choose the copy (and paste) option.

[screenshot]

After pasting the parameters we needed to perform some minor modifications:

  • Change the rule to Handle (instead of Reject) parameters
  • Modify the Existence to Optional

[screenshot]

Save and activate the configuration, and your users should no longer be presented with this unexpected message.

[screenshot]

IIS (Random) Kerberos Authentication Failures


Lately I assisted in troubleshooting an issue where users trying to start an App-V application were continuously prompted to enter their credentials. Entering the correct credentials didn’t help. We found out pretty fast that this had to do with IIS not being able to handle the Kerberos tickets. If we put the NTLM provider on top of the Negotiate provider, all was fine. Rebooting also did the trick. But these are just workarounds…

Eventually, after some googling, I stumbled across this KB article: KB2545850. It seems that whenever a server (computer) changes its password and IIS is restarted somewhere after that, the application pool can no longer decrypt the Kerberos tickets it receives, resulting in an authentication failure. The patch applies to both RTM and SP1 of Windows 2008 R2.
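
As a side note: the NTLM-on-top workaround can be scripted if you ever need it in a pinch. A sketch using the WebAdministration module; the site name is hypothetical, and the authentication section may be locked at server level in your environment:

Import-Module WebAdministration
$filter = "system.webServer/security/authentication/windowsAuthentication/providers"
$site = "IIS:\Sites\Default Web Site"
# Rebuild the provider list with NTLM before Negotiate
Clear-WebConfiguration -Filter $filter -PSPath $site
Add-WebConfiguration -Filter $filter -PSPath $site -Value @{value="NTLM"}
Add-WebConfiguration -Filter $filter -PSPath $site -Value @{value="Negotiate"}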

SharePoint: EncodeClaim: ArgumentException for claimType


I have a colleague who wrote a custom claim provider for a SharePoint deployment which uses AD FS as its authentication provider. The goal of the custom claim provider was to ensure that the people picker would resolve entries by searching our Active Directory. One of the other things we were working on was setting permissions on certain sites from within .NET code. Online we found several ways to get this task done, but my colleague kept running into an ArgumentException error.

I tried to simulate what he was doing in PowerShell. Our troubleshooting would benefit from this, as a typical refresh of the custom code on SharePoint takes several minutes. One thing we quickly found out was that it worked for a claim type we had available in SharePoint and which was also specified in the claims provider. However, we had lately registered a new claim type in SharePoint, and for some reason this one was not being accepted. Here’s how we registered it:

From a SharePoint PowerShell prompt:

  • $claimType = "http://schemas.customer.com/claims/entity"
  • $claimTypeSPS = "Entity"
  • $ti = Get-SPTrustedIdentityTokenIssuer "Cust ADFS Provider"
  • $ti.ClaimTypes.Add($claimType)
  • $ti.Update()
  • $map = New-SPClaimTypeMapping -IncomingClaimType "$claimType" -IncomingClaimTypeDisplayName "$claimTypeSPS" -SameAsIncoming
  • Add-SPClaimTypeMapping -Identity $map -TrustedIdentityTokenIssuer $ti

Typically this should be enough to start using the claim type “Entity” in SharePoint. However when we executed the following lines:

  • $mgr = Get-SPClaimProviderManager
  • $tp = Get-SPTrustedIdentityTokenIssuer -Identity "Cust ADFS Provider"
  • $cp = Get-SPClaimProvider -Identity "ADFSClaimsProvider"
  • $claim = New-SPClaimsPrincipal -ClaimValue "readPermission" -ClaimType "http://schemas.customer.com/claims/technicalrole" -TrustedIdentityTokenIssuer $tp
  • $mgr.EncodeClaim($claim)

We stumbled upon this error:

[screenshot]

In words: Exception calling "EncodeClaim" with "1" argument(s): "Exception of type 'System.ArgumentException' was thrown.
Parameter name: claimType"
At line:1 char:194
+ $claim = New-SPClaimsPrincipal -ClaimValue "readPermission" -ClaimType "
http://schemas.customer.com/claims/entity" -TrustedIdentity
TokenIssuer $tp -IdentifierClaim:$false;$mgr.EncodeClaim <<<< ($claim)
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

So we started digging into it. Long story short: as far as we can tell, from the moment you specify a custom claim provider on your security token issuer, you must include all claim types of the security token issuer in the custom claim provider. You can easily check which claim types are currently visible to SharePoint by executing the following lines:

  • $cp = Get-SPClaimProvider -Identity "ADFSClaimsProvider"
  • $cp.ClaimProvider.ClaimTypes()

Remark:

  1. Don’t confuse Get-SPTrustedIdentityTokenIssuer with Get-SPClaimProvider.
  2. Often the name of the custom claim provider is the same as its proper name, but without spaces. This property is also called the internal name.

Here’s a screenshot; I could clearly see that Entity was not included:

[screenshot]

And here’s a small code snippet:

[screenshot]

Now after adding the claim type in the code, and redeploying the necessary assemblies, things still didn’t work. As far as we can tell the following actions are also required:

  1. Restart your SharePoint PowerShell shell
  2. Run the Update() method on the Claim Provider Manager ($mgr = Get-SPClaimProviderManager; $mgr.Update())

[screenshot]

And now we can see that we can successfully encode the claim:

[screenshot]

Conclusion #1: whenever you write a custom claim provider, make sure to include all the claim types you want to use for that Token Issuer. It doesn’t matter whether you only want to add them programmatically or if you want them to be available for regular users through the graphical interface.

However, after this we were still having issues. We were trying to add the encoded claim as a member of a SharePoint group, which would allow people having the specified claim to access the site we were securing. Here are the commands we executed:

  • $tp = Get-SPTrustedIdentityTokenIssuer -Identity "Cust ADFS Provider"
  • $url = "https://sharepoint.customer.com"
  • $group = "Visitors"
  • $claimType = "http://schemas.customer.com/claims/entity"
  • $claimValue = "DepartmentX"
  • $web = Get-SPWeb $url
  • $SPgroup = $web.SiteGroups["$group"]
  • $principal = New-SPClaimsPrincipal -ClaimValue $claimValue -ClaimType $claimType -TrustedIdentityTokenIssuer $tp
  • $SPprincipal = New-SPUser -UserAlias $principal.ToEncodedString() -Web $web
  • $SPgroup.AddUser($SPprincipal)

So we were still having issues with the claim types we added. The error made me suspect the resolving capabilities of the custom claim provider. After some googling I finally found this post: Adding users and claims to a site from PowerShell. Here’s the relevant part:

[screenshot]

Conclusion #2: whenever you write a custom claim provider, make sure to provide FillResolve capabilities for your custom claim types. I don’t think you are obliged to add them to the FillResolve overload which takes a string as the value to resolve, but you do have to add them to the overload which takes an SPClaim as the value to resolve. You can then simply put in an if statement to auto-resolve everything of the claim types you don’t want to be looked up.

You can add this in the protected override void FillResolve(Uri context, string[] entityTypes, SPClaim resolveInput, List<Microsoft.SharePoint.WebControls.PickerEntity> resolved) method:

if (resolveInput.ClaimType == CustClaimType.Entity)
{
    // Auto-resolve: hand the incoming claim value straight back as a picker entity
    resolved.Add(GetPickerEntity(resolveInput.Value, resolveInput.Value, resolveInput.ClaimType));
    return;
}


FIM Password Portal Customization


Due to me being creative with the FIM 2010 R2 Password Reset feature, I had to change some of the strings which are displayed during a password reset action on the password reset site. Luckily the FIM product team has foreseen this and allows you to customize whatever string you like. They’ve got it detailed over here: TechNet: FIM 2010 R2 Portal Customization

I stumbled across some things though:

In the guide they seem to suggest that you have to name the resource files Strings.<country>-<locale>.resources. To be honest, I never tested naming the files like this. From other files (like the DLLs in the folder) I could see that there’s no need to add the <country> part. I simply named them Strings.<language>.resources and I can confirm that this works. This makes supporting different locales a lot easier. It’s also consistent with the filenames of the DLLs the language packs add.

Conclusion #1: if you are having trouble getting your customizations to work, try dropping the <country> part from the filename.

[screenshot]

However, once I got my files in place my customizations weren’t visible in IE. Luckily the event log showed me an error I could continue with:

[screenshot]

In words:

System.Xml: System.Xml.XmlException: Invalid character in the given encoding. Line 18, position 32.
   at System.Xml.XmlTextReaderImpl.Throw(Exception e)
   at System.Xml.XmlTextReaderImpl.InvalidCharRecovery(Int32& bytesCount, Int32& charsCount)
   at System.Xml.XmlTextReaderImpl.GetChars(Int32 maxCharsCount)
   at System.Xml.XmlTextReaderImpl.ReadData()
   at System.Xml.XmlTextReaderImpl.ParseText(Int32& startPos, Int32& endPos, Int32& outOrChars)
   at System.Xml.XmlTextReaderImpl.ParseText()
   at System.Xml.XmlTextReaderImpl.ParseElementContent()
   at System.Xml.XmlReader.ReadString()
   at System.Resources.ResXResourceReader.ParseDataNode(XmlTextReader reader, Boolean isMetaData)
   at System.Resources.ResXResourceReader.ParseXml(XmlTextReader reader)

I opened a file and triple-checked the XML syntax. All seemed fine. Then I saw this:

[screenshot]

After some googling and thinking I came to the conclusion that the encoding of the XML file was the problem. Using Notepad++ (a great utility, btw!) you can simply open the file, change the encoding and save it again.

[screenshot]

Conclusion #2: if you are having trouble, make sure the encoding of your resource files is set to UTF-8.
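
If you have a bunch of resource files to fix, you don’t have to open each one in Notepad++; a small sketch that re-saves a file as UTF-8 (the path is hypothetical, and I’m assuming the source file is in your system’s ANSI codepage):

$path = "C:\Temp\Strings.nl.resources"
# Read using the system's default (ANSI) encoding, then write back as UTF-8
$text = [System.IO.File]::ReadAllText($path, [System.Text.Encoding]::Default)
[System.IO.File]::WriteAllText($path, $text, [System.Text.Encoding]::UTF8)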

And some bonus information: if you want to copy-paste the default values for some of the strings, you can reverse engineer the DLLs provided by the product. I had some trouble finding ALL strings.

Some of them are located here: C:\Program Files\Microsoft Forefront Identity Manager\2010\Password Reset Portal\bin

[screenshot]

But there are also strings in DLLs which are not in the product folder but in the GAC: c:\Windows\assembly\GAC_MSIL\Microsoft.IdentityManagement.CredentialManagement.Portal.Gates.resources

[screenshot]

SharePoint: Encoded Claim Values


One of the things which pops up quite fast when working with SharePoint and claims-based authentication are the weird identifiers you find throughout SharePoint. In one of my troubleshooting sessions I stumbled upon this post, which provides a nice picture of the meaning of the several parts of an encoded claim.

The post : How Claims encoding works in SharePoint 2010

The picture: Link to original picture (larger)

One thing I came across as well is that when you encode a principal in PowerShell, the shell isn’t capable of displaying all encoded characters. Especially the 6th character is often displayed as a ?, while in reality it’s some exotic character. The following lines of PowerShell code should print the SharePoint representation of the given claim (with its type and value):

  • $mgr = Get-SPClaimProviderManager
  • $tp = Get-SPTrustedIdentityTokenIssuer -Identity "Cust ADFS Provider"
  • $cp = Get-SPClaimProvider -Identity "ADFSClaimsProvider"
  • $claim = New-SPClaimsPrincipal -ClaimValue "readPermission" -ClaimType "http://schemas.customer.com/claims/technicalrole" -TrustedIdentityTokenIssuer $tp
  • $mgr.EncodeClaim($claim)

However, if you want to see how the encoded claim really looks, I advise you to capture the output of $mgr.EncodeClaim($claim) and pipe it to Out-File, e.g.:

  • $mgr.EncodeClaim($claim) | out-file claim.txt

To conclude this post: I would advise never to hardcode any encoded claims in your scripts or code. After all, the 4th character is generated on a per-farm basis and could very well be different in another SharePoint environment. As long as you keep encoding the claim based upon the actual input (claim value, type & provider), you should be safe.

App-V and User Variables within the Bubble


This post is just for me: I want to be able to find this piece of information again whenever I might need it.

The problem: we wanted to avoid creating 20 App-V packages or entries in SCCM just because there are groups of people having different connection URLs. For now we’d like to have this URL as a property of the AD group we use to assign the App-V application.

The solution: create a pre-execute script which performs an LDAP query to retrieve the property of the group the user is a member of, and set that URL as a user variable. Within the App-V package you can then simply reference that user variable (%APP_URL% for example). A sketch of such a script follows below.
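
For reference, here’s a sketch of what such a pre-execute script could look like in PowerShell. The group naming convention and the attribute holding the URL (info) are assumptions, and as the clarification below explains, the variable ends up being unreadable inside the bubble anyway:

# Find the current user's distinguished name
$userSearch = [adsisearcher]"(&(objectCategory=user)(sAMAccountName=$env:USERNAME))"
$userDn = $userSearch.FindOne().Properties['distinguishedname'][0]

# Find the assignment group the user is a member of and read the URL from it
$groupSearch = [adsisearcher]"(&(objectClass=group)(cn=APPV-MyApp-*)(member=$userDn))"
$url = $groupSearch.FindOne().Properties['info'][0]

# Expose the URL as a user environment variable (%APP_URL%)
[Environment]::SetEnvironmentVariable('APP_URL', $url, 'User')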

The clarification: It seems to be by design that you can successfully set user variables with a pre-execute script, but you can’t read them within that bubble! Here’s the official answer: KB959452

The workaround: we dropped the user variable idea and changed our pre-execute script to generate the application’s config files instead.

SharePoint: Missing Server Side Dependencies: MissingFeature


In one of my projects we have a SharePoint solution which is deployed in a Dev, Test, Acceptance and Production environment. Throughout the environments we got some attention points here and there. The SharePoint Health Analyzer found several problems such as MissingFeature, MissingSetupFile, MissingWebPart and SiteOrphan, all of which fall under the Missing Server Side Dependencies rule. The funny thing is that in most cases these are false positives. Mostly there’s a deleted object referencing a feature, or a WebPart that hasn’t been retracted completely. Neither case has any side effects. But we can’t have the Health Analyzer unhappy, can we?

At first sight you find quite some good articles on how to resolve this, and there’s even a utility on CodePlex which is often referenced in this context.

However, what I didn’t realize at first is that since SharePoint 2010 SP1 there’s a recycle bin at various levels.

Example #1: in order to fix the MissingWebPart issue I navigated to http://site.custom.com/_catalogs/wp, which gave me an overview of all WebPart files present at that site. The one referenced by the Health Analyzer was present as well. As we no longer needed it, I deleted it. Simply deleting it did not make the Health Analyzer happy.

If you want this error to go away you also need to delete it from the site recycle bin:

[screenshot]

And also from the site collection recycle bin:

[screenshot]

Example #2: Besides files SharePoint also keeps entire sites in the recycle bin. So you might be having a MissingFeature problem because SharePoint retains a previous/test version of a site for you. You can easily retrieve those with PowerShell:

  • $site = Get-SPSite https://site.customer.com;
  • $site.RecycleBin | ?{$_.ItemType -eq "Web"};
  • $deletedWebs = $site.RecycleBin | ?{$_.ItemType -eq "Web"};
  • $deletedWebs | % {$site.RecycleBin.Delete($_.Id)}

The second line will print the deleted sites; the 4th line will actually delete them from the site recycle bin.

Example #3: my last example has to do with deleted site collections. It seems that every time you delete a site collection, it is also retained by SharePoint. If these reference features that you no longer wish to deploy, the Health Analyzer will be unhappy, as it finds site collections in the databases without the features being available on the farm. After some searching I found the following SQL query:

USE [SPS2010_ContentDB]
SELECT * FROM features
JOIN webs ON (features.webid = webs.id)
WHERE featureid = 'e8c6c808-ab0b-43ab-bdbc-a977753d754e'

The featureID comes from the Health Analyzer information. We then looked through some more tables, like:

SELECT * FROM [SPS2010_ContentDB].[dbo].[AllSites]

And we saw that there were a lot more entries than we’d expect. That’s when we realized there had to be some recycle bin for site collections. Some googling quickly gave us: SharePoint 2010: SP1 Site Collection Recycle Bin

In order to see which deleted site collections are currently known to SharePoint, use the command below. It will list ALL deleted site collections, but you can also scope it to a Web Application.

  • Get-SPDeletedSite

In order to delete all sites in the recycle bin:

  • Get-SPDeletedSite |Remove-SPDeletedSite

However, after executing the last command you’ll see that the Health Analyzer is still unhappy and that your SQL query still shows the entries. It seems there’s a timer job in SharePoint which handles the cleanup once a day. If you want to speed up the process, just do a “Run Now” on the Gradual Site Delete timer job, as shown below.
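
That “Run Now” can also be done from PowerShell; a quick sketch (the display name may differ per language pack):

# Find and kick off the Gradual Site Delete timer job(s)
Get-SPTimerJob | Where-Object { $_.DisplayName -like "Gradual Site Delete*" } | Start-SPTimerJob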

Some additional queries I used for the MissingSetupFile & MissingWebPart problems:

  • MissingSetupFile:

USE [SPS2010_ContentDB]
SELECT * FROM AllDocs
WHERE SetupPath = 'Features\EPS.SPS.Core_jQueryReferenceWebPart\jQueryReferenceWebPart\jQueryReferenceWebPart.webpart'

  • MissingWebPart:

USE [SPS2010_ContentDB]
SELECT * FROM AllDocs
INNER JOIN AllWebParts ON AllDocs.Id = AllWebParts.tp_PageUrlID
WHERE AllWebParts.tp_WebPartTypeID = 'e8c6c808-ab0b-43ab-bdbc-a977753d754e'


UAG 2010: This Server Cannot Join The Array


Lately I had to reinstall a UAG server which is part of a two-node array. The OS disk got corrupted somehow, so a reinstall was necessary. When I wanted to rejoin the newly installed server to the UAG array I got the following error: “This server cannot join the array.” The procedure to add a UAG server to an array is explained in detail here: TechNet: Joining a server to an array

Whilst the procedure is very basic, it’s really important you don’t miss these items:

  • Do not join a server to an array during Forefront UAG installation using the Getting Started Wizard.
  • Ensure that the Forefront UAG Management console is closed on the array manager server

Also, you might have to start some TMG services which were stopped during the initial (failed) attempt to join the array. This article is rather short, and if you follow the documentation carefully you shouldn’t run into this error. But somehow I did, and I couldn’t find much on the error message. So here it is, for future reference.

SharePoint 2010: Custom Claim Provider and the People Picker


Lately we got notified of a small bug in a claim provider we deployed on a SharePoint 2010 farm. In short: when using the “regular” people picker, results were being returned just fine; it allowed people to search for both user and group claims. Now, it seems that you can modify the behavior of the default picker. The default results in a picker called “Search for People and Groups”, but by specifying a parameter on the field you are defining, you can also force it to only “Search for People”. The latter, however, didn’t return anything.

Here’s a small overview of the allowed types: MSDN: SPFieldUserSelectionMode

  • UserSelectionMode = PeopleAndGroup

[screenshot]

  • UserSelectionMode = PeopleOnly

[screenshot]

We still didn’t have any clue why the people picker wasn’t returning anything, so we decided to double-check our code behind the searching and resolving. The only thing we found that could cause the people picker to return nothing was this line of code:

if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.FormsRole))
    return;   // bail out when no FormsRole entity type was requested (e.g. the "Search for People" picker)

To be honest, we had that line in there because we created our custom claim provider based upon this sample: MSDN: Claims Walkthrough: Writing Claims Providers for SharePoint 2010. This article has the following statement:

One thing that is important to note here—if you are not creating identity claims, then a claim can be used to provision almost any security item in a SharePoint site except for a site collection administrator. Fortunately, we do get a clue when a search is being performed for a site collection administrator. When that happens, the entityTypes array will have only one item, and it will be Users. In all other circumstances you will normally see at least six items in the entityTypes array. If you are the default claims provider for a SPTrustedIdentityTokenIssuer, then you do issue identity claims, and therefore you can assign one of your identity claims to the site collection administrator. In this case, because this provider is not an identity provider, we will add a special check when we fill the hierarchy because we do not want to add a hierarchy we know we will never use.

After reading that a few times we started putting things together. We added some verbose logging to print the entityTypes, and we found out that when searching in the “Search for People” picker, the only EntityType present was Users. This small if statement exists to prevent you from adding a non-identity claim as a site collection administrator. In our case we can easily adjust or even skip this test altogether, because we are using identity claims anyhow! We don’t allow picking a claim like “favorite color”, which more than one user could have; we set claims based upon the “UPN”, which is an identifier claim in our setup.

Kudos to @ArneDeruwe who originally wrote this claim provider for this project. Together we were able to tackle this problem.

SharePoint and IIS Bindings Fun


Lately we had to stop (and start) the SharePoint Foundation Web Application service and the Central Admin service on several servers. Afterwards we noticed that the bindings that had been active before were now totally different.

Some background: we have some SharePoint Web Applications which are made available over claims authentication. In order for these applications to be crawled by the Search service, we extended them so that they could also be made available over Windows authentication. Besides that, we also had some URL and TCP/IP port changes over the past year.

It seems that we made several changes to the Alternate Access Mappings (AAMs) (SharePoint Central Admin) and to the IIS bindings (IIS Management Console). And that’s where the problem lies. When you create a Web Application, or extend it, you are asked for some parameters (like host headers), and these are stamped into the SharePoint configuration database. These are the settings that are used when you start the “SharePoint Foundation Web Application service” on a given SharePoint server. When you then modify AAMs or IIS bindings directly, these settings become inconsistent.

Here’s how you can retrieve what’s currently known for your web application:

  • $web = Get-SPWebApplication -Identity https://site.customer.com
  • $iisDef = $web.GetIisSettingsWithFallback("default")
  • $iisDef.ServerBindings
  • $iisDef.SecureBindings

If you are interested in the settings (bindings) of an extended site, you have to pass the correct zone along. E.g. for the custom zone:

  • $iisCust = $web.GetIisSettingsWithFallback("custom")

Now, I tried editing these in several ways, but it seems this information really is read-only. The only way I found to modify it is to go through the Central Admin web application management section and choose the “remove SharePoint from this site” option when selecting a web application (below the delete button). Afterwards you can extend the web application again. I performed this for several sites and it’s not really that painful. Don’t forget though: your site becomes unavailable in the whole farm, as it’s deprovisioned and re-deployed afterwards. Obviously this can be scripted as well.

Here’s the source of my information: MSDN Blogs: How to properly change the Host Header URL of a web application in SharePoint 2010

Things I’ve learned #1: what I conclude from my tampering with web applications:

  1. There’s no way to specify what IP to listen on when creating a web application/ extending it
  2. There’s no way to select a specific certificate when creating a web application/ extending it (you can select “use SSL” though)

Given these two, I assume that you can modify both directly in IIS without breaking anything. Obviously, if you start the SharePoint Foundation Web Application service, it’s up to you to (re)do the proper IIS configuration.

Things I’ve learned #2:

  1. Always specify a host header: for port 80, for SSL and for custom ports.

As SharePoint doesn’t allow you to specify an IP, you’ll be blocked from creating another site with “use SSL” checked if you don’t specify host headers.

Things I’ve learned #3:

Perhaps a bit dirty, but we noticed that some of our sites weren’t being added in IIS. The lines below re-triggered the registration of the web application in IIS without having to stop and start the SharePoint Foundation Web Application service.
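
The exact lines didn’t survive here, but for what it’s worth, a commonly cited way to re-provision a web application’s IIS sites looks like this; treat it as a hedged sketch (hypothetical URL), not necessarily the snippet from the original post:

# Re-create the IIS site(s) for the web application on the farm's servers
$wa = Get-SPWebApplication -Identity https://site.customer.com
$wa.ProvisionGlobally()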

Quick Tip: Resolving a SID to an AccountName


When trying to avoid the usage of temporary profiles (see: Setspn: Temporary Profiles and IIS Application Pool Identities) I had to resolve some SIDs (Security Identifiers) to AccountNames. Using PowerShell this can easily be achieved:

  • $objSID = New-Object System.Security.Principal.SecurityIdentifier("S-1-5-21-xxxxxxxxx-yyyyyyyyyy-zzzzzzzzz-10472");
  • $objUser = $objSID.Translate( [System.Security.Principal.NTAccount]);
  • $objUser.Value
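
The reverse direction works just as easily; a quick sketch with a hypothetical account name:

  • $objName = New-Object System.Security.Principal.NTAccount("CONTOSO", "svc_pool");
  • $objSID = $objName.Translate([System.Security.Principal.SecurityIdentifier]);
  • $objSID.Value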

Happy resolving!

Windows 7 & Reverse Lookup DNS Registration [Update]


A while ago I wrote this post: Windows 7 & Reverse Lookup DNS Registration. One of the problems with that approach was that the command netsh interface ipv4 set dnsservers name="Local Area Connection" source=dhcp register=both only worked for the wired network adapter. We’re having more and more PCs connecting over a recently installed wireless network, and the machines connecting over VPN aren’t properly registering their reverse records either. The reason is that both wireless and VPN connections come with their own adapter with its own settings.

We played with the idea of scripting our way around this to apply this particular setting to all adapters, but I figured I’d give another attempt at finding the right GPO, which should do the same with way less hassle. And here it is:

[screenshot of the GPO setting]

The GPO which does the exact same thing as the netsh command, or as checking the checkbox Use this connection’s DNS suffix in DNS registration, is located below Computer Configuration > Administrative Templates > Network > DNS Client. It’s called Register DNS records with connection-specific DNS suffix. I have no idea why I didn’t manage to find this one 2 years ago, but better late than never. This GPO will set the registry value RegisterAdapterName to 1 below the Policies hive; the netsh command or the checkbox I mentioned before set the exact same value, but below the GUID of that specific adapter. You can verify the result as shown below.
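
If you want to check on a client whether the policy actually landed, you can read the registry value it sets; a small sketch, assuming the usual location of the DNS client policy keys:

Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" -Name RegisterAdapterName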

Note: whether or not it’s a best practice to let clients register their own reverse records instead of the DHCP server, I’ll leave open. In my situation DHCP is a non-Windows solution and our DNS zones only allow secure updates. Therefore we chose to have the clients do the updates themselves.
