Channel: ADdict

Windows 2012 R2 Preview: Active Directory Federation Services Installation Screenshots


Just for those interested, here are the screenshots of the ADFS installation on a Windows 2012 R2 Preview installation. Before 2012 R2 it wasn’t advised to install ADFS on a domain controller as the ADFS solution relied on IIS. But with the 2012 R2 version the IIS dependency is gone and Microsoft recommends installing ADFS on domain controllers. I think this will lower the bar for a lot of companies. Also the enhanced authentication options (multi-factor) seem really promising.

The installation:

[screenshots]

Remark: in the end my system didn’t need to reboot

[screenshots]

The configuration:

[screenshots]

Remark: small sidestep here: obviously I want to use Group Managed Service Accounts!

[screenshots]

Remark: lab only procedure: ensures Group Managed Service Accounts are available immediately
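For reference, the lab-only shortcut I’m referring to is most likely the backdated KDS root key; that is my assumption, since the original screenshot is gone. A minimal sketch:

# Lab only: backdate the KDS root key so Group Managed Service Accounts can be used immediately
# instead of waiting the normal ~10 hours for replication.
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))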

[screenshots]

The management console with the focus on the new Authentication Policies section

[screenshot]

A new Relying Party Trust type:

If I read the explanation correctly, this will allow you to publish non claims-aware applications over the new Web Application Proxy role.

[screenshot]

Remarks:

  • The option for a stand-alone ADFS server is no more. Either you install a single node farm or you install a real farm. Makes sense to me.
  • You still have the option to choose between a Windows Internal Database or a dedicated SQL Server database. This might be a hard choice. I’m not sure I’m happy to have Internal Databases running on my domain controllers. SQL, on the other hand, requires a cluster for proper availability, which might be quite expensive to sell to your customers.
  • A named certificate forces you to take the subject of the certificate as the Federation Service Name. A wildcard certificate allows you to pick freely as long as the wildcard is respected. It seems you can have additional dots in the wildcard part though. I advise against this as you’ll probably face certificate validation errors in your browser. Example: *.realdolmen.com allows you to select sts.sub.realdolmen.com.
  • If you want to compare with the Windows 2012 ADFS installation: vankeyenberg.be: ADFS Part 1: Install and configure ADFS on Windows 2012
  • The Authentication Policies section in the management console seems awesome. Very clear and it seems very easy to manage.

Windows 2012 R2 Preview: Web Application Proxy Installation Screenshots


For those interested in the look and feel of the new Web Application Proxy role, here are some screenshots of a fairly simple next-next-finish setup.

The installation:

[screenshots]

Remark: seems I’ll have to add a server to my lab environment

[screenshots]

The Configuration:

[screenshots]

The Management Console:

Open the Remote Access Management Console

[screenshot]

The Publish New Application Wizard:

Remark: read the explanation of the ADFS selection bullet, it’s fairly descriptive.

[screenshots]

Seems like basic internal <> external stuff.

[screenshot]

Remarks:

  • Active Directory Federation Services and Web Application Proxy can’t be combined on one server.
  • Active Directory Federation Services has to be installed in your domain before you can install the Web Application Proxy, as you need to specify it during setup.
  • Selecting Pass-Through on the Preauthentication screen will skip the Relying Party selection and then your application will handle the authentication. This will break your users’ SSO experience though (see the sketch below).
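For reference, the same publishing can be done from PowerShell. This is a sketch from memory of what the wizard does under the hood; the name, URLs and thumbprint are placeholders, and for pass-through you would use -ExternalPreauthentication PassThrough and drop the relying party name.

# Publish a claims-aware application through the Web Application Proxy (placeholder values).
Add-WebApplicationProxyApplication -Name "Claims App" `
    -ExternalPreauthentication ADFS `
    -ADFSRelyingPartyName "Claims App RP" `
    -ExternalUrl "https://app.contoso.com/" `
    -BackendServerUrl "https://app.contoso.com/" `
    -ExternalCertificateThumbprint "<thumbprint>"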

AX 2012: Validate Settings Fails for Report Server Configuration


Setting up AX 2012 Reporting involves installing SQL Reporting Services and registering that Reporting Services installation in AX. One of the issues we were having was that we saw some problems deploying reports. In order to troubleshoot we tried the Test-AxReportServerConfiguration cmdlet.

[screenshot]

This cmdlet was telling us: the report server URL accessible: False. Hmm, that’s odd. We were pretty sure that all involved URLs (Report Server Manager & Report Server Service) were properly resolving and responding. When double-checking the AX Report Server configuration within the AX client we tried the Validate settings button:

[screenshot]

However, we stumbled upon the following error:

[screenshot]

In words:  Exception has been thrown by the target of an invocation. The SQL Server Reporting Services server name RPRTAX1B.contoso.com does not exist or the Web service URL is not valid.

As it kept complaining about the URL I started to suspect what could be the root cause. From earlier experiences (Dynamics Ax 2012: Error Installing Enterprise Portal) I know that not all AX components can properly handle host headers. This is how our SQL Reporting Services host header configuration looks for the Report Server URL:

[screenshot]

Yup, we got multiple entries. The reason is somewhat historical and not relevant here. It seems that AX, when validating the settings, checks whether the Report Server URL matches the first host header in the SQL Reporting Services configuration. So I went ahead, removed all entries but the good one, OK’ed and applied. After that I re-added them. This ensured the URL AX knows of was on top of the list. And yup, everything started working!

A colleague from the AX team showed me which code was performing this check. Here’s the offending code:

public boolean queryWMIForSharePointIntegratedMode(str serverName, str _serverUrl)
{
    boolean result = false;
    try
    {
        result = Microsoft.Dynamics.AX.Framework.Reporting.Shared.Proxy::QueryWMIForSharePointIntegratedMode(serverName, _serverUrl);
    }
    catch (Exception::CLRError)
    {
        // We must trap CLRError explicitly, to be able to retrieve the CLR exception later (using CLRInterop::getLastException() )
        SRSProxy::handleClrException(Exception::Error);
        result = false;
    }
    return result;
}

And that’s how I come to part two. When creating Report Server configurations within AX, one might be wondering how to register a load balanced Reporting Services setup…

Here’s the configuration extract of the server name & URLs for such a configuration. Now how do we handle the fact that there are 2 servers and one (virtual) load-balanced URL?

[screenshot]

In a load-balanced setup with 2 reporting servers you’ll typically have 3 configurations FOR EACH AOS instance:

  1. RSServerA (Default Configuration: unchecked)
    1. Server name: ServerA
    2. Report Manager URL: axreports.contoso.com/reports
    3. Web service URL: axreports.contoso.com/reportserver
  2. RSServerB (Default Configuration: unchecked)
    1. Server name: ServerB
    2. Report Manager URL: axreports.contoso.com/reports
    3. Web service URL: axreports.contoso.com/reportserver
  3. RSVirtualServer (the Load Balancer) (Default Configuration: checked)
    1. Server name: ServerA
    2. Report Manager URL: axreports.contoso.com/reports
    3. Web service URL: axreports.contoso.com/reportserver

Now the clue is in the server name: this is the name which is being used to contact the actual Windows server for certain information. Like in the code above, the server will be contacted over WMI to read the requested setting. If you were to enter “axreports.contoso.com” as a servername, you’ll be seeing all kinds of errors. For starters typically your load balancer only balances port 80 or 443, but WMI uses other ports. So these connections will fail. As far as I learned from my AX colleague, the AOS instance can use the load balancer configuration entry, and you can use the node configuration for your report deployments. In that way, the server name probably doesn’t matter that much on the load balancer configuration item.

I hope I don’t sound too cryptic; if you’d like any further explanation, feel free to comment.

SCCM: Task Sequence / Software Updates Paused


Lately we had a ticket where a user was unable to execute task sequences from the Run Advertised Programs console on his client. FYI, we’re running SCCM 2007 R2. The error the user was facing was this one:

This program cannot run because a reboot is in progress or software distribution is paused.

In the smsts.log file on the client (c:\Windows\System32\CCM\Logs\SMSTSLog\smsts.log) we saw the message “Waiting for Software Updates to pause”. So it seems that besides the Task Sequence we wanted to execute, the client was also performing software updates in the background.

[screenshot]

In the UpdatesDeployment.log we found something like “Request received – IsPaused” and “Request received – Pause”.

[screenshot]

Somehow we couldn’t do much with this information. We hit a wall as we had no clue which updates were installing or why they were hanging. So we continued our search. After some digging we found the following information in the registry:

[screenshot]

So SCCM keeps track of the Task Sequence currently executing below HKLM\Software\Microsoft\SMS\Task Sequence. It will only allow one at a time. When comparing the registry entries with a working client we saw a small difference. The problem client didn’t have a “SoftwareUpdates” registry entry. As far as I can tell this is SCCM’s system of letting a Task Sequence know whether it can execute or not. In order to execute it needs two “cookies”: one for Software Distribution and one for Software Updates. If it has both, it means it has the necessary “cookies” to get started.

The actual value of the cookies can also be found in the following location: HKLM\Software\Microsoft\SMS\Mobile Client\Software Distribution\State

[screenshot]

There we could see that execution was indeed paused as this entry had a value of 1. This was consistent with the error we were seeing in the Run Advertised Programs GUI. A lot of articles and blogs tell you to set it to 0 or delete it. We tried that, but it didn’t have any effect. And then I found the following forum post: http://www.myitforum.com/forums/Software-Updates-waiting-for-installation-to-complete-m221843.aspx With the information posted by gurltech I was able to perform the following steps:

Open wbemtest and connect to root\ccm\softwareupdates\deploymentagent

[screenshot]

Execute the following query: select * from ccm_deploymenttaskex1

[screenshot]

If all goes well you’ll find an instance:

[screenshot]

And now check the AssignmentID property

[screenshot]

This ID can be used to track down the deployment that is supposedly “in progress”. When opening the “Status for a deployment and computer” report and providing the ID we just found and the computer name, we couldn’t find any updates to be installed or failed.

[screenshot]

So I figured using the script to clear the deployment task from WMI couldn’t hurt much. Either it would reappear on a next software update scan cycle, or it would be gone forever. And indeed, after setting the IDs (AssignmentId and JobId) to 0 and recycling the SCCM client service we were able to execute Task Sequences again on that client. This situation might be very rare to run into, but I think it gives some insight into how SCCM works.
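For reference, the same WMI check can be scripted instead of clicking through wbemtest. A hedged sketch, assuming PowerShell is available on the client and using the namespace and class from above:

# Read the deployment task the Software Updates agent is tracking.
Get-WmiObject -Namespace "root\ccm\SoftwareUpdates\DeploymentAgent" -Query "SELECT * FROM CCM_DeploymentTaskEx1" |
    Select-Object AssignmentId, JobId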

Quick Tip: Use PowerShell To Browse Through An Event Log


When trying to troubleshoot AD FS claim rules, I often find myself going back and forth in the Security event log. But the interface doesn’t really allow you to easily see whether a message is relevant or not. Here’s a small PowerShell command, which probably can be optimized in many ways, that will print the last 60 events (starting from the most recent) that match the AD FS 2.0 Auditing source. Just press enter to go to the next event. Events are separated by a green dotted line.

get-eventlog Security -newest 60 | where-object {$_.Source -eq "AD FS 2.0 Auditing"}| % {write-host -foregroundcolor green "----------------------------------------------------";read-host " "; $_.message| fl}

[screenshot]

Or even a bit more elaborate: a small script which allows you to go down, but also back up if you missed something:

$events = get-eventlog Security -newest 60 | where-object {$_.Source -eq "AD FS 2.0 Auditing"}
$default = "d"
$i = 0
while($i -lt $events.count -and $i -gt -1){
    write-host -foregroundcolor green "------------------$i-----------------------"
    $events[$i].message
    write-host ""
    write-host ""
    $direction = read-host "Continue? u(p) or d(own) [$default]"
    if($direction -eq $null -or $direction -eq ""){$direction = $default}
    if($direction -like "u"){
        $default = "u"
        $i--
    }
    else{
        $default = "d"
        $i++
    }
    $direction = $null
}

You can just copy-paste this into a prompt; it’s not even necessary to create a ps1 file for this. Although I can only encourage you to modify this sample so you can find your needle in the haystack more easily!

ADFS: Certificate Private Key Permissions


Just as a reminder for myself. The following error might appear in the ADFS Admin log after a user is faced with the ADFS error page. The error is pretty cryptic and gives no real clues away.

Error event ID 364: Encountered error during federation passive request.

Additional Data

Exception details:
Microsoft.IdentityServer.Web.RequestFailedException: MSIS7012: An error occurred while processing the request. Contact your administrator for details. ---> Microsoft.IdentityServer.Protocols.WSTrust.StsConnectionException: MSIS7004: An exception occurred while connecting to the federation service. The service endpoint URL 'net.tcp://localhost:1501/adfs/services/trusttcp/windows' may be incorrect or the service is not running. ---> System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at net.tcp://localhost:1501/adfs/services/trusttcp/windows that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

[screenshot]

But after restarting the ADFS service additional errors are shown:

Error event ID 102: There was an error in enabling endpoints of Federation Service. Fix configuration errors using PowerShell cmdlets and restart the Federation Service.

Additional Data
Exception details:
System.ArgumentNullException: Value cannot be null.
Parameter name: certificate
   at System.IdentityModel.Tokens.X509SecurityToken..ctor(X509Certificate2 certificate, String id, Boolean clone, Boolean disposable)
   at System.IdentityModel.Tokens.X509SecurityToken..ctor(X509Certificate2 certificate)
   at Microsoft.IdentityServer.Service.Configuration.MSISSecurityTokenServiceConfiguration.Create(Boolean forSaml)
   at Microsoft.IdentityServer.Service.Policy.PolicyServer.Service.ProxyPolicyServiceHost.ConfigureWIF()
   at Microsoft.IdentityServer.Service.SecurityTokenService.MSISConfigurableServiceHost.Configure()
   at Microsoft.IdentityServer.Service.SecurityTokenService.STSService.StartProxyPolicyStoreService(ServiceHostManager serviceHostManager)
   at Microsoft.IdentityServer.Service.SecurityTokenService.STSService.OnStartInternal(Boolean requestAdditionalTime)

And Event id 133: During processing of the Federation Service configuration, the element 'signingToken' was found to have invalid data. The private key for the certificate that was configured could not be accessed. The following are the values of the certificate:
Element: signingToken

This one is more descriptive. Here and there you see people saying that adding the ADFS service account to the local admins resolves this issue. Yeah, I can imagine that, but that account is not supposed to have that kind of privileges! It’s sufficient to grant read (not even full control) on the private keys of the token signing and token decrypting certificates. You can manage these by opening an MMC, adding the Certificates snap-in for the computer account and browsing the Personal store.

[screenshot]
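If you prefer to script it, here’s a minimal sketch of the same idea for keys stored by the default RSA CSP; the thumbprint and the service account name are placeholders for your environment.

# Locate the certificate, find its private key file and grant the ADFS service account read access.
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq "<thumbprint>" }
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyFile = "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys\$keyName"
icacls $keyFile /grant "CONTOSO\svc-adfs:R"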

Quick Tip: AD FS Server Name as a Claim


I’m not sure anyone else besides me finds this piece of information important, but sometimes I like to know which AD FS server issued the actual claims. That’s when multiple servers are joined to the ADFS farm of course. For instance when trying to find out whether the load balancing is acting like it should or just to make sure you are watching the event log or debug logs on the correct server. Here’s a simple way to do it. There might be other more elegant ways as well. If you have some I hope you drop a comment!

First I started by creating an additional attribute store:

[screenshot]

The store is of the type SQL:

[screenshot]

And here’s the connection string:

Server=\\.\pipe\MICROSOFT##WID\tsql\query;Database=AdfsConfiguration;Integrated Security = True

In my case I’m using the Windows Internal database instance used by the ADFS service. Whether to use WID or SQL for ADFS is a discussion which I will not touch here. By using the WID we can safely assume it’s available and accessible on all ADFS servers. If you were to use a SQL server instance that should be reachable from each ADFS server as well. Just update the connection string to use your remote SQL server instance in that case.

Now we’ll add the claim rules of our application to issue the ADFS server name:

[screenshot]

As you can see, by using the SQL query “Select HOST_NAME() As HostName” we can determine the hostname of the ADFS server issuing the claim. I’m not even sure “AS HostName” has to be in there; I just copy-pasted this from some SQL blog ; ). That query will give you the hostname of the client talking to SQL, in this case the ADFS server. And here’s the result:

[screenshot]
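Written out as a custom rule, it looks something like this; the claim type is one I made up for the occasion and the store name has to match the attribute store created above.

=> issue(store = "ADFS Configuration WID",
         types = ("http://temp/adfsservername"),
         query = "SELECT HOST_NAME() AS HostName");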

I am not saying it’s a good idea to have this rule active all the time as querying additional stores probably comes with a performance penalty, but it might be very convenient for test environments or for temporary situations.

UAG 2010: The URL you have requested is too long.


For a customer of mine we’ve set up a UAG which is configured as a Relying Party of an AD FS 2.0 server. This means the trunk itself is configured to use ADFS as its authentication server. It seems that upon accessing any application of this trunk we are redirected to the AD FS server, as expected, but UAG greets us with an error page containing "The URL you have requested is too long." For this setup we are publishing the AD FS server over that exact same trunk. So to be more precise, UAG is acting as an AD FS proxy as well.

UAG version in place: UAG 2010 SP3 U1

Here's some more background information regarding this specific issue: TechNet: UAG ADFS 2.0 Trunk Authentication fails: The URL you have requested is too long.

The error:

[screenshot]

In words:

The URL you have requested is too long.

Navigate back and follow another link, or type in a different URL.

In the end we opened up a case with Microsoft and they came back with this registry key:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\WhaleCom\e-Gap\von\UrlFilter]
"MaxAllHeadersLen"=dword:00020710

In order to properly apply this setting:

  • Set the key
  • Activate the UAG configuration
  • Perform an IIS Reset

The value above is for testing only; for a real production environment I would start with 8192 bytes (watch out, the key is in hex) and slowly move up until I feel I have a comfortable margin.
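If you’d rather set this from PowerShell than with a .reg file, something along these lines should do (8192 bytes = 0x2000; path and value name as given above):

# Create or overwrite the MaxAllHeadersLen DWORD; activate the UAG configuration and run an IIS reset afterwards.
New-ItemProperty -Path "HKLM:\SOFTWARE\WhaleCom\e-Gap\von\UrlFilter" -Name "MaxAllHeadersLen" `
    -PropertyType DWord -Value 0x2000 -Force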


Windows 2012 File Server: SRMSVC Events In Event Log


We’re currently defining a new build for our file servers. On one of the servers we installed in the test environment we started seeing a lot of errors in the Application event log.

The events we were seeing:

Event 12344:

File Server Resource Manager finished syncing claims from Active Directory and encountered errors during the sync (0x80072030, There is no such object on the server.
).  Please check previous event logs for details.


Event 12339:

File Server Resource Manager failed to find the claim list 'Global Resource Property List' in Active Directory (ADsPath: LDAP://domaincontroller.contoso.com/CN=Global Resource Property List,CN=Resource Property Lists,CN=Claims Configuration,CN=Services,CN=Configuration,DC=contoso,DC=com). Please check that the claims list configured for this machine in Group Policy exists in Active Directory.

As you can see these were piling up real fast:

[screenshot]

From what I can tell these started happening after we configured file quotas. In order to do this we added the File Server Resource Manager feature. A quick google led me to the following solution: in order to avoid these errors, a schema upgrade to Windows 2012 is required. Our domain is currently on 2008 R2. I haven’t performed the upgrade just yet, but I wanted to share this nonetheless.

My sources for this information:

Troubleshooting Certificates and the Chain Build Process


Recently I got a request from a customer to update the root certificates of several certificates they had in place. The problem was that one of the intermediate CAs had an expiration date which was before the expiration date of the actual certificate. Here’s the information we got with this notification.

[screenshot: schema-brca2-server]

The problem was the Belgium Root CA2. It’s valid until 27/01/2014 whilst several of the “your server certificate (SSL)” certificates are valid till the end of 2014. When clients validate this chain after the 27th of January this would cause problems. Along with this news we received the new root and intermediate CAs in a zip file.

Using certutil you can easily install them in the required stores on the server which has “your servercertificate (SSL)” configured for one or more services.

  • certutil -addstore Root "C:\Temp\NewRootChain\Baltimore Cybertrust Root.crt"
  • certutil -addstore CA "C:\Temp\NewRootChain\Cybertrust Global Root.crt"
  • certutil -addstore CA "C:\Temp\NewRootChain\Belgium ROOT CA 2.crt"
  • certutil -addstore CA "C:\Temp\NewRootChain\government2011.cer"

After performing these steps I could see the new chain reflected in my certificate on the server. Now I figured that the clients should retrieve this chain as well one way or another. Upon accessing https://web.contoso.com I could see that the certificate was trusted, but the path was still showing the old chain!

First thing I verified was that the “Baltimore Cybertrust Root” was in the trusted root certificate authorities of my client. Without me actually putting it there it was present. This makes sense as this probably comes with Windows Update or something alike. I assume the client has to retrieve the intermediate certificates itself. I thought that it would go externally for that. From the certificate I found the Authority Information Access URL which pointed to the (outdated) Belgium Root CA2 on an external (publicly available) URL. “AHAH” I figured, time to contact the issuers of these certificates. They kindly replied that if the server has the correct chain, the clients should reflect this. They also provided me an openssl command and requested more information.

This made me dig deeper. After a while I came to the following conclusion: my “bad” client showed different paths for these kinds of certificates… When visiting my ADFS service I saw the correct chain being built, but on my web server I had the old chain. Very odd. So something had to be wrong server side. From what I can tell here’s my conclusion:

The browser gets the intermediate certificates in the chain from the IIS server

  • IIS 8.0 on Windows 2012: update the stores and all is good (or the servers had a reboot somewhere in the last weeks that I’m unaware off)
  • IIS 7.5 on Windows 2008 R2: update the stores AND unbind/bind the certificate in the IIS bindings of your website(s).

For the IIS 7.5 I also tried an IIS reset, but that wasn’t enough. Perhaps a reboot would do too.  Here’s my source for the solution: http://serverfault.com/questions/238206/iis7-not-sending-intermediate-ssl-certificate

A useful openssl command, which even works for services like secure LDAP (LDAPS). It will show you all certificates in the chain.

  • openssl.exe s_client -showcerts -connect contoso.com:636
  • openssl.exe s_client -showcerts -connect web.contoso.com:443
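If openssl isn’t at hand, the chain a client actually receives can also be inspected with a few lines of PowerShell. A minimal sketch, reusing the example host name from above (note that AuthenticateAsClient throws if the certificate isn’t trusted at all):

# Connect, grab the server certificate and let Windows build (and print) the chain it would use.
$tcp = New-Object Net.Sockets.TcpClient("web.contoso.com", 443)
$ssl = New-Object Net.Security.SslStream($tcp.GetStream())
$ssl.AuthenticateAsClient("web.contoso.com")
$cert = New-Object Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
$chain = New-Object Security.Cryptography.X509Certificates.X509Chain
[void]$chain.Build($cert)
$chain.ChainElements | ForEach-Object { $_.Certificate.Subject }
$ssl.Dispose(); $tcp.Close()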

P.S. The new chain also has an oddity… Belgium Root CA2 is valid until 2025 whilst the Cybertrust Global Root expires in 2020.

Bonus tip #1: in the Windows event log (of the client) you can enable CAPI2 logging. This will show you detailed information on all certificate related operations. In my opinion the logging is often too detailed to tell you much, but it’s nice to know it’s there. You can find it under Applications and Services Logs\Microsoft\Windows\CAPI2; right-click Operational and choose to enable the log.

Bonus tip #2: on Windows 2012 / Windows 8 you can easily open the certificate stores of both the current user and the local computer. In the past I often used mmc > add remove certificates > click some more > … Now there’s a way to open a certificates MMC straight from the command line:

  • Current user: certmgr.msc
  • Local computer: certlm.msc

The Processing of Group Policy Failed: logged on user session…


Really weird one. A while ago we were being notified by SCOM (System Center Operations Manager) that one of our domain controllers had issues processing group policies. The event in the event log:

[screenshot]

The actual error: The processing of Group Policy failed. Windows attempted to read the file \\contoso.com\sysvol\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:

a) Name Resolution/Network Connectivity to the current domain controller.

b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller).

c) The Distributed File System (DFS) client has been disabled.

First thing I checked was whether each of the domain controllers actually did have the gpt.ini file for that specific GPO:

PS C:\Users\thomas> Get-ADDomainController -Filter * | % {gci "\\$($_.Name)\sysvol\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}"}

This showed me that indeed all domain controllers had that file present. Somewhere online I found the following suggestion:

C:\Windows\system32>\\machine_with_management_tools_installed\c$\windows\system32\dfsutil.exe /spcflush

And that seemed to stop the error from returning. But after a few minutes I had a little “doh, I saw this before” moment. The real cause (and solution) is to log off any old remote desktop sessions on that server which have been left open for a considerable amount of time. So here’s a post for myself hoping this little knowledge bit will stick. So whilst the actual error might sound quite scary, there’s no real impact to your end users or services.
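For completeness, checking for and clearing such stale sessions is quick; a small sketch, where DC01 and the session ID are placeholders you take from the quser output:

quser /server:DC01
logoff <sessionID> /server:DC01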

SharePoint Configure Super Accounts


This post will try to explain how you can easily configure the super user and super reader accounts of a SharePoint web application. SharePoint uses these accounts for its caching system. Out of the box, system accounts are used for this and you might periodically get a warning in your event log. However, if you get this part wrong, all of your users might end up with an access denied message.

1. Here’s how you can do it for a claims based Web application that’s configured with a claims provider as authentication provider.

$webappurl = "https://portal.contoso.com"
###
### encode users
###
$mgr = Get-SPClaimProviderManager
$tp = Get-SPTrustedIdentityTokenIssuer -Identity "CONTOSO ADFS Provider"
#set super user to windows account (claims based)
$superuser = "S_SPS_SU@CONTOSO.COM"
$superuserclaim = New-SPClaimsPrincipal -ClaimValue $superuser -ClaimType http://schemas.xmlsoap.org/claims/UPN -TrustedIdentityTokenIssuer $tp
$superuserclaimstring = $mgr.EncodeClaim($superuserclaim)

#set read user to windows account (claims based)
$readuser = "S_SPS_SR@CONTOSO.COM"
$readuserclaim = New-SPClaimsPrincipal -ClaimValue $readuser -ClaimType http://schemas.xmlsoap.org/claims/UPN -TrustedIdentityTokenIssuer $tp
$readuserclaimstring = $mgr.EncodeClaim($readuserclaim)

###
### web policies
###
$webApp = Get-SPWebApplication $webappurl

#SuperUser
$policy = $webApp.Policies.Add($superuserclaimstring, $superuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()
#ReadUser
$policy = $webApp.Policies.Add($readuserclaimstring, $readuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

###
### web properties
###

#$webApp = Get-SPWebApplication webappurl
$webApp.Properties["portalsuperuseraccount"] = $superuserclaimstring
$webApp.Properties["portalsuperreaderaccount"] = $readuserclaimstring
$webApp.update()

2. Here’s how you can do it for a claims based Web application that’s configured with Windows authentication.

$webappurl = "https://portal.contoso.com"
###
### encode users
###
$mgr = Get-SPClaimProviderManager
#set super user to windows account (claims based)
$superuser = "CONTOSO\S_SPS_SU"
$superuserclaim = New-SPClaimsPrincipal -identity $superuser -IdentityType "WindowsSamAccountName"
$superuserclaimstring = $mgr.EncodeClaim($superuserclaim)

#set read user to windows account (claims based)
$readuser = "CONTOSO\S_SPS_SR"
$readuserclaim = New-SPClaimsPrincipal -identity $readuser -IdentityType "WindowsSamAccountName"
$readuserclaimstring = $mgr.EncodeClaim($readuserclaim)

###
### web policies
###
$webApp = Get-SPWebApplication $webappurl

#SuperUser
$policy = $webApp.Policies.Add($superuserclaimstring, $superuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()
#ReadUser
$policy = $webApp.Policies.Add($readuserclaimstring, $readuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

###
### web properties
###

#$webApp = Get-SPWebApplication webappurl
$webApp.Properties["portalsuperuseraccount"] = $superuserclaimstring
$webApp.Properties["portalsuperreaderaccount"] = $readuserclaimstring
$webApp.update()

3. And here’s for a SharePoint web application that is in classic (windows) authentication mode:

#for a Windows site:
$webappurl = "https://portal.contoso.com"
#Windows uses the domain\account notation
$superuser = "CONTOSO\S_SPS_SU"
$readuser = "CONTOSO\S_SPS_SR"

#add the policies
$webApp = Get-SPWebApplication $webappurl

$policy = $webApp.Policies.Add($superuser , $superuser )
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

$policy = $webApp.Policies.Add($readuser , $readuser )
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()
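Whichever variant you used, a quick way to verify the registration afterwards is to read the properties back (hedged sketch, reusing the example URL):

# The two properties should contain the encoded claim strings (or DOMAIN\account for classic mode).
$webApp = Get-SPWebApplication "https://portal.contoso.com"
$webApp.Properties["portalsuperuseraccount"]
$webApp.Properties["portalsuperreaderaccount"]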

Bonus: here’s how to encode a group instead of a user. Not useful for the superuser/superreader account, but it might come in handy if you want to configure user policies.

$groupnameClaims = "GG_SPS_ADMINS"
#Windows uses the domain\group notation
$groupnameWindows = "CONTOSO\GG_SPS_ADMINS"

$mgr = Get-SPClaimProviderManager
$tp = Get-SPTrustedIdentityTokenIssuer -Identity "CONTOSO ADFS Provider"

#get the string for users authenticating over claims
$claim = New-SPClaimsPrincipal -ClaimValue $groupnameClaims -ClaimType http://schemas.xmlsoap.org/claims/Group -TrustedIdentityTokenIssuer $tp
$claimstr = $mgr.EncodeClaim($claim)

#get the string for users authenticating over classic windows
$windowsprincipal = New-SPClaimsPrincipal -identity $groupnameWindows -IdentityType "WindowsSamAccountName"
$windowsstr = $mgr.EncodeClaim($windowsprincipal)

Generate Self Signed Certificate for Demo Purposes


From time to time you might require a certificate and you want it fast. Mostly you see openssl commands flying around to get this job done. But recently I came across the following information and it’s actually pretty easy to do with certreq.exe as well!

Cert.txt content:

[NewRequest]
; At least one value must be set in this section
Subject = "CN=sts.realdolmen.com"
KeyLength = 2048
Exportable = true
MachineKeySet = true
FriendlyName = "ADFS"
ValidityPeriodUnits = 3
ValidityPeriod = Years
RequestType = Cert
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
KeyUsage = 0xa0
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication

The command:

certreq.exe -new .\cert.txt

A popup will appear asking you to save your certificate to a location of choice. And voila:

[screenshot]
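A quick way to confirm the certificate landed in the local machine store (assuming the FriendlyName “ADFS” from the INF above):

Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.FriendlyName -eq "ADFS" } |
    Format-List Subject, NotAfter, HasPrivateKey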

Quick Tip: Multiple IP’s on Adapter and Firewalls


Typically a server has one network interface with one IP on it, especially in virtualized environments. However in certain scenarios, like web servers, multiple IPs can be bound to one network interface. When configuring firewalls external to the host, e.g. a hardware device shielding the server segment from other segments, people often wonder what address the server is going to use for outgoing traffic. People tend to think that the first address on the adapter is the one that is used for all outgoing traffic. Perhaps that was true for some earlier versions of Windows, but it seems that somewhere in time this has shifted:

It seems that the server verifies which address has the longest matching prefix with the gateway configured on the adapter.

You can read the details here: http://blogs.technet.com/b/networking/archive/2009/04/25/source-ip-address-selection-on-a-multi-homed-windows-computer.aspx

The example the blog uses:

There’s a server with addresses 192.168.1.14 and 192.168.1.68 (gateway: 192.168.1.127). The server will use the 192.168.1.68 address because it has the longest matching prefix. To see this more clearly, consider the IP addresses in binary:

  • 11000000 10101000 00000001 00001110 = 192.168.1.14 (Bits matching the gateway = 25)
  • 11000000 10101000 00000001 01000100 = 192.168.1.68 (Bits matching the gateway = 26)
  • 11000000 10101000 00000001 01111111 = 192.168.1.127

The 192.168.1.68 address has more matching high order bits with the gateway address 192.168.1.127. Therefore, it is used for off-link communication.

In the above example you could force the 192.168.1.14 address by using the SkipAsSource parameter you can pass along with netsh. In order to use SkipAsSource we have to add the additional address from the command line:

  • Netsh int ipv4 add address <Interface Name> <ip address> <netmask> skipassource=true

In order to verify this we can execute the following command:

  • Netsh int ipv4 show ipaddresses level=verbose
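On Windows 8 / Windows Server 2012 and later the same can be done with the NetTCPIP cmdlets; a small sketch using the example address (the interface alias and prefix length are assumptions for your environment):

# Add the extra address without making it the source for outgoing traffic, then verify.
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.14 -PrefixLength 25 -SkipAsSource $true
Get-NetIPAddress -IPAddress 192.168.1.14 | Select-Object IPAddress, SkipAsSource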

SQL: Delete a Large Amount of Records


I’ve got a setup where a UAG array logs to a remote SQL Server. I’m still wondering how other people handle the database size. I’ve got a very small SQL agent job which runs once a week and deletes all records older than a month. The size of the database is still pretty large: somewhere around 150 GB. A while ago the job seemed to have problems finishing successfully. We also saw the log file of the database growing to a worrying size: 160 GB. FYI: the database was in simple recovery mode.

Each time I started the job I could see the log file filling up from 0 GB all the way to 160 GB and then it stopped as we set 160 GB as a fixed limit. We did this (temporarily) to protect other log files on that volume. Here’s the SQL script (query) used in the cleanup job:

DELETE FROM FirewallLog
WHERE logTime < DATEADD(MONTH, -1, GETDATE())

As you can see it’s a very simple query. The problem lies in the fact that SQL tries to perform this as one transaction. I hope I get the terminology right btw. A SQL database in simple mode should reuse log space quite fast. If I read correctly, about every minute a checkpoint is issued and it will start overwriting/reusing previously written bits in the log file. Now the problem with my query is that it’s seen as one large transaction and thus needs to be written away entirely in the log file. Hence the space is not reused during the execution. Here's a script which I found online and which does the job way more gently. In my example WebProxyLog is the name of the table I’m targeting.

DECLARE @continue INT
DECLARE @rowcount INT

SET @continue = 1
WHILE @continue = 1
BEGIN
    --PRINT GETDATE()
    SET ROWCOUNT 10000
    BEGIN TRANSACTION
    DELETE FROM WebProxyLog WHERE logTime < DATEADD(MONTH, -1, GETDATE())
    SET @rowcount = @@rowcount
    COMMIT
    --PRINT GETDATE()
    IF @rowcount = 0
    BEGIN
        SET @continue = 0
    END
END

This script will have the same outcome, but it will delete records in chunks of 10,000 records at a time. This way each transaction is limited in size and the SQL server can benefit from the checkpoints being issued and can thus reuse database space. Using this approach my logging space was somewhere between 0 and 7 GB during the execution of this task. The ideal value for the maximum amount of records deleted at once might differ depending on your situation. I tend to execute this during calmer moments of the day and thus a bit of additional load is not that worrying.

Bonus tip:  you can easily poll for the free space in the log files using this statement:

DBCC SQLPERF(LOGSPACE);

GO


AX Enterprise Portal Webparts Only Deployment


Lately a nice challenge was presented to me: make the Dynamics AX webparts work on a SharePoint site other than the actual Enterprise Portal. As per TechNet (Deploy Microsoft Dynamics AX Web parts to a SharePoint site [AX 2012]) this is a valid scenario. In our case we would like to use the AX report viewer webpart to render some reports in an extranet scenario. One of the steps to enable this is to install the AX Portal components without checking the create site option in the setup and obviously target the site you wish to install the webparts on. During the AX portal components installation I got greeted with an error:

2013-11-14 13:07:45Z    Bezig met invoeren van functie ConfigureAuthenticationMode
2013-11-14 13:07:45Z    An error occurred during setup of Enterprise Portal (EP).
2013-11-14 13:07:45Z   Reason: The given key was not present in the dictionary.
2013-11-14 13:07:45Z    Registering tracing manifest file "D:\Program Files\Microsoft Dynamics AX\60\Server\Common\TraceProviderCrimson.man".
2013-11-14 13:07:45Z    WEvtUtil.exe install-manifest "C:\Users\lab admin thomas\AppData\Local\Temp\3\tmp4243.tmp"
2013-11-14 13:07:45Z        **** Warning: Publisher {8e410b1f-eb34-4417-be16-478a22c98916} is installed on
2013-11-14 13:07:45Z        the system. Only new values would be added. If you want to update previous
2013-11-14 13:07:45Z        settings, uninstall the manifest first.

OK… Fair enough… Which key is being looked for? In which dictionary? I could guess it was trying to find a match for a piece of information in a list (dictionary) and that didn’t go so well. Searching the web didn’t give me any clues. So time to start the reverse engineering again… The DLL I analyzed was this one: Microsoft.Dynamics.Framework.Deployment.Portal.dll. I used ILSpy to reverse engineer it.

Here’s a screenshot of the relevant code I found. I used the information “bezig met invoeren van functie ConfigureAuthenticationMode” (translated: “busy entering function ConfigureAuthenticationMode”) to get to this point.

[screenshot]

I didn’t see any dictionaries being accessed, but following the call to GetSPIisSettings led me to the following:

[screenshot]

And that correlates to the authentication provider configuration (per zone) in the SharePoint Central Administration. You can find this by going to the web application management section. Select your webapp and choose manage authentication providers.

[screenshot]

The reason our site was extended is because we had users authenticating using claims issued by an ADFS instance. So we had one web application which was configured with 2 authentication providers:

  • Default: ADFS tokens (user access)
  • Custom: Windows Authentication (access for the crawling process)

We were aware of the fact that the site couldn’t be extended, so we (temporarily) un-extended the site to only leave Windows Authentication active. However as you can see in the code snippet above, the code really expects the “default” zone to be populated…

Summary:

If you are trying to install the AX webparts on a SharePoint site you have to make sure the following prerequisites are OK:

  • The site cannot be extended
  • The site has to have an authentication provider configured on the Default zone
  • The service account (application pool identity) for the web application should be the BCP account. If you don’t do it, the setup will do it for you. It isn’t exactly best practice, but in the AX world it seems to be a common requirement to run a lot of code under the context of the BCP account…
  • Make sure the site is available over Windows Authentication. This seems to be necessary in order to successfully register the site (your SharePoint site hosting the webparts) within the sites section of AX. If you don’t do this your site will not be authorized to make requests to AX.
  • If you want your webparts to be available on a site that has users authenticating by claims, you’ll have to register those users as “claims users” within AX.

Once you’ve got everything installed and configured within AX you’re free to extend or modify the authentication providers again.

Good Luck!

Windows 8.1 and the RemoteApp Connection URL GPO Setting Issue


One of my customers is deploying Windows 8.1 clients. They mainly use SCCM and App-V as a software delivery solution. Besides that they also have some applications published over RemoteApp. Windows 8 has a GPO setting which allows you to configure the RemoteApp Connection URL: Setting the default RemoteApp connection URL on your clients using GPO

My customer mentioned to me that some of the clients got the following error message: There are currently no connections available on this computer.

[screenshot: CtrlPanelBad]

Verifying the RemoteApp and Desktop Connections event log showed no entries. I double checked that the GPO was applying by checking the following registry key:

I did some googling and stumbled upon the following articles:

The first one didn’t apply to our situation. Our clients were not part of any VDI or TS setup. The second one seemed interesting. Sadly no solution was given, but it gave us the hint that adding the user to the local administrators might help. And indeed this made the RemoteApp Connection URL work. If adding a user to the local administrators group is the solution for any given problem, this means you’ve got to tweak one of the below:

  • Permissions in the registry
  • Permissions on the file subsystem
  • User Rights Assignment (GPO/Secpol.msc)

I removed my user from the Local Administrators and started testing again. Using process monitor I couldn’t see any particular access problems for either file or registry. What I did learn from process monitor though is that a part of the RemoteApp configuration is handled by a group of scheduled tasks! I could clearly see these being created as an Administrator, but not for my regular account. Here’s how they should look:

[screenshot: GoodSchedTask]

So somehow the creation of the scheduled tasks went sideways. I thought I’d be smart and started by enabling all failure audits for all categories in the security event log: auditpol /set /category:* /failure:enable. Whilst I could see some events being generated during my logon, I couldn’t relate any of those to the scheduled tasks not being created. So no clues there. The only thing I figured I could do was start examining the user rights assignment settings. This customer has implemented the security recommendations (Security Compliance Manager), which means a lot of these are customized. After I put my client in an OU with NO GPOs whatsoever I could see that my user got this RemoteApp configured just fine.

I thought this would be an easy job. I started comparing my user rights assignments on the client (with no GPOs) and the ones being set by GPO. Typically I would need to hunt something down where by default users have the right, but where the GPO is stricter and only Administrators have it. After going through the list I had found none of those…

So I cloned the GPO, made sure the original one no longer applied to my client and started setting settings to Not Configured in the user rights assignment section. Sadly I started at the bottom, because the one which seemed to be the culprit is Act as part of the operating system. The GPO granted this right to “Authenticated Users”. That made me frown… It seems like a very privileged thingy to grant to authenticated users… From the SCM 3.0 toolkit you can see that both the default and the advised setting is “None”:

[screenshot: SCM]

It seems that somehow the GPO got wrongly configured at some point in time. By default this is set to “None”. After removing the authenticated users from this particular setting I was finally able to get my user’s RemoteApp configuration up and running:

[screenshot: GoodCtrlPanel]

This is the setting which seemed to be having this negative effect:

[screenshot: GoodSecPol2]

I’m still trying to wrap my head around the fact that whilst this setting had authenticated users in it, only non-administrator users were impacted. Either way, case closed!

Direct Access: Error Loading Configuration


Recently I worked together with some Microsoft consultants to implement a Direct Access proof of concept. Initially the configuration went just fine. However, the operational state page showed that we had an issue with the IPSEC certificate. At first we thought the problem had to do with our DA server being unable to reach the internet in order to download the CRL (Certificate Revocation List). So we had two options: either ask the network team to open up the DA’s external interface connectivity towards the internet or configure the DA server so it could use a proxy.

Due to the workload of the network team, not the best reasoning though, we chose option two: the proxy. We configured the proxy in IE and we were able to successfully download the CRL file. The error didn’t go away though. So we figured we had to configure the system to use the proxy. This can be done using netsh:

  • netsh winhttp set proxy 10.0.10.30:8080

The error didn’t go away and it was time for lunch, so we thought we’d give the DA server a reboot and some time. When we came back there was something weird with the operation state page. It just said Name_Of_DA_Server is not working properly. We tried rerunning the configuration wizard but we got greeted with the following error:

[screenshot]

In words: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol. Suddenly we figured out that our netsh winhttp proxy might have broken things a bit. So we gave the command a little twist:

  • netsh winhttp set proxy 10.0.10.30:8080 bypass-list="<local>;*.contoso.local"

Now that we used this command our DA console was back to normal, but our IPSEC certificate was still in error. The solution was actually quite simple: just make sure the DA server also has a certificate based upon the Computer (or your custom computer) template. Such a certificate can be used for both client and server authentication. This certificate should be issued by the same CA that issues certificates for your clients and that is configured in the DA configuration wizard. After enrolling a certificate we now had a working DA setup. We could see our clients setting up a connection successfully.

One thing that bothered us though: on the operation state page we had an error on the availability of the IPHTTPS interface of the DA server. Now that was odd. netsh showed it as available and after all we had several clients connected to it. When we removed the proxy (netsh winhttp reset proxy), the status page went all green!

So for now I’m not even sure if the DA server needs to check the CRLs of the certificate used by IPHTTPS. But I guess we’ll find out soon if DA stops working. All I can say: netsh winhttp set proxy doesn’t seem like a really good idea to use. Things started breaking in weird and unpredictable ways.

I’m not sure whether there will be another person ever running into this, but if you do I hope this saves you some time.

Generate a SAN Certificate Request File


Recently I had to generate a request file for a SAN (Subject Alternative Name) certificate. Using the GUI this is pretty straightforward, but I wanted to use the command line as this allows it to be repeated for other certificates way faster. The tool to be used, which is installed by default on Windows, is certreq.exe. Typically certreq.exe uses an inf file to gather most of the input. For the actual parameters I started googling around. I quickly stumbled upon: KB931351: How to add a subject alternative name to a secure LDAP certificate

The relevant section:

[RequestAttributes]
SAN="dns=name.contoso.com&dns=othername.contoso.com"

The generation of the request file went flawlessly with this parameter. However upon verification using an online CSR decoder (https://www.sslshopper.com/csr-decoder.html) I couldn’t find my SAN attribute. So I googled a bit more and finally came up with the following contents for certreq.ini:

[Version]

Signature="$Windows NT$"

[NewRequest]
Subject = "CN=name.contoso.com,O=CSR Demo,OU=IT,L=Brussels,S=Brussels,E=certificates@contoso.com,C=BE"

;EncipherOnly = FALSE
Exportable = TRUE   ; TRUE = Private key is exportable
KeyLength = 2048     ; Valid key sizes: 1024, 2048, 4096, 8192, 16384
KeySpec = 1          ; Key Exchange – Required for encryption
KeyUsage = 0xA0      ; Digital Signature, Key Encipherment
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"

RequestType = PKCS10 ; or CMC.

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication
;OID=1.3.6.1.5.5.7.3.2 ; Client Authentication  // Uncomment if you need a mutual TLS authentication

[Extensions]
2.5.29.17 = "{text}"
_continue_ = "dns=name.contoso.com&"
_continue_ = "dns=othername.contoso.com"

As you can see the SAN properties are now specified in another way, and these do seem to make it into the certificate request. I also highlighted Exportable (Private Key) = TRUE, but that’s entirely personal and dependent on your scenario. To conclude, here are the commands required to actually perform the procedure:

  • Generate the request file: Certreq.exe –new certreq.ini certreq.req
  • Accept and Install certificate: Certreq.exe –accept certificate.cer
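If you’d rather verify the request locally than paste it into an online decoder, certutil should be able to dump it; a quick check, using the file name from above:

certutil -dump certreq.req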

Windows 7 and the RemoteApp Connection URL Issue


Some weeks ago I had a customer who experienced issues with RemoteApp on Windows 8: Setspn: Windows 8.1 and the RemoteApp Connection URL GPO Setting Issue. Now I had another customer with a similar issue. This one was working with Windows 7 and we were entering the URL manually (for now). Upon entering the URL we got greeted with an error:

[screenshot: Workspace_screeny]

In order to find out what was going on I thought I’d run Process Monitor alongside it. After filtering out most of the success/harmless messages, here’s what’s left:

[screenshot: Workspace]

The one that got my attention was the “NOT IMPLEMENTED” one. It got me thinking. The users logging on to Windows 7 have their AppData\Roaming redirected to a folder in their home share, and we use offline files and folders as well. Because I was messing around with Direct Access I had my home folder set to “work offline”:

[screenshot: Workspace2]

After transitioning to “work online”:

[screenshot: Workspace3]

All seemed to go well:

[screenshot: Workspace4]

Seems a pretty specific case, but as usual, if I ran into it, perhaps other people will as well.
