
Azure Billing Changes


A bit less technical for once, but a few days ago I noticed several announcements for billing-related changes that I thought were worth mentioning. And besides that, my personal test subscription got disabled once more because I ran out of credit… So what else is there to do? ; )

Azure Billing Detailed Usage Change

Every time I talk to a customer who is new to Azure and is starting to get into IaaS, I explain how Virtual Machines are billed. Roughly there are four things to take into account:

  • Compute hours: depends on the uptime and the tier of the VM
  • Storage space consumed: the bigger the VM's disks, the more you pay
  • Storage transactions: the more disk IO the VM performs, the more you pay
  • Network IO: "upload"/"download", where download (data transfer in) is free

Now a lot of customers want, or foresee that they will want, to split the bill across the responsible department, project or some other factor. Before the recent changes there were two ways to do this:

  • Separate subscriptions
  • Creating your VMs in separate storage accounts/separate cloud services

Personally I'm not too fond of the separate-subscriptions idea. It brings overhead in terms of network connectivity, and the overall picture might become more difficult to see. I'm aware that there are definitely cases where you clearly want to give a group of people "full control" over "their" stuff and simply send them the bill for everything they use. But in many cases I feel that having many subscriptions becomes a PITA to manage. What if your billing scheme changes and, instead of per department, you need a picture per application? Do you really want to tie your subscriptions to that?

I'm more in favor of creating VMs in separate storage accounts and cloud services. Still not ideal if you have to restructure, but the impact should be smaller. Here's how the detailed usage looked before June:

Type              Unit                                 Granularity
Networking        Data Transfer In (GB)                Cloud Service
Networking        Data Transfer Out (GB)               Cloud Service
Storage           Standard IO – Page Blob/DISK (GB)    Storage Account
Virtual Machines  Compute Hours                        Cloud Service\Tier
Data Management   Storage Transactions (in 10,000s)    Storage Account

As you can see, Cloud Service and Storage Account are really important if you want to separate your resources. Now things have changed: both Networking and Compute now include the VM name (next to the Cloud Service):

Type              Unit                                 Granularity
Networking        Data Transfer In (GB)                Cloud Service (VM Name)
Networking        Data Transfer Out (GB)               Cloud Service (VM Name)
Storage           Standard IO – Page Blob/DISK (GB)    Storage Account
Virtual Machines  Compute Hours                        Cloud Service (VM Name)
Data Management   Storage Transactions (in 10,000s)    Storage Account

So assigning VMs to cloud services is no longer an absolute requirement for building detailed bills. Other than that, there are two more new fields:

  • Resource Group
  • Tags

Tags, as far as I know, are a V2 (Azure Resource Manager) feature. Resource groups are also available for V1 VMs. On my detailed usage overview the resource group column was empty, so it might be that it only gets filled in for V2 resources. Once V2 resources are commonly used, we'll be able to add one or more tags to resources like VMs. This will greatly benefit Azure Automation and Azure Billing! You'll be able to specify information that helps identify the VM, e.g. Environment: Dev/Test/Acceptance/Production or Department: HR/IT/Sales or …
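
Splitting the bill then becomes mostly a grouping exercise on the detailed usage export. A minimal sketch (the column names "Instance Name" and "Consumed Quantity" are assumptions based on my export; adjust them to whatever your detailed usage file contains):

# Sum the consumed quantity per instance from the detailed usage export
$usage = Import-Csv -Path "C:\Temp\DetailedUsage.csv"
$usage |
    Group-Object -Property "Instance Name" |
    ForEach-Object {
        [PSCustomObject]@{
            Instance         = $_.Name
            ConsumedQuantity = ($_.Group | Measure-Object -Property "Consumed Quantity" -Sum).Sum
        }
    } |
    Sort-Object ConsumedQuantity -Descending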

Enterprise Agreement: MSDN subscriptions

Something that has been available for a while: MSDN subscriptions below an Enterprise Agreement. If your company has an Azure Enterprise Agreement and your developers/IT pros have an MSDN subscription, they are allowed to run machines at MSDN rates. These machines cannot belong to production! The advantage is pricing: Windows VMs run at the price of the equivalent Linux VM, and software available in the MSDN library is free (e.g. SQL). You can configure this on the EA portal: https://ea.azure.com

image

Azure Billing API

In the past there was an API available only for EA customers. Luckily the new Azure Usage API (MSDN) and Azure RateCard API (MSDN) are for all subscriptions! You can read more on these here: ScottGu: New Azure Billing APIs Available
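
To give an idea, here's a minimal sketch of calling the RateCard API. It assumes you already obtained an AAD bearer token (e.g. via ADAL) in $token; the subscription ID and the $filter values (offer, currency, locale, region) are just example placeholders:

# Query the Azure RateCard API for meter prices (sketch)
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$filter = "OfferDurableId eq 'MS-AZR-0003P' and Currency eq 'USD' and Locale eq 'en-US' and RegionInfo eq 'US'"
$uri = "https://management.azure.com/subscriptions/$subscriptionId/providers/Microsoft.Commerce/RateCard" +
       "?api-version=2015-06-01-preview&`$filter=$filter"
$rateCard = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }
# Each meter has a name, category and unit
$rateCard.Meters | Select-Object MeterName, MeterCategory, Unit -First 10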

Side note

It's a common practice to shut down VMs that are not being used in order to save Azure credits. The fewer hours a VM runs, the better. One thing I overlooked this month is the cost of the Azure VNET Gateway. I had been playing with a site-to-site VPN (between two Azure VNets) and this resulted in two gateways burning quite some credit. So I'd say: keep an eye on those gateways! They can cost quite a lot.
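
A quick way to check whether a VNet still has a gateway deployed, and to tear it down when you're done testing (classic cmdlets; "MyVNet" is a placeholder for your own VNet name):

# Show the gateway state for a VNet, then remove it to stop the billing
Get-AzureVNetGateway -VNetName "MyVNet"
Remove-AzureVNetGateway -VNetName "MyVNet"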


Azure DSC and Configuration Archive Case Sensitivity


Lately I've been working on my Azure Automation skills. More precisely, I want a script that is able to create a virtual machine and set up a new Active Directory (domain controller) on it. There are several ways of doing this. One way is to create a PowerShell script that is executed through the Azure script extension. Another way is through the Desired State Configuration (DSC) extension. In my opinion the latter is the best option. DSC is really great at getting your server configured with minimal scripting. If you're unfamiliar with DSC you might experience quite some issues in the beginning. Having a working DSC configuration is one thing, but getting it to work through the Azure DSC extension has its own challenges. Most of these so-called issues probably have to do with me being at the bottom of the DSC learning curve…

A while back I wrote a simple DSC configuration to get the time zone right (Working with PowerShell DSC and Azure VM's based on Windows 2012). That simple example went pretty well. Now I wasn't even getting my DSC script to properly download to the target system. How on earth could something that simple be that hard? Here's the error I was having:

Log file location: C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.10.1.0\DscExtensionHandler.3.20150627-211133

VERBOSE: [2015-06-27T21:11:42] File lock does not exist: begin processing
VERBOSE: [2015-06-27T21:11:42] File
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\2-Completed.Install.dsc exists; invoking extension
handler...
VERBOSE: [2015-06-27T21:11:43] Reading handler environment from
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\HandlerEnvironment.json
VERBOSE: [2015-06-27T21:11:44] Reading handler settings from
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\RuntimeSettings\3.settings
VERBOSE: [2015-06-27T21:11:47] Applying DSC configuration:
VERBOSE: [2015-06-27T21:11:47]     Sequence Number:              3
VERBOSE: [2015-06-27T21:11:47]     Configuration Package URL:   
https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC.ps1.zip
VERBOSE: [2015-06-27T21:11:47]     ModuleSource:                
VERBOSE: [2015-06-27T21:11:47]     Configuration Module Version:
VERBOSE: [2015-06-27T21:11:47]     Configuration Container:      MyDC.ps1
VERBOSE: [2015-06-27T21:11:47]     Configuration Function:       MyDC (2 arguments)
VERBOSE: [2015-06-27T21:11:47]     Configuration Data URL:      
https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC-69d57a1f-2522-41d7-b5ac-3b635c63ba93.psd1
VERBOSE: [2015-06-27T21:11:47]     Certificate Thumbprint:       FC89BDBF395EFC39EA3633BBDEAE9BB7AA7C475E
VERBOSE: [2015-06-27T21:11:47] Creating Working directory:
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\MyDC.ps1.3
VERBOSE: [2015-06-27T21:11:48] Downloading configuration package
VERBOSE: [2015-06-27T21:11:48] Downloading
https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC.ps1.zip?sv=2014-02-14&sr=b&sig=k38XoVn5%2Bn5P1UIMM8q
mh9bc7YBD7Q5ZNV%2B5aqvP2xs%3D&se=2015-06-27T20%3A10%3A16Z&sp=rd to
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\MyDC.ps1.3\MyDC.ps1.zip
VERBOSE: [2015-06-27T21:11:48] An error occurred processing the configuration package; removing
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\MyDC.ps1.3
VERBOSE: [2015-06-27T21:11:48] [ERROR] An error occurred downloading the Azure Blob: Exception calling "DownloadFile" with "2"
argument(s): "The remote server returned an error: (404) Not Found."
The Set-AzureVMDscExtension cmdlet grants access to the blobs only for 1 hour; have you exceeded that interval?
VERBOSE: [2015-06-27T21:11:49] Writing handler status to C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\Status\3.status
VERBOSE: [2015-06-27T21:11:49] Removing file lock


The most interesting part:

VERBOSE: [2015-06-27T21:11:48] [ERROR] An error occurred downloading the Azure Blob: Exception calling "DownloadFile" with "2"
argument(s): "The remote server returned an error: (404) Not Found."
The Set-AzureVMDscExtension cmdlet grants access to the blobs only for 1 hour; have you exceeded that interval?

I found the following URL from the log file: https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC.ps1.zip?sv=2014-02-14&sr=b&sig=k38XoVn5%2Bn5P1UIMM8qmh9bc7YBD7Q5ZNV%2B5aqvP2xs%3D&se=2015-06-27T20%3A10%3A16Z&sp=rd

Some googling led me to some results, but nothing relevant. I took the URL and copy-pasted it into a browser:

StorageContainerXMLChrome

It showed me an XML-type response stating: BlobNotFound: The specified blob does not exist. By accident I used an open Chrome instance, as I typically use IE. If I visited this URL using IE I simply got a page-not-found error. That's probably something that can be tweaked in the IE settings, but still good to know. After seeing that error page I went to the Azure management portal:

StorageContainer

I drilled down until I found my .ps1.zip file and copy-pasted its URL into a Notepad++ window:

image

As you can see, the only difference is the casing of "MyDC.ps1"… The URL in the log file is constructed by the Azure DSC extension, more specifically by the following PowerShell lines:


$configurationArchive = "MyDC.ps1.zip"
$configurationName = "MyDC"
$configurationData = "C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\Final\myDC.psd1"

$vm = Get-AzureVM -ServiceName $svcname -Name $vmname
$vm = Set-AzureVMDSCExtension -VM $vm `
    -ConfigurationArchive $configurationArchive `
    -ConfigurationName $configurationName `
    -ConfigurationArgument $configurationArguments `
    -ConfigurationDataPath $configurationData

$vm | Update-AzureVM

Updating my $configurationArchive to "myDC.ps1.zip" was all I needed to do to get this baby running.
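
If you want to avoid the trial and error, you can list the exact (case-sensitive) blob names up front before pointing the DSC extension at one. A minimal sketch, where the storage account key is a placeholder:

# List the blobs in the DSC container so you can copy the name with the correct casing
$ctx = New-AzureStorageContext -StorageAccountName "thvuystoragetest" -StorageAccountKey "<key>"
Get-AzureStorageBlob -Container "windows-powershell-dsc" -Context $ctx |
    Select-Object Name, LastModified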

Summary

Whenever creating storage accounts, containers or blobs, make sure to watch out for case sensitivity. In my opinion an all-lowercase approach might be the best way forward.

Quick Tips: Azure: Where did my Public IP go?


Over the past few months I’ve been working more and more on Azure and here is a small tip I’d like to share. I’ve seen various customers that are not aware about the following:

Whenever you start the first VM in a cloud service, the cloud service gets a public IP from the Azure infrastructure. Suppose you have an IIS server in it and you want to expose it to the internet; you might create a 443/80 endpoint for it. In order to point users to it, you'll probably want http://web.contoso.com rather than http://contosoweb.cloudapp.net, so chances are you'll start fiddling around in your public DNS zone. If you want to achieve this, there are some options for you:

  • Create a CNAME (alias) record web.contoso.com –> contosoweb.cloudapp.net
  • Create an A record web.contoso.com –> public IP of the cloud service

Which option you prefer is up to you, but watch out with the last one! By default cloud services get a dynamic public IP. Once all VMs in the cloud service are stopped (deallocated), the cloud service stops as well and the public IP is released. Whenever you start your VMs again, they will no longer be reachable on that old public IP! There's an option to reserve your IP though; you can even reserve 5 for free with each subscription. For pricing details: http://azure.microsoft.com/en-us/pricing/details/ip-addresses/

Some relevant screenshots:

Before: “Virtual IP-Address” > Dynamic

Dyn

After: “Virtual IP-Address” > Reserved

dyn2

Assigning an IP is pretty straight forward, first we create one:

2

Then we reserve it:

3

Note: the reserved IP will not be the same as the IP currently in use, so make sure to coordinate this with your DNS record update!
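
For reference, the same can be done with the classic PowerShell cmdlets. A minimal sketch (the name and location are placeholders):

# Create a reserved IP and list the ones you already have
New-AzureReservedIP -ReservedIPName "contosoweb-ip" -Location "West Europe"
Get-AzureReservedIP | Select-Object ReservedIPName, Address, ServiceName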

ADFS Alternate Login ID: Some or all identity references could not be translated


First day back at work, I already had the chance to get my hands dirty with an ADFS issue at a customer. The customer had an INTERNAL.contoso.com domain and an EXTERNAL.contoso.com domain. Both were connected with a two-way forest trust. The INTERNAL domain also had an ADFS farm. Now they wanted users from both INTERNAL and EXTERNAL to be authenticated by that ADFS. Technically this is possible through the AD trust. Nothing special there; the catch was that they wanted both INTERNAL and EXTERNAL users to authenticate using @contoso.com usernames. Active Directory has no problem authenticating users with a UPN different from that of the domain. You can even share the UPN suffix namespace in more than one domain, but… you cannot route shared suffixes across the forest trust! In our case that would mean the ADFS instance would be able to authenticate user.internal@contoso.com but not user.external@contoso.com, as there would be no way to locate that user in the other domain.

Alternate Login ID to the rescue! Alternate Login ID is an ADFS feature that allows you to specify an additional attribute to be used for user lookups. Most commonly "mail" is used for this. It allows people to leave the UPN, commonly a non-public domain (e.g. contoso.local), untouched, although I mostly advise changing the UPN to something public (e.g. contoso.com). The cool thing about Alternate Login ID is that you can specify one or more LookupForests! In our case the command looked like:


Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID mail -LookupForests internal.contoso.com,external.contoso.com

Some more information about Alternate Login ID: TechNet: Configuring Alternate Login ID

Remark: When alternate login ID feature is enabled, AD FS will try to authenticate the end user with alternate login ID first and then fall back to use UPN if it cannot find an account that can be identified by the alternate login ID. You should make sure there are no clashes between the alternate login ID and the UPN if you want to still support the UPN login. For example, setting one’s mail attribute with the other’s UPN will block the other user from signing in with his UPN.
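
A quick way to verify what the AD claims provider trust ended up being configured with:

# Show the Alternate Login ID configuration on the claims provider trust(s)
Get-AdfsClaimsProviderTrust | Select-Object Name, AlternateLoginID, LookupForests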

Now where’s the issue? We could authenticate INTERNAL users just fine, but EXTERNAL users were getting an error:

3

In words:

The Federation Service failed to issue a token as a result of an error during processing of the WS-Trust request.

Activity ID: 00000000-0000-0000-5e95-0080000000f1

Request type: http://schemas.microsoft.com/idfx/requesttype/issue

Additional Data
Exception details:
System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
   at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess)
   at System.Security.Principal.SecurityIdentifier.Translate(Type targetType)
   at System.Security.Principal.WindowsIdentity.GetName()
   at System.Security.Principal.WindowsIdentity.get_Name()
   at Microsoft.IdentityModel.Claims.WindowsClaimsIdentity.InitializeName()
   at Microsoft.IdentityModel.Claims.WindowsClaimsIdentity.get_Claims()
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.AddClaimsInWindowsIdentity(UserNameSecurityToken usernameToken, WindowsClaimsIdentity windowsIdentity, DateTime PasswordMustChange)
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateTokenInternal(SecurityToken token)
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateToken(SecurityToken token)
   at Microsoft.IdentityModel.Tokens.SecurityTokenHandlerCollection.ValidateToken(SecurityToken token)
   at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.GetEffectivePrincipal(SecurityTokenElement securityTokenElement, SecurityTokenHandlerCollection securityTokenHandlerCollection)
   at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.Issue(RequestSecurityToken request, IList`1& identityClaimSet)

Now the weird part: just before the error I was seeing a successful login for that particular user:

2

I decided to start my search with this part: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. That led me to all kinds of blogs/posts where people were having issues with typos in scripts or with users that didn't exist in AD. But that wasn't the case for me; after all, I had just had a successful authentication! Using the first line of the stack trace, at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess), I took an educated guess at what the ADFS service was trying to do. And I was able to do the same using PowerShell:


$objSID = New-Object System.Security.Principal.SecurityIdentifier ("S-1-5-21-3655502699-1342072961-xxxxxxxxxx-1136") 
$objUser = $objSID.Translate( [System.Security.Principal.NTAccount]) 
$objUser.Value

And yes I got the same error!:

psError

At first sight this gave me nothing. But it was actually quite powerful: I was now able to reproduce the issue as many times as I liked, no need to go through the logon pages, and most importantly I could now take this PowerShell code and execute it on other servers! This way I could determine whether it was OS related, AD related, trust related,… I found out the following:

  • Command fails on ADFS-SRV-01
  • Command fails on ADFS-SRV-02
  • Command fails on WEB-SRV-01
  • Command runs on HyperV-SRV-01
  • Command runs on DC-INTERNAL-01

Now what did this teach me?

  • The command is fine and should work
  • The command runs fine on other 2012 R2 servers
  • The command runs fine on a member server (the Hyper-V server)

As I was getting nowhere with this, I decided to take a network trace on the ADFS server while executing the PowerShell command. I expected to see one of the typical SID translation methods (TechNet: How SIDs and Account Names Can Be Mapped in Windows) appear. However, absolutely nothing showed up?! No outgoing traffic related to this code. Now wtf? I had found this article: ASKDS: Troubleshooting SID translation failures from the obvious to the not so obvious, but that wouldn't help me if there was no traffic to begin with.

Suddenly an idea popped up in my head. What if the network traffic wasn't showing any SID resolving because the machine was looking locally? And why would the machine look locally? Perhaps if the domain portion of the machine's SID is the same as that of the user we were looking up? But they're in different domains… However, there's also the machine's local SID! The one that is typically never encountered or seen. Here's some info on it: Mark Russinovich: The Machine SID Duplication Myth (and Why Sysprep Matters)

I didn't take the time to find out whether I could retrieve its value with PowerShell, so I just took PsGetsid.exe from Sysinternals. This is what the command showed me for the ADFS server:

2015-08-03_14-43-07

Bazinga! It seemed the local SID of all the machines that were failing the command was the same as the domain portion of the EXTERNAL domain SIDs! I asked the customer to deploy a new test server so I could reproduce the issue one more time. Indeed, the issue appeared again; the local SID was again identical. Running sysprep on the server changed the local SID, and after joining the server to the domain again we were able to successfully execute the PowerShell commands!
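
By the way, I didn't verify it back then, but the local machine SID can probably also be retrieved without PsGetsid, along these lines (untested sketch; stripping the RID from the built-in Administrator account SID is my own assumption):

# The built-in local Administrator always has RID 500; dropping the RID leaves the machine SID
$adminAccount = Get-WmiObject Win32_UserAccount -Filter "LocalAccount=True AND SID LIKE '%-500'"
$localMachineSid = $adminAccount.SID -replace '-500$', ''
$localMachineSid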

Resolution:

The customer had been copying the same VHD over and over again without actually running sysprep on it… As the EXTERNAL domain was also created on a VM from that image, the domain controller promotion process chose that local SID as the base for the EXTERNAL domain SID. My customer chose to resolve this issue by destroying the EXTERNAL domain and setting it up again. Obviously this does not solve the fact that several servers were not sysprepped, and in the future this might cause other issues…

Sysprep location:

image

For a template you can run sysprep with generalize and the shutdown option:

image

Each time you boot a copy of your template it will run the sysprep process at first boot.
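
For reference, the command-line equivalent of the options shown above:

# Generalize the image and shut down afterwards (run from an elevated prompt)
& "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown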

P.S. Don’t run sysprep on a machine with software/services installed. It might have a nasty outcome…

FIM 2010 (NOT R2!) Upgrade to MIM 2016


This blog post will assist you in upgrading a FIM 2010 environment to MIM 2016. To be clear: FIM 2010, not FIM 2010 R2. Disclaimer: if you "play" around like I do below, make sure you use one or more of the following:

  • A test environment
  • SQL Backups
  • VM Snapshots

Trust me, sooner or later they'll save your life, or at least your day. After each attempt I did an SQL restore to be absolutely sure my upgrade path was OK. The installer "touches" the databases pretty quickly, even if it fails at the beginning of the process.

The upgrade process is explained on TechNet: Upgrading Forefront Identity Manager 2010 R2 to Microsoft Identity Manager 2016 as well, but that guide is only partially applicable to the scenario I had in mind:

  • No information on upgrading from FIM 2010, only FIM 2010 R2 is mentioned
  • No information on transitioning to a more recent Operating System
  • No information on transitioning to a more recent database platform

In order to clarify I’ll show a topology diagram of our current setup:

visio

Current versions:

  • Operating System: Windows 2008 R2
  • SQL: SQL Server 2008
  • FIM: FIM 2010 (build 4.0.3576.2)

Target versions:

  • Operating System: Windows 2012 R2
  • SQL: SQL Server 2012 SP1
  • FIM: MIM 2016 (RTM)

I won't post a target diagram, as in our case we decided not to change anything about the topology. We intend to upgrade FIM 2010 to MIM 2016. However, we would also like to upgrade the various supporting components such as the underlying operating system and the SQL Server edition. The TechNet guide shows you what has to be done to perform an in-place upgrade of FIM 2010 R2 to MIM 2016. If I were to do an in-place upgrade I would end up with MIM 2016 on Server 2008 R2, and I'd rather not do an in-place upgrade of 2008 R2 to 2012 R2. That means I would have to migrate MIM 2016 to another box anyway. Another disadvantage of upgrading in place is that you'll have downtime during the upgrade. Well, eventually you'll have some downtime, but if you can leave the current environment intact, you can avoid the lengthy restore process if something goes wrong. And what about the database upgrade processes? Depending on your environment they can take quite some time. If you want to plan your window for the upgrade, you can follow my approach as a "dry run" with the production data without impacting your current (running) environment! If you're curious how, read on!

I wanted to determine the required steps to get from current to target with the least amount of hassle. I’ll describe the steps I followed and the issues I encountered:

Upgrading/Transitioning the FIM Synchronization Service: Attempt #1

  1. Stop and disable all scheduled tasks that execute run profiles
  2. Stop and disable all FIM 2010 services (both Sync and Service)
  3. Backup the FIMSynchronization database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMSynchronization database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMSynchronization database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Copy the FIM Synchronization service encryption keys  to the Windows 2012 R2 Server
  8. Ran the MIM 2016 Synchronization Service MSI on the Windows 2012 R2 server

However that resulted in the following events and concluded with an MSI installation failure:

Error1

In words: Error 25009.The Microsoft Identity Manager Synchronization Service setup wizard cannot configure the specified database. Invalid object name 'mms_management_agent'. <hr=0x80230406>

And in the Application Event log:

sync1

In words:Conversion of reference attributes started.

sync2

In words: Conversion of reference attributes failed.

Sync3

In words: Product: Microsoft Identity Manager Synchronization Service -- Error 25009.The Microsoft Identity Manager Synchronization Service setup wizard cannot configure the specified database. Invalid object name 'mms_management_agent'. <hr=0x80230406>

The same information was also found in the MSI verbose log. Some googling led me to some fixes regarding SQL access rights or the SQL compatibility level, none of which worked for me.

Upgrading/Transitioning the FIM Synchronization Service: Attempt #2

This attempt is mostly the same as the previous. However now I’ll be running the MIM 2016 installer directly on the FIM 2010 Synchronization Server. I’ll save you the trouble: it fails with the exact same error. As a bonus the setup rolls back and leaves you with a server with NO FIM installed.

Upgrading/Transitioning the FIM Synchronization Service: Attempt #3

I’ll provide an overview of the steps again:

  1. Stop and disable all scheduled tasks that execute run profiles
  2. Stop and disable all FIM 2010 services (both Sync and Service)
  3. Backup the FIMSynchronization database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMSynchronization database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMSynchronization database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Install a new (temporary) Windows 2012 Server
  8. Copy the FIM Synchronization service encryption keys to the Windows 2012 Server
  9. Run the FIM 2010 R2 (4.1.2273.0) Synchronization Service MSI on the Windows 2012 server–> Success
  10. Stop and disable the FIM Synchronization Service on the Windows 2012 server
  11. Copy the FIM Synchronization service encryption keys to the Windows 2012 R2 Server
  12. Run the MIM 2016 Synchronization Service MSI on the Windows 2012 R2 server

Again that resulted in several events and concluded with an MSI installation failure:

SyncErrorBis

In words: Product: Microsoft Identity Manager Synchronization Service -- Error 25009.The Microsoft Identity Manager Synchronization Service setup wizard cannot configure the specified database. Incorrect syntax near 'MERGE'. You may need to set the compatibility level of the current database to a higher value to enable this feature. See help for the SET COMPATIBILITY_LEVEL option of ALTER DATABASE.

Now that's an error that doesn't seem too scary. It's clearly suggesting to raise the database compatibility level so that the MERGE feature is available.

Upgrading/Transitioning the FIM Synchronization Service: Attempt #4 –> Success!

I’ll provide an overview of the steps again:

  1. Stop and disable all scheduled tasks that execute run profiles
  2. Stop and disable all FIM 2010 services (both Sync and Service)
  3. Backup the FIMSynchronization database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMSynchronization database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMSynchronization database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Don’t worry about the SQL Agent Jobs, the MIM Service setup will recreate those
  8. Install a new (temporary) Windows 2012 Server
  9. Copy the FIM Synchronization service encryption keys to the Windows 2012 Server
  10. Run the FIM 2010 R2  (4.1.2273.0) Synchronization Service MSI on the Windows 2012 server
  11. Stop and disable the FIM Synchronization Service on the Windows 2012 server
  12. Changed the SQL Compatibility Level to 2008 (100) on the database
  13. Copy the FIM Synchronization service encryption keys to the Windows 2012 R2 Server
  14. Run the MIM 2016 Synchronization Service MSI on the Windows 2012 R2 server –> Success!

Changing the compatibility level can easily be done using SQL Management Studio:

sqlCompatLevel

In my case it was on SQL Server 2005 (90) and I changed it to SQL Server 2008 (100). If you prefer doing this through an SQL query:

USE [master]
GO

ALTER DATABASE [FIMSynchronization] SET COMPATIBILITY_LEVEL = 100
GO

Bonus information:

This is the command I ran to install both the FIM 2010 R2 and MIM 2016 Synchronization Instance:

Msiexec /i "Synchronization Service.msi" /qb! STORESERVER=sqlcluster.contoso.com SQLINSTANCE=fimsql SQLDB=FIMSynchronization SERVICEACCOUNT=svcsync SERVICEDOMAIN=CONTOSO SERVICEPASSWORD=PASSWORD GROUPADMINS=CONTOSO\GGFIMSyncSvcAdmins GROUPOPERATORS=CONTOSO\GGFIMSyncSvcOps GROUPACCOUNTJOINERS=CONTOSO\GGFIMSyncSvcJoiners GROUPBROWSE=CONTOSO\GGFIMSyncSvcBrowse GROUPPASSWORDSET=CONTOSO\GGFIMSyncSvcPWReset FIREWALL_CONF=1 ACCEPT_EULA="1" SQMOPTINSETTING="0" /l*v C:\MIM\LOGS\FIMSynchronizationServiceInstallUpgrade.log

No real rocket science here. However, make sure not to run /q but use /qb!, as the latter allows popups to be shown and answered by you, for instance when prompted to provide the encryption keys.

Upgrading/Transitioning the FIM Service: Attempt #1 –> Success!

Now to be honest, the upgrade I feared the most proved to be the easiest. From past FIM experiences I know the FIM Service comes with a DB upgrade utility, and the setup runs it for you. I figured: why on earth would they throw away the information to upgrade from FIM 2010 to FIM 2010 R2 and cripple the tool so that it can only upgrade FIM 2010 R2 to MIM 2016?! And indeed, they did not! Here are the steps I took to upgrade my FIM Portal & Service:

  1. Stop and disable all scheduled tasks that execute run profiles => this was already the case
  2. Stop and disable all FIM 2010 services (both Sync and Service) => this was already the case
  3. Backup the FIMService database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMService database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMService database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Installed a Standalone Sharepoint 2013 Foundation SP2
  8. Run the MIM 2016 Service and Portal MSI on the Windows 2012 R2 server –> Success!
  9. Note: the compatibility level was raised to 2008 (100) by the setup

One thing that assured me the FIM Service database was upgraded successfully was the database upgrade log. The following event indicates where you can find it:

dbupgrade

The path: c:\Program Files\Microsoft Forefront Identity Manager\2010\Service\Microsoft.IdentityManagement.DatabaseUpgrade_tracelog.txt. An extract:

Database upgrade : Started.
Database upgrade : Starting command line parsing.
Database upgrade : Completed commandline parsing.
Database upgrade : Connection string is : Data Source=sqlcluster.contoso.com\fimsql;Initial Catalog=FIMService;Integrated Security=SSPI;Pooling=true;Connection Timeout=225.
Database upgrade : Trying to connect to database server.
Database upgrade : Succesfully connected to database server.
Database upgrade : Setting the database version to -1.
Database upgrade : Starting database schema upgrade.
Schema upgrade: Starting schema upgrade
Schema upgrade : Upgrading FIM database from version: 20 to the latest version.
Schema upgrade : Starting schema upgrade from version 20 to 21.
...
Database upgrade : Out-of-box object upgrade completed.
Database ugrade : Completed successfully.
Database upgrade : Database version upgraded from: 20 to: 2004
The AppDomain's parent process is exiting.

You can clearly see that the database upgrade utility intelligently detects the current (FIM 2010) schema and upgrades all the way to the MIM 2016 database schema.

Bonus information

This is the command I ran to install the MIM 2016 Portal and Service. Password reset/registration portals are not deployed, no reporting and no PIM components. If you just want to test your FIM Service database upgrade, you can even get away with only installing the CommonServices component.

Msiexec /i "Service and Portal.msi" /qb! ADDLOCAL=CommonServices,WebPortals SQLSERVER_SERVER=sqlcluster.contoso.com\fimsql SQLSERVER_DATABASE=FIMService EXISTINGDATABASE=1 SERVICE_ACCOUNT_NAME=svcfim SERVICE_ACCOUNT_DOMAIN=CONTOSO SERVICE_ACCOUNT_PASSWORD=PASSWORD SERVICE_ACCOUNT_EMAIL=svcfim@contoso.com MAIL_SERVER=mail.contoso.com MAIL_SERVER_USE_SSL=1 MAIL_SERVER_IS_EXCHANGE=1 POLL_EXCHANGE_ENABLED=1 SYNCHRONIZATION_SERVER=fimsync.contoso.com SYNCHRONIZATION_SERVER_ACCOUNT=CONTOSO\svcfimma SERVICEADDRESS=fimsvc.contoso.com FIREWALL_CONF=1 SHAREPOINT_URL=http://idm.contoso.com SHAREPOINTUSERS_CONF=1 ACCEPT_EULA=1 FIREWALL_CONF=1 SQMOPTINSETTING=0 /l*v c:\MIM\LOGS\FIMServiceAndPortalsInstall.log

Note: SQL Management Studio Database Backup

Something I learned in the past year or so: whenever taking an “ad hoc” SQL backup, make sure to check the “Copy-only backup” box. That way you won’t interfere with the regular backups that have been configured by your DBA/Backup Admin.

SQL Backup
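
If you prefer PowerShell over the Management Studio dialog, the same copy-only backup can be taken with the SQLPS module. A sketch; instance, database and target path are placeholders:

# Ad hoc copy-only backup that doesn't interfere with the regular backup chain
Import-Module SQLPS -DisableNameChecking
Backup-SqlDatabase -ServerInstance "sqlcluster.contoso.com\fimsql" `
    -Database "FIMService" `
    -BackupFile "D:\Backups\FIMService_copyonly.bak" `
    -CopyOnly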

Note: SQL Server Service Broker

Lately I've seen cases where applications are unhappy because SQL Server Service Broker is disabled for their database. In my case it was an ADFS setup, but here's an (older) example for FIM: http://justanothertechguy.blogspot.be/2012/11/fim-2010-unable-to-start-fim-service.html Typically a database that is restored from an SQL backup has this feature disabled. I checked the SQL Server Service Broker setting for the FIM databases on the SQL 2008 platform and it was enabled; on the SQL 2012 instance where I did the restore it was off. Here are some relevant commands:

Checking whether it’s on for your database:

SELECT is_broker_enabled FROM sys.databases WHERE name = 'FIMSynchronization';

brokerOff

Enable:

ALTER DATABASE FIMSynchronization SET ENABLE_BROKER WITH NO_WAIT

If issues arise you can try the following instead. I'm not an SQL guy; all I can guess is that it handles things less gracefully:

ALTER DATABASE FIMSynchronization SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

And now it’s on:

BrokerOn

Remark

I didn't go all too deep into certain details; if you feel something is unclear, post a comment and I'll see if I can add information where needed. The above doesn't describe how to install the second FIM Portal/Service server or the standby MIM Synchronization Server. I expect those instructions to be fairly simple as the database is already at the correct level. If I encounter issues you can expect a post on that as well. To conclude: there's more to do than just running the MIM installers and being done with it. You'll have to transfer your customizations (like custom workflow DLLs) as well. FIM/MIM Synchronization extensions are transferred for you, but be sure to test everything! Don't assume! Happy upgrading!

Conclusion

The FIM 2010 Portal and Service can be upgraded to MIM 2016 without the need for a FIM 2010 R2 intermediate upgrade. The FIM 2010 Synchronization Service could not be upgraded directly to MIM 2016. This could be tied to something specific in our environment, or it could be common…

Update #1 (12/08/2015)

Someone asked me whether FIM can be upgraded without providing the FIM/MIM Synchronization Service encryption keys. Obviously it cannot; that part has not changed. Whenever you install FIM/MIM on a new box and point it to an existing database, it will prompt for the key file. I've added some "copy keys" steps to my process so that you have them ready when the MSI prompts for them.

Azure Quick Tip: Block or Allow ICMP using Network Security Groups


For a while now Azure has allowed administrators to restrict network communications between virtual machines in Azure. Restrictions can be configured through the use of Network Security Groups (NSGs), which can be linked to either subnets or virtual machines. Check the following link if you want some more background information: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/

An NSG always contains some default rules. By default all outbound traffic is allowed, and inbound traffic from other subnets (not the internet) is also allowed. Typically, if you ping between VMs on different subnets (same VNET) you'll see that the machines respond as expected.

Now what if you want to restrict traffic between subnets but still allow ICMP? ICMP is great for troubleshooting connectivity. Set-AzureNetworkSecurityRule allows you to provide the protocol parameter. In a typical firewall scenario this value would contain TCP, UDP, ICMP, … Ping uses ICMP, which is neither TCP nor UDP… Azure only seems to allow TCP, UDP and * for the protocol:

image

Now how can we block all traffic but allow ICMP? Simple, by explicitly denying UDP and TCP but allowing *. In this example I included the allow rule, but it should be covered by the default rules anyhow.


#allow ping, block UDP/TCP
Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name BlockTCP -Type Inbound -Priority 40000 -Action Deny -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "TCP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name BlockUDP -Type Inbound -Priority 40001 -Action Deny -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "UDP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name AllowPing -Type Inbound -Priority 40002 -Action Allow -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "*"

If we want to work the other way round: allow UDP/TCP but block ICMP we can turn the logic around:


#block ping, allow UDP/TCP
Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name AllowTCP -Type Inbound -Priority 40000 -Action Allow -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "TCP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name AllowUDP -Type Inbound -Priority 40001 -Action Allow -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "UDP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name BlockPing -Type Inbound -Priority 40002 -Action Deny -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "*"

The source/destination information is pretty open as I use * for those, but that's just an example here. It's up to you to decide which ranges to apply this to, and you'll probably want to open up some additional ports for actual traffic to be allowed. The above logic is also mentioned in the information I linked at the beginning of the article:

The current NSG rules only allow for protocols ‘TCP’ or ‘UDP’. There is not a specific tag for ‘ICMP’. However, ICMP traffic is allowed within a Virtual Network by default through the Inbound VNet rules that allow traffic from/to any port and protocol ‘*’ within the VNet.

Kudos to my colleague Nichola (http://www.vnic.be) for taking the time to verify this.

MIM 2016: PowerShell Workflow and PowerShell v3


One of the issues of running FIM 2010 R2 on Windows Server 2012 is calling PowerShell scripts from within FIM Portal workflows (.NET). It seems the workflow code runs on .NET 3.5 but uses PowerShell 2.0. When we started migrating our FIM 2010 to MIM 2016 (on Server 2012 R2) we ran into the same issues. This is the .NET code that had been running fine on Windows 2008 R2 for years without any issues:

RunspaceConfiguration config = RunspaceConfiguration.Create();
Runspace runspace = RunspaceFactory.CreateRunspace(config);
runspace.Open();
psh = PowerShell.Create();
psh.Runspace = runspace;
psh.AddCommand(this.PSCmdlet);

psh.Invoke();

And one of the scripts that was executed contained code like this:


doSomething.ps1

#region Parameters
Param([string]$UserName,[string]$Department)
#endregion Parameters
Import-Module ActiveDirectory
Get-Aduser 
...

Now when porting that same logic to our MIM 2016 running on Windows Server 2012 R2, we saw that our Get-AD* cmdlets returned nothing. After some investigation we found the following error was triggered when running Import-Module ActiveDirectory: The 'C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\ActiveDirectory\ActiveDirectory.psd1' module cannot be imported because its manifest contains one or more members that are not valid. The valid manifest members are ('ModuleToProcess', 'NestedModules', 'GUID', 'Author', 'CompanyName', 'Copyright', 'ModuleVersion', 'Description', 'PowerShellVersion', 'PowerShellHostName', 'PowerShellHostVersion', 'CLRVersion', 'DotNetFrameworkVersion', 'ProcessorArchitecture', 'RequiredModules', 'TypesToProcess', 'FormatsToProcess', 'ScriptsToProcess', 'PrivateData', 'RequiredAssemblies', 'ModuleList', 'FileList', 'FunctionsToExport', 'VariablesToExport', 'AliasesToExport', 'CmdletsToExport'). Remove the members that are not valid ('HelpInfoUri'), then try to import the module again.

There are various topics online that cover this exact issue.

It seems some PowerShell modules are hardwired to require PowerShell v3. I came across the following suggestion a few times, but it scares me a bit: with my (limited?) knowledge of .NET it's hard to estimate what impact it might have on FIM. The suggestion was to add the following to the Microsoft.ResourceManagement.Service.exe.config file:


<startup>
 <supportedRuntime version="v4.0"/>
 <supportedRuntime version="v2.0.50727"/>
</startup>

I found some approaches using a script that calls another script, but I wanted to avoid this. So I came up with the following approach to update the workflow itself:

PowerShellProcessInstance instance = new PowerShellProcessInstance(new Version(3, 0), null, null, false);
Runspace runspace = RunspaceFactory.CreateOutOfProcessRunspace(new TypeTable(new string[0]), instance);

Source: http://stackoverflow.com/questions/22383915/how-to-powershell-2-or-3-when-creating-runspace

The PowerShellProcessInstance is a class that is available in System.Management.Automation, which is part of PowerShell itself. I tried various DLLs, but either they didn't know the class or they resulted in the following error when building my .NET project:

The primary reference "System.Management.Automation" could not be resolved because it has an indirect dependency on the .NET Framework assembly "System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.0.0" than the version "2.0.0.0" in the current target framework.    FODJ.FIM.Workflow.ActivityLibrary

My project is configured to build for .NET 3.5, and if I'm not mistaken .NET 3.5 uses CLR 2.0, whilst .NET 4/4.5 uses CLR 4.0 (see .NET Framework Versions and Dependencies). So I guess this route isn't going to work after all. Back to the drawing board. As I only have a handful of scripts to call like this, I decided to go back to the wrapper script approach.

The script containing the logic to be executed:


doSomething.script.ps1

#region Parameters
Param([string]$UserName,[string]$Department)
#endregion Parameters
Import-Module ActiveDirectory
Get-Aduser 
...

As you can see I inserted .script before the .ps1 extension. And here's my wrapper script. This is the one that is called from the FIM/MIM workflow:


doSomething.ps1

Param([string]$UserName,[string]$Department)
$script = $myinvocation.invocationName.replace(".ps1",".script.ps1")
powershell -version 3.0 -file $script -UserName $UserName -Department $Department

There are some things to note: the Param line is just a copy-paste from the base script, and I simply specify the parameters again when calling the base script. I had been looking for a way to use unbound parameters, e.g. the calling workflow says –username … –department … and the wrapper script just passes them along. That would have allowed me to have a generic wrapper script. I got pretty close to getting it to work, but I kept running into issues. In the end I just decided to go for KISS.
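
For what it's worth, this is a sketch of the generic wrapper idea I gave up on (untested; the file name genericWrapper.ps1 is hypothetical). Whatever named arguments the workflow passes end up in $args as plain strings and are simply forwarded to the matching *.script.ps1 under PowerShell 3.0:

# genericWrapper.ps1 - forward all received arguments to the matching .script.ps1
$script = $MyInvocation.InvocationName -replace '\.ps1$', '.script.ps1'
powershell -version 3.0 -file $script @args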

Note: if you want to capture errors like the one I showed from Import-Module ActiveDirectory, just use the $error variable. You can use it like this; saving it to disk is just one example. Typically you could integrate this with your logging function.


$error.Clear()
Import-Module ActiveDirectory
$error | Out-File c:\users\public\error.txt

Azure Management Portal: Properly Remove Co-Administrator Permissions


Something I've noticed for a while now: whenever I perform an Add-AzureAccount I see more subscriptions being returned than I'd expect. The list I have to choose from in the old portal (manage.windowsazure.com) definitely doesn't show that many subscriptions. The new portal (portal.azure.com) also displays more subscriptions than I'd expect. The problem with sorting those out is that many of them belong to subscriptions I once had access to but no longer do, either from customers or test subscriptions from colleagues.

For test subscriptions I don't really care whether people take my permissions away or not. But for production subscriptions I feel more at ease when I don't have any permissions I don't need. Lately a customer mentioned my permissions had been taken away, but I still saw their entry in the new portal. Hmm, odd! Here's how that's possible:

First off, I was initially granted access with my Microsoft account (invisibal_at_gmail.com) through the old portal:

image

Now I could manage that subscription through both old and new Portal.

image

And as I also worked for another "customer", I had multiple subscriptions to manage: Setspn and RealDolmen Azure POC:

image

After my work was done, the customer removed me from the list of Administrators of the Setspn subscription.

subvs

su2

Now when I log in to the old Portal (manage.windowsazure.com) I’ll only see the other subscription.

image

However, when I log on to the new Portal, it’s still there!

image

Trying to show “all resources” of the Setspn subscription shows nothing. As expected.

image

The same is observed through PowerShell:

image

Now the only solution I could think of was to also remove the Microsoft account from the Azure Active Directory the subscription is linked to.

Capture3

Captur4e

After removing the user from the Azure AD, you’ll no longer see the subscription in the new Portal:

image

Well, as you can see, not exactly… Typically when you try to reproduce things for screenshots, it doesn't happen or it goes wrong. This is a case of "it goes wrong". I tried a few times, but the GUID (belonging to the Azure AD I was part of) kept appearing… All I can say is that when the customer actually removed me from their Azure AD, it got properly removed from my Azure portal UI and PowerShell experience.

Conclusion:

I'm pretty sure the only reason you keep seeing the entry in the new portal is that you still have the User role assigned in the Azure Active Directory instance. So in a way you're not really seeing the subscription, but rather the Azure Active Directory instance. But the issue remains the same: it clutters your PowerShell (Get-AzureSubscription) and portal UI experience. So whenever someone takes your co-administrator permissions away, ask them to also remove you from the Azure AD instance.


MIM 2016: no-start-ma when Exporting to Active Directory


Recently I did an upgrade of FIM 2010 to MIM 2016 for a customer of mine. I've described that process here. We've only upgraded our lab environment and are now testing whether everything works as expected. Today I was testing the flow that is triggered by adding a new user to the HR source. One of the things that MIM is supposed to do is create an AD account and an Exchange mailbox. However, when the export run profile was executed on the AD MA we saw the following error:

error1

Status: no-start-ma

In the Application event log:

error2

In words: The management agent controller encountered an unexpected error.
 
"ERR_: MMS(8228): ..\libutils.cpp(10186): Failed to start run because of undiagnosed MA error
Forefront Identity Manager 4.3.1935.0"

When troubleshooting an issue like this it's important to narrow down the possible causes. Is there a connectivity issue with AD? Is there an issue with a rules extension? Is there an issue with the Exchange provisioning component? The latter is quite easy to check: on the Configure Extensions page we can simply set "Provision for" to "No Provisioning".

workaround1

After disabling Exchange provisioning, the MA seemed to be able to export just fine, so something was up with the Exchange provisioning. To be sure nothing was wrong with the remote PowerShell endpoint, I tested the URL by opening a remote PowerShell connection to Exchange [technet]. That seemed to go fine. After looking some more in the Application event log I also noticed several application crash events (event 1000) whenever I was trying to run an export profile. The application was mmsscrpt.exe. I'm guessing that's the utility being used to set up the remote PowerShell session and call the Update-Recipient cmdlet.
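
For reference, this is roughly how I tested the remote PowerShell endpoint (a sketch; the ConnectionUri is a placeholder for your own Exchange server):

# Open a remote PowerShell session against Exchange and check that Update-Recipient is available
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "http://exchange01.contoso.com/PowerShell/" -Authentication Kerberos
Import-PSSession $session
Get-Command Update-Recipient
Remove-PSSession $session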

I found an older article (link) stating that errors like this might occur whenever .NET 4.0 is missing. But in my case I was running on Server 2012 R2 with .NET 4.5.2 installed. Either way, that article pushed me into suspecting .NET. We had installed .NET 4.5.2 using the Add-WindowsFeature cmdlet. This is the exact .NET version we had:

netBefore
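
For reference, the values in that screenshot can be read straight from the registry:

# Show the installed .NET 4.x version and release number
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" |
    Select-Object Version, Release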

As you can see we were running 4.5.51650, which matches .NET 4.5.2 (May 2014 Update), if I may believe http://deletionpedia.org/en/List_of_.NET_Framework_versions. I binged a bit to find out whether there were any updates available for .NET 4.5.2 but I couldn't find any. Then a colleague of mine (thanks Kevin!) reminded me that .NET 4.6 had very recently gone RTM. So I went ahead and downloaded it from here:

After installing the 4.6 package, the .NET version in the registry showed 4.6.00081. After a reboot I performed the test again and now I could export to AD while provisioning mailboxes on Exchange!

Conclusion:

Whenever you are preparing a Server 2012 R2 machine to host the FIM Synchronization Service, do not forget to download and install .NET 4.6, as the .NET 4.5.2 that comes out of the box is not sufficient.

MIM 2016: Failed to Connect to the Specified Database


I ran into another issue after upgrading a FIM 2010 deployment to MIM 2016. As part of the OS/infrastructure refresh I moved the database to a more recent SQL Server platform. One of the things I initially forgot, but found out pretty quickly, is that I obviously also needed to update the FIM Management Agent parameters so that they point to the new database location. However, when I clicked OK I got the following error:

bError1

In words: Failed to connect to specified database. Failed to connect to the specified database with the given credentials.

And in the Forefront Identity Manager Management Agent event log (on the FIM Sync server):

bError3

In words: mmsmafim: MIIS.ManagementAgent.ManagedMACredentialFailureException: Failed to connect to the specified database with the given credentials.
   at MIIS.ManagementAgent.RavenMA.UIValidateCredentials(String pszCredentials, Int32& pfValid, String& ppszResult)

Upon looking a bit deeper I found the following error in the SQL Server logs:

bError2

In words: Login failed for user 'CONTOSO\FIMSYNC'. Reason: Failed to open the explicitly specified database 'fimservice'. [CLIENT: 10.x.y.z]
Error: 18456, Severity: 14, State: 38.
This was a bit odd; it was complaining that the FIM Synchronization Service account had no access to the FIMService database. I'd expect the FIM MA account to be used for this connection…

I checked the old database/SQL instance and I could confirm the FIM Sync service account has no access to that database there either… As I didn't want to start handing out permissions on a wild guess, I started googling a bit. I came up with this TechNet thread: TechNet Forum: FIM 2010 Update Rollup 2 Problem, which suggests granting the FIM Sync service account the FIM_SynchronizationService role within the FIMService database:

bFix

I then tried the FIM MA configuration again. Attempt #2:

cError1

In words: Unable to update the management agent. The object explorer specified was not found. (Exception from HRESULT: 0x80070776)

Back to Google, which gave me this thread: TechNet Forum: Strange stopped-extension-dll-file-not-found error, where Craig suggests rebooting the FIM server. I rebooted the server and undid the SQL changes. And tada! It seems all the Synchronization Service needed was a good old restart. Perhaps restarting the sync service alone would have been enough, but I rebooted the box anyhow.

Conclusion

I was having trouble reconfiguring the FIM MA and was able to resolve it by simply rebooting the sync server. Configuration changes on SQL were not required.

Azure VPN Gateway Sizes


One of the things I've found very confusing is VPN Gateway sizing, especially the mismatch between the pricing table and what the systems show you. Here's the technical information:

image

Source: Azure.com: About VPN Gateways. The same table is more or less available on the pricing page as well. There you can clearly see that the price difference is real: pricing goes from 0,0304€ over 0,1603€ to 0,4133€ per gateway hour. As a VPN Gateway runs 24/7, this might have an impact on your bill (Pricing Source). From a technical point of view both Basic and Standard offer the same features/performance for non-ExpressRoute VPNs.

Conclusion #1: If you don’t need Express Route, there’s no difference between Standard and Basic.

Now what was bothering me: in many blogs/documentation people explain how to change the Gateway SKU, and they always mention Default or HighPerformance. When you create a gateway from the "old" portal, this is how the resulting gateway looks in PowerShell:

2015-09-03_9-31-57

As you can see the GatewaySKU is Default. Now it might be just me, but how on earth are we supposed to know what value Default is? Luckily there's a place like Azure Advisors on Yammer where you get to ask questions like this; the Microsoft PMs do a great job of helping us out and gathering feedback on various topics. The answer I got there is: Default is the same as Basic. Which does make sense in a way, as we initially had either Basic or HighPerformance. But it does suck a bit that instead of using a default value they are setting Default as a value, if you catch my drift…

Here's the PowerShell cmdlet reference for the New-AzureVNetGateway cmdlet. There you can see that the cmdlet takes Basic, Standard and HighPerformance as values for the GatewaySKU parameter.
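
So if you want to check what you currently have, or create a gateway with an explicit SKU, something along these lines should work (the VNet name is a placeholder):

# Inspect the SKU of an existing gateway and create a new one with an explicit SKU
Get-AzureVNetGateway -VNetName "MyVNet" | Select-Object State, GatewaySKU, VIPAddress
New-AzureVNetGateway -VNetName "MyVNet" -GatewayType DynamicRouting -GatewaySKU Standard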

Conclusion #2: Default SKU == Basic SKU

Another area that might confuse you is how the new portal displays gateway sizes. For all types (Basic, Standard and HighPerformance) a size of Small is shown. Lucian also mentions this on his blog: blog.kloud.com: Azure VNET gateway: basic, standard and high performance. But I must admit that I haven't looked into this; I primarily cared about SKU vs pricing.

Direct Access: Windows Internal Database (SQL) High CPU Usage


I've got a customer who deployed Direct Access quite a while ago. Something we have observed for a while now is that the CPU usage of the servers is rather high. Some details about our setup: we have 2 Direct Access servers which are load balanced using Windows NLB. They are running Windows 2012 R2 and have 4 vCPUs and 8 GB of RAM. When troubleshooting this issue we were seeing 400 active users, roughly 200 per server. Here's what the CPU usage looked like:

image001

As you can see, sqlservr.exe is using 67% CPU. Now that's quite a lot… I would hope a DA server had other things to do with its CPU than running an SQL instance. Now, I know where this instance comes from: we configured inbox accounting on the Direct Access servers. This allows an administrator to pull up reports about who connected when to what resources. You can choose between RADIUS and the Windows Internal Database (WID) as the target for the accounting data; we chose the WID approach and configured our accounting to hold data for 3 months. So I started wondering: is the SQL database instance having trouble with the amount of data? Or is there an issue with fragmented indexes, or… In order to investigate this, we'd have to do some SQL talking to this instance. As it's a WID instance, we can only talk to it from the box itself, so we can either install the SQL command-line tools or SQL Management Studio. I'm not an SQL guru, so I prefer to do my troubleshooting using SQL Management Studio. In order to determine what version you can use, you can check the location of the sqlservr.exe binary:

image003

And from the details you can see that the WID on Windows 2012 R2 is actually build 11.0.2100.60 which, if Bing is correct, corresponds to SQL Server 2012.

image005
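If you prefer to check the binary version from PowerShell rather than the file properties dialog, something like this should do. The path below is the default WID install location; adjust it if your installation differs:

# Read the file version of the WID sqlservr.exe (default WID path assumed)
(Get-Item "C:\Windows\WID\Binn\sqlservr.exe").VersionInfo | Select-Object ProductVersion, FileVersion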

So I took the SQL 2012 ISO and installed SQL Management Studio on the DA servers. Watch out when going through the setup: we don’t want to install another SQL instance, just the management tools. Here’s the string we can use to connect to the instance: \\.\pipe\MICROSOFT##WID\tsql\query
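If you’d rather skip the Management Studio install and stay on the command line, something along these lines should work. This assumes the SQLPS (or newer SqlServer) PowerShell module is available on the box; the np: prefix tells the client to connect over the named pipe:

# Connect to the WID instance over its named pipe and list the databases
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "np:\\.\pipe\MICROSOFT##WID\tsql\query" -Query "SELECT name FROM sys.databases"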

image007

After connecting to the instance we see that there’s only one database (RaAcctDb), which has 4 tables. To check all indexes for fragmentation issues, execute the following query (I found it on the TechNet Gallery; it also resembles the query presented in KB2755960):

SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName,
    ind.name AS IndexName, indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind
    ON ind.object_id = indexstats.object_id
    AND ind.index_id = indexstats.index_id
WHERE indexstats.avg_fragmentation_in_percent > 30
ORDER BY indexstats.avg_fragmentation_in_percent DESC

No indexes were returned. Another thing I hear from time to time is to rebuild the statistics. So I checked them and saw they were two weeks old. I figured rebuilding them couldn’t hurt:

 

USE RaAcctDb;
UPDATE STATISTICS connectionTable WITH FULLSCAN
GO

USE RaAcctDb;
UPDATE STATISTICS EndpointsAccessedTable WITH FULLSCAN
GO

USE RaAcctDb;
UPDATE STATISTICS ServerEndpointTable WITH FULLSCAN
GO

USE RaAcctDb;
UPDATE STATISTICS SessionTable WITH FULLSCAN
GO
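As a side note, if you want to check how stale the statistics are before deciding to rebuild them, a rough query using STATS_DATE (run over the same WID pipe, again assuming the SQLPS module is present) could look like this:

# List every statistics object in RaAcctDb and when it was last updated
$query = @"
USE RaAcctDb;
SELECT OBJECT_NAME(s.object_id) AS TableName, s.name AS StatName,
       STATS_DATE(s.object_id, s.stats_id) AS LastUpdated
FROM sys.stats AS s
ORDER BY LastUpdated;
"@
Invoke-Sqlcmd -ServerInstance "np:\\.\pipe\MICROSOFT##WID\tsql\query" -Query $query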

Again no real change in CPU usage… OK, back to the drawing board. I googled a bit for “high CPU usage SQL” and found the following blog: http://mssqlfun.com/2013/04/01/dmv-3-what-is-currently-going-on-sys-dm_exec_requests-2/ One of the queries there is this one:

 

SELECT
    R.SESSION_ID,
    R.REQUEST_ID AS SESSION_REQUEST_ID,
    R.STATUS,
    S.HOST_NAME,
    C.CLIENT_NET_ADDRESS,
    CASE WHEN S.LOGIN_NAME = S.ORIGINAL_LOGIN_NAME THEN S.LOGIN_NAME ELSE S.LOGIN_NAME + ' (' + S.ORIGINAL_LOGIN_NAME + ')' END AS LOGIN_NAME,
    S.PROGRAM_NAME,
    DB_NAME(R.DATABASE_ID) AS DATABASE_NAME,
    R.COMMAND,
    ST.TEXT AS QUERY_TEXT,
    QP.QUERY_PLAN AS XML_QUERY_PLAN,
    R.WAIT_TYPE AS CURRENT_WAIT_TYPE,
    R.LAST_WAIT_TYPE,
    R.BLOCKING_SESSION_ID,
    R.ROW_COUNT,
    R.GRANTED_QUERY_MEMORY,
    R.OPEN_TRANSACTION_COUNT,
    R.USER_ID,
    R.PERCENT_COMPLETE,
    CASE R.TRANSACTION_ISOLATION_LEVEL
        WHEN 0 THEN 'UNSPECIFIED'
        WHEN 1 THEN 'READUNCOMITTED'
        WHEN 2 THEN 'READCOMMITTED'
        WHEN 3 THEN 'REPEATABLE'
        WHEN 4 THEN 'SERIALIZABLE'
        WHEN 5 THEN 'SNAPSHOT'
        ELSE CAST(R.TRANSACTION_ISOLATION_LEVEL AS VARCHAR(32))
    END AS TRANSACTION_ISOLATION_LEVEL_NAME
FROM
    SYS.DM_EXEC_REQUESTS R
    LEFT OUTER JOIN SYS.DM_EXEC_SESSIONS S ON S.SESSION_ID = R.SESSION_ID
    LEFT OUTER JOIN SYS.DM_EXEC_CONNECTIONS C ON C.CONNECTION_ID = R.CONNECTION_ID
    CROSS APPLY SYS.DM_EXEC_SQL_TEXT(R.SQL_HANDLE) ST
    CROSS APPLY SYS.DM_EXEC_QUERY_PLAN(R.PLAN_HANDLE) QP
WHERE
    R.STATUS NOT IN ('BACKGROUND', 'SLEEPING')

The result:

image009

It returns one or more queries the SQL instance is currently working on. It’s actually pretty easy and very powerful. The first record is the sample entry we care about; the others are me interacting with SQL Management Studio. Scroll to the right and you’ll see both the execution plan and the actual query. Now how cool is that?!

image011

There we can see the query being executed (QUERY_TEXT):

CREATE PROCEDURE raacct_InsertSession(
    @Hostname NVARCHAR(256),
    @ClientIPv4Address BINARY(4),
    @ClientIPv6Address BINARY(16),
    @ClientISPAddressType SMALLINT,
    @ClientISPAddress VARBINARY(16),
    @ConnectionType TINYINT,
    @TransitionTechnology INT,
    @TunnelType INT,
    @SessionHandle BIGINT,
    @Username NVARCHAR(256),
    @SessionStartTime BIGINT,
    @AuthMethod INT,
    @HealthStatus INT) AS
BEGIN
    DECLARE @SessionId BIGINT
    DECLARE @ConnectionId BIGINT
    DECLARE @NumActiveSessions SMALLINT

    IF (@SessionHandle IS NULL OR @SessionHandle = 0)
    BEGIN
        -- error (BAD PARAMETER)
        RETURN (1)
    END

    IF (@SessionStartTime IS NULL OR @SessionStartTime = 0)
    BEGIN
        -- error (BAD PARAMETER)
        RETURN (1)
    END

    SELECT @SessionId = 0

    BEGIN TRANSACTION

    SELECT @SessionId = [SessionId]
    FROM [dbo].[SessionTable]
    WHERE @SessionHandle = [SessionHandle]
        AND @SessionStartTime = [SessionStartTime]

    IF (@@ROWCOUNT > 0)
    BEGIN
        -- error (session already exists)
        ROLLBACK TRANSACTION
        RETURN (2)
    END

    -- check if connection exists
    SELECT @ConnectionId = connTbl.[ConnectionId]
    FROM [dbo].[ConnectionTable] AS connTbl, [dbo].[SessionTable] AS sessTbl
    WHERE sessTbl.SessionState = 1
        AND connTbl.ConnectionId = sessTbl.ConnectionId
        AND connTbl.Hostname = @Hostname
        AND connTbl.ClientIPv4Address = @ClientIPv4Address
        AND connTbl.ClientIPv6Address = @ClientIPv6Address
        AND connTbl.ClientISPAddressType = @ClientISPAddressType
        AND connTbl.ClientISPAddress = @ClientISPAddress
        AND connTbl.ConnectionType = @ConnectionType
        AND connTbl.TransitionTechnology = @TransitionTechnology
        AND connTbl.TunnelType = @TunnelType

    IF @@ROWCOUNT = 0
    BEGIN
        -- create connection record
        INSERT INTO [dbo].[ConnectionTable] ([Hostname],
                [ClientIPv4Address],
                [ClientIPv6Address],
                [ClientISPAddressType],
                [ClientISPAddress],
                [ConnectionType],
                [TransitionTechnology],
                [TunnelType])
            VALUES (@Hostname,
                @ClientIPv4Address,
                @ClientIPv6Address,
                @ClientISPAddressType,
                @ClientISPAddress,
                @ConnectionType,
                @TransitionTechnology,
                @TunnelType)

        IF @@ERROR <> 0
        BEGIN
            -- error (failed to create connection), return from here
            ROLLBACK TRANSACTION
            RETURN (99)
        END

        SET @ConnectionId = @@IDENTITY
    END

    SELECT @NumActiveSessions = COUNT(SessionHandle)
    FROM [dbo].[SessionTable]
    WHERE [SessionState] = 1

    SET @NumActiveSessions = @NumActiveSessions + 1

    INSERT INTO [dbo].[SessionTable] ([ConnectionId],
            [SessionHandle],
            [Username],
            [SessionStartTime],
            [AuthMethod],
            [HealthStatus],
            [NumConcurrentConnections])
        VALUES (@ConnectionId,
            @SessionHandle,
            @Username,
            @SessionStartTime,
            @AuthMethod,
            @HealthStatus,
            @NumActiveSessions)

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (4)
    END

    COMMIT TRANSACTION
END

 

The only thing I can see, with my limited SQL knowledge, is that performance hits might occur on these WHERE clauses:

 

FROM [dbo].[SessionTable]

    WHERE    @SessionHandle = [SessionHandle]

        AND @SessionStartTime = [SessionStartTime]

And

FROM [dbo].[ConnectionTable] AS connTbl, [dbo].[SessionTable] AS sessTbl

    WHERE sessTbl.SessionState = 1

      AND connTbl.ConnectionId = sessTbl.ConnectionId

      AND connTbl.Hostname = @Hostname

      AND connTbl.ClientIPv4Address = @ClientIPv4Address

      AND connTbl.ClientIPv6Address = @ClientIPv6Address

      AND connTbl.ClientISPAddressType = @ClientISPAddressType

      AND connTbl.ClientISPAddress = @ClientISPAddress

      AND connTbl.ConnectionType = @ConnectionType

      AND connTbl.TransitionTechnology = @TransitionTechnology

      AND connTbl.TunnelType = @TunnelType

The WHERE clauses act as filters, and the columns they use are often indexed. Without an index, SQL Server has to scan the complete table looking for the records. On smaller tables that’s not an issue, but the SessionTable table contains 14.482.972 records!

image013

So if we check the indexes for that table, one would hope to find SessionHandle, SessionStartTime and SessionState:

image015
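If you prefer querying over clicking through Object Explorer, the same information can be pulled from the standard catalog views; a rough sketch (again via Invoke-Sqlcmd against the WID pipe, nothing DA-specific):

# List index names and their key columns for dbo.SessionTable
$query = @"
USE RaAcctDb;
SELECT i.name AS IndexName, c.name AS ColumnName, ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.SessionTable')
ORDER BY i.name, ic.key_ordinal;
"@
Invoke-Sqlcmd -ServerInstance "np:\\.\pipe\MICROSOFT##WID\tsql\query" -Query $query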

The last one, UQ_SessionT…, seems to have both SessionHandle and SessionStartTime in it, so I guess that should satisfy the first WHERE clause:

image017

Now what about SessionState? I can’t seem to find that one… Back to our query that showed us the statement being executed: there’s also an XML_QUERY_PLAN column, and it’s clickable in Management Studio:

image019

See how this query cost shows 50%? Further down there’s another query that shows the other 50%. Both show “missing index”:

image021

As previously stated, I’m not an experienced SQL engineer/DBA. I try to cross-check stuff I find online before applying it. I also wouldn’t do this kind of thing on a FIM Service or SCCM database; those are pretty complex databases. But I made a personal assessment, and the Direct Access accounting database seems simple enough to tinker with. So I decided to give it a try and create the index. Undoing this is pretty straightforward, so I guess there’s no real harm in going forward. Right-click one of the existing indexes and choose Script Index as > CREATE To > New Query Editor Window.

image023

Simply change both the index name and the column to SessionState and execute the query. After refreshing the UI you can see the index:
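For reference, the statement you end up with should look roughly like the sketch below. The index name is my own pick and the exact options SSMS scripts out may differ, so adapt it to match how your existing indexes are defined:

# Create the missing index on SessionState over the WID pipe (index name is hypothetical)
$createIndex = @"
USE RaAcctDb;
CREATE NONCLUSTERED INDEX [IX_SessionTable_SessionState]
    ON [dbo].[SessionTable] ([SessionState] ASC);
"@
Invoke-Sqlcmd -ServerInstance "np:\\.\pipe\MICROSOFT##WID\tsql\query" -Query $createIndex

# Undoing it later is just as simple:
# DROP INDEX [IX_SessionTable_SessionState] ON [dbo].[SessionTable];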

image025

And there goes the CPU usage:

image027

Conclusion: to me it looks like the DA team simply forgot this particular index. From the other indexes you can tell they actually put some thought into this. I’m not really sure why we didn’t just log a case with Microsoft; partially, I guess, because we were afraid we’d get the answer “by design with that amount of auditing data”. But after this troubleshooting session we can clearly see there’s a shortcoming in the SQL database setup. As with most stuff you read on the internet: be careful when applying it in your environment. If you do not know what commands or queries you’re executing, look them up and do some background reading first.

Protected Users Group


Earlier this week I’ve been talking to a customer about the “Protected Users” group. You might have seen it appearing when introducing the first 2012 R2 domain controller. Here’s a good explanation on its purpose:

Protected Users is a new global security group to which you can add new or existing users. Windows 8.1 devices and Windows Server 2012 R2 hosts have special behavior with members of this group to provide better protection against credential theft. For a member of the group, a Windows 8.1 device or a Windows Server 2012 R2 host does not cache credentials that are not supported for Protected Users. Members of this group have no additional protection if they are logged on to a device that runs a version of Windows earlier than Windows 8.1. Source: TechNet: How to Configure Protected Accounts

The above is actually a bit misleading: the functionality was backported to Windows 2008 R2/Windows 2012 in hotfix KB2871997. See blogs.technet.com: An Overview of KB2871997 for an explanation.

This group might be part of your organization’s strategy to reduce the attack surface for pass the hash. A great white paper on this can be found here: Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft, Version 1 and 2

One of the things the Protected Users group ensures is that no NTLM hashes are available to be used or stolen. Now I wanted to see this for myself. There are various tools out there that are capable of listing the various secrets. I tried Windows Credential Editor (WCE), but that one didn’t work on (my) Windows 2012 R2, so I used Mimikatz. My setup: a 2012 R2 domain controller and a 2012 R2 member server. I’ve got three domain admins: one has the remote desktop session open to the member server, and two have a PowerShell running through runas. Of the latter two, one is a member of the Protected Users group:

Run as different user: SETSPN\john

image

Run as different user: SETSPN\thomas

image

As you can see, John is an old-school Domain Admin, whereas Thomas has read the Mitigating PtH whitepaper and is a proud member of the Protected Users group. This is the PowerShell one-liner I used to dump the groups I care about: WHOAMI /GROUPS /FO CSV | ConvertFrom-Csv | where {$_."group name" -like "Setspn\*"}

Here you can see the Protected Users admin has no NTLM available:

image

Where the regular admin has NTLM available:

image

Here’s the difference from an attacker’s point of view:

Start Mimikatz, run privilege::debug and then sekurlsa::logonpasswords. And here are the goodies:

John:

Authentication Id : 0 ; 3529276 (00000000:0035da3c)
Session           : Interactive from 0
User Name         : john
Domain            : SETSPN
Logon Server      : SRVDC01
Logon Time        : 2/24/2016 6:59:54 PM
SID               : S-1-5-21-4274776166-1111691548-620639307-5603
        msv :
         [00000003] Primary
         * Username : john
         * Domain   : SETSPN
         * NTLM     : 59884edfb057d0fec8cb7e0d571dc200
         * SHA1     : 7e655db2b3a7e88fb0c50ca56416ae655469f09e
         [00010000] CredentialKeys
         * NTLM     : 59884edfb057d0fec8cb7e0d571dc200
         * SHA1     : 7e655db2b3a7e88fb0c50ca56416ae655469f09e
        tspkg :
        wdigest :
         * Username : john
         * Domain   : SETSPN
         * Password : (null)
        kerberos :
         * Username : john
         * Domain   : SETSPN.LOCAL
         * Password : (null)
        ssp :
        credman :

Thomas:

Authentication Id : 0 ; 3493146 (00000000:00354d1a)
Session           : Interactive from 0
User Name         : thomas
Domain            : SETSPN
Logon Server      : SRVDC01
Logon Time        : 2/24/2016 6:59:36 PM
SID               : S-1-5-21-4274776166-1111691548-620639307-5602
        msv :
         [00010000] CredentialKeys
         * RootKey  : db1c2347608db0c4e2d89bbd6c328bf6f42671b7d88653cd4cc9af2713
e958f0
         * DPAPI    : 63adfe49948fca81c885933b3aa23eba
        tspkg :
        wdigest :
         * Username : thomas
         * Domain   : SETSPN
         * Password : (null)
        kerberos :
         * Username : thomas
         * Domain   : SETSPN.LOCAL
         * Password : (null)
        ssp :
        credman :

As you can see, the admin that’s a member of the Protected Users group does NOT have his NTLM hashes dumped. Wooptiedoo! Now think and test before you start adding the Domain Admins group to the Protected Users group; by no means should you just go ahead and do that! Here’s some good information on how to start with the Protected Users group and some additional caveats: How to Configure Protected Accounts
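If you want to pilot this with a single test account first, the RSAT ActiveDirectory module makes that easy. The account name thomas below is the lab account from my setup; substitute your own:

# Add a single pilot account to Protected Users and verify the membership
Import-Module ActiveDirectory
Add-ADGroupMember -Identity "Protected Users" -Members thomas
Get-ADGroupMember -Identity "Protected Users" | Select-Object SamAccountName
# Keep in mind the protection only applies to logons performed after the group change.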

Here’s one from my side: after adding my admin user to the Protected Users group, he was no longer able to RDP to a 2012 R2 member server:

image 

In words: A user account restriction (for example, a time-of-day restriction) is preventing you from logging on. For assistance, contact your system administrator or technical support.

Remote Desktop to a Windows 2008 R2 server worked fine with that account. It seems that for my Protected Users admin to be able to log on to a Windows 2012 R2 server, I actually had to use mstsc.exe /restrictedadmin and enable Restricted Admin mode on the member server:

image

You can find that value below HKLM\SYSTEM\CurrentControlSet\Control\Lsa
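If you want to script that change rather than edit the registry by hand, here’s a hedged sketch; I’m assuming the value in the screenshot is the documented DisableRestrictedAdmin DWORD (0 allows Restricted Admin connections):

# Run on the member server: allow Restricted Admin RDP connections
# Assumption: the relevant value is DisableRestrictedAdmin; 0 = allowed, 1/absent = not allowed
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
    -Name "DisableRestrictedAdmin" -PropertyType DWORD -Value 0 -Force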

If you want to know more about the Protected Users group and the Restricted Admin feature, read up on both of them here: TechNet: Credentials Protection and Management or digital-forensics.sans.org: Protecting Privileged Domain Accounts: Restricted Admin and Protected Users

Some additional reading on Restricted Admin mode: Restricted Admin mode for RDP in Windows 8.1 / 2012 R2

IDX10311: RequireNonce is 'true' (default) but validationContext.Nonce is null


I’ve been educating myself on the capabilities of OpenID Connect/OAuth in Server 2016. The version I’m currently playing with is based on TP5. I created a small application which consists of a web application and an API. Just for educational purposes. The actual application can be found here: https://github.com/tvuylsteke/TodoListWeb

When I started testing my application I ran into an issue. I would visit my application, hit the sign in button and be redirected to AD FS. I would either enter my credentials or be authenticated transparently and then be redirected to my application. That’s where things went wrong. I always seemed to get this error:

error

In words: We're having trouble signing you in.

IDX10311: RequireNonce is 'true' (default) but validationContext.Nonce is null. A nonce cannot be validated. If you don't need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to 'false'.;

Some online searching led me to a few threads but no really good suggestions. I also found a session from Build 2015: Cloud Authentication Troubleshooting and Recipes for Developers. They mention that IDX10311 typically happens when you don’t receive an expected cookie from the browser. Likely cause: your reply URL is sending the browser somewhere different from where you started. I double-checked everything, but that didn’t seem to be the cause.

Then I found out that using Chrome everything was working as expected. Still I had no real clue. I posted my issue to an internal DL and one of my colleagues quickly spotted the problem in the Fiddler traces I provided: the OpenIdConnect.nonce.OpenIdConnect cookie was not being set correctly for the todolistweb.contoso.com application in IE. And when I took my traces I could indeed see this:

A trace from Internet Explorer:

You can see the response from AD FS and then the browser going back to the application without any cookies:

IE1

IE2

Now if we compare that to a session from within Chrome:

Chrome1

You can clearly see the OpenIDConnect.nonce cookie

Chrome2

As a solution to this issue I added my application to the Local Intranet Zone in IE and that resulted in the cookie being sent to the application. Mystery solved!
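If you want to push that zone assignment out rather than clicking through IE’s settings, it can be scripted via the ZoneMap registry keys. This is a hedged sketch for the todolistweb.contoso.com host used above (zone 1 = Local intranet); note that a GPO-managed site-to-zone list would take precedence over per-user settings:

# Add https://todolistweb.contoso.com to the Local intranet zone (1) for the current user
$path = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\contoso.com\todolistweb"
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "https" -PropertyType DWORD -Value 1 -Force | Out-Null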

Domain controller: LDAP server signing requirements and Simple Binds


Lately I’ve been wondering about the impact of the following setting: Domain controller: LDAP server signing requirements. The documentation (TechNet #1 and TechNet #2 ) spells it out pretty well: This policy setting determines whether the Lightweight Directory Access Protocol (LDAP) server requires LDAP clients to negotiate data signing. You can set it to either None or Required. None is the default and allows signing if the client asks for it.

Sometimes when I read information I read too fast and draw my conclusions. Shame on me. Wrong conclusion from my side: configuring this setting to Required requires all connections to use LDAPS (TCP 636). Nope. It says data signing! Signing can perfectly well be done for traffic targeted at either LDAP (TCP 389) or LDAPS (TCP 636).

From AskDS: Understanding LDAP Security Processing I learned various things about simple binds. Simple binds send your username and password in clear text. Needless to say, in combination with plain LDAP you’re at risk. On the other hand, if the communication uses LDAPS, sending passwords in clear text could be acceptable.

Now the documentation I referenced earlier is a bit conflicting on this topic:

  • This setting does not have any impact on LDAP simple bind or LDAP simple bind through SSL.
  • If signing is required, then LDAP simple bind and LDAP simple bind through SSL requests are rejected.
  • Require signature. The LDAP data-signing option must be negotiated unless Transport Layer Security/Secure Sockets Layer (TLS/SSL) is in use.

Now it might be just me, but I would phrase that differently. Both articles suffer from the same wording. So, like with any other uncertainty, we just test it. Once you’ve seen and experienced it, you’ll never forget!

This is part of the Default Domain Controller Policy on Windows Server 2012 R2:

image

I changed it to:

image

Now using LDP.exe we can do some tests:

Connecting over LDAPS:

image

Performing a simple bind:

image

And the result:

image

Now if we try to connect over LDAP:

image

Bind like before. But now we get:

image

In words: Error <8>: ldap_simple_bind_s() failed: Strong Authentication Required
Server error: 00002028: LdapErr: DSID-0C090202, comment: The server requires binds to turn on integrity checking if SSL\TLS are not already active on the connection, data 0, v2580
Error 0x2028 A more secure authentication method is required for this server.
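If you want to reproduce the same behaviour outside LDP.exe, a rough test from PowerShell using System.DirectoryServices.Protocols could look like this. The DC name srvdc01.setspn.local is a placeholder based on my lab, and for the LDAPS part I’m assuming the DC has a certificate your client trusts:

# Prompt for the account to use for the simple bind (e.g. SETSPN\someuser)
Add-Type -AssemblyName System.DirectoryServices.Protocols
$cred = Get-Credential

# Simple bind over plain LDAP (TCP 389) - expected to fail with "Strong Authentication Required"
$ldap = New-Object System.DirectoryServices.Protocols.LdapConnection("srvdc01.setspn.local:389")
$ldap.SessionOptions.ProtocolVersion = 3
$ldap.AuthType = [System.DirectoryServices.Protocols.AuthType]::Basic
try { $ldap.Bind($cred.GetNetworkCredential()); "LDAP 389 simple bind succeeded" }
catch { "LDAP 389 simple bind failed: $($_.Exception.Message)" }

# Simple bind over LDAPS (TCP 636) - expected to succeed
$ldaps = New-Object System.DirectoryServices.Protocols.LdapConnection("srvdc01.setspn.local:636")
$ldaps.SessionOptions.SecureSocketLayer = $true
$ldaps.SessionOptions.ProtocolVersion = 3
$ldaps.AuthType = [System.DirectoryServices.Protocols.AuthType]::Basic
try { $ldaps.Bind($cred.GetNetworkCredential()); "LDAPS 636 simple bind succeeded" }
catch { "LDAPS 636 simple bind failed: $($_.Exception.Message)" }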

Conclusion:

All of this is definitely not new, but writing about it helps me never forget it. Setting the LDAP server signing requirement to Required will probably require some planning and testing, but it doesn’t mean you can’t use simple binds, as long as you can configure your application to use LDAPS. Your domain controllers should already be logging a warning event every once in a while when simple binds or unsigned LDAP traffic are seen. Here’s some more info on that event: Event ID 2887 — LDAP signing.

If you want to read more on LDAP signing, please check KB935834: How to enable LDAP signing in Windows Server 2008

