
Control The Amount Of Cached Logons


One of my colleagues had a project where they thought it would be a good idea to restrict the number of cached logons to 1. This would ensure only the last logged-on user (the owner) would be able to use the computer off the network. This way the credentials of privileged users such as helpdesk employees wouldn’t be cached.

However, when they set the value to 1 it seemed like even the user himself couldn’t log on any more. Here is some explanation: http://blogs.technet.com/b/instan/archive/2011/12/06/cached-logons-and-cachedlogonscount.aspx

Basically there are other processes filling up cached logon slots as well, and it’s hard to take these into account. So I’d say the setting was never meant to be THAT restrictive.
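For reference, the value behind this policy is CachedLogonsCount (the GPO setting “Interactive logon: Number of previous logons to cache”). Here’s a quick PowerShell sketch to inspect and adjust it, assuming an elevated prompt on the client:

# Winlogon key holding the cached logons setting
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'

# Read the current value (a REG_SZ; typically 10 by default)
(Get-ItemProperty -Path $key -Name CachedLogonsCount).CachedLogonsCount

# Set it to something less restrictive than 1, e.g. 4
Set-ItemProperty -Path $key -Name CachedLogonsCount -Value '4'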


Exchange 2010 Autodiscover/Outlook Anywhere Knowledge Bits


A while ago I had to publish Exchange 2010 services across TMG 2010. Everything below is more or less from the TMG administrator’s point of view. All of the tests were done externally (across TMG).

Also important to note: my customer had an e-mail domain which was different from the Active Directory UPN.

I believe this information is important, as a lot of the Autodiscover GUI pieces only ask for an e-mail address and a password; there’s no room to provide the username. Before diving into Autodiscover & Outlook Anywhere I’m going to go over the basics of publishing Exchange services across TMG.

For starters, in a typical scenario where all Exchange services are published there are 3 publishing rules. Each is listed below with the default paths as the TMG wizards configure them.

1. Outlook Web Access (OWA)

image

2. Outlook Anywhere (with support for Outlook 2007 clients):

image

3. Active Sync

image

Each of these paths points to a vdir on the Exchange CAS server. Here’s a screenshot of all available vdirs:

image

TMG Concepts: The Listener

Now obviously, when you are publishing applications some authentication has to come into play. On the internet side of things there are one or more listeners. A listener determines which kind of authentication method the user on the internet is challenged with. For instance there is Forms Based Authentication, where the user is presented with a nice TMG-logo’d form and is asked to provide credentials. So to summarize, a listener is the way for TMG to capture the end user’s credentials in a given form.

TMG Concepts: The Publishing Rule

On the other hand we have our applications on the intranet, which require credentials to be presented with each request in order to show the appropriate content. This behavior is determined in the publishing rule of each service. One of the items we configure in the publishing rule is the authentication delegation: we have to specify the way TMG can use the credentials it gathered on the listener to authenticate to the service in the backend.

And this is where troubles can begin. You always have to match the authentication delegation settings with the authentication protocols enabled on the vdirs. For instance, if you configure the OWA website in Exchange to be available with basic authentication, there’s no point in configuring TMG to publish it with NTLM or Kerberos. The “Test Rule” button in TMG will show you this right away. I often encounter the mismatch where the Exchange admin configured forms-based authentication on Exchange for OWA. TMG can’t handle this, unless you allow unauthenticated traffic to the Exchange server… Now if you are an IIS-savvy admin you could start tweaking IIS right away from the IIS management console. I would advise against this. All of the IIS configuration for Exchange-related web services can be performed through the Exchange Management Shell or Exchange Management Console. Here’s a screenshot for OWA from within the EMC.

Authentication Protocols for OWA

image
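If you prefer the shell over the GUI, something along these lines should align the OWA vdir with a TMG rule that delegates basic credentials (just a sketch; the vdir identity “CAS01\owa (Default Web Site)” is an example):

# Enable basic authentication and disable forms-based authentication on the OWA vdir
Set-OwaVirtualDirectory -Identity "CAS01\owa (Default Web Site)" -BasicAuthentication $true -FormsAuthentication $false

# Verify the result
Get-OwaVirtualDirectory -Identity "CAS01\owa (Default Web Site)" | Format-List *authentication*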

And finally I get to the subject of this post, how to test if your Autodiscover configuration is running ok. I myself see 4 possibilities:

1. Microsoft Remote Connectivity Analyzer (recommended)

image

This tool will test the Autodiscover functionality and will provide you with the returned XML information. Very complete. It also allows you to disable the SSL verification check, which can be convenient if you are messing with non-commercial certificates. And it allows you to specify the user account in the Domain\Username format, which is convenient if your e-mail domain differs from your AD UPN.

2. New Mail Profile using the control panel

Make sure Outlook is closed and then go to the control panel.

image

You can easily add a new profile and configure an Exchange Account:

image

The great part of this wizard is that it’s able to prompt for additional credentials. In the wizard itself you can only provide the e-mail address and the password, but TMG will not accept this: TMG is not able to find a user in its domain whose UPN matches this e-mail address. So to be more precise, the password entered there doesn’t matter; you will be prompted anyway.

3. New Mail Profile from within Outlook

image

You’ll be presented with the same wizard as in option 2. The disadvantage is that you’ll have to close Outlook before the mailbox can actually be accessed.

4. Test E-mail Autoconfiguration

You can launch the Test E-mail Autoconfiguration wizard by holding Ctrl and right-clicking the Outlook icon in the tray.

image

If you launch the wizard you’ll be presented with the following screen:

image

Now again you’re only asked for the e-mail address and the password. But unlike the new profile wizards presented above, this will NOT prompt for your account name in another format. Somehow it seems to reuse the sessions Outlook has already established as an application.

So for instance:

  1. Start Outlook
  2. Add a new account from within Outlook
  3. Be prompted for credentials
  4. Do NOT close Outlook when prompted
  5. Start the e-mail auto configuration wizard
  6. The e-mail auto configuration wizard succeeds

On the other hand:

  1. Start Outlook, don’t configure a profile or connect to an Exchange organization
  2. Start the e-mail auto configuration wizard
  3. The e-mail auto configuration wizard will not succeed, as there are no credentials to be reused and no prompt appears!

In step 3 this is the error you will receive:

image

And on the log tab:

image

The error in words: Autodiscover to https://…/autodiscover/autodiscover.xml Failed (0x800C8203) or (0x80070057)

Test E-mail Auto Configuration Fact #1: it does not prompt for authentication

Besides that, suppose you’ve got a connection to your mailbox using RPC over HTTPS. Try the E-mail Auto Configuration test and provide the e-mail address of a colleague, filling in a dummy password. Yup, you still receive the information.

Test E-mail Auto Configuration Fact #2: you don’t have to provide the credentials of the mailbox you are querying for, as long as you are authenticated.

I just thought I’d share this together with the error code, as it might lead you to think there are problems with your configuration whilst in fact everything is tip-top.

Happy Discovering!

Quick Tip: Win 8 Quick Launch To Admin Tools


A colleague of mine taught me this neat shortcut today: pressing Windows key + X on the Windows 8 Consumer Preview gets you a small menu with a lot of frequently used MMCs. See for yourself:

image

I especially like the “Command Prompt (Admin)” and “Network Connections” options. I’ve been using “ncpa.cpl” as a shortcut for network connections ever since I started working with Windows 2008. But this might be even faster/easier.

Service Accounts: Active Directory Permissions Issues: Part #1 SharePoint


Currently I’m involved in a project where we are setting up a lot of Windows technologies, just to name a few: Dynamics Ax 2012, BizTalk, SharePoint 2010, FIM 2010 and thus SharePoint Foundation 2010. It seems some legacy thingy in Active Directory is biting us in the ass: for both SharePoint (full blown and Foundation) and Dynamics Ax we’ve had to modify the permissions on the service accounts used by those products.

So here’s the first occurrence I came across. It happened when I was running the initial configuration wizard of SharePoint Foundation 2010. This SharePoint instance will host the FIM 2010 R2 Portal in the near future. This wizard is to be executed after installing the bits & bytes of SharePoint 2010.

clip_image002

What it says in words: Failed to create the configuration database. An exception of type System.Collections.Generic.KeyNotFoundException was thrown. Additional exception information: The given key was not present in the dictionary.

One thing to look into would be the permissions on the SQL Server instance, but that was all in vain. In the end I fired up google and came across numerous posts like these:

The solution is pretty simple to implement, but I’m still struggling with whether there’s a “nicer” way which does not involve touching every service account or the OU in which they reside. Using Active Directory Users and Computers, or the new Active Directory Administrative Center, we can modify the permissions on the involved service accounts. The ones involved are the ones being used by SharePoint to run various services and application pools, or as the farm admin. In order to view the security tab on a given user you might have to enable the advanced features in the view options of the ADUC MMC.

image

Once you’ve got your service account open, just check “Allow Read” for “Authenticated Users”.
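If you’d rather script this than click through ADUC, dsacls can set the same permission from a command prompt (a sketch; the DN is made up, GR stands for generic read):

# Grant Authenticated Users generic read on a single service account
dsacls "CN=s_sp_farm,OU=Service Accounts,DC=contoso,DC=com" /G "Authenticated Users":GR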

This exact same error also popped up when registering a new Managed Service Account within the Central Administration site:

image

If you’re interested in a more definite solution which does not involve modifying the security of all your service accounts, make sure to read Service Accounts: Active Directory Permissions Issues: Part #4 Conclusion.

Service Accounts: Active Directory Permissions Issues: Part #2 Dynamics Ax 2012


The solution “grant Authenticated Users Read permissions on the involved service accounts” can also be applied during installations of Dynamics Ax 2012. In fact we came across two instances of this problem. When installing the bits for the Enterprise Portal extensions there’s an option to prepare SharePoint for the deployment of the Ax 2012 Portal. This setup failed with the error “The given key was not present in the dictionary.” This obviously sounds very much like my post Service Accounts: Active Directory Permissions Issues: Part #1. And indeed this change allowed the setup to end gracefully.

But we also came across another issue which does not seem so related at first sight. When trying to configure the Business Connector proxy account in the Ax console we received the following error when clicking OK:

image

In words: Infolog (1). One or more critical STOP errors have occurred. Use the error messages below to guide you or call your administrator. The alias/network domain entered for the Business Connector proxy is not valid.

I don’t know why, but somehow this felt like the same issue as before. And indeed: granting “Authenticated Users” “Read” access on the BCP account allowed us to configure it as the BCP user.

If you’re interested in a more definite solution which does not involve modifying the security of all your service accounts, make sure to read Service Accounts: Active Directory Permissions Issues: Part #4 Conclusion.

Service Accounts: Active Directory Permissions Issues: Part #3 SQL 2008 R2


And yep, there are more instances of this phenomenon! I also came across the following when installing an Active Directory Federation Services farm which uses SQL to store its configuration. Whilst there was no noticeable impact (yet), I saw the SQL logs being filled with the following warnings:

clip_image002

In words: The activated proc '[IdentityServerPolicy].[SqlQueryNotificationStoredProcedure-616f6b36-c503-4503-a6cd-7e067a1b9e43]' running on queue 'AdfsConfiguration.IdentityServerPolicy.SqlQueryNotificationService-616f6b36-c503-4503-a6cd-7e067a1b9e43' output the following:  'Could not obtain information about Windows NT group/user '***\s_****_adfs', error code 0x5.'

And a slightly different one:

clip_image002[5]

In words: An exception occurred while enqueueing a message in the target queue. Error: 15404, State: 19. Could not obtain information about Windows NT group/user '***\s_****_adfs', error code 0x5.

Error: 28005, Severity: 16, State: 2.

The solution is to give “Authenticated Users” “Read” permissions on the ADFS service account. An easy way to test this solution is to execute the following query:

image

The query xp_logininfo ‘Domain\service account’ will return something like this if things go well:

clip_image002[9]

Or like this if the SQL Server service lacks the mentioned permissions:

clip_image002[7]
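You can also fire that same check from a client using sqlcmd (a sketch; the instance and account names are made up):

# Ask SQL to resolve the service account against AD; error code 0x5 means access denied
sqlcmd -S SQL01\ADFS -E -Q "EXEC xp_logininfo 'CONTOSO\s_adfs'"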

If you’re interested in a more definite solution which does not involve modifying the security of all your service accounts, make sure to read Service Accounts: Active Directory Permissions Issues: Part #4 Conclusion.

Service Accounts: Active Directory Permissions Issues: Part #4 Conclusion


In the last three posts I came to the conclusion that giving Authenticated Users Read permissions seems to solve some issues. However, providing the full “Read” permission might be a bit blunt. I was wondering what property it is that the default permissions don’t cover.

I didn’t see the link at first, but suddenly all puzzle pieces fell together. When trying to find a solution for Part 3 of my AD Service Account Permissions Issues I came across these posts which provide an alternative solution:

They say to add the service account, which requires a higher level of read access on the involved service accounts, to the built-in group “Windows Authorization Access Group”.

When this group hasn’t been moved you can find it in the Builtin container:

image_thumb[1]

And the group we are discussing:

image_thumb[3]

Now how does this help with our case? When adding users to this group they are granted read access to the tokenGroupsGlobalAndUniversal attribute on all users. And this seems to be the exact permission we were looking for! Instead of granting Authenticated Users full Read, it would be sufficient to grant them Read on the tokenGroupsGlobalAndUniversal attribute. But then again, that would be a lot of work compared to just adding the accounts to the built-in group.
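Adding an account to that group is a one-liner with the Active Directory PowerShell module (a sketch, assuming RSAT is available; the account name is made up):

Import-Module ActiveDirectory

# Grants the SQL service account read access to tokenGroupsGlobalAndUniversal on all users
Add-ADGroupMember -Identity "Windows Authorization Access Group" -Members "s_sql_adfs"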

After some more research, the “Pre-Windows 2000 Compatible Access” group also seems tightly coupled to these permissions. My guess is that choices made in the past (during the dcpromo) and manual modifications to either of these groups determine whether or not you are seeing these kinds of issues. Here are the members of these groups after a Windows 2008 R2 server has been promoted into a new domain:

  • Pre-Windows 2000 Compatible Access: only Authenticated Users is a member
  • Windows Authorization Access Group: only Enterprise Domain Controllers is a member

In my domain neither of these groups had “Authenticated Users” in them, so that’s why adding the service accounts made sense. My guess is that by far the easiest workaround would be to add Authenticated Users to the Pre-Windows 2000 Compatible Access group. After all, in a new 2008 R2 domain this is done for you, so this would mimic a standard domain installed from 2008 R2 media. I would therefore conclude that this isn’t against best practices and that no security holes are being created. Do you agree?

Some more technical background regarding this attribute: KB331951: Some applications and APIs require access to authorization information on account objects

GPOtool Sysvol Mismatch


Recently some colleagues of mine logged a case because a GPO which had worked fine before didn’t seem to work anymore. In the GPO the security of C: was redefined. In the end the root cause of that problem was McAfee, but whilst troubleshooting with Microsoft we noticed SYSVOL mismatch errors in one of the generated log files.

When troubleshooting GPOs, a utility called GPOtool.exe is often used. This tool is available in the Windows 2003 resource kit tools, and I’ve never seen a newer version released since. I always assumed it just worked with Windows 2008 or 2008 R2 domain controllers. So when we got the following errors, we assumed we had a problem with some of our GPOs:

clip_image002

In words: Error: sysvol mismatch. At first sight the versions in the output seem identical. Also, when verifying using ADSIedit, the GPO Management Console and the SYSVOL share, all GPO-related versions seemed to be correct. One thing we noticed though: only GPOs which were exported and imported into a newer GPO seemed to be mentioned, although I see no reason for that to cause versioning issues. After some googling I came across this: http://kb.elmahdy.net/2011/02/gpotool-for-windows-server-2008-r2.html

So it seems Microsoft (internally) has a more recent build of GPOtool.exe which plays nicer with Windows 2008 R2 domain controllers. I am by no means responsible for the tool provided on that blog, but I tested it in my environment and it worked fine. The exe seems to be signed by Microsoft, so I would assume it’s safe. To conclude, the correct output:

clip_image002[5]

And some GPOtool.exe version information (gpotool1 is the old one):

clip_image002[7]

P.S. It is my understanding that Windows Server 2012 will have a GPMC with enhanced capabilities regarding GPO health. Way to go!


Dynamics Ax 2012: Error Installing Enterprise Portal


I was assisting a colleague who was installing the Ax 2012 Enterprise Portal on a SharePoint farm. The farm consisted of 2 servers hosting the SharePoint web applications (the actual sites) and 2 servers hosting the central admin and application services roles. We wanted to start by installing the Enterprise Portal bits on both web front-end servers without actually choosing the “create site” option in the installer. This would just prep the servers; we’d then finalize by running the “create site” option on the central admin server.

Here’s the screenshot where we selected the Enterprise Portal (and some other prereqs for the Portal):

image

A few steps further we were supposed to get a dropdown with an overview of all sites hosted by SharePoint. Although SharePoint was installed and we had multiple sites created, we were greeted with an error stating “Microsoft SharePoint 2010 is not installed or running. Please run the prerequisite utility for more information. Operation is not valid due to the current state of the object.”

image

Going back and clicking next again doesn’t really solve the problem. The installer log file (which is located in \Program Files\Microsoft Dynamics AX\60\Setup Logs\[Date]) showed us that the installer seemed to query the local IIS configuration just fine. As far as I could tell no actual error was given, but the logging stopped shortly after the installer tried to get information regarding the first actual SharePoint site.

 

image

After staring a bit at the log, my eye fell on the “GetFirstHostHeaderOfWebSite” method. It seemed to have run fine for the default website, but it wasn’t executed for the first actual SharePoint site. That rang a bell, as we had customized this a bit: we had in fact 3 host headers for each SharePoint site. One for the virtual name, one for the virtual name but dedicated to the node, and one which was just blank with the IP. I know the last one more or less makes the others unnecessary, but we added it later on when we figured out our hardware load balancer’s status probing wasn’t playing nice with the host headers.

image

Long story short: after modifying ALL sites found in the IIS configuration so that they’d have one host header or none, the setup was able to enumerate the potential sites to configure the Ax Enterprise Portal on just fine. A bit weird, and it seems like a bug in the installer to me…

Windows OS About To Stop Support For RSA Keys Under 1024 Bits


One of my colleagues was having trouble accessing an HTTPS site. The site is secured with a certificate coming from an Active Directory Certificate Authority. Now I know of a bug where, if you have a pinned website on your taskbar and from that browser instance you open an HTTPS site with an untrusted certificate, there’s no “continue anyway” button…

Now that wasn’t the case today. He had the “continue anyway” option, which you typically click to load the site and check the certificate. However, after clicking, it didn’t go through; it just remained on the same page. We installed the root CA certificate manually in the trusted root authorities, but still no improvement. When verifying the root certificate in the MMC we also saw it mentioned that the digital signature was invalid… odd!

Using that as a query for google we quickly came across this:

If you read those first two carefully you’ll see the update will be released as a critical non-security update on August 14th for Windows XP, Windows Server 2003, Windows Server 2003 R2, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2.

An example of a bad certificate:

image

Now how come he was having this issue already?! Aha, here comes the clue: he was using Windows 8! Now I am too, and I’m not having that problem with that specific site, but here’s the difference:

  • Windows 8 with issue: Windows 8 Release Preview: build 8400
  • Windows 8 without issue: Windows 8 Consumer Preview: build 8250

So it seems they’ve included this update somewhere in the build process of Windows 8.

Having certificates with an RSA key < 1024 bits is probably not the case for most of us, but be sure to double-check those certificates and their (intermediate) roots! Especially for those customer-facing sites where you can’t control what updates hit the clients, which thus potentially might be denied access to your sites.
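A quick PowerShell sweep of the machine stores for keys under 1024 bits could look like this (a sketch; run it elevated to see the machine stores):

# List certificates in the machine Root and intermediate CA stores with a key < 1024 bits
Get-ChildItem -Path Cert:\LocalMachine\Root, Cert:\LocalMachine\CA |
    Where-Object { $_.PublicKey.Key.KeySize -lt 1024 } |
    Select-Object Subject, NotAfter, @{n='KeySize';e={$_.PublicKey.Key.KeySize}}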

Solaris OpenWindows, Xming and Windows 7 Stealth Mode Firewall


First off, this is a very very very specific issue which I think not many people will run into. But as I found some forum posts here and there which look like the same issue, I thought I’d post it nevertheless.

A while ago I was troubleshooting a situation where we seemed to experience some kind of delay in an application startup. More specifically: whenever we used Xming to connect to an OpenWindows desktop session on a Solaris server, we were seeing a delay of about 3 minutes before actually getting the desktop. This delay was only seen when connecting to this specific Solaris server; other servers did not pose this problem.

Very soon we found out that if we set all Windows Firewall profiles to “off” on the Windows 7 client, we didn’t see the issue. Now one could think we needed to open some specific ports. We thought we had them all covered, but still no luck. In the end we left the firewall on, but for both inbound and outbound we built rules which were supposed to allow all traffic. The so-called any-any rules ;) And we were still seeing the issue. Now what is that?!

So in comes the tracing...

Below is an excerpt from a trace on the client to the server when the Windows Firewall is ON:

clip_image001

In this trace I’ve filtered out all traffic other than the traffic to port 2000, so in the background there’s more (client-server) traffic, such as regular X11 traffic. What we are seeing here is that the server (.10) is trying to reach the client (.227) on port 2000. The client does not respond to these queries. After +- 3 minutes the server continues the X11 traffic and the user gets his desktop. Now this is quite odd... It’s the client which is contacting the server. Why is the server initiating traffic to the client?!

If we compare this with the trace to the server (.10) when the Windows Firewall is OFF:

clip_image002

Here we clearly see that the same traffic is sent by the server (.10), but the client (.227) immediately answers with an RST, ACK response, basically telling the server that there’s nothing there. After this entry the server/client communication continues and the user gets his desktop more or less instantly.

In the Solaris configuration there must be an option which makes the server poll the client on port 2000 when launching an OpenWindows desktop from that client. The problem lies in the fact that the server waits for an answer, or times out after about 3 minutes, before it decides communication can go on. To be precise: on the client there’s nothing listening on port 2000. In a situation without a firewall, the client would answer with an RST stating that nothing is there and that communication can continue. The Windows firewall however works in “stealth mode” by default (http://technet.microsoft.com/en-us/library/dd448557(WS.10).aspx). As such the client doesn’t send the RST answer and the server waits for about 3 minutes before continuing and showing the desktop.

[First thought]: The Windows 7 firewall stealth mode is causing the server to keep retrying a number of times

Now if we compare this with the trace where we connect to another server:

clip_image003

Here the server does NOT contact the client on port 2000 (or another port) and the desktop starts promptly.

[Conclusion 2]: That kind of traffic shouldn’t hit the client in the first place!

Luckily a colleague who is more experienced in Solaris than me had a golden hunch. When he connected using Xming he started toying around with the client settings, and in one of his attempts he tried connecting with another font selected. And voilà! So it seems we were trying to connect to a server while asking for a specific font which the server doesn’t have. As such it tries to search for this font and even contacts the client for it. I’ve no idea what places it searches, but this was definitely the culprit!

[Final conclusion]: If you are seeing this behavior, check your fonts!

Just for completeness: here’s the exact same issue also discussed:

Windows Azure: Add Your Own Management Certificate


Recently I figured out that I can try out Azure, as that comes as one of the benefits of having an MSDN account. I get 375 free compute hours per month! Just for the fun of it I want to host a small VM which acts as a TeamSpeak server every now and then. I guess that’s not really what the Azure subscription in the MSDN package is meant for, but hey, I’m experimenting and getting to know the possibilities of Azure in the meanwhile! Guess that’s a win-win, right?

Either way, because I only have 375 hours I can’t have my VM deployed 24/7. I wrote some simple PowerShell scripts which basically remove the VM, leaving the VHD intact, and recreate it whenever I want. That might be another blog post if I find some time. But now I want my colleagues to be able to power it up whenever I’m not around. The following options were not OK:

  • Be on duty 24/7 with an internet connection at hand
  • Hand out my live-id to everyone

So here come the, albeit limited, delegation capabilities of the Windows Azure management infrastructure: it seems you need your Live ID to log in via the web interface, but for the PowerShell cmdlets you can actually have up to 10 certificates! So here’s how to start toying around with that part of Azure.

Remark: I only used the Get-AzurePublishSettingsFile cmdlet, as explained in Windows Azure Cmdlet Guidance, for my initial Azure PowerShell configuration on my home PC. However it seems that if you run the command again it will just generate another “Windows Azure <very long name>-date-credentials” management certificate. So in the end you have no clue who you handed which certificate to.

So here we go:

1. Generate a new certificate

Using Visual Studio’s makecert utility I created my own certificate; for a detailed how-to, see How to Create a Certificate for a Role.

The command I used: makecert -sky exchange -r -n "CN=[CNF]Invisibal" -pe -a sha1 -len 2048 -ss My "o:\SkyDrive\Documenten\Personal\Azure\Invisibal.cer"

2. Upload the .cer file in the Windows Azure management portal

image

3. Export your certificate from your local store and store it somewhere safe

The makecert command created a .cer file which is good for the upload, but you have to make sure that whatever computer you want to run your Azure PowerShell cmdlets from has the certificate with the private key available. As in my case I created the certificate on my own PC, and I want my colleague to be able to connect to the Azure management API using PowerShell, I have to export the certificate (including the private key) and hand it over to him.

To export the certificate:

Start –> Run –> MMC –> Add/Remove the certificate snap-in, choose user

image

image

4. Download and configure the Azure PowerShell cmdlets

You can download the cmdlets from here: Downloads for managing Azure

After starting the shell and trying out a simple command you will be greeted with an error:

image

In words: Get-AzureVM : Call Set-AzureSubscription and Select-AzureSubscription first.

After some trial and error I found the following in one of the help sections of a cmdlet.

5. Retrieve your Azure subscription ID

You can get it either from the account section (where you get to see the usage & billing information) or just copy it from the Management Certificates section where you just uploaded a certificate:

image

Just copy paste it in a temporary notepad file.

6. Retrieve your certificate thumbprint

From a PowerShell prompt execute get-item cert:\\currentuser\my\*

image

Also just copy paste it in a temporary notepad file.

7. Start up the Azure PowerShell shell and start the magic

You can now easily copy the SubscriptionID ($subID) and the Thumbprint ($thumbprint) from the temporary notepad file into the required variables.

# Subscription ID and certificate thumbprint pasted from the temporary notepad file
$subID = "af2f6ce8-demo-demo-demo-dummydummyd3"
$thumbprint = "01675217CF4434C905CF0A34BBB75752471869C6"
# Grab the certificate (with its private key) from the user's personal store
$myCert = Get-Item cert:\\CurrentUser\My\$thumbprint
Set-AzureSubscription -SubscriptionName "CNF_TS" -SubscriptionId $subID -Certificate $myCert

This command should also persist between sessions, meaning that if you restart the shell the subscription will still be available and you can go ahead and start executing cmdlets right away.
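As the earlier error message hinted, you might still have to select the subscription before the other cmdlets will run (a quick sketch using the subscription name from above):

# Make the subscription current for this session and take it for a spin
Select-AzureSubscription -SubscriptionName "CNF_TS"
Get-AzureVM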

8. You’re good to go!

image

Well, just when I was about to wrap this up I found this great article; it covers most of my stuff and way more. Definitely worth reading: Automating Windows Azure Virtual Machines with PowerShell

DebugView 100% CPU In a Windows 2008 VM


A while ago I got a tip from a colleague to use the DebugView utility from Sysinternals (Microsoft) to debug code. Once in a while I write a simple rules extension for Forefront Identity Manager, or even an attribute store for ADFS. As simple as they may be, sometimes things don’t go as I wish…

You can use DebugView by adding the following lines to your code: at the top of your class make sure you have “using System.Diagnostics;”, and everywhere you want diagnostic output you put “Debug.WriteLine("your string here");”. It might be obvious, but you have to make sure you compile your code in Debug mode!

And perhaps a small gotcha here: make sure the DEBUG constant is defined. It’s on by default though.

image

I’ve used this approach a few times now, but yesterday things went bad. After starting DebugView, my server, a VM I was running on my laptop, became sluggish. I could still reproduce my issue, but nothing was being captured. Odd. After checking Task Manager I found out the DebugView.exe process was using 100% CPU.

Off to google! I quickly found this topic: forum.sysinternals.com: DbgView.exe 100%CPU

Finding DebugView version 4.76 is not that easy though; there’s a zillion sites just linking through to the Microsoft site and thus giving you version 4.79 every time. Finally I found this site which has the actual 4.76 version: http://www.myfiledown.com/download/435608/debugview-435608-3.html But the link seems down now… Once I used this version my CPU usage was normal and my debug output came out just fine.

SCCM 2007: DCM Development Tip


The actual reason why I’m toying around with DCM (Desired Configuration Management) will be explained in my next post. But here’s a tip I’ve found to be quite practical when trying to get your CI (Configuration Item) configuration right.

I quickly found out that whenever you change settings in the CI you have to initiate the Machine Policy Retrieval & Evaluation Cycle action so that the Configuration Manager client has the latest version of your baseline/CI.

image

In the Configuration Manager client you’ve got a button called Evaluate on the last tab, which you can use to have the CI evaluated and get a report displaying the current compliance state.

image

In the screenshot you see “Unknown:Scopel….” but that’s just a GUI refresh thingy; after a few minutes it’s properly displayed. Now this part is easy. On the other hand, I was switching a regkey by hand on the client in order to trigger the various possible outcomes of my baseline, and after a while I figured out that there had to be some caching behind the scenes…

Using google I found an explanation at the following forum: myitforum.com:[mssms] Configuring DCM to detect (Default) Value name [mdfdr5]

And then I started using the following workaround in order to avoid the 15’ interval:

image

By appending a number to the CI name I was triggering a version increase. This in turn causes the cached result to become invalid and ensures my evaluation always gives the most up-to-date answer. It’s a bit dirty and results in a high version number, but on the other hand this is a test environment, and it’s damn easy like this.

SCCM 2007: DCM Check For A Registry Value Only If the Value Exists


This is a bit far from my regular technologies, but today I used the DCM (Desired Configuration Management) feature of SCCM to map the number of clients suffering from a particular issue. More specifically, we are suffering from the issue described in: social.technet.microsoft.com: Print drivers on windows 7 clients missing dependent files..?

So we know that clients which have the “corrupted” printer driver registry settings look like this:

  • Key: HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Drivers\Version-3\Lexmark Universal
  • Value1: Help File=””
  • Value2: Dependent Files=””

We also know that clients which are healthy look like this:

  • Key: HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Drivers\Version-3\Lexmark Universal
  • Value1: Help File=”UNIDRV.HLP”
  • Value2: Dependent Files=”blabla.dll blablo.dll ….dll”

And we should not forget that not all clients have this driver! So the ones which don’t have the key/value should not be reported!

SCCM DCM to the rescue! I’ve actually spent quite some time getting this right. Probably because I’m a first-time DCM’r, but perhaps also because some things aren’t that obvious. What I wanted to achieve with DCM, explained in words: get me a report which returns all computers that have a blank value for the “Help File” registry value. I specifically wanted to ignore the ones where that registry value didn’t exist or where it has a value of “UNIDRV.HLP”.
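To make the logic explicit, here’s the same check as a plain PowerShell sketch (for ad-hoc testing on a single client only; it’s not part of the DCM configuration):

# Mirror of the intended DCM logic
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Drivers\Version-3\Lexmark Universal'
if (-not (Test-Path $key)) {
    'Not applicable: driver not installed'   # must NOT be reported
} elseif ((Get-ItemProperty -Path $key).'Help File' -eq 'UNIDRV.HLP') {
    'Compliant'                              # healthy client
} else {
    'Non-compliant'                          # blank (or wrong) value: report it
}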

So here is how you don’t do it:

Adding a CI (Configuration Item) where you add a registry key to the Objects tab

image

As far as I’ve come to understand the DCM configuration, by adding a registry key to the Objects tab you can check for its existence. Note that I wrote key deliberately: in registry terms a key is like a folder, whereas a registry value is like a string or binary item which can hold an actual value.

Here’s how you can do it:

Leave the Objects tab empty and go on with the Settings tab.

image

On the settings tab we can add a specific setting of the type registry. Your definition should look like this:

image

On the general tab all we need to do is specify the Hive, the Key and the name of the Value we are interested in. The validation tab is the one where the real magic happens:

image

I will first go to the next screenshot and then come back to this one. In the next screenshot you see how I added a new validation rule by clicking “New”.

image

What you see here should be pretty obvious: I specified that if the “Help File” registry value equals “UNIDRV.HLP” all is good. And more specifically: if this isn’t the case it should be expressed with a severity of Error. Now some examples:

  • Value example #1: “UNIDRV.HLP”: compliant
  • Value example #2: “UNIDRV”: non-compliant
  • Value example #3: “”: non-compliant
  • Now what if the registry value doesn’t exist to begin with?!

Well, that’s where the previous screenshot comes into play: by default “Report a non-compliance event when this instance count fails” is checked. I specifically unchecked it. My understanding is that this option causes the CI to be non-compliant if the registry value (the instance) can’t be found. In my case, if the value can’t be found it means the driver isn’t installed and thus the client is not suffering from the issue.

So in short, using the configuration shown above I have established that all clients which have a registry value “Help File” under the given key should have a value of “UNIDRV.HLP”. If they’ve got an empty value, they’ll be included in the report. The ones which don’t have this driver, and thus don’t have this registry value, will be excluded from the report. This will allow us to do some quick and dirty fixing of the clients which are already suffering from this issue, and at the same time we can try distributing a printer feature hotfix package from Microsoft. Once that one is out on the clients we can use the reporting to find out if new cases are occurring.

It was a post by KevinM (MSFT) which made all of the above fall together: social.technet.microsoft.com: Check if Registry Value Exists?


Quick Tips: September Edition #1


OK, I’ve gone through my mailbox and I’ve got quite a few neat little tricks I want to share and, most of all, never forget myself. So I’ll put them here for future reference.

Tip #1 (Network):

Remember “Network Tracing Awesomeness”? If you only want to capture traffic which involves a specific IP, you can start the trace like this:

netsh trace start capture=yes ipv4.address=10.90.101.41

This can be very convenient if your server is a domain controller or a file server and communicates with a lot of clients all the time.
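Don’t forget to stop the trace afterwards; the location of the resulting ETL file is printed when you do:

netsh trace stop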

Tip #2 (IIS):

In various IIS Kerberos configuration how-tos you are instructed to set useAppPoolCredentials to true. I always hate editing XML files directly as it’s quite easy to make errors. Using the following command you can easily set this parameter from a command prompt:

appcmd set config "Default Web Site" /section:windowsauthentication /useAppPoolCredentials:true /commit:MACHINE/WEBROOT/APPHOST

“Default Web Site” is the name of the site as it appears in the IIS management console. Remember, you might need something like “Default Web Site/vDir” if you have to configure this for sublevels of the site.

Tip #3 (Kerberos):

If you enable an account to be trusted for delegation to a given service, you might have to wait some time before the service itself notices this. This often shows itself as: I changed something, it didn’t work, and magically the next day it started working. If I’m not mistaken this might have to do with the Kerberos S4U refresh interval, which is 15’ by default. At least that was the value on Windows 2003… See also: KB824905: Event ID 677 and event ID 673 audit failure messages are repeatedly logged to the Security log of domain controllers that are running Windows 2000 and Windows Server 2003

Tip #4 (PowerShell):

From: MSDN: Win32_PingStatus class

When you use PowerShell to perform remote tasks on a server, such as WMI queries, it might be way more efficient to do a quick ping before actually trying to talk WMI to the server. This way you can circumvent those nasty timeouts when the server you are trying to talk to is down.

$server = "server01"
$PingStatus = Gwmi Win32_PingStatus -Filter "Address = '$Server'" |Select-Object StatusCode
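A StatusCode of 0 means the ping succeeded, so a guard could look like this (a sketch):

# Only attempt the real WMI query when the ping came back successfully
if ($PingStatus.StatusCode -eq 0) {
    Get-WmiObject -ComputerName $server -Class Win32_OperatingSystem
}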

Tip #5 (Tools):

Every once in a while I need a tool from the Sysinternals utilities set. Mostly I go to google, type in the name, get to the Microsoft site hosting the utility and click launch. However, it seems you can easily access all of the tools using this WebDAV share: \\live.sysinternals.com. Just enter it in a file explorer window or in Start -> Run; the utilities we all know so well are located in the Tools folder. Or if that doesn’t work, just use http://live.sysinternals.com/

clip_image001

Thanks to a colleague for this last tip!

-Stay tuned for more!-

Win 8 Client: Manage Wireless Networks, Where Art Thou? Follow Up


A while ago I posted a workaround to manage the more advanced settings of wireless networks: Win 8 Client (Dev Preview): Manage Wireless Networks, Where Art Thou?

In some of the comments I read that in the final version the explorer.exe shell:: command no longer worked. After verifying on my own fresh install I noticed that this was indeed the case. However, there are other possibilities which make it less bad. You can access the advanced settings in the following ways:

1. Just before finishing the creation of a new network:

In the Network and Sharing Center click “Set up a new…”

image

Choose “Manually connect to a …”

image

After entering some basic parameters you can choose “Change connection settings” before clicking close.

image

2. For an existing network connection:

OK, my title is a bit misleading: I think you can only edit this one if the SSID is actually accessible, meaning you are physically in the location where the wireless LAN is supposed to be. I’m not saying authentication should succeed, but the SSID should be “online”. So in a lot of situations this might be sufficient.

When clicking the network icon in the tray a bar will appear on the right with your networks in it. You can right-click a network and choose “View connection properties”.

image

3. By deleting and re-adding the profile:

Yep, this one is not funny, but for now I don’t see any other options. I actually found this one on the following blog: Ryan McIntyre : Windows 8 Missing “Manage Wireless Networks”

  • Show the profiles: netsh wlan show profile
  • Delete a profile: netsh wlan delete profile name="profile name"
  • Recreate it using the GUI and make sure you now do it properly

image

UAG: Trunk With Anonymous Authentication Not Working


A few days ago I was setting up a UAG with a trunk configured for anonymous authentication so that I could publish our FIM Self Service Password Reset page. I think I tried to outsmart UAG, because this is what I was getting over and over again:

image

In words: 500 – Internal server error.

I said to myself “how hard can it be?!”. After some time I started thinking that removing the default Portal entry which is added to the trunk hadn’t been a good idea. I didn’t need it, as my users go directly to the SSPR site, but it seems UAG needs it very badly! Just re-add it, activate the configuration, and everything should start working.

image

To conclude: even if you don’t need it, better leave it in place.

Temporary Profiles and IIS Application Pool Identities


I’m a bit stumped that I’ve only come across this now. Recently I discovered that there are some cases where you can end up with your service account using a temporary profile. Typically this is the case where your service account has very limited privileges on a server, like application pool identities which run as a regular AD user (which I consider a best practice). I myself saw this in the context of the application pool identities in a SharePoint 2010 farm and with SQL Server Reporting Services 2008 R2.

The phenomenon is also described at: Todd Carter: Give your Application Pool Accounts A Profile. So this does not apply to all application pool identities, only those running with “load profile=true”.

In the Application event log you can find the following event:

Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off.

How to fix it if you see those nasty “c:\users\TEMP” folders?

  1. Stop the relevant application pools
  2. Stop the IIS Admin Service (in services.msc)
  3. See that the TEMP folders are gone in c:\users
  4. Follow the next steps

How to make sure your accounts get a decent profile?

We will temporarily add the service account to the local Administrators group so it can create a profile; in fact all it needs is the “log on locally” privilege. The second command will start a command prompt while loading a profile, which ensures a proper profile is created.

  1. net localgroup administrators CONTOSO\AppPoolAccount /add
  2. runas /u:CONTOSO\AppPoolAccount /profile cmd
  3. net localgroup administrators CONTOSO\AppPoolAccount /del

As a side note: if the TEMP folders are not disappearing, or you are still getting a temporary profile, you can try to properly cleanup the temporary profile:

  1. Stop the application pools
  2. Stop the IIS Admin Service
  3. Using right-click properties on computer, choose advanced tab and then pick User Profiles. There you can properly delete them.

If you’re still having trouble you might need to delete the TEMP folders manually AND clean up the following registry location: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList. Especially look for keys with .bak appended to them.
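A quick PowerShell sketch to eyeball that registry location:

# List profile entries; watch for SIDs with .bak appended and paths pointing at C:\Users\TEMP
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList' |
    Select-Object PSChildName, @{n='ProfilePath';e={$_.GetValue('ProfileImagePath')}}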

FIM 2010 R2 Password Reset Configuration Troubleshooting


I configured FIM 2010 R2 for Self Service Password Reset using e-mail OTP. This is documented quite well on TechNet. However, when my test user provided the OTP and entered a new password he was greeted with an error:

image

In words: An error has occurred. Please try again, and if the problem persists, contact your help desk or system administrator. (Error 3000)

In order to explain when this issue occurred:

  1. I provided the username
  2. I received the OTP in my mailbox
  3. I entered the OTP in the form
  4. I provided a password in the form
  5. I clicked Next

In order to solve this I tried/verified the following items:

Besides the user being confronted with an error in his browser I also noticed the following events in the event log.

  • Log: Forefront Identity Manager
  • Source: Microsoft.ResourceManagement
  • Level: Warning
  • Text: System.Workflow.ComponentModel.WorkflowTerminatedException: Exception of type 'System.Workflow.ComponentModel.WorkflowTerminatedException' was thrown.
  • Log: Forefront Identity Manager
  • Source: Microsoft.CredentialManagement.ResetPortal
  • Level: Error
  • Text: The error page was displayed to the user.

    Details:

    Title: Error

    Message: An error has occurred. Please try again, and if the problem persists, contact your help desk or system administrator. (Error 3000)

    Source:

    Attributes:

    Details: System.InvalidProgramException: Error while performing the password reset operation: PWUnrecoverableError

  • Log: System
  • Source: Microsoft.CredentialManagement.ResetPortal
  • Level: Error
  • Text: Microsoft.IdentityManagement.CredentialManagement.Portal: System.Web.HttpUnhandledException: ScriptManager_AsyncPostBackError ---> System.InvalidProgramException: Error while performing the password reset operation: PWUnrecoverableError
  • Log: System
  • Source: Microsoft.CredentialManagement.ResetPortal
  • Level: Error
  • Text: The web portal received a fault error from the FIM service.

    Details:

    Microsoft.ResourceManagement.WebServices.Faults.ServiceFaultException: DataRequiredFaultReason

  • Log: System
  • Source: Microsoft.ResourceManagement
  • Level: Error
  • Text: mscorlib: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

    at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)

    at System.Management.ManagementScope.InitializeGuts(Object o)

    at System.Management.ManagementScope.Initialize()

    at System.Management.ManagementObjectSearcher.Initialize()

    at System.Management.ManagementObjectSearcher.Get()

    at Microsoft.ResourceManagement.PasswordReset.ResetPassword.ResetPasswordHelper(String domainName, String userName, String newPasswordText)

Besides the above entries I also stumbled upon this one:

image

In words: The program svchost.exe, with the assigned process ID 684, could not authenticate locally by using the target name RPCSS/fimsyncdev.contoso.com. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name.

Try a different target name.

As far as I could tell this entry did not get logged when a user attempted a reset or when the service was restarted, but it was logged a few times nevertheless.

After seeing this one it finally became clear: I like to use a DNS alias to target the FIM Synchronization Server when installing the FIM Service bits. This makes it easier when I have to activate my cold standby FIM Synchronization Server. Typically you have two options for creating an “alias”:

  1. An A record
  2. A CNAME record

Scenario 1 is very much the preferred one when working with web applications: it makes registering your SPNs way more logical, as you just add them to the service account (the application pool identity). However, here we have a special version of this scenario. The password reset relies on WMI/DCOM, and the FIM Service has to authenticate against those in order to successfully execute a password set. The WMI/DCOM stuff doesn’t run as a service account; it’s a service which runs under the local system account. Even if I added my alias as an SPN on the computer account of the active FIM Sync server, I would have to modify this SPN when activating my cold standby server.

So long story short: if you feel using an alias for your FIM Synchronization Server is interesting, use a CNAME. Normally I do in this scenario, but for this specific customer it slipped through and cost me some hours to figure out. On the other hand, I learned something about DCOM and its authentication.
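For completeness: creating such a CNAME on a Windows DNS server can be done with dnscmd (a sketch; the server, zone and host names are made up):

# Point the alias fimsync.contoso.com at the currently active sync server
dnscmd dns01 /RecordAdd contoso.com fimsync CNAME fimsyncdev01.contoso.com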
