Ask the Directory Services Team

AGPM Production GPOs (under the hood)


Hello, Sean here. I’m a Directory Services engineer with Enterprise Platforms Support in Charlotte. Today, I’d like to talk about the inner workings of Advanced Group Policy Management (AGPM). Let’s begin by discovering what occurs behind the scenes when you take control of a Production GPO using AGPM.

The term “Production GPO” is used frequently in AGPM documentation to describe an existing GPO in Active Directory and differentiate between it and the copy that AGPM stores in the Archive to allow for “Offline Editing”.

For those new to AGPM, it provides many features to help you better manage Group Policy Objects in your environment. Role-based administration allows you to delegate certain actions to users, even those that may not be administrators. The four built-in roles are Reviewer, Editor, Approver and Administrator. Change-request approval helps to avoid unexpected and unapproved modifications to production GPOs. AGPM also provides the ability to edit GPOs offline, allowing for review and approval of the changes before committing them to production. Furthermore, version tracking of GPO changes, the ability to audit/compare versions and the rollback feature can help you recover from GPO changes that need to be revised. The Overview of Advanced Group Policy Management white paper (Link) has information about these features and more.

Environment Overview:

The environment has three computers: a domain controller, a member server, and a client.

  • CONDC1 : Windows Server 2008 R2 Domain Controller
  • CONAGPM : Windows Server 2008 R2 AGPM Server
  • CONW71 : Windows 7 AGPM Client

The AGPM server and client computers are members in the contoso.com domain. This scenario uses the 64-bit version of AGPM for server and client installations, but a 32-bit version is available as well. The AGPM server and client installs were done following the Step-by-Step Guide (Link). This document is also included on the MDOP disk (..\Documents\4.0\AGPM_40_Step-by-Step_Guide.pdf).

clip_image001

Tools Overview:

The following tools will be used to gather data during this exercise:

  • Microsoft Network Monitor (Link) will be used to capture the network traffic that is generated between each computer.
  • Process Monitor (Link) is a Windows Sysinternals utility that we will use to monitor the activity of individual processes running on each computer during the exercise.
  • Group Policy Management Console (GPMC) logging will be enabled (Link) in order to track the operations performed by this MMC snap-in on each computer. This will allow us to point out any differences in the snap-in’s behavior across the different computers.
  • Active Directory Object Auditing will be enabled (Link), notifying us of any changes to Active Directory Objects that we configure for auditing. This will generate events in the computer’s security event log.
  • Advanced Group Policy Management logging (Link) is configured via Group Policy. This will be enabled in order to see exactly what the AGPM components are doing on each computer.

Prologue:

Before we begin, it’s important to understand how AGPM is able to delegate management of GPOs to non-Administrators. Delegation of the various AGPM roles is done within AGPM itself. All operations performed by AGPM in the domain are handled by the AGPM service account. During the AGPM server installation, you specify what account you wish to use as the AGPM service account. This single account is granted the permissions to create, delete and manage GPOs in the domain. When we start GPMC as a user who has delegated permissions within AGPM, even if the user account has no rights to manage GPOs by itself, AGPM instructs the service account to perform the actions on the user’s behalf.

When performing data collection on multiple systems like this, it’s important to understand how each component works, and under what security context it’s working. For this task, I’m logged into CONW71 with my AGPM Administrator account (agpmadmin). The changes I make through the AGPM console on CONW71 are commands sent through GPMC.msc as the user agpmadmin. Even though I request to change the status of a GPO that is located on a domain controller, the commands sent from CONW71 go to the AGPM service running on CONAGPM. On CONAGPM, the AGPM service receives those commands and evaluates what permissions the submitting user account has been granted.

Based on the role of the user submitting the commands to the AGPM service, the action will be allowed or disallowed. If the user has the appropriate permissions, the AGPM service builds the request to send to the domain controller and forwards it, not as the user who initiated the requests, but as the AGPM Service account. Since the AGPM service account is being used for the request sent to the domain controller, access is based on the permissions assigned to the AGPM service account.
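The flow above can be sketched as a simple role check. The role names are AGPM's four built-in roles from earlier; the action mapping here is a rough approximation for demonstration, not AGPM's actual ACL logic.

```python
# Illustrative sketch of the authorization flow described above. The role
# names are AGPM's built-in roles; the action mapping is an assumption made
# for demonstration purposes only.
ROLE_ACTIONS = {
    "Reviewer":      {"review"},
    "Editor":        {"review", "edit", "check out", "check in"},
    "Approver":      {"review", "control", "approve", "deploy"},
    "Administrator": {"review", "edit", "check out", "check in",
                      "control", "approve", "deploy", "delegate"},
}

def service_allows(role, action):
    # The AGPM service evaluates the submitting user's delegated role; if
    # allowed, it performs the action itself as the AGPM service account.
    return action in ROLE_ACTIONS.get(role, set())

print(service_allows("Editor", "control"))  # False
```

The key point the sketch captures: the check happens on the AGPM server, and the action that follows runs under the service account, not the submitting user.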

Getting Started:

First, we’ll log into CONDC1 and create a few Organizational Units (OU) named “Development”, “HR” and “Sales”. By right-clicking on the OUs and selecting “Create a GPO in this domain, and Link it here”, we will create the new GPOs that will automatically be linked to their respective OUs. CONDC1 doesn’t have the AGPM server or client installed, so we will use the vanilla Group Policy Management Console (GPMC.msc). For the sake of today’s blog post, we’ll only be working with the “Dev Client Settings” GPO. Let’s add a few drive mapping GP Preference settings, just to make it seem a bit more authentic. Before we do anything further to the GPO, let’s make note of a few key details regarding the GPO.

  • The GPO GUID : {01D5025A-5867-4A52-8694-71EC3AC8A8D9}
  • The GPO Owner : Domain Admins (CONTOSO\Domain Admins)
  • The Delegation list : Authenticated Users, Domain Admins, Enterprise Admins, ENTERPRISE DOMAIN CONTROLLERS and SYSTEM

Second, we want to get each of our data collection tools ready to capture data. Logging options will be configured for GPMC and AGPM. Active Directory Object Auditing will be enabled, and our GPO will have auditing configured to report any attempted change, successful or not. Network Monitor and Process Monitor will be started and tracing on all three computers right before we take control of the production GPO.

Next, we’re ready to take control of the GPO using the AGPM client installed on CONW71. Computers that have the AGPM client installed have a new “Change Control” entry within GPMC. This is where we will perform most of the functions that brought us to install AGPM in the first place. On the “Uncontrolled” tab, we see a list of GPOs in the domain that are not currently controlled by AGPM. Let’s right-click on the “Dev Client Settings” GPO, and bring up a context menu where we select the “Control” option.

image

If we hold the delegated role of AGPM Admin or Approver, we’ll be prompted to add a comment for this operation. Without the Admin or Approver role, we’ll be asked to fill out a request form that is emailed to the AGPM Approvers first. It’s always a good idea to comment with something meaningful, explaining why we’re taking ownership of this GPO. It’s not always obvious why changes were made to a GPO, and the comment is our chance to inform others of the reasons behind our action. If your organization has change control procedures, the comment is an excellent place to reference the official change request identifier.

Assuming we have the permissions to take control of a production GPO, when we add our comment and click “Ok”, we will see a progress window appear. It will update itself with the progress it’s making on our request. It should report whether the operation was successful or not, and if not it should give us some additional information regarding the problem(s) it ran into.

Simple enough on the front end, but what exactly is taking place behind the scenes while we made those few clicks? Let’s take a look…

The AGPM Client

Network Monitor on the AGPM Client shows some TCP chatter back and forth between an ephemeral port on the AGPM client, and TCP Port 4600 on the AGPM server. TCP 4600 is the default port when installing the AGPM Server component, but you can change that during the install or after (Link) if you prefer. There is no communication between the AGPM client and the domain controller other than ARP traffic. The process making the calls to the AGPM server is MMC.exe.
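Since all client/server traffic rides on that one port, a quick connectivity probe is a reasonable first troubleshooting step. A minimal sketch, assuming the default port of 4600 (your install may use a different one):

```python
import socket

# Hedged connectivity check: can this client complete a TCP handshake to the
# AGPM service port? TCP 4600 is the default per the text; adjust if your
# install changed it.
def agpm_port_open(server, port=4600, timeout=3.0):
    try:
        with socket.create_connection((server, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from the AGPM client against the AGPM server name; note it only proves the TCP handshake succeeds, not that the AGPM service itself is healthy.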

image

Process Monitor on the AGPM Client is similarly sparse on information. MMC.exe accesses the registry and file system briefly as it builds the request to send to the AGPM server, and writes to the agpm.log file under the profile of the logged on user.

GPMC logging (gpmgmt.log) typically generates many entries, but none were generated on the AGPM Client during the test.

AGPM logging on the client shows a number of actions being taken between the AGPM Client and AGPM Server. The control operation appears between two [Info] entries, and shows the various functions being called by the AGPM client to process and report the results from the operation to the user.

image

The AGPM Server

Moving to the AGPM Server, we can see a difference in behavior from nearly every data point.

The network capture from the AGPM Server shows the TCP communication back and forth with the AGPM Client followed by TCP and LDAP packets between the AGPM Server and the Domain Controller. Once the commands have been received from the AGPM Client, the AGPM Server initiates the requested actions with the Domain Controller. The request to change the GPC and its contents comes in the form of SMB SetInfo Requests.

image

If we drill down into the packet info, into the SetInfo Request… we’ll see the modified object:

image

And further down, the DACL changes:

image

The highlighted SID is for the AGPM Service account in our domain. We can get the user account SID for the AGPM service account by looking up the objectSID attribute of that user account within ADSIEdit.msc. 0x001f01ff is the equivalent of Full Control. Notice, the owner is still set to S-1-5-32-544 (Built-In/Administrators). This is the case for every file and folder within the GPT except for the top level folder named after the GPO’s GUID. Here we see the AGPM Service account’s SID again.
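To see why 0x001f01ff works out to Full Control, we can reassemble it from the standard winnt.h access-mask constants. This is just a sanity check, not AGPM code:

```python
# Sanity check (not AGPM code): reassemble 0x001f01ff from the standard
# winnt.h access-mask constants to confirm it equals Full Control on a file.
DELETE       = 0x00010000
READ_CONTROL = 0x00020000
WRITE_DAC    = 0x00040000
WRITE_OWNER  = 0x00080000
SYNCHRONIZE  = 0x00100000

STANDARD_RIGHTS_ALL = DELETE | READ_CONTROL | WRITE_DAC | WRITE_OWNER | SYNCHRONIZE
FILE_SPECIFIC_ALL   = 0x000001FF  # FILE_READ_DATA through FILE_WRITE_ATTRIBUTES

FILE_ALL_ACCESS = STANDARD_RIGHTS_ALL | FILE_SPECIFIC_ALL
assert FILE_ALL_ACCESS == 0x001F01FF
print(hex(FILE_ALL_ACCESS))  # 0x1f01ff
```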

image

After the AGPM Service account has permissions, you can see it start to query the domain controller via LDAP and SMB2, copying over the GPO to the AGPM server. This is the AGPM server creating a copy of the GPO in the Archive you created during installation of the AGPM Server.

Process Monitor on the AGPM Server is very busy. First, the service checks for the Archive path, and reads through the gpostate.xml file, checking to see if it already knows about this GPO. The gpostate.xml file contains a historic view of GPOs known to AGPM. We see some LDAP communication between the AGPM server and the Domain Controller that corresponds to the AGPM server modifying permissions on the portion of the GPO that resides in Active Directory. This is followed by the AGPM service exploring the entire folder structure of the GPO’s SYSVOL component, modifying the DACL and Owner information to include the AGPM service account.

In order to provide the ability to edit GPOs offline, AGPM makes use of the Archive to store a copy of each GPO it controls. The Process Monitor capture from the AGPM Server gives us a very good look at what’s going on between SYSVOL and the archive.

image

We see it start to dig into the Group Policy Template for the GPO we’re taking control of, reading the information from the folders and files beneath it. In the next image, we see the AGPM service query the registry for the location of the Archive.

image

We also see below that it reads from a Manifest.xml file. This is a hidden file containing basic information about every GPO in the Archive, such as the GPO’s production GUID, the domain and domain GUID, and the AGPM-assigned GUID.

image

After this, the AGPM service starts to create a folder structure within the Archive for the GPO. What’s interesting here is that closer scrutiny reveals an uncanny resemblance to a standard GPO backup routine. If you’ve ever backed up a GPO using GPMC, you’ll recognize the files and folder structure created by AGPM when it adds a GPO to its archive.

image

Notice the GUID in the Archive path. AGPM creates its own unique identifier for the archived copy of the GPO. Process Monitor shows the AGPM service going back and forth between SYSVOL and the Archive, reading info from one and writing it into the other. The AGPM service pulls the settings from the GPO and creates a gpreport.xml file with that information. GPReport.xml also has the following information within it:

  • GPO Name, Created Time, Modified Time and Read Time
  • Security Descriptor (Security principal SIDs with SDDL permissions)
  • Additional info regarding each Security Principal

Two other files in the archived GPO’s folder are Backup.xml and bkupInfo.xml (Hidden). Backup.xml contains the following information:

  • The list of Security Principals on the GPO, along with additional information about each
  • The actual settings from the GPO itself
    • Security Descriptor (in hex)
    • Options
    • UserVersionNumber
    • MachineVersionNumber
    • CSE GUIDs

BkupInfo.xml is essentially an excerpt directly from Manifest.xml of the info that pertains to this GPO.
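Since each archived GPO folder mirrors a GPMC backup, one simple integrity check is to look for these three files. A minimal sketch, using a temp directory to stand in for an Archive folder:

```python
import os
import tempfile

# Hedged sketch: each archived GPO folder mirrors a GPMC backup; the three
# file names below come from the text. Flag any that are missing.
EXPECTED = ("gpreport.xml", "Backup.xml", "bkupInfo.xml")

def missing_backup_files(archive_gpo_dir):
    present = {name.lower() for name in os.listdir(archive_gpo_dir)}
    return sorted(f for f in EXPECTED if f.lower() not in present)

# Demo with a temp dir standing in for an archived GPO's folder.
demo = tempfile.mkdtemp()
for name in ("gpreport.xml", "Backup.xml"):
    open(os.path.join(demo, name), "w").close()
print(missing_backup_files(demo))  # ['bkupInfo.xml']
```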

AGPM logging on the AGPM server doesn’t generate many entries during the control operation. It shows the incoming message, identifies the Client/Server SIDs (the user account SID of the user initiating the action on the AGPM Client, and that of the AGPM service account being used by the AGPM Server), and calls the appropriate functions. The control operation has the AGPM Server sending requests to check the GPO’s security (doGpoLevelAccessCheck()) and then take control of the GPO (ControlGPO()).

image

GPMC logging on the AGPM Server gives us a wealth of information. Without much delay, you see the GPMC log record an LDAP bind and permissions being modified on the GPO objects within Active Directory.

image

The next thing you’ll notice in the GPMC logging on the AGPM Server is a reference to backup-related functions being called. Remember the AGPM server accessing the Group Policy Template and Container in the other data collections? When the GPO is copied to the AGPM Archive, this is essentially a GPO backup, very much like the one you can perform in GPMC.msc. The remainder of the GPMC log is dedicated to covering the backup process.

image

The Domain Controller

This is the last stop in our data analysis. The network capture shows the traffic from the AGPM Server. Process Monitor, however, is a bit different. Where the AGPM Server had a lot of entries specific to our operation to control the GPO, all of the information in Process Monitor on the Domain Controller shows up as reads/writes to the Active Directory database (NTDS.DIT). Process Monitor does not let us see what was being read or written, so it is of limited use for seeing exactly what’s going on.

The Security log has generated many events, just in the short time it took to take control of this GPO. We can see the AGPM service account connect and read various attributes of the Group Policy Container from Active Directory. We’ll also see a single event for the actual modification of the Group Policy Container (GPC) replacing the current nTSecurityDescriptor information with one containing permissions for the AGPM Service Account.

image

The Object Name value in the event data corresponds to the objectGUID of the GPO’s container object within Active Directory.

Since neither AGPM nor GPMC was used on the Domain Controller, there are no corresponding logs from those tools to review.

In Closing

We’ve pulled back the curtain on the procedure of taking control of a production GPO, reviewed it from different perspectives using different tools, and found it’s a simple task broken into a few common subtasks.

  • The AGPM service takes ownership of the GPO and adds itself to the DACL with Full Control, both on the Group Policy Container within Active Directory and the Group Policy Template in SYSVOL.
  • The AGPM service then performs a GPO backup to a specified location (the Archive).

Once the GPO is controlled by AGPM and backed up to the Archive, a number of other tasks can be performed on it, which we will cover in depth in future blog posts.

Complete series

http://blogs.technet.com/b/askds/archive/2011/01/31/agpm-production-gpos-under-the-hood.aspx
http://blogs.technet.com/b/askds/archive/2011/04/04/agpm-operations-under-the-hood-part-2-check-out.aspx
http://blogs.technet.com/b/askds/archive/2011/04/11/agpm-operations-under-the-hood-part-3-check-in.aspx
http://blogs.technet.com/b/askds/archive/2011/04/26/agpm-operations-under-the-hood-part-4-import-and-export.aspx

 

Sean “right angle noggin” Wright


Friday Mail Sack: The Year 3000 Edition


Hello all, Ned here again. Today we talk DCDIAG, DFSN, DFSR, group policy, user profiles, migrations, USMT, and the fuuuuuuturrrrrrrrre.

Question

I have a mixed environment of Win2003 and Win2008 DCs. When I run DCDIAG.EXE it tells me the Windows Server 2003 DCs are failing a service test around RPCSS:

Starting test: Services

      Invalid service type: RpcSs on DC01, current value

      WIN32_OWN_PROCESS, expected value WIN32_SHARE_PROCESS

   ......................... DC01 failed test Services

I see some Internet posts that say I should change the value using the SC.EXE command. Do you know why this is different and what’s going on? It looks like the difference is being a service in a shared versus isolated process.

Answer

It’s expected and normal for this service’s type to be 0x10 on Win2003 and 0x20 on Win2008 and later. Do not change it based on what DCDIAG says unless you are running the version of DCDIAG that goes with that OS (this is where much of the Internet got confused on causality versus correlation). Win2008 DCDIAG doesn’t know that Win2003 was designed this way so he can’t give you a reasonable answer – he just wants the value to be the default in Win2008 terms.
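The two values DCDIAG is arguing about map to the standard winnt.h service-type constants; a quick decoder, purely illustrative:

```python
# The two service Type values DCDIAG is comparing (constants from winnt.h).
SERVICE_WIN32_OWN_PROCESS   = 0x10  # runs in its own process (Win2003 RpcSs)
SERVICE_WIN32_SHARE_PROCESS = 0x20  # shares a svchost.exe (Win2008+ RpcSs)

def describe_service_type(value):
    names = {0x10: "WIN32_OWN_PROCESS", 0x20: "WIN32_SHARE_PROCESS"}
    return names.get(value, "other")

print(describe_service_type(0x20))  # WIN32_SHARE_PROCESS
```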

Your assumption around shared versus isolated is totally correct:

Win2008 R2
clip_image002

Win2003
clip_image002[4]

Between Win2003 and Win2008, the behavior changed for the RPC service, but there was nothing yet to “share” in that svchost.exe process. In Win2008 R2, the new RPCEptMapper service was added to that shared svchost. You can see which services would launch in that same process by looking for this value in the service registry keys:

%systemroot%\system32\svchost.exe –k RPCSS
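Grouping services by that "-k" value is easy to script. A hedged sketch with sample ImagePath values (on a real box you would read these from HKLM\SYSTEM\CurrentControlSet\Services\&lt;name&gt;):

```python
import re

# Hedged sketch: group services by their svchost "-k" group, given ImagePath
# values normally read from the Services registry keys. Sample data below is
# illustrative, not pulled from a live machine.
services = {
    "RpcSs":        r"%systemroot%\system32\svchost.exe -k RPCSS",
    "RpcEptMapper": r"%systemroot%\system32\svchost.exe -k RPCSS",
    "Dnscache":     r"%systemroot%\system32\svchost.exe -k NetworkService",
}

def svchost_group(image_path):
    match = re.search(r"-k\s+(\S+)", image_path, re.IGNORECASE)
    return match.group(1) if match else None

rpcss_members = sorted(n for n, p in services.items() if svchost_group(p) == "RPCSS")
print(rpcss_members)  # ['RpcEptMapper', 'RpcSs']
```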

Later versions of Task Manager make this easier too, if you’re allergic to command-line:

image

Svchost.exe exists mainly to lower computer resource usage: the more DLLs that can run in fewer shared processes, the less memory/CPU the OS has to allocate for services. You might think it was OK to change this on Win2003 to stop the error and maybe even get back some resources. The problem with that theory is that on Win2003 you get no resources back (as no one else is going to share that process) and you open yourself up to weird issues – when I tell Windows developers about issues caused by customers modifying services, their first response is “Why on earth would anyone change the service? We don’t test for that at all!”

Playing around with service configurations is not something you do without valid reason and some tool complaining doesn’t meet that bar.

Best long term solution: get rid of those remaining Win2003 servers. Then you get all sorts of advantages, like features unlocked by higher functional levels or magically load-balancing bridgeheads.

Plus I get paid.

Question

Is there a way to disable and enable DFS namespace targets from the command-line? We’re building some automation.

Answer

You can use the Win2008/Vista RSAT (or later) versions of dfsutil.exe with this syntax:

dfsutil property state offline <DfsPath> [<\\server\share>]
dfsutil property state online <DfsPath> [\\server\share]

Nicely buried…
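Since the question was about automation, here is a minimal wrapper sketch around that syntax. It assumes the RSAT dfsutil.exe is on the PATH; the paths in the demo call are illustrative:

```python
import subprocess

# Hedged automation sketch built on the dfsutil syntax above. The command is
# assembled here and would be run on a machine with the RSAT dfsutil.exe
# available; the DFS paths shown are illustrative.
def build_dfsutil_cmd(state, dfs_path, server_share=None):
    if state not in ("online", "offline"):
        raise ValueError("state must be 'online' or 'offline'")
    cmd = ["dfsutil", "property", "state", state, dfs_path]
    if server_share:
        cmd.append(server_share)
    return cmd

def set_target_state(state, dfs_path, server_share=None):
    # Runs dfsutil; only meaningful on a machine that actually has it.
    return subprocess.run(build_dfsutil_cmd(state, dfs_path, server_share),
                          capture_output=True, text=True)

print(build_dfsutil_cmd("offline", r"\\contoso.com\public", r"\\fs1\public"))
```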

Question

When I use RSOP.MSC on a Windows 7 computer, I see a lot of missing entries and errors and whatnot.

Answer

Blink and you may miss the reason why:

image

Since Vista, the OS has been trying to tell you not to use this tool (which is no longer updated and has no idea about a great number of policies). To get a nice, readable resultant set of policy you need to use GPRESULT.EXE /H foo.htm. Mike has yammered about this before.

Question

I was curious - has the team heard what the future is for Active Directory, beyond Win2008 R2?

Answer

Lots (that’s my full time job now) but we cannot discuss anything. Don’t worry, the marketing people won’t keep it a secret one moment longer than necessary. And our fearless leader lets things out every so often.

Question

Can the new MIGAPP.XML included in KB2023591 be used with USMT 3.01?

Answer

[A reprint of a comment reply made to the Deployment Guy site]

The 4.0 migapp.xml does "work" when used with USMT 3.01 - and by that I mean it is schema compatible, will not cause a fatal error during 3.0 scanstate/loadstate, and will not corrupt the store in any way that I have identified. However, under the covers it may be causing issues within the migration. That XML and Office 2010 have not been tested in any fashion with USMT 3 (and never will be), so while it might appear to work fine on the surface, we have zero idea of any more insidious problems.

Now, if you are using USMT 3.01 because you have to - such as migrating from Win2000 or to Win XP - I can offer you a supported workaround: migrate to a computer that has Office 2007 installed, then upgrade the Office install to 2010 after the migration is done but before the users log on. Office 2010 will upgrade the Office 2007 settings (mostly – see that KB for details on the limits). 

Naturally, if you don’t have to use 3… use 4.

Question

We have Windows Server 2003 DFSR and have started to explore adding Win2008 R2 servers. Is mixing supported and are there any known issues?

Answer

Supported all day. You will need to install this hotfix on all Win2003 R2 DFSR servers:

KB2462352 DFSR fails from a computer that is running Windows Server 2008 R2 to a computer that is running Windows Server 2003 R2
http://support.microsoft.com/default.aspx?scid=kb;en-US;2462352

You will also need the Win2008 (version 44) or later AD schema if you want to use DFSR on RODCs or if you want to customize staging compression behavior:

What are the Schema Extension Requirements for running Windows Server 2008 DFSR?
http://blogs.technet.com/b/askds/archive/2008/07/02/what-are-the-schema-extension-requirements-for-running-windows-server-2008-dfsr.aspx

If you want to use Win2008/R2 DFSR throughout and start replacing old servers (and you really should – we’re working pretty hard on the 3rd OS since 2003 came out):

Series Wrap-up and Downloads - Replacing DFSR Member Hardware or OS
http://blogs.technet.com/b/askds/archive/2010/09/10/series-wrap-up-and-downloads-replacing-dfsr-member-hardware-or-os.aspx

Question

I have a large number of users with computers that were in a workgroup. They are now moving to a domain, and we need their user profiles converted. USMT seems to be overly complex for me – is there another way?

[Asked by multiple customers this week, oddly enough. The last gasps of Netware?]

Answer

Yes, we have two ways to do this:

MOVEUSER.EXE - XP and older, comes from the resource kit

Win32_UserProfile WMI - Vista and newer

These tools correctly change permissions and ProfileList registry settings in order to “move” (i.e. convert) a user profile between local and domain accounts.

Other Dorky Goo

  • This year is gonna be a sci-fi movie bonanza:
I didn’t want to like it… but I did.
The name is Bond. Wyatt Bond.
No shots of Bucky yet.
Close encounters of the eleventyth kind
  • Speaking of which, I was able to fight my way through the e-crowds and get tickets to Comic-Con 2011 for self and the wife. She is not exactly geeky but is an epic people watcher – she especially wants to see the day care center. Her theory being that kids will be wearing little gray suits and power ties to rebel against their parent’s uber-nerdiness. Anyone else going?
  • The latest Cracked photo contest was a zingfest - If Everything Was Made By Apple. My favorite was this subtle dig (pretty timely, having read about their latest iPhone security woes yesterday):

Have a nice weekend folks.

Ned “the future, Conan?” Pyle

Getting the Effective Audit Policy in Windows 7 and 2008 R2


Ned here again folks. We introduced granular auditing in Windows Vista and a few years later we released Advanced Audit Policy Configuration. Legacy Windows audit policy didn’t go away, of course. To make things interesting, all of this can be configured through domain policy, local policy, multiple-local policy, per-user, or using command-line tools. Like most security policy that has evolved through 20 years of Windows, it’s a bit of a Frankenstein’s monster. Making sense of what settings are actually in place in Win7 and 2008 R2 can be a real pain in the neck. Today we’ll see if I can make it easier.

Fire good!

A Scenario

You commonly configure audit settings using the following:

  • Domain based group policy (via GPMC.MSC)
  • Local policy (via GPEDIT.MSC)
  • Directly (only advanced audit policy, via auditpol.exe)

But depending on how you set the policy, your reporting tools may be misleading you around effective settings. For instance, I have specified the following policies using the following techniques.

1. I have a legacy audit policy applying from domain policy that configures Object Access auditing:

image

2. I have advanced audit configuration applying from domain policy that sets AD changes, account lockouts, and logons:

image

3. I have advanced audit configuration applying from local policy for process startup and termination:

image

4. I have granular audit settings configured using auditpol.exe /set for file share access:

image

Pro tip: this is not awesome auditing technique, on a number of levels. :) Just for demo purposes, mmmkay?

Initial Results

Now I generate a resultant set of policy report. I am not using RSOP.MSC as it’s deprecated and often wrong and generally evil. I run GPRESULT /H foo.htm instead:

image

image

Looks pretty good so far. I can’t see my policy that I set through auditpol.exe though; that kinda sucks but whatevah.

So now I start generating some audit events for the areas I am tracking from my four audit points. Immediately I see some weirdness:

  • All the advanced audit configuration coming from “Local Group Policy” and “Advanced Audit DC Policy” is working great.

image

  • My event log should be flooded with Object Access events but there are zero. 
  • Accessing file shares doesn’t generate any audit events.

The lack of Object Access auditing is expected: as soon as you start applying Advanced Audit Configuration Policy, legacy policies will be completely ignored. The only way to get a Win7/R2 computer to start using legacy policy is to set the security policy “Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings” to DISABLED. That disables the use of the newer policy type. Then you must clear the existing advanced policy from the machines (auditpol /clear, having a blank audit.csv file, etc.). The system isn't optimal, but the intention was never for you to go back.

Not seeing the File Share events makes sense too: after all, I created domain based and local policy to set all of this; they are just blowing away my local settings, right?

Yes and no.

First, I delete my link for the “Advanced audit DC policy” and run GPUPDATE /FORCE. Now I am only getting local policy settings for process creation and termination as expected. If I then re-run my auditpol /set /subcategory:”file share” /success:enable command and access a file share, I get an event. Yay team. Except after a while, this will stop working, because the local policy setting is going to reapply when the computer restarts or every 16 hours when security policy is reapplied arbitrarily.

Here’s where things get weird.

Unlike most security settings that directly edit registry keys as preferences, advanced audit policy stores all of its local security policy values in an audit.csv file located here:

%systemroot%\system32\grouppolicy\machine\microsoft\windows nt\audit\audit.csv

Which is then copied here:

%systemroot%\security\audit\audit.csv

But the domain-based policy settings are in an audit.csv in SYSVOL and that is never stored locally to the computer. So examining any of them is rather useless. Unfortunately for you, those audit.csv files are what RSOP data is returning, not the actual applied settings. And if you use legacy tools like SECEDIT.EXE /EXPORT it won’t even mention the advanced audit configuration at all – it was never updated to include those settings.
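For reference, audit.csv is plain CSV, so it is easy to inspect; just remember that (as noted above) it tells you what the policy file says, not what is actually in effect. The column names here come from a sample file and may vary:

```python
import csv
import io

# Parse a sample audit.csv. The header and GUIDs below are from a sample
# file and are illustrative; this shows the file's contents, NOT the
# effective audit settings on the machine.
sample = """Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value
,System,Audit File Share,{0cce9224-69ae-11d9-bed3-505054503030},Success,,1
,System,Audit Process Creation,{0cce922b-69ae-11d9-bed3-505054503030},Success and Failure,,3
"""

settings = {row["Subcategory"]: row["Inclusion Setting"]
            for row in csv.DictReader(io.StringIO(sample))}
print(settings["Audit Process Creation"])  # Success and Failure
```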

The Truth

All of this boils down to one lesson: you should not trust any of the Group Policy reporting tools when it comes to audit settings. There’s only one safe bet and it’s this command:

auditpol.exe /get /category:*

image

Only auditpol reads the actual super-top-secret-eyes-only-licensed-to-kill-shaken-not-stirred registry key that stores the current, effective set of auditing policy that LSASS.EXE consumes:

HKEY_Local_Machine\Security\Policy\PolAdtEv

image

If it’s not in that key, it’s not getting audited.

Before you get all excited and start plowing into this key, understand that this key is intended to be opaque and proprietary. We don’t really document it and you certainly cannot safely edit it with regedit. In fact, as an experiment I once renamed it to see if it would be automatically recreated with “default, out of box” settings. Instead, the computer refused to boot to a logon prompt! I had to load that hive using regedit in WinPE and rename it back (Last Known Good Configuration boot does not apply to the Security hive). If you want to write your own version of auditpol, you use the function AuditQuerySystemPolicy (part of the gigantor Advapi32 library of Authorization functions; have fun with that goo and don’t call me about it, it’s grody).

As a side note - if you want a safe way to remove auditing settings you can easily clear that registry key by running auditpol /clear and removing policy. That puts you to “nothing”. If you want to restore to “out of the box” experience you would use auditpol /backup on a nice clean unadulterated repro computer that was installed from media and never joined to a domain. That gives you the “before”. Then if you ever want to reset a computer to OOB you auditpol /restore it.
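The reset-to-out-of-box flow above boils down to two auditpol command lines. A hedged sketch, where "baseline.csv" is a hypothetical file you captured earlier with auditpol /backup on a clean machine:

```python
# Hedged sketch of the reset flow described above, expressed as the auditpol
# command lines involved. "baseline.csv" is a hypothetical file captured
# earlier with "auditpol /backup /file:baseline.csv" on a clean machine.
def reset_to_oob_cmds(baseline_csv="baseline.csv"):
    return [
        ["auditpol", "/clear", "/y"],                       # empty the effective policy
        ["auditpol", "/restore", f"/file:{baseline_csv}"],  # reapply the baseline
    ]

for cmd in reset_to_oob_cmds():
    print(" ".join(cmd))
```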

Until next time.

Ned “Wait. Where are you going? I was going to make Espresso!” Pyle

AGPM Operations (under the hood part 2: check out)


Sean again, here for Part 2 of the Advanced Group Policy Management (AGPM) blog series, following the lifecycle of a Group Policy Object (GPO) as it transitions through various events. In this installment, we investigate what takes place when you check-out a controlled GPO.

Before editing an AGPM controlled GPO, it is checked out. There are several potential points of failure for the check-out procedure. Network communications during the backup can drop, leaving the Archive copy only partially created. Firewall rules can block network traffic, preventing the AGPM client from contacting the server. Disk corruption can cause the Archive copy of the GPO to fail to restore. We use the same tools to collect data for these blog posts and to troubleshoot most issues affecting AGPM operations.

In Part 1 of this series (Link) we introduced AGPM and followed an uncontrolled “Production” GPO through the process of taking control of it with the AGPM component of the Group Policy Management Console (GPMC). If you are unfamiliar with AGPM, I recommend you refer to the first installment of this series before continuing.

Environment Overview:

The environment has three computers: a domain controller, a member server, and a client.

  • CONDC1 : Windows Server 2008 R2 Domain Controller
  • CONAGPM : Windows Server 2008 R2 AGPM Server
  • CONW71 : Windows 7 AGPM Client

For additional information regarding the environment and tools used below, please refer to Part 1 of this series (Link).

Getting Started:

We start on our Windows 7 computer logged in as our AGPM Administrator account (AGPMAdmin). We need GPMC open, and viewing the Change Control section, which is the AGPM console. We are using the “Dev Client Settings” GPO from the previous blog post, so let’s review the GPO details.

  • The GPO GUID : {01D5025A-5867-4A52-8694-71EC3AC8A8D9}
  • The GPO Owner : Domain Admins (CONTOSO\Domain Admins)
  • The Delegation list : AGPM Svc, Authenticated Users, Domain Admins, Enterprise Admins, ENTERPRISE DOMAIN CONTROLLERS and SYSTEM

We also log into the AGPM Server and the Domain Controller and start the data capture from each of the tools mentioned previously.

As with most actions within AGPM, checking out a GPO is a simple right-click and select operation. Right click the “Dev Client Settings” GPO to bring up the context menu and select the “Check Out…” option.

image

Notice the grayed out “Edit” option for a checked in GPO. AGPM prompts for comments from logged on accounts with the AGPM Admin or Editor role delegated. Clicking “Ok” displays a progress window that updates us as the AGPM server request is processed. When it is complete, we return to the AGPM console and see the changed status.

image

Notice how the AGPM console differentiates between a "Checked out" GPO, and one that is "Checked in". The icon has a red outline, and the “State” column updates. The “Comment” column displays the comment entered during the most recent operation on the GPO; it is useful to add relevant information to the comment whenever possible.

Let’s look at the data we’ve collected for the Check-Out operation.

The AGPM Client

Network Monitor shows the AGPM Client and AGPM server communications. TCP port 4600 is the default for the AGPM server; this is configurable during the installation or afterwards (Link).

image
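If the client cannot reach the server, a quick TCP probe helps separate firewall problems from AGPM problems. This is a generic sketch; the host name and port below are just the defaults from this lab, not anything AGPM itself provides:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (lab names): verify the AGPM client can reach the server's port.
# can_reach("CONAGPM", 4600)
```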

Process Monitor on the AGPM Client highlights the simple nature of the work done by the AGPM Client itself during the Check-Out procedure. The MMC process accesses gpmctabs.dll to generate the AGPM console, followed by access to agpm.log to write entries related to the communications between the AGPM Client and Server.

There were several entries in the GPMC log (gpmgmt.log) pertaining to the opening of the GPMC.msc snap-in and to looking up each of the accounts defined in the delegation tab for the GPOs. However, there were no entries in the log during the Check-Out operation itself.

AGPM Logging shows the exact same block of entries that we saw when taking control of a production GPO.

image

This log only shows entries related to the client establishing a connection with the AGPM server, sending it the “ExecuteOperations()” instruction, and recording that the instruction completed.

The AGPM Server

Since we focused on the traffic between the AGPM Client and Server in the section above, we now examine the traffic between the AGPM Server and the Domain Controller. The first thing we notice is a lot of SMB traffic with the AGPM Server regarding a policy GUID that is different from that of the GPO we are checking out.

image

image

A search of the network trace for the “Dev Client Settings” GPO {01D5025A-5867-4A52-8694-71EC3AC8A8D9} turns up nothing. A quick refresh of GPMC.msc shows a brand new GPO in the list.

image

There are several important bits of information in the screenshot above. First, notice the name of the GPO “[AGPM] Dev Client Settings”. The GUID is the one we see in the network trace. Notice the "Created Date/Time": it's the “Dev Client Settings” GPO check-out time. The GPO is not linked anywhere, the GPO history does not match that of the GPO it shares a name with and the Delegation list shows full control granted to the account that checked it out. From here on, we refer to this GPO as the “Offline GPO”.

image

Within the network trace, we see the request to create the policy GUID folder in SYSVOL.

image

AGPM takes the same action to create the rest of the policy folder structure and contents. Security is set on these folders as well.

The AGPM Server log (agpmserv.log) shows entries for this process: the server calls "IAgpmServer.SendMessage()" to pass the appropriate messages along to the Domain Controller and perform the actions we’ve requested via the AGPM console.

image

Process Monitor shows entries confirming that the AGPM service writes to the Agpmserv.log file, retrieves the registry path to the AGPM Archive, and accesses gpostate.xml (located within the Archive). As mentioned in the first blog post, gpostate.xml contains a historic view of GPOs known to AGPM.

image

Process Monitor reports access to gpmgmt.log as well. It’s important to note the user account in the path: the security context of the account performing the actual GP management work is the one logging all of the entries.

image

The AGPM Server accesses the Archive path and copies the GPO folder and its contents to a path beneath the Archive’s Temp folder.

image

Next, we see the creation of the “Offline” GPO path in SYSVOL. GPMC builds out the new GPO based on the information copied from the AGPM Archive.

image

The gpmgmt.log created in the AGPM service account’s profile path shows the process taken to build the new GPO folder from the AGPM Archive copy. The log addresses each aspect of the GPO, from assigning security to configuring the GP settings. The process looks like a GPO Restore.

image

image

The Domain Controller

Looking at the network capture on the domain controller shows very little from the client (CONW71). SMB protocol negotiation, session setup and connection from the client to the DC’s IPC$ share are shown. We reviewed the network traffic from the AGPM server earlier in this post.

The DC security log shows several "Security-Auditing” 5136 events generated by the creation of the Offline GPO.

image
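To review those events on the DC without scrolling through Event Viewer, the built-in wevtutil tool can filter the Security log by event ID (run from an elevated prompt; the /c:5 count is arbitrary):

```cmd
REM Show the five most recent directory-service-change events (5136), newest first:
wevtutil qe Security /q:"*[System[(EventID=5136)]]" /c:5 /rd:true /f:text
```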

Editing the GPO

We now see what has taken place during a controlled GPO Check-out. Let’s modify the GPO slightly, adding a setting or two. On our AGPM Client (CONW71), right clicking on the checked-out GPO brings up the context menu, and the “Edit” entry is now clickable.

image

Notice the two new entries in the context menu, “Check In…” and “Undo Check Out…” We’ll come back to those in a bit. Only Editors and Administrators can edit a GPO controlled by AGPM, so if the “Edit” option is still grayed out on a checked-out GPO, we need to make sure we have the appropriate role within AGPM. There is no prompt for a comment within AGPM when editing a GPO. Windows Server 2008 and later also allow us to comment at the GPO level as well as at the setting level (within Administrative Templates), if we need to. With the Group Policy Editor started, we can make changes to the checked-out GPO.

If we decide to check the GPO back in without saving any changes, we can select “Undo Check Out…”. This simply deletes the Offline GPO created during the Check-Out procedure, and removes the reference to it in gpostate.xml.

In Closing

In this second installment, we covered a procedure repeated every time there’s a need to modify a GPO within AGPM. During the Check-Out of a GPO, the following steps are performed:

  • The Archive copy of the GPO is copied to a temp folder.
  • From the duplicated Archive data, a new “Offline” GPO is created with the [AGPM] prefix by performing a GPO Restore.
  • The GPOs entry within the Archive’s gpostate.xml file is updated to reflect its checked-out state, and references the newly created “Offline” GPO.
  • Once the Check-Out procedure is complete, the temp copy of the Archive data is deleted.
  • The “Offline” GPO is not linked anywhere, and edits to it are made in real-time.
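The steps above can be sketched as a tiny script. This is purely illustrative: the paths and function are invented, and the real AGPM service performs the restore through the GPMC APIs rather than a plain file copy.

```python
import shutil
from pathlib import Path

def check_out(archive_dir: Path, temp_dir: Path, offline_dir: Path) -> None:
    """Illustrative sketch of the AGPM Check-Out sequence (not the real API)."""
    staged = temp_dir / archive_dir.name
    shutil.copytree(archive_dir, staged)   # 1. copy the Archive data to a temp folder
    shutil.copytree(staged, offline_dir)   # 2. "restore" it as the [AGPM] Offline GPO
    # 3. gpostate.xml would be updated here: state = CHECKED_OUT, plus the Offline GUID
    shutil.rmtree(staged)                  # 4. delete the temp copy of the Archive data
```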

From this information, we can make an important observation: any changes made to an AGPM-controlled GPO outside of the AGPM console (i.e. the rogue Domain Admin that doesn’t bother with the AGPM console and edits the GPO directly through GPMC.msc) are overwritten the next time the GPO is deployed from the AGPM console. Since the Check-Out procedure builds the editable “Offline” GPO from the AGPM Archive data, the Admin’s changes are not included automatically. We do have the option of using the “Import from…” feature to pull the settings from the production GPO again prior to the Check-Out, which updates the Archive data with any changes made outside of AGPM.

Come back for Part 3 of this series, where we will check our GPO back in.

Complete series

http://blogs.technet.com/b/askds/archive/2011/01/31/agpm-production-gpos-under-the-hood.aspx
http://blogs.technet.com/b/askds/archive/2011/04/04/agpm-operations-under-the-hood-part-2-check-out.aspx
http://blogs.technet.com/b/askds/archive/2011/04/11/agpm-operations-under-the-hood-part-3-check-in.aspx
http://blogs.technet.com/b/askds/archive/2011/04/26/agpm-operations-under-the-hood-part-4-import-and-export.aspx

Sean "To the 5 Boroughs " Wright

Restrictions for Unauthenticated RPC Clients: The group policy that punches your domain in the face


Hi folks, Ned here again. Around six years ago we released Service Pack 1 for Windows Server 2003. Like Windows XP SP2, it was a security-focused update. It was the first major server update since the Trustworthy Computing initiative began, so there were things like a bootstrapping firewall, Data Execution Prevention, and the Security Configuration Wizard.

Amongst all this, the RPC developers added these new configurable group policy settings:

Computer Configuration \ <policies> \ Administrative Templates \ System \ Remote Procedure Call

Restrictions for unauthenticated RPC clients
RPC endpoint mapper client authentication

Which map to the DWORD registry settings:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Rpc
EnableAuthEpResolution
RestrictRemoteClients

These two settings add an additional authentication "callback capability" to RPC connections. Ordinarily, no authentication is required to make the initial connection to the endpoint mapper (EPM). The EPM is the network service that tells a client what TCP/UDP ports to use in further communications. In Windows, those further communications to the actual application are what typically get authenticated and encrypted. For example, DFSR is an RPC application that uses RPC_C_AUTHN_LEVEL_PKT_PRIVACY with Kerberos required, with Mutual Auth required, and with Impersonation blocked. The lack of authentication on the EPM connection is not critical, as no application data is transmitted there: the EPM is like a phone book or, perhaps more appropriately, a switchboard with an operator.

That quest for Trustworthy Computing added these extra security policies. In doing so, it introduced a very dangerous scenario for domain-based computing: one of the possible policy settings requires that all applications initiating the RPC conversation either send along this authentication data or be able to understand a callback request to authenticate.

The problem is that most applications have no idea how to satisfy the setting's requirements.

The Argument

One of the options for Restrictions for unauthenticated RPC clients is "Authenticated without Exceptions".

image

When enabled, RPC applications are required to authenticate to the RPC service on the destination computer. If your application doesn't know how to do this, it is no longer allowed to connect at all.

Which brings us to…

The Brawl

Having configured this policy in your domain on your DCs, members, and clients, you will now see the following issues no matter your credentials or admin rights:

Group policy fails to apply with errors:

GPUPDATE /FORCE returns:

The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one or more of the following:
a) Name Resolution failure on the current domain controller.
b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).
Computer Policy update has completed successfully.
To diagnose the failure, review the event log or invoke gpmc.msc to access information about Group Policy results.

The System Event log returns errors 1053 and 1055 for group policy:

The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
a) Name Resolution failure on the current domain controller.
b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

The Group Policy Operational event log will show error 7320:

Error: retrieved account information. Error code 0x5.
Error: Failed to register for connectivity notification. Error code 0x32.

Active Directory Replication fails with errors:

Repadmin.exe returns:

DsBindWithCred to RPC <servername> failed with status 5 (0x5)

DSSites.msc returns:

image

Directory Service event log returns:

Warning 1655:
   
Active Directory Domain Services attempted to communicate with the following global catalog and the attempts were unsuccessful.
Global catalog:
\\somedc.cohowineyard.com
The operation in progress might be unable to continue. Active Directory Domain Services will use the domain controller locator to try to find an available global catalog server.
Additional Data
Error value:
5 Access is denied.

Error 1126:

Active Directory Domain Services was unable to establish a connection with the global catalog.
 
Additional Data
Error value:
1355 The specified domain either does not exist or could not be contacted.
Internal ID:
3200e7b

Warning 2092:

This server is the owner of the following FSMO role, but does not consider it valid. For the partition which contains the FSMO, this server has not replicated successfully with any of its partners since this server has been restarted. Replication errors are preventing validation of this role. Operations which require contacting a FSMO operation master will fail until this condition is corrected.

Domain join fails with error:

Changing the primary domain DNS name of this computer to "" failed.
The name will remain "<something>".
The error was:
Access is denied

image

After failed join above, rebooting computer and attempting a domain logon fails with error:

The security database on the server does not have a computer account for this workstation trust relationship.

image

Remotely connecting to WMI returns error:

Win32: Access is denied.

image

Remotely connecting to Routing and Remote Access returns error:

You do not have sufficient permissions to complete the operation

image

Remotely connecting to Disk Management returns error:

You do not have access rights to logical disk manager

image

Remotely connecting to Component Services (DCOM) returns error:

Either the machine does not exist or you don't have permission to access this machine

image

Running DFSR Health Reports returns errors:

Domain Controller is unreachable
Cannot access the local WMI repository
Cannot connect to reporting DCOM server

image

DFSR does not replicate nor start initial sync, with errors:

DFSR Event log error 1202:

The DFS Replication service failed to contact domain controller to access configuration information. Replication is stopped. The service will try again during the next configuration polling cycle, which will occur in 60 minutes. This event can be caused by TCP/IP connectivity, firewall, Active Directory Domain Services, or DNS issues.

error: 160 (one or more arguments are not correct)

DFSRMIG does not allow configuration of SYSVOL migration and returns error:

"Unable to connect to the Primary DC's AD. Please make sure that the PDC is reachable and retry the command later"

FRS does not replicate and returns event log warning 13562:

Could not bind to a Domain Controller. Will try again at next polling cycle.

Remotely connecting to Windows Firewall with Advanced Security returns error:

You do not have the correct permissions to open the Windows Firewall with Advanced Security Console.
Error code: 0x5

image

Remotely connecting to Share and Storage Management returns error:

Connection to the Virtual Disk Service failed. A VDS (Virtual Disk Service) error occurred while performing the requested operation.

image

Remotely connecting to Storage Explorer returns error:

Access is denied.

image

Remotely connecting to Windows Server Backup returns error:

The Windows Server Backup engine is not accessible on the computer that you want to manage backups on. Make sure you are a member of the Administrators or Backup Operators group on that computer.

image

Remotely connecting to DHCP Management returns error:

Access is Denied

RPC Endpoint connections seen through network capture shows errors:

Note how the client (10.90.0.94) attempts to bind to the EPM on a DC (10.90.0.101) and gets rejected with status 0x5 (Access is Denied).

image

Depending on the calling application - in this case, the Group Policy service running on a Win7 client that is trying to refresh policy - it may continue to try binding many times before giving up. Again, the DC responds with the unhelpful error "REASON_NOT_SPECIFIED" and keeps rejecting the GP service.

image

For comparison, a normal working EPM bind of the GP service looks like this:

image

Restitution

Anyone notice the Catch-22 above? If you deployed this setting to your DCs using domain-based group policy, you have no way to undo it! This is another example of “always test security changes before deploying to production”. Many virtualization products are free, like Hyper-V and Virtual PC; even a single virtualized DC environment would have shown gross problems after you tried to use this policy.

To fix your environment:

1. You must delete or unlink the whole policy that includes this RPC setting:

image

2. Delete or rename this specific policy's GUID folder in each DC's SYSVOL folder (remember, file replication is not working, so this must be done on every individual server).

image

image

3. Manually visit all DCs and delete the RestrictRemoteClients registry setting.

image
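Step 3 can be scripted from an elevated local prompt on each DC (remote registry access may itself be broken at this point). The value name and path are exactly as shown in the policy discussion above:

```cmd
REM Confirm the value is present, then remove it:
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Rpc" /v RestrictRemoteClients
reg delete "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Rpc" /v RestrictRemoteClients /f
```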

4. Reboot all DCs to get your domain back in operation. Not all at once, of course!

These are only the affected Windows in-box applications and components that I have identified. The full list probably includes 99% of all third party RPC applications ever written.

Parole

Some security audit consulting company may ask you to turn this policy on to be compliant with their standards. Make sure you show them this article and make them explain why. You can also point out that our Security Compliance Manager tool does not recommend enabling "Authenticated without Exceptions" even in Specialized Security Limited Functionality networks (and SSLF is far too restrictive for most businesses). This setting is really only useful in an unmanaged, standalone, non-domain joined member computer environment such as a DMZ network where you want to close an RPC connection vector. Probably just web servers with local policy.

You should always get an in-depth explanation of any third party security audit's findings and recommendations; many a CritSit case here started with a customer implicitly trusting an auditor's recommendations. That auditor is not going to be there to troubleshoot for you when everything goes to crap. Disconnecting all your DCs from the network makes them more secure. So does disabling all your user accounts. Neither is practical.

If you absolutely must turn on Restrictions for unauthenticated RPC clients, make sure it is set only to "Authenticated", and guarantee that RPC endpoint mapper client authentication is also enabled. Then test like your job depends on it - because it does. Your applications may still fail with this setting in its less restrictive mode. Not all group policies are intended for domains.
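For reference, "Authenticated" corresponds to RestrictRemoteClients = 1 ("Authenticated without Exceptions" is 2), and the companion setting is EnableAuthEpResolution = 1. Prefer setting these through the Group Policy editor; the direct registry equivalents are shown only so you can audit what a policy has actually written:

```cmd
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Rpc" /v RestrictRemoteClients /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Rpc" /v EnableAuthEpResolution /t REG_DWORD /d 1 /f
```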

By the way, if you are a software development company you should be giving the Security Development Lifecycle a frank appraisal. It is a completely free force for good.

Until next time.

Ned "2005? I am feeling old" Pyle

SCM 2 CTP released (whoops, a month ago)


Hey all, Ned here again. Jeff Sigman let me know that the new pre-beta version of Security Compliance Manager became available last month. It adds the number one feature request you’ve all been demanding: GPO Import.

Remember, this is a CTP release, so keep it in test and out of production for now. If you are scratching your head at what SCM does and why you should be using it, check out this and this. Really!

- Ned “SCMbag” Pyle

AGPM Operations (under the hood part 3: check in)


Sean again, here for Part 3 of the Advanced Group Policy Management (AGPM) blog series, following the lifecycle of a Group Policy Object (GPO) as it transitions through various AGPM-related events. In this installment, we investigate what takes place when you check-in a controlled GPO.

Before editing an AGPM controlled GPO, it is checked-out. Similarly, after editing the GPO, it is checked in before the changes are deployed to production. Many of the same failure points exist for both the check-out and check-in processes. Network communications during the restore can drop, leaving the production GPO only partially updated. Disk corruption can cause the Archive copy of the GPO to fail to restore correctly. The AGPM service account could fail to authenticate when attempting to perform the requested operation. We use the same tools to collect data for these blog posts and to troubleshoot most issues affecting AGPM operations.

In Part 1 of this series (Link), we introduced AGPM and followed an uncontrolled “Production” GPO through the process of taking control of it with the AGPM component of the Group Policy Management Console (GPMC). If unfamiliar with AGPM, I would recommend you refer to the first installment of this series before continuing.

Part 2 of the series (Link) continued the analysis of this GPO as it was Checked-Out using AGPM. We revealed the link between AGPM controlled GPOs and the AGPM Archive as well as how AGPM provides for offline editing of GPOs. If you haven’t read Part 2, I recommend doing that now.

Environment Overview:

The environment has three computers: a domain controller, a member server, and a client.

  • CONDC1 : Windows Server 2008 R2 Domain Controller
  • CONAGPM : Windows Server 2008 R2 AGPM Server
  • CONW71 : Windows 7 AGPM Client

For additional information regarding the environment and tools mentioned below, please refer to Part 1 of this series (Link).

Getting Started:

We start out on our Windows 7 computer, logged in as our AGPM Administrator account (AGPMAdmin). We need GPMC open, and viewing the Change Control section, which is the AGPM console. We are using the “Dev Client Settings” GPO from the previous blog post so let’s review the GPO details:

  • The GPO GUID : {01D5025A-5867-4A52-8694-71EC3AC8A8D9}
  • The GPO Owner : Domain Admins (CONTOSO\Domain Admins)
  • The Delegation list : AGPM Svc, Authenticated Users, Domain Admins, Enterprise Admins, ENTERPRISE DOMAIN CONTROLLERS and SYSTEM

We also want to log into the AGPM Server and the Domain Controller and start the data capture from each of the tools mentioned in the previous section.

Picking up where we left off from the previous blog post, we now have our GPO checked out and modified with some new settings. When we’ve made the desired changes to the Group Policy Object, we close the Editor and return to the AGPM Console. In order to check it back in, we right-click the GPO in the AGPM console and select the “Check In…” option. We have the option to enter a comment for the check-in operation. The red-outlined GPO icon returns to normal once checked back in.

The AGPM Client

As we might expect, Network Monitor shows traffic is mainly between the AGPM Client and AGPM Server. It is TCP traffic between the client and port 4600 on the AGPM Server.

image

Process Monitor shows MMC writing to the AGPM.log file, but otherwise has few entries that relate to the Check-In process. As before, this shows the AGPM client does not perform any of the operations on the GPO itself. It simply relays the instructions to the AGPM Server.

There were no entries generated in the GPMC log during the Check-In operation. Considering the only entries in the log pertained to the startup of GPMC, these actions within the AGPM console obviously do not flag any GPMC logging events.

The AGPM.log shows nearly identical information in the Check-In operation as it did in the Check-Out. The AGPM Client contacts the AGPM Server and notifies it of incoming instructions. When the AGPM Server is ready, the AGPM Client sends the instructions and awaits return information. Once the AGPM Server returns the resulting data the function exits successfully.

image

AGPM Server

We covered the AGPM client network traffic in the previous section. Once the AGPM client gives instructions to the AGPM server, that server opens an LDAP connection to the Domain Controller. The AGPM server accesses the checked out GPO information within Active Directory and SYSVOL. While we can’t see exactly what’s being read from the directory, we do see the SMB traffic as the AGPM server reads the information from SYSVOL.

image

Process Monitor shows quite a lot of activity from the Agpm.exe process. It starts out by looking up the AGPM Archive path from the registry, and accessing gpostate.xml to determine the status of the GPO.

image

Within the gpostate.xml, each GPO has its status and check-in history listed.

image

The "agpm:type" entry indicates the “CHECKED_OUT” status, the time of the operation, the comment entered during the check-out operation and the SID of the user performing the operation. This is also where the reference to the "agpm:offlineId" is found, which is the Offline GPO's GUID created during the Check-Out process.
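A sketch of reading that state programmatically; the XML below is a simplified, hypothetical fragment modeled only on the attributes visible in the screenshot (the real gpostate.xml schema is undocumented and richer than this):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified gpostate.xml fragment: "urn:agpm" is an invented
# namespace URI, and the placeholder stands in for the real Offline GPO GUID.
SAMPLE = """<gpos xmlns:agpm="urn:agpm">
  <gpo id="{01D5025A-5867-4A52-8694-71EC3AC8A8D9}"
       agpm:type="CHECKED_OUT"
       agpm:offlineId="{OFFLINE-GPO-GUID}"/>
</gpos>"""

root = ET.fromstring(SAMPLE)
gpo = root.find("gpo")
state = gpo.get("{urn:agpm}type")            # namespaced attributes use {uri}name keys
offline_id = gpo.get("{urn:agpm}offlineId")  # GUID of the [AGPM] Offline GPO
print(state, offline_id)
```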

The AGPM process then looks to the manifest.xml file, which contains entries for every time a GPO was backed up to the AGPM Archive. From Part 1 of this blog series, we learned taking control of a production GPO initiated a backup of that production GPO into the AGPM Archive. At this point, AGPM.exe uses the manifest.xml to check the current backup status.

image

Next, we see the AGPM server read the SYSVOL folder for the Offline GPO, and start verifying that the folder structure within the AGPM Archive matches.

image

image

AGPM then copies files from the GPO’s SYSVOL folders to their corresponding location in the AGPM Archive path. Here we see the copy of the Computer Configuration registry settings file.

image

Once copied, AGPM updates the manifest.xml and bkupInfo.xml files within the GPO's Archive folder.

image

While the bkupInfo.xml file contains the backup information for only the GPO it was created alongside, manifest.xml holds a copy of that same information for every GPO in the Archive. The following is the bkupInfo.xml for the GPO check-in.

image

AGPM updates Backup.xml with the modified GPO’s security settings, as well as any new GP Extensions required. GPreport.xml contains all of the settings within the checked out GPO.

Now that the checked out and modified GPO is backed up to the Archive, the gpostate.xml file is updated to reflect the new “CHECKED_IN” status of the GPO. Notice the AGPM Archive path has changed from {85B77C99-1C4B-473C-A4E5-0AF10DD552F9} to {CD595C25-5EC6-4653-8E24-0E640588C654}.

image

It’s important to note what we do not see here: AGPM does not write the modified GPO to SYSVOL under the production GPO's GUID {01D5025A-5867-4A52-8694-71EC3AC8A8D9}. This is evidence that checking in a GPO modified in AGPM does not commit the changes to production. In order to do that, we must ‘Deploy’ the GPO within AGPM.

The gpmgmt.log entries from the Check-In operation mirror much of what we saw in Process Monitor. AGPM backs up the Offline GPO to a newly created Archive path, and then updates gpostate.xml, bkupInfo.xml and Manifest.xml to associate the production GPO with the new path.

image

The AGPMserv.log has a very limited view of the process, simply recording that the GPO Check-In function “CheckInGPO()” was called.

image

The Domain Controller

We’ve already covered the network traffic between the AGPM Client and Server and the Domain controller, so let’s move on to the Process Monitor output. Similar to the activity during the Check-Out operation, lsass.exe is accessing the Active Directory database, pulling the GPO information from the corresponding GP Container.

The security event log should have events correlating to the removal of the Offline GPO. Look for Event ID: 5136.

In Closing

In this third installment, I covered part of a procedure repeated every time there’s a need to modify a GPO within AGPM. To rehash from Part 2 of this blog series, during the Check-Out of a GPO, the following steps are performed:

  • The Archive copy of the GPO is copied to a temp folder.
  • From the duplicated Archive data, a new “Offline” GPO is created with the [AGPM] prefix by performing a GPO Restore.
  • The GPO’s entry within the Archive’s gpostate.xml file is updated to reflect its checked-out state, and references the newly created “Offline” GPO.
  • Once the Check-Out procedure is complete, the temp copy of the Archive data is deleted.
  • The “Offline” GPO is not linked anywhere, and edits to it are made in real-time.

During the Check-In process, we have observed the following:

  • A new Archive path is created with a new GUID.
  • A GPO Backup of the “Offline” GPO is performed to the newly created Archive path.
  • The “Offline” GPO is deleted.
  • Gpostate.xml, bkupInfo.xml and Manifest.xml are updated to reflect the new association between the originally Checked-Out GPO and the new Archive path.
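The check-in sequence above can be sketched in the same illustrative way; everything here is a stand-in (AGPM performs the backup through the GPMC APIs, and the GUID generation is just to show that the Archive path changes):

```python
import shutil
import uuid
from pathlib import Path

def check_in(offline_dir: Path, archive_root: Path) -> Path:
    """Illustrative sketch of the AGPM Check-In sequence (not the real API)."""
    new_archive = archive_root / ("{%s}" % str(uuid.uuid4()).upper())
    shutil.copytree(offline_dir, new_archive)  # back up the Offline GPO to a new Archive path
    shutil.rmtree(offline_dir)                 # delete the [AGPM] Offline GPO
    # gpostate.xml, bkupInfo.xml and manifest.xml would be updated here to point
    # the controlled GPO at new_archive and mark it CHECKED_IN
    return new_archive
```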

From this information, we can make a few important connections: any changes made to an AGPM-controlled GPO outside of the AGPM console (i.e. the rogue Domain Admin that doesn’t bother with the AGPM console, and edits the GPO directly through GPMC.msc) are overwritten the next time the GPO is deployed from the AGPM console. Since the Check-Out procedure builds the editable “Offline” GPO from the AGPM Archive data, the Admin’s changes are not included automatically. We do have the option of using the “Import from…” feature to pull the settings from the production GPO again prior to the Check-Out, which updates the Archive data with any changes made outside of AGPM. As mentioned earlier, the Check-In operation does NOT commit the changes to the production GPO. We must follow the Check-In operation with a “Deploy” in order to have our changes released to production.

Complete series

http://blogs.technet.com/b/askds/archive/2011/01/31/agpm-production-gpos-under-the-hood.aspx
http://blogs.technet.com/b/askds/archive/2011/04/04/agpm-operations-under-the-hood-part-2-check-out.aspx
http://blogs.technet.com/b/askds/archive/2011/04/11/agpm-operations-under-the-hood-part-3-check-in.aspx
http://blogs.technet.com/b/askds/archive/2011/04/26/agpm-operations-under-the-hood-part-4-import-and-export.aspx

Sean "my head will not shift when stored in the overhead compartment" Wright

AGPM Operations (under the hood part 4: import and export)


Sean again, here for Part 4 of the Advanced Group Policy Management (AGPM) blog series, following the lifecycle of a Group Policy Object (GPO) as it transitions through various events. In this installment, we investigate what takes place when you use the Import and Export features within AGPM.

With the use of Group Policy so common in today’s Active Directory environments, there may be a need to create new GPOs with a baseline of common settings already in place. Taking GPOs from one domain and creating an identical GPO in another domain or forest may be required. Having a backup copy of a GPO to keep elsewhere for disaster recovery is always handy. Using the Import and Export features of AGPM, an admin can accomplish all of these.

In Part 1 of this series (Link), we introduced AGPM, and followed an uncontrolled, or “Production” GPO through the process of taking control of it with the AGPM component of the Group Policy Management Console (GPMC). If you are unfamiliar with AGPM, I would recommend you refer to the first installment of this series before continuing on.

Part 2 of the series (Link) continued the analysis of this GPO as it was Checked-Out using AGPM. We revealed the link between AGPM controlled GPOs and the AGPM Archive as well as how AGPM provides for offline editing of GPOs.

With Part 3 of the series (Link), we picked things back up with our checked out GPO and checked it back in. Our analysis of the process pointed out how AGPM keeps previous Archive folders, and how it maintains the historic link between the managed GPO and each of its previous iterations.

Environment Overview:

The environment has three computers: a domain controller, a member server, and a client.

  • CONDC1 : Windows Server 2008 R2 Domain Controller
  • CONAGPM : Windows Server 2008 R2 AGPM Server
  • CONW71 : Windows 7 AGPM Client

For additional information regarding the environment and tools used below, please refer to Part 1 of this series (Link).

Before We Begin:

Since the Export function is very straightforward, it doesn’t warrant an entire blog post.  As such, let’s go over it quickly here, to summarize what takes place during an Export before we move on to looking at the Import function.

The AGPM Client and Server are the only two systems involved in the Export operation.  The client sends the instructions to the AGPM Server, which calls the “ExportGpoToFile()” function as shown below.

image

image

The information from the Archive folder is copied into temp folders within the AGPM Archive before being written into the .cab file.  The contents of the .cab file depend on the settings within the GPO.  For example, if the GPO has any scripts configured, the script file itself will be included along with a scripts.ini file containing options for the script execution.  Registry settings will be included in a registry.pol file.  Drive mapping preference settings will cause a drives.xml file to be included, and so on.
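The settings-to-files relationships above can be expressed as a simple checklist. The file names come from the observations in this post; the mapping structure itself is just an illustrative sketch, not an exhaustive list of AGPM internals:

```python
# Illustrative map of GPO setting types to files we observed inside an
# exported AGPM .cab. This is a checklist built from the examples above,
# not a complete catalog of everything AGPM can export.
CAB_CONTENTS = {
    "scripts": ["scripts.ini"],          # plus the script files themselves
    "registry_policy": ["registry.pol"],
    "drive_map_preferences": ["drives.xml"],
}

def expected_files(configured_settings):
    """Return the files we would expect inside the exported .cab."""
    files = []
    for setting in configured_settings:
        files.extend(CAB_CONTENTS.get(setting, []))
    return files

print(expected_files(["registry_policy", "drive_map_preferences"]))
# ['registry.pol', 'drives.xml']
```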

Once the .cab file is created within the AGPM Archive temp folder, it is copied over to the desired destination folder on the AGPM Client.

image

Now that we have that out of the way, let’s move on to the focus of this blog post. The Import!

Getting Started:

We start on our Windows 7 computer logged in as our AGPM Administrator account (AGPMAdmin). We will need GPMC open, and viewing the Change Control section, which is the AGPM console. We’ll be using the “Dev Client Settings” GPO from the previous blog post, so let’s review the GPO details.

  • The GPO GUID : {01D5025A-5867-4A52-8694-71EC3AC8A8D9}
  • The GPO Owner : Domain Admins (CONTOSO\Domain Admins)
  • The Delegation list : AGPM Svc, Authenticated Users, Domain Admins, Enterprise Admins, ENTERPRISE DOMAIN CONTROLLERS and SYSTEM
  • Current ArchiveID : {1946BF4D-6AA9-47C7-9D09-C8788F140F7E}

If you’re familiar with the previous entries in this blog series, you may notice a new entry above. The ArchiveID value is simply the current GUID assigned to the backup of this GPO in the AGPM Archive. It’s included here because we will observe the activity within the AGPM archive caused by the Import and Export functions.

Before we begin, we log into the AGPM Server and the Domain Controller and start the usual data capture tools discussed previously. Right-clicking the Checked-In GPO displays the context-sensitive menu, and we see both the “Import from…” and “Export to…” items on the list. Mousing over the “Import from…” selection, we get a slide-out menu that has “Production” and “File”. Notice the grayed-out “File” option below; you cannot import from a file while the GPO is checked in.

image

For our first test, we select the option to import from production. We are prompted to enter a comment when logged in as an AGPM Administrator or AGPM Editor. It’s always a good idea to provide some context to the action. Since AGPM keeps a history of GPOs it manages, use the comments to keep track of ‘why’ you performed certain actions.

The GPO Import progress dialog tells us when the operation is complete. Clicking the “Close” button brings us back to the AGPM Console. Let’s look at the data we’ve captured to see what really happened.

The AGPM Client

Similar to the Network Monitor analysis of our previous entries in this blog series, we see a small amount of traffic to TCP port 4600 on the AGPM Server.

image

The AGPM log shows the same block of information we’ve seen in every other data capture in this blog series. The AGPM client begins the AgpmClient.ProcessMessages() function, connects to and notifies the server of incoming operation requests, sends the commands over and receives the server response.

image

The AGPM Server

Network traffic from the AGPM Client was covered above, so we’ll focus on what’s going on between the AGPM Server and the Domain Controller. SMB2 traffic shows the AGPM Server reading the GPO information from SYSVOL.

image

image

There is a significant amount of traffic between the AGPM Server and the Domain Controller on TCP port 389 (LDAP), which would be the AGPM Server reading the GPO information from Active Directory.

We retrieve the AGPM Archive registry path and access gpostate.xml for the GPO’s information.

image

I mentioned the ArchiveID value for this GPO earlier. The following screenshot is from gpostate.xml BEFORE the Import.

image

Next, we read the manifest.xml file. The following screenshot is from BEFORE the Import.

image

Once AGPM has verified the current information on the GPO, it reads the GPO information from the Domain Controller and writes it into the AGPM Archive.

image

image

Notice how the GUID in the Archive path is different? AGPM creates a new ArchiveID/GUID to store the GPO data. The Backup.xml, bkupInfo.xml and overall Manifest.xml files are updated with the new Archive ID information.

Finally, we update the gpostate.xml with the new information, as shown here. Notice the original Archive path GUID moves to the second <History> entry now.

image

image
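To make the rotation behavior concrete, here is a minimal sketch of how the history could be read out of gpostate.xml. The XML fragment below is hypothetical and simplified (the real schema differs), but the behavior matches what we observed: the new ArchiveID appears first, and the previous one moves down the History list.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified gpostate.xml fragment. The real AGPM schema is
# different, but the observed rotation is the same: the newest archive
# GUID is first, earlier archives move into subsequent <History> entries.
STATE = """
<GPO id="{01D5025A-5867-4A52-8694-71EC3AC8A8D9}">
  <History archiveId="{NEW-GUID-AFTER-IMPORT}"/>
  <History archiveId="{1946BF4D-6AA9-47C7-9D09-C8788F140F7E}"/>
</GPO>
"""

root = ET.fromstring(STATE)
history = [h.get("archiveId") for h in root.findall("History")]
current, previous = history[0], history[1:]
print("current archive:", current)
print("previous archives:", previous)
```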

The GPMC log shows some elements familiar to those of you who have read the previous entries in this blog series. GPMC performs a GPO Backup routine, pulling data from the production GPO and storing it in the newly created AGPM Archive path.

image

The AGPMserv.log shows the typical block of messages related to receiving, processing and responding to the AGPM Client.

image

The Domain Controller

We’ve already covered network traffic between the three systems, and Process Monitor shows events we would expect on any Domain Controller.

The security event log shows a number of Object Access entries, where the AGPM service account is used to read properties from AD objects. This is AGPM reading the GPO information out of Active Directory.

image

In Closing

This fourth entry in the AGPM Operations series covers the import of group policy settings from a Production GPO. Specifically, we covered importing the production GPO settings into an existing, AGPM controlled GPO.

  • The AGPM Archive folder for a controlled GPO is linked to its Production GPO in the gpostate.xml file.
  • The Import from Production process utilizes a GPO Backup, storing the settings in a newly created Archive folder.
  • The previous Archive folder is maintained for rollback/historic purposes
  • The gpostate.xml file references both the current Archive folder GUID and those of previous versions.

Another method exists for importing settings into AGPM-controlled GPOs. The Export of a GPO within the AGPM console creates a .cab file containing all files and settings associated with that GPO. The Import from File feature uses these .cab files to import settings into new or existing GPOs within AGPM, whether in the same domain or in foreign domains. Whereas the Import from Production feature only works with existing AGPM-controlled GPOs, when creating a new GPO within the AGPM console you can opt to import the settings directly from an exported GPO’s .cab file. From our observations here, we can deduce that the new GPO is created with a new AGPM Archive folder and an entirely new entry in gpostate.xml. Unlike the Import from Production we investigated above, the information used to create the new GPO is sourced directly from the .cab file instead of querying the Domain Controller.

Complete series

http://blogs.technet.com/b/askds/archive/2011/01/31/agpm-production-gpos-under-the-hood.aspx
http://blogs.technet.com/b/askds/archive/2011/04/04/agpm-operations-under-the-hood-part-2-check-out.aspx
http://blogs.technet.com/b/askds/archive/2011/04/11/agpm-operations-under-the-hood-part-3-check-in.aspx
http://blogs.technet.com/b/askds/archive/2011/04/26/agpm-operations-under-the-hood-part-4-import-and-export.aspx

Sean "two wrongs don't make a" Wright


Target Group Policy Preferences by Container, not by Group


Hello again AskDS readers, Mike here again. This post reflects on Group Policy Preference targeting items, specifically targeting by security groups. Targeting preference items by security groups is a bad idea. There is a better way that most environments can accomplish the same result, at a fraction of the cost.

Group Membership dependent

The world of Windows has been dependent on group membership for a long time. This dependency is driven by the way Windows authorizes access to resources. The computer or user must be a member of the group in order to access the printer or file server. Groups are and have been the bane of our existence. Nevertheless, we should not let group membership dominate all aspects of our design. One example where we can move away from using security groups is with Group Policy Preference (GPP) targeting.

Targeting by Security Group

GPP Targeting items control the scope of application for GPP items. Think of targeting items as Group Policy filtering on steroids, but they only apply to GPP items included in a Group Policy object. They introduce an additional layer of administration that provides more control over "how" GPP items apply to a specific user or computer.

image
Figure 1 - List of Group Policy Preference Targeting items

The most common scenario we see using the Security Group targeting item is with the Drive Map preference item. IT Professionals have been creating network drive mappings based on security groups since Moby Dick was a sardine-- it's what we do. The act is intuitive because we typically apply permissions to the group and add users to the group.

The problem with this is that not all applications determine group membership the same way. Also, the addition of Universal Groups and the numerous permutations of group nesting make this a complicated task. And let's not forget that some groups, like Domain Users, are implicitly added when you log on because it’s the designated primary group. Programmatically determining group membership is simple -- until it's implemented, and an implementation's performance is typically inversely proportional to its accuracy. It either takes a long time to get an accurate list, or a short time to get a somewhat accurate list.
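A toy model makes the cost visible. Here each lookup of a group's member list stands in for one round trip to a domain controller (the group names and data are made up); the deeper the nesting, the more queries you need before the membership list is accurate:

```python
# Toy model of nested-group expansion. Each group lookup stands in for a
# round trip to a domain controller; nesting multiplies the queries needed
# to get an accurate membership list. All names here are illustrative.
GROUPS = {  # group -> direct members (users or other groups)
    "DriveMap-Users": ["Sales", "Marketing"],
    "Sales": ["alice", "Sales-Managers"],
    "Sales-Managers": ["bob"],
    "Marketing": ["carol"],
}

def expand(group, queries=None):
    """Return (all user members, number of simulated DC queries)."""
    if queries is None:
        queries = [0]
    users = set()
    queries[0] += 1                       # one query per group looked up
    for member in GROUPS.get(group, []):
        if member in GROUPS:              # nested group: recurse
            users |= expand(member, queries)[0]
        else:
            users.add(member)
    return users, queries[0]

users, query_count = expand("DriveMap-Users")
print(sorted(users), query_count)   # ['alice', 'bob', 'carol'] 4
```

Four simulated queries for three users, and that is a tiny tree; real environments with deep nesting fare far worse.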

Security Group Computer Targeting

Using GPP Security Group targeting for computers is a really bad idea. Here's why: in most circumstances, the application retrieves group memberships from a domain controller. This means network traffic from the client to the domain controller and back again. Using the network introduces latency. Latency introduces slow processing, and slow processing is the last thing you want when the computer is processing Group Policy. Also, Preference Targeting allows you to create complex targeting scenarios using Boolean operators such as AND, OR, and NOT. This is powerful stuff and lets you combine multiple targeting items within a single GPP item. However, the power comes at a cost. Remember that network traffic we created by making queries to the domain controller for group memberships? Well, that information is not cached; each Security Group targeting item in the GPO must perform that query again -- yes, the same one it just did. Don't hate, that's just the way it works. And this behavior does not take nested groups into account: the number of round trips to the domain controller increases further if you want to include groups of groups of groups, etcetera ad nauseam (trying to make my Latin word quota).

Security Group User Targeting

User Security Group targeting is not as bad as computer Security Group targeting. During user Security Group targeting, the Group Policy Preferences extension determines group membership from the user's authentication token. This process is more efficient and does not require round trips to the domain controller. One caveat with depending on group membership is the risk of the computer or user's group membership containing too many groups. Huh -- too many groups? Yes, this happens more often than many realize. Windows creates an authentication token from information in the Kerberos TGT. The Kerberos TGT has a finite amount of storage for this information. Users and computers with large group memberships (groups nested within groups…) can exhaust the finite storage available in the TGT. When this happens, the remaining group memberships are truncated, which creates the effect that the user is not a member of those groups. Groups truncated from the authentication token result in the computer or user not receiving a particular Group Policy Preference item.
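For a rough sense of when truncation becomes a risk, KB 327825 gives an estimation formula for Kerberos token size: TokenSize = 1200 + 40d + 8s, where d counts domain local groups, universal groups outside the account's domain, and SID history entries, and s counts global groups and universal groups in the account's domain. A quick sketch of that arithmetic (the formula is an estimate, not an exact measurement):

```python
def estimated_token_size(domain_local_and_external_universal,
                         global_and_home_universal,
                         sid_history=0):
    """Rough Kerberos token size estimate in bytes, per KB 327825:
    TokenSize = 1200 + 40d + 8s. 'd' counts domain local groups,
    universal groups outside the account domain, and SID history
    entries; 's' counts global groups and universal groups in the
    account's own domain. This is an estimate only."""
    d = domain_local_and_external_universal + sid_history
    s = global_and_home_universal
    return 1200 + 40 * d + 8 * s

# A user in 100 domain local groups and 200 global groups:
print(estimated_token_size(100, 200))  # 6800
```

At 6800 bytes this hypothetical user is still under the default 12000-byte MaxTokenSize, but heavy nesting can push accounts past the limit faster than you might expect.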

You got any better ideas?

A better choice for targeting Group Policy Preference items is the Organizational Unit targeting item. It's da bomb!!! Let's look at how Organizational Unit targeting items work.

image
Figure 2 Organizational Unit Targeting item

The benefits of Organizational Unit Targeting Items

Organizational Unit targeting items determine OU container membership by parsing the distinguished name of the computer or user. So, they simply use string manipulation to determine which OUs are in scope for the user or computer. Furthermore, they can determine whether the computer or user is a direct member of an OU by simply looking for the first OU component immediately following the principal name in the distinguished name.
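The string manipulation involved is cheap. A minimal sketch (deliberately naive: it splits on commas and so ignores escaped commas that a real DN parser must handle):

```python
def ou_chain(distinguished_name):
    """Return the OUs a DN belongs to, nearest container first.
    Naive parsing: splits on ',' and ignores RFC 4514 escaping."""
    parts = distinguished_name.split(",")
    return [p[3:] for p in parts if p.upper().startswith("OU=")]

def direct_ou(distinguished_name):
    """The directly containing OU is the first OU component right after
    the principal's own RDN; None if the object sits elsewhere."""
    parts = distinguished_name.split(",")
    if len(parts) > 1 and parts[1].upper().startswith("OU="):
        return parts[1][3:]
    return None

dn = "CN=CONW71,OU=Workstations,OU=Dev,DC=contoso,DC=com"
print(ou_chain(dn))    # ['Workstations', 'Dev']
print(direct_ou(dn))   # Workstations
```

No directory queries at all once the DN is in hand, which is exactly why this scales better than chasing group memberships.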

Computer Targeting using OUs

Computer Preference targeting with OUs still has to contact a domain controller. However, it’s an LDAP call, and because we are not chasing nested groups, it's quick and efficient. First, the preference client-side extension (CSE) gets the name of the computer. The CSE gets the name from the local computer, either from the environment variable or from the registry, in that order. The CSE then uses the name to look up the security identifier (SID) for the computer. Windows performs an LDAP bind to the computer object in Active Directory using the SID. The bind completes and retrieves the computer object's distinguished name. The CSE then parses the distinguished name as needed to satisfy the Organizational Unit targeting item.

User Targeting using OUs

User Preference targeting requires fewer steps because the client-side extension already knows the user's SID. The remaining work performed by the CSE is to LDAP bind to the user object using the user's SID and retrieve the distinguished name from the user object. Then, business as usual, the CSE parses the distinguished name to satisfy the Organizational Unit targeting item.

Wrap Up

So there you have it. The solution is clean and takes full advantage of your existing Active Directory hierarchy. Alternatively, it could be the catalyst needed to start a redesign project. Understandably, this only works for Group Policy Preferences items; however -- every little bit helps when consolidating the number of groups to which computers and users belong -- and it makes us a little less dependent on groups. Also, it's a better, faster, and more efficient alternative to Security Group targeting. So try it.

Update

We recently published a new article around behavior changes with Group Policy Preferences Computer Security Group Targeting.  Read more here.

- Mike "This is U.S. History; I see the globe right there" Stephens

 

 

Forcing Domain Admins to use AGPM (but not really)


Hi folks, Sean Wright here for my final post. So, you have AGPM installed, but your Domain Admins continue using GPMC to create, delete, and modify Group Policy. You’ve asked nicely, but that hasn’t had much effect. Now you want to make your point, and prevent your Domain Admins from managing Group Policy the wrong way. You decide to deny Domain Administrators the rights to modify Group Policy Objects (GPOs) through any means save the AGPM console. It may seem like a good idea, but let me explain how your time is better spent elsewhere.

First, let’s cover the concept of a domain administrator. The domain admin is the most trusted and unrestricted user account in the domain. The domain admin can do anything in the domain and can give themselves permissions that make anything possible. The domain admin is the "Domain Overlord" if you will. Go ahead, laugh maniacally now, I’ll wait.

The very notion that you want to deny something to a Domain Admin is a foreign concept. You don’t deny them anything. They deny rights to others. Windows and Active Directory are built upon this fundamental concept, which brings us to our next section.

Why you’re wasting your time:

Active Directory is tailored to Domain Admins being all-powerful. No matter what you do to restrict their rights, they can simply change it back at will. You can make it difficult, which might discourage them… but a determined admin can undo anything you change.

Now imagine you have a new admin on the team, and while troubleshooting “Random Group Policy Problem #5”, they receive an access denied error when managing policy through GPMC. They should be using AGPM, and the fact that they are unaware of this is a whole other issue. Most admins take access denied errors as a bad thing -- after all, they are an admin -- so they may start “fixing” the environment by changing permissions.

If you contact Microsoft Support for a Group Policy related issue, we will likely return the permissions to defaults before proceeding with troubleshooting. We do not recommend this scenario, because you can't prevent a domain administrator from being a domain administrator, and your efforts can be so easily undone.

If you modify permissions on policy folders within SYSVOL, you’re going to trigger replication for every file and folder that is changed. In large environments with many policies, that can be a significant network traffic surge.

Most importantly, Microsoft has not tested this scenario, so you may introduce unforeseen problems to your environment by attempting it.

What you should do instead:

The advice I give to every customer who wants to force domain admins into AGPM is: Education. You can’t prevent a domain admin from doing something if they are determined. If you can’t trust your domain admins to do the right thing, and do it the right way, then they should not be domain admins. That said, I suggest educating administrators by teaching them about AGPM and its benefits. Explain why they should only use AGPM to manage policy, and you will likely see them consciously decide to go the extra mile to do things the correct way.

Recently, I had a customer insist AGPM was incomplete, because it did not have this restrictive functionality built-in. The developers did not intend for AGPM to restrict admins. It was designed to provide benefits that make troubleshooting and administration of policy more manageable.

If you’re still reading, and are determined to try this in spite of my recommendations against it:

Editing existing Group Policy object

During installation, in an effort to make things easier, some customers simply add the AGPM service account to the Domain Administrators group. Since we’re about to prevent domain admins from accessing production GPOs, you’ll want to read over the AGPM Least Privilege scenario and make sure you have successfully implemented this before you proceed.

1. We’ll need to remove any Administrative users or groups from the “Group Policy Creator Owners” group. You can do this through Active Directory Users and Computers.

2. If it’s not already there, make sure you add the AGPM service account to ”Group Policy Creator Owners”

3. Open the Group Policy Management Console (GPMC.msc) and find the Group Policy Objects container. The Delegation tab shows a list of users/groups that have the ability to create new GPOs in the domain. You can try to remove Domain Admins from this location, but alas, it won’t let you.

image

Note: This is a safety feature, designed to prevent you from accidentally removing all rights to create GPOs.

What you can do is prevent your domain admins from editing the existing GPOs.

4. Within GPMC, expand the Group Policy Objects container and find the Default Domain Controllers Policy.

5. Select the Default Domain Controllers GPO and go to the Delegation tab.

6. Remove the Domain Administrators and Enterprise Administrators groups from the delegation list.

7. Make sure the list contains SYSTEM with full control, and ENTERPRISE DOMAIN CONTROLLERS and the Authenticated Users entries with Read permissions (at least).

8. Repeat steps 5 through 7 for every GPO currently in your environment.

image

This makes your existing GPOs resistant (but not immune) to your administrator’s editorial charms.

9. Next, open GPMC with your AGPM Administrator account and go to the AGPM console.

10. Click on the Production Delegation tab and remove Domain Administrators and Enterprise Administrators from this location. This tab within the AGPM console determines the permissions AGPM assigns to controlled GPOs when they are deployed to production using AGPM. Making this change prevents all of the hard work you just did in the section above from going to waste.

image

Don’t worry that the list isn’t complete. We need to add Authenticated Users and the AGPM service account to production GPOs.

Control Group Policy object links

So far, we’ve removed the domain admin’s ability to edit existing GPOs, but they can still create new GPOs and link new and existing GPOs to OUs. In order to prevent these actions, we need to explicitly deny specific rights related to Group Policy.

1. Open GPMC and click on the domain node that contains the name of your domain.

2. Click Delegation and click Advanced.

3. On the domain’s security dialog box, click Advanced to open the Advanced Security Settings dialog.

4. Click Add button to add a new entry.

5. Type Domain Admins and then click Check Names. Click OK to show the Permission Entry dialog.

6. Click Properties.

7. Select the Deny check box next to the permissions Write gPLink and Write gPOptions.

8. Click OK on all dialogs until you return to GPMC.

image

9. Check the permissions by right-clicking the node with the name of the domain. Notice the menu items Create a GPO in this domain, and Link it here… ; Link an Existing GPO… ; and Block Inheritance are unavailable. Additionally, the menu items Enforced and Link Enabled are unavailable on existing GPO links.

image

10. You will need to repeat steps 1-9 for every OU in your domain. This change is also needed for any newly created OUs. It might seem easier to set these deny permissions at the domain level and let inheritance propagate the settings down to existing and new OUs, but it doesn’t work. When an OU is created in Active Directory, permissions are explicitly defined at the OU level. When you set an explicit deny permission at the domain level, inheritance applies it as an implicit (inherited) deny at the OU level. An explicit deny wins over an explicit allow; however, an explicit allow wins over an implicit deny.
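To see why the domain-level deny fails, here is a toy model of the precedence rule just described. This is purely illustrative -- it is not the real Windows ACL evaluation code, just the ordering: explicit deny beats explicit allow, which beats an inherited (implicit) deny:

```python
# Toy model of ACE precedence for a single right:
# explicit deny > explicit allow > inherited deny > inherited allow.
PRECEDENCE = {"explicit_deny": 0, "explicit_allow": 1,
              "inherited_deny": 2, "inherited_allow": 3}

def effective(entries):
    """Pick the winning entry for one right from a list of ACE kinds."""
    if not entries:
        return "no access"
    winner = min(entries, key=PRECEDENCE.get)
    return "denied" if "deny" in winner else "allowed"

# A deny set at the domain level arrives at the OU as an *inherited*
# deny, while the OU's own default allow is *explicit* -- the allow wins:
print(effective(["inherited_deny", "explicit_allow"]))  # allowed
# Which is why the deny must be set explicitly on each OU:
print(effective(["explicit_deny", "explicit_allow"]))   # denied
```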

Note: There is also an option to change the default permissions applied to new OUs as they are created. This option modifies the schema, so use caution when modifying any value in the schema. The defaultSecurityDescriptor attribute is in SDDL format, so I recommend you configure one OU with the correct security settings and copy the value. This prevents having to manually set the permissions as new OUs are created in the future.

No new GPOs for You

So far, we removed the domain admin’s right to edit existing GPOs, and their rights to link new GPOs to existing OUs in the domain. Also, we removed their right to edit the GPOptions such as link and enforced states. The last step is to prevent a domain admin from creating new GPOs in the domain’s Group Policy Objects container.

1. Open ADSIEdit.msc. Right-click the ADSI Edit node in the navigation pane and then click Connect to…

2. Configure the Connections Settings dialog similar to the following image. Click OK.

image

3. In the navigation pane, expand the Default naming context until you find the following container: CN=Policies,CN=System,DC=domain,DC=com.

4. Right-click CN=Policies and then click Properties. Click the Security tab.

5. Click Advanced to open the Advanced Security Settings dialog. Add an entry for Domain Admins, and deny the permission to create or delete groupPolicyContainer objects.

image

This last step makes the Create menu item unavailable within GPMC when creating new Group Policy Objects. The Delete menu item remains available for GPOs; however, attempting a delete results in an access denied error.

An Imperfect Solution:

Many aspects of this scenario require periodic administrative attention, which certainly increases management costs. In addition, domain admins can undo this partially or in total, which can increase the difficulty that comes with troubleshooting.

Group Policy was designed to be managed by domain administrators. Attempting to hack together a solution can cause its fair share of administrative burden (even when it’s working correctly). Why? Because any domain admin can undo the solution with relative ease, making it a monumental waste of time that provides a false sense of security. Since Microsoft does not recommend this scenario, we advise everyone to use AGPM as a beneficial tool and educate your staff. When they are familiar with it, and have it as readily available as GPMC, they will be more likely to do the right thing by using AGPM to manage GPOs.

And the real solution? Have some consequences when admins choose not to use AGPM. That will straighten people out in a hurry. If your domain admins can't follow simple rules, like using AGPM, then imagine what other dangers lurk behind your back.

Sean "Don't Taz Me Bro" Wright

[Editor’s note: this was Sean’s last post – he left us for greener pastures last week. Good luck man, I hope you can get a chuckle out of your new colleagues with your famous photoshopping – Ned]

Friday Mail Sack: Anchors Aweigh Edition


Hiya folks, Ned here again. I finally have an editor that allows anchors on all the questions, so I am adding a quasi “table of contents” for these posts that allow easier navigation and linking. I’ll retrofit all the old mail sack articles too… eventually. This week we discuss – eh - let’s have the bullets do the talking:

Question

We are trying to move away from NTLM in our Active Directory environment. I read your previous post on NTLM Auditing for later blocking. However, the blog posting does not differentiate between the two versions of NTLM. What would be the best way to audit for only NTLMv1 or LM? Also, will Microsoft ever publish those two TechNet articles?

Answer

I still suggest you give up on this, unless you want to spend six months not succeeding. :) If you want to try though, add security event logging on your Win2008 R2 servers/DCs for 4624 Logon events:

977519  Description of security events in Windows 7 and in Windows Server 2008 R2
http://support.microsoft.com/default.aspx?scid=kb;EN-US;977519

Those will capture the Package Name type. For example:

clip_image002
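If you export those 4624 events to text (for example with wevtutil), a quick script can pull out just the weak-protocol logons. A rough sketch -- the "Package Name (NTLM only)" field name matches the 4624 event text, but the event splitting and regex here are deliberately naive and may need adjusting for your export format:

```python
import re

def weak_ntlm_logons(event_text):
    """Return the authentication package of each logon event whose
    'Package Name (NTLM only)' field shows LM or NTLM V1.
    Parsing is intentionally crude: events are split on 'Event ID:'."""
    hits = []
    for event in event_text.split("Event ID:"):
        m = re.search(r"Package Name \(NTLM only\):\s*(\S+(?: V1)?)", event)
        if m and m.group(1) in ("LM", "NTLM V1"):
            hits.append(m.group(1))
    return hits

sample = """Event ID: 4624
Package Name (NTLM only): NTLM V1
Event ID: 4624
Package Name (NTLM only): NTLM V2
"""
print(weak_ntlm_logons(sample))  # ['NTLM V1']
```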

Best I can tell, those two TechNet articles are never going to be published. Jonathan is trying yet again as I write this. Maybe Win8...? We'll see...

Question

I am now 100% Windows Server 2008 R2 in my domains and am ready to move my Domain and Forest Functional Levels to 2008 R2. What does that and my new schema buy me, and are there any steps I should do in special order?

Answer

Nothing has to happen in any special order. Some of your new AD-related options include:

  1. AD Recycle Bin ( http://blogs.technet.com/b/askds/archive/2009/07/24/active-directory-recycle-bin-in-windows-server-2008-r2.aspx )
  2. DFSR for SYSVOL ( http://technet.microsoft.com/en-us/library/dd640019%28WS.10%29.aspx )
  3. V2 DFS Namespaces (http://technet.microsoft.com/en-us/library/cc770287.aspx ) and migrate existing V1 namespaces ( http://technet.microsoft.com/en-us/library/cc753875.aspx )
  4. Last Interactive Logon (http://technet.microsoft.com/en-us/library/dd446680(WS.10).aspx )
  5. Fine Grain Password Policies – (http://technet.microsoft.com/en-us/library/2199dcf7-68fd-4315-87cc-ade35f8978ea )
  6. Virtual Desktops (http://technet.microsoft.com/en-us/library/dd941616(WS.10).aspx )
  7. Managed Service Accounts with automatic SPN management (http://technet.microsoft.com/en-us/library/dd548356(WS.10).aspx )
  8. Other things we recommend at the end of the upgrade ( http://technet.microsoft.com/en-us/library/cc753753(v=WS.10).aspx  )

With your awesome Win2008 R2 servers, you can also:

Question

Our USMT scanstate log shows error:

Error  [0x08081e] Failed to load manifest at C:\USMT\x86\dlmanifests\security-ntlm-lmc.man: XmlException:  hResult = 0x0, Line = 18, Position = 31; A string literal was expected, but no opening quote character was found.

But nothing bad seems to happen and our migration has no issues we can detect. It looks like the quotation marks in the XML are incorrect. If I correct that, it runs without error, but am I making something worse and is this supported?

Answer

Right you are. Note the quotation marks – looks like some developer copied them out of a rich text editor at some point:

image

But no matter – you can change it or delete that MAN file, it makes no difference. That manifest file does not have a USMT scope set, so it is never used even when syntactically correct. In order for USMT to pick up a manifest file during scanstate and loadstate, it must have this set:

  <migration scope="Upgrade,MigWiz,USMT">

If not present, the manifest is skipped with message “filtered out because it does not match the scope USMT”:

clip_image002[8]

Roughly two thirds of the manifests included with USMT are not used at all for this very same reason.
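The scope check itself is easy to reproduce. This is a simplified sketch of the filtering described above, not USMT's actual implementation: parse the manifest XML and look for "USMT" in the scope attribute of the migration element (namespace handling is deliberately loose; real .man files carry an assembly namespace):

```python
import xml.etree.ElementTree as ET

def in_usmt_scope(manifest_xml):
    """Return True if the manifest's <migration> element lists USMT in
    its scope attribute -- i.e., scanstate/loadstate would not filter
    it out. Simplified sketch; ignores XML namespaces on purpose."""
    root = ET.fromstring(manifest_xml)
    for elem in root.iter():
        if elem.tag.rsplit("}", 1)[-1] == "migration":
            scope = elem.get("scope", "")
            return "USMT" in scope.split(",")
    return False  # no migration element at all

used = '<assembly><migration scope="Upgrade,MigWiz,USMT"/></assembly>'
skipped = '<assembly><migration scope="Upgrade"/></assembly>'
print(in_usmt_scope(used), in_usmt_scope(skipped))  # True False
```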

Question

I am looking for a full list of Event IDs for FRS. KB 308406 only seems to include them for Windows 2000 – is that list accurate for later operating systems like Windows Server 2003 or 2008 R2?

Answer

That KB article has a few issues, I’ll get that ironed out. In the meantime:

Windows Server 2003 added events:

Event ID: 13569

Severity: Error

The File Replication Service has skipped one or more files and/or directories during primary load of the following replica set. The skipped files will not replicate to other members of the replica set.

Replica set name is    : "%1"

A list of all the files skipped can be found at the following location. If a directory is skipped then all files under the directory are also skipped.

Skipped file list      : "%2"

Files are skipped during primary load if FRS is not able to open the file. Check if these files are open. These files will replicate the next time they are modified.

 

Event ID: 13570

Event Type: Error

The File Replication Service has detected that the volume hosting the path %1 is low on disk space. Files may not replicate until disk space is made available on this volume.

The available space on the volume can be found by typing

"dir /a %1".

For more information about managing space on a volume type "copy /?", "rename /?", "del /?", "rmdir /?", and "dir /?".

 

Event ID: 13571

Event Type: Error

The File Replication Service has detected that one or more volumes on this computer have the same Volume Serial Number. File Replication Service does not support this configuration. Files may not replicate until this conflict is resolved.

Volume Serial Number : %1

List of volumes that have this Volume Serial Number: %2

The output of "dir" command displays the Volume Serial Number before listing the contents of the folder.

 

Event ID: 13572

Event Type: Error

The File Replication Service was unable to create the directory "%1" to store debug log files.

If this directory does not exist then FRS will be unable to write debug logs. Missing debug logs make it difficult, if not impossible, to diagnose FRS problems.

Windows Server 2008 added no events.

Windows Server 2008 R2 added events:

Event ID: 13574

Event Type: Error

The File Replication Service has detected that this server is not a domain controller. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

 

Event ID: 13575

Event Type: Error

This domain controller has migrated to using the DFS Replication service to replicate the SYSVOL share. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

 

Event ID: 13576

Event Type: Error

Replication of the content set "%1" has been blocked because use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.



All of these operating systems include this event:



Event ID: 13573

Event Type: Warning

File Replication Service has been repeatedly prevented from updating:

File Name : "%1"

File GUID : "%2"

due to consistent sharing violations encountered on the file. Sharing violations occur when another user or application holds a file open, blocking FRS from updating it. Blockage caused by sharing violations can result in out-of-date replicated content. FRS will continue to retry this update, but will be blocked until the sharing violations are eliminated.  For more information on troubleshooting please refer to http://support.microsoft.com/?id=822300.

Win2008 should have had those 13574-13576 events as they are just as applicable, but $&% happens.

Question

Why isn’t it possible to grant a user local admin rights on a domain controller without adding them to the built-in Administrators or Domain Admins groups? It can be done on RODCs, after all.

Answer

It’s with good intentions – if I am a local administrator on a DC, I own that whole domain. I can directly edit the AD database, or even replace it with my own copy. I can install a filter driver that intercepts all password communications between LSASS and the database. I can turn off all auditing and group policy. I can add a service that runs as SYSTEM and therefore runs as the DC itself – then impersonate the DC. I can install a keyboard logger that captures the “real” domain admins as they log on. My power is almost limitless.

The reasons we added the functionality for non-domain admin administrators on RODC are:

  1. RODCs are not authoritative for anything and cannot originate any data out to any other DC or RODC. So the likelihood of damage or compromise is lower - although theoretically, not removed.
  2. RODCs are for branch offices that don’t have dedicated IT staff and which may not even be reliably network connected to the main IT location – so having a “local admin” makes sense for management.

Question

You have talked about how to track individual DFSR file replication using its built-in “enable audit” setting. Does this impact server performance?

Answer

Yes and no. The additional DFSR logging impact is negligible on any OS. The object access auditing impact ranges from medium (by default) to high (if you have added many custom SACLs). On Win2003, though, you have to enable object access auditing in order to use the DFSR logging, so the net result there is medium-to-high impact compared to other auditing categories.

It’s worth noting that overall, auditing impact in Win2008+ is lower, as the audit system was redesigned for greater scalability and performance. You also have a much less disruptive security audit option, which is to enable only the subcategory:

Category: Object Access
Subcategory: Application Generated

image

That way you don’t have to enable the completely ridiculous set of Object Access auditing in order to track only DFSR file changes. And the impact is greatly lowered.

image
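If you prefer the command line to the GUI shown above, the same subcategory can be enabled with auditpol from an elevated prompt (adjust the success/failure flags to taste):

```bat
auditpol /set /subcategory:"Application Generated" /success:enable /failure:enable
```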

And besides, to run Win2008+, you need much faster hardware anyway. ^_^

Question

Can NetWare volumes be DFSN link targets?

Answer

Good grief, someone still has NetWare servers?

Yes, with caveats:

824729  Novell 6 CIFS pass-through authentication failures
http://support.microsoft.com/default.aspx?scid=kb;EN-US;824729

Novell also created a DFS service, to act as a root instead of simply a link target like above:

Using DFS to Create Junctions
http://www.novell.com/documentation/nw6p/?page=/documentation/nw6p/nss_enu/data/adqqknt.html

Generally speaking, if a target can provide SMB/CIFS shares, it can be a link target. To connect to a DFS target, your OS needs a DFS client:

Can Apple, Linux, and other non-MS operating systems connect to DFS Namespaces?
http://blogs.technet.com/b/askds/archive/2011/01/18/can-apple-linux-and-other-non-ms-operating-systems-connect-to-dfs-namespaces.aspx

Bring on the Banyan Vines questions!

Question

There is no newer version of the Group Policy Best Practices Analyzer tool, and it finds no updates when it starts. Is it going to be updated for Windows Server 2008 or later? The tool was even mentioned by Tim on this very blog years ago, but since then, nothing.

[This “question” came from a continued conversation about a specific aspect of the tool – Ned]

Answer

  • This tool has no updates or development team and is effectively abandoned. It was not created by the Group Policy developer group in Windows, nor is it maintained by them – it doesn’t have a dev team at all. It probably should have been released on CodePlex instead of the Download Center. The genie cannot be put back in the bottle now, though, as people would just grab copies from elsewhere on the internet, likely packed with malware payloads.
  • This tool is not supported – it’s provided as-is. When Tim talked about it, the tool had a bright future. Now it is gooey dirt.
  • This tool’s results and criteria are questionable, bordering on dangerous. It gives a very false sense of security if you pass, because it checks very little. It also incorrectly flags issues that do not exist – for example, it reports as an error that the Enterprise Domain Controllers group does not have Apply Group Policy permissions on the Default Domain Controllers policy. The DCs are all members of Authenticated Users, though, and that’s how they get Apply permissions. And why doesn’t it raise the same flag for the Default Domain policy? Who knows! The developers got both the design and the assumptions wrong. The tool recommends you add more RPC ports for invalid reasons, which is silly. It talks about a few security settings, ignoring hundreds of others and giving no warning that changing these can break your entire environment. Gah!

If you are looking for security-related best practice recommendations for group policy, you should be using the Security Compliance Manager tool:

Microsoft Security Compliance Manager v1 (release)
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=16776

Microsoft Security Compliance Manager v2 (beta)
http://blogs.technet.com/b/secguide/archive/2011/06/27/scm-v2-beta-new-baselines-available-to-download.aspx

That tool at least has best effort support and a living dev team that is providing vetted recommendations.

More Comic-Con Cosplay

As you know, I spent last week at San Diego Comic-Con and even showed some pictures I snagged. Here is more amazing cosplay, courtesy of the rad Comicvine.com (click thumbnails to make with the bigness). And check out the eyes on Scorpion.

[cosplay photo gallery]
Comicvine.com – go there now, unless you hate awesomeness

Until next time.

Ned “I should go as a Keebler Elf next year” Pyle

Improved Group Policy Preference Targeting by Computer Group Membership


Hello AskDS readers, it's Mike again talking about Group Policy Preference targeting items. I posted an article in June entitled Targeting Group Policy Preferences by Container, not by Group. This post highlighted the common problems many people encounter when targeting preferences items based on a computer's group membership, why the problem occurs, and some workarounds.

Today, I'd like to introduce a hotfix released by Microsoft that improves targeting preference items by computer group membership. The behavior before the hotfix potentially resulted in slow computer group policy application. The slowness was caused by the way Security Group targeting applies against a computer account. The targeting item makes multiple round trips to a domain controller to determine group memberships (including nested groups). The slowness is more significant when the computer applying the targeting item does not have a local domain controller and must use a domain controller across a WAN link.

You can download the hotfix for Windows 7 and Windows Server 2008 R2 through Microsoft Knowledgebase article 2561285. This hotfix changes how the Security Group targeting item calculates computer group membership. During policy application, the targeting item requests a copy of the computer's authentication token. This token is mostly identical to the token created during logon, which means it contains a list of security identifiers (SIDs) for every group of which the computer is a member, including nested groups. The targeting item performs the configured comparison against this list of SIDs in the token, rather than making multiple LDAP calls to a domain controller. This aligns computer security group targeting with user security group targeting, and should improve its performance.

Mike "Try, Try, Try Again" Stephens

Friday Mail Sack: Unintended Hilarity Edition


Hiya folks, Ned here again with another week’s questions, comments, and oddities. This time we’re talking:

Let’s get it.

Question

When we change security on our group policies using GPMC, we always get this disturbing message:

“The permissions for this GPO in the SYSVOL folder are inconsistent with those in Active Directory”

image

We remove the “Read” and “Apply Group Policy” checkboxes from Authenticated Users by using the Delegation tab in GPMC, then substitute our own specific groups. The policies apply as expected with no errors even when we see this message.

Answer

It’s because you are not completely removing the Authenticated Users group. Authenticated Users has not only “Read” and “Apply Group Policy” but also “List Object”, which is a “special” permission. The technique you’re using leaves Authenticated Users still ACL’ed, but with an invalid ACE of just “List”, and that’s what GPMC is sore about:

clip_image002

Instead of removing the two checkboxes, just remove Authenticated Users:

image

Better yet, don’t use the Delegation tab at all. The Security Filtering section on the main page sets the permissions for read and apply policy, which I presume is what you want. Just remove Authenticated Users and put in your own specific groups. It gives you the desired resultant policy application, without any errors, and with less effort.

image

Delegation is designed for controlling who can manipulate policies. It only coincidentally manages who gets policies.

Question

Is it possible to set up multiple ADMT servers and allow both the ability to migrate passwords? I know that setting up the PES service on a source DC consumes a key file generated by the ADMT server. I wasn’t sure if this tie allows only that server to perform password migrations.

Answer

You can always have multiple ADMT copies, as long as they point to the same database; that’s where things tie together, not in ADMT itself. You could use multiple databases, but then you have to keep track of what you migrated in each one and it’s a real mess, especially for computer migration, which works in multiple phases.  You’d need multiple PES servers in the source domain and would have to point to the right one from the right ADMT DB instance when migrating users. This is highly discouraged and not a best practice.

Question

I was looking at Warren’s post on figuring out how much DFSR staging space to set aside. I have millions of files – how long can I expect that PowerShell script to run? I want to schedule it to go once a week or so, but not if it runs for hours and incinerates the server.

Answer

It really depends on your hardware. But for a worst case, I used one of my gross physical test “servers” (it’s really workstation-class hardware) and generated many 1KB files plus 64 1MB files to have something to pick:

  • 500,000+64 files took 1 minute, 45 seconds to calculate
  • 1,000,000+64 files took 3 minutes, 30 seconds to calculate

The CPU and disk hit was negligible, but the memory usage significantly climbed. I would do this off hours if that server is starved for RAM.

Question

Can USMT migrate files in locations whose paths exceed the MAX_PATH limit of 260 characters?

Answer

image

image

Both scanstate and loadstate support paths up to ~32,767 characters, with each “component” (file or folder name) in that path limited to 255 characters.

Question

According to this article, Windows Server 2008 and 2008 R2 DCs use static port 5722 for DFSR. We mainly use Win2008 R2 member servers, so when choosing a port to set DFSR to, should I choose a different port within the range 49152 – 65535? Or would it be OK to set DFSR to 5722 on member servers too, so that all traffic on 5722 will be DFSR regardless of whether it's a DC or a member server involved in the replication?

Answer

Totally OK to use 5722 on members and makes your life easier on the firewall config. Make sure you review: http://blogs.technet.com/b/askds/archive/2009/07/16/configuring-dfsr-to-a-static-port-the-rest-of-the-story.aspx
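For reference, the static port is set per server with dfsrdiag; a sketch, with a hypothetical member name:

```bat
rem Pin DFSR to TCP port 5722 on the named member
dfsrdiag StaticRPC /Port:5722 /Member:CONSRV1
```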

Question

What are the most common Active Directory-related support cases Microsoft gets? I’m planning some training and want to make sure I am hitting the most powerful topics.

Answer

In no particular order:

  • Slow Logon (i.e. between CTRL+ALT+DEL and a working, responsive desktop)
  • Group policy not applying
  • Kerberos failures (duplicate/missing SPNs and bloated token)
  • Domain upgrade best practices and failure (i.e. ADPREP, first new DC)
  • AD replication failing (USN rollback, lingering objects, tombstone lifetime exceeded)

The above five have remained the top issues for 12 years now. Within the rest of Directory Services support, ADFS and PKI have seen the most growth in the past year.

 

Other Things

In case you live on the Mariana Islands and only got your first Internet connection today, we’ve started talking about Windows 8. Shiny pictures and movies too.

 shinynewcopy

Preemptive strike: I cannot talk about Windows 8.

The power of inspirational infographics, via the awesome datavisualization.ch and from the brilliant H57 Design:

darthinfographic

The Cubs were robbed.

It’s time for IO9 2011 fall previews of science fiction and fantasy:

We released the Windows 7 theme you’ve been wanting, Jonathan!

Is this the greatest movie ever created? Certainly one of the most insane. It’s safe for work.


Unless you work in an anthropomorphic cannibalism outreach center

And finally, from an internal email thread discussing some new support case assignment procedures:

From: a manager
To: all DS support staff at Microsoft
Subject: case assignment changes

For cases that are dispatched to the Tier 3 queue and assigned based on an incorrect support topic or no support topic listed. Engineers will do the following:

1. Set appropriate Support topic

2. Update the SR Title-with: STFU\[insert new skill here]

3. Correct support topic for assignment

4. Dispatch the case back to the queue for re-assignment

Five minutes later:

From: a manager
To: all DS support staff at Microsoft
Subject: RE: case assignment changes

Incidentally, the acronym STFU stands for “Support Topic Field Update” :-)

 

Have a nice weekend, folks.

Ned “the F is for Frak” Pyle

Friday Mail Sack: Dang, This Year Went Fast Edition


Hi folks, Ned here again with your questions and comments. This week we talk:

On Dasher! On Comet! On Vixen! On --- wait, why does the Royal Navy name everything after magic reindeer? You weirdoes. 

Question

I am planning to increase my forest Tombstone Lifetime, and I want to make sure there are no lingering object issues created by this operation. I am using doGarbageCollection to trigger garbage collection immediately, but I am finding (with an increased Garbage Collection logging level) that this does not reset the 12-hour schedule, so collection runs again sooner than I hoped. Is this expected?

Answer

Yes. The rules for garbage collection are:

  1. Runs 15 minutes after the DC boots up (15 minutes after the NTDS service starts, in Win2008 or later)
  2. Runs every 12 hours (by default) after that first time in #1
  3. Runs on the interval set in attribute garbageCollPeriod if you want to override the default 12 hours (minimum supported is 1 hour, no less)
  4. Runs when forced with doGarbageCollection

Manually running collection does not alter the schedule or “reset the timer”; only the boot/service start changes that, and only garbageCollPeriod alters the next time it will run automagically.

Therefore, if you wanted to control when it runs on all DCs and get them roughly “in sync”, restarting all the DCs or their NTDS services would do it. Just don’t do that to all DCs at precisely the same time or no one will be able to logon, mmmmkaaay?
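For reference, forcing a collection is just a rootDSE modify of the doGarbageCollection operational attribute. With ldifde, the input file looks something like this (a sketch):

```ldif
dn:
changetype: modify
replace: doGarbageCollection
doGarbageCollection: 1
-
```

Import it against a DC with ldifde -i -f forcegc.ldf.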

Question

I’ve read your post on filtering group policy using WMI. The piece about Core versus Full was quite useful. Is there a way to filter based on installed roles and features though?

Answer

Yes, but only on Windows Server 2008 and later server SKUs, which support a class named Win32_ServerFeature. This class returns an instance, identified by an ID property, for each installed role and feature – nothing is returned until roles or features are installed. Since this is WMI, you can use WMIC.EXE to see the data before monkeying with the group policy:

image

So if you wanted to use the WQL filtering of group policy to apply a policy only to Win2008 FAX servers, for example:

image

On a server missing the FAX Server role, the policy does not apply:

image
If you still care about FAXes though, you have bigger issues. 
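In text form, the filter in the screenshots amounts to a WQL query along these lines (I am assuming the role's display name here; you can dump the installed names and IDs with `wmic path Win32_ServerFeature get ID,Name` and filter on either property):

```sql
SELECT * FROM Win32_ServerFeature WHERE Name = "Fax Server"
```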

Question

We’re having issues with binding Macs (OS X 10.6.8 and 10.7) to our AD domain, which uses a ‘.LOCAL’ suffix. Apple is suggesting we create IPv6 AAAA and PTR records for all our DCs. Is this the only solution, and could it cause issues?

Answer

That’s not the first time Apple has had issues with .local domains, and it may not be your only problem (here, here, here, etc.). Moreover, it’s not only Apple’s issue: .local is a pseudo top-level domain suffix used by multicast DNS. As our friend Mark Parris points out, it can lead to other aches and pains. There is no good reason to use .local; the MS recommendation is to register your top-level domain and then create forest roots based on children of it. For example, Microsoft’s AD forest root domain is corp.microsoft.com, with geography denoting other domains, like redmond.corp.microsoft.com and emea.corp.microsoft.com; geography usually doesn’t change faster than networks. The real problem was timing: AD was in development several years before .local was adopted for multicast DNS, and mDNS saw little usage in the following decade compared to standard DNS. AD itself doesn’t care what you do as long as you use valid DNS syntax. Heck, we even used .local automatically when creating Small Business Server domains.

Enough rambling. There should be no problem adding unused, internal-network IPv6 addresses to DNS; Win2008 and later already have IPv6 ISATAP auto-assigned addresses that they are not using either. If that’s what fixes these Apple machines, that’s what you must do. You should also add matching IPv6 “subnets” to all your AD sites, just to be safe.
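If you do add the records by hand, dnscmd can script it; a sketch with a hypothetical DNS server, zone, host, and ULA address:

```bat
rem dnscmd <server> /RecordAdd <zone> <nodename> AAAA <IPv6 address>
dnscmd CONDC1 /RecordAdd contoso.local condc1 AAAA fd00:1234::10
```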

Although if it were me, I’d push back on Apple to fix their real issue and work with this domain, as they have done previously. This is a client problem on their end that they need to handle – these domains predate them by more than a decade. All they have to do is examine the SOA record and it will be clear that this is an internal domain, then use normal DNS in that scenario.

Oh, or you could rename your forest.

BBWWWWAAAAAAAHAHAHAHAHHAHAHAHHAHAHAHHAHAHAHAHAAA.

Sorry, had to do it. ツ

Question

We were reviewing your previous site coverage blog post. If I use this registry site coverage setting on DCs in two different sites to cover a DC-less site, will I get some form of load balancing for clients in that site? I expect that all servers with this value set will create SRV records in DNS to cover the site, and that DNS will simply follow normal round-robin load balancing when responding to client requests. Is this correct?

Answer

[From Sean Ivey, who continues to rock even after he traitorously left us for PFE – Ned]

From a client perspective, all that matters is the response they get from DC/DNS when invoking DCLocator. So for clients in that site, I don’t care how it happens, but if DCs from other sites have DNS records registered for the DC-less site, then typical DNS round robin will happen (assuming you haven’t disabled that on the DNS server).

For me, the question is… “How do I get DCs from other sites to register DNS records for the DC-less site?” Review this:

http://technet.microsoft.com/en-us/library/cc937924.aspx

I’m partial to using group policy though.  I think it’s a cleaner solution.  You can find the GP setting that does the same thing here:

clip_image001

Simply enable the setting, enter the desired site, and make sure that it only applies to the DCs you want it to apply to (you can do this with security filtering).
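The GPO setting writes the same data as the Netlogon SiteCoverage registry value described in that TechNet link; if you wanted to set it directly on a DC instead, it would look roughly like this (a sketch, with a hypothetical site name):

```bat
rem SiteCoverage is a REG_MULTI_SZ list of site names this DC should cover
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SiteCoverage /t REG_MULTI_SZ /d TestCoverage /f
```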

Anyway, so I set this up in my lab just to confirm everything works as expected. 

My sites:

clip_image002

Notice TestCoverage has no DCs.

My site links:

clip_image002[5]

Corp-HQ is my hub, so auto site coverage should determine the DCs in Corp-HQ are closest and should therefore cover site TestCoverage.

DNS:

clip_image002[7]

Whaddya know, Infra-DC1 is covering site TestCoverage as expected.

Next I enable the GPO I pointed out and apply it only to Infra-DC2 and voila!  Infra-DC2 (which is in the Corp-NA site) is now also covering the TestCoverage site:

clip_image002[9]

You have a slightly more complicated scenario because auto site coverage has to go one step farther (using the alphabet to decide who wins) but in the end, the result is the same. 

Question

We’re seeing very high CPU usage in DFSR and comparably poor performance. These are brand new servers - just unboxed from the factory - with excellent modern hardware. Are there any known issues that could cause this?

Answer

[Not mine, but instead paraphrased from an internal conversation with MS hardware experts; this resolved the issue – Ned]

Set the hardware C-States to maximize performance rather than save power or lower noise. You must do this through the BIOS menu; it’s not a Microsoft software setting. We’ve also seen this issue with SQL and other I/O-intensive applications running on servers.

Question

Can NetApp devices host DFS Namespace folder targets?

Answer

This NetApp community article suggests that it works. Microsoft has no way to validate whether this is true, but it sounds plausible. In general, any OS that can present a Windows SMB/CIFS share should work, but it’s good to ask.

Question

How much disk performance reduction should we expect with DFSR, DFSN, FRS, Directory Services database, and other Active Directory “stuff” on Hyper-V servers, compared to physical machines?

Answer

We published a Virtual Hard Disk Performance whitepaper without much fanfare last year. While it does not go into specific details around any of those AD technologies, it provides tons of useful data for other enterprise systems like Exchange and SQL. Those apps are very much the “worst case”, as they tend to write much more than any of ours. It also thoroughly examines pure file IO performance, which makes for easy comparison with components like DFSR and FRS. It shows the metrics for physical disks, fixed VHD, dynamic VHD, and differencing VHD, plus it compares physical versus virtual loads (spoiler alert: physical is faster, but not as much as you might guess).

It’s an interesting read and not too long; I highly recommend it.  

Other Stuff

Joseph Conway (in black) was nearly beaten in his last marathon by a Pekinese:

clip_image00211
Looks ‘shopped, I’m pretty sure the dog had him

Weirdest Thanksgiving greeting I received last month? “Have a great Turkey experience.”

Autumn is over and Fail Blog is there (video SFW, site is often… not):

A couple excellent “lost” interviews from Star Wars: Mark Hamill before the UK release of the first film, and much of the cast just after Empire.

New York City has outdone its hipster’itude again, with some new signage designed to keep you from being horribly mangled. For example:

image
Ewww?

IO9 has their annual Christmas mega super future gift guide out and there are some especially awesome suggestions this year. Some of my favorites:

Make also has a great DIY gift guide. Woo, mozzarella cheese kit!

Still can’t find the right gift for the girls in your life? I recommend Zombie Attack Barbie.

On a related topic, Microsoft has an internal distribution alias for these types of contingencies:

image
“A group whose goal is to formulate best practices in order to ensure the safety of Microsoft employees, physical assets, and IP in the event of a Zombie Apocalypse.” 

Finally

This is the last mail sack before 2012, as I am a lazy swine going on extended vacation December 16th. Mark and Joji have some posts in the pipeline to keep you occupied. Next year is going to be HUGE for AskDS, as Windows 8 info should start flooding out and we have all sorts of awesome plans. Stay tuned.

Merry Christmas and happy New Year to you all.

- Ned “oink” Pyle

Friday Mail Sack: Get Off My Lawn Edition


Hi folks, Ned here again. I know this is supposed to be the Friday Mail Sack but things got a little hectic and... ah heck, it doesn't need explaining, you're in IT. This week - with help from the ever-crotchety Jonathan Stephens - we talk about:

Now that Jonathan's Rascal Scooter has finished charging, on to the Q & A.

Question

We want to create a group policy for an OU that contains various computers, but it needs to apply only to Windows 7 notebooks. All of our notebooks are named starting with an "N". Does group policy WMI filtering allow stacking conditions in the same group policy?

Answer

Yes, you can chain together multiple query criteria, and they can even be from different classes or namespaces. For example, here I use both the Win32_OperatingSystem and Win32_ComputerSystem classes:

image

And here I use only the Win32_OperatingSystem class, with multiple filter criteria:

image

As long as they all evaluate TRUE, you get the policy. If you had a hundred of these criteria (please don’t) and 99 evaluate true but just one is false, the policy is skipped.

Note that my examples above would catch Win2008 R2 servers also; if you’ve read my previous posts, you know that you can also limit queries to client operating systems using the Win32_OperatingSystem property OperatingSystemSKU. Moreover, if you hadn’t used a predictable naming convention, you could also filter with Win32_SystemEnclosure and query the ChassisTypes property for 8, 9, or 10 (respectively: “Portable”, “Laptop”, and “Notebook”). And no, I do not know the difference between these; it is OEM-specific. Just like “pizza box” is for servers. You stay classy, WMI.
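Putting that together for the question’s scenario, the stacked filter could consist of two queries in one WMI filter, both of which must evaluate true. This is a sketch: ProductType is another real Win32_OperatingSystem property where "1" means a client OS (so Win2008 R2 servers are excluded), Version 6.1 matches Windows 7, and the Name test matches the "N" naming convention:

```sql
SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" AND ProductType = "1"
SELECT * FROM Win32_ComputerSystem WHERE Name LIKE "N%"
```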

Question

Is changing LDAP MaxPoolThreads a good or bad idea?

Answer

MaxPoolThreads controls the maximum number of simultaneous threads per processor that a DC uses to work on LDAP requests. By default, it’s four per processor core. Increasing this value would allow a DC/GC to handle more LDAP requests. So if you have too many LDAP clients talking to too few DCs at once, raising this can reduce LDAP application timeouts and periodic “hangs”. As you might have guessed, the biggest complainers here are often MS Exchange and Outlook. If the performance counters “ATQ Threads LDAP” and “ATQ Threads Total” are constantly at the maximum (based on the number of processors and the MaxPoolThreads value), then you are bottlenecked on LDAP.

However!

DCs are already optimized to quickly return data from LDAP requests. If your hardware is even vaguely new and you are not seeing actual issues, you should not increase this default value. MaxPoolThreads depends on non-paged pool memory, which on a Win2003 32-bit Windows OS is limited to 256MB (more on Win2008 32-bit). Meaning that if you still have not moved to at least x64 Windows Server 2003, don’t touch this value at all – you can easily hang your DCs. It also means you need to get with the times; we stopped making a 32-bit server OS nearly three years ago, and OEMs stopped selling the hardware even before that. A 64-bit system's non-paged pool limit is 128GB.
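If, after all that, you still decide to raise it, MaxPoolThreads is one of the LDAP policies managed through ntdsutil rather than the registry. An interactive session goes roughly like this (a sketch; substitute your own DC name):

```bat
ntdsutil
ldap policies
connections
connect to server CONDC1
quit
show values
set MaxPoolThreads to 8
commit changes
quit
quit
```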

In addition, changing the LDAP settings is often a Band-Aid that doesn’t address the real issue of DC capacity for your client/server base. Use SPA or AD Data Collector Sets to determine "Clients with the Most CPU Usage" under the "LDAP Requests" section. Especially if the LDAP queries are not just frequent but also gross – there are built-in diagnostics logs to find poorly written requests:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\
15 Field Engineering

To categorize search operations as expensive or inefficient, two DWORD registry keys are used:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Expensive Search Results Threshold

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Inefficient Search Results Threshold

These DWORD registry keys have the following default values:

  • Expensive Search Results Threshold: 10000
  • Inefficient Search Results Threshold: 1000
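As a sketch, turning the Field Engineering diagnostics level up to 5 is what makes the DC log event 1644 for searches crossing those thresholds. Using reg.exe from an elevated prompt on the DC (set it back to 0 when done – level 5 is chatty):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "15 Field Engineering" /t REG_DWORD /d 5 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "Expensive Search Results Threshold" /t REG_DWORD /d 10000 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "Inefficient Search Results Threshold" /t REG_DWORD /d 1000 /f
```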

For example, here’s an inefficient result written in the DS event log; yuck, ick, argh!:

Event Type: Information
Event Source: NTDS General
Event Category: Field Engineering
Event ID: 1644
Description:
The Search operation based at RootDSE
using the filter:
& ( | ( & ( (objectCategory = <val>) (objectSid = *) ! ( (sAMAccountType | <bit_val>) ) ) & ( (objectCategory = <val>) ! ( (objectSid = *) ) ) & ( (objectCategory = <val>) (groupType | <bit_val>) ) ) (aNR = <substr>) <startSubstr>*) )

visited 40 entries and returned 0 entries.

Finally, this article should be required reading to any application developers in your company:

Creating More Efficient Microsoft Active Directory-Enabled Applications -
http://msdn.microsoft.com/en-us/library/windows/desktop/ms808539.aspx#efficientadapps_topic04

(The title should be altered to “Creating even slightly efficient…” in my experience).

Question

I want to implement many-to-one certificate mappings by using Issuer and Subject DN match. In altSecurityIdentities I put the following string:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=user name

In a given example, a certificate with “cn=user name, cn=users, dc=contoso, dc=com” in the Subject field will be mapped to a user account, where I define the mappings. But in that example I get one-to-one mapping. Can I use wildcards here, say:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=*

So that any certificate that contains “cn=<any value>, cn=users, dc=contoso, dc=com” will be mapped to the same user account?

Answer

[Sent from Jonathan while standing in the 4PM dinner line at Bob Evans]

Unfortunately, no. All that would do is map a certificate with a wildcard subject to that account. The only type of one-to-many mapping supported by the Active Directory mapper is configuring it to ignore the subject completely. Using this method, you can configure the AD mappings so that any certificate issued by a particular CA can be mapped to a single user account. See the following: http://technet.microsoft.com/en-us/library/bb742438.aspx#ECAA
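For reference, an issuer-only mapping – the one-to-many case described in that article – drops the &lt;S&gt; component entirely. A sketch using the CA name from the question (config fragment for altSecurityIdentities):

```
X509:<I>DC=com,DC=contoso,CN=Contoso CA
```

With that value, any certificate issued by that CA maps to the one account holding the mapping.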

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog and MSDN to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

[Sent from Jonathan’s favorite park bench where he feeds the pigeons]

The negative numbers are correct and expected, and are the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, this is expected under the circumstances and you're all good.

Question

Is there any performance advantage to turning off the DFSR debug logging, lowering the number of logs, or moving the logs to another drive? You explained how to do this here in the DFSR debug series, but never mentioned it in your DFSR performance tuning article.

Answer

Yes, you will see some performance improvements turning off the logging or lowering the log count; naturally, all this logging isn’t free, it takes CPU and disk time. But before you run off to make changes, remember that if there are any problems, these logs are the only thing standing between you and the unemployment line. Your server will be much faster without any anti-virus software too, and your company’s profits higher without fire insurance; there are trade-offs in life. That’s why – after some brief agonizing, followed by heavy drinking – I decided not to include it in the performance article.

Moving the logs to another physical disk than Windows is safe and may take some pressure off the OS drive.
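As a sketch, the knobs from that debug series are set through the DfsrMachineConfig WMI class (property names as I remember them – double-check on your build, and the log path here is hypothetical):

```
wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set maxdebuglogfiles=10
wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set debuglogfilepath="D:\dfsrlogs"
```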

Question

When I try to join this Win2008 R2 computer to the domain, it gives an error I’ve never seen before:

"The following error occurred attempting to join the domain "contoso.com":
The request is not supported."

Answer

This server was once a domain controller. During demotion, something prevented the removal of the following registry value name:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\
DSA Database file

Delete that "Dsa Database File" value name and attempt to join the domain again. It should work this time. If you take a gander at the %systemroot%\debug\netsetup.log, you’ll see another clue that this is your issue:

NetpIsTargetImageADC: Determined this is a DC image as RegQueryValueExW loaded Services\NTDS\Parameters\DSA Database file: 0x0
NetpInitiateOfflineJoin: The image at C:\Windows\system32\config\SYSTEM is a DC: 0x32

We started performing this check in Windows Server 2008 R2, as part of the offline domain join code changes. Hurray for unintended consequences!
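A quick way to clear the leftover value from an elevated prompt (a sketch; this is the same NTDS parameters value named above):

```
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "DSA Database file" /f
```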

Question

We have a largish AD LDS (ADAM) instance that we update daily by importing CSV files that delete all of yesterday’s user objects and import today’s. Since we don’t care about deleted objects, we reduced the tombstoneLifetime to 3 days. The NTDS.DIT usage, as shown by Garbage Collection Event ID 1646, shows 1336MB free with a total allocation of 1550MB – this would suggest that there is a total of 214MB of data in the database.

The problem is that Task Manager shows a total of 1,341,208K of Memory (Private Working Set) in use. The memory usage is reduced to around the 214MB size when LDS is restarted; however, when Garbage Collection runs the memory usage starts to climb. I have read many KB articles regarding GC but nothing explains what I am seeing here.

Answer

Generally speaking, LSASS (and DSAMAIN, its red-headed AD LDS cousin) is designed to allocate and retain more memory – especially ESE (aka “Jet”) cache memory – than ordinary processes, because LSASS/DSAMAIN are the core processes of a DC or AD LDS server. I would expect memory usage to grow heavily during the import, the deletions, and then garbage collection; unless something else put pressure on the machine for memory, I’d expect the memory usage to remain. That’s how well-written Jet database applications work – they don’t give back the memory unless someone asks, because LSASS and Jet can reuse it much faster when needed if it’s already loaded; why return memory if no one wants it? That would be a performance bug unto itself.

The way to show this in practical terms is to start some other high-memory process and validate that DSAMAIN starts to return the demanded memory. There are test applications like this on the internet, or you can install some app that likes to gobble a lot of RAM. Sometimes I’ll just install Wireshark and load a really big saved network capture – that will do it in a pinch. :-D You can also use the ESE performance counters under the “Database” and “Database ==> Instances” to see more about how much of the memory usage is Jet database cache size.

Regular DCs have this behavior too, as does DFSR and do other applications. You paid for all that memory; you might as well use it.

(Follow up from the customer where he provided a useful PowerShell “memory gobbler” example)

I ran the following Windows PowerShell script a few times to consume all available memory and the DSAMAIN process started releasing memory immediately as expected:

$chunk = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
for ($i = 0; $i -lt 5000; $i++)
{
       $chunk += $chunk
}

Question

When I migrate users from Windows 7 to Windows 7 using USMT 4.0, their pinned and automatic taskbar jump lists are lost. Is this expected?

Answer

Yes. For those poor $#%^&#s readers still using XP, Windows 7 introduced application taskbar pinning and a special menu called a jump list:

image

Pinned and Recent jump lists are not migrated by USMT, because the built-in OS Shell32 manifest called by USMT (c:\windows\winsxs\manifests\*_microsoft-windows-shell32_31bf3856ad364e35_6.1.7601.17514_non_ca4f304d289b7800.manifest) contains this specific criterion:

<pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent [*]</pattern>

Note how it is not Recent\* [*], which would grab the subfolder contents of Recent. It only copies the direct file contents of Recent. The pinned/automatic jump lists are stored in special files under the CustomDestinations and AutomaticDestinations folders inside the Recent folder. All the other contents of Recent are shortcut files to recently opened documents anywhere on the system:

image

If you examine these special files, you'll see that they are binary, unreadable, and totally proprietary:

image

Since these files are binary and embed all their data in a big blob of goo, they cannot simply be copied safely between operating systems using USMT. The paths they reference could easily change in the meantime, or the data they reference could have been intentionally skipped. The only way this would work is if the Shell team extended their shell migration plugin code to handle it. Which would be a fair amount of work, and at the time these manifests were being written, customers were not going to be migrating from Win7 to Win7. So no joy. You could always try copying them with custom XML, but I have no idea if it would work at all and you’re on your own anyway – it’s not supported.

Question

We have a third party application that requires DES encryption for Kerberos. It wasn’t working from our Windows 7 clients though, so we enabled the security group policy “Network security: Configure encryption types allowed for Kerberos” to allow DES. After that though, these Windows 7 clients stopped working in many other operations, with event log errors like:

Event ID: 4
Source: Kerberos
Type: Error
"The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/myserver.contoso.com. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (domain.com), and the client realm. Please contact your system administrator."

And “The target principal name is incorrect” or “The target account name is incorrect” errors connecting to network resources.

Answer

When you enable DES on Windows 7, you need to ensure you are not accidentally disabling the other cipher suites. So don’t do this:

image

That means only DES is supported and you just disabled RC4, AES, etc.

Instead, do this:

image

If it exists at all and you want DES, you want this registry DWORD value to be 0x7fffffff on Windows 7 or Win2008 R2:

MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\
SupportedEncryptionTypes

If it’s set to 0x3, all heck will break loose. This security policy interface is admittedly tiresome in that it has no “enabled/disabled” toggle. Use GPRESULT /H or /Z to see how it’s applying if you’re not sure about the actual settings.
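To sanity-check what actually landed on a client, you can query the policy value directly (a sketch; reg.exe reports DWORDs in hex, and the value may simply not exist if the policy was never set):

```
reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters" /v SupportedEncryptionTypes
```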

Other Stuff

Windows 8 Consumer Preview releases February 29th, as if you didn’t already know it. Don’t ask me if this also means Windows Server 8 Beta the same exact day, I can’t say. But it definitely means the last 16 months of my life finally start showing some results. As will this blog…

Apparently we’ve been wrong about Han and Greedo since day one. I want to be wrong though. Thanks for passing this along Tony. And speaking of which, thanks to Ted O and the rest of the gang at LucasArts for the awesome tee!

This is a … creepily good music video? Definitely a nice find, Mark!


This is basically my home video collection

My new favorite site of the week? The Awesomer. Do not visit if you have to be somewhere in an hour.

Wait, no… my new favorite site is That’s Nerdaliscious. Do not read if hungry or dorky. 

Sick of everyone going on about Angry Birds? Love Chuck Norris? Go here now. There are a lot of these; don't miss Mortal Combat versus Donkey Kong.

Ah, there’s Waldo.

Likely the coolest advertisement for something that doesn’t yet exist that you will see this year.


I need to buy stock in SC Johnson. Can you imagine the Windex sales?!

Until next time.

- Ned “Generation X” Pyle with Jonathan “The Greatest Generation” Stephens


Group Policy Management Improvements in Windows Server "8" Beta


Hi all, Ned here again. If you've been supporting group policy for years, you’ve grown used to its behaviors. For something designed to manage an enterprise, its initial implementation wasn’t easy to manage itself. The Group Policy Management Console improved this greatly after Windows Server 2003, but there was room for enhancement.

Windows Server "8" Beta introduces a number of interesting Group Policy management changes to advance things. These include detecting overall replication consistency as well as remote policy refresh and easier resultant set of policy troubleshooting. Windows 8 Consumer Preview benefits from some of these changes as well.

Let's dig in.

Infrastructure Status

Once upon a time, someone wrote a Windows 2000 resource kit utility called gpotool.exe (no longer supported). It was supposed to tell you if the SYSVOL and AD portions of a group policy were synchronized on a given domain controller and between DCs in a domain. If it returned the message "Policies OK", you were supposed to be golden.

Unfortunately, gpotool is not very bright or honest, which is why we do not recommend customers use it. It only checks the gpt.ini files in SYSVOL. Anyone who manages group policy knows that each GP GUID folder in SYSVOL contains many files critical to applying group policy. The gpt.ini existing is immaterial if the registry.pol does not exist or is some heinous stale version. Furthermore, gpotool bases everything on the gpt.ini version matching between AD and SYSVOL, alerting you if they don't match. But version matching alone has not mattered since Windows 2000, and file consistency checking is what really counts.

Enter Windows Server "8" Beta. When you fire up GPMC from a server or RSAT, then navigate to a domain node, you now see a new Status tab (more properly called the Group Policy Infrastructure Status tool). GPMC sets the DC it connected to as a baseline source of comparison. By default, that would be the PDC emulator, which GPMC tries to connect to first.

image

If you click Detect Now, the computer running GPMC directly reaches out to all the domain controllers in that domain using the LDAP and SMB protocols. It compares all the SYSVOL group policy file hashes, file counts, ACLs, and GPT versions against the baseline server. It also checks each DC's AD group policy object count, versions, and ACLs against the baseline. If everything is copacetic, you get the good news right there in the UI.

image

If it's not, you don't:

image

Note how the report renders above. If the Active Directory and SYSVOL columns are blank, the versions match between the GPT and AD, which means the file hashes or security are out of sync (an indication of latency at the least); otherwise you will see version messages. If the FRS or DFSR service isn't running on a DC other than the baseline, or SYSVOL is not shared, the SysVol message changes to Inaccessible. If you turn off a DC or the NTDS service, the Active Directory field changes to Inaccessible. If you just deleted or added a group policy, the Active Directory field changes to Number of GPOs for comparison. It's all straightforward.

This new tool doesn’t grant permission to turn off your brain, of course. It's perfectly normal for AD and SYSVOL to be latent and out of sync between DCs for periods of time. Don't assume that servers showing replication in progress are in an error state - that's why it specifically doesn't say “error” in GPMC. Finally, keep in mind that this new functionality in the public Beta is naturally a bit unstable; feel free to report issues in the Windows Server 8 Beta Forums along with detailed repro steps, and we can chat about whether your issue is unknown. For example, stopping the DFSR service on the PDCE and then clicking Detect Now to use that DC as the baseline terminates the MMC. Don’t take it too hard - work in progress, right? We'd love your feedback.

Moving right along…

Remote Policy Refresh

You can now use GPMC to target an OU and force group policy refresh on all of its computers and their currently logged on users. Simply right click any organizational unit and click Group Policy Update. The update occurs within 10 minutes (randomized on each targeted computer) in order to prevent crushing some poor DC in a branch office.

image

image

image

Windows Server "8" Beta Group Policy also updates the GroupPolicy PowerShell module to include a new cmdlet named Invoke-GpUpdate. If you examine its help, you see that it is very much like the classic gpupdate.exe. If you -force using invoke-gpupdate, you do the same as /force in gpupdate.exe, for instance.

NAME

Invoke-GPUpdate

SYNTAX

Invoke-GPUpdate [[-Computer] <string>] [[-RandomDelayInMinutes] <int>] [-AsJob] [-Boot] [-Force] [-LogOff] [-Target <string>] [<CommonParameters>]

Obviously, this cmdlet gives you much more control over the remote policy refresh process than GPMC. For instance, you can target a particular computer:

Invoke-gpupdate -computer <some computer>

Moreover, unlike the "within 10 minutes" pseudo-random behavior of GPMC, you can make the policy refresh happen right now and force group policy to update regardless of version changes. I don't know about you, but if I am interactively invoking a policy update for a given computer, I am not interested in waiting!

image

Since this is PowerShell, you have a great deal of flexibility compared to a purpose-built graphical or command-line tool. For example, you can get a list of computers with an arbitrary description then invoke against each one using a pipeline to ForEach-Object, regardless of OU:

image
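A rough sketch of that pipeline (the description filter value here is made up for illustration):

```
Import-Module ActiveDirectory
Import-Module GroupPolicy

# Find computers by an arbitrary description, then refresh policy on each one
# immediately, forcing a refresh regardless of GPO version changes.
Get-ADComputer -Filter 'Description -like "Kiosk*"' -Properties Description |
    ForEach-Object { Invoke-GPUpdate -Computer $_.Name -RandomDelayInMinutes 0 -Force }
```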

If you’re interested, this tool works by creating remote scheduled tasks. That's how it works for logged on users and with randomized refresh times. Another good reason to ensure the Task Scheduler service is running.

image

New RSOP Logging Data

I saved the best for last. The group policy resultant set of policy logs include a number of changes designed to make troubleshooting and policy analysis easier. Just like in the last few versions of Windows, you can still use GPMC Group Policy Results or GPRESULT /H to gather an HTML log file showing how and what policy applied to a user and computer.

When you open that resulting HTML file, you now see an updated Summary section that provides better "at a glance" information on whether policy applied and the type of network speeds detected. Even better is the new Component Status area. This shows you the time taken for each element of group policy processing to complete.

image

It also stores the associated operational event log activity under View Log that used to require you running gplogview.exe. Rather than parsing the event log with an Activity ID for the computer and user portions of policy processing, you just click the link to see it all unfold before you.

image

Finally, there is a change to the HTML result file for the applied policies. After 12 years, we’ve reached a point where there are thousands of individual Administrative template entries; far more than anyone could possibly remember or reliably discern from their titles. To make this easier, the Windows 8 version of the report now includes explanatory hotlinks to each of those policy entries.

image

By clicking the links in the report, you get the full Explanation text included with that policy entry. Like in this case, the new Primary Computer policy for roaming profiles (which I’ll discuss in a future post).

image

Nifty.

Key Point

Remote RSOP logging and Group Policy refresh require that you open firewall ports on the targeted computers. This means allowing inbound communication for RPC, WMI/DCOM, event logs, and scheduled tasks. You can enable the built-in Windows Advanced Firewall inbound rules:

  • Remote Policy Update
    • Remote Scheduled Tasks Management (RPC)
    • Remote Scheduled Tasks Management (RPC-EPMAP)
    • Windows Management Instrumentation (WMI-in)
  • Remote Policy Logging
    • Remote Event Log Management (NP-in)
    • Remote Event Log Management (RPC)
    • Remote Event Log Management (RPC-EPMAP)
    • Windows Management Instrumentation (WMI-in)

These are part of the “Remote Scheduled Tasks Management”, “Remote Event Log Management”, and “Windows Management Instrumentation” groups. These are TCP RPC port 135, named pipe port 445, and the dynamic ports associated with the endpoint mapper, like always.
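Enabling those by rule group from an elevated prompt might look like this (a sketch; group display names as listed above, and the exact WMI group name may vary by build):

```
netsh advfirewall firewall set rule group="Remote Scheduled Tasks Management" new enable=yes
netsh advfirewall firewall set rule group="Remote Event Log Management" new enable=yes
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
```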

Feedback and Beta Reminder

The place to send issues is the IT Pro TechNet forums. That engages everyone from our side through our main conduits and makes your feedback noticeable. Not all developers are readers of this blog, naturally.

Furthermore, remember that this article references a pre-release product. Microsoft does not support Windows 8 Consumer Preview or Windows Server "8" Beta in production environments unless you have a special agreement with Microsoft. Read that EULA you accepted when installing!

Until next time,

Ned “I used a fancy arrow!” Pyle

Saturday Mail Sack: Because it turns out, Friday night was alright for fighting edition


Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

Today we answer your questions around:

One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don’t spend cycles answering something you already figured out later, but if you’re still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of south pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:

clip_image002

On to the goods!

Question

Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.

Answer

Totally possible for Hyper-V virtual machines: You can use the WMI class Win32_ComputerSystem with a property of Model like “Virtual Machine” and property Manufacturer of “Microsoft Corporation”. You can also use class Win32_BaseBoard for the Product property, which will be “Virtual Machine” and property Manufacturer that will be “Microsoft Corporation”.

image

Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to check, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMWare, Citrix Xen, KVM, Oracle Virtual Box, etc., you’ll have to see what shows for Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!
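As a sketch, the WQL behind such a filter would be something along these lines (config fragment for the WMI filter query):

```
SELECT * FROM Win32_ComputerSystem WHERE Manufacturer = "Microsoft Corporation" AND Model = "Virtual Machine"
```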

Which reminds me - Tad is back:

image

Question

The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

But when I run Install-ADDSDomainController -DomainName corp.contoso.com -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

If Microsoft recommends that all domain controllers provide DNS, why do I need to specify -InstallDNS argument?

Answer

The output of DNS Server: No is a cosmetic issue with the output of -whatif. It should say Yes, but doesn't unless you specifically use the $true parameter. You don't have to specify -InstallDns; the cmdlet will automatically* install the DNS server unless you specify -InstallDns:$false.

* If you are using Windows DNS on domain controllers, that is. The UTG isn't totally accurate in this version (but will be in the next). The logic is that if the domain already hosts DNS, all subsequent DCs will also host DNS by default. So to be very specific:

1. New forest: always install DNS
2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS
3. Replica: if the current domain hosts DNS, install DNS
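So in a domain that already hosts Windows DNS, these two sketches behave differently (parameter casing per the AD DS deployment cmdlets; domain name from the question):

```
# Default: the new replica DC also gets the DNS Server role.
Install-ADDSDomainController -DomainName corp.contoso.com

# Explicitly opt out of installing DNS on this DC.
Install-ADDSDomainController -DomainName corp.contoso.com -InstallDns:$false
```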

Question

How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?

Answer

The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

1. Create a text file that contains all your DC in a forest, in a line-separated list:

2008r2-01
2008r2-02

2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled -yes -s %i

For instance:

image

You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

For instance:

image

If you weren't strictly opposed to AD replication (short-circuiting it like this isn't going to stop eventual replication traffic), you could always disable the user on one DC and then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta sync-adobject cmdlet.

image
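For example (the server names and object DN here are hypothetical):

```
:: Replicate just the one changed object from DC01 out to DC02
repadmin /replsingleobj DC02 DC01 "CN=Some User,OU=Staff,DC=contoso,DC=com"
```

The Windows Server "8" Beta PowerShell equivalent would be along the lines of Sync-ADObject -Object "CN=Some User,OU=Staff,DC=contoso,DC=com" -Source DC01 -Destination DC02.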

The Internet also has many further thoughts on this. It's a very opinionated place.

Question

We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?

Answer

Not the way you are doing it. DFSR has to replicate changes, and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" – modifying perms on lower-level folders first in some sort of staged fashion to minimize the amount of replication that has to occur – but that just sounds like a recipe for getting things wrong or replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and, next time, treat this as a lesson learned: plan your security design so that your entire user base fits into the model using groups.

However…

It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk (under some but not all circumstances - more on this when I write a proper post after RTM). It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.

image

Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.

Question

[Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:

image

It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.

Answer

The enumeration contexts have a finite, hardcoded lifetime, and you will get an error if they expire. You might see this error when executing searches that sift through a huge quantity of data using limited indexed attributes and return a small result set. If you hit a DC that is not very busy, the query runs faster and may have enough time to complete even for a big dataset like this one. Server hardware is also a factor here. You can also try starting the search at a deeper level, or tweak the indexes – although obviously not in this case.

[For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]

Question

Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?

Answer

DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users; the file "copying" to the staging folder is effectively instantaneous and non-exclusive. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

For a file under 64KB, it is simply replicated without staging and that operation of making a copy and sending it into RPC is so fast there’s no reasonable way for anyone to ever see any issues there. I have certainly never seen it, for sure, and I should have by now after six years.

Question

Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?

Answer

Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's just a by-design limitation of USMT and not a bug or anything. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).

Question

One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

Windows Server 2008

C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Windows Server 2003

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

1. Can we remove the "Everyone" group and grant permissions to another group instead - Authenticated Users, for example?

2. Will replacing that default cause issues?

3. Why is this set like this by default?

Answer

[Courtesy of:

image

]

These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

1. Replacing Everyone with Authenticated Users probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow are the ability to create a file, read and write data. The permissions do not allow any process except System to launch any executable code.

Question

If I specify a USMT 4.0 config.xml child node to prevent migration, I still see the settings migrate. But if I set the parent node, those settings do not migrate - with the consequence that no child nodes migrate, which I do not want.

For example, on XP the Dot3Svc service is set to Manual startup.  On Win7, I want the Dot3Svc service set to Automatic startup.  If I use this config.xml on the loadstate, the service is set to manual like the XP machine and my "no" setting is ignored:

<component displayname="Networking Connections" migrate="yes" ID="network_and_internet\networking_connections">

  <component displayname="Microsoft-Windows-Wlansvc" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-VWiFi" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-RasConnectionManager" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-RasApi" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-PeerToPeerCollab" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-Native-80211" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-MPR" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-Dot3svc" migrate="no" ID="<snip>"/>

</component>

Answer

Two different configurations can cause this symptom:

1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

It’s a best practice to use a config.xml in scanstate as described in http://support.microsoft.com/kb/2481190 if going from x86 to x64; without it, you end up with damaged COM settings. Beyond that, you only need to generate per-OS config.xml files if you plan to change default behavior. All the manifests run by default if there is a config.xml with no modifications or if there is no config.xml at all.

Besides being required to block settings coming from XP, you should also definitely lean towards using config.xml on the scanstate rather than the loadstate. If going Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I’d still recommend sticking with the scanstate; it’s typically better to block migration from adding things to the store, as that will be faster and leaner.

Other Stuff

[Many courtesy of our pal Mark Morowczynski -Ned]

Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

Speaking of Chicago…

All the new MCSE and certification news reminded me of the other side to that coin.

Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.

image

If you call yourself an IT Pro, you owe it to yourself to visit moviecarposters.com right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

clip_image002[6]clip_image004clip_image006
Sigh - there is never going to be another Firefly

And finally…

I started re-reading Terry Pratchett, picking up from where I left off as a kid. Hooked again. Damn you English writers, with your understated awesomeness!

Ok, maybe not all English Writers…

image

Until next time,

- Ned "Jonathan is seriously going to kill me" Pyle

How to NOT Use Win32_Product in Group Policy Filtering


Hi all, Ned here again. I have worked many slow boot and slow logon cases over my career. The Directory Services support team here at Microsoft owns a sizable portion of those operations - user credentials, user profiles, logon and startup scripts, and of course, group policy processing. If I had to pick the thing customers routinely point the initial finger at, it's GP. Perhaps that's because group policy is the least well-understood part of the process, or maybe because it's the one with the most administrative fingers in the pie. In reality, though, group policy is usually not the culprit. Our new changes in Windows 8 will help you make that determination much more quickly.

Today I am going to talk about one of those times that GPO is the villain. Well, sort of... he's at least an enabler. More appropriately, the optional WMI Filtering portion of group policy using the Win32_Product class. Win32_Product has been around for many years and is both an inventory and administrative tool. It allows you to see all the installed MSI packages on a computer, install new ones, reinstall them, remove them, and configure them. When used correctly, it's a valuable option for scripters and Windows PowerShell junkies.

Unfortunately, Win32_Product also has some unpleasant behaviors. It uses a provider DLL that validates the consistency of every installed MSI package on the computer - or off of it, if using a remote administrative install point. That makes it very, very slow.

Where people usually trip up is group policy WMI filters. Perhaps the customer wants to apply managed Internet Explorer policy based on the IE version. Maybe they want to set AppLocker or Software Restriction policies only if the client has a certain program installed. Perhaps even use - yuck - Software Installation policy in a more controlled fashion.

Today I talk about some different options. Mike didn’t write this but he had some good thoughts when we talked about this offline so he gets some credit here too. A little bit. Tiny amount, really. Hardly worth mentioning.

If you have no idea what group policy WMI filters are, start here:

Back? Great, let's get to it.

Don’t use Win32_Product

The Win32_Product WMI class is part of the CIMV2 namespace and is implemented by the MSI provider (msiprov.dll and its associated msi.mof); it lists and validates installed installation packages. You will see MsiInstaller event 1035 in the Application log for each application queried by the class:

Source: MsiInstaller
Event ID: 1035
Description:
Windows Installer reconfigured the product. Product Name: <ProductName>. Product Version: <VersionNumber>. Product Language: <languageID>. Reconfiguration success or error status: 0.

And constantly repeated System events:

Event Source: Service Control Manager

Event ID: 7035

Description:

The Windows Installer service was successfully sent a start control.

 

Event Type: Information

Event Source: Service Control Manager

Event ID: 7036

Description:

That validation piece is the real speed killer. So much so, in fact, that it can lead to group policy processing taking many extra minutes in Windows XP when you use this class in a WMI filter - or even cause processing to time out and fail altogether. This is even more likely when:

  • The client contains many installed applications
  • Installation packages are sourced from remote file servers
  • Installation packages use certificate validation and the user cannot access the certificate revocation list for that package
  • Your client hardware is… crusty.

Furthermore, Windows Vista and later Windows versions cap WMI filter execution time at 30 seconds; if a filter fails to complete by then, it is treated as FALSE. On those OS versions, it will often appear that Win32_Product simply doesn’t work at all.
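Conceptually, that 30-second cap is a timeout that fails closed: a filter that cannot answer in time is simply treated as not matching. A rough illustration (Python; the names and the shortened timeout are hypothetical - real evaluation happens inside the Group Policy service, not like this):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

WMI_FILTER_TIMEOUT = 0.1  # stand-in for the real 30-second cap

def evaluate_filter(query_func):
    """Return the filter result, or False if evaluation exceeds the cap."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(query_func)
        try:
            return future.result(timeout=WMI_FILTER_TIMEOUT)
        except TimeoutError:
            return False  # filter treated as FALSE; the GPO does not apply

fast = lambda: True                        # cheap registry-provider style query
slow = lambda: (time.sleep(0.5), True)[1]  # Win32_Product-style validation crawl

print(evaluate_filter(fast))  # True
print(evaluate_filter(slow))  # False
```

The punchline: a Win32_Product filter that would eventually have matched still evaluates to FALSE once it blows the cap, so the policy silently never applies.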

image

What are your alternatives?

Group Policy Preferences, maybe

Depending on what you are trying to accomplish, Group Policy Preferences could be the solution. GPP includes item-level targeting, which offers fast, efficient filtering on just about any criteria you can imagine. If you are trying to set computer-based settings that a user cannot change and don’t mind preferences instead of managed policy settings, GPP is the way to go. As with all software, make sure you evaluate our latest patches to ensure it works as desired. As of this writing, those are:

For instance, let's say you have a plotting printer that Marketing cannot correctly use without special Contoso client software. Rather than using managed computer policy to control client printer installation and settings, you can use GPP Registry or Printer settings to modify the values needed.

image

Then you can use Item Level Targeting to control the installation based on the specialty software's presence and version.

image

image

Alternatively, you can use the registry and file system for your criteria, which works even if the software doesn't install via MSI packages:

image

An alternative to Win32_Product

What to do if you really, really need to use a WMI filter to determine MSI installed versions and names though? If you look around the Internet, you will find a couple of older proposed solutions that - to be frank - will not work for most customers.

  1. Use the Win32reg_AddRemovePrograms class instead.
  2. Use a custom class (like described here and frequently copied/pasted on the Interwebz).

The Win32reg_AddRemovePrograms class is not present on most client systems, though; it is a legacy class, first delivered by the old SMS 2003 management WMI system. I suspect one of the reasons the System Center folks discarded it years ago for their own native inventory system is the same reason the custom class in #2 doesn’t work - it doesn’t return 32-bit software installed on 64-bit computers. The class has not been updated since its initial release 10 years ago.

#2 had the right idea, though, at least as a valid customer workaround to avoid using Win32_Product: by creating your own WMI class that uses the generic registry provider to examine just the MSI uninstall registry keys, you get a fast and simple query that reasonably detects installed software. Armed with the "how", you can also extend this to any kind of registry queries you need, without risk of tanking group policy processing. To do this, you just need notepad.exe and a little understanding of WMI.

Roll Your Own Class

Windows Management Instrumentation uses Managed Object Format (MOF) files to describe Common Information Model (CIM) classes. You can create your own MOF files and compile them into the CIM repository using a simple command-line tool called mofcomp.exe.

You need to be careful here. Once you write your MOF, you should validate it by using the mofcomp.exe -check argument on your standard client and server images. You should also test it on those same machines using the -class:createonly argument (and without setting the -autorecover argument or the #PRAGMA AUTORECOVER preprocessor directive) to ensure the class doesn't already exist. The last thing you want to do is break some other class.

When done testing, you're ready to give it a go. Here is a sample MOF, wrapped for readability. Note the highlighted sections that describe what the MOF examines and what the group policy WMI filter can use as query criteria. Unlike the oft-copied sample, this one understands both the normal native-architecture registry path and the Wow6432node path that covers 32-bit applications installed on a 64-bit system.

Start copy below =======>

// "AS-IS" sample MOF file for returning the two uninstall registry subkeys

// Unsupported, provided purely as a sample

// Requires compilation. Example: mofcomp.exe sampleproductslist.mof

// Implements sample classes: "SampleProductList" and "SampleProductlist32"

//   (for 64-bit systems with 32-bit software)

 

#PRAGMA AUTORECOVER

 

[dynamic, provider("RegProv"),

ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"),ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]

class SampleProductsList {

[key] string KeyName;

[read, propertycontext("DisplayName")] string DisplayName;

[read, propertycontext("DisplayVersion")] string DisplayVersion;

};

 

[dynamic, provider("RegProv"),

ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"),ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432node\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]

class SampleProductsList32 {

[key] string KeyName;

[read, propertycontext("DisplayName")] string DisplayName;

[read, propertycontext("DisplayVersion")] string DisplayVersion;

};

<======= End copy above

Examining this should also give you interesting ideas about other registry-to-WMI possibilities, I imagine.

Test Your Sample

Copy this sample to a text file named with a MOF extension, store it in the %systemroot%\system32\wbem folder on a test machine, and then compile it from an administrator-elevated CMD prompt using mofcomp.exe filename. For example:

image

To test if the sample is working you can use WMIC.EXE to list the installed MSI packages. For example, here I am on a Windows 7 x64 computer with Office 2010 installed; that suite contains both 64 and 32-bit software so I can use both of my custom classes to list out all the installed software:

image

Note that I did not specify a namespace in the sample MOF, which means it updates the \\root\default namespace instead of the more commonly used \\root\cimv2 namespace. This is intentional: the Windows XP implementation of the registry provider is in the Default namespace, so this makes your MOF OS-agnostic. It will work perfectly well on XP, 2003, 2008, Vista, 7, or even the Windows 8 family. Moreover, I don’t like updating the CIMv2 namespace if I can avoid it - it already has enough classes and is a bit of a dumping ground.

Deploy Your Sample

Now I need a way to get this MOF file to all my computers. The easiest way is to return to Group Policy Preferences; create a GPP policy that copies the file and creates a scheduled task to run MOFCOMP at every boot up (you can change this scheduling later or even turn it off, once you are confident all your computers have the new classes).

image

image

image

image

You can also install and compile the MOF manually, use psexec.exe, make it part of your standard OS image, deploy it using a software distribution system, or whatever. The example above is just that - an example.

Now that all your computers know about your new WMI class, you can create a group policy WMI filter that uses it. Here are a couple examples; note that I remembered to change the namespace from CIMv2 to DEFAULT!

image

image

image
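If the screenshots are hard to make out, the filters boil down to WQL queries against the sample classes, run in the Default namespace. A hypothetical example (the product name here is made up for illustration) that applies a GPO only when a given 64-bit package is present:

```
Namespace: root\default
Query:     SELECT * FROM SampleProductsList WHERE DisplayName = "Contoso Plotter Client"
```

For 32-bit software on an x64 system, query SampleProductsList32 instead, or combine both classes in two filter entries.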

You're in business with a system that, while not optimal, is certainly far better than Win32_Product. It’s fast and lightweight, relatively easy to manage, and like all adequate solutions, designed not to make things worse in its efforts to make things different.

And another idea (updated 4/23)

AskDS contributor Fabian Müller had another idea that he uses with customers:

1. Define environment variables using GPP based on Registry item-level targeting filters, or just deploy the variables during the software installation phase, e.g. %IEversion% = 9

2. Use this environment variable in WMI filters like this: Root\CIMV2;SELECT VARIABLEVALUE FROM Win32_Environment WHERE NAME='IEversion' AND VARIABLEVALUE='9'

Disadvantage: The first computer start or user logon will not pass the WMI filter, since the environment variable has to be created first (if set by GPP). It would be better to have the environment variable created during software installation/deployment.

Advantage: The environment-variable WMI query is very fast by comparison. And you can use it for multiple purposes - for example, as part of CMD-based startup and logon scripts.

An aside

Software Installation policy is not designed to be an enterprise software management solution, and neither are individual applications' self-update systems. SI works fine in a small business network as a "no frills" solution, but it doesn’t offer real monitoring or remediation and requires too much of the administrator to manage. If you are using these because of the old "we only fix IT when it's broken" answer, one argument you might take to management is that you are broken and operating at great risk: you have no way to deploy non-Microsoft updates in a timely and reliable fashion.

Even though the free Windows Update and Windows Server Update Services support Windows, Office, SQL, and Exchange patching, that’s probably not enough; anyone with more than five minutes in the IT industry knows that all of your software should be receiving periodic security updates. Does anyone here still think it's safe to run Adobe, Oracle, or thousands of other vendor products without controlled, monitored, and managed patching? If your network doesn't have a real software patching system, it's like a building with no sprinklers or emergency exits: nothing to worry about… until there's a fire. You wouldn’t run computers without anti-virus protection, yet the number of customers I speak to that have zero security patching strategy is very worrying.

It's not 1998 anymore, folks. A software and patch management system isn’t optional anymore if you have a business with more than a hundred computers; those days are done for everyone. Even for Apple, although they haven't realized it yet. We make System Center, but there are other vendors out there too, and I’d rather you bought a competing product than have no patch management at all.

Until next time,

- Ned "pragma-tism" Pyle

New Slow Logon, Slow Boot Troubleshooting Content


Hi all, Ned here again. We get emailed here all the time about issues involving delays in user logons. Often enough that, a few years back, Bob wrote a multi-part article on the subject.

Taking it to the next level, some of my esteemed colleagues have created a multi-part TechNet Wiki series on understanding, analyzing, and troubleshooting slow logons and slow boots. These include:

Before you shrug this off, consider the following example, where we assume for our hypothetical company:

  • Employees work 250 days per year (50 weeks * 5 days per week)
  • Employee labor costs $2 per minute
  • Each employee boots and logs on to a single desktop computer only once per day
  • There are 25 and 30 seconds of removable delay in the boot and logon operations, respectively

That means an annual cost of:

image
Benjamin Franklin would not be pleased

Even if you take just the understated US Bureau of Labor private-sector compensation cost numbers (roughly $0.50 average employee total compensation cost per minute), you are still hemorrhaging cash. And those numbers just cover direct compensation and benefit costs, not all the other overhead that goes into an employee - plus the fact that they are not producing anything during that time; you are paying them to do nothing. Need I mention that the computer-using employees are probably costing you nearly twice that number?
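The arithmetic behind the figure above is simple enough to check yourself. A quick sketch in Python using the hypothetical company's bullet-point inputs (the 1,000-employee headcount at the end is my own assumption, purely for illustration):

```python
# Inputs from the hypothetical company above
work_days_per_year = 250           # 50 weeks * 5 days per week
labor_cost_per_minute = 2.00       # dollars
removable_delay_seconds = 25 + 30  # boot delay + logon delay, once per day

daily_cost = (removable_delay_seconds / 60) * labor_cost_per_minute
annual_cost_per_employee = daily_cost * work_days_per_year
print(round(annual_cost_per_employee, 2))  # 458.33 dollars per employee per year

# Assumption for illustration only: a 1,000-employee company
print(round(annual_cost_per_employee * 1000))  # 458333
```

Scale that per-employee number by your own headcount and labor rate; even at the BLS's $0.50/minute, the total stays painful.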

Get to reading, people – this is a big deal.

- Ned “a penny saved is a penny earned” Pyle

Monthly Mail Sack: Yes, I Finally Admit It Edition


Heya folks, Ned here again. Rather than continue the lie that this series comes out every Friday like it once did, I am taking the corporate approach and rebranding the mail sack. Maybe we’ll have the occasional Collector’s Edition versions.

This week month, I answer your questions on:

Let’s incentivize our value props!

Question

Everywhere I look, I find documentation saying that when Kerberos skew exceeds five minutes in a Windows forest, the sky falls and the four horsemen arrive.

I recall years ago at a Microsoft summit when I brought that time skew issue up and the developer I was speaking to said no, that isn't the case anymore, you can log on fine. I recently re-tested that and sure enough, no amount of skew on my member machine against a DC prevents me from authenticating.

Looking at the network trace, I see the KRB_APP_ERR_SKEW response to the AS REQ, followed by the Kerberos connection being torn down and immediately re-established, and then another AS REQ that works just fine and is answered with a proper AS REP.

My first question is.... Am I missing something?

My second question is... While I realize that third party Kerb clients may or may not have this functionality, are there instances where it doesn't work within Windows Kerb clients? Or could it affect other scenarios like AD replication?

Answer

Nope, you’re not missing anything. If I try to logon from my highly-skewed Windows client and apply group policy, the network traffic will look approximately like:

Frame | Source | Destination | Packet Data Summary
----- | ------ | ----------- | -------------------
1     | Client | DC          | AS Request Cname: client$ Realm: CONTOSO.COM Sname:
2     | DC     | Client      | KRB_ERROR - KRB_AP_ERR_SKEW (37)
3     | Client | DC          | AS Request Cname: client$ Realm: CONTOSO.COM Sname: krbtgt/CONTOSO.COM
4     | DC     | Client      | AS Response Ticket[Realm: CONTOSO.COM, Sname: krbtgt/CONTOSO.COM]
5     | Client | DC          | TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
6     | DC     | Client      | KRB_ERROR - KRB_AP_ERR_SKEW (37)
7     | Client | DC          | TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
8     | DC     | Client      | TGS Response Cname: client$

When your client sends a time stamp that is outside the range of Maximum tolerance for computer clock synchronization, the DC comes back with that KRB_APP_ERR_SKEW error - but the error also contains an encrypted copy of the DC's own time stamp. The client uses that to create a valid time stamp to send back. This doesn’t decrease security in the design, because we are still using encryption and requiring knowledge of the secrets, plus there are still only - by default - 5 minutes for an attacker to break the encryption and start impersonating the principal or attempt replay attacks. That is not feasible with even XP’s 11-year-old cipher suites, much less Windows 8’s.
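The retry dance in the trace boils down to: fail with skew, learn the KDC's time from the error, resend with an adjusted timestamp. A rough sketch (Python; the function and field names are hypothetical and this is nothing like the actual Windows Kerberos implementation):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # "Maximum tolerance for computer clock synchronization"

def kdc_check(client_time, kdc_time):
    """KDC side: reject a timestamp outside the allowed skew. On rejection,
    the error carries the KDC's own time (the protected KRB-ERROR case)."""
    if abs(client_time - kdc_time) > MAX_SKEW:
        return ("KRB_AP_ERR_SKEW", kdc_time)
    return ("OK", None)

def client_authenticate(local_clock, kdc_time):
    """Client side: on KRB_AP_ERR_SKEW, store the clock difference and retry
    with an adjusted timestamp, as RFC 4120 describes."""
    status, server_time = kdc_check(local_clock, kdc_time)
    if status == "KRB_AP_ERR_SKEW":
        offset = server_time - local_clock  # stored for subsequent messages
        status, _ = kdc_check(local_clock + offset, kdc_time)
    return status

kdc_now = datetime(2012, 5, 1, 12, 0, 0, tzinfo=timezone.utc)
skewed_client = kdc_now + timedelta(hours=3)  # hopelessly wrong client clock
print(client_authenticate(skewed_client, kdc_now))  # OK
```

The client never touches its real clock; it just remembers the offset for subsequent Kerberos messages, which is why even a wildly skewed Windows client can still log on.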

This isn’t some Microsoft wackiness either – RFC 4120 states:

If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out.

If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message.

The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.

Hmmm… SHOULD. Here’s where things get more muddy and I address your second question. No one actually has to honor this skew correction:

  1. Windows 2000 didn’t always honor it. But it’s dead as fried chicken, so who cares.
  2. Not all third parties honor it.
  3. Windows XP and Windows Server 2003 do honor it, but there were bugs that sometimes prevented it (long gone, AFAIK). Later Windows OSes do of course and I know of no regressions.
  4. If the clock of the client computer is ahead of the domain controller's clock by more than the lifetime of a Kerberos ticket (10 hours, by default), the Kerberos ticket is invalid and auth fails.
  5. Some non-client logon application scenarios enforce the strict skew tolerance and don’t care to adjust, because of other time needs tied to Kerberos and security. AD replication is one of them - event LSASRV 40960 with extended error 0xc0000133 comes to mind in this scenario, as does trying to run DSSite.msc “replicate now” and getting back error 0x576 “There is a time and / or date difference between the client and the server.” I have recent case evidence of Dcpromo strictly enforcing the 5 minutes with Kerberos, even in Windows Server 2008 R2, although I have not personally tried to validate it. I’ve seen it with appliances and firewalls too.

With that RFC’s indecisiveness and the other caveats, we beat the “just make sure it’s no more than 5 minutes” drum in all of our docs and here on AskDS. It’s too much trouble to get into what-ifs.

We have a KB tucked away on this here but it is nearly un-findable.

Awesome question.

Question

I’ve found articles on using Windows PowerShell to locate all domain controllers in a domain, and even all GCs in a forest, but I can’t find one to return all DCs in a forest. Get-AdDomainController seems to be limited to a single domain. Is this possible?

Answer

It’s trickier than you might think. I can think of two ways to do this; perhaps commenters will have others. The first is to get the domains in the forest, then find one domain controller in each domain and ask it to list all the domain controllers in its own domain. This gets around the single-domain limitation of Get-ADDomainController (single line, wrapped).

(Get-ADForest).Domains | foreach {Get-ADDomainController -Discover -DomainName $_} | foreach {Get-ADDomainController -Filter * -Server $_} | ft HostName

The second is to go directly to the native .NET AD DS Forest class to return the domains for the forest, then loop through each one, returning the domain controllers (single line, wrapped).

[system.directoryservices.activedirectory.Forest]::GetCurrentForest().domains | foreach {$_.DomainControllers} | foreach {$_.hostname}

This also led to updated TechNet content. Good work, Internet!

Question

Hi, I've been reading up on RID issuance management and the new RID Master changes in Windows Server 2012. They still leave me with a question, however: why are RIDs even needed in a SID? Can't the SID be incremented on its own? The domain identifier seems to be an adequately large number, larger than the 30-bit RID anyway. I know there's a good reason for it, but I just can't find any material that says why there are separate domain and relative IDs in a SID.

Answer

The main reason is that a SID needs the domain identifier portion to have a contextual meaning. By using the same domain identifier on all security principals from a domain, we can quickly and easily identify SIDs issued from one domain or another within a forest. This is useful for a variety of security reasons under the hood.

That also allows a useful technique called “SID compression”, used where we want to save space in a user’s security data in memory. For example, let’s say I am a member of five domain security groups:

DOMAINSID-RID1
DOMAINSID-RID2
DOMAINSID-RID3
DOMAINSID-RID4
DOMAINSID-RID5

With a constant domain identifier portion on all five, I now have the option to store the domain SID portion once and associate the RIDs with it, without using up memory with duplicate data:

DOMAINSID-RID1
“-RID2
“-RID3
“-RID4
“-RID5
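To make the saving concrete, here's a toy model (Python; the SID value and the byte sizes are purely illustrative, not the actual LSA in-memory layout):

```python
# A domain SID prefix shared by every principal in the domain,
# followed by a per-principal RID (illustrative values).
DOMAIN_SID = "S-1-5-21-1004336348-1177238915-682003330"
rids = [512, 513, 1105, 1106, 1107]

# Uncompressed: store the full SID string for each group membership.
uncompressed = [f"{DOMAIN_SID}-{rid}" for rid in rids]

# "Compressed": store the shared domain identifier once, plus the bare RIDs.
compressed = (DOMAIN_SID, rids)

full_size = sum(len(s) for s in uncompressed)
packed_size = len(DOMAIN_SID) + len(rids) * 4  # assume 32-bit RIDs

print(uncompressed[0])          # S-1-5-21-1004336348-1177238915-682003330-512
print(full_size > packed_size)  # True - the shared prefix pays for itself
```

The more group SIDs share one domain identifier, the bigger the win, which is exactly why a constant domain portion makes the technique possible.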

The consistent domain portion also fixes a big problem: if SIDs held no domain context, keeping track of where they were issued would be a much bigger task. We’d need some sort of big master database (“The SID Master”?) in an environment that understood all forests and domains and local computers and everything. Otherwise, we’d have a higher chance of duplication through differing parts of a company. Since the domain portion of the SID is unique and the RID portion is an unsigned integer that only climbs, it’s pretty easy for RID Masters to take care of that case in each domain.

You can read more about this in coma-inducing detail here: http://technet.microsoft.com/en-us/library/cc778824.aspx.

Question

When I want to set folder and application redirection for our users in a different forest (with a forest trust) in our Remote Desktop Services server farm, I cannot find users or groups from the other domain. Is there a workaround?

Answer

The Object Picker in this case doesn’t allow you to select objects from the other forest - this is a limitation of the UI that the Folder Redirection folks put in place. They write their own FR GP management tools, not the GP team.

Windows, by default, does not process group policy from a user logon across a forest; it automatically uses loopback Replace. Therefore, you can configure a Folder Redirection policy in the resource domain for those users and link that policy to the OU in the domain where the Terminal Servers reside. Only users from the different forest should receive the folder redirection policy, which you can then base on a group in the local forest.

Question

Does USMT support migrating multi-monitor settings from Windows XP computers, such as which one is primary, the resolutions, etc.?

Answer

USMT 4.0 does not support migrating any monitor settings from any OS to any OS (screen resolution, monitor layout, multi-monitor, etc.). Migrating hardware settings and drivers from one computer to another is dangerous, so USMT does not attempt it. I strongly discourage you from trying to make this work through custom XML for the same reason - you may end up with unusable machines.

Starting in USMT 5.0, a new replacement manifest – Windows 7 to Windows 7, Windows 7 to Windows 8, or Windows 8 to Windows 8 only – named “DisplayConfigSettings_Win7Update.man” was added. For the first time in USMT, it migrates:

<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Connectivity\* [*]</pattern>
<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Configuration\* [*]</pattern>

This is OK on Win7 and Win8 because the OS itself knows what is valid and invalid in that context, and discards/fixes things as necessary. I.e., this is safe only because USMT doesn’t actually do anything but copy some values, relying on the OS to fix things after migration is over.

Question

Our proprietary application is having memory pressure issues, and they manifest when someone runs gpupdate or waits for GP to refresh; sometimes it’s bad enough to cause a crash.  I was curious if there was a way to stop the policy refresh from occurring.

Answer

Only in Vista and later does preventing refresh become even vaguely possible; you could prevent the group policy service from running at all (no, I am not going to explain how). The internet is filled with thousands of people repeating the myth that preventing GP refresh is possible with an imaginary registry value on Win2003/XP – it isn’t.

What you could do here is prevent background refresh altogether. See the policies in the “administrative templates\system\group policy” section of GP:

1. You could enable the policy “group policy refresh interval for computers” and apply it to that one server, setting the background refresh interval to 45 days (the max). That way the server would be far more likely to reboot in the meantime, for a Patch Tuesday or whatever, and never get a chance to refresh automatically.

2. You could also enable each of the group policy extension policies (ex: “disk quota policy processing”, “registry policy processing”) and set the “do not apply during periodic background processing” option on each one.  This may not actually prevent GPUPDATE /FORCE though – each CSE may decide to ignore your background refresh setting; you will have to test, as this sounds boring.

Keep in mind for #1 that there are two of those background refresh policies – one per user (“group policy refresh interval for users”) and one per computer (“group policy refresh interval for computers”). They both operate in terms of each boot up or each interactive logon, on a per-computer/per-user basis respectively. I.e. if you log on as a user, you apply your policy. Policy will then not refresh for 45 days for that user if you were to stay logged on that whole time. If you log off at 22 days and log back on, you apply policy again, because that is not a refresh – it’s interactive logon foreground policy application.

Ditto for computers, only replace “logon” with “boot up”. So it will apply policy at every boot up but, with a 45-day interval, never again until the next boot up.

After those thoughts… get a better server or a better app. :)

Question

I’m testing Virtualized Domain Controller cloning in Windows Server 2012 on Hyper-V and I have DCs with snapshots. Bad bad bad, I know, but we have our reasons and we at least know that we need to delete them when cloning.

Is there a way to keep the snapshots on the source computer, but not use VM exports? I.e. I just want the new copied VM to not have the old source machine’s snapshots.

Answer

Yes, through the new Hyper-V disk management Windows PowerShell cmdlets or through the management snap-in.

Graphical method

1. Examine the settings of your VM and determine which disk is the active one. When using snapshots, it will be an AVHD/X file.

image

2. Inspect that disk and you see the parent as well.

image

3. Now use the Edit Disk… option in the Hyper-V manager to select that AVHD/X file:

image

4. Merge the disk to a new copy:

image

image

Windows PowerShell method

Much simpler, although slightly counter-intuitive. Just use:

Convert-vhd

For example, to export the entire chain of a VM's disk snapshots and parent disk into a new single disk with no snapshots named DC4-CLONED.VHDX:

image
Violin!

You don’t actually have to convert the disk type in this scenario (note how I went from dynamic to dynamic). There is also Merge-VHD for more complex differencing disk and snapshot scenarios, but it requires some extra finagling and disk copying, and isn’t usually necessary. The graphical merge option works well there too.
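In case the screenshot isn’t legible on your screen, here’s the rough shape of that command as a sketch – the VM name and paths are hypothetical, so substitute your own. Point -Path at the VM’s current AVHDX file and Convert-VHD walks the parent chain for you:

```powershell
# Hypothetical paths - use the active AVHDX you identified in the VM's settings.
# Convert-VHD merges the snapshot chain into one standalone disk at the destination.
Convert-VHD -Path 'D:\VMs\DC4\DC4_Snapshot.avhdx' `
            -DestinationPath 'D:\VMs\DC4-CLONED.VHDX' `
            -VHDType Dynamic
```

The source VM and its snapshots are left untouched; only the new DC4-CLONED.VHDX is written.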

As a side note, the original Understand And Troubleshoot VDC guide now redirects to TechNet. Coming soon(ish) is an RTM-updated version of the original guide, in web format, with new architecture, troubleshooting, and other info. I robbed part of my answer above from it – as you can tell by the higher quality screenshots than you usually see on AskDS – and I’ll be sure to announce it. Hard.

Question

It has always been my opinion that if a DC with a FSMO role went down, the best approach is to seize the role on another DC, rebuild the failed DC from scratch, then transfer the role back. It’s also been my opinion that as long as you have more than one DC, and there has not been any data loss, or corruption, it is better to not restore.

What is the Microsoft take on this?

Answer

This is one of those “it depends” scenarios:

1. The downside to restoring from (usually proprietary) backup solutions is that the restore process just isn’t something most customers test and work the kinks out of until it actually happens; tons of time is spent digging out the right tapes, finding the right software, looking up the restore process, contacting the vendor, etc. Oftentimes a restore doesn’t work at all, so all the attempts are just wasted effort. I freely admit that my judgment is tainted by my MS Support experience here – customers do not call us to say how great their backups worked, only that they have a down DC and they can’t get their backups to restore.

The upside is if your recent backup contained local changes that had never replicated outbound due to latency, restoring them (even non-auth) still means that those changes will have a chance to replicate out. E.g. if someone changed their password or some group was created on that server and captured by the backup, you are not losing any changes. It also includes all the other things that you might not have been aware of – such as custom DFS configurations, operating as a DNS server that a bunch of machines were solely pointed to, 3rd party applications pointed directly to the DC by IP/Name for LDAP or PDC or whatever (looking at you, Open Source software!), etc. You don’t have to be as “aware”, per se.

2. The downside to seizing the FSMO roles and cutting your losses is the converse of my previous point around latent changes; those objects and attributes that could not replicate out but were caught by the backup are gone forever. You also might miss some of those one-offs where someone was specifically targeting that server – but you will hear from them, don’t worry; it won’t be too hard to put things back.

The upside is you get back in business much faster in most cases; I can usually rebuild a Win2008 R2 server and make it a DC before you even find the guy that has the combo to the backup tape vault. You also don’t get the interruptions in service for Windows from missing FSMO roles, such as DCs that were low on their RID pool and now cannot retrieve more (this only matters with the default pool size, obviously; some customers raise their pool sizes to combat this effect). It’s typically a more reliable approach too – after all, your backup may contain the same time bomb of settings or corruption or whatever that made your DC go offline in the first place. Moreover, the backup is unlikely to contain the most recent changes regardless – backups usually run overnight, so any un-replicated originating updates made during the day are going to be nuked in both cases.

For all these reasons, we in MS Support generally recommend a rebuild rather than a restore, all things being equal. Ideally, you fix the actual server and do neither!

As a side note, restoring the RID master used to cause issues that we first fixed in Win2000 SP3. This has unfortunately lived on as the myth that you cannot safely restore the RID master. Nevertheless, if someone impatiently seizes that role and then someone else restores that backup, you get a new problem where you cannot issue RIDs anymore. Your DC will also refuse to claim role ownership with a restored RID Master (or any FSMO role) if your restored server has an AD replication problem that prevents at least one good replication with a partner. Keep those in mind for planning, no matter how the argument turns out!

Question

I am trying out Windows Server 2012 and its new Minimal Server Interface. Is there a way to use WMI to determine if a server is running with a Full Installation, Core Installation, or a Minimal Shell installation?

Answer

Indeed, although it hasn’t made its way to MSDN quite yet. The Win32_ServerFeature class returns a few new properties in our latest operating system. You can use WMIC or Windows PowerShell to browse the installed ones. For example:

image

The “99” ID is Server Graphical Shell, which means, in practical terms, “Full Installation”. If 99 alone is not present, that means it’s a MinShell server. If the “478” ID is also missing, it’s a Core server.

E.g. if you wanted to apply some group policy that only applied to MinShell servers, you’d set your query to return true if 99 was not present but 478 was present.
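If you’d rather check from PowerShell than build the WMI query by hand, here’s a minimal sketch of that same 99/478 logic (assuming the Win32_ServerFeature class is available, i.e. you’re on Windows Server 2012):

```powershell
# 99  = Server Graphical Shell (present only on a Full Installation)
# 478 = Graphical Management Tools and Infrastructure (present on Full and MinShell)
$ids = (Get-WmiObject -Class Win32_ServerFeature).Id

if ($ids -contains 99)      { 'Full Installation' }
elseif ($ids -contains 478) { 'Minimal Server Interface' }
else                        { 'Server Core' }
```

The same test translates directly into a WMI filter for the group policy scenario above.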

Other Stuff

Speaking of which, Windows Server 2012 General Availability is September 4th. If you manage to miss the run up, you might want to visit an optometrist and/or social media consultant.

Stop worrying so much about the end of the world and think it through.

So awesome:


And so fake :(

If you are married to a psychotic Solitaire player who poo-poo’ed switching totally to the Windows 8 Consumer Preview because they could not get their mainline fix of card games, we have you covered now in Windows 8 RTM. Just run the Store app and swipe for the Charms Bar, then search for Solitaire.

image

It’s free and exactly 17 times better than the old in-box version:

image
OMG Lisa, stop yelling at me! 

Is this the greatest geek advert of all time?


Yes. Yes it is.

When people ask me why I stopped listening to Metallica after the Black Album, this is how I reply:

Hetfield in Milan
Ride the lightning Mercedes

We have quite a few fresh, youthful faces here in MS Support these days and someone asked me what “Mall Hair” was when I mentioned it. If you graduated high school between 1984 and 1994 in the Midwestern United States, you already know.

Finally – I am heading to Sydney in late September to yammer in-depth about Windows Server 2012 and Windows 8. Anyone have any good ideas for things to do? So far I’ve heard “bridge climb”, which is apparently the way Australians trick idiot tourists into paying for death. They probably follow it up with “funnel-web spider petting zoo” and “swim with the saltwater crocodiles”. Lunatics.

Until next time,

- Ned “I bet James Hetfield knows where I can get a tropical drink by the pool” Pyle
