Fix Windows Server 2012 R2 DFSR Event ID 4614

Recently had a ticket come in where a newly created domain with two DCs was not replicating properly. Upon logging into the DCs, I noted the following entry in the DFS Replication log:


The DFS Replication service initialized SYSVOL at local path S:\SYSVOL\domain and is waiting to perform initial replication. The replicated folder will remain in the initial synchronization state until it has replicated with its partner. If the server was in the process of being promoted to a domain controller, the domain controller will not advertise and function as a domain controller until this issue is resolved. This can occur if the specified partner is also in the initial synchronization state, or if sharing violations are encountered on this server or the synchronization partner. If this event occurred during the migration of SYSVOL from File Replication service (FRS) to DFS Replication, changes will not replicate out until this issue is resolved. This can cause the SYSVOL folder on this server to become out of sync with other domain controllers.

Additional Information:
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: C0037E37-EF20-4CDF-968A-932E669ED810
Replication Group Name: Domain System Volume
Replication Group ID: 35F23B97-543A-4310-A08E-5F28D6342C18
Member ID: 41E8F809-5114-48DC-8297-B7E866502101
Read-Only: 0

Working backwards through the log, I found this entry:


The DFS Replication service encountered an error communicating with partner AD-02 for replication group Domain System Volume.

Partner DNS address: AD-02.contoso.com

Optional data if available:
Partner WINS Address: AD-02
Partner IP Address: 192.168.1.4

The service will retry the connection periodically.

Additional Information:
Error: 1726 (The remote procedure call failed.)
Connection ID: 3E337B81-109F-4A97-880C-63E30F52E63F
Replication Group ID: 35F23B97-543A-4310-A08E-5F28D6342C18

I didn’t see that error on AD-02, but I did see several alerts like the one below leading up to when the above alert fired on AD-01.


The DNS server is waiting for Active Directory Domain Services (AD DS) to signal that the initial synchronization of the directory has been completed. The DNS server service cannot start until the initial synchronization is complete because critical DNS data might not yet be replicated onto this domain controller. If events in the AD DS event log indicate that there is a problem with DNS name resolution, consider adding the IP address of another DNS server for this domain to the DNS server list in the Internet Protocol properties of this computer. This event will be logged every two minutes until AD DS has signaled that the initial synchronization has successfully completed.

As both DCs are also DNS servers, I configured the DNS service to bind to the statically set IP address, which should once and for all rule out any sort of weird resolution issues. This had no effect either. There is not a lot else to go on from either server around the date when I think things went sideways. There is an odd 15-day gap in the DFSR log, but maybe that’s normal? I don’t see any indication of a problem in the Directory Services, System, or Application logs around that time.

I then installed the Active Directory Replication Status Tool (https://www.microsoft.com/en-us/download/details.aspx?id=30005); while it was interesting to run, it reported no issues. The next thing I ran was the health report from the DFS Management console, which told me more or less what I already knew: DFS was waiting on initial synchronization.

So the next thing was to figure out how to force that synchronization to happen. I found the following article (https://support.microsoft.com/en-us/kb/2218556) on Microsoft’s Support site, ran through the Authoritative Synchronization steps, and afterwards the servers were happy. I was able to drop a test file in SYSVOL and see it replicate to the other server, then delete that file and watch it drop off the other server.

While I don’t really know what the root cause of this issue was, hopefully the next time it crops up I’ll be able to figure it out a lot more quickly. One note on the steps below: I restarted the DFSR service each time after making changes in ADSI Edit.

Prior to running dfsrdiag, you may need to install that feature:

Add-WindowsFeature -Name RSAT-DFS-Mgmt-Con

In order to force Active Directory replication throughout the domain, you will run this command:


repadmin /syncall /APed

You want to force the non-authoritative synchronization of SYSVOL on a domain controller. In the File Replication Service (FRS), this was controlled through the D2 and D4 data values for the Burflags registry values, but these values do not exist for the Distributed File System Replication (DFSR) service. You cannot use the DFS Management snap-in (Dfsmgmt.msc) or the Dfsradmin.exe command-line tool to achieve this. Unlike custom DFSR replicated folders, SYSVOL is intentionally protected from any editing through its management interfaces to prevent accidents.

How to perform a non-authoritative synchronization of DFSR-replicated SYSVOL (like “D2” for FRS)

  1. In the ADSIEDIT.MSC tool modify the following distinguished name (DN) value and attribute on each of the domain controllers that you want to make non-authoritative:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
  2. Force Active Directory replication throughout the domain.
  3. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
    1. DFSRDIAG POLLAD
  4. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.
  5. On the same DN from Step 1, set:
    1. msDFSR-Enabled=TRUE
  6. Force Active Directory replication throughout the domain.
  7. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
    1. DFSRDIAG POLLAD
  8. You will see Event ID 4614 and 4604 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D2” of SYSVOL.
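If you’d rather script those attribute flips than click through ADSI Edit, the equivalent using the ActiveDirectory module looks roughly like this. Treat it as a sketch: the DC name and domain components are placeholders, so substitute your own.

Import-Module ActiveDirectory

# Placeholder DN -- substitute your own DC and domain
$dn = 'CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=AD-01,OU=Domain Controllers,DC=contoso,DC=com'

# Steps 1-3: disable the membership, replicate, and poll (watch for Event ID 4114)
Set-ADObject -Identity $dn -Replace @{'msDFSR-Enabled' = $false}
repadmin /syncall /APed
dfsrdiag pollad

# Steps 5-7: re-enable, replicate, and poll again (watch for Event IDs 4614 and 4604)
Set-ADObject -Identity $dn -Replace @{'msDFSR-Enabled' = $true}
repadmin /syncall /APed
dfsrdiag pollad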

How to perform an authoritative synchronization of DFSR-replicated SYSVOL (like “D4” for FRS)

  1. Stop the DFSR service on all domain controllers
  2. In the ADSIEDIT.MSC tool, modify the following DN and two attributes on the domain controller you want to make authoritative (preferably the PDC Emulator, which is usually the most up to date for SYSVOL contents):
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
    3. msDFSR-options=1
  3. Modify the following DN and single attribute on all other domain controllers in that domain:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
  4. Force Active Directory replication throughout the domain and validate its success on all DCs.
  5. Start the DFSR service on the domain controller that you set as authoritative.
  6. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.
  7. On the same DN from Step 1, set:
    1. msDFSR-Enabled=TRUE
  8. Force Active Directory replication throughout the domain and validate its success on all DCs.
  9. Run the following command from an elevated command prompt on the same server that you set as authoritative:
    1. DFSRDIAG POLLAD
  10. You will see Event ID 4602 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D4” of SYSVOL.
  11. Start the DFSR service on the other non-authoritative DCs. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated on each of them.
  12. Modify the following DN and single attribute on all other domain controllers in that domain:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=TRUE
  13. Run the following command from an elevated command prompt on all non-authoritative DCs (i.e. all but the formerly authoritative one):
    1. DFSRDIAG POLLAD
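The authoritative flavor can be scripted the same way. Here’s a sketch of steps 1 and 2 for the DC being made authoritative, with the same placeholder names as above:

# Step 1: stop DFSR (run on every DC)
Stop-Service -Name DFSR

# Step 2: on the DC being made authoritative (preferably the PDC Emulator)
$dn = 'CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=AD-01,OU=Domain Controllers,DC=contoso,DC=com'
Set-ADObject -Identity $dn -Replace @{'msDFSR-Enabled' = $false; 'msDFSR-Options' = 1}

# Step 3: set msDFSR-Enabled = FALSE on all the other DCs, then resume at step 4 above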

Week In Review : 06-15-2014

It’s time for another exciting edition of WIR! This week was filled with updates! We rolled updates to our Domain Controllers, and one of them took nearly two hours to come back from a reboot! Normally not a big deal, but when you’re 30 miles away… a little stressful! I also rebuilt my work laptop this week; earlier this year I had done something stupid with an external drive and wound up with Windows installed on partition 2 of a disk with just one partition! Needless to say, rebooting my laptop didn’t happen all that often.

Speaking of Active Directory domains, we are moving ever closer to having just one domain on campus. The internal private Edwards domain went away this week! It’s always just a little nerve-wracking running through dcpromo to remove things, but it went well, and it didn’t appear to leave any unsightly metadata floating around AD!

Also spent a fair amount of time talking with the guys at Edwards about how they image machines. They routinely call us to have a workstation’s DNS entry removed, and needless to say it’s a little annoying. They ought to be able to do this themselves, but since it’s not their DNS they don’t have rights. Not to mention the way they do their imaging is a little different.

This is how it goes: a user is up for a new computer. In an effort to minimize the inconvenience this can sometimes be, they image the new computer, load their software, and finally join it to the domain, tacking a “-1” onto the new workstation name. Normally not a big deal, but the last part is where it gets hairy.

The new workstation is delivered to the user, the old workstation is unjoined from the domain, the new computer is renamed to the old computer name… and boom. Sometimes this works (they say), but I can’t imagine how. So the first suggestion was: how about using service tags or MAC addresses to identify these machines uniquely, so you never get hit with this issue? Nope, they like usernames as computer names; it makes it easy to correlate user to workstation. Apparently it’s too difficult to track that down in SCCM? Not likely, but oh well.

So, what to do? Well, we could just have them call every time, but that’s a hassle, not to mention there’s no code involved! My solution: create an Orchestrator Runbook that is provided a computer name. With that information it scrubs AD and removes the DNS entry as well. The Runbook would run in the context of a service account that has rights to do this. They would simply log in with their admin account; we would use their group information to verify that the computer they want removed lives in their OU, and then remove it along with its DNS entry. If it doesn’t live in their OU, it fails. Sounds elegant to me 😉 The core of it is sketched below.
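The guts of that Runbook would only be a few lines of PowerShell. This is a sketch: the zone name is a placeholder, and the OU check is a simplified stand-in for the real group lookup.

param([string]$ComputerName, [string]$CallerOU)

Import-Module ActiveDirectory, DnsServer

# Only allow removal if the computer actually lives in the caller's OU
$computer = Get-ADComputer -Identity $ComputerName
if ($computer.DistinguishedName -notlike "*$CallerOU*") { throw 'Computer is not in your OU' }

Remove-ADComputer -Identity $ComputerName -Confirm:$false
Remove-DnsServerResourceRecord -ZoneName 'contoso.com' -RRType A -Name $ComputerName -Force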

A final solution, which will take much longer to implement, will be an appliance from BlueCat that sits between AD DNS and Proteus DNS. This appliance will use the Proteus web service and MS RPC to translate information between AD and DNS. This gets us to a very similar place as my Runbook idea, but with one advantage: it will also get us to a place where we can pull our AD DNS out of the public-facing DNS, effectively hiding thousands of servers and workstations.

Another fun one that happened: you can’t push the Ops client to a Domain Controller using SCCM Client Push. If someone tells you they can, they are lying to your face! I’m going to write up a post, but the short of it is that Client Push relies on a local administrator account to work, and how do you do that on a Domain Controller?

OH! I also polished off my SQL PowerShell, so I’ll write about that as well. It works pretty well; I created some new functions to let me more accurately find SQL instances. I still don’t have a good way to talk to the WID, but it’s kicking around in the back of my head.

I also broke Active Directory Certificate Services..see you next week!

Oh, I suppose we should talk about that? I’ve been slowly pulling servers out of the old Ops server and bringing them over to the new one. Doing pretty well: 230+ servers in the new and growing, and under 50 in the old. The Domain Controllers got pulled in this week, as well as the Certificate servers.

So I’m working through the alerts, tuning Ops so I only hear what I need to. I started getting alerts about ADCS (Active Directory Certificate Services) and started working on that issue. I was seeing errors about the CRL Distribution Point being offline.

As part of the troubleshooting I had already decided to stand up a vhost to hold CRLs, among other things. So I reconfigured the CA to use that, and after restarting the service as prompted by Windows, Certificate Services failed to start. The net result was that the CRLs were out of date and just needed to be published and then copied to the web location.

The only bit left is to automate both the publishing and the copying of the files over to the web server. Of course this seems well suited to a PowerShell solution; check back later for that!
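A first pass will probably be as simple as this (a sketch; the destination share is a placeholder):

# Publish a fresh CRL from the CA, then copy it out to the web server
certutil -crl
Copy-Item -Path 'C:\Windows\System32\CertSrv\CertEnroll\*.crl' -Destination '\\webserver\crldist$' -Force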

See you next week!

Week In Review : 06/08/2014

Still a lot of programming this week, but like I said before, I think that is more the norm than not anymore. We did some interesting Active Directory stuff this week. We had a handful of servers get their AD objects deleted at some point, and we found out about it at the beginning of this week. My guess is these were deleted close to three months ago, and they either rebooted recently or attempted to change their password recently.

About a year or more ago we changed our audit policy and started using Advanced Auditing. We were concerned with user account and group management, but it turns out we should have included computer account management as well. When a computer object is deleted, event 4743 is logged in the Security log of the domain controller. We searched and couldn’t find that entry anywhere; when I started researching that event is when I found you need to tick the boxes for computer account management.

Along those lines we had a similar issue: our admin accounts in our QA domain were disabled. Since we do very little auditing at all in there, I enabled the same features so we can see when that happens. When a user account is disabled, event 4725 is logged. To go along with both of these events, I’m going to update our reporting in Ops on things like this.
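With the auditing enabled, pulling these events back out is a one-liner with Get-WinEvent (the DC name here is a placeholder):

# 4743 = computer account deleted; 4725 = user account disabled
Get-WinEvent -ComputerName dc01 -FilterHashtable @{ LogName = 'Security'; Id = 4743, 4725 } |
    Select-Object TimeCreated, Id, Message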

While doing all this I found a very nice support article listing out the various event IDs and what they mean.

All of the servers that are supposed to report in to System Center Advisor are now doing so. I feel rather stupid about the original issue. My first problem was that I wasn’t patched up to where I needed to be in order to even use the preview, so that was step 1. The next part is where it gets a little fuzzy: I don’t actually recall patching the clients on any of the agents reporting in, yet all 3 domain controllers reported an updated agent. Coincidentally, those 3 domain controllers were the only servers showing up. After some investigating, the SCA guys from Microsoft quickly realized I had not patched my agents. So I must have patched the DCs, I just don’t recall doing it, hence the stupid.

The result of all this is a working program in SCCM that will patch outdated clients, which is good, as my next step in this whole saga is to patch production. It’s either patch or move over to R2, and currently I’m leaning towards patching. So now in QA, when a server gets discovered, the Ops client gets pushed down to it and patched. The only manual part of the process left is to add the server to the Advisor management pack.

It’s been lots of fun talking with these guys about stuff; I’ve been invited to participate in an SCA board to go over new features and talk about how things work. My recent experiences dealing with some of the internal folks at Microsoft really make me want to work there even more.

I’ve done some fun things with PowerShell this week. A new SQL module has been fleshed out and validated against just about all our instances of SQL. I’m still having a hard time working out a connection string for the Windows Internal Database, but it will come. I’ll most likely write about this module after I’m done with this WIR.

I’ve updated the Orchestrator module. The Start-scoRunbook function worked incredibly well if you only ever had one parameter; as soon as you throw more than one at it, it freaks out. How I originally handled it was dumb, so now the function requires a hashtable, and it compares each key (the parameter name) against what the Parameter object returns. This worked out extremely well; again, probably a topic for a whole blog post as well.
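Conceptually the comparison is something like this (a simplified sketch, not the actual module code):

# $RunbookParams is what the web service says the Runbook accepts;
# $Parameters is the hashtable the caller handed us
foreach ($key in $Parameters.Keys)
{
    if ($RunbookParams.Name -notcontains $key)
    {
        throw "Runbook does not define a parameter named '$key'"
    }
}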

One last pure PowerShell item is a function that writes functions. It’s not too terribly complicated and I *WILL* post about this later, but basically the idea is that Orchestrator contains Runbooks that perform some action; my module reads those Runbooks in, gets their parameters, and allows you, the admin, to run them. What if we could have a function that would build cmdlets from that information on the fly…

SharePoint Online! How much fun is it working with UserProfiles in SPO? Well, let me tell you: in order to do anything meaningful, it appears you have to access a 10-year-old web service that must be ripe for deprecation but has been forgotten about! I’d really like to get some more information direct from Microsoft about that. At any rate, I’ve got some POC code that will allow me to programmatically populate a SharePoint user’s profile with information that we glean from another source. The next step down this rabbit hole is using a 7-year-old SDK (Office Server 2007) to see if I can create UserProfile subtypes! I’ve got some examples of how this works, but I’ve not written anything up yet to see if it will go. Fun times ahead!

Keeping in line with the SharePoint Online topic: creating admin cloud accounts. We have an Azure subscription that gets us into Azure AD for our tenant, which isn’t anything special; if you have an Office 365 subscription, you can create an Azure account, hook the two together and boom… Azure AD! So I created an admin account for me and for one of the other guys on the project. After that I enabled Multi-Factor Authentication on these two accounts. Now, when I log in with my admin account, I receive a text message with a verification code. We have looked at this as THE way to secure access to these accounts as we begin to think about the cloud.

With that out of the way, I can talk about the Orchestration. I’ve created a Runbook that will connect to our tenant and provision a user. This came out of the provisioning piece of the larger SPO project. The code takes a single parameter, samaccountname, and then provisions that user in o365 with the appropriate licensing. There are two differences between an o365 user and a cloud admin. The first is licensing: a cloud admin gets none by default (our design). The second is the all-important UPN: user@tenant.onmicrosoft.com. The idea is that these accounts live solely in the cloud and are used specifically for administering cloud things. I have a couple of modifications in mind: first I need to populate the AlternateAddresses field as well as the MobilePhone field, then I need to see if I can enable MFA in Azure for these accounts automatically.
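The heart of the Runbook’s PowerShell looks something like this. It’s a sketch using the MSOnline cmdlets; the UPN, display name, and license SKU are all placeholders.

Import-Module MSOnline
$tenantCred = Get-Credential
Connect-MsolService -Credential $tenantCred

# Cloud-only admin: tenant UPN, no license assigned by default (our design)
New-MsolUser -UserPrincipalName 'jsmith-admin@tenant.onmicrosoft.com' -DisplayName 'Smith, John (Admin)' -UsageLocation 'US'

# A regular o365 user would also get licensed, e.g.:
# Set-MsolUserLicense -UserPrincipalName $upn -AddLicenses 'tenant:ENTERPRISEPACK'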

Lots of Orchestrator this week, but now that I’m ready and the network is ready, it’s time to start working on orchestrating Windows Updates. I’ve started a rough draft of that at the moment:

  1. basically get a list of servers (or service)
  2. for each server start maintenance mode (ops and Zenoss)
  3. get the applicable updates (SCCM perhaps)
  4. apply the updates
  5. reboot if needed
  6. make sure the server is back online
  7. check if required services are running
  8. leave maintenance mode
  9. and move on to the next server

If one server in a group fails, we need to stop the update process and throw an alert in Ops and Zenoss. This will prevent an entire service from going offline if the updates cause an issue.
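In script form, the draft looks roughly like this. Every function here except Restart-Computer is a hypothetical stand-in for a Runbook activity that doesn’t exist yet.

foreach ($server in $serverList)
{
    Start-MaintenanceMode -ComputerName $server      # Ops and Zenoss (hypothetical)
    Install-PendingUpdates -ComputerName $server     # via SCCM, perhaps (hypothetical)
    Restart-Computer -ComputerName $server -Wait -For PowerShell

    if (-not (Test-RequiredServices -ComputerName $server))    # hypothetical
    {
        Send-OpsAlert -Message "$server failed post-update checks"    # hypothetical
        break    # stop the group so the whole service doesn't go down
    }

    Stop-MaintenanceMode -ComputerName $server       # hypothetical
}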

System Center Orchestrator PowerShell Module

This is one I’ve had on the back burner for a while, so yesterday morning I roughed out the basic framework for a PowerShell module. I have a few Runbooks at work that it would be super cool to just run from PowerShell, and since lately I’ve been all up in the web services, this was as good a time as any.

The Get cmdlets were all pretty simple; in fact there is really only one that does any real work, Get-scoWebFeed. I probably could have used Invoke-WebRequest, but that’s no fun, so I used .NET to make my own, and it’s really pretty simple. I just ask the Orchestrator server (on a specially crafted URL) to spit out the XML, then return it.

The individual functions for getting Runbooks, Jobs, and Activities handle building the special URL, which isn’t really special so much as specific.
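Stripped to its essence, Get-scoWebFeed is just a few lines. This is a sketch, not the module’s exact code; grab the real thing from the links below.

Function Get-scoWebFeed
{
    # Ask the Orchestrator web service for the feed at the given URL and
    # return it as XML; pass-through Windows authentication is assumed
    param([string]$scoUri)

    $request = [System.Net.WebRequest]::Create($scoUri)
    $request.UseDefaultCredentials = $true

    $reader = New-Object System.IO.StreamReader($request.GetResponse().GetResponseStream())
    [xml]$reader.ReadToEnd()
}

# Example: the Orchestrator web service listens on port 81 by default
Get-scoWebFeed -scoUri 'http://orchestrator01:81/Orchestrator2012/Orchestrator.svc/Runbooks'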

Start-Runbook was the most complicated; I actually borrowed some code from MSDN and another guy’s blog (Part 1, Part 2) to build mine. It turns out some of the XML you have to send up has to be built in a certain way. I need to adjust my code to handle Runbooks with parameters, but right now it’s good for what I need it to do.

You can find the up to the minute code on GitHub, or you can find it in the TechNet Gallery.

Showing off some DSC Resources

Yesterday I wrote three articles (Part 1, Part 2, Part 3) about Desired State Configuration. I thought I would post a slightly more complex Configuration. This configuration performs several actions on the target node.

  1. Install the Web-Server feature
  2. Install several additional features that depend on the Web-Server
  3. Install the WebDeploy application
  4. Configure Windows Firewall to allow WebDeploy traffic

This particular Configuration has some nice features to note, the first of which are the parameters. You must provide a ComputerName (or GUID) and a path to the WebDeploy MSI. You can also optionally specify a source path for the features, which is useful if you have cloned a server.

You will also notice that nearly all the Resources use the DependsOn property. Since all the features are web server related, I set the DependsOn property to the WebServerRole. The documentation has this down as Requires, but I believe it changed after the docs came out.

The Package Resource installs WebDeploy. I was able to pull the ProductID from the MSI using ORCA (SDK Download). If you don’t have that installed, or don’t want to install it, you can install WebDeploy on a reference machine and query the ProductID from WMI.
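Something like this should do it on the reference machine, assuming WebDeploy registers itself in Win32_Product:

# IdentifyingNumber holds the ProductID GUID the Package resource wants
Get-WmiObject -Class Win32_Product |
    Where-Object { $_.Name -like '*Web Deploy*' } |
    Select-Object Name, IdentifyingNumber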

The Script Resource was a little more difficult for me, and thankfully I found a wonderful article that did the deep diving. Basically, a Script resource has three script blocks that need to run. The TestScript must evaluate to True or False; if the TestScript returns False, then the SetScript runs. The GetScript must return a hashtable, and the only thing it needs to contain is the Result property (though it can also include the contents of the GetScript, TestScript, and SetScript blocks). Finally, the SetScript is the script that does the thing you need done; in this example, it creates a firewall rule to allow port 8172.

So basically, when you run Start-DSCConfiguration, the script performs a test. If that test returns True, then we can assume the thing we need done is done. If it returns False, then we need to do the thing, whatever that thing is.

When you run Get-DSCConfiguration, the script gets the state of what we did, which is why all we need back is a Result.
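Putting all of those pieces together, the Configuration looks roughly like this. Consider it a sketch reconstructed from the description above: the feature list is abbreviated, the ProductId is a placeholder, and the firewall rule name is mine.

Configuration WebServerSetup
{
    param
    (
        [Parameter(Mandatory)][string]$ComputerName,
        [Parameter(Mandatory)][string]$WebDeployPath
    )

    Node $ComputerName
    {
        WindowsFeature WebServerRole
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        # One of several additional features that depend on the Web-Server
        WindowsFeature WebAspNet45
        {
            Name      = 'Web-Asp-Net45'
            Ensure    = 'Present'
            DependsOn = '[WindowsFeature]WebServerRole'
        }

        Package WebDeploy
        {
            # ProductId below is a placeholder -- pull the real GUID from the MSI with ORCA or WMI
            Name      = 'Microsoft Web Deploy 3.5'
            Path      = $WebDeployPath
            ProductId = '00000000-0000-0000-0000-000000000000'
            Ensure    = 'Present'
            DependsOn = '[WindowsFeature]WebServerRole'
        }

        Script WebDeployFirewall
        {
            # Allow WebDeploy management traffic (TCP 8172) through Windows Firewall
            TestScript = { [bool](Get-NetFirewallRule -DisplayName 'WebDeploy' -ErrorAction SilentlyContinue) }
            SetScript  = { New-NetFirewallRule -DisplayName 'WebDeploy' -Direction Inbound -Protocol TCP -LocalPort 8172 -Action Allow }
            GetScript  = { @{ Result = (Get-NetFirewallRule -DisplayName 'WebDeploy' -ErrorAction SilentlyContinue | Out-String) } }
            DependsOn  = '[Package]WebDeploy'
        }
    }
}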

DSC Part 3

It’s been a busy day, I haven’t posted anything since July and today three posts!

Well, in Part 1 we talked about what Desired State Configuration is, and in Part 2 I showed you how to manually set up the pull server. Now I’ll show you how to get your target node to pull configurations from the pull server. This basically ties the loose ends together. I don’t anticipate adding any additional posts to this particular series, as it really just fills a need for me: detailed steps on how to get from point A to point B.

In order for the client to talk to the server we need a GUID. This GUID will represent this client; if you have several clients it may be worthwhile noting these down, along with the name and/or IP of each client. Honestly, the best way to make these things is in PowerShell; it’s a one-liner.

[System.Guid]::NewGuid()

Guid
----
6e4bc22c-1ea3-4be6-b6a9-5694f0cfcaf8

Now that our pull server is up and running, we’ll need to modify our Configuration, and really all that needs to change is where we specify ComputerName. This time around our command looks like this:

BasicWebServer -ComputerName "6e4bc22c-1ea3-4be6-b6a9-5694f0cfcaf8"

Note we passed in the GUID we just created; this is important, as the locally stored MOF will now be named with the GUID instead. If we look inside our .\BasicWebServer folder, you will see a new MOF file with the GUID as the name. Now we need to create our checksum file.

New-DSCCheckSum -ConfigurationPath .\BasicWebServer -OutPath .\BasicWebServer

The result of this cmdlet is a .checksum file with the same name as the MOF file we just created. These two files are then copied to the pull server’s configuration directory. Once they have been copied over, we can run the Configuration that configures the Local Configuration Manager.

This is run like you would run any other configuration, EXCEPT you must specify the GUID we just created, as well as the URL to the pull server. In Part 2, our URL was not a virtual directory, so we can just pass in the name of the server. If you created a virtual directory, you will need to pass in the full URL.

This particular configuration TURNS OFF SSL. I put that in caps because I think it’s important to note that DSC defaults to working over SSL only.

SetupDSCClient -NodeId "6e4bc22c-1ea3-4be6-b6a9-5694f0cfcaf8" -PullServer "webserver01"
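For the curious, the configuration behind that command looks roughly like this. It’s a sketch of the generic WMF 4.0 pattern rather than my exact code; after the function generates the meta-MOF, you apply it with Set-DscLocalConfigurationManager.

Configuration SetupDSCClient
{
    param([string]$NodeId, [string]$PullServer)

    Node localhost
    {
        # Identify this client by GUID and allow plain HTTP, since SSL is off
        LocalConfigurationManager
        {
            ConfigurationID           = $NodeId
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl               = "http://$PullServer/PSDSCPullServer.svc"
                AllowUnsecureConnection = 'True'
            }
            RefreshFrequencyMins      = 30
        }
    }
}

# After running SetupDSCClient as above, apply the generated meta-MOF
Set-DscLocalConfigurationManager -Path .\SetupDSCClient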

Now, every 30 minutes your client will communicate with your pull-server and make sure that all the basic web server features are available. You should remove all those features and test it out.

I had some trials and tribulations when I was first setting this up. My first obstacle was that my server was 2012, so I had to install WMF 4.0. Then I needed to change my configuration to run unsecured, since I didn’t want to mess with certs. Finally, since there were other web sites running on the server, I needed to change to a different port.

DSC Part 2

In my previous article I talked about Desired State Configuration in a more or less generic way. I provided a sample Configuration that installed the basic services needed for a web server. This Configuration could be applied locally and once applied you could manage it manually as needed.

But the cool setup is what is called a pull server (a push server can also be set up, but it is more complex). A pull server is basically a web server that has been configured with the proper DSC services and set up to listen.

The main difference between something local and a pull server is that the Configurations are required to use GUIDs instead of computer names. Additionally, the MOF files get hashed, and the hash is stored in a file named after the MOF. These two files are then uploaded to the pull server, and your client (target node) is configured to point at the pull server.

You can define intervals such that every 30 minutes the client will check in with the pull server to validate its configuration. If something is missing, it will re-apply the configuration automatically.

So, if you already ran the Configuration from Part 1, then you’re already halfway there. I’ll step through the manual process for configuring the web portion of the pull server (keep in mind Microsoft has released some Configurations that will assist with this). First, we’ll need to make sure that the DSC service is available.
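On Windows Server 2012 R2 that’s a single feature install:

# The pull server bits ship as the DSC-Service feature
Add-WindowsFeature -Name DSC-Service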

The next thing we need to do is set up the web portion of the pull server. This entails copying files, creating an Application Pool, and setting up a website.
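In PowerShell terms it’s roughly the following. This is a sketch: the source path is my assumption about where WMF 4.0 drops the pull server files, and the site name, pool name, and port are placeholders.

Import-Module WebAdministration

# Assumed location of the pull server files shipped with WMF 4.0 -- verify locally
$source  = "$pshome\modules\PSDesiredStateConfiguration\PullServer"
$webRoot = 'C:\inetpub\wwwroot\PSDSCPullServer'

Copy-Item -Path "$source\*" -Destination $webRoot -Recurse -Force

New-WebAppPool -Name 'PSWS'
New-Website -Name 'PSDSCPullServer' -Port 8080 -PhysicalPath $webRoot -ApplicationPool 'PSWS'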

That still leaves a few manual steps to do: adding a few lines to the web.config file and setting the newly created Application Pool’s identity to LocalSystem.

<add key="dbprovider" value="System.Data.OleDb" />
<add key="dbconnectionstr" value="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Program Files\WindowsPowerShell\DscService\Devices.mdb;" />
<add key="ConfigurationPath" value="C:\Program Files\WindowsPowerShell\DscService\Configuration" />
<add key="ModulePath" value="C:\Program Files\WindowsPowerShell\DscService\Modules" />

These lines are added inside the appSettings section of the web.config file. They are default values and can be left as is; they define where the modules live, if you need any, and where the Configurations will be stored.

The last thing you need to do is open IIS Manager, open Application Pools, and find your Application Pool in the list. Click on it, select Advanced Settings, click on Identity and then the ellipsis (…) button, and from the list choose LocalSystem.
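If you’d rather skip the GUI, the same change is a one-liner (using the PSWS pool name from the sketch above):

Import-Module WebAdministration
Set-ItemProperty IIS:\AppPools\PSWS -Name processModel.identityType -Value LocalSystem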

Once you’re done, you should be able to point your browser at your server and see XML output:

http://pullserver.company.com/PSDSCPullServer.svc/

<?xml version="1.0" encoding="utf-8" ?> 
<service xml:base="http://pullserver.company.com/PSDSCPullServer.svc/" xmlns="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom">
<workspace>
<atom:title>Default</atom:title> 
<collection href="Action">
<atom:title>Action</atom:title> 
</collection>
<collection href="Module">
<atom:title>Module</atom:title> 
</collection>
</workspace>
</service>

DSC Part 1

Desired State Configuration is a new feature of PowerShell 4.0 that is included out of the box with Windows 8.1 and Windows Server 2012 R2. This feature can be loaded on down-level clients by installing the Windows Management Framework 4.0.

The way it works is pretty straightforward. PowerShell 4.0 has introduced a few new keywords, one of which is Configuration. If you’ve written any PowerShell functions, it operates in a similar fashion to the Function keyword. Within a Configuration you can have one or more Nodes; each Node is defined as either a string (“computer name”), a variable ($Computer), or a GUID. Within each Node you can have zero or more resources; there are a dozen built-in resources, and you can roll your own. In addition, Microsoft has just released a handful of custom resources that I’ve not played with yet.

Here is an overview of the process

The flow is very straightforward: you create a configuration and save it as a .ps1. Executing the .ps1 creates a new function in memory, named for your configuration. Run this new function and a subfolder is created, named for the configuration, and inside the folder a .MOF file is created, named for the target node.
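As a concrete example, here’s roughly what the BasicWebServer configuration used throughout this series might look like; the exact feature list is my guess, so treat it as a sketch.

Configuration BasicWebServer
{
    param([string]$ComputerName)

    Node $ComputerName
    {
        # The basic services needed for a web server
        WindowsFeature WebServerRole
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        WindowsFeature WebAspNet45
        {
            Name      = 'Web-Asp-Net45'
            Ensure    = 'Present'
            DependsOn = '[WindowsFeature]WebServerRole'
        }
    }
}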

You will need to run the function in order to create the directory and proper MOF files

BasicWebServer -ComputerName webserver01

To apply that configuration to the local machine you simply run the following

Start-DSCConfiguration -Path .\BasicWebServer

This will run as a job. If you would like to watch it happen, you can add -Wait and -Verbose to the command above and it will display everything it’s doing.

Start-DSCConfiguration -Wait -Verbose -Path .\BasicWebServer

This configuration is stored on the computer, and you can test whether the configuration has drifted by running the following:

Test-DSCConfiguration

It will return True if it’s happy, or False if something is missing. This configuration is stored with the computer and survives reboots, so you can always run:

Get-DSCConfiguration

That will return a collection of configurations that are to be applied to the computer. If you would like to bring the target node back in line with its configuration you simply run the following

Restore-DSCConfiguration

The end result of this command is that your server will now have all the features it’s supposed to have available again.

Setspn.exe wrapper

It’s been a while since I’ve posted anything, so I thought I would post about setspn, because you know, it’s so awesome right?

So one of the projects I’ve been working on lately is the upgrade to SCCM 2012. Outside of a few things it’s been going very well. We ran into an issue, though, when we rolled out the production server. Maybe I’ll write a post about that; needless to say, part of the solution is SPNs.

Now, I’m no stranger to this tool, but it leaves a LOT to be desired, especially when you consider it came out for Windows Server 2003! So, since I had to do some work with SPNs, I decided I needed a PowerShell way of handling this.

There is really only a handful of things we ever need setspn for: add an SPN to an object, get the SPNs for an object, remove an SPN from an object, find an SPN, or find duplicate SPNs.

So I came up with a handful of functions, based on the built-in help from the setspn utility and the TechNet article about setspn.

Reset-Spn -HostName

This will reset the SPN for the given hostname.

Add-Spn -Service -Name -HostName -NoDupes

This will add an SPN to a given host and optionally check for duplicates within the domain first.

Remove-Spn -Service -Name -HostName

This removes an SPN from a given host.

Get-Spn -HostName

This will return the SPNs for a given host.

Find-Spn -Service -Name

This will find all SPNs of a given service, or of a given name, or both.

Find-DuplicateSpn -ForestWide

This will find all duplicate SPNs within the domain or optionally the entire forest.

Currently my functions are just wrappers for setspn.exe, but I’m planning a V2 that will leverage .NET to handle this. I don’t get a lot of flexibility in error handling and output when I shell out to a standalone command. A rough sketch of the current approach follows the list below.

  • I want to return objects
  • I want to be able to not have dependencies
  • I want the flexibility of .NET
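As an example of the V1 approach, Get-Spn is little more than this (a sketch; the real function adds error handling and parameter validation):

Function Get-Spn
{
    # V1: shell out to setspn.exe and return whatever it prints;
    # setspn -L lists the SPNs registered on the given account or host
    param([string]$HostName)

    & setspn.exe -L $HostName
}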

Hyper-V Server 2012 Cluster with Powershell Deployment Toolkit

I recently came across a lovely show on Channel 9 about setting up a simple Hyper-V Server 2012 cluster for use in a lab or test environment. I won’t go over the details; watch the show, it’s great! In addition to that, I had come across an article on the Building Clouds blog about the PowerShell Deployment Toolkit. So over Memorial Day weekend I decided to stand up my cluster and spin up a test environment similar to what I use at work.

In my environment I have 6 servers: 3 set aside for Hyper-V, one as my firewall, one as a Domain Controller, and the last as a management server. I’m using my DC as the file server as well. I didn’t need the iSCSI target stuff, as I’m using Windows Server 2012 and used the new File and Storage Services to configure my iSCSI drives.

I decided to let vmcreator.ps1 build the VMs for me. Originally I had spun up my own, but I was having difficulties getting the installer to work properly. It turns out there is a requirement that the PDT tools be run from the C: drive of your computer. Also, if you’re running them from the server OS, you will need to install the Hyper-V role in order for vmcreator.ps1 to function properly. I don’t recall seeing either of those things mentioned in the TechNet article, but I may have overlooked that part.

Linked from the vmcreator.ps1 article is a great utility, Convert-WindowsImage.ps1, that I used to create my base OS image. The utility is super handy and has both a GUI and a command-line version. I wimped out initially and used the GUI version, pointed it at an ISO of Windows Server 2012, and after a while I had a lovely VHDX ready for vmcreator.ps1.

After renaming the half dozen VMs the script had created for me (in record time, btw), I ran installer.ps1. There’s not really a whole lot mentioned in the article about its use; it is rather self-explanatory, and once you realize the limitation to the C: drive, it’s a no-brainer. That part took me a bit to figure out, as I had an external drive with all the bits downloader.ps1 had downloaded for me.

The end result is that I now have the basic System Center infrastructure that I can play with locally to try out new features, or test the scripts and apps I create for work. It was really very slick, and I could totally see how I would use something like this in our QA environment at work.