Week In Review : 06/01/2014

Another very productive week! Spent a lot of time in Operations Manager getting the Low-Privilege SQL Monitoring to work. There appears to be a problem with how the MP calculates PLE; it relies on data and advice that is now about ten years old, so I disabled that monitor.

We have about 20 SQL servers directly under our control, so setting those up manually would have been painful. Instead I worked up a nice little SQL PowerShell module to automate some of that for me. Considering the total number of servers we have, that code is really going to help out.
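I'm not pasting the whole module here, but the idea is roughly this: loop over the servers and apply a T-SQL grant script for the low-privilege monitoring account. The server list, script path, and use of Invoke-Sqlcmd below are placeholders for illustration, not the actual module code.

# Minimal sketch, not the actual module: apply a low-privilege grant script to each SQL server.
Import-Module SQLPS -DisableNameChecking   # provides Invoke-Sqlcmd

$Servers    = Get-Content .\SqlServers.txt          # one server name per line (placeholder)
$GrantsFile = ".\LowPrivMonitoringGrants.sql"       # T-SQL that creates/grants the monitoring login (placeholder)

foreach ($Server in $Servers)
{
    Write-Verbose "Configuring low-privilege monitoring on $Server" -Verbose
    Invoke-Sqlcmd -ServerInstance $Server -InputFile $GrantsFile
}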

I didn't spend all my time in Ops though; I did a lot of Orchestrator work this week. It's been so nice having the network set up in a way that makes all of this easy now. There are still some kinks to work out, but otherwise it's been really fun. One of the things I did this week for Orchestrator was build a PowerShell module for it as well; I talk about that one below.

System Center Orchestrator PowerShell Module

This is one I've had on the back burner for a while, so yesterday morning I roughed up the basic framework for a PowerShell module. I have a few Runbooks at work that would be super cool to run straight from PowerShell, and since lately I've been all up in the web services, this was as good a time as any.

The Get cmdlets were all pretty simple; in fact there is really only one that does any real work, Get-scoWebFeed. I probably could have used Invoke-WebRequest, but that's no fun, so I used .NET to make my own, and it's really pretty simple. I just ask the Orchestrator server (on a specially crafted URL) to spit out the XML, then I return it.

The individual functions for getting Runbooks, Jobs and Activities handle building the special URL, which isn't really special as much as it is specific.
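As a sketch of the idea (not the module's exact code), something along these lines does the job. Port 81 and the Orchestrator2012 service path are the out-of-the-box defaults, and the Get-scoRunbook wrapper name here is just illustrative.

# Sketch of the pattern: fetch the Orchestrator web service feed as XML using .NET.
function Get-scoWebFeed
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$Url   # e.g. http://orch01:81/Orchestrator2012/Orchestrator.svc/Runbooks
    )

    $Request = [System.Net.WebRequest]::Create($Url)
    $Request.UseDefaultCredentials = $true            # pass through the current user's credentials

    $Response = $Request.GetResponse()
    try
    {
        $Reader = New-Object System.IO.StreamReader($Response.GetResponseStream())
        [xml]($Reader.ReadToEnd())                    # return the feed as an XmlDocument
    }
    finally
    {
        $Response.Close()
    }
}

# The individual functions just build the specific URL and hand it off, for example:
function Get-scoRunbook
{
    param ([string]$ManagementServer = "orch01")
    Get-scoWebFeed -Url "http://$($ManagementServer):81/Orchestrator2012/Orchestrator.svc/Runbooks"
}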

Start-Runbook was the most complicated; I actually borrowed some code from MSDN and another guy's blog (Part 1, Part 2) to build mine. It turns out some of the XML you have to send up has to be built in a very particular way. I still need to adjust my code to handle Runbooks with parameters, but right now it's good for what I need it to do.
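For reference, the request ends up looking roughly like the sketch below: an ATOM entry carrying the RunbookId, POSTed to the Jobs collection. This is a reconstruction based on the public samples mentioned above, not my module's exact code, and it skips parameters just like the current version does.

# Rough sketch: start a Runbook by POSTing an ATOM entry to the Jobs collection.
function Start-Runbook
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$ManagementServer,          # e.g. orch01

        [Parameter(Mandatory = $true)]
        [guid]$RunbookId                    # the Runbook's Id from the Runbooks feed
    )

    # The payload the service expects; ordering and namespaces matter.
    $Body = @"
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
       xmlns="http://www.w3.org/2005/Atom">
  <content type="application/xml">
    <m:properties>
      <d:RunbookId m:type="Edm.Guid">$RunbookId</d:RunbookId>
    </m:properties>
  </content>
</entry>
"@

    $Request = [System.Net.WebRequest]::Create("http://$($ManagementServer):81/Orchestrator2012/Orchestrator.svc/Jobs")
    $Request.Method = "POST"
    $Request.ContentType = "application/atom+xml"
    $Request.UseDefaultCredentials = $true

    $Bytes  = [System.Text.Encoding]::UTF8.GetBytes($Body)
    $Stream = $Request.GetRequestStream()
    $Stream.Write($Bytes, 0, $Bytes.Length)
    $Stream.Close()

    $Request.GetResponse()                  # the response describes the Job that was created
}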

You can find the up to the minute code on GitHub, or you can find it in the TechNet Gallery.

Week In Review : 05/25/2014

Well, it's been forever since I've written anything interesting, so now is as good a time as any. Recently we were informed we needed to start keeping track of time spent on projects, and since this is something I did for a couple of years at the School of Engineering, it's not too difficult to get back into the swing of it. This go-around I went with more of a journal style; not sure I like it, but we'll see.

LOTS of programming this week. I wrote a virtual machine provisioning app a while ago, and while it's functional, I'm not sure how many folks actually use it. There has been some renewed interest lately, though, mainly around removing a lot of the paperwork involved. So I've gotten to get my hands dirty with various web services.

While not terribly fleshed out right now, what I've got works for what we need (a rough sketch of how these calls hang together follows the list).

  • I can communicate with VMware to provision a server
  • I can send the VLAN information over to Proteus (BlueCat) to ask for the next available IP on the network
  • I can submit the details of the server (RAM, CPU, disk, network information) over to ServiceNow for inventory
  • I can automatically generate tickets for handling backups and Zenoss monitoring
  • I can also talk directly to Zenoss to get the server into the system
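None of the integration code is in this post, and each of those APIs probably deserves its own write-up. Purely to illustrate how the pieces hang together, here's the flow with hypothetical wrapper functions; every function name and parameter below is invented for the example.

# Hypothetical glue code: each helper wraps one of the web services listed above.
$IpAddress = Request-ProteusIpAddress -VlanId 120                                            # next free IP from Proteus/BlueCat
$Vm        = New-ProvisionedVm -Name "testweb01" -Cpu 2 -MemoryGB 4 -DiskGB 60 -VlanId 120 -IPAddress $IpAddress   # provision through VMware
Submit-ServiceNowServerRecord -Server $Vm -IPAddress $IpAddress                              # inventory record in ServiceNow
New-ServiceNowTicket -Type Backup  -Server $Vm                                               # backup ticket
New-ServiceNowTicket -Type Monitor -Server $Vm                                               # monitoring ticket
Add-ZenossDevice -Server $Vm -IPAddress $IpAddress                                           # register the device with Zenoss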

I find more and more that programming is becoming important for administering servers, perhaps more than it once was; or maybe I'm just going off the deep end with programming 😉

Here are the GitHub projects associated with what I'm working on now.

You will note that I have a Hyper-V module, but I've not talked about doing Hyper-V at work. We actually have a little test cluster that we spun up earlier this month to start kicking the tires.

I've also been talking a lot with Microsoft. We're working through an issue where provisioning users for Lync sometimes fails. It's incredibly intermittent and next to impossible to reproduce. I've taken to having the guys turn on PowerShell logging (Start-Transcript) before they do anything, just in case they catch it so we can pass the transcript on to Microsoft.
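The habit is just a couple of lines at the start and end of the session; the log path here is an arbitrary example.

# Capture everything typed and returned in this session; the path is an example.
Start-Transcript -Path "C:\Logs\LyncProvisioning-$(Get-Date -Format 'yyyyMMdd-HHmmss').txt"

# ... run the Lync provisioning steps as usual ...

Stop-Transcript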

Spent a few hours talking with one of their SQL support guys, and now when an error occurs during provisioning, in addition to sending the error out to a file, I also run a query that grabs data from one of the system tables regarding communication.
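I'm not reproducing the exact query the support engineer gave me; the pattern is simply "on error, save the error and dump a connection-related system view alongside it." The DMV named below is a stand-in for illustration only.

# Illustration only: on a provisioning error, save the error and a snapshot of SQL connection state.
try
{
    # ... the Lync provisioning step that intermittently fails ...
}
catch
{
    $Stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
    $_ | Out-File "C:\Logs\ProvisioningError-$Stamp.txt"

    # sys.dm_exec_connections is a stand-in for the actual communication-related view we query
    Invoke-Sqlcmd -ServerInstance "lyncsql01" -Database master `
        -Query "SELECT session_id, connect_time, net_transport, client_net_address FROM sys.dm_exec_connections" |
        Export-Csv "C:\Logs\SqlConnections-$Stamp.csv" -NoTypeInformation
}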

What else…System Center Advisor preview is AWESOME! I’ve been talking with a program manager at Microsoft as well as one or two guys who develop it about some feedback I had given and some issues I was having. Gotta say that’s been super fun, would so love to work there!

Oh! Finally got the monitoring VLAN all set up and started moving my servers into it. Had some fun issues there: first I couldn't get to DNS, so no name resolution and no accessing servers by name…fun times! Then I forgot to file the change paperwork for changing the IP addresses of the servers, so the firewall rules never got updated…sigh

I've spent a fair amount of time getting Operations Manager happy again and cleaning up the various Management Pack issues I hadn't been able to deal with while I couldn't communicate with the servers. One of the more challenging parts lately has been getting the Low-Privilege SQL monitoring working; I think I've got it all worked out now, so we'll see how that goes next week.

In addition to being able to access the servers from the monitoring VLAN it also appears we have just about the same level of access from our desktops! No more RDP’ing into a dozen servers to do something like tweak a registry setting or stop a service!

Oh well, that's it for this past week. I hope to start doing some more writing, but at the very least I've decided to do these Week In Review posts.

Showing off some DSC Resources

Yesterday I wrote three articles (Part 1, Part 2, Part 3) about Desired State Configuration, so I thought I would follow up with a slightly more complex Configuration. This Configuration performs several actions on the target node:

  1. Install the Web-Server feature
  2. Install several additional features that depend on the Web-Server
  3. Install the WebDeploy application
  4. Configure Windows Firewall to allow WebDeploy traffic

This particular Configuration has some nice features to note. The first is its parameters: you must provide a ComputerName (or GUID) and a path to the WebDeploy MSI, and you can optionally specify a source path for the features, which is useful if you have cloned a server.

You will also notice that nearly all the Resources use the DependsOn property. Since all the features are web server related, I set the DependsOn property to the WebServerRole resource. If you look in the documentation, Microsoft has this down as Requires, but I believe that's changed since the docs came out.

The Package Resource installs WebDeploy. I was able to pull the ProductId from the MSI using Orca (SDK download). If you don't have that installed, or don't want to install it, you can install WebDeploy on a reference machine and then query the ProductId from WMI.
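For example, something like this on the reference machine (the display-name filter is a guess at how WebDeploy registers itself):

# IdentifyingNumber is the product GUID that the Package resource wants as ProductId.
Get-WmiObject -Class Win32_Product |
    Where-Object { $_.Name -like "*Web Deploy*" } |
    Select-Object Name, IdentifyingNumber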

The Script Resource was a little more difficult for me, and thankfully I found a wonderful article that did the deep diving. Basically a Script resource has three script blocks. The TestScript must evaluate to True or False; if TestScript returns False, the SetScript runs. The GetScript must return a hashtable, and the only thing it needs to return is the Result property, though you can also include the contents of the GetScript, TestScript and SetScript blocks. Finally, the SetScript is the script that does the thing you need done, in this example creating a firewall rule to allow port 8172.
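Put together, the shape of it looks roughly like the sketch below. The feature names, the firewall rule name, and the placeholder ProductId are assumptions on my part (and the real version also wires up the optional Source path for the features).

# A sketch of the kind of Configuration described above; substitute your own values.
Configuration WebServerWithWebDeploy
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$ComputerName,

        [Parameter(Mandatory = $true)]
        [string]$WebDeployPath
    )

    Node $ComputerName
    {
        WindowsFeature WebServerRole
        {
            Name   = "Web-Server"
            Ensure = "Present"
        }

        WindowsFeature WebMgmtConsole
        {
            Name      = "Web-Mgmt-Console"
            Ensure    = "Present"
            DependsOn = "[WindowsFeature]WebServerRole"
        }

        WindowsFeature WebAspNet45
        {
            Name      = "Web-Asp-Net45"
            Ensure    = "Present"
            DependsOn = "[WindowsFeature]WebServerRole"
        }

        Package WebDeploy
        {
            Ensure    = "Present"
            Path      = $WebDeployPath
            Name      = "Microsoft Web Deploy 3.5"                 # must match the MSI's display name
            ProductId = "00000000-0000-0000-0000-000000000000"     # placeholder; pull the real GUID from the MSI
            DependsOn = "[WindowsFeature]WebServerRole"
        }

        Script WebDeployFirewall
        {
            # True means "already configured", so SetScript is skipped
            TestScript = {
                [bool](Get-NetFirewallRule -DisplayName "Web Deploy (TCP 8172)" -ErrorAction SilentlyContinue)
            }
            # Does the actual work: allow the WebDeploy management port
            SetScript = {
                New-NetFirewallRule -DisplayName "Web Deploy (TCP 8172)" -Direction Inbound -Protocol TCP -LocalPort 8172 -Action Allow
            }
            # Only needs to return a Result for Get-DSCConfiguration
            GetScript = {
                @{ Result = (Get-NetFirewallRule -DisplayName "Web Deploy (TCP 8172)" -ErrorAction SilentlyContinue | Out-String) }
            }
            DependsOn = "[Package]WebDeploy"
        }
    }
}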

So basically what happens is when you run Start-DSCConfiguration, the script will perform a test. If that test returns true then we can assume that the thing we need done is done. If that test returns false then we need to do the thing, whatever that thing is.

When you run Get-DSCConfiguration, the script will get the state of what we did, which is why all we need is a result.

DSC Part 3

It’s been a busy day, I haven’t posted anything since July and today three posts!

Well, in Part 1 we talked about what Desired State Configuration is, and in Part 2 I showed you how to manually set up the pull server. Now I'll show you how to get your target node to pull configurations from the pull server. This is basically tying the loose ends together; I don't anticipate adding any additional posts to this particular series, as it really just fills a need for me: detailed steps on how to get from point A to point B.

So in order for the client to talk to the server we need a GUID. This GUID will represent this client; if you have several clients it may be worthwhile to note these down, along with the name and/or IP of each client. Honestly, the best way to make these is in PowerShell; it's a one-liner.

[System.Guid]::NewGuid()

Guid
----
6e4bc22c-1ea3-4be6-b6a9-5694f0cfcaf8

Now that our pull server is up and running, we’ll need to modify our Configuration, and really all that needs to be changed is where we specify ComputerName. This time around our command looks like this

BasicWebServer -ComputerName "6e4bc22c-1ea3-4be6-b6a9-5694f0cfcaf8"

Note we passed in the GUID we just created; this is important, as the locally generated MOF will now be named with the GUID instead. If we look inside our .\BasicWebServer folder you will see a new MOF file with the GUID as its name. Now we need to create our checksum file.

New-DSCCheckSum -ConfigurationPath .\BasicWebServer -OutPath .\BasicWebServer

The result of this cmdlet is a .checksum file with the same name as the MOF file we just created. These two files are then copied to the pull server's configuration directory. Once they have been copied over, we can run the Configuration that configures the Local Configuration Manager.

This is run like you would run any other Configuration, EXCEPT that you must specify the GUID we just created, as well as the URL of the pull server. In Part 2, our URL was not a virtual directory, so we can just pass in the name of the server. If you created a virtual directory, you will need to pass in the full URL.

This particular configuration TURNS OFF SSL. I put that in caps because I think it’s important to note that DSC defaults to working over SSL only.
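Here is roughly what that LCM Configuration looks like under WMF 4.0. The port and service path in the URL are assumptions carried over from Part 2, and AllowUnsecureConnection is the setting that turns off SSL.

# Sketch of the LCM configuration; adjust the URL to match your pull server.
Configuration SetupDSCClient
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$NodeId,

        [Parameter(Mandatory = $true)]
        [string]$PullServer
    )

    Node localhost
    {
        LocalConfigurationManager
        {
            ConfigurationID                = $NodeId
            RefreshMode                    = "Pull"
            DownloadManagerName            = "WebDownloadManager"
            DownloadManagerCustomData      = @{
                ServerUrl               = "http://$($PullServer):8080/PSDSCPullServer.svc"   # port and path assumed from Part 2
                AllowUnsecureConnection = "True"                                             # the TURNS OFF SSL part
            }
            RefreshFrequencyMins           = 30
            ConfigurationModeFrequencyMins = 30
            ConfigurationMode              = "ApplyAndAutoCorrect"
        }
    }
}

# After running SetupDSCClient (below), apply the generated meta.mof if your version
# of the configuration doesn't do it for you:
# Set-DscLocalConfigurationManager -Path .\SetupDSCClient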

SetupDSCClient -NodeId "6e4bc22c-1ea3-4be6-b6a9-5694f0cfcaf8" -PullServer "webserver01"

Now, every 30 minutes your client will communicate with your pull-server and make sure that all the basic web server features are available. You should remove all those features and test it out.

I had some trials and tribulations when I was first setting this up. My first obstacle was that my server was 2012, so I had to install WMF 4.0. Then I needed to change my configuration to run unsecured, since I didn't want to mess with certs. Finally, since there were other web sites running on the server, I needed to change to a different port.

DSC Part 2

In my previous article I talked about Desired State Configuration in a more or less generic way. I provided a sample Configuration that installed the basic services needed for a web server. This Configuration could be applied locally and once applied you could manage it manually as needed.

But the cool setup is what is called a pull server (a push server can also be set up, but it's more complex). A pull server is basically a web server that has been configured with the proper DSC services and set up to listen.

The main difference between something local and a pull server is that the Configurations are required to use GUIDs instead of computer names. Additionally, the MOF files get hashed and the hash is stored in a file named after the MOF. These two files are then uploaded to the pull server, and your client (target node) is configured to point at the pull server.

You can define intervals such that, for example, every 30 minutes the client will check in with the pull server to validate its configuration. If something is missing, it will re-apply the configuration automatically.

So, if you already ran the Configuration from Part 1, then you're already halfway there. I'll step through the manual process for configuring the web portion of the pull server (keep in mind Microsoft has released some Configurations that will assist with this). First, we'll need to make sure that the DSC service is available.
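On Server 2012 R2 that is a single feature install:

# Installs the Desired State Configuration pull/compliance service bits
Add-WindowsFeature DSC-Service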

The next thing we need to do is set up the web portion of the pull server. This entails copying files, creating an Application Pool, and setting up a website.
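Roughly, that looks like the sketch below. The site name, paths, and port are assumptions; double-check the file names against what's actually in the PullServer folder on your server.

# Sketch of the manual steps; site name, paths, and port are assumptions.
Import-Module WebAdministration

$Source   = "$pshome\Modules\PSDesiredStateConfiguration\PullServer"   # DSC service files shipped with WMF 4.0
$SitePath = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
$DscData  = "$env:ProgramFiles\WindowsPowerShell\DscService"

# Create the folders and copy the service files into the new site
New-Item -ItemType Directory -Force -Path $SitePath, "$SitePath\bin", "$DscData\Configuration", "$DscData\Modules" | Out-Null
Copy-Item "$Source\PSDSCPullServer.svc", "$Source\PSDSCPullServer.mof", "$Source\PSDSCPullServer.xml", "$Source\Global.asax" -Destination $SitePath
Copy-Item "$Source\PSDSCPullServer.config" -Destination "$SitePath\web.config"
Copy-Item "$Source\Microsoft.Powershell.DesiredStateConfiguration.Service.dll" -Destination "$SitePath\bin"
Copy-Item "$Source\Devices.mdb" -Destination $DscData

# Create the Application Pool and the website (port 8080 because 80 was already in use here)
New-WebAppPool -Name "PSDSCPullServer"
New-Website -Name "PSDSCPullServer" -Port 8080 -PhysicalPath $SitePath -ApplicationPool "PSDSCPullServer"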

That leaves a few manual steps: adding a few lines to the web.config file and setting the newly created Application Pool's identity to LocalSystem.

<add key="dbprovider" value="System.Data.OleDb" />
<add key="dbconnectionstr" value="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Program Files\WindowsPowerShell\DscService\Devices.mdb;" />
<add key="ConfigurationPath" value="C:\Program Files\WindowsPowerShell\DscService\Configuration" />
<add key="ModulePath" value="C:\Program Files\WindowsPowerShell\DscService\Modules" />

These lines are added inside the appSettings section of the web.config file. These are the default values and can be left as is; they define where the modules live, if you need any, and where the Configurations will be stored.

The last thing you need to do is open IIS Manager, open Application Pools, and find your Application Pool in the list. Click on it, select Advanced Settings, click Identity and then the ellipsis (…) button, and from the list choose LocalSystem.
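If you'd rather skip the GUI, the same change can be made from PowerShell (assuming the app pool name from the sketch above):

Import-Module WebAdministration

# 0 = LocalSystem for processModel.identityType
Set-ItemProperty IIS:\AppPools\PSDSCPullServer -Name processModel.identityType -Value 0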

Once you're done, you should be able to point your browser at your server and see XML output like this:

http://pullserver.company.com/PSDSCPullServer.svc/

<?xml version="1.0" encoding="utf-8" ?>
<service xml:base="http://pullserver.company.com/PSDSCPullServer.svc/" xmlns="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Default</atom:title>
    <collection href="Action">
      <atom:title>Action</atom:title>
    </collection>
    <collection href="Module">
      <atom:title>Module</atom:title>
    </collection>
  </workspace>
</service>

DSC Part 1

Desired State Configuration is a new feature of PowerShell 4.0 that is included out of the box with Windows 8.1 and Windows Server 2012 R2. This feature can be loaded on down-level clients by installing the Windows Management Framework 4.0.

The way it works is pretty straightforward. PowerShell 4.0 introduces a few new keywords, one of which is Configuration; if you've written any PowerShell functions, it operates in a similar fashion to the Function keyword. Within a Configuration you can have one or more Nodes, and each Node is defined as either a string ("computername"), a variable ($Computer), or a GUID. Within each Node you can have zero or more Resources; there are a dozen built-in Resources, and you can roll your own. In addition, Microsoft has just released a handful of custom Resources that I've not played with yet.

Here is an overview of the process

The flow is very straightforward: you create a Configuration and save it as a ps1. Executing the ps1 creates a new function in memory, named for your Configuration. Run this new function and a subfolder is created, named for the Configuration, and inside that folder a .MOF file is created, named for the target node.
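A minimal Configuration along those lines looks something like this; the exact features in the real BasicWebServer are an assumption here.

# Minimal sketch of a Configuration that installs the basic web server bits.
Configuration BasicWebServer
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$ComputerName
    )

    Node $ComputerName
    {
        WindowsFeature WebServerRole
        {
            Name   = "Web-Server"
            Ensure = "Present"
        }

        WindowsFeature WebAspNet45
        {
            Name      = "Web-Asp-Net45"
            Ensure    = "Present"
            DependsOn = "[WindowsFeature]WebServerRole"
        }
    }
}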

You will need to run the function in order to create the directory and proper MOF files

BasicWebServer -ComputerName webserver01

To apply that configuration to the local machine you simply run the following

Start-DSCConfiguration -Path .\BasicWebServer

This will run as a job; if you would like to watch it happen, you can add -Wait and -Verbose to the command above and it will display everything it's doing.

Start-DSCConfiguration -Wait -Verbose -Path .\BasicWebServer

This configuration is stored on the computer and you can test to see if the configuration has drifted any by running the following

Test-DSCConfiguration

It will return True if it’s happy or False if something is missing. This configuration is stored with the computer and survives reboots, so you can always run

Get-DSCConfiguration

That will return a collection of configurations that are to be applied to the computer. If you would like to bring the target node back in line with its configuration you simply run the following

Restore-DSCConfiguration

The end result of this command is that your server will now have all the features it's supposed to have available again.

Setspn.exe wrapper

It’s been a while since I’ve posted anything, so I thought I would post about setspn, because you know, it’s so awesome right?

So one of the projects I've been working on lately is the upgrade to SCCM 2012. Outside of a few things it's been going very well. We ran into an issue, though, when we rolled out the production server. Maybe I'll write a post about that; needless to say, part of the solution is SPNs.

Now, I'm no stranger to this tool, but it leaves a LOT to be desired, especially when you consider it came out for Windows Server 2003! So, since I had to do some work with SPNs, I decided I needed a PowerShell way of handling this.

There is really only a handful of things we ever need setspn for: add an SPN to an object, get the SPNs for an object, remove an SPN from an object, find an SPN, or find duplicate SPNs.

So I came up with a handful of functions, based on the built-in help from the setspn utility and the TechNet article about setspn.

Reset-Spn -HostName

This will reset the SPN for the given hostname.

Add-Spn -Service -Name -HostName -NoDupes

This will add an SPN to a given host and optionally check for duplicates within the domain first.

Remove-Spn -Service -Name -HostName

This removes an SPN from a given host.

Get-Spn -HostName

This will return the SPNs for a given host.

Find-Spn -Service -Name

This will find all SPNs of a given service, or of a given name, or both.

Find-DuplicateSpn -ForestWide

This will find all duplicate SPNs within the domain or optionally the entire forest.
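To give a flavor of the approach, here's a rough sketch of the wrapper pattern; it's simplified from what the actual functions do, but the setspn switch is the real one.

# Rough sketch of a wrapper around setspn.exe; output is plain strings for now.
function Get-Spn
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$HostName
    )

    # setspn -L lists the SPNs registered for the given account/host
    $Result = & setspn.exe -L $HostName

    if ($LASTEXITCODE -ne 0)
    {
        Write-Error ($Result | Out-String)
        return
    }

    # Skip the header line and trim the padding; returning plain strings
    # is exactly the limitation the V2 rewrite is meant to fix.
    $Result | Select-Object -Skip 1 | ForEach-Object { $_.Trim() } | Where-Object { $_ }
}

Get-Spn -HostName "webserver01"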

Currently my functions are just wrappers for setspn.exe, but I'm planning a V2 that will leverage .NET to handle this. I don't get a lot of flexibility in error handling and output when I use a standalone command.

  • I want to return objects
  • I want to be able to not have dependencies
  • I want the flexibility of .NET

Hyper-V Server 2012 Cluster with PowerShell Deployment Toolkit

I recently came across a lovely show on Channel 9. It talks about setting up a simple Hyper-V Server 2012 cluster for use in a lab or test environment or whatever. I won’t go over the details, watch the show, it’s great! In addition to that I had come across an article on the Building Clouds Blog, about the PowerShell Deployment Toolkit. So over Memorial Day weekend I decided to stand up my cluster and spin up a test environment similar to what I use at work.

In my environment I have 6 servers: 3 set aside for Hyper-V, one as my firewall, one as a Domain Controller, and the last as a management server. I'm using my DC as the file server as well. I didn't need the iSCSI target stuff, as I'm running Windows Server 2012 and used the new File and Storage Services to configure my iSCSI drives.

I decided to let vmcreator.ps1 build the VMs for me; originally I had spun up my own, but I was having difficulties getting the installer to work properly. It turns out there is a requirement that the PDT tools be run from the C: drive of your computer. Also, if you're running them from the server OS, you will need to install the Hyper-V role in order for vmcreator.ps1 to function properly. I don't recall seeing either of those things mentioned in the TechNet article, but I may have overlooked that part.

Linked from the vmcreator.ps1 article is a great utility, Convert-WindowsImage.ps1, that I used to create my base OS image. The utility is super handy and has both a GUI and a command-line version. I wimped out initially and used the GUI version, pointed it at an ISO of Windows Server 2012, and after a while I had a lovely VHDX ready for vmcreator.ps1.

After renaming the half dozen VMs the script had created for me (in record time, btw), I ran installer.ps1. There's not really a whole lot mentioned in the article about its use; it is rather self-explanatory, and once you realize the limitation to the C: drive it's a no-brainer. That part took me a bit to figure out, as I had an external drive with all the bits downloader.ps1 had downloaded for me.

The end result is I now have the basic System Center infrastructure that I can play with locally to try out new features, or test the scripts and apps I create for work. It was really very slick, and I could totally see how I would use something like this in our QA environment at work.

 

Windows Server 2012 Single Node Cluster

So this post is a re-hash of one I did over a year ago, when Windows Server 2012 was still Windows Server 8.

 

PS C:\Windows\system32> Add-WindowsFeature Failover-Clustering

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True    No             Success   {Failover Clustering}
PS C:\Windows\system32> Add-WindowsFeature RSAT-Clustering-PowerShell

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True    No             Success   {Remote Server Administration Tools, Failo...
PS C:\Windows\system32> Import-Module FailoverClusters
PS C:\Windows\system32> Update-Help -Module FailoverClusters
PS C:\Windows\system32> New-Cluster -Name sql -Node sql-om12 -StaticAddress 192.168.1.230
PS C:\Windows\system32> Get-Cluster sql | Format-List *


Domain : company.com
Name : sql
AddEvictDelay : 60
BackupInProgress : 0
ClusSvcHangTimeout : 60
ClusSvcRegroupOpeningTimeout : 5
ClusSvcRegroupPruningTimeout : 5
ClusSvcRegroupStageTimeout : 5
ClusSvcRegroupTickInMilliseconds : 300
ClusterGroupWaitDelay : 120
MinimumNeverPreemptPriority : 3000
MinimumPreemptorPriority : 1
ClusterEnforcedAntiAffinity : 0
ClusterLogLevel : 3
ClusterLogSize : 300
CrossSubnetDelay : 1000
CrossSubnetThreshold : 5
DefaultNetworkRole : 2
Description :
FixQuorum : 0
HangRecoveryAction : 3
IgnorePersistentStateOnStartup : 0
LogResourceControls : 0
PlumbAllCrossSubnetRoutes : 0
PreventQuorum : 0
QuorumArbitrationTimeMax : 20
RequestReplyTimeout : 60
RootMemoryReserved : 4294967295
RouteHistoryLength : 10
SameSubnetDelay : 1000
SameSubnetThreshold : 5
SecurityLevel : 1
SharedVolumeCompatibleFilters : {}
SharedVolumeIncompatibleFilters : {}
SharedVolumesRoot : C:\ClusterStorage
SharedVolumeSecurityDescriptor : {1, 0, 4, 128...}
ShutdownTimeoutInMinutes : 20
SharedVolumeVssWriterOperationTimeout : 1800
UseClientAccessNetworksForSharedVolumes : 0
SharedVolumeBlockCacheSizeInMB : 0
WitnessDatabaseWriteTimeout : 300
WitnessRestartInterval : 15
RecentEventsResetTime : 4/17/2013 9:38:07 PM
EnableSharedVolumes : Enabled
DynamicQuorum : 1
Id : 954f3834-595a-410f-8b2b-67648864d089