DSC Part 1

Desired State Configuration is a new feature of PowerShell 4.0 that is included out of the box with Windows 8.1 and Windows Server 2012 R2. This feature can be loaded on down-level clients by installing the Windows Management Framework 4.0.

The way it works is pretty straightforward: PowerShell 4.0 introduces a few new keywords, one of which is Configuration. If you’ve written any PowerShell functions, it operates much like the Function keyword. Within a Configuration you can have one or more Nodes, and each Node is identified by a string (“computer name”), a variable ($Computer), or a GUID. Within each Node you can have zero or more resources; there are a dozen built-in resources, and you can roll your own. In addition, Microsoft has just released a handful of custom resources that I’ve not played with yet.

Here is an overview of the process

The flow is very straightforward: you create a configuration and save it as a .ps1 file. Executing the .ps1 defines a new function in memory, named after your configuration. Run that function and a subfolder is created, also named after the configuration, containing a .MOF file named for the target node.
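Here is a minimal sketch of what a configuration looks like; the name BasicWebServer is just illustrative (it matches the example calls below), and WindowsFeature is one of the built-in resources.

Configuration BasicWebServer
{
    param
    (
        [string[]]$ComputerName = 'localhost'
    )

    Node $ComputerName
    {
        # Make sure IIS is installed on the target node
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}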

You will need to run the function in order to create the directory and proper MOF files

BasicWebServer -ComputerName webserver01

To apply that configuration to the local machine you simply run the following

Start-DSCConfiguration -Path .\BasicWebServer

This will run as a job. If you would like to watch it happen, you can add -Wait and -Verbose to the command above and it will display everything it’s doing.

Start-DSCConfiguration -Wait -Verbose -Path .\BasicWebServer

The configuration is stored on the computer, and you can test whether it has drifted by running the following

Test-DSCConfiguration

It will return True if it’s happy or False if something is missing. This configuration is stored with the computer and survives reboots, so you can always run

Get-DSCConfiguration

That will return a collection of configurations that are to be applied to the computer. If you would like to bring the target node back in line with its configuration, you simply run the following

Restore-DSCConfiguration

The end result of this command is that your server will now have all the features it’s supposed to have available again.

Setspn.exe wrapper

It’s been a while since I’ve posted anything, so I thought I would post about setspn, because you know, it’s so awesome right?

So one of the projects I’ve been working on lately is the upgrade to SCCM 2012. Outside of a few things it’s been going very well. We ran into an issue, though, when we rolled out the production server. Maybe I’ll write a post about that; needless to say, part of the solution was SPNs.

Now, I’m no stranger to this tool, but needless to say it leaves a LOT to be desired, especially when you consider it came out with Windows Server 2003! So, since I had to do some work with SPNs, I decided I needed a PowerShell way of handling this.

There are really only a handful of things we ever need setspn for: add an SPN to an object, get the SPNs for an object, remove an SPN from an object, find an SPN, or find duplicate SPNs.

So I came up with a handful of functions, based on the built-in help from the setspn utility and the TechNet article about setspn.

Reset-Spn -HostName

This will reset the SPN for the given hostname.

Add-Spn -Service -Name -HostName -NoDupes

This will add an SPN to a given host and optionally check for duplicates within the domain first.

Remove-Spn -Service -Name -HostName

This removes an SPN from a given host.

Get-Spn -HostName

This will return the SPNs for a given host.

Find-Spn -Service -Name

This will find all SPNs for a given service, or a given name, or both.

Find-DuplicateSpn -ForestWide

This will find all duplicate SPNs within the domain or optionally the entire forest.

Currently my functions are just wrappers around setspn.exe, but I’m planning a V2 that will leverage .NET to handle this; I don’t get a lot of flexibility in error handling and output when I wrap a standalone command. A rough sketch of the current wrapper approach follows the list below.

  • I want to return objects
  • I want to be able to not have dependencies
  • I want the flexibility of .NET
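Here is that sketch; it is illustrative rather than the published code, and it simply shells out to setspn.exe and hands back whatever text it prints.

Function Get-Spn
{
    param
    (
        [Parameter(Mandatory = $true)]
        [string]$HostName
    )

    # setspn -L lists the SPNs registered on the named computer or account object
    & setspn.exe -L $HostName
}

Get-Spn -HostName webserver01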

Hyper-V Server 2012 Cluster with Powershell Deployment Toolkit

I recently came across a lovely show on Channel 9. It talks about setting up a simple Hyper-V Server 2012 cluster for use in a lab or test environment or whatever. I won’t go over the details, watch the show, it’s great! In addition to that I had come across an article on the Building Clouds Blog, about the PowerShell Deployment Toolkit. So over Memorial Day weekend I decided to stand up my cluster and spin up a test environment similar to what I use at work.

In my environment I have six servers: three set aside for Hyper-V, one firewall, one Domain Controller, and one management server. I’m using my DC as the file server as well. I didn’t need the iSCSI target stuff, as I’m using Windows Server 2012 and used the new File and Storage Services to configure my iSCSI drives.

I decided to let vmcreator.ps1 build the VMs for me. Originally I had spun up my own, but I was having difficulties getting the installer to work properly. It turns out the PDT tools must be run from the C: drive of your computer. Also, if you’re running them from the server OS, you will need to install the Hyper-V role in order for vmcreator.ps1 to function properly. I don’t recall seeing either of those things mentioned in the TechNet article, but I may have overlooked that part.

So, linked from the vmcreator.ps1 article is a great utility, Convert-WindowsImage.ps1, which I used to create my base OS image. The utility is super handy and has both a GUI and a command-line mode. I wimped out initially and used the GUI, pointed it at an ISO of Windows Server 2012, and after a while I had a lovely VHDX ready for vmcreator.ps1.

After renaming the half-dozen VMs the script had created for me (in record time, by the way), I ran installer.ps1. There’s not really a whole lot mentioned in the article about its use; it is rather self-explanatory, and once you realize the C: drive limitation it’s a no-brainer. That part took me a bit to figure out, as I had an external drive with all the bits downloader.ps1 had downloaded for me.

The end result is I now have the basic System Center infrastructure that I can play with locally to try out new features, or test the scripts and apps I create for work. It was really very slick, and I could totally see how I would use something like this in our QA environment at work.

 

Windows Server 2012 Single Node Cluster

So this posting is a re-hash of a post that I did over a year ago, when Windows Server 2012 was still Windows Server 8.

 

PS C:\Windows\system32> Add-WindowsFeature Failover-Clustering

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True    No             Success   {Failover Clustering}
PS C:\Windows\system32> Add-WindowsFeature RSAT-Clustering-PowerShell

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True    No             Success   {Remote Server Administration Tools, Failo...
PS C:\Windows\system32> Import-Module FailoverClusters
PS C:\Windows\system32> Update-Help -Module FailoverClusters
PS C:\Windows\system32> New-Cluster -Name sql -Node sql-om12 -StaticAddress 192.168.1.230
PS C:\Windows\system32> Get-Cluster sql | Format-List *


Domain : company.com
Name : sql
AddEvictDelay : 60
BackupInProgress : 0
ClusSvcHangTimeout : 60
ClusSvcRegroupOpeningTimeout : 5
ClusSvcRegroupPruningTimeout : 5
ClusSvcRegroupStageTimeout : 5
ClusSvcRegroupTickInMilliseconds : 300
ClusterGroupWaitDelay : 120
MinimumNeverPreemptPriority : 3000
MinimumPreemptorPriority : 1
ClusterEnforcedAntiAffinity : 0
ClusterLogLevel : 3
ClusterLogSize : 300
CrossSubnetDelay : 1000
CrossSubnetThreshold : 5
DefaultNetworkRole : 2
Description :
FixQuorum : 0
HangRecoveryAction : 3
IgnorePersistentStateOnStartup : 0
LogResourceControls : 0
PlumbAllCrossSubnetRoutes : 0
PreventQuorum : 0
QuorumArbitrationTimeMax : 20
RequestReplyTimeout : 60
RootMemoryReserved : 4294967295
RouteHistoryLength : 10
SameSubnetDelay : 1000
SameSubnetThreshold : 5
SecurityLevel : 1
SharedVolumeCompatibleFilters : {}
SharedVolumeIncompatibleFilters : {}
SharedVolumesRoot : C:\ClusterStorage
SharedVolumeSecurityDescriptor : {1, 0, 4, 128...}
ShutdownTimeoutInMinutes : 20
SharedVolumeVssWriterOperationTimeout : 1800
UseClientAccessNetworksForSharedVolumes : 0
SharedVolumeBlockCacheSizeInMB : 0
WitnessDatabaseWriteTimeout : 300
WitnessRestartInterval : 15
RecentEventsResetTime : 4/17/2013 9:38:07 PM
EnableSharedVolumes : Enabled
DynamicQuorum : 1
Id : 954f3834-595a-410f-8b2b-67648864d089

Operations Manager, Orchestrator and PowerShell Remoting

It’s been a very long time since I last posted; the primary reason is most likely laziness on my part, and secondly I’ve not had a lot to write about. Recently I’ve been messing around with Orchestrator and automation as a means of passing information off to Zenoss. On the face of it, it seemed a rather trivial task, but it took much longer than I anticipated.

The first go-round with this was a very simple runbook. It had two activities: Monitor Alert and Run .Net Script. The Monitor Alert activity was configured to look for alerts that were not Information alerts. Once an alert that met that criterion occurred, it was passed off to the Run .Net Script activity, which simply created a log entry with PowerShell.


New-EventLog -LogName 'SCOM Alerts' -Source Category
Write-EventLog -LogName 'SCOM Alerts' -Source Category -EntryType Severity -EventId 1 -Message Name

Note: I didn’t include all the gibberish typically seen when copying a runbook into Notepad, so you can assume that Category, Severity and Name are prefixed by a big nasty GUID.

The first hurdle I had to get around was creating new event log sources. Since I didn’t know in advance what they would be, it seemed easier to have them created programmatically; that’s what the first line does. But the context this runs under didn’t have the ability to do that, so I created a group, added the service account to it, and then added that group to the local Administrators group on the server. Finally, I needed to disable UAC, which was preventing this from happening; if someone has a better way of doing this, I’m all ears.

The nice part about this stage is I was able to get some alerts generated and have them show up in the newly created log. For testing I picked a server that I was monitoring and then stopped the HealthService service. This would generate a failed heartbeat alert similar to a computer going offline unexpectedly. With some sample log entries I was able to configure the Zenoss server to pull in the specific log and start generating alerts with Zenoss.

While this worked well enough to get started, I wasn’t satisfied with the quality of the data being returned. Specifically, I noted that while some alerts contained the name of the computer with the problem, not all did. Looking at the data returned by the Monitor Alert activity, it didn’t seem to me I was getting as many of the details as I needed.

So I decided that some remoting might do the trick for me. With remoting I can get at the Operations Manager cmdlets on the management server itself and pull back the alert details I’m after.
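Here is a rough sketch of the idea; the server names and credential are placeholders, and the OpsMgr 2007 R2 snap-in sequence is from memory, so treat it as a starting point rather than the finished runbook.

# CredSSP only matters if you run into the second-hop problem described in the links below
Invoke-Command -ComputerName rms01 -Authentication CredSSP -Credential $cred -ScriptBlock {
    Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client
    Set-Location 'OperationsManagerMonitoring::'
    New-ManagementGroupConnection -ConnectionString localhost
    Set-Location localhost
    # Pull open alerts with the details the Monitor Alert activity leaves out
    Get-Alert -Criteria 'ResolutionState = 0' |
        Select-Object Name, Severity, MonitoringObjectDisplayName, TimeRaised
}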

Links:
http://blog.tyang.org/2012/05/09/using-scom-powershell-snap-in-and-sdk-client-with-a-powershell-remote-session/
http://blogs.msdn.com/b/powershell/archive/2008/06/05/credssp-for-second-hop-remoting-part-i-domain-account.aspx
http://blogs.technet.com/b/stefan_stranger/archive/2010/11/02/using-powershell-remoting-to-connect-to-opsmgr-root-management-server-and-use-the-opsmgr-cmdlets.aspx
http://blogs.technet.com/b/jonathanalmquist/archive/2009/03/19/resolve-all-open-alerts-generated-by-specific-agent.aspx
http://www.systemcentercentral.com/BlogDetails/tabid/143/IndexID/70177/Default.aspx

My thread:
http://social.technet.microsoft.com/Forums/en-US/operationsmanagergeneral/thread/360f3a42-9153-4e2e-b060-73740e8ffe4f/#360f3a42-9153-4e2e-b060-73740e8ffe4f

SCOM 2007 R2 and Get-Event

For whatever reason I’ve not been able to find what I’ve been looking for regarding this cmdlet, namely a decent example of its use with the -Criteria parameter. For better or worse I have several event collectors set up, and it would be nice to ask SCOM for a list of specific events. Normally you would think that would be simple, and perhaps for some it is, but I was having some issues, that is until yesterday.

I poked around in my history but I couldn’t find the page I was looking at that enlightened me, so I’ll just add my own here in case anyone else is having the same problem.

So I’m looking at a screen that has the following columns:

  • Level
  • Date and Time
  • Source
  • Name
  • Event Number

Now the examples I have seen show that you pass field=value into the -Criteria parameter, but the problem for me is that Event Number or EventNumber aren’t things. In the Event Viewer it’s called ID, but in SCOM, ID is the ID of the specific entry you’re looking at, much like a primary key in a database.

It turns out that the Event Number field in SCOM is simply Number. I literally felt like Homer Simpson: D’oh!

Get-Event -Criteria 'Number=4729'

That actually yields useful information, well, assuming you’re logging Event ID 4729. At any rate, I needed to write this down somewhere, as it’s a regular thing for me that up until now has been very difficult.
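For a slightly bigger example, something along these lines is where I’m headed; the extra property names (TimeGenerated, LoggingComputer) are from memory, so double-check them against your environment.

# Pull the collected 4729 events and show when and where they were logged
Get-Event -Criteria 'Number=4729' |
    Select-Object TimeGenerated, LoggingComputer, Number |
    Sort-Object TimeGenerated -Descending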

DPM 2010 console crashes when pushing an agent Install

This is a new one for me; I’ve been running DPM for quite a while now and I’ve not seen this behavior. In a recent staff meeting it came up that the DPM server was having some RPC issues, so since I’m jonesing for stuff to do, I said I wouldn’t mind taking a look at it.

When you open the DPM Management Console and click the Management tab and then Agents, you are presented with all the servers that have the DPM agent installed. From here you are also able to install, uninstall, or update the agent. Working through the Agent Install wizard, I selected the server to be backed up, entered my credentials, and within a minute received a nasty error message.

<FatalServiceError>
    <__System>
        <ID>19</ID>
        <Seq>0</Seq>
        <TimeCreated>8/1/2012 3:12:18 PM</TimeCreated>
        <Source>DpmThreadPool.cs</Source>
        <Line>163</Line>
        <HasError>True</HasError>
    </__System>
    <ExceptionType>ArgumentException</ExceptionType>
    <ExceptionMessage>Value does not fall within the expected range.</ExceptionMessage>
    <ExceptionDetails>
    System.ArgumentException: Value does not fall within the expected range.
    at System.Management.ManagementScope.Initialize()
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.Win32Cluster.GetNodeClusterState(String nodeName, ConnectionOptions options, UInt32& clusterState)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.CheckForCluster(ProductionServerCollection errorNodesAccessDenied, ProductionServerCollection errorNodesClusterDetectionFailed, ProductionServerCollection errorNodesDRDetectionFailed)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.FormListOfTargetServers(WindowsIdentity runAsIdentity)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.OnLeavePage(LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardPage.RaiseLeavePage(LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.ValidateAndLeavePage(WizardPage page, LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.TraversePagesToTarget(WizardPage startPage, WizardPage targetPage, NavigationDirection direction)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.InternalNavigateToPage(WizardPage targetPage, NavigateEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.NextPage()
    at System.Windows.Forms.Control.OnClick(EventArgs e)
    at System.Windows.Forms.Button.WndProc(Message& m)
    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
    </ExceptionDetails>
</FatalServiceError>

I know, nasty, right? At any rate we ran through several different things: making sure the server we wanted to get at had the proper firewall rules, that we could access the hidden admin share, that the groups were there, and so on. We even fired up Netmon and reproduced the problem just to make sure the two were talking. Everything seemed OK, so we called up Microsoft and opened a ticket.

After talking with one of the DPM support techs, we found that it was an issue with the remote server we were attempting to connect to. While everything appeared to be OK, there was a problem with the RPC settings in the registry. At some point all the entries in the Internet subkey of RPC had been removed. It turns out it’s fine if the entire key is missing, or if the key is there with the proper settings in it, but if it’s there and empty…that’s hurty.

Here is some information he pasted over to me about this key:

With Registry Editor, you can modify the following parameters for RPC. The RPC Port key values discussed below are all located under the following key in the registry: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet

Ports (REG_MULTI_SZ)
Specifies a set of IP port ranges consisting of either all the ports available from the Internet or all the ports not available from the Internet. Each string represents a single port or an inclusive set of ports.

For example, a single port may be represented by 5984, and a set of ports may be represented by 5000-5100. If any entries are outside the range of 0 to 65535, or if any string cannot be interpreted, the RPC runtime treats the entire configuration as invalid.

PortsInternetAvailable (REG_SZ): Y or N (not case-sensitive)
If Y, the ports listed in the Ports key are all the Internet-available ports on that computer. If N, the ports listed in the Ports key are all those ports that are not Internet-available.

UseInternetPorts (REG_SZ): Y or N (not case-sensitive)
Specifies the system default policy.
If Y, the processes using the default will be assigned ports from the set of Internet-available ports, as defined previously.
If N, the processes using the default will be assigned ports from the set of intranet-only ports.

Example:

In this example, ports 5000 through 5100 inclusive have been arbitrarily selected to help illustrate how the new registry key can be configured. This is not a recommendation of a minimum number of ports needed for any particular system.

  1. Add the Internet key under: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
  2. Under the Internet key, add the values "Ports" (REG_MULTI_SZ), "PortsInternetAvailable" (REG_SZ), and "UseInternetPorts" (REG_SZ). For example, the new registry key appears as follows:
     Ports: REG_MULTI_SZ: 5000-5100
     PortsInternetAvailable: REG_SZ: Y
     UseInternetPorts: REG_SZ: Y
  3. Restart the server. All applications that use RPC dynamic port allocation use ports 5000 through 5100, inclusive. In most environments, a minimum of 100 ports should be opened, because several system services rely on these RPC ports to communicate with each other.

The solution was very easy: simply delete (or correct) the malformed key and reboot. Worked like a charm!
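If you would rather check for that condition with PowerShell than poke around in Registry Editor, something along these lines should do it; this is my sketch, not the exact commands support had us run.

# Flag an RPC\Internet key that exists but is empty, which is the broken state
$path = 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet'
if (Test-Path $path)
{
    if ((Get-Item $path).ValueCount -eq 0)
    {
        Write-Warning "$path exists but has no values; delete it or populate it, then reboot."
        # Remove-Item $path                       # option 1: delete the malformed key
        # ...or populate it with valid settings:
        # New-ItemProperty -Path $path -Name Ports -PropertyType MultiString -Value @('5000-5100')
        # New-ItemProperty -Path $path -Name PortsInternetAvailable -PropertyType String -Value 'Y'
        # New-ItemProperty -Path $path -Name UseInternetPorts -PropertyType String -Value 'Y'
    }
}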

Managing Hotfixes Centrally

Historically I’ve not paid much attention to hotfixes or patches, but lately I’ve decided that I need to change that aspect of my management. I used to create a folder on the server that had the problem, copy the patch down and then apply it. As I became responsible for more and more servers, I decided that I needed a slightly better way to handle that. Also, since I tend to do pretty much everything in PowerShell, I figured I needed to write some functions to do it for me. The result was QfeLibrary.ps1; it contains several functions that can be used to help you manage your hotfixes.

I thought about how I go about patching a system, and then tried to condense that down into code. The first step is identifying when I have a problem; I usually use System Center Operations Manager for that. Once I find a problem, I start researching a solution on the internet. When I run into an issue that needs a hotfix, I’ll open the Microsoft Support page for it and figure out whether I need the patch.

A lot of patches seem to modify files, and usually the KB page will tell you what the previous version of the file was and what the new version should be. To be honest, this is where the idea came from: I was using Get-Item to retrieve the VersionInfo of a particular file and thought I should write a script for it, then decided a script would be too much. So of course the next thought in the progression was to write a pile of code with several functions…oh well.

The premise behind this is a central location to store the hotfixes. This can be a local folder or a file share; the preference would be a file share that is accessible from your servers. Then we need a function that creates an XML file to hold the test, a function to run that test to make sure the hotfix is applicable, and a function to list all the available hotfixes in the file share. Finally, a couple of functions to install and uninstall the patch, plus a few ancillary functions to clean up, set the file share as a global variable, and perhaps view the URL for a given hotfix.

The first function I wanted was something that would output an object containing the URL of the article, the actual KB article number, and a way to test for the fix. This was actually rather difficult for me: I had no problem creating an object with a scriptblock property, but when I exported it to an XML file the scriptblock turned into a string. In searching for how to convert a string back into a scriptblock, I found a lovely article from three years ago.
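The short version of the round trip looks something like this (a made-up test just to illustrate):

# A scriptblock exported with Export-Clixml comes back as a plain string,
# so it has to be rebuilt with [ScriptBlock]::Create() before it can be invoked.
$test = { (Get-Item C:\Windows\System32\ntdll.dll).VersionInfo.ProductVersion }
$test | Export-Clixml -Path .\test.xml

$imported = Import-Clixml -Path .\test.xml    # this is now a [string]
$rebuilt  = [ScriptBlock]::Create($imported)  # back to an invokable scriptblock
& $rebuilt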

Once I had that straightened out, everything else fell into place nicely: a function to run the test stored in the XML file, and a pair of functions to install and uninstall a patch. The next thing I needed was a function to get a list of hotfixes available for the local system.

The function to list available hotfixes is a little more complex: it lists all the hotfixes that apply to the OS of the target system. Optionally you can get a listing of every hotfix in the share, and you can then download those hotfixes. No real magic here; when I create the XML for the hotfix I store the Caption property of the Win32_OperatingSystem class, and I use that in the list function as my comparison.
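The comparison boils down to something like this; the $allQfes collection and its OS property are illustrative stand-ins for what the function reads from the XML files.

# Match the OS string stored with each hotfix against the local OS caption
$localOs   = (Get-WmiObject -Class Win32_OperatingSystem).Caption.Trim()
$available = $allQfes | Where-Object { $_.OS -eq $localOs }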

QFE Workflow

The first thing we need to do is define where we will store our hotfix files; this is done with the Set-QfeServer function. Once we’ve done that, we need to figure out what the test will be. With the test figured out, we run the New-QfePatch function and provide the URL, KB article number, OS, processor architecture, the test, and the answer. We run this for each SKU we are responsible for, and copy the hotfix file manually to our QfeServer location.

From the target computer, we run the Get-QfeList function to get a list of hotfixes that are available for our OS. We can run the Get-Qfe function with the -Online switch to view the support article, and the Test-QfePatch function on the target OS to see if the hotfix applies. If we pass the -Download switch to Get-QfeList, we download all hotfixes for the target OS. Finally, we run Install-QfePatch on the target OS.

When we’re all done, we can review the locally stored logs if we need to, and finally run Clear-QfeLocalStore to zip up the installed hotfixes and log files into a time-stamped zip file.

  1. Set-QfeServer -QfeServer \\server\share\hotfixes
  2. New-QfePatch -URL -KB -OS -Arch -Qfefilename -Test -Answer -QfeServer
  3. Get-QfeList -QfeServer
  4. Get-Qfe -QfeServer -QfeId -Online
  5. Test-QfePatch -QfeId -QfeServer
  6. Get-QfeList -QfeServer -Download
  7. Install-QfePatch -QfeFilename
  8. Clear-QfeLocalStore

These functions can be downloaded from the Mod-Posh site or from TechNet.


Printing from a Scheduled Task as a different user

It does sound a bit odd, but I’m in the process of moving all the regular monitoring I do to scheduled tasks, and this particular one caused me headaches all afternoon.

I have a script that I run to update the DPM VolumeSizing spreadsheet that Microsoft put together for System Center Data Protection Manager. It’s a great tool; if you’ve not looked at it and are running DPM, you should check it out!

The problem I had was this: scheduled to run as my account, it worked just fine. As soon as I configured it to run as a service account, the script would run, but nothing with Excel worked. I found several threads on Google that mention as much.

I finally found a very nice thread on TechNet; the answer is from a user named JensKalski, who recommends creating a Desktop folder under systemprofile. I have read this before, and it escapes me now where I saw it, but as soon as I created this folder on my server, I got the printout!

YAY! Thanks Jens!
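For reference, the folder can be created ahead of time; these are the paths as I understand them on a 64-bit server (the SysWOW64 one only matters for 32-bit Office).

# Create the Desktop folder under systemprofile so Excel automation works
# when the scheduled task runs as a service account
New-Item -ItemType Directory -Force -Path 'C:\Windows\System32\config\systemprofile\Desktop'
New-Item -ItemType Directory -Force -Path 'C:\Windows\SysWOW64\config\systemprofile\Desktop'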