Printing from a Scheduled Task as a different user

It does sound a bit odd, but I’m in the process of moving all the regular monitoring I do to scheduled tasks, and this particular one caused me headaches all afternoon.

I have a script that I run that will update the DPM VolumeSizing spreadsheet that Microsoft put together for System Center Data Protection Manager. It’s a great tool, if you’ve not looked at it and are running DPM you should check it out!

The problem was that when I scheduled this to run as my account it worked just fine. As soon as I configured it to run as a service account, the script would run, but nothing with Excel worked. I found several threads on Google that mention as much.

I finally found a very nice thread on TechNet; the answer, from a user named JensKalski, recommends creating a Desktop folder under systemprofile. I had read this before, though it escapes me now where I saw it, but as soon as I created this folder on my server, I got the printout!
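If you have hit the same wall, the fix amounts to creating an empty Desktop folder under the systemprofile directory. A quick sketch; which of the two paths applies depends on whether you are running 32-bit or 64-bit Office:

```powershell
# Create the Desktop folder Office expects when running non-interactively
New-Item -ItemType Directory -Path 'C:\Windows\System32\config\systemprofile\Desktop' -Force
New-Item -ItemType Directory -Path 'C:\Windows\SysWOW64\config\systemprofile\Desktop' -Force
```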

YAY! Thanks Jens!

Exporting Event logs in the normal Event Log format

I’ve decided that I’d like to be able to export my event logs in their native .evtx file format. This appears to be faster than converting them all to .csv files. Early on I ran into a few problems, the first of which was that I was unable to convert what was in my head into something that Google understood! Once I got over that, I found what I was looking for.

System.Diagnostics.Eventing

System.Diagnostics.Eventing.Reader

For the purposes of my function, what I’m looking for is found within the Reader namespace. I’d like my function to have a similar look and feel to the built-in cmdlets, like Get-WinEvent. So the first thing I decided to do was implement a -ListLog switch parameter.

This parameter will call the GetLogNames() method of the EventLogSession class. So the first thing you need to do is create a new session.

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession

Once we’ve done that we simply call the GetLogNames() method from our new object and a list of logs will appear

$EventSession.GetLogNames()

Application
HardwareEvents
Internet Explorer
Key Management Service
Media Center
OAlerts
Operations Manager
Security
System
Windows PowerShell

The next thing I need to be able to do is the actual exporting of the logs. There are two methods exposed in the EventLogSession class for this: the first is ExportLog() and the second is ExportLogAndMessages(). The documentation states that the difference between the two is that the latter exports the log along with its messages. To be safe, I’ll use the latter, ExportLogAndMessages(), which will grab that metadata.

This is where I ran into the first hiccup. The parameters break down as follows

  • Path | LogName as String
  • PathType as PathType
  • Query as String
  • targetFilePath as String

Now, most of the examples I found online appeared to treat PathType as an object. The problem is it really isn’t; it’s a string that contains either the word 'LogName' or 'FilePath', which PowerShell coerces to the PathType enumeration for you. Technically that really isn’t even a problem, it seems to me to be more of a documentation issue. But it could also be poor understanding on my part; at any rate, there are several ways to deal with this and I chose the easy one.

Since I’m going to assume that you want to export an actual EventLog and not a file, for obvious reasons, then I’m only going to give you the option of LogName. This makes exporting your log look something like this.

$EventSession.ExportLogAndMessages($LogName,'LogName','*',$Destination)

Now I could have made it look much more complicated by changing 'LogName' to something like this

$EventSession.ExportLogAndMessages($LogName,[System.Diagnostics.Eventing.Reader.PathType]::LogName,'*',$Destination)

But that just seemed to me to be too much.

I’m ignoring the Query option for now and focusing on targetFilePath. In testing, this works beautifully: you pass in the full path and filename of the file to be created, and it appears. When I started testing this against remote machines I ran into my second problem.
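Incidentally, the Query parameter takes an XPath filter over the event XML rather than only the '*' wildcard. A sketch of what that could look like (the log name and destination path here are placeholders of my own) exporting just Error-level entries:

```powershell
$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession
# '*[System[Level=2]]' matches only Error-level events
$EventSession.ExportLogAndMessages('Application','LogName','*[System[Level=2]]','C:\LogFiles\AppErrors.evtx')
```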

When I create my session against a remote computer

$ComputerName = 'ServerA'

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession($ComputerName)

I can get the proper list of logs, but when I ran the ExportLogAndMessages() method, I didn’t see the exported logfile in my folder. It turns out you need to be aware of the context: if you are connecting to a remote machine, everything gets executed on that remote machine. That means that when the following code is executed

$Destination = 'C:\LogFiles\Application.evtx'

$EventSession.ExportLogAndMessages($LogName,'LogName','*',$Destination)

That file actually exists on the remote filesystem (ServerA) and not the local disk. At the moment I’ve not decided how I want to handle this, or if I even want to bother. You see, when I attempt to trick the method and provide a UNC path I get the following

$EventSession.ExportLogAndMessages('Application','LogName','*','\\pc01\C$\LogFiles\app.evtx')

Exception calling “ExportLogAndMessages” with “4” argument(s): “Attempted to perform an unauthorized operation.”
At line:1 char:29
+ $EventSession.ExportLogAndMessages <<<< ('Application','LogName','*','\\pc01\C$\LogFiles\app.evtx')
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

My next obstacle was credentials. Remote machines may require a different user/pass combination than your current login context. Fortunately I can pass that information into the class; one of the constructors takes five parameters

  • ComputerName as string
  • Domain as string
  • Username as string
  • Password as SecureString
  • LogonType as SessionAuthentication

Since I store my own admin credentials locally in a file I know I have access to most of that information right from the console. The first two examples will display my logon domain and username.

$Credential.GetNetworkCredential().Domain

$Credential.GetNetworkCredential().Username

The next one is a little scary, but if you think about it, it’s not as bad as you might think. First off, running this command will display my unencrypted password on the console! The HORROR! It’s really ok; the reason that works is that I set it in my context, so I have access to it. Get it? It’s ok if you don’t, it took me a while to figure that out as well: it’s encrypted in memory, so while I can view it in clear text, another user on the same system shouldn’t be able to.

$Credential.GetNetworkCredential().Password

The only problem with the previous command is that the password it outputs is a string. The constructor needs it as a SecureString. Fortunately the following command returns just that.

$Credential.Password

Now, I’m by no means an expert on .Net. I’m not even sure I would say I’m knowledgeable, but I certainly know enough to be extremely dangerous. As I was looking at the page that described how to connect remotely, I noted that LogonType was worded in a similar fashion to PathType, so before I got carried away I decided to try each of the four LogonTypes.

  • Default
  • Negotiate
  • Kerberos
  • NTLM

In my testing against a remote machine where my current user context had no rights, my admin credentials worked with each of the various types. So as far as I’m concerned that seems to work, and I decided to stick with Default.
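Putting the pieces together, here is a sketch of building the remote session with stored credentials; $ComputerName and $Credential are assumed to already exist:

```powershell
$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession(
    $ComputerName,
    $Credential.GetNetworkCredential().Domain,
    $Credential.GetNetworkCredential().UserName,
    $Credential.Password,
    [System.Diagnostics.Eventing.Reader.SessionAuthentication]::Default)
```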

So now I’m able to connect to a local or remote machine and export the logs to an existing folder on the hard drive. That leaves one final problem to deal with: handling a folder that doesn’t exist yet. I leave it up to the user to pass in the folder and filename to write to, so if it doesn’t exist I need to make it. I had thought about splitting the Destination variable into two, FilePath and FileName, but decided I didn’t want to do that.

Since I’m treading the deep waters of .Net I decided that since my Destination looks like a legitimate path, it may behave like one. I started browsing the System.IO namespace and originally was looking at File, and then realized I was dealing with a directory, which made things much easier.

I know that there is a parent property when you grab a path using Get-ChildItem so I figured there ought to be something similar in System.IO.Directory. Turns out it’s more or less exactly the same thing.

I kind of have this phobia about tweaking data that is passed into my scripts, so while this looks ugly, I’m really quite pleased.

([System.IO.Directory]::GetParent($Destination)).FullName

What does this do? Well, assuming that Destination is C:\LogFiles\Application.evtx, that code returns C:\LogFiles, but if it happens to be C:\LogFiles\Path\To\A\Really\Deep\Folder it returns everything above Folder. Which works out quite nicely. I’m assuming that the tail end will be a filename, so I ask .Net for the parent path of the filename and then create that path.

Locally creating this was simple, but we run into issues again remotely. While New-Item has a -Credential parameter, the FileSystem provider doesn’t support it. So instead of getting crazy I decided to use a ScriptBlock and the Invoke-Command cmdlet.

Since we are passing variables to a remote machine, by default ServerA won’t know what Destination represents, so we use the -ArgumentList parameter of Invoke-Command.

$ScriptBlock = {New-Item -Path $args[0] -ItemType Directory -Force}

Invoke-Command -ScriptBlock $ScriptBlock -ComputerName $ComputerName -Credential $Credential -ArgumentList (([System.IO.Directory]::GetParent($Destination)).FullName) |Out-Null

As you can see in my ScriptBlock, $args[0] represents the path we need to create. For that to make it over to the remote machine, the Invoke-Command line passes in my corrected Destination via -ArgumentList.

The result is a working Export-EventLogs function that will actually export the log in the native format. It was a lot of work to get this all together, but I think it will be very useful. I decided against any sort of clearing function since there is already a built-in for that, but I haven’t seen a built-in for exporting the logs.

This function can also be downloaded from my TechNet Gallery

Get recent events from servers

I’ve been working with Microsoft on an issue that I am having with my DPM server. We have been doing some fairly intense logging, and today I enable several performance counters in an attempt to ascertain if something external is triggering this issue.

Along those lines I thought it would be cool to get a list of log entries from two hours before the event occurs. The event I’m tracking is DPM 3101, Volume Missing. We have seen that during a regular backup something happens and then DPM stops with the message that the disk I’m backing up to is no longer connected.

I’ve started a thread and have participated in several other threads on the forums about this issue.

At any rate, I decided that I would write a script that would grab up all the events from my DPM server and my two file servers, that I’m backing up. The hope is that maybe something interesting will be logged.

Why the two hours? Well, it’s silly, but I’ve noticed that two hours seems to be significant in the timeline of how these things are happening.

The script is also available on the TechNet Gallery

Updated New-PrintJob script

The information I’m going to cover here was previously covered on TechNet. I’m posting this because this morning I came across an error in my PrintLogger script. To be fair, while it was an error in the script, there is something else going on. I have created a thread, but I don’t know if I’ll get much in the way of response, as the only hit on Google for the exact error message is a German site.

The gist of my problem is that when a job is submitted, I use Get-WinEvent to pull in all the events where the Event ID is 307. This is the job printed event and has all the details for the job that I’m interested in. On a busy server this can be a fairly large list, and while at the time of the error there were only about 2,100 entries in the log, it was causing Get-WinEvent to fail and nothing was logged.

The quick fix was to tack -ErrorAction SilentlyContinue onto the Get-WinEvent cmdlet. This allowed the code to continue past the error. Another fix would have been to limit the number of entries returned, but that still wouldn’t be terribly accurate. Then I remembered the article I listed at the top, and that I had been messing around with it.

The idea here is, when Event ID 307 occurs, to pass the script the Record ID of the event that originally triggered the task. The original article talks about various ways of displaying this information; since I’m working in PowerShell I was more interested in the second.

The code to add is below, and you can add more entries based on the detailed view of a given event. I’ve not tried any others as all I need is the EventRecordID.



  • Event/System/Channel
  • Event/System/EventRecordID
  • Event/System/Level


I followed the steps below, with the exception of not using the command-line to create and delete a task. I did this originally but later skipped that part, as an import was much simpler.

  1. Create a task based event
  2. Right click the task and choose to export it
  3. Edit the XML file add the code above between the EventTrigger tags, and save
  4. Delete the original task
  5. Import the XML file and modify the properties for the action
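For reference, after step 3 the trigger section of the exported XML ends up looking something like this. This is a sketch based on the Task Scheduler schema: the Subscription query is whatever your task already had, and the value names are my own choice.

```xml
<EventTrigger>
  <Enabled>true</Enabled>
  <Subscription><!-- the event query the wizard created --></Subscription>
  <ValueQueries>
    <Value name="EventRecordID">Event/System/EventRecordID</Value>
    <Value name="EventChannel">Event/System/Channel</Value>
  </ValueQueries>
</EventTrigger>
```

Those names can then be referenced in the action’s arguments as $(EventRecordID) and $(EventChannel).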

For the Start a program action I will just refer you back to the article; all you need to remember is to add two additional parameters to your PowerShell script, $EventRecordID and $EventChannel.

$EventRecordID is the record number of the event that triggered this task

$EventChannel is the log where the event can be found

There was very little adjustment that needed to be done to the original script. I’ll test it for a day, but in limited testing the updated script produced identical results to the original.

This script is also available on the TechNet Gallery.

DPM Sizing Script

Yesterday I told you how I had decided to automate a portion of my DPM routine. As usual this got the fires burning, and a second script was born. I would have told you about it yesterday, but I wanted to maintain the appearance of doing actual work 😉

So today I give you the Get-DPMSizingValues.ps1 script. This is basically the portion of the DPM Sizing tool that I use regularly, the part that deals with file servers. I must say I’m rather proud of it as it worked out better than I thought it would. It uses some of the same basic stuff as the previous script, which was nice for me.

My Get-PSDrive statement is a little different. I noticed when I ran this against my Windows 7 machine that I had a lot of cruft I didn’t care about, so you’ll note the Where-Object bit. That filters out any results that have no used space.


Get-PSDrive -PSProvider FileSystem |Where-Object {$_.Used -gt 0} |Select-Object -Property Name, @{Label='Used';Expression={$_.Used /1gb}}

The nitty gritty part of it uses the same formula found in the spreadsheet. Now, there are some values that are hard-coded as these are direct from Microsoft and I don’t really know what they mean as they have not been terribly forthcoming about it, or my fu is just not working for me today.


if (($ReplicaOverheadFactor/100) -gt 1)
{
    $ReplicaVolume = $VolumeIdentifier.Used * ($ReplicaOverheadFactor/100)
}
else
{
    $ReplicaVolume = $VolumeIdentifier.Used * 1.5
}

if ($VolumeIdentifier.Used -gt 0)
{
    $ShadowCopyVolume = ($VolumeIdentifier.Used * $RetentionRange * ($DataChange/100)) + (1600/1024)
}

So I just found a bug while writing this and fixed it; it turned out I forgot to convert ReplicaOverheadFactor into a fraction in that first test. Oh well, it’s working now, which is good. At any rate, that is the heart of the script, and it gets looped through for every drive that has used space. I had thought about not doing the second test, since my scriptblock shouldn’t return any volumes that have zero used space, but what the heck, it doesn’t hurt anything.

The resulting output is pretty nifty. I would imagine you could potentially pipe this into a DPM cmdlet, but I haven’t verified that. If someone needs it I’ll look into doing that, but for now it’s a very nice little reporting tool that will give you calculated values for Replica Volumes and ShadowCopy Volumes.


Name : C
UsedSpace : 44.3877143859863
Retention : 7
Replica : 53.2652572631836
ShadowCopy : 32.6339000701904
DataChange : 10
ReplicaOverhead : 120
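You can sanity-check that output against the formulas by hand. With 44.3877 GB used, a 120% replica overhead, 7 days retention, and 10% data change:

```powershell
44.3877143859863 * (120/100)                     # Replica    -> 53.2652572631836
(44.3877143859863 * 7 * (10/100)) + (1600/1024)  # ShadowCopy -> 32.6339000701904
```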

There is also a version on the Technet Gallery.

Weekly DPM Monitoring

Part of my responsibility is handling storage. This includes allocating, deallocating, backing up and restoring. Now we’ve been using DPM for quite some time and are currently running on DPM 2010. Since this past summer I have personally come to peace with the fact that my users don’t know what the delete key is, so I have set some things in place to make it easy for me to monitor overall usage of storage for the School.

Since storage is always increasing, three weeks ago I decided that I would start to regularly monitor the used space on the file servers and update DPM accordingly. For that I used the DPM sizing tool, it’s a wonderful set of spreadsheets and scripts and if you haven’t played with them, you should!

What I love most about this tool is that you can just type in the used space of a given volume and it will calculate, based on various settings, the new size of the Replica Volume and Recovery Point Volume. So, for the past three weeks I’ve been manually opening up the spreadsheet, firing up RDP, connecting to my server and running Get-PSDrive from inside PowerShell.

For whatever reason, today I decided that enough was enough and to automate this for myself. After all, I get regular updates from my file server when it runs out of space so I can add more; why can’t I have something similar for DPM? That’s how the Update-DPMSpreadSheet.ps1 script was born.

The idea is pretty simple, for each file server get a list of drives and the amount of used space in GB. So I created a scriptblock that gives me the bits of information I require.


Get-PSDrive -PSProvider FileSystem |Select-Object -Property Name, @{Label='Used';Expression={$_.Used /1gb}}

I use Invoke-Command, pass it a session object and the above scriptblock, and capture the results. When I’m done I close out of my session with Remove-PSSession, that way I don’t consume too many resources.
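The collection loop is roughly this; a sketch in which $FileServers and $Credential are assumptions for illustration:

```powershell
$ScriptBlock = {
    Get-PSDrive -PSProvider FileSystem |
        Select-Object -Property Name, @{Label='Used';Expression={$_.Used /1gb}}
}
$Results = @{}
foreach ($Server in $FileServers)
{
    # One session per server; tear it down as soon as we have the data
    $Session = New-PSSession -ComputerName $Server -Credential $Credential
    $Results[$Server] = Invoke-Command -Session $Session -ScriptBlock $ScriptBlock
    Remove-PSSession -Session $Session
}
```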

There is a maximum number of concurrent sessions an account can have open. The default is 5, and it can be modified as needed; please see the following article for details on how to do this.

Once I have all that data I create a new instance of Excel, open the DPM Sizing Tool spreadsheet, and set my worksheet to the DPM File Volume sheet. I use the Volume Identification column to match up against the list of drives that are returned from my servers. As of v3.3 of this tool that column is column D. Once I find the current drive in the spreadsheet I hop over one column and update the value of the Used space in GB column (Column E as of v3.3).
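The Excel portion looks roughly like this; a sketch under the assumption that the collected drive data is already in $Drives and that $SpreadsheetPath points at your copy of the workbook:

```powershell
$Excel = New-Object -ComObject Excel.Application
$Excel.Visible = $false
$Workbook = $Excel.Workbooks.Open($SpreadsheetPath)
$Sheet = $Workbook.Worksheets.Item('DPM File Volume')
$Row = 2
while ($Sheet.Cells.Item($Row, 4).Text)                      # column D: Volume Identification
{
    $Drive = $Drives |Where-Object {$_.Name -eq $Sheet.Cells.Item($Row, 4).Text}
    if ($Drive)
    {
        $Sheet.Cells.Item($Row, 5) = $Drive.Used             # column E: Used space in GB
    }
    $Row++
}
$Workbook.Save()
$Excel.Quit()
```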

If there are any errors along the way, I log them to the Application log and close out of everything.

I had thought about creating a scheduled job to have this run every Monday, but seeing as how my computer might be off or something I took the low-tech route. I updated my $PROFILE with the following chunk of code.


if ((Get-Date).DayOfWeek -eq 'Monday')
{
    C:\scripts\powershell\production\Update-DPMSpreadSheet.ps1
    Invoke-Item 'C:\Users\jspatton\SyncStuff\DPMvolumeSizing v3.3\DPMvolumeSizing.xlsx'
}

Hopefully it’s pretty straightforward, if today is Monday, run the Update-DPMSpreadSheet.ps1 script, and then open it up in Excel.

I have also uploaded a version of this script to the Technet Gallery.

Windows EventLog Management - Part 2

How to get the log to let you know when something happened

Event Triggers

  • Specify a custom action when a particular event occurs
    • Start a program
    • Send an email
    • Display a message
  • Use scripting to give yourself flexibility
  • Be careful about email

Triggers are one of those really awesome things that you wish had been around in Windows from the beginning. The idea is that when a particular event occurs, you want to perform some action. You can start a program or script, send an email, or display a message; those first two are perhaps the ones you’ll use most.

For myself I find the Start Program option the best of the bunch, being a sysadmin I find myself routinely writing scripts to perform one or more things. If I’m interested in a particular event I can create a script that will give me additional information surrounding that event.

I have a few of these in place right now. On my file server I have a trigger on Event ID 2013, the low disk message. The default message is rather cryptic, simply stating that a given disk is getting close to full. Fortunately it does give me a vital piece of information, the drive letter. So I have a script that pulls that entry from the log, grabs the drive letter, and queries WMI for the free space of the disk; the script stores that as an XML file that I have the task email to me. So you can use a script to flesh out a rather vague entry.
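That script boils down to something like this; a sketch in which the output path and the exact shape of the event message are assumptions:

```powershell
# Pull the most recent low disk event and report the drive's free space
$Event = Get-WinEvent -FilterHashtable @{LogName='System'; Id=2013} -MaxEvents 1
if ($Event.Message -match '(?<Drive>[A-Z]:)')
{
    $Disk = Get-WmiObject -Class Win32_LogicalDisk -Filter "DeviceID='$($Matches.Drive)'"
    $Report = New-Object -TypeName PSObject -Property @{
        Drive  = $Matches.Drive
        FreeGB = [math]::Round($Disk.FreeSpace /1gb, 2)
        Time   = $Event.TimeCreated
    }
    $Report |Export-Clixml -Path 'C:\Reports\LowDisk.xml'    # path is an assumption
}
```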

On the opposite side of that coin, there are some events you are interested in that happen so frequently that sending an email each time would be overwhelming. Going back to my example of the print server logs, I manage two print servers that I have divided between lab use and staff/faculty use. I have written my own print logging script that generates a daily CSV of printer usage. With two servers, about 50 printers, and over 3,000 users who can print to them, you can imagine what my inbox would look like if I had that emailed to me for each print job.

Creating an Event Trigger

  • Find the event you want to be notified about
  • Create a script that gives you more info
  • Attach a task to the Event
  • Choose an Action
  • Configure the Action
  • Set the context for the Task

Now that you are familiar with your logs, and have determined what specific log entry you want to know about, it’s time to do something about it. The example I will be using is from my DHCP server, I’d like to know when a computer asks for an IP and is denied because the MAC address is unknown to me.

I have written a script that gives me the MAC, Hostname, Message, and Time at which the client asked. Since a given client may potentially ask every 5 minutes until it gets a lease, I don’t want an email. In fact, since a given client can ask multiple times, I just want a file with the MAC address as part of it so I can, at a glance, get an idea of how many devices are trying to connect.

Find the event

FindEvent

There are actually two events that I’m interested in, this means that I’ll need my script to accept the Event ID as a parameter. Also, neither of these events are Error or Warning events, merely informational, letting me know a computer was unable to get an address.

Create a script

Get-DHCPDenies

I’m pretty good at writing scripts to get the information I need, but if you’re not comfortable scripting by all means you could run a command-line utility. There are quite a few available in the Sysinternals suite, not to mention some very handy built-in tools on Windows Server 2008. This script accepts the EventID and outputs an XML file named for the MAC that triggered the event.

Create the trigger

task1

Give your task a name and a description.

Choose an action

task2

Pick whether you need to start a program, send an email, or display a message. The wizard only allows you to set one Action, but be aware that you can have as many as you want, so pick one to start with and then mix and match later!

Configure your action

task3

So if you’re using a script you need to specify the script interpreter to run. For this example I’m running a PowerShell script which is why I typed in powershell.exe. But it could just as easily have been Cscript, or Python, or the utility of your choice. If you’re running a script then the argument is the script itself along with any parameters you need to pass it. I keep all my scripts in the same place, so I define the Start In folder to be that location.

Set the context

task4

You will notice that I have set this task to run whether or not someone is logged in. I have not stored a password with this account so it will run as the system. That’s something to keep in mind, if you’re uncomfortable doing this, you may want to create a service account to run as.

That’s it, after you click Ok, the trigger is done. All you need to do now is sit back and watch as those files get created.

Now that we have our triggers, let’s see how we can get a notification when something happens.

Part 1

Part 2

Part 3

ExitCodes Part 2

So, yesterday I mentioned that I re-wrote the inventory script. Today I decided to re-write the reboot script. The idea behind the script is that once a week we bounce all the lab computers. We do this for various reasons, but since I was in the mood I decided today was the day to tackle that problem.

The last time I talked about this, I got a little off the beaten path hunting down all possible exit codes for the shutdown.exe command. While not wrapped around the axles this time, I did have to figure out how to deal with it.

The nice thing about PowerShell is that when running a command you have access to $LASTEXITCODE. This contains exactly what you think it contains: the numeric return code from the command-line program. Before I get too far ahead, I do want to mention that when I last wrote about exit codes I found them on the Symantec site (still works). Today I found an archived newsgroup that had a link to the MSDN site, so I’ll put that here.

Ok, so since I was re-writing this thing I decided I wanted to be a little more accurate in reporting the errors encountered. Now, it was impossible for me to find which error codes are returned from shutdown.exe, most likely because it could be any number. So then I started looking at how I could get at whatever it was using $LASTEXITCODE.

Buried deep in my brain I remembered that there was a net command that would give you a text version of the number.


net helpmsg 53

The network path was not found.

That seemed perfect. What happens if I use $LASTEXITCODE?


net helpmsg $LASTEXITCODE

The operation completed successfully.

BRILLIANT! This was perfect, I decided to store the result in a variable and then write it out. The only problem, really more of a hassle, was that it returns a string array.


$result = (& net helpmsg $LASTEXITCODE)

$Result.Count

3

After some poking around I realized that the first row is blank, the second row contains the message, and the remaining rows were empty. So in my case, one line padded top and bottom with empty rows. Then I began to wonder: are all the messages one-liners? So I wrote up a little routine to display all the messages; I’ll give you the final version of it.


$ExitCodes = (0..15818)
foreach ($ExitCode in $ExitCodes)
{
    try
    {
        $ErrorActionPreference = 'SilentlyContinue'
        (& net helpmsg $ExitCode)[1]
    }
    catch
    {
    }
}

You might be asking why I stopped at 15818. If you visited the link I gave you earlier you would have noticed that the codes run higher than that; in fact, the last page of that list is System Error Codes (12000-15999). Well, if you scroll to the bottom of that page, you will note it stops at 15818. I don’t know why, but I figured why go any higher, right? Well, I did, and there isn’t anything there.

This script is pretty straightforward: it loops through each number and passes it to net helpmsg. All I did then was ask for the second row, [1], of the returned array. While I didn’t count all the returned messages, there were a lot, and for my situation the one line on the second row was plenty.


The script can also be downloaded from TechNet.

PowerShell New-AdInventory script

I may have mentioned on here before that we rely quite heavily on Active Directory, and it’s true. It’s at the core of nearly all the services we deliver, the only exception would be the web, and that would really only be the public facing web sites.

I’ve also mentioned before that I’ve been moving over from VbScript to PowerShell, and I think it’s safe to say that I moved over quite a while ago. If you’ve not browsed my scripts you should head over to my code.google.com site to see what I’ve done.

Anyway, today I was working on a problem with a script that runs from cron, and after fixing that one I realized I was still using my old inventory script to update Active Directory computer objects with some useful information. So I decided it was time to roll this script over to PowerShell. Now, while I’d like to say the new and improved one is much more wicked awesome, it’s not; it’s just all PowerShell’d up.

In the previous script I had created several functions to do things like send data to the event log, a rather generic function to return values from a remote computer via WMI, and a nice little function to ping the computer; although looking back at the code I noticed that last one isn’t actually there, I should fix that.

At any rate the new script seems to go a little faster, and it certainly doesn’t look any shorter but most of that is actually documentation. Although technically since I dot-source in a library it’s significantly larger than the previous script.

This runs every hour and pulls the UserName, MacAddress, IPAddress and SerialNumber from the remote computer via WMI. I then write these values back to the computer object more or less using the same properties. Although description becomes UserName and ipHostNumber becomes IPAddress.
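The WMI collection amounts to something like this; a sketch of the general approach, since the exact classes and property handling in my script may differ:

```powershell
$CS   = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $ComputerName
$BIOS = Get-WmiObject -Class Win32_BIOS -ComputerName $ComputerName
$NIC  = Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName $ComputerName |
        Where-Object {$_.IPEnabled}

# The four values written back to the computer object
$UserName     = $CS.UserName
$SerialNumber = $BIOS.SerialNumber
$MacAddress   = ($NIC |Select-Object -First 1).MACAddress
$IPAddress    = ($NIC |Select-Object -First 1).IPAddress[0]
```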

The nice thing is that we can then visually scan a given OU and see who might be logged into a computer. If there is an issue connecting to a computer, that is also written to the description property. That way as you browse your AD you can easily see which computers have problems, typically these are also dead computer accounts.

The code is also available on Technet.

Windows Server 8 Beta Failover Clustering and PowerShell

So the last two posts (one, two) were just some screenshots and comments as I went through and created a failover cluster. To be fair this wasn’t my first go-round; I created a cluster earlier with just one computer so I could see the PowerShell stuff.

I must say, 81 PowerShell commands to handle clustering, not too shabby. The first cluster I created was with the New-Cluster cmdlet.

New-Cluster -Name win8-hv -Node win8-hv1 -NoStorage -Verbose
Name
----
win8-hv

The progress bar flashed for a bit as it did stuff and then there was a cluster. It didn’t take a long time but it was rather hot I must say.

Here are all the new commands

Get-Command |Where-Object {$_.ModuleName -eq 'FailoverClusters'} | Format-Table -Property Capability, Name -AutoSize

Capability Name
---------- ----
Cmdlet Add-VMToCluster
Cmdlet Remove-VMFromCluster
Cmdlet Add-ClusterCheckpoint
Cmdlet Add-ClusterDisk
Cmdlet Add-ClusterFileServerRole
Cmdlet Add-ClusterGenericApplicationRole
Cmdlet Add-ClusterGenericScriptRole
Cmdlet Add-ClusterGenericServiceRole
Cmdlet Add-ClusterGroup
Cmdlet Add-ClusteriSCSITargetServerRole
Cmdlet Add-ClusterNode
Cmdlet Add-ClusterPrintServerRole
Cmdlet Add-ClusterResource
Cmdlet Add-ClusterResourceDependency
Cmdlet Add-ClusterResourceType
Cmdlet Add-ClusterScaleOutFileServerRole
Cmdlet Add-ClusterServerRole
Cmdlet Add-ClusterSharedVolume
Cmdlet Add-ClusterVirtualMachineRole
Cmdlet Add-ClusterVMMonitoredItem
Cmdlet Block-ClusterAccess
Cmdlet Clear-ClusterDiskReservation
Cmdlet Clear-ClusterNode
Cmdlet Get-Cluster
Cmdlet Get-ClusterAccess
Cmdlet Get-ClusterAvailableDisk
Cmdlet Get-ClusterCheckpoint
Cmdlet Get-ClusterGroup
Cmdlet Get-ClusterLog
Cmdlet Get-ClusterNetwork
Cmdlet Get-ClusterNetworkInterface
Cmdlet Get-ClusterNode
Cmdlet Get-ClusterOwnerNode
Cmdlet Get-ClusterParameter
Cmdlet Get-ClusterQuorum
Cmdlet Get-ClusterResource
Cmdlet Get-ClusterResourceDependency
Cmdlet Get-ClusterResourceDependencyReport
Cmdlet Get-ClusterResourceType
Cmdlet Get-ClusterSharedVolume
Cmdlet Get-ClusterVMMonitoredItem
Cmdlet Grant-ClusterAccess
Cmdlet Move-ClusterGroup
Cmdlet Move-ClusterResource
Cmdlet Move-ClusterSharedVolume
Cmdlet Move-ClusterVirtualMachineRole
Cmdlet New-Cluster
Cmdlet Remove-Cluster
Cmdlet Remove-ClusterAccess
Cmdlet Remove-ClusterCheckpoint
Cmdlet Remove-ClusterGroup
Cmdlet Remove-ClusterNode
Cmdlet Remove-ClusterResource
Cmdlet Remove-ClusterResourceDependency
Cmdlet Remove-ClusterResourceType
Cmdlet Remove-ClusterSharedVolume
Cmdlet Remove-ClusterVMMonitoredItem
Cmdlet Repair-ClusterSharedVolume
Cmdlet Reset-ClusterVMMonitoredState
Cmdlet Resume-ClusterNode
Cmdlet Resume-ClusterResource
Cmdlet Set-ClusterLog
Cmdlet Set-ClusterOwnerNode
Cmdlet Set-ClusterParameter
Cmdlet Set-ClusterQuorum
Cmdlet Set-ClusterResourceDependency
Cmdlet Start-Cluster
Cmdlet Start-ClusterGroup
Cmdlet Start-ClusterNode
Cmdlet Start-ClusterResource
Cmdlet Stop-Cluster
Cmdlet Stop-ClusterGroup
Cmdlet Stop-ClusterNode
Cmdlet Stop-ClusterResource
Cmdlet Suspend-ClusterNode
Cmdlet Suspend-ClusterResource
Cmdlet Test-Cluster
Cmdlet Test-ClusterResourceFailure
Cmdlet Update-ClusterIPResource
Cmdlet Update-ClusterNetworkNameResource
Cmdlet Update-ClusterVirtualMachineConfiguration

So let’s play a little.

jeffpatton.admin@WIN8-HV1 | 12:56:01 | 03-20-2012 | C:\Users\jeffpatton.admin #
Get-ClusterGroup

Name OwnerNode State
---- --------- -----
Available Storage win8-hv2 Offline
broker win8-hv2 Online
Cluster Group win8-hv2 Online


jeffpatton.admin@WIN8-HV1 | 12:56:04 | 03-20-2012 | C:\Users\jeffpatton.admin #
Remove-ClusterGroup -Name broker -RemoveResources

Remove-ClusterGroup
Are you sure that you want to remove the clustered role 'broker'? The resources will be taken offline.
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
jeffpatton.admin@WIN8-HV1 | 12:56:20 | 03-20-2012 | C:\Users\jeffpatton.admin #
Remove-Cluster -Name win8-hv

Remove-Cluster
Are you sure you want to completely remove the cluster win8-hv?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
jeffpatton.admin@WIN8-HV1 | 12:56:45 | 03-20-2012 | C:\Users\jeffpatton.admin #
Get-Cluster
Get-Cluster : The cluster service is not running. Make sure that the service is running on all nodes in the cluster.
There are no more endpoints available from the endpoint mapper
At line:1 char:1
+ Get-Cluster
+ ~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-Cluster], ClusterCmdletException
+ FullyQualifiedErrorId : Get-Cluster,Microsoft.FailoverClusters.PowerShell.GetClusterCommand

So I just dumped the cluster. I think I’ll create the cluster again with a single node and then add a node after the fact, since there is a cmdlet for that.

New-Cluster -Name win8-cluster -Node win8-hv1 -NoStorage -Verbose

Name
----
win8-cluster

Let’s confirm that it’s there.

Get-Cluster |Format-List -Property *


Domain : soecs.ku.edu
Name : win8-cluster
AddEvictDelay : 60
BackupInProgress : 0
ClusSvcHangTimeout : 60
ClusSvcRegroupOpeningTimeout : 5
ClusSvcRegroupPruningTimeout : 5
ClusSvcRegroupStageTimeout : 5
ClusSvcRegroupTickInMilliseconds : 300
ClusterGroupWaitDelay : 120
MinimumNeverPreemptPriority : 3000
MinimumPreemptorPriority : 1
ClusterEnforcedAntiAffinity : 0
ClusterLogLevel : 3
ClusterLogSize : 300
CrossSubnetDelay : 1000
CrossSubnetThreshold : 5
DefaultNetworkRole : 2
Description :
FixQuorum : 0
HangRecoveryAction : 3
IgnorePersistentStateOnStartup : 0
LogResourceControls : 0
PlumbAllCrossSubnetRoutes : 0
PreventQuorum : 0
QuorumArbitrationTimeMax : 20
RequestReplyTimeout : 60
RootMemoryReserved : 4294967295
RouteHistoryLength : 0
SameSubnetDelay : 1000
SameSubnetThreshold : 5
SecurityLevel : 1
SharedVolumeCompatibleFilters : {}
SharedVolumeIncompatibleFilters : {}
SharedVolumesRoot : C:\ClusterStorage
SharedVolumeSecurityDescriptor : {1, 0, 4, 128...}
ShutdownTimeoutInMinutes : 20
UseNetftForSharedVolumes : 1
UseClientAccessNetworksForSharedVolumes : 0
SharedVolumeBlockCacheSizeInMB : 0
WitnessDatabaseWriteTimeout : 300
WitnessRestartInterval : 15
EnableSharedVolumes : Enabled
DynamicQuorum : 1
Id : d4e05676-cf3d-4814-a828-f32e106bb1c0

Let’s see some information about the node.

Get-ClusterNode |Format-List *


Cluster : win8-cluster
State : Up
Id : 1
Name : win8-hv1
NodeName : win8-hv1
NodeHighestVersion : 467002
NodeLowestVersion : 467002
MajorVersion : 6
MinorVersion : 2
BuildNumber : 8250
CSDVersion :
NodeInstanceID : 00000000-0000-0000-0000-000000000001
Description :
DrainStatus : NotInitiated
DrainTarget : 4294967295
DynamicWeight : 1
NodeWeight : 1

Okay, let’s add a node now. I chopped off the crazy long report filename.

Add-ClusterNode -Name win8-hv2 -Cluster win8-cluster -NoStorage -Verbose
Report file location: C:\Windows\Cluster\Reports\Add Node Wizard

Let’s verify:

Get-ClusterNode

Name ID State
---- -- -----
win8-hv1 1 Up
win8-hv2 2 Up

How many clusters do I have? Seems like a lot, but the WIN8-HV and DEV-CLUSTER clusters aren’t actually there anymore.

Get-Cluster -Domain soecs.ku.edu

Name
----
CLUSTER
DEV-CLUSTER
HYPER-V
WIN8-CLUSTER
WIN8-HV

Let’s add a server role; this is basically a cluster endpoint.

Add-ClusterServerRole -Name Win8ServerRole -Cluster win8-cluster -Verbose

Name OwnerNode State
---- --------- -----
Win8ServerRole win8-hv1 Online

How about some details on that role? I cut out the Type property to keep it readable.

Get-ClusterResource -Name win8serverrole |Get-ClusterParameter

Object Name Value
------ ---- -----
win8serverrole Name WIN8SERVERROLE
win8serverrole DnsName Win8ServerRole
win8serverrole Aliases
win8serverrole RemapPipeNames 0
win8serverrole HostRecordTTL 1200
win8serverrole RegisterAllProvidersIP 0
win8serverrole PublishPTRRecords 0
win8serverrole ResourceData {1, 0, 0, 0...}
win8serverrole StatusNetBIOS 0
win8serverrole StatusDNS 0
win8serverrole StatusKerberos 0
win8serverrole CreatingDC \\DC1.soecs.ku.edu
win8serverrole LastDNSUpdateTime 3/20/2012 6:17:30 PM
win8serverrole ObjectGUID e3fbfe6ba596a447a09fd4e117...