DSC Part 1

Desired State Configuration is a new feature of PowerShell 4.0 that is included out of the box with Windows 8.1 and Windows Server 2012 R2. This feature can be loaded on down-level clients by installing the Windows Management Framework 4.0.

The way it works is pretty straightforward: PowerShell 4.0 introduces a few new keywords, one of which is Configuration. If you've written any PowerShell functions, it operates in much the same fashion as the Function keyword. Within a Configuration you can have one or more Nodes; each Node is defined as a string ("computername"), a variable ($Computer), or a GUID. Within each Node you can have zero or more resources; there are a dozen built-in resources, and you can roll your own. In addition, Microsoft has just released a handful of custom resources that I've not played with yet.

Here is an overview of the process:

The flow is very straightforward: you create a configuration and save it as a .ps1 file. Executing the .ps1 creates a new function in memory, named for your configuration. Run this new function and a subfolder is created, named for the configuration, and inside that folder a .MOF file is created, named for the target node.
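As a minimal sketch (the resource entry is illustrative, not a complete web server build), the configuration behind the example below might look like this:

Configuration BasicWebServer
{
    param ([string[]]$ComputerName = 'localhost')

    Node $ComputerName
    {
        # WindowsFeature is one of the built-in resources
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}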

You will need to run the function in order to create the directory and proper MOF files

BasicWebServer -ComputerName webserver01

To apply that configuration to the local machine you simply run the following

Start-DSCConfiguration -Path .\BasicWebServer

This will run as a job. If you would like to watch it happen, you can add -Wait and -Verbose to the command above and it will display everything it's doing.

Start-DSCConfiguration -Wait -Verbose -Path .\BasicWebServer

This configuration is stored on the computer, and you can test to see if the configuration has drifted by running the following

Test-DSCConfiguration

It will return True if it's happy or False if something is missing. The configuration survives reboots, so you can always run

Get-DSCConfiguration

That will return a collection of configurations that are to be applied to the computer. If you would like to bring the target node back in line with its configuration, simply run the following

Restore-DSCConfiguration

The end result of this command is that your server will now have all the features it's supposed to have available again.

Setspn.exe wrapper

It’s been a while since I’ve posted anything, so I thought I would post about setspn, because you know, it’s so awesome right?

So one of the projects I've been working on lately is the upgrade to SCCM 2012. Outside of a few things it's been going very well. We ran into an issue, though, when we rolled out the production server. Maybe I'll write a post about that; needless to say, part of the solution is SPNs.

Now, I'm no stranger to this tool, but it leaves a LOT to be desired, especially when you consider it came out for Windows Server 2003! So, since I had to do some work with SPNs, I decided I needed a PowerShell way of handling this.

There are really only a handful of things we ever need setspn for: add an SPN to an object, get the SPNs for an object, remove an SPN from an object, find an SPN, or find duplicate SPNs.

So I came up with a handful of functions, based on the built-in help from the setspn utility and the TechNet article about setspn.

Reset-Spn -HostName

This will reset the SPN for the given hostname.

Add-Spn -Service -Name -HostName -NoDupes

This will add an SPN to a given host and optionally check for duplicates within the domain first.

Remove-Spn -Service -Name -HostName

This removes an SPN from a given host.

Get-Spn -HostName

This will return the SPNs for a given host.

Find-Spn -Service -Name

This will find all SPNs of a given service, or of a given name, or both.

Find-DuplicateSpn -ForestWide

This will find all duplicate SPNs within the domain or optionally the entire forest.

Currently my functions are just wrappers for setspn.exe, but I'm planning a V2 that will leverage .NET to handle this; I don't get a lot of flexibility in error handling and output when I shell out to a standalone command. A sketch of one of the current wrappers follows the list below.

  • I want to return objects
  • I want to be able to not have dependencies
  • I want the flexibility of .NET
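To give you an idea of the shape of these, here is a simplified sketch of the Add-Spn wrapper (-A and -S are setspn's documented add and add-with-duplicate-check switches; the real function has a bit more plumbing):

function Add-Spn
{
    param
    (
        [string]$Service,
        [string]$Name,
        [string]$HostName,
        [switch]$NoDupes
    )
    if ($NoDupes)
    {
        # setspn -S verifies the SPN is unique in the domain before adding it
        setspn.exe -S "$Service/$Name" $HostName
    }
    else
    {
        setspn.exe -A "$Service/$Name" $HostName
    }
}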

Hyper-V Server 2012 Cluster with PowerShell Deployment Toolkit

I recently came across a lovely show on Channel 9. It talks about setting up a simple Hyper-V Server 2012 cluster for use in a lab or test environment or whatever. I won’t go over the details, watch the show, it’s great! In addition to that I had come across an article on the Building Clouds Blog, about the PowerShell Deployment Toolkit. So over Memorial Day weekend I decided to stand up my cluster and spin up a test environment similar to what I use at work.

In my environment I have 6 servers: 3 are set aside for Hyper-V, one is my firewall, one is a Domain Controller, and the last is a management server. I'm using my DC as the file server as well. I didn't need the iSCSI target stuff, as I'm using Windows Server 2012 and used the new File and Storage Services to configure my iSCSI drives.

I decided to let vmcreator.ps1 build the VMs for me. Originally I had spun up my own, but I was having difficulties getting the installer to work properly. Turns out there is a requirement that the PDT tools be run from the C: drive of your computer. Also, if you're running them from the server OS, you will need to install the Hyper-V role in order for vmcreator.ps1 to function properly. I don't recall seeing either of those things mentioned in the TechNet article, but I may have overlooked that part.

So, linked from the vmcreator.ps1 article is a great utility, Convert-WindowsImage.ps1, which I used to create my base OS image. The utility is super handy and has a GUI and a cmdline version. I wimped out initially and used the GUI version, pointed it at an ISO of Windows Server 2012, and after a while I had a lovely VHDX ready for vmcreator.ps1.
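From what I remember, the cmdline version boils down to something like this (parameter names from memory, so double-check against the script's own help):

.\Convert-WindowsImage.ps1 -SourcePath .\WindowsServer2012.iso -Edition ServerDatacenter -VHDFormat VHDX -SizeBytes 40GB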

After renaming the half dozen VMs the script had created for me (in record time, btw), I ran installer.ps1. There's not really a whole lot mentioned in the article about its use, but it is rather self-explanatory, and once you realize the limitation to the C: drive it's a no-brainer. That part took me a bit to figure out, as I had an external drive with all the bits downloader.ps1 had downloaded for me.

The end result is I now have the basic System Center infrastructure that I can play with locally to try out new features, or test the scripts and apps I create for work. It was really very slick, and I could totally see how I would use something like this in our QA environment at work.


Operations Manager, Orchestrator and PowerShell Remoting

It's been a very long time since I last posted; the primary reason is most likely laziness on my part, and secondly I've not had a lot to write about. Recently I've been messing around with Orchestrator and automation as a means of passing information off to Zenoss. On the face of it, it seemed a rather trivial task, but it took much longer than I anticipated.

The first go-round with this was a very simple runbook. It had two activities: Monitor Alert and Run .Net Script. The Monitor Alert activity was configured to look for alerts that were not Information alerts. Once an alert occurred that met that criterion, it was passed off to the Run .Net Script activity, which created a log entry with PowerShell.


# Create the log and source on first use, then write the alert details
New-EventLog -LogName 'SCOM Alerts' -Source Category
Write-EventLog -LogName 'SCOM Alerts' -Source Category -EntryType Severity -EventId 1 -Message Name

Note: I didn't include all the gibberish typically seen when copying a runbook into Notepad, so you can assume that Category, Severity and Name are prefixed by a big nasty GUID.

The first hurdle I had to get around was creating new sources. Since I didn't know in advance what they would be, it seemed easier to have them created programmatically; that's what the first line does. But the context under which this runs didn't have the ability to do that, so I created a group, added the service account to that group, and then added that group to the local Administrators group on the server. Finally, I needed to disable UAC, which was preventing this from happening; if someone has a better way of doing this I'm all ears.

The nice part about this stage is I was able to get some alerts generated and have them show up in the newly created log. For testing I picked a server that I was monitoring and then stopped the HealthService service. This would generate a failed heartbeat alert similar to a computer going offline unexpectedly. With some sample log entries I was able to configure the Zenoss server to pull in the specific log and start generating alerts with Zenoss.
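In other words, on the monitored box the whole test amounted to one line (HealthService is the SCOM agent service):

# SCOM raises a failed-heartbeat alert shortly after the agent service stops
Stop-Service -Name HealthService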

While this worked well enough to get started, I wasn't satisfied with the quality of the data being returned. Specifically, I noted that while some alerts contained the name of the computer with the problem, not all did. Looking at the data returned by the Monitor Alert activity, it didn't seem I was getting as many of the details as I needed.

So I decided that some remoting might do the trick for me. With remoting I'm able to use the OpsMgr cmdlets on the management server itself and pull back the full alert details, rather than relying on what the Monitor Alert activity publishes.
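A rough sketch of the idea, pieced together from the links below (the server name and criteria are illustrative, and note the CredSSP second-hop requirement covered in the second link):

$Session = New-PSSession -ComputerName 'rms01' -Credential $Credential -Authentication CredSSP
Invoke-Command -Session $Session -ScriptBlock {
    # The SCOM 2007 R2 snap-in lives on the management server
    Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client
    Set-Location 'OperationsManagerMonitoring::'
    New-ManagementGroupConnection -ConnectionString 'rms01' | Out-Null
    Get-Alert -Criteria 'ResolutionState = 0' | Select-Object Name, NetbiosComputerName, Severity
}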

Links:
http://blog.tyang.org/2012/05/09/using-scom-powershell-snap-in-and-sdk-client-with-a-powershell-remote-session/
http://blogs.msdn.com/b/powershell/archive/2008/06/05/credssp-for-second-hop-remoting-part-i-domain-account.aspx
http://blogs.technet.com/b/stefan_stranger/archive/2010/11/02/using-powershell-remoting-to-connect-to-opsmgr-root-management-server-and-use-the-opsmgr-cmdlets.aspx
http://blogs.technet.com/b/jonathanalmquist/archive/2009/03/19/resolve-all-open-alerts-generated-by-specific-agent.aspx
http://www.systemcentercentral.com/BlogDetails/tabid/143/IndexID/70177/Default.aspx

My thread:
http://social.technet.microsoft.com/Forums/en-US/operationsmanagergeneral/thread/360f3a42-9153-4e2e-b060-73740e8ffe4f/#360f3a42-9153-4e2e-b060-73740e8ffe4f

SCOM 2007 R2 and Get-Event

For whatever reason I've not been able to find what I've been looking for regarding this cmdlet, namely a decent example of its use with the -Criteria parameter. For better or worse I have several event collectors set up, and it would be nice to ask SCOM for a list of specific events. Normally you would think that would be simple, and perhaps for some it is, but for me it was a struggle, that is until yesterday.

I poked around in my history but I couldn’t find the page I was looking at that enlightened me, so I’ll just add my own here in case anyone else is having the same problem.

So I’m looking at a screen that has the following columns:

  • Level
  • Date and Time
  • Source
  • Name
  • Event Number

Now, the examples I have seen show that you pass field=value into the -Criteria parameter, but the problem for me is that Event Number or EventNumber aren't things. In the Event Viewer it's called ID, but in SCOM, ID is the ID of the specific entry you're looking at, much like a primary key in a database.

It turns out that the Event Number field in SCOM is simply Number. I literally felt like Homer Simpson. D'OH!

Get-Event -Criteria 'Number=4729'

That actually yields useful information, assuming of course you're logging Event ID 4729. At any rate, I needed to write this down somewhere, as it's a regular thing for me that up until now has been very difficult.
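The criteria string appears to accept SQL-style expressions against the event properties, so you should be able to combine fields. Something like this is my untested assumption (LoggingComputer being the property I believe holds the source computer):

Get-Event -Criteria "Number=4729 AND LoggingComputer='SERVER01'"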

DPM 2010 console crashes when pushing an agent Install

This is a new one for me; I've been running DPM for quite a while now and I've not seen this behavior. In a recent staff meeting it came up that the DPM server was having some RPC issues, so since I'm jonesing for stuff to do, I said I wouldn't mind taking a look at it.

When you open the DPM Management Console and click the Management tab and then Agents, you are presented with all the servers that have the DPM agent installed. From here you are also able to install, uninstall, or update the agent. Working through the Agent Install wizard, I selected the server to be backed up, entered my credentials, and within a minute received a nasty error message.

<FatalServiceError>
    <__System>
        <ID>19</ID>
        <Seq>0</Seq>
        <TimeCreated>8/1/2012 3:12:18 PM</TimeCreated>
        <Source>DpmThreadPool.cs</Source>
        <Line>163</Line>
        <HasError>True</HasError>
    </__System>
    <ExceptionType>ArgumentException</ExceptionType>
    <ExceptionMessage>Value does not fall within the expected range.</ExceptionMessage>
    <ExceptionDetails>
    System.ArgumentException: Value does not fall within the expected range.
    at System.Management.ManagementScope.Initialize()
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.Win32Cluster.GetNodeClusterState(String nodeName, ConnectionOptions options, UInt32& clusterState)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.CheckForCluster(ProductionServerCollection errorNodesAccessDenied, ProductionServerCollection errorNodesClusterDetectionFailed, ProductionServerCollection errorNodesDRDetectionFailed)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.FormListOfTargetServers(WindowsIdentity runAsIdentity)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.OnLeavePage(LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardPage.RaiseLeavePage(LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.ValidateAndLeavePage(WizardPage page, LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.TraversePagesToTarget(WizardPage startPage, WizardPage targetPage, NavigationDirection direction)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.InternalNavigateToPage(WizardPage targetPage, NavigateEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.NextPage()
    at System.Windows.Forms.Control.OnClick(EventArgs e)
    at System.Windows.Forms.Button.WndProc(Message& m)
    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
    </ExceptionDetails>
</FatalServiceError>

I know, nasty right? At any rate we ran through several different things: making sure the server we wanted to get at had the proper firewall rules, whether we could access the hidden admin share, whether the groups were there, and so on. We even fired up Netmon and reproduced the problem just to make sure they were talking. Everything seemed OK, so we called up Microsoft and opened a ticket.

After talking with one of the DPM support techs, we found that it was an issue with the remote server we were attempting to connect to. While everything appeared to be OK, there was a problem with the RPC settings in the registry: at some point all the entries in the Internet subkey of RPC had been removed. Turns out it's OK if the entire key is missing, or if the key is there and has the proper settings in it, but if it's there and empty…that's hurty.

Here is some information he pasted over to me about this key:

With Registry Editor, you can modify the following parameters for RPC. The RPC port key values discussed below are all located under the following key in the registry: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet

Ports (REG_MULTI_SZ)
Specifies a set of IP port ranges consisting of either all the ports available from the Internet or all the ports not available from the Internet. Each string represents a single port or an inclusive set of ports.

For example, a single port may be represented by 5984, and a set of ports may be represented by 5000-5100. If any entries are outside the range of 0 to 65535, or if any string cannot be interpreted, the RPC runtime treats the entire configuration as invalid.

PortsInternetAvailable (REG_SZ): Y or N (not case-sensitive)
If Y, the ports listed in the Ports value are all the Internet-available ports on that computer. If N, the ports listed in the Ports value are all those ports that are not Internet-available.

UseInternetPorts (REG_SZ): Y or N (not case-sensitive)
Specifies the system default policy.
If Y, the processes using the default will be assigned ports from the set of Internet-available ports, as defined previously.
If N, the processes using the default will be assigned ports from the set of intranet-only ports.
Example:

In this example, ports 5000 through 5100 inclusive have been arbitrarily selected to help illustrate how the new registry key can be configured. This is not a recommendation of a minimum number of ports needed for any particular system.

1. Add the Internet key under: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
2. Under the Internet key, add the values "Ports" (REG_MULTI_SZ), "PortsInternetAvailable" (REG_SZ), and "UseInternetPorts" (REG_SZ).

   For example, the new registry key appears as follows:
   Ports: REG_MULTI_SZ: 5000-5100
   PortsInternetAvailable: REG_SZ: Y
   UseInternetPorts: REG_SZ: Y

3. Restart the server. All applications that use RPC dynamic port allocation use ports 5000 through 5100, inclusive. In most environments, a minimum of 100 ports should be opened, because several system services rely on these RPC ports to communicate with each other.

The solution was very easy: simply delete (or correct) the malformed entry and reboot. Worked like a charm!
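If you want to check for the same condition yourself, a quick PowerShell test along these lines should do it (a sketch; it only detects the empty-key case we hit):

$RpcInternet = 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet'
if ((Test-Path $RpcInternet) -and ((Get-Item $RpcInternet).ValueCount -eq 0))
{
    # An Internet key that exists but holds no values is the broken state
    Write-Warning "$RpcInternet exists but is empty; delete it or populate it, then reboot"
}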

Managing Hotfixes Centrally

Historically I've not paid much attention to hotfixes or patches, but lately I've decided that I need to change that aspect of my management. I used to create a folder on the server that had the problem, copy the patch down, and then apply it. As I became responsible for more and more servers, I decided that I needed a slightly better way to handle that. Also, since I tend to do pretty much everything in PowerShell, I figured I needed to write some functions to do it for me. The result was QfeLibrary.ps1; it contains several functions that can be used to help you manage your hotfixes.

I thought about how I go about patching a system and then tried to condense that down into code. The first step is identifying when I have a problem; I usually use System Center Operations Manager for that. Once I find a problem I start researching on the internet for a solution, and when I run into an issue that needs a hotfix, I'll open the Microsoft Support page for it and figure out whether I need the patch.

A lot of patches seem to modify files, and usually the KB page will tell you what the previous version of the file was and what the new version should be. To be honest, this is where the idea came from: I was using Get-Item to retrieve the VersionInfo of a particular file and thought I should write a script for this, then decided a script would be too much. So of course the next thought in the progression was to write a pile of code with several functions…oh well.
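That version check is the heart of each test. The file here is a placeholder rather than one from a real KB, but the shape is always the same:

# Compare the on-disk file version against the fixed version listed in the KB article
(Get-Item 'C:\Windows\System32\ntdll.dll').VersionInfo.FileVersion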

The premise behind this is a central location to store the hotfixes. This can be a local folder or a file share; the preference would be a file share that is accessible from your servers. Then we need a function that will create an XML file to hold the test, a function to run that test to make sure the hotfix is applicable, and a function to list all the available hotfixes in the file share. Finally, a couple of functions to install and uninstall the patch, plus a few ancillary functions to clean up, set the file share as a global variable, and perhaps one to view the URL for a given hotfix.

The first function I wanted was something that would output an object that had the URL of the article, the actual KB article number, and a way to test for it. This was actually rather difficult for me: I had no problem creating an object with a property that was a scriptblock; the problem was that when I exported that object to an XML file, the scriptblock turned into a string. In searching for how to convert a string back into a scriptblock I found a lovely article from three years ago.
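The trick, for anyone who lands here with the same problem, boils down to one line ($Qfe.Test standing in for the string read back from the XML file):

# Export-Clixml flattens the scriptblock to a string; rebuild it on the way back in
$Test = [ScriptBlock]::Create($Qfe.Test)
& $Test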

Once I had that straightened out, everything else fell into place nicely: a function to run the test that was stored in the XML file, and a pair of functions to install and uninstall a patch. The next thing I needed was a function to get a list of hotfixes available for the local system.

The function to list available hotfixes is a little complex: it will list all the hotfixes that are available based on the OS of the target system. Optionally you can get a listing of all hotfixes that are available, and you can then download those hotfixes. No real magic here; when I create the XML for the hotfix I use the Caption property of the Win32_OperatingSystem class, and the list function uses that same value as its comparison.
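That comparison value is simply:

# e.g. 'Microsoft Windows Server 2008 R2 Enterprise'
(Get-WmiObject -Class Win32_OperatingSystem).Caption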

QFE Workflow

The first thing we need to do is define where we will store our hotfix files; this is done with the Set-QfeServer function. Once we've done that, we figure out what the test will be, then run the New-QfePatch function and provide the URL, KB article number, OS, processor architecture, the test, and the answer. We run this for each SKU that we are responsible for, and copy the hotfix file manually to our QfeServer location.

From the target computer, we run the Get-QfeList function to get a list of hotfixes that are available for our OS. We can run the Get-Qfe function with the -Online switch to view the support article, and the Test-QfePatch function on the target OS to see if the hotfix applies. If we pass the -Download switch to Get-QfeList, it will download all hotfixes for the target OS. Finally, we run Install-QfePatch on the target OS.

When we’re all done, we can review the logs stored locally if we need to, and finally Clear-QfeLocalStore to zip up the installed hotfixes and log files into a time-stamped zip file.

  1. Set-QfeServer -QfeServer \\server\share\hotfixes
  2. New-QfePatch -URL -KB -OS -Arch -Qfefilename -Test -Answer -QfeServer
  3. Get-QfeList -QfeServer
  4. Get-Qfe -QfeServer -QfeId -Online
  5. Test-QfePatch -QfeId -QfeServer
  6. Get-QfeList -QfeServer -Download
  7. Install-QfePatch -QfeFilename
  8. Clear-QfeLocalStore
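
A worked version of step 2, with every value made up purely to show the shape of the call (the KB number, file name, and versions are hypothetical):

New-QfePatch -URL 'http://support.microsoft.com/kb/1234567' `
    -KB 1234567 `
    -OS (Get-WmiObject -Class Win32_OperatingSystem).Caption `
    -Arch 'x64' `
    -QfeFilename 'Windows6.1-KB1234567-x64.msu' `
    -Test '(Get-Item C:\Windows\System32\example.dll).VersionInfo.FileVersion' `
    -Answer '6.1.7601.99999' `
    -QfeServer '\\server\share\hotfixes'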

This function can be downloaded from the Mod-Posh site, or from Technet.

Printing from a Scheduled Task as a different user

It does sound a bit odd, but I’m in the process of moving all the regular monitoring I do to scheduled tasks, and this particular one caused me headaches all afternoon.

I have a script that updates the DPM VolumeSizing spreadsheet that Microsoft put together for System Center Data Protection Manager. It's a great tool; if you've not looked at it and are running DPM, you should check it out!

The problem was that when I scheduled this to run as my account, it worked just fine. As soon as I configured it to run as a service account, the script would run, but nothing with Excel worked. I found several threads on Google that mention as much.

I finally found a very nice thread on TechNet; the answer is from a user named JensKalski, who recommends creating a Desktop folder under the systemprofile directory. I have read this before, though it escapes me now where I saw it, but as soon as I created this folder on my server, I got the printout!
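For reference, this is the folder in question (the System32 path; my assumption is that 32-bit Office on a 64-bit OS would want the SysWOW64 equivalent as well):

# Excel running non-interactively expects this folder to exist
New-Item -Path 'C:\Windows\System32\config\systemprofile\Desktop' -ItemType Directory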

YAY! Thanks Jens!

VMware Update Manager not responding

I received a lovely notice this morning as I was working through my servers and performing updates. I decided I would check my ESXi servers for updates using the VMware Update Manager plugin. This lovely plugin will go out and grab updates for your servers from VMware, and I think optionally from other sources you define, but not today.

(screenshot: VMware Update Manager error)

I googled around and found some promising threads on VMware's forums, but nothing seemed to do the trick for me. Then I found this KB article; while not quite exactly what I was experiencing, it was very close. Originally I didn't think the article applied to my situation, as my SQL instance is not using Windows Authentication and my service runs as LocalSystem.

But when I looked in the vci-integrity.xml file, I noticed there was a URL pointing at an IP address. Since IPs are dynamic for me, I changed this to the hostname of the server, and all was right in the world!

I'm not sure why an IP address was listed in there. I assume this is done at install, and most likely that IP address was the IP of my server at the time; it recently changed, so it no longer worked. Some might say that I should hard-set my server IP addresses; I say your installer shouldn't assume that an IP address will always be the same. After all, how hard is it to find out if the host IP is static or dynamic?

Not hard at all anymore…

Exporting Event logs in the normal Event Log format

I've decided that I'd like to be able to export my event logs in their native .evtx file format. This appears to be faster than converting them all to .csv files. Early on I ran into a few problems, the first of which was that I was unable to convert what was in my head into something that Google understood! Once I got over that, I found what I was looking for.

System.Diagnostics.Eventing

System.Diagnostics.Eventing.Reader

For the purposes of my function, what I'm looking for is found within the Reader namespace. I'd like my function to have a similar look and feel to the built-in cmdlets, like Get-WinEvent. So the first thing I decided to do was implement a -ListLog switch parameter.

This parameter will call the GetLogNames() method of the EventLogSession class. So the first thing you need to do is create a new session.

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession

Once we’ve done that we simply call the GetLogNames() method from our new object and a list of logs will appear

$EventSession.GetLogNames()

Application
HardwareEvents
Internet Explorer
Key Management Service
Media Center
OAlerts
Operations Manager
Security
System
Windows PowerShell

The next thing I need to be able to do is the actual exporting of the logs. There are actually two methods exposed by the EventLogSession class: the first is ExportLog() and the second is ExportLogAndMessages(). The documentation states that the difference between the two is that the latter exports the log and its messages. To be safe, I'll use the latter, ExportLogAndMessages(), which will grab that metadata.

This is where I ran into the first hiccup. The parameters break down as follows:

  • Path | LogName as String
  • PathType as PathType
  • Query as String
  • targetFilePath as String

Now, most of the examples I found online appeared to use PathType as an object. The problem is it really isn't; it's a string that contains either the word 'LogName' or 'FilePath'. Technically that really isn't even a problem; it seems to me to be more of a documentation issue. But it could also be poor understanding on my part. At any rate, there are several ways to deal with this, and I chose the easy one.

Since I'm going to assume that you want to export an actual event log and not a file, for obvious reasons, I'm only going to give you the option of LogName. This makes exporting your log look something like this.

$EventSession.ExportLogAndMessages($LogName,'LogName','*',$Destination)

Now I could have made it look much more complicated by changing 'LogName' to the full enumeration value

$EventSession.ExportLogAndMessages($LogName,[System.Diagnostics.Eventing.Reader.PathType]::LogName,'*',$Destination)

But that just seemed to me to be too much.

I'm ignoring the Query option for now and focusing on targetFilePath. In testing this works beautifully: you pass in the full path and filename of the file to be created, and it appears. But when I started testing against remote machines I ran into my second problem.

When I create my session against a remote computer

$ComputerName = 'ServerA'

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession($ComputerName)

I can get the proper list of logs, but when I ran the ExportLogAndMessages() method, I didn't see the exported logfile in my folder. Turns out you need to be aware of context: if you are connecting to a remote machine, everything gets executed on that remote machine. That means that when the following code is executed

$Destination = 'C:\LogFiles\Application.evtx'

$EventSession.ExportLogAndMessages($LogName,'LogName','*',$Destination)

That file actually exists on the remote filesystem (ServerA), not the local disk. At the moment I've not decided how I want to handle this, or if I even want to bother. You see, when I attempt to trick the method and provide a UNC path, I get the following

$EventSession.ExportLogAndMessages('Application','LogName','*','\\pc01\C$\LogFiles\app.evtx')

Exception calling "ExportLogAndMessages" with "4" argument(s): "Attempted to perform an unauthorized operation."
At line:1 char:29
+ $EventSession.ExportLogAndMessages <<<< ('Application','LogName','*','\\pc01\C$\LogFiles\app.evtx')
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

My next obstacle was credentials. Remote machines may require a different user/pass combination than your current login context. Fortunately, I can pass that information into the class; one of the constructors takes five parameters:

  • ComputerName as string
  • Domain as string
  • Username as string
  • Password as SecureString
  • LogonType as SessionAuthentication

Since I store my own admin credentials locally in a file I know I have access to most of that information right from the console. The first two examples will display my logon domain and username.

$Credential.GetNetworkCredential().Domain

$Credential.GetNetworkCredential().Username

The next one is a little scary, but if you think about it, it's not as bad as it seems. First off, running this command will display my unencrypted password on the console! The HORROR! It's really OK; the reason that works is that I set it in my context, so I have access to it. Get it? It's OK if you don't, it took me a while to figure it out as well: the password is encrypted in memory, so while I can view it in clear text, another user on the same system shouldn't be able to.

$Credential.GetNetworkCredential().Password

The only problem with the previous command is that the password comes out as a string, and the constructor needs a SecureString. Fortunately, the following returns exactly that.

$Credential.Password

Now, I'm by no means an expert on .NET. I'm not even sure I would say I'm knowledgeable, but I certainly know enough to be extremely dangerous. As I was looking at the page that described how to connect remotely, I noted that LogonType was worded in a similar fashion as PathType, so before I got carried away I decided to try each of the four LogonTypes:

  • Default
  • Negotiate
  • Kerberos
  • NTLM

In my testing against a remote machine where my current user context had no rights, my admin credentials worked with each of the various types. So as far as I'm concerned that seems to work, and I decided to stick with Default.
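Putting the pieces together, creating the remote session with alternate credentials ends up looking like this (a sketch; $Credential is assumed to come from Get-Credential or, in my case, the locally stored file):

$Domain = $Credential.GetNetworkCredential().Domain
$Username = $Credential.GetNetworkCredential().Username
# The password stays a SecureString, which is what the constructor wants
$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession($ComputerName, $Domain, $Username, $Credential.Password, [System.Diagnostics.Eventing.Reader.SessionAuthentication]::Default)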

So now I'm able to connect to a local or remote machine and export the logs to an existing folder on the hard drive. That leaves one final problem to deal with: handling a folder that doesn't exist yet. I leave it up to the user to pass in the folder and filename to write to, so if the folder doesn't exist I need to make it. I had thought about splitting the Destination variable into two, FilePath and FileName, but decided I didn't want to do that.

Since I'm already treading the deep waters of .NET, I figured that because my Destination looks like a legitimate path, it might behave like one. I started browsing the System.IO namespace; originally I was looking at File, then realized I was dealing with a directory, which made things much easier.

I know there is a Parent property when you grab a path using Get-ChildItem, so I figured there ought to be something similar in System.IO.Directory. Turns out it's more or less exactly the same thing.

I kind of have this phobia about tweaking data that is passed into my scripts, so while this looks ugly, I’m really quite pleased.

([System.IO.Directory]::GetParent($Destination)).FullName

What does this do? Well, assuming that Destination is C:\LogFiles\Application.evtx, that code returns C:\LogFiles, and if it happens to be C:\LogFiles\Path\To\Really\Deep\Folder, it returns everything above Folder. Which works out quite nicely: I'm assuming that the tail end will be a filename, so I ask .NET for the parent path of the filename and then create that path.

Locally, creating this was simple, but we run into issues again remotely. While New-Item has a Credential parameter, the underlying file system provider doesn't support it. So instead of getting crazy, I decided to use a ScriptBlock and the Invoke-Command cmdlet.

Since we are passing variables to a remote machine, by default ServerA won't know what Destination represents, so we use the ArgumentList parameter of Invoke-Command.

$ScriptBlock = {New-Item -Path $args[0] -ItemType Directory -Force}

Invoke-Command -ScriptBlock $ScriptBlock -ComputerName $ComputerName -Credential $Credential -ArgumentList (([System.IO.Directory]::GetParent($Destination)).FullName) |Out-Null

As you can see in my ScriptBlock, $args[0] represents the path we need to create. In order for that to make it over to the remote machine, you will see in the Invoke-Command line that I pass in my corrected Destination as the ArgumentList.

The result is a working Export-EventLogs function that will actually export the log in the native format. It was a lot of work to get this all together, but I think it will be very useful. I decided against any sort of clearing function since there is already a built-in for that, but I haven’t seen a built-in for exporting the logs.

This function can also be downloaded from my TechNet Gallery.