Exporting Event Logs in their native format

I’ve decided that I’d like to be able to export my event logs in their native .evtx file format. This appears to be faster than converting them all to .csv files. Early on I ran into a few problems, the first of which was that I was unable to convert what was in my head into something that Google understood! Once I got over that I found what I was looking for.



For the purposes of my function, what I’m looking for is found within the Reader namespace. I’d like my function to have a similar look and feel to the built-in cmdlets, like Get-WinEvent. So the first thing I decided I would do is implement a -ListLog switch parameter.

This parameter will call the GetLogNames() method of the EventLogSession class. So the first thing you need to do is create a new session.

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession

Once we’ve done that, we simply call the GetLogNames() method from our new object and a list of logs will appear.
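The call itself is just the method on that object:

```powershell
$EventSession.GetLogNames()
```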


Internet Explorer
Key Management Service
Media Center
Operations Manager
Windows PowerShell

The next thing I need to be able to do is the actual exporting of the logs. There are actually two methods exposed in the EventLogSession class. The first is ExportLog() and the second is ExportLogAndMessages(). The documentation states that the difference between the two is that the latter exports the log and its messages. To be safe, I’ll use the latter, ExportLogAndMessages(), which will grab that metadata.

This is where I ran into the first hiccup. The breakdown of each of those parameters is as follows:

  • Path | LogName as String
  • PathType as PathType
  • Query as String
  • targetFilePath as String

Now, most of the examples I found online appeared to use PathType as an object. The problem is it really isn’t; it’s a string that contains either the word ‘LogName’ or ‘FilePath’. Technically that isn’t even a problem, it seems to me to be more of a documentation issue. But it could also be poor understanding on my part. At any rate, there are several ways to deal with this and I chose the easy one.

Since I’m going to assume that you want to export an actual EventLog and not a file, for obvious reasons, then I’m only going to give you the option of LogName. This makes exporting your log look something like this.
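A sketch of that call; the destination path here is just an example:

```powershell
# PathType is passed as the string 'LogName'; '*' means no query filter
$EventSession.ExportLogAndMessages('Application','LogName','*','C:\LogFiles\Application.evtx')
```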


Now I could have made it look much more complicated by changing ‘LogName’ to something like this.
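That is, using the fully qualified enumeration value instead of the bare string:

```powershell
[System.Diagnostics.Eventing.Reader.PathType]::LogName
```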


But that just seemed to me to be too much.

I’m ignoring the Query option for now and focusing on targetFilePath. In testing this works beautifully: you pass in the full path and filename of the file to be created, and it appears. When I started testing this against remote machines I ran into my second problem.

When I create my session against a remote computer

$ComputerName = 'ServerA'

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession($ComputerName)

I can get the proper list of logs, but when I ran the ExportLogAndMessages() method, I didn’t see the exported logfile in my folder. Turns out you need to be aware of the context: if you are connecting to a remote machine, everything gets executed on that remote machine. That means that when the following code is executed

$Destination = 'C:\LogFiles\Application.evtx'


that file actually exists on the remote filesystem (ServerA) and not the local disk. At the moment I’ve not decided how I want to handle this, or if I even want to bother. You see, when I attempt to trick the method and provide a UNC path I get the following:


Exception calling "ExportLogAndMessages" with "4" argument(s): "Attempted to perform an unauthorized operation."
At line:1 char:29
+ $EventSession.ExportLogAndMessages <<<< ('Application','LogName','*','\\pc01\C$\LogFiles\app.evtx')
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

My next obstacle was credentials. Remote machines may require a different user/pass combination than your current login context. Fortunately I can pass that information into the class. One of the constructors has 5 parameters:

  • ComputerName as string
  • Domain as string
  • Username as string
  • Password as SecureString
  • LogonType as SessionAuthentication
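Putting those together, a sketch of creating an authenticated session; $ComputerName, $Domain, $UserName, and $Password are assumed to be set already, with $Password as a SecureString:

```powershell
$LogonType = [System.Diagnostics.Eventing.Reader.SessionAuthentication]::Default
$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession(
    $ComputerName, $Domain, $UserName, $Password, $LogonType)
```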

Since I store my own admin credentials locally in a file, I know I have access to most of that information right from the console. The first two examples will display my logon domain and username.
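At their simplest, those first two are just the environment variables (assuming the stored credential matches my login context):

```powershell
$env:USERDOMAIN
$env:USERNAME
```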



The next one is a little scary, but if you think about it, it’s not as bad as it seems. First off, running this command will display my unencrypted password on the console! The HORROR! It’s really OK; the reason that works is because I set it in my context, so I have access to it. Get it? It’s OK if you don’t, it took me a while to figure that out as well. It’s encrypted in memory, so while I can view it in clear text, another user on the same system shouldn’t be able to.
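That command is presumably along these lines, assuming the stored credential has been loaded into a PSCredential object called $Credential:

```powershell
# Displays the decrypted password on the console; this only works in the
# user context that encrypted it
$Credential.GetNetworkCredential().Password
```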


The only problem with the previous command is that the outputted password is a string. The constructor needs this as a SecureString. Fortunately the following command gives us just that.
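Assuming the same $Credential object, its Password property is already a SecureString:

```powershell
$Credential.Password    # a System.Security.SecureString
```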


Now, I’m by no means an expert on .Net. I’m not even sure I would say I’m knowledgeable, but I certainly know enough to be extremely dangerous. As I was looking at the page that listed how to connect remotely, I noted that LogonType was worded in a similar fashion to PathType, so before I got carried away I decided to try each of the 4 LogonTypes.

  • Default
  • Negotiate
  • Kerberos
  • NTLM

In my testing against a remote machine that my current user context had no rights on, my admin credentials worked for each of the various types. As far as I’m concerned that seems to work, so I decided to stick with Default.

So now I’m able to connect to a local or remote machine and export the logs to an existing folder on the hard drive. That leaves one final problem to deal with: handling a folder that doesn’t exist yet. I leave it up to the user to pass in the folder and filename to write to, so if it doesn’t exist I need to make it. I had thought about splitting the Destination variable into two, FilePath and FileName, but decided I didn’t want to do that.

Since I’m treading the deep waters of .Net I decided that since my Destination looks like a legitimate path, it may behave like one. I started browsing the System.IO namespace and originally was looking at File, and then realized I was dealing with a directory, which made things much easier.

I know that there is a parent property when you grab a path using Get-ChildItem so I figured there ought to be something similar in System.IO.Directory. Turns out it’s more or less exactly the same thing.

I kind of have this phobia about tweaking data that is passed into my scripts, so while this looks ugly, I’m really quite pleased.
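Here is the bit in question, the same expression that shows up later in the Invoke-Command line:

```powershell
([System.IO.Directory]::GetParent($Destination)).FullName
```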


What does this do? Well, assuming that Destination is C:\LogFiles\Application.evtx, that code returns C:\LogFiles, but if it happens to be C:\LogFiles\Path\To\Really\Deep\Folder it returns everything above \Folder. Which works out quite nicely. I’m assuming that the tail end will be a filename, so I ask .Net for the parent path of the filename and then create that path.

Creating this locally was simple, but we run into issues again remotely. While New-Item has a Credential parameter, the underlying file system provider doesn’t support it. So instead of getting crazy I decided to use a ScriptBlock and the Invoke-Command cmdlet.

Since we are passing variables to a remote machine, by default ServerA won’t know what Destination represents, so we use the ArgumentList parameter of Invoke-Command.

$ScriptBlock = {New-Item -Path $args[0] -ItemType Directory -Force}

Invoke-Command -ScriptBlock $ScriptBlock -ComputerName $ComputerName -Credential $Credential -ArgumentList (([System.IO.Directory]::GetParent($Destination)).FullName) |Out-Null

As you can see in my ScriptBlock, $args[0] represents the path we need to create. In order for that to make it over to the remote machine, you will see in the Invoke-Command line that I pass in my corrected Destination via the ArgumentList parameter.

The result is a working Export-EventLogs function that will actually export the log in the native format. It was a lot of work to get this all together, but I think it will be very useful. I decided against any sort of clearing function since there is already a built-in for that, but I haven’t seen a built-in for exporting the logs.

This function can also be downloaded from my TechNet Gallery

Get recent events from servers

I’ve been working with Microsoft on an issue that I am having with my DPM server. We have been doing some fairly intense logging, and today I enabled several performance counters in an attempt to ascertain if something external is triggering this issue.

Along those lines I thought it would be cool to get a list of log entries from two hours before the event occurs. The event I’m tracking is DPM 3101, Volume Missing. We have seen that during a regular backup something happens, and then DPM stops with the message that the disk I’m backing up to is no longer connected.

I’ve started a thread and have participated in several other threads on the forums about this issue.

At any rate, I decided that I would write a script that would grab all the events from my DPM server and the two file servers that I’m backing up. The hope is that maybe something interesting will be logged.

Why the two hours? Well, it’s silly, but I’ve noticed that two hours seems to be significant in the timeline of how these things are happening.
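A sketch of the idea, not the finished script; the log names and server names here are placeholders:

```powershell
# Find the most recent DPM 3101 (Volume Missing) event, then pull everything
# logged in the two hours leading up to it from each server
$Trigger = Get-WinEvent -FilterHashtable @{LogName='Application'; Id=3101} -MaxEvents 1
foreach ($Server in 'DPMServer','FileServer1','FileServer2')
{
    Get-WinEvent -ComputerName $Server -FilterHashtable @{
        LogName   = 'System','Application'
        StartTime = $Trigger.TimeCreated.AddHours(-2)
        EndTime   = $Trigger.TimeCreated
    }
}
```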

The script is also available on the TechNet Gallery

RedHat 6 Enterprise + VMware VSphere 4.0

Odd thing happened today while I was setting up a server for someone. The RHEL 6 install went just fine: it found the network card, asked if I wanted to configure it via DHCP, and all that. When the server rebooted, there was no eth0. When I ran ifconfig eth0 up, the adapter showed up but then wouldn’t get an IP from DHCP. A quick look at the messages showed the following:


No broadcast interfaces found?

Unhandled state?

Now I admit that I’m not much of a Linux admin. I have set up and done some cool stuff, but it’s just not my forte. So I resorted to Google when Nick and Carson didn’t have the answer for me!

I found this thread. In it the user stated that if he ran dhclient eth0 the adapter would go, so I did that and sure enough I got an IP address. When I started poking around I noted that in /etc/sysconfig/network-scripts/ifcfg-eth0, ONBOOT was set to no, so I changed it to yes. Then to verify that it was working properly, I stopped the network service, shut down the computer, and moved it to a different VLAN.
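For reference, the relevant lines of ifcfg-eth0 after the change looked something like this (the exact contents vary by install):

```
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
```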

When the server came back up, it had an IP address. I’ve never had RHEL do that before, and I’ve done several installs, so I guess it’s a fluke? But if not, at least I’ve documented how I got it going if it happens to me again!

HA/FDM fails to restart a virtual machine with the error: Failed to open file /vmfs/volumes/UUID/.dvsData/ID/100 Status (bad0003)= Not found

This came across my newsfeed last night and this morning, and before I lose the links I thought I’d post them up here.

VMware KB article

Description of the Problem

Perl script to detect and resolve

PowerShell script to detect and resolve

Updated New-PrintJob script

The information I’m going to cover here was previously covered on TechNet. I’m posting this because this morning I came across an error in my PrintLogger script. To be fair, it wasn’t really an error in the script; there is something else going on. I have created a thread, but I don’t know if I’ll get much in the way of response, as the only hit on Google for the exact error message is a German site.

The gist of my problem is that when a job is submitted, I use Get-WinEvent to pull in all the events where the Event ID is 307. This is the job-printed event and has all the details for the job that I’m interested in. On a busy server this can be a fairly large list, and while at the time of the error there were only about 2100 entries in the log, it was causing it to fail and not log anything.

The quick fix was to tack on -ErrorAction SilentlyContinue to the Get-WinEvent cmdlet. This allowed the code to continue through the error. Another fix would have been to limit the number of entries returned, but that still wouldn’t be terribly accurate. Then I remembered the article I listed up at the top, and that I had been messing around with it.
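The quick fix in context, using the standard print service operational log:

```powershell
# -ErrorAction SilentlyContinue lets the query keep going past the entry
# that was blowing up
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PrintService/Operational'
    Id      = 307
} -ErrorAction SilentlyContinue
```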

The idea here is, when Event ID 307 occurs, to pass the script the Record ID of the event that originally triggered the task. The original article talks about various ways of displaying this information; since I’m working in PowerShell, I was more interested in the second.

The code to add is below, and you can add more entries based on the detailed view of a given event. I’ve not tried any others as all I need is the EventRecordID.
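The snippet is of this shape; I’ve shown a second value for the channel as well, since the script also takes an $EventChannel parameter:

```xml
<ValueQueries>
  <Value name="EventRecordID">Event/System/EventRecordID</Value>
  <Value name="EventChannel">Event/System/Channel</Value>
</ValueQueries>
```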


I followed the steps below, except that I didn’t use the command line to create and delete the task. I did that originally but later skipped it, as an import was much simpler.

  1. Create a task based on the event
  2. Right-click the task and choose to export it
  3. Edit the XML file, add the code above between the EventTrigger tags, and save
  4. Delete the original task
  5. Import the XML file and modify the properties for the action

For the Start a program action I will just refer you back to the article; all you need to remember is that you will need to add two additional parameters to your PowerShell script, $EventRecordID and $EventChannel.

$EventRecordID is the record number of the event that triggered this task

$EventChannel is the log where the event can be found
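A sketch of how those two parameters get used inside the script, assuming the task action passes $(EventRecordID) and $(EventChannel) as arguments:

```powershell
Param(
    [int]$EventRecordID,
    [string]$EventChannel
)
# Pull just the event that fired the task, rather than the whole log
$Event = Get-WinEvent -LogName $EventChannel -FilterXPath "*[System[(EventRecordID=$EventRecordID)]]"
```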

There was very little adjustment that needed to be done to the original script. I’ll test it for a day, but in limited testing the updated script produced identical results to the original.

This script is also available on the TechNet Gallery.

DPM Sizing Script

Yesterday I told you how I had decided to automate a portion of my DPM routine. As usual this got the fires burning and a second script was born. I would have told you about it yesterday, but I wanted to keep up the appearance of doing actual work 😉

So today I give you the Get-DPMSizingValues.ps1 script. This is basically the portion of the DPM Sizing tool that I use regularly, the part that deals with file servers. I must say I’m rather proud of it as it worked out better than I thought it would. It uses some of the same basic stuff as the previous script, which was nice for me.

My Get-PSDrive statement is a little different. I noticed when I ran this against my Windows 7 machine that I had a lot of cruft I didn’t care about, so you’ll note the Where-Object bit. That filters out any results that have no used space.

Get-PSDrive -PSProvider FileSystem |Where-Object {$_.Used -gt 0} |Select-Object -Property Name, @{Label='Used';Expression={$_.Used /1gb}}

The nitty gritty part of it uses the same formula found in the spreadsheet. Now, there are some values that are hard-coded as these are direct from Microsoft and I don’t really know what they mean as they have not been terribly forthcoming about it, or my fu is just not working for me today.

if (($ReplicaOverheadFactor/100) -gt 1)
{
    $ReplicaVolume = $VolumeIdentifier.Used * ($ReplicaOverheadFactor/100)
}
else
{
    $ReplicaVolume = $VolumeIdentifier.Used * 1.5
}

if ($VolumeIdentifier.Used -gt 0)
{
    $ShadowCopyVolume = ($VolumeIdentifier.Used * $RetentionRange * ($DataChange/100)) + (1600/1024)
}

So I just found a bug while writing this and fixed it; turns out I forgot to convert the ReplicaOverheadFactor into a fraction in that first test. Oh well, it’s working now, which is good. At any rate, that is the heart of the script; it gets looped through for every drive that has used space. I had thought about not doing the second test, since my scriptblock shouldn’t return any volumes that have zero used space, but what the heck, it doesn’t hurt anything.

The resulting output is pretty nifty. I would imagine you could potentially pipe this into a DPM cmdlet, but I haven’t verified that. If someone needs it I’ll look into doing that, but for now it’s a very nice little reporting tool that will give you calculated values for Replica volumes and ShadowCopy volumes.

Name : C
UsedSpace : 44.3877143859863
Retention : 7
Replica : 53.2652572631836
ShadowCopy : 32.6339000701904
DataChange : 10
ReplicaOverhead : 120

There is also a version on the Technet Gallery.

Weekly DPM Monitoring

Part of my responsibility is handling storage. This includes allocating, deallocating, backing up, and restoring. Now we’ve been using DPM for quite some time and are currently running DPM 2010. Since this past summer I have come to peace with the fact that my users don’t know what the Delete key is, so I have put some things in place to make it easy for me to monitor overall storage usage for the School.

Since storage is always increasing, three weeks ago I decided that I would start to regularly monitor the used space on the file servers and update DPM accordingly. For that I used the DPM sizing tool, it’s a wonderful set of spreadsheets and scripts and if you haven’t played with them, you should!

What I love most about this tool is that you can just type in the used space of a given volume and it will calculate, based on various settings, the new size of the Replica Volume and Recovery Point Volume. So, for the past three weeks I’ve been manually opening up the spreadsheet, firing up RDP, connecting to my server and running Get-PSDrive from inside PowerShell.

For whatever reason, today I decided that enough was enough and automated this for myself. After all, I get regular updates from my file server when it runs out of space so I can add more; why can’t I have something similar for DPM? That’s how the Update-DPMSpreadSheet.ps1 script was born.

The idea is pretty simple, for each file server get a list of drives and the amount of used space in GB. So I created a scriptblock that gives me the bits of information I require.

Get-PSDrive -PSProvider FileSystem |Select-Object -Property Name, @{Label='Used';Expression={$_.Used /1gb}}

I use Invoke-Command and pass it in a session object and the above scriptblock and capture the results. When I’m done I close out of my session with Remove-PSSession that way I don’t consume too many resources.
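A sketch of that sequence; $FileServer and $Credential are assumptions standing in for my actual server name and stored admin credential:

```powershell
# Run the drive scriptblock remotely, capture the results, then clean up
$Session = New-PSSession -ComputerName $FileServer -Credential $Credential
$Drives = Invoke-Command -Session $Session -ScriptBlock {
    Get-PSDrive -PSProvider FileSystem |Where-Object {$_.Used -gt 0} |
        Select-Object -Property Name, @{Label='Used';Expression={$_.Used /1gb}}
}
Remove-PSSession -Session $Session
```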

There is a maximum limit on the number of concurrent sessions an account can have open. The default is 5, and it can be modified as needed. Please see the following article for details on how to do this.

Once I have all that data I create a new instance of Excel, open the DPM Sizing Tool spreadsheet, and set my worksheet to the DPM File Volume sheet. I use the Volume Identification column to match up against the list of drives that are returned from my servers. As of v3.3 of this tool that column is column D. Once I find the current drive in the spreadsheet I hop over one column and update the value of the Used space in GB column (Column E as of v3.3).
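A hedged sketch of that update loop; the workbook path is an assumption, $Drives holds the captured drive list, and the column numbers are the v3.3 ones described above:

```powershell
$Excel = New-Object -ComObject Excel.Application
$Workbook = $Excel.Workbooks.Open('C:\Path\To\DPMvolumeSizing.xlsx')
$Worksheet = $Workbook.Worksheets.Item('DPM File Volume')

$Row = 2
while ($Worksheet.Cells.Item($Row, 4).Text)    # column D, Volume Identification
{
    $Drive = $Drives | Where-Object {$_.Name -eq $Worksheet.Cells.Item($Row, 4).Text}
    if ($Drive) { $Worksheet.Cells.Item($Row, 5) = $Drive.Used }    # column E, Used space in GB
    $Row++
}
$Workbook.Save()
$Excel.Quit()
```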

If there are any errors along the way, I log them to the Application log and close out of everything.

I had thought about creating a scheduled job to have this run every Monday, but seeing as how my computer might be off or something I took the low-tech route. I updated my $PROFILE with the following chunk of code.

if ((Get-Date).DayOfWeek -eq 'Monday')
{
    .\Update-DPMSpreadSheet.ps1
    Invoke-Item 'C:\Users\jspatton\SyncStuff\DPMvolumeSizing v3.3\DPMvolumeSizing.xlsx'
}

Hopefully it’s pretty straightforward, if today is Monday, run the Update-DPMSpreadSheet.ps1 script, and then open it up in Excel.

I have also uploaded a version of this script to the Technet Gallery.

Windows EventLog Management–Part 2

How to get the log to let you know when something happened

Event Triggers

  • Specify a custom action when a particular event occurs
    • Start a program
    • Send an email
    • Display a message
  • Use scripting to give yourself flexibility
  • Be careful about email

Triggers are one of those really awesome things that you wish had been around in Windows from the beginning. The idea is that when a particular event occurs, you want to perform some action. You can start a program or script, send an email, or display a message; those first two are perhaps the ones you’ll use most.

For myself I find the Start Program option the best of the bunch, being a sysadmin I find myself routinely writing scripts to perform one or more things. If I’m interested in a particular event I can create a script that will give me additional information surrounding that event.

I have a few of these in place right now. On my file server I have a trigger on Event ID 2013, the low disk space message. The default message is rather cryptic, simply stating that a given disk is getting close to full. Fortunately it does give me a vital piece of information: the drive letter. So I have a script that pulls that entry from the log, grabs the drive letter, and queries WMI for the free space of the disk. The script stores that as an XML file that the Task emails to me. So you can use a script to flesh out a rather vague entry.
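The WMI query at the heart of that script is roughly this; $DriveLetter stands in for the value pulled from the 2013 entry:

```powershell
# Look up current free space for the drive named in the event
Get-WmiObject -Class Win32_LogicalDisk -Filter "DeviceID='$DriveLetter'" |
    Select-Object -Property DeviceID, FreeSpace
```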

On the opposite side of that coin, there are some events that you are interested in that happen so frequently that sending you an email each time they occur would be overwhelming. Going back to my example of the Print Server logs, I manage two print servers that I have divided between lab use and staff/faculty use. I have written my own print logging script that generates a daily CSV of printer usage. With two servers, about 50 printers, and over 3,000 users who can print to them, you can imagine what my inbox would look like if I had that emailed to me on each print.

Creating an Event Trigger

  • Find the event you want to be notified about
  • Create a script that gives you more info
  • Attach a task to the Event
  • Choose an Action
  • Configure the Action
  • Set the context for the Task

Now that you are familiar with your logs and have determined what specific log entry you want to know about, it’s time to do something about it. The example I will be using is from my DHCP server; I’d like to know when a computer asks for an IP and is denied because the MAC address is unknown to me.

I have written a script that gives me the MAC, Hostname, Message, and Time at which the client asked. Since a given client may potentially ask every 5 minutes until it gets a lease, I don’t want an email. In fact, since a given client can ask multiple times, I just want a file with the MAC address as part of it so I can, at a glance, get an idea of how many devices are trying to connect.

Create a script


There are actually two events that I’m interested in, which means that I’ll need my script to accept the Event ID as a parameter. Also, neither of these events are Error or Warning events; they’re merely informational, letting me know a computer was unable to get an address.

Create a script


I’m pretty good at writing scripts to get the information I need, but if you’re not comfortable scripting, by all means you could run a command-line utility. There are quite a few available in the Sysinternals suite, not to mention some very handy built-in tools on Windows Server 2008. This script accepts the EventID and outputs an XML file named for the MAC that triggered the event.

Create the trigger


Give your task a name and a description.

Choose an action


Pick whether you need to start a program, send an email, or display a message. The wizard allows you to set only one Action, but you should be aware that you can have as many as you want, so pick one to start with and then mix and match later!

Configure your action


If you’re using a script you need to specify the script interpreter to run. For this example I’m running a PowerShell script, which is why I typed in powershell.exe. But it could just as easily have been cscript, or Python, or the utility of your choice. If you’re running a script, then the argument is the script itself along with any parameters you need to pass it. I keep all my scripts in the same place, so I define the Start in folder to be that location.

Set the context


You will notice that I have set this task to run whether or not someone is logged in. I have not stored a password with this account, so it will run as the system. That’s something to keep in mind; if you’re uncomfortable doing this, you may want to create a service account to run it as.

That’s it, after you click Ok, the trigger is done. All you need to do now is sit back and watch as those files get created.

Now that we have our triggers, let’s see how we can get a notification when something happens.

Part 1

Part 2

Part 3

Windows EventLog Management–Part 1

Using Event Triggers and Event Forwarding to get what you want from the Event Subsystem

Event logs are horrible, and depending on which log you’re looking at they could be even more horrible!


Seriously though, I shouldn’t say they are horrible; there is just so much there that sometimes things get lost in the chatter. Prior to Windows 2008 there were only 3 logs that we had to worry about: Application, System, and the dreaded Security log. With the release of Windows Server 2008, countless other logs have been added, Event IDs have been changed, and the underlying services that report events potentially have their very own log to write to, that is of course if it’s been enabled!

What do you want to know?

  • Low disk space
  • Invalid logon attempts
  • Network outage
  • Service failures
  • Time synchronization issues

The answer to this question depends on so many things, there are literally no wrong answers when it comes to event monitoring. The key is to start looking! You’re never going to know what it is you want to focus your attention on, if you never open up the console.


The nice thing about the Event Viewer is that we can apply filtering, so instead of seeing a year’s worth of entries we can narrow it down to this month, or this week, or just today. While we would all love to see the friendly blue icon that lets us know the server is happy, the fun only begins when we start looking at the Warning and Error entries. You might be surprised, but your log could dwindle from 36,981 entries to a paltry 224!

Granted, the resulting log looks way more scary because it’s filled with yellow and red icons, but this view is way more interesting in terms of troubleshooting and monitoring.

Be familiar with the log

  • Are Error and Warning entries all I need to worry about?
  • Is it ok that the Operational log I’m looking at is empty?
  • Do I need to be concerned about each Error or Warning entry?

This goes back to what I said earlier about opening up the console. While it’s OK to start out with a filter for Error and Warning entries, not all logs report problems as an Error or Warning. When you drill down into the Applications and Services logs, oftentimes they are filled with Information entries, and such an entry may let you know something either did or didn’t happen. So you need to be familiar with your log and know which events are things you want to be aware of.

For example, the DHCP Filtering log reports MAC denies as Information entries; if you were filtering for Error and Warning entries you would never see that, assuming you care.

As I mentioned earlier, not all of those new logs are enabled by default. If you have a print server and want to know who is printing, when you open up the Print Server Operational log, it’s empty. Does that mean that nobody is printing? Perhaps, but since the log is off by default, you may want to enable it before you make your decision. Once you become familiar with whatever log you’re looking at, you’ll be able to determine if those red error entries are really something to worry about.

For example, once you have enabled that Print Server Operational log, you may see a recurring Error event, Event ID 812. In our environment our user accounts reside in an external domain, and that entry is indicating more or less a false-positive. The error is the spool file was unable to be deleted, access is denied. The reality is that the spool file did in fact get deleted, so this particular error I don’t need to worry about.

But when I first encountered it, I was concerned about it. I searched for that Event ID on the TechNet site, asked questions in the forums, and searched Google. Only when I satisfied myself that there was nothing I could do to keep this error from occurring, and that the error really wasn’t an error did I decide to ignore it.

Now that we’re comfortable with our logs, let’s look at some fun things to do with them.

Part 1

Part 2

Part 3