Fix Windows Server 2012 R2 DFSR Event ID 4614

Recently had a ticket come in where a newly created domain with two DCs was not replicating properly. Upon logging into the DCs I noted the following entry in the DFS Replication log.

The DFS Replication service initialized SYSVOL at local path S:\SYSVOL\domain and is waiting to perform initial replication. The replicated folder will remain in the initial synchronization state until it has replicated with its partner. If the server was in the process of being promoted to a domain controller, the domain controller will not advertise and function as a domain controller until this issue is resolved. This can occur if the specified partner is also in the initial synchronization state, or if sharing violations are encountered on this server or the synchronization partner. If this event occurred during the migration of SYSVOL from File Replication service (FRS) to DFS Replication, changes will not replicate out until this issue is resolved. This can cause the SYSVOL folder on this server to become out of sync with other domain controllers.

Additional Information:
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: C0037E37-EF20-4CDF-968A-932E669ED810
Replication Group Name: Domain System Volume
Replication Group ID: 35F23B97-543A-4310-A08E-5F28D6342C18
Member ID: 41E8F809-5114-48DC-8297-B7E866502101
Read-Only: 0

Working backwards through the log I found this entry

The DFS Replication service encountered an error communicating with partner AD-02 for replication group Domain System Volume.

Partner DNS address:

Optional data if available:
Partner WINS Address: AD-02
Partner IP Address:

The service will retry the connection periodically.

Additional Information:
Error: 1726 (The remote procedure call failed.)
Connection ID: 3E337B81-109F-4A97-880C-63E30F52E63F
Replication Group ID: 35F23B97-543A-4310-A08E-5F28D6342C18

I didn’t see that error on AD-02 but I did see several alerts like this one leading up to when the above alert fired on AD-01.

The DNS server is waiting for Active Directory Domain Services (AD DS) to signal that the initial synchronization of the directory has been completed. The DNS server service cannot start until the initial synchronization is complete because critical DNS data might not yet be replicated onto this domain controller. If events in the AD DS event log indicate that there is a problem with DNS name resolution, consider adding the IP address of another DNS server for this domain to the DNS server list in the Internet Protocol properties of this computer. This event will be logged every two minutes until AD DS has signaled that the initial synchronization has successfully completed.

As both DCs are also DNS servers, I configured the DNS binding to be the statically set IP address, which should once and for all rule out any sort of weird resolution issues. This had no effect either. There is not a lot else to go on from either server around the date when I think things went sideways. There is an odd 15-day gap in the DFSR log, but maybe that’s normal? I don’t see any indication of a problem in the Directory Services, System, or Application logs around that time.

I then installed the Active Directory Replication Status Tool; while it was interesting to run, it reported no issues. The next thing I ran was the health report from the DFS Management console, which told me more or less what I already knew: DFS was waiting on initial synchronization.

So the next thing to try was to figure out how to force that synchronization to happen. I found an article on Microsoft’s Support site covering exactly this. I ran through the Authoritative Synchronization steps, and afterwards the servers were happy. I was able to drop a test file in SYSVOL and see it replicate to the other server, then delete that file and watch it drop off the other server.

While I don’t really know what the root cause of this issue was, hopefully the next time it crops up I’ll be able to figure it out much more quickly. One note on the steps I followed below: I restarted the DFSR service after each change made in ADSI Edit.

Prior to running dfsrdiag, you may need to install that feature

Add-WindowsFeature -Name RSAT-DFS-Mgmt-Con

In order to force Active Directory replication throughout the domain, run this command

repadmin /syncall /APed

You want to force the non-authoritative synchronization of SYSVOL on a domain controller. In the File Replication Service (FRS), this was controlled through the D2 and D4 data values for the Burflags registry values, but these values do not exist for the Distributed File System Replication (DFSR) service. You cannot use the DFS Management snap-in (Dfsmgmt.msc) or the Dfsradmin.exe command-line tool to achieve this. Unlike custom DFSR replicated folders, SYSVOL is intentionally protected from any editing through its management interfaces to prevent accidents.

How to perform a non-authoritative synchronization of DFSR-replicated SYSVOL (like “D2” for FRS)

  1. In the ADSIEDIT.MSC tool modify the following distinguished name (DN) value and attribute on each of the domain controllers that you want to make non-authoritative:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
  2. Force Active Directory replication throughout the domain.
  3. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
  4. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.
  5. On the same DN from Step 1, set:
    1. msDFSR-Enabled=TRUE
  6. Force Active Directory replication throughout the domain.
  7. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
  8. You will see Event ID 4614 and 4604 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D2” of SYSVOL.
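Steps 3 and 7 above reference a command that didn’t survive the copy/paste; if memory of the Microsoft KB this procedure comes from serves, it’s a DFSR poll of Active Directory:

```powershell
# Make the DFSR service re-read its configuration from Active Directory.
# Per the source KB, this is the command referenced in steps 3 and 7;
# run it from an elevated prompt on each non-authoritative DC.
dfsrdiag pollad
```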

How to perform an authoritative synchronization of DFSR-replicated SYSVOL (like “D4” for FRS)

  1. Stop the DFSR service on all domain controllers
  2. In the ADSIEDIT.MSC tool, modify the following DN and two attributes on the domain controller you want to make authoritative (preferably the PDC Emulator, which is usually the most up to date for SYSVOL contents):
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
    3. msDFSR-options=1
  3. Modify the following DN and single attribute on all other domain controllers in that domain:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
  4. Force Active Directory replication throughout the domain and validate its success on all DCs.
  5. Start the DFSR service on the domain controller that was set as authoritative:
  6. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.
  7. On the same DN from Step 1, set:
    1. msDFSR-Enabled=TRUE
  8. Force Active Directory replication throughout the domain and validate its success on all DCs.
  9. Run the following command from an elevated command prompt on the same server that you set as authoritative:
  10. You will see Event ID 4602 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D4” of SYSVOL.
  11. Start the DFSR service on the other non-authoritative DCs. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated on each of them.
  12. Modify the following DN and single attribute on all other domain controllers in that domain:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=TRUE
  13. Run the following command from an elevated command prompt on all non-authoritative DCs (i.e. all but the formerly authoritative one):
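As with the non-authoritative procedure, the elided commands here (steps 9 and 13) are, per the Microsoft KB this is drawn from, the same AD poll:

```powershell
# Make DFSR pick up the msDFSR-Enabled/msDFSR-options changes from AD.
# Run elevated: on the authoritative DC at step 9, and on each of the
# remaining DCs at step 13.
dfsrdiag pollad
```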

Stuck in Maintenance Mode

It’s been ages; I do have plans to start writing again, and in fact this particular post came about because I was going to write up something I just came across at work. I went to log in to my site and was presented with a 500 error. Since I recently moved the entirety of both my sites to Azure Web Apps, I was mildly concerned, as I had no real access to any of the switches, buttons, or levers that make things go. I checked whether I could log in to my other site and received the same error message, which I guess is good. They are both running on the same App Service in Azure, and while the actual pages would load, I was unable to log in.

I thought about restarting the App Service itself, but chose to start small. I restarted the web application and was able to log in; I then repeated that process on the other site and logged in to it as well.

That’s when I noticed that there were some updates available, specifically the plugin for Windows Azure Storage, and WordPress itself. So I updated WP first; it took longer than HTTP was willing to hang around, so I got a nifty timeout. Reloaded the page and it came back, and I updated the rest of my plugins. Then I rolled over to update this site: same process, same plugins, same timeout. Only a refresh didn’t bring the site back after the plugins updated. I gave it a little more time and still got the maintenance page. Keep in mind there is no server that I have access to where I could do anything I would normally do. So I did a search and came across this article.

Apparently the update process drops a hidden file, .maintenance, in the root of the site. Neat. All I needed to do was delete that file, but how do you do that when you don’t have anything to log in to?

Turns out Web Apps have a console, found under Development Tools on the Web App. It drops you into a shell at the root of your site. Even though I know this is Windows, it felt like Linux, so

> uname -a
MSYS_NT-6.2-WOW RD0003FF42E552 2.5.0(0.295/5/3) 2016-03-31 18:26 i686 Msys

That’s neat. All I needed to do then was run ls -a to make sure .maintenance was there, then rm .maintenance to get rid of it. Refresh, and the site is back up and available. I’m going to paste the contents of that page here, in case it goes away.
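The console also comes in a PowerShell flavor; a sketch of the same cleanup from there (run from the site root) would be:

```powershell
# Confirm the leftover lock file from the failed update is present,
# then delete it. This is the PowerShell equivalent of `rm .maintenance`.
Get-ChildItem -Force -Filter '.maintenance'
Remove-Item -Path '.\.maintenance' -Force
```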

Has your WordPress site ever gotten stuck in maintenance mode? Many people seem to run into this problem. They update a WordPress theme or plugin or maybe even WordPress itself, and in the middle of the update, something goes wrong, and their site gets stuck in maintenance mode with nothing appearing but the message, “Briefly unavailable for scheduled maintenance. Check back in a minute.”
Not only are visitors locked out, but even the Admin is locked out of the backend.
If this happens to you, try one of the following solutions.
1. The solution that seems to work for most is to delete a file called “.maintenance” from the root of your site. This is a temporary file that gets created in the update process, and more than likely, this is your culprit. (Notice the *dot* at the beginning of the file name.)
Again, this file rests on your server in your main WordPress install section. You CANNOT access it through the Admin area of your site. You will need to access the server through your webhost’s system (like Cpanel) or via FTP.
If you do not have access to files on your server, then contact your host and let them know the problem (and the solution, of course).

2. Although removing the .maintenance file seems to work for most, it doesn’t work for all. If it doesn’t work for you, then try the following:
a. Delete the .maintenance file as outlined in option #1 above.
b. Delete the plugin or theme that you were attempting to update.
c. If your site is not back at this point, then in your wp-content folder you will find a folder called “upgrade.” Delete the files or folders you find there.
* Remember to clear the cache in your browser (or use a different browser) to make sure that you aren’t getting an old version of your site.
Once you have your site back, you may want to run your updates again to make sure they’ve taken.

Week In Review : 06-15-2014

It’s time for another exciting edition of WIR! This week was filled with updates! We rolled updates to our Domain Controllers, and one of them took nearly two hours to come back from a reboot! Normally not a big deal, but when you’re 30 miles away…a little stressful! I also rebuilt my work laptop this week; earlier this year I had done something stupid with an external drive and wound up with Windows installed on partition 2, on a disk with just one partition! Needless to say, rebooting my laptop didn’t happen all that often!

Speaking of Active Directory domains, we are moving ever closer to having just one domain on campus. The internal private Edwards domain went away this week! It’s always just a little nerve-wracking running through dcpromo to remove stuff, but it went well. Didn’t appear to leave any unsightly metadata floating around AD!

Also spent a fair amount of time talking with the guys at Edwards to go over how they image machines. They routinely call us to have a workstation DNS entry removed, and needless to say it’s a little annoying. They ought to be able to do this themselves, but since it’s not their DNS they don’t have rights. Not to mention the way they image machines is a little different.

This is how it goes: a user is up for a new computer. In an effort to minimize the inconvenience this can sometimes be, they image the new computer, load their software, and finally join it to the domain, tacking a “-1” onto the new workstation name. Normally not a big deal, but the last part is where it gets hairy.

The new workstation is delivered to the user, the old workstation is unjoined from the domain, the new computer is renamed to the old computer name…and boom. Sometimes this works (they say), but I can’t imagine how. So the first suggestion was: how about using service tags or MAC addresses to identify these machines uniquely? Then you would never get hit with this issue. Nope, they like usernames as computer names; it makes it easy to correlate user to workstation. Apparently it’s too difficult to track that down in SCCM? Not likely, but oh well.

So, what to do? Well, we could just have them call every time, but that’s a hassle, not to mention there’s no code involved! My solution: create an Orchestrator runbook that is provided a computer name. With that information it scrubs AD and removes the DNS entry as well. This runbook would run in the context of a service account that has rights to do this. They would simply log in to it with their admin account; we would use their group information to verify that the computer they want removed lives in their OU, and then remove it and the DNS entry. If it doesn’t live in their OU, it fails. Sounds elegant to me 😉
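The guts of that runbook might look something like this sketch. Everything here is hypothetical: the OU, zone name, and parameter names are placeholders, and the real thing would run under the privileged service account after the caller’s group membership had been verified.

```powershell
# Hypothetical sketch of the runbook's work; names are placeholders.
param(
    [Parameter(Mandatory)][string]$ComputerName,
    [string]$AllowedOU = 'OU=Edwards,DC=company,DC=com',
    [string]$ZoneName  = 'company.com'
)

Import-Module ActiveDirectory, DnsServer

$computer = Get-ADComputer -Identity $ComputerName

# Only act on machines that actually live in the caller's OU.
if ($computer.DistinguishedName -notlike "*$AllowedOU") {
    throw "$ComputerName is not in $AllowedOU; refusing to remove it."
}

# Remove the computer account, then its A record from DNS.
Remove-ADComputer -Identity $computer -Confirm:$false
Remove-DnsServerResourceRecord -ZoneName $ZoneName -RRType 'A' `
    -Name $ComputerName -Force
```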

A final solution, which will take much longer to implement, will be an appliance from BlueCat that sits between AD DNS and Proteus DNS. This appliance will use the Proteus web service and the MS RPC to translate information between AD and DNS. This will get us to a very similar place as my Runbook idea, but the one advantage is this will also get us to a place where we can pull our AD DNS out of the public facing DNS, effectively hiding thousands of servers and workstations.

Another fun one that happened: you can’t push the Ops client to a Domain Controller using SCCM Client Push. If someone tells you they can, they are lying to your face! I’m going to write up a post, but the short of it is that Client Push relies on a local administrator to work; how do you do that on a Domain Controller?

OH! I also polished off my SQL PowerShell, so I’ll write about that as well. It works pretty well; I created some new functions that let me more accurately find SQL instances. I still don’t have a good way to talk to the WID, but it’s kicking around in the back of my head.

I also broke Active Directory Certificate Services..see you next week!

Oh, I suppose we should talk about that? I’ve been slowly pulling servers out of the old Ops environment and bringing them over to the new one. It’s going pretty well: 230+ servers in the new and growing, and under 50 in the old. The Domain Controllers got pulled in this week, as did the Certificate servers.

So I’m working through the alerts, tuning Ops so I only hear what I need to, when I started getting alerts about Active Directory Certificate Services (ADCS). I was seeing errors about the CRL Distribution Point being offline.

As part of the troubleshooting I had already decided to stand up a vhost to hold CRLs, among other things. I reconfigured the CA to use it, and after restarting the service as prompted by Windows, Certificate Services failed to start. The net result was that the CRLs were out of date and just needed to be published and then copied to the web location.

The only bit left is to automate both the publishing and the copying of the files over to the web server. This seems well suited to a PowerShell solution; check back later for that!
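A rough sketch of what that automation might look like; the CertEnroll path is the CA default, and the web server share name is a placeholder:

```powershell
# Ask the CA to publish a fresh CRL, then copy it to the CDP web host.
certutil -CRL

# Give certsvc a moment to write the new files out.
Start-Sleep -Seconds 10

# Copy the published CRLs to the web server hosting the distribution point.
Copy-Item -Path "$env:windir\System32\CertSrv\CertEnroll\*.crl" `
          -Destination '\\webhost\CRLShare\' -Force
```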

See you next week!

DPM 2010 console crashes when pushing an agent Install

This is a new one for me, I’ve been running DPM for quite a while now and I’ve not seen this behavior. In a recent staff meeting it came up that the DPM server was having some RPC issues, so since I’m jonesing for stuff to do I said I wouldn’t mind taking a look at it.

When you open the DPM Management Console, click the Management tab and then Agents you are presented with all the servers that have the DPM agent installed. From here you are also able to install/uninstall/update the agent. Working through the Agent Install wizard, I selected the server to be backed up, entered my credentials and within a minute received a nasty error message.

        <TimeCreated>8/1/2012 3:12:18 PM</TimeCreated>
    <ExceptionMessage>Value does not fall within the expected range.</ExceptionMessage>
    System.ArgumentException: Value does not fall within the expected range.
    at System.Management.ManagementScope.Initialize()
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.Win32Cluster.GetNodeClusterState(String nodeName, ConnectionOptions options, UInt32& clusterState)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.CheckForCluster(ProductionServerCollection errorNodesAccessDenied, ProductionServerCollection errorNodesClusterDetectionFailed, ProductionServerCollection errorNodesDRDetectionFailed)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.FormListOfTargetServers(WindowsIdentity runAsIdentity)
    at Microsoft.Internal.EnterpriseStorage.Dls.UI.InstallAgentsWizard.CredentialsPage.OnLeavePage(LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardPage.RaiseLeavePage(LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.ValidateAndLeavePage(WizardPage page, LeavePageEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.TraversePagesToTarget(WizardPage startPage, WizardPage targetPage, NavigationDirection direction)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.InternalNavigateToPage(WizardPage targetPage, NavigateEventArgs e)
    at Microsoft.Internal.EnterpriseStorage.UI.WizardFramework.WizardForm.NextPage()
    at System.Windows.Forms.Control.OnClick(EventArgs e)
    at System.Windows.Forms.Button.WndProc(Message& m)
    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

I know, nasty right? At any rate we ran through several different things, making sure the server we wanted to get at had the proper firewall rules, could we access the admin hidden share, were the groups there and so on. We even fired up netmon and reproduced the problem just to make sure they were talking. Everything seemed ok, so we called up Microsoft and opened a ticket.

After talking with one of the DPM support techs, we found it was an issue with the remote server we were attempting to connect to. While everything appeared to be OK, there was a problem with the RPC settings in the registry. At some point all the entries in the Internet subkey of RPC were removed. Turns out it’s OK if the entire key is missing, or if the key is there and has the proper settings in it, but if it’s there and empty…that’s hurty.

Here is some information he pasted over to me about this key:

With Registry Editor, you can modify the following parameters for RPC. The RPC port key values discussed below are all located in the following key in the registry: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet

Ports REG_MULTI_SZ

Specifies a set of IP port ranges consisting of either all the ports available from the Internet or all the ports not available from the Internet. Each string represents a single port or an inclusive set of ports.

For example, a single port may be represented by 5984, and a set of ports may be represented by 5000-5100. If any entries are outside the range of 0 to 65535, or if any string cannot be interpreted, the RPC runtime treats the entire configuration as invalid.

PortsInternetAvailable REG_SZ Y or N (not case-sensitive)
If Y, the ports listed in the Ports key are all the Internet-available ports on that computer. If N, the ports listed in the Ports key are all those ports that are not Internet-available.

UseInternetPorts REG_SZ Y or N (not case-sensitive)
Specifies the system default policy.
If Y, the processes using the default will be assigned ports from the set of Internet-available ports, as defined previously.
If N, the processes using the default will be assigned ports from the set of intranet-only ports.

In this example, ports 5000 through 5100 inclusive have been arbitrarily selected to help illustrate how the new registry key can be configured. This is not a recommendation of a minimum number of ports needed for any particular system.
1.  Add the Internet key under: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
2.  Under the Internet key, add the values “Ports” (MULTI_SZ), “PortsInternetAvailable” (REG_SZ), and “UseInternetPorts” (REG_SZ).

For example, the new registry key appears as follows:
Ports: REG_MULTI_SZ: 5000-5100
PortsInternetAvailable: REG_SZ: Y
UseInternetPorts: REG_SZ: Y 
3.  Restart the server. All applications that use RPC dynamic port allocation use ports 5000 through 5100, inclusive. In most environments, a minimum of 100 ports should be opened, because several system services rely on these RPC ports to communicate with each other. 

The solution was very easy: simply delete (or correct) the malformed entry and reboot. Worked like a charm!
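Sketched in PowerShell, the check-and-delete looks roughly like this (run on the affected remote server, then reboot):

```powershell
# If the RPC Internet subkey exists but holds no values, it's in the
# broken state described above; deleting it restores default behavior.
$key = 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet'
if (Test-Path $key) {
    if (-not (Get-Item $key).Property) {
        Remove-Item -Path $key
        Write-Output "Removed empty $key; reboot to apply."
    }
}
```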

Printing from a Scheduled Task as a different user

It does sound a bit odd, but I’m in the process of moving all the regular monitoring I do to scheduled tasks, and this particular one caused me headaches all afternoon.

I have a script that I run that will update the DPM VolumeSizing spreadsheet that Microsoft put together for System Center Data Protection Manager. It’s a great tool, if you’ve not looked at it and are running DPM you should check it out!

The problem I had was I scheduled this to run as my account and it worked just fine. As soon as I configured this to run as a service account, the script would go, but nothing with Excel worked. I found several threads on Google that mention as much.

I finally found a very nice thread on TechNet; the answer, from a user named JensKalski, recommends creating a Desktop folder under systemprofile. I have read this before, though it escapes me where, but as soon as I created this folder on my server, I got the printout!
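The fix itself is a one-liner. The System32 path is the one the thread calls for; I’m assuming the SysWOW64 twin is also wanted on 64-bit servers running 32-bit Excel:

```powershell
# Create the Desktop folder(s) under systemprofile that Excel expects
# when running non-interactively as a service account.
New-Item -ItemType Directory -Force -Path `
    "$env:windir\System32\config\systemprofile\Desktop",
    "$env:windir\SysWOW64\config\systemprofile\Desktop" | Out-Null
```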

YAY! Thanks Jens!

VMware Update Manager not responding

I received a lovely notice this morning as I was working through my servers and performing updates. I decided I would check my ESXi servers for updates using the VMware Update Manager plugin. This lovely plugin will go out and grab updates for your servers from VMware and I think optionally for other sources you define, but not today.


I googled around and found some promising threads on VMware’s forums, but nothing seemed to do the trick. Then I found a KB article that, while not quite exactly what I was experiencing, was very close. Originally I didn’t think the article applied to my situation, as my SQL instance is not using Windows Authentication and my service runs as LocalSystem.
But when I looked in the vci-integrity.xml file, I noted a URL pointing at an IP address. Since IPs are dynamic for me, I changed this to the hostname of the server, and all was right in the world!
I’m not sure why an IP address was listed in there. I assume this is done at install, and most likely that IP was the IP of my server at the time; it recently changed, so it no longer worked. Some might say I should hard-set my server IP addresses; I say your installer shouldn’t assume an IP address will always be the same. After all, how hard is it to find out whether the host IP is static or dynamic?
Not hard at all anymore…
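For the record, the edit itself can be scripted; this is a hypothetical sketch (the file path is from my install, and the IP and hostname are placeholders), with a backup taken first:

```powershell
# Back up vci-integrity.xml, then swap the stale IP for the hostname.
$xml = 'C:\Program Files (x86)\VMware\Infrastructure\Update Manager\vci-integrity.xml'
Copy-Item -Path $xml -Destination "$xml.bak"
(Get-Content $xml) -replace '10\.0\.0\.50', 'vum-server.company.com' |
    Set-Content $xml
# Then restart the Update Manager service so it picks up the change.
```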

Exporting Event logs in the normal Event Log format

I’ve decided that I’d like to be able to export my event logs in their native .evtx file format. This appears to be faster than converting them all to .csv files. Early on I ran into a few problems, the first of which was that I was unable to convert what was in my head into something Google understood! Once I got over that, I found what I was looking for.



For the purposes of my function, what I’m looking for is found within the Reader namespace. I’d like my function to have a similar look and feel to the built-in cmdlets, like Get-WinEvent. So the first thing I decided to do was implement a –ListLog switch parameter.

This parameter will call the GetLogNames() method of the EventLogSession class. So the first thing you need to do is create a new session.

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession

Once we’ve done that, we simply call the GetLogNames() method on our new object and a list of logs will appear.
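The call itself is just a method on the session object (output trimmed below):

```powershell
# Enumerate every event log the session can see.
$EventSession.GetLogNames()
```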


Internet Explorer
Key Management Service
Media Center
Operations Manager
Windows PowerShell

The next thing I need is the actual exporting of the logs. There are two relevant methods exposed in the EventLogSession class: ExportLog() and ExportLogAndMessages(). The documentation states that the difference between the two is that the latter exports the log and its messages. To be safe, I’ll use ExportLogAndMessages(), which will grab that metadata.

This is where I ran into the first hiccup. The breakdown of the parameters is as follows:

  • Path | LogName as String
  • PathType as PathType
  • Query as String
  • targetFilePath as String

Now, most of the examples I found online appeared to use PathType as an object. The problem is it really isn’t; it’s a string that contains either the word ‘LogName’ or ‘FilePath’. Technically that isn’t even a problem; it seems more of a documentation issue. It could also be poor understanding on my part. At any rate, there are several ways to deal with this, and I chose the easy one.

Since I’m going to assume that you want to export an actual EventLog and not a file, for obvious reasons, then I’m only going to give you the option of LogName. This makes exporting your log look something like this.
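Based on the parameter list above, the invocation looks roughly like this (the destination path is an example):

```powershell
# Export the Application log, with messages, to a file on disk.
# 'LogName' is the PathType; '*' means no query filter.
$EventSession.ExportLogAndMessages('Application', 'LogName', '*', 'C:\LogFiles\Application.evtx')
```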


Now I could have made it look much more complicated by changing ‘LogName’ to something like this
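Presumably by spelling out the full enum, something like:

```powershell
# The fully qualified form of the same 'LogName' argument.
[System.Diagnostics.Eventing.Reader.PathType]::LogName
```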


But that just seemed to me to be too much.

I’m ignoring the Query option for now and focusing on targetFilePath. In testing, this works beautifully: you pass in the full path and filename of the file to be created, and it appears. When I started testing against remote machines I ran into my second problem.

When I create my session against a remote computer

$ComputerName = 'ServerA'

$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession($ComputerName)

I can get the proper list of logs, but when I ran the ExportLogAndMessages() method, I didn’t see the exported logfile in my folder. Turns out you need to be aware of context: if you are connecting to a remote machine, everything gets executed on that remote machine. That means that when the following code is executed

$Destination = 'C:\LogFiles\Application.evtx'


That file actually exists on the remote filesystem (ServerA) and not the local disk. At the moment I’ve not decided how I want to handle this, or if I even want to bother. You see, when I attempt to trick the method and provide a UNC path, I get the following:


Exception calling “ExportLogAndMessages” with “4” argument(s): “Attempted to perform an unauthorized operation.”
At line:1 char:29
+ $EventSession.ExportLogAndMessages <<<< ('Application','LogName','*','\\pc01\C$\LogFiles\app.evtx')
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

My next obstacle was credentials. Remote machines may require a different user/pass combination than your current login context. Fortunately I can pass that information into the class; one of the constructors has 5 parameters:

  • ComputerName as string
  • Domain as string
  • Username as string
  • Password as SecureString
  • LogonType as SessionAuthentication
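Wired together, that constructor looks something like this sketch, assuming $Domain, $Username, and $Password (a SecureString) are already populated:

```powershell
# Build a session against a remote machine with explicit credentials.
$EventSession = New-Object System.Diagnostics.Eventing.Reader.EventLogSession(
    $ComputerName, $Domain, $Username, $Password,
    [System.Diagnostics.Eventing.Reader.SessionAuthentication]::Default)
```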

Since I store my own admin credentials locally in a file, I know I have access to most of that information right from the console. The first two examples will display my logon domain and username.



The next one is a little scary, but if you think about it, it’s not as bad as it seems. Running this command will display my unencrypted password on the console! The HORROR! It’s really OK; the reason that works is because I set it in my context, so I have access to it. Get it? It’s OK if you don’t; it took me a while to figure that out as well. It’s encrypted in memory, so while I can view it in clear text, another user on the same system shouldn’t be able to.


The only problem with the previous is that the outputted password is a string; the constructor needs a SecureString. Fortunately the following is just that.
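Assuming $Credential is the PSCredential loaded from my local file, the pieces discussed above look roughly like this:

```powershell
# Pull the constructor's inputs out of a stored PSCredential.
$Credential.GetNetworkCredential().Domain     # logon domain
$Credential.GetNetworkCredential().UserName   # username
$Credential.GetNetworkCredential().Password   # plaintext - the scary one
$Credential.Password                          # the SecureString the constructor wants
```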


Now, I’m by no means an expert on .Net; I’m not even sure I would say I’m knowledgeable, but I certainly know enough to be extremely dangerous. As I was looking at the page on connecting remotely, I noted that LogonType was worded in a similar fashion to PathType, so before I got carried away I decided to try each of the 4 LogonTypes.

  • Default
  • Negotiate
  • Kerberos
  • NTLM

In my testing against a remote machine where my current user context had no rights, my admin credentials worked with each of the various types. As far as I’m concerned that works, so I decided to stick with Default.

So now I’m able to connect to a local or remote machine and export the logs to an existing folder on the hard drive. That leaves one final problem: handling a folder that doesn’t exist yet. I leave it up to the user to pass in the folder and filename to write to, so if the folder doesn’t exist I need to make it. I had thought about splitting the Destination variable into two, FilePath and FileName, but decided against it.

Since I’m treading the deep waters of .Net I decided that since my Destination looks like a legitimate path, it may behave like one. I started browsing the System.IO namespace and originally was looking at File, and then realized I was dealing with a directory, which made things much easier.

I know that there is a parent property when you grab a path using Get-ChildItem so I figured there ought to be something similar in System.IO.Directory. Turns out it’s more or less exactly the same thing.

I kind of have this phobia about tweaking data that is passed into my scripts, so while this looks ugly, I’m really quite pleased.
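The ugly-but-pleasing bit is a single .Net call, the same expression I hand to Invoke-Command further down:

```powershell
# Strip the filename off Destination and keep just the directory portion
([System.IO.Directory]::GetParent($Destination)).FullName
```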


What does this do? Well, assuming that Destination is C:\LogFiles, that code returns C:\LogFiles, but if it happens to be C:\LogFiles\Path\To\Really\Deep\Folder it returns everything above Folder. Which works out quite nicely. I’m assuming that the tail end will be a filename, so I ask .Net for the parent path of the filename and then create that path.

Locally, creating this was simple, but we run into issues again remotely. While New-Item has a -Credential parameter, the underlying FileSystem provider doesn’t support it. So instead of getting crazy I decided to use a ScriptBlock and the Invoke-Command cmdlet.

Since we are passing variables to a remote machine, by default ServerA won’t know what Destination represents, so we use the -ArgumentList parameter of Invoke-Command.

$ScriptBlock = {New-Item -Path $args[0] -ItemType Directory -Force}

Invoke-Command -ScriptBlock $ScriptBlock -ComputerName $ComputerName -Credential $Credential -ArgumentList (([System.IO.Directory]::GetParent($Destination)).FullName) | Out-Null

As you can see in my ScriptBlock, $args[0] represents the path we need to create. In order for that to make it over to the remote machine, the Invoke-Command line passes in my corrected Destination via -ArgumentList.

The result is a working Export-EventLogs function that will actually export the log in the native format. It was a lot of work to get this all together, but I think it will be very useful. I decided against any sort of clearing function since there is already a built-in for that, but I haven’t seen a built-in for exporting the logs.

This function can also be downloaded from my TechNet Gallery

Get recent events from servers

I’ve been working with Microsoft on an issue that I am having with my DPM server. We have been doing some fairly intense logging, and today I enabled several performance counters in an attempt to ascertain if something external is triggering this issue.

Along those lines I thought it would be cool to get a list of log entries from two hours before the event occurs. The event I’m tracking is DPM 3101, Volume Missing. We have seen that during a regular backup something happens and then DPM stops with the message that the disk I’m backing up to is no longer connected.

I’ve started a thread and have participated in several other threads on the forums about this issue.

At any rate, I decided that I would write a script that would grab all the events from my DPM server and the two file servers that I’m backing up. The hope is that maybe something interesting will be logged.

Why the two hours? Well, it’s silly, but I’ve noticed that two hours seems to be significant in the timeline of how these things are happening.
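The heart of the script looks something like this; the server and log names are placeholders for my environment:

```powershell
# Find the most recent DPM 3101 (Volume Missing) event, then pull every
# event logged in the two hours leading up to it from each server
$Servers = 'DPM-01', 'FS-01', 'FS-02'    # placeholder names
$Trigger = Get-WinEvent -ComputerName 'DPM-01' -FilterHashtable @{
               LogName = 'DPM Alerts'; Id = 3101 } -MaxEvents 1

foreach ($Server in $Servers) {
    Get-WinEvent -ComputerName $Server -FilterHashtable @{
        LogName   = 'System'
        StartTime = $Trigger.TimeCreated.AddHours(-2)
        EndTime   = $Trigger.TimeCreated
    } -ErrorAction SilentlyContinue
}
```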

The script is also available on the TechNet Gallery

HA/FDM fails to restart a virtual machine with the error: Failed to open file /vmfs/volumes/UUID/.dvsData/ID/100 Status (bad0003)= Not found

This came across my newsfeed last night and this morning, and before I lose the links I thought I’d post them up here.

VMware KB article

Description of the Problem

Perl script to detect and resolve

PowerShell script to detect and resolve

Updated New-PrintJob script

The information I’m going to cover here was previously covered on TechNet. I’m posting this because this morning I came across an error in my PrintLogger script. To be fair, it wasn’t really an error in the script; there is something else going on. I have created a thread, but I don’t know if I’ll get much in the way of response, as the only hit on Google for the exact error message is a German site.

The gist of my problem is that when a job is submitted, I use Get-WinEvent to pull in all the events where the Event ID is 307. This is the job printed event and has all the details for the job that I’m interested in. On a busy server this can be a fairly large list, and while at the time of the error there were only about 2100 entries in the log, it was causing it to fail and not log anything.

The quick fix was to tack -ErrorAction SilentlyContinue onto the Get-WinEvent cmdlet. This allowed the code to continue past the error. Another fix would have been to limit the number of entries returned, but that still wouldn’t be terribly accurate. Then I remembered the article I listed up at the top, and that I had been messing around with it.
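The fix amounts to one extra parameter on the call; the log name here is the standard PrintService operational channel:

```powershell
# Pull every 'job printed' event (Id 307) and skip past any read errors
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PrintService/Operational'
    Id      = 307
} -ErrorAction SilentlyContinue
```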

The idea here is, when Event ID 307 occurs, to pass the script the record ID of the event that originally triggered the task. The original article talks about various ways of displaying this information; since I’m working in PowerShell, I was more interested in the second method.

The code to add is below, and you can add more entries based on the detailed view of a given event. I’ve not tried any others, as all I need is the EventRecordID.
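The additions are ValueQueries entries inside the trigger, in the shape the TechNet article documents:

```xml
<ValueQueries>
  <Value name="EventRecordID">Event/System/EventRecordID</Value>
  <Value name="EventChannel">Event/System/Channel</Value>
</ValueQueries>
```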


I followed the steps below, with the exception of not using the command line to create and delete a task. I did this originally but later skipped that part, as an import was much simpler.

  1. Create an event-triggered task
  2. Right-click the task and choose to export it
  3. Edit the XML file, add the code above between the EventTrigger tags, and save
  4. Delete the original task
  5. Import the XML file and modify the properties for the action

For the start a program action, I will just refer you back to the article; all you need to remember is to add two additional parameters to your PowerShell script, $EventRecordID and $EventChannel.

$EventRecordID is the record number of the event that triggered this task

$EventChannel is the log where the event can be found
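Inside the script, those two parameters let me fetch exactly the event that fired the task instead of querying the whole log. A sketch:

```powershell
Param($EventRecordID, $EventChannel)

# Retrieve only the triggering event, by its record number
$Event = Get-WinEvent -LogName $EventChannel `
             -FilterXPath "*[System[(EventRecordID=$EventRecordID)]]"
```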

There was very little adjustment that needed to be done to the original script. I’ll test it for a day, but in limited testing the updated script produced identical results to the original.

This script is also available on the TechNet Gallery.