Fun things to know and share about a Dell PowerEdge NX3000 NAS Appliance

This was originally going to be a post about setting up a cluster on the NX3000, but I’ve had some fun messing around with these boxes, so I thought I would share that and save the cluster setup for a second post. Both Carson and Nick would likely attest to the fact that I’m a little picky about how my servers are set up (although I’m not nearly as Spartan as Nick can be). I have never left the factory setup on any server I’ve put out, and personally can’t imagine anyone doing that. So anyway, these arrived about a month ago, but unfortunately my time has been consumed by printer issues and I haven’t had a chance to work on them until now.

20101119-10192010002_2

For whatever reason Dell cuts the physical drive in half: a System partition (C:) and an empty Data partition (D:). This is one of the things I find personally annoying and usually change. The only problem I ran into was that, using either the OS DVD or the System Builder DVD, I was unable to change the default partitioning that Dell uses. I will on occasion use the System Builder as it allows me to pump in all the information that setup asks for later, but apparently if you are going to mess around with an NX3000 NAS Appliance, you shouldn’t use the System Builder DVD.
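For what it’s worth, once Windows is up the empty data partition can usually just be deleted and C: extended into the freed space with diskpart. A rough sketch, with example volume numbers you would verify against list volume first:

diskpart
list volume
select volume 2
delete volume
select volume 1
extend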

But before I get to the actual install, let’s look at some fun pictures of the System Builder DVD. In this first one you will see that I only get one OS to choose from. The thing I want you to notice, though, is the checklist off to the right: there are several steps in this process. But what happens when we click Continue on this screen?

20101119-WP_000024_2

Well, what should jump out right away is that I seem to have lost a few steps in my setup. In fact, we went from 7 steps to 4, and instead of being at the beginning of the process we’re almost done! What happened to everything in between? Nobody will give me a good answer, but one answer I got from a Dell tech was, “it’s easier.” Yeah, I don’t know what that means either, but it’s as good an answer as any!

20101119-WP_000025_2

So if you can’t or shouldn’t use the System Builder DVD, what should you do? According to a different Dell tech, with this appliance you can only use the OS DVD. When asked why they packed the System Builder in there, I was told it has useful tools on it…sure it does. Well, the fun part is that after I went through this, everything seemed happy and wonderful. The server rebooted, I saw the Windows 2008 starting up screen, and then I got the following message.

20101119-WP_000070_4

The System Builder only offered one OS to select: “Microsoft Windows Server 2008 x64 SP2”. My OS DVD read, “Microsoft Windows Storage Server 2008 Enterprise With Service Pack 2”. The error message reads, “Valid Microsoft Windows Server 2008 media was not found”.

20101119-WP_000068_4

Anyway, here are some entertaining screen shots from using the OS DVD.

20101119-WP_000020_2

Ok, so I took these with my phone so they are a bit shaky, but you get the idea. I really like the Display Log Console on the right. Nothing EVER showed up in there, EVER.

20101119-WP_000021_2

I know I’m being nit-picky, but really? My OS is Landing? Wouldn’t “loading” make more sense? In my last post I mentioned that Carson gave me a hard time for not looking at the default web site.

20101119-WP_000071_2

There was absolutely nothing there; if I had installed the Server Manager I would have seen something, but nothing cool by default.

“Single node cluster” oxymoron or reality?

You might be asking yourself: what? how? why? To be honest, I asked myself the very same questions. I don’t purport to have all the answers to these important questions, but I have found a few resources, and I wrote a little step-by-step to answer the how.

What is a “single node” cluster? Well, it’s technically not a cluster at all, but it provides the same set of services an actual cluster would. It’s a way to have different views of the same server; Microsoft refers to these as “virtual servers”. To simplify, a virtual server in this sense is just a hostname associated with a specific IP address on your cluster.

Why would I want a “single node” cluster? That is a really good question: why would you want a cluster that provides none of the wonderful things clusters provide, like failover, redundancy, fault tolerance, and high availability? Those terms all boil down to the same idea; the single most important thing a cluster provides is failover. When one node fails, its services roll over to the next available node. On a “single node” cluster, there isn’t another node to fail over to.

So really, what is the point? Well, the example I found on the TechNet site was very simple. You have a single server in your office, but for administrative reasons, you want each department to access its “own server”. So you create a resource group for each department that contains, at a minimum, an IP Address resource and a Network Name resource.
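If you like the command line, a “virtual server” like that can be stood up on a 2003 cluster with cluster.exe. A rough sketch, with made-up group and resource names and addresses you would obviously swap for your own:

cluster group "Accounting" /create
cluster res "Accounting IP" /create /group:"Accounting" /type:"IP Address"
cluster res "Accounting IP" /priv Address=192.168.1.50 SubnetMask=255.255.255.0 Network="Public"
cluster res "Accounting Name" /create /group:"Accounting" /type:"Network Name"
cluster res "Accounting Name" /priv Name=ACCOUNTING
cluster res "Accounting Name" /adddep:"Accounting IP"
cluster group "Accounting" /online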

Now you might be asking yourself, why did he decide to write an article about this? Well, it turns out that we are in the middle of testing an IPAM solution. We needed to be able to test how our existing cluster would behave, so we created a VM, installed Windows, and set up a cluster that we could use to duplicate our current infrastructure without having to set up anything crazy.

How do I set up a “single node” Windows Server 2003 cluster? (Based on this TechNet article)

20101005-Cluster2003Step1
20101005-Cluster2003Step2
20101005-Cluster2003Step3
20101005-Cluster2003Step4
20101005-Cluster2003Step5
20101005-Cluster2003Step6

 

 

Configuring a Windows Server 2008 “Single node” cluster.

20101005-Cluster2008Step1
20101005-Cluster2008Step2
20101005-Cluster2008Step3
20101005-Cluster2008Step4
20101005-Cluster2008Step5
20101005-Cluster2008Step6
20101005-Cluster2008Step8
20101005-ClusterSuccess
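As an aside, the wizard isn’t the only way to get here; cluster.exe on 2008 can create the cluster from a command prompt too. I stuck with the wizard, so take the syntax below as a starting point rather than gospel (the names and address are just examples):

cluster /cluster:NASCLUSTER /create /nodes:"NAS01" /ipaddress:192.168.1.60/255.255.255.0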

 

If you have a third-party DNS server, you may see this after your cluster is created.

20101005-Cluster2008BadDNSPacket

 

I wrote an article a while ago about how to resolve this issue. But there is also a good article available on the Technet Wiki.

Original Article

How to attach a custom task to an event on WS08

In an earlier post I talked about how to forward events from one machine to another. That works out quite well, but unless you’re sitting at the log and watching the events go by, you might miss something. You might agree that there are some events you are more interested in than others. For example, while it’s neat that Windows logs the server uptime in seconds as an event, I’d be much more interested in an event about a computer that is unable to establish a trust relationship with its domain controller.

For details on Event 5722

The nice thing about Windows Server is that you can set up a custom task to let you know when a particular event occurs. This doesn’t really replace any sort of quality monitoring software like System Center Operations Manager or Zenoss, but it’s most certainly a nice thing to have built-in. So how do you go about setting this up? In my example I’ll be creating a task that sends me an email based on an event forwarded from my domain controller. It will require a script, but I’ll include it for you.

The first thing you need is a script that writes the event to a file. The reason is that there is sadly no built-in way to pull the event details into an email task. So we’ll have two tasks that run: one that runs the script to create the file, and a second that emails that file to you as an attachment.

Option Explicit

Const ForWriting = 2
Dim strComputer
Dim strEventCode
Dim objWMIService
Dim objEvents
Dim objEvent
Dim objFSO
Dim objFile

strComputer = "."
strEventCode = CInt(Wscript.Arguments.Item(0))

' Connect to WMI on the local computer and pull in the System log
Set objWMIService = GetObject("winmgmts:{(Security)}!\\" & strComputer & "\root\cimv2")
Set objEvents = objWMIService.ExecQuery("Select * from Win32_NTLogEvent Where Logfile='System'")
Set objFSO = CreateObject("Scripting.FileSystemObject")

' Overwrite the event file if it already exists, otherwise create it
If objFSO.FileExists("C:\scripts\" & strEventCode & ".txt") Then
    Set objFile = objFSO.OpenTextFile("C:\scripts\" & strEventCode & ".txt", ForWriting)
Else
    Set objFile = objFSO.CreateTextFile("C:\scripts\" & strEventCode & ".txt")
End If

' Write out the first event that matches the requested event code, then quit
For Each objEvent in objEvents
    If objEvent.EventCode = strEventCode Then
        objFile.WriteLine objEvent.EventType
        objFile.WriteLine objEvent.EventCode
        objFile.WriteLine objEvent.Message
        objFile.Close
        Wscript.Quit
    End If
Next

The script above is very simple. It accepts a single command-line parameter, the event code. It connects to the WMI service of the local computer and pulls in the System log. Then it checks to see whether there is a file named for the event code in the scripts directory; if not it creates one, otherwise it overwrites the existing one. Inside the For Each loop, we find the first event whose code matches the one provided, write out the type, code, and message, and then exit the script altogether.

Now we need to find an event that we are interested in getting notified about and attach a task to it. The first thing you need to do is head over to Event Viewer and right-click on an event.

20100930-attachtask

Fortunately a wizard will guide you through the entire process. The first thing you need to do is provide a name for your task that makes sense for you.

20100930-taskwizard

Since we selected a specific event from the log, the log, source, and event ID are already filled in for us and can’t be edited.

20100930-loggedevent

The next thing we need to do is pick what action we want to trigger when our event shows up. There are three options: Start a Program, Send an Email, or Display a Message. The first time through we will choose Start a Program.

20100930-taskaction

Then you need to specify the options for your program. You can browse to the program you wish to run; I have selected the script above as my program. Since my script accepts an argument (the event code), I set that in the arguments field.

20100930-scriptaction

You can click through the remaining screens and your custom task is now set. The next step is to associate our second task with this event, the one that emails you the generated file as an attachment. Go through the same first series of steps we just did for the script, except this time choose Send an Email.

Fill in all the needed details for your email message and browse to the folder where your attachment is created.

20100930-emailtask

Again, click through the remaining screens to save the email task, and as soon as the event you are interested in appears you should have an email.
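If you would rather skip the wizard entirely, schtasks can create the same kind of event-triggered task from the command line. Assuming you saved the script above as C:\scripts\GetEvent.vbs (that name is just my example), the first task would look something like this:

schtasks /Create /TN "Grab Event 5722" /SC ONEVENT /EC System /MO "*[System[(EventID=5722)]]" /TR "cscript.exe //nologo C:\scripts\GetEvent.vbs 5722"

The /EC switch is the event channel and /MO is the event query, the same XPath you would see on the trigger the wizard builds for you.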

Forwarding Events from WS08 Core DC

There were some changes between Windows Server 2008 and Windows Server 2008 R2; the one I’m interested in is WinRM. The default HTTP port on Windows Server 2008 is TCP 80, but on Windows Server 2008 R2 the default HTTP port is TCP 5985. There are a couple of ways to get around this: either change the listener port on the Windows Server 2008 machine, or use a Collector Initiated subscription and change the port on the Advanced tab. In my example both computers are domain joined. Setting this up in a Workgroup environment is a little different, and I may write something up for that later.
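For reference, if you decide to go the other way and move the 2008 listener up to the R2 port instead, I believe this is the command (I went the subscription route below, so I haven’t leaned on it much):

winrm set winrm/config/listener?Address=*+Transport=HTTP @{Port="5985"}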

Configuring the Source (Windows Server 2008 Core)

First, check whether WinRM already has a listener configured:

winrm e winrm/config/listener

20100922-SourceWinRMDisabled

Here you can see that WinRM is not configured, so now we need to enable it:

winrm qc

20100922-SourceWinRMEnabled

The quickconfig (qc) option does the initial configuration of WinRM. It creates an HTTP listener on port 80, and enables firewall exceptions. In order to connect from the Collector and start getting events we also need to allow the remote administration service through the firewall.

netsh firewall set service type=remoteadmin mode=enable

20100922-SourceFirewallExceptionsOn

The configuration of the source server is done.

Configuring the Collector (Windows Server 2008 R2)

In order for this server to pull information from the Source, you will need to set up a subscription:

Start > Administrative Tools > Server Manager > Diagnostics > Event Viewer > Subscriptions

20100922-CollectorEnableWCESVC

You will notice that before you can set up a subscription you need to enable the Windows Event Collector service; click Yes. Now you can click Create Subscription in the Actions pane on the right. At the very least you will need a subscription name.
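(As an aside, if you ever need to do this on a box without the GUI prompt, running wecutil qc from an elevated command prompt should configure the Windows Event Collector service the same way.)

wecutil qc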

20100922-CollectorSubscriptionProperties

We are going to configure a Collector-initiated subscription, so browse the directory and find your Source server. If you click the Test button, you may receive an error message letting you know the Collector can’t talk to the Source. That’s ok; we’re going to fix that in a minute.

20100922-CollectorSubscriptionError

Now we need to configure the list of events we are interested in. These are the default logs and events available on any 2008 computer; alternatively, you can write an XML query and paste it into the XML tab.

20100922-CollectorSubscriptionEvents

The last thing we need to configure is the Protocol settings and Delivery Optimizations. If your Source server is not a Domain Controller then you can add the computer account of the Collector to the Local Administrators group on the Source. If your Source server is a Domain Controller, you may want to use a Service Account.

I set my Event Delivery Optimization to Minimize Latency, which ensures that events are delivered with minimal delay. If you are collecting events from the Security log, this may not be a setting you want to enable.

Finally, in the Protocol section you can change the HTTP port to 80 to match the listener on the 2008 Source.

20100922-CollectorSubscriptionAdvancedProperties

After a few minutes you should start to see events showing up under Forwarded Events.

20100922-CollectorForwardedEvents

Windows Server 2008 R2 and Print Logging

Yesterday I got a support ticket that stated the user was unable to connect to any printers on the network. I found this odd since we push out printers via GPO Preferences, and that user’s particular computer had the correct GPO linked in. So I started my troubleshooting process which usually involves grep’ing the logs looking for relevant entries.

We have two print servers: one is running Windows Server 2003 R2 and the other is running Windows Server 2008 R2. Both have about 40-odd printers installed, and that is their sole purpose in life. One of the things we realized a long time ago is that the event logging for Windows printing is really horrible, so we have downloaded and installed the PaperCut Print Logger software.

I noticed that the user had connected to the print server earlier that morning to print, but nothing since. So I decided to check the System log, which is where I would go on Windows Server 2003. I was surprised to see there were no entries related to printing at all. Then I noticed there was a view for Roles, and that Print and Document Services was an option to view.

When I looked through that log, I noticed that the only events in it were related to installing printers and their associated drivers. So, opening up the Applications and Services Logs folder, expanding the Microsoft folder, expanding the Windows folder, and scrolling down to PrintService, I found an Operational log that had nothing in it. When I opened that log I noticed that from the list of Actions I could Enable Log, and once I did that I started seeing entries show up.

It turns out the default setting for this log is off, even if you pick the File and Print Server role, which seems a bit counter-intuitive to me. So remember, when setting up your print server on Windows Server 2008, enable logging by performing the following steps (there is also a one-line command equivalent after the list):

  1. Login to the server
  2. From Server Manager, expand Diagnostics and Event Viewer
  3. Expand Applications and Services Logs
  4. Expand the Microsoft folder, expand the Windows folder
  5. Expand the PrintService folder
  6. Right-click on the Operational Log, and click Enable Log
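If you would rather flip it on from a command prompt (handy if you script your print server builds), the wevtutil equivalent should be:

wevtutil sl Microsoft-Windows-PrintService/Operational /e:true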

Working with computer objects

So I’m working with computer objects, if you can’t tell from my previous post. There are times when what you really want to know about a given computer is who’s responsible for it. With good user education, your OU admins or computer admins will pre-stage a computer and populate the ManagedBy property. This effectively lets anyone who can read that property know who is responsible for the object.

In an environment with thousands of computers, or even hundreds, the likelihood of this actually happening is very slim. I freely admit that when I create a computer object, I leave that field empty. But it’s ok: if you have given your users the right to join their own computers to the domain, this information is stored in the ACL for the object.

I got to write a new function that pulls the owner from the ACL. I found what I was looking for in the TechNet Forums. The only thing I changed was making the stand-alone script a function that I could pull into my code. The output is the username formatted NT-style:

DOMAIN\Username
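If you just want the gist of it, the heart of that function is ADSI and the security descriptor’s Owner property. A minimal sketch (the function name is mine, and the real script from the forums does a bit more):

Function GetComputerOwner(strComputerDN)
    Dim objComputer
    Dim objSD
    ' Bind to the computer object and read its security descriptor
    Set objComputer = GetObject("LDAP://" & strComputerDN)
    Set objSD = objComputer.Get("ntSecurityDescriptor")
    ' The Owner property comes back NT-style, e.g. DOMAIN\Username
    GetComputerOwner = objSD.Owner
End Function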

I wrote a couple of scripts today around the ManagedBy property. The first script walks through your Active Directory and reports the name of each computer and who the owner is. If a Domain Admin account created the object, the owner shows up as Domain Admins. If a non-Domain Admin created the object, that account is listed as the owner.

The best way to run this particular script is as follows:

cscript //nologo GetOwner.vbs > ComputerOwners.csv

The resultant output file can be opened in the spreadsheet program of your choice and will have two columns: the name of the computer, and the DOMAIN\Username of the owner. You will want to change the LDAP URI at the top to point at the OU of your choice within your domain, or for fun point it at the root. This script does not modify anything at all and can be run as a regular non-admin user.

That was the first part of what I needed: a way to get the user who owned the computer. For the next script, I needed a way to write that information into the ManagedBy property of the object. The only issue I had was that the username was reported NT4-style, and I needed to store it as a DN. So I got to write a second function that converts the NT4 username to a DN string.

I found the code on the Scripting Guys blog. The actual code from the blog returns a GUID, but they kindly added a table at the bottom that listed the other formats that could be returned from IADsNameTranslate. I modified the code to return ADS_NAME_TYPE_1779, and changed it from a stand-alone script to a function that returned the DN of the user.
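In case it saves you a trip to that blog, the converted function ends up looking roughly like this (the function name is mine):

Function NT4ToDN(strNT4Name)
    Const ADS_NAME_INITTYPE_GC = 3
    Const ADS_NAME_TYPE_NT4 = 3
    Const ADS_NAME_TYPE_1779 = 1
    Dim objTranslate
    ' NameTranslate converts between the different account name formats
    Set objTranslate = CreateObject("NameTranslate")
    objTranslate.Init ADS_NAME_INITTYPE_GC, ""
    objTranslate.Set ADS_NAME_TYPE_NT4, strNT4Name
    NT4ToDN = objTranslate.Get(ADS_NAME_TYPE_1779)
End Function

Feed it the DOMAIN\Username that comes out of the owner function and you get back the DN that ManagedBy expects.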

There is no output from this script; it just runs and sets the property as it goes. If you have a problem with that, one thing you can do is change the line that reads:

Call WriteData(strADSIProp, strUserDN, strComputerPath)

To

Wscript.Echo "Changing " & strADSIProp & " property of " & strComputerPath & " to " & struserDN

This will output a line telling you what the script would do. Personally, I would comment out the call to WriteData and add the Wscript.Echo as a new line below it. Once you are satisfied that it works the way you anticipate, delete the echo line and uncomment the call.

DPM Finally Complete

Not really a whole lot to say here, just really happy that my backup of user data is complete! It has taken about a month to get a successful backup and now that I have one, I’m not really sure what to do next!

DPMGreen

Ok, that’s not really true, I do know what I need to work on next, it’s just no fun. We have no end of problems with DPM and the tape-drive. So the next task is to get the tape backups going for long term archival.

RegistrySizeLimit…who knew?

This past Friday we pushed out 30 new lab machines. These machines were imaged the same way we image the rest of our labs. We noticed that several applications failed to install, and Carson began troubleshooting that. Turns out, due to the number of installed applications we ran out of space in the Windows Registry. Call us noobs if you like, but really…there is a max?

We found a few articles on the subject, and it looks like this setting has been around since Windows NT.

We set ours to 0xffffffff, which places it at approximately 4 GB. The way I understand it, if this value is over 80% of the amount of paged pool, then Windows caps it at that 80%. Left unset, I believe Windows will automatically adjust it for you, but I’m not really sure about that. I rolled this fix out as a GPO Preference to the lab computers only.
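For a one-off machine, or just to see where it lives, the value sits under HKLM\SYSTEM\CurrentControlSet\Control; something like this should do it, with a reboot afterward as I understand it:

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v RegistrySizeLimit /t REG_DWORD /d 0xffffffff /f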

The issue presented itself in a few ways: first, Windows Update was broken; additionally, multiple .msi installations failed with cryptic error codes.

DPM 2007, Windows 2008 R2 and Windows 2008 Core Domain Controllers

Still bringing services online after our recent hardware upgrade, today’s goal was to make DPM 2007 go. The original server was a PowerEdge 1950 with an MD1000 attached. It was populated with 15 750 GB drives and housed all of our backup data. As both the server and the drive array were falling off warranty, we took this opportunity to purchase a new PowerEdge server, but we opted for a Promise array with 16 2 TB drives.

The DPM server is running Windows 2008 R2. The OS installation went fine; there are several tutorials out there, and I followed the one from the Core Team. There is one thing you will need to change, though, in order to get reporting to work properly.

Error Message:

DPM could not connect to SQL Server Reporting Services server because of IIS connectivity issues.

On the computer on which the DPM database was created, restart the World Wide Web Publishing Service. On the Administrative Tools menu, select Services, Right-click World Wide Web Publishing Service, and then click Start.

ID: 3013.

This message is due to the new security features; I found this article extremely helpful for 2008 R2. Below are the links and commands I collected along the way:

http://technet.microsoft.com/en-us/magazine/dd673659.aspx
start /w ocsetup WindowsServerBackup

http://technet.microsoft.com/en-us/magazine/dd630943.aspx
Wmic product get name /value
Wmic product where name="Name" call uninstall

http://social.technet.microsoft.com/Forums/en/winservercore/thread/5a438757-d294-483d-8619-df9eb5700561
netsh firewall set opmode disable
netsh firewall set opmode enable

http://technet.microsoft.com/en-us/library/cc161275.aspx
netsh firewall set portopening TCP 3148 "DPM TCP 3148" ENABLE
netsh firewall set portopening TCP 3149 "DPM TCP 3149" ENABLE
netsh firewall set portopening TCP 135 "DCOM TCP 135" ENABLE
netsh firewall set portopening UDP 137 "NetBIOS UDP 137" ENABLE
netsh firewall set portopening UDP 138 "NetBIOS UDP 138" ENABLE
netsh firewall set portopening TCP 139 "NetBIOS TCP 139" ENABLE

netsh firewall reset
netsh firewall set service RemoteDesktop enable
netsh advfirewall firewall set rule group="remote administration" new enable=yes
netsh advfirewall firewall set rule group="Windows Firewall Remote Management" new enable=yes
netsh firewall set logging filelocation=%systemroot%\system32\LogFiles\Firewall\pfirewall.log maxfilesize=4096 droppedpackets=ENABLE connections=ENABLE

Production Script: UpdateADDescription

The UpdateADDescription script runs hourly on a computer connected to the domain. Its sole purpose is to loop through all the computers in a given OU and update their Description property. It serves as both an inventory and somewhat of a user-tracking system, since it gives us an idea of who is logged on to which computer.

Most of the functions and procedures in this script have already been documented, so to get the details on how everything interacts you might look to those articles first.

I wanted to be able to use Active Directory to hold information pertinent to how we deliver service to our users. I needed something that would let me know who was logged on, where they were logged on, and some inventory-ish data about the computer itself.

The beautiful part about this is that the output of the script is stored in comma-separated format in the Description property, so in the Active Directory Users and Computers console you can literally export the view as a comma-separated file and open it directly in any spreadsheet program.
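To give you an idea of the shape of it, here is a stripped-down sketch of the core loop. The OU path is made up, and the production script uses the functions documented in those earlier posts plus real error handling:

Option Explicit

Dim objOU, objComputer, objWMI, objCS, strUser, strModel, strDescription

' Point this at the OU you want to inventory (this path is just an example)
Set objOU = GetObject("LDAP://OU=Labs,DC=company,DC=com")
objOU.Filter = Array("Computer")

For Each objComputer In objOU
    On Error Resume Next
    ' Ask the computer itself who is logged on and what hardware it is
    Set objWMI = GetObject("winmgmts:\\" & objComputer.Get("cn") & "\root\cimv2")
    For Each objCS In objWMI.ExecQuery("Select UserName, Model From Win32_ComputerSystem")
        strUser = objCS.UserName
        strModel = objCS.Model
    Next
    ' Store everything comma-separated in the Description property
    strDescription = strUser & "," & strModel & "," & Now
    objComputer.Put "description", strDescription
    objComputer.SetInfo
    On Error GoTo 0
Next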

The main fields from the view come out comma-separated, and because the Description property itself is comma-separated, it parses out into additional columns.

Before letting this script loose in your environment, I highly suggest running it in a test environment first.