Database backed up to VIRTUAL_DEVICE GUID

Working with a customer who wanted to make sure their database backups were consistent before going live, we found the following entry in the log.


Database backed up. Database: ProductionDB, creation date(time): 2016/10/26(15:46:13), pages dumped: 70152, first LSN: 50966:4624:2, last LSN: 50966:13328:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{9B4A021C-7A36-4358-B95B-3E93999DC1BD}8'}). This is an informational message only. No user action is required.

There were several things that we knew. First, SQL Server was running managed backups to Azure, and a quick check of the storage account revealed the regular managed backups we expected to see.

Get-CloudBlob.ps1
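
I won’t reproduce Get-CloudBlob.ps1 here, but the same check can be sketched with the Azure storage cmdlets; the account name, key, and container below are placeholders for your own values:

# List the blobs in the container that managed backup writes to,
# newest first. Replace the account/key/container placeholders.
$key = '<storage account key>'
$ctx = New-AzureStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey $key
Get-AzureStorageBlob -Container 'backupcontainer' -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object Name, LastModified, Length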

We didn’t see any backups for the database from when that log entry was created. We also knew that the server was being backed up by Azure Recovery Services, and that backup happened daily at 2am. This correlated with the timestamp of the log entry, but we were unable to find where this backup went. We assumed that this had something to do with VSS and SQL, so with this information we started searching for folks experiencing similar issues and potential resolutions.

We found several articles that I’ll list below, but the general consensus was that backups in Windows will trigger VSS. The providers will then trigger the VSS writers to quiesce the SQL files. Nothing is being backed up per se, but SQL registers this as a backup event. There were several threads where the reported solution was to just disable the SQL VSS service, but I don’t think that’s wise.

To our thinking, if we disable the VSS writer for SQL, then when Azure attempts to perform a backup, it will get at best an inconsistent backup, and at worst a corrupt one, since the files were in use at the time.

Knowing that the server itself was getting backed up, we launched File Recovery (Preview), which is very slick. This launches from the portal; you select the date you want, and it compiles a script that you download and run on the VM in Azure. This script connects over iSCSI to the VHD and mounts it for 12 hours, letting you access the files on the server at that point in time. This would allow you to access the SQL database files in their normal directories, albeit with different drive letters. But they are not actual backups; it’s a snapshot of the data/files at that time.

While very cool, we needed a more reliable way to validate that our backups were valid. Our concern was that the VSS-triggered backups would break the restore chain, but it turns out our regular managed backups work as expected: we could restore the last full backup, and the transaction log backups got us back to where we needed to be.
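
If you want to confirm for yourself which history entries came from a virtual device rather than a real backup file, msdb records the device type of every backup. A rough sketch, assuming the SQL PowerShell module is loaded and with the server and database names as placeholders:

# device_type 7 = virtual device (VSS/VDI snapshot), 2 = disk, 9 = Azure URL.
Invoke-Sqlcmd -ServerInstance 'SQL-01' -Query @"
SELECT bs.database_name, bs.backup_start_date, bs.type, bs.is_snapshot,
       bmf.device_type, bmf.physical_device_name
FROM   msdb.dbo.backupset bs
JOIN   msdb.dbo.backupmediafamily bmf
       ON bs.media_set_id = bmf.media_set_id
WHERE  bs.database_name = 'ProductionDB'
ORDER BY bs.backup_start_date DESC;
"@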

So, the thing to remember here is that if you see an entry like the one above, keep calm. As long as you are doing your regular backups as you should, everything will be OK.

And Happy National Backup Day!

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-backup-recovery

https://www.sqlservercentral.com/Forums/Topic666998-357-1.aspx

http://dba.stackexchange.com/questions/91812/where-is-the-sqls-database-backup-virtual-device-string-stored-in-the-system-tab

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/c96bc256-99e5-453e-aa78-7910fe925f65/ntbackup-causes-sql-server-backups?forum=sqldatabaseengine

https://social.msdn.microsoft.com/Forums/en-US/e1ec1bc8-ee2b-4397-9b98-314d63e3e9a5/trying-to-find-out-whats-triggering-backups-and-where-they-are-going-to?forum=sqldatabaseengine

https://support.microsoft.com/en-us/help/956893/support-policy-for-microsoft-sql-server-products-that-are-running-in-a-hardware-virtualization-environment

Create VSTS Service Principal

I was working with one of our CSP customers who needed to connect their Visual Studio Team Services account to their CSP Azure subscription. If you have a regular Pay-As-You-Go subscription, then you have access to the old portal (manage.windowsazure.com), but if you’re a CSP that doesn’t work. So after talking to Brian Moore at Microsoft, I created a series of steps that I thought I’d get down for the next time I need to do this.

Step 1

You will need to log on to your Visual Studio Team Services account. As you can see, I have logged into mine, and I have a couple of projects.

Step 2

You will need to select a project that you will deploy/integrate with Azure; I’ve selected my sample project.

Step 3

This step is where you configure the project to connect to Azure by creating a service endpoint.

Step 4

I’ve given my endpoint an incredibly creative name and associated it with a specific subscription.

Step 5

Here the endpoint is complete, and you have the option to change its configuration, manage the endpoint’s roles within Azure, manage the service principal itself, and finally disconnect the service principal. The disconnect will in fact delete the service principal from Azure, so in production this service principal should only ever be used with Visual Studio.

As a side note, the manage service principal link kicks you over to the old portal, so for CSP customers this may in fact fail. See images below.
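
If the old portal is a dead end for you, the service principal can also be created by hand with the AzureRM cmdlets and then plugged into VSTS as a manual service endpoint. A rough sketch; every name and the password here are placeholders you would change:

# Create an AAD application, a service principal for it, and grant the
# principal Contributor on the subscription. Replace all names/secrets.
Login-AzureRmAccount

$app = New-AzureRmADApplication -DisplayName 'VSTS-Endpoint' `
    -HomePage 'https://vsts-endpoint.local' `
    -IdentifierUris 'https://vsts-endpoint.local' `
    -Password 'P@ssw0rd!ChangeMe'

New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# The role assignment can fail if AAD hasn't finished propagating; wait a bit.
Start-Sleep -Seconds 30
New-AzureRmRoleAssignment -RoleDefinitionName 'Contributor' -ServicePrincipalName $app.ApplicationId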

Update Service Configuration

You have a couple of options here: change the connection name, and change the subscription.

Manage Endpoint Roles

This will take you to the Azure portal and let you adjust and generally fiddle with the roles associated with this Service Principal.

Manage Service Principal

By default this appears to connect you over to the old portal, but it gives you the ability to manipulate the properties of the Service Principal.

Here is where you can find the same information in the new portal. It’s difficult to see, but go to Azure Active Directory > App Registrations and then choose the Service Principal named VisualStudioSPN.

Disconnect

Finally, to remove the Endpoint and Service Principal, simply choose disconnect, and this will go through and clean everything up.


Fix Windows Server 2012 R2 DFSR Event ID 4614

Recently I had a ticket come in where a newly created domain with two DCs was not replicating properly. Upon logging into the DCs, I noted the following entry in the DFS Replication log:


The DFS Replication service initialized SYSVOL at local path S:\SYSVOL\domain and is waiting to perform initial replication. The replicated folder will remain in the initial synchronization state until it has replicated with its partner . If the server was in the process of being promoted to a domain controller, the domain controller will not advertize and function as a domain controller until this issue is resolved. This can occur if the specified partner is also in the initial synchronization state, or if sharing violations are encountered on this server or the synchronization partner. If this event occurred during the migration of SYSVOL from File Replication service (FRS) to DFS Replication, changes will not replicate out until this issue is resolved. This can cause the SYSVOL folder on this server to become out of sync with other domain controllers.

Additional Information:
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: C0037E37-EF20-4CDF-968A-932E669ED810
Replication Group Name: Domain System Volume
Replication Group ID: 35F23B97-543A-4310-A08E-5F28D6342C18
Member ID: 41E8F809-5114-48DC-8297-B7E866502101
Read-Only: 0

Working backwards through the log I found this entry


The DFS Replication service encountered an error communicating with partner AD-02 for replication group Domain System Volume.

Partner DNS address: AD-02.contoso.com

Optional data if available:
Partner WINS Address: AD-02
Partner IP Address: 192.168.1.4

The service will retry the connection periodically.

Additional Information:
Error: 1726 (The remote procedure call failed.)
Connection ID: 3E337B81-109F-4A97-880C-63E30F52E63F
Replication Group ID: 35F23B97-543A-4310-A08E-5F28D6342C18

I didn’t see that error on AD-02 but I did see several alerts like this one leading up to when the above alert fired on AD-01.


The DNS server is waiting for Active Directory Domain Services (AD DS) to signal that the initial synchronization of the directory has been completed. The DNS server service cannot start until the initial synchronization is complete because critical DNS data might not yet be replicated onto this domain controller. If events in the AD DS event log indicate that there is a problem with DNS name resolution, consider adding the IP address of another DNS server for this domain to the DNS server list in the Internet Protocol properties of this computer. This event will be logged every two minutes until AD DS has signaled that the initial synchronization has successfully completed.

As both DCs are also DNS servers, I configured the DNS binding to be the statically set IP address, which should once and for all rule out any sort of weird resolution issues. This had no effect either. There is not a lot else to go on from either server around the date where I think things went sideways. There is an odd 15-day gap in the DFSR log, but maybe that’s normal? I don’t really see any indication of a problem in the Directory Services, System, or Application logs around that time.

I then installed the Active Directory Replication Status Tool (https://www.microsoft.com/en-us/download/details.aspx?id=30005); while it was interesting to run, there were no issues reported. The next thing I ran was the health report from the DFS Management console, which told me more or less what I already knew: DFS was waiting on initial synchronization.

So the next thing to try was to figure out how to force that synchronization to happen. I found the following article (https://support.microsoft.com/en-us/kb/2218556) on Microsoft’s support site. I ran through the Authoritative Synchronization steps, and after going through that the servers were happy. I was able to drop a test file in SYSVOL and see it replicate to the other server, then delete that file and see it drop off the other server.

While I don’t really know what the root cause of this issue was, hopefully the next time it crops up I’ll be able to figure it out a lot more quickly. As a side note to the steps below: I restarted the DFSR service each time after making changes in ADSI Edit. If you’d rather script the attribute changes than click through ADSIEDIT, there’s a PowerShell sketch after the non-authoritative steps.

Prior to running dfsrdiag, you may need to install that feature:

Add-WindowsFeature -Name RSAT-DFS-Mgmt-Con

In order to force Active Directory replication throughout the domain, you will run this command:


repadmin /syncall /APed

You want to force the non-authoritative synchronization of SYSVOL on a domain controller. In the File Replication Service (FRS), this was controlled through the D2 and D4 data values for the Burflags registry values, but these values do not exist for the Distributed File System Replication (DFSR) service. You cannot use the DFS Management snap-in (Dfsmgmt.msc) or the Dfsradmin.exe command-line tool to achieve this. Unlike custom DFSR replicated folders, SYSVOL is intentionally protected from any editing through its management interfaces to prevent accidents.

How to perform a non-authoritative synchronization of DFSR-replicated SYSVOL (like “D2” for FRS)

  1. In the ADSIEDIT.MSC tool modify the following distinguished name (DN) value and attribute on each of the domain controllers that you want to make non-authoritative:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
  2. Force Active Directory replication throughout the domain.
  3. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
    1. DFSRDIAG POLLAD
  4. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.
  5. On the same DN from Step 1, set:
    1. msDFSR-Enabled=TRUE
  6. Force Active Directory replication throughout the domain.
  7. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
    1. DFSRDIAG POLLAD
  8. You will see Event ID 4614 and 4604 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D2” of SYSVOL.
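
As mentioned above, the msDFSR-Enabled flips can also be scripted with the ActiveDirectory module instead of ADSIEDIT. A sketch that assumes the RSAT AD PowerShell module is installed; the DN uses lab names you would replace:

# Toggle msDFSR-Enabled on the SYSVOL subscription object, then force
# AD replication and have DFSR poll AD. Replace the DN with your own.
Import-Module ActiveDirectory
$dn = 'CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=AD-01,OU=Domain Controllers,DC=contoso,DC=com'

Set-ADObject -Identity $dn -Replace @{'msDFSR-Enabled' = $false}
repadmin /syncall /APed
dfsrdiag pollad

# After Event ID 4114 shows up in the DFSR event log, turn it back on:
Set-ADObject -Identity $dn -Replace @{'msDFSR-Enabled' = $true}
repadmin /syncall /APed
dfsrdiag pollad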

How to perform an authoritative synchronization of DFSR-replicated SYSVOL (like “D4” for FRS)

  1. Stop the DFSR service on all domain controllers
  2. In the ADSIEDIT.MSC tool, modify the following DN and two attributes on the domain controller you want to make authoritative (preferably the PDC Emulator, which is usually the most up to date for SYSVOL contents):
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
    3. msDFSR-options=1
  3. Modify the following DN and single attribute on all other domain controllers in that domain:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=FALSE
  4. Force Active Directory replication throughout the domain and validate its success on all DCs.
  5. Start the DFSR service on the domain controller that was set as authoritative.
  6. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated.
  7. On the same DN from Step 1, set:
    1. msDFSR-Enabled=TRUE
  8. Force Active Directory replication throughout the domain and validate its success on all DCs.
  9. Run the following command from an elevated command prompt on the same server that you set as authoritative:
    1. DFSRDIAG POLLAD
  10. You will see Event ID 4602 in the DFSR event log indicating SYSVOL has been initialized. That domain controller has now done a “D4” of SYSVOL.
  11. Start the DFSR service on the other non-authoritative DCs. You will see Event ID 4114 in the DFSR event log indicating SYSVOL is no longer being replicated on each of them.
  12. Modify the following DN and single attribute on all other domain controllers in that domain:
    1. CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=servername,OU=Domain Controllers,DC=domainname
    2. msDFSR-Enabled=TRUE
  13. Run the following command from an elevated command prompt on all non-authoritative DCs (i.e. all but the formerly authoritative one):
    1. DFSRDIAG POLLAD

Stuck in Maintenance Mode

It’s been ages; I do have plans to start writing again, and in fact this particular posting came about because I was going to write up something I just came across at work. I went to log in to my site and was presented with a 500 error. Since I’ve recently moved the entirety of both my sites to Azure Web Apps, I was mildly concerned, as I had no real access to any of the switches, buttons, or levers that make things go. I checked to see if I could log in to my other site and received the same error message, which I guess is good. They are both running on the same App Service in Azure, and while the actual pages would load and such, I was just unable to log in.

I thought about restarting the App Service itself, but chose to start small. I restarted the web application and was able to log in; I then repeated that process on the other site and logged in to it as well.

That’s when I noticed that there were some updates available, specifically the plugin for Windows Azure Storage, and WordPress itself. So I updated WP first; it took longer than HTTP was willing to hang around, so I got a nifty timeout. Reloaded the page and it came back, and I updated the rest of my plugins. Then I rolled over to update this site: same process, same plugins, same timeout. Only a refresh didn’t bring the site back after the plugins updated. I gave it a little more time and still got the maintenance page. Keep in mind there is no server that I have access to, to really do anything that I would normally do. So I did a search and came across this article:

https://premium.wpmudev.org/blog/wordpress-stuck-in-maintenance-mode

Apparently the update process drops a hidden file, .maintenance, in the root of the site. Neat. All I needed to do was delete that file, but how do you do that when you don’t have a server to log into?

Turns out Web Apps have a console, which can be found under Development Tools on the Web App. This drops you into a shell at the root of your site, and even though I know this is Windows, it felt like Linux, so:

> uname -a
D:\home\site\wwwroot
MSYS_NT-6.2-WOW RD0003FF42E552 2.5.0(0.295/5/3) 2016-03-31 18:26 i686 Msys
>

That’s neat. All I needed to do then was run ls -a to make sure .maintenance was there, and then just rm .maintenance to get rid of it; refresh, and the site was back up and available.
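
For what it’s worth, the console also comes in a PowerShell flavor, so the same cleanup can be sketched from there (purely illustrative):

# From the App Service console, at the site root (D:\home\site\wwwroot):
Get-ChildItem -Force -Filter '.maintenance'   # confirm the leftover flag exists
Remove-Item '.maintenance' -Force             # delete it, then refresh the site

I’m going to paste the contents of that page here, in case it goes away.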

Has your WordPress site ever gotten stuck in maintenance mode? Many people seem to run into this problem. They update a WordPress theme or plugin or maybe even WordPress itself, and in the middle of the update, something goes wrong, and their site gets stuck in maintenance mode with nothing appearing but the message, “Briefly unavailable for scheduled maintenance. Check back in a minute.”
Not only are visitors locked out, but even the Admin is locked out of the backend.
If this happens to you, try one of the following solutions.
1. The solution that seems to work for most is to delete a file called “.maintenance” from the root of your site. This is a temporary file that gets created in the update process, and more than likely, this is your culprit. (Notice the *dot* at the beginning of the file name.)
Again, this file rests on your server in your main WordPress install section. You CANNOT access it through the Admin area of your site. You will need to access the server through your webhost’s system (like Cpanel) or via FTP.
If you do not have access to files on your server, then contact your host and let them know the problem (and the solution, of course).

2. Although removing the .maintenance file seems to work for most, it doesn’t work for all. If it doesn’t work for you, then try the following:
a. Delete the .maintenance file as outlined in option #1 above.
b. Delete the plugin or theme that you were attempting to update.
c. If your site is not back at this point, then in your wp-content folder, you will find a folder called “upgrade.” Delete the files or folders you find there.
* Remember to clear the cache in your browser (or use a different browser) to make sure that you aren’t getting an old version of your site.
Once you have your site back, you may want to run your updates again to make sure they’ve taken.

Automating Linux in Azure

Automation is one of my major areas of work, and most of my automation revolves around System Center Orchestrator. I also do a fair amount of work in Azure and thought it was time to dust off my Automation account and do something entertaining.

The image below is a PowerShell Workflow inside an Azure Automation Runbook that is connecting to a Linux server (in Azure) and reading the contents of a file.

AzureAutomation-Linux
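
In case the image is hard to read, the runbook is shaped something like the sketch below. It assumes the Posh-SSH module has been imported into the Automation account and that a credential asset named 'LinuxCred' exists; the server and file names are stand-ins:

workflow Get-LinuxFileContent
{
    # Pull the stored credential for the Linux box from the Automation account.
    $cred = Get-AutomationPSCredential -Name 'LinuxCred'

    InlineScript
    {
        # Open an SSH session, read the file, and tear the session down.
        $session = New-SSHSession -ComputerName 'myserver.cloudapp.net' -Credential $Using:cred -AcceptKey
        $result = Invoke-SSHCommand -SSHSession $session -Command 'cat /etc/hosts'
        $result.Output
        Remove-SSHSession -SSHSession $session | Out-Null
    }
}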

Oh the possibilities…

DSC + WinRM + GPO

Ok, so I’m working on Desired State Configuration at work, and a long while ago I had created a GPO to manage the WinRM settings. This allows me to control how WinRM works, and it was needed for PowerShell remoting to just work on our systems.

Fast forward to today: I’m joining some new servers to the domain, I copy my configuration down, and then I run Start-DscConfiguration and receive a nasty error:

The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.

That’s no good; it appears as though DSC is unhappy with WinRM, so I run through the usual set of commands.

Enable-PsRemoting -Force

Still get the error

Disable-PSRemoting -Force; Enable-PSRemoting -Force

Still get the error

WinRM quickconfig

Still get the error. See, WinRM is configured; it turns out that while it’s configured just enough for PowerShell to work, it’s not configured enough for DSC to work. I found this thread on TechNet:

https://social.technet.microsoft.com/Forums/systemcenter/en-US/d3286893-3d3c-4991-a7ba-a9fd07e58288/scvmm-2008-r2-install-error-2927-0x80338113?forum=virtualmachingmgrsetup

The context is for Virtual Machine Manager, but the errors are the same. It linked to this TechNet blog article:

http://blogs.technet.com/b/scvmm/archive/2011/09/23/vmm-2012-rc-understanding-the-hyper-v-host-addition-operation-if-window-remote-management-winrm-is-configured-using-group-policy-gpo-settings.aspx

The good part is at the bottom, under supported configurations. In my GPO I had only the HTTPS listener enabled, so I enabled the legacy HTTP listener. Additionally, I did NOT have the IPv6 filter set to ‘*’, so I did that as well.
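
After pushing the GPO change, it’s worth checking what listeners actually landed on the node. A quick sketch of the checks I’d run; nothing here is environment-specific:

# List the WinRM listeners the GPO produced:
Get-ChildItem WSMan:\localhost\Listener | Format-Table -AutoSize

# The classic equivalent:
winrm enumerate winrm/config/listener

# And confirm the WS-Management service answers, which is what DSC needs:
Test-WSMan -ComputerName localhost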

The really confusing thing for me was where to find the IPv6 setting, “Allow automatic configuration of listeners”; it appeared that I did not have that in my GPO. Another quick search and I found this TechNet thread:

https://social.technet.microsoft.com/Forums/en-US/e4aa3b95-608f-46c3-af06-06f57b02b455/why-dont-i-have-the-allow-automatic-configuration-of-listeners-group-policy-option-for-winrm?forum=winserverGP

I didn’t have it because it was renamed to ‘Allow remote server management through WinRM’. I tried to comment on the article, but I think it’s too old, so I decided to write this up since I will most likely run into this again at some point.

So, here we are, another blog posting down.

DISM…’because reasons’

I don’t know why this is a thing; it shouldn’t be a thing. I’m going to post a link to the page on TechNet, and then just paste in the content.

https://technet.microsoft.com/en-us/library/dn482069.aspx

You can use the Deployment Image Servicing and Management (DISM) command-line tool to create a modified image to deploy .NET Framework 3.5.

Important:
For images that will support more than one language, you must add .NET Framework 3.5 binaries before adding any language packs. This order ensures that .NET Framework 3.5 language resources are installed correctly in the reference image and available to users and applications.

  1. Open a command prompt with administrator user rights (Run as Administrator) in Windows 8 or Windows Server 2012.
  2. To Install .NET Framework 3.5 feature files from Windows Update, use the following command:
    DISM /Online /Enable-Feature /FeatureName:NetFx3 /All

    Use /All to enable all parent features of the specified feature. For more information on DISM arguments, see Enable or Disable Windows Features Using DISM.

  3. On Windows 8 PCs, after installation .NET Framework 3.5 is displayed as enabled in Turn Windows features on or off in Control Panel. For Windows Server 2012 systems, feature installation state can be viewed in Server Manager.


  1. Run the following DISM command (image mounted to the c:\test\offline folder and the installation media in the D: drive) to install .NET 3.5 (see the mount/commit sketch after this list):
    DISM /Image:C:\test\offline /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs

    Use /All to enable all parent features of the specified feature.

    Use /LimitAccess to prevent DISM from contacting Windows Update/WSUS.

    Use /Source to specify the location of the files that are needed to restore the feature.

    To use DISM from an installation of the Windows ADK, locate the Windows ADK servicing folder and navigate to this directory. By default, DISM is installed at C:\Program Files (x86)\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\. You can install DISM and other deployment and imaging tools, such as Windows System Image Manager (Windows SIM), on another supported operating system from the Windows ADK. For information about DISM-supported platforms, see DISM Supported Platforms.

  2. Run the following command to look up the status of .NET Framework 3.5 (offline image mounted to c:\test\offline):
    DISM /Image:c:\test\offline /Get-Features /Format:Table

    A status of Enable Pending indicates that the image must be brought online to complete the installation.
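
The steps above assume the image is already mounted to c:\test\offline. A minimal sketch of getting it there and committing the change back into the WIM afterwards; the image path and index are illustrative:

# Mount the image, enable the feature as in step 1, then commit and unmount.
Dism /Mount-Image /ImageFile:C:\test\images\install.wim /Index:1 /MountDir:C:\test\offline
Dism /Image:C:\test\offline /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs
Dism /Unmount-Image /MountDir:C:\test\offline /Commit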

You can use DISM to add .NET Framework 3.5 and provide access to the \sources\SxS folder on the installation media to an installation of Windows® that is not connected to the Internet.

  1. Open a command prompt with administrator user rights (Run as Administrator) in Windows 8 or Windows Server 2012.
  2. To install .NET Framework 3.5 from installation media located on the D: drive, use the following command:
    DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:d:\sources\sxs

    Use /All to enable all parent features of the specified feature.

    Use /LimitAccess to prevent DISM from contacting Windows Update/WSUS.

    Use /Source to specify the location of the files that are needed to restore the feature.

    For more information on DISM arguments, see Enable or Disable Windows Features Using DISM.

On Windows 8 PCs, after installation, .NET Framework 3.5 is displayed as enabled in Turn Windows features on or off in Control Panel.

Bulk URL Monitoring

How did I not know about this before? So I’m working on creating a Management Pack for Advanced Group Policy Management and hunting for the utility to seal an MP, and I find this utility buried in the Tools folder. I’m going to link the article I found on using it here, as well as scrape the text in case it goes away.

Original Article

Bulk URL Manager

The Bulk URL Editor was introduced in SCOM 2007 R2. I don’t often see this tool used, as most customers don’t even know it exists or don’t understand the benefits of it. The first benefit of the Bulk URL Editor is that it scales to thousands of URLs. If you were to try to create hundreds of URLs with the Web Application Templates, it wouldn’t work. I have tried this in the past, and there are so many workflows running at the same time that the agent fails and you end up not monitoring anything. The second benefit of the tool is that you can add a bunch of websites in a few minutes.

The Bulk URL Editor is not very intuitive, but once you understand how to use it the process is pretty easy.  If you haven’t used the tool I highly recommend giving it a try.

TechNet has some good documentation here. http://technet.microsoft.com/en-us/library/dd788987.aspx

To use the Bulk URL Editor, I copy the tool from the installation media. The file is stored in the “SupportTools\AMD64” directory.

On my computer that has the SCOM console installed, I copy the “BulkUrlManager.exe” file to “C:\Program Files\System Center 2012\Operations Manager\Console” (If you copy it anywhere else it won’t work)

I double click on the “BulkUrlManager.exe” file.

On the Connect to Server dialog box I type in the name of my Management Server and click connect

I click the New Icon

I then type in the name of my website template. I choose “Standard URL Monitoring”.

Now I click Create a new Management Pack.

I then give the management pack the name. I choose “BUE Website Monitoring” and click OK

I click OK on the Add New Template Screen

On the next dialog box I click Yes, then OK

Under Templates I click the template I created called “Standard URL Monitoring”

Now I click Add

Now I simply add the URLs that I want to monitor. (Note: you need to include http:// or https:// or it will fail; see the sketch below for a quick way to sanity-check a list first.)
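
A quick pass like the following, run before pasting a long list into the editor, can save some churn. The urls.txt file is hypothetical; everything else is stock PowerShell:

# Normalize each URL to have a scheme, then probe it with a HEAD request.
Get-Content .\urls.txt | ForEach-Object {
    $url = $_.Trim()
    if ($url -notmatch '^https?://') { $url = "http://$url" }
    try {
        Invoke-WebRequest -Uri $url -Method Head -UseBasicParsing -TimeoutSec 10 | Out-Null
        "$url OK"
    }
    catch {
        "$url FAILED: $($_.Exception.Message)"
    }
}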

I click OK and I see all of my URLs are attached to my Standard URL Monitoring Template

Now I simply hit save. I click yes to save the changes to the selected web template.

I am done in the Bulk URL Editor for now. But I am not finished setting up my URL monitoring.  I need to select where I want the URLs to be monitored from.

I launch the Operations Manager Console and go to the Authoring screen.

I expand out Web Application and right click to refresh the screen.

Now I see the website I created using the Bulk URL editor

Under the Actions pane, Custom Actions I click Edit web application settings

My website opens and all I see is a string of text that looks like an ugly variable. (Don’t panic, this is how it is supposed to look.)

Now under the Actions Pane, under Web Application I click Configure settings

I click the Watcher Node tab and select the server I want to run the website monitoring.  I choose my Management Server.

I click OK and then click Apply at the bottom of the Web Application Editor screen

I then close the screen with the red X at the top right

Now I go back into the Bulk URL Editor.

I select the Template I have been working with and hit Synchronize (you may need to refresh before the Synchronize button lights up)

I click Yes.

I close the Bulk URL Editor as I am done with it.

Now I open the SCOM Monitoring Console and look for our Web Application, Standard URL Monitoring Instances.

I can see all my websites are now being monitored.

As you can see, each website is its own object. This is nice for putting them into maintenance mode or putting them into groups.

If I go to Groups, I can see that the Bulk URL Editor also created a group.

I have an addiction problem

At first I thought I had a drinking problem. As evidenced by the picture below, I drink A LOT of coffee!

That's a lot of stoppers!

Then, as I’m driving in to work, I’m listening to NPR. They are talking about the health rankings for Kansas; apparently it’s an annual report on the state of the health system for a given state. As I’m listening to this I find myself thinking: where did they pull their data from? I wonder if I could get at that data? I could easily write a Management Pack to monitor….holy crap!

Yes, I found myself roughing out how I would write a Management Pack in Operations Manager to report on the state of the health system.

Hello, my name is Jeff. I see the world in varying states of health Green (Happy), Yellow (Unhappy) and Red (Angry).

Week In Review : 07/27/2014

Another week coding the app; the nice thing is that the MVC re-write is pretty much done. In fact, the idea of utilizing the DB to store information is actually in place, something I wound up having to do. I have decided to store a snapshot of the relevant data in some tables at login. The actual collection of data and table insert takes roughly 20 seconds, so it’s pretty quick. This speeds up everything quite a bit; the entire site feels a lot more responsive.

I’m also having to cron an insert for the Proteus data every 24 hours. There isn’t a simple way to query the underlying properties of a given ip4Network, so I’m just going to collect them all. There are roughly 4,000 ip4Networks defined, and it takes about 2 minutes to enumerate each one and write the underlying properties to a table. Since that data is much more static than the VMware data, an insert every 24 hours is sufficient.

I’m currently making much better use of my classes and passing data back and forth between views with them. On two of them I have overridden the ToString() method so that I can just call class.ToString() to get my confirmation email; it looks really slick in code. The added benefit of how I’m doing this now is that I only store the friendly names in the class, so displaying the information back out is super simple. Then it’s just a couple of simple queries to return the information I need.

I have tweaked a few things on some of the projects that are used.

  • I have written a slightly better way of getting to the cloned network adapter than what I was doing in the past.
  • A lot of my custom validation code has slimmed down; since these validators return a bool, I don’t need to define an object to hold the result and check that, I simply check the call.

As the rest of the code is already done, I figure I have a couple more days and then we can call it done. The only major thing left for me is to build the method to create a new virtual machine, and from the little I’ve looked into it, it appears it’s nearly identical to how you clone a VM.

See you next week, if not sooner!