Thursday, January 27, 2011

Opalis 6.3 Operator Console Installation Made Easy!

For anyone that has had the pleasure (and heartache!) of installing Opalis 6.3 from scratch, you will know it is a long and arduous process: first installing the Opalis 6.2 Management Server and Database, importing the Licence, installing the Operator Console, upgrading to Opalis 6.2 SP1 and then upgrading to Opalis 6.3.

The hard part was installing the Opalis Operator Console, which involved the following:

  • downloading a number of prerequisite Java and JBoss installers (approx 200MB worth of them!)
  • copying these files to a directory on the C:\ drive
  • ensuring that the 'Path' environment variable included %JAVA_HOME%\bin
  • running the Java executables from a command line
  • opening PowerShell and running the Opalis Operator Console installer script (installopconsole.ps1)
  • starting JBoss by running 'run.bat -b 0.0.0.0' from the <JBOSS>\bin folder
If all of these steps were followed to the letter, the PowerShell script would run through a number of installation steps, prompting for user intervention, and eventually finish the installation of the Operator Console!
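
For reference, the command-line portion of that manual process looked roughly like this - a sketch with made-up directory paths rather than the exact ones from the documentation:

rem Point JAVA_HOME at the Java install and add it to the Path (example location)
set JAVA_HOME=C:\OpalisPreReqs\Java
set PATH=%PATH%;%JAVA_HOME%\bin

rem From a PowerShell prompt, run the Operator Console installer script:
rem PS> .\installopconsole.ps1

rem Finally, start JBoss bound to all interfaces from the <JBOSS>\bin folder
cd /d C:\OpalisPreReqs\JBoss\bin
run.bat -b 0.0.0.0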

See below link for a more detailed step by step installation:

http://systemcenterblog.blogspot.com/2010/11/installing-opalis-part-i-opalis-622.html

Once this was completed, you then needed to upgrade Opalis 6.2 to Opalis 6.2.2 SP1:

http://systemcenterblog.blogspot.com/2010/11/installing-opalis-part-2-upgrading-to.html

Finally, you then needed to copy an MSI file containing the Opalis 6.3 binaries to the install location and run an upgrade patch to complete the Opalis 6.3 upgrade.

http://systemcenterblog.blogspot.com/2010/11/installing-opalis-part-3-upgrading-to.html

Talk about going around the block for an installation!!!

In fairness, we really enjoyed doing this installation as it brought back memories of long-winded legacy application installations from a few years back and let us really get into the nuts and bolts of the Opalis install structure.

So, how is that 'Made Easy' (see blog post title above!), you might ask?

Well, I came across an entry last night on the System Center TechNet site from Adam Hall - Senior Technical Product Manager for Opalis - about a new third-party tool that pretty much automates all of the hard parts of the Operator Console installation that I have outlined in my steps above.

The product is from a company called 'Kelverion'. I downloaded it last night and ran it against a new test VM with no previous Opalis installation to see how it compared to our first attempt at an Opalis 6.3 installation (which took us nearly 2 hours last month by the time we had figured out all the steps and read the documentation!).

The results are amazing: I managed to get the whole Opalis 6.3 installation, including the Operator Console, installed in just under 30 minutes! The reason it is so quick is that Kelverion's Configuration Utility for OpConsole is nearly 200MB in size and contains all of the prerequisite software, such as Java and JBoss, inside the MSI file. It runs through a handy wizard prompting you for all of the required information, including creating and naming the new directories to place and run the prerequisite software from. Best of all, it is free too!

You can read more about this utility here:

http://www.kelverion.com/news/2011/1/26/kelverion-configuration-utility-for-opconsole-released.html

You need to register for your free download from here (you will get an email within a minute with the download link):

http://www.kelverion.com/utility-for-opconsole-download/

You can view a really quick 5 minute video of the utility in action here:

http://www.kelverion.com/utility-for-opconsole-demo

I most definitely recommend that you use this utility for any future Opalis Operator Console installations that you are deploying - although the original long way is worth doing at least once, for the fun!

Monday, January 24, 2011

Instant Recovery of Hyper-V using dBeamer

I came across this application on the web yesterday that I thought was worth a mention in relation to DPM.

It is called dBeamer and is available from a company called Instavia.

The main purpose of this product is to maximise uptime whilst performing a recovery of a replica from DPM. It gives you immediate access to a Hyper-V VHD that you are restoring from DPM to your Hyper-V host, so the end user sees very little downtime from the time the recovery job is initiated within DPM.

If you have ever performed a full disaster recovery test of your own or a client's virtual environment using DPM and Hyper-V, then although it is a perfectly seamless recovery that brings the whole virtual environment back up exactly as it was, you cannot get away from the fact that the DPM server still needs to restore this data to your newly commissioned Hyper-V host, and that could take anywhere from 1 hour to 10 hours depending on data size.

dBeamer allows you to get around this issue by enabling access to the VHD immediately and allows administrators to achieve a Recovery Time Objective (RTO) of near zero!

Here's the link to Instavia's site - I would recommend downloading the 64-bit client onto your DPM server, requesting a trial licence and testing away!

http://www.instavia.com/dbeamerdpm-for-it-administrators

Tuesday, January 18, 2011

Download MAP 5.5

The new version of Microsoft's Assessment and Planning Toolkit is now available, with some nice new features added to enhance the previous version's capabilities. You can now simplify your move to the cloud with MAP 5.5 by identifying and analyzing web application and database readiness for migration to Microsoft Azure. There is also support for migrating to Internet Explorer 9, along with all of the usual assessment and inventory options based around virtualization analysis.

Download from the link below:

http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=67240b76-3148-4e49-943d-4d9ea7f77730

Monday, January 17, 2011

World IPv6 Day is announced

On 8 June 2011, Google, Facebook, Yahoo!, Akamai and Limelight Networks will be among the major organisations offering their content over IPv6 for a 24-hour "test drive". The goal of the Test Drive Day is to motivate organizations across the industry – Internet service providers, hardware makers, operating system vendors and web companies – to prepare their services for IPv6 to ensure a successful transition as IPv4 addresses run out.

Sunday, January 16, 2011

Exchange 2010 Backup-Less Configuration

How would you like to never have to do another backup of your Exchange 2010 environment again? Well, if you are using or intend to implement a DAG within your Exchange 2010 environment, then read on to revolutionize your backup strategy!

When Exchange 2010 was in beta, I heard rumours that, if configured properly, you could do away with traditional tape or disk-based backups and use the High Availability functionality of DAG to achieve maximum uptime. In fairness, I thought nothing more of it since then and never looked into how exactly you could go about creating this type of solution - until now!

I received a request from a customer to investigate the possibility of implementing this within their existing DAG environment, and I am very impressed with the information I found and the process involved in implementing it.

Here's a quick summary of what's involved in implementing the solution:

The requirements for a backup-less implementation of Exchange 2010 DAG are as follows (see the example commands after the list):

  • Windows Server 2008 Enterprise on all Exchange 2010 servers
  • Exchange Server 2010 Standard or Enterprise Edition
  • Circular Logging Enabled
  • A minimum of 3 DAG copies of each active mailbox database within the DAG environment spread across different geographical locations for disaster recovery
  • Lagged copies of each database preferably stored on a separate Exchange 2010 server within the DAG environment that has DAG Activation disabled
  • Deleted Item Retention Policies to be reviewed
  • Single Item Recovery Enabled on either each entire mailbox database or the top priority mailbox users within the organisation – i.e. Senior Management mailboxes
  • Public Folder Replication Policies need to be in place if Public Folders are in use
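
To give a flavour of the configuration work involved, here are the kinds of Exchange Management Shell commands you would be running - a minimal sketch, with made-up database, server and mailbox names:

# Enable circular logging on a mailbox database (hypothetical name 'DB01')
Set-MailboxDatabase "DB01" -CircularLoggingEnabled $true

# Add a lagged copy of DB01 on another DAG member with a 7-day replay lag
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "EX03" -ReplayLagTime 7.00:00:00

# Block automatic activation on the server holding the lagged copies
Set-MailboxServer "EX03" -DatabaseCopyAutoActivationPolicy Blocked

# Enable Single Item Recovery on a top-priority mailbox
Set-Mailbox "CEO" -SingleItemRecoveryEnabled $true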

The most comprehensive and intuitive source that I found on this topic comes from the guys over at msexchange.org. Exchange MCM and MVP Henrik Walther has created an excellent four-part guide on this exact solution, and you can view all four parts at the links below:

http://www.msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/eliminating-traditional-backups-using-native-exchange-2010-functionality-part1.html

http://www.msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/eliminating-traditional-backups-using-native-exchange-2010-functionality-part2.html

http://www.msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/eliminating-traditional-backups-using-native-exchange-2010-functionality-part3.html

http://msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/eliminating-traditional-backups-using-native-exchange-2010-functionality-part4.html

Now, time to throw away those backup tapes!

Exchange 2010 Tested Solutions

The guys over at the Exchange Team in Microsoft have come up with an initiative, in conjunction with a number of hardware providers, to create and test different solutions for Exchange 2010 deployments, and have produced white papers on each test environment.

For example, if you want to know how an Exchange 2010 installation running 500 mailboxes in a single site on Hyper-V using Dell hardware is configured and operates, then you can download the white paper from the link below!

They have included a number of different scenarios incorporating different Exchange and hardware configurations, and will be adding more white papers to this link in the near future.

Download the white papers from here:

http://technet.microsoft.com/en-us/library/gg513520.aspx

Publishing Outlook Anywhere Using NTLM Authentication With Forefront TMG or Forefront UAG White Paper

Here's a brand new White Paper released by Greg Taylor - Microsoft Senior Program Manager on Exchange Server. It goes through all you need to know to publish Outlook Anywhere using either TMG or UAG.

I could have done with this white paper 7 months ago when I first deployed OA through UAG!

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=040b31a0-9a69-4278-9808-e52f08ffaee3

Saturday, January 15, 2011

KMS Minimum Clients (Activation Thresholds)

I came across this during the week when I was reading up on KMS licensing in a bit of detail, and thought it might be worth sharing as I hadn't realised there was a minimum number of activations required for KMS to operate. Here's the official text from Microsoft's website on the subject:


KMS requires a minimum number of either physical or virtual computers in a network environment. These minimums, called activation thresholds, are set so that they are easily met by enterprise customers.

For computers running:
  • Windows Server 2008 or Windows Server 2008 R2, you must have at least five (5) computers to activate.
  • Windows Vista or Windows 7, you must have at least twenty-five (25) computers to activate.
  • Office 2010, Project 2010 or Visio 2010, you must have at least five (5) computers running any of those products to activate.
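
If you want to check how close your environment is to these thresholds, run the Software Licensing script on the KMS host and look for the 'Current count' value in the output - clients will not activate until the relevant threshold has been reached:

cscript %windir%\system32\slmgr.vbs /dlv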

Tuesday, January 11, 2011

Full Microsoft Best Practice Analyzer List

Here's a full list of all the available Microsoft Best Practice Analyzers, compiled by the Microsoft Premier Support Team here in Ireland. Use these tools on a regular basis to check for configuration and patching issues at your sites:

http://blogs.technet.com/b/premier_support_ireland/archive/2011/01/04/kick-off-2011-with-best-practice-analysers.aspx

Monday, January 10, 2011

HP ProLiant Network Teaming for Hyper-V White Paper

A new 'How To' white paper has been released by HP on the steps needed to implement NIC teaming in a Hyper-V environment on HP servers.

The document is a must-read for anyone considering teaming NICs using the HP Network Configuration Utility (NCU), and you must follow these steps in this exact order to ensure no loss of network connectivity:

  • Install the Windows Server 2008 operating system normally, without using the HP 'Smart Start' DVD
  • Install the Hyper V role into the Windows Server 2008 Operating System
  • Download and install all of the latest Microsoft Updates and security fixes using Windows Update
  • Download and install the latest version of the HP Proliant Support Pack (PSP)
  • Configure the HP Team using the HP NCU version 10.10 or higher
If you install the HP NCU before adding the Hyper-V role to the O/S, you will need to uninstall the NCU and re-install it once Hyper-V is added.
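
As a quick sanity check before touching the NCU, you can confirm the Hyper-V role is already in place from PowerShell on Windows Server 2008 R2 (on Windows Server 2008 RTM, 'servermanagercmd -query' does a similar job):

Import-Module ServerManager
Get-WindowsFeature Hyper-V   # the Hyper-V role should show as installed before you run the PSP installer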

The document goes through the above steps and processes in detail, and also talks about using VLANs in 'Promiscuous' mode from within the HP team.

Download the White Paper from here:

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01663264/c01663264.pdf

Saturday, January 8, 2011

SCVMM: How to Add a Perimeter-Based Hyper-V Host

Occasionally you might find the need to add a perimeter/DMZ-based Hyper-V host to your SCVMM management scope, and as this host is on a different network from your SCVMM server, there are some extra steps you need to take to deploy the SCVMM agent to it.

If you want to deploy an agent to a server that is based in a perimeter network, then you will need to insert the SCVMM media into the perimeter based host and use the ‘Local Agent’ option from the SCVMM installation splash screen.
 
Once you select the ‘Local Agent’ option, you will then need to move through the wizard until you reach the ‘This Host is on a perimeter network’ option and this will then need to be selected.
 
You will then need to supply an encryption key password for a newly generated security file, which SCVMM will place in the default location shown in the 'Export Security File to' path.
 
This file then needs to be copied from the host on the perimeter network to the SCVMM Console Server in your primary network. Once it is copied, select 'Add Host' from the SCVMM Console and then specify the 'Windows Server-based host on a perimeter network' option. When you click 'Next', you will be presented with a screen where you need to enter the following information for the perimeter-based host:
 
  • Computer Name or IP Address (the computer name of your perimeter host)

  • Encryption Key (the password you specified earlier)

  • Security File Path (the path to the copied Security File from the perimeter host)

Once your firewall has the relevant ports open (the default SCVMM ports are 8100, 80 and 443) to communicate between the networks, both SCVMM and the remote perimeter-based host can talk!
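
If the firewall in question is the Windows Firewall on the perimeter host itself, the default ports can be opened with rules along these lines (adjust the port numbers if you changed them during the SCVMM installation):

netsh advfirewall firewall add rule name="SCVMM Agent" dir=in action=allow protocol=TCP localport=8100
netsh advfirewall firewall add rule name="SCVMM HTTP" dir=in action=allow protocol=TCP localport=80
netsh advfirewall firewall add rule name="SCVMM HTTPS" dir=in action=allow protocol=TCP localport=443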

Friday, January 7, 2011

SCVMM 2008 R2 Error (402) "Library Server is not associated with this Virtual Machine Manager Server"

I had this problem on a site during the week: a failed template deployment from SCVMM put the machine into an error state, and when I tried to delete the virtual machine from SCVMM it failed with error (402) "Library Server is not associated with this Virtual Machine Manager Server".

On searching for the answer to this, I found a lot of comments from people either just leaving it as it was (probably because it doesn't do any harm) or simply re-installing SCVMM!

I found both of these 'workarounds' a bit excessive and wanted to track down an absolute reason and fix for the error.

I found a post on the Microsoft System Center forums explaining that the object for the virtual machine within the SCVMM SQL database had an incorrect object state of '104'. Changing this to a state of '1' changes the state in the GUI to 'Missing', which would then allow me to delete the machine from SCVMM as normal.

If you're partial to a bit of SQL Database modifications, then read on, if not, then just re-install SCVMM!!

Here's what the TechNet post suggests doing, with my own non-SQL-guru way of fixing it added:

  • Close any open sessions of SCVMM 2008 R2 on the desktop or from within remote sessions

  • Open up SQL Management Studio and browse to the SQL Database for the SCVMM installation

  • Expand the tables folder within the VirtualManagerDB database and browse down the list until you find the 'dbo.tbl_WLC_VObject' entry

  • Right-click on this and select the 'Edit Top 200 Rows' option

  • On the right hand side, this will bring up a list of 'ObjectID', 'ObjectType' and 'ObjectState' columns alongside another column titled 'Name'

  • If you expand the 'Name' column, you should be able to see the name that you assigned to the virtual machine template installation as you see it from within the SCVMM GUI

  • Once you find the 'Name' of the virtual machine template installation that you want to modify, you should see an 'ObjectState' entry of '104' beside it

  • Change this 'ObjectState' entry to '1' and then select the 'Execute SQL' option from the 'Query Designer' menu at the top

  • Now close down SQL Management Studio and re-open the SCVMM 2008 R2 GUI and you should now see that the virtual machine template has a status of 'Missing' beside it

  • Simply right-click on this VM now, select 'Delete' and choose 'Yes' if prompted to delete the VHDs associated with it too

All done - and apologies for the lack of accurate SQL database modification etiquette!
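
For anyone comfortable in a query window, the steps above boil down to a single UPDATE statement against the VirtualManagerDB database - shown here as a sketch with a placeholder VM name (back up the database first; this is an unsupported change, so proceed at your own risk):

UPDATE dbo.tbl_WLC_VObject
SET ObjectState = 1
WHERE Name = 'MyFailedVM' AND ObjectState = 104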

Here's the link to the TechNet article too:

http://social.technet.microsoft.com/Forums/en-US/virtualmachinemanager/thread/872034ba-3545-4431-b9f6-07ee8c65188b

Thursday, January 6, 2011

Opalis Virtualization Support

The Microsoft TechNet Opalis Blog announced today that Opalis 6.2.2 and later are now officially supported in a virtualized environment.

See the news here:

http://blogs.technet.com/b/opalis/archive/2011/01/06/opalis-integration-server-is-officially-supported-by-microsoft-in-virtualized-environments.aspx

Hyper-V Time Synchronization on a Windows-Based Network

A very common query that I get asked to help out with in relation to Hyper-V installations is the small (but very important!) task of getting time synchronization working between the physical environment and the virtual environment.

There are a lot of blog posts out there at the moment covering this topic, but I thought I'd throw my two cents in based on the solution that works for me.

Background
 
Firstly, I need to explain a little about how time synchronization works in a domain environment.

When you install the first domain controller in your domain, this DC is the owner by default of all 5 FSMO roles within Active Directory. The PDC Emulator Role is the one that is responsible for Active Directory Time Synchronization.

When you add member servers or client computers to your new domain, they then default to using the PDC Emulator Role holder as their time synchronization source. If you move the PDC Emulator Role to another Domain Controller, then this will become the time synchronization source for that domain in place of the original DC.

When you install a Member Server into Active Directory and add the Hyper-V role onto that member server things start getting a little bit more interesting!

The physical Hyper-V host that is now a member server in your new domain will use the PDC Emulator Role holder as its time synchronization source, as you would expect. However, when you add virtual machines to that Hyper-V host and the Hyper-V Integration Services are installed onto each virtual machine, each VM starts to synchronize its time from the Hyper-V host it is running on and not from the PDC Emulator Role holder!

This 'VM to Host' time sync generally isn't a problem when all of your domain controllers are physical machines as the Hyper-V host updates itself from a physical machine separate from the virtual environment and this keeps the internal VM's up to date.

What happens, though, when the domain controller holding your PDC Emulator Role (gasp for air!) is itself a virtual machine on the Hyper-V host? You guessed it - time sync issues occur, and the virtual DC's clock starts to drift away from the time on the Hyper-V host, which in turn knocks the rest of the domain off-skew, as they are, as mentioned above, configured by default to sync from the PDC Emulator Role holder.

The Wrong Way
 
A lot of people - including myself for a long time - took the stance that the best and easiest solution to this problem was simply to untick the 'Time Synchronization' box in the 'Integration Components Offered' section on the virtual domain controller that holds the PDC Emulator Role. Once this step was completed and the time within the virtual DC was configured correctly, it nearly always resolved the time sync issues on site. I use the word 'nearly' though, as this workaround didn't always work.

The Right Way
 
So what's the solution? A couple of months ago, I came across a blog post from Ben Armstrong - Virtualization Program Manager at Microsoft - covering this subject, and specifically the Hyper-V 'Time Synchronization' integration service. In this post, Ben states categorically that you SHOULD NOT disable the 'Time Synchronization' integration service on any virtual machine within Hyper-V; instead, you should manipulate the Windows Time service (w32time) from within the virtual DC to get the results that you need for coherent time sync within your domain!

The link to Ben Armstrong's blog post is as follows:
http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/11/19/time-synchronization-in-hyper-v.aspx

To summarise his post and outline my steps when faced with time sync issues in a Hyper-V environment, I carry out the following procedure on all my VMs (most importantly, my virtual DCs):

  • Enable all of the Hyper-V Integration Services
  • Check each virtual domain controller's time source using the "w32tm /query /source" command
  • If the virtual DC is using the 'VM IC Time Synchronization Provider', type the following commands into the command line within the virtual DC to leave the Hyper-V time sync enabled for VM reboots but not for when the VM is up and running:

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0

(Select 'Yes' if requested after you enter this command)

w32tm /config /syncfromflags:DOMHIER /update

(This command tells the virtual DC to sync from within the Domain)

net stop w32time & net start w32time

(This command stops and then restarts the Windows Time Service)

w32tm /resync

(This command forces an immediate resync; running "w32tm /query /source" again should now show the internal time source - hopefully the DC with the PDC Emulator Role)

At this point, your virtual domain controller should sync from the domain hierarchy when it is online, and it will use the Hyper-V host while it is rebooting or coming out of a 'Saved State', before the Windows O/S loads

Finally, all that is left is to configure the virtual domain controller and each Hyper-V host in your domain to synchronize with an external time source (NTP provider)

Note: Make sure to follow the instructions in this next paragraph exactly as you read them, as clicking on the 'Internal Time Source' option will not give you the desired result!
 
Use this link from Microsoft to automatically configure an external authoritative time source by selecting the 'Microsoft Fix It' button halfway down the page, under the heading 'Configure the Windows Time Service to use an External Time Source':

http://support.microsoft.com/kb/816042

The 'Microsoft Fix It' button should initiate a wizard that will prompt you to enter your external NTP servers (you can enter two in here for redundancy) and will then configure the registry to reflect these changes automatically.

As I'm based in Ireland, the NTP provider that I prefer to use is ie.pool.ntp.org.
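
If you would rather configure this by hand than use the Fix It wizard, the equivalent w32tm commands (shown with my preferred Irish pool as an example peer; substitute your own NTP servers) look roughly like this when run on the PDC Emulator Role holder:

w32tm /config /manualpeerlist:"ie.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
net stop w32time & net start w32time
w32tm /resync

(On the Hyper-V hosts, run the same commands but drop the /reliable:yes flag - that one marks the authoritative time server for the domain)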


Hopefully this will help you to get a better handle on Time Sync in a Hyper-V environment.

Download WhoCrashed

I am constantly asked for the name of the utility that I use when analysing Windows blue screen crash dumps. Anyone who has analysed these dumps in the past using the Microsoft Windows Debugging Tools will know it can be a time-consuming and sometimes futile exercise!

I came across WhoCrashed a long time back and it really is a one-stop shop for crash dump analysis.

You can download the latest version from this link here:

http://www.resplendence.com/whocrashed

Tuesday, January 4, 2011

Windows Operating System Deployment Methods and the Dynamic Data Center Toolkit

I came across a really good white paper over the holidays covering the different methods available for deploying Microsoft operating systems using Microsoft technologies.

The white paper covers topics such as:

  • Dynamic Data Center Toolkit
  • WIM Formats
  • Microsoft WAIK
  • WIM2VHD
  • Windows Deployment Services (WDS)
  • Microsoft Deployment Toolkit 2010 (MDT)
  • DISM
  • SCCM 2007 R2
If you have ever wondered what deployment process would be best for your organisation or client, or just want to be aware of all the different options available, along with some basic walk-throughs of what is needed to get an O/S deployed, then download this document and read through it.
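
As a small taste of one of the tools covered, DISM can inspect a WIM image before you deploy it - for example (the path to install.wim is just an illustration):

dism /Get-WimInfo /WimFile:D:\sources\install.wim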

The URL below contains links to both the source code and installers for the DDC Toolkit and also the 'Operating System Imaging with Windows Server 2008 R2' white paper.

http://code.msdn.microsoft.com/ddc/Release/ProjectReleases.aspx?ReleaseId=5020

Enjoy!

Saturday, January 1, 2011

VMNetBac - Backup and Restore Network Configuration Settings on older Virtual Machines

Nice handy free tool, this. If you've ever had to move a virtual machine's VHD manually to a different Hyper-V host and didn't use the 'Export'/'Import' method within Hyper-V Manager or the 'Migrate' option in SCVMM, then you will have experienced the hassle of having to reconfigure the TCP/IP information in the VM's NIC configuration once it boots up on the new host - the NIC defaults to DHCP in this instance.

Although this is only a minor annoyance, and you would normally have made notes or screenshots of the original configuration first, wouldn't it be nice to have a utility that could back up the NIC information first and then be used to quickly restore the TCP/IP configuration to the newly located VM?

Well, here's the answer! PHDVirtual.com have a tool called VMNetBac that will do just that.

Download it from here:

http://www.phdvirtual.com/vmnetbac
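
If you just need a quick manual fallback, netsh can also dump and re-apply a VM's interface configuration, along these lines (run inside the VM; the file path is an example):

rem Before the move - save the current TCP/IP configuration
netsh interface ip dump > C:\nic-backup.txt

rem After the VM boots on the new host - re-apply it
netsh -f C:\nic-backup.txt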

SSP 2.0

SSP 2.0 - What is it you might ask?

It is an abbreviation for Microsoft System Center Virtual Machine Manager 2008 R2 Self-Service Portal 2.0, of course!

I have a lot of experience with SCVMM, having dealt with it since the beta a few years back, so I was fairly excited to hear about SSP 2.0 being released.

In the original SCVMM 2008 R2 installation, there is a Self-Service Portal that you can install and configure, which basically allows you to delegate control of individual virtual machines within your Hyper-V environment to lower-level admins, or even to end-user IT admins if you are operating a data centre offering.

The Self-Service Portal is a web page through which the SCVMM administrator decides who gets access to which tasks within the Hyper-V environment and what kind of quotas are put in place to stop VM sprawl from wiping out all of your hardware resources.

I have in the past configured the Self-Service Portal in both live and lab environments, and it works quite well, to a degree. The downsides are that it doesn't provision for network placement, IP configurations, SANs or load balancers. With these failings in mind, step forward SSP 2.0!

I have only recently had the opportunity to install and configure SSP 2.0 (it's been available nearly 4 months now!) to see what it can and cannot do, and I have to say, it is pretty impressive in comparison to the older SSP built into SCVMM 2008 R2. There is a whole new section built around infrastructure and task requesting and approvals that wasn't available in the previous version, and this allows for more granular control of the virtual environment.

Prior to deploying it, however, I decided to take the careful step of finding as much documentation and information on the product installation and configuration beforehand (it is my New Year's resolution to RTFM first!), and I came across this really informative download: a 30-minute video demonstration of exactly what it can do and what it looks like from start to finish, from an administration point of view. I really recommend downloading and viewing this demo if you are interested in what SSP 2.0 is.

You can download the 'DemoMate' video of the SCVMM 2008 R2 SSP 2.0 from here:

http://www.demomate.com/content/demos/System%20Center%20VMM%20Self-Service%20Portal%202.0%20Demo.zip

Enjoy!!