
Adding a node to existing Windows Server 2012 R2 Hyper-V Cluster

I started working with Windows Server 2012 R2 clusters a few weeks back. Initially, due to the lack of time, I started with a three-node cluster, later made it a four-node cluster, and am now adding one more node. I thought of making a post on how to safely add a node to an existing production cluster.

I personally don’t encourage changing the tyre while driving the car :D, but sometimes the situation may demand it.

Before you start

  • Ensure that the new server is at the same patch level as the other cluster nodes
  • Ensure that the server is at the same firmware level as the other cluster nodes
  • Ensure that the Failover Clustering feature is installed (a PowerShell sketch for some of these checks follows this list)
  • Ensure that the Hyper-V role is installed and the virtual switch is configured with the same name and settings as on the other cluster nodes
  • Ideally, all cluster nodes should be identical – so I assume the new server has the same hardware as the other cluster members
  • Ensure the multipath software is installed, the same as on the other cluster nodes
  • Ensure that the network interfaces are configured properly to meet the requirements for Hyper-V
  • Ensure that the NICs dedicated to Live Migration, CSV, heartbeat etc. can ping the respective IPs on the other cluster nodes
  • Add an additional SAN disk of smaller size to be used for the validation test
  • Ensure the SAN disks are visible in Disk Management on the new server. Just check – don’t try to bring them online.
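
A minimal PowerShell sketch for some of these checks, run on the new node (the IP addresses are placeholders for the Live Migration/CSV interfaces on an existing node):

    # Install the required feature and role (Hyper-V needs a restart)
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Confirm the virtual switch name and type match the other cluster nodes
    Get-VMSwitch | Select-Object Name, SwitchType

    # Reachability check for the Live Migration/CSV/heartbeat networks
    Test-Connection -ComputerName "192.168.10.11","192.168.20.11" -Count 2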

Check if disks are visible in Disk Management

I would definitely run the validation tests every time I create a cluster or add nodes to one. This gives us the confidence that everything is configured and working as expected before the activity starts. Since we are adding a node to an existing cluster, we should understand that the validation tests may interrupt the VMs hosted on the other nodes, as the CSV disks may go offline as part of the test. For this reason, I assigned one more temporary disk and made it a CSV, so that we can select only this specific disk when performing the validation.
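
If you prefer to do this step from PowerShell, a minimal sketch (the disk resource name is an example – check yours with Get-ClusterResource):

    # Add the spare SAN disk to the cluster and convert it into a CSV
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 5"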

Go to Failover Cluster Manager – Nodes.

Right-click Nodes and select “Add Node”.

Add node

Click Next on the “Before You Begin” page after reading it :D.

On the “Select Servers” page, key in the server name and click Add. Ideally, adding the node shouldn’t take more than ten seconds.

Adding the new node

On the “Validation Warning” screen, we need to choose the right option for running the validation tests. As I mentioned earlier, I prefer to run all tests against a spare disk from storage which doesn’t hold any production data. So select Yes.

Validation Warning

Clicking Next launches the Validate a Configuration Wizard.

The account used for running the validation wizard should be a local administrator on all nodes.

Validation Wizard – Before You Begin

The next screen is “Testing Options”. I prefer to run all tests. If you are re-running after identifying some issues, it’s OK to choose only the failed module by selecting “Run only tests I select”.

Testing Options

The next screen is important. We need to choose the right SAN disk to be used for the validation test. I already added a 1 TB disk and made it a CSV disk named CSV-4. Choose the right disk and click Next.

Select the Storage which will be used for the Validation test

Please read the warning at the bottom – TO AVOID ROLE FAILURES, IT IS RECOMMENDED THAT ALL ROLES USING CLUSTER SHARED VOLUMES BE STOPPED BEFORE THE STORAGE IS VALIDATED.

On the Confirmation page, have a look at the specific tests which are going to be performed on each module and click Next.

Validation Wizard – Confirmation

Click Next to start the Validation tests.

Validation Test in Progress

Once done, you can review the results by clicking “View Report”.

If any of the tests fail, the wizard assumes that something is not correct, so you shouldn’t proceed to the next step without the wizard’s blessing. However, once you click Finish, you will go back to the Add Node Wizard, where you get the option to run the validation tests again or to proceed without validation.

Validation Wizard – Result Summary

In my case, I am using trunked NICs for the Hyper-V data traffic, so while creating the virtual switch I didn’t assign any IP to this interface, which the validation wizard flags as a problem. As I am sure this error is not going to cause an issue, I am proceeding with the next option.

Validation Warning – Proceed without Checking

On the next screen, review the node to be added and click Next.

Wait for the successful confirmation and that’s it.
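
If you prefer PowerShell, the validation and the add step can be scripted as well; a minimal sketch, assuming hypothetical node, cluster and disk names:

    # Validate an existing node together with the new one, pointing the
    # storage tests at the spare CSV disk only
    Test-Cluster -Node "HV-NODE1","HV-NODE5" -Disk "Cluster Disk 5"

    # Add the new node to the cluster
    Add-ClusterNode -Name "HV-NODE5" -Cluster "HV-CLUSTER"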

Cheers

Shaba


Windows Server 2012 R2 – NIC Teaming options

With Windows Server 2012, the new age of Windows NIC teaming began. Very soon it was widely adopted – especially teaming along with Hyper-V. The key advantages I see in using Windows Server teaming are the ease of configuration and the different teaming modes. And Switch Independent along with Hyper-V Port was the recommended teaming option for Hyper-V.

All went well, and now we have Windows Server 2012 R2. As usual, I was playing around with Windows Server 2012 R2 and Hyper-V over the last few weeks, and the servers I created were using Windows teaming – Switch Independent – Hyper-V Port. But later, I noticed a new load balancing algorithm named “Dynamic”. I couldn’t find much detail around it, apart from a few MVPs touching on the overview. Of late, I came across the Microsoft guide. Just a snip from this guide.

 Algorithms for load distribution

Outbound traffic can be distributed among the available links in many ways. One rule that guides any distribution algorithm is to try to keep all packets associated with a single flow (TCP-stream) on a single network adapter. This rule minimizes performance degradation caused by reassembling out-of-order TCP segments.

NIC teaming in Windows Server 2012 R2 supports the following traffic load distribution algorithms:

  • Hyper-V switch port. Since VMs have independent MAC addresses, the VM’s MAC address or the port it’s connected to on the Hyper-V switch can be the basis for dividing traffic. There is an advantage in using this scheme in virtualization. Because the adjacent switch always sees a particular MAC address on one and only one connected port, the switch will distribute the ingress load (the traffic from the switch to the host) on multiple links based on the destination MAC (VM MAC) address. This is particularly useful when Virtual Machine Queues (VMQs) are used as a queue can be placed on the specific NIC where the traffic is expected to arrive. However, if the host has only a few VMs, this mode may not be granular enough to get a well-balanced distribution. This mode will also always limit a single VM (i.e., the traffic from a single switch port) to the bandwidth available on a single interface. Windows Server 2012 R2 uses the Hyper-V Switch Port as the identifier rather than the source MAC address as, in some instances, a VM may be using more than one MAC address on a switch port.
  • Address Hashing. This algorithm creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters. Usually this mechanism alone is sufficient to create a reasonable balance across the available adapters.

The components that can be specified, using PowerShell, as inputs to the hashing function include the following:

  • Source and destination TCP ports and source and destination IP addresses (this is used by the user interface when “Address Hash” is selected)
  • Source and destination IP addresses only
  • Source and destination MAC addresses only

The TCP ports hash creates the most granular distribution of traffic streams resulting in smaller streams that can be independently moved between members. However, it cannot be used for traffic that is not TCP or UDP-based or where the TCP and UDP ports are hidden from the stack, such as IPsec-protected traffic. In these cases, the hash automatically falls back to the IP address hash or, if the traffic is not IP traffic, to the MAC address hash.

  • Dynamic. This algorithm takes the best aspects of each of the other two modes and combines them into a single mode.
    • Outbound loads are distributed based on a hash of the TCP Ports and IP addresses.  Dynamic mode also rebalances loads in real time so that a given outbound flow may move back and forth between team members.
    • Inbound loads are distributed as though the Hyper-V port mode was in use.   See Section 3.4 for more details.

The outbound loads in this mode are dynamically balanced based on the concept of flowlets.  Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally occurring breaks.  The portion of a TCP flow between two such breaks is referred to as a flowlet.  When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow, the algorithm will opportunistically rebalance the flow to another team member if appropriate.  The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it.    As a result the affinity between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members.

In short, here are the best combinations to be used in real-world scenarios.

Switch Independent configuration / Address Hash distribution

This mode is best used for:

  • Active/Standby mode teams with just 2 team members; and
  • Teaming in a VM.

Switch Independent configuration / Hyper-V Port distribution

This will be best for Windows Server 2012 Hyper-V servers.

Switch Independent configuration / Dynamic distribution

This mode is best used for teaming in both native and Hyper-V environments – obviously including a Windows Server 2012 R2 Hyper-V infrastructure.

Exceptions

  • Teaming is being performed in a VM,
  • Switch dependent teaming (e.g., LACP) is required by policy, or
  • Operation of a two-member Active/Standby team is required by policy.
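
For reference, a minimal PowerShell sketch for creating the recommended Switch Independent / Dynamic team and binding a Hyper-V virtual switch to it (adapter, team and switch names are examples; on Windows Server 2012 you would pick HyperVPort instead, since Dynamic is new in R2):

    # Create a Switch Independent team using the Dynamic algorithm
    New-NetLbfoTeam -Name "HV-Team" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Bind the Hyper-V virtual switch to the team's interface
    New-VMSwitch -Name "VM-Switch" -NetAdapterName "HV-Team" -AllowManagementOS $false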


For anyone who is working on a Hyper-V implementation with Windows Server 2012 R2, it’s worth reading this entire guide.

All credits to the author of this guide.


Windows Server 2012 R2 – Shared Nothing Live Migration

Last week, we got our first Windows Server 2012 R2 Hyper-V cluster up. Now it’s time to migrate some low-priority payloads to test the performance and stability.

I used shared nothing live migration in the majority of cases. I am happy to say that it’s a smooth and silent move.

Before starting shared nothing live migration

  • Kerberos or Credential Security Support Provider (CredSSP) authentication for Live Migration is configured – check this link; a PowerShell sketch follows this list
  • The network for Live Migration is configured and able to communicate
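
A minimal PowerShell sketch of these prerequisites, to be run on both the source and destination hosts (the subnet is an example):

    # Enable live migration and choose the authentication protocol
    # (Kerberos requires constrained delegation in AD; CredSSP is the default)
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

    # Accept live migration traffic only on the dedicated network
    Set-VMHost -UseAnyNetworkForMigration $false
    Add-VMMigrationNetwork "192.168.30.0/24"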

On Windows Server 2012, which is the source, go to Failover Cluster Manager -> Cluster Name -> Networks.

Right-click Networks and choose Live Migration Settings.

Windows Server 2012 – Hyper V Live Migration Network Configuration

  • Ensure that the right network card is selected, with the right preference, for Live Migration
  • The same setting needs to be checked on the Windows Server 2012 R2 cluster too.

Windows Server 2012 R2 - Hyper V Live Migration Network Configuration

  • Ensure the communication between the source server and the destination server over the Live Migration network
  • Remove the VM from the cluster if high availability is configured
  • From Hyper-V Manager on the source server, right-click the VM and choose Move

Selecting Move Option

  • On the “Choose Move Type” page, select “Move the virtual machine” (the PowerShell equivalent is sketched below)
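
The same shared nothing move can also be done with PowerShell; a minimal sketch, assuming hypothetical VM, host and path names:

    # Move the running VM together with its storage to the destination host
    Move-VM -Name "TestVM" -DestinationHost "HV-NODE2" -IncludeStorage -DestinationStoragePath "D:\VMs\TestVM"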


Update – This post is not complete. For some reason, the rest of the post went missing after I published it. Let me re-create it for you. Please check this page later.

Remote Desktop App for iOS, Android and Mac

One of the good things that happened in recent days was the release of the Remote Desktop app for iOS, Android and Mac. Though it’s a bit late, it’s something that gives me a value add. The reason I thought of writing about this is that I feel we benefit very much from it.

One of the long-pending requirements which we always hear about is setting up a VDI infrastructure with an option to publish applications. Remote Desktop Services is a segment which started growing along with Hyper-V, though it couldn’t reach the limelight. Why? For me, the users were hesitant because Microsoft didn’t provide a native client for accessing these resources outside a Microsoft OS.

One of the expectations as an end user is that whatever I access from one PC/device should work from another PC/device. That is, the OS shouldn’t be a barrier to an end user accessing the right apps. Some of you may have a differing opinion, that accessing an application through a mobile or accessing a VDI through an iPad has many limitations. I heard this in one of the vendor meetings, where they claimed that it’s difficult to use a VDI solution through an iPad or a mobile. And due to this, they stopped support for iOS in their recent release.

Limitations are tangible and should be assessed based on the end user’s requirements. I may find it limiting to work on Excel through an iPad using RDS. But for a second person, the requirement is just to see the figures across different sheets of an Excel file, which can work very well with this solution. However, this logic applies only when the end user has all the options. Until now, cross-platform access was not available natively. And now, it’s open. Eagerly awaiting the Office app for iOS. :)

Cheers

Shaba