Monthly archives "December 2013"

Configuring Hyper-V Cluster using SCVMM 2012 R2 – Part 2

In this part, I will be covering the following topics.

  • Adding Hyper-V hosts to SCVMM 2012 R2
  • Creating Hyper-V cluster through SCVMM 2012 R2
  • Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2

 

In the last part, we prepared the servers, assigned storage, configured Host Groups and configured the fabric.

 Adding Hyper-V hosts to SCVMM 2012 R2

From the VMM console, go to Fabric -> Host Group

Right-click on the Host Group where this server will be a part of and select Add Hyper-V Hosts and Clusters

ADD HYPER-V HOSTS

Computer Location

user credentials

server names

 

 

Select the servers which need to be added to SCVMM and click Next.

 

Server Discovery

HyperV Role Installation Warning

 

 

 

Select host group

Review the activities and proceed.

This step usually takes a few minutes, and the progress can be checked by verifying the Running Jobs on the SCVMM console.

Once the job is successful, you may receive a warning which says “A restart is required to complete the multi-pathing I/O devices on the host hostname.domainname.com”. Hence, I prefer to get the nodes rebooted.

At this stage, the servers should be properly communicating with SCVMM and the host status should be OK.

Fabric - After Adding Servers
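
If you prefer PowerShell, the same host addition can be done from the VMM module. Treat this as a minimal sketch – the host group name, Run As account and host FQDN below are placeholders for this demo, so adjust them for your environment.

Import-Module virtualmachinemanager

$hostGroup = Get-SCVMHostGroup -Name "DATACENTER-1"
$runAsAcct = Get-SCRunAsAccount -Name "Domain Admin"    # account with local admin rights on the host

# Discover and add the host; VMM installs the Hyper-V role if it is missing
Add-SCVMHost -ComputerName "hv-node1.domain.com" -VMHostGroup $hostGroup -Credential $runAsAcct

# Confirm the host shows up and its status is OK
Get-SCVMHost -ComputerName "hv-node1.domain.com" | Select-Object Name, OverallState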

 Creating Hyper-V cluster through SCVMM 2012 R2

Now we have the host servers added to SCVMM. However, these are standalone Hyper-V hosts. The next step is to build a cluster through SCVMM.

Go to SCVMM -> Fabric -> Create -> Hyper-V Cluster

Create Cluster

 

Cluster Name

Select the appropriate host group where we have the Hyper-V hosts. Once the correct host group is selected, the Hyper-V hosts will be visible under Available Hosts.

 

CLUSTER CREATION - Available Hosts

Skip Cluster Validation test – please always leave this unchecked. The cluster validation test performs all mandatory tests against each host – OS, network, storage etc. – to ensure that the cluster created out of these nodes will work like a HERO 😀 – provided all the test results are green.

Select the hosts which are required for the cluster and click Add > to move them to the “Hosts to Cluster” box.

 

CLUSTER CREATION - SELECT HOST

Cluster IP

Now we need to select the DISKS which will be used in the CLUSTER.

Create Cluster - DISK Configuration

All the remote storage connected to the hosts will be displayed now.

SCVMM detects the smallest disk of the group and reserves it for the “Witness Disk”.

I have assigned another 500 GB disk, which will be used as a CSV. This disk needs to be formatted as NTFS and converted to CSV.

We have different check boxes for each disk.

Quick Format – as the name states, this option will perform a quick format, the same option available when doing a format from the OS.

Force Format – if the selected disk already contains some data, we need to select this so that SCVMM will forcefully do the format.

CSV – this option will convert the disk into a CSV disk once the cluster is created.

 

The next option is to select the Virtual Switch – which we will skip for now and create once the cluster is created.

Summary

Review the activities and click on Finish.

To get the detailed progress, review the running job for “Install Cluster”.

Create Cluster - SCVMM Job
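
For reference, the same cluster creation can be scripted with the VMM cmdlets. This is only a rough sketch – the cluster name, node FQDNs, Run As account and cluster IP are placeholders for this demo, and the wizard's View Script button will show the exact command VMM generates.

$runAsAcct = Get-SCRunAsAccount -Name "Domain Admin"
$nodes = @( Get-SCVMHost -ComputerName "hv-node1.domain.com"
            Get-SCVMHost -ComputerName "hv-node2.domain.com" )

# Runs validation, forms the failover cluster and prepares the shared disks,
# just like the wizard
Install-SCVMHostCluster -ClusterName "HVCLUSTER01" -VMHost $nodes -Credential $runAsAcct -ClusterIPAddress "10.10.116.50"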

Unfortunately, this attempt failed as the cluster validation process detected some errors. Let's quickly go through the errors and fix them.

 

Cluster Creation - Validation Failed

To get the detailed report, go to any of the hosts and navigate to C:\Windows\Cluster\Reports. Open the latest report mentioned in the above error.

Cluster Validation - Error

From the report, the error is on Network. Click on Network to drill down further.

Cluster Validation - Error - Network

On the detailed report for Network, “Validate Network Communication” failed. Click on Validate Network Communication to drill down further.

Cluster Validation - Error - Network communication

Now I got it. I have multiple NICs on each server.

  • 1 for Management
  • 1 for Live Migration
  • 2 for Hyper-V Data

I have assigned IPs only for Management and Live Migration. The interfaces for Hyper-V Data will be used to create a team while we configure the Virtual Switch on each server from SCVMM. So I planned to keep these interfaces disabled; however, one NIC, Data-2, was left out. Data-2 got an APIPA address, and the validation wizard detected that both servers have one interface each in the same IP range but unable to communicate with each other, which is a serious problem. 😉

I disabled the Data-2 interface on both servers and proceeded with cluster creation.
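
Optionally, you can re-run the validation manually from one of the nodes before restarting the VMM job, to confirm the APIPA problem is really gone. This sketch assumes the Failover Clustering tools are already present on the node (VMM installs the feature during cluster creation); the node names are from this demo.

# Re-run only the network tests against both nodes
Test-Cluster -Node "hv-node1.domain.com", "hv-node2.domain.com" -Include "Network"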

To Restart a failed job,

SCVMM -> Jobs -> History

Identify the failed job and right click. Select Restart.

And now I verified the Cluster Validation report and it's all green 😀

Cluster Validation - Success

The progress can be reviewed from the running jobs. And once the job is completed, the cluster will be visible in the SCVMM console.

Cluster

Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2

Now we have the Hyper-V hosts as part of a cluster. The next step is to assign the Logical Switch to the hosts.

SCVMM -> Fabric -> Host Group -> Cluster -> Node -> Right-click and select Properties

Select Virtual Switches

Click on New Virtual Switch and select New Logical Switch

Adding Virtual Switch

Add both Physical Adapters – DATA-1 and DATA-2.

Add the appropriate port profile – in this example, “Port Profile – Prod – DC1”.

Click on OK.

Do the same steps on the second host.
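
The console's View Script button on this dialog shows the equivalent PowerShell. The sketch below is roughly what it generates – cmdlet and parameter names should be double-checked against your VMM version, and the host, adapter and switch names are examples from this demo.

$vmHost        = Get-SCVMHost -ComputerName "hv-node1.domain.com"
$logicalSwitch = Get-SCLogicalSwitch -Name "LS-Prod-DC1"
$uplinkSet     = Get-SCUplinkPortProfileSet -LogicalSwitch $logicalSwitch

# The two physical data NICs that will be teamed under the logical switch
$nics = Get-SCVMHostNetworkAdapter -VMHost $vmHost |
        Where-Object { $_.ConnectionName -in "DATA-1", "DATA-2" }

$jobGroup = [guid]::NewGuid()
foreach ($nic in $nics) {
    Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $nic -UplinkPortProfileSet $uplinkSet -JobGroup $jobGroup
}

# Create the virtual switch on the host from the logical switch and apply the changes
New-SCVirtualNetwork -VMHost $vmHost -VMHostNetworkAdapters $nics -LogicalSwitch $logicalSwitch -JobGroup $jobGroup
Set-SCVMHost -VMHost $vmHost -JobGroup $jobGroup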

That's it! You are done. You can create a VM now and try assigning the Logical Switch.

test VM

 

Note – These posts need more description and some more final updates, which are on their way. Visit this space again.

 

Configuring Hyper-V Cluster using SCVMM 2012 R2 – Part 1

Before we start: SCVMM is a product which I started learning recently. I started working with SCVMM 2012, however I was not successful in getting it fully utilized. My interest in Network Virtualization made me focus more on SCVMM 2012 SP1 and SCVMM 2012 R2. In the last few days, I deployed a Windows Server 2012 R2 Hyper-V cluster through SCVMM 2012 R2. I thought of making this into three parts. We can do much more with SCVMM 2012 R2 – however, these posts should be considered a basic setup to start with.

The first part will cover

  • Preparing the Windows Server 2012 R2 Hosts
  • Assign Storage from SAN
  • Preparing SCVMM 2012 R2
  • Configuring SCVMM 2012 R2 Fabric

The second part will cover

  • Adding Hyper-V hosts to SCVMM 2012 R2
  • Creating Hyper-V cluster through SCVMM 2012 R2
  • Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2

The third part will cover

  • Creating Virtual Server
  • Configuring templates

 

So let's start. This is one of the most complex products which I have seen. Just installing SCVMM will not make anything work; SCVMM needs a lot of configuration. The main complex part is the Fabric configuration, which has different components and configurations which are interlinked. Mistakes in this area may not show up immediately, and hence you need to pay attention here. I would suggest you write down the details, draw the different components and try to understand how they are related.

______________________________________________________

 FOR YOU TO RELATE EACH STEP TO ITS FINAL USAGE, I AM ADDING ONE SCREENSHOT SHOWING WHERE WE WILL BE USING THE COMPONENT IN THIS IMPLEMENTATION. WHATEVER IS PUT IN BLUE ITALIC, ALONG WITH THE FOLLOWING SCREENSHOT, IS FOR YOU TO RELATE THE FINAL USAGE.

______________________________________________________

Preparing Windows Server 2012 R2 hosts

  • Install OS
  • SCVMM will install the Failover Clustering feature at the time of cluster creation
  • SCVMM will install the Hyper-V role while we add the Host server to SCVMM – ie, no need to install it manually
  • Configure the network
    # Recommended to have two dedicated NICs for Hyper-V data. We will team these interfaces and use them for the Hyper-V Virtual Switch
    # Recommended to have a dedicated NIC for Management
    # Good to have a dedicated interface for Live Migration
    # Good to have a dedicated interface for CSV

Don’t team the NICs if you are planning to use a teamed interface for Hyper-V Data. This will be done through SCVMM 2012 R2.

If you have dedicated NICs for Live Migration/CSV, assign an IP address and subnet – leave gateway and DNS blank. Ensure that these IPs respond to ping from the other hosts.

Keep the interfaces for Hyper-V Data – Data-1 and Data-2 disabled.
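
For reference, here is a rough PowerShell sketch of the per-host network prep described above. The interface names and IP addresses are only examples – use your own adapter names and subnets.

# Static IPs for Live Migration and CSV - no gateway, no DNS
New-NetIPAddress -InterfaceAlias "LiveMigration" -IPAddress 192.168.50.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress 192.168.60.11 -PrefixLength 24

# Keep the Hyper-V data NICs disabled until SCVMM builds the team and virtual switch
Disable-NetAdapter -Name "DATA-1", "DATA-2" -Confirm:$false

# Confirm the Live Migration/CSV IPs of the other host respond to ping
Test-Connection -ComputerName "192.168.50.12", "192.168.60.12" -Count 2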

Assigning Storage from SAN

  • Assign a small disk for Quorum Disk
  • Assign Disks for CSV Disks
  • Install  Multipathing Software and configure it.

In this DEMO, I am using Dell Compellent along with Windows Server 2012 R2 MPIO. The configuration is very straightforward.

DISK Configuration - MPIO

Select Compellent, which should be listed in the Device Hardware box, and click Add. After a few seconds, the server will prompt for a reboot.

DISK Configuration - MPIO Reboot
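
If you prefer to do this step from PowerShell, a rough equivalent is below. The vendor/product IDs are placeholders – take the exact strings shown in the Device Hardware box for your array.

# Install the MPIO feature if it is not already present
Install-WindowsFeature -Name Multipath-IO

# Claim the Compellent LUNs for the Microsoft DSM, then reboot
New-MSDSMSupportedHW -VendorId "COMPELNT" -ProductId "Compellent Vol"
Restart-Computer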

After the reboot, verify in Disk Management whether the disks are visible. You should see only one disk for each LUN assigned from the SAN. If you are seeing multiple disks, then multipathing is not working properly.

 

Here is a screenshot of Disk Management before configuring MPIO. The server has two paths to storage and hence, for each LUN, I see two disks in Disk Management.

I have assigned a 500 GB disk for CSV and a 10 GB disk for quorum.

DISK Configuration - MPIO - Before Adding

In the next screenshot, we will verify the Disk Management console after configuring MPIO and rebooting.

  • Verify the disks in Disk Management on each host and confirm that they are visible

DISK Configuration - MPIO - after Adding

  • No need to format or assign drive letters. All of that will happen during the cluster creation from SCVMM 2012 R2

Preparing SCVMM 2012 R2

The first activity to start with is creating Host Groups. A host group is a logical grouping to which we assign some common properties. This activity needs some planning, as all further configurations get tied to the Host Group. Here is how I created them for this demo.

Host Groups
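
The same host group tree can be created from the VMM PowerShell module – a small sketch using the names from this demo:

$demo = New-SCVMHostGroup -Name "DEMO"
$prod = New-SCVMHostGroup -Name "PRODUCTION" -ParentHostGroup $demo
New-SCVMHostGroup -Name "DATACENTER-1" -ParentHostGroup $prod
New-SCVMHostGroup -Name "DATACENTER-2" -ParentHostGroup $prod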

 

The next step is to configure the fabric. Before that, please go through the definitions of the key components.

Courtesy – Technet

Hyper-V port profile for uplinks

A port profile for uplinks (also called an uplink port profile) specifies which logical networks can connect through a particular physical network adapter.

 

After you create an uplink port profile, add it to a logical switch, which places it in a list of profiles that are available through that logical switch. When you apply the logical switch to a network adapter in a host, the uplink port profile is available in the list of profiles, but it is not applied to that network adapter until you select it from the list. This helps you to create consistency in the configurations of network adapters across multiple hosts, but it also enables you to configure each network adapter according to your specific requirements.

 

Hyper-V port profile for virtual network adapters

A port profile for virtual network adapters specifies capabilities for those adapters and makes it possible for you to control how bandwidth is used on the adapters. The capabilities include offload settings and security settings. The following list of options provides details about these capabilities:

  • Enable virtual machine queue
  • Enable IPsec task offloading
  • Enable Single-root I/O virtualization
  • Allow MAC spoofing
  • Enable DHCP guard
  • Allow router guard
  • Allow guest teaming
  • Allow IEEE priority tagging
  • Allow guest specified IP addresses (only available for virtual machines on Windows Server 2012 R2)
  • Bandwidth settings

Port classification

A port classification provides a global name for identifying different types of virtual network adapter port profiles. As a result, a classification can be used across multiple logical switches while the settings for the classification remain specific to each logical switch. For example, you might create one port classification that is named FAST to identify ports that are configured to have more bandwidth, and one port classification that is named SLOW to identify ports that are configured to have less bandwidth. You can use the port classifications that are provided in VMM, or you can create your own port classifications.

Logical switch

A logical switch brings port profiles, port classifications, and switch extensions together so that you can apply them consistently to network adapters on multiple host systems.

Note that when you add an uplink port profile to a logical switch, this places the uplink port profile in a list of profiles that are available through that logical switch. When you apply the logical switch to a network adapter in a host, the uplink port profile is available in the list of profiles, but it is not applied to that network adapter until you select it from the list. This helps you to create consistency in the configurations of network adapters across multiple hosts, but it also makes it possible for you to configure each network adapter according to your specific requirements.

To enable teaming of multiple network adapters, you can apply the same logical switch and uplink port profile to those network adapters and configure appropriate settings in the logical switch and uplink port profile. In the logical switch, for the Uplink mode, select Team to enable teaming. In the uplink port profile, select appropriate Load-balancing algorithm and Teaming mode settings (or use the default settings).

Switch extensions (which you can install on the VMM management server and then include in a logical switch) allow you to monitor network traffic, use Quality of Service (QoS) to control how network bandwidth is used, enhance the level of security, or otherwise expand the capabilities of a switch. In VMM, four types of switch extensions are supported:

  • Monitoring extensions can be used to monitor and report on network traffic, but they cannot modify packets.
  • Capturing extensions can be used to inspect and sample traffic, but they cannot modify packets.
  • Filtering extensions can be used to block, modify, or defragment packets. They can also block ports.
  • Forwarding extensions can be used to direct traffic by defining destinations, and they can capture and filter traffic. To avoid conflicts, only one forwarding extension can be active on a logical switch.

Virtual switch extension manager or Network manager

A virtual switch extension manager (or network manager) makes it possible for you to use a vendor network-management console and the VMM management server together. You can configure settings or capabilities in the vendor network-management console—which is also known as the management console for a forwarding extension—and then use the console and the VMM management server in a coordinated way. To do this, you must ensure that the provider software (which might be included in VMM, or might need to be obtained from the vendor) is installed on the VMM management server. Then you must add the virtual switch extension manager or network manager to VMM, which enables the VMM management server to connect to the vendor network-management database and to import network settings and capabilities from that database.

The result is that you can see those settings and capabilities, and all your other settings and capabilities, together in VMM.

With System Center 2012 R2, settings can be imported into and also exported from VMM. That is, you can configure and view settings either in VMM or your network manager, and the two interfaces synchronize with each other.

 

Fabric Configuration

Hopefully you now have some idea of the components used in the Fabric configuration.

The next step is to create Logical Networks.

Logical Network for Management Interfaces

CREATE LOGICAL NETWORK

 

We need to define the Site and link it with the right Host Group, then add the VLAN ID and the IP subnets.

Example – If the Management Interface is using 10.10.116.116/24 as the IP address, we need to use the IP Subnet as 10.10.116.0/24 and its respective VLAN ID.

SITE-PRD-DC1 is a site which is linked only to DEMO -> PRODUCTION -> DATACENTER-1, with the IP subnet we will be using in DC1 and its VLAN ID.

DEFINE NETWORK SITE

 

We define a second Network Site – SITE-PRD-DC2 – and link it with the Host Group DEMO -> PRODUCTION -> DATACENTER-2 with the respective subnet details we will be using in DC2. If you have only one data center, this step is not required.

DEFINE NETWORK SITE 2

 

Review the details and Confirm the changes to proceed with the Logical Network creation.

SUMMARY

 

Similarly, you may create Logical Networks for Live Migration network, CSV Network etc.
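
For reference, the wizard above maps roughly to the following VMM cmdlets. This is only a sketch – the logical network name and the VLAN ID (116) are assumptions for this demo; the subnet follows the 10.10.116.0/24 example.

# Logical network for the management interfaces
$ln = New-SCLogicalNetwork -Name "LN-Management"

# Network site SITE-PRD-DC1, scoped to the DATACENTER-1 host group
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.116.0/24" -VLanID 116
New-SCLogicalNetworkDefinition -Name "SITE-PRD-DC1" -LogicalNetwork $ln `
    -VMHostGroup (Get-SCVMHostGroup -Name "DATACENTER-1") -SubnetVLan $subnetVlan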

Finally, create a Logical Network for the VM Network. This will define the network which will be used by the Virtual Machines.

LOGICAL NETWORK -VM NETWORK

LOGICAL NETWORK -VM NETWORK 2

LOGICAL NETWORK -VM NETWORK 3

LOGICAL NETWORK -VM NETWORK - SUMMARY

______________________________________________________

Once the Hyper-V cluster is built, the logical network will get mapped, automatically or manually, to the physical interface.

The next screenshot is to make it clear how the logical network gets linked with the physical network interfaces on the Hyper-V host servers.

The below screenshot is for your reference and not related to the steps we are performing.

To Relate - Logical Network and Network Interface

______________________________________________________

The next step is to create the Uplink Port Profile. I have two interfaces dedicated for Hyper-V Data. This port profile will define that they are used for the uplink and that the multiple interfaces are teamed, with Switch Independent as the teaming mode and Dynamic as the load balancing algorithm.

 

UP-LINK - DC1

The next step is to link the port profile with the Network Site.

PORT PROFILE - NETWORK SITE

Select the appropriate site and Click next. Review the changes and proceed.

This port profile makes it clear to SCVMM that the interfaces which we have mapped to the Logical Network – LN-VM-Network – will be used as the uplink. The teaming mode to be used is Switch Independent and the load balancing algorithm to be used is Dynamic.

For you to relate this, the next screenshot shows where we will see the port profiles once the cluster creation is done.

 

to relate - uplink port profile
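
A rough PowerShell equivalent of the uplink port profile is below. The profile and site names follow this demo; verify the parameter names against your VMM version.

$siteDC1 = Get-SCLogicalNetworkDefinition -Name "SITE-PRD-DC1"

New-SCNativeUplinkPortProfile -Name "Port Profile - Prod - DC1" `
    -LogicalNetworkDefinition $siteDC1 `
    -LBFOTeamMode SwitchIndependent `
    -LBFOLoadBalancingAlgorithm Dynamic `
    -EnableNetworkVirtualization $false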

The next step is to create a logical switch. The logical switch groups the different components which we created into one single entity.

 

LOGICAL SWITCH - 1

 

LOGICAL SWITCH - 2

 

LOGICAL SWITCH - 3

 

 

 

LOGICAL SWITCH

LOGICAL SWITCH - 6

LOGICAL SWITCH - 7

LOGICAL SWITCH - 8
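
Again, only as a sketch – the logical switch and its uplink port profile set can also be created from PowerShell. The console's View Script output is the authoritative version; the names below are examples from this demo.

$uplinkProfile = Get-SCNativeUplinkPortProfile -Name "Port Profile - Prod - DC1"

# The logical switch itself
$ls = New-SCLogicalSwitch -Name "LS-Prod-DC1" -Description "Logical switch for Hyper-V data"

# Make the uplink port profile selectable through this logical switch
New-SCUplinkPortProfileSet -Name "Port Profile - Prod - DC1" -LogicalSwitch $ls -NativeUplinkPortProfile $uplinkProfile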

 

______________________________________________________

The Logical Switch will be used to define the Hyper-V switch once the cluster is created. On each Hyper-V host, we will create a Virtual Switch (Hyper-V switch). Along with the logical switch, we will also define the interfaces used by this Virtual Switch – in our case, Data-1 and Data-2 along with the Uplink Port Profile.

to relate - logical switch

______________________________________________________

 

That's the end of part 1.

 

RSS is limited to 4 queues

After I fixed the Processor overlapping issue using Set-NetAdapterRss, I noticed a new event.

HP FlexFabric 10Gb 2-port 554FLB Adapter #5 : RSS is limited to 4 queues. Enable Advanced Mode in the PXE BIOS to use up to 16 queues. This may require a firmware update.

 

Event 49

To fix this, we need to enable Advanced Mode in the PXE BIOS as mentioned in the event.

So we need to reboot the server. On the POST screen, you will get an option to enter the PXE BIOS. On this HP blade, it's CTRL + P.

PRESS CTRL P

Ctrl + P will open up PXE Select Utility. Advanced mode support will be disabled by default. You need to enable it.

ADVANCED MODE SUPPORT

The Personality should be selected as NIC.

Personality Selection

Save the setting first and then select continue.

From the next screen, you can just exit. And after the reboot, no more events are generated.

 

 

The processor sets overlap when LBFO is configured with sum-queue mode.

This event started just after I created the Hyper-V virtual switch.

 

The processor sets overlap when LBFO is configured with sum-queue mode.

 

As I couldn't find much information on this event on blogs/forums, I opened a case with Microsoft for a quicker resolution. The resolution I got was to configure the Hyper-V Data interfaces to use exclusive sets of processors with respect to the RSS configuration.

Set-NetAdapterRss -name vm-nic-1 -BaseProcessorNumber 4 -MaxProcessorNumber 7

Set-NetAdapterRss -name vm-nic-2 -BaseProcessorNumber 8 -MaxProcessorNumber 12
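
To confirm the two ranges no longer overlap, you can check the RSS settings afterwards (adapter names as in the commands above):

Get-NetAdapterRss -Name "vm-nic-1", "vm-nic-2" | Format-Table Name, BaseProcessorNumber, MaxProcessorNumber, MaxProcessors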

This event stopped after the next reboot. However, I got a new event – Event ID 49 – “HP FlexFabric 10Gb 2-port 554FLB Adapter #5 : RSS is limited to 4 queues. Enable Advanced Mode in the PXE BIOS to use up to 16 queues. This may require a firmware update.” I will write about this in my next post.

To my understanding, you can allocate a subset of logical processors to each Hyper-V Data interface. I am trying to understand more details on this. Based on a recent TechNet blog, VMQ gets enabled by default when we create a vSwitch, and once VMQ is enabled, RSS is disabled on those interfaces. I will update this post once I get more details. If you can get this clarified, please comment 😀 .

 

Update – Microsoft has published a KB article on this recently. Have a look at that too.

 

A port on the virtual switch has the same MAC as one of the underlying team members on Team Nic

On my Windows 2012 R2 Hyper-V cluster, event 16945 was getting logged frequently.

MAC CONFLICT

The current setup has two network interfaces teamed, and this teamed interface is used for creating the Virtual Switch. The virtual switch, the teamed interface and one of the team members have the same MAC address, which is the cause of this event.

 

ADAPTER AND MAC DETAILS

In my observation, this event goes away if we uncheck the option “Allow management operating system to share this network adapter” in Virtual Switch Manager. Doing so removes the virtual interface – vEthernet – and puts the configuration directly on the Virtual Switch.

 

VIRTUAL SWITCH MANAGER
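
The same change can be made from PowerShell, and it's also an easy way to compare the MAC addresses involved. The switch name below is an example – use your own, and note that turning off management OS sharing removes the vEthernet adapter, so make sure management traffic runs over a different NIC first.

# Compare the MACs of the team members, the team NIC and the vEthernet adapter
Get-NetAdapter | Sort-Object MacAddress | Format-Table Name, MacAddress

# Stop sharing the virtual switch with the management OS (removes the vEthernet interface)
Set-VMSwitch -Name "Virtual Switch" -AllowManagementOS $false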

It's recommended to have dedicated interfaces for Management, Hyper-V Data, Live Migration and CSV. If you have a dedicated interface for Management, then you can go ahead with this option.