Before we start: SCVMM is a product I started learning only recently. I started working with SCVMM 2012, but I was not able to utilize it fully. My interest in Network Virtualization made me focus more on SCVMM 2012 SP1 and SCVMM 2012 R2. In the last few days, I deployed a Windows Server 2012 R2 Hyper-V cluster through SCVMM 2012 R2, and I thought of splitting the write-up into three parts. We can do much more with SCVMM 2012 R2 – however, these posts should be considered a basic setup to start with.
The first part will cover
- Preparing the Windows Server 2012 R2 Hosts
- Assign Storage from SAN
- Preparing SCVMM 2012 R2
- Configuring SCVMM 2012 R2 Fabric
The second part will cover
- Adding Hyper-V hosts to SCVMM 2012 R2
- Creating Hyper-V cluster through SCVMM 2012 R2
- Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2
The third part will cover
- Creating Virtual Server
- Configuring templates
So let's start. This is one of the most complex products I have seen. Just installing SCVMM will not make anything work; SCVMM needs a lot of configuration. The main complexity is in the Fabric configuration, which has different components and settings that are interlinked. Mistakes in this area may not show up immediately, so you need to pay close attention here. I would suggest you write down the details, draw the different components, and try to understand how they are related.
Note: to help you relate each step to its final usage, I am adding a screenshot of where we will be using that component in this implementation. Whatever is put in blue italics, together with the screenshot that follows it, is there to help you relate the step to its final usage.
Preparing Windows Server 2012 R2 hosts
- Install OS
- SCVMM will install the Failover Clustering feature at the time of cluster creation
- SCVMM will install the Hyper-V role when we add the host server to SCVMM – i.e., no need to install it manually
- Configure the network
# Recommended to have two dedicated NICs for Hyper-V data. We will team these interfaces and use them for the Hyper-V virtual switch
# Recommended to have a dedicated NIC for Management
# Good to have a dedicated interface for Live Migration
# Good to have a dedicated interface for CSV
Don’t team the NICs if you are planning to use a teamed interface for Hyper-V data – the teaming will be done through SCVMM 2012 R2.
If you have dedicated NICs for Live Migration/CSV, assign an IP address and subnet mask, but leave the gateway and DNS blank. Ensure these IPs respond to ping from the other hosts.
Keep the interfaces for Hyper-V data – Data-1 and Data-2 – disabled.
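As a reference, the host-side network preparation above can be scripted with PowerShell on each host. This is only a sketch – the adapter names (LiveMigration, CSV, Data-1, Data-2) and the IP addresses are assumptions for this demo; replace them with your own.

```powershell
# Assign a static IP to the Live Migration NIC; no gateway or DNS (adapter names are assumptions).
New-NetIPAddress -InterfaceAlias "LiveMigration" -IPAddress 10.10.117.11 -PrefixLength 24
# Same for the dedicated CSV NIC.
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress 10.10.118.11 -PrefixLength 24
# Keep the Hyper-V data interfaces disabled until SCVMM creates the team.
Disable-NetAdapter -Name "Data-1","Data-2" -Confirm:$false
# Verify reachability of the other host's Live Migration IP.
Test-Connection -ComputerName 10.10.117.12 -Count 2
```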
Assigning Storage from SAN
- Assign a small disk for Quorum Disk
- Assign Disks for CSV Disks
- Install Multipathing Software and configure it.
In this demo, I am using Dell Compellent along with Windows Server 2012 R2 MPIO. The configuration is very straightforward.
Select Compellent, which should be listed in the Device Hardware box, and click Add. After a few seconds, you will be prompted to reboot the server.
After the reboot, verify in Disk Management that the disks are visible. You should see only one disk for each LUN assigned from the SAN. If you are seeing multiple disks, then multipathing is not working properly.
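If you prefer PowerShell over the MPIO control panel, the same steps can be sketched like this. Treat the vendor/product ID strings as assumptions – copy the exact values from the Device Hardware list on your own system.

```powershell
# Install the MPIO feature (a reboot may be required).
Install-WindowsFeature -Name Multipath-IO
# Claim the Compellent device for the Microsoft DSM.
# The vendor/product strings below are assumptions - use the exact
# strings shown in the MPIO control panel on your server.
New-MSDSMSupportedHW -VendorId "COMPELNT" -ProductId "Compellent Vol"
Restart-Computer
# After the reboot, each LUN should appear only once.
Get-Disk | Select-Object Number, FriendlyName, Size, OperationalStatus
```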
Here is a screenshot of Disk Management before configuring MPIO. The server has two paths to the storage, so for each LUN I see two disks in Disk Management.
I have assigned a 500 GB disk for CSV and a 10 GB disk for quorum.
In the next screenshot, we will verify the Disk Management console after configuring MPIO and rebooting.
- Verify the disks in Disk Management on each host and confirm that they are visible
- No need to format or assign drive letters – all of that will happen during cluster creation from SCVMM 2012 R2
Preparing SCVMM 2012 R2
The first activity to start with is to create Host Groups. A host group is a logical grouping to which we assign some common properties. This activity needs some planning, as all further configuration is tied to the host group. Here is how I created them for this demo.
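For reference, the same host-group tree can also be created from the VMM PowerShell console. The cmdlets are from the VMM module; the VMM server name is an assumption for this demo.

```powershell
# Connect to the VMM management server (server name is an assumption).
Get-SCVMMServer -ComputerName "vmm01.demo.local"
# Build the host-group tree used in this demo:
# DEMO -> PRODUCTION -> DATACENTER-1 / DATACENTER-2
$demo = New-SCVMHostGroup -Name "DEMO"
$prod = New-SCVMHostGroup -Name "PRODUCTION" -ParentHostGroup $demo
New-SCVMHostGroup -Name "DATACENTER-1" -ParentHostGroup $prod
New-SCVMHostGroup -Name "DATACENTER-2" -ParentHostGroup $prod
```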
The next step is to configure the fabric. Before that, please go through the definitions of the key components.
Courtesy – Technet
Hyper-V port profile for uplinks
A port profile for uplinks (also called an uplink port profile) specifies which logical networks can connect through a particular physical network adapter.
After you create an uplink port profile, add it to a logical switch, which places it in a list of profiles that are available through that logical switch. When you apply the logical switch to a network adapter in a host, the uplink port profile is available in the list of profiles, but it is not applied to that network adapter until you select it from the list. This helps you to create consistency in the configurations of network adapters across multiple hosts, but it also enables you to configure each network adapter according to your specific requirements.
Hyper-V port profile for virtual network adapters
A port profile for virtual network adapters specifies capabilities for those adapters and makes it possible for you to control how bandwidth is used on the adapters. The capabilities include offload settings and security settings. The following list of options provides details about these capabilities:
- Enable virtual machine queue
- Enable IPsec task offloading
- Enable Single-root I/O virtualization
- Allow MAC spoofing
- Enable DHCP guard
- Allow router guard
- Allow guest teaming
- Allow IEEE priority tagging
- Allow guest specified IP addresses (only available for virtual machines on Windows Server 2012 R2)
- Bandwidth settings
A port classification provides a global name for identifying different types of virtual network adapter port profiles. As a result, a classification can be used across multiple logical switches while the settings for the classification remain specific to each logical switch. For example, you might create one port classification that is named FAST to identify ports that are configured to have more bandwidth, and one port classification that is named SLOW to identify ports that are configured to have less bandwidth. You can use the port classifications that are provided in VMM, or you can create your own port classifications.
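As a sketch, a bandwidth-oriented virtual adapter port profile and a matching classification could be created from the VMM PowerShell console like this. The names and the bandwidth weight are assumptions, not values from this demo.

```powershell
# A virtual network adapter port profile with a higher bandwidth weight
# (name and weight are assumptions - pick values that fit your QoS design).
New-SCVirtualNetworkAdapterNativePortProfile -Name "PP-FAST" `
    -MinimumBandwidthWeight 50 -EnableVmq $true
# A global classification name, reusable across multiple logical switches.
New-SCPortClassification -Name "FAST"
```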
A logical switch brings port profiles, port classifications, and switch extensions together so that you can apply them consistently to network adapters on multiple host systems.
To enable teaming of multiple network adapters, you can apply the same logical switch and uplink port profile to those network adapters and configure appropriate settings in the logical switch and uplink port profile. In the logical switch, for the Uplink mode, select Team to enable teaming. In the uplink port profile, select appropriate Load-balancing algorithm and Teaming mode settings (or use the default settings).
Switch extensions (which you can install on the VMM management server and then include in a logical switch) allow you to monitor network traffic, use Quality of Service (QoS) to control how network bandwidth is used, enhance the level of security, or otherwise expand the capabilities of a switch. In VMM, four types of switch extensions are supported:
- Monitoring extensions can be used to monitor and report on network traffic, but they cannot modify packets.
- Capturing extensions can be used to inspect and sample traffic, but they cannot modify packets.
- Filtering extensions can be used to block, modify, or defragment packets. They can also block ports.
- Forwarding extensions can be used to direct traffic by defining destinations, and they can capture and filter traffic. To avoid conflicts, only one forwarding extension can be active on a logical switch.
Virtual switch extension manager or Network manager
A virtual switch extension manager (or network manager) makes it possible for you to use a vendor network-management console and the VMM management server together. You can configure settings or capabilities in the vendor network-management console—which is also known as the management console for a forwarding extension—and then use the console and the VMM management server in a coordinated way. To do this, you must ensure that the provider software (which might be included in VMM, or might need to be obtained from the vendor) is installed on the VMM management server. Then you must add the virtual switch extension manager or network manager to VMM, which enables the VMM management server to connect to the vendor network-management database and to import network settings and capabilities from that database.
The result is that you can see those settings and capabilities, and all your other settings and capabilities, together in VMM.
With System Center 2012 R2, settings can be imported into and also exported from VMM. That is, you can configure and view settings either in VMM or your network manager, and the two interfaces synchronize with each other.
Hope this gives you some idea of the components used in the Fabric configuration.
The next step is to create Logical Networks.
Logical Network for Management Interfaces
We need to define the Site and link it with the right Host Group, then add the VLAN ID and the IP subnets.
Example – if the management interface is using 10.10.116.116/24 as the IP address, we need to use 10.10.116.0/24 as the IP subnet, along with its respective VLAN ID.
SITE-PRD-DC1 is a site which is now linked only to DEMO -> PRODUCTION ->DATACENTER-1 with the IP subnet which we will be using in DC1 and its VLAN id.
We define a second network site – SITE-PRD-DC2 – and link it with the Host Group DEMO -> PRODUCTION -> DATACENTER-2, with the respective subnet details we will be using in DC2. If you have only one data center, this step is not required.
Review the details and Confirm the changes to proceed with the Logical Network creation.
Similarly, you may create Logical Networks for the Live Migration network, CSV network, etc.
Finally, create a Logical Network for the VM Network. This defines the network that will be used by the virtual machines.
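For reference, a logical network with a site like SITE-PRD-DC1 can be sketched in VMM PowerShell as follows. The VLAN ID and host-group name here are assumptions based on this demo's design.

```powershell
# Logical network for the management interfaces.
$ln = New-SCLogicalNetwork -Name "LN-Management"
$hg = Get-SCVMHostGroup -Name "DATACENTER-1"
# VLAN ID 116 is an assumption - use the VLAN that carries 10.10.116.0/24 in your network.
$vl = New-SCSubnetVLan -Subnet "10.10.116.0/24" -VLanID 116
# The network site links the subnet/VLAN pair to the host group.
New-SCLogicalNetworkDefinition -Name "SITE-PRD-DC1" -LogicalNetwork $ln `
    -VMHostGroup $hg -SubnetVLan $vl
```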
Once the Hyper-V cluster is built, the logical network will be mapped, automatically or manually, to the physical interfaces.
The next screenshot should make it clear how the logical network gets linked with the physical network interfaces on the Hyper-V host servers.
The screenshot below is for your reference and is not related to the steps we are performing.
The next step is to create an Uplink Port Profile. I have two interfaces dedicated for Hyper-V data. This port profile defines that they are used for the uplink and that the multiple interfaces are teamed, with Switch Independent as the teaming mode and Dynamic as the load-balancing algorithm.
The next step is to link the port profile with the Network Site.
Select the appropriate site and click Next. Review the changes and proceed.
This port profile makes it clear to SCVMM that the interfaces we have mapped to the Logical Network – LN-VM-Network – will be used as the uplink, with Switch Independent as the teaming mode and Dynamic as the load-balancing algorithm.
To help you relate this, the next screenshot shows where we will see the port profiles once the cluster creation is done.
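The equivalent VMM PowerShell for such an uplink port profile might look like this. The profile and site names follow this demo; treat the exact parameter values as assumptions to adapt.

```powershell
# The network site created earlier (name from this demo).
$site = Get-SCLogicalNetworkDefinition -Name "SITE-PRD-DC1"
# Uplink port profile: Switch Independent teaming with Dynamic load balancing.
New-SCNativeUplinkPortProfile -Name "UPP-HyperV-Data" `
    -LogicalNetworkDefinition $site `
    -LBFOTeamMode "SwitchIndependent" `
    -LBFOLoadBalancingAlgorithm "Dynamic" `
    -EnableNetworkVirtualization $false
```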
The next step is to create a Logical Switch. The logical switch groups the different components we created so far into one single entity.
The logical switch will be used to define the Hyper-V switch once the cluster is created. On each Hyper-V host, we will create a virtual switch (Hyper-V switch). Along with the logical switch, we will also define the interfaces used by this virtual switch – in our case, Data-1 and Data-2, along with the uplink port profile.
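A minimal sketch of creating such a logical switch and attaching the uplink port profile in VMM PowerShell, assuming the names used in this demo:

```powershell
# Fetch the uplink port profile created earlier.
$upp = Get-SCNativeUplinkPortProfile -Name "UPP-HyperV-Data"
# Create the logical switch in teamed uplink mode.
$ls = New-SCLogicalSwitch -Name "LS-HyperV-Data" -SwitchUplinkMode "Team"
# An uplink port profile set makes the profile selectable on this switch
# when it is later applied to the hosts' Data-1/Data-2 adapters.
New-SCUplinkPortProfileSet -Name "UPP-HyperV-Data-Set" -LogicalSwitch $ls `
    -NativeUplinkPortProfile $upp
```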
That’s the end of Part 1.