
Removing stale HostCluster and VMHost from SCVMM 2012 R2

I have seen a few cases where deleting a VMHost from the SCVMM console fails, and even the -Force switch in PowerShell does not help. That was based on my experience with previous versions.

This time I got a chance to try it again with SCVMM 2012 R2, and it worked well.

Here is the sequence.

1) Delete all VMs from this cluster

2) Delete each Host one by one.

Remove-VMHost -VMHost HyperVHostName -Force

3) Once all Hosts are removed, Remove the VMHostCluster

Get-VMHostCluster -Name HostClusterName | Remove-VMHostCluster

 

Running Remove-VMHostCluster with just the cluster name will not help, as the error below shows.

 


 

PS C:\Users\shaba> Remove-VMHostCluster -VMHostCluster HostClusterName
Remove-SCVMHostCluster : Cannot bind parameter 'VMHostCluster'. Cannot convert the "HostClusterName" value of type
"System.String" to type "Microsoft.SystemCenter.VirtualMachineManager.HostCluster".
At line:1 char:37
+ Remove-VMHostCluster -VMHostCluster HostClusterName
+ ~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Remove-SCVMHostCluster], ParameterBindingException
+ FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.SystemCenter.VirtualMachineManager.Cmdlets.RemoveHostClusterCmdlet
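
In other words, the cmdlet expects a HostCluster object rather than a string. Putting the whole sequence together with the SC-prefixed cmdlet names, here is a minimal sketch (the cluster name is a placeholder):

# Placeholder cluster name for illustration
$cluster = Get-SCVMHostCluster -Name "HostClusterName"

# Remove each remaining host one by one
Get-SCVMHost -VMHostCluster $cluster | ForEach-Object { Remove-SCVMHost -VMHost $_ -Force }

# Remove the now-empty host cluster object
$cluster | Remove-SCVMHostCluster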


 

Ensure that you have a safe backup before playing around !

Good luck !

 

SCVMM 2012 R2 – VLAN Information missing on the Network Adapter Configuration

I observed this issue last week: the VLAN ID in the network adapter details inside the properties of a VM does not show up as expected.

Here is the scenario.

SCVMM 2012 R2 with Windows Server 2012 R2 HyperV.

VMs are deployed and everything works well. The virtual switch is configured as a trunk, and each VM specifies its own VLAN ID. All is well, except that the VLAN ID of a VM is not displayed in the VMM console.

VLAN ID Missing on VM Properties

 

However, if we verify the VLAN ID of the same VM with the cmdlet, it is properly configured.

VLAN ID through cmdlet
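
For reference, here is a minimal sketch of that kind of check from the VMM shell (the VM name is a placeholder, and the exact property names may vary slightly between VMM builds):

$vm = Get-SCVirtualMachine -Name "VM-Name"
Get-SCVirtualNetworkAdapter -VM $vm | Select-Object Name, VLanEnabled, VLanID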

 

Now let's try to change the VLAN ID through the SCVMM console.

VLAN ID Change through SCVMM

I changed the VLAN ID to 66 and here is the confirmation of the job status.

Change VLAN through SCVMM – Job successful

 

After this change, I verified using the cmdlet that the new VLAN was updated on the VM.

 

VLAN ID Change – Confirmation over Cmdlet

 

Going back to SCVMM, the VM properties still do not show that the VLAN ID is enabled and configured.

VLAN ID Change – Still missing

 

However, if we go to the connection details, the VLAN ID is displayed.

VLAN ID Change – Connection details

 

This may be a minor bug. I have opened a case with Microsoft support and will update this post once I get more details.

 

Temporary Template Error

It happened somehow: the template was removed from the template folder, and hence its status showed as Missing in SCVMM.

MISSING TEMPLATE

While trying to remove this template from SCVMM, another error appeared about a dependent temporary template.

Delete missing template - error 848

 

It says that this object is dependent on a temporary template. The easiest way to remove the temporary templates is with the cmdlet below.

Remove-SCVMTemplate

 

Combined with Get-SCVMTemplate, this removes all the temporary templates in one stretch !
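
Remove-SCVMTemplate expects a template object, so in practice the temporary templates are piped into it. A minimal sketch, assuming the temporary templates VMM creates carry "Temporary" in their names:

# -All also returns the temporary templates VMM creates during deployments
Get-SCVMTemplate -All | Where-Object { $_.Name -like "Temporary*" } | Remove-SCVMTemplate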

 

 

 

Delete a VM which is in UnSupported Cluster Configuration

It happens sometimes: VMM says that a VM is in the Unsupported Cluster Configuration state. In my case, I created a few test machines, tested everything, and deleted them from Hyper-V Manager. However, a few of the VMs had been created with the VHDX file on a local drive instead of a CSV volume. In fact, this message must already have been showing in VMM, but I noticed it only after the VMs were deleted.

Remove VM on Unsupported Cluster Configuration

Unsupported Cluster Configuration

Trying to remove the VM directly from the VMM console fails with error 809.

Error (809)
VMM cannot remove the virtual machine because it is in the Unsupported Cluster Configuration state.

Recommended Action
Change the virtual machine’s state, and then try the operation again.

So it's clear that VMM expects the virtual machine's state to be Running, Stopped, or anything apart from Unsupported Cluster Configuration.

Since I don't see any option in the GUI, I tried the PowerShell module.

Remove-VM

However, Remove-VM VMName didn't help, as the cmdlet failed with the error below.

Remove-VM : VMM cannot remove the virtual machine because it is in the
Unsupported Cluster Configuration state. (Error ID: 809, Detailed Error: )

Change the virtual machine’s state, and then try the operation again.

 

The next attempt is with a -Force switch.

Remove-VM VMName -Force

 

That worked !
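
Spelled out with the SC-prefixed cmdlet names, a minimal sketch (the VM name is a placeholder):

# -Force removes the VM record even though it is in the Unsupported Cluster Configuration state
Remove-SCVirtualMachine -VM (Get-SCVirtualMachine -Name "TestVM01") -Force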

 

Power Optimization – Make a step towards an Eco Friendly Datacenter

In my view, the earth is heavily exploited; the greed for money and power makes people blind. Still, I wish each of us would do our part to keep the earth green.

Today, I would like to take you through one of the great features that seems to be ignored by the majority. As per an old statistic on Wikipedia, about 67% of electricity is produced from fossil fuels. So how about saving some power in the data center? Are we saving something beyond the financial benefit?

I just tested Power Optimization and feel great about the way SCVMM is growing and proving its maturity. Though Dynamic Optimization and Power Optimization have been available since SCVMM 2012, I am not seeing many organizations using them actively. I would recommend considering this great feature on your next SCVMM design or implementation. That way we, the IT team, can also make a small contribution to saving energy and reducing the carbon footprint.

Back to the topic – what is Power Optimization? We build infrastructure to withstand the high load of peak business hours. In many traditional businesses, the IT load is aligned with business hours, so we can safely predict that the chance of a peak load outside business hours is minimal. Why keep the full compute capacity running during off hours? That thought is the basis of Power Optimization. Within a defined window – say, an off-business-hours window – if the existing load on a cluster node can be distributed to the other cluster nodes without overloading them, SCVMM can turn off some nodes.

Let's try this. Power Optimization is a feature which works on top of Dynamic Optimization. Both Dynamic Optimization and Power Optimization are configured at the host group level. By default, every host group inherits this setting from its parent host group, so the settings configured on the "All Hosts" host group apply to all host groups unless inheritance is disabled.

To configure Power Optimization, we need the prerequisites below.

  1. Run As Account to be created for managing the host using Baseboard Management Controller
  2. Local user creation on the BMC of each host server
  3. Baseboard Management Controller configuration for each host on SCVMM
  4. Live migration should be configured for Dynamic Optimization to work

Creating a Run As Account for BMC

It's a straightforward process. Navigate to SCVMM -> Settings -> Security -> Run As Accounts

Run As Account for BMC

 

Please note – the user account may be case sensitive. Please check the server manual from the manufacturer for the exact details.
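
If you prefer the VMM shell, the Run As account can also be created there – a minimal sketch (the account name is a placeholder):

# Prompts for the BMC user name and password and stores them as a Run As account
New-SCRunAsAccount -Name "RAA-BMC" -Credential (Get-Credential)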

Now we have the Run As Account ready. The next step is to have the same account created on the BMC of each Hyper-V host.

If you have a blade enclosure, the usual way of managing the blades is through the Onboard Administrator, from which we can manage and log in to each individual server in the enclosure. However, that access is initiated through the Onboard Administrator, which has rights to log in to each server; it does not mean the account you used for the Onboard Administrator page can access the remote web interface of an individual server. Hence, we need to log in to the iLO/DRAC console of each individual server and create this account there.

Create local user on host to access Baseboard Management Controller

The next steps walk through the procedure on an HP blade.

Login to the OnBoard Administrator webpage.

Navigate to Enclosure Information -> Device Bays -> Server

Select Web Administration to access the iLO Web Interface of the server we selected.

HP Onboard Administrator – Server iLO Web Administration

 

From the server ILO web page, Navigate to Administration -> User Administration

In the Local Users section, click the "New" button at the right end of the screen.

Create a local user with the same user name and password we used in the SCVMM Run As account.

Assign all privileges for this account.

iLO Admin User Creation

Perform the same steps on all Hyper-V hosts in the cluster to create the account.

Baseboard Management Controller configuration for each host on SCVMM

The next step is to configure BMC on each Hyper-V host through SCVMM. In this step, we will link each server with its remote management IP address and the corresponding run as account which has the privilege to manage it.

Navigate to SCVMM -> Fabric -> Servers -> All Hosts ->Host Group -> Server

Right-click the server and open its properties.

Navigate to Hardware -> Advanced -> BMC Settings

Select “This physical computer is configured for out of band (OOB) management”

Select the appropriate Power Management configuration provider.

Key in the BMC IP Address.

The default port is 623. If the port got changed on the server side, please adjust it accordingly.

Select the Run As Account created for BMC.

BMC Configuration

Click OK and verify under Jobs that the job completed successfully.

Configure this setting for each server in the cluster.
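
The same BMC settings can also be pushed from the VMM shell – a rough sketch, assuming an IPMI-capable BMC and the Run As account created earlier (host name, IP address and account name are placeholders):

$runAs = Get-SCRunAsAccount -Name "RAA-BMC"
$vmHost = Get-SCVMHost -ComputerName "HyperVHost01"
# Out-of-band management settings: protocol, BMC address, port and credentials
Set-SCVMHost -VMHost $vmHost -BMCProtocol "IPMI" -BMCAddress "10.10.10.11" -BMCPort 623 -BMCRunAsAccount $runAs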

We are now ready with the prerequisites.

Configuring Power Optimization

As mentioned, Power Optimization works along with Dynamic Optimization. Now we will enable Power Optimization.

Navigate to SCVMM -> Fabric -> Servers -> All hosts -> Host Group of the cluster

Right Click on the Host Group and select properties

Click on Dynamic Optimization and enable Power Optimization.

SCVMM 2012 R2 Power Optimization

 

Click on Settings to configure the Power Optimization settings and the time windows.

SCVMM 2012 R2 Power Optimization Settings

SCVMM identifies the best candidate host for Power Optimization and then checks whether that host can be evacuated by moving its VMs to the other available nodes while keeping the destination servers' load within the thresholds configured above.

The schedule is the window in which SCVMM is allowed to perform Power Optimization. In my case, the ideal time is at night, from 10 PM to 6 AM.

The time window is based on the local server time.

With the above settings, every day between 10 PM and 6 AM the servers will undergo Power Optimization. The first node will go down if the load hosted on it can be moved to the other nodes without breaching the threshold of 40% processor load and 4 GB of memory. The number of servers that can be powered down depends on the total number of cluster nodes: enough nodes must stay online to satisfy node majority, plus one. If the cluster was created in VMM, the witness disk that is automatically added counts as one additional vote in this calculation – for example, in a 4-node cluster created in VMM one node can be powered off, while in a 4-node cluster created outside VMM none can, as the table below shows.

 

Cluster nodes   Max powered off (cluster created using SCVMM)   Max powered off (created outside SCVMM, later added)
4               1                                                0
5               1                                                1
6               2                                                1
7               2                                                2
8               3                                                2
9               3                                                3
10              4                                                3

 

A server picked up by Power Optimization is put into maintenance mode and kept shut down.

When will a server that is off due to Power Optimization come back online?

  1. When the Power Optimization schedule window ends. In the above example, the servers will be powered back on by SCVMM at 6 AM every day.
  2. When a VM in the cluster experiences a warning condition due to resource availability that can only be resolved by migrating it to a powered-off node.

 

 

 

 

ODX along with SCVMM 2012 R2 – Fast File Copy

The next step after enabling ODX is to make use of it in real scenarios. In my view, VM deployment is the best consumer to showcase a significant performance improvement with ODX. Traditional deployment with SCVMM uses BITS over the network, and for a deployment that takes 10 minutes, more than half of the time is spent moving the VHDX file to the destination Hyper-V server.

With VMM 2012 R2, we optimized the speed for VM deployment from the VMM library by leveraging Offloaded Data Transfer (ODX). Many large environments use SAN storage. ODX is a feature introduced in Windows Server 2012 which automatically orchestrates and optimizes the use of SAN storage by using tokenization on reads and writes without using buffers. With this capability, copying the VM can be offloaded to the SAN device, which decreases server CPU utilization and network bandwidth consumption and provides faster VM deployments. To use ODX, the SAN storage must support this feature.

Thanks to Keiko Harada for that post.

I just tested ODX along with SCVMM 2012 R2, and I am well impressed, as the performance gain is clearly visible. VM deployment from templates now requires only a few minutes compared to the time consumed before.

The environment where I tested ODX contains a Windows Server 2012 R2 Hyper-V cluster along with SCVMM 2012 R2. The Hyper-V servers and the SCVMM server have disks allocated from an ODX-capable SAN, and SCVMM uses a SAN volume for its library.

For ODX to work with SCVMM, here are the prerequisites:

  • Only for new VM deployment from the VMM library
  • RAA (run as account) on source and target
    • VMM and the hosts (source and target) must be on the trusted domain with no firewall among the environments.
  • VMM R2 agent
  • VMM R2 Server
  • Windows Server 2012 and above
  • ODX supported SAN storage

I had all the prerequisites in place except the RAA.

For that reason, Fast File Copy was not functional. While trying to deploy a VM from a template, this was clearly evident on the host selection page.

The message inside “Deployment and transfer explanation” says – Creating Virtual Machines using fast file copy requires the host hostname to have an associated Run As Account.

Fast File Copy requires an associated Run As account

In my case, the Run As account was not configured initially. I had created the Hyper-V cluster from SCVMM, but later, when I tried to configure the Run As account for host management, that option was disabled.

HOST Access - Run As Account Disabled

Some searching on the TechNet forums pointed me to a page describing how to configure the Run As account through the shell.

$YourCluster = Get-SCVMHostCluster -Name YOUR-CLUSTER-NAME

$YourRunAs = Get-SCRunAsAccount -Name "YOURRUNASACCOUNT"

Set-SCVMHostCluster -VMHostCluster $YourCluster -VMHostManagementCredential $YourRunAs

After executing this, the Run As account was displayed in the host properties.

Host Access - Run As Account Assigned

Now we are all set with the prerequisites for Fast File Copy.

Let's try it.

Now VM Creation is using FAST FILE COPY 😀

Deploying using Fast File Copy

See the total time taken to complete the file copy: just 50 seconds.

fast file copy - start time and end time

The same template, when deployed without Fast File Copy, took more than 4 minutes.

Deploying without Fast File transfer

Thus the total time taken to deploy a VM came down drastically – now it's just 3 minutes.

Create VM - Total Time

Enjoy !

 

Configuring Hyper-V Cluster using SCVMM 2012 R2 – Part 3

In this part, I will cover how to set up a template. This is one of the posts I really enjoyed, simply because so many things worked well today… 😀

In fact, I had been experimenting with templates for more than a week in different combinations, but was stuck on one single issue: VMs were not getting joined to Active Directory as expected. I had a similar experience with SCVMM 2012, and the twist there was adjusting the image with some tweaks. However, I forgot what I did to fix it with SCVMM 2012 and kept troubleshooting. Today, I made it work!

So let's do it before I forget 😀

Creating a Windows Server 2012 R2 Std Gen 1 Template

The first stage is to prepare a VM for the template.

  1. Create a Gen 1 VM – I will name it Golden Image
  2. Windows patch updates
  3. Install the Hyper-V Integration Components
  4. Antivirus/agents or any custom software required on all servers deployed using this image
  5. Enable RDP
  6. Disable the firewall
  7. Set the Administrator password to blank
  8. Export the VM – required if we need to update the template at a later stage
  9. Sysprep and shut down

The second stage is to configure the prerequisites for the template and for deployment using the template:

  1. Create a Run As account for the local administrator
  2. Create a Run As account to be used for the AD join, which has the rights to join computers to AD
  3. Create a Guest OS profile
  4. Create hardware profiles
  5. If the Hyper-V data network is a trunk, configure an IP pool for the subnet in which the VMs will be placed during deployment

The final stage is to create the template.

So let's go back to Stage 1. The first few steps are straightforward, so I will skip them and jump to the 7th step.

Set the Administrator password to blank

The default local policy has password complexity enabled, which forces us to set a password as the last step of installation. So let's adjust the local password policy so that we can have a blank password.

MMC -> File -> Add/Remove Snap-in -> select "Group Policy Object Editor" and click Add

By Default, Local Computer Policy will be selected. Click on Finish and then OK

 

Select Local Computer Policy

Navigate to Local Computer Policy -> Computer Configuration -> Windows Setting -> Security Settings -> Account Policies -> Password Policy

Double-click the "Password must meet complexity requirements" policy.

Password must meet complexity

This policy is currently enabled. We just need to disable it.

Disable Password Complexity

Now we can reset the local users password to blank.

Reset password to blank

The next step is to export the VM. Yes – it's a live export 😉. The export will be used in case we need to make further updates or the template creation fails midway. The VM used for creating the template gets destroyed, so it's better to export it and keep a safe copy.

Now we are ready for Sysprep

SysPrep

Once Sysprep is completed, the VM will be shut down.
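
For reference, a typical Sysprep invocation that generalizes the image and shuts the VM down is:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown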

Preparing SCVMM

The Run As accounts we are creating here are optional; however, I prefer to use them.

The first Run As account is for the local administrator. Once the VM is deployed, we need a local administrator with a password. I know – you must be thinking, who wants to keep the administrator password blank on a server 😀

From SCVMM -> Settings -> Security -> Run As Accounts

RUNAs-Accounts

Click on “Create Run As Account”

RunAsAccount-LocalAdmin

In the same way, create one more Run As account for the AD join process. For this account, the user name should be in the form domainname\domainuser.

This Active Directory account should have permission to join computers to the domain.

I also recommend keeping "Validate domain credentials" checked, to ensure that the stored password is correct.

RunAsAccount-ADJoin

The next step is the Guest OS profile. Here we define the OS details, local admin password, time zone, roles and features, domain join, etc.

Navigate to SCVMM -> Library -> Profiles -> Guest OS Profiles

Guest OS Profiles

Click on the Create drop-down menu and select Guest OS Profile.

Create Guest OS Profile

In General, enter the name and compatibility details.

New Guest OS Profile - 1

In Guest OS Profile, choose the operating system first from the drop-down menu.

New Guest OS Profile - OS Selection

No need to change identity information.

For the Admin Password, we use the Run As account we created for the local administrator.

Guest OS Profiles - Local Admin

If the environment doesn't have a KMS, you can manually enter the product key.

Set the time zone according to your requirement.

Guest OS Profiles - Time Zone

If you need any roles or features in common on these VMs, you can select them here.

In Domain/Workgroup, enter the domain name and the Run As account used for the AD join task.

Guest OS Profile - AD Join

If you are using a legacy OS, you may need to use an answer file.

GUIRunOnce commands can be used to perform one-time activities as part of the deployment.

 

The next step is to create the hardware profile. Hardware profiles are used to provision VMs with a predefined, standard hardware configuration. It's like defining different offering plans; I usually create Gold, Silver and Bronze plans.

Navigate to SCVMM -> Library -> Profiles -> Hardware Profiles

From the “Create” drop down menu, Select Hardware Profile.

Hardware Profile  - Generation

Define the processor, memory and other hardware components as per your requirements, and define whether VMs using this hardware profile need to be highly available.

Hardware Profile  - Details

 

One important setting here is the Network Adapter configuration. We need to ensure that the profile connects to the correct VM network, one that is available on the destination Hyper-V cluster/server.

If we have multiple VLANs configured as a trunk, we would normally select the specific VLAN we need while configuring the network. I don't see such an option to select a specific VLAN while deploying from a template. The workaround I found on various forums and blogs is to use an IP pool in SCVMM for the desired VLAN.

We defined a logical network for the VM network in Part 1 of this series. In LN-VM-Network, I defined only one subnet since it is a demo. In a real-world scenario, however, we will have multiple VLANs configured as a trunk, and we may need to tag the right VLAN for each VM. This part is important because we expect the VM to be joined to our domain as part of the deployment; for the AD join to happen, the network needs to be working. And once we trigger a VM creation from a template, we don't have an option to edit the configuration before the deployment completes.

Here is a quick look at the different VLANs currently linked to the logical network LN-VM-Network.

VM Network - VLANs

While deploying from the template, SCVMM will choose one of the VLANs from this group. To force the template to use a particular VLAN, the workaround is to create an IP pool for that VLAN and then select this IP pool while creating the VM from the template.

So let's do that.

Navigate to SCVMM -> Fabric -> Networking -> Logical Network

Right Click on LN-VM-Network and select Create IP POOL

Create IP Pool

Enter the name for the IP pool and the logical network it should be linked to.

I am planning to create an IP pool for 10.66.66.0/24, which is already defined in the logical network LN-VM-Network.

Create IP POOL - NAME

Specify the IP range to be used. We will just set aside a few IPs for this purpose.

Create IP POOL - IP Range

Define the Gateway address for the IP Subnet.

Create IP Pool - Gateway

Enter the DNS Servers

Create IP Pool - DNS

If you still have WINS in your infrastructure, you can define the WINS servers.

Review the Summary and click on finish.

Create IP Pool - Summary
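
The same IP pool can also be created from the VMM shell – a rough sketch, assuming the logical network and subnet shown above (the network-site name, pool name, IP range, gateway and DNS address are placeholders):

$logicalNet = Get-SCLogicalNetwork -Name "LN-VM-Network"
$netDef = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNet -Name "SITE-VM-DC1"
$gateway = New-SCDefaultGateway -IPAddress "10.66.66.1" -Automatic
# Static IP pool carved out of the 10.66.66.0/24 subnet defined on the network site
New-SCStaticIPAddressPool -Name "IP-Pool-VLAN-10.66.66" -LogicalNetworkDefinition $netDef -Subnet "10.66.66.0/24" -IPAddressRangeStart "10.66.66.50" -IPAddressRangeEnd "10.66.66.60" -DefaultGateway $gateway -DNSServer "10.66.66.10"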

So we are done with Stage 2. Good to start the template creation.

Creating VM Template

Navigate to SCVMM -> VMs and Services -> All Hosts

Identify the Golden Image VM which we created and sysprepped.

Right Click on the VM and select Properties.

Ensure that the operating system and integration services are correctly recorded in SCVMM.

If the operating system is displayed as Unknown, select the right operating system and click OK.

Golden Image - OS IC Check

Right Click on the VM and Select Create -> Create VM Template

Create Template

Oh.. WARNING – Creating a template will destroy the source virtual machine VM Name.

Always read warnings and understand them before proceeding with Yes or No… 😀

Create Template - Warning - Source VM Destroy

Enter the name for this Template

Create Template - Template Name

No need to change anything on the hardware page; just click Next.

Configure Operating System – select the Guest OS profile which we created for Windows Server 2012 R2 Std.

Create Template - OS Profile

Select the VMM Library Server

Select the path where you want to store this in the library. I usually put it under the Templates folder; you can create a folder structure as you wish.

Review the summary and click on Finish.

The whole process usually takes 5 to 10 minutes, depending on the mode of file transfer (network, or Fast File Copy if ODX is enabled), the size of the VHDX file, etc.

We are good to deploy a VM from a template.

 

Start the Create Virtual Machine wizard. Select "Use existing Virtual Machine, VM Template or Virtual Hard Disk", which is the default option, and click Browse.

Create VM

Navigate to Type: VM Template and select the template we just created.

Create VM - Selecting Template

On Identity, key in the virtual machine name and description.

Create VM - VM Name

 

On Configure Hardware – Select the Hardware Profile which we created.

Create VM - Hardware Profile

As I mentioned earlier, if the environment has a trunk on the Hyper-V data traffic, you need to select Static IP (from a static IP pool) in the network adapter properties.

Create VM - Static IP from IP POOL

On Configure Operating System, select the Guest OS profile which we created.

Create VM - Guest OS Profile

Select the Destination to deploy the Virtual Machine.

Create VM - Destination

Create VM - VM Computer Name

create vm - set computer name

On Networking, click the network adapter. Under Address Pool, the IP address pool which we created is selected.

Create VM - Network Adapter Setting

In Machine Resources, enter the destination path (the location where the VHDX will be saved), the VHDX file name, etc.

Create VM - Network Adapter Setting1

If you point to a folder in the destination path, make sure that folder already exists.

On Add Properties, select the automatic action to take when the Hyper-V server starts and the action to take when it stops.

Review the summary and proceed.

 

Configuring Hyper-V Cluster using SCVMM 2012 R2 – Part 2

In this part, I will be covering the below topics.

  • Adding Hyper-V hosts to SCVMM 2012 R2
  • Creating Hyper-V cluster through SCVMM 2012 R2
  • Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2

 

In the last part, we prepared the servers, assigned storage, configured Host Groups and configured the fabric.

 Adding Hyper-V hosts to SCVMM 2012 R2

From VMM Console, Go to Fabric -> Host Group

Right-click the host group which this server will be a part of and select Add Hyper-V Hosts and Clusters.

ADD HYPER-V HOSTS

Computer Location

user credentials

server names

 

 

Select the servers which need to be added to SCVMM and click Next.

 

Server Discovery

HyperV Role Installation Warning

 

 

 

Select host group

Review the activities and proceed.

This step usually takes a few minutes, and the progress can be checked under Running Jobs in the SCVMM console.

Once the job is successful, you may receive a warning which says "A restart is required to complete the multi-pathing I/O devices on the host hostname.domainname.com". Hence, I prefer to get the nodes rebooted.

At this stage, the servers should be communicating properly with SCVMM and the host status should be OK.

Fabric - After Adding Servers
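
Hosts can also be added from the VMM shell – a minimal sketch, assuming a Run As account with local administrator rights on the hosts (the account, host and host group names are placeholders):

$runAs = Get-SCRunAsAccount -Name "RAA-HostManagement"
$hostGroup = Get-SCVMHostGroup -Name "DATACENTER-1"
# Adds the server to VMM and installs the Hyper-V role and VMM agent if required
Add-SCVMHost -ComputerName "HyperVHost01.domain.com" -VMHostGroup $hostGroup -Credential $runAs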

 Creating Hyper-V cluster through SCVMM 2012 R2

Now we have the host servers added to SCVMM; however, these are standalone Hyper-V hosts. The next step is to build a cluster through SCVMM.

Go to SCVMM -> Fabric -> Create -> Hyper-V Cluster

Create Cluster

 

Cluster Name

Select the appropriate host group where we have the Hyper-V hosts. Once the correct host group is selected, the Hyper-V hosts will be visible under Available Hosts.

 

CLUSTER CREATION - Available Hosts

"Skip cluster validation" – please always leave this unchecked. The cluster validation test performs all the mandatory tests on each host's OS, network, storage, etc. to ensure that the cluster created from these nodes will work like a hero 😀 – provided all test results are green.

Select the hosts required for the cluster and click Add > to move them to the "Hosts to cluster" box.

 

CLUSTER CREATION - SELECT HOST

Cluster IP

Now we need to select the disks which will be used in the cluster.

Create Cluster - DISK Configuration

All the remote storage connected to the hosts will be displayed now.

SCVMM detects the smallest disk in the group and reserves it as the witness disk.

I have assigned another 500 GB disk, which will be used as a CSV. This disk needs to be formatted as NTFS and converted to CSV.

We have different check boxes for each disk.

Quick Format – as the name states, this option performs a quick format, the same option available when formatting from the OS.

Force Format – if the selected disk already contains data, we need to select this so that SCVMM formats it anyway.

CSV – this option converts the disk into a CSV disk once the cluster is created.

 

The next option is to select the virtual switch, which we will skip for now and create once the cluster is built.

Summary

Review the activities and click on Finish.

To get the detailed progress, Review the running job for “Install Cluster”.

Create Cluster - SCVMM Job

Unfortunately, this attempt failed as the cluster validation process detected some errors. Let's quickly go through the error and fix it.

 

Cluster Creation - Validation Failed

To get the detailed report, go to any of the hosts and navigate to C:\Windows\Cluster\Reports. Open the latest report mentioned in the above error.

Cluster Validation - Error

From the report, the error is in Network. Click on Network to drill down further.

Cluster Validation - Error - Network

In the detailed Network report, "Validate Network Communication" failed. Click on Validate Network Communication to drill down further.

Cluster Validation - Error - Network communication

Now I got it. I have multiple NICs on each server.

1 For Management

1 For Live Migration

2 for HyperV-Data

I have assigned IPs only for Management and Live Migration. The interfaces for Hyper-V data will be used to create a team when we configure the virtual switch on each server from SCVMM, so I planned to keep these interfaces disabled; however, one NIC, DATA-2, was left out. DATA-2 got an APIPA address, and the validation wizard detected that both servers have an interface in the same IP range which cannot communicate with the other – which is a serious problem. 😉

I disabled the DATA-2 interface on both servers and proceeded with the cluster creation.

To restart a failed job:

SCVMM -> Jobs -> History

Identify the failed job, right-click it, and select Restart.

I then checked the cluster validation report again, and now everything is green 😀

Cluster Validation - Success

The progress can be reviewed under Running Jobs, and once the job completes, the cluster will be visible in the SCVMM console.

Cluster
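
For reference, the cluster creation can also be scripted – a rough sketch, assuming the hosts are already managed by VMM and a Run As account with administrative rights on them (names and the cluster IP are placeholders, and parameter names may differ slightly between VMM builds):

$runAs = Get-SCRunAsAccount -Name "RAA-HostManagement"
$nodes = "HyperVHost01", "HyperVHost02" | ForEach-Object { Get-SCVMHost -ComputerName $_ }
# Creates the failover cluster from the managed hosts; full validation runs unless -SkipValidation is specified
Install-SCVMHostCluster -ClusterName "HVCLUSTER01" -VMHost $nodes -Credential $runAs -ClusterIPAddress "10.10.116.50"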

Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2

Now we have Hyper-V hosts which are part of a cluster. The next step is to assign the logical switch to the hosts.

SCVMM -> Fabric -> Host Group -> Cluster -> Node -> right-click and select Properties

Select Virtual Switches

Click on New Virtual Switch and select New Logical Switch

AddingVirtual Switch

Add both Physical Adapters – DATA-1 and DATA-2.

Add the appropriate port profile – in this example "Port Profile – Prod – DC1".

Click on OK.

Do the same steps on the second host.

That's it! You are done. You can create a VM now and try assigning the logical switch.

test VM

 

Note – these posts need more description and some final updates, which are on the way. Visit this space again.

 

Configuring Hyper-V Cluster using SCVMM 2012 R2 – Part 1

Before we start: SCVMM is a product I started learning recently. I started working with SCVMM 2012 but was not successful in getting it fully utilized. My interest in network virtualization made me focus more on SCVMM 2012 SP1 and SCVMM 2012 R2. In the last few days, I deployed a Windows Server 2012 R2 Hyper-V cluster through SCVMM 2012 R2, and I thought of splitting the write-up into three parts. We can do much more with SCVMM 2012 R2; however, these posts should be considered a basic setup to start with.

The first part will cover

  • Preparing the Windows Server 2012 R2 Hosts
  • Assign Storage from SAN
  • Preparing SCVMM 2012 R2
  • Configuring SCVMM 2012 R2 Fabric

The second part will cover

  • Adding Hyper-V hosts to SCVMM 2012 R2
  • Creating Hyper-V cluster through SCVMM 2012 R2
  • Configuring Logical switch on the Hyper-V hosts from SCVMM 2012 R2

The third part will cover

  • Creating Virtual Server
  • Configuring templates

 

So let's start. This is one of the more complex products I have seen: just installing SCVMM will not make anything work, as SCVMM needs a lot of configuration. The most complex part is the fabric configuration, which has different components and settings that are interlinked. Mistakes in this area may not show up immediately, so pay close attention here. I would suggest writing down the details, drawing the different components, and trying to understand how they are related.

______________________________________________________

 TO HELP YOU RELATE EACH STEP TO ITS FINAL USAGE, I AM ADDING A SCREENSHOT OF WHERE WE WILL BE USING THAT COMPONENT IN THIS IMPLEMENTATION. WHATEVER IS PUT IN BLUE ITALIC, TOGETHER WITH THE SCREENSHOT THAT FOLLOWS IT, SHOWS THE FINAL USAGE.

______________________________________________________

Preparing Windows Server 2012 R2 hosts

  • Install OS
  • SCVMM will install the Failover Clustering feature at the time of cluster creation
  • SCVMM will install the Hyper-V role while we add the Host server to SCVMM – ie, no need to install it manually
  • Configure the network
    # Recommended to have two dedicated NICs for Hyper-V data. We will team these interfaces and use them for the Hyper-V virtual switch
    # Recommended to have a dedicated NIC for management
    # Good to have a dedicated interface for Live Migration
    # Good to have a dedicated interface for CSV

Don't team the NICs yourself if you are planning to use a teamed interface for Hyper-V data; the teaming will be done through SCVMM 2012 R2.

If you have dedicated NICs for Live Migration/CSV, assign an IP address and subnet mask, and leave the gateway and DNS blank. Ensure that these IPs respond to ping from the other hosts.

Keep the interfaces for Hyper-V Data – Data-1 and Data-2 disabled.

Assigning Storage from SAN

  • Assign a small disk for Quorum Disk
  • Assign Disks for CSV Disks
  • Install  Multipathing Software and configure it.

In this demo, I am using a Dell Compellent array along with the native Windows Server 2012 R2 MPIO. The configuration is very straightforward.

DISK Configuration - MPIO

Select the Compellent device, which should be listed in the Device Hardware box, and click Add. After a few seconds, the server will prompt for a reboot.

DISK Configuration - MPIO Reboot

After the reboot, verify in Disk Management that the disks are visible. You should see only one disk for each LUN assigned from the SAN; if you see multiple disks, multipathing is not working properly.

 

Here is a screenshot of Disk Management before configuring MPIO. The server has two paths to the storage, and hence I see two disks in Disk Management for each LUN.

I have assigned a 500 GB disk for CSV and a 10 GB disk for quorum.

DISK Configuration - MPIO - Before Adding

In the next screenshot, we verify the Disk Management console after configuring MPIO and rebooting.

  • Verify the disks in Disk Management on each host and confirm that they are visible

DISK Configuration - MPIO - after Adding

  • No need to format or assign drive letters; all of that happens during cluster creation from SCVMM 2012 R2
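
For reference, the MPIO part of this setup can also be scripted – a rough sketch using the inbox MPIO tooling; the vendor/product ID strings below are placeholders, so check Get-MPIOAvailableHW on your host for the exact values reported by your array:

# Enable the Multipath I/O feature (a reboot may be required)
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Claim the SAN device for the Microsoft DSM; the IDs are placeholders – see Get-MPIOAvailableHW
New-MSDSMSupportedHW -VendorId "COMPELNT" -ProductId "Compellent Vol"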

Preparing SCVMM 2012 R2

The first activity is to create host groups. A host group is a logical grouping to which we assign common properties. This needs some planning, as all further configuration gets tied to the host groups. Here is how I created them for this demo.

Host Groups

 

The next step is to configure the fabric. Before that, please go through the definitions of the key components.

Courtesy – Technet

Hyper-V port profile for uplinks

A port profile for uplinks (also called an uplink port profile) specifies which logical networks can connect through a particular physical network adapter.

 

After you create an uplink port profile, add it to a logical switch, which places it in a list of profiles that are available through that logical switch. When you apply the logical switch to a network adapter in a host, the uplink port profile is available in the list of profiles, but it is not applied to that network adapter until you select it from the list. This helps you to create consistency in the configurations of network adapters across multiple hosts, but it also enables you to configure each network adapter according to your specific requirements.

 

Hyper-V port profile for virtual network adapters

A port profile for virtual network adapters specifies capabilities for those adapters and makes it possible for you to control how bandwidth is used on the adapters. The capabilities include offload settings and security settings. The following list of options provides details about these capabilities:

  • Enable virtual machine queue
  • Enable IPsec task offloading
  • Enable Single-root I/O virtualization
  • Allow MAC spoofing
  • Enable DHCP guard
  • Allow router guard
  • Allow guest teaming
  • Allow IEEE priority tagging
  • Allow guest specified IP addresses (only available for virtual machines on Windows Server 2012 R2)
  • Bandwidth settings

Port classification

A port classification provides a global name for identifying different types of virtual network adapter port profiles. As a result, a classification can be used across multiple logical switches while the settings for the classification remain specific to each logical switch. For example, you might create one port classification that is named FAST to identify ports that are configured to have more bandwidth, and one port classification that is named SLOW to identify ports that are configured to have less bandwidth. You can use the port classifications that are provided in VMM, or you can create your own port classifications.

Logical switch

A logical switch brings port profiles, port classifications, and switch extensions together so that you can apply them consistently to network adapters on multiple host systems.

Note that when you add an uplink port profile to a logical switch, this places the uplink port profile in a list of profiles that are available through that logical switch. When you apply the logical switch to a network adapter in a host, the uplink port profile is available in the list of profiles, but it is not applied to that network adapter until you select it from the list. This helps you to create consistency in the configurations of network adapters across multiple hosts, but it also makes it possible for you to configure each network adapter according to your specific requirements.

To enable teaming of multiple network adapters, you can apply the same logical switch and uplink port profile to those network adapters and configure appropriate settings in the logical switch and uplink port profile. In the logical switch, for the Uplink mode, select Team to enable teaming. In the uplink port profile, select appropriate Load-balancing algorithm and Teaming mode settings (or use the default settings).

Switch extensions (which you can install on the VMM management server and then include in a logical switch) allow you to monitor network traffic, use Quality of Service (QoS) to control how network bandwidth is used, enhance the level of security, or otherwise expand the capabilities of a switch. In VMM, four types of switch extensions are supported:

  • Monitoring extensions can be used to monitor and report on network traffic, but they cannot modify packets.
  • Capturing extensions can be used to inspect and sample traffic, but they cannot modify packets.
  • Filtering extensions can be used to block, modify, or defragment packets. They can also block ports.
  • Forwarding extensions can be used to direct traffic by defining destinations, and they can capture and filter traffic. To avoid conflicts, only one forwarding extension can be active on a logical switch.

Virtual switch extension manager or Network manager

A virtual switch extension manager (or network manager) makes it possible for you to use a vendor network-management console and the VMM management server together. You can configure settings or capabilities in the vendor network-management console—which is also known as the management console for a forwarding extension—and then use the console and the VMM management server in a coordinated way. To do this, you must ensure that the provider software (which might be included in VMM, or might need to be obtained from the vendor) is installed on the VMM management server. Then you must add the virtual switch extension manager or network manager to VMM, which enables the VMM management server to connect to the vendor network-management database and to import network settings and capabilities from that database.

The result is that you can see those settings and capabilities, and all your other settings and capabilities, together in VMM.

With System Center 2012 R2, settings can be imported into and also exported from VMM. That is, you can configure and view settings either in VMM or your network manager, and the two interfaces synchronize with each other.

 

Fabric Configuration

Hopefully you now have some idea of the components used in the fabric configuration.

The next step is to create Logical Networks.

Logical Network for Management Interfaces

CREATE LOGICAL NETWORK

 

We need to define the network site and link it with the right host group, then add the VLAN ID and the IP subnets.

Example – If the Management Interface is using 10.10.116.116/24 as the IP address, we need to use the IP Subnet as 10.10.116.0/24 and its respective VLAN ID.

SITE-PRD-DC1 is a network site which is linked only to DEMO -> PRODUCTION -> DATACENTER-1, with the IP subnet we will be using in DC1 and its VLAN ID.

DEFINE NETWORK SITE

 

We define a second network site, SITE-PRD-DC2, and link it with the host group DEMO -> PRODUCTION -> DATACENTER-2, with the respective subnet details we will be using in DC2. If you have only one data center, this step is not required.

DEFINE NETWORK SITE 2

 

Review the details and Confirm the changes to proceed with the Logical Network creation.

SUMMARY
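
The same logical network and network site can be created from the VMM shell – a minimal sketch using the host group and example subnet shown above (the logical network name and VLAN ID are placeholders):

$logicalNet = New-SCLogicalNetwork -Name "LN-Management"
$hostGroup = Get-SCVMHostGroup -Name "DATACENTER-1"
# Subnet/VLAN pair for the management network; the VLAN ID here is a placeholder
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.116.0/24" -VLanID 116
New-SCLogicalNetworkDefinition -Name "SITE-PRD-DC1" -LogicalNetwork $logicalNet -VMHostGroup $hostGroup -SubnetVLan $subnetVlan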

 

Similarly, you may create Logical Networks for Live Migration network, CSV Network etc.

Finally, create a Logical Network for the VM Network. This will be used for defining the network which will be used by the Virtual Machines.

LOGICAL NETWORK -VM NETWORK

LOGICAL NETWORK -VM NETWORK 2

LOGICAL NETWORK -VM NETWORK 3

LOGICAL NETWORK -VM NETWORK - SUMMARY

______________________________________________________

Once the Hyper-V cluster is built, the logical network will be mapped, automatically or manually, to the physical interfaces.

The next screenshot should make clear how the logical network gets linked with the physical network interfaces on the Hyper-V host servers.

The screenshot below is for your reference and is not part of the steps we are performing.

To Relate – Logical Network and Network Interface

______________________________________________________

The next step is to create the uplink port profile. I have two interfaces dedicated to Hyper-V data. This port profile defines that they are used as the uplink and are teamed, with Switch Independent as the teaming mode and Dynamic as the load-balancing algorithm.

 

UP-LINK - DC1

The next step is to link the port profile with the Network Site.

PORT PROFILE - NETWORK SITE

Select the appropriate site and click Next. Review the changes and proceed.

This port profile makes it clear to SCVMM that the interfaces mapped to the logical network LN-VM-Network will be used as the uplink, with Switch Independent as the teaming mode and Dynamic as the load-balancing algorithm.
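
For reference, an equivalent uplink port profile can be created from the shell – a rough sketch using the site defined earlier and the profile name used in Part 2 (parameter names may differ slightly between VMM builds):

$siteDef = Get-SCLogicalNetworkDefinition -Name "SITE-PRD-DC1"
# Uplink port profile: switch-independent teaming with the Dynamic load-balancing algorithm
New-SCNativeUplinkPortProfile -Name "Port Profile - Prod - DC1" -LogicalNetworkDefinition $siteDef -LBFOTeamMode SwitchIndependent -LBFOLoadBalancingAlgorithm Dynamic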

To help you relate this, the next screenshot shows where we will see the port profile once the cluster creation is done.

 

to relate - uplink port profile

The next step is to create a logical switch. The logical switch groups the different components we have just created into one single entity.

 

LOGICAL SWITCH - 1

 

LOGICAL SWITCH - 2

 

LOGICAL SWITCH - 3

 

 

 

LOGICAL SWITCH

LOGICAL SWITCH - 6

LOGICAL SWITCH - 7

LOGICAL SWITCH - 8

 

______________________________________________________

The logical switch will be used to define the Hyper-V switch once the cluster is created. On each Hyper-V host, we will create a virtual switch (Hyper-V switch) from the logical switch, and we will also define the interfaces used by this virtual switch – in our case, DATA-1 and DATA-2 along with the uplink port profile.

To Relate – Logical Switch

______________________________________________________

 

That's the end of Part 1.