
System Center Virtual Machine Manager 2012 R2 – Update Rollup 5 released

Update Rollups are arriving more frequently, as promised, and this time Update Rollup 5 ships with a long list of bug fixes.

I'm really impressed with the focus the VMM team now has on making this product better.

Bug Fixes with RU5

  • When you use the Virtual Machine Manager UI to enable a virtual machine replica, it doesn’t throw any error. However, Windows PowerShell does throw an error.
  • Virtual Machine Manager services crash when there is a discrepancy between Windows Server Update Services (WSUS) and the Virtual Machine Manager baseline. If the update is cleaned up from WSUS, the Virtual Machine Manager Service crashes while it tries to approve that update. Additionally, the ObjectNotFound exception is not handled correctly.
  • The current logic is that if a storage pool supports only one provisioning type, the storage pool defaults to type “fixed.” This is incorrect. If AreMultipleProvisioningTypesSupported is false and we mark THIN as false and then go ahead with the FIXED type, logical unit number (LUN) creation fails in cases where storage pools support the THIN type of storage.
  • To create site-to-site connectivity in current versions, customers have to use IPSec S2S tunnels. S2S GRE tunnels are now enabled to use bandwidth more efficiently.
  • When Virtual Machine Manager is running a job, it holds the lock and cannot be cancelled. If you try to cancel the job, the job is still shown as “running.” Any further refresh of the host or the cluster will evoke a failure notice that states that the lock is held by the running Virtual Machine Manager job.
  • The power savings daily and monthly performance data is not aggregated correctly. As a result, power savings for the month is seen as zero (0) for the Administrator Console. The hourly performance data is reported correctly by the cmdlet.
  • Virtual Machine Manager server setup is updated to install the latest DacFx (SQL Server 2014) for the SQL Server application host.
  • Virtual Machine Manager UI console crashes intermittently with the following exception:
    System.ServiceModel.CommunicationException in OptimizeHostsAction
  • All “Hyper-V Recovery Manager”-related strings have been updated to “Azure Site Recovery.” This is the new product name.
  • Migrating an unprotected virtual machine to a protected virtual machine on the same host currently shows the transfer as the SAN type. This is incorrect. Virtual Machine Manager should show the transfer type as network only when the virtual machine is off, and as virtual machine and storage migration (VSM) when the virtual machine is running, whether it is on the same host or on a different host.
  • Customers who run Virtual Machine Manager at scale experience long load times for the job history.
  • The stored procedure dbo.prc_WLC_GetVmInstanceIdByObjectId fails if the VMId column is empty in any of the rows of the tbl_WLC_VMInstance table. Affected customers will not be able to set up disaster recovery for their virtual machines. Typically this occurs when the customer has a virtual machine that was created and then upgraded to System Center 2012 Virtual Machine Manager SP1. In this case, enable protection is blocked for all virtual machines and not just for specific virtual machines.
  • The new LUN that is created on an EMC array pool tries to use an old SMLunId that was previously generated. Therefore, RG creation with multiple LUNs fails, and you receive the following error message:
    26587: SMLunId is re-used for newly created LUN
  • Storage array is unnecessarily locked while data is collected during a provider rescan. The lock is applied only when data is refreshed.
  • The EnableRG operation for NetApp fails when two providers are used in a fully discoverable model.
  • Creation of a virtual machine to the recovery cloud after failover has occurred on the recovery site fails, and you are told that the cloud doesn’t support protection. Virtual machine migration to RG is blocked in this scenario.
  • If the virtual machine was refreshed, the Administrator Console blocks shutdown of the virtual machine.
  • When test failover (TFO) is completed, the snapshot LUNs are removed from the backend (NetApp). However, Virtual Machine Manager still shows them. Only a provider rescan (not refresh) removes both the LUNs and the pool.
  • Currently, Virtual Machine Manager has only the “NETAPP LUN” option added into the host MPIO devices when we add the host into Virtual Machine Manager. With this update, “NETAPP LUN C-Mode” is added into the host MPIO devices as another option.
  • The System Center Operations Manager object property IsClustered for HostVolume is displayed in the UI without an associated value.
  • When the system is under load, Virtual Machine Customization operations report the following error:
    609: Virtual Machine Manager cannot detect a heartbeat from the specified virtual machine

    The creation of the virtual machine (with customizations) actually succeeds. However, Virtual Machine Manager puts the virtual machine in a failed state because of this job failure. The user can safely ignore the failure to bring the virtual machine back. However, the user may think that something went wrong and re-create the virtual machine.

  • Currently, users have to manually update the DHCP extension on all hosts after an update rollup is installed. This is now automated. After the DHCP extension in the Virtual Machine Manager server’s installation folder is replaced with the latest version, Virtual Machine Manager automatically checks the DHCP extension on all hosts. If a host has an older version of the DHCP extension, the agent version status is displayed as “DHCP extension needs to be updated” in host properties on the Status page. The user then runs the update agent and updates the DHCP extension on the Hyper-V host in the same way as for the Virtual Machine Manager agent. Also, if the vSwitch is a logical switch, the status is shown under “logical switch compliance,” and remediating the logical switch also updates the DHCP extension on the host.
  • Sometimes the creation of the unattend.xml file will fail when multiple virtual machines are being customized at the same time. (This file is used as part of virtual machine customization.)
  • Virtual Machine Manager host and cluster refresher checks the permissions (ACLs) of file shares that are registered to a host or cluster. When permissions aren’t found, the refresher reports an error, and a button is placed on the host’s or cluster’s properties page to “repair” the share to set the appropriate permissions. If certain permissions are added, the refresher will erroneously report an error even if the required permissions do exist. The “repair” operation, if invoked, will report that it failed to repair the share.
  • Host.GetInstance is taking a lock on HostVolumes, HostDisks, and HostHbas. However, it is releasing locks only on HostVolumes and HostDisks and continues to hold HostHbas locks. Therefore, if a child task in ControlledChildScheduler takes a lock on the host, subsequent child subtasks cannot acquire a lock on the host.
  • Source logical unit is now being set as part of TFO snapshot creation so that DRA is able to look up the snapshot LUN that corresponds to a replica LUN.
  • Virtual Machine Manager Update Remediation shows only check boxes. This occurs because the dialog box doesn’t force loading of all of the update objects from the server and because of how data binding works. This leaves the names blank instead of either causing a crash or throwing error text.
  • Enabling protection with replication groups with a null target group entity in paramset while making the WCF protocol causes a critical exception to occur.
  • Sometimes, Replication Group protection at scale fails because of database issues.
  • In-place migration of a virtual machine to RG can lead to any of the following issues:
    • No destinations are visible for migration.
    • The migration wizard finishes, but no migration job is triggered.
    • Actual data transfer rather than just metadata transfer takes place.
  • The virtual machine vNic is renamed to “Not connected” if the vNic is not connected to a network. However, the name is not being reset to its original name when the connectivity changes. This can cause a lot of confusion in the UI because all vNics appear as “Not connected” even if there is real connectivity.
  • The UnregisterStorageLun task in a replication group fails intermittently because of SQL deadlock.
  • Service Deployment fails if a library resource is read-only and is copied directly and not from the network. When a file in a custom resource on the library server is read-only (attribute), the deployment will fail when you don’t use network copy. However, it will succeed when you do use network copy.
  • Very slow performance occurs when new-scvmconfig is called. new-scvmconfig is required for multiple new virtual machine scenarios. Each virtual machine creation takes longer and longer to run through placement as the virtual machine name is created.
  • A Virtual Machine refresher job hangs indefinitely after you enable maintenance mode on another cluster node. This will cause a deadlock condition in event-based virtual machine refresh jobs. This may occur when something happens during Subscribe or UnsubscribeForEvents. Now the deadlock condition is removed. If there is an error, it will fall back to the VMLightRefresher for that host.
  • A LUN that has no snapshots could not be registered with Register-LUN.
  • When parallel Register-LUN operations are running for NetApp, multiple SPCs may be created for the same node but for a different LUN. This can happen for either of the following reasons:
    • An OOB configuration was done, and an IG has been created for each node of the cluster for different LUNs. Although this is possible in NetApp, this is a bad configuration, and Virtual Machine Manager will throw an error.
    • An issue in Virtual Machine Manager could cause a situation in which parallel operations create different IGs for the same node and different LUNs.
  • For a discovered ReplicationGroup, the relationship of the LUN to RG is not established.
  • After a customer initializes placement of any or all members of a service configuration that uses a load balancer, the customer can no longer retarget the individual virtual machine configurations of the service configuration. Instead, when the user tries to do this, the user receives a message that states that the service configuration actions are invalid. This blocks the customer from being able to spread virtual machines across host groups.
  • Intermediate-level refresher removes all group sync information.
  • Get-SCReplicationGroup does not return a replication group after a provider was removed and re-added or if the provider was never realized in the array.
  • Planned failover for replication group fails in “PreValidateFailover.”
  • User selection on the Virtual Machine Manager UI grid is lost because Virtual Machine Manager keeps refreshing the object. This severely affects the ability to multi-select and perform operations in scale environments. When the IsSynchronizedWithCurrentItem property is set to True on a data grid, a multi-selection resets to a single selection during a refresh.
  • Virtual Machine Manager UI start-up for a self-service user takes a long time. The UI can take from 4 to 10 minutes or longer to start.
  • Cloning a virtual machine with Checkpoint fails with “An item with the same key has already been added” when the cloned virtual machine is placed on a dynamic disk. This blocks the cloning of the impacted virtual machines.
  • Performance data (disk space) is not available in Operations Manager for VMWare hosts. Performance data collection (except host disk space) for non-HyperV hosts is collected through the Windows PowerShell cmdlet Get-SCPerfData. However, for Host disk space, Virtual Machine Manager was still using the managed module. Now everything uses the Windows PowerShell cmdlet.
  • The Virtual Machine Manager Administrator Console crashes when the user tries to open the “Add Hyper-V Host and Clusters” wizard. The customer environment produces PRO objects with guid == Guid.Empty. These objects are cached on the client side in ClientCache.
  • Library Refresher takes a very long time to run if the number of files on the library shares is huge, and during this time a read lock is held on the User Role. Therefore, other operations on UserRole fail during this time. Library refresh runs every hour, and currently it can take up to 50 minutes in a customer environment to run one complete cycle. During this time, some of the operations on UserRole may fail.
  • When multiple deployments are performed concurrently, the progress (time remaining) display job of the file copy fails because of an Overflow exception and causes the service to crash.
  • If an administrator user or a self-service user grants permission to a new self-service user to a service template, the newly added self-service user does not see the service template in its session. Therefore, the self-service user who was granted the access does not see the resources in its session even after authorization is granted.
  • When a user is deleted from Active Directory, the user starts appearing as an invalid SID in Virtual Machine Manager. If an invalid SID is present in the ACLs of a virtual machine, all subsequent modifications (addition or removal of users) to the ACL fail silently.
  • Storage provider refresh causes an exception to occur.
  • Migration of a protected running virtual machine uses a network instead of LiveVSM if the virtual machine is migrated to unprotected storage.
  • The container ID for a tier configuration object is initialized even when the hosts that are appropriate for that tier configuration are not in the scope of the placement attempt.
  • Multiple Create VM jobs fail with a locking exception when you run batch virtual machine creations in parallel.
  • While you are running a batch of 100 or more virtual machine creation scenarios in parallel, each virtual machine creation task does not show any progress for 15 to 20 minutes after you submit the task.
  • If a physical computer profile is created by using a vNic (and by using a virtual machine network), and there is more than one host group for a logical network that also has that virtual machine network, then when you add a host resource on the “Provisioning Options” page, the host profile will be displayed for only one host group. The profile won’t be displayed for the rest of the host groups.
  • A Disable Replication Group job fails because of a database deadlock condition.
  • HP returns the same SMLUNId for source and replica LUNs. Therefore, the hostdisk-to-LUN association is not established in hostrefresher.
  • Disable Maintenance Packs because of Operations Manager alerts such as the following:
    Cloud maximum memory usage to fabric memory capacity ratio has reached or exceeded threshold.
  • Add-VMMStorageToComputeClusterOnRack fails, and you receive the following error message:
    Could not find tenant share registered to cluster 43J05R1CC.43J05.nttest.microsoft.com.
  • Virtual Machine Manager encounters critical exceptions during provider rescan.
  • Replication group does not show up in Cloud Properties if pools and LUNs are attached to a child hostgroup.
  • If replication is broken, a critical exception occurs if the replication group is used to perform a disaster recovery (DR) operation.
  • In live migration of a virtual machine from Windows Server 2012 to Windows Server 2012 R2, the threshold fails with a critical exception. As a result, live migration from Windows Server 2012 to Windows Server 2012 R2 won’t have the virtual network adapters fixed. This could cause the Virtual Machine Manager database to be inconsistent with the Hyper-V host and also to fail during the migration.
  • Creation of LUN sometimes fails with invalid handle error.
  • After failover, the virtual machines are reported to have protection errors in the Virtual Machine Manager UI although there are actually no errors.
  • A potential race condition in the MOM Discovery Refresher causes intermittent failures in Virtual Machine Manager Operations Manager connections. This can cause Operations Manager connection failures.
  • Cluster Node goes into a pause state intermittently after you refresh the host cluster. As part of reliability improvement, the HA calculation logic was changed to support failed nodes to be ignored. The calculation logic was rewritten, and in the new logic, logical networks are enforced on the switch. If the switch does not have any logical networks marked, the switch is marked as “non-HA,” and Virtual Machine Manager pauses the cluster node.
  • Custom properties are returned as Null after Set-SCVMTemplate is called. When a Virtual Machine Manager object’s attribute (such as Description) is updated through Windows PowerShell ($t | set-scvmtemplate -args), a problem arises in retrieving the CustomProperty parameter data (by using Windows PowerShell cmdlets such as $t, $t.customproperty, and $t.customproperties), and it is returned as Null. This occurs because the CustomPropertyIDs of the object are being cleared on the engine side during updates. (See the short PowerShell sketch after this list.)
  • Live migration fails with incompatible switch port settings. If the target Hyper-V virtual switch doesn’t have VLAN configured during the migration, and if the source virtual machine has a virtual network adapter, Virtual Machine Manager tries to create VLAN settings for it and to assign VLAN ID 0 (that is, “VLAN disabled”). But on a virtual switch where no VLAN is configured, the adding of the VLAN setting causes an incompatibility error from Hyper-V, and the migration fails.
  • A critical exception occurs in the Storage Refresher — ArgumentException — StrgArray.addPoolInternal. Under certain erroneous conditions, the Windows storage service can report multiple storage pools sharing the same ObjectId (this should never happen). The storage provider cannot be refreshed, and therefore cannot be managed. The provider cannot even be removed from Virtual Machine Manager to begin again.
  • Operations such as Migration, Store, or Delete on cloned virtual machines leave the virtual machine configuration file on the host. In a Virtual Machine Manager setup that uses cloning heavily, every cloned virtual machine will leave behind a set of virtual machine configurations and save state files after it is deleted or migrated. This consumes significant disk space. In addition, the deletion and migrations all succeed with a warning message that states that they couldn’t clean up the folder.
  • File share loses user set classification when the share goes from managed to unmanaged. Because of a NetApp provider issue, if the provider loses network connectivity to the array, the provider may not report back any pools even though there are pools. If this happens during a refresh, Virtual Machine Manager will assume that the pools are no longer there and will remove any pool records from the database.
  • GroupMasking in Virtual Machine Manager fails to get MaskingSet From Job. For group masking, if createmaskingset is called with a job, Virtual Machine Manager doesn’t get masking set from job completion but retries even on success. This is reported to occur only when unmasking to an iSCSI Initiator. FC initiators work fine.
  • Template that is based on “SAN Copy capable” VHDX is marked as “Non-SAN Copy Capable” template. The user won’t be able to rapidly provision virtual machines by using Virtual Machine Manager on Nimble storage.
  • prc_WLC_GetUniqueComputerName doesn’t set FoundUniqueName to true even when a unique name is found.
  • After storing a virtual machine to a library, Refresh Library Share will hit critical failures.
  • A LibraryShare resource does not update the namespace after an update of the SSU data path as the Library share. The namespace for library resources will not be updated even after refresh.
  • A virtual machine is shown as a replica virtual machine after failover operations. For ASR SAN replication scenarios, when the virtual machine is failed over, the virtual machine is shown as a replica virtual machine, and the user cannot make much use of the failed-over virtual machine. The user has to trigger reverse role to fix the replication mode.
  • Deploying a stored virtual machine fails in placement with a critical exception. A critical exception will block deploying a previously stored HA Hyper-V virtual machine from the library to an HA host after Update Rollup 5 is applied.
  • Security update: A vulnerability exists in Virtual Machine Manager when it incorrectly validates user roles. The vulnerability could allow elevation of privilege if an attacker logs on to an affected system. An attacker must have valid Active Directory logon credentials and be able to log on with those credentials to exploit the vulnerability. For more information about this security update, see the following article in the Microsoft Knowledge Base.
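To illustrate the Set-SCVMTemplate custom-property issue mentioned in the list above, here is a minimal PowerShell sketch of the symptom before the fix. The template name and description are placeholders, and it assumes a session that is already connected to the VMM management server:

    # Hedged repro sketch; "MyTemplate" is a placeholder template name.
    $t = Get-SCVMTemplate -Name "MyTemplate"
    $t.CustomProperties                          # populated as expected

    $t | Set-SCVMTemplate -Description "Updated via PowerShell"

    $t = Get-SCVMTemplate -Name "MyTemplate"
    $t.CustomProperties                          # before Update Rollup 5, this could come back as Null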

I applied the update on my SCVMM cluster and everything looks good, apart from a console start-up error related to an Add-In (screenshot below). This can be fixed by adjusting the permissions on the relevant folder; a rough sketch of one way to do it follows the screenshot.

 

[Screenshot: Console-Error]
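For reference, one way to adjust the folder permissions is shown below. The exact folder depends on your installation; the path and the account used here are assumptions based on a default install, not the documented fix, so adapt them to your environment:

    # Hedged sketch: grant Authenticated Users read/execute on the console's add-in folder.
    # The path below assumes a default installation; adjust it for your environment.
    $addInPath = "C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin\AddInPipeline"
    icacls $addInPath /grant '*S-1-5-11:(OI)(CI)RX' /T    # *S-1-5-11 = Authenticated Users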

Download link – http://support.microsoft.com/kb/3023195


E-Book on Hyper-V from Altaro

Altaro Software has published a free e-book on Hyper-V written by Eric Siron. It's a well-narrated and informative read that will be a great asset for any Hyper-V administrator. Eric also contributes to Altaro's Hyper-V blog, which I always follow.

7 Key Areas of Hyper-V

Grab your copy of the e-book here.


Update Rollup 3 for System Center 2012 R2 Virtual Machine Manager released

RU3 for SCVMM 2012 R2 was released on July 29th.

I feel like this was a silent release; I couldn't find this information on any blogs or forums.

Issues that are fixed in this update rollup
  • Total storage for a User role is reported incorrectly. For example, the User role can use only half of the allowed quota.
  • A host cluster update fails intermittently because of a locked job.
  • Virtual machine (VM) refreshers do not update highly available virtual machines (HAVMs) after failover to another node.
  • A cluster IP address for a guest cluster configuration in a Hyper-V Network Virtualization (HNV) environment is not updated correctly by using HNV policies during failover.
  • The cluster IP address in an HNV environment is updated incorrectly during failover.
  • Server Message Block (SMB) shares may not be usable by high availability (HA) workloads if they are also connected to stand-alone hosts.
  • Storage objects discovery does not occur on a Virtual Machine Manager server if the discovery item is too big.
  • A Virtual Machine Manager job that assigns network service backend connectivity fails.
  • Enable maintenance mode fails when you evacuate failed-state VMs.
  • The Virtual Machine Manager service cannot be restarted because of database corruption.
  • The ZH-TW language incorrectly appears in the tooltip of the VM Network icon.
  • Library refresher rewrites the alternative data stream on every file during every update.
  • For an iSCSI hardware storage-based array, when the MaskingPortsPerView property option is set to "multi-port per view," the target endpoint is not obtained as the port address.
  • The virtual hard disk (VHD) is left in the source folder after storage migration is completed.
  • The addition of a bandwidth limitation to an existing virtual private network (VPN) connection is not added to the generated script.
  • A VM that is attached to an HNV VM network loses connectivity when it is live migrated to another node in the failover cluster that is not currently hosting other VMs that belong to the same VM network.
  • VM network shared access is lost after a service restart or an update interval.
  • The Remove-SCFileShare command fails for a network-attached storage SMI-S provider.
  • Setting the template time zone to UTC (GMT +0:00) is incorrectly displayed as "Magadan Standard Time."
  • System Center 2012 R2 Virtual Machine Manager crashes when you add groups that contain the at sign (@) character in User roles.
  • VM deployment fails in a VMware environment when you have virtual hard disk (.vmdk) files of the same size in your template.
  • Deployment of an application host on HAVMM fails and generates a 22570 error.
  • Live migration of an HAVM across clusters creates a duplicate VM role in the target cluster.
  • An error occurs when you apply physical adapter network settings to a teamed adapter.
  • A VMM agent crashes continuously when the HvStats_HyperVHypervisorLogicalProcessor query returns a null value.
  • A host refresh does not update the VMHostGroup value of a VMware cluster after the cluster is moved from vCenter.
  • VMM reports an incorrect Disk Allocation size for dynamic VHDs that are mapped to a virtual machine.
  • A VMM service template script application does not work for a self-service role.
  • VM creation fails if Virtual Machine Manager undergoes failover during the creation process.
  • The Access Status value of a file share is incorrect in the user interface.
  • The Virtual Machine Manager service crashes because of an invalid ClusterFlags value.
  • VMs cannot be deployed from a service template to a cloud across multiple host clusters (multiple datacenters).
Features that are added in this update rollup
This update includes a Linux guest agent upgrade to support the following new operating systems:

  • Ubuntu Linux 14.04 (32-bit)
  • Ubuntu Linux 14.04 (64-bit)

This update also includes the following:

  • Host DHCP extension driver upgrade
  • Several performance improvements
  • Several Management Pack improvements

 

http://support.microsoft.com/kb/2965414

 

Enjoy.

 

Gartner Magic Quadrant for Server Virtualization – 2014

The much-awaited Gartner report for server virtualization is now available.

http://www.gartner.com/technology/reprints.do?id=1-1WR6HLK&ct=140703&st=sb

Hyper-V is picking up and getting closer to VMware. This year, Huawei was also included in the quadrant.

 

Microsoft has effectively closed most of the functionality gap with VMware in terms of the x86 server virtualization infrastructure. Additional gaps remain in management and automation features — notably, VMware’s Site Recovery Manager (SRM) is more automated and better-suited for large-scale disaster recovery requirements. Importantly, Microsoft made Hyper-V Recovery Manager (HRM) available in January 2014 — an Azure-hosted service that orchestrates Hyper-V Replica for disaster recovery purposes. Microsoft plans to expand that offering by including Azure-based replication and recovery, and renaming the offering Microsoft Azure Site Recovery (currently in preview mode). It is too early to judge the competitiveness of these offerings, but they will be critical to Microsoft’s success against VMware.

System Center VMM 2012 and Windows Azure Pack (delivered October 2013) dramatically improve the ability to create private cloud solutions based on Hyper-V, which also enables service providers to use Microsoft as the basis for cloud offerings. While Microsoft does not have the service provider ecosystem that VMware has, Microsoft’s Azure service is becoming a growing attraction for enterprises that want to develop Microsoft-based applications on-premises and in the cloud using common development and management tools.

While the management functionality is strong, ease of use (for example, clients report that Hyper-V HA is relatively difficult to set up and manage) and lack of fully centralized management remain issues. While most management tasks can be handled through VMM, some require Hyper-V Manager or Windows PowerShell. Microsoft has made improvements in recent versions of Hyper-V and System Center.

Microsoft can now meet the needs of most enterprises with respect to server virtualization. Its challenge is neither feature nor functions, but competing in a market with an entrenched competitor, VMware. Microsoft is now winning a good percentage of enterprises that are not heavily virtualized yet — especially those that are mostly Windows-based (while Linux support is improved, especially in Windows Server 2012 R2, there are very few customers using Hyper-V for Linux). However, few enterprises that are heavily virtualized with an alternative technology are choosing to go through the effort to switch. A growing number of large enterprises are finding niches in which to place Microsoft — for example, in stores, branch offices or separate data centers. This strategy of “second sourcing” will enable these enterprises to evaluate Hyper-V for further deployments and perhaps leverage the competition in deals with VMware. While Microsoft’s technology is capable, winning the larger and more mission-critical deployments will be an uphill battle and will require more proof points.

Microsoft’s challenge is less about products and much more about sales and marketing, as well as overcoming an entrenched competitor with high-quality products and happy customers. The most important factor in Microsoft’s favor is price. Unlike VMware, Microsoft does not rely uniquely on a business model based on virtualization software. At the same time, the market — including service providers — is becoming more concerned about vendor lock-in. In a market moving to cloud infrastructures based on virtualization software, and with growing interest in potentially heterogeneous and open-source solutions such as OpenStack, Microsoft must be careful to not position itself as just another proprietary solution. Furthermore, it must find ways to differentiate itself from VMware based on its service provider and Azure offerings — for example, using Azure for disaster recovery and developing new applications on Azure — but managed centrally together with on-premises assets. In the end, Azure interoperability may become the more important factor compared with price.

 

Server Virtualization – Gartner Magic Quadrant – As of July 2014

Shaba