
VMware NUMA Performance

February 11, 2015 | Susan Bilder

In our previous post we outlined how NUMA works with Virtual Machines (VMs) that either fit entirely into one NUMA home node or are divided into multiple NUMA clients, each client assigned its own home node. The following points should be considered when working with VMs on NUMA hardware:

The hypervisor may migrate VMs to a new NUMA node on the host.

The hypervisor adjusts physical resources as needed to balance VM performance against fairness of resource allocation across all VMs. If a VM's home NUMA node is maxed out on CPU, the hypervisor may allocate CPU from a different NUMA node, even if that means incurring the latency of remote memory access. When CPU becomes available on the NUMA node holding the VM's memory, the hypervisor moves the CPU allocation back to that node to restore local memory access. In deciding whether to migrate a VM to a different NUMA node, the hypervisor weighs CPU usage, memory locality, and the performance cost of moving the VM's memory between nodes.

VMs may also be migrated in an attempt to achieve long-term fairness. The CPU Scheduler in VMware vSphere 5.1 gives an example of three VMs on two NUMA nodes: the node with two VMs splits its resources between them, while the node with a single VM dedicates all of its resources to that VM. In the long term, migrating VMs between the two nodes may average out performance, but in the short term it simply transfers the resource contention from one VM to another while adding the cost of moving memory from one NUMA node to the other. This type of migration can be disabled by setting the advanced host attribute /Numa/LTermFairnessInterval=0, as shown below.
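If long-term fairness migrations are hurting a latency-sensitive workload, the attribute can also be changed from the command line. A minimal sketch, assuming an ESXi host with shell or SSH access; whether disabling the feature is appropriate depends on your workload:

# Check the current value of the setting
esxcli system settings advanced list -o /Numa/LTermFairnessInterval

# Set it to 0 to disable long-term fairness migrations
esxcli system settings advanced set -o /Numa/LTermFairnessInterval -i 0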

VMs that frequently communicate with each other may be placed on the same NUMA node.

VMware’s ESX hypervisor may place VMs together on the same NUMA node if there is frequent communication between them. This “action-affinity” can cause an imbalance in VM placement by assigning multiple VMs to the same home NUMA node and leaving other nodes underpopulated. In theory, the gain from the VMs sharing local memory offsets the increase in CPU ready time. In practice this may not be the case, and the feature can be disabled by setting /Numa/LocalityWeightActionAffinity=0 in the advanced host attributes, as shown below.
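The same esxcli pattern applies here; a sketch, again assuming shell or SSH access to the host:

# Set the action-affinity weight to 0 so inter-VM communication is not
# factored into home node placement
esxcli system settings advanced set -o /Numa/LocalityWeightActionAffinity -i 0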

Hyperthreading doesn’t count when VMs are assigned to NUMA nodes.

The hypervisor counts physical cores, not hyperthreads, when determining which NUMA node to assign as the home node for a VM. If a NUMA node has 4 physical cores and a VM is allocated 8 vCPUs, the VM is divided into 2 NUMA clients spread over 2 nodes. This guarantees each vCPU a physical core but may increase memory latency. If a VM is running a memory-intensive workload, it may be more efficient to keep the VM on one NUMA node by configuring the hypervisor to count hyperthreads as well. This is done by setting the numa.vcpu.preferHT advanced VM property to True, as in the example below.
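The property is applied per VM while the VM is powered off. A sketch of the corresponding .vmx entry (the same key/value pair can be added through the vSphere Client as an advanced configuration parameter):

numa.vcpu.preferHT = "TRUE"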

VMs migrating between hosts with different NUMA configurations may experience degraded performance.

A VM moving to a host with smaller NUMA nodes may need to be split into multiple NUMA clients, while a host with larger NUMA nodes may be able to merge a wide VM into a single node. Either way, performance is degraded until the hypervisor on the new host adapts the VM to that host's NUMA configuration.

VMs spread over multiple NUMA nodes may benefit from vNUMA.

Some applications and operating systems are NUMA-aware and can use the topology to improve performance. A VM running such software and spread across multiple NUMA nodes can be configured with virtualized NUMA (vNUMA), which exposes the underlying topology to the guest just as if it were running on a physical host, often yielding large performance gains. However, if the VM migrates to a host with a different NUMA configuration, performance may degrade until the VM can be restarted using a vNUMA configuration that matches the new host.
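Once vNUMA is in play, it is worth confirming what topology the guest actually sees. A sketch for a Linux guest, assuming the numactl package is installed:

# List the NUMA nodes, CPUs, and memory visible to the guest OS
numactl --hardware

If the output shows a single node for a VM you expected to span several, the vNUMA topology the guest booted with may no longer match the current host.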

While adjustments to the hypervisor’s NUMA algorithms may provide some performance improvements, the last two items are the most important takeaways. It is a best practice to ensure that hosts in a cluster have the same NUMA configuration to avoid performance issues when VMs move from one host to another.

Want to learn more?

Download our Virtualization or Cloud IaaS whitepaper - both technologies can provide redundancy to maximize your uptime and help you squeeze out the most performance. Which is better, and how do you decide?


Download the whitepaper: Virtualization or Cloud IaaS?