Server Sizing Guide for NComputing Solutions and Virtualization
This guide helps you design and size a server for an NComputing deployment.
NComputing’s standard deployment architecture offers the best balance of simplicity, affordability, and performance. In this scenario, the NComputing vSpace desktop virtualization software is installed on an operating system that runs directly on a physical host (ranging from a PC to server-class hardware). In this configuration, vSpace quickly transforms the host into a multiuser system capable of supporting up to 30 simultaneous users with NComputing L-series virtual desktops.
For applications requiring additional flexibility, scalability, and manageability, NComputing solutions can be deployed on a server virtualization infrastructure that leverages virtual machines and hypervisor technology. In this scenario, NComputing vSpace runs on a guest operating system inside a virtual machine. This configuration enables multiple vSpace hosts to run on a single physical server platform, making it possible to scale up and host a much larger number of users (>100) on a single physical machine. Furthermore, the vSpace virtual machine hosts can be managed using standard VM management tools, giving IT managers additional levels of flexibility and control. This approach is essentially an effective means of server consolidation.

While deploying the NComputing solution with server virtualization infrastructure provides a number of benefits over the traditional model, it is also inherently more complex, and greater care must be taken to design and deploy an effective solution. This guide is intended to serve as a primer for those preparing to deploy NComputing vSpace and L-series products within hypervisor environments. Specific deployment guidelines are provided for VMware’s ESXi hypervisor. Other hypervisors, such as Microsoft’s Hyper-V and Citrix’s XenServer, may also be used with NComputing systems; however, ESXi is the leading hypervisor for server consolidation in the market today, and as such it is the focus of this guide. Many of the guidelines in this paper nevertheless apply to other hypervisors as well.

This document provides an overview of key technical considerations for successful virtual machine deployment of NComputing products and includes specific guidelines to help you optimize the overall performance of the VM host and user sessions. Before you begin, you should familiarize yourself with the L-Series User Guide, which can be found on the Documentation page of the Support section at www.ncomputing.com.
Due to the added complexity of hypervisors and VM management, every deployment with server virtualization software will be unique. The tips and tricks contained in this document are general in nature and may not apply to every situation. If you are unfamiliar with server sizing, desktop virtualization, or large-scale deployments, it is strongly suggested that you consult a qualified professional before deploying.
Host Server Sizing
When choosing hardware for a virtual machine host, it is critical to first understand the expected use case. An environment supporting a greater number of users inherently requires more resources than one with fewer users, and the same is true where users’ needs are more complex or demanding. The following sections discuss typical provisioning for the different hardware components of a virtual machine host. These recommendations are estimates, and the needs of an actual deployment WILL vary. As a rule of thumb, never plan to exceed 80% utilization of your server’s resources; in practice, this means provisioning capacity for roughly 25% more users than you actually expect.
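As a quick sanity check, the 80% rule can be expressed in a few lines of Python (the function name and structure are illustrative, not part of any NComputing tooling):

```python
import math

def users_to_provision_for(expected_users, max_utilization=0.80):
    """Return the user count to size the server for, so that the
    expected load stays at or below max_utilization of capacity."""
    return math.ceil(expected_users / max_utilization)

# Sizing for 40 expected users at the suggested 80% ceiling:
print(users_to_provision_for(40))  # 50, i.e. ~25% more than expected
```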
Processors
Today, a typical mid-range PC ships with a dual-core processor in the 2.4-2.6 GHz range. The average user needs only a fraction of this processing power, except for occasional bursts during intensive tasks. For most basic use cases, 600-800 MHz of processing power per user is sufficient provisioning. For more intensive use cases, such as multimedia playback or image manipulation, this figure should be raised to 1.0-1.2 GHz. Multiply this value by the number of users you are serving to obtain a rough estimate of your deployment’s processing needs. In some cases these needs can be met with a single physical server, but many situations require a cluster of two or more physical systems to support the desired number of users. When calculating the capability of a given processor, multiply the processor’s speed (in GHz) by the number of physical cores; if the processor supports hyper-threading, add a further 25%. The following calculation demonstrates the capabilities of a hyper-threaded quad-core 3.0 GHz processor:
3.0 GHz x 4 cores x 1.25 (hyper-threading) = 15.0 GHz = 12-25 users (depending on use-case)
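The processor math above can be sketched in Python; the helper names are illustrative assumptions, and the 25% hyper-threading bonus follows the guideline stated above:

```python
def cpu_capacity_ghz(clock_ghz, cores, hyper_threading=False):
    """Aggregate processing power: clock speed x physical cores,
    with a 25% bonus when hyper-threading is enabled."""
    capacity = clock_ghz * cores
    return capacity * 1.25 if hyper_threading else capacity

def supported_users(capacity_ghz, per_user_ghz):
    """Users supported at a given per-user CPU budget."""
    return int(capacity_ghz / per_user_ghz)

cap = cpu_capacity_ghz(3.0, 4, hyper_threading=True)        # 15.0 GHz
print(supported_users(cap, 1.2), supported_users(cap, 0.6)) # 12 25
```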
Memory
NComputing greatly increases the efficiency of memory usage compared with typical one-user-per-OS-instance environments. As such, it is typically adequate to provision 2-3 GB of memory per 10 users. Naturally, this value will vary depending on the types of applications used; memory-intensive programs such as photo- or video-editing suites require special consideration.
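A memory estimate based on the 2-3 GB per 10 users rule might look like this (a sketch; the conservative 3 GB end of the range is the assumed default):

```python
import math

def memory_gb(users, gb_per_10_users=3):
    """Memory estimate at a given GB-per-10-users ratio."""
    return math.ceil(users / 10 * gb_per_10_users)

print(memory_gb(50))  # 15 GB for 50 basic-use users
```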
Storage
Disk storage is often overlooked in standard desktop deployments, but it quickly becomes a serious bottleneck for virtual machine hosts. Rather than measuring just raw capacity (in gigabytes) or platter speeds, sizing a virtual machine server requires considering IOPS (Input/Output Operations Per Second). A typical SATA desktop hard drive delivers between 80 and 100 IOPS, and the average user draws between 5 and 10 IOPS. From these numbers alone you can see how storage access becomes a major limiter of user experience once 15 or 20 users are connected to the server. The solution is typically to implement a RAID storage environment. Below are some points to consider when deciding what sort of RAID to use, but it is ultimately up to you to choose the storage solution that best fits your environment and specific needs:
Read/Write IOPS – In a virtualized environment, there is rarely an even balance between read IOPS and write IOPS. In fact, usually around 80% of the hard drive activity is for write operations, and that is an important consideration when calculating your RAID requirements.
IOPS Penalty – The type of RAID used can cause significant IOPS penalties, which result in diminishing returns as you add more drives. The table below lists these RAID trade-offs:
| RAID Level     | Reads Used | Writes Used |
|----------------|------------|-------------|
| RAID 1 (or 10) | 1          | 2           |
| RAID 5 (or 50) | 1          | 4           |
| RAID 6 (or 60) | 1          | 6           |
Calculating RAID IOPS – To support a user base of 50 (250-500 IOPS), the server should be provisioned for 600 IOPS (the 500 IOPS high end plus the utilization headroom discussed above). To account for the RAID penalty, take 80% of the total IOPS requirement (480 IOPS) as writes, multiply that by the Writes Used value, then add the remaining read IOPS (120). The result is the net IOPS required. Under the different RAID levels, the calculation looks like this:
- RAID 1: (480 x 2) + 120 = 1,080 IOPS → 12 SATA drives
- RAID 5: (480 x 4) + 120 = 2,040 IOPS → 23 SATA drives
- RAID 6: (480 x 6) + 120 = 3,000 IOPS → 34 SATA drives
(Drive counts assume roughly 90 IOPS per SATA drive, the midpoint of the 80-100 range above.)
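The calculation above can be automated; the write penalties come from the table earlier in this section, and the ~90 IOPS-per-drive figure is an assumed midpoint of the 80-100 IOPS range quoted for SATA drives:

```python
import math

# Write penalty per RAID level (Writes Used column from the table above)
WRITE_PENALTY = {"RAID 1": 2, "RAID 5": 4, "RAID 6": 6}

def raid_backend_iops(frontend_iops, level, write_fraction=0.80):
    """Net backend IOPS needed, assuming ~80% of operations are writes."""
    writes = round(frontend_iops * write_fraction)
    reads = frontend_iops - writes
    return writes * WRITE_PENALTY[level] + reads

def sata_drives_needed(backend_iops, iops_per_drive=90):
    """Drive count at an assumed ~90 IOPS per SATA drive."""
    return math.ceil(backend_iops / iops_per_drive)

for level in ("RAID 1", "RAID 5", "RAID 6"):
    iops = raid_backend_iops(600, level)
    print(f"{level}: {iops} IOPS -> {sata_drives_needed(iops)} drives")
```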
Network
When determining the network needs of your deployment, it is important to realize that there are two different sources of network activity. The first is the communication between vSpace (on the server) and the L-series access devices. Depending on the product model and the type of use, this network load can range from 100 Kbps up to 10 or 15 Mbps per connected device; see the table below for specifics.
Typical Bandwidth Use (in Mbps)

| Use Case                    | L130 | L230 | L300    |
|-----------------------------|------|------|---------|
| Basic Office Apps           | 0.3  | 0.3  | 0.3     |
| Multimedia *                | 15   | 15   | 4 to 10 |
| Recommended Provisioning ** | 15   | 17   | 8 to 10 |
* The L130 & L230 are designed for no or limited multimedia use.
** These values are estimates; please test your environment to ensure that network needs are met.
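To turn the per-device figures above into an aggregate provisioning number, a simple calculation such as the following can help (a sketch; the 25% headroom factor mirrors the utilization guideline from earlier in this guide):

```python
def lan_bandwidth_mbps(devices, per_device_mbps, headroom=1.25):
    """Aggregate server-to-device bandwidth with ~25% headroom."""
    return devices * per_device_mbps * headroom

# 30 basic-office users at 0.3 Mbps each: roughly 11 Mbps aggregate
print(round(lan_bandwidth_mbps(30, 0.3), 1))
```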