When considering server memory requirements, the goal is to have enough physical memory relative to active virtual memory that paging can be avoided. Virtual memory is paged to disk, a slow process; if intense paging occurs, significant performance degradation is unavoidable. Memory requirements also scale with the number and speed of processors: a server with more cores processes information faster and requires more memory than a server with fewer cores. 64-bit operating systems also use more memory than their 32-bit predecessors.

Memory requirements, while clearly dependent on the number of users, depend more heavily on the type of applications being run. For example, text-based applications tend to require less memory than GUI/graphics-based applications. The lightest consumers are applications such as directory services (DNS, DHCP, or LDAP servers) that perform a very specific task; 1 GB to 2 GB is reasonable. Communication servers (such as IP phone, voice, and email/fax servers) are intermediate memory users in the 2 GB to 4 GB range. Web server memory depends mainly on the number of concurrent users and the required response time; these servers are also in the 2 GB to 4 GB range. Application server memory is in a similar range, and it depends greatly on the application and on the number of users. Database servers are in general the most memory intensive: they can use the maximum amount of memory a server can provide to cache the database content, dramatically improving performance, and they start at 4 GB for small databases. Some engineering tools are memory-bound on the complexity of the model they can simulate; the larger the memory on the server, the larger the circuit they can simulate or the more accurate the analysis they can perform. Tools for oil and energy exploration and for biotech research fall in the same class.
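The starting-point ranges above can be collected into a small lookup. This is an illustrative sketch only: the role names and the helper function are assumptions made for the example, and the figures are the rough guidelines from the text, not product requirements.

```python
# Rough starting-point memory ranges (GB) per server role, taken from
# the guidelines above. Role names and helper are illustrative.
MEMORY_RANGES_GB = {
    "directory": (1, 2),       # DNS, DHCP, LDAP: one specific task
    "communication": (2, 4),   # IP phone, voice, email/fax servers
    "web": (2, 4),             # scales with concurrent users/response time
    "application": (2, 4),     # varies with the application and users
    "database": (4, None),     # 4 GB and up; more memory means more cache
}

def suggested_memory_gb(role: str) -> str:
    """Return a human-readable memory range for a server role."""
    low, high = MEMORY_RANGES_GB[role]
    return f"{low}-{high} GB" if high is not None else f"{low}+ GB"
```

For example, `suggested_memory_gb("database")` yields `"4+ GB"`, reflecting that database servers benefit from as much memory as the server can provide.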
Memory on Virtualized Servers
The amount of memory installed in a server may be insufficient in a virtualized environment, where multiple virtual machines run on the same physical server. Each virtual machine consumes memory based on its configured size, plus a small amount of additional overhead memory for virtualization. Hypervisors typically allow specifying a minimum and a maximum amount of memory per VM: the minimum is guaranteed to be always available; the maximum can be reached as a function of the state of the other VMs. This technique is called memory over-commitment, and it is useful when some virtual machines are lightly loaded while others are more heavily loaded, and relative activity levels vary over time.
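The accounting behind over-commitment can be sketched as follows. This is a minimal model under stated assumptions, not any particular hypervisor's implementation: it assumes the hypervisor must be able to honor every VM's guaranteed minimum (plus a small per-VM overhead), while the sum of the maximums is allowed to exceed physical RAM.

```python
# Sketch of memory over-commitment accounting. All names and the
# 0.1 GB overhead figure are illustrative assumptions; real
# hypervisors compute per-VM overhead from the VM's configuration.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    min_gb: float            # guaranteed (reserved) memory
    max_gb: float            # upper bound, reachable when others are idle
    overhead_gb: float = 0.1 # small virtualization overhead

def can_admit(vms, host_ram_gb):
    """Admission check: guaranteed minimums plus overhead must fit in
    physical memory; the maximums may be over-committed."""
    reserved = sum(vm.min_gb + vm.overhead_gb for vm in vms)
    return reserved <= host_ram_gb

vms = [VM("web", 2, 4), VM("db", 4, 8), VM("app", 2, 4)]
admitted = can_admit(vms, host_ram_gb=12)     # 8.3 GB reserved fits in 12 GB
committed = sum(vm.max_gb for vm in vms)      # 16 GB promised vs 12 GB physical
```

Here the host is over-committed (16 GB of maximums against 12 GB of RAM) yet every VM's guarantee is honored, which works as long as the VMs do not all demand their maximum at once.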
The UCS Approach
In the Intel Nehalem microarchitecture, memory controllers are integrated on the processor socket, and therefore memory can be added only with additional sockets; this does not solve the original issue of increasing the memory per socket. The limit on the number of memory sockets is not directly related to the addressing capability of the processor (Nehalem is capable of addressing larger memories). Electrical loading, DRAM device density, and details of the processor's pinout are the real limiting factors.
When a larger memory system is desired, UCS uses Cisco ASICs called the "Catalina chipset" to expand the number of memory sockets that can be connected to each individual memory bus. These ASICs are inserted between the processor and the DIMMs on the memory bus, minimizing the electrical load and bypassing the control signal limitations of the Nehalem CPU design. The achieved expansion factor is 4X: using 8 GB DIMMs, a Nehalem socket can connect up to 192 GB of RAM, and a dual-socket blade can host up to 384 GB of RAM. This expansion is done at the electrical level, and it is completely transparent to the operating system and its applications. The BIOS is extended to initialize and monitor the ASICs and to perform error reporting.
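The capacity figures above follow directly from the stated 4X expansion factor and DIMM size. The short check below derives the per-socket DIMM count from those stated totals; the baseline DIMM count is an inference from the 4X factor, not a board-level specification.

```python
# Back-of-the-envelope check of the Catalina expansion figures.
EXPANSION = 4    # expansion factor provided by the Catalina ASICs
DIMM_GB = 8      # DIMM size used in the text's example

dimms_per_socket = 192 // DIMM_GB              # 24 DIMMs behind one socket
baseline_dimms = dimms_per_socket // EXPANSION # 6 DIMMs without the ASICs

per_socket_gb = dimms_per_socket * DIMM_GB     # 192 GB per socket
dual_socket_gb = 2 * per_socket_gb             # 384 GB on a dual-socket blade
```

The same arithmetic shows why larger DIMMs raise the ceiling proportionally: the ASICs multiply the number of DIMM sockets per bus, so total capacity scales with both the expansion factor and the DIMM density.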