Should Linux VMs in VMware/ESX have a swap partition?
On a VMware ESX setup, what is the difference between these options?
- a Linux VM with 1 GB RAM and a 1 GB swap partition, where the VM uses 1.5 GB of RAM
- a Linux VM with 1 GB RAM and no swap partition, where the VM uses 1.5 GB of RAM
I mean, in both cases swap is being used:
- in the first case, swapping is done to the Linux swap partition
- in the second case, VMware will swap 512 MB to the VMware storage pool.
So is there any point in giving Linux VMs a swap partition?
Ignoring OS-specific reasons, I have two reasons why it’s a bad idea not to run with a swap partition/file.
- If you have 1.5 GB of RAM allocated to a VM with no swap file/partition and it wants to use 1.5 GB + 1 MB, it will report an out-of-memory error. With swap space, it can move inactive data out of RAM and onto disk instead.
- The guest OS does a much better job of memory management than the host. This is why technology like memory ballooning exists: the host can only make educated guesses about which memory isn’t needed right now, but the guest knows at a much more detailed level (this keeps OS memory from being swapped out by the host, which could kill your performance).
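If a guest was built without swap, a swap file can be added after the fact. A minimal sketch — the /tmp/swapfile path and the 16 MB size are purely illustrative; for real use you would pick a sensible location and size:

```shell
# Sketch: preparing a swap file inside the guest.
# Path and size here are illustrative, not recommendations.
dd if=/dev/zero of=/tmp/swapfile bs=1M count=16 2>/dev/null
chmod 600 /tmp/swapfile          # swap files must not be world-readable
mkswap /tmp/swapfile             # write the swap signature
# swapon /tmp/swapfile           # activation requires root
```

Activation with swapon (and making it persistent via /etc/fstab) needs root, so it is shown commented out.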
Yes. It is the Unix way.
Unix (even Linux) expects to be able to swap.
Bad Things happen when the system can’t swap (either because it was misconfigured without a swap partition or because the swap space is full). In Linux one of these Bad Things is the Out Of Memory Killer, which will plunge a knife into the back of the program it thinks is using the most RAM (database servers are a favorite target).
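The OOM killer’s victim selection can be observed and biased from userspace, which is how databases typically protect themselves. A quick sketch — the value 500 is arbitrary, chosen only to show the effect:

```shell
# Each process exposes an OOM "badness" score plus an adjustment knob.
# Raising oom_score_adj (range -1000..1000) marks this shell as a
# preferred OOM-killer victim; lowering it below 0 requires privilege,
# which is how daemons like database servers shield themselves.
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj
# The kernel's current estimate of how killable this process is:
cat /proc/$$/oom_score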
What do you have /proc/sys/vm/overcommit_memory set to? From the kernel documentation:
0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.
1 - Always overcommit. Appropriate for some scientific applications.
2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable percentage (default is 50) of physical RAM. Depending on the percentage you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate.
Thus if you are using 1, there is no difference. If you are using 2 with no Linux swap, then no process will be able to allocate that extra 512 MB of (virtual) memory. The outcome isn’t clear for 0.
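To see which mode a given guest is actually running in, and how much commit headroom mode 2 would allow, you can read it straight from procfs (a quick check, nothing VMware-specific):

```shell
# 0 = heuristic, 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# Under mode 2 the ceiling is swap + overcommit_ratio% of RAM; the kernel
# publishes the resulting limit and current commitments in meminfo:
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
# The configurable percentage itself (default 50):
cat /proc/sys/vm/overcommit_ratio
```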
Edit: From http://utcc.utoronto.ca/~cks/space/blog/linux/LinuxVMOvercommit this is how 0 works:
Heuristic overcommit attempts to work out how much memory the system
could give you if it reclaimed all the memory it could and no other
process used more RAM than it currently is; if you are asking for more
than this, your allocation is refused. In specific, the theoretical
‘free memory’ number is calculated by adding up free swap space, free
RAM (less 1/32nd if you are not root), and all space used by the
unified buffer cache and kernel data that is labeled as reclaimable
(less some reserved pages).
So it uses swap in the calculation as well. In general I’d follow the RHEL recommendation:
With M = amount of RAM in GB and S = amount of swap in GB:
If M < 2, then S = M * 2; else S = M + 2.
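That rule of thumb is trivial to script. swap_size_gb below is a hypothetical helper name, not a standard tool; the arithmetic is exactly the formula above:

```shell
# RHEL rule of thumb: swap = 2*RAM when RAM < 2 GB, else RAM + 2 GB.
# swap_size_gb is a hypothetical helper, not a standard utility.
swap_size_gb() {
  local ram_gb=$1
  if [ "$ram_gb" -lt 2 ]; then
    echo $(( ram_gb * 2 ))
  else
    echo $(( ram_gb + 2 ))
  fi
}

swap_size_gb 1   # the 1 GB VM from the question -> 2 GB of swap
swap_size_gb 8   # an 8 GB machine -> 10 GB of swap
```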
Swap partitions can potentially be faster than swap files, especially when the root disk is nearly full and a contiguous swap file can’t be created without fragmentation, not to mention the overhead added by the filesystem and, if applicable, layers like LVM.
But the performance of both machines will suck anyway if you keep 33% of your memory requirements on the disk.