Hello there,
I've recently applied a few performance tweaks to improve the I/O subsystem of my VMware Server 2 host on Linux, by putting the following lines in the /etc/vmware/conf file:
# Custom:
tmpDirectory = "/dev/shm"
mainMem.useNamedFile = "FALSE"
sched.mem.pshare.enable = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"
prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"
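The tmpDirectory tweak works because vmem files placed on tmpfs live entirely in the page cache. Here is a minimal sketch (my own illustration, assuming a Linux host with a writable /dev/shm; the file name cache-demo is arbitrary) showing that data written to /dev/shm is accounted under "Cached" in /proc/meminfo, which is the same counter free -m reports:

```shell
#!/bin/sh
# Sketch: show that pages written to /dev/shm (tmpfs) are accounted
# as "Cached" by the kernel, which is what `free -m` displays.
# Assumes a Linux host with a writable /dev/shm.

cached_kb() {
    # The Cached: line in /proc/meminfo is reported in kB.
    awk '/^Cached:/ {print $2}' /proc/meminfo
}

before=$(cached_kb)

# Create a 64 MB file on tmpfs, much as a vmem file would occupy it.
dd if=/dev/zero of=/dev/shm/cache-demo bs=1M count=64 2>/dev/null

after=$(cached_kb)
rm -f /dev/shm/cache-demo

echo "Cached grew by $(( (after - before) / 1024 )) MB"
```

So a growing "cached" figure per VM is expected with this setup; what surprised me is by how much it grows, as shown below.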
After that, disk performance went from terrible to acceptable. I keep the vmem files in the /dev/shm file system rather than in a dedicated tmpfs file system.
But I noticed that every time I start a new VM with 256 MB of RAM, Linux caches almost twice the size of the VM (excerpt from free -m, values in MB):
VM    | MemVM | cached | DiffCached | free  | used | DiffUsed
no VM |     0 |    314 |            | 15788 |  448 |
VM1   |   256 |    600 |        286 | 15442 |  795 |      347
VM2   |   256 |   1095 |        495 | 14910 | 1326 |      531
VM3   |   256 |   1639 |        544 | 14335 | 1901 |      575
VM4   |   256 |   2052 |        413 | 13890 | 2346 |      445
VM5   |   256 |   2510 |        458 | 13393 | 2843 |      497
VM6   |   256 |   2938 |        428 | 12925 | 3311 |      468
VM7   |   256 |   3369 |        431 | 12472 | 3764 |      453
VM8   |   256 |   3834 |        465 | 11970 | 4267 |      503
Yet the /dev/shm file system itself grows normally, in steps of 256 MB per VM. In other words, I'm effectively wasting half of the 16 GB on my host.
Do you think this could be related to the fact that I use /dev/shm to store the vmem files rather than a dedicated tmpfs mount?
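For comparison, this is roughly how I would set up a dedicated tmpfs instead (a sketch only; the mount point /var/vmware-tmp and the size=4g option are my own arbitrary choices, not anything VMware prescribes):

```shell
# Sketch: mount a dedicated tmpfs instead of using /dev/shm (run as root).
# /var/vmware-tmp and size=4g are illustrative choices only.
mkdir -p /var/vmware-tmp
mount -t tmpfs -o size=4g tmpfs /var/vmware-tmp

# Then point VMware at it instead of /dev/shm:
#   tmpDirectory = "/var/vmware-tmp"
```

I haven't tried this yet, which is why I'm asking whether it would behave differently.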
Thanks in advance
My host is a Dell PowerEdge 1900 with one quad-core CPU and 16 GB of RAM.
OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
(2.6.9-34.ELsmp #1 SMP Fri Feb 24 16:54:53 EST 2006 i686 i686 i386 GNU/Linux)