IO throughput issues with VMware Server 2 on an SLES 11 host with a Red Hat guest.

I am not sure this is the best way to measure it, but I have VMware Server 2 installed on a 64-bit SLES 11 host. The host has 6 GB of RAM, with 5 GB allocated to VMware, and two SATA disks in a software RAID 1 array. The guest runs 32-bit Red Hat Enterprise Linux with 1 GB of RAM allocated. I see the following performance differences when I test. Note that this box is effectively idle apart from the other VMs and some minimal activity on the host, so I don't think background load explains what I'm seeing.
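Since the disks are in software RAID 1, one thing probably worth ruling out first is an md resync or a degraded array, either of which can cut throughput dramatically; something like:

cat /proc/mdstat          # look for "resync"/"recovery" in progress or a missing member, e.g. [U_]
mdadm --detail /dev/md0   # adjust to whichever md device holds the RAID 1 array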

 

 

Measurement of read performance on the host (showing the raw storage as well as the logical volume):

 

 

These are typical numbers, by the way.

hdparm -tT /dev/sda2

/dev/sda2:
Timing cached reads:   2138 MB in  2.00 seconds = 1069.57 MB/sec
Timing buffered disk reads:  208 MB in  3.02 seconds =  68.98 MB/sec

/dev/lvmpool/open_client_test:
Timing cached reads:   2292 MB in  2.00 seconds = 1147.19 MB/sec
Timing buffered disk reads:  186 MB in  3.01 seconds =  61.86 MB/sec
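As a cross-check (not one of the numbers above), a sequential read through dd with O_DIRECT bypasses the page cache entirely and can give a steadier figure than hdparm -t; the LV path is the same one used above:

dd if=/dev/lvmpool/open_client_test of=/dev/null bs=1M count=1024 iflag=direct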

 

 

Measurement of read performance on the guest:

 

 

This is roughly typical, though on the high side of what I've seen.

hdparm -tT /dev/sda1

/dev/sda1:
Timing cached reads:   3708 MB in  2.00 seconds = 1853.38 MB/sec
Timing buffered disk reads:   94 MB in  3.07 seconds =  30.58 MB/sec
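It would probably also be worth watching the physical disks on the host while the guest benchmark runs, to see whether the slowdown shows up as saturation or latency underneath; something like the following, assuming the two SATA disks are sda and sdb and the sysstat package is installed:

iostat -x 1 /dev/sda /dev/sdb    # watch %util and await while the guest runs hdparm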

 

 

I have the following settings on the host, based on various advice I've read elsewhere. None of them appears to make any difference:

 

 

prefvmx.minVmMemPct = "100"
prefvmx.allVMMemoryLimit = "5385"
prefvmx.useRecommendedLockedMemSize = "TRUE"

mainMem.partialLazySave = "TRUE"
mainMem.partialLazyRestore = "TRUE"

mainMem.useNamedFile = "FALSE"
tmpDirectory = "/dev"
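Since most of these options are about keeping guest memory locked in RAM, a quick sanity check that the host isn't swapping while the VMs are up would be:

free -m      # swap used should stay near zero with 6 GB RAM and 5 GB given to VMware
vmstat 1     # the si/so columns show swap-in/swap-out during the guest benchmark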

 

 

 

For the guest, I have added the following to its configuration, which has also made no difference (I initially used all stock settings and had 2 GB allocated):

 

 

 

scsi0:0.writeThrough = "TRUE"
mem.ShareScanTotal=0
mem.ShareScanVM=0
mem.ShareScanThreshold=4096
sched.mem.maxmemctl=0
sched.mem.pshare.enable = "FALSE"

 

 

On top of this, I have preallocated all storage, created a single 13 GB SCSI disk with LSI Logic as the SCSI controller, and placed it on a dedicated logical volume. The performance isn't terrible, but I would expect the guest to reach perhaps 80-90% of the host's throughput, given that the system isn't heavily loaded and any IO overhead would show up on the host as well.
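The relevant .vmx lines look roughly like this (the disk file name is approximate, since I'm not pasting the whole .vmx):

scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
# file name below is approximate
scsi0:0.fileName = "rhel-guest.vmdk"
scsi0:0.writeThrough = "TRUE"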

 

 

Does anyone have any ideas? I have seen a lot of questions on the topic of IO, but typically the performance improves after making the changes above, whereas they didn't appear to help me at all.

 

 

