Monday, April 19, 2010

Network Performance Test Xen/KVM (VT-d and Para-virt drivers)

Para-virtualized Network Driver
Note: In cases [1] and [2] the numbers are greater than the rated speed (1 Gbps) of the NIC, because the client communicates with the server via the Para-virt driver (for KVM and Xen) or via the loopback link (Native), so the traffic never crosses the physical NIC.

Passing a NIC to Guest Via VT-d


Summary of Results:
  • One should use Para-virtualized drivers
  • KVM and Xen have very similar network performance for both VT-d and Para-virt.
  • The maximum bandwidth of Virtio when connecting to a remote host is very close to that of VT-d or Native
  • Using Para-virt to connect to Dom0 is much faster than using VT-d (since in our setup the VT-d device is a second, physical NIC)

Type of Setup:

VT-d (e1000 PCI Passthrough)
Passing an e1000 NIC from the host to the guest via VT-d. The device needs to be specified at virt-install time with "--host-device=pci_8086_3a20" (otherwise you need to handle the complex PCI driver loading/unloading yourself), where "pci_8086_3a20" is the libvirt node-device name of the NIC. Use lspci -v and virsh nodedev-list to find it.
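For reference, a minimal virt-install invocation for this setup might look like the sketch below; the guest name, memory, VCPU count, and disk path are placeholders, and only the --host-device flag is taken from this setup:

# guest name, memory, VCPUs, and disk path are hypothetical
virt-install --name guest01 --ram 8192 --vcpus 8 \
             --disk path=/dev/vg0/guest01 \
             --host-device=pci_8086_3a20 \
             --import   # assumes an existing disk image; adjust the install source as needed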

KVM: Virtio
Using the virtio_net driver, set in the libvirt XML file, which produces "-net nic,macaddr=xxx,vlan=0,model=virtio" in the KVM arguments.
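As an illustration, the corresponding fragment of the libvirt XML might look like the sketch below; the bridge name and MAC address are placeholders:

<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>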
Note: to load the virtio_net driver correctly in an SLC5 DomU (guest), one needs to rebuild the initrd image as follows:
mkinitrd -f --with=virtio --with=virtio_pci --with=virtio_ring --with=virtio_blk --with=virtio_net initrd-2.6.18-164.15.1.el5.virtio.img 2.6.18-164.15.1.el5

XEN: xen_vnif
Using the xen_vnif para-virtualized network driver in the guest.

Native (Run in Dom0 - e1000)
This is the control setup; all test commands are run directly within Dom0 (the host machine).


Server Command:
iperf -s -w 65536 -p 12345
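(-s runs iperf in server mode, -w 65536 requests a 64 KB TCP window, and -p 12345 sets the port. The client commands below add -t 60 for a 60-second run and, where noted, -P 4 for four parallel streams.)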

Client Command:

[1] Link to dom0
iperf -c dom0 -w 65536 -p 12345 -t 60

[2] Link to dom0 with 4 parallel threads
iperf -c dom0 -w 65536 -p 12345 -t 60 -P 4

[3] Link to a remote box on the same switch
iperf -c remote -w 65536 -p 12345 -t 60

[4] Link to a remote box on the same switch with 4 parallel threads
iperf -c remote -w 65536 -p 12345 -t 60 -P 4

CPU Performance Xen/KVM



Summary:

  • For KVM, there is little CPU performance penalty.
  • Xen performs worse; maybe configuration optimizations could close the gap?
Test Setup:

Xen: 7 GB memory, 8 VCPUs
KVM: 8 GB memory, 8 VCPUs
Native: 8 GB memory, 8 CPUs

Test command:
nbench -v

KVM Disk Performance with different configurations


Summary:
  • Using a block device as vda with the virtio_blk driver is the fastest configuration (see the sketch after this list).
  • There is still a 5-10% penalty on both read and write.
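As a sketch, the libvirt disk element for that fastest case might look like the following; the LVM volume path is a placeholder:

<disk type='block' device='disk'>
  <source dev='/dev/vg0/guest01'/>
  <target dev='vda' bus='virtio'/>
</disk>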
Test Setup:

KVM: 8 GB memory, 8 VCPUs
Native: 8 GB memory, 8 CPUs

Test command:

bonnie++ -s 24576 -x 10 -n 512
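(-s 24576 writes a 24 GB test file, three times the 8 GB of RAM, so the page cache cannot mask the disk speed; -x 10 repeats the whole run 10 times; -n 512 sets the number of files, in multiples of 1024, for the small-file tests.)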

Disk Performance Xen/KVM with LVM and Para-virt drivers


Summary:
  • For KVM, there is a 5-10% penalty on both read and write.
  • For Xen, the read/write penalty is much larger, but seek time is better.
Test Setup:

Xen: 7 GB memory, 8 VCPUs
KVM: 8 GB memory, 8 VCPUs
Native: 8 GB memory, 8 CPUs

Test command:

bonnie++ -s 24576 -x 10 -n 512

Tuesday, April 13, 2010

Network Speed Test (iperf) in KVM (Virtio-net, Emulated, VT-d)


Note: Some of the values are too large for the chart's scale, so their actual numbers are printed on top of the bars instead of letting them distort the scale.

Summary of Results:
  • One should use Virtio in preference to VT-d pass-through or an emulated network driver
  • Emulated NICs are much slower than Virtio or VT-d
  • The maximum bandwidth of Virtio when connecting to a remote host is very close to that of VT-d or Native
  • Using Virtio to connect to Dom0 is much faster than using VT-d (since in our setup the VT-d device is a second, physical NIC)

Type of Setup:

[a] Emulation (rtl8139)
Emulating an rtl8139 100 Mbps NIC; this is the default if you don't change anything with virt-install (i.e., Eucalyptus would likely end up with this one).
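By analogy with the e1000 and Virtio setups below, this should correspond to a "-net nic,macaddr=xxx,vlan=0,model=rtl8139" entry in the KVM arguments (inferred from the other cases, not captured directly).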

[b] Emulation (e1000)
Emulating an e1000 1 Gbps NIC, set in the libvirt XML file, which produces "-net nic,macaddr=xxx,vlan=0,model=e1000" in the KVM arguments.

[c] VT-d (e1000 PCI Passthrough)
Passing an e1000 NIC from the host to the guest via VT-d. The device needs to be specified at virt-install time with "--host-device=pci_8086_3a20" (otherwise you need to handle the complex PCI driver loading/unloading yourself), where "pci_8086_3a20" is the libvirt node-device name of the NIC. Use lspci -v and virsh nodedev-list to find it.
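As a sketch of the manual alternative, libvirt can also detach/reattach the device on the host side; the device name below is the one from this setup (note libvirt's historical double-t spelling of dettach):

virsh nodedev-dettach pci_8086_3a20    # unbind the NIC from the host driver
virsh nodedev-reattach pci_8086_3a20   # return it to the host afterwards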

[d] Virtio
Using the virtio_net driver, set in the libvirt XML file, which produces "-net nic,macaddr=xxx,vlan=0,model=virtio" in the KVM arguments.
Note: to load the virtio_net driver correctly in an SLC5 DomU (guest), one needs to rebuild the initrd image as follows:
mkinitrd -f --with=virtio --with=virtio_pci --with=virtio_ring --with=virtio_blk --with=virtio_net initrd-2.6.18-164.15.1.el5.virtio.img 2.6.18-164.15.1.el5
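Once the guest is rebooted with the new initrd, a quick way to confirm the para-virt modules are actually in use is:

lsmod | grep virtio   # virtio_net, virtio_pci, etc. should be listed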

[z] Native (Run in Dom0 - e1000)
This is the control setup; all test commands are run directly within Dom0 (the host machine).


Server Command:
iperf -s -w 65536 -p 12345

Client Command:

[1] Link to dom0
iperf -c dom0 -w 65536 -p 12345 -t 60

[2] Link to dom0 with 4 parallel threads
iperf -c dom0 -w 65536 -p 12345 -t 60 -P 4

[3] Link to a remote box on the same switch
iperf -c remote -w 65536 -p 12345 -t 60

[4] Link to a remote box on the same switch with 4 parallel threads
iperf -c remote -w 65536 -p 12345 -t 60 -P 4