Adel Ahmed Veteran
Joined: 21 Sep 2012 Posts: 1523
Posted: Thu Oct 16, 2014 2:18 pm Post subject: kvm internal network
I want to set up a high-capacity internal network like the internal network in VirtualBox; I will be transferring tens of GBs over this network link (using a backup product).
I cannot seem to set something like this up under KVM and virt-manager.
The best transfer rate I could achieve is 40 MB/s using NAT.
thanks
szatox Advocate
Joined: 27 Aug 2013 Posts: 3135
Posted: Thu Oct 16, 2014 6:08 pm Post subject:
4 simple tips from me:
1) use bridged networking
2) use virtio drivers (you need support in the host's kernel, the guest's kernel, and the option on the qemu command line)
3) use jumbo frames (it seems you must enable jumbo on at least one bridged interface before you can enable it on the bridge)
4) make sure it's not the hard drive that is your bottleneck
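The networking tips above can be sketched roughly as follows. This is a minimal sketch, assuming iproute2, a host NIC named eth0 and a bridge named br0 (both names are examples, not from the thread), run as root:

```shell
# 1) Bridged networking: create a bridge and enslave the physical NIC
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# 3) Jumbo frames: raise the MTU on the enslaved NIC first, then the bridge
ip link set eth0 mtu 9000
ip link set br0 mtu 9000

# 2) virtio NIC attached to the bridge on the qemu command line
# (the guest kernel needs virtio-net support, e.g. CONFIG_VIRTIO_NET)
qemu-system-x86_64 ... \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0
```

With virt-manager the same result is usually reached by selecting the host bridge as the network source and "virtio" as the NIC device model.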
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54236 Location: 56N 3W
Posted: Thu Oct 16, 2014 7:13 pm Post subject:
blakdeath,
If you are getting 40 MB/s between machines sharing the same rotating rust platters, it won't get much better.
The problem is that you are reading and writing the same HDD but in different areas, so you have lots of slow head movements.
A HDD will do between 120 MB/s on the outside tracks and 40 MB/s on the inside tracks, so your 40 MB/s sounds OK. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Adel Ahmed Veteran
Joined: 21 Sep 2012 Posts: 1523
Posted: Wed Oct 22, 2014 5:09 pm Post subject:
I'll give it a shot on my SSD and see how things go.
Adel Ahmed Veteran
Joined: 21 Sep 2012 Posts: 1523
Posted: Sun Oct 26, 2014 11:34 am Post subject:
32 MB/s on my SSD; this number is inadequate.
any ideas on how to improve?
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54236 Location: 56N 3W
Posted: Sun Oct 26, 2014 12:16 pm Post subject:
blakdeath,
That suggests the bottleneck is not the SSD, as head movements have been eliminated.
Does VBox have access to a partition, or is its filesystem a file on the host's filesystem?
The latter is slow, as there are two passes through the filesystem code: once in VBox and again on the host.
This can be made slower if the filesystem holding the VBox file uses journaling.
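As a hedged illustration of avoiding the double filesystem pass on KVM/qemu: the guest can be given a raw image opened with the host page cache bypassed, or a whole partition directly. The file and device names below are made-up examples, not from the thread:

```shell
# Raw image file on the host, exposed as a virtio disk,
# with cache=none to skip the host page cache
qemu-img create -f raw vm.img 20G
qemu-system-x86_64 ... \
    -drive file=vm.img,if=virtio,format=raw,cache=none

# Or hand the guest an entire partition, bypassing the host
# filesystem layer for guest disk I/O altogether
qemu-system-x86_64 ... \
    -drive file=/dev/sdb3,if=virtio,format=raw,cache=none
```

The partition variant removes one of the two filesystem traversals NeddySeagoon describes; cache=none also avoids double-caching the same blocks in host and guest RAM.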
Adel Ahmed Veteran
Joined: 21 Sep 2012 Posts: 1523
Posted: Sun Oct 26, 2014 12:56 pm Post subject:
it's a file on the host's file system
and I have no journalling on the host FS
szatox Advocate
Joined: 27 Aug 2013 Posts: 3135
Posted: Sun Oct 26, 2014 9:50 pm Post subject:
I think you're making a mistake trying to measure network performance by sending files over it.
Instead, use a tool that tests only the network, without generating (or being limited by) load on any other component.
Check out iperf or netperf, for example. You need one of them at both ends of the link you want to test. Launch one in listening mode, then launch the other pointing it at the first. They will tell you how fast the network is and whether you're looking for the bottleneck in the right place.
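For example, an iperf run between the two ends might look like this (192.168.122.10 is a placeholder for the listening end's address):

```shell
# On the server (listening) end:
iperf -s

# On the client end: run a 10-second TCP throughput test
# against the server and print the achieved bandwidth
iperf -c 192.168.122.10 -t 10
```

Because only memory buffers move through the socket, the reported rate reflects the network path alone, with no disk in the loop.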
Navar Guru
Joined: 20 Aug 2012 Posts: 353
Posted: Sun Oct 26, 2014 11:32 pm Post subject:
I just use dd, netcat and /dev/zero (local pull) -> /dev/null (remote sink) to test raw network throughput. Haven't found anything more efficient. I suppose you could toss pv in there too.
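That pipeline might look roughly like this (host name and port number are examples; netcat flag syntax varies between nc implementations):

```shell
# On the receiving ("remote sink") end: listen and discard everything
nc -l -p 5001 > /dev/null

# On the sending end: push 1 GB of zeros through the link;
# dd reports the transfer rate when it finishes
dd if=/dev/zero bs=1M count=1024 | nc remote-host 5001

# Optionally insert pv in the pipe for a live throughput readout
dd if=/dev/zero bs=1M count=1024 | pv | nc remote-host 5001
```

Like iperf, this keeps disks out of the measurement, since /dev/zero and /dev/null cost essentially nothing to read and write.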
Adel Ahmed Veteran
Joined: 21 Sep 2012 Posts: 1523
Posted: Fri Dec 19, 2014 9:47 am Post subject:
now that I've set up my KVM and libvirt again, I'll give it another shot using the dd method