Gentoo Forums :: Networking & Security
Windows faster than Linux?

grooveman
Veteran
Joined: 24 Feb 2003
Posts: 1217
PostPosted: Sun Feb 21, 2016 10:22 pm    Post subject: Windows faster than Linux?

Hi,

I have set up a FreeNAS box as a Samba server, and I have bumped into something interesting and, in all honesty, a little bit disappointing.

It seems that when I'm using Windows 8 (which I am loath to do), I saturate the full 1 Gb network connection: I can transfer files at about 110 MiB/s, for both reads and writes.

However, when I boot the same computer into Gentoo Linux, that speed drops to about 60 MiB/s on writes and about 75 MiB/s on reads -- moving the same files on the same hardware, over the same network.

I have spent a few days trying to tweak the smb4.conf file on the FreeNAS box to get a better transfer rate, but nothing I have tried has worked.

I have tried suggestions here:
a rather long google-link

here:
http://www.eggplant.pro/blog/faster-samba-smb-cifs-share-performance/

here:
https://calomel.org/samba_optimize.html

and here:
https://wiki.amahi.org/index.php/Make_Samba_Go_Faster

And many other places as well, but they usually reference the same directives.

I'm beginning to believe that the problem is not with my smb(4).conf, but rather with my Linux clients. Could it be that there are tweaks to the Linux TCP/IP stack on the client side that I should be making?
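By stack tweaks I mean the usual sysctl buffer knobs that the tuning guides hand out, something along these lines (the values are only the examples those guides use; I have no idea yet whether any of them actually matter here):

Code:
# client-side sysctls commonly suggested by Samba tuning guides
# (example values only -- not verified to help in this case)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"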

I have verified this on other machines as well. The Windows PCs are able to saturate the gigabit link, but not the Linux machines. They are always 25% to 50% slower.

Any ideas on what I can do to get the Linux machines to saturate a 1 Gb network connection?

Thanks!

G

[Formed URL tags around a link to Google due to it being past the length that tends to break the forum layout a little bit. - Chiitoo]
_________________
To look without without looking within is like looking without without looking at all.

chithanh
Developer
Joined: 05 Aug 2006
Posts: 2158
Location: Berlin, Germany
PostPosted: Sun Feb 21, 2016 11:02 pm

Try to identify the bottleneck.

When doing Samba transfers, is the CPU at 100% load?
Are you limited by network (iperf) or disk (bonnie++) performance? Which filesystem?
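For example (hostnames and paths are placeholders; point bonnie++ at the dataset that backs the share):

Code:
# network throughput, independent of Samba
iperf -s                 # on the FreeNAS box
iperf -c <nas-ip> -t 30  # on the Gentoo client

# disk throughput on the share's filesystem
bonnie++ -d /mnt/tank/testdir -u nobody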

grooveman
Veteran
Joined: 24 Feb 2003
Posts: 1217
PostPosted: Mon Feb 22, 2016 1:01 am

The CPU does not appear to even be touched during the transfer; there is no load on the system.

Filesystem speed is likewise ruled out by copying the same files to another folder on the system; those copies are almost instantaneous.
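(For what it's worth, the check was nothing fancier than timing a local copy and a raw read on the NAS; the paths here are only examples:)

Code:
time cp /mnt/tank/share/bigfile /mnt/tank/scratch/
dd if=/mnt/tank/share/bigfile of=/dev/null bs=1M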

This really seems to be (Linux) networking related.

Thanks.

G

gordonb3
Apprentice
Joined: 01 Jul 2015
Posts: 185
PostPosted: Mon Feb 22, 2016 8:36 am    Post subject: Re: Windows faster than Linux?

grooveman wrote:

I'm beginning to believe that the problem is not with my smb(4).conf, but rather with my Linux clients. Could it be that there are tweaks to the Linux TCP/IP stack on the client side that I should be making?

That is the logical conclusion if Windows 8 machines can access the files more quickly on the same Samba server. It also means that the links you followed are rather useless here, because they are about speeding up the server, not the client.

Now the interesting part in your post is that you mention copying files, plural. Starting with Vista, Microsoft did some work on the SMB protocol, resulting in what is now known as SMB2. This incorporates client-side caching of frequently requested data that usually does not change very often, and the Samba server supports it. Plain CIFS, however, is not SMB2 and as a result will spend more time reloading and interpreting that "static" data. You will likely be able to verify this by copying a single large file rather than a set of files, which should result in similar times for both Windows and Linux.
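If your Linux clients mount the share with mount.cifs, you can also ask for the newer dialect explicitly via the vers= option (available in reasonably recent kernels/cifs-utils; the share name, mountpoint and credentials below are just placeholders):

Code:
# request the SMB 2.1 dialect from the Linux client
mount -t cifs //nas/share /mnt/share -o vers=2.1,username=guest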

schorsch_76
Guru
Joined: 19 Jun 2012
Posts: 450
PostPosted: Mon Feb 22, 2016 8:50 am

Try to use netcat to split the problem into two halves.

Computer A (Receiver):
Code:
nc -l 10000 > /dev/null


Computer B (Sender):
Code:
cat /dev/zero | pv | nc targetip 10000



pv = sys-apps/pv (pipe viewer, to measure the speed)
nc = netcat. There are different versions in the Portage tree.

If you reach your desired speed on the raw network, you have eliminated the network stack as an error source. Otherwise, the problem sits in the network stack (module options, whatever). With this test (/dev/zero -> TCP socket -> /dev/null) you pipe zeros to the other computer's /dev/null; neither end is filesystem or CPU bound.
_________________
// valid again: I forgot about the git access. Now 1.2GB big. Start: 2015-06-25
git daily portage tree
Web: https://github.com/schorsch1976/portage
git clone https://github.com/schorsch1976/portage

chithanh
Developer
Joined: 05 Aug 2006
Posts: 2158
Location: Berlin, Germany
PostPosted: Mon Feb 22, 2016 10:27 am

If you suspect that the network is the bottleneck, iperf (available both on Gentoo and FreeNAS) will be able to verify this.

Any further speculation is of limited use until hard data comes in from measurements.

gordonb3
Apprentice
Joined: 01 Jul 2015
Posts: 185
PostPosted: Mon Feb 22, 2016 11:37 am

chithanh wrote:
If you suspect that the network is the bottleneck,...

He doesn't. Read the opening post:
grooveman wrote:
However, when I boot the same computer into Gentoo Linux, that speed drops to about 60 MiB/s on writes and about 75 MiB/s on reads -- moving the same files on the same hardware, over the same network.

It is software related, and unless one thinks the file transfer should run even faster when both the client machine and the NAS are booted into Windows, the problem is restricted to what is running on the client.

chithanh
Developer
Joined: 05 Aug 2006
Posts: 2158
Location: Berlin, Germany
PostPosted: Mon Feb 22, 2016 11:40 am

gordonb3 wrote:
chithanh wrote:
If you suspect that the network is the bottleneck,...

He doesn't. Read the opening post:

He does. Read his second post:
grooveman wrote:
This really seems to be (Linux) networking related.

iperf will confirm or refute this hypothesis.
If it confirms it, the next step would be identifying which exact part of the network is the bottleneck. It could be the Linux TCP/IP stack, the kernel NIC driver, or some misconfiguration that makes the NIC work non-optimally with other network components.
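For the NIC side, ethtool is the quickest way to see what was negotiated and which offloads are in play (replace eth0 with your interface):

Code:
ethtool eth0       # negotiated speed and duplex
ethtool -k eth0    # offload settings (tso, gso, gro, ...)
ethtool -S eth0    # driver statistics -- look for errors and drops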

grooveman
Veteran
Joined: 24 Feb 2003
Posts: 1217
PostPosted: Mon Feb 22, 2016 2:23 pm

Thank you.

Since this seems to happen out of the box for all of the Linux boxes I deal with, I guess I was hoping someone knew of a known issue inside the Linux TCP/IP stack that would account for this.

I really wanted to see if other people could corroborate this: the notion that Linux is slower than Windows on the network (at least when using SMB).

I will see what I can get out of iperf.

Thank you :)

gordonb3
Apprentice
Joined: 01 Jul 2015
Posts: 185
PostPosted: Mon Feb 22, 2016 3:44 pm

chithanh wrote:
gordonb3 wrote:
chithanh wrote:
If you suspect that the network is the bottleneck,...

He doesn't. Read the opening post:

He does. Read his second post:
grooveman wrote:
This really seems to be (Linux) networking related.

iperf will confirm or refute this hypothesis.
If it confirms it, the next step would be identifying which exact part of the network is the bottleneck. It could be the Linux TCP/IP stack, the kernel NIC driver, or some misconfiguration that makes the NIC work non-optimally with other network components.

So we're both smart-asses :lol:

Yes, if the low network throughput shows up in simple non-SMB network transfers as well, that would mean some low-level networking component is not working the way it should. Because the FreeNAS machine appears to be unaffected by whatever is causing this, I would not expect much to come from this, but I guess it can't hurt. It's as you suggested: if the client machines all use the same NIC, one that is different from the one in the FreeNAS machine, a faulty driver could slow down networking.

1clue
Advocate
Joined: 05 Feb 2006
Posts: 2569
PostPosted: Mon Feb 22, 2016 5:26 pm

schorsch_76 wrote:
Try to use netcat to split the problem into two halves.

Computer A (Receiver):
Code:
nc -l 10000 > /dev/null


Computer B (Sender):
Code:
cat /dev/zero | pv | nc targetip 10000



pv = sys-apps/pv (pipe viewer, to measure the speed)
nc = netcat. There are different versions in the Portage tree.

If you reach your desired speed on the raw network, you have eliminated the network stack as an error source. Otherwise, the problem sits in the network stack (module options, whatever). With this test (/dev/zero -> TCP socket -> /dev/null) you pipe zeros to the other computer's /dev/null; neither end is filesystem or CPU bound.


Just as a control group, I tried this with my systems and got 112 MiB/s; wire speed would be 119 MiB/s. This is a Gentoo client (Atom C2758 + Intel I354 NIC, igb driver) and an Ubuntu server (i7 920 + Realtek RTL8111 NIC, r8169 driver).

grooveman
Veteran
Joined: 24 Feb 2003
Posts: 1217
PostPosted: Mon Feb 22, 2016 6:05 pm    Post subject: Re: Windows faster than Linux?

gordonb3 wrote:
grooveman wrote:

I'm beginning to believe that the problem is not with my smb(4).conf, but rather with my Linux clients. Could it be that there are tweaks to the Linux TCP/IP stack on the client side that I should be making?

That is the logical conclusion if Windows 8 machines can access the files more quickly on the same Samba server. It also means that the links you followed are rather useless here, because they are about speeding up the server, not the client.

Now the interesting part in your post is that you mention copying files, plural. Starting with Vista, Microsoft did some work on the SMB protocol, resulting in what is now known as SMB2. This incorporates client-side caching of frequently requested data that usually does not change very often, and the Samba server supports it. Plain CIFS, however, is not SMB2 and as a result will spend more time reloading and interpreting that "static" data. You will likely be able to verify this by copying a single large file rather than a set of files, which should result in similar times for both Windows and Linux.


Hi, sorry, I missed this post (and the next). Actually, I have tried sending one large file as well; the speed difference is still there.

Thanks.

szatox
Advocate
Joined: 27 Aug 2013
Posts: 3136
PostPosted: Mon Feb 22, 2016 6:07 pm

Quote:
Computer A (Receiver):
Code:
nc -l 10000 > /dev/null


Computer B (Sender):
Code:
cat /dev/zero | pv | nc targetip 10000

There is a serious pitfall here. I noticed it when I was testing "virtual" connections: that pipe is too thin even for gigabit Ethernet.
Use nc < /dev/zero and nc > /dev/null instead, and measure the speed with another tool, like iftop or iptables counters, or even track progress with ifconfig if you can't do better -- but do not push that data through a pipe.
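Concretely, something like this (the interface name is just an example):

Code:
# receiver
nc -l 10000 > /dev/null

# sender
nc targetip 10000 < /dev/zero

# watch the rate from a second terminal on either end
iftop -i eth0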

grooveman
Veteran
Joined: 24 Feb 2003
Posts: 1217
PostPosted: Mon Feb 22, 2016 6:31 pm

OK. I can send data from my Linux PC to the NAS -- and it cruises along at around 111 MiB/s -- commensurate with the Windows machine moving data over SMB.

However, I'm having trouble sending data (via nc) from the NAS to the linux machine.

The line (executed from the NAS)
Code:
cat /dev/zero |nc 10.9.99.200 10000


appears to do nothing. I just get my prompt back.

The line:
Code:
cat /dev/zero >nc 10.9.99.200 10000


Seems to hang. Of course, all the while I'm running:
Code:
nc -l 10000 > /dev/null
on the Linux box, and I have iptraf open in another terminal. Nothing registers. Both the NAS and the Linux machine just sit there, the prompt apparently hanging. iptraf sits at zero.
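One thing I have not ruled out yet is that the two netcats may want different listener syntax -- the traditional netcat needs -p for the port, while the OpenBSD rewrite (which I believe FreeBSD ships) does not -- so I probably need to try both forms on the Linux side:

Code:
# OpenBSD-style netcat
nc -l 10000 > /dev/null
# traditional/GNU-style netcat
nc -l -p 10000 > /dev/null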

1clue
Advocate
Joined: 05 Feb 2006
Posts: 2569
PostPosted: Mon Feb 22, 2016 6:46 pm

szatox wrote:
Quote:
Computer A (Receiver):
Code:
nc -l 10000 > /dev/null


Computer B (Sender):
Code:
cat /dev/zero | pv | nc targetip 10000

There is a serious pitfall here. I noticed it when I was testing "virtual" connections: that pipe is too thin even for gigabit Ethernet.
Use nc < /dev/zero and nc > /dev/null instead, and measure the speed with another tool, like iftop or iptables counters, or even track progress with ifconfig if you can't do better -- but do not push that data through a pipe.


@szatox,

I don't understand what you're saying here. Are you saying that a pipe cannot support more than 1 Gbps? What exactly can't support 1 Gbps?

FWIW, I have an Atom box that can run cat /dev/zero | pv -ar | pbzip2 > /dev/null at 319 MiB/s; it's doing it right now. That's 2.68 Gbps.

Akkara
Bodhisattva
Joined: 28 Mar 2006
Posts: 6702
Location: &akkara
PostPosted: Mon Feb 22, 2016 7:41 pm

grooveman wrote:
The line:
Code:
cat /dev/zero >nc 10.9.99.200 10000

Seems to hang. Of course, all the while I'm running:...

That line simply made a file named "nc" and filled it with zeros. You might want to remove it now so it doesn't take up space. :)

I think the suggestion had been to run:
Code:
nc 10.9.99.200 10000 </dev/zero

_________________
Many think that Dilbert is a comic. Unfortunately it is a documentary.

gordonb3
Apprentice
Joined: 01 Jul 2015
Posts: 185
PostPosted: Mon Feb 22, 2016 8:42 pm

If you have trouble running the low-level tests suggested by some, how about trying one of the other file-sharing protocols supported by FreeNAS? You could try FTP, which is a seriously no-nonsense protocol from the early seventies.
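For instance, fetching a large file over FTP with curl prints the average transfer rate directly (host and path are of course placeholders):

Code:
curl -o /dev/null ftp://nas.local/pub/bigfile.bin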

grooveman
Veteran
Joined: 24 Feb 2003
Posts: 1217
PostPosted: Tue Feb 23, 2016 12:44 am

Akkara wrote:
grooveman wrote:
The line:
Code:
cat /dev/zero >nc 10.9.99.200 10000

Seems to hang. Of course, all the while I'm running:...

That line simply made a file named "nc" and filled it with zeros. You might want to remove it now so it doesn't take up space. :)

I think the suggestion had been to run:
Code:
nc 10.9.99.200 10000 </dev/zero


:lol: LOL

Hah, missed that one. Yes, even trying it with your syntax it still doesn't work. The weird thing is that I see an "RSET" in iptraf, and the connection is cut immediately... weird.

schorsch_76
Guru
Joined: 19 Jun 2012
Posts: 450
PostPosted: Tue Feb 23, 2016 6:58 am

Akkara wrote:
grooveman wrote:
The line:
Code:
cat /dev/zero >nc 10.9.99.200 10000

Seems to hang. Of course, all the while I'm running:...

That line simply made a file named "nc" and filled it with zeros. You might want to remove it now so it doesn't take up space. :)

I think the suggestion had been to run:
Code:
nc 10.9.99.200 10000 </dev/zero


Oh yes, you are right. I wrote this while I was at work on a Windows machine ;)

szatox
Advocate
Joined: 27 Aug 2013
Posts: 3136
PostPosted: Tue Feb 23, 2016 5:16 pm

1clue, have a look at those:
Code:
nc -l -p 9999 > /dev/null & nc localhost 9999 < /dev/zero
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
TX:             cum:   18.7GB   peak:      0b                                                                rates:   10.7Gb  10.8Gb  5.75Gb
RX:                       0B               0b                                                                            0b      0b      0b
TOTAL:                 18.7GB              0b                                                                         10.7Gb  10.8Gb  5.75Gb


Code:
 nc -l -p 9999 > /dev/null & cat /dev/zero | nc localhost 9999
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
TX:             cum:   9.35GB   peak:   3.91Gb                                                               rates:   3.91Gb  3.87Gb  3.12Gb
RX:                       0B               0b                                                                            0b      0b      0b
TOTAL:                 9.35GB           3.91Gb                                                                        3.91Gb  3.87Gb  3.12Gb

Alright, I noticed that drop when I had some load on this box (the above is with an otherwise idle system). This time it is enough for gigabit Ethernet, though the difference between ~10 Gbps and 4 Gbps is still noticeable, and there are already 10 Gb networks out there. Bear it in mind when testing throughput.
And a bonus: adding another pipe doesn't really change anything, so it's not an overload issue.
Code:
nc -l -p 9999 | cat > /dev/null & cat /dev/zero | nc localhost 9999
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
TX:             cum:   5.81GB   peak:   4.07Gb                                                               rates:   4.07Gb  3.99Gb  2.91Gb
RX:                       0B               0b                                                                            0b      0b      0b
TOTAL:                 5.81GB           4.07Gb                                                                        4.07Gb  3.99Gb  2.91Gb

[Tweaked line lengths in order to make the forum layout behave. - Chiitoo]

EggplantSystems
n00b
Joined: 05 Sep 2016
Posts: 1
Location: Mobile, AL
PostPosted: Mon Sep 05, 2016 2:28 am    Post subject: SMB / CIFS Share Performance

I authored one of the original suggestions: https://eggplant.pro/blog/faster-samba-smb-cifs-share-performance/

Before I get to the client stuff, additional things to consider:

  • "strict allocate" makes a big difference, but your file system must support unwritten extents (XFS, ext4, BTRFS, OCS2) for it to work
  • "allocation roundup size" has no noticeable bearing on performance, but should probably be specified to keep space-wastage down when "strict allocate" = Yes
  • "socket options" only made a minor difference for me, but I am using Linux. Some of the socket options are different/not-applicable for different operating systems.

Drives are many orders of magnitude slower than CPU and RAM. Gigabit Ethernet -- even unoptimized -- has just enough bandwidth for a typical 7200 RPM SATA drive. If you're not getting good numbers, the problem is probably going to be filesystem related. Samba has many low-level tweaks for interfacing with the host filesystem that end up second-guessing the caching/allocation strategies of the host platform. When the Samba tweaks undermine the native caching/allocation strategies, the results are going to be bad.

As for client performance, smbclient/cifs has options similar to those of the server (buffers, socket options, and in some cases it even defaults to the contents of smb.conf). You can use cifsiostat to get transfer stats. On Debian:

Code:
Mount options for cifs
       See the options section of the mount.cifs(8) man page (cifs-utils package must be installed).

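As a rough illustration (the share, mountpoint and buffer sizes are placeholders; check mount.cifs(8) for what your kernel actually supports):

Code:
# client-side mount with explicit read/write buffer sizes
mount -t cifs //nas/share /mnt/share -o username=guest,rsize=65536,wsize=65536

# per-mount CIFS transfer statistics, refreshed every 2 seconds
cifsiostat 2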

Another thing that can make a difference is the corpus you use for testing. Sending a single 1 GB file is going to go a lot faster than sending 1024 separate 1 MB files. There is a decent amount of per-file overhead (names, ACLs, etc.). The single file gets that out of the way once and can focus on raw data thereafter.
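If you want to compare the two cases yourself, a test corpus is easy to generate (sizes and paths are arbitrary):

Code:
# one 1 GiB file
dd if=/dev/urandom of=big.bin bs=1M count=1024

# 1024 files of 1 MiB each
mkdir -p many
for i in $(seq 1 1024); do
    dd if=/dev/urandom of=many/file$i.bin bs=1M count=1 2>/dev/null
done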
_________________
Sincerely,

Jason Stewart
Software Development, Eggplant Systems and Design, LLC