Gentoo Forums
Gentoo NFS Client > Windows NFS Server Performance Issues
Crimjob
Tux's lil' helper


Joined: 04 Dec 2006
Posts: 111

PostPosted: Sat Jun 02, 2012 4:39 am    Post subject: Gentoo NFS Client > Windows NFS Server Performance Issues

Hey guys,

I've been racking my brain over this one for months and I've finally given up and made the (perhaps temporary) switch to CIFS.

I've found plenty of posts around the internet about NFS issues, but my situation appears to be unique(ish): I haven't found any posts about it elsewhere (only the reverse setup, with Linux serving NFS to Windows clients).

I have a Windows Small Business Server 2011 running Server for NFS and Services for Unix. It took me a while to get NIS, username mapping and everything else set up, which is kind of why I want to keep running it, but it's always run like crap.

Small file transfers are doable, but very slow:

Code:
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_10MB bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 3.93638 s, 2.7 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_50MB bs=50M count=1
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 26.0095 s, 2.0 MB/s


Large file transfers are pretty much out of the question. They do eventually succeed, but they take absolutely *forever*, and to boot, they completely lock up not one but both machines to the point of being unusable. These are file transfers between two of my main servers, so this doesn't work out well for my network when two of its four main components are unresponsive :)

CIFS, on the other hand, seems to be much quicker and does not cause these lockups, but I feel there is still more speed available on my network (all Cisco gigabit wired gear, less than 0.1 ms latency across the board, no errors or discards).

Code:
Aurora ~ # dd if=/dev/zero of=/mnt/cifs/TV/testfile_10MB bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.170322 s, 61.6 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/cifs/TV/testfile_50MB bs=50M count=1
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 1.10896 s, 47.3 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/cifs/TV/testfile_500MB bs=500M count=1
1+0 records in
1+0 records out
524288000 bytes (524 MB) copied, 9.31721 s, 56.3 MB/s
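
For reference, the CIFS side is mounted with something along these lines (server name, share and credentials file are placeholders, not necessarily my exact line):

Code:
# example only -- substitute your own server, share and credentials file
Aurora ~ # mount -t cifs //server/TV /mnt/cifs/TV -o credentials=/root/.smbcredentials,iocharset=utf8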


One thing I really wanted to try but haven't figured out is async transfers: there doesn't appear to be an option for it on the Windows server, and setting it client-side has no effect.
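
To give an idea, the client-side mount I was experimenting with looked roughly like this (server/share names are placeholders, and the option values are just what I tried, not a recommendation):

Code:
# example only -- the async option is accepted client-side but made no difference for me
Aurora ~ # mount -t nfs server:/TV /mnt/nfs -o rw,async,proto=tcp,vers=3,rsize=32768,wsize=32768,hard,intr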

I'm just wondering if anyone else has tried a similar setup and how it came out either way, as I'm unable to find much information on the web beyond the likely explanation that Microsoft's NFS implementation for Windows just isn't very good. I can't imagine why they'd even bother putting it out there if the performance is this bad all the time, though :(. Perhaps there are other options/settings I can try?
_________________
"Who are you to judge the life I live? I know I'm not perfect and I don't live to be, but before you start pointing fingers... make sure your hands are clean." ~Bob Marley
Rexilion
Veteran


Joined: 17 Mar 2009
Posts: 1044

PostPosted: Sat Jun 02, 2012 2:49 pm    Post subject:

The post below might be considered 'throwing raw stuff over the wall and never looking back', since this seems to be more a Windows-related issue than a Linux one. However, I did find some links:

http://technet.microsoft.com/en-us/library/bb463205.aspx (optimizations)

This mentions the following as well (besides the other probably useful stuff):

Quote:
NFS-Only Mode

Enhanced NFS performance can be achieved by using the NFSONLY.EXE application. This allows a share to be modified to do more aggressive caching to improve performance. This may be set on a share-by-share basis. NFS-Only mode should not be used on any share that can be accessed by any means other than NFS, because data corruption can occur. However, as much as a 15% improvement has been observed when using an NFS-only share. The syntax of this command is:

NfsOnly <resourcename|sharename> [/enable|/disable]

Resourcename|sharename is the name of the NFS share. The /enable option turns on NFS-Only mode for the specified resource or share, while the /disable option turns off NFS-Only mode for the specified resource or share. The Microsoft Services for Network File System Server service must be restarted for NFS-Only mode to take effect.
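
On the server that should look something like the following ("TV" is just an example share name; I believe nfsadmin can restart the service, but check that on your version):

Code:
REM example only -- "TV" is a placeholder share name
C:\> NfsOnly TV /enable
C:\> nfsadmin server stop
C:\> nfsadmin server start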


I also found this:

http://www.suacommunity.com/forum/tm.aspx?m=18142 (post mentioning somewhat identical problems)
http://www.oreillynet.com/onlamp/blog/2008/04/nfs_vs_cifs_for_vmware.html (mentions the same outcome: use CIFS)

And as a final remark:

Did you try jumbo frames (ifconfig eth0 mtu 9000)?
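
On Gentoo that can also be made persistent in /etc/conf.d/net, if I remember the netifrc syntax correctly (eth0 is just an example interface):

Code:
# /etc/conf.d/net -- example only, adjust the interface name
mtu_eth0="9000"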
Crimjob
Tux's lil' helper


Joined: 04 Dec 2006
Posts: 111

PostPosted: Sun Jun 03, 2012 3:13 am    Post subject:

Rexilion wrote:
http://technet.microsoft.com/en-us/library/bb463205.aspx (optimizations)
[...]
http://www.suacommunity.com/forum/tm.aspx?m=18142 (post mentioning somewhat identical problems)
http://www.oreillynet.com/onlamp/blog/2008/04/nfs_vs_cifs_for_vmware.html (mentions the same outcome: use CIFS)

Interesting, those are very close to my situation, and I had no luck finding them myself :P.

I've tried the majority of the optimizations listed there, with mixed results. It looks like some of the changes have made the client a bit more responsive, but the server still practically locks up, and the speeds got worse as well. Wish I had backed up the defaults :)

Sounds like I'll be sticking with CIFS for this situation, though, as it seems to work much better. One of those posts suggests that VMware supports CIFS as well, so I'll have to give that a try to get better performance for my VMs hobbling along on NFS.

Quote:

And as a final remark:

Did you try jumbo frames (ifconfig eth0 mtu 9000)?


I have not. I've actually been pretty concerned about doing so. I have some things on the network that require specific MTU settings, and everything outside of my network must match 1500. My primary concern with enabling jumbo frames is the amount of work my router (Gentoo box) will have to do if the traffic has to go out to the internet. Mind you, I'm not really hurting in the power department (it's overkill for a router: dual dual-core AMD Opteron 280s, 8GB RAM, 10K RPM SAS), but between that fear and a lack of time, I just haven't gotten around to "playing around" with it :).

Would you happen to know of any potential downsides to enabling jumbo frames? Should I even worry about the extra overhead on the router in this day and age, with the power I have?
_________________
"Who are you to judge the life I live? I know I'm not perfect and I don't live to be, but before you start pointing fingers... make sure your hands are clean." ~Bob Marley
Crimjob
Tux's lil' helper


Joined: 04 Dec 2006
Posts: 111

PostPosted: Sun Jun 03, 2012 3:14 am    Post subject:

I couldn't even get a 1GB file to complete successfully :(

Code:

Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_10MB bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 10.2177 s, 1.0 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_50MB bs=50M count=1
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 36.353 s, 1.4 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_100MB bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 23.3514 s, 4.5 MB/s

_________________
"Who are you to judge the life I live? I know I'm not perfect and I don't live to be, but before you start pointing fingers... make sure your hands are clean." ~Bob Marley
Rexilion
Veteran


Joined: 17 Mar 2009
Posts: 1044

PostPosted: Sun Jun 03, 2012 5:21 am    Post subject:

To be honest, I find CIFS a lot better than NFS too. With NFS I've had strange hangs, mounts that wouldn't unmount, file operations that took ages, transfers that never completed (or only half did), and finally unexplainable unresponsiveness.

About the jumbo frames: enable them only on the card serving your internal network. And, judging by how it works, it should reduce the load on the router at the same data rate: instead of having to detect/repair/check/verify/handle 9000/1500 = 6 packets, it only has to handle 1. Make sure your hardware supports it, and do try it; you're hurting yourself if you don't.
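
A quick way to verify the whole jumbo path is a non-fragmenting ping with a payload of 9000 minus 28 bytes of IP+ICMP headers (the address is just an example host on your LAN):

Code:
# example only -- succeeds only if every hop passes 9000-byte frames
Aurora ~ # ping -M do -s 8972 192.168.0.10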

I'm out of ideas here, but I did notice some other NFS server implementations for Windows. Maybe those are better? Services for Unix isn't exactly 'optimized', according to that VMware page, and if that argument is valid, it could well explain this. They have CIFS, so why would they put effort into NFS?
Crimjob
Tux's lil' helper


Joined: 04 Dec 2006
Posts: 111

PostPosted: Thu Jun 07, 2012 1:50 am    Post subject:

Well, so far CIFS seems to be functioning, so I guess I can live with it. I've also switched my VMware box to iSCSI, which is much faster than the Windows NFS.

As for the jumbo frames, all the server equipment supports them. My concern is that some devices on the network will access content on one of the servers, and some of them are hard-coded to a 1500 MTU. Is that going to cause a problem if I have the server running at 9000? Otherwise, I'm excited for the test :)
_________________
"Who are you to judge the life I live? I know I'm not perfect and I don't live to be, but before you start pointing fingers... make sure your hands are clean." ~Bob Marley
Rexilion
Veteran


Joined: 17 Mar 2009
Posts: 1044

PostPosted: Thu Jun 07, 2012 5:56 am    Post subject:

Unfortunately that is not possible; I found this on Wikipedia:

Quote:
Internet Protocol subnetworks require that all hosts in a subnet have an identical MTU. As a result, interfaces using the standard frame size and interfaces using the jumbo frame size should not be in the same subnet. To reduce interoperability issues, network interface cards capable of jumbo frames require explicit configuration to use jumbo frames.


However, a nice alternative is setting up two networks, one with MTU 1500 and another with 9000. One of the servers could just contain *another* LAN card with a small separate network for those units that are not capable of jumbo frames, and do forwarding for that network.
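
A rough sketch of that on the Gentoo side, with made-up interface names and addressing (eth0 on the jumbo network, eth1 on the small 1500 network for the legacy units), assuming I have the netifrc syntax right:

Code:
# /etc/conf.d/net -- example only
config_eth0="10.9.0.1/24"
mtu_eth0="9000"
config_eth1="192.168.0.1/24"

# then enable forwarding between the two networks
Aurora ~ # sysctl -w net.ipv4.ip_forward=1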
_________________
fs/super.c: "Self-destruct in 5 seconds. Have a nice day...\n"
Crimjob
Tux's lil' helper


Joined: 04 Dec 2006
Posts: 111

PostPosted: Thu Jun 07, 2012 6:41 am    Post subject:

Interesting! That's likely the problem I hit when I attempted this in years past :P

Luckily I have a spare gigabit line card for my Cisco 4006 that I should be able to dedicate to such a task.

I'm going to pick up two Intel PCI-X dual gigabit NICs and give this all a whirl on a separate subnet just for large file transfers, hopefully allowing for expandability in the future (maybe Mr. VMware can have split port channels, one for iSCSI and one for regular LAN). You've really got my mind going on optimization; not sure where I can stop now!

Thanks again for all your assistance :) I'll post back if I have any problems once the gear arrives.
_________________
"Who are you to judge the life I live? I know I'm not perfect and I don't live to be, but before you start pointing fingers... make sure your hands are clean." ~Bob Marley