Gentoo Forums
RAID5 + NFS-server: Crash when using cmp on NFS client

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Tue Nov 13, 2012 10:12 pm    Post subject: RAID5 + NFS-server: Crash when using cmp on NFS client

Hopefully, this is the right place to ask for help. At least two components involved are part of the kernel...

Unfortunately, I have found a reliable way to crash my server. All I have to do is run cmp on an NFS client against a file on an ext4 filesystem residing on a software RAID5 array exported via NFS (v3), comparing it to a (hopefully) exact copy, as long as the file in question has not been accessed (and therefore cached) before. The server then hangs: it no longer responds to key presses or pings, writes no log messages, and the hard disk LED stays constantly lit. All I can do is press the reset or power button.

The RAID5 array consists of 6 primary partitions on 6 Hitachi 2TB hard drives which fill almost the whole disks (except for 8GiB on each disk that I generously use as swap partitions), so the whole filesystem has almost 9TiB. The array and filesystem were built using the (Gentoo-derived) SysRescueCD (if I remember correctly, version 2.3.1, but it might have been 2.8.0) prior to installing Gentoo on an SSD. The server is based on a Gigabyte 870A-UD3 motherboard, an AMD PhenomII-X6 1100T and 16GiB RAM and was thoroughly tested prior to setting up the RAID array (I ran a good number of file-system benchmarks booted from the SysRescueCD, leaving the computer running for several days non-stop).
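
For reference, setting up such an array and filesystem boils down to something like this (device names and mount point are placeholders rather than my exact commands, and options like chunk size are left out):
Code:
# assumed partition names; the real ones may differ
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid     # placeholder mount point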

The Gentoo flavour used is AMD64, initially with kernel 3.3.8, now with 3.5.7. The problem is reproducible when booting from an openSUSE 12.2 KDE Live CD (64bit) and setting up the NFS server there, so it is not Gentoo-specific. The NFS client is running openSUSE 11.4 (64bit); the two computers are connected via the Gbit switch of an Edimax BR-6574n router (both using Gbit ethernet) with fixed IPs.

The files I could crash the computer with are just under 1GiB in size (they were originally created on a FAT32 USB drive by a DVB-S receiver and are in a TS format suitable for dvbcut). I can copy them from the file system in question to another one locally on the server and run cmp on the two (again, locally on the server) without problems (after rebooting, to make sure nothing is cached). Also, accessing the copy from an NFS client using cmp does not cause a crash. On the NFS client, I can cat the files to /dev/null without problems. If I do that (or anything else that brings the files into the server's cache) first, running cmp afterwards does not crash the server. After starting cmp, a certain amount of data gets transferred across the net before the crash happens, observable with ksysguard running on the client. Accessing the filesystem on the server by means of sshfs, I recently could not cause any crash, no matter how many files I compared to their copies, but I am fairly sure that I had similar problems in the past with another client using sshfs (although crashes were not as reliable: sometimes a file could be compared without problems, at other times the same file crashed the server).
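
To make the setup concrete, it boils down to something like this (export path, addresses and file names are placeholders, not my actual configuration):
Code:
# on the server: export the RAID filesystem via NFSv3 (placeholder path/options)
echo '/mnt/raid  192.168.1.0/24(ro,no_subtree_check)' >> /etc/exports
exportfs -ra

# on the client: mount and compare a file that is not yet in the server's cache
mount -t nfs -o vers=3 server:/mnt/raid /mnt/nfs
cmp /mnt/nfs/recording.ts /local/copy/recording.ts   # server hangs partway through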

Does anybody have an idea what strange access patterns to the ext4 filesystem and the RAID5 array cmp (unlike cat) might generate via NFS (but not locally or via sshfs) that cause the server to crash? What could I do to make my kernel sufficiently verbose to see what happens just before the crash (and where would I see that output)? Is there any check I could run on the RAID array or the filesystem living on it (beyond fsck) to make sure there is no unidentified error in these?

Please let me know if I have missed any information needed to understand or track down the problem.

Any help is greatly appreciated, thanks in advance.

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9601    Location: almost Mile High in the USA
Posted: Tue Nov 13, 2012 11:03 pm

I knew I wasn't smoking anything when the exact same thing happened to me a while back (I forget when I first posted this on f.g.o) on different hardware (Core 2 machines, one on x86, one on x86_64). Unfortunately I haven't been able to reproduce it reliably with a small data set.

There definitely is a kernel problem here; I just can't reproduce it 100% of the time. If I remember correctly, I was able to trigger the server crash just by reading a bunch of files soon after mounting; I'll try it again on cached files, it could possibly be that...

Not sure when the best time is to ping lkml about this... but there is a real issue here...

Edit:
Hmm... I initially saw it with NFS but using tar + netcat appeared to also trigger the problem... Might not be the same thing after all.

https://forums.gentoo.org/viewtopic-t-901224.html

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Sat Nov 17, 2012 12:36 am

Thank you for providing the link to your earlier thread.

I have carried out some further experiments and it seems that cmp can be replaced by any sufficiently slow operation on the NFS client. While my original cmp used a file on a USB hard disk attached to the NFS client (and its copy on the NFS server), I could verify that the server crash also occurs when that file is on the client's SATA disk (but not if it is cached on the client, which would make cmp too fast). I could also crash the server using cp to copy the file to the client's SATA disk, though this may not always happen, as the crash occurred only after 88% of the file had already been copied (and no wrong bytes got copied; the copy was just truncated when cp was aborted with Ctrl-C after the server had crashed). Using a (self-written) hex dump program to read the file on the NFS server and write the dump to the client's SATA disk, the server crashed after only about 5% of the file had been processed.

Next I limited network bandwidth (replacing the Gbit router with a 100Mbit router). Now cmp did not crash the server, but the hex dump program did (after 23% of the file had been processed). Note that the bandwidth of the hex dump program (when writing output to disk) is about 7 or 8 MB/s, while cmp can easily reach 12.5MB/s (the theoretical maximum over a 100Mbit/s network). Still, the file had to be on the server's RAID; no crash occurred when reading from the server's SSD.

Next week I intend to carry out still more experiments replacing NFS with netcat and limiting bandwidth with throttle (available in portage) or jetcat (from http://scara.com/~schirmer/o/jetcat/). Also, I will try to set up another RAID array (and also use other RAID levels than 5) on the server (using the 8GiB partitions on the 6 disks currently set up as swap space). I'll keep you posted.
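
The netcat test I have in mind would look roughly like this (host name, port and file name are placeholders; the jetcat rate option shown limits reading to about 4MB/s, and some netcat versions want ``nc -l 9000'' without -p; throttle should work similarly):
Code:
# on the server: serve the file from the RAID over a raw TCP connection
nc -l -p 9000 < /mnt/raid/recording.ts

# on the client: read it slowly, throttled to roughly 4 MB/s
nc server 9000 | jetcat -r 4096 1 > /dev/null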

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Sun Dec 02, 2012 9:16 pm

Progressing slowly. This update is mainly intended as a reminder to myself of what I have tried and what results I got.

For slow access to a file I usually use jetcat now, with parameter ``-r 4096 1'' (which means read 4096 Bytes per 1 ms, so that is a bandwidth of 4kiB/ms or roughly 4MB/s). Using that on the NFS client for a file on the server's RAID in the form ``jetcat -r 4096 1 <somefile >/dev/null'' reliably crashes the server within seconds.

Using the (Gentoo-derived) SystemRescueCD (version 2.8.0 with kernel 3.2.19) I could verify that the problem is not 64bit-specific, but also occurs with a 32bit-Kernel.

The problem also occurs with a new RAID array set up on the 8GiB partitions of the 6 disks originally used as swap space, for RAID levels 5 (as seen before for the main array) and 0, but not for RAID level 1. (OK, RAID 1 with 6 disks is ridiculous redundancy; I will try that again with just 2 disks later, also for RAID 0, and just 3 disks for RAID 5 could be interesting, too.)

BTW, is it normal that (in the output of ``mdadm --detail'') members of a RAID5 array are numbered 0, 1, 2, 3, 4, 6 (omitting 5!)?

I mentioned earlier that I was ``fairly sure'' to have seen similar crashes of the server using sshfs rather than NFS. From log files and tcsh history files (which, unlike bash's, contain time stamps, just another reason for my preferring tcsh over bash) I could reconstruct that things were a bit different. Firstly, the roles were reversed: the computer with the RAID 5 array was the sshfs client of a remote machine. Secondly, I could only produce crashes when booting from SystemRescueCD version 2.3.1 with kernel 2.6.38 (and not from my up-to-date, mostly stable Gentoo installation or SystemRescueCD version 2.8.0), and the crashes looked a bit different (sometimes the server would spontaneously reboot, sometimes it would just stop responding to anything as described before, but without the hard disk LED being constantly lit). The crashes could be produced by issuing ``cmp file_on_sshfs file_on_raid5'' or the following two commands (in different terminals; order is not important, nor is the time between issuing them, as long as they run in parallel for at least a few seconds) for sufficiently big files: ``cat file_on_sshfs >/dev/null'' and ``jetcat -r 4096 1 <file_on_raid5 >/dev/null''. Both commands work when not running in parallel. Crashes only occurred when file_on_raid5 was really on a RAID5 array; with the file on the SSD or a RAID0 array, no crashes occurred. So while probably being related, this problem may be different (and may be considered irrelevant as it has disappeared from more recent releases).

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54028    Location: 56N 3W
Posted: Sun Dec 02, 2012 9:57 pm

UlFie,

Does checking the raid cause a crash too?
Code:
echo check > /sys/block/md2/md/sync_action
or whatever the right mdX is?

That does low-level reads on the RAID and validates the redundant data. If that works, it would suggest that the RAID code is OK.
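
For completeness, the whole check looks something like this (md2 is just an example; pick the right mdX from /proc/mdstat):
Code:
cat /proc/mdstat                              # find the right array
echo check > /sys/block/md2/md/sync_action    # start the read-only check pass
cat /proc/mdstat                              # shows the check progress
cat /sys/block/md2/md/mismatch_cnt            # non-zero afterwards means inconsistencies were found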

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9601    Location: almost Mile High in the USA
Posted: Sun Dec 02, 2012 10:22 pm

Oh...just as a curiosity - what ethernet card are you using in the NFS server (specific chip)?

Wonder if there's some correlation with the Realtek 8111...

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Mon Dec 03, 2012 2:14 pm

Thanks for your input.

@NeddySeagoon: What would that check do for a RAID 0 (for which the crash also occurs)? Can the RAID code be OK (even if that check works, which I will try tonight) if specific RAID levels are apparently required for the crash? My feeling (and this is not based on any knowledge!) is currently that parallel I/O related to striping (which is found in RAID 0 and 5, but not 1) might be causing problems. Of course I still have to see whether the crashes are reproducible without NFS involvement (so far, I could not reproduce them using local operations only or by replacing NFS with sshfs). If they are not, there is probably something wrong with NFS, but why does the data source have to be RAID 0 or 5 to cause a crash (and not RAID 1 with 6 disks or a single SSD)?

@eccerr0r: The motherboard specification says Realtek 8111D/E (I will have to take a look at the actual board for more precise information). That's an interesting point; maybe I should try some other network hardware on the server side (WLAN by means of a USB stick should be the easiest way). Actually, I would love to run some experiments with different hardware (maybe it is even the SATA controller driver that causes problems?), but I don't have that many disks lying around that I could use with some older motherboards...
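
Rather than squinting at the board, I can probably get the exact chip and driver from lspci, something like this (the PCI address is just an example, taken from the first command's output):
Code:
lspci -nn | grep -i ethernet    # shows vendor/device IDs of the NIC
lspci -k -s 02:00.0             # shows the kernel driver in use for that device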

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9601    Location: almost Mile High in the USA
Posted: Mon Dec 03, 2012 4:35 pm

Interesting. Yes, the board that fails on me is also a Gigabyte (EP43-UD3) with an RTL8111. I have an Intel PCIe ethernet card I could try if I could reproduce the problem reliably; lately it hasn't been easy to reproduce for some reason...

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54028    Location: 56N 3W
Posted: Mon Dec 03, 2012 5:21 pm

UlFie,

You cannot check a raid0 as there is no redundant data to validate.

My media server does NFS sharing for the media player. The player is diskless, so everything is over NFS version 3.
My server has a quad Intel NIC card though.

OK, it's a little more complex than that, as the media server is a KVM guest, which uses the virtio network driver to attach to a bridge provided by the KVM host.
The KVM host has a quad Intel NIC to provide physical LAN ports.

It all works with no hangups.

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9601    Location: almost Mile High in the USA
Posted: Mon Dec 03, 2012 9:45 pm

This problem is weird, however. I have another machine I use as my primary server, and it has an rtl8169 chip in a PCI slot (versus a PCIe lane); I've never had it crash on its RAID5. If there's an issue, it might be hardware. It would be nice to just swap in a PCI card or another brand to rule it out as a possible failure mechanism.

*edit*

and ouch... I don't have any PCIe x4 slots on this afflicted machine for this dual Intel Gbit card... boo...

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Tue Dec 04, 2012 3:43 pm

Latest results: ``echo check ...'' works (on a new RAID 5 with three disks, after its initial build had finished and a filesystem had been created and populated). Members are numbered 0, 1, and 3 (omitting 2).

RAID 0 with two disks can also be used to crash the server.

Using a WLAN USB stick for networking, I could not crash the server, but that might be due to unsatisfactory data rates (at most 2.5 MB/s in ad-hoc mode with both the client laptop and the USB stick claiming to support type `n' WiFi, and even less than 1.2 MB/s through a `g++' type router) and a strange bandwidth use pattern: When limiting bandwidth (using jetcat) to about 60% of the maximum possible, data first gets transferred at the maximum speed possible for a few seconds, then comes a break for a few seconds, then full speed again and so forth, rather than transmitting data at a nearly constant rate (as observed for ethernet transmissions). All these speed measurements were made using ksysguard on the client with an update rate of 2 seconds, so ``constant'' data rate over ethernet means a mostly constant 2-second average; that may actually hide similar use patterns with much shorter intervals.

Another strange observation (as if skipped member numbers in RAID 5 arrays were not strange enough): My Gentoo system has a manually configured kernel (from Gentoo sources), but I use genkernel to create the initial RAM disk with the parameters ``--mdadm --mdadm-config=/etc/mdadm.conf''. That config file just contains a homehost line and I leave everything else to be done automagically. After having set up the two-disk RAID 0 (as /dev/md/test) and the three-disk RAID 5 (as /dev/md/test5) and rebooting, the results regarding md devices and links in /dev/md are somewhat weird. It is not really a surprise that I end up with one dysfunctional device that uses the sixth of my test partitions (which still contains information for a six-disk RAID 0 from a previous experiment, originally set up as /dev/md/test); the other members are of course missing. But /dev/md/test and /dev/md/test5 both point to /dev/md126, one as a relative link ../md126, the other with an absolute path; one of them should rather have pointed to /dev/md125 (``mdadm -D'' tells me that /dev/md125 and /dev/md126 are exactly those two newly created arrays; I am not in front of that machine and have forgotten the exact order). So maybe there is something wrong with mdadm as well.
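
One thing I might try is pinning the arrays explicitly in /etc/mdadm.conf instead of relying on auto-assembly; roughly like this (the UUIDs and host name below are placeholders):
Code:
# append explicit ARRAY lines for the currently assembled arrays
mdadm --detail --scan >> /etc/mdadm.conf
# the resulting file would then look something like:
#   HOMEHOST myhost
#   ARRAY /dev/md/test  metadata=1.2 name=myhost:test  UUID=<placeholder>
#   ARRAY /dev/md/test5 metadata=1.2 name=myhost:test5 UUID=<placeholder>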

Although I tried hard to read what is printed on the network chip, I could not. The chip is partially underneath the GPU card and very tiny, and I cannot get sufficiently close to it with a magnifying glass. Too bad I don't have a teenager's eyes anymore... So we have to rely on the specs.

At work, I borrowed a Gbit PCI card (in addition to some 100Mbit cards I have at home) and three old 40GB PATA disks to stuff into an older machine. Will keep trying...


Edit: Removed claim that suspected mdadm problem might have been fixed already; there is no new stable version of that available. I mixed something up, sorry.

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Tue Dec 11, 2012 2:44 pm

More results: I could not crash an older system based on an MSI motherboard with an Athlon XP CPU and 3 PATA disks (booted from SysRescueCD 2.8.0), neither using the on-board 100Mbit LAN nor a SysKonnect Gbit PCI card (which gave me at most -- but pretty much constantly -- a quarter of the theoretically possible bandwidth; not sure why, but I would not expect too much from a 64bit card in a 32bit slot...).

Also, I found a (ridiculous) work-around for the crashes: if I limit the number of CPUs (or cores) used to 1 by adding ``maxcpus=1'' to the kernel command line when booting (poor old Tux sitting lonely at the top of the console screen without his 5 brothers...), I don't seem to be able to crash the server, so the issue seems to be SMP-related. This might explain why it was impossible to crash the older (single-core) system mentioned above.
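
For the record, the workaround just means appending that parameter to the kernel line, e.g. in a GRUB legacy entry (kernel image names and root device below are placeholders, not my actual setup):
Code:
# /boot/grub/grub.conf -- example entry only
title Gentoo Linux 3.5.7 (maxcpus=1 test)
root (hd0,0)
kernel /boot/kernel-3.5.7-gentoo root=/dev/sda3 maxcpus=1
initrd /boot/initramfs-3.5.7-gentoo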

Using a more recent kernel (namely 3.6.9 which is the ``alternative'' kernel on the latest SysRescueCD 3.1.2) did not help to avoid crashes (with all cores in use).

The problem with mdadm creating weird links in /dev/md/ vanished when I zeroed out the one partition left over from an earlier test (with the same name as a current test array). So while this may not be the desired behaviour, it is mostly the result of a user error.
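
For anyone running into the same thing: the stale metadata on such a leftover member can be wiped before reusing the partition, e.g. like this (the device name is just an example; make sure it is not part of a running array first):
Code:
mdadm --examine /dev/sdf1            # shows the old superblock, if any
mdadm --zero-superblock /dev/sdf1    # wipes the stale RAID metadata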

Next up are some more invasive tests, the Gbit-PCI-card and the PATA-disks will be plugged into the server (the latter using a controller on a PCI-card already tried in the older system, and if I manage to reach the plug I might even try two of the disks on the PATA controller also available on the motherboard).

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54028    Location: 56N 3W
Posted: Wed Dec 12, 2012 10:30 pm

UlFie,

What do you have
Code:
 [*]       RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)
set to?

If it's off, kernel RAID runs single-threaded. It's on on my Phenom II with 6 cores and it works here.
Whatever it's set to, try flipping it.
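
You can check the current setting with something like this (the symbol should be MULTICORE_RAID456; the first form only works if your kernel exposes its config):
Code:
zgrep MULTICORE_RAID456 /proc/config.gz        # needs CONFIG_IKCONFIG_PROC
grep MULTICORE_RAID456 /usr/src/linux/.config  # or check the build config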

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Thu Dec 13, 2012 9:34 pm

This experimental setting is off (for the kernel I compiled myself as well as for the one kernel I checked on a SysRescueCD). Flipping it to on did not influence the crash behaviour (using a RAID5 array; I didn't bother to try RAID0 afterwards as that should not be influenced by this setting).

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54028    Location: 56N 3W
Posted: Thu Dec 13, 2012 9:49 pm

UlFie,

On your NFS server, change /etc/exports so that it allows 127.0.0.1 and the server's own IP as clients.

Again on the server, mount the NFS share over each IP address in turn and test.
Testing via 127.0.0.1 removes all network hardware from the test but still exercises the NFS software.
Testing via the real IP uses some, but not all, of the network stack and none of the real network hardware.

Using the same machine for both client and server is not generally useful but it works.
In this case, it may rule out a few things.
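
Something along these lines (paths, addresses and export options are just examples):
Code:
# /etc/exports on the server: allow localhost and the server's own IP, e.g.
#   /mnt/raid  127.0.0.1(ro,no_subtree_check)  192.168.1.10(ro,no_subtree_check)
exportfs -ra

# still on the server, mount over each address in turn and repeat the test
mount -t nfs -o vers=3 127.0.0.1:/mnt/raid    /mnt/nfs-lo
mount -t nfs -o vers=3 192.168.1.10:/mnt/raid /mnt/nfs-ip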

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Sat Dec 15, 2012 8:34 pm

Thank you for this idea.

Using the server as its own client removed the crashes. I tried 127.0.0.1, 127.0.0.2, the regular IP associated with eth0 and another IP I assigned using ``ifconfig eth0:0 ...''. Everything seems to work fine (well, except for the automounter, which would only mount from 127.0.0.2 as regular NFS; for the others I ended up with a ``bind''-type mount, so I had to mount manually). As expected, I don't see any traffic on the router's LEDs.

Also, using 100Mbit PCI cards in the server (based on Realtek 8139D and MX98715AEC chips, respectively, the latter using the DEC Tulip driver), I could not reproduce crashes. Note that I could reproduce crashes over 100Mbit with the on-board network, so they are not simply gone because of the lower network performance. So I guess we can conclude the issue is related to the on-board Gbit network (as already suspected by eccerr0r). Yet I find it strange that crashes only occur with more than one CPU core active, and with NFS but not sshfs.

Unfortunately, the Gbit PCI card mentioned earlier (SysKonnect 9D21 based on a Broadcom chip labelled BCM5411KQM) does not seem to work in the server (note that the motherboard has 32bit PCI slots only, but the card is 64bit; that should not be a problem, as it works on another board as mentioned before). From SysRescueCD logs I gathered that this card uses the tg3 driver module (CONFIG_TIGON3 is the relevant kernel configuration setting). But as soon as I start to transfer larger amounts of data (more than ``ping'' or what has to be transmitted for mounting or ``ls'' via NFS, all of which seem to work), the whole transfer hangs (pretty much unrecoverably; I had to reboot the client to get rid of hanging NFS mounts). Here is what I found in the kernel log (this is taken from the log created by SysRescueCD, but the log for my Gentoo kernel is similar; the device name is eth2 here because of udev persistent-net rules; for better readability I removed date, time and ``sysresccd kernel:'' from the beginning of each line):
Code:
[  222.186485] tg3 0000:04:08.0: eth2: DMA Status error.  Resetting chip.
[  222.188053] tg3 0000:04:08.0: eth2: 0x00000000: 0x44001148, 0x22b00006, 0x02000011, 0x00004010
   ... loads of further hex data, but with gaps in the sequence of ``addresses'' ...
[  222.188394] tg3 0000:04:08.0: eth2: 0x00006840: 0x00000039, 0x00000001, 0x000a2c2a, 0x00000000
[  222.188401] tg3 0000:04:08.0: eth2: 0: Host status block [00000005:00000000:(0000:00e5:0000):(00e5:0108)]
[  222.188406] tg3 0000:04:08.0: eth2: 0: NAPI info [00000000:00000000:(0120:0101:01ff):00e2:(01aa:0000:0000:0000)]
[  222.290578] tg3 0000:04:08.0: tg3_stop_block timed out, ofs=c00 enable_bit=2
[  222.392141] tg3 0000:04:08.0: tg3_stop_block timed out, ofs=4800 enable_bit=2
[  222.588497] tg3 0000:04:08.0: eth2: Link is down
[  222.822913] tg3 0000:04:08.0: eth2: DMA Status error.  Resetting chip.
[  222.824471] tg3 0000:04:08.0: eth2: 0x00000000: 0x44001148, 0x22b00006, 0x02000011, 0x00004010
   ... loads of further hex data, but with gaps in the sequence of ``addresses'' ...
[  222.824613] tg3 0000:04:08.0: eth2: 0x00006840: 0x00000039, 0x00000000, 0x000a2c2a, 0x00000000
[  222.824617] tg3 0000:04:08.0: eth2: 0: Host status block [00000004:00000000:(0000:0000:0000):(0000:0095)]
[  222.824619] tg3 0000:04:08.0: eth2: 0: NAPI info [00000000:00000000:(0098:008a:01ff):0000:(00c8:0000:0000:0000)]
[  222.927073] tg3 0000:04:08.0: tg3_stop_block timed out, ofs=4800 enable_bit=2
[  225.836722] tg3 0000:04:08.0: eth2: Link is up at 1000 Mbps, full duplex
[  225.836733] tg3 0000:04:08.0: eth2: Flow control is on for TX and on for RX
[  259.728245] tg3 0000:04:08.0: eth2: DMA Status error.  Resetting chip.
[  259.729827] tg3 0000:04:08.0: eth2: 0x00000000: 0x44001148, 0x22b00006, 0x02000011, 0x00004010
   ... loads of further hex data, but with gaps in the sequence of ``addresses'' ...
[  259.730504] tg3 0000:04:08.0: eth2: 0x00006840: 0x0000003d, 0x00000001, 0x000a2c2a, 0x00000000
[  259.730519] tg3 0000:04:08.0: eth2: 0: Host status block [00000004:00000000:(0000:0001:0000):(0001:0000)]
[  259.730529] tg3 0000:04:08.0: eth2: 0: NAPI info [00000000:00000000:(0009:0000:01ff):0000:(00c8:0000:0000:0000)]
[  259.833256] tg3 0000:04:08.0: tg3_stop_block timed out, ofs=4800 enable_bit=2
[  260.029646] tg3 0000:04:08.0: eth2: Link is down
[  263.332038] tg3 0000:04:08.0: eth2: Link is up at 1000 Mbps, full duplex
[  263.332049] tg3 0000:04:08.0: eth2: Flow control is on for TX and on for RX
[  282.093885] tg3 0000:04:08.0: eth2: DMA Status error.  Resetting chip.
[  282.095469] tg3 0000:04:08.0: eth2: 0x00000000: 0x44001148, 0x22b00006, 0x02000011, 0x00004010
   ... loads of further hex data, but with gaps in the sequence of ``addresses'' ...
[  282.096140] tg3 0000:04:08.0: eth2: 0x00006840: 0x0000003d, 0x00000001, 0x000a2c2a, 0x00000000
[  282.096155] tg3 0000:04:08.0: eth2: 0: Host status block [00000004:00000000:(0000:0003:0000):(0003:0000)]
[  282.096164] tg3 0000:04:08.0: eth2: 0: NAPI info [00000000:00000000:(0006:0000:01ff):0000:(00c8:0000:0000:0000)]
[  282.199089] tg3 0000:04:08.0: tg3_stop_block timed out, ofs=4800 enable_bit=2
[  282.395495] tg3 0000:04:08.0: eth2: Link is down
[  285.770597] tg3 0000:04:08.0: eth2: Link is up at 1000 Mbps, full duplex
[  285.770607] tg3 0000:04:08.0: eth2: Flow control is on for TX and on for RX
[  331.386917] ------------[ cut here ]------------
[  331.386932] WARNING: at net/sched/sch_generic.c:255 dev_watchdog+0xf6/0x193()
[  331.386937] Hardware name: GA-870A-UD3
[  331.386942] NETDEV WATCHDOG: eth2 (tg3): transmit queue 0 timed out
[  331.386945] Modules linked in: nfsd tpm_tis tpm floppy edac_core tpm_bios ppdev sp5100_tco serio_raw pcspkr edac_mce_amd microcode k10temp i2c_piix4 parport_pc parport raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear nouveau tg3 firewire_ohci aic7xxx r8169 ata_generic firewire_core broadcom usb_storage mii pata_acpi scsi_transport_spi ttm drm_kms_helper drm pata_jmicron i2c_algo_bit i2c_core mxm_wmi video wmi
[  331.387015] Pid: 0, comm: swapper/1 Not tainted 3.6.9-alt312-amd64 #2
[  331.387018] Call Trace:
[  331.387023]  <IRQ>  [<ffffffff8104aed2>] warn_slowpath_common+0x80/0x98
[  331.387042]  [<ffffffff8104af7e>] warn_slowpath_fmt+0x41/0x43
[  331.387049]  [<ffffffff8161c44c>] ? netif_tx_lock+0x45/0x7b
[  331.387058]  [<ffffffff8161c578>] dev_watchdog+0xf6/0x193
[  331.387067]  [<ffffffff81058f55>] run_timer_softirq+0x1d4/0x2a5
[  331.387074]  [<ffffffff8161c482>] ? netif_tx_lock+0x7b/0x7b
[  331.387082]  [<ffffffff81052487>] __do_softirq+0xe3/0x1e3
[  331.387091]  [<ffffffff8106b232>] ? hrtimer_interrupt+0x108/0x1b2
[  331.387099]  [<ffffffff816f5d3c>] call_softirq+0x1c/0x30
[  331.387107]  [<ffffffff81010b49>] do_softirq+0x41/0x7e
[  331.387115]  [<ffffffff8105221c>] irq_exit+0x52/0xc0
[  331.387125]  [<ffffffff8102cf80>] smp_apic_timer_interrupt+0x86/0x94
[  331.387131]  [<ffffffff816f564a>] apic_timer_interrupt+0x6a/0x70
[  331.387134]  <EOI>  [<ffffffff81033c39>] ? native_safe_halt+0x6/0x8
[  331.387149]  [<ffffffff81016920>] default_idle+0x4b/0x85
[  331.387155]  [<ffffffff81016a30>] amd_e400_idle+0xd6/0x102
[  331.387162]  [<ffffffff810160d2>] cpu_idle+0xbb/0xfa
[  331.387170]  [<ffffffff816e76af>] start_secondary+0x23a/0x23c
[  331.387175] ---[ end trace c6533279b9952a78 ]---
[  331.387183] tg3 0000:04:08.0: eth2: transmit timed out, resetting
[  331.388787] tg3 0000:04:08.0: eth2: 0x00000000: 0x44001148, 0x22b00006, 0x02000011, 0x00004010
   ... loads of further hex data, but with gaps in the sequence of ``addresses'' ...
[  331.389432] tg3 0000:04:08.0: eth2: 0x00006840: 0x0000003d, 0x00000001, 0x000a2c2a, 0x00000000
[  331.389447] tg3 0000:04:08.0: eth2: 0: Host status block [00000004:00000000:(0000:000b:0000):(000b:0005)]
[  331.389456] tg3 0000:04:08.0: eth2: 0: NAPI info [00000000:00000000:(0008:0005:01ff):000b:(00d3:0000:0000:0000)]
[  331.492113] tg3 0000:04:08.0: tg3_stop_block timed out, ofs=4800 enable_bit=2
[  331.688486] tg3 0000:04:08.0: eth2: Link is down
[  335.215601] tg3 0000:04:08.0: eth2: Link is up at 1000 Mbps, full duplex
[  335.215610] tg3 0000:04:08.0: eth2: Flow control is on for TX and on for RX

That stuff only ends at shutdown.

Any opinions on that?

Maybe I should get some other Gbit cards for testing, but I don't know if it's good or bad that the low-cost ones (hey, it's for testing only!) all seem to be based on Realtek chips (it should at least help investigate reproducibility and/or a motherboard-related problem, but it would be necessary to see some other chipset work without crashes to be sure the Realtek driver is to blame).

Edit: No, it's not the on-board Gbit network. I placed the PATA controller PCI card (Silicon Image, Inc. PCI0680) in the server along with the three disks already used before. Exporting the file system on that RAID5 array via NFS and reading from it on a client at various speeds, I could not crash the server (just like reading from the SSD in the server or a RAID1 array on the SATA disks).

Oh, and the limited bandwidth when reading from the older computer is probably not caused by performance problems of the SysKonnect card in that system but by the rather slow PATA disks. Using the on-board Gbit network of the server, the maximum bandwidth was not (or at most very little) higher: measured using ksysguard, it was slightly more than 30MB/s.

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Mon Feb 11, 2013 8:32 pm

Still trying to get rid of this...

Kernel upgrade to 3.6.11 did not fix the problem.

Using netcat (in its very basic form) no crashes occurred.

Taking the risk of temporarily changing the configuration of a production system at work (based on a Gigabyte GA-MA785GT-UD3H motherboard featuring a Realtek 8111C network chip, an AMD PhenomII-X6 1055T and two Samsung 1GB SATA disks), I could not cause a crash (booted from SysRescueCD, I set up a RAID0 array on two partitions normally used as swap space, one on each disk; this server as well as the regular client were connected to the corporate Gbit network).

In addition, I purchased a low-cost PCI Gbit network card (TP-LINK TG-3269 based on Realtek RTL8169SC chip, using the same kernel driver as the on-board controller). Using that, I could not cause a crash, but surprisingly the maximum bandwidth obtained (using ``cat somefile > /dev/null'' on NFS) was only about 60MB/s, i.e. half of what is theoretically possible and was actually obtained using the on-board controller.

UlFie (Tux's lil' helper)
Joined: 01 Nov 2011    Posts: 112    Location: Wuppertal
Posted: Fri Feb 15, 2013 12:58 pm

Changed the SATA connections a bit. For the tests reported above (using the partitions originally set up as swap space), I had set up a RAID5 on sda1, sdc1 and sde1, and a RAID0 on sdb1 and sdd1 (where device names are assigned in the order in which the six SATA drives are detected during boot, which is the same as the order in which their connectors are numbered on the board). Now I unplugged the two disks the RAID0 test array resides on, as well as my SSD and optical drive on the other two connectors (which use a different controller on the motherboard), and reconnected the optical drive to one of the connectors originally used for a hard disk. Booting from SysRescueCD and exporting the remaining RAID5 via NFS, it became harder to crash the system; only after reducing jetcat's bandwidth parameter to 2048 bytes/ms (i.e. roughly 2MB/s) did the system crash (but still an ``unusual'' amount of data could be transferred before the crash).

Next I connected the two disks with the RAID0 to the two connectors of the other controller. No crashes could be caused, not even by further reducing the bandwidth to as little as 256 bytes/ms. Similarly, after stopping the RAID5 and setting up a RAID0 on two of its disks (those originally assigned device names sda1 and sde1, if that makes any difference; now they became sda1 and sdc1, as the original sdb and sdc became sde and sdf after being moved to the other controller), I was not able to crash the system. This result may explain why I was not able to crash the system at work, as it has only two disks, whereas my RAID0 with just two disks (in the earlier tests) was at least surrounded by four other disks with two (unmounted) RAID5 arrays set up on them. Yet I have no explanation why the sheer presence of other drives can make a difference...

I'll go back to the original connections soon (well, maybe I will first try what crashes I can cause with arrays whose drives are connected to different controllers).