Gentoo Forums
Abysmal SSH speeds on gigabit LAN...
Gentoo Forums Forum Index » Networking & Security
The_Great_Sephiroth
Veteran
Joined: 03 Oct 2014
Posts: 1602
Location: Fayetteville, NC, USA

PostPosted: Fri Oct 02, 2020 1:35 am    Post subject: Abysmal SSH speeds on gigabit LAN...

Something is wrong here. I got eight new PCs in. Loaded one, configured it, etc., then booted System Rescue CD 6.1.6 on all of them. Got password-less SSH going and started cloning the reference PC to the others. The speed is HORRIBLE. No idea why it is so slow, sorry. Looking for advice here, since I have done this dozens of times before and managed around 100MB/s (yes, 100MB/s). Now I peak around 2MB/s...

The new systems are all Core i5-85xx, 16GB RAM, 256GB M.2 NVMe, and gigabit LAN. I am also pushing the clone to a file on my laptop HDD in case I need to restore the systems in the future. The WiFi is 300Mbps, so it should be at LEAST around 30MB/s, but it is below 2MB/s to all machines.

[Image: laptop speed]

I cloned from the reference PC as follows.
Code:

dd if=/dev/sda bs=4M status=progress | tee >(ssh root@192.168.0.2 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.3 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.4 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.5 "dd of=/dev/sda bs=4M")

What is limiting me? We have all Cat6 here to a 24-port, 1Gbps switch. Using iperf3 shows me between 7 and 24 MiB/s, but even that fluctuates.
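As an aside, the tee fan-out pattern itself can be sketched entirely locally (a hedged demo, not the command above: files stand in for the ssh pipes, and every file name is made up). Each destination that should receive the full stream is passed to tee, with one final target on stdout:

```shell
# Local demo of a tee fan-out: every copy target gets the full stream.
# copy1/copy2 are tee arguments (via process substitution), copy3 is stdout.
printf 'reference disk contents' > src.img
tee >(cat > copy1.img) >(cat > copy2.img) < src.img > copy3.img
sleep 1   # let the background process substitutions finish writing
cmp -s src.img copy1.img && cmp -s src.img copy2.img \
  && cmp -s src.img copy3.img && echo all-copies-match
```

Requires bash for the `>(...)` process substitution.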
_________________
Ever picture systemd as what runs "The Borg"?
The_Great_Sephiroth
Veteran
Joined: 03 Oct 2014
Posts: 1602
Location: Fayetteville, NC, USA

PostPosted: Fri Oct 02, 2020 1:50 am    Post subject:

OK, I resolved it but I have no clue why this works. I simply added "> /dev/null" to the end of the command (I restarted it) and it jumped to 12.1MiB/s. No idea why.
Code:

dd if=/dev/sda bs=4M status=progress | tee >(ssh root@192.168.0.2 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.3 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.4 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.5 "dd of=/dev/sda bs=4M") > >(ssh my.name@192.168.0.6 "dd of=/home/my.name/backup.img bs=4M") > /dev/null

Why was this the key?
_________________
Ever picture systemd as what runs "The Borg"?
mike155
Advocate
Joined: 17 Sep 2010
Posts: 4438
Location: Frankfurt, Germany

PostPosted: Fri Oct 02, 2020 2:18 am    Post subject:

We discussed this before: https://forums.gentoo.org/viewtopic-t-1094162.html. I explained the reason for ">/dev/null" in one of my posts.
Hu
Moderator
Joined: 06 Mar 2007
Posts: 21619

PostPosted: Fri Oct 02, 2020 5:40 pm    Post subject: Re: Abysmal SSH speeds on gigabit LAN...

The_Great_Sephiroth wrote:
The new systems are all Core i5-85xx, 16GB RAM, 256GB M.2 NVME, and gigabit LAN.
Code:
dd if=/dev/sda bs=4M status=progress | tee >(ssh root@192.168.0.2 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.3 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.4 "dd of=/dev/sda bs=4M") > >(ssh root@192.168.0.5 "dd of=/dev/sda bs=4M")
Why are you doing block level copies to an NVMe drive? That writes all the free space too, which is wasteful of your time and guarantees the drive thinks that everything has been written and needs to be preserved. That will cause write amplification later. Modern drives may not die early from write amplification, but you are still compelling the drive to work harder for no gain.
The_Great_Sephiroth
Veteran
Joined: 03 Oct 2014
Posts: 1602
Location: Fayetteville, NC, USA

PostPosted: Mon Oct 05, 2020 1:52 pm    Post subject:

OK, I had forgotten that, but here is the worst part: the entire command failed. I went back to an older method which works. Regardless, I now have the information in my notes and will be adding it to my own manual.

Hu, how else should I make mirror copies of disks? I had eight PCs. I loaded one (Windows 10 Pro 64-bit, all drivers, no software, built-in crap removed) and wanted to clone it to the other seven identical PCs. I thought that this was the best way. I did change things up a bit and used 'xz' to compress and decompress. It allowed me to get around 161MB/s on the LAN.
Code:

dd if=/dev/sda bs=4M status=progress | xz -z -c -T0 | ssh root@1.2.3.4 "xz -d -c -T0 | dd of=/dev/sda bs=4M"

That worked wonders! But if I am not supposed to clone block-level devices with dd, then how? I chose 4MiB for the size because most erase-blocks are 4MiB in size, which means I read each one (and write each one on the remote system) only once.
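The compress-on-the-wire idea round-trips cleanly, which is easy to sanity-check locally (a hedged sketch: a plain pipe stands in for ssh, regular files stand in for the block devices, and all file names are invented):

```shell
# Local round-trip of the compressed clone pipeline:
# dd reads the "disk", xz compresses, xz decompresses, dd writes the copy.
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null
dd if=disk.img bs=4M 2>/dev/null | xz -z -c -T0 | xz -d -c -T0 \
  | dd of=restore.img bs=4M 2>/dev/null
cmp -s disk.img restore.img && echo round-trip-ok
```

On a real run the decompressing `xz -d` would sit on the far side of the ssh connection, as in the command above.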

Oh, and the abysmal speeds were down to the previous tech company: five switches, all 100Mbps. We are replacing them with two 24-port gigabit switches soon. We used our own switches for the cloning.
_________________
Ever picture systemd as what runs "The Borg"?
Hu
Moderator
Joined: 06 Mar 2007
Posts: 21619

PostPosted: Mon Oct 05, 2020 5:18 pm    Post subject:

If you want a bit-perfect clone, dd is the right way to get it. If you only need a data-consistent clone, the techniques recommended for a "stage4" backup are better. They will transfer less data, and thus take less time.

Cloning block-level devices that have no record of which blocks are in use (typically, spinning drives) is fine, but slow. Cloning block-level devices that remember which blocks are in use (typically SSD/NVMe drives) is bad, because after the clone, every block has been written, so every block is in use. Yes, you only wrote to each block once, but you wrote to every block at least once, so now the receiving systems' drives will always need to erase a block before writing to it. If you had written only the blocks you needed to preserve your data, portions of the drive would still be marked as unused, which should improve the firmware's ability to do wear leveling. If the drive supports the TRIM command, and you trust its implementation, you may be able to TRIM the free space to reverse the consequences of the block-level clone.

Alternatively, you could just ignore the problem and hope the drive is retired due to old age before the write amplification catches up to you. I have some drives that have not experienced this type of write amplification, but still experienced relatively heavy write loads, that lasted far longer than I expected. My advice is based on the premise that write amplification is bad because (1) it makes the drive work harder, (2) it makes the drive perform useless writes, which cuts into your finite limit on number of writes before the drive fails. Your drives may perform well enough, and have enough capacity, that neither of these will matter in practice. Even if these points do matter, unless you can use TRIM, the advice only helps you for drives you have not already written.
finalturismo
Guru
Joined: 06 Jan 2020
Posts: 410

PostPosted: Mon Oct 05, 2020 5:24 pm    Post subject:

The_Great_Sephiroth wrote:
Oh, and the abysmal speeds were the previous tech company. Five switches. All 100Mbps. We are replacing them with two 24-port gig switches soon. We used our switches for the cloning.

Why not put some cheap 10Gb SFP cards in the devices you transfer files between? They work the same as 10Gb Ethernet cards and have better ping.

The only pain is router config for DHCP, which is what I am having issues with when bonding for extra speed.
The_Great_Sephiroth
Veteran
Joined: 03 Oct 2014
Posts: 1602
Location: Fayetteville, NC, USA

PostPosted: Wed Oct 07, 2020 2:14 pm    Post subject:

Hu, I see your point but the "defragmenter" crap in Windows 10 will not defrag an SSD. Instead it does a TRIM/discard. Won't that fix the issue I created? In Linux I would simply run fstrim, but that won't work on NTFS unless I am mistaken. In other words, won't trimming fix the oopsie I did for the time being? If not, would a blkdiscard work?

This also still leaves me with the question as to what I should use to clone solid-state media. You mention a stage4 backup. I have never heard of this, but I will be searching for it shortly.

Finalturismo, we are about to upgrade the LAN from 100Mbps on five switches to gigabit on two. That will drastically improve speeds.
_________________
Ever picture systemd as what runs "The Borg"?
Hu
Moderator
Joined: 06 Mar 2007
Posts: 21619

PostPosted: Wed Oct 07, 2020 4:55 pm    Post subject:

Yes, a TRIM should be perfect, assuming the drive's firmware handles it properly. I had not realized these were NTFS. The stage4 is typically for Linux systems. Trying to move NTFS that way would be awkward, though there may be tools in ntfsprogs that can do what you want.
The_Great_Sephiroth
Veteran
Joined: 03 Oct 2014
Posts: 1602
Location: Fayetteville, NC, USA

PostPosted: Fri Oct 09, 2020 5:19 pm    Post subject:

Yeah, since the client uses Windows 10 Pro it has to be NTFS. Shame we can't install to BTRFS. I have enjoyed reliable performance with BTRFS RAID10 on my gaming rig (new games are on NTFS on an NVME drive) but Windows only boots from NTFS. It cannot even boot from ReFS.
_________________
Ever picture systemd as what runs "The Borg"?
finalturismo
Guru
Joined: 06 Jan 2020
Posts: 410

PostPosted: Wed Oct 14, 2020 4:45 pm    Post subject:

The_Great_Sephiroth wrote:
Yeah, since the client uses Windows 10 Pro it has to be NTFS. Shame we can't install to BTRFS. I have enjoyed reliable performance with BTRFS RAID10 on my gaming rig (new games are on NTFS on an NVME drive) but Windows only boots from NTFS. It cannot even boot from ReFS.


Did you put the KVM guest on the NVMe? That seemed to fix all my performance issues when I did it. The slow performance from KVM having to access the virtual hard drive totally screws everything: graphics performance, audio, etc.
Tony0945
Watchman
Joined: 25 Jul 2006
Posts: 5127
Location: Illinois, USA

PostPosted: Wed Oct 14, 2020 5:22 pm    Post subject:

Probably a stupid question: why not use rsync for backups? It seems to be working on NTFS for me, at least with NTFS mounted via CIFS (Samba).