[solved] very slow qcow2 performance

Elleni
Veteran
Posts: 1298
Joined: Tue May 23, 2006 10:56 pm

[solved] very slow qcow2 performance

Post by Elleni » Tue Feb 04, 2020 7:30 pm

I have rented a hosted root server for our company and set up Proxmox on top of Debian. While it works, the read/write performance seems wrong. The specs and setup follow.

Code:

mdadm -D /dev/md0p3
/dev/md0p3:
           Version : 1.2
     Creation Time : Fri Jan 24 12:54:42 2020
        Raid Level : raid1
        Array Size : 1865361408 (1778.95 GiB 1910.13 GB)
     Used Dev Size : 1928159232 (1838.84 GiB 1974.44 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Feb  4 20:35:32 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : 15630:0
              UUID : 0c380f2d:2126b89b:261d71dc:6eb43710
            Events : 23766

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
CPU:

Code:

cat /proc/cpuinfo 
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 8
model name	: AMD Ryzen 7 2700 Eight-Core Processor
stepping	: 2
microcode	: 0x800820b
cpu MHz		: 3077.930
cache size	: 512 KB
physical id	: 0
siblings	: 16
core id		: 0
cpu cores	: 8
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca
bugs		: sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 6399.21
TLB size	: 2560 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
RAID 1 on two spinning 2 TB disks:

Code:

Disk: /dev/sda (and /dev/sdb)
Size: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Label: dos, identifier: 0xb359d18b

Device         Boot              Start           End       Sectors     Size    Id Type
/dev/sda1      *                  2048        194559        192512      94M    83 Linux
/dev/sda2                       194560    3856777215    3856582656     1.8T    fd Linux raid autodetect
/dev/sda3                   3856777216    3907028991      50251776      24G    82 Linux swap / Solaris
I have set up a small partition for the OS, and the large part is for storage: one big, LUKS-encrypted RAID 1 partition, formatted with ext4. While it works and a Windows 10 VM seems quite responsive in normal use, I see very bad read/write performance. For example, it takes 20 minutes or more to copy a 7 GB file, with reads and writes at ~2-3 MB/s. So I believe there is either a misconfiguration or perhaps a defective component in the server.

Can anybody please help me find the source of the bottleneck? I can't believe this is normal, or is it? CPU and memory seem idle while reads and writes to disk are so slow. Or could this be because of the encryption? I would have thought that if that were the problem, I would see more CPU load. *confused*

I will be happy to provide more information if you point me to what is needed to find out what's going on. The RAID was built by the Debian installer; I just created a new partition, formatted it with ext4, and mounted it.
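Since encryption is one suspect, it may be worth ruling it out on the host first. A minimal sketch; the dm-crypt mapping name "storage" is a placeholder for whatever /dev/mapper name the opened LUKS volume actually uses:

```shell
# In-memory cipher benchmark: with AES-NI, aes-xts should report
# throughput in the GB/s range, far above what two HDDs can deliver.
cryptsetup benchmark --cipher aes-xts --key-size 512

# Read-only throughput comparison of the raw RAID partition vs. the
# decrypted mapping; a large gap between the two would implicate dm-crypt.
dd if=/dev/md0p3 of=/dev/null bs=1M count=1024 status=progress
dd if=/dev/mapper/storage of=/dev/null bs=1M count=1024 status=progress
```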

Code:

lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) I/O Memory Management Unit
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
03:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43d0 (rev 01)
03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller (rev 01)
03:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge (rev 01)
16:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
16:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
16:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
16:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
16:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
16:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
18:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
1a:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
1c:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
1d:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 710B] (rev a1)
1d:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
1e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
1e:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
1e:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller
1f:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
1f:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
1f:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
Last edited by Elleni on Wed Feb 05, 2020 6:13 pm, edited 2 times in total.
szatox
Advocate
Posts: 3858
Joined: Tue Aug 27, 2013 12:35 pm

Post by szatox » Tue Feb 04, 2020 7:48 pm

You mentioned 3 machines in your setup; which one is slow?
If the problem exists exclusively on your Gentoo VM: are you using virtio drivers? I think Proxmox uses an emulated IDE controller by default.
If the hypervisor is slow, what encryption algorithm do you use there? Is it natively supported by your hardware? Have you tested its performance on your hardware?
Also, do you use some funny setup that amplifies I/O on your HDDs?
Elleni wrote:So I believe that either there is a misconfiguration, or maybe a defective component in the server?
Anything interesting in dmesg?
Elleni
Veteran
Posts: 1298
Joined: Tue May 23, 2006 10:56 pm

Post by Elleni » Tue Feb 04, 2020 8:04 pm

We ordered two Ryzen 3rd-gen machines, each of which will have 2x2 TB NVMe drives, but those have not been delivered yet.

This is just one hosted 2nd-gen Ryzen 8-core machine, which will serve as a backup node, where I installed Proxmox over Debian Buster. The VM is a Windows 10 VM. The controller is VirtIO SCSI and the virtual disk is qcow2 with writeback cache. This VM will primarily be used for a business application that uses MS SQL Express; that's why, once I have better read/write performance, I'd like to look further into whether there is something to optimize for DB usage. On the other hand, with an average of 2-3 MB/s read/write performance, there is no need to look into optimizing anyway. :wink:

Here is what's going on while copying a file on the same disk within Windows:

Code:

iostat -k 1 2
Linux 5.3.13-2-pve (hrs) 	04.02.2020 	_x86_64_	(16 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.07    0.00    0.56    6.19    0.00   91.18

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             100.28      1093.32      4586.48   43672980  183207555
sdb              91.22       576.12      4586.50   23013376  183208451
md0             131.35      1670.71      4592.52   66736792  183449030
dm-0            102.77      1576.37      4376.91   62968461  174836324

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.07    0.00    0.31    5.26    0.00   93.36

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              66.00       452.00      3504.00        452       3504
sdb              63.00       764.00      2608.00        764       2608
md0              79.00      1216.00      2604.00       1216       2604
dm-0             79.00      1216.00      2620.00       1216       2620
I created the encrypted partition following the Gentoo wiki.

Code:

cryptsetup luksDump /dev/md0p3
LUKS header information
Version:       	2
Epoch:         	5
Metadata area: 	16384 [bytes]
Keyslots area: 	16744448 [bytes]
UUID:          	9a2bcc12-27a0-475f-9793-1b892b3a785b
Label:         	(no label)
Subsystem:     	(no subsystem)
Flags:       	(no flags)

Data segments:
  0: crypt
	offset: 16777216 [bytes]
	length: (whole device)
	cipher: aes-xts-plain64
	sector: 512 [bytes]

Keyslots:
  1: luks2
	Key:        512 bits
	Priority:   normal
	Cipher:     aes-xts-plain64
	Cipher key: 512 bits
	PBKDF:      argon2i
	Time cost:  4
	Memory:     1048576
	Threads:    4
	Salt:       xx yy zz 
	AF stripes: 4000
	AF hash:    sha256
	Area offset:290816 [bytes]
	Area length:258048 [bytes]
	Digest ID:  0
Tokens:
Digests:
  0: pbkdf2
	Hash:       sha256
	Iterations: 197397
	Salt:       xx yy zz 
	Digest:    xx yy zz
dmesg

It can't be normal performance for this setup, can it?

Code:

cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda2[0] sdb2[1]
      1928159232 blocks super 1.2 [2/2] [UU]
      bitmap: 5/15 pages [20KB], 65536KB chunk

unused devices: <none>

Code:

cat /sys/block/sda/queue/scheduler 
[mq-deadline] none

Code:

sysctl -a | grep -i raid
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000

Code:

hdparm -T -t /dev/sda

/dev/sda:
 Timing cached reads:   19782 MB in  2.00 seconds = 9903.88 MB/sec
 Timing buffered disk reads: 210 MB in  3.05 seconds =  68.83 MB/sec

Code:

/dev/sdb:
 Timing cached reads:   22674 MB in  2.00 seconds = 11353.60 MB/sec
 Timing buffered disk reads: 400 MB in  3.01 seconds = 132.92 MB/sec

Code:

/dev/md0:
 Timing cached reads:   19846 MB in  2.00 seconds = 9935.15 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 384 MB in  3.00 seconds = 127.85 MB/sec
And finally the encrypted partition:

Code:

/dev/md0p3:
 Timing cached reads:   19500 MB in  2.00 seconds = 9762.14 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 384 MB in  3.00 seconds = 127.87 MB/sec

Code:

fdisk -l /dev/md0p3
Disk /dev/md0p3: 1.8 TiB, 1910130081792 bytes, 3730722816 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Code:

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      1028

Code:

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      1028         -
I hope this information helps to find out what's going on. I mean, granted that qcow2 and virtualization in general decrease performance, but I don't think a drop to 2-3 MB/s is acceptable.
Elleni
Veteran
Posts: 1298
Joined: Tue May 23, 2006 10:56 pm

Post by Elleni » Tue Feb 04, 2020 10:18 pm

As a comparison, I fired up a Windows 10 VM on my local Gentoo system, which is a 6-core Ryzen 1st gen. My Gentoo sits on an SSD, though, but I have a 3x1 TB RAID 5 setup for the unencrypted user home. Here the Win 10 VM is built on libvirt, and its virtual disk sits on the RAID 5, not the SSD. I also created a 20 GB SCSI virtio disk on my RAID as storage and copied a file within the same disk. With SCSI it is even faster: 8.7 GB copied within a minute.

It took barely a minute to copy 2 GB on the system virtio-blk disk, so I think there is probably something wrong with my Proxmox VM setup. While I must admit that my Gentoo system sits on an SSD, the storage pool I created for comparison is the said 3x1 TB spinning-disk setup, and it is not encrypted. But the difference in read/write speed is huge.

Edit: while comparing the device manager of the two VMs, I see that the Proxmox VM shows a "QEMU QEMU HARDDISK SCSI Disk Device", while the much faster VM shows a "Red Hat VirtIO SCSI Disk Device". I will check if I can change that on Proxmox, see how it performs then, and report back. I will also try to scp the VM here and start it in libvirt, and the other way round, copy my fast VM to Proxmox, to finally find out what's going on.
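For reference, Proxmox exposes the controller type through its qm CLI. A sketch of the switch, assuming a hypothetical VM ID of 100 (use the real ID from "qm list"):

```shell
# Show the current controller type and disk slots for the VM.
qm config 100 | grep -E 'scsihw|scsi0|ide0|sata0'

# Switch the VM to the paravirtualized SCSI controller.
qm set 100 --scsihw virtio-scsi-pci

# Note: a Windows guest needs the virtio-win drivers installed
# *before* its boot disk is moved to VirtIO, or it will fail to boot.
```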
Elleni
Veteran
Posts: 1298
Joined: Tue May 23, 2006 10:56 pm

Post by Elleni » Wed Feb 05, 2020 6:17 pm

Converting the disk to raw brought back the expected performance: I could copy a 7 GB file in about 3 minutes. I was not aware that qcow2 performance can degrade so dramatically, especially when taking snapshots often. The 120 GB qcow2 image is already 222 GB in size. I will check whether the said qcow2 also performs badly on my Gentoo libvirt installation, as I would now expect.

So I guess snapshotting is not something to do in a production scenario, right? I will check whether performance comes back on qcow2 when deleting snapshots. What else is there to know about qcow2? *still reading and learning* But if snapshotting is off the table, I can stick to the raw format directly, as the snapshot ability was the only reason I chose qcow2 in the first place.

This is the said file, auto-created by Proxmox. Is there anything one could try to avoid such a dramatic decrease in performance? Is it totally normal that it gets this slow as the number of snapshots increases?

Code:

qemu-img info /path_to/name.qcow2 
image: /path_to/name.qcow2
file format: qcow2
virtual size: 120 GiB (128849018880 bytes)
disk size: 108 GiB
cluster_size: 65536
Snapshot list:
ID   TAG              VM SIZE   DATE                  VM CLOCK
1    Snapshotname1    0 B       2020-01-31 13:28:00   00:24:04.579
2    Snapshotname2    0 B       2020-01-31 19:16:44   00:02:15.000
3    Snapshotname3    0 B       2020-02-01 00:55:16   00:28:19.986
4    Snapshotname4    0 B       2020-02-02 23:49:41   05:47:52.731
5    Snapshotname5    0 B       2020-02-04 10:18:50   00:27:10.907
6    Snapshotname6    0 B       2020-02-04 11:37:53   01:46:09.962
7    Snapshotname7    0 B       2020-02-04 13:43:36   01:52:49.401
8    Snapshotname8    0 B       2020-02-04 15:59:06   03:05:04.159
9    Snapshotname9    0 B       2020-02-04 20:11:53   07:17:49.147
10   Snapshotname10   0 B       2020-02-05 01:11:23   12:16:58.928
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
Will the performance come back if I delete all snapshots but maybe one? Still reading; maybe it is a problem of a too-small L2 cache size, and/or cluster size, preallocation, lazy refcounts, or other settings?
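For experimenting along those lines, qemu-img can list and delete internal snapshots and recreate the image with tuned options. A sketch, reusing the placeholder paths from the output above:

```shell
# List and delete internal snapshots one by one.
qemu-img snapshot -l /path_to/name.qcow2
qemu-img snapshot -d Snapshotname1 /path_to/name.qcow2

# Copying the image compacts it and drops any remaining snapshots.
qemu-img convert -p -O qcow2 /path_to/name.qcow2 /path_to/name-compacted.qcow2

# Recreating with larger clusters and metadata preallocation reduces
# L2 table churn on big images.
qemu-img create -f qcow2 \
    -o cluster_size=2M,preallocation=metadata,lazy_refcounts=on \
    /path_to/fresh.qcow2 120G
```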
AJM
Apprentice
Posts: 195
Joined: Wed Sep 25, 2002 7:46 pm
Location: Aberdeen, Scotland

Post by AJM » Thu Feb 06, 2020 5:39 pm

Elleni wrote:So I guess snapshotting is not something to do in a production scenario, right? I will try if performance comes back on qcow2 when deleting snapshots. Will the performance come back if deleting all snapshots but maybe one? Still reading, maybe a problem of too small L2 cache size and/or cluster size/preallocation lazy refcounts and/or other settings?
I'd be interested to know what you find. I have one production VM running on qcow2, but I just shut it down periodically and copy the image file for backup purposes before doing "risky" updates etc., rather than using snapshots.

I believe it's similar on Windows / Hyper-V - I have a few server VMs on Hyper-V elsewhere and have been warned not to use snapshots / checkpoints as they will kill performance. I haven't risked trying it to confirm!

It's very annoying, since as you say this is one of the major advantages of virtualisation: snapshot the running server, perform changes, discover a snag, just revert.
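For what it's worth, external snapshots (a read-only backing file plus a thin overlay) are often suggested as a cheaper alternative to qcow2's internal snapshots for exactly this workflow. A sketch with hypothetical file names:

```shell
# Freeze the current image as a backing file and send all new
# writes to a thin overlay.
qemu-img create -f qcow2 -b server.qcow2 -F qcow2 server-overlay.qcow2

# Revert: delete the overlay and recreate it.
# Keep the changes: merge the overlay back into the backing file.
qemu-img commit server-overlay.qcow2
```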
TigerJr
Guru
Posts: 540
Joined: Tue Jun 19, 2007 9:37 am

Post by TigerJr » Fri Feb 07, 2020 2:07 pm

Elleni wrote:Converting the disk to raw brought back the expected performance. [...] Will the performance come back if deleting all snapshots but maybe one? Still reading, maybe a problem of too small L2 cache size and/or cluster size/preallocation lazy refcounts and/or other settings?
What does pveperf show on Proxmox?
Do not use gentoo, it die
Elleni
Veteran
Posts: 1298
Joined: Tue May 23, 2006 10:56 pm

Post by Elleni » Sat Feb 15, 2020 11:49 pm

Code:

CPU BOGOMIPS: 102386.08
REGEX/SECOND: 2771835
HD SIZE: 27.37 GB (/dev/md0p1)
BUFFERED READS: 100.54 MB/sec
AVERAGE SEEK TIME: 12.77 ms
FSYNCS/SECOND: 11.88
DNS EXT: 29.05 ms
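That FSYNCS/SECOND value looks strikingly low for this class of hardware; healthy spinning-disk mirrors usually manage at least a few dozen. fio can measure the same kind of synchronous 4k write load independently; a sketch (the target directory is a placeholder for the real mount point of the encrypted array):

```shell
# Synchronous 4k writes, one fsync per write: roughly what pveperf's
# FSYNCS/SECOND measures.
fio --name=fsync-test --directory=/mnt/storage --size=256M \
    --bs=4k --rw=write --ioengine=psync --fsync=1 \
    --runtime=30 --time_based
# Single-digit IOPS here would point at the storage stack itself,
# independent of qcow2 or the guest.
```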