[SOLVED] PCI-Express throughput
Vieri
l33t


Joined: 18 Dec 2005
Posts: 921

Posted: Mon Oct 21, 2019 7:32 am    Post subject: [SOLVED] PCI-Express throughput

Hi,

I'd like to know if my PCIe card is working at the maximum of its capabilities as far as throughput is concerned.

I can only connect to the system with ssh.

Dmidecode tells me that the card is connected to an "x16 PCI Express" slot.

lspci shows this:

Code:
07:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin D routed to IRQ 65
        Region 0: Memory at fdd80000 (32-bit, non-prefetchable) [size=512K]
        Region 2: I/O ports at 9000 [size=32]
        Region 3: Memory at fe100000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at fdd00000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] MSI-X: Enable+ Count=10 Masked-
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000
        Capabilities: [a0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <32us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [140 v1] Device Serial Number e8-ea-6a-ff-ff-0c-4c-1c
        Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-
                IOVSta: Migration-
                Initial VFs: 8, Total VFs: 8, Number of VFs: 0, Function Dependency Link: 03
                VF offset: 384, stride: 4, Device ID: 1520
                Supported Page Size: 00000553, System Page Size: 00000001
                Region 0: Memory at 0000000000000000 (64-bit, prefetchable)
                Region 3: Memory at 0000000000000000 (64-bit, prefetchable)
                VF Migration: offset: 00000000, BIR: 0
        Capabilities: [1a0 v1] Transaction Processing Hints
                Device specific mode supported
                Steering table in TPH capability structure
        Capabilities: [1d0 v1] Access Control Services
                ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
                ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
        Kernel driver in use: igb


What does "Width x4" mean in "LnkSta: Speed 5GT/s, Width x4"?

Does this mean I have a PCI-Express 2.0 slot, and that the card has a maximum throughput of 2.00 GB/s?

Thanks


Last edited by Vieri on Mon Oct 21, 2019 9:23 pm; edited 1 time in total
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 55304
Location: 56N 3W

Posted: Mon Oct 21, 2019 10:18 am

Vieri,

The speed on the Ethernet wire is 1 Gbit/s.
The speed of the PCIe interface is not a problem. Both are full duplex, and the bottleneck is the Ethernet wire speed.

I suspect that your
Code:
07:00.3 Ethernet controller
is a multiport device with at least four Ethernet ports, as you are showing us subfunction 3.

LnkCap is the card reporting its capabilities.
LnkSta is the card reporting how it is working right now.

Code:
LnkSta: Speed 5GT/s, Width x4
, says it's using 4 PCIe version 2 lanes at 5 GT/s each.
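
To put rough numbers on that (a back-of-the-envelope sketch only, assuming PCIe 2.0's 8b/10b line encoding and ignoring packet/protocol overhead):
Code:
# Rough PCIe 2.0 bandwidth estimate, per direction.
# Assumes 8b/10b line encoding; ignores TLP/DLLP protocol overhead.
raw_gt_per_lane = 5.0            # PCIe 2.0: 5 GT/s per lane (from LnkSta)
encoding = 8.0 / 10.0            # 8b/10b: 8 data bits per 10 line bits
lanes = 4                        # Width x4

gbit_per_s = raw_gt_per_lane * encoding * lanes   # 16 Gbit/s
gbyte_per_s = gbit_per_s / 8                      # 2.0 GB/s

print(f"~{gbit_per_s:.0f} Gbit/s (~{gbyte_per_s:.1f} GB/s) per direction")

So yes, roughly 2 GB/s each way, which is far more than four 1 Gbit/s Ethernet ports can ever fill.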
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Vieri
l33t


Joined: 18 Dec 2005
Posts: 921

Posted: Mon Oct 21, 2019 9:23 pm

Thanks!
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 6920

Posted: Mon Oct 21, 2019 11:30 pm

If your device isn't running at full speed, the kernel will explicitly point that out at boot:
Code:
Sep 13 18:48:38 [kernel] pci 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x8 link at 0000:00:02.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)

That's a PCIe 3.0 graphics card in a 2.0 x16 slot.
Code:
[  +0.000036] pci 0000:06:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x1 link at 0000:00:15.2 (capable of 4.000 Gb/s with 5 GT/s x1 link)

That example is a USB 3 controller hardwired onto a sketchy Zotac motherboard. Not something you can fix, but useful for figuring out why it's underperforming.
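
You can also read the same link information straight from sysfs without grepping dmesg. A quick sketch (the device address is just an example taken from the lspci output above; substitute your own):
Code:
#!/usr/bin/env python3
# Compare a PCI device's current link against what it can negotiate.
# Uses the standard sysfs link attributes; adjust the address for your device.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:07:00.3")  # example address from lspci

def attr(name):
    return (dev / name).read_text().strip()

print("current:", attr("current_link_speed"), "width x" + attr("current_link_width"))
print("maximum:", attr("max_link_speed"), "width x" + attr("max_link_width"))

If the two lines match, the slot isn't what's holding the device back.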
Vieri
l33t


Joined: 18 Dec 2005
Posts: 921

Posted: Tue Oct 22, 2019 7:12 am

OK, I'm getting this in my log:

Code:
[    3.009071] igb 0000:07:00.0: eth2: (PCIe:5.0Gb/s:Width x4) e8:ea:6a:0c:4c:1c
[    3.108850] igb 0000:07:00.1: eth3: (PCIe:5.0Gb/s:Width x4) e8:ea:6a:0c:4c:1d
[    3.208868] igb 0000:07:00.2: eth4: (PCIe:5.0Gb/s:Width x4) e8:ea:6a:0c:4c:1e
[    3.308894] igb 0000:07:00.3: eth5: (PCIe:5.0Gb/s:Width x4) e8:ea:6a:0c:4c:1f
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 55304
Location: 56N 3W

Posted: Tue Oct 22, 2019 7:50 am

Vieri,

At face value, your 4-port Ethernet card has a single PCIe lane per port.
That's plenty of bandwidth. As I said, the bottleneck is the Ethernet link itself.

I say "at face value" because a single PCIe:5.0Gb/s lane can keep 4 1G Ethernet ports busy but that's harder to implement on the PCIe bus end.
If you could put that card into a physical 4 lane slot that had only one lane wired, you may not notice the difference.
In theory, its enough bandwidth to keep all 4 Ethernet ports at their full data rate.

Putting what is, in effect, four single-lane Ethernet cards on one physical card is easier to design, though.
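
The arithmetic behind that, under the same rough assumptions as before (8b/10b encoding, protocol overhead ignored):
Code:
# One PCIe 2.0 lane versus four gigabit Ethernet ports, per direction.
lane_gbit = 5.0 * 8 / 10      # one 5 GT/s lane carries ~4 Gbit/s of data
gbe_gbit = 4 * 1.0            # four 1 Gbit/s Ethernet ports

print(f"one lane: ~{lane_gbit:.0f} Gbit/s, four GbE ports: {gbe_gbit:.0f} Gbit/s")
# With Width x4 the card has ~16 Gbit/s available -- a whole lane's worth per port.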
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.