Re: [perfsonar-user] 100G perfSONAR Performance


  • From: Chris Konger - NOAA Affiliate <>
  • To: Tim Chown <>, Eli Dart <>
  • Cc: "Fedorka, Shayne" <>, "" <>, "" <>
  • Subject: Re: [perfsonar-user] 100G perfSONAR Performance
  • Date: Tue, 22 Jun 2021 16:06:40 -0600


It IS possible to achieve throughputs higher than 30-40 Gbps, but it requires enabling DPDK, SR-IOV, etc. Many third-party firewall applications with 100G NICs achieve their higher line rates by enabling DPDK (as long as the underlying NIC hardware is Intel or Mellanox).

But that's only relevant if your application can use those features ... and not many open-source apps have enabled that functionality.
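
As a rough illustration, SR-IOV virtual functions on a ConnectX-5 can be created through sysfs; this is a minimal sketch, reusing the interface name from the output further down and an arbitrary VF count:

$ cat /sys/class/net/enp179s0f0/device/sriov_totalvfs     # how many VFs the NIC supports
$ echo 4 | sudo tee /sys/class/net/enp179s0f0/device/sriov_numvfs    # create 4 VFs (arbitrary count)
$ lspci | grep -i "Virtual Function"                      # confirm the VFs enumerated on the PCIe bus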

I think at one point the UDT developers wanted to add DPDK support ... to get something close to Aspera's performance ... but I'm not sure whether that was ever completed.

Chris Konger
Sr Network Engineer - Cloud (Contractor)

NOAA
Mail stop: N-Wave
325 Broadway
Boulder, CO 80305
Phone: 303-497-7125

To open "New Service" or "Request Support" tickets ...
https://sn-tools.grnoc.iu.edu/noaa-request/


On 6/18/21 2:47 AM, Tim Chown (via perfsonar-user Mailing List) wrote:
Hi,

There’s a growing interest in 100G perfSONAR, so this is a very interesting topic.

Our experience is that iperf2 is more ‘friendly’ for higher throughput, as it seems a little smarter about how it distributes its multiple streams across processors, whereas iperf3 needs additional parameters to be set.  perfSONAR also supports Ethr for throughput testing as of 4.3.0, which we found to perform very well as an alternative to either iperf version, albeit using more CPU.
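
For example, assuming pscheduler 4.3.0+ on both hosts, the tool can be selected explicitly (the tool names 'iperf2' and 'ethr' are my assumption, and the addresses are the ones from Shayne's test below):

$ pscheduler task --tool iperf2 throughput -s 172.16.10.10 -d 172.16.10.20 -b 100G
$ pscheduler task --tool ethr throughput -s 172.16.10.10 -d 172.16.10.20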

Some best practice guidance for 100G throughput tests would be useful.

Tim

On 17 Jun 2021, at 21:59, Eli Dart  wrote:

What happens if you run two streams?

It would be good to know if you're throughput-limited globally or per-stream....
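
A two-stream run can be requested with the throughput test's parallel-streams option, e.g. (a sketch, reusing the addresses from the original test):

$ pscheduler task throughput -s 172.16.10.10 -d 172.16.10.20 -P 2

If the aggregate roughly doubles, the limit is per-stream (e.g. a single CPU core saturating); if it stays around 30-40 Gbps, something global (NIC, PCIe, or host-wide) is the bottleneck.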

Thanks,

Eli



On Thu, Jun 17, 2021 at 9:07 AM "Fedorka, Shayne"  wrote:
I have a new 100G perfSONAR deployment in the early stages of testing, and I am consistently getting an average throughput between 30 and 40 Gbps. I’ve tried various 100G tuning configurations (increased TCP buffer sizes, set the CPU governor to performance, updated the NIC driver, etc.). I’m wondering if anyone has suggestions as to what else I should look at to get better performance.
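
For reference, that sort of tuning looks roughly like the following; the values are illustrative (along the lines of ESnet's fasterdata guidance for 100G hosts), not necessarily the exact ones applied here:

$ sudo sysctl -w net.core.rmem_max=2147483647
$ sudo sysctl -w net.core.wmem_max=2147483647
$ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 2147483647"
$ sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 2147483647"
$ sudo cpupower frequency-set -g performance    # CPU governor
$ ethtool -i enp179s0f0                         # check driver/firmware version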

 

I have two servers connected to the same NVIDIA/Mellanox SN2010 switch on the same LAN. The servers are identical with the following hardware:

	• Supermicro SuperServer
	• 96GB Samsung Memory (6 x 16GB)
	• NVIDIA/Mellanox ConnectX-5 Ethernet Card
	• Intel 480GB SSD
	• Intel Xeon Gold 3.6 GHz (4.4 GHz turbo) 8-core, 16-thread processor
 

[perfsonar-100g-a ~]$ pscheduler task throughput -s 172.16.10.10 -d 172.16.10.20 -b 100G
Submitting task...
Task URL:
https://172.16.10.10/pscheduler/tasks/4050bb29-b65a-476d-8d4d-6dd1b5c3668b
Running with tool 'iperf3'
Fetching first run...

Next scheduled run:
https://172.16.10.10/pscheduler/tasks/4050bb29-b65a-476d-8d4d-6dd1b5c3668b/runs/2739b7fa-03c3-4f23-860e-e8c3777f9c95
Starts 2021-06-17T14:57:26Z (~7 seconds)
Ends   2021-06-17T14:57:45Z (~18 seconds)
Waiting for result...

* Stream ID 5
Interval       Throughput     Retransmits    Current Window
0.0 - 1.0      43.55 Gbps     0              15.88 MBytes
1.0 - 2.0      45.43 Gbps     0              19.59 MBytes
2.0 - 3.0      45.79 Gbps     0              19.59 MBytes
3.0 - 4.0      44.30 Gbps     0              30.67 MBytes
4.0 - 5.0      30.59 Gbps     0              30.67 MBytes
5.0 - 6.0      30.02 Gbps     0              30.67 MBytes
6.0 - 7.0      30.29 Gbps     0              30.67 MBytes
7.0 - 8.0      28.80 Gbps     0              30.67 MBytes
8.0 - 9.0      29.36 Gbps     0              30.67 MBytes
9.0 - 10.0     24.09 Gbps     0              30.67 MBytes

Summary
Interval       Throughput     Retransmits    Receiver Throughput
0.0 - 10.0     35.22 Gbps     0              35.01 Gbps

[perfsonar-100g-a ~]$ ifconfig
enp179s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 172.16.10.10  netmask 255.255.255.0  broadcast 172.16.10.255
        inet6 fe80::bace:f6ff:fe4e:c016  prefixlen 64  scopeid 0x20<link>
        ether b8:ce:f6:4e:c0:16  txqueuelen 10000  (Ethernet)
        RX packets 2500215  bytes 165614923 (157.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34468724  bytes 309429388449 (288.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[perfsonar-100g-a ~]$ sudo ethtool enp179s0f0
Settings for enp179s0f0:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: None RS
        Advertised link modes:  1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: RS
        Link partner advertised link modes:  Not reported
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 100000Mb/s
        Duplex: Full
        Port: FIBRE
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000004 (4)
                               link
        Link detected: yes

-- 

Shayne Fedorka

Network Engineer | NREL



-- 

Eli Dart, Network Engineer                          NOC: (510) 486-7600
ESnet Science Engagement Group                           (800) 333-7638
Lawrence Berkeley National Laboratory 



