perfsonar-user - Re: [perfsonar-user] 100G perfSONAR Performance
- From: Saravanaraj Ayyampalayam <>
- To: "Fedorka, Shayne" <>, "" <>
- Subject: Re: [perfsonar-user] 100G perfSONAR Performance
- Date: Thu, 17 Jun 2021 17:17:58 +0000
Shayne,
The limitation you are seeing is due to single-stream / single-thread performance. Even if you request multiple streams, iperf3 still uses a single core to push the data.
In our testing, it took at least 4 streams on the iperf tool to get to 100 Gb/s performance.
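(Not part of the original reply — a sketch of the two usual ways to get more streams, reusing the addresses from this thread. The flags shown are the standard pScheduler/iperf3 parallel-stream options; the core-pinning loop is a common workaround, not something the poster described:)

```shell
# (a) Ask pScheduler for parallel streams (-P maps to iperf3's -P):
pscheduler task throughput -s 172.16.10.10 -d 172.16.10.20 -P 4 -b 100G

# (b) Because a single iperf3 process serves all of its streams from one
#     thread, another approach is several independent iperf3 processes on
#     different ports, each pinned to its own core with taskset.
#     (Requires a matching "iperf3 -s -p <port>" per port on the far end.)
for i in 0 1 2 3; do
  taskset -c "$i" iperf3 -c 172.16.10.20 -p $((5201 + i)) -t 10 &
done
wait
```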
Raj Ayyampalayam
From: <> on behalf of "Fedorka, Shayne" <>

I have a new 100G perfSONAR deployment in the early stages of testing, and I am consistently getting an average throughput between 30 and 40 Gbps. I've tried various 100G tuning configurations (increased TCP buffer sizes, set the CPU governor to performance, updated the NIC driver, etc.). I'm wondering if anyone has suggestions as to what else I should look at to get better performance.
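(For readers of the archive — a sketch of the kind of host tuning being referred to. These values follow commonly published 100G guidance and are illustrative, not the poster's actual settings:)

```shell
# Large TCP buffers so a single flow can fill a high bandwidth-delay path:
sudo sysctl -w net.core.rmem_max=2147483647
sudo sysctl -w net.core.wmem_max=2147483647
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 2147483647"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 2147483647"

# Fair-queueing qdisc enables packet pacing, which helps at high rates:
sudo sysctl -w net.core.default_qdisc=fq

# Keep CPU clocks up during tests:
sudo cpupower frequency-set -g performance
```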
I have two servers connected to the same NVIDIA/Mellanox SN2010 switch on the same LAN. The servers are identical with the following hardware:
[perfsonar-100g-a ~]$ pscheduler task throughput -s 172.16.10.10 -d 172.16.10.20 -b 100G
Submitting task...
Task URL: https://172.16.10.10/pscheduler/tasks/4050bb29-b65a-476d-8d4d-6dd1b5c3668b
Running with tool 'iperf3'
Fetching first run...

Next scheduled run:
https://172.16.10.10/pscheduler/tasks/4050bb29-b65a-476d-8d4d-6dd1b5c3668b/runs/2739b7fa-03c3-4f23-860e-e8c3777f9c95
Starts 2021-06-17T14:57:26Z (~7 seconds)
Ends 2021-06-17T14:57:45Z (~18 seconds)
Waiting for result...
* Stream ID 5
Interval       Throughput     Retransmits   Current Window
0.0 - 1.0      43.55 Gbps     0             15.88 MBytes
1.0 - 2.0      45.43 Gbps     0             19.59 MBytes
2.0 - 3.0      45.79 Gbps     0             19.59 MBytes
3.0 - 4.0      44.30 Gbps     0             30.67 MBytes
4.0 - 5.0      30.59 Gbps     0             30.67 MBytes
5.0 - 6.0      30.02 Gbps     0             30.67 MBytes
6.0 - 7.0      30.29 Gbps     0             30.67 MBytes
7.0 - 8.0      28.80 Gbps     0             30.67 MBytes
8.0 - 9.0      29.36 Gbps     0             30.67 MBytes
9.0 - 10.0     24.09 Gbps     0             30.67 MBytes

Summary
Interval       Throughput     Retransmits   Receiver Throughput
0.0 - 10.0     35.22 Gbps     0             35.01 Gbps
[perfsonar-100g-a ~]$ ifconfig
enp179s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 172.16.10.10  netmask 255.255.255.0  broadcast 172.16.10.255
        inet6 fe80::bace:f6ff:fe4e:c016  prefixlen 64  scopeid 0x20<link>
        ether b8:ce:f6:4e:c0:16  txqueuelen 10000  (Ethernet)
        RX packets 2500215  bytes 165614923 (157.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34468724  bytes 309429388449 (288.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[perfsonar-100g-a ~]$ sudo ethtool enp179s0f0
Settings for enp179s0f0:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: None RS
        Advertised link modes:  1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: RS
        Link partner advertised link modes: Not reported
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 100000Mb/s
        Duplex: Full
        Port: FIBRE
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000004 (4)
                               link
        Link detected: yes
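(Archive note, not from the thread — a sketch of one more check that often explains sub-line-rate 100G results: NIC-to-CPU NUMA locality. The interface name is the one from the output above:)

```shell
# NUMA node the NIC's PCIe slot is attached to (-1 means single-node/unknown):
cat /sys/class/net/enp179s0f0/device/numa_node

# Which cores belong to which node:
numactl --hardware

# Pin the test to the NIC's node (node 0 assumed here for illustration):
numactl --cpunodebind=0 --membind=0 iperf3 -c 172.16.10.20 -P 4 -t 10
```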
--
Shayne Fedorka
Network Engineer | NREL |
- [perfsonar-user] 100G perfSONAR Performance, Fedorka, Shayne, 06/17/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Saravanaraj Ayyampalayam, 06/17/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Eli Dart, 06/17/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Tim Chown, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Mark Feit, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Schopf, Jennifer M, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Fedorka, Shayne, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Chris Konger - NOAA Affiliate, 06/22/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Mark Feit, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Tim Chown, 06/18/2021