[perfsonar-user] 100G perfSONAR Performance
- From: "Fedorka, Shayne" <>
- To: "" <>
- Subject: [perfsonar-user] 100G perfSONAR Performance
- Date: Thu, 17 Jun 2021 16:06:50 +0000
I have a new 100G perfSONAR deployment in the early stages of testing, and I am consistently seeing average throughput between 30 and 40 Gbps. I’ve tried various 100G tuning configurations (increasing the TCP buffer sizes, setting the CPU governor to performance, updating the NIC driver, etc.). Does anyone have suggestions for what else I should look at to get better performance?
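For context, host tuning of the kind mentioned above is usually applied via sysctl. The values below are an illustrative sketch in the style of common 100G tuning guidance, not the settings actually used on these hosts:

```shell
# /etc/sysctl.d/90-100g-tuning.conf -- example settings for 100G test hosts
# (illustrative values; tune for your RTT and memory, not taken from this post)

# Allow TCP windows large enough for a high bandwidth-delay product
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912

# Fair queueing with pacing; htcp (or bbr) is a common choice at 100G
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = htcp

# Path MTU probing helps when jumbo frames are in use end to end
net.ipv4.tcp_mtu_probing = 1
```

Apply with `sysctl -p /etc/sysctl.d/90-100g-tuning.conf` and verify with `sysctl net.ipv4.tcp_rmem`.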
I have two servers connected to the same NVIDIA/Mellanox SN2010 switch on the same LAN. The servers are identical with the following hardware:
[perfsonar-100g-a ~]$ pscheduler task throughput -s 172.16.10.10 -d 172.16.10.20 -b 100G
Submitting task...
Task URL: https://172.16.10.10/pscheduler/tasks/4050bb29-b65a-476d-8d4d-6dd1b5c3668b
Running with tool 'iperf3'
Fetching first run...

Next scheduled run:
https://172.16.10.10/pscheduler/tasks/4050bb29-b65a-476d-8d4d-6dd1b5c3668b/runs/2739b7fa-03c3-4f23-860e-e8c3777f9c95
Starts 2021-06-17T14:57:26Z (~7 seconds)
Ends   2021-06-17T14:57:45Z (~18 seconds)
Waiting for result...
* Stream ID 5
Interval       Throughput     Retransmits   Current Window
0.0 - 1.0      43.55 Gbps     0             15.88 MBytes
1.0 - 2.0      45.43 Gbps     0             19.59 MBytes
2.0 - 3.0      45.79 Gbps     0             19.59 MBytes
3.0 - 4.0      44.30 Gbps     0             30.67 MBytes
4.0 - 5.0      30.59 Gbps     0             30.67 MBytes
5.0 - 6.0      30.02 Gbps     0             30.67 MBytes
6.0 - 7.0      30.29 Gbps     0             30.67 MBytes
7.0 - 8.0      28.80 Gbps     0             30.67 MBytes
8.0 - 9.0      29.36 Gbps     0             30.67 MBytes
9.0 - 10.0     24.09 Gbps     0             30.67 MBytes
Summary
Interval       Throughput     Retransmits   Receiver Throughput
0.0 - 10.0     35.22 Gbps     0             35.01 Gbps
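One thing often tried in this situation (a suggestion, not something shown in the post): a single iperf3 TCP stream is typically limited by one CPU core, and at 100G that frequently caps out in roughly this 30-50 Gbps range. Running the same test with parallel streams can indicate whether the single stream, rather than the NIC or path, is the bottleneck. The commands below reuse the post's addresses; stream counts and core numbers are illustrative:

```shell
# Same pscheduler test with 4 parallel TCP streams for 20 seconds
pscheduler task throughput -s 172.16.10.10 -d 172.16.10.20 -P 4 -t PT20S

# Plain iperf3 equivalent, bypassing the scheduler; -A pins sender,receiver
# CPU affinity (core 4 on each side here is only an example)
iperf3 -c 172.16.10.20 -P 4 -t 20 -A 4,4
```

If aggregate throughput scales well with stream count, the limit is per-stream (CPU or TCP windowing) rather than the link.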
[perfsonar-100g-a ~]$ ifconfig enp179s0f0
enp179s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 172.16.10.10  netmask 255.255.255.0  broadcast 172.16.10.255
        inet6 fe80::bace:f6ff:fe4e:c016  prefixlen 64  scopeid 0x20<link>
        ether b8:ce:f6:4e:c0:16  txqueuelen 10000  (Ethernet)
        RX packets 2500215  bytes 165614923 (157.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34468724  bytes 309429388449 (288.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[perfsonar-100g-a ~]$ sudo ethtool enp179s0f0
Settings for enp179s0f0:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: None RS
        Advertised link modes:  1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: RS
        Link partner advertised link modes: Not reported
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 100000Mb/s
        Duplex: Full
        Port: FIBRE
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000004 (4)
                               link
        Link detected: yes
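Another frequent culprit worth checking (again an assumption about this setup, not something shown above): whether the test process runs on the same NUMA node as the NIC. Cross-node memory traffic alone can hold a 100G host in the 30-40 Gbps range. A quick check, with the node number in the last command being illustrative:

```shell
# Which NUMA node hosts the NIC? (-1 means a single-node system)
cat /sys/class/net/enp179s0f0/device/numa_node

# Which CPUs belong to each node?
lscpu | grep -i numa

# Re-run the test pinned to the NIC's node (node 0 here is an example)
numactl --cpunodebind=0 --membind=0 iperf3 -c 172.16.10.20 -P 4 -t 20
```

It can also help to confirm the NIC's IRQs are steered to that same node (e.g. with the vendor's `set_irq_affinity` script) rather than spread across both sockets.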
--
Shayne Fedorka
Network Engineer | NREL |
- [perfsonar-user] 100G perfSONAR Performance, Fedorka, Shayne, 06/17/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Saravanaraj Ayyampalayam, 06/17/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Eli Dart, 06/17/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Tim Chown, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Mark Feit, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Schopf, Jennifer M, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Fedorka, Shayne, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Chris Konger - NOAA Affiliate, 06/22/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Mark Feit, 06/18/2021
- Re: [perfsonar-user] 100G perfSONAR Performance, Tim Chown, 06/18/2021