ndt-users - 10GigE ndt server (3.6.3) issues

  • From: "Hao, Justin C" <>
  • To: "" <>
  • Subject: 10GigE ndt server (3.6.3) issues
  • Date: Mon, 27 Sep 2010 17:49:11 -0500

So I'm setting up a pair of NDT test servers for our new datacenter and
running into some hurdles.

First off, I haven't connected them to each other (or to a 10Gig network), so
all my testing has been on a single box running loopback tests against itself
(I have no other 10GigE hosts available to me at the moment).

I'm running CentOS 5.5 and a patched 2.6.35 kernel with the matching Web100
version, etc.
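
(As a sanity check that the Web100 patch is actually live: the patched kernel
exposes its per-connection stats under /proc/web100, so something like this
should confirm it:)

# confirm the running kernel is the patched build
uname -r
# the Web100 patch exposes its stats tree here; if this directory
# is missing, the patched kernel isn't the one running
ls /proc/web100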

It's currently hooked up via 1GigE; I've run several different clients
against it and I get good results for 1Gig performance.

I'm running loopback tests to tweak TCP settings while I wait for our 10Gig
environment to be made ready. I'm getting 15-18Gb/s for C2S, but for S2C I'm
getting numbers all over the place, from 3.5Gb/s down to 500Mb/s. Oddest of
all, I'm seeing negative numbers in some of the test output.
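
My first guess for the negatives is a counter wrapping somewhere, though I
haven't checked the NDT source: if the S2C calculation ever goes through a
signed 32-bit counter instead of Web100's 64-bit HC variants, it would wrap
about ten times over a 10s run at these rates. Quick arithmetic in bash:

# bytes moved by a 10s run at ~18Gb/s
echo $(( 18 * 10**9 / 8 * 10 ))   # 22500000000
# largest value a signed 32-bit counter can hold
echo $(( 2**31 - 1 ))             # 2147483647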

I welcome any comments and suggestions for tuning this server (it's a Dell
R610 with an Intel 10GigE adapter).

Note: I've also configured the Ethernet interface to use a 9000-byte MTU and a
txqueuelen of 10000, roughly as below.
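
(Something like this, where eth2 is just a stand-in for the actual interface
name:)

# eth2 is a placeholder for the real 10GigE interface name
ip link set dev eth2 mtu 9000
ip link set dev eth2 txqueuelen 10000
# or equivalently with ifconfig:
ifconfig eth2 mtu 9000 txqueuelen 10000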

Server TCP settings:

# increase TCP max buffer size settable using setsockopt()

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to 16MB for 1GE, and 32MB or 54MB for 10GE

# note: tcp_mem is measured in memory pages, not bytes
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_rmem = 10240 87380 16777216
net.ipv4.tcp_wmem = 10240 65536 16777216

net.ipv4.tcp_window_scaling = 1

# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1

# recommended to increase this for 10G NICs
net.core.netdev_max_backlog = 262144
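
(These live in /etc/sysctl.conf; after editing, something like this reloads
them and spot-checks what the kernel actually took:)

# reload /etc/sysctl.conf
sysctl -p
# spot-check a few of the values
sysctl net.core.rmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem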

NDT Output:

TCP/Web100 Network Diagnostic Tool v3.6.3
Click START to start the test

** Starting test 1 of 1 **
Connecting to '127.0.0.1' [/127.0.0.1] to run test
Connected to: 127.0.0.1-- Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done.
Checking for firewalls . . . . . . . . . . . . . . . . . . . Done.
running 10s outbound test (client-to-server [C2S]) . . . . . 13744.22Mb/s
running 10s inbound test (server-to-client [S2C]) . . . . . . -83702.57kb/s
Server unable to determine bottleneck link type.
[S2C]: Packet queueing detected

Click START to re-test

** Starting test 1 of 1 **
Connecting to '127.0.0.1' [/127.0.0.1] to run test
Connected to: 127.0.0.1-- Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done.
Checking for firewalls . . . . . . . . . . . . . . . . . . . Done.
running 10s outbound test (client-to-server [C2S]) . . . . . 12876.05Mb/s
running 10s inbound test (server-to-client [S2C]) . . . . . . 1006.86Mb/s
Server unable to determine bottleneck link type.
[S2C]: Packet queueing detected

Click START to re-test

** Starting test 1 of 1 **
Connecting to '127.0.0.1' [/127.0.0.1] to run test
Connected to: 127.0.0.1-- Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done.
Checking for firewalls . . . . . . . . . . . . . . . . . . . Done.
running 10s outbound test (client-to-server [C2S]) . . . . . 18466.0Mb/s
running 10s inbound test (server-to-client [S2C]) . . . . . . -1710004.63kb/s
Server unable to determine bottleneck link type.
[S2C]: Packet queueing detected

Click START to re-test


-----
Justin Hao
CCNA
Network Engineer, ITS Networking
The University of Texas at Austin

-----