
ndt-users - Re: 10GigE ndt server (3.6.3) issues

  • From: "Hao, Justin C" <>
  • To: Rich Carlson <>
  • Cc: "" <>
  • Subject: Re: 10GigE ndt server (3.6.3) issues
  • Date: Wed, 29 Sep 2010 08:37:11 -0500
  • Accept-language: en-US

Howdy Rich,

I've tried both the command line and the web client. Only the web client
has returned the negative values, but the command line client has also
returned a wide range of results. What is the recommended Java RE to use?

-----
Justin Hao
CCNA
Network Engineer, ITS Networking
The University of Texas at Austin

-----

On Sep 29, 2010, at 8:30 AM, Rich Carlson wrote:

> Justin;
>
> The Web100 system uses 32-bit counters, so I suspect the negative speeds
> come from the use of signed int vars instead of unsigned.
>
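A minimal sketch of that failure mode (illustrative C only, not the actual
NDT/Web100 source; the names and numbers are invented): a 10-second test at
these loopback speeds moves well over 2^31 bytes, so a 32-bit counter delta
read back through a signed int wraps negative on common platforms.

    /* sketch: an unsigned 32-bit counter delta misread as signed */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint32_t start = 0;           /* counter value at test start */
        uint32_t end = 3000000000u;   /* ~3 GB moved: more than 2^31 */

        uint32_t ok = end - start;              /* correct: 3000000000 */
        int32_t bad = (int32_t)(end - start);   /* wraps: -1294967296  */

        printf("unsigned delta: %" PRIu32 " bytes\n", ok);
        printf("signed delta:   %" PRId32 " bytes\n", bad);
        return 0;
    }

Divided over a 10-second window, a negative delta like that turns straight
into a negative throughput figure.
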
> When testing on the local box, the traffic goes through the loopback
> interface (lo0) instead of the NIC. This means you are testing the OS
> and its memory management system more than anything else.
> Changing/setting variables on the ethX interface will have no effect.
>
> You should not set the tcp_mem value to 16 M. The value for this
> variable is in pages, NOT bytes. See the
> Documentation/networking/ip-sysctl.txt file in the kernel source tree
> for more details.
>
> tcp_mem - vector of 3 INTEGERs: min, pressure, max
>     min: below this number of pages TCP is not bothered about its
>     memory appetite.
>
>     pressure: when amount of memory allocated by TCP exceeds this
>     number of pages, TCP moderates its memory consumption and enters
>     memory pressure mode, which is exited when memory consumption
>     falls under "min".
>
>     max: number of pages allowed for queueing by all TCP sockets.
>
>     Defaults are calculated at boot time from amount of available
>     memory.
>
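A quick sketch of the byte-to-page arithmetic (plain POSIX C, assuming a
typical 4 KiB page; this is not from the kernel docs):

    /* sketch: convert a byte budget into the pages tcp_mem expects */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);  /* typically 4096 on x86 */
        long bytes = 16L * 1024 * 1024;     /* a 16 MB budget */

        printf("page size: %ld bytes\n", page);
        printf("16 MB = %ld pages\n", bytes / page);  /* 4096 pages */
        return 0;
    }

By that arithmetic, the posted net.ipv4.tcp_mem = 16777216 16777216
16777216 asks for 16777216 pages, roughly 64 GB at a 4 KiB page size,
not 16 MB.
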
>
> Are you using the command line client (web100clt) or the Java Applet
> (via the browser) to run these tests? It was recently reported that a
> site was getting high variability in the S2C tests with the Java client.
> It turned out that they had 2 Java consoles installed. Removing the
> older console cleared up the problem.
>
> Regards;
> Rich
>
> On 9/27/2010 6:49 PM, Hao, Justin C wrote:
>> So I'm setting up a pair of NDT test servers for our new datacenter and
>> running into some hurdles.
>>
>> First off, I haven't connected them to each other (or to a 10Gig network),
>> so all my testing has been on a single box running loopback tests against
>> itself (I have no other 10GigE hosts available to me at the moment).
>>
>> I'm running CentOS 5.5 with a patched 2.6.35 kernel and the matching
>> web100 version, etc.
>>
>> It's currently hooked up via 1GigE, and I've run several different clients
>> against it, getting good results for 1Gig performance.
>>
>> I'm running loopback tests to tweak TCP settings while I wait for our
>> 10Gig environment to be made ready. I'm getting 15-18Gb/s for C2S, but for
>> S2C I'm getting numbers all over the place, from 3.5Gb/s down to 500Mb/s.
>> Most particularly odd, I'm seeing negative numbers in some of the test
>> output.
>>
>> I welcome any comments and suggestions for tuning this server (it's a Dell
>> R610 with an Intel 10GigE adapter).
>>
>> Note: I've also configured the Ethernet interface to use a 9000-byte MTU
>> and a txqueuelen of 10000.
>>
>> Server TCP settings:
>>
>> # increase TCP max buffer size settable using setsockopt()
>>
>> net.core.rmem_max = 16777216
>> net.core.wmem_max = 16777216
>>
>> # increase Linux autotuning TCP buffer limits
>> # min, default, and max number of bytes to use
>> # set max to 16MB for 1GE, and 32M or 64M for 10GE
>>
>> net.ipv4.tcp_mem = 16777216 16777216 16777216
>> net.ipv4.tcp_rmem = 10240 87380 16777216
>> net.ipv4.tcp_wmem = 10240 65536 16777216
>>
>> net.ipv4.tcp_window_scaling = 1
>>
>> # don't cache ssthresh from previous connection
>> net.ipv4.tcp_no_metrics_save = 1
>>
>> # recommended to increase this for 10G NICS
>> net.core.netdev_max_backlog = 262144
>>
>> NDT Output:
>>
>> TCP/Web100 Network Diagnostic Tool v3.6.3
>> Click START to start the test
>>
>> ** Starting test 1 of 1 **
>> Connecting to '127.0.0.1' [/127.0.0.1] to run test
>> Connected to: 127.0.0.1-- Using IPv4 address
>> Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done.
>> Checking for firewalls . . . . . . . . . . . . . . . . . . . Done.
>> running 10s outbound test (client-to-server [C2S]) . . . . . 13744.22Mb/s
>> running 10s inbound test (server-to-client [S2C]) . . . . . . -83702.57kb/s
>> Server unable to determine bottleneck link type.
>> [S2C]: Packet queueing detected
>>
>> Click START to re-test
>>
>> ** Starting test 1 of 1 **
>> Connecting to '127.0.0.1' [/127.0.0.1] to run test
>> Connected to: 127.0.0.1-- Using IPv4 address
>> Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done.
>> Checking for firewalls . . . . . . . . . . . . . . . . . . . Done.
>> running 10s outbound test (client-to-server [C2S]) . . . . . 12876.05Mb/s
>> running 10s inbound test (server-to-client [S2C]) . . . . . . 1006.86Mb/s
>> Server unable to determine bottleneck link type.
>> [S2C]: Packet queueing detected
>>
>> Click START to re-test
>>
>> ** Starting test 1 of 1 **
>> Connecting to '127.0.0.1' [/127.0.0.1] to run test
>> Connected to: 127.0.0.1-- Using IPv4 address
>> Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done.
>> Checking for firewalls . . . . . . . . . . . . . . . . . . . Done.
>> running 10s outbound test (client-to-server [C2S]) . . . . . 18466.0Mb/s
>> running 10s inbound test (server-to-client [S2C]) . . . . . . -1710004.63kb/s
>> Server unable to determine bottleneck link type.
>> [S2C]: Packet queueing detected
>>
>> Click START to re-test
>>
>>
>> -----
>> Justin Hao
>> CCNA
>> Network Engineer, ITS Networking
>> The University of Texas at Austin
>>
>> -----
>>
>>



