
ndt-users - Re: Using NDT with 10 gigabit interfaces





  • From: Aaron Brown <>
  • To: <>
  • Cc: Brian Tierney <>, <>
  • Subject: Re: Using NDT with 10 gigabit interfaces
  • Date: Wed, 15 May 2013 16:27:51 -0400

It's almost definitely the software. The NDT server does more work during the server->client test, so I'd imagine that's the limitation.
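
A quick way to check that (a sketch, assuming a stock NDT install where the server process is called web100srv and that sysstat's pidstat is available): watch the server's CPU while the S2C test runs.

# on the NDT server, while a server->client test is in progress
pidstat -u -p $(pgrep web100srv) 1

If the process pins a core near 100% while throughput sits at ~1.4 Gb/s, the software, not the path, is the limit.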

Cheers,
Aaron

On May 15, 2013, at 4:25 PM, Byron Hicks <> wrote:

I'm reasonably certain that I have a clean path.

Both boxes are running NDT, and I get the same result in both directions:

Houston:

running 10s outbound test (client to server) . . . . .  9123.91 Mb/s
running 10s inbound test (server to client) . . . . . . 1434.11 Mb/s

Dallas:

running 10s outbound test (client to server) . . . . .  8953.05 Mb/s
running 10s inbound test (server to client) . . . . . . 1440.18 Mb/s

If it were a traffic loss issue, I would expect that the
outbound/inbound numbers would flip, with the lower number being on the
"leg" of the duplex path that had the traffic loss.

But that's not what I see. Client to server is 9 Gb/s and server to client is
1.4 Gb/s, regardless of which NDT server I'm testing from/to. And considering
that I'm getting 9 Gb/s on a 10 Gb/s link using iperf in both directions,
I'm pretty sure packet loss is not a factor.
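
The iperf runs I mean were plain memory-to-memory TCP tests, something like this (iperf2 syntax, where -r repeats the test in the reverse direction; exact flags vary by iperf version):

iperf -s                      # on the far end
iperf -c <far-end> -t 10 -r   # on the near end: 10s test, then reverse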

How do I interpret the following:

Information [S2C]: Packet queuing detected: 80.16% (remote buffers)

Where is the packet queuing happening?
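
My naive reading of that figure, assuming (I haven't checked the NDT source) that the percentage is the share of the server's send rate that never reached my client during the test window:

# delivered to client:   1434.11 Mb/s
# reported as queued:    80.16%
# implied server rate:   1434.11 / (1 - 0.8016) ~= 7230 Mb/s

i.e. the server appears to be writing at roughly 7 Gb/s into buffers somewhere, which is why I'd like to know whose buffers "remote" refers to.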


On 05/15/2013 01:37 PM, Brian Tierney wrote:
Another possibility: I've seen cases where, on a path with packet loss, different clients seem to trigger different loss patterns.

For example, here is a run on a clean path:


web100clt -n ps-lax-10g.cenic.net -b 33554432
running 10s outbound test (client to server) . . . . .  2321.11 Mb/s
running 10s inbound test (server to client) . . . . . . 2802.95 Mb/s

vs bwctl:

bwctl -c ps-lax-10g.cenic.net -fm
bwctl: Using tool: iperf
[ 14] local 137.164.28.105 port 5001 connected with 198.129.254.98 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 14]  0.0-10.0 sec  2984 MBytes  2496 Mbits/sec

Performance is similar.

----------

And here are the results for a path with packet loss:

web100clt -n ps-lax-10g.cenic.net -b 33554432
running 10s outbound test (client to server) . . . . .  18.06 Mb/s
running 10s inbound test (server to client) . . . . . . 2492.69 Mb/s

bwctl -c ps-lax-10g.cenic.net -fm
[ 14] local 137.164.28.105 port 5001 connected with 198.129.254.150 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 14]  0.0-10.3 sec   552 MBytes   450 Mbits/sec

Here iperf does 30x better than NDT (and, btw, nuttcp results agree with the NDT results in this case).

My guess is that different tools have different burst characteristics, and these trigger different amounts of packet loss.
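
One crude way to test that theory, sketched with iperf2's UDP mode (assumes an iperf UDP server is listening on the far end; flags vary by iperf version): step the offered rate and note where loss starts.

# step the UDP send rate and check the reported loss at each step
for rate in 1000m 2000m 4000m; do
    iperf -u -c ps-lax-10g.cenic.net -b $rate -t 10
done

If loss shows up well below the TCP rates the tools achieve, it's probably bursts rather than average rate that trigger it.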


--
Byron Hicks
Lonestar Education and Research Network
office: 972-883-4645
google: 972-746-2549
aim/skype: byronhicks



ESnet/Internet2 Focused Technical Workshop
Network Issues for Life Sciences Research
July 17 - 18, 2013, Berkeley CA
http://events.internet2.edu/2013/ftw-life-sciences/



