
ndt-users - Re: Using NDT with 10 gigabit interfaces


Re: Using NDT with 10 gigabit interfaces


  • From: Byron Hicks <>
  • To: Brian Tierney <>
  • Cc: Aaron Brown <>,
  • Subject: Re: Using NDT with 10 gigabit interfaces
  • Date: Wed, 15 May 2013 15:25:48 -0500
  • Organization: Lonestar Education and Research Network

I'm reasonably certain that I have a clean path.

Both boxes are running NDT, and I get the same result in both directions:

Houston:

running 10s outbound test (client to server) . . . . . 9123.91 Mb/s
running 10s inbound test (server to client) . . . . . . 1434.11 Mb/s

Dallas:

running 10s outbound test (client to server) . . . . . 8953.05 Mb/s
running 10s inbound test (server to client) . . . . . . 1440.18 Mb/s

If this were a traffic-loss issue, I would expect the outbound/inbound
numbers to flip, with the lower number landing on the "leg" of the
duplex path that had the loss.

But they don't flip: client-to-server is 9 Gb/s and server-to-client is
1.4 Gb/s, regardless of which NDT server I'm testing from. And given
that I get 9 Gb/s on a 10 Gb/s link with iperf in both directions,
I'm fairly sure packet loss is not a factor.

How do I interpret the following:

Information [S2C]: Packet queuing detected: 80.16% (remote buffers)

Where is the packet queuing happening?
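For reference, here is one way a percentage like that could be computed. This is a sketch of my assumption -- that the figure is the share of the sender-observed rate that never shows up at the receiver, with the shortfall attributed to remote buffering -- not NDT's actual source code:

```python
def queuing_pct(sender_mbps: float, receiver_mbps: float) -> float:
    """Share of the sender-observed rate that never reached the receiver.

    Assumption: an S2C "packet queuing" percentage compares the rate the
    server believes it sent with the rate the client actually measured,
    and blames the difference on buffering. Illustrative sketch only.
    """
    if sender_mbps <= 0:
        return 0.0
    return max(0.0, (sender_mbps - receiver_mbps) / sender_mbps * 100.0)

# e.g. a server-side rate of 1000 Mb/s seen as 200 Mb/s at the client
# would report roughly 80% queuing:
print(round(queuing_pct(1000.0, 200.0), 2))  # 80.0
```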


On 05/15/2013 01:37 PM, Brian Tierney wrote:
> Another possibility is that I've seen cases where, on a path with packet
> loss, different clients seem to trigger different loss patterns.
>
> For example, here is on a clean path:
>
>
>> web100clt -n ps-lax-10g.cenic.net -b 33554432
> running 10s outbound test (client to server) . . . . . 2321.11 Mb/s
> running 10s inbound test (server to client) . . . . . . 2802.95 Mb/s
>
> vs bwctl:
>
>> bwctl -c ps-lax-10g.cenic.net -fm
> bwctl: Using tool: iperf
> [ 14] local 137.164.28.105 port 5001 connected with 198.129.254.98 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 14] 0.0-10.0 sec 2984 MBytes 2496 Mbits/sec
>
> performance is similar.
>
> ----------
>
> And here are the results for a path with packet loss:
>
>> web100clt -n ps-lax-10g.cenic.net -b 33554432
> running 10s outbound test (client to server) . . . . . 18.06 Mb/s
> running 10s inbound test (server to client) . . . . . . 2492.69 Mb/s
>
>> bwctl -c ps-lax-10g.cenic.net -fm
> [ 14] local 137.164.28.105 port 5001 connected with 198.129.254.150 port
> 5001
> [ ID] Interval Transfer Bandwidth
> [ 14] 0.0-10.3 sec 552 MBytes 450 Mbits/sec
>
> Here iperf does 30x better than NDT (and btw, nuttcp results agree with the
> NDT results in this case)
>
> My guess is that different tools have different burst characteristics, and
> these trigger different amounts of packet loss.
>
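The burst-sensitivity idea above can be illustrated with a toy drop-tail queue: a bottleneck buffer of fixed depth drains between bursts, so a bursty sender overflows it while a smooth sender carrying the same total traffic does not. Purely illustrative -- the queue depth, drain rate, and burst sizes below are invented, and this models neither iperf's nor NDT's actual pacing:

```python
def drops_for_bursts(burst_sizes, queue_limit, drain_per_gap):
    """Toy drop-tail model: the bottleneck queue drains `drain_per_gap`
    packets during each inter-burst gap; packets arriving to a full
    queue are dropped. All parameters here are made up."""
    queue = 0
    dropped = 0
    for burst in burst_sizes:
        queue = max(0, queue - drain_per_gap)  # drain between bursts
        for _ in range(burst):
            if queue < queue_limit:
                queue += 1
            else:
                dropped += 1
    return dropped

# Same 1000 packets total, different burstiness:
smooth = [10] * 100   # many small bursts -- fits in the queue
bursty = [100] * 10   # few large bursts -- overflows it
print(drops_for_bursts(smooth, queue_limit=50, drain_per_gap=20))  # 0
print(drops_for_bursts(bursty, queue_limit=50, drain_per_gap=20))  # 770
```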

--
Byron Hicks
Lonestar Education and Research Network
office: 972-883-4645
google: 972-746-2549
aim/skype: byronhicks



