ndt-users - Re: Using NDT with 10 gigabit interfaces

  • From: Brian Tierney <>
  • To: Aaron Brown <>
  • Cc: <>, <>
  • Subject: Re: Using NDT with 10 gigabit interfaces
  • Date: Wed, 15 May 2013 11:37:05 -0700


Another possibility: I've seen cases where, on a path with packet loss,
different clients trigger different loss patterns.

For example, here are results on a clean path:


>web100clt -n ps-lax-10g.cenic.net -b 33554432
running 10s outbound test (client to server) . . . . . 2321.11 Mb/s
running 10s inbound test (server to client) . . . . . . 2802.95 Mb/s

vs bwctl:

>bwctl -c ps-lax-10g.cenic.net -fm
bwctl: Using tool: iperf
[ 14] local 137.164.28.105 port 5001 connected with 198.129.254.98 port 5001
[ ID] Interval Transfer Bandwidth
[ 14] 0.0-10.0 sec 2984 MBytes 2496 Mbits/sec

Performance is similar.

----------

And here are the results for a path with packet loss:

>web100clt -n ps-lax-10g.cenic.net -b 33554432
running 10s outbound test (client to server) . . . . . 18.06 Mb/s
running 10s inbound test (server to client) . . . . . . 2492.69 Mb/s

>bwctl -c ps-lax-10g.cenic.net -fm
[ 14] local 137.164.28.105 port 5001 connected with 198.129.254.150 port 5001
[ ID] Interval Transfer Bandwidth
[ 14] 0.0-10.3 sec 552 MBytes 450 Mbits/sec

Here iperf does 30x better than NDT (and, by the way, nuttcp results agree
with the NDT results in this case).

My guess is that different tools have different burst characteristics, and
these trigger different amounts of packet loss.
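To make that guess concrete, here is a toy model (illustrative numbers only, nothing measured on this path): two senders with the same average rate, one paced and one bursty, feeding the same fixed-size tail-drop queue. The bursty sender overflows the queue and drops packets; the paced one does not.

```python
# Toy illustration: same average send rate, different burstiness,
# identical tail-drop queue. All numbers are made up for illustration.

def run(send_pattern, queue_cap=32, drain_per_tick=10):
    """Return packets lost to tail drop for a given per-tick send pattern."""
    queue = 0
    lost = 0
    for burst in send_pattern:
        queue = max(0, queue - drain_per_tick)    # queue drains each tick
        accepted = min(burst, queue_cap - queue)  # tail drop past capacity
        lost += burst - accepted
        queue += accepted
    return lost

paced = [10] * 100                              # 10 pkts every tick
bursty = [100, 0, 0, 0, 0, 0, 0, 0, 0, 0] * 10  # same average, 10x bursts

print("paced sender lost: ", run(paced))   # 0
print("bursty sender lost:", run(bursty))  # 680
```

Same 1000 packets offered either way, but only the bursty pattern overruns the queue, which is consistent with different tools seeing very different loss on the same path.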



On May 15, 2013, at 7:07 AM, Aaron Brown <> wrote:

> Since it's happening only on the server -> client side, my guess is that
> it's related to NDT taking web100 snapshots. My recollection is that the
> default is to check every 5ms. Try editing /etc/sysconfig/ndt and adding
> --snapdelay 25 to the WEB100SRV_OPTIONS line, and then restart NDT. This
> will back off that snapshotting so that it occurs every 25ms. Now, I'm not
> positive how that might affect some peak calculations, but it can't hurt to
> try.
>
> Cheers,
> Aaron
>
> On May 14, 2013, at 5:30 PM, Byron Hicks <> wrote:
>
>>
>> I'm having this same issue. I'm using pstoolkit 3.2.2, NDT 3.6.4, on
>> servers with Myri10GE interfaces.
>>
>> NDT and bwctl using iperf get very different answers:
>>
>> NDT:
>>
>> [root@ps1-akard-dlls init.d]# web100clt -n ps1-hardy-hstn
>> Testing network path for configuration and performance problems --
>> Using IPv4 address
>> Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
>> checking for firewalls . . . . . . . . . . . . . . . . . . . Done
>> running 10s outbound test (client to server) . . . . . 8251.97 Mb/s
>> running 10s inbound test (server to client) . . . . . . 1349.95 Mb/s
>> sending meta information to server . . . . . Done
>> The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit
>> Ethernet/OC-192 subnet
>> Information: Other network traffic is congesting the link
>> Information [S2C]: Packet queuing detected: 83.58% (remote buffers)
>> Server 'ps1-hardy-hstn' is not behind a firewall. [Connection to the
>> ephemeral port was successful]
>> Client is not behind a firewall. [Connection to the ephemeral port was
>> successful]
>> Packet size is preserved End-to-End
>> Server IP addresses are preserved End-to-End
>> Client IP addresses are preserved End-to-End
>>
>> BWCTL/IPERF:
>>
>> [root@ps1-akard-dlls init.d]# bwctl -f g -s ps1-hardy-hstn
>> bwctl: Using tool: iperf
>> bwctl: 15 seconds until test results available
>>
>> RECEIVER START
>> bwctl: exec_line: iperf -B 74.200.187.90 -s -f g -m -p 5058 -t 10
>> bwctl: start_tool: 3577554914.806607
>> ------------------------------------------------------------
>> Server listening on TCP port 5058
>> Binding to local address 74.200.187.90
>> TCP window size: 0.00 GByte (default)
>> ------------------------------------------------------------
>> [ 15] local 74.200.187.90 port 5058 connected with 74.200.187.98 port 5058
>> [ ID] Interval Transfer Bandwidth
>> [ 15] 0.0-10.0 sec 10.9 GBytes 9.34 Gbits/sec
>> [ 15] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)
>> bwctl: stop_exec: 3577554928.880958
>>
>> RECEIVER END
>>
>> [root@ps1-akard-dlls init.d]# bwctl -f g -c ps1-hardy-hstn
>> bwctl: Using tool: iperf
>> bwctl: 15 seconds until test results available
>>
>> RECEIVER START
>> bwctl: exec_line: iperf -B 74.200.187.98 -s -f g -m -p 5022 -t 10
>> bwctl: start_tool: 3577554939.532031
>> ------------------------------------------------------------
>> Server listening on TCP port 5022
>> Binding to local address 74.200.187.98
>> TCP window size: 0.00 GByte (default)
>> ------------------------------------------------------------
>> [ 14] local 74.200.187.98 port 5022 connected with 74.200.187.90 port 5022
>> [ ID] Interval Transfer Bandwidth
>> [ 14] 0.0-10.0 sec 10.6 GBytes 9.09 Gbits/sec
>> [ 14] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)
>> bwctl: stop_exec: 3577554952.353805
>>
>> RECEIVER END
>>
>> Any ideas why they would be so different?
>>
>> On 01/30/2013 10:02 PM, Matt Mathis wrote:
>>> Try the C client to see if the problem is at the client or server end.
>>>
>>> Thanks,
>>> --MM--
>>> The best way to predict the future is to create it. - Alan Kay
>>>
>>> Privacy matters! We know from recent events that people are using our
>>> services to speak in defiance of unjust governments. We treat privacy
>>> and security as matters of life and death, because for some users, they
>>> are.
>>>
>>>
>>> On Wed, Jan 30, 2013 at 10:12 AM, Nat Stoddard <> wrote:
>>>
>>>
>>> Dear members:
>>>
>>> I thought I would try again to find an answer to this since I
>>> noticed recent messages related to NDT on 10 gig devices:
>>>
>>> I have tried several approaches to use NDT on a server with a 10 gigabit
>>> interface. I wonder if there are any limitations on the server-to-client
>>> tests using either web100clt or the Java client. I have not been able to
>>> get more than around 2.6 gigs server-to-client. The client-to-server test
>>> can go over 9 gigs even without extensive tuning. On the same server, I
>>> can get over 9 gigs in each direction to a neighbor server using iperf
>>> tests.
>>>
>>> Are there any tips on running NDT on a 10gig capable server?
>>>
>>> Thanks,
>>> Nat Stoddard
>>>
>>>
>>
>>
>> --
>> Byron Hicks
>> Lonestar Education and Research Network
>> office: 972-883-4645
>> google: 972-746-2549
>> aim/skype: byronhicks
>>
>>
>
> ESnet/Internet2 Focused Technical Workshop
> Network Issues for Life Sciences Research
> July 17 - 18, 2013, Berkeley CA
> http://events.internet2.edu/2013/ftw-life-sciences/
>
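For anyone wanting to try the --snapdelay change Aaron describes above, the edit would look roughly like this (a sketch only; verify the variable name and file location against your NDT install):

```shell
# /etc/sysconfig/ndt -- sketch of the suggested change.
# Take web100 snapshots every 25 ms instead of the default 5 ms.
WEB100SRV_OPTIONS="--snapdelay 25"

# Then restart NDT so web100srv picks up the new option, e.g.:
#   service ndt restart
```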



