
ndt-users - Re: Windows Configuration issues when testing to NDT servers

Re: Windows Configuration issues when testing to NDT servers


  • From: Matt Mathis <>
  • To:
  • Cc:
  • Subject: Re: Windows Configuration issues when testing to NDT servers
  • Date: Mon, 13 Feb 2012 21:53:13 -0800

No, but I can guess. My understanding is that the issue comes from the
autotuning code rounding the RTT up to one tick, which suggests that
manual tuning might be faster.

Good manual tuning is often faster than autotuning, as you know.
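
For example, an application can sidestep autotuning entirely by pinning
its own socket buffers before connecting (a minimal Python sketch; the
2 MB size and the port number are illustrative assumptions, not NDT
specifics):

    import socket

    BUF_BYTES = 2 * 1024 * 1024  # illustrative; size this to your path's BDP

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Setting SO_RCVBUF/SO_SNDBUF before connect() pins the buffers,
    # overriding the stack's autotuning for this socket.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
    s.connect(("ndt-cam.rutgers.edu", 3001))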

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay



On Mon, Feb 13, 2012 at 6:35 PM, Jason Zurawski <> wrote:
> Hey Matt;
>
> Do you happen to know if any of the traditional work-around methods
> work to combat this, or is this the sort of thing that just can't be
> changed?
>
> Thanks;
>
> -jason
>
> On 2/13/12 7:11 PM, thus spake Matt Mathis:
>
>> Some back story (repeated from some friends at MS): the current
>> behavior was considered a bug during internal testing, until it was
>> realized that the fix would cause a pair of Vista machines to lock
>> all other users off of the LAN (read up on the capture effect and
>> RFC 2309). So the fix was withdrawn. Not filling the network was
>> preferred to mauling everybody else.
>>
>> Also read up on bufferbloat and the other consequences of well-tuned
>> TCP in a network without AQM.
>>
>> Hope this helps,
>> --MM--
>> The best way to predict the future is to create it.  - Alan Kay
>>
>>
>>
>> On Mon, Feb 13, 2012 at 3:55 PM, Jason Zurawski <> wrote:
>>>
>>> Hi Bob/All;
>>>
>>> As the other respondents have noted - tuning Windows is a dark art.
>>>
>>> I have heard some claims that Windows 7 (in particular the server
>>> OS) has some bugs in its auto-tuning capabilities (unfortunately I
>>> wasn't able to find the reference to a paper I read on this ...).
>>> For small latencies (e.g. less than 8 ms or so) autotuning does not
>>> engage fully, and the window size won't grow past 64K. Even at small
>>> latencies this is horrid for performance: the Bandwidth Delay
>>> Product (see a good writeup + calculator here:
>>> http://www.switch.ch/network/tools/tcp_throughput/index.html) needed
>>> to reach 1 Gbps on a 1 ms latency network is still > 128K (2 ms
>>> needs 256K, etc.).
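>>>
>>> A quick back-of-the-envelope check of those numbers (a minimal
>>> Python sketch; the helper name is just illustrative):
>>>
>>>     def bdp_bytes(gbps, rtt_ms):
>>>         # BDP = bandwidth x RTT, converted from bits to bytes
>>>         return gbps * 1e9 * (rtt_ms / 1e3) / 8
>>>
>>>     bdp_bytes(1, 1)  # 125000.0 bytes, ~122K -- already over the 64K cap
>>>     bdp_bytes(1, 2)  # 250000.0 bytes, ~244K -- i.e. a 256K buffer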
>>>
>>> One thing you can do is examine the results (specifically the more
>>> statistics/more details screens) after running on a Windows client
>>> and see what the 'client window size' was observed to be. If you
>>> changed your settings, check that they match reality; if you
>>> didn't, see what autotuning is doing to the buffers.
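>>>
>>> You can also ask the stack directly what receive buffer a client
>>> socket ended up with (a rough Python sketch; I'm assuming 3001 is
>>> the NDT control port, and note that some OSes only report the
>>> starting buffer, not autotuned growth):
>>>
>>>     import socket
>>>
>>>     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>>     s.connect(("ndt-cam.rutgers.edu", 3001))
>>>     # If the application never set SO_RCVBUF itself, this shows
>>>     # what the OS chose for the receive buffer.
>>>     print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))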
>>>
>>> With regards to the packets arriving out of order: was this only
>>> seen when testing to Windows machines? Are those servers running
>>> any form of 'tcp offload' in the NIC card/drivers?
>>>
>>> Hope this helps, good luck!
>>>
>>> -jason
>>>
>>> On 2/13/12 5:02 PM, thus spake Bob Gerdes:
>>>
>>>>
>>>> Greetings,
>>>>
>>>> We have been doing a series of NDT tests on one of our campuses,
>>>> and found that there are inconsistent results between Linux systems
>>>> and Windows (XP and 7) systems. We have used the recommendations
>>>> from DrTCP (noted on fasterdata.es.net) and the psc.edu site, and
>>>> the Windows results consistently look much worse than the Linux
>>>> results. Has anyone been able to get better results with Windows
>>>> and could suggest specific configuration settings?
>>>>
>>>> One additional note is that about 1.5% of packets arrived out of
>>>> order.
>>>>
>>>> Any advice would be greatly appreciated.
>>>> Thanks,
>>>> Bob
>>>>
>>>> Bob Gerdes
>>>> Office of Instructional and Research Technologies (OIRT)
>>>> Office of Information Technology (OIT)
>>>> Rutgers, The State University
>>>> ASB Annex I, room 101G, Busch Campus
>>>> 56 Bevier Road, Piscataway, NJ 08854
>>>> Phone: (732) 445-1438 Fax: 445-5539
>>>>
>>>>> We followed this reference, which talks about tuning Windows XP:
>>>>> http://www.psc.edu/networking/projects/tcptune/
>>>>> as well as this
>>>>> http://fasterdata.es.net/fasterdata/host-tuning/
>>>>>
>>>>> Also, we ran DrTCP and rebooted, and still got the exact same
>>>>> results from my XP machine...
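>>>>>
>>>>> One way to double-check that the DrTCP change actually stuck after
>>>>> the reboot is to read back the registry values it writes (a sketch
>>>>> for XP using Python's stdlib winreg; the values are simply absent
>>>>> if they were never set):
>>>>>
>>>>>     import winreg
>>>>>
>>>>>     KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
>>>>>     with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
>>>>>         for name in ("TcpWindowSize", "Tcp1323Opts"):
>>>>>             try:
>>>>>                 value, _ = winreg.QueryValueEx(k, name)
>>>>>                 print(name, "=", value)
>>>>>             except FileNotFoundError:
>>>>>                 print(name, "not set")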
>>>>>
>>>>>> using linux in room 313
>>>>>>
>>>>>> Camden-to-Camden:
>>>>>> running 10s outbound test (client to server) . . . . . 898.23 Mb/s
>>>>>> running 10s inbound test (server to client) . . . . . . 880.40 Mb/s
>>>>>>
>>>>>> Camden-to-Newark:
>>>>>> running 10s outbound test (client to server) . . . . . 478.16 Mb/s
>>>>>> running 10s inbound test (server to client) . . . . . . 518.90 Mb/s
>>>>>>
>>>>>> Camden-to-NB:
>>>>>> running 10s outbound test (client to server) . . . . . 841.83 Mb/s
>>>>>> running 10s inbound test (server to client) . . . . . . 834.33 Mb/s
>>>>>>
>>>>>> and, when we switch the PC next to the Linux system in BSB 113
>>>>>> to 1Gb (it runs Windows 7), we get:
>>>>>>
>>>>>> Camden-to-Camden:
>>>>>> running 10s outbound test (client to server) . . . . . 272.50 Mb/s
>>>>>> running 10s inbound test (server to client) . . . . . . 726.08 Mb/s
>>>>>>
>>>>>> Camden-to-Newark:
>>>>>> running 10s outbound test (client to server) . . . . . 55.24 Mb/s
>>>>>> running 10s inbound test (server to client) . . . . . . 219.61 Mb/s
>>>>>>
>>>>>> Camden-to-NB:
>>>>>> running 10s outbound test (client to server) . . . . . 60.01 Mb/s
>>>>>> running 10s inbound test (server to client) . . . . . . 229.60 Mb/s
>>>>>>
>>>>>> comparable to Windows XP at gigabit speed.
>>>>>>
>>>>>> It must be Windows, and its configuration?
>>>>>>
>>>>>> So, is there an optimal setting for a Windows machine that I
>>>>>> could try?
>>>>>>
>>>>>>>
>>>>>>> With the 3 systems tested giving basically the same results,
>>>>>>> this pretty much eliminates the possibility of cable issues or
>>>>>>> port malfunctions, as well as OS and NIC issues.
>>>>>>>
>>>>>>> The traffic shaping and receiver-limited issues seem puzzling.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>> From two Linux servers, I got the following:
>>>>>>>>>>
>>>>>>>>>>  From Office System (165.230.105 subnet):
>>>>>>>>>> Camden-to-Camden:
>>>>>>>>>> running 10s outbound test (client to server) . . . . . 862.62 Mb/s
>>>>>>>>>> running 10s inbound test (server to client) . . . . . . 850.73 Mb/s
>>>>>>>>>>
>>>>>>>>>> Camden-to-Newark:
>>>>>>>>>> running 10s outbound test (client to server) . . . . . 574.44 Mb/s
>>>>>>>>>> running 10s inbound test (server to client) . . . . . . 519.07 Mb/s
>>>>>>>>>>
>>>>>>>>>> Camden-to-NB:
>>>>>>>>>> running 10s outbound test (client to server) . . . . . 846.37 Mb/s
>>>>>>>>>> running 10s inbound test (server to client) . . . . . . 816.39 Mb/s
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>  From clamshell (165.230.99 subnet):
>>>>>>>>>> Camden-to-Camden:
>>>>>>>>>> running 10s outbound test (client to server) . . . . . 882.16 Mb/s
>>>>>>>>>> running 10s inbound test (server to client) . . . . . . 891.93 Mb/s
>>>>>>>>>>
>>>>>>>>>> Camden-to-Newark:
>>>>>>>>>> running 10s outbound test (client to server) . . . . . 490.35 Mb/s
>>>>>>>>>> running 10s inbound test (server to client) . . . . . . 531.92 Mb/s
>>>>>>>>>>
>>>>>>>>>> Camden-to-NB:
>>>>>>>>>> running 10s outbound test (client to server) . . . . . 838.58 Mb/s
>>>>>>>>>> running 10s inbound test (server to client) . . . . . . 869.51 Mb/s
>>>>>>>>>>
>>>>>>>>>> The NDT servers that I used were:
>>>>>>>>>> ndt-cam.rutgers.edu
>>>>>>>>>> ndt-nwk.rutgers.edu
>>>>>>>>>> ndt-nbp.rutgers.edu


