
ndt-users - Re: Problem of report of NDT server

  • From: Rich Carlson <>
  • To: Jeremy Palmer <>
  • Cc: ,
  • Subject: Re: Problem of report of NDT server
  • Date: Thu, 01 Apr 2010 14:18:12 -0400

Jeremy;

You are correct in recognizing that the "sender" and "receiver" functions change depending on which test is being run (s2c or c2s).

In this case, I'm talking about the server-to-client test. Due to TCP's design, the TCP sender has a lot of information about the path: it actively probes the path to determine its capacity, and it responds to loss indications and timeouts. Most of this state is stored in the TCP control block, and the web100 enhancements expose this information to user-level programs.

On the other hand, the TCP receiver is pretty passive: it sends information back to the sender when it can, but it isn't as robust (meaning it doesn't use timeouts to ensure that ACKs are actually delivered).

Due to these TCP design limitations, the NDT server gathers a bunch of useful information during the s2c test. The triage data and all of the web100 data come from this test.
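
[Aside, not from the original mail: a rough sketch of the kind of sender-side state Rich is describing. A Web100-patched kernel exposes the full set of TCP control block variables to user-level programs; on a stock Linux kernel, getsockopt(TCP_INFO) gives a smaller view of the same sender state. The loopback self-connection below is only there so the example runs stand-alone.]

    /*
     * Illustration only, not NDT source: read the sender-side TCP state
     * that the kernel keeps in the TCP control block.  A Web100 kernel
     * exposes many more variables; TCP_INFO is the stock-Linux subset.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void print_sender_state(int sock)
    {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0) {
            perror("getsockopt(TCP_INFO)");
            return;
        }
        /* cwnd, ssthresh, smoothed RTT and retransmit counts are all
         * maintained by the sending side of the connection. */
        printf("snd_cwnd      = %u segments\n", ti.tcpi_snd_cwnd);
        printf("snd_ssthresh  = %u\n", ti.tcpi_snd_ssthresh);
        printf("rtt / rttvar  = %u / %u usec\n", ti.tcpi_rtt, ti.tcpi_rttvar);
        printf("retransmits   = %u\n", ti.tcpi_total_retrans);
    }

    int main(void)
    {
        /* Loopback self-connection so the sketch runs stand-alone. */
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int cfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a;
        socklen_t alen = sizeof(a);

        memset(&a, 0, sizeof(a));
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        a.sin_port = 0;                      /* pick any free port */

        if (bind(lfd, (struct sockaddr *)&a, sizeof(a)) < 0 ||
            listen(lfd, 1) < 0 ||
            getsockname(lfd, (struct sockaddr *)&a, &alen) < 0 ||
            connect(cfd, (struct sockaddr *)&a, sizeof(a)) < 0) {
            perror("socket setup");
            return 1;
        }

        print_sender_state(cfd);             /* the client side is the sender here */

        close(cfd);
        close(lfd);
        return 0;
    }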

Rich
On 4/1/2010 1:32 PM, Jeremy Palmer wrote:
Can you clarify the definition of "sender" and "receiver"? It seems to me that when the "client to server" test is running, the client is the sender and the NDT box is the receiver, but that the roles would be reversed when it's running the "server to client" test. Or am I looking at this the wrong way?

Rich Carlson wrote:
Lize;

These are part of the Web100 triage values. The web100 kernel maintains 3 possible states for a TCP connection: sender limited (packets can't leave the host because the sender's TCP window is closed - waiting for ACKs), receiver limited (the receiver's TCP window is closed - waiting for data to move up to the application), or network limited (either the congestion window is limiting the flow, or no other limit is being hit).

As the NDT server streams data to the client, the kernel records what state it was in when it stopped sending. In this case, 40% of the time the flow stopped because the NDT server reached the max value for the TCP transmit window. The other 60% of the time it was in the network limited state. The More Details page will also include counters that tell you how many times the state changed.
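
[Aside, not the NDT source: the triage percentages can be derived from the three Web100 per-state timers - if I have the names right, SndLimTimeRwin, SndLimTimeCwnd and SndLimTimeSender (RFC 4898 calls the last one SndLimTimeSnd). The counter values below are made up to reproduce the numbers quoted in this thread.]

    /* Sketch of the arithmetic behind the triage percentages.
     * The counters are hypothetical; on a real server they would be
     * read from the Web100 variables at the end of the s2c test. */
    #include <stdio.h>

    int main(void)
    {
        double SndLimTimeRwin   = 0.0;       /* receiver (rwin) limited   */
        double SndLimTimeCwnd   = 6045000.0; /* network (cwnd) limited    */
        double SndLimTimeSender = 3955000.0; /* sender (xmit win) limited */

        double total = SndLimTimeRwin + SndLimTimeCwnd + SndLimTimeSender;

        printf("sender limited   %.2f%% of the time\n",
               100.0 * SndLimTimeSender / total);
        printf("receiver limited %.2f%% of the time\n",
               100.0 * SndLimTimeRwin / total);
        printf("network limited  %.2f%% of the time\n",
               100.0 * SndLimTimeCwnd / total);
        return 0;
    }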

This seems to indicate that you have not tuned the NDT server and it is becoming the bottleneck for the test; i.e., the server can't send data faster because the TCP transmit window is too small.

What link speeds are you using?
What is the server's CPU utilization?
What are the TCP tuning parameters set to?
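
[Aside, not from the original mail: one quick way to answer the last question on a Linux server is to print the TCP autotuning limits from /proc/sys, as the sketch below does. Whether the maximums are big enough depends on the bandwidth-delay product of the paths you test over.]

    /* Sketch: print the standard Linux TCP buffer sysctls. */
    #include <stdio.h>

    static void show(const char *path)
    {
        char buf[256];
        FILE *f = fopen(path, "r");

        if (!f) {
            printf("%-32s (not available)\n", path);
            return;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("%-32s %s", path, buf);   /* buf keeps its newline */
        fclose(f);
    }

    int main(void)
    {
        /* Max socket buffer sizes and the TCP autotuning ranges
         * (min / default / max, in bytes). */
        show("/proc/sys/net/core/rmem_max");
        show("/proc/sys/net/core/wmem_max");
        show("/proc/sys/net/ipv4/tcp_rmem");
        show("/proc/sys/net/ipv4/tcp_wmem");
        return 0;
    }

If the server is sender limited, raising net.core.wmem_max and the third (max) value of net.ipv4.tcp_wmem, via sysctl -w or /etc/sysctl.conf, is the usual first step.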

Rich

On 4/1/2010 5:53 AM, Lize wrote:
This connection is sender limited 39.55% of the time.
This connection is network limited 60.45% of the time.

Excuse me, I don't understand what the two lines above mean. Could anyone explain them to me?





