
ndt-users - Re: NDT server buffer settings


  • From: "Nathan C. Broome" <>
  • To: Teresa Beamer <>
  • Cc:
  • Subject: Re: NDT server buffer settings
  • Date: Thu, 16 Mar 2006 13:35:55 -0500

Teresa,

This might help:

http://www.psc.edu/networking/projects/tcptune/

About three-quarters of the way through the document, they explain how to tune the TCP/IP stack on Linux. I changed my server to the settings recommended there, and it seems to be working fine.
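If you want the settings to survive a reboot, the usual place is /etc/sysctl.conf (loaded with `sysctl -p`). A fragment matching the values I show below might look like this (these are just the numbers that worked for me, not official recommendations; size them for your own path):

```
# /etc/sysctl.conf -- example fragment (values from the /proc readouts below)
net.core.rmem_max = 2500000
net.core.wmem_max = 2500000
# min / default / max, in bytes
net.ipv4.tcp_rmem = 4096 5000000 5000000
net.ipv4.tcp_wmem = 4096 65536 5000000
```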


Here's my info if it helps:

ndt:/proc/sys/net/core # cat rmem_max
2500000

ndt:/proc/sys/net/core # cat wmem_max
2500000

ndt:/proc/sys/net/ipv4 # cat tcp_wmem
4096 65536 5000000

ndt:/proc/sys/net/ipv4 # cat tcp_rmem
4096 5000000 5000000
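As a rough sizing rule, the buffer needed to fill a path is the bandwidth-delay product (bandwidth times round-trip time). For the numbers in Teresa's trace, a 100 Mbps path at 8 ms RTT needs only about 100 KB, so buffers in the megabyte range leave plenty of headroom:

```shell
# Bandwidth-delay product: bytes of buffer needed to keep the pipe full
# e.g. a 100 Mbps path at 8 ms RTT (the figures in the trace below)
awk 'BEGIN { printf "%d bytes\n", 100e6 * 0.008 / 8 }'
# prints 100000 bytes
```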


Nathan Broome
Oberlin College


Teresa Beamer wrote:

I've finally gotten some time to review the statistics we've been gathering via NDT, and I am consistently seeing low kb/s from the NDT server to the client with high kb/s from the client to the NDT server. I'm beginning to think I need to adjust some parameters on the NDT server, but am not quite sure where to make the changes (I'm fairly new to Linux). I've included a sample below of a run from my machine. I did notice this time a line that says: "The NDT server has a 214.0 KByte buffer which limits the throughput to 208.98 Mbps."

Should I make changes to the buffers on the server? And if so, where do I go to make those changes? A pointer to documentation that describes this information would be great.

Thanks for any help you can provide.

Teresa Beamer
Computing Services
Denison University

TCP/Web100 Network Diagnostic Tool v5.3.3e
click START to begin
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 863.86Kb/s
running 10s inbound test (server to client) . . . . . . 237.58kb/s
The slowest link in the end-to-end path is a 100 Mbps Full duplex Fast Ethernet subnet

WEB100 Enabled Statistics:
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 863.86Kb/s
running 10s inbound test (server to client) . . . . . . 237.58kb/s

------ Client System Details ------
OS data: Name = Windows XP, Architecture = x86, Version = 5.1
Java data: Vendor = Sun Microsystems Inc., Version = 1.4.2_03

------ Web100 Detailed Analysis ------
100 Mbps FastEthernet link found.
Link set to Full Duplex mode
No network congestion discovered.
Good network cable(s) found
Normal duplex operation found.

Web100 reports the Round trip time = 8.0 msec; the Packet size = 1460 Bytes; and
There were 201 packets retransmitted, 2 duplicate acks received, and 5 SACK blocks received
The connection stalled 1 times due to packet loss
The connection was idle 0.21 seconds (1.5%) of the time
This connection is network limited 99.87% of the time.
Excessive packet loss is impacting your performance, check the auto-negotiate
function on your local PC and network switch

Web100 reports TCP negotiated the optional Performance Settings to:
RFC 2018 Selective Acknowledgment: ON
RFC 896 Nagle Algorithm: ON
RFC 3168 Explicit Congestion Notification: OFF
RFC 1323 Time Stamping: OFF
RFC 1323 Window Scaling: OFF
Packet size is preserved End-to-End
Server IP addresses are preserved End-to-End
Client IP addresses are preserved End-to-End

WEB100 Kernel Variables:
Client: localhost/127.0.0.1
AckPktsIn: 82
AckPktsOut: 0
BytesRetrans: 292165
CongAvoid: 67
CongestionOverCount: 0
CongestionSignals: 7
CountRTT: 1
CurCwnd: 7300
CurMSS: 1460
CurRTO: 208
CurRwinRcvd: 0
CurRwinSent: 5840
CurSsthresh: 2920
DSACKDups: 0
DataBytesIn: 0
DataBytesOut: 785985
DataPktsIn: 0
DataPktsOut: 520
DupAcksIn: 2
ECNEnabled: 0
FastRetran: 1
MaxCwnd: 7300
MaxMSS: 1460
MaxRTO: 208
MaxRTT: 8
MaxRwinRcvd: 0
MaxRwinSent: 5840
MaxSsthresh: 2920
MinMSS: 1460
MinRTO: 208
MinRTT: 8
MinRwinRcvd: 2147483647
MinRwinSent: 5840
NagleEnabled: 1
OtherReductions: 5
PktsIn: 82
PktsOut: 520
PktsRetrans: 201
X_Rcvbuf: 219136
RcvWinScale: 2147483647
SACKEnabled: 3
SACKsRcvd: 5
SendStall: 0
SlowStart: 13
SampleRTT: 8
SmoothedRTT: 8
X_Sndbuf: 219136
SndWinScale: 2147483647
SndLimTimeRwin: 0
SndLimTimeCwnd: 14495340
SndLimTimeSender: 19349
SndLimTransRwin: 0
SndLimTransCwnd: 1
SndLimTransSender: 1
SndLimBytesRwin: 0
SndLimBytesCwnd: 785985
SndLimBytesSender: 0
SubsequentTimeouts: 5
SumRTT: 8
Timeouts: 1
TimestampsEnabled: 0
WinScaleRcvd: 2147483647
WinScaleSent: 2147483647
DupAcksOut: 0
StartTimeUsec: 15742
Duration: 14520643
c2sData: 5
c2sAck: 5
s2cData: 7
s2cAck: 4
half_duplex: 0
link: 100
congestion: 0
bad_cable: 0
mismatch: 0
spd: 0.00
bw: 12.00
loss: 0.013461538
avgrtt: 8.00
waitsec: 0.21
timesec: 14.00
order: 0.0244
rwintime: 0.0000
sendtime: 0.0013
cwndtime: 0.9987
rwin: 0.0000
swin: 1.6719
cwin: 0.0557
rttsec: 0.008000
Sndbuf: 219136
aspd: 0.30399

Checking for mismatch on uplink
(speed > 50 [0>50], (xmitspeed < 5) [0.86<5]
(rwintime > .9) [0>.9], (loss < .01) [0.01<.01]
Checking for excessive errors condition
(loss/sec > .15) [9.61>.15], (cwndtime > .6) [0.99>.6],
(loss < .01) [0.01<.01], (MaxSsthresh > 0) [2920>0]
Checking for 10 Mbps link
(speed < 9.5) [0<9.5], (speed > 3.0) [0>3.0]
(xmitspeed < 9.5) [0.86<9.5] (loss < .01) [0.01<.01], (mylink > 0)
[100.0>0]
Checking for Wireless link
(sendtime = 0) [0.00=0], (speed < 5) [0<5]
(Estimate > 50 [12.0>50], (Rwintime > 90) [0>.90]
(RwinTrans/CwndTrans = 1) [0/1=1], (mylink > 0) [100.0>0]
Checking for DSL/Cable Modem link
(speed < 2) [0<2], (SndLimTransSender = 0) [1=0]
(SendTime = 0) [0.0013=0], (mylink > 0) [100.0>0]
Checking for half-duplex condition
(rwintime > .95) [0>.95], (RwinTrans/sec > 30) [0>30],
(SenderTrans/sec > 30) [0.07>30], OR (mylink <= 10) [100.0<=10]
Checking for congestion
(cwndtime > .02) [0.99>.02], (mismatch = 0) [0=0]
(MaxSsthresh > 0) [2920>0]

estimate = 12.0 based on packet size = 11Kbits, RTT = 8.0msec, and loss = 0.013461538
The theoretical network limit is 12.0 Mbps
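That 12.0 Mbps "theoretical network limit" is consistent with the standard loss-based TCP throughput approximation, BW ≈ MSS / (RTT × √loss) (the Mathis formula). A quick check with the reported numbers (my reconstruction, not NDT's exact code, so the constant factor differs slightly):

```shell
# Rough TCP throughput bound (Mathis formula): BW ~= MSS / (RTT * sqrt(p))
# MSS = 1460 bytes (11680 bits), RTT = 8 ms, loss p = 0.013461538
awk 'BEGIN {
  mss_bits = 1460 * 8       # packet size in bits
  rtt      = 0.008          # round trip time in seconds
  p        = 0.013461538    # loss rate from the trace
  printf "%.1f Mbps\n", mss_bits / (rtt * sqrt(p)) / 1e6
}'
# prints 12.6 Mbps, in line with the reported 12.0 Mbps estimate
```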
The NDT server has a 214.0 KByte buffer which limits the throughput to 208.98 Mbps
Your PC/Workstation has a 0 KByte buffer which limits the throughput to 0 Mbps
The network based flow control limits the throughput to 6.96 Mbps
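The buffer-limit lines are essentially window/RTT arithmetic: a send window of at most B bytes can carry no more than B × 8 / RTT bits per second. A back-of-the-envelope check using the X_Sndbuf value (219136 bytes, the 214 KByte above) from the kernel variables (NDT's own accounting differs slightly, hence 208.98 rather than ~219):

```shell
# Window-limited throughput: buffer_bytes * 8 / RTT_seconds
# X_Sndbuf = 219136 bytes (~214 KByte), RTT = 8 ms
awk 'BEGIN { printf "%.1f Mbps\n", 219136 * 8 / 0.008 / 1e6 }'
# prints 219.1 Mbps -- same ballpark as the server's reported 208.98 Mbps
```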

Client Data reports link is 'FastE', Client Acks report link is 'FastE'
Server Data reports link is 'GigE', Server Acks report link is 'T3'





Archive powered by MHonArc 2.6.16.
