ndt-users - V5.3.3e against V5.35a
- From: Peter Van Epp <>
- To:
- Subject: V5.3.3e against V5.35a
- Date: Wed, 26 Jul 2006 13:54:29 -0700
We just tried upgrading to V5.35a and are seeing problems; V5.3.3e
seems to be OK. The server is an IBM P510 Power 5 box with Suse 10.1 and a
linux-2.6.16.13-4 kernel (with PF_RING as well as web100 loaded) and a gig
connection. The client is a Mac G4 running OS X 10.3 (although Lixin's
desktop Suse box gets much the same results); both clients are on 100 Mbps
full-duplex connections:
Web100 Network Diagnostic Tool v5.3.3e
click START to begin
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 88.82Mb/s
running 10s inbound test (server to client) . . . . . . 88.53Mb/s
The slowest link in the end-to-end path is a 100 Mbps Full duplex Fast
Ethernet subnet
click START to re-test
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 93.34Mb/s
running 10s inbound test (server to client) . . . . . . 92.46Mb/s
The slowest link in the end-to-end path is a 100 Mbps Full duplex Fast
Ethernet subnet
click START to re-test
But when I run V5.35a (on the same machines) things get much worse.
Sometimes we get the error below (I haven't managed to have the sniffer
running when it appears yet):
click START to begin
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
Server failed: 'C2S Open-Connection' flag not received
click START to re-test
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 89.83Mb/s
running 10s inbound test (server to client) . . . . . . 7.27Mb/s
The slowest link in the end-to-end path is a 100 Mbps Full duplex Fast
Ethernet subnet
click START to re-test
WEB100 Kernel Variables:
Client: localhost/127.0.0.1
CurMSS: 1460
X_Rcvbuf: 16777216
X_Sndbuf: 16777216
AckPktsIn: 2726
AckPktsOut: 0
BytesRetrans: 0
CongAvoid: 0
CongestionOverCount: 0
CongestionSignals: 0
CountRTT: 2704
CurCwnd: 27740
CurRTO: 230
CurRwinRcvd: 4096
CurRwinSent: 5888
CurSsthresh: 2147483647
DSACKDups: 0
DataBytesIn: 0
DataBytesOut: 11258164
DataPktsIn: 0
DataPktsOut: 8311
DupAcksIn: 22
ECNEnabled: 0
FastRetran: 0
MaxCwnd: 27740
MaxMSS: 1460
MaxRTO: 330
MaxRTT: 110
MaxRwinRcvd: 200020
MaxRwinSent: 5888
MaxSsthresh: 0
MinMSS: 1460
MinRTO: 230
MinRTT: 0
MinRwinRcvd: 0
MinRwinSent: 5840
NagleEnabled: 1
OtherReductions: 0
PktsIn: 2726
PktsOut: 8311
PktsRetrans: 0
RcvWinScale: 8
SACKEnabled: 0
SACKsRcvd: 0
SendStall: 0
SlowStart: 17
SampleRTT: 10
SmoothedRTT: 10
SndWinScale: 2
SndLimTimeRwin: 12029644
SndLimTimeCwnd: 216114
SndLimTimeSender: 58146
SndLimTransRwin: 1
SndLimTransCwnd: 2
SndLimTransSender: 2
SndLimBytesRwin: 11011064
SndLimBytesCwnd: 225180
SndLimBytesSender: 21920
SubsequentTimeouts: 0
SumRTT: 12090
Timeouts: 0
TimestampsEnabled: 0
WinScaleRcvd: 2
WinScaleSent: 8
DupAcksOut: 0
StartTimeUsec: 451630
Duration: 12303949
c2sData: 5
c2sAck: 5
s2cData: 3
s2cAck: 3
half_duplex: 0
link: 10
congestion: 0
bad_cable: 0
mismatch: 0
spd: -15091813658919106230416634111332588283121667184890004615298028527334754549900660050401863129673422583531329362077142025816137006240475850502847050308767497996132523888814155499135731301922758857284670668391877473086192514873281866004401595067114534736664527226952181828913561615282269175372413433045481160704.00
bw: 2491.28
loss: 0.000001000
avgrtt: 4.47
waitsec: 0.00
timesec: 12.00
order: 0.0081
rwintime: 0.9777
sendtime: 0.0047
cwndtime: 0.0176
rwin: 1.5260
swin: 128.0000
cwin: 0.2116
rttsec: 0.004471
Sndbuf: 16777216
aspd: 4503601774854144.00000
CWND-Limited: 118.92
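
As a sanity check on the derived statistics just above: avgrtt, the
three SndLim time fractions, and the rwin/swin/cwin values all follow
from the Web100 counters in the dump by straightforward arithmetic.
A minimal C sketch (my own reconstruction, not NDT's actual web100srv
code) that reproduces the printed values:

/* Reconstructs the derived NDT statistics from the Web100 counters
 * above.  A sanity-check sketch only, not NDT's real code. */
#include <stdio.h>

int main(void)
{
    /* values copied from the kernel-variable dump */
    double SumRTT = 12090, CountRTT = 2704;
    double SndLimTimeRwin = 12029644, SndLimTimeCwnd = 216114,
           SndLimTimeSender = 58146;
    double MaxRwinRcvd = 200020, Sndbuf = 16777216, MaxCwnd = 27740;

    double totaltime = SndLimTimeRwin + SndLimTimeCwnd + SndLimTimeSender;

    printf("avgrtt:   %.2f ms\n", SumRTT / CountRTT);          /* 4.47   */
    printf("rwintime: %.4f\n", SndLimTimeRwin / totaltime);    /* 0.9777 */
    printf("cwndtime: %.4f\n", SndLimTimeCwnd / totaltime);    /* 0.0176 */
    printf("sendtime: %.4f\n", SndLimTimeSender / totaltime);  /* 0.0047 */

    /* window sizes in binary Mbits (bytes * 8 / 2^20) */
    printf("rwin: %.4f\n", MaxRwinRcvd * 8 / 1048576);         /* 1.5260   */
    printf("swin: %.4f\n", Sndbuf * 8 / 1048576);              /* 128.0000 */
    printf("cwin: %.4f\n", MaxCwnd * 8 / 1048576);             /* 0.2116   */
    return 0;
}

Everything here matches the printout, which suggests the counters
themselves are sane and the obviously bogus number is confined to spd.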
Checking for mismatch on uplink
(speed > 50) [-1.50>50], (xmitspeed < 5) [89.83<5]
(rwintime > .9) [0.97>.9], (loss < .01) [1.0E-6<.01]
Checking for excessive errors condition
(loss/sec > .15) [8.33>.15], (cwndtime > .6) [0.01>.6],
(loss < .01) [1.0E-6<.01], (MaxSsthresh > 0) [0>0]
Checking for 10 Mbps link
(speed < 9.5) [-1.50<9.5], (speed > 3.0) [-1.50>3.0]
(xmitspeed < 9.5) [89.83<9.5], (loss < .01) [1.0E-6<.01], (mylink > 0) [100.0>0]
Checking for Wireless link
(sendtime = 0) [0.00=0], (speed < 5) [-1.50<5]
(Estimate > 50) [2491.28>50], (Rwintime > .90) [0.97>.90]
(RwinTrans/CwndTrans = 1) [1/2=1], (mylink > 0) [100.0>0]
Checking for DSL/Cable Modem link
(speed < 2) [-1.50<2], (SndLimTransSender = 0) [2=0]
(SendTime = 0) [0.0047=0], (mylink > 0) [100.0>0]
Checking for half-duplex condition
(rwintime > .95) [0.97>.95], (RwinTrans/sec > 30) [0.08>30],
(SenderTrans/sec > 30) [0.16>30], OR (mylink <= 10) [100.0<=10]
Checking for congestion
(cwndtime > .02) [0.01>.02], (mismatch = 0) [0=0]
(MaxSsthresh > 0) [0>0]
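
The pass/fail lines above read like simple threshold comparisons on the
derived values. Here is a hedged sketch of just the uplink mismatch
test, with thresholds and operands taken verbatim from the output (the
function and variable names are mine, and I am assuming the four
conditions are ANDed):

#include <stdio.h>

/* Sketch of the "mismatch on uplink" heuristic, inferred from the
 * output above; thresholds verbatim, combination logic assumed. */
static int mismatch_on_uplink(double speed, double xmitspeed,
                              double rwintime, double loss)
{
    return (speed > 50) && (xmitspeed < 5) &&
           (rwintime > 0.9) && (loss < 0.01);
}

int main(void)
{
    /* operands as printed: [-1.50>50], [89.83<5], [0.97>.9], [1.0E-6<.01] */
    printf("mismatch: %d\n",
           mismatch_on_uplink(-1.50, 89.83, 0.9777, 1.0e-6));
    return 0;
}

With these inputs the test does not fire, which matches the mismatch: 0
line in the dump; the negative speed operand (-1.50) is itself
suspicious and may be related to the broken spd value.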
estimate = 2491.28 based on packet size = 11Kbits, RTT = 4.47msec, and loss = 1.0E-6
The theoretical network limit is 2491.28 Mbps
The NDT server has an 8192.0 KByte buffer which limits the throughput to 28628.94 Mbps
Your PC/Workstation has a 195.0 KByte buffer which limits the throughput to 341.31 Mbps
The network based flow control limits the throughput to 47.32 Mbps
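
Those four limit figures reduce to window/RTT arithmetic plus the
well-known Mathis et al. approximation (rate <= MSS / (RTT * sqrt(loss))).
A reconstruction, again assuming binary Mbits; the loss of 1.0E-6 looks
like a floor value substituted when no loss is observed
(CongestionSignals is 0):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double mss = 1460;          /* CurMSS, bytes */
    double rttsec = 4.471e-3;   /* avgrtt from above, in seconds */
    double loss = 1.0e-6;       /* apparent floor value */
    double swin = 128.0, rwin = 1.5260, cwin = 0.2116;  /* Mbits */

    /* Mathis formula: theoretical limit given MSS, RTT and loss */
    printf("estimate: %.2f Mbps\n",
           mss * 8 / (rttsec * sqrt(loss)) / 1048576);  /* ~2491  */
    printf("swin/rtt: %.2f Mbps\n", swin / rttsec);     /* ~28629 */
    printf("rwin/rtt: %.2f Mbps\n", rwin / rttsec);     /* ~341   */
    printf("cwin/rtt: %.2f Mbps\n", cwin / rttsec);     /* ~47    */
    return 0;
}

All four come out within rounding of the printed limits.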
Client Data reports link is 'FastE', Client Acks report link is 'FastE'
Server Data reports link is 'Ethernet', Server Acks report link is 'Ethernet'
Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada