ndt-users - Re: NDT Server unable to determine bottleneck link type.
- From: Alex Y Fadeyev <>
- To: Richard Carlson <>
- Cc:
- Subject: Re: NDT Server unable to determine bottleneck link type.
- Date: Tue, 25 Jan 2005 11:11:38 +0300
Thanks for the answer, Richard.
The problem was in the interface definition, as you suggested. Now it works.
A small remark: it is impossible to run several copies of NDT (with different interfaces specified via the -i option) because the socket is already bound by the first running copy of NDT; a short illustration follows below.
And thank you for NDT - it's a really useful tool for network troubleshooting!
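As a minimal sketch of why a second copy fails (this is an illustration, not NDT source code; the port number 3001 is only a placeholder, and the listening socket is assumed to be bound on all local addresses):

/*
 * Minimal sketch (not NDT code): why a second server instance fails.
 * A TCP listening socket is bound to a port system-wide when bound to
 * INADDR_ANY, so a second bind() to the same port returns EADDRINUSE
 * regardless of which -i capture interface the process monitors.
 * Port 3001 is only an illustrative placeholder.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(3001);        /* placeholder port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* A second copy of the server lands here with EADDRINUSE. */
        fprintf(stderr, "bind: %s\n", strerror(errno));
        return 1;
    }
    listen(fd, 5);
    printf("listening; a second instance would now get EADDRINUSE\n");
    return 0;
}

The -i option only changes which interface libpcap captures on; the TCP listening socket itself is bound once per port, which is why the second copy cannot start.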
Richard Carlson wrote:
Hi Alex;
I'm the developer of the NDT software so I'll try and answer your question.
I just brought up a Fedora Core 2 server with the 2.6.10 kernel and the 2.5.2 web100 patch. The NDT code operated properly on this server.
The 'System Fault' message indicates that the web100srv process is unable to capture raw data from the network interface. The possible problems are:
1) The server is monitoring the wrong network interface. If your server has multiple ethernet interfaces, the libpcap routine will pick the 'first' interface in the list (eth0). In this case you can specify the correct interface using the -i option (e.g. web100srv -i eth1); see the sketch after this list.
2) The web100srv process needs root access to obtain raw access to the network interface. Make sure you have the correct ownership and permissions.
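A minimal libpcap sketch (an illustration, not the actual web100srv source) of the default interface selection described in 1) and the promiscuous-mode open that needs the root access described in 2):

/*
 * Minimal libpcap sketch (not the actual web100srv code): how the
 * default capture device is chosen and how a specific one, e.g. the
 * interface named on the command line, is opened in promiscuous mode.
 * Opening the device normally requires root privileges.
 */
#include <stdio.h>
#include <pcap.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* With no interface argument, take libpcap's first/default device. */
    const char *dev = (argc > 1) ? argv[1] : pcap_lookupdev(errbuf);

    if (dev == NULL) {
        fprintf(stderr, "no capture device found: %s\n", errbuf);
        return 1;
    }

    /* snaplen 128 bytes, promiscuous mode on, 1000 ms read timeout. */
    pcap_t *handle = pcap_open_live(dev, 128, 1, 1000, errbuf);
    if (handle == NULL) {
        /* A wrong interface or missing root privileges shows up here
         * as a capture failure. */
        fprintf(stderr, "pcap_open_live(%s): %s\n", dev, errbuf);
        return 1;
    }

    printf("capturing on %s\n", dev);
    pcap_close(handle);
    return 0;
}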
There should be some log entries in the /var/log/messages or 'dmesg' log file that can tell you what interface is being used or if the interface failed to enter promiscuous mode.
Check the interface id first and then look at the ownership/permissions and let us know if you still have problems.
Regards;
Rich Carlson
At 04:02 PM 1/22/2005 +0300, Alex Y Fadeyev wrote:
Hello.
The NDT I've installed is unable to determine the bottleneck link type.
Configuration:
NDT-3.0.23
web100 userland Alpha 1.4 (I've also tried 1.3 - same results)
web100 kernel-patch 2.5.2 on kernel 2.6.10 from kernel.org.
libpcap-0.8.3
Server system - Fedora Core 3 on AMD x86_64 uniprocessor machine. 1Gbit/s connection.
Client - Windows XP. 100 Mbit/s LAN connection.
I noticed this because the Client Data reports the link is 'System Fault' and the Client Acks report the link is 'System Fault' (see 'More details' below).
web100srv has been run with the -x option and without it - the results are the same in both cases.
I shall be grateful for any advice.
Example of output:
Main:
TCP/Web100 Network Diagnostic Tool v5.3.3a
click START to begin
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 94.07Mb/s
running 10s inbound test (server to client) . . . . . . 94.60Mb/s
Server unable to determine bottleneck link type.
click START to re-test
Statistics:
WEB100 Enabled Statistics:
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
running 10s outbound test (client to server) . . . . . 94.07Mb/s
running 10s inbound test (server to client) . . . . . . 94.60Mb/s
------ Client System Details ------
OS data: Name = Windows XP, Architecture = x86, Version = 5.1
Java data: Vendor = Sun Microsystems Inc., Version = 1.5.0
------ Web100 Detailed Analysis ------
Interprocess communications failed, unknown link type.
Link set to Full Duplex mode
No network congestion discovered.
Good network cable(s) found
Normal duplex operation found.
Web100 reports the Round trip time = 5.46 msec; the Packet size = 1460 Bytes; and
No packet loss - but packets arrived out-of-order 0.03% of the time
This connection is receiver limited 83.56% of the time.
This connection is sender limited 16.33% of the time.
Web100 reports TCP negotiated the optional Performance Settings to:
RFC 2018 Selective Acknowledgment: ON
RFC 896 Nagle Algorithm: ON
RFC 3168 Explicit Congestion Notification: OFF
RFC 1323 Time Stamping: OFF
RFC 1323 Window Scaling: ON
Packet size is preserved End-to-End
Server IP addresses are preserved End-to-End
Client IP addresses are preserved End-to-End
More details:
WEB100 Kernel Variables:
Client: localhost/127.0.0.1
AckPktsIn: 40908
AckPktsOut: 0
BytesRetrans: 0
CongAvoid: 0
CongestionOverCount: 0
CongestionSignals: 0
CountRTT: 40895
CurCwnd: 67160
CurMSS: 1460
CurRTO: 205
CurRwinRcvd: 65535
CurRwinSent: 5840
CurSsthresh: -616
DSACKDups: 0
DataBytesIn: 0
DataBytesOut: 120942818
DataPktsIn: 0
DataPktsOut: 80483
DupAcksIn: 14
ECNEnabled: 0
FastRetran: 0
MaxCwnd: 67160
MaxMSS: 1460
MaxRTO: 206
MaxRTT: 6
MaxRwinRcvd: 65535
MaxRwinSent: 5840
MaxSsthresh: 0
MinMSS: 1460
MinRTO: 201
MinRTT: 0
MinRwinRcvd: 567
MinRwinSent: 5840
NagleEnabled: 1
OtherReductions: 0
PktsIn: 40914
PktsOut: 80483
PktsRetrans: 0
X_Rcvbuf: 87380
RcvWinScale: -1
SACKEnabled: 3
SACKsRcvd: 0
SendStall: 0
SlowStart: 44
SampleRTT: 5
SmoothedRTT: 5
X_Sndbuf: 131072
SndWinScale: -1
SndLimTimeRwin: 8364151
SndLimTimeCwnd: 10895
SndLimTimeSender: 1634675
SndLimTransRwin: 3331
SndLimTransCwnd: 10
SndLimTransSender: 3340
SndLimBytesRwin: 108172272
SndLimBytesCwnd: 202876
SndLimBytesSender: 12567670
SubsequentTimeouts: 0
SumRTT: 223227
Timeouts: 0
TimestampsEnabled: 0
WinScaleRcvd: -1
WinScaleSent: -1
DupAcksOut: 0
StartTimeUsec: 725862
Duration: 10014196
c2sData: -1
c2sAck: -1
s2cData: 8
s2cAck: -1
half_duplex: 0
link: 100
congestion: 0
bad_cable: 0
mismatch: 0
spd: 0.00
bw: 2040.64
loss: 0.000001000
avgrtt: 5.46
waitsec: 0.00
timesec: 10.00
order: 0.0003
rwintime: 0.8356
sendtime: 0.1633
cwndtime: 0.0011
rwin: 0.5000
swin: 1.0000
cwin: 0.5124
rttsec: 0.005459
Sndbuf: 131072
Checking for mismatch condition
(cwndtime > .3) [0.00>.3], (MaxSsthresh > 0) [0>0],
(PktsRetrans/sec > 2) [0>2], (estimate > 2) [2040.64>2]
Checking for mismatch on uplink
(speed > 50 [0>50], (xmitspeed < 5) [94.07<5]
(rwintime > .9) [0.83>.9], (loss < .01) [1.0E<.01]
Checking for excessive errors condition
(loss/sec > .15) [1.0E>.15], (cwndtime > .6) [0.00>.6],
(loss < .01) [1.0E<.01], (MaxSsthresh > 0) [0>0]
Checking for 10 Mbps link
(speed < 9.5) [0<9.5], (speed > 3.0) [0>3.0]
(xmitspeed < 9.5) [94.07<9.5] (loss < .01) [1.0E<.01], (mylink > 0) [0.0>0]
Checking for Wireless link
(sendtime = 0) [0.16=0], (speed < 5) [0<5]
(Estimate > 50 [2040.64>50], (Rwintime > 90) [0.83>.90]
(RwinTrans/CwndTrans = 1) [3331/10=1], (mylink > 0) [0.0>0]
Checking for DSL/Cable Modem link
(speed < 2) [0<2], (SndLimTransSender = 0) [3340=0]
(SendTime = 0) [0.1633=0], (mylink > 0) [0.0>0]
Checking for half-duplex condition
(rwintime > .95) [0.83>.95], (RwinTrans/sec > 30) [333.1>30],
(SenderTrans/sec > 30) [334.0>30], OR (mylink <= 10) [0.0<=10]
Checking for congestion
(cwndtime > .02) [0.00>.02], (mismatch = 0) [0=0]
(MaxSsthresh > 0) [0>0]
estimate = 2040.64 based on packet size = 11Kbits, RTT = 5.46msec, and loss = 1.0E-6
The theoretical network limit is 2040.64 Mbps
The NDT server has a 128.0 KByte buffer which limits the throughput to 183.18 Mbps
Your PC/Workstation has a 63.0 KByte buffer which limits the throughput to 91.59 Mbps
The network based flow control limits the throughput to 93.86 Mbps
Client Data reports link is 'System Fault', Client Acks report link is 'System Fault'
Server Data reports link is 'OC-48', Server Acks report link is 'System Fault'
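For reference, the 'theoretical network limit' and the buffer-limited figures above appear to follow the well-known Mathis et al. estimate, BW <= MSS / (RTT * sqrt(loss)), together with a window/RTT bound. A rough sketch of both bounds, using the values reported in this run (this is not the NDT source, and web100srv's exact constants may differ slightly from the figures it prints):

/*
 * Rough sketch of the two bounds reported above (not the NDT source;
 * the exact constants used by web100srv may differ slightly).
 *   - Mathis et al. estimate:  BW <= MSS / (RTT * sqrt(loss))
 *   - window-limited bound:    BW <= window / RTT
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double mss_bits = 1460 * 8;    /* CurMSS = 1460 bytes (~11 Kbits)    */
    double rtt_s    = 5.46e-3;     /* avgrtt = 5.46 msec                 */
    double loss     = 1.0e-6;      /* loss floor reported when no loss   */
    double sndbuf   = 131072 * 8;  /* server Sndbuf (128 KByte), in bits */
    double rwin     = 65535 * 8;   /* client MaxRwinRcvd (63 KByte)      */

    double mathis = mss_bits / (rtt_s * sqrt(loss));   /* bits/sec */
    printf("theoretical limit   ~ %.0f Mbps\n", mathis / 1e6);
    printf("server buffer bound ~ %.0f Mbps\n", sndbuf / rtt_s / 1e6);
    printf("client window bound ~ %.0f Mbps\n", rwin / rtt_s / 1e6);
    return 0;
}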
--
Sincerely Yours,
Alex Y. Fadeyev
MIPT-telecom http://telecom.mipt.ru
Tel.: +7 095 576-4381
Fax.: +7 095 576-4563
e-mail: