
ndt-users - Re: ndt on copper gig


  • From: Richard Carlson <>
  • To: Peter Van Epp <>,
  • Subject: Re: ndt on copper gig
  • Date: Tue, 26 Jul 2005 16:27:22 -0400

Hi Peter;

Thanks for the feedback. I've used copper GigE cards before and never saw this error, but that doesn't really tell you anything. My guess is the NIC is doing some pacing or packet bunching (maybe interrupt moderation).

The NDT server uses the libpcap utility to timestamp packets as they arrive at the interface. These timestamps are applied in the kernel, so they reflect not the arrival time at the interface but the time the kernel received the packet. If the packets are getting bunched up, that could skew the timestamps enough to make the report come out as you see.
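To illustrate the effect, here is a minimal sketch (not NDT's actual code) of how inter-arrival gaps between kernel timestamps translate into an inferred bottleneck speed, and how interrupt coalescing can inflate the estimate; the function name and numbers are illustrative assumptions:

```python
# Hypothetical sketch: estimate a link's speed from the inter-arrival
# gaps of back-to-back 1500-byte packets.  If interrupt moderation
# hands packets to the kernel in a burst, the kernel timestamps sit
# close together, the gaps shrink, and the inferred speed is inflated.

PKT_BITS = 1500 * 8  # one full Ethernet frame, in bits

def inferred_mbps(timestamps):
    """Average per-packet rate (Mb/s) from consecutive kernel timestamps in seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return PKT_BITS / avg_gap / 1e6

# Packets paced at true GigE wire speed (~12 us per 1500-byte frame):
paced = [i * 12e-6 for i in range(10)]
print(round(inferred_mbps(paced)))    # prints 1000

# Same packets, but coalescing delivers them to the kernel in a burst
# only 1.2 us apart -- the gaps now imply a link ten times faster:
bunched = [i * 1.2e-6 for i in range(10)]
print(round(inferred_mbps(bunched)))  # prints 10000
```

That second case matches the symptom in the quoted output below: a GigE path reported as 10 Gigabit Ethernet/OC-192.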

So, I'd suggest the following:

1) What OS/hardware is having the problem?
   a) which driver was loaded?
   b) which specific NIC is having problems?
2) Check the NIC documentation to see how it handles interrupt processing.
   a) what are the current settings?
   b) what can be changed?
3) Look at the NDT server settings.
   a) look in the Documents/networking directory for .txt files
   b) run the /sbin/ethtool utility to see what is currently set.

Send us some details and I'll see what other helpful comments I can make. :-)

Rich

At 03:39 PM 7/26/2005, Peter Van Epp wrote:
Is there a known problem with copper GigE cards (note the one we are
using is also exhibiting odd UDP rate limiting on jumbo frames with an
Apparent Networks sequencer)? I didn't see anything likely in the list archive.

On Tue, Jul 26, 2005 at 11:36:19AM -0700, Lixin Liu wrote:
> Just tried fibre to fibre (from .2 to .6) and it correctly reports the
> connection is GigE, not 10GigE:
>
> TCP/Web100 Network Diagnostic Tool v5.3.3e
> click START to begin
> Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
> running 10s outbound test (client to server) . . . . . 990.11Mb/s
> running 10s inbound test (server to client) . . . . . . 989.22Mb/s
> The slowest link in the end-to-end path is a 1.0 Gbps Gigabit Ethernet subnet
>
> When using copper interface on blowfish (.6), we get
>
> TCP/Web100 Network Diagnostic Tool v5.3.3e
> click START to begin
> Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
> running 10s outbound test (client to server) . . . . . 989.60Mb/s
> running 10s inbound test (server to client) . . . . . . 989.50Mb/s
> The slowest link in the end-to-end path is a 10 Gbps 10 Gigabit Ethernet/OC-192 subnet
> Information: The receive buffer should be 60595.70 Kbytes to maximize throughput
>
>
> Lixin.
>

Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada



