ndt-users - Re: Am I seeing the right results?

  • From: Simon Leinen <>
  • To: Peter Van Epp <>
  • Cc:
  • Subject: Re: Am I seeing the right results?
  • Date: Sat, 29 Sep 2007 11:14:40 +0200

Peter Van Epp writes:
> The "slowest link is gig" is likely caused by the server NIC
> card having interrupt reduction (it has a correct name but I don't
> remember it :-)) on.

(People seem to prefer the fancier names of "interrupt coalescence" or
"interrupt moderation" :-)

http://kb.pert.geant2.net/PERTKB/InterruptCoalescence

> NDT guesses link speed by packet interarrival time; if the NIC
> delivers multiple packets per interrupt, that timing is disrupted
> (this can usually be disabled in the NIC driver, although perhaps
> not easily).

Very interesting, I hadn't known that. But how does NDT measure
*packet* interarrival times - doesn't it only do TCP (where the
application only sees a byte stream)?
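
To make the limitation concrete: about the best a purely TCP-based
tool can do at user level is time successive recv() calls on the
connected socket, something like the rough illustration below (this
is just a sketch of the byte-stream problem, not NDT's actual code,
and the helper name is invented).

/* Illustration only, not NDT's code: approximate "packet" interarrival
 * gaps by timing successive recv() calls on a connected TCP socket.
 * With interrupt coalescence, several segments are delivered to the
 * application in one batch, so most of these gaps collapse to almost
 * nothing. */
#include <stdio.h>
#include <time.h>
#include <sys/socket.h>
#include <sys/types.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void time_recv_gaps(int fd)           /* fd: a connected TCP socket */
{
    char buf[9000];
    double prev = now_sec();

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n <= 0)
            break;
        double t = now_sec();
        printf("%zd bytes after a gap of %.6f s\n", n, t - prev);
        prev = t;
    }
}

Even without coalescence, these numbers only tell you when the data
reached the application, not when the packets arrived on the wire.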

> Before I turned it off on our gig link, it used to claim I had an
> OC192 (which was of course news to me). Throughput looks about
> right for a well-performing 100 meg link, though.

Because interrupt coalescence is quickly becoming prevalent (even my
laptop has it), it would be useful to think about measurement methods
that are "robust" to it.

In general, I would favour it if everybody used kernel timestamps
(e.g. SO_TIMESTAMP), and if every adapter that performs interrupt
coalescence decorated incoming frames with hardware timestamps.
That wouldn't require much (if any) new hardware on the adapters,
just a little more logic in the driver to convert hardware timestamps
into OS-level timestamps.
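
For the datagram case this is already doable today, at least on
Linux. A minimal sketch (the helper name is made up and error
handling is mostly omitted): enable SO_TIMESTAMP on the socket, then
pull the kernel's receive timestamp out of the ancillary data that
recvmsg() returns.

/* Sketch only: per-datagram kernel receive timestamps via SO_TIMESTAMP
 * (Linux).  The timestamp is taken inside the kernel when the packet
 * is handed up the stack, so it is not distorted by user-space
 * scheduling, although it still reflects when the (possibly coalesced)
 * interrupt was serviced. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

void recv_with_timestamp(int fd)      /* fd: a bound UDP socket */
{
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof on);

    char buf[2048];
    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof buf };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof ctrl,
    };

    ssize_t n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL;
         c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP) {
            struct timeval tv;
            memcpy(&tv, CMSG_DATA(c), sizeof tv);
            printf("%zd bytes, kernel receive time %ld.%06ld\n",
                   n, (long) tv.tv_sec, (long) tv.tv_usec);
        }
    }
}

If the adapter stamped frames in hardware and the driver translated
those stamps into the same SO_TIMESTAMP values, applications wouldn't
even have to change.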

Still, I have no idea how to provide such timestamps to an
application that only uses TCP...
--
Simon.


