
ndt-users - Re: FW: Connection was reset -- FC5 on x86-64


  • From: Peter Van Epp <>
  • To:
  • Subject: Re: FW: Connection was reset -- FC5 on x86-64
  • Date: Thu, 18 May 2006 12:24:08 -0700

On Thu, May 18, 2006 at 01:35:00PM -0400, Andrew Mabe wrote:
>
> David et al....
>
> I am ready to implement one of my NDT servers at MCNC / NCREN and have the
> same question. I have a few IBM eServers with PIII processors and a gig of
> RAM. I have word from a few fellow UNC system schools that they had issues
> with non-Intel network cards. Before working through and hitting the same
> issue myself, I would like to hear from the NDT community which server and
> OS I should pursue. I can obtain any boxes or OS needed (including network
> cards) but would LOVE to hear from you about successful implementations
> (including using multiple gig NIC cards?).
>
> I will be placing these on the NCREN network in North Carolina for use
> against the OC48 and 10GbE connections to customers.
>
> Any advice would be greatly appreciated.
>
> In addition, are your NDT servers JUST serving NDT, or is anyone running
> "Router Node Proxy" or Cacti/RRDtool on the same boxes?
>
> Thanks,
> Andrew Mabe
> Sr. Solutions Engineer
> MCNC/NCREN
> 919-248-4124
>

We are running ndt on an old dual Athlon box (1.4 GHz CPUs, I believe) with
512 MB of RAM, (at the moment :-)) a SysKonnect and an Intel server PRO gig
fibre NIC, and two on-board copper 10/100s. The kernel is somewhat strange
(and currently slightly unstable) because, as well as web100, we also have the
ntop ring buffer libpcap code installed. NDT will happily operate out of any
NIC and will queue tests from different networks and execute them in order
without problem. This machine is the test box for our production argus boxes,
which is why the strange kernel mix :-).

The most important thing to note (other than using good NICs; we tried a
D-Link that our HPC guys have, with poor results as I recall, so the
SysKonnect or, these days, the half-height Intel server NIC is preferred) is
to shut off interrupt moderation in the driver. Otherwise you get 10 gigs out
of your 1 gig interconnects (or at least so ndt thinks, from the rapid packet
interarrival when the moderation timer expires and the card spits out a bunch
of packets at once).

I'm in the process of ordering a Sun 4200 dual Opteron box with as much RAM as
I can afford for a new test machine (the 4200's claim to fame being five 64/64
or higher PCI slots, against two in my Athlon boxes, in a 2U case that will
take Intel half-height gig cards), so we will eventually see how that goes.
While we have all kinds of things loaded (ndt, iperf, netperf, argus, etc.) we
tend to run only one at a time, so I can't say what the impact of one on the
others would be (iperf will flatten my Athlons and I doubt ndt could run
alongside it; the Sun may be able to).

I'd also steer clear of Broadcom NICs. Our HPC grid guys have a cluster across
town with a Broadcom and two Intel server NICs in it. The Broadcom is the NIC
for the backup path to the storage facility (which is here) and has been seen
(because I have the only gig-capable sniffer that I know of on the grid path,
and a fibre tap in the circuit :-)) to put a good CRC on an undersize packet
(presumably because of an underrun due to insufficient memory bandwidth),
which causes trouble. The Intel NICs have not been seen to do this.
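
As a rough illustration, turning off interrupt moderation/coalescing on a
Linux box usually looks something like the commands below. This is a sketch
only: the interface name (eth1) is made up, and which of these knobs a given
driver actually honours varies by NIC, driver, and kernel version.

    # Show the current interrupt coalescing settings for the interface.
    ethtool -c eth1

    # Ask the driver to interrupt once per packet instead of batching
    # (no delay, one frame per interrupt); some drivers ignore some of these.
    ethtool -C eth1 rx-usecs 0 rx-frames 1 tx-usecs 0 tx-frames 1

    # Intel e1000-based cards also expose this as a module parameter
    # (check the driver documentation shipped with your kernel):
    # modprobe e1000 InterruptThrottleRate=0

After changing the setting, re-run an NDT test; if the reported speed drops
from an impossibly high multi-gig figure back to something near line rate,
moderation was the culprit.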

Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada


