
Re: [perfsonar-user] strange owamp loss followed by very high latency


  • From: Ben Nelson <>
  • To:
  • Subject: Re: [perfsonar-user] strange owamp loss followed by very high latency
  • Date: Mon, 19 May 2014 11:12:33 -0400

Hey Aaron,

Up until the event, latency was fairly low, though we had started seeing
some loss. In the middle of the event we saw 100% loss. As things settled
down and the packet loss began to drop, that's when we noticed the high
latency.

These are the same hosts running bwctl tests. The bwctl tests ran
occasionally during the event, starting a few minutes after it began, and
didn't show any issues with either throughput or loss, even though they
were running on the same interface as owamp.

We have metrics running on the host, and we didn't notice any strange CPU
spikes.

I'm not sure what else might be the cause, but if you have any thoughts,
that would be helpful.
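
For what it's worth, one quick way to line the "Pre-alloc buffer too
small" messages (quoted below) up against the loss window is to bucket
them per minute from the debug log. This is just a rough sketch: it
assumes the pipe-delimited syslog format shown in the quoted log lines,
and the log path is a placeholder you'd need to adjust:

    #!/usr/bin/env python3
    """Count powstream 'Pre-alloc buffer too small' messages per minute.

    Rough sketch: assumes pipe-delimited debug log lines like the ones
    quoted below; the default log path is a placeholder, not a real
    perfSONAR default.
    """
    import sys
    from collections import Counter

    # Placeholder path -- pass the real debug log as the first argument.
    LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "owamp_debug.log"
    MARKER = "Pre-alloc buffer too small"

    per_minute = Counter()
    with open(LOG_PATH) as fh:
        for line in fh:
            if MARKER not in line:
                continue
            # The first pipe-delimited field is the ISO-8601 timestamp;
            # keep YYYY-MM-DDTHH:MM so counts are bucketed per minute.
            timestamp = line.split("|", 1)[0]
            per_minute[timestamp[:16]] += 1

    for minute in sorted(per_minute):
        print(f"{minute}  {per_minute[minute]}")

If the per-minute counts rise and fall with the loss we saw in the owamp
results, that would at least suggest the messages are a symptom of the
sustained loss rather than a separate problem.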

Thanks,
Ben


On 5/19/14, 8:20 AM, Aaron Brown wrote:
> Hey Ben,
>
> I’ve seen that “allocating additional nodes” error a number of times when
> the loss rates were absurdly high, though I can’t recall how high they
> were. Is there something else that might have been running on the host
> itself that could be causing problems (either by monopolizing CPU, or
> sending a bunch of traffic)? Was the latency consistently 9.9s, or did it
> just spike up to there?
>
> Cheers,
> Aaron
>
> On May 15, 2014, at 4:26 PM, Ben Nelson
> <>
> wrote:
>
>> Hi All,
>>
>> I'm wondering if anyone has run into this issue before. For ~30 minutes,
>> all the owamp tests to/from one of our hosts showed large amounts of
>> loss, and as the event began winding down, we saw very high latency, on
>> the order of 9.8-9.9 seconds. From what we can tell, it doesn't appear
>> to be a network issue.
>>
>> Additionally, the owamp debug logs reported these, which I hadn't seen
>> before:
>>
>> 2014-05-15T18:30:19+00:00|local5|debug|powstream|powstream[5196]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:20+00:00|local5|debug|powstream|powstream[5165]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:20+00:00|local5|debug|powstream|powstream[5524]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:21+00:00|local5|debug|powstream|powstream[5324]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:21+00:00|local5|debug|powstream|powstream[5382]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:21+00:00|local5|debug|powstream|powstream[5273]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:22+00:00|local5|debug|powstream|powstream[5387]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>> 2014-05-15T18:30:23+00:00|local5|debug|powstream|powstream[5153]:
>> alloc_node: Pre-alloc buffer too small. Allocating additional nodes for
>> lost-packet-buffer.
>>
>> They occurred fairly frequently during the same period we experienced
>> the loss.
>>
>> Anyone have any thoughts?
>>
>> Thanks,
>> Ben
>>
>> --
>> Ben Nelson, Systems Engineer
>> Indiana University GlobalNOC
>> <>
>>

--
Ben Nelson, Systems Engineer
Indiana University GlobalNOC
<>



