
Re: [perfsonar-user] more than 2000 threads


  • From: Mark Feit <>
  • To: Pete Siemsen <>, Marian Babik <>
  • Cc: <>
  • Subject: Re: [perfsonar-user] more than 2000 threads
  • Date: Thu, 13 Jun 2019 19:39:17 +0000

Pete Siemsen writes:

 

A year and a half later, I have this problem again. When it arose a few days ago, I got frustrated and did a clean install. This time I installed Debian 9.9 and perfsonar-toolkit 4.1.6 via apt-get. There are no tests configured and no MaDDash. The process counts are similar to those I reported earlier in this thread:

 

perfsonar-1850$ ps -eTf | wc -l
2155
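
Grouping the threads by command shows where the count comes from. Something like this works, assuming the default ps -eTf column layout, where the command name is the ninth field:

perfsonar-1850$ ps -eTf | awk 'NR>1 {print $9}' | sort | uniq -c | sort -rn | head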

 

Seems crazy. This time around, the system is quite responsive, so I am simply going to raise the threshold from 2000 to 3000 and call it good.

 

 

That’s the right thing to do for the time being if the system isn’t performing badly.  The sheer number of processes will make the load average look higher.
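
A quick way to check is to compare the load average against actual CPU use; if the machine is mostly idle, the processes are just sleeping. A rough sketch, assuming vmstat from procps is installed:

perfsonar-1850$ cat /proc/loadavg
perfsonar-1850$ vmstat 5 3     # a high "id" column means the CPUs are mostly idle despite the process count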

 

The current situation is that every streaming latency session is going to spawn a total of four powstream-related processes on the source and one owampd on the destination. It’s not ideal, but we have to live with it for the time being because owamp and powstream are old programs and weren’t really written with this use case in mind. (I assume that machine is part of a mesh, because something has to set all of those tests up.)
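
To confirm that breakdown on a particular host, counting the processes by name is enough. A rough sketch, assuming pgrep and ps from procps are available:

perfsonar-1850$ pgrep -c powstream                        # roughly four per latency test where this host is the source
perfsonar-1850$ pgrep -c owampd                           # destination-side processes, including the master daemon
perfsonar-1850$ ps -C powstream -o pid,ppid,args | head   # shows the parent/child structure among them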

 

The good news for the future is that we are in the very early design stages of a replacement for powstream. It should cut the number of always-on processes by 75% and will turn the remaining 25% into inexpensive threads under a single process. The nature of this test is that there has to be one execution context for each session, so we won’t be able to trim it back more than that. Owampd isn’t particularly complicated, and we may be able to make it multithreaded instead of forking, but it will have to be done carefully.
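
To put rough numbers on that: if the four powstream-related processes per session collapse to a single thread each, the roughly 2000 processes behind a 500-session mesh would become about 500 threads inside one process.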

 

--Mark

 



