perfsonar-user - Re: [perfsonar-user] Fwd: Re: Problems with perfSonar instance at UKI-SOUTHGRID-CAM-HEP
Subject: perfSONAR User Q&A and Other Discussion
List archive
Re: [perfsonar-user] Fwd: Re: Problems with perfSonar instance at UKI-SOUTHGRID-CAM-HEP
Chronological Thread
- From: John Hill <>
- To: Mark Feit <>, "" <>
- Cc: Marian Babik <>
- Subject: Re: [perfsonar-user] Fwd: Re: Problems with perfSonar instance at UKI-SOUTHGRID-CAM-HEP
- Date: Tue, 15 Jan 2019 14:10:59 +0000
Thanks for the suggestions. I checked the hardware, but there was no evidence of any problem. I've now reinstalled perfSonar on the host and so far it looks a lot healthier, though it will be a while before it's clear whether everything is working correctly.
John
On 11/01/2019 15:30, Mark Feit wrote:
Marian Babik and John Hill write:
I noticed that the toolkit web page shows only 35 entries, but the
auto-URL for the host shows a lot more hosts to be tested as the node
was added to the ATLAS and LHCb meshes - unsure when exactly this
happened...
This would be a good thing to run down since a change like that will put
significantly more load on a system. ATLAS and LHC are running the largest
meshes I know of and are often where we find the hairy edges of what
perfSONAR can do.
> On Jan 10, 2019, at 4:26 PM, John Hill <> wrote:
>
> The problem showed up about 24 hours after the host was updated to
> 4.1.5 - is this new version more resource hungry?
It shouldn't be. The last release to introduce anything new was 4.1, and
as pScheduler goes, that was almost a non-event. Everything since has been
bugfixes and very minor improvements.
> I see quite a few errors in /var/log/pscheduler/pscheduler.log of the
> type
>
> Failed to post run for result: Database connection pool exhausted.
> Unable to get connection after 60 attempts.
Some under-the-hood insight: One of the internal parts of pScheduler is
called the runner, which is responsible for overseeing the execution of
measurements and storing the results. It's multithreaded and maintains a
pool of connections to the PostgreSQL database that the threads can acquire,
use and return when needed. The size of that pool depends on the maximum
number of connections available on the database server. Currently, we have
that set at 500 and the runner takes half to leave some for other programs
that use it. The pool will spend up to a minute waiting for a connection to
be available, making it more resilient in situations where demand has spiked
enough to exhaust it.
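As a purely illustrative sketch (this is not pScheduler's actual code, and the pool size, database name and credentials below are placeholders), a runner thread's acquire-with-retry behavior could be written roughly like this with psycopg2:

    # Simplified sketch of a thread-safe pool with a retry loop.
    import time
    import psycopg2.pool

    # Pool sized to about half of the server's limit (e.g. 250 of 500).
    pool = psycopg2.pool.ThreadedConnectionPool(
        minconn=1, maxconn=250,
        dbname="pscheduler", user="pscheduler")   # placeholder credentials

    def get_connection(attempts=60, delay=1.0):
        """Try once per second for up to a minute, then give up."""
        for _ in range(attempts):
            try:
                return pool.getconn()
            except psycopg2.pool.PoolError:
                time.sleep(delay)
        raise RuntimeError("Database connection pool exhausted. "
                           "Unable to get connection after %d attempts."
                           % attempts)

When every thread that has a connection keeps it busy for the full minute, a loop like get_connection() runs out of attempts and you see exactly the error quoted above.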
A large-enough workload can make exhaustion of the pool a regular event.
Most measurements don't contribute too much because they're either relatively
infrequent (RTT, trace) or self-regulating (throughput, which doesn't run
more than one at a time). Because it produces a continuous stream of
results, each streaming latency task causes its corresponding thread in the
runner to be almost always in possession of a connection from the pool. As
Marian pointed out, the process of getting measurements stored is I/O-bound,
so it's entirely possible that the database isn't going fast enough to
prevent pool exhaustion. Raising the connection limit might help, but the
other possibility is that a system problem is reducing I/O throughput enough
that threads hold onto connections longer than they usually would, exhausting
the pool.
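If you want to see how close the server is to its limit before changing anything, a quick check along these lines may help (the connection parameters are illustrative; adjust them for whatever authentication the local install uses):

    # Compare the configured connection limit against current usage.
    import psycopg2

    with psycopg2.connect(dbname="pscheduler", user="postgres") as conn:
        with conn.cursor() as cur:
            cur.execute("SHOW max_connections;")
            limit = int(cur.fetchone()[0])
            cur.execute("SELECT count(*) FROM pg_stat_activity;")
            in_use = cur.fetchone()[0]
            print("max_connections = %d, in use = %d" % (limit, in_use))

Keep in mind that max_connections is a server-level setting, so raising it means changing postgresql.conf (or using ALTER SYSTEM) and restarting PostgreSQL.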
The lowest-hanging fruit would be to check the system's general health,
especially if it's an old machine. Make sure the kernel isn't complaining
about I/O retries or memory problems. Check that the swap space isn't in
heavy use. If the disk is RAIDed, make sure the controller isn't spending a
lot of time reconstructing data because of a failed disk that wasn't replaced.
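For anyone who wants to script those checks, here's a rough sketch in Python (the 25% swap threshold is arbitrary, and /proc/mdstat only covers Linux software RAID; hardware controllers need their vendor's tools):

    # Quick, approximate health checks: kernel complaints, swap, RAID.
    import re
    import subprocess

    # Kernel complaints about I/O or memory (run as root for full output).
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    for line in dmesg.splitlines():
        if re.search(r"I/O error|Out of memory|oom-killer", line):
            print("KERNEL:", line)

    # Swap usage from /proc/meminfo.
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    swap_total = int(meminfo["SwapTotal"].split()[0])
    swap_free = int(meminfo["SwapFree"].split()[0])
    if swap_total and (swap_total - swap_free) > 0.25 * swap_total:
        print("SWAP: more than 25% of swap in use")

    # Software RAID status; '_' in the status field means a failed member.
    try:
        with open("/proc/mdstat") as f:
            mdstat = f.read()
        if "recovery" in mdstat or re.search(r"\[[U_]*_[U_]*\]", mdstat):
            print("RAID: array degraded or rebuilding")
    except FileNotFoundError:
        pass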
Hope that helps.
--Mark
- [perfsonar-user] Fwd: Re: Problems with perfSonar instance at UKI-SOUTHGRID-CAM-HEP, John Hill, 01/11/2019
- Re: [perfsonar-user] Fwd: Re: Problems with perfSonar instance at UKI-SOUTHGRID-CAM-HEP, Mark Feit, 01/11/2019
- Re: [perfsonar-user] Fwd: Re: Problems with perfSonar instance at UKI-SOUTHGRID-CAM-HEP, John Hill, 01/15/2019