[pS-dev] Lookup Service performance tests


  • From: Antoine Delvaux <>
  • To: perfSONAR developers <>
  • Subject: [pS-dev] Lookup Service performance tests
  • Date: Mon, 19 Dec 2011 19:17:41 +0000
  • Accept-language: en-US, en-GB

Hi All,

I'm trying to improve the GEANT Lookup Service performance. To do that in a
meaningful way, I'd like to get a better understanding of the most common
queries an LS has to answer. I also have a few questions about the expected LS
behavior, and I hope I can find answers here (if not, please point me to where
you think I'll find them).

The document at
http://anonsvn.internet2.edu/svn/nmwg/trunk/nmwg/doc/dLS/gLS/phase_1_color.html#api
describes a three-level API (levels 0, 1 and 2). It says that the LS
infrastructure is evolving from using only raw XQuery/XPath statements towards
the level 1 and level 2 APIs. That document dates back to 2007, so where are we
now exactly? Are LS clients (visualization tools or other pS services) still
using the level 0 API (raw XQueries)? Or can I consider the level 0 API
deprecated and only used by developers and service maintainers? This has a big
impact on whether and how the LS performance can be improved.
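
To make sure we are talking about the same thing, here is roughly what I mean
by a level 0 query: the client wraps a raw XQuery in an LSQueryRequest message
and posts it to the LS over SOAP. A minimal Python sketch follows; the endpoint
URL is made up, and the eventType and namespaces are from memory, so please
check them against the gLS specification before relying on them.

# Minimal sketch of a level 0 (raw XQuery) LSQueryRequest sent over SOAP.
# The endpoint is a placeholder; the eventType and namespaces should be
# double-checked against the gLS specification.
import urllib.request

LS_URL = "http://ls.example.org:8080/perfSONAR_PS/services/hLS"  # placeholder endpoint

XQUERY = """
declare namespace nmwg="http://ggf.org/ns/nmwg/base/2.0/";
for $md in /nmwg:store/nmwg:metadata
return $md
"""

ENVELOPE = f"""<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <nmwg:message type="LSQueryRequest" id="msg1"
        xmlns:nmwg="http://ggf.org/ns/nmwg/base/2.0/"
        xmlns:xquery="http://ggf.org/ns/nmwg/tools/org/perfsonar/xquery/1.0/">
      <nmwg:metadata id="meta1">
        <xquery:subject id="sub1">{XQUERY}</xquery:subject>
        <nmwg:eventType>http://ggf.org/ns/nmwg/tools/org/perfsonar/service/lookup/xquery/1.0</nmwg:eventType>
      </nmwg:metadata>
      <nmwg:data id="data1" metadataIdRef="meta1"/>
    </nmwg:message>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

# Post the envelope and print the raw LS response.
req = urllib.request.Request(
    LS_URL,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml", "SOAPAction": ""},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(resp.read().decode("utf-8"))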

I'd also like to know what the most common types of queries seen on production
LSes are. Are they discovery requests (looking for an hLS holding the
information we want) or metadata queries (looking for service/measurement
information)? Does anybody have production statistics on that?

Trying to improve the GEANT Lookup Service performance, I designed load test
scenarios and scripts run in soapUI. I also tried to compare the performance of
the GEANT LS and the Internet2 LS. My preliminary findings show that eXist-db
is the bottleneck in the GEANT LS but, depending on which LS API level is
actually used, there are possibilities to greatly improve on that. Testing the
Internet2 LS, I noticed that concurrent requests, at least
registration/deregistration requests, are not supported. Is that the reason the
LS cache service comes into play?
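
For concreteness, here is the rough shape of the concurrency test, rewritten as
a stand-alone Python sketch rather than the actual soapUI project; the
endpoint, the payload file and the crude "success" check are placeholders, not
the exact messages I used.

# Fires N identical LSRegisterRequest messages in parallel and records how
# many fail and how long they take. Endpoint and payload are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

LS_URL = "http://ls.example.org:8080/perfSONAR_PS/services/hLS"  # placeholder endpoint
PAYLOAD = open("ls_register_request.xml", "rb").read()  # a pre-built LSRegisterRequest envelope

def post_once(_):
    req = urllib.request.Request(
        LS_URL,
        data=PAYLOAD,
        headers={"Content-Type": "text/xml", "SOAPAction": ""},
    )
    start = time.time()
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            body = resp.read()
        ok = b"error" not in body.lower()  # crude check, adapt to the real result codes
    except Exception:
        ok = False
    return ok, time.time() - start

# 100 requests, 10 at a time.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(post_once, range(100)))

latencies = sorted(t for _, t in results)
failures = sum(1 for ok, _ in results if not ok)
print("failures: %d/%d" % (failures, len(results)))
print("median latency: %.3fs, max: %.3fs" % (latencies[len(latencies) // 2], latencies[-1]))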

Comparing the performance of the two systems, I also noticed that some requests
are handled differently by the two services. For example, a registration
request that fails on the GEANT LS succeeds on the Internet2 LS. I'm not sure
whether this kind of behavior is expected and/or benign, but I'll try to write
a more thorough report on it. I've also noticed that the event types returned
in error cases differ slightly between the two services. If we want a
well-functioning network of LSes, I guess we need to be more in sync there.

If there is interest, I could share my soapUI test suite here or in some common
development repository. I think having a common set of test cases, to which the
community as a whole can contribute, would be a good step towards a reliable LS
network for perfSONAR.

Thanks for any answers regarding all this. As soon as I have more solid
performance results, I'll be happy to share them here. And if all this has
already been discussed in the past and there are useful documents or archive
threads I can use, sorry to bother you all; just tell me where I can find them.

Cheers,

Antoine.



