
perfsonar-dev - Re: Lookup Service performance tests (1)

  • From: Maciej Glowiak <>
  • To: "Jeff W. Boote" <>
  • Cc: Roman Lapacz <>, Loukik Kudarimoti <>, Vedrin Jeliazkov <>, Martin Swany <>, Jason Zurawski <>, Nicolas Simar <>, Szymon Trocha <>, Eric Boyd <>,
  • Subject: Re: Lookup Service performance tests (1)
  • Date: Wed, 12 Jul 2006 10:39:33 +0200
  • Organization: Poznan Supercomputing and Networking Center

Hi Jeff,

Thanks for your comments. Find my answers inline:

Jeff W. Boote wrote:
Maciej Glowiak wrote:
---------------------------------------------------------------------

Conclusions:

1. Queries that produce a lot of result data take more time (which is
obvious, of course), so it is better to ask the LS for a smaller set of data

I suspect that once you are querying the LS from across the network you will find that there is a 'sweet spot' size that is in fact probably a little larger than what you currently call a 'smaller set'. And, that size will be at least somewhat dependent upon the RTT to the server from the client.

Basically, the propagation delay for the TCP handshake could end up consuming a large enough fraction of the overall request/response time from the client point of view that you will want the response to be large enough so that you don't have to do TCP setup again. For example, if jumbo packets are being used I would expect all results of up to about 9K to take about the same amount of time from the client perspective.
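
(As a rough illustration of that argument, with assumed rather than measured numbers -- say a 50 ms client-server RTT and a 100 Mbit/s path:)

    public class SweetSpot {
        public static void main(String[] args) {
            // Back-of-envelope only; RTT and throughput are assumptions.
            double rttMs      = 50.0;   // assumed client-server round-trip time
            double linkMbps   = 100.0;  // assumed path throughput
            double responseKB = 9.0;    // response roughly one jumbo frame in size

            double setupMs    = rttMs;                        // TCP three-way handshake ~ 1 RTT
            double transferMs = responseKB * 8.0 / linkMbps;  // ~0.72 ms to send 9 KB
            // setupMs dwarfs transferMs, so from the client's point of view any
            // response up to about 9 KB costs roughly the same wall-clock time.
            System.out.println("setup=" + setupMs + " ms, transfer=" + transferMs + " ms");
        }
    }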

I tested both eXist and the LS on localhost, so I believe network communication doesn't really matter in this case.

In order to test the performance of the LS I used localhost communication as the ideal case. Anything else can only be worse :)


2. For smaller queries one of the most significant costs is the
conversion to NMWG. It is unnecessary overhead in this case, because
LSQuery only needs to extract the XQuery expression from the request.
I don't want to discuss here whether we should remove NMWG or not, but
the fact is that the time spent on such a conversion is significant

In point 1, you state that large queries take more time. Unless the

Large query results take more time, not the queries themselves -- just to make sure we're talking about the same thing.

conversion to NMWG takes more than linear time (and I would expect it to be better than linear), I think concentrating on it here is misplaced. If you read the previous threads, you will see we have discussed this topic several times.

If I'm not mistaken, we all agree that for the case of the LS, conversion to nmwg is less efficient. However, there are many other cases where it is more efficient. (Especially if you don't convert to DOM at all.) Unless

Right now, with our generic code we can't do that. All messages are parsed into DOM first and we have no influence on it (because it happens inside Axis).

you have a more compelling reason than LS performance (because you have already said that it is pretty much fixed time for that case), my opinion is that it is easier to use the nmwg classes.

The LS Lookup/query message is of course only one of our messages, but I have a few similar cases: LSDeregister and LSKeepalive. They both send a small amount of data and the only thing we must get from them is a key.
LSRegister is somewhat different, because it sends a lot of data, and I think NMWG conversion is even less needed in this case :) because the data is put into the database as a string, so we have DOM->NMWG->String->database instead of DOM->String->database.
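
(For the LSRegister case, the DOM that Axis already produces could in principle be serialized straight to the string that goes into the database. A minimal sketch -- the class and method names are mine, not the actual generic/base code:)

    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Element;

    public class RegistrationSerializer {

        // Serialize the registration metadata element directly to the String
        // that goes into the database, skipping the NMWG object step.
        public static String domToString(Element metadata) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            StringWriter out = new StringWriter();
            t.transform(new DOMSource(metadata), new StreamResult(out));
            return out.toString();
        }
    }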

I think that's what Martin stated some time ago: NMWG is not bad in itself (and I agree with that); the problem is the multiple conversions.
And I think a developer should have a choice of which conversions to use. If someone needs NMWG inside his service, he should use it. But if he decides that it decreases performance, he should have the possibility to omit this conversion.
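
(To make the LSQuery case above concrete: once Axis has handed over the DOM, the only thing the service really needs is the XQuery text. A minimal sketch -- the namespace URI and element name are placeholders, not the real NMWG schema:)

    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class XQueryExtractor {

        // Placeholder namespace URI -- substitute the real NMWG/xquery namespace.
        private static final String XQUERY_NS = "urn:placeholder:xquery-namespace";

        // Pull the XQuery expression straight out of the request DOM,
        // without building intermediate NMWG objects.
        public static String extractXQuery(Document request) {
            NodeList subjects = request.getElementsByTagNameNS(XQUERY_NS, "subject");
            if (subjects.getLength() == 0) {
                return null;                      // no query in this message
            }
            StringBuffer text = new StringBuffer();
            for (Node n = subjects.item(0).getFirstChild();
                 n != null; n = n.getNextSibling()) {
                if (n.getNodeType() == Node.TEXT_NODE
                        || n.getNodeType() == Node.CDATA_SECTION_NODE) {
                    text.append(n.getNodeValue());
                }
            }
            return text.toString().trim();
        }
    }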


If we need to change anything here, it is my view that we should be changing the parsing model. Not messing with the representation of the marshaled objects. If we used an on-demand parsing model, it would be possible to detect that some sequence of XML elements is being parsed within an LS message type, and that the contents should just be handled as a string for XPath/XQuery parsing. This eliminates the nmwg overhead for LS message types.

But that needs major changes in generic/base. That's what I proposed some time ago: let's write generic/base version 2 (or version 1, because what we have is still a prototype). Then we could make it as flexible as we need (and make all conversions optional).


Then, for other message types the nmwg classes could still be used - because for many of them it is more efficient and much easier to use and understand.
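
(A minimal sketch of the kind of on-demand handling described above, assuming a SAX pass that recognizes a -- here hypothetical -- "subject" element of an LS query and simply buffers its character data as a string instead of building NMWG objects:)

    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // SAX handler that captures the XQuery text of an LS query message
    // without building DOM or NMWG objects for the rest of the document.
    public class XQueryCaptureHandler extends DefaultHandler {

        private final StringBuffer buffer = new StringBuffer();
        private boolean inSubject = false;

        public void startElement(String uri, String localName,
                                 String qName, Attributes atts) {
            // "subject" is an assumed element name -- adjust to the real schema
            if ("subject".equals(localName)) {
                inSubject = true;
            }
        }

        public void characters(char[] ch, int start, int length) {
            if (inSubject) {
                buffer.append(ch, start, length);
            }
        }

        public void endElement(String uri, String localName, String qName) {
            if ("subject".equals(localName)) {
                inSubject = false;
            }
        }

        public String getXQuery() {
            return buffer.toString().trim();
        }
    }

(Driven by a javax.xml.parsers.SAXParser, this would give a straight XML-to-String path for LS messages, while other message types could still go through the nmwg classes.)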

3. Another significant cost (which wasn't measured here directly, sorry)
is the initialization of the eXist DB XML StorageManager. In Tomcat this
StorageManager is initialized only once.

4. The most significant cost is querying the eXist DB server. We can't
speed it up with the old StorageManager, which uses XML:DB for
communication (the default way of talking to the DB server from Java),
so I wrote a new HTTP access to the DB server which is much faster for
smaller queries. Additional tests will be provided soon, but it is
already in the SVN repository

Is there any downside? What functionality is provided by XML:DB that is not provided by your HTTP access? Do we care about any of that functionality? This sounds like really good, useful work to me. But, I would like to know what (if anything) we are giving up.

Hmm, I think there is no downside. We use only basic communication with the eXist DB. XML:DB is a Java API for accessing XML databases; it's the common way. Recently I read a discussion on the eXist wiki on this topic: someone asked why the eXist DB Java API used XML:DB, which was not so easy to use, and the answer was that it is a standard which should be followed.

We have our own classes for communicating with databases (StorageManager), so we don't need standards at the database communication layer. HTTP access is faster and should speed up the services. (I'll send some results soon.)

I think we don't lose anything important here.
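
(For reference, the kind of "basic communication" this relies on is just an HTTP request against eXist's REST interface. A rough sketch -- the endpoint URL and collection name are assumptions, not the actual ExistDbHTTPAccess code:)

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;

    public class ExistHttpSketch {

        // Send an XQuery to eXist over plain HTTP (its REST interface) and
        // return the raw XML response as a String.
        public static String httpQuery(String xquery) throws Exception {
            // Default eXist REST endpoint and collection are assumptions.
            String base = "http://localhost:8080/exist/rest/db/perfsonar";
            URL url = new URL(base + "?_query=" + URLEncoder.encode(xquery, "UTF-8"));

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), "UTF-8"));
            StringBuffer response = new StringBuffer();
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line).append('\n');
            }
            in.close();
            return response.toString();
        }
    }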



--------------------------------------------------------------------------

Bottlenecks:

The two main bottlenecks (apart from Axis/Tomcat, which wasn't tested this time) are communication with the eXist DB via the XML:DB API (see conclusions 3 & 4) and the conversion to NMWG (see conclusion 2).

--------------------------------------------------------------------------

Required improvements:

1. StorageManager (use new HTTP Storage Manager)

I'd like a few more details - but this sounds absolutely reasonable.

The code is in SVN. A StorageManager based on it is still missing, but I am waiting for Roman and his requirements for it (he may use it as well).

An example of its use is inside the

org.perfsonar.service.commons.storage.xmldb.existHttp.ExistDbHTTPAccess

Java file.

2. NMWG (?)

As I said above, I don't believe the problem is the nmwg classes specifically - but the parsing model we are using.

Right, but changes in generic/base are required to solve this problem. Otherwise we won't improve the performance much.


Best regards
Maciej



--
--------------------------------------------------------------------
| Maciej Glowiak      Network Research and Development             ||
|                     Poznan Supercomputing and Networking Center  ||
| (+48 61) 858 2024   http://monstera.man.poznan.pl/               ||
====================================================================


