Re: [pS-dev] Performance Testing


  • From: "Jeff W. Boote" <>
  • To: Nina Jeliazkova <>
  • Cc: Maciej Glowiak <>, Michael Michalis <>, 'Andreas Hanemann' <>, 'Loukik Kudarimoti' <>, 'Ilias Tsompanidis' <>, , "'Athanassios C. Liakopoulos'" <>, 'Jochen Reinwand' <>
  • Subject: Re: [pS-dev] Performance Testing
  • Date: Fri, 11 May 2007 10:03:34 -0600

VERY well stated.

jeff

Nina Jeliazkova wrote:
Hello all,

It depends on what the main goal of the performance testing will be. Let me
stress once again that testing the service in real-life situations is highly
desirable, because that represents the end-user experience, which is the most
important thing after all. I'm not against testing separate snippets of code,
but I'm pretty sure that alone will not be sufficient. Let me give some
examples:

- network latency is an important factor, especially when the protocol
happens to be too chatty; we should have some figures showing whether such a
problem exists, and if so, what should be done to avoid it (see the quick
arithmetic after these examples);

- Axis is a black box (on both the service and the client side), but a
complete end-to-end test will include its performance impact; we definitely
need such figures in order to decide which particular version of Axis (or of
another tool in its place) should be used in the future;

- in local tests (inside a LAN, or even on a single machine), request
processing finishes quickly and promptly frees resources (memory, CPU),
whereas a combination of a slow network, a chatty protocol, big
requests/responses, and a high number of concurrent requests would put much
more stress on the service's host.
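
To make the latency point concrete (illustrative numbers, not measurements):
a protocol exchange that needs 20 round trips costs roughly 20 × 0.5 ms =
10 ms inside a LAN, but 20 × 100 ms = 2 s across a long WAN path, with
exactly the same code under test.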

One could come up with a long list of such arguments. In brief, what comes to
my mind is the usual surprise of discovering that a well-behaved database,
developed and running in a LAN, is extremely inefficient when accessed through
a WAN. Unfortunately, this story is repeated often. I'm not saying that this
would be the case for perfSONAR, but we have to be sure about it. That's why
end-to-end WAN performance testing would be most welcome.
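
For illustration, a minimal client-side sketch of such an end-to-end
measurement could look like the following (the endpoint URL and the payload
are placeholders for this example, not real perfSONAR values):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Measures what a remote (WAN) user actually experiences: one full
    // HTTP/SOAP round trip, including network, Axis, and service time.
    public class EndToEndTimer {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and payload, for illustration only.
            URL endpoint = new URL("http://ls.example.org:8080/axis/services/LS");
            byte[] request = "<soapenv:Envelope ...>".getBytes("UTF-8");

            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "");
            OutputStream out = conn.getOutputStream();
            out.write(request);
            out.close();

            InputStream in = conn.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* drain the response */ }
            in.close();

            long elapsed = System.currentTimeMillis() - start;
            System.err.println("end-to-end round trip: " + elapsed + " ms");
        }
    }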

Regards,
Nina


Maciej Glowiak <> wrote:

Michalis,

Please find some of my comments:

1)

You mentioned measuring the network connectivity times T3 and T4. For the Java services it will be difficult to measure these times, because of Axis. Perhaps we could use some internal Axis handlers or listeners, but we should ask someone who knows Axis better than me :)
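
As a rough illustration of that handler idea (an untested sketch against the
Axis 1.x handler API; the "perf.start" property name and the logging are made
up, and the handler would still have to be registered in the
requestFlow/responseFlow of the service's WSDD):

    import org.apache.axis.AxisFault;
    import org.apache.axis.MessageContext;
    import org.apache.axis.handlers.BasicHandler;

    // Runs once on the request flow and once on the response flow;
    // both invocations share the same MessageContext.
    public class TimingHandler extends BasicHandler {

        public void invoke(MessageContext ctx) throws AxisFault {
            Object start = ctx.getProperty("perf.start");
            if (start == null) {
                // Request flow: note when Axis first saw the message.
                ctx.setProperty("perf.start",
                        new Long(System.currentTimeMillis()));
            } else {
                // Response flow: compute the elapsed time inside Axis.
                long elapsed = System.currentTimeMillis()
                        - ((Long) start).longValue();
                System.err.println("Axis request->response: " + elapsed + " ms");
            }
        }
    }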

As far as I understand, your document describes how the performance tests should be done in general, but your test case example was the LS, so I assume it will be one of the first services tested, and understanding our Java architecture would be quite important.

So, we have:

Client -> network -> Axis -> RequestHandler -> MessageHandler -> Service
(and the reverse sequence on the way back)

Network connectivity time depends on various circumstances and should be considered separately.

Axis is currently a black box for us. We can't measure the time between a message being sent to the service and the message being passed as a DOM to the RequestHandler.

The RequestHandler, AFAIR, already measures time in milliseconds (I don't remember exactly, but I certainly did that for my internal LS performance testing). The same may be done for the MessageHandler and the Service.

In fact, the RequestHandler and MessageHandler may be considered together as the "perfSONAR-base".

If you need any changes in the perfSONAR-base classes in order to measure the times of the various components, just let me know.
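
A minimal sketch of what such per-component timing could look like (the
ComponentTimer class, the Stage interface, and the method names here are
hypothetical, not the real perfSONAR-base API):

    import org.w3c.dom.Document;

    // Wraps one pipeline stage (e.g. MessageHandler or Service) and logs
    // how long its processing took, in milliseconds.
    public final class ComponentTimer {

        public interface Stage {
            Document process(Document request) throws Exception;
        }

        public static Document timed(String name, Stage stage, Document request)
                throws Exception {
            long start = System.currentTimeMillis();
            try {
                return stage.process(request);
            } finally {
                System.err.println(name + " took "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }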

2)

Categorization of requests is a good idea in general, but doing it without understanding how the service works may give misleading results.

Take LSQuery and LSRegister, for instance. My understanding of your thoughts was that LSQuery may be more time-consuming because it depends on what the LS is asked for. That's true, of course. But LSRegister may be even more time-consuming because of the internal storage of the XML database (sometimes a simple registration takes tens of seconds!).

So, I think that in all cases there should be an agreement between the testing team and the developers on how to categorize the requests and what they depend on.
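
As an illustration of what per-category measurement could look like once
those categories are agreed (the class, field, and category names are made up
for this sketch):

    import java.util.HashMap;
    import java.util.Map;

    // Aggregates measured response times per request type, so categories
    // such as "LSRegisterRequest" and "LSQueryRequest" can be compared.
    public class PerTypeStats {

        private static final class Stat { long count; long totalMs; }

        private final Map<String, Stat> stats = new HashMap<String, Stat>();

        public synchronized void record(String requestType, long millis) {
            Stat s = stats.get(requestType);
            if (s == null) {
                s = new Stat();
                stats.put(requestType, s);
            }
            s.count++;
            s.totalMs += millis;
        }

        public synchronized void dump() {
            for (Map.Entry<String, Stat> e : stats.entrySet()) {
                Stat s = e.getValue();
                System.err.println(e.getKey() + ": n=" + s.count
                        + ", avg=" + (s.totalMs / s.count) + " ms");
            }
        }
    }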


Best regards

Maciej




Michael Michalis wrote:
Sorry, forgot the doc.

-----Original Message-----
From: Andreas Hanemann [mailto:]
Sent: Thursday, May 10, 2007 2:39 PM
To: Michael Michalis
Cc: Nina Jeliazkova; Loukik Kudarimoti; Ilias Tsompanidis; perfsonar-;
Athanassios C. Liakopoulos; Jochen Reinwand
Subject: Re: [pS-dev] Performance Testing

Hi Michalis,

I have used the commented version from Nina and added some comments of my
own. The testing method that you describe is very useful, at least as a
first step. We will have to see whether we need more specific tests later.
These could be needed to determine the exact conditions of a service
performance problem (e.g. the differences between the response times for LS
registration and deregistration requests reported by RNP during the
perfSONAR meeting).

Best regards
Andreas

Nina Jeliazkova wrote:
Hello Michael,

Please find my comments in track changes (attached).

Best regards,
Nina

Michael Michalis <> wrote:


Hi all,



I've put together an initial document describing performance testing. I
would greatly appreciate any comments or suggestions on the document.



Best Regards,

Michalis Michael



--
Andreas Hanemann,

Boltzmannstrasse 1, 85748 Garching, Germany
Telefon: +49 89 35831-8712
Fax: +49 89 35831-9700







