perfsonar-dev - RE: [pS-dev] Performance Testing
- From: Michael Michalis <>
- To: 'Nina Jeliazkova' <>, 'Maciej Glowiak' <>
- Cc: 'Andreas Hanemann' <>, 'Loukik Kudarimoti' <>, 'Ilias Tsompanidis' <>, , "'Athanassios C. Liakopoulos'" <>, 'Jochen Reinwand' <>
- Subject: RE: [pS-dev] Performance Testing
- Date: Mon, 14 May 2007 08:28:17 +0300
Hi Nina,
Please find some comments in line.
> -----Original Message-----
> From: Nina Jeliazkova [mailto:]
> Sent: Friday, May 11, 2007 3:49 PM
> To: Maciej Glowiak; Michael Michalis
> Cc: 'Andreas Hanemann'; 'Nina Jeliazkova'; 'Loukik Kudarimoti'; 'Ilias Tsompanidis'; ; 'Athanassios C. Liakopoulos'; 'Jochen Reinwand'
> Subject: Re: [pS-dev] Performance Testing
>
> Hello all,
>
> It depends on what the main goal of the performance testing will be. Let me
> stress once again that testing the service in real-life situations is highly
> desirable, because that represents the end-user experience, which is the
> most important thing after all. I'm not against testing separate snippets of
> code, but I'm pretty sure that this alone will not be sufficient. Let me
> give some examples:
>
> - network latency is an important factor, especially in cases where the
> protocol happens to be too chatty - we should have some figures showing
> whether such a problem might exist, and then decide what should be done to
> avoid it;
>
> - Axis is a black box (on both the service and client side), but a complete
> end-to-end test will include its performance impact; we definitely need such
> figures in order to decide which particular version of Axis (or another tool
> in its place) should be used in the future;
Yes, you are right. I'm not sure whether testing Axis alone is included in
this stage of the tests. This is something we definitely need to take into
account.
> - when performing local tests (inside a LAN, or even on a single machine),
> request processing will finish quickly and free up resources (memory, CPU),
> while a combination of a slow network, a chatty protocol, big
> requests/responses, and a high number of concurrent requests would stress
> the service's host much more;
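The interplay of round trips, payload size and server time mentioned in these examples can be sketched with a simple back-of-envelope model. This is a hypothetical illustration only; the class, parameter names, and figures below are assumptions, not measurements from this thread:

```java
// Rough cost model for one request: hypothetical illustration only.
// total time ≈ round trips × RTT + payload / bandwidth + server processing.
public class ChattyProtocolCost {

    static double estimateMillis(int roundTrips, double rttMillis,
                                 double payloadKb, double bandwidthKbPerMs,
                                 double serverMillis) {
        return roundTrips * rttMillis          // protocol chatter
             + payloadKb / bandwidthKbPerMs    // transfer time
             + serverMillis;                   // service-side processing
    }

    public static void main(String[] args) {
        // Same request over a LAN (0.5 ms RTT) and a WAN (40 ms RTT),
        // assuming 6 round trips, a 100 KB payload at 1.25 KB/ms,
        // and 20 ms of server-side processing.
        System.out.println("LAN: " + estimateMillis(6, 0.5, 100, 1.25, 20) + " ms");
        System.out.println("WAN: " + estimateMillis(6, 40.0, 100, 1.25, 20) + " ms");
    }
}
```

With these made-up figures the WAN case is dominated by the round-trip term, which is exactly why a protocol that looks fine in a LAN can become extremely slow over a WAN.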
>
> One could come up with a long list of such arguments. In brief, what comes
> to my mind is the usual surprise of discovering that a well-behaving
> database, developed and running in a LAN, is extremely inefficient when
> accessed through WANs. Unfortunately, this story is repeated often. I'm not
> saying that this will be the case for perfSONAR, but we have to be sure
> about it. That's why end-to-end WAN performance testing would be most
> welcome.
So we all agree that initial tests in a LAN environment should be followed
by tests in a WAN environment.
Thanks for your feedback and useful comments, and also congratulations on
Bulgaria's placing in the Eurovision song contest :).
Michalis
> Regards,
> Nina
>
>
> Maciej Glowiak <> wrote:
>
> > Michalis,
> >
> > Please find some of my comments:
> >
> > 1)
> >
> > You mentioned measuring the network connectivity times T3 and T4. In the
> > case of the Java services it will be difficult to measure these times,
> > because of Axis. Perhaps we could use some internal Axis handlers or
> > listeners, but we should ask someone who knows Axis better than me :)
> >
> > As far as I understand, your document describes how the performance tests
> > should be done in general, but your test-case example was the LS, so I
> > assume it will be one of the first services tested; understanding our
> > Java architecture is therefore quite important.
> >
> > So, we have:
> >
> > Client -> network -> Axis -> RequestHandler -> MessageHandler -> Service
> > (and the reverse sequence on the way back)
> >
> > The network connectivity time depends on various circumstances and should
> > be considered separately.
> >
> > Axis is currently a black box for us. We can't measure the time between a
> > message being sent to the service and that message being passed as a DOM
> > to the RequestHandler.
> >
> > The RequestHandler, AFAIR, already measures time in milliseconds (I don't
> > remember exactly, but I certainly did it for my internal LS performance
> > testing). The same may be done for the MessageHandler and the Service.
> >
> > In fact, the RequestHandler and MessageHandler may be considered together
> > as "perfSONAR-base".
> >
> > If you need any changes in the perfSONAR-base classes in order to measure
> > the times of the various components, just let me know.
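One way to get per-stage timings along the Client -> Axis -> RequestHandler -> MessageHandler -> Service chain is a small stopwatch that is "marked" at each hand-off. This is only a sketch; `StageTimer` and the stage names are illustrative assumptions, not the actual perfSONAR-base API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: not the real perfSONAR-base API.
// Each call to mark() records the time elapsed since the previous mark,
// so consecutive marks yield per-stage durations.
public class StageTimer {
    private final Map<String, Long> stageNanos = new LinkedHashMap<>();
    private long last = System.nanoTime();

    // Record the time since the previous mark under the given stage name.
    public void mark(String stage) {
        long now = System.nanoTime();
        stageNanos.put(stage, now - last);
        last = now;
    }

    // Elapsed time of a recorded stage in milliseconds (0 if never marked).
    public long millis(String stage) {
        return stageNanos.getOrDefault(stage, 0L) / 1_000_000;
    }
}
```

A handler would call `timer.mark("RequestHandler")` when it hands the message on, and so on down the chain; the Axis segment itself stays opaque unless a custom Axis handler does the marking.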
> >
> > 2)
> >
> > Categorizing requests is a good idea in general, but without
> > understanding how the requests actually work, the categorization may be
> > misleading.
> >
> > Take LSQuery and LSRegister, for instance. My understanding of your
> > thoughts was that LSQuery may be more time-consuming because it depends
> > on what the LS is asked for. That's true, of course. But LSRegister may
> > be even more time-consuming because of the internal storage of the XML
> > database (sometimes a simple registration takes tens of seconds!).
> >
> > So, I think that in all cases there should be an agreement between the
> > testing team and the developers on how to categorize requests and what
> > they depend on.
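Once the categories are agreed on, making differences like LSQuery vs. LSRegister visible is a matter of aggregating response times per category. A minimal sketch (class and category names are assumptions for illustration, not project code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: per-category response-time aggregation,
// so e.g. "LSQuery" and "LSRegister" timings can be compared directly.
public class CategoryStats {
    // For each category: {count, total ms, max ms}.
    private final Map<String, long[]> stats = new HashMap<>();

    public void record(String category, long millis) {
        long[] s = stats.computeIfAbsent(category, k -> new long[3]);
        s[0]++;
        s[1] += millis;
        s[2] = Math.max(s[2], millis);
    }

    public double meanMillis(String category) {
        long[] s = stats.get(category);
        return (s == null || s[0] == 0) ? 0.0 : (double) s[1] / s[0];
    }

    public long maxMillis(String category) {
        long[] s = stats.get(category);
        return s == null ? 0 : s[2];
    }
}
```

Reporting mean and maximum per category would surface exactly the kind of outlier mentioned above, where a registration occasionally takes tens of seconds.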
> >
> >
> > Best regards
> >
> > Maciej
> >
> >
> >
> >
> > Michael Michalis wrote:
> > > Sorry, I forgot the doc.
> > >
> > >> -----Original Message-----
> > >> From: Andreas Hanemann [mailto:]
> > >> Sent: Thursday, May 10, 2007 2:39 PM
> > >> To: Michael Michalis
> > >> Cc: Nina Jeliazkova; Loukik Kudarimoti; Ilias Tsompanidis; perfsonar-; Athanassios C. Liakopoulos; Jochen Reinwand
> > >> Subject: Re: [pS-dev] Performance Testing
> > >>
> > >> Hi Michalis,
> > >>
> > >> I have used the commented version from Nina and added some comments of
> > >> my own. The testing method that you describe is very useful, at least
> > >> as a first step. We will have to see whether we need more specific
> > >> tests later. These could be needed to determine exactly the conditions
> > >> of a service performance problem (e.g. the differences between the
> > >> response times for LS registration and deregistration requests
> > >> reported during the perfSONAR meeting by RNP).
> > >>
> > >> Best regards
> > >> Andreas
> > >>
> > >> Nina Jeliazkova wrote:
> > >>> Hello Michael,
> > >>>
> > >>> Please find my comments in track changes (attached).
> > >>>
> > >>> Best regards,
> > >>> Nina
> > >>>
> > >>> Michael Michalis <> wrote:
> > >>>
> > >>>
> > >>>> Hi all,
> > >>>>
> > >>>> I've put together an initial document describing performance
> > >>>> testing. I would greatly appreciate any comments or suggestions on
> > >>>> the document.
> > >>>>
> > >>>> Best Regards,
> > >>>>
> > >>>> Michalis Michael
> > >>>
> > >> --
> > >> Andreas Hanemann,
> > >>
> > >> Boltzmannstrasse 1, 85748 Garching, Germany
> > >> Telefon: +49 89 35831-8712
> > >> Fax: +49 89 35831-9700
> > >
> >
>
>