
perfsonar-user - Re: [perfsonar-user] Multiple meshes with unique testspec's in maddash

Subject: perfSONAR User Q&A and Other Discussion

  • From: Aaron Brown <>
  • To: Chad Kotil <>
  • Cc: Andrew Lake <>, "" <>
  • Subject: Re: [perfsonar-user] Multiple meshes with unique testspec's in maddash
  • Date: Fri, 13 Jun 2014 16:44:19 +0000
  • Accept-language: en-US

Hey Chad,

Looking through the generator, it doesn’t take the test parameters into
account, just that there are tests of that type between the two hosts (on the
theory that folks would have one test of each type between two hosts). The
best bet is probably to file an issue in the tracker requesting a “strict”
generation mode that includes the test parameters, so it can differentiate
those kinds of tests.

Cheers,
Aaron

On Jun 12, 2014, at 1:29 PM, Chad Kotil
<>
wrote:

> Hi Andy and Aaron,
> I'm still unable to get maddash to display the correct data from both of
> my meshes that use unique testspecs with different parameters.
>
> I've generated a mesh config json file from the example here:
> /opt/perfsonar_ps/mesh_config/doc/example.conf and then ran build_json. This
> worked really well. From this I generated a new maddash.yaml using
> generate_gui_configuration and even an owmesh.conf with
> generate_configuration on an agent node. However, I'm hitting the same
> issue. Without knowledge of the testspec_id, the data that maddash
> displays is not correct. owmesh.conf knows about the different
> testspecs, as defined here (in my Config::General format config):
>
> <test_spec bwctl_1h_tcp_core>
>     type                     perfsonarbuoy/bwctl  # Perform a bwctl test (i.e. achievable bandwidth)
>     tool                     bwctl/iperf          # Use 'iperf' to do the bandwidth test
>     protocol                 tcp                  # Run a TCP bandwidth test
>     interval                 3600                 # Run the test every hour
>     duration                 30                   # Perform a 30 second test
>     random_start_percentage  25
>     report_interval          2
> </test_spec>
>
> <test_spec bwctl_1h_udp_core>
>     # Define a test spec for testing UDP bandwidth once every hour
>     type                     perfsonarbuoy/bwctl
>     tool                     bwctl/iperf
>     protocol                 udp        # Run a UDP bandwidth test
>     interval                 3600       # Run the test every hour
>     duration                 30         # Perform a 30 second test
>     udp_bandwidth            800000000  # Perform an 800Mbps test
>     buffer_length            35896
>     window_size              64m
>     random_start_percentage  25
>     report_interval          1
> </test_spec>
>
> and another testspec for UDP; notice the different parameters.
> owmesh.conf has these same params and testspec IDs, so collecting the data
> is OK.
>
> <test_spec bwctl_1h_udp_edge>
>     # Define a test spec for testing UDP bandwidth once every hour
>     type                     perfsonarbuoy/bwctl
>     tool                     bwctl/iperf
>     protocol                 udp        # Run a UDP bandwidth test
>     interval                 3600       # Run the test every hour
>     duration                 30         # Perform a 30 second test
>     udp_bandwidth            200000000  # Perform a 200Mbps test
>     buffer_length            35896
>     window_size              64m
>     random_start_percentage  25
>     report_interval          1
> </test_spec>
>
>
> However, in maddash.yaml for each of the meshes I've defined, the
> service check commands and the graphUrl commands are the same. This
> causes unexpected results in the data that is returned: you never know
> which testspec you're going to get. (I also didn't set my thresholds
> right, but that's another issue I'll clean up later.)
>
> id: Bandwidth_Mesh-CORE_UDP_-_Throughput
> name: Throughput
> ok_description: Throughput >= 900Mbps
> params:
>   command: '/opt/perfsonar_ps/nagios/bin/check_throughput.pl -u %maUrl -w 0.9: -c 0.5: -r 86400 -s %row -d %col'
>   graphUrl: http://localhost/serviceTest/bandwidthGraph.cgi?url=%maUrl&dst=%col&src=%row&length=2592000
>
> id: Bandwidth_Mesh_-EDGE_UDP_-_Throughput
> name: Throughput
> ok_description: Throughput >= 900Mbps
> params:
>   command: '/opt/perfsonar_ps/nagios/bin/check_throughput.pl -u %maUrl -w 0.9: -c 0.5: -r 86400 -s %row -d %col'
>   graphUrl: http://localhost/serviceTest/bandwidthGraph.cgi?url=%maUrl&dst=%col&src=%row&length=2592000
>
> If the issue still is not clear, look at my groups; I think this really
> helps to explain it. Notice the overlap: nodes d and e exist in both
> meshes, but the UDP parameters are different. Core has a UDP bandwidth
> of 800M and edge has 200M.
>
> <group bwctl_udp_core>
> type mesh
> member bwctl.a.net
> member bwctl.b.net
> member bwctl.c.net
> member bwctl.d.net
> member bwctl.e.net
> </group>
> <group bwctl__udp_edge>
> type mesh
> member bwctl.d.net
> member bwctl.e.net
> member bwctl.f.net
> member bwctl.g.net
> </group>
>
> Then the test definitions:
>
> <test>
> description Bandwidth CORE UDP
> group bwctl_udp_core
> test_spec bwctl_1h_udp_core
> </test>
>
> <test>
> description Bandwidth EDGE UDP
> group bwctl__udp_edge
> test_spec bwctl_1h_udp_edge
> </test>
>
> Any ideas how I might get this to work? Do we need to make some feature
> requests for maddash and the config gen scripts?
>
> Let me know if you need more info.
>
> Thanks,
> --Chad
>
>
> On 6/11/14, 12:30 PM, Andrew Lake wrote:
>> Hi Chad,
>>
>> Just to supplement what Aaron sent, this document also has some details on
>> specifically using MaDDash with the mesh configuration software:
>> http://code.google.com/p/perfsonar-ps/wiki/MaDDashInstall#Advanced_Topic:_Using_the_perfSONAR_Mesh_Configuration_Software
>>
>> One thing we will likely need to add is some smarts to the mesh config to
>> add the "-p" option to check_throughput.pl so it only grabs UDP tests,
>> but that's an easy change. We may also need to add an option to filter
>> based on the UDP bandwidth rate, but that is also a relatively trivial
>> addition. I think as long as we have a fine-grained enough set of filters,
>> you can get away without explicitly setting a test ID.
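>>
>> As a rough sketch, the generated check command with "-p" appended might
>> look like this (the option placement is illustrative, not the final form):
>>
>> ```yaml
>> params:
>>   # Same check as before, but filtered to UDP results only via -p
>>   command: '/opt/perfsonar_ps/nagios/bin/check_throughput.pl -u %maUrl -w 0.9: -c 0.5: -r 86400 -s %row -d %col -p udp'
>> ```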
>>
>> Thanks,
>> Andy
>>
>>
>>
>> On Jun 11, 2014, at 11:23 AM, Chad Kotil
>> <>
>> wrote:
>>
>>> OK Great. I've actually used this before, but never in creating my own.
>>> I will give it a shot and let you know how it goes.
>>> It does look like I can define multiple meshes with different parameters
>>> for defining UDP tests.
>>>
>>> Thanks,
>>> --Chad
>>>
>>>
>>> On 6/11/14, 11:20 AM, Aaron Brown wrote:
>>>> Hey Chad,
>>>>
>>>> The plan is to replace the owmesh configuration with the mesh
>>>> configuration stuff. You can get a (broad) overview of it at
>>>> https://code.google.com/p/perfsonar-ps/wiki/MeshConfiguration along with
>>>> a pointer to an example configuration. If there’s some property in your
>>>> current meshes that isn’t easily translatable into the new mesh
>>>> configuration, we may be able to make some changes to better support it.
>>>>
>>>> Cheers,
>>>> Aaron
>>>>
>>>> On Jun 11, 2014, at 10:47 AM, Chad Kotil
>>>> <>
>>>> wrote:
>>>>
>>>>> Ah, good to know about the fate of owmesh.conf.
>>>>>
>>>>> The reason we have multiple meshes is mainly the UDP tests, so that
>>>>> we can define slower-speed UDP tests for some hosts. Some of our
>>>>> hosts have a fast backbone connection and others are very slow. To avoid
>>>>> causing congestion on the backbones we chose to create these separate
>>>>> meshes with new testspecs. How can I accomplish this without the use of
>>>>> owmesh.conf and unique testspecs?
>>>>>
>>>>> Another complicating issue I'm seeing in maddash with my current setup is
>>>>> that there is overlap where a host exists in multiple meshes. Since
>>>>> maddash and the checks are unaware of the testspec_id, data can now come
>>>>> from any of the meshes.
>>>>>
>>>>> --Chad
>>>>>
>>>>>
>>>>>
>>>>> On 6/11/14, 9:49 AM, Andrew Lake wrote:
>>>>>> Hi Chad,
>>>>>>
>>>>>> There are no plans to add testspec_id to maddash because the
>>>>>> owmesh.conf file (and thus the spec id) is going away completely in
>>>>>> 3.4. You can already define multiple maddash "checks" that filter on
>>>>>> different test parameters, though. For example, for throughput you
>>>>>> could have two grids with the exact same hosts, but one is for UDP
>>>>>> tests and another is for TCP tests. In the maddash check definitions
>>>>>> you'd just need to define a "command" option that has "-p tcp" in
>>>>>> one and "-p udp" in the other. Likewise you would need to update the
>>>>>> graph URL with whatever extra options it needs to grab the different
>>>>>> results (iirc, bandwidthGraph.cgi has a "protocol=udp/tcp" GET
>>>>>> parameter you could set in this example). I think another way to ask
>>>>>> the question is: what types of test parameters do you want to
>>>>>> distinguish between tests, and are they all supported as parameters
>>>>>> to the graphs and underlying tools?
>>>>>>
>>>>>> Thanks,
>>>>>> Andy
>>>>>>
>>>>>>
>>>>>> On Jun 11, 2014, at 9:22 AM, Chad Kotil
>>>>>> <>
>>>>>> wrote:
>>>>>>
>>>>>>> Hello perfsonar-users,
>>>>>>> Is there a way to add multiple meshes to maddash with hosts that are a
>>>>>>> member of multiple testspecs?
>>>>>>>
>>>>>>> I am trying to have maddash display multiple meshes which rely on
>>>>>>> different testspecs as defined in my owmesh.conf. It doesn't look like
>>>>>>> you can specify a testspec_id in maddash right out of the box, however.
>>>>>>> The old bwplot.cgi handles this case just fine, as it accepts a
>>>>>>> testspec via the name parameter.
>>>>>>>
>>>>>>> I've done some digging and found that the service check command, the
>>>>>>> graphUrl, and metaDataKeyLookup in maddash.yaml do not seem to accept a
>>>>>>> testspec_id. However, I was able to modify the service check command,
>>>>>>> check_throughput.pl, to accept a testspec and modified the query to
>>>>>>> match on the testspec, so that part does seem to be working. But
>>>>>>> without metaKeyReq.cgi and delayGraph.cgi accepting a testspec_id I
>>>>>>> think I am stuck.
>>>>>>>
>>>>>>> The only thing I can even think to do is use a unique MA, but that is
>>>>>>> not ideal.
>>>>>>>
>>>>>>> Are there any other options available for what I am trying to do? And
>>>>>>> are there any plans to add support for multiple meshes and testspecs
>>>>>>> into maddash?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> --Chad
>>>>>>>
>>>>>
>>>
>



