
Re: [perfsonar-user] PerfSONAR bug in IPv6 tests and visualization


  • From: Shawn McKee <>
  • To: Stefan Piperov <>
  • Cc: perfsonar-user <>
  • Subject: Re: [perfsonar-user] PerfSONAR bug in IPv6 tests and visualization
  • Date: Tue, 3 Oct 2017 16:29:37 -0400

Hi Stefan,

I don't think there is any problem.  You just have 51 measurement results and the default is to show 10 per page.

In the search box under "Test Results" on your toolkit instances, type 'cern' and you should see both IPv4 and IPv6 results.

Let me know if this doesn't seem to be working correctly.  Thanks,

Shawn

On Tue, Oct 3, 2017 at 4:09 PM, Stefan Piperov <> wrote:

Hi,
I am trying to establish a periodic IPv6 bandwidth test between Purdue University and CERN. I see a problem/bug: the graphical interface shows all tests as IPv4, although I have tests that are configured as IPv6-only and run between IPv6-capable hosts.
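For reference, an IPv6-only run can be pinned at the test-spec level. A minimal sketch, assuming pscheduler's standard `throughput` test schema; the hostnames here are placeholders, not the actual mesh endpoints:

```python
import json

# Sketch of a pscheduler "throughput" test spec pinned to IPv6.
# Both hostnames are placeholders for illustration only.
spec = {
    "schema": 1,
    "source": "perfsonar-cms2.example.edu",  # placeholder source host
    "dest": "perfsonar.example.ch",          # placeholder CERN-side host
    "ip-version": 6,                         # force IPv6; omit to let the tools choose
    "duration": "PT30S",                     # ISO 8601 duration
}
print(json.dumps({"type": "throughput", "spec": spec}, indent=2))
```

If the spec pins `ip-version` to 6 and the graphs still label the results IPv4, that would point at the reporting side rather than the measurement itself.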

It is not clear to me whether the problem is in the tests being performed, or in the way they are (mis)reported.

Any help will be appreciated.
Regards,
Stefan Piperov



---------- Forwarded message ----------
Date: Tue, 3 Oct 2017 22:02:01 +0200
From: Stefan Piperov <>
To: Shawn McKee <>
Subject: Re: PerfSONAR slides & problems - please investigate (fwd)


Dear Shawn,
James Letts advised me to forward this question to you.
Perhaps you can tell me more about the IPv6 measurements in PerfSONAR.

Thank you in advance!
- Stefan

---------- Forwarded message ----------
Date: Tue, 3 Oct 2017 01:27:33 +0200
From: JAMES LETTS <>
To: Stefan Piperov <>
Cc: cms-t2 <>
Subject: Re: PerfSONAR slides & problems - please investigate

Please ask Shawn McKee. His team would know.

Regards,

James

On Oct 2, 2017, at 2:27 PM, Stefan Piperov <> wrote:


Dear James, All,
I've been monitoring the new perfSONAR tests that the Purdue servers are now part of (thanks!), and keep wondering: is it just the reporting by perfSONAR that is broken, or the testing itself? On the results graphs all tests are listed as IPv4 - that does not seem normal.

Cheers,
Stefan

On Mon, 25 Sep 2017, Stefan Piperov wrote:


Thanks, James!

Yes - please add:
perfosnar-cms1.itns.purdue.edu (2001:18e8:804:7::80d3:8f03) to the latency mesh, and
perfosnar-cms2.itns.purdue.edu (2001:18e8:804:7::80d3:8f04) to the bandwidth mesh.

We are interested in testing against CERN and a few US-sites (for a baseline comparison).

Regards,
Stefan


On Mon, 25 Sep 2017, JAMES LETTS wrote:

Hi Stefan,
I got this reply from Shawn McKee:
Yes, we have a "Dual-stack" mesh which independently tests IPv4 and IPv6 for the hosts that are participating.
The challenge for this is that it won't scale well.  As more end-systems become dual-stacked we end up doubling the amount of testing.   The default is to just let the system choose which gets tested (two dual-stack hosts testing to each other will use IPv6 and otherwise use IPv4).
If you are interested in testing IPv6, we can see about getting you added to the dual-stack mesh. Just let me know.
Regards,
James
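The default selection Shawn describes (two dual-stack hosts test over IPv6; anything else falls back to IPv4) amounts to a simple policy. A minimal sketch, with hypothetical helper and argument names — the real choice is made inside the measurement tools:

```python
def pick_ip_version(src_has_v6: bool, dst_has_v6: bool) -> int:
    """Sketch of the default address-selection policy described above:
    if both endpoints are dual-stack, test over IPv6; otherwise IPv4.
    (Hypothetical helper; not part of any perfSONAR API.)"""
    return 6 if (src_has_v6 and dst_has_v6) else 4

# Two dual-stack hosts -> IPv6; any single-stack endpoint -> IPv4.
print(pick_ip_version(True, True))   # → 6
print(pick_ip_version(True, False))  # → 4
```

This is also why the dedicated dual-stack mesh exists: it forces both families to be measured independently instead of letting this default pick one.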
On Sep 22, 2017, at 6:22 AM, Stefan Piperov <> wrote:
Purdue is fixed now.
We had an IPv6 configuration error on the latency server.
I want to use the opportunity to repeat my question about IPv6 perfSONAR testing. Do we have an established mesh for that?
Regards,
Stefan
On Thu, 21 Sep 2017, JAMES LETTS wrote:
Hello,
I managed to get a new link for the perfSONAR monitoring data, and wrote some slides for the current status, attached here and also uploaded to indico for yesterday’s meeting.
For the bandwidth and traceroute tests, we are not getting data from Florida, MIT, and Fermilab.
For the latency test, we are not getting data from a completely disjoint set of sites: UCSD, SPRACE, and Purdue. Packet loss is generally worrying at Florida and Nebraska …
Please investigate and fix.
Thanks!
James




Archive powered by MHonArc 2.6.19.
