
[perfsonar-user] Fwd: perfSONAR renewal and Helix Nebula


  • From: Bas Kreukniet <>
  • To:
  • Cc:
  • Subject: [perfsonar-user] Fwd: perfSONAR renewal and Helix Nebula
  • Date: Fri, 16 Mar 2018 15:02:54 +0100


Hi,

We have purchased a machine for various performance tests. We also plan to set up VMs on this box for perfSONAR in LHC and for Helix Nebula.
However, it is an IBM Power 8 with a ppc64 architecture. Would it be possible to run perfSONAR on it?

Kind regards,
Bas 




Begin forwarded message:

From: Marian Babik <>
Subject: Re: perfSONAR renewal and Helix Nebula
Date: 16 March 2018 at 14:43:28 CET
To: "" <>
Cc: "" <>, "" <>, "wlcg-perfsonar-support (WLCG perfSONAR support mailing list)" <>

Hi Bas,
As I mentioned before, I don’t think ppc64 is supported, and you would need to check with the developers at

Supporting a new architecture requires a major effort, so while it could make a lot of sense, I think it won’t be easy to convince them. Anyway, I wonder how you plan to run the other software? I would guess that most of the existing grid middleware has no ppc64 support (I also wonder whether basic tools such as iperf3, nuttcp, etc. support ppc64).
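
If you want a quick first check, a minimal Python 3 sketch along these lines (the tool list is only illustrative) would report the architecture and which measurement tools are already installed:

#!/usr/bin/env python3
"""Report the CPU architecture and which measurement tools are installed.
A quick pre-check before committing to a ppc64 deployment."""

import platform
import shutil

# Illustrative list of tools that perfSONAR-style testing relies on.
TOOLS = ["iperf3", "nuttcp", "owping", "ping", "traceroute"]

arch = platform.machine()  # e.g. 'x86_64', 'ppc64', 'ppc64le'
print(f"architecture: {arch}")
if arch not in ("x86_64", "i386", "i686"):
    print("note: not one of the x86 32/64-bit architectures in the install docs")

for tool in TOOLS:
    path = shutil.which(tool)
    print(f"{tool:12s} {path if path else 'not found'}")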

— Marian

On Mar 16, 2018, at 2:21 PM, Bas Kreukniet <> wrote:




Hi,

In your installation guides there are references to the x86 32/64-bit architectures:
https://docs.perfsonar.net/install_hardware_details.html
However, the IBM Power8 has ppc64 (64-bit). In theory this could be ideal for perfSONAR-like services, but is this supported (yet)?
Thanks,
Bas


On 29 Nov 2017, at 15:57, Marian Babik <> wrote:

Hi Bas,
we can run both latency and bandwidth on a single bare-metal box with two NICs for sure; we already have nodes like this running. However, I have no experience running perfSONAR on Power systems. I understand this will come with a Power processor, right? So we should probably ask in the perfSONAR community whether someone has tried that. There is no longer a special kernel built for perfSONAR, so if you can run CentOS 7 with the stock kernel it should work fine, but you might be the first one to try (especially with a 100G NIC; I would probably opt for 2x10G + 100G just in case). We can for sure re-use the system for both WLCG and HNSciCloud, and you can of course run tests outside of LHC as long as you can ensure that the tests don’t impact each other (if you run them with perfSONAR then it’s fine, as the scheduler will take care of that; if you have a special setup then you’ll need to take that into account).
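
As a side note, once CentOS 7 is on the box, something like this minimal Python 3 sketch (Linux-specific; interface names will differ on your system) can confirm which NICs are up and at what speed before splitting bandwidth and latency across them:

#!/usr/bin/env python3
"""List network interfaces and their link speeds, to confirm the
bandwidth/latency NIC split before installing perfSONAR."""

import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    if iface == "lo":
        continue  # skip loopback
    try:
        with open(os.path.join(SYS_NET, iface, "speed")) as f:
            speed = f.read().strip()  # link speed in Mb/s; -1 if link is down
    except OSError:
        speed = "unknown"  # some drivers refuse to report without carrier
    print(f"{iface}: {speed} Mb/s")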

Kind regards,
Marian


On Nov 29, 2017, at 3:38 PM, Bas Kreukniet <> wrote:

Hi Marian,

We are quite sure we will purchase the IBM Power 8 for NL-T1's perfSONAR renewal. That will be this year.

Still, a few questions remain:
- Can you run multiple perfSONAR instances on the same device? We prefer to deploy only this machine for both perfSONAR bandwidth and latency. For bandwidth we would prefer a 100G interface and for latency a 10G.
This system should also be used for Helix Nebula tests.

We would also like to use the box for various other performance tests, outside LHC / Grid networks too.

I hope all this is possible and suits you as well.

Kind regards,
Bas





On 25 Oct 2017, at 15:17, Marian Babik <> wrote:

Hi Bas,
there is a WLCG Network Throughput WG which coordinates the deployment of perfSONARs in WLCG (https://twiki.cern.ch/twiki/bin/view/LCG/NetworkTransferMetrics). The same servers are re-used for different projects, which currently means LHCOPN/LHCONE, the LHC experiments, Belle II, and recently HNSciCloud. The installation/configuration of the servers for WLCG purposes is documented at:
https://opensciencegrid.github.io/networking/ (we’re currently updating it so it’s a work in progress and some sections might already be outdated)

We just gave an update on the WG activities at HEPiX last week (https://indico.cern.ch/event/637013/contributions/2739243/), which summarises our core activities.

Concerning configuration, it works as you described: for the SARA nodes there are mesh config URLs which, once configured on your nodes, will fetch all central configurations and run them alongside whatever you have configured locally (if anything). This is not new and has been in place for quite some time. You can also get access to the central web configuration interface in case you’d like to manage some particular mesh.
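
If you are curious what a given mesh config URL will push to your nodes, a rough Python 3 sketch like the one below can fetch and summarise it (the URL is a placeholder, and it only prints the top-level structure rather than assuming a fixed schema):

#!/usr/bin/env python3
"""Fetch a mesh configuration URL and summarise what it defines.
The URL is a placeholder; substitute the one provided for your mesh."""

import json
import urllib.request

MESH_URL = "https://example.org/meshconfig/wlcg-mesh.json"  # placeholder

with urllib.request.urlopen(MESH_URL, timeout=30) as resp:
    mesh = json.load(resp)

# Print top-level keys and their sizes without assuming a fixed schema.
for key, value in mesh.items():
    size = f" ({len(value)} entries)" if isinstance(value, (list, dict)) else ""
    print(f"{key}: {type(value).__name__}{size}")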

In case you’re planning to upgrade the boxes, please note that it’s now possible to run both the bandwidth and latency node on one box, provided that it has two NICs (for bandwidth a 10G/40G NIC is recommended, but 100G should work as well, though I think only ESnet has some boxes that you can test to and some tuning might be needed to get it to full speed; for latency 1G is sufficient, as that’s low-bandwidth testing). Usually the bandwidth node is not very busy, as the test schedule is quite relaxed, but it does need RAM corresponding to its NIC; the latency node, on the other hand, runs small tests continuously, so it needs good I/O to disk and some CPU. We now collect and store all results centrally, so there is no need for a lot of space on the box itself (an SSD might be preferable to a large HDD). The box you’re proposing looks very good to me. Also, as the perfSONAR team plans to drop support for SL6/CentOS 6 in Q1 2018, we recommend installing CentOS 7 on all new boxes.
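
To give a feel for why the RAM should match the NIC and why 100G may need tuning: the bandwidth-delay product sets how much data each TCP stream keeps in flight. A small Python 3 sketch with illustrative (not measured) rates and RTTs:

#!/usr/bin/env python3
"""Bandwidth-delay product: rough TCP buffer sizing per stream.
The rates and RTTs below are illustrative, not measured values."""

def bdp_bytes(rate_gbps: float, rtt_ms: float) -> float:
    """Bytes in flight needed to keep a rate_gbps pipe full at rtt_ms RTT."""
    return rate_gbps * 1e9 / 8 * (rtt_ms / 1e3)

for rate, rtt in [(10, 20), (100, 20), (100, 150)]:
    mib = bdp_bytes(rate, rtt) / 2**20
    print(f"{rate:3d} Gbit/s at {rtt:3d} ms RTT -> ~{mib:,.0f} MiB per stream")

At 100 Gbit/s and intercontinental RTTs this is already gigabytes of buffer across a few parallel streams, which is why the memory has to scale with the NIC.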

If you have any questions/comments please let us know, either by mailing me and/or Shawn (in CC) directly or by contacting

Thanks,
Marian

On Oct 25, 2017, at 3:01 PM, Bas Kreukniet <> wrote:


Hi Marian,

Joao refers me to you for the perfSONAR service at large.

We (at NL-T1) are about to replace our current systems for bandwidth and latency measurements in LHC networks.

I have two inquiries:
1) We buy the hardware, build up the system (probably CentOS), and build the perfSONAR services.
We stay responsible for the hardware and the OS; CERN takes over the perfSONAR entirely and makes the config, MAs, etc. for LHC and Helix Nebula? Or how does this work?
Is one instance of perfSONAR enough for multiple projects (LHCOPN, Helix Nebula)?

2) What kind of hardware is commonly used? For bandwidth we would like a box with 100G connections, able to run maximum-performance tests.
We are planning to buy an IBM S822L server for bandwidth measurements.
It has 2x 4.15 GHz (8-core) CPUs and 128 GB RAM, with hardware RAID 1 on 600 GB disks. Network cards: a dual-port Mellanox ConnectX-4, a Chelsio T62100-LP-CR, and an Intel XL710-QDA2. Our colleagues at Nikhef have very positive experiences with this box.

What system type would you recommend for latency measurements?

Kind regards,

Bas
Network Specialist SURFsara



On 25 Oct 2017, at 14:42, Joao Fernandes <> wrote:

Hi Bas, All,

My role with perfSONAR is at the technical coordination level across commercial cloud providers and CERN/other RIs participating in the HNSciCloud project. The scope is limited to network tests between commercial providers and the participating institutes.
The CERN service manager for perfSONAR for the LHC at large is Marian Babik (cc’ed on this email). He should be able to help with this.
Best,
João

On 25 Oct 2017, at 14:30, Bas Kreukniet <> wrote:



Hi Joao,

We are soon to replace our perfSONAR equipment.
At the moment we use one system for bandwidth and one for latency measurements in the LHC networks.

I understand you are the contact for Helix Nebula. How does it work? We build a new perfSONAR server, and CERN takes it over?
Are there multiple instances of perfSONAR, or can the same server be used for a number of projects?

I was looking on the perfsonar.net website but could not easily find answers to these questions.

Kind regards,
Bas
SURFsara












