Re: [perfsonar-user] Strange performance results - AT&T VPLS circuit


  • From: Jared Schlemmer <>
  • To: "" <>
  • Subject: Re: [perfsonar-user] Strange performance results - AT&T VPLS circuit
  • Date: Mon, 31 Jul 2017 14:12:18 -0400

Thanks for the quick responses - the low latency leading to modest buffer
requirements makes a lot of sense. I’ll try to answer everyone’s questions
below:

- Both perf hosts are directly connected to the routers in Sunnyvale and
Monterey Bay by 1GE connections.
- The path that I have visibility into is Monterey PERFSONAR <—> Juniper MX
router <—> AT&T Ciena 3930 switch <—> AT&T “cloud” <—> AT&T Ciena 3930 switch
<—> Juniper MX router <—> Sunnyvale PERFSONAR. We manage both perf boxes and
both Juniper routers. We connect directly to the Ciena switches, which sit in
the same rack as our routers but are managed by AT&T.
- There are no errors on the interfaces at either location, although we do see
MTU output errors slowly incrementing on the Monterey interface facing
Sunnyvale. I mention this even though I think it's unrelated - the counter
increments very slowly, and I just ran a couple of tests out of Monterey and
the output MTU errors didn't increment at all. I suspect this is some kind of
broadcast traffic or something else related to these hosts being connected via
the VPLS cloud.
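
To put rough numbers on the "low latency means modest buffer" point above,
here is a quick bandwidth-delay product check. The RTTs are my assumptions,
not measured values: roughly ~2 ms Monterey <-> Sunnyvale and ~50 ms
Monterey <-> Chicago.

    # Back-of-the-envelope BDP check (assumed RTTs, not measured values)
    def bdp_bytes(rate_bps, rtt_sec):
        """Window needed to keep a path of this rate and RTT full."""
        return rate_bps * rtt_sec / 8

    GBIT = 1e9
    print(bdp_bytes(1 * GBIT, 0.002) / 1024)  # ~244 KB for the ~2 ms local path
    print(bdp_bytes(1 * GBIT, 0.050) / 1e6)   # ~6.25 MB for the ~50 ms Chicago path

So a ~500 KB window is more than enough to keep the short path full, while a
path like Monterey to Chicago needs a window in the 6 MB range - which is
roughly where the Cwnd column tops out in the test below.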

Here is a 1G test from Monterey to Chicago:

Connecting to host port 5840
[ 16] local port 37714 connected to port 5840
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[ 16]   0.00-1.00   sec  2.36 MBytes  0.02 Gbits/sec    0    386 KBytes
[ 16]   1.00-2.00   sec  28.6 MBytes  0.24 Gbits/sec    0   3.64 MBytes
[ 16]   2.00-3.00   sec   101 MBytes  0.85 Gbits/sec    0   7.62 MBytes
[ 16]   3.00-4.00   sec   106 MBytes  0.89 Gbits/sec   32   4.07 MBytes
[ 16]   4.00-5.00   sec  63.8 MBytes  0.53 Gbits/sec    0   4.09 MBytes
[ 16]   5.00-6.00   sec  65.0 MBytes  0.55 Gbits/sec    0   4.18 MBytes
[ 16]   6.00-7.00   sec  67.5 MBytes  0.57 Gbits/sec    0   4.42 MBytes
[ 16]   7.00-8.00   sec  72.5 MBytes  0.61 Gbits/sec    0   4.81 MBytes
[ 16]   8.00-9.00   sec  80.0 MBytes  0.67 Gbits/sec    0   5.34 MBytes
[ 16]   9.00-10.00  sec  88.8 MBytes  0.74 Gbits/sec    0   6.03 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[ 16]   0.00-10.00  sec   676 MBytes  0.57 Gbits/sec   32             sender
[ 16]   0.00-10.00  sec   666 MBytes  0.56 Gbits/sec                  receiver
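
As a sanity check on those numbers, single-stream TCP throughput is roughly
cwnd / RTT. Assuming again a ~50 ms RTT to Chicago (my estimate, not a
measured value):

    # Rough single-stream throughput estimate: cwnd / RTT
    rtt = 0.050                      # seconds; assumed Monterey <-> Chicago RTT
    for cwnd_mb in (4.0, 6.0):       # cwnd values seen mid-test and near the end
        gbps = cwnd_mb * 1e6 * 8 / rtt / 1e9
        print(f"cwnd {cwnd_mb:.1f} MB -> ~{gbps:.2f} Gbit/s")

That lines up with the ~0.5-0.7 Gbit/s seen after the retransmits in the
3-4 second interval cut the window back, and with the rate climbing again as
the window grows toward 6 MB. If it would help, a parallel-stream run (e.g.
iperf3 with -P 4) should fill the path even with each stream's window reduced.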

I checked interfaces along this path and bandwidth contention should not be
a factor. This test looks like what you would expect: throughput ramping up
as the window increases, then a period of retries and the window backing off.
Thanks again,

Jared Schlemmer
Network Engineer, GlobalNOC at Indiana University




> On Jul 31, 2017, at 1:53 PM, Matthew J Zekauskas
> <>
> wrote:
>
> Some thoughts...
>
> I wonder if you could also characterize what you see as "good"?
>
> I would posit that Monterey to Sunnyvale is relatively short, so the
> latency is relatively low, and TCP can recover relatively quickly, and
> maintain throughput in the face of modest loss. ~500K may well be
> sufficient buffer to keep this path filled.
>
> Are the endpoints 1GE connected? (so they would not be likely to overrun
> the connection in the middle).
> Could it be that there is existing traffic so you are congesting in one
> direction but not the other?
> Do you see any other indications of loss - errors or drops on interfaces?
>
> When you ask about "real world impact" -- are you talking about the
> tests themselves which will saturate the path and could adversely affect
> user performance, or the presence of some loss, which might affect user
> performance elsewhere, depending on the application and distance from
> the user?
>
> --Matt
>
>
> On 7/31/17 1:40 PM, Jared Schlemmer wrote:
>> We just turned up a new network endpoint that connects to an existing
>> aggregation site via a 1 Gb AT&T VPLS connection, and I'm seeing some
>> interesting performance results. The sites are Monterey Bay and Sunnyvale,
>> CA. Tests from Sunnyvale to Monterey Bay are good, but in the reverse
>> direction, Monterey Bay toward Sunnyvale, I see this:
>>
>> Connecting to host port 5332
>> [ 16] local port 58534 connected to port 5332
>> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
>> [ 16]   0.00-1.00   sec   110 MBytes  0.92 Gbits/sec    0   1.16 MBytes
>> [ 16]   1.00-2.00   sec   113 MBytes  0.95 Gbits/sec   64    553 KBytes
>> [ 16]   2.00-3.00   sec   111 MBytes  0.93 Gbits/sec   32    498 KBytes
>> [ 16]   3.00-4.00   sec   112 MBytes  0.94 Gbits/sec   32    434 KBytes
>> [ 16]   4.00-5.00   sec   112 MBytes  0.94 Gbits/sec   32    362 KBytes
>> [ 16]   5.00-6.00   sec   112 MBytes  0.94 Gbits/sec    0    669 KBytes
>> [ 16]   6.00-7.00   sec   112 MBytes  0.94 Gbits/sec   32    622 KBytes
>> [ 16]   7.00-8.00   sec   111 MBytes  0.93 Gbits/sec   32    574 KBytes
>> [ 16]   8.00-9.00   sec   112 MBytes  0.94 Gbits/sec   32    519 KBytes
>> [ 16]   9.00-10.00  sec   112 MBytes  0.94 Gbits/sec   32    458 KBytes
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [ ID] Interval           Transfer     Bitrate         Retr
>> [ 16]   0.00-10.00  sec  1.09 GBytes  0.94 Gbits/sec  288             sender
>> [ 16]   0.00-10.00  sec  1.09 GBytes  0.93 Gbits/sec                  receiver
>>
>> My questions are, a) how is it that we see retries and such a small window
>> size and yet still get near line-rate throughput, and b) what is the real
>> world impact of a test like this? Users at the Monterey site are reporting
>> wildly varying performance out to the internet.
>>
>> There are likely a lot of factors going on here, but I wanted to focus
>> just on the testing between these two sites through the AT&T cloud. Any
>> insights, theories or suggestions would be much appreciated. Thanks,
>>
>>
>> Jared Schlemmer
>> Network Engineer, GlobalNOC at Indiana University
>>
>>
>>
>>
>




