- From: stanislav shalunov <>
- To:
- Subject: Re: [transport] Re: some comments on the design document
- Date: 13 Dec 2004 14:43:53 -0500
[Buffer memory part snipped.]
"Injong Rhee"
<>
writes:
> I suppose that it is hard to predict the network delays (even base
> RTT delays) at the time of router deployment since there are many
> other Internet paths that might be connecting to that high speed
> routers, so the base RTT will be likely increasing in the future.
Why should the base RTT increase? The Earth stays the same size, and
the speed of light stays constant. The inefficiency coefficient
(ratio of real path to great circle) rarely exceeds 2 on non-broken
networks. (I can trade good stories about horrific inefficiency, with
packets destined to the same city visiting another continent, but this
is brokenness, and there's less of it now than there used to be.)
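For a sense of scale, the speed-of-light argument can be worked out directly. A back-of-envelope sketch (the constants are approximations I'm supplying, not figures from the message):

```python
# Upper bound on base RTT between any two points on Earth, assuming
# a worst-case great-circle distance of half the circumference, an
# inefficiency coefficient of 2, and light at ~2/3 c in fiber.
EARTH_HALF_CIRCUMFERENCE_KM = 20_000
INEFFICIENCY = 2.0                  # ratio of real path to great circle
SPEED_IN_FIBER_KM_S = 200_000       # ~2/3 of the speed of light

path_km = EARTH_HALF_CIRCUMFERENCE_KM * INEFFICIENCY
rtt_s = 2 * path_km / SPEED_IN_FIBER_KM_S
print(f"worst-case base RTT ~ {rtt_s:.1f} s")  # roughly 0.4 s
```

So even with a badly inefficient path, the base RTT is bounded by a constant that physics will not move.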
> So it is a little difficult to imagine that each router (at the time
> of deployment) is set its buffer space to the worst case delays
> (what is the worst case then?).
Rules of thumb are used. Most networks are run with the largest
buffer setting the routers support (this includes Abilene). Some have
policies like 150ms or 250ms.
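Those rule-of-thumb figures translate into buffer memory via the bandwidth-delay product. A sketch, with a 10-Gb/s link speed assumed purely for illustration:

```python
# Bandwidth-delay-product rule of thumb for router buffer sizing:
# one RTT's worth of bits, converted to bytes.
def buffer_bytes(link_bps: float, rtt_s: float) -> float:
    """One bandwidth-delay product of buffering, in bytes."""
    return link_bps * rtt_s / 8

for rtt_ms in (150, 250):
    mb = buffer_bytes(10e9, rtt_ms / 1000) / 1e6
    print(f"10 Gb/s x {rtt_ms} ms -> {mb:.1f} MB of buffer")
```

A 250-ms policy on a 10-Gb/s interface works out to about 312 MB per port, which is why "largest setting the routers support" is so common a policy.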
> Yes you're right. So are you referring to the "tolerance" of
> loss-based ones to be close to 1/sqrt(p)?
Suppose a network loses 10% of the packets because of a bad cable.
What's the throughput you should get? The TCP answer is ``that's a
broken network, I won't run on it.'' So far so good. (A naive answer
might be 90%; try explaining to a user why not. Indeed, pure
delay-based control would get close to 90% throughput and close to 80%
goodput.) How about 1%? TCP's answer is ``not much.'' That is still
tolerable. How about 0.01%? That should be tolerated (naively
speaking), shouldn't it? But no: TCP still limits you to only 12 Mb/s
(at 100-ms RTT and 1500-B MTU). Even going to a 1/p protocol still
won't let you soak a 10-Gb/s link (you'd get 1.2 Gb/s).
> Not all loss based ones have 1/sqrt(p) response functions.
Of course. And going from 1/p^{0.5} for Reno to 1/p^{0.82} for
HighSpeed to 1/p for MIMDy protocols is progress. But it seems to end
at 1/p. And that, I believe, is not enough even for today's networks.
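The progression can be made concrete by plugging the exponents into the simplified response function rate = (MSS/RTT) * p^(-e), with the multiplicative constant taken as 1 (an assumption for the sketch; real response functions carry a constant near 1):

```python
# Throughput vs. response-function exponent at a fixed loss rate:
# MSS 1500 B, RTT 100 ms, loss rate p = 1e-4 (0.01%).
MSS_BITS, RTT, P = 1500 * 8, 0.1, 1e-4

for name, e in (("Reno", 0.5), ("HighSpeed", 0.82), ("1/p", 1.0)):
    rate = (MSS_BITS / RTT) * P ** (-e)
    print(f"{name:9s} 1/p^{e:<4} -> {rate / 1e6:8.1f} Mb/s")
```

This reproduces the figures above: Reno at about 12 Mb/s, and even the 1/p protocol at about 1.2 Gb/s, well short of a 10-Gb/s link.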
People who buy circuits with grant money instead of supporting the
packet-switched network implicitly agree. We need to convince them
that the advantages of statistical multiplexing and self-contained
packets are worth it. We've been telling them about the need to get
their non-congestive loss down to 0.0000000001% to run their 10-Gb/s
streams (assuming 1/sqrt(p) protocol, MTU of 1500B, and RTT of 100ms)
for too long. Would replacing the message with a new non-congestive
loss bound of 0.001% (changing the protocol to 1/p) do it?
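To see where such bounds come from, one can invert the same simplified response function (constant again taken as 1, my assumption) and ask what loss rate a 10-Gb/s stream can tolerate:

```python
# Loss rate that still sustains 10 Gb/s at 100-ms RTT and 1500-B MTU:
# rate = (MSS/RTT) * p**(-e)  =>  p = (MSS / (RTT * rate)) ** (1/e)
MSS_BITS, RTT, TARGET_BPS = 1500 * 8, 0.1, 10e9

def max_loss(e: float) -> float:
    """Largest loss fraction p at which the target rate is reachable."""
    return (MSS_BITS / (RTT * TARGET_BPS)) ** (1 / e)

print(f"1/sqrt(p) protocol: p <= {max_loss(0.5):.2e}")  # ~1.4e-10
print(f"1/p protocol:       p <= {max_loss(1.0):.2e}")  # ~1.2e-05
```

With the 1/p exponent the tolerable loss fraction rises by five orders of magnitude, to roughly 0.001%, which is the relaxed bound proposed above.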
And then there's the issue of future networks.
--
Stanislav Shalunov http://www.internet2.edu/~shalunov/
Just my 0.086g of Ag.