
RE: [transport] End-to-End Transmission Control by Modeling Uncertainty about the Network State


  • From: Narasimha Reddy <>
  • To: 'Larry Dunn' <>, Scott Brim <>
  • Cc: Transport WG <>
  • Subject: RE: [transport] End-to-End Transmission Control by Modeling Uncertainty about the Network State
  • Date: Tue, 20 Mar 2012 17:00:57 +0000
  • Accept-language: en-US

Larry, there is a potential solution to 2.2, the issue you raise about
delay-based protocols competing with TCP-like flows.

The key is to increase the window at a faster rate than TCP when the buffer
lengths are small. We have shown that this is feasible with PERT through
theory, simulations, and limited real-network testing. This strategy can lead
to lower packet loss rates for delay-based schemes even when they compete
with TCP. The insight is to pump in more packets when buffers are less full,
to compensate for backing off at higher buffer lengths.

Kiran Kotla and A. L. Narasimha Reddy, "Making a delay-based protocol adaptive
to heterogeneous environments," Proc. of IWQoS, June 2008.
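
To make the strategy above concrete, here is a minimal per-RTT window-update
sketch (Python) of the idea as described in this message: grow the window
faster than TCP when the estimated queuing delay is small, and back off early
(and probabilistically) as the queue builds. This is not the published PERT or
adaptive-PERT algorithm; the thresholds, gain, and back-off factor below are
illustrative assumptions only.

    import random

    # Illustrative constants (assumptions, not PERT's published parameters).
    LOW_DELAY = 0.005   # queuing delay below which buffers are "nearly empty" (s)
    HIGH_DELAY = 0.025  # queuing delay above which buffers are "getting full" (s)
    FAST_GAIN = 3.0     # per-RTT window increase (packets) when the queue is small
    BETA = 0.65         # multiplicative decrease factor on early back-off

    def update_cwnd(cwnd, rtt, base_rtt):
        """One per-RTT update of the congestion window (in packets)."""
        queuing_delay = max(rtt - base_rtt, 0.0)

        if queuing_delay <= LOW_DELAY:
            # Buffers nearly empty: increase faster than TCP's one packet per
            # RTT, reclaiming bandwidth given up by earlier early back-offs.
            return cwnd + FAST_GAIN
        if queuing_delay >= HIGH_DELAY:
            # Buffers filling: back off early, before packet loss occurs.
            return max(cwnd * BETA, 2.0)

        # In between: back off probabilistically, more often as delay grows;
        # otherwise behave like standard TCP (one packet per RTT).
        p = (queuing_delay - LOW_DELAY) / (HIGH_DELAY - LOW_DELAY)
        if random.random() < 0.05 * p:
            return max(cwnd * BETA, 2.0)
        return cwnd + 1.0

The point of the sketch is only the asymmetry: aggressive increase at low
queuing delay to compensate for the early back-off at high queuing delay.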

Regards,
Reddy

-----Original Message-----
From: [mailto:] On Behalf Of Larry Dunn
Sent: Tuesday, March 20, 2012 11:19 AM
To: Scott Brim; Larry Dunn
Cc: Transport WG
Subject: Re: [transport] End-to-End Transmission Control by Modeling Uncertainty about the Network State

Scott,

re: the paper in particular:
1. read it.
2. several issues that make me "not too enthusiastic";
a partial list:
2.1. probably OK as a clean-slate idea,
but not realistic for deployment in the real Internet, due to:
2.2. incremental deployment properties:
claims like "drains the buffer" - how? If there are competing
TCP flows, they will work to "fill the buffer",
and this scheme will suffer like most delay-based approaches, which
get "polite" when buffers increase and therefore get "crushed"
(a toy illustration of this dynamic follows the list below).
2.3. very high complexity; implementing on one node is interesting;
not sure I can grok the interaction of all the feedback loops
of tons of clients running such code and simulations simultaneously
2.4. assumptions about "the state" of the network:
"states" are trimmed when found to be unrealistic.
But the network is dynamic, right?
How do possible states get re-injected as valid states-to-consider?
Not sure that the control dynamics work out (timescales,
free variables, dynamics, time-to-sense-a-"state", etc.)
2.5. parameters of even this experiment are questionable:
bottleneck buffer <= 14 KBytes;
the network carries one 1500-byte packet/sec.
Will results achieved with this "network characterization"
be valid with more-realistic parameters? How do we know?
2.6. comments about "tracking all states" and "knowing *the* state"
worry me; it seems to take a lot of machinery to assert knowledge
about a (fatally) oversimplified version of reality?
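
For 2.2 above, a toy discrete-time model of the dynamic (Python): one
loss-based AIMD flow and one naive delay-based flow share a drop-tail
bottleneck; the loss-based flow keeps the buffer full, so the delay-based
flow keeps getting "polite" and ends up with a small share. The link
capacity, buffer size, and back-off rules are illustrative assumptions and
have nothing to do with the paper's experiments.

    CAPACITY = 100   # packets drained per round (assumed)
    BUFFER = 200     # bottleneck buffer in packets (assumed)
    ROUNDS = 2000

    loss_cwnd, delay_cwnd, queue = 10.0, 10.0, 0.0

    for _ in range(ROUNDS):
        queue = max(queue + loss_cwnd + delay_cwnd - CAPACITY, 0.0)
        overflow = queue > BUFFER
        queue = min(queue, BUFFER)

        # Loss-based flow: AIMD, reacts only when the buffer overflows.
        loss_cwnd = loss_cwnd / 2 if overflow else loss_cwnd + 1

        # Naive delay-based flow: backs off as soon as the queue builds,
        # i.e. it gets "polite" exactly while the TCP flow fills the buffer.
        if queue > 0.1 * BUFFER:
            delay_cwnd = max(delay_cwnd * 0.8, 1.0)
        else:
            delay_cwnd = delay_cwnd + 1

    print("loss-based share: ", loss_cwnd / (loss_cwnd + delay_cwnd))
    print("delay-based share:", delay_cwnd / (loss_cwnd + delay_cwnd))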


re: intermediaries
See previous discussions on incremental deployment.

For that class of algorithms that
"just need a little help from-the-network, or from-each-router-on-path",
like XCP, RCP, etc., their authors, as far as I recall, never really
painted a plausible incremental deployment picture
(both regarding the motivation for putting what their algorithm needs
into router hardware, and w.r.t. interaction with competing on-path
NewReno or similar flows).

So I resonate with your concerns about getting the right monitoring
locations, working in concert, etc.

Improving "performance" is straightforward; you can bias even delay-based
schemes to improve *their* perceived performance.
The tough part, it seems, has been to build an algo.
or scheme that:
works better for your idea,
has good incremental deployment properties, and is "fair enough" to existing
flows...

Larry
--


On Mar 20, 2012, at 8:54 AM, Scott Brim wrote:

> I'd appreciate opinions about
>
> http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final100.pdf
>
> Extensions will be discussed at ICCRG next week.
>
> Why do I care? It would seem that instead of improving on the
> endpoints' guesswork about the behavior of all segments of the network
> between them, we should just install intermediaries that _know_ about
> the behavior of particular segments -- Phoebus for example, right?
> However, it's not easy to get the right intermediaries at the right
> points all working together, especially when you're interconnecting
> different networks, e.g. going from Urbana to Lyon, so the e2e
> argument continues to be powerful.
>
> So what do you think about this paper particularly, and any other
> attempts to improve on performance without any help from intermediaries?
>
> Thanks ... Scott



