
wg-multicast - Re: Need Advice on Multicasting Large File Data Sets

  • From: Bill Owens <>
  • To: Michael Laufer <>
  • Cc:
  • Subject: Re: Need Advice on Multicasting Large File Data Sets
  • Date: Thu, 21 Feb 2013 20:37:37 -0500

On Thu, Feb 21, 2013 at 08:06:38PM -0500, Michael Laufer wrote:
> I recently joined this working group and would like some advice.
>
> My organization is considering the possibility of using new methodologies
> to distribute near real time satellite weather data streams to
> international & domestic partners/users, probably via Internet2 and
> international peers. The previous generation of satellites would produce
> ~30 Gbytes/day of data products but the new generation produces 3-4
> Terabytes/day. Existing distribution methods will not economically scale,
> especially internationally. The data streams are all files and any packet
> loss would cause a file to become corrupted and unusable.

The "any packet loss" part is obviously an obstacle to using conventional
multicast, since it's UDP and there is a near-certainty that some packets
will be dropped. There's been work done on multicast file transfer going back
probably 20 years, so a literature search may turn up something that you can
just start using.
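
For context, here's a bare-bones sketch (Python, with a made-up group address
and port, and assuming the sender prepends a 4-byte sequence number to each
datagram) of what a plain multicast receiver sees - the socket API hands you
raw UDP datagrams and nothing else, so detecting and repairing loss is
entirely up to the application:

    import socket
    import struct

    GROUP = "239.1.2.3"   # hypothetical administratively-scoped group
    PORT = 5000           # hypothetical port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group on all interfaces; from here on we just get raw UDP.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    expected = 0
    while True:
        data, _ = sock.recvfrom(65535)
        seq = struct.unpack("!I", data[:4])[0]   # assumed sequence header
        if seq != expected:
            # A dropped datagram: the file this chunk belonged to is now
            # unusable unless something above UDP (FEC, NACK/retransmit)
            # repairs it.
            print(f"gap: expected {expected}, got {seq}")
        expected = seq + 1

The multicast file-transfer work mentioned above exists largely to fill
exactly that gap.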

> Security: UDP would make security easier. Must be able to permit/deny each
> separate partner/user request.

There's a big problem - multicast generally gives the sender no way to permit
or deny a given receiver. About the best you could do is encrypt the files and
only hand out keys to your friends, but that will require some fast
encrypt/decrypt capability to meet your throughput goals.
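
As a rough illustration of the encrypt-and-hand-out-keys idea, here's a sketch
using Python's "cryptography" package with a single symmetric key shared by
all authorized partners (key distribution itself - the hard part - is left
out, and all the names here are mine):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # One symmetric key shared by every authorized partner, handed out over
    # some out-of-band, authenticated channel; the multicast stream carries
    # only ciphertext, so an accidental or unwanted listener gets nothing.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    def encrypt_product(path):
        """Encrypt one data product before it is queued for sending."""
        nonce = os.urandom(12)            # 96-bit nonce, unique per file
        with open(path, "rb") as f:
            plaintext = f.read()
        return nonce + aesgcm.encrypt(nonce, plaintext, None)

    def decrypt_product(blob):
        """Receiver side: fails loudly if the key or the data is wrong."""
        nonce, ciphertext = blob[:12], blob[12:]
        return aesgcm.decrypt(nonce, ciphertext, None)

Hardware AES support in current CPUs should keep the per-file overhead
manageable; the bigger headache is that revoking one partner's access means
re-keying everyone else who shares that key.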

> Prefer separate sending system and any
> return/feedback system (for missed files/retransmission notification).
> Time frame: ~ Summer to start initial testing.

And there's another problem. Unless your institutions are already geared up
for high-bandwidth multicast, it will take some real effort to get there -
possibly including things like upgraded or replaced (or bypassed) firewalls,
upgraded routers, considerable education of network folks, etc. It's also
likely that you will run into other networks along the path that won't support
multicast, or won't be happy to carry multi-gigabit streams. It's an
unfortunate fact of life with multicast that an accidental join to such a
stream would be an effective DoS attack against many sites. Honestly, unless
you had an intentional receiver within my network, I'd take steps to make sure
your groups were blocked at the edge in order to avoid accidents.

I'd suggest looking at something else: either some form of peer-to-peer, or a
manually configured tier system with dedicated paths from your data source to
the first level, perhaps using AL2S, so that the first tier has enough
bandwidth to receive all of the products and distribute them as needed.
That's a stretch for a summer start too, of course, but perhaps more
achievable.
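
To put rough numbers on "enough bandwidth" (back-of-the-envelope only, and the
3x burst factor below is my guess rather than anything from your message):

    # Rough sizing for a first-tier site taking the full product stream.
    TB_PER_DAY = 4                    # upper end of the stated 3-4 TB/day
    SECONDS_PER_DAY = 86_400

    avg_bps = TB_PER_DAY * 1e12 * 8 / SECONDS_PER_DAY
    print(f"average rate: {avg_bps / 1e6:.0f} Mb/s")    # ~370 Mb/s sustained

    # Satellite products tend to arrive in bursts rather than a smooth
    # trickle; assuming the busiest periods run 3x the average:
    print(f"burst estimate: {3 * avg_bps / 1e9:.1f} Gb/s")   # ~1.1 Gb/s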

Bill.


