
shibboleth-dev - RE: Help with HA Shib


RE: Help with HA Shib


Chronological Thread 
  • From: "Wilcox, Mark" <>
  • To: <>, <>
  • Subject: RE: Help with HA Shib
  • Date: Wed, 24 Aug 2005 21:51:03 -0400

Silly question - what exactly is driving the use case for HA Shib?  And how will you know if you have succeeded?
 
For example - do you actually need to replicate state across the nodes in the HA cluster?  If you can avoid state replication, then you don't need this multicast stuff and then clustering becomes a lot easier and more reliable IMHO.
 
Mark

From: Scott Cantor [mailto:]
Sent: Wed 8/24/2005 9:46 PM
To:
Subject: RE: Help with HA Shib

> I need some advice on something.  The current configuration file for
> the intra-node replication mechanism is a bear, and I'm unsure how to
> make it easier.

It's multicast, so like the Tomcat cluster stuff (which doesn't work
reliably, BTW; I hope this does ;-) it can't really get much better.

> One thought I had was to put an element for each bit of
> configuration data in the idp.xml NameMapping element (which I'd then
> read and use in the code), but I'm not sure I like that; I think it
> might be too confusing or cluttered.

I think it depends, because what you really have is a shared configuration
between the NameMapper and ArtifactMapper plugins for handling the
multicast, plus specific configuration that applies to each.

If this were something supported in idp.xml itself, you'd want to create a
separate element for the cluster config, and then reference it (or maybe
not, since there'd only be one) from the plugin elements. Without that
top-level support, I don't know how you could really do it, but at the very
least I'd expect the cluster config to live in a separate file, since it
would be shared.
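As a rough sketch of the top-level approach described above (every element and attribute name below is hypothetical, invented purely for illustration; this is not actual idp.xml schema):

```xml
<!-- Hypothetical: one shared cluster definition at the top level of idp.xml -->
<ClusterConfig id="haCluster"
               transport="multicast"
               mcastAddr="230.0.0.4"
               mcastPort="45566"/>

<!-- Each plugin then just points at the shared definition and adds
     its own plugin-specific settings -->
<NameMapping cluster="haCluster" handleTTL="1800"/>
<ArtifactMapper cluster="haCluster"/>
```

The design point is that the transport details are written once and referenced, rather than duplicated in each plugin element.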

> Anyways, please take a look at the
> "Configuring HA Shib" section in the following document, and give me some
> feedback on what you'd like to see if you were deploying this at your
> university.

What does the eviction timeout mean, exactly? Is this the actual TTL for the
mappings themselves in memory? If so, your example for handles seems low. It
should match whatever handleTTL is set to, right?
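For illustration, the relationship being asked about might look like this (handleTTL is the setting named above; the eviction attribute name is an assumption modeled on JBoss Cache-style eviction config, not confirmed by this thread):

```xml
<!-- idp.xml: handles are valid for 30 minutes -->
<NameMapping handleTTL="1800"/>

<!-- cluster cache config: evict replicated handle entries on the
     same schedule, so a handle never outlives its cache entry -->
<eviction timeToLiveSeconds="1800"/>
```

If the cache evicted entries sooner than handleTTL, a still-valid handle presented by an SP could miss in every node's cache.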

Also a JBoss question... if I'm using TCP, I assume the clustering system
handles nodes that go down and then reinstates them when they reappear? In
other words, do I need to reconfigure everything to take a machine down for
maintenance, or can I just let it drop out and then reappear?
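For context: assuming the JBoss clustering layer here sits on a JGroups TCP stack (a guess, not stated in the thread), discovery is typically static via TCPPING's initial_hosts list, and a node named in that list can drop out and rejoin without the other members being reconfigured; failure detection removes it from the group view and group membership re-admits it when it reconnects. A sketch of such a stack (hostnames are placeholders):

```xml
<config>
    <TCP bind_port="7800"/>
    <!-- Static member list: nodes may come and go; the list itself
         only changes when a machine is permanently added or removed -->
    <TCPPING initial_hosts="idp1.example.org[7800],idp2.example.org[7800]"
             port_range="1" timeout="3000" num_initial_members="2"/>
    <!-- Failure detection drops dead nodes from the view -->
    <FD timeout="2500" max_tries="5"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <pbcast.NAKACK retransmit_timeout="600,1200,2400,4800"/>
    <pbcast.STABLE desired_avg_gossip="20000"/>
    <!-- Group membership re-admits a node when it reconnects -->
    <pbcast.GMS join_timeout="5000" print_local_addr="true"/>
</config>
```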

-- Scott




Archive powered by MHonArc 2.6.16.