
shibboleth-dev - [Shib-Dev] replacing the storage service

  • From: Paul Hethmon <>
  • To: Shibboleth Dev <>
  • Subject: [Shib-Dev] replacing the storage service
  • Date: Fri, 22 Apr 2011 14:00:23 +0000
  • Accept-language: en-US

I'm working on a replacement for the EventingMapBasedStorageService in Shib 2.1.5. Based on some prior threads here, I know some folks have elected to use a filter to monitor for session events instead of replacing the storage service. My goal is to drop Terracotta and instead use a cluster-aware version of the storage service. It appears the major stumbling block for most folks is knowing when to replicate the data to other nodes, hence the filter approach.

In looking at my requirements, I'm thinking that hooking into the storage service will be enough, but I'm looking for feedback in case I'm missing something. My cluster requirements:

1. Hardware load balancer maintaining sticky sessions
2. Replication to other nodes accomplished within 5 minutes

Since I'm ok with maintaining sticky sessions, I don't have to achieve near-real-time data replication. What I want is simply to make sure the sessions exist on the other nodes if the user returns after the sticky timeout.

So the general algorithm I'm using is to push entries onto a FIFO queue whenever the storage service does a get, put, or remove. Any access will thus trigger replication of that entry, whether or not it actually changed. I'm planning on holding entries on the queue for 5 minutes before replicating them to the other nodes. That delay is simply to make sure the login or previous session activity has completed.
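For illustration, here is a minimal sketch of that approach. The StorageService-style interface, class names, and key scheme below are all hypothetical stand-ins (the real IdP API differs); the point is just the pattern: wrap the local store, enqueue every accessed key onto a java.util.concurrent.DelayQueue, and have a background drain thread ship entries once their hold time has elapsed.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical, simplified stand-in for the IdP's storage API.
interface SimpleStorageService {
    Object get(String partition, String key);
    void put(String partition, String key, Object value);
    Object remove(String partition, String key);
}

// Wraps a local in-memory store; every access enqueues the entry
// for replication after a fixed hold time (5 minutes in practice).
class ReplicatingStorageService implements SimpleStorageService {
    private final Map<String, Object> local = new ConcurrentHashMap<>();
    private final DelayQueue<PendingEntry> queue = new DelayQueue<>();
    private final long holdMillis;

    ReplicatingStorageService(long holdMillis) {
        this.holdMillis = holdMillis;
    }

    private String fullKey(String partition, String key) {
        return partition + ":" + key;
    }

    private void enqueue(String partition, String key) {
        queue.put(new PendingEntry(fullKey(partition, key), holdMillis));
    }

    public Object get(String partition, String key) {
        Object v = local.get(fullKey(partition, key));
        if (v != null) {
            enqueue(partition, key); // any access triggers replication
        }
        return v;
    }

    public void put(String partition, String key, Object value) {
        local.put(fullKey(partition, key), value);
        enqueue(partition, key);
    }

    public Object remove(String partition, String key) {
        Object v = local.remove(fullKey(partition, key));
        enqueue(partition, key); // replicate the removal too
        return v;
    }

    // An entry becomes visible to the drain thread only after its delay.
    static final class PendingEntry implements Delayed {
        final String key;
        final long readyAt;

        PendingEntry(String key, long delayMillis) {
            this.key = key;
            this.readyAt = System.currentTimeMillis() + delayMillis;
        }

        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAt - System.currentTimeMillis(),
                    TimeUnit.MILLISECONDS);
        }

        public int compareTo(Delayed o) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                    o.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    // Called in a loop by a background thread: blocks until the oldest
    // entry's hold time has elapsed, then hands it off for shipping to
    // the other nodes (transport not shown here).
    PendingEntry takeReady() throws InterruptedException {
        return queue.take();
    }
}
```

One nice property of DelayQueue here is that repeated accesses to the same session just add cheap queue entries; the drain thread can de-duplicate keys as it takes them, so a busy session is still only shipped once per window.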

This seems rather simple and straightforward, therefore I must be overlooking something.

thanks,

Paul


