
Re: [grouper-dev] [ldappcng] real-time & batch scheduling and clobbering avoidance ?


  • From: Tom Zeller <>
  • To: Chris Hyzer <>
  • Cc: Grouper Dev <>
  • Subject: Re: [grouper-dev] [ldappcng] real-time & batch scheduling and clobbering avoidance ?
  • Date: Fri, 21 Oct 2011 10:29:13 -0500

Right. But I am confused. Loader jobs are StatefulJobs :

public class GrouperLoaderJob implements Job, StatefulJob {

so can I just add another Job to the loader to do the full-sync ?

I see you added LDAP_SIMPLE, LDAP_GROUP_LIST, etc. Can I re-use
GrouperLoaderType.MAINTENANCE somehow ?
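
For illustration, a second loader job could implement StatefulJob the same way, so Quartz would never run two executions of it concurrently (rough sketch only; LdappcngFullSyncJob and PspFullSync are hypothetical names, not existing Grouper classes):

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.StatefulJob;

public class LdappcngFullSyncJob implements Job, StatefulJob {

  // StatefulJob tells Quartz not to execute two instances of this job at once
  public void execute(JobExecutionContext context) throws JobExecutionException {
    try {
      // run the batch (full) synchronization
      new PspFullSync().sync();
    } catch (Exception e) {
      throw new JobExecutionException(e);
    }
  }
}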

On Thu, Oct 20, 2011 at 10:04 AM, Chris Hyzer <> wrote:
> Note that the incremental should be reading the static with a method, and
> nothing outside of the full sync should be able to edit it, right?  :)
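>
> Something like this, maybe (sketch only; FullSyncStatus is a hypothetical class, not existing code):
>
> public class FullSyncStatus {
>
>   // written only by the full sync job, read by the incremental consumer
>   private static volatile boolean fullSyncRunning = false;
>
>   public static boolean isFullSyncRunning() {
>     return fullSyncRunning;
>   }
>
>   // package-private, so only code in the full sync's package can change it
>   static void setFullSyncRunning(boolean running) {
>     fullSyncRunning = running;
>   }
> }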
>
> Thanks,
> chris
>
> -----Original Message-----
> From: [mailto:] On Behalf Of Tom Zeller
> Sent: Thursday, October 20, 2011 10:41 AM
> To: Chris Hyzer
> Cc: Grouper Dev
> Subject: Re: [grouper-dev] [ldappcng] real-time & batch scheduling and
> clobbering avoidance ?
>
> Yeah. I need a static "lock" variable and a static instance of the
> provisioning class.
>
> These statics will exist just in the loader jvm, right ?
>
> I was avoiding statics because I was not sure what all might share the
> instance of the provisioning class, like any other process controlled
> by the loader.
>
> I will give it a try.
>
> On Thu, Oct 20, 2011 at 8:16 AM, Chris Hyzer <> wrote:
>> Can you just set a static variable in a try/finally block which says the
>> full sync is running, and if so, just return the first index, which means
>> it didn't make any progress?  I think (hope) that doesn't log an error...
>> Alternatively, the incremental could just sleep for 5 seconds in a loop
>> until the full sync is done (per the static variable); no more incrementals
>> will start while that one hasn't finished...
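>>
>> Roughly like this, as a sketch (the class, method, and sequence-number handling below are made-up names, just to illustrate the idea):
>>
>> public class FullSyncCoordinationSketch {
>>
>>   // shared flag; in practice it would live in one place, e.g. the change log consumer class
>>   private static volatile boolean fullSyncRunning = false;
>>
>>   // full sync side: the flag is always cleared, even if the sync fails
>>   public void runFullSync() {
>>     try {
>>       fullSyncRunning = true;
>>       doFullSync();
>>     } finally {
>>       fullSyncRunning = false;
>>     }
>>   }
>>
>>   // incremental side, option 1: report no progress and let the loader retry later
>>   public long processEntries(long firstSequenceNumber) {
>>     if (fullSyncRunning) {
>>       return firstSequenceNumber - 1;  // nothing was consumed
>>     }
>>     // ... provision the change log entries here ...
>>     return firstSequenceNumber;
>>   }
>>
>>   // incremental side, option 2: block until the full sync finishes
>>   public void waitForFullSync() throws InterruptedException {
>>     while (fullSyncRunning) {
>>       Thread.sleep(5000);  // poll every 5 seconds
>>     }
>>   }
>>
>>   private void doFullSync() {
>>     // the batch synchronization would happen here
>>   }
>> }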
>>
>> Thanks,
>> Chris
>>
>> -----Original Message-----
>> From: [mailto:] On Behalf Of Tom Zeller
>> Sent: Wednesday, October 19, 2011 4:26 PM
>> To: Chris Hyzer
>> Cc: Grouper Dev
>> Subject: Re: [grouper-dev] [ldappcng] real-time & batch scheduling and
>> clobbering avoidance ?
>>
>>> Nothing except the change log process should be inserting stuff into the
>>> change log table.
>>> I don't think inserting records into the change log table will solve the
>>> problem; you will always have race conditions.
>>>
>>> What about this part of my previous email:
>>>
>>>> the full sync decides to add a member to a group, but the incremental
>>>> also wants to do that.
>>>> Each process will need to ignore it if the member is already in the
>>>> group, right?
>>>
>>> Can it be idempotent?  So if it is already done it doesn't fail?
>>
>> Yup, for membership adds and deletes we have to search first, then
>> perform necessary modifications. In other words :
>>
>> 1 calculate correct provisioning
>> 2 lookup current provisioning
>> 3 determine the difference between correct and current provisioning
>> 4 apply any necessary modifications
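>>
>> As a rough sketch of steps 3 and 4 (the class and helper names below are made up, not the actual ldappcng code):
>>
>> import java.util.HashSet;
>> import java.util.Set;
>>
>> public class MembershipDiffSketch {
>>
>>   public void provision(String groupDn, Set<String> correctMembers, Set<String> currentMembers) {
>>
>>     // members to add: correct minus current
>>     Set<String> toAdd = new HashSet<String>(correctMembers);
>>     toAdd.removeAll(currentMembers);
>>
>>     // members to delete: current minus correct
>>     Set<String> toDelete = new HashSet<String>(currentMembers);
>>     toDelete.removeAll(correctMembers);
>>
>>     // apply only the necessary modifications; re-running with the same
>>     // inputs produces empty sets, so the operation is idempotent
>>     for (String memberDn : toAdd) {
>>       addMember(groupDn, memberDn);
>>     }
>>     for (String memberDn : toDelete) {
>>       deleteMember(groupDn, memberDn);
>>     }
>>   }
>>
>>   private void addMember(String groupDn, String memberDn) {
>>     // ldap modify: add a member attribute value
>>   }
>>
>>   private void deleteMember(String groupDn, String memberDn) {
>>     // ldap modify: delete a member attribute value
>>   }
>> }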
>>
>> Here is a use case :
>>
>>  0 changelog : "add member to group"
>>  1 incremental sync : "add member to group" - lookup membership
>>  2 incremental sync : "add member to group" - add member to group
>>
>>  3 full sync : calculate correct provisioning of group
>>  4 changelog : "delete member from group"
>>
>> If we perform the incremental sync while the full sync is running :
>>
>>  5a incremental sync : "delete member from group" : lookup membership
>>  6a incremental sync : "delete member from group" - delete member from group
>>  7a full sync : lookup current provisioning of group
>>  8a full sync : diff - will add member to group
>>  9a full sync : modify - add member to group
>>
>> Performing the incremental sync during the full sync will incorrectly
>> provision the group.
>>
>> If we wait to perform the incremental sync until the full sync is
>> finished running :
>>
>>  5b incremental sync : begin wait
>>  6b full sync : lookup current provisioning of group
>>  7b full sync : diff
>>  8b full sync : modify
>>  9b incremental sync : end wait
>> 10b incremental sync : "delete member from group" : lookup membership
>> 11b incremental sync : "delete member from group" - delete member from group
>>
>> Performing the incremental sync after the full sync completes will
>> correctly provision the group.
>>
>> Here is a slightly different use case demonstrating the need for
>> idempotence :
>>
>>  0 changelog : "add member to group"
>>  1 incremental sync : "add member to group" - lookup membership
>>  2 incremental sync : "add member to group" - add member to group
>>
>>  3 changelog : "delete member from group"
>>  4 full sync : calculate correct provisioning of group
>>
>>  5c incremental sync : begin wait
>>  6c full sync : lookup current provisioning of group
>>  7c full sync : diff - will delete member from group
>>  8c full sync : modify - delete member from group
>>  9c incremental sync : end wait
>> 10c incremental sync : "delete member from group" : lookup membership
>> 11c incremental sync : "delete member from group" - no changes necessary
>>
>> So, I guess the ldappcng change log consumer will need a cron entry in
>> grouper-loader.properties to know when to trigger the full sync, and
>> it will need to know that a full sync is running to delay provisioning
>> of *any* change log entries until the full sync is complete.
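>>
>> For the trigger side, something like this Quartz sketch might work (the property name, class names, and cron expression are illustrative only):
>>
>> import org.quartz.CronTrigger;
>> import org.quartz.JobDetail;
>> import org.quartz.Scheduler;
>> import org.quartz.impl.StdSchedulerFactory;
>>
>> public class FullSyncSchedulerSketch {
>>
>>   public void schedule() throws Exception {
>>     Scheduler scheduler = new StdSchedulerFactory().getScheduler();
>>
>>     // the full sync job, a StatefulJob so runs never overlap
>>     JobDetail jobDetail = new JobDetail("ldappcngFullSync", Scheduler.DEFAULT_GROUP,
>>         LdappcngFullSyncJob.class);
>>
>>     // the cron expression would come from grouper-loader.properties,
>>     // e.g. changeLog.consumer.ldappcng.fullSyncCron = 0 0 2 * * ?
>>     CronTrigger trigger = new CronTrigger("ldappcngFullSyncTrigger", Scheduler.DEFAULT_GROUP,
>>         "0 0 2 * * ?");  // nightly at 2am
>>
>>     scheduler.scheduleJob(jobDetail, trigger);
>>     scheduler.start();
>>   }
>> }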
>>
>> I had originally thought that we could delay provisioning per
>> identifier, but it is simpler to just delay the whole incremental job.
>> So, we can say we have real-time provisioning except when the full
>> sync is running, which should be configurable and can default to once
>> nightly.
>>
>> Are loader jobs Quartz Scheduler jobs in any way ? Can I implement
>> StatefulJob somehow in the change log consumer class to prevent
>> concurrency ? That would be a nice one-liner.
>>
>


