grouper-users - Re: [grouper-users] Large number of changes and provisioning
- From: "Waldbieser, Carl" <>
- To: Jeffrey Crawford <>
- Cc: Grouper Users List <>
- Subject: Re: [grouper-users] Large number of changes and provisioning
- Date: Thu, 20 Jul 2017 10:32:32 -0400 (EDT)
Jeffrey,
The issue is specific to a class of provisioners. If I assume updates
dominate the work performed by the LDAP service, then applying a group change
as incremental updates costs O(n) operations, one per membership change. If I
instead perform a single update where I already know the end state, that costs
O(1) operations, regardless of how many memberships changed.
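For concreteness, here is a minimal sketch of that difference against an LDAP
group (not Grouper's or the PSP's actual code), assuming a groupOfNames-style
entry and the Python ldap3 library:

    from ldap3 import Connection, MODIFY_ADD, MODIFY_REPLACE

    def apply_incremental(conn: Connection, group_dn: str, added_member_dns) -> None:
        """O(n) LDAP operations: one modify per membership change."""
        for member_dn in added_member_dns:
            conn.modify(group_dn, {'member': [(MODIFY_ADD, [member_dn])]})

    def apply_end_state(conn: Connection, group_dn: str, all_member_dns) -> None:
        """O(1) LDAP operations: a single modify writing the known end state."""
        conn.modify(group_dn, {'member': [(MODIFY_REPLACE, all_member_dns)]})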
Compare that to a target that is a database (ignoring transactions, which
would only complicate the example). Suppose the database represents each group
membership as a single row. In that case, incremental updates and bulk updates
actually must perform the same amount of work.
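A minimal sketch of that case, assuming a membership table with one row per
(group, subject); even the end-state approach still has to insert or delete
one row per actual change, so knowing the end state buys nothing here:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE membership (group_name TEXT, subject_id TEXT)")

    def apply_incremental(group_name, adds, removes):
        # One statement per membership change: O(n) rows touched.
        for subject in adds:
            db.execute("INSERT INTO membership VALUES (?, ?)", (group_name, subject))
        for subject in removes:
            db.execute("DELETE FROM membership WHERE group_name = ? AND subject_id = ?",
                       (group_name, subject))

    def apply_end_state(group_name, desired):
        current = {row[0] for row in db.execute(
            "SELECT subject_id FROM membership WHERE group_name = ?", (group_name,))}
        # Still one statement per changed row; the end state gives no shortcut.
        apply_incremental(group_name, sorted(set(desired) - current),
                          sorted(current - set(desired)))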
In your specific example, it would actually be ideal if the composite could be
edited in place without every member being removed and then re-added. It would
be useful if one could create a new composite and "replace into" an existing
composite. In that case, only the actual differences would be reported, which
would more accurately reflect the *intent* of the changes the operator wanted
to make.
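For illustration only (these names are not a Grouper API), the "report only
the actual differences" idea amounts to a set difference between the current
and intended memberships:

    def membership_delta(current_members, desired_members):
        """Return (to_add, to_remove) so the provisioner only sees real changes."""
        to_add = set(desired_members) - set(current_members)
        to_remove = set(current_members) - set(desired_members)
        return to_add, to_remove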
The Lafayette LDAP provisioner does some optimization to handle this case. The
provisioner does not process LDAP changes immediately as it receives them.
Instead, it collects the incremental changes in a database and processes them
in batches at a short, configurable, regular interval (~20s in production).
This allows a couple of optimizations:
1) If a subject has multiple adds/removes for a specific group, only the last
operation needs to be processed.
2) If multiple subjects are added to or removed from a group, the group only
needs to be updated once for that batch. The wider the update interval, the
more subjects you can process per batch. (A rough sketch of this batching
follows below.)
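That batching looks roughly like the sketch below. This is illustrative only,
not the provisioner's actual code (which keeps the pending changes in a
database rather than in memory), and update_group_once stands in for whatever
single-group update the provisioner performs:

    from collections import defaultdict

    def coalesce(changes):
        """changes: (group, subject, op) tuples in arrival order, op is 'add' or 'remove'."""
        last_op = defaultdict(dict)
        for group, subject, op in changes:
            last_op[group][subject] = op  # a later operation supersedes an earlier one
        return last_op

    def apply_batch(last_op, update_group_once):
        """update_group_once(group, adds, removes) is a hypothetical single-group update."""
        for group, subject_ops in last_op.items():
            adds = [s for s, op in subject_ops.items() if op == 'add']
            removes = [s for s, op in subject_ops.items() if op == 'remove']
            update_group_once(group, adds, removes)  # one update per group per batch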
I have run into the scenario you are talking about. In general, since I know
it is going to create a lot of churn, my approach is to temporarily route the
changes to a null route (via our RabbitMQ exchange) so the messages are
discarded. Once I am finished with the change, I reinstate the original route
and then fire off a bulk sync. Your suggestion would make that happen
automatically, and I agree it would be useful. I am unsure, though, whether
Grouper should *not* produce incremental changes for the change logger in this
case.
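The "null route" step amounts to dropping the queue binding so that messages
published to the exchange are simply discarded, then restoring the binding
afterwards. A rough sketch with the pika library follows; the queue, exchange,
and routing key names are illustrative, not our actual topology:

    import pika

    # Illustrative names only.
    QUEUE, EXCHANGE, ROUTING_KEY = "ldap-provisioner", "grouper", "membership.#"

    def set_null_route(discard: bool) -> None:
        """Unbind the queue so published messages are dropped, or re-bind to restore."""
        params = pika.ConnectionParameters(host="rabbitmq.example.edu")  # illustrative host
        with pika.BlockingConnection(params) as connection:
            channel = connection.channel()
            if discard:
                channel.queue_unbind(queue=QUEUE, exchange=EXCHANGE, routing_key=ROUTING_KEY)
            else:
                channel.queue_bind(queue=QUEUE, exchange=EXCHANGE, routing_key=ROUTING_KEY)

    # set_null_route(True), make the big change in Grouper, set_null_route(False),
    # then fire off a bulk sync.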
Thanks,
Carl Waldbieser
ITS Systems Programmer
Lafayette College
----- Original Message -----
From: "Jeffrey Crawford"
<>
To: "Gouper Users List"
<>
Sent: Tuesday, July 18, 2017 2:26:52 PM
Subject: [grouper-users] Large number of changes and provisioning
We had an interesting case show up not so long ago. Basically, there was a
change to a group that, in effect, removed everyone and then added them all
back (a composite group change). There were > 122,000 members in the group, so
it caused a huge backlog of changes that wound up taking quite a few hours.
Eventually I just stopped Grouper, set the psp entry in
grouper_change_log_consumer to the same number as syncGroups, and restarted,
which performed a bulk sync. That only took 30 minutes.
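For anyone wanting to do the same, the "set the psp entry" step is roughly the
two statements below: read the syncGroups position, then write it into the psp
row. This is only a sketch; the table and column names are assumed from the
standard Grouper registry schema, the consumer names are the ones mentioned
above, and it should be run with Grouper stopped:

    # Hedged sketch: adjust names to your registry; :seq is the value read first.
    SELECT_SYNCGROUPS_POSITION = """
        SELECT last_sequence_processed
          FROM grouper_change_log_consumer
         WHERE name = 'syncGroups'
    """
    UPDATE_PSP_POSITION = """
        UPDATE grouper_change_log_consumer
           SET last_sequence_processed = :seq
         WHERE name = 'psp'
    """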
Additionally, I noticed that our LDAP servers were backed up quite a bit, as
they were busy deleting records one at a time and then adding them again one
at a time.
It got me thinking that perhaps there should be a setting that identifies how
many records are about to change from the change log. If it is over, say,
10,000, then instead of processing the change log, Grouper would sync the psp
record up to match syncGroups and perform a bulk sync, which is also easier on
the LDAP servers since it does a compare and only modifies what needs to
change.
This threshold would be settable by the admin, since different environments
might find, for example, that they can process 30,000 changes before the
change log takes longer than a bulk sync.
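Something like the following pseudologic is what I have in mind (the names and
the threshold setting are hypothetical, not an existing Grouper option):

    BULK_SYNC_THRESHOLD = 10_000  # admin-tunable; another site might choose 30,000

    def process_batch(pending_changes, process_one_change, bulk_sync):
        """Fall back to a bulk sync when a change log batch is too large."""
        if len(pending_changes) > BULK_SYNC_THRESHOLD:
            bulk_sync()  # and advance the consumer pointer past this batch
        else:
            for change in pending_changes:
                process_one_change(change)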
Thoughts?
Jeffrey E. Crawford
Enterprise Service Team
<>
You have been assigned this mountain to prove to others that it *can* be
moved.